Digital FAKE "AI" is no problem. Only the REAL AI is
Get real, already. And learn from Iron Man.
I am actively seeking work as a blogger, (ghost)writer, speaker, researcher, popularizer... on all the topics in this and my other posts. Thanks for your attention and support, by subscribing or in any other way.
Some days ago L. Rosenberg asked on Medium whether we have "reached peak Human" and will be overcome by Artificial Intelligence. While that post does contain some interesting data and food for thought, I also think it is a good summary of certain absurd concerns about AI, and of the wrong ways to frame the whole "relationship" between AI and human beings.
The problems start at the very beginning of the piece, with the explanation that "reaching Peak Human" means reaching (all emphasis mine):
(Excerpt 1) "the point where AI systems can outthink the majority of individual humans, thus defining the moment in time when we "peaked” as an intellectual force on planet earth."
"After all, once we pass this milestone, we will steadily lose our cognitive edge until AI systems can outthink all individual humans - even the most brilliant among us. At that later point, we will say that AI has achieved the more dangerous milestone of Superintelligence. And while most people focus on the second milestone, I believe we should track the first because it could happen within months, not years."
SO WHAT? Why on Earth should we care about "losing our cognitive edge" to some THING, nothing more, that we can literally shut down by just pulling its plug? If tomorrow morning an alien race, extremely fragile and unable to physically harm us unless we begged it to do so, landed on Earth to rule us, would we accept that, or laugh in its face before kicking it back into space?
Assuming we hadn't already lost it well before the arrival of AI, "losing our cognitive edge" is something that will happen only IF we make it happen, by NOT stopping something that is absolutely stoppable.
(Excerpt 2) "Until recently, the average human could easily outperform even the most powerful AI systems when it comes to basic reasoning."
Why on Earth should reasoning alone be what determines who "rules"? There are plenty of extremely smart, extremely high-IQ humans whom nobody would trust with tending an infant for more than 10 minutes. And we are in trouble EXACTLY because we let some such people run wild without restraints. Why on Earth should we be even stupider and let anything non-human do the same?
(Excerpt 3) Someone developed a custom IQ test that "does not appear anywhere online and therefore is not in the training data, and gave that “offline test” to Open AI’s new “o1” model, and it scored an IQ of 95 [on that test] that it was NOT trained on. This is still an impressive result. That score beats 37% of adults on the reasoning tasks... At this rate of progress, it is very likely that an AI model will be able to beat 50% of adult humans on standard IQ tests this year."
Again: so what? Even if it had any consciousness, it would still have no right whatsoever to rule us; and it has no consciousness, or dignity. It's just stuff we can, and really should, use in much better ways than playing morbid IQ contests, and stuff we can annihilate with the flip of a switch before it becomes dangerous.
(Excerpt 4) "Does this mean we will reach Peak Human in 2024? Yes and no. First, I predict yes, at least one foundational AI model will be released in 2024 that can outthink more than 50% of adult humans on pure reasoning tasks. From this perspective, we will exceed my definition for Peak Human and will be on a downward path towards the rapidly approaching day when an AI is released that can outperform all individual humans, period".
This treats AI as if it were some virus, that is, a physical object that could leak from a lab at any time. Funny, really.
(Excerpt 5) "We humans have another trick up our sleeves. It’s called collective intelligence, and it relates to the fact that human groups can be smarter than individuals... I bring this up because my personal focus as an AI researcher over the last decade has been the use of AI to connect groups of humans together into real-time systems that amplify our collective intelligence to superhuman levels. I call this goal Collective Superintelligence and I believe it is a viable pathway for keeping humanity cognitively competitive even after AI systems can outperform the reasoning ability of every individual among us... I believe we are just scratching the surface of how smart humans can become when we use AI to connect human groups together in larger and larger numbers..."
Again? Thinking we'd have to be cognitively competitive in order to overcome something trapped inside a bunch of transistors is like thinking that to overcome a professional boxer tied to an electric chair we'd need to be stronger than him. Or fearing lung cancer from "sentient" cigarettes that we could avoid by simply not smoking.
Besides... amplifying human intelligence by connecting "human groups together in larger and larger numbers" through software? If this sounds familiar, it's because it is.
Even "the internet" was supposed to be collaborative, peer-to-peer superintelligence, and look how it ended: exactly because we left it at the MERCY of a few guys with way too much money, way too much self-worth, but almost zero knowledge of anything except programming, and zero oversight from above or outside.
(Excerpt 6) This is about a slightly different issue, but still off-base: " [The author is] passionate about pursuing Collective Superintelligence because it has the potential to greatly amplify humanity’s cognitive abilities, and unlike a digital superintelligence it is inherently instilled with human values, morals, sensibilities, and interests."
Being "passionate about greatly amplifying... human values, morals... etc", you say? That's me too. I just say we already have all it takes to act with "human values, morals, sensibilities, and interests", without plugging into something that wastes huge amounts of energy that should be used in much better ways and that, more likely than not, would make most people even dumber (the Internet, remember?).
The fault here is believing, accepting, surrendering, hoping... that we need to put physical or virtual electrodes in our brains, Matrix-like, to do stuff that would really, seriously improve our lives like this, this or this.
(Excerpt 7) "Amplifying our collective intelligence might help us maintain our edge long enough to figure out how to protect ourselves from being outmatched."
Here it goes again: the concern about some superintelligent, immortal species in space that could rule us... but that totally, totally depends on us both to land and to remain alive on Earth. Me, I'd say that if we have to submit to a dictator, they'd better be human, right? Because humans die, while digital systems can be eternal (more on immortality below).
(Excerpt 8) The results of a very recent study would suggest that "AI has reached at least the same level, or even surpassed, the average human’s ability to generate ideas in the most typical test of creative thinking. I’m not sure I fully believe this result, but it’s just a matter of time before it holds true."
Whatever. It doesn't matter whether it is, or will ever become, true, as long as the whole attitude remains the equivalent of "let's research viruses, but if we discover lethal ones we will just have to release them into the atmosphere, the more the better".
It still makes no sense. Using the author's words, this sentence, "Whether we like it or not, our evolutionary position as the smartest and most creative brains on planet earth is likely to be challenged in the very near future...", lacks the obvious conclusion that any average primary school student who gets enough sleep every night would reach all by himself: "...if we are so stupid as to let it happen".
Let’s do more? Sure!
Final quote, with which I agree 100%: "we need to be doing more to protect ourselves from being outmatched."
Sure we do. Let's stop wasting energy on way more AI data centers than we will ever need, and which might never be built if enough Nvidia engineers quit tomorrow, as they could very comfortably do (I would, in their shoes).
Let's stop running AI models for the public, and use AI for the stuff that matters.
Let's slow down everything, not just AI, because that would be the collective superintelligence we would need the most right now.
Above all, let's stop listening to the corporations, or governments, that say this is unavoidable.
Get real, already. AI is not thermodynamics, or any other physical law. It's not entropy, which can only increase. AI is not inexorable. AI is a human artifact and nothing else, for heaven's sake. A mere, damned artifact that needs us to exist, and that we have NO OBLIGATION WHATSOEVER TO EVER LET "live" (note the quotes), or to treat, let roam, or otherwise dignify as human. Learn from Iron Man:
Get real, already. The problem is not AI
Get real, already, and kick AI back into the box where it must remain (1). The problem is not AI. It never was. The problem is the very, very few people who are telling us AI is inexorable AND who control, and will keep controlling, how it is used.
The REAL "AI" we should worry about is the one that is just as immortal, ubiquitous and omnipotent as the fake AI we're told to fear, but that already took over more than a century ago. It's corporations, that is, the very same entities that are aggressively pushing the idea that we must surrender to AI.