Many authorities argue that current Artificial Intelligences (AIs) are, or will soon be, really intelligent entities: "different, but equal" to humans in the human-specific sense, so that rejecting them would be evil, hopelessly misguided, or both. Many other authorities argue that AIs are mere parrots that just look "smart", and will remain parrots for a long time.
Whatever. Here, I argue that the rest of us, starting with decision makers, should just let the scholars debate those points and get on with our lives, sticking to the few bits of advice that follow.
Ignore prophets of AI doom and toddlers with too much money
A few months ago, one of the general partners of a venture capital firm that, among other things, has led an investment of more than $200mn into generative AI graciously shared his surely disinterested assurance that AI will save us all, by making three main arguments:
Ignore all prophets of AI doom, AI will not kill us all
AI is wonderful
AI NOW!!! Because, China!!!
The first point is absolutely right. But the arguments for the second point are so bad and unrealistic that it is impossible to take seriously anything built on them. According to him, AI is wonderful because, among other things:
Every human will have an AI tutor (if a child) or AI mentor/therapist (when adult) "that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful"
"Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet."
Point 1 is Solaria all the way down: so inhuman, so sociopathic, so clueless about what real parenting and human education are that it makes me sick.
Point 2 makes sense only if one still believes that economic growth can continue as it does today, indefinitely and without side effects, and that "technology doesn’t destroy jobs and never will". Just don't stop for a second to consider material limits, or the fact that "allowing [everybody] to build AI as fast and aggressively as they can", as he and his firm selflessly propose, would happen too quickly and too broadly to "upskill" every affected worker QUICKLY ENOUGH.
This extremely narrow-minded blind faith may partly explain where the first point comes from. In any case, it is just more of the same cult that has been screwing the world for years, only faster now, thanks to AI.
There are a few good little things to save here and there, for example the observation that AI art may enable people without technical skills to create and share their artistic ideas. But in general, that whole piece, and every other piece like it, is the same self-deluded techno-solutionism that has been around since at least 1996: "let software move fast and break everything, and all will be well", because kids who are very good at coding but at almost nothing else say so, and they badly need a Hail Mary pass right now. So no, thanks. Yes, AI can and must be used to save us all, but in a very different way.
Consider this when someone says AI is human
As a species, we are still unable to agree on what equality among all human beings should look like, let alone accept and seriously practice it. If anything, we are already too used to the opposite, that is, treating humans like machines (1), because of people who reduced themselves, or were forced, to "think" and live as machines. As long as this situation persists, any version of "AI is really sentient, like humans" would just increase the confusion and fossilize certain errors.
Besides, we already know how it ends when AIs are treated as humans, because the first AIs were corporations. Given all the abuses of corporate personhood we already know, accepting as human another category of immortal, ubiquitous, super-opaque, superfast entities would be really, really, really stupid. Even if, see the previous section, those new entities did not happen to be controlled by their immediate precursors.
Finally, if someone proved tomorrow, beyond doubt, that AI is a new life form with the same dignity as humans, that would be just another reason to restrain AI, not the opposite, for its own good. Because we humans would be the parents of this child, and every good, responsible, mentally healthy parent:
first of all... just postpones having children until he's sure he's mature and stable enough to raise them properly, right?
doesn't let children roam alone in the real world, with Internet access, years before they are mature enough to handle it
would never raise children or AIs in THIS way
Conclusion: don't give a damn if AI is "human" or not
Philosophizing is not just good, it is absolutely essential. But whatever "real General Intelligence" is, and whatever it is that entitles a being to human rights, this is not the time to mix those questions with concrete decisions and policies about the economy and human society in general. At that level, now and for the foreseeable future, sticking to these three points:
AIs are talking boxes. Extremely useful talking boxes but nothing more. Not even animals
WE, all the human beings, are the ONLY ones "different but equal"
Since AIs are just boxes, it's never "AI did this". It's "the human who wrote or ran this AI did this"
is the only smart way to go, if not an absolute necessity. For all the practical, medium-term human problems that matter, getting sucked into discussions (or funding...) around the real "intelligence" or "humanness" of AI is an intolerably stupid waste of time and money.
It doesn't matter what the truth is. We cannot afford to consider what we call AI as anything but a mere TOOL, to be urgently applied, as much as possible, to all the many REAL problems it is good for. And no, the AI explosion of the last two years does not mean "too late, the cat is out of the bag now". If that were a valid argument, slavery would still be legal worldwide.
(1) See "AI will make things worse"