We've raised AI just like we raised some children
Why expect different results then?
A couple of weeks ago I wrote that fears that "AI will kill us" are exaggerated, being fears about something that "literally ceases to exist the moment anyone pulls the plug on the computers it runs on. AI itself cannot 'kill' anything more real than avatars in some simulation. AI is the most vulnerable, helpless thing ever."
By pure chance, yesterday I found a post discussing exactly "why we can't just unplug it: by the time we discover it is a superintelligence it will have spread itself across many computers and built deep and hard defenses for these. That could happen for example by manipulating humans into thinking they are building defenses for a completely different reason."
Personally, an AI free to "spread itself across many computers" seems to me just a sub-case of, quoting myself again, "[giving] a computer running any AI software full, direct, unchecked control on bacteriological labs, nuclear missiles silos or anything like that... it would be US who killed us, NOT AI, and we'd deserve it."
That said, today I want you to know that, in that same post, Albert Wenger says something great.
It is absurd to expect a good outcome...
As far as AI existential risks are concerned, Wenger says:
"The key here will be to invert the approach to training that we have taken so far. It is absurd to expect that you can have a good outcome when you train a model first on the web corpus and then attempt to constrain it via reinforcement learning from human feedback (RLHF)."
"This is akin to letting a child grow up without any moral guidance along the way and then expect them to be a well behaved adult based on occasionally telling them they are doing something wrong."
The first sentence is so right and so obvious that I am sad I did not write it myself, but it is the second sentence that really struck me, because it immediately brought back to my memory a concern I read about years ago: that "modern parenting is making monsters" (emphasis mine):
"To make children constantly choose is to abdicate one’s responsibility for being a parent... It used to be that childhood operated under instruction... [Today, instead,] an important aspect of our culture of choice is that it absolves people (INCLUDING PARENTS) of a responsibility of care towards others. To put it another way, our culture of choice contains this message: I am not responsible for you because you are responsible for you."
This is not a new problem, of course. In 2023, one of the first results of a search for news about "parent abdication" is a reminder that:
"even after repeated disappointments the parental duty to try remains. Failure after good-faith efforts can be excused. A failure even to try can't be."
which is thirty years old, and isn't even the oldest one since "When the Parent Abdicates‐Children Who Take on Adult Roles" is from 1978. Other results from similar searches include:
Emotionally absent parents "deprive their children of the possibility of building intimate and valuable relationships with others"
Children of such parents are "often confused about what's right or wrong"
Children of uninvolved parents "don't know how to handle people and relationships"
Kids with neglectful parents may have "trouble forming healthy relationships"
[Uninvolved Parenting] makes it difficult to learn appropriate behaviors and limits in social situations
Last but not least, allow me to include two paragraphs from "The Disappearing Child", written by Neil Postman in 1983:
"[There is today a] conception of "child's rights" [that] rejects adult supervision and control of children and provides a "philosophy" to justify the dissolution of childhood. It argues that the social category "children" is in itself an oppressive idea and that everything must be done to free the young from its restrictions."
"This view is in fact [very old], for its origins may be found in the Dark and Middle Ages when there were no "children" in the modern sense of the word."
Children or ChatGPT? What is the difference?
Replace "children" with "ChatGPT and similar programs" and "parents" with "managers of said programs", and all the statements above make at least as much sense as before, and they all point back to Wenger's advice, don't they? Not to mention that, if Postman's statements also apply to AI, the Metaverse isn't the only apparently modern thing that is reactionary rather than innovative.
Now, please do NOT take what follows as an accusation against anybody's specific parents! After a while, everybody is personally responsible for his or her own actions, period, and to some extent we are all guilty of the current state of things, by inaction if nothing else. This is just an invitation to think (and act) on the thought that right now is stuck in my brain:
If we have certain problems with Artificial Intelligence, it might just be because, as a society, we have managed AI training just like we have been (not) managing or supporting parenting for at least two generations, if you look at the dates of those statements. And when I look at the companies driving AI now, I find it hard not to see some of their managers as kids who, stealing Wenger's words, were let "grow up without any moral guidance along the way" and are now expected to be well-behaved adults "based on occasionally telling them they are doing something wrong". Please note that I said managers, not developers or researchers! What do YOU think?
Speaking of neglectful parenting...
Comparisons of AI training with human parenting aren't just intellectual exercises. Before explaining why I say so, consider this AI-powered, "neglectful parenting on steroids" scenario:
Half a generation from now, baby Shan comes home to a crib attached to a sleek black box, containing an AI that monitors all her vital signs thanks to advanced biotelemetry sensors.
As the weeks pass, the black box starts singing soothing lullabies to baby Shan, comforting her when she cries, and generally interacting with Shan, its AI learning and adapting to her needs, becoming a constant Companion, talking to her.
As Shan grows into a toddler, the AI in the black box is transferred to a Companion teddy bear, which runs around with her.
As Shan starts school, the Companion is seamlessly integrated into a wristwatch that she wears.
By the time Shan is an adult, she has had a lifetime with a Companion that has always acted in her best interest, been infinitely patient, stayed a confidant, and whom she has trusted her entire life.
This raises two questions:
(Fake) Daemons all the way forward?
Companion? That "Companion" is by far the closest thing to the Daemons of "His Dark Materials" that our reality may ever get! The trouble is, those Daemons were the "physical manifestation of a human soul", while Shan's "Companion" could only be a digital simulation of a foster parent, no matter how "real" it would seem to its assigned human. Considering what such a Companion may become if initially developed "without moral guidance etc..." is left as an exercise for the reader. Please comment!
Half a generation from WHEN?
Smart voice assistants for dumber children are already here
so are TikTok companions that "feel raw and real" for hardly the most reassuring reasons...
or chatbots that "worm their way into our hearts"...
or fascinating companions way earlier than they should
Last but not least... Just today, in order to overcome the information overload for new parents, a software developer asked if he should implement the following solution:
transfer verified information handpicked by childcare and parenting experts to an AI, so that
parents can ask that AI what they need via chat or voicemail
when we're not quite sure what the AI thinks we should do, we should also be able to easily ask other parents for opinions on the subject with a click of a button. Something like, "Look at this post from AI, do you think this is good advice?"
If you ask me, Point 3 seems the weakest part of the plan, both in the short and the long term. In the short term: why on Earth should inexperienced parents ask other, equally inexperienced parents for advice, instead of asking the childcare and parenting experts of Point 1? Shouldn't that be the first connection to facilitate?
In the long term... what happens when that solution kills itself, as it very likely will? Why should parents raised with AI support since day 1, be it chatbots or baby Shan's "Companion", ever feel the need to ask "Look at this post from AI, do you think this is good advice?"
In general, let's follow Wenger's advice and "invert the approach to AI training taken so far", please. To parents, I strongly suggest (*) being very, very, very careful with any "experiment" mixing AI and parenting, for at least two reasons:
Any experiment of this kind takes at least two generations to evaluate. You may have some certainty that something was "good advice" for raising your children only when their own parenting gives you decent adult grandchildren.
(*) AI should be taken more seriously, and parenting a bit less dramatically. But how parents should cope with a digital society is one of the main themes of this newsletter, so subscribe to know more, and help me share more!