Artificial typewriters, and where to find their REAL users
Which side of our screens needs more scrutiny, and by which "developers"?
A recent article presenting the productivity gains and security risks of generative AI is a good starting point for a couple of related but distinct debates about what, and where, those risks really are.
The primary, but by no means the only, participants in the first debate should be company executives and lawmakers, to whom the article offers two points worthy of very careful consideration. One is about adequate evaluation of risks, and trust in that evaluation. Some executives say that (uppercase mine): "the efficiency benefits of coding with generative AI ARE so sky-high that there will be plenty of dollars in the budget for post-development repairs. That COULD mean enough dollars to pay for extensive security and functionality testing."
If you ask me, the switch from "ARE" to "COULD", as in "IF stockholders don't keep that money for themselves", is reason enough to demand such testing preventively, by law.
On the same topic, another executive honestly mentioned that "every discipline where AI has been highly successful [is one where] if something went wrong, the damage would be limited". In case you missed it, fully autonomous vehicles hardly match this profile. Regardless of cars, what really matters here is that, if AIs are basically black boxes whose internal workings are obscure even to their own developers... then the interactions of multiple apps, each independently generated by AIs unaware of each other, are even blacker boxes. Think of the butterfly effect, but with whole swarms of butterflies.
The other, really sensible concern is about what others have called "deskilling on the job". Don't take programming tasks away from junior programmers to give them to AI, otherwise you "won't ever make them seniors. They have to learn the skills to make them good seniors." That is, don't repeat the mistakes we have already been making with firefighters, surgeons and, in general, with "fewer millennials in the REAL workforce than any other generation before them".
Controllers are always more important than programmers
Let's now consider the most important part of that article. It's the bit about that great movie in which what we would call "Artificial Intelligence" today is "confused with game-playing, possibly starting World War III":
"In the [Wargames] movie, NORAD officials decide to ride out the "attack," prompting the system to try and take over command so it can retaliate on its own. That was fantasy sci-fi back 40 years ago; today, not so much. In short, using generative AI to code is dangerous..."
Luckily, just one week after that article, a US law proposal to make nuclear weapon launches by AI illegal canceled for good the risk of AIs launching missiles all by themselves. Surely, that law will be scrupulously followed everywhere from the USA to China and Russia, assuring peace of mind for everybody in between.
Back to that quote: I argue that its last sentence, "using generative AI to code is dangerous...", needs, at a minimum, some important clarifications.
First of all, coding and launching missiles are both dangerous activities, but different enough that treating them the same way may not always make sense. In any case, let's not forget that the reason the world almost ended in WarGames is only that NORAD generals were using AI software, generative or not, to autonomously decide on their behalf when to launch nuclear missiles. Not to code that software. Who should do that job, and how, is another issue.
What is dangerous, in WarGames or in any "critical" situation, is only to take AI, however it was generated and debugged, and attach it straight to the buttons that launch missiles, without any human in the loop to confirm, or to cut the wires. That is, the same general point I already made about every call to "stop AI before it kills us".
Of course, "don't let AI alone" is only half of the picture
When I first shared this thought with other authors, Tristan Louis correctly pointed out that, if it stopped there, my argument would be "reductive and fail for one simple reason: We have trained people to believe [without questioning] that what goes into the machine cannot fail". That is, don't stop at the outsourcing in and of itself, even if it is really dangerous on its own. Even more dangerous is that this outsourcing to "machines" of tasks for which critical thinking is fundamental is being done by people who never really knew how those machines decide, and who are becoming progressively less able to evaluate those processes anyway.
So yes, my original observation that what's dangerous is to let AI decide by itself, not how you code it, only covers half of the solution. To avoid problems with AI, you need TWO things, namely:
always leave actual decisions to humans only (my point)
humans that ARE able to take critical decisions, because they weren't dumbed down by AI, or AI propaganda (Tristan's point)
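To make the first of those two points a bit more concrete for the programmers in the audience, here is a minimal, purely illustrative sketch of what "keeping a human in the loop" means in practice. Everything in it is hypothetical (the function names like ai_recommendation and launch_missiles are mine, not from the article): the only point is that the AI may recommend, but nothing irreversible happens without explicit human confirmation.

```python
# Purely illustrative sketch of a human-in-the-loop gate.
# All names (ai_recommendation, launch_missiles) are hypothetical stand-ins.

def ai_recommendation() -> str:
    # Whatever the model suggests; in WarGames terms, this is the part
    # that must never be wired straight to the buttons.
    return "LAUNCH"

def human_confirms(recommendation: str) -> bool:
    # The human is the only one allowed to turn a recommendation into action.
    answer = input(f"AI recommends '{recommendation}'. Type YES to proceed: ")
    return answer.strip() == "YES"

def launch_missiles() -> None:
    print("(the irreversible action would happen here)")

if __name__ == "__main__":
    recommendation = ai_recommendation()
    if recommendation == "LAUNCH" and human_confirms(recommendation):
        launch_missiles()
    else:
        print("No human confirmation: the wires stay cut, nothing happens.")
```

Of course, the gate is only as good as the human sitting at it, which is exactly Tristan's point.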
Who came first, programmers or readers?
Let's connect all the dots, though, and go further, taking that necessity of critical thinking about anything "written by AI" as a pointer to what may be the real core of this whole "should AI write or do [X] all alone" issue. That core may be something Italo Calvino, an Italian writer who according to the New York Times "never wrote a bad book" and whom others credit with "most famously opening up a dialogue between literature and science", said almost sixty years ago.
In November 1967 Calvino delivered a lecture titled Cybernetics and Ghosts, in which he sardonically speculated that "writing would one day become a computationally reducible process".
The relevance of that lecture for our AI anxieties of 2023 is well explained in a shorter essay titled "The human reader": in a nutshell, Calvino predicted that the arrival of artificial intelligence able to write as well as a human writer would clarify the priority of human reading. With AI like ChatGPT, according to Calvino, the decisive moment of literary life (or of life in general, I would say) will be that of reading, because:
machines cannot replicate (me: with the important exception mentioned below) the myriad and often unpredictable operations that occur within our reading
the deepest insights, that is, our growth as human beings, happen "when the words we encounter are mediated by the experiences of our senses and by our personal and collective memories".
According to that essay, and I agree, the main take-home lesson from Calvino's lecture is that "You may build a better writing machine, but it will be worthless unless you build better readers".
Yes, this isn't much of a consolation for writers left without any income, as may happen to yours truly if paid subscribers don't come. Still, it's something important, isn't it?
As far as building better readers goes, that can only be a task for real humans. Figuring out, or remembering, what "real" means when everything, from words to humans, has become "discrete rather than continuous", and (here comes the exception) human readers are fed fewer and fewer words every year… that is a whole different journey, of course, one of which this newsletter and blog hope to become a small but useful companion.