How to tell good AI from bad (or pointless) AI
Enough of "existential threats". Let's recognize real AI stuff we need NOW
Artificial Intelligence (AI) may seem completely revolutionary, but more often than not it just greatly accelerates everything we humans were already doing anyway, both good and bad. Recognizing the good applications is extremely urgent, both in principle and because so many pressures are bearing down on AI that we cannot afford to waste resources on pet or harmful projects. How can we do that? Here I try to answer, first by listing, without any pretense of completeness, some good, badly needed applications of AI, then by trying to define what, exactly, they all have in common.
Healthcare and sciences
In healthcare, AI assistants can (within limits discussed in the conclusions) do a lot to make this basic human right more efficient and affordable, by finding patterns and facts that human doctors missed because they were "buried in records". Among other things, AI can:
detect sepsis, a leading cause of death in hospitals
recognize Parkinson's disease in eye scans, years before other symptoms appear
greatly speed up proper diagnoses of rare diseases
help discover new treatments and therapies for many other diseases, by describing how proteins fold or unfold
find new antibiotics for drug-resistant infections, or predict which microbes may resist antibiotics, which is great in an age when pathogens get closer and closer to humans
Outside of healthcare, AI is already:
finding and classifying many astronomical objects
digging up dinosaur bones
gauging the impact of different climate policies
Energy production and management
Here, AI tools are used to:
figure out exactly where solar panels should be installed
develop models and simulations for nuclear fusion (which, if you ask me, would be essential for the cities of 2070)
Ecosystems and human infrastructures
AI can support optimal protection and management of both natural and human physical "systems" by:
helping firefighters to spot some fires before the first 911 call comes in
finding corroded lead pipes that pollute drinking water
optimizing routes and maintenance of planes
building integrated models of logistics services and the physical infrastructures they live in, which would, quoting from Hivekit:
allow important optimizations like "for our ride-hailing company, find the optimal position for each driver during the course of the day to minimize time to pick-up" (see the sketch right after this list)
give the masses access to "the same level of coordination that was historically instrumental in controlling them"
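To make that "optimal position for each driver" idea concrete, here is a minimal sketch of that kind of optimization, with invented coordinates and the classic Hungarian algorithm standing in for whatever Hivekit actually uses:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative data only: current driver positions and predicted demand
# hotspots, as (x, y) coordinates on a city grid.
drivers = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0]])
hotspots = np.array([[1.0, 1.0], [4.0, 4.0], [0.0, 5.0]])

# Cost matrix: straight-line distance from every driver to every hotspot.
cost = np.linalg.norm(drivers[:, None, :] - hotspots[None, :, :], axis=2)

# Hungarian algorithm: one driver per hotspot, minimum total distance
# (a crude proxy for time to pick-up).
rows, cols = linear_sum_assignment(cost)
for d, h in zip(rows, cols):
    print(f"driver {d} -> hotspot {h} (distance {cost[d, h]:.2f})")
print(f"total distance: {cost[rows, cols].sum():.2f}")
```

A real system would feed live traffic and demand forecasts into that cost matrix, but the nature of the task stays the same: optimizing decisions humans already make, not replacing human judgment.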
Skill PRESERVATION
Automation has also always been used to downgrade or eliminate workers, and with them their skills. So far, this has always worked out more or less well... in the long run. We could not return to the Moon today the way we did 50 years ago, precisely because the Apollo astronauts flew on rockets hand-welded by welders so good that nobody can match them today. But it doesn't matter now, because there are much better ways to weld rockets, or anything else.
Today, AI is just extending that "opportunity" to get rid of workers and skills to the most skilled workers. The problem is that, unlike previous waves of automation, AI is doing it too fast for our own good, even in sectors where we really, really want humans skilled enough to take over when machines without any meaningful historical record of safety fail.
This is already an issue with, at the very least, surgeons, firefighters and airplane pilots. We need all those professionals to acquire certain skills, and to keep them fresh with continuous, adequate training, and I'm not sure this is happening. The most practical way to do so may be to use AI-enhanced simulators and other tools, specifically to prepare humans for all the cases, and they will happen, when AI alone fails or is not available.
Call out fake innovation and useless complexity
Here are two examples of what I mean; for both, I would really appreciate feedback on their feasibility, and opportunities to work on related projects.
AI-powered analysis of existing patents could quickly find prior art, to block dumb patents and thus foster real innovation. Maybe the same analysis could even discover "what's actually worth automating with AI".
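As a hint that this is feasible today, here is a minimal sketch of how such a prior-art search could start, assuming the open source sentence-transformers library with a generic embedding model; the patent texts are invented placeholders:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented stand-ins for a real corpus of patent abstracts.
prior_art = [
    "A method for unlocking a phone screen using facial recognition.",
    "A circuit that converts solar energy into stored battery charge.",
    "A system for recommending products based on purchase history.",
]
new_claim = "Unlocking a mobile device by recognizing the user's face."

# Embed the corpus and the new claim, then rank by cosine similarity.
corpus_emb = model.encode(prior_art, convert_to_tensor=True)
claim_emb = model.encode(new_claim, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, corpus_emb)[0].tolist()

# High scores flag likely prior art for a human examiner to review.
for text, score in sorted(zip(prior_art, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```

Note the division of labor: the software ranks candidates, a human examiner still decides what counts as prior art.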
On another, much more critical front, I want AI to do to law and tax codes what Salvor Hardin did to Lord Dorwin in the Foundation novels, that is, call out all the cruft and throw it away (do read that quote, it's important). That is, I want AI to parse whole codes, in order to:
write the shortest and simplest versions of the same codes that produce exactly the same effects
point out all the parts, and the resulting procedures, that are ambiguous, uselessly complex, impossible to apply, or mutually contradictory...
so that all the humans with the right skills (i.e. all lawyers, judges, law students...) can parse and clean the results, possibly with rewards for every bug they find, until human lawmakers and ministers can safely, officially decree that those simpler codes are the new law of the land.
EDIT, added 2023/10/09 15:07 CEST: as proof of the need for such checks, look at this case where one ambiguous "and" in one US law may have a major impact on thousands of federal prison sentences: https://www.motherjones.com/crime-justice/2023/10/supreme-court-pulsifer-criminal-justice-drug-definitions/. Stuff like this must be caught BEFORE sentencing.
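I am not claiming that a regex can read law, but even a toy script shows how mechanically detectable this class of ambiguity can be. A sketch of my own, flagging the "not ... X, Y, and Z" pattern at issue in that case:

```python
import re

# Flag sentences where a negation is followed by a comma-separated list
# joined by "and": does the "not" apply to each item separately, or to
# all of them together? That is the reading dispute in the case above.
PATTERN = re.compile(r"\bnot\b[^.;]*,[^.;]*\band\b", re.IGNORECASE)

def flag_negated_conjunction(text: str) -> list[str]:
    """Return the sentences of `text` that may read two different ways."""
    sentences = re.split(r"(?<=[.;])\s+", text)
    return [s for s in sentences if PATTERN.search(s)]

# Loose paraphrase of the disputed provision, for illustration only.
statute = (
    "Relief is available if the defendant does not have more than 4 "
    "criminal history points, a 3-point offense, and a 2-point violent "
    "offense. Other requirements are unaffected."
)
for hit in flag_negated_conjunction(statute):
    print("Review for ambiguity:", hit)
```

A production version would of course use real legal NLP instead of one regex; the point is that such checks can run over a whole code automatically, leaving lawyers and judges only the flagged passages to review.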
For completeness, I must also mention the symmetrical possibility of making necessary complexity in laws bearable thanks to AI, as in "when AI can correctly fill out a complicated tax form for just a few cents, the regulatory burden of complicated tax rules [with maybe hundreds of edge cases] drops significantly." I must think more about this, but it's an interesting, maybe unavoidable idea.
What do all these things have in common?
All the cases I just described are applications of AI software that already exists and doesn't require more mountains of money for "the next version". They are also all applications that, first of all, already assist, or could assist, the discovery and implementation of solutions to real, pre-existing problems that impact everybody.
Second, those are all applications that work either by analyzing and finding patterns, especially ones that are hard to describe, or by optimizing decisions already made by humans. "Assist", "analyze" and "optimize" are the key words here, because computers cannot judge, only apply rules. Therefore, in most cases, they can and must help, but (almost) never decide alone (a).
Third, which may be the hardest part to get right or to accept: those cases are all definable by, and manageable with, data that are all, with the obvious but not incompatible exception of law codes:
only about physical, unambiguously measurable properties
generated by machinery, or at least only by experts, in consistent, well-defined ways
(so far...) untainted by AI (have you seen the warnings that companies should try to preserve access to pre-2023 bulk stores of data to train AI?)
available in huge, consistent, non-biased quantities, because AI can do much less with "messier problems, with less good data to learn from"
Why I say so
The first reason is that, as Hivekit puts it, AI models of physical objects, from whole forests to parts of human bodies, are much simpler to get right than "humanesque language or media". Ditto for anxiety disorders, psychologists' practices and any other "object" that, at the end of the day, is the actual behavior of never really quantifiable, complete human beings or societies.
The other reason for limiting, until real breakthroughs come, large real-world deployments of AI to problems entirely defined and manageable by really quantitative, machine-generated data lies in another property of AI.
In or outside offices, AI fundamentally alters the nature of productivity by making it "much more consistent, both in speed and quality, that is MORE SCALABLE". The problem is that the properties of AI that make productivity "more scalable" are the same ones that make AI backfire, more often and worse than other tools.
In other words, AI just intensifies and makes ubiquitous, not to mention dangerously invisible, the "garbage in, garbage out" problem.
This deserves more attention, because with AI the "garbage in" is not just bad training data, or applying AI at the wrong level (i.e. decision-making instead of fact gathering), or to problems that should really be solved with other methods.
With AI, the most deceitful kind of garbage is the same thing that is pushing "prompt engineering" jobs: if you only know how to ask dumb questions, you will only get worthless answers. Unfortunately, the full extent of this part of the picture is not widely understood yet.
The best way to explain this issue comes from healthcare, but it's really general: imagine replacing a real doctor or nurse with an AI chatbot trained to assist with diagnoses. Let's also assume that that chatbot was trained with complete, not partial or biased, data AND that all doctors have been properly trained to use it, which too often is not yet the case.
In that ideal case, doctors may certainly write chatbot prompts with enough detail and quality to uncover diagnoses that they would otherwise miss. But leaving patients who are not literate enough to describe their conditions adequately alone with the same chatbot might just "further worsen inequities in healthcare". Same for insurance, mortgages, and many other crucial services, not to mention "self-help" and dating.
Summing up, what all the good and actually needed applications of AI possible today seem to have in common is restriction to the right kind of (real!) problems (that is, those manageable with quantitative data from objective, "qualified" sources), restriction to data- or pattern-gathering roles, and exclusion from both making decisions and giving "advice" in the wild. Do you agree?
(continues in the next post, to bring back sane discourse around AI)
PS: today I'm depressed because I just watched how a charlatan could burn 22 BILLION dollars by just calling his snake oil "tech". Watch it, and if you think someone doing his best to spread common sense about everything tech deserves a few billionths, nothing more, of what that guy wasted... thanks for a paid subscription, or donations as explained here
(a) some obvious exceptions would be the cases I feel like calling "airbag AI", that is, all the cases when, as happens with airbags, autonomous computer action is the only thing that, even in theory, may save human lives when some accident happens. Surely there are other cases, feel encouraged to add them!
Speaking of "AI-powered analysis of existing patents" in this post, check out this #chatgpt analysis of an electronic circuit from 1954: https://twitter.com/BrianRoemmele/status/1710835622842360201