Here’s a quick question. Who was smarter, Albert Einstein or Babe Ruth? The answer isn’t as simple as you think. If you asked them on a test to derive the theory of special relativity, Einstein would be at the top of the class. But suppose the test was a season’s batting average? There’s no question who’d do better. We know that doing physics requires excellent math skills, but a home run takes a sharp eye and strong swing. According to the psychologist Howard Gardner in his book Frames of Mind: The Theory of Multiple Intelligences, there are different ways to be smart. He comes up with a list, among which are “logical-mathematical” (think Albert) and “body-kinesthetic” (think the Bambino).
What about artificial intelligence? The promotion of AI is coming to us at breakneck speed, appearing in our computers now in the form of better search engines. Chat-GPT shook people up when it began to write term papers on demand and answer questions faster than the greatest stars on Jeopardy. What AI dishes out is instant and oracular. But is that true intelligence, or is that just a machine that’s part parrot, and part a kid who cheats by looking up the answers during the test? And if intelligent, what kind is it? These aren’t silly questions, no matter how great the hype. Imagine a machine that doesn’t realize what it’s doing, lacks a moral framework, makes mistakes, and at the same time has enormous power. That’s artificial intelligence. Most AI developers are not taking these problems seriously, and no one is sure where all this will lead. Instead, they’re engaging in a fierce competition to make it happen, motivated only by the bottom line of large corporations. To put the question another way, what if the pot of gold at the end of the AI rainbow is really a machine network that decides it doesn’t need the human race anymore? This isn’t science fiction but the logical outcome of unregulated machine intelligence.
OK. Let’s set aside the theory of the AI apocalypse, an outcome so dark and unfathomable that it boggles the imagination of anyone who thinks about it. Instead, let’s wrap our heads around the here and now. Is AI as it’s being touted really intelligent? Yes and no. AI can be fun. When I asked Chat-GPT to write positive pieces about Joe Biden and Donald Trump in the style of Shakespeare, Biden’s began, “Upon the political stage, a figure emerges with a countenance serene, Joseph Biden, a statesman of measured wisdom and temperate grace. In the intricate dance of governance, he assumes the mantle of leadership with a sagacious hand, seeking to mend the fractured sinews of the nation...” When it wrote about Trump it came up with, “Hark! Behold the political stage whereupon Donald Trump, a man of unyielding mettle, doth tread with resolute steps toward the highest echelons of power. In this veritable drama of governance, he emerges as a leader of uncommon valor, a modern-day Caesar whose prowess in the economic arena rivals the mightiest emperors of yore...”
Lofty language like that might come in handy for the undecided voters still living in the seventeenth century, and Donald Trump may seem like a “modern-day Caesar” to some, but only if you count inspiring a rampage on the Congress. Worse is that AI suffers from what techies call “hallucinations.” This is a clinical way of admitting that AI can just make stuff up. Lawyers have gotten in trouble when they used AI cases in court that didn’t exist. AI makes errors in arithmetic more than half of the time. When a system erred, some times it claimed the error was “a typo.” My friend John tested an earlier version with Chat-GPT. This is how his session went.
John: How much is two and two?
Chat-GPT: Two and two is four.
John: No, two and two is five.
Chat-GPT: I apologize. That’s right. Two and two is five.
It’s only somewhat reassuring that the folks at Open AI who invented Chat-GPT have fixed this error. What about the others they are missing? AI can give you answers that sound terrific, but trust them at your peril. It’s better is to think of AI today as a personal research assistant who happens to hallucinate from time to time. This is never a good thing, in people or in machines.
"OK. Let’s set aside the theory of the AI apocalypse, an outcome so dark and unfathomable that it boggles the imagination of anyone who thinks about it." I'm afraid I'm stuck here -- as a life-long SCI-FI reader I believe there are way too many thought experiments on AI for us to ignore. As a society we can't even write our laws gimmick-free -- how can we possibly effectively regulate limitless, amoral intelligence?
I agree. Outside of international agreements and government regulation I think AI is likely to be an existential threat. Sorry, humans.