So the AI revolution is coming, and it’s much hyped. Is it overhyped? Well, yes and no.
New technologies typically undergo a cycle: they’re overhyped at the beginning, then ignored or dismissed as they improve but fail to match the initial hype, and then they revolutionize things once they get good enough for broad adoption. See, e.g., what people said about personal computing in the 1970s and early 1980s, versus its eventual broad impact in the late 1990s and into the new millennium.
Likewise, I was one of the initial enthusiasts for space settlement, which seemed imminent in the Gerard K. O’Neill 1970s. Now we’re finally starting to get there, but fifty years later.
So is the same thing true of AI? Well, maybe. On the one hand, some of the predictions are shocking: True human-replacing AI in five years? Two years? Six months? (We’ve already got conversational AI that can pass the Turing Test, though as I’ve said for years, the AI that is smart enough to deliberately flunk the Turing Test is the one you want to beware of.)
Then again, we’re 20 years past Ray Kurzweil’s prediction that “the Singularity is near,” in his book of the same name. As I wrote in a review in the Wall Street Journal back in 2005:
People's thoughts of the future tend to follow a linear extrapolation -- steadily more of the same, only better -- while most technological progress is exponential, happening by giant leaps and thus moving farther and faster than the mind can easily grasp. Mr. Kurzweil himself, thinking exponentially, imagines a plausible future, not so far away, with extended life-spans (living to 300 will not be unusual), vastly more powerful computers (imagine more computing power in a head-sized device than exists in all the human brains alive today), other miraculous machines (nanotechnology assemblers that can make most anything out of sunlight and dirt) and, thanks to these technologies, enormous increases in wealth (the average person will be capable of feats, like traveling in space, only available to nation-states today).
Naturally, Mr. Kurzweil has little time for techno-skeptics like the Nobel Prize-winning chemist Richard Smalley, who in September 2001 published a notorious piece in Scientific American debunking the claims of nanotechnologists, in particular the possibility of nano-robots (nanobots) capable of assembling molecules and substances to order. Mr. Kurzweil's arguments countering Dr. Smalley and his allies are a pleasure to read -- Mr. Kurzweil clearly thinks that nanobots are possible -- but in truth he is fighting a battle that is already won. These days skeptics worry that advanced technologies, far from failing to deliver on their promises, will deliver on them only too well -- ushering in a dystopia of, say, destructive self-replication in which the world is covered by nanobots that convert everything into copies of themselves (known in the trade as the "gray goo" problem). Mr. Kurzweil's sense of things isn't nearly so bleak as that -- he is an optimist, after all, an enthusiast for the techno-future -- but he does sound a surprisingly somber note.
Indeed, "The Singularity Is Near" is partly a cautionary tale. Having established that we're going to face a very different world in the second half of the 21st century -- and face it healthier, wealthier and more artificially intelligent if not precisely wiser -- Mr. Kurzweil concedes that so-called GNR technologies (genetics, nanotech and robotics) may present problems. We may find ourselves battling genetically enhanced super pathogens, deadly military nanobots and powerful "unfriendly" artificial intelligences scheming against those of us with mere natural intelligence. Though Mr. Kurzweil regards these threats as manageable, he does not minimize them and offers chilling scenarios of what could go wrong. These scenarios are all the more credible because they come from Mr. Kurzweil and not from one of the usual gang of scaremongering Luddites.
Unlike the Luddites, Mr. Kurzweil argues that the best way of curbing technology's potential harm is ... more technology.
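Kurzweil’s linear-versus-exponential point is easy to make concrete. Here’s a minimal sketch in Python; the starting value, the half-unit-per-year linear gain, and the two-year doubling time are illustrative assumptions of mine, not Kurzweil’s figures:

```python
# Toy comparison of linear vs. exponential extrapolation.
# All numbers are illustrative assumptions, not Kurzweil's figures.

START = 1.0           # capability index in year 0 (arbitrary units)
LINEAR_GAIN = 0.5     # linear forecast: +0.5 units per year
DOUBLING_YEARS = 2.0  # exponential forecast: doubles every 2 years

for year in (5, 10, 20, 40):
    linear = START + LINEAR_GAIN * year
    exponential = START * 2 ** (year / DOUBLING_YEARS)
    print(f"year {year:>2}: linear {linear:6.1f}   exponential {exponential:12,.1f}")
```

By year 40 the linear guess is 21, while the exponential one is over a million. The two stories barely differ at first and then diverge absurdly, which is why linear intuitions keep getting blindsided.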
So how’s Ray doing twenty years later? Well, we’ve already faced a genetically engineered pathogen, released by a government probably on purpose, in the form of Covid, something that nearly everyone, including the U.S. government, now admits. Nanotechnology, at least in the full-blown “molecular manufacturing” form, seems to be taking longer than predicted back then. And AI seems to be moving pretty fast.
But how fast is unclear. I’ve been using various AI platforms and I’ve found them fairly useful. My mother needed a new printer, one that fit the rather diminutive dimensions of the printer tray in her turn-of-the-millennium-era “computer armoire.” I plugged the dimensions of her old printer, which fit fine, into Grok, got several recommendations, picked the top one, and it fit perfectly and works well. Handy!
I’ve also asked Grok to perform such tasks as researching treatments for my wife’s cardiac problems, computing the grade curve for my classes, and helping me find good deals on airline flights. I’ve also used Claude, which I like, and ChatGPT, which I’m not crazy about but which has improved dramatically. My law school IT guy says I should give Google Gemini another try, after I wrote it off when it first appeared because, well, back then it sucked.
But the thing to remember here is that the AI platforms keep improving, and at short intervals, while humans stay more or less the same. Any opinion about which platform is best has a short shelf life, because the field keeps moving rapidly.
So the question is, will this more-or-less exponential improvement continue? It won’t do so forever, because no exponential trend continues forever. But will it continue long enough to produce AI that’s unfathomably smarter than human beings? And will it then rule the world?
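The usual way to picture “exponential for a while, then not” is a logistic, or S-shaped, curve: indistinguishable from an exponential early on, then flattening against a ceiling. Here’s a toy sketch; the ceiling, rate, and midpoint are made-up numbers, purely for illustration, since nobody knows where AI’s ceiling actually is:

```python
import math

# Toy logistic (S-shaped) curve: looks exponential early, then
# saturates. Ceiling, rate, and midpoint are made-up numbers.

CEILING = 1000.0  # hypothetical upper bound on the capability index
RATE = 0.5        # growth rate per year
MIDPOINT = 15.0   # year at which the curve hits half the ceiling

def capability(year: float) -> float:
    return CEILING / (1.0 + math.exp(-RATE * (year - MIDPOINT)))

for year in range(0, 31, 5):
    print(f"year {year:>2}: {capability(year):8.1f}")
```

Early on the output climbs by roughly an order of magnitude every five years; by year 25 it has nearly stalled. The interesting question isn’t whether AI is on such a curve (it almost certainly is) but where the flattening starts.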
I’m going to guess that the answer to the first question is yes, because human beings aren’t really all that smart, considering. Yes, people are much more intelligent than other species, for the most part. But that should tell you something. If intelligence were such a super-duper advantage, we’d expect an evolutionary arms race in brain power, and there pretty clearly isn’t one. Humans are outliers here.
So maybe being unfathomably smart isn’t a great advantage in general. If it were, there would probably be more examples of great intelligence out there.
But there’s one place where being unfathomably smarter than human beings pays off, and that’s in manipulating human beings. A superintelligent AI might not be in a better position to rule the world, but it might be in a better position to rule the world of human beings. (What folk wisdom there is about dealing with much more intelligent beings — demons and devils and such — says the best thing to do is not talk to them, since they can always fool you.)
But would it want to? Super-smart humans seem to find other things more interesting than ruling humans. (Indeed, a simple glance around should tell you that the people most interested in ruling humans are not the smartest among us, to put it mildly. At best they’re midwits for the most part.)
Right now, though, AI is at about that level, if that. I’d like a true AI assistant that I could simply tell to book my flights and hotel reservations and get them the way I’d like, but none can do that. I’ve used Grok’s voice mode, and it’s good at answering questions, but it won’t shut up once it’s done so. It’s like having Sheldon Cooper in your pocket. Nor can Grok, despite real-time access, make plane or hotel reservations, or appointments at salons, or whatever, like a true personal assistant would.
So for now, we’re still early in the hype cycle. But it’s also worth considering the likelihood that the publicly available AI tools aren’t the best ones. Perhaps the field is currently moving so fast that they are. But I would expect, in time if not quite yet, that people in the biz will hold back the best AI for themselves.
What should we watch for to see if that’s happening? Basically, anyone who seems to be making much better decisions and much better predictions than normal, and who is much better than normal at spotting opportunities, and weak spots among enemies and competitors. Are we seeing that yet? I don’t think so.
So far.