Vernor Vinge has died, but even in his absence, the rest of us are living in his world. In particular, we’re living in a world that looks increasingly like the 2025 of his 2006 novel Rainbows End. For better or for worse.
I know quite a few science fiction writers, some better than others. Arthur C. Clarke and I corresponded for decades, though the only time we talked on the phone – his prediction that “long distance” would cease to exist hadn’t instantiated yet – was when he called me from Johns Hopkins to tell me that he wouldn’t, as it turned out, be able to write an introduction to the space law book I was coauthoring with Rob Merges because he appeared to be dying from Lou Gehrig’s disease. (Fortunately, that turned out to be a misdiagnosis; I believe he was actually suffering from a severe case of post-polio syndrome). I’ve known John Scalzi since Old Man’s War, was close to Jerry Pournelle and am still friends with his son Alex, and get together with John Ringo for lunch sometimes when we go to Chattanooga or he comes up to Knoxville. Charlie Stross and I exchange emails, and he was kind enough to offer to help my daughter find an apartment when we thought she might attend the University of Edinburgh. There are others, including of course Sarah Hoyt, who is a coblogger at InstaPundit.
I emailed Vinge periodically, and Helen and I interviewed him for the late, lamented podcast, The Glenn and Helen Show, where we talked at considerable length about the Singularity and the power of simulations. (Though Helen expressed considerable doubt about whether digitally simulated sex would be as satisfying as the real thing).
As happened with several of our podcast interviews, the conversation went on after the recording ended, and it was just mutually delightful. Helen, who didn’t know much about him before, went on to read Rainbows End, and liked it even though she’s not big on science fiction, or fiction in general. He was just a very smart, unaffected, not at all full of himself sort of guy. Helen was very upset when I told her he had died, and also shocked when I told her that it was going on 20 years since that 2007 interview.
Vinge is best known for coining the now-commonplace term “the singularity” to describe the epochal technological change that we’re in the middle of now. The thing about a singularity is that it’s not just a change in degree, but a change in kind. As he explained it, if you traveled back in time to explain modern technology to, say, Mark Twain – a technophile of the late 19th century – he would have been able, basically, to understand it. He might have doubted some of what you told him, and he might have had trouble grasping the significance of some of it, but he would have understood the outlines.
But a post-singularity world would be as incomprehensible to us as our modern world is to a flatworm. When you have artificial intelligence (and/or augmented human intelligence, which at some point may merge) of sufficient power, it’s not just smarter than contemporary humans. It’s smart to a degree, and in ways, that contemporary humans simply can’t get their minds around.
I said that we’re living in Vinge’s world even without him, and Rainbows End is the illustration. Rainbows End is set in 2025, a time when technology is developing increasingly fast, and the first glimmers of artificial intelligence are beginning to appear – some of them not at all obviously.
Well, that’s where we are. The book opens with the spread of a new epidemic being first noticed not by officials but by hobbyists who aggregate and analyze publicly available data. We, of course, have just come off a pandemic in which hobbyists and amateurs have in many respects outperformed public health officialdom (which sadly turns out to have been a genuinely low bar to clear). Likewise, today we see people using networks of iPhones (with their built-in accelerometers) to detect and observe earthquakes.
But the most troubling passage in Rainbows End is this one:
Every year, the civilized world grew and the reach of lawlessness and poverty shrank. Many people thought that the world was becoming a safer place . . . Nowadays Grand Terror technology was so cheap that cults and criminal gangs could acquire it. . . . In all innocence, the marvelous creativity of humankind continued to generate unintended consequences. There were a dozen research trends that could ultimately put world-killer weapons in the hands of anyone having a bad hair day.
Modern gene-editing techniques make it increasingly easy to create deadly pathogens, and that’s just one of the places where distributed technology is moving us toward this prediction.
But the big item in the book is the appearance of artificial intelligence, and how that appearance is not as obvious or clear as you might have thought it would be in 2005. That’s kind of where we are now. Large Language Models can certainly seem intelligent, and are increasingly good enough to pass a Turing Test with naïve readers, though those who have read a lot of ChatGPT’s output learn to spot it pretty well. (Expect that to change soon, though).
I remember that in Clarke’s novel 2001 (superior to the film, in my opinion), some experts still insisted that the HAL 9000 computer only “mimicked” intelligence. Clarke suggests that they were wrong about that, or at least that it made no real difference which it was. But the thing about the machines is that they keep getting better from one generation to the next, something that human beings are definitely not doing.
Also at play in Rainbows End is the question of trust. A computer is a powerful tool, but unlike a hatchet, or even a machine gun, the user can’t always be sure who it’s really working for. This is a major risk, and to my mind a problem with the theory that humans will be able to counter the growth of AI by augmenting their brains digitally. Already our digital devices, even our cars, are sometimes working for someone else without our knowledge. I wouldn’t want that to be extended to my brain, and yet how would I know?
Well, these are problems for us to worry about, as Vernor has left us. But he did us a great service by getting that thinking started. I think I’ll read Rainbows End again, just for inspiration.
“A computer is a powerful tool, but unlike a hatchet, or even a machine gun, the user can’t always be sure who it’s really working for.”
Another key insight from that wonderful mind. I’m grateful that mind worked for the betterment of mankind!
Regarding AI in science fiction, I tried reading the Iain Banks books about the Culture, because they were favorites of Elon Musk, who names his landing barges after them. I gave up because the Culture is too depressing. The galaxy is ruled by planet-sized AIs, and humans persist as more or less pets. If that’s our future, what’s the point?