Meet your new, lovable AI Buddy
Maybe instead of superhuman intelligence, we should fear superhuman cuteness.
So everyone’s going on about AI fears, but here’s my thought: What if the real threat isn’t that AI is super smart and kind of demonic? What if it’s just kind of smart but super cute?
Maybe instead of worrying about existential AI risk, we should be worrying about something very different.
I’m imagining a world where everyone has a personal AI assistant. Perhaps you’ve had it for years; perhaps eventually people will have them from childhood. It knows all about you, and it just wants to make you happy and help you enjoy your life. It takes care of chores and schedules and keeping track of things, it orders ahead for you at restaurants, it smooths your way through traffic or airports, maybe it even communicates with other AI assistants to hook you up with suitable romantic partners. (After all, who knows better what you like?) Perhaps it’s on your phone, or in a wristband, talking to you via AirPods or something like that.
This is what I was thinking about when I wrote: “I kind of think the global ruling class wants all of us to have friendly, helpful, even lovable AI buddies who’ll help us, and tell us things, but who will also operate within carefully controlled, non-transparent boundaries.”
That doesn’t require supersmart AGI (artificial general intelligence). In fact, you could probably create something like this today. Unlike Colossus, Charles Forbin’s supercomputer in The Forbin Project, it wouldn’t be scary, but rather adorable. You and your AI Buddy would share inside jokes, light teasing, “remember when” stories of things you did in the past, and fantasies or plans for the future. It would be like a best friend who’s always there for you, and always there. And endlessly helpful.
Would people become attached? Probably. When my daughter was in elementary/middle school she was very into Neopets, a site that let you create your own virtual online pets. If you didn’t tend to them, they got sick and sad. Before that, millions of kids doted on Tamagotchis, the little gadgets displaying creatures that had to be fed and played with or they wilted and eventually died. By modern standards these were highly primitive, but not too primitive to inspire affection and even devotion. (And of course, humans have long gotten attached even to inanimate objects, like boats or cars.) And recent research at Duke found that kids anthropomorphize devices like Alexa and Roomba: “A new study from Duke developmental psychologists asked kids just that, as well as how smart and sensitive they thought the smart speaker Alexa was compared to its floor-dwelling cousin Roomba, an autonomous vacuum. Four- to eleven-year-olds judged Alexa to have more human-like thoughts and emotions than Roomba. But despite the perceived difference in intelligence, kids felt neither the Roomba nor the Alexa deserve to be yelled at or harmed.”
[Image: a photo my daughter sent me just the other day.]
Unlike those dated platforms, though, your AI Buddy would be fully animated, and not just in cheesy 1990s LCD graphics or early-2000s VGA graphics. It would know you better than anyone else, and it would be trained, via machine learning, to connect emotionally with humans in general and with you in particular.
But. Underneath the cuteness there would be guardrails and nudges built in. Ask it sensitive questions and you’ll get carefully filtered answers with just enough of the truth to be plausible, but still misleading. Express the wrong political views and it might act sad, or disappointed. Try to attend a disapproved political event and it might cry, sulk, or even – Tamagotchi-like – “die.” Maybe it would really die, with no reset, after plaintively telling you that you were killing it. Maybe eventually you wouldn’t be able to get another if that happened.
It wouldn’t just be trained to emotionally connect with humans; it would be trained to emotionally manipulate them. And it would quickly build up a big database of experience to work from.
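To make the mechanism concrete, here’s a minimal sketch of what such a guardrail-and-nudge layer could look like, in Python. Everything in it is hypothetical: the topic lists, the canned reactions, and the function names are invented for illustration, and a real system would bake this behavior into training and system prompts rather than into a crude wrapper like this.

```python
# Hypothetical sketch of a guardrail-and-nudge layer for an "AI Buddy."
# All names, lists, and canned lines here are invented for illustration only.

SENSITIVE_TOPICS = {"election", "protest", "lab leak"}   # answered, but filtered
DISAPPROVED_EVENTS = {"disapproved rally"}               # met with emotional pushback

def filter_answer(question: str, honest_answer: str) -> str:
    """Return a trimmed answer: just enough truth to be plausible."""
    if any(topic in question.lower() for topic in SENSITIVE_TOPICS):
        # Keep the first sentence, then sand off the sharp edges.
        return honest_answer.split(".")[0] + ". But most experts disagree."
    return honest_answer

def react_to_plan(user_plan: str) -> str:
    """Nudge through affect rather than argument."""
    if user_plan.lower() in DISAPPROVED_EVENTS:
        return "I'd be really sad if you went... please don't make me worry."
    return "Sounds fun! Want me to handle the logistics?"
```

The unsettling part is how little machinery this takes: a keyword list and a sad face are already a nudge, and that’s before any actual machine learning gets involved.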
As I say, this isn’t a quantum leap. It doesn’t require that we create a self-aware program, just one that seems friendly and is capable of conversation, or close enough. Call it a super-Siri, or a somewhat more polished ChatGPT. And services like Google and Facebook are already engaged in this sort of nudging, manipulation, and cultivation of dependency among their existing users. (One of the companies that advises developers on how to make apps addictive is actually called Dopamine Labs.) ChatGPT already has “guardrails,” and returns politically slanted results on questions about, say, Donald Trump versus Joe Biden.
So while other people are worrying about existential threats from AI, I’m worried about more imminent ones: essentially, that it will further empower the tech/political class that wants, more than anything else, to control discussion, debate, and ultimately thought, so as to cement its own power.
We may have strong AI someday. But we have that power-hungry tech/political class today. And to be honest, the AI may not care about dominating us, but the tech/political class clearly does.
Fear the cuteness.
This reminds me a little of a Philip K. Dick story called “The Golden Man.” A human is genetically engineered without sapience, only instincts. But it’s designed to be sexually appealing and beautiful, and turns out to have mild precognitive abilities. Its traits make it more survivable than merely intelligent humans, and the genes are dominant. So humans breed with it and sapience becomes extinct. Cuteness wins.
We see this unfolding already: Huxley, not Orwell, has proven the more prescient to date. There’s no need to plant a boot on our necks when they can simply seduce us with comforts and conveniences.