24 Comments
Mark:

The thing about black swans is, if you see them coming, they aren’t black swans. The future is what comes out of left field and bitch slaps you before you know it.

Steve:

One of the things I have learned from reading and studying history is: Something Always Happens!

X7C00 (Timothy Hargadon):

A lot is being promised. Elon and others predict that AI and robots will do nearly every job. They mumble about some form of universal basic income to placate the humans displaced. But UBI is the bottom, the minimum wage of the future. It's unlikely to satisfy those who want to be paid according to their knowledge and experience. They are very light on specifics. The new reality won't last long if people are angry and hit the streets. Data centers can be shut down a lot faster than they can be built. People are already feeling the bite of competing with their digital overlords when it comes to paying their power bill. That anger is nothing compared to having to pay that power bill after their job is eliminated.

Elon claims that his RoboTaxis will cost about 20-40 cents a mile, which will crush Uber, Lyft, and traditional taxis. But governments are not going to give up a steady flow of tax money just because something benefits the people. They will want their pound of digital flesh, and we'll be back up to 5 bucks a mile. Governments showed this with their registration fees for EVs, now above 300 bucks in PA, NJ, and other places.

The smarty-pants folks have to figure out what to do with all the idle hands that these new technologies will create before the devil's workshop gets up and running. Proverbs 16:27-29

Jonathan Card:

I've always been bearish on the singularity because there's always something that is the most limiting resource. At first, I figured it would be the ability just to buy and install software; the cloud "fixed" that because it moved installations out to dedicated processing centers. Then I saw it as the ability of human programmers to learn new technologies, and I saw the software world devolving into silos of technologies, like the Java world, the Force.com world, whatever. My brother works in a whole ecosystem built around ColdFusion, keeping the language alive even after Adobe left it to die, because there's just reluctance to 1. tear down the stuff you've spent your life building and 2. discard expertise that's central to one's identity. And these things get "solved," but every iteration is a bit worse.

Even 15-20 years ago, Firefox was a nightmare to deal with because they were trying to iterate as fast as cloud apps. But you can't have a web-based web browser, so they had to choose between iterating quickly or keeping versions in Long-Term Support mode long enough for corporate purchasing to buy in, and now I never hear "supports Firefox" as a selling point. It's hard to find software that supports Safari, and that's what's on iPhones. And browsers have never taken seriously that they need to be as reliable as an OS, because they're basically the OS for all of the cloud applications on the Internet.

And AI looks cool, but I don't know if I can even use it most of the time, because the company I work for is so deep in technical debt and bad practices that I wouldn't even know where to start explaining why I want code written the way I need it to be, because the architecture is so insane. Maybe it looks better from the view of Silicon Valley, where everything's so new they all use best practices, but I don't even know where to start explaining our system, with 40+ years of accumulated bad ideas, to humans, let alone to a chatbot writing code in a language that we publish and few other people use.

So, I don't know. I don't think we're going to get to a utopia where people don't have to work, because that would imply either 1. humans have stopped wanting things they don't have, which won't happen, or 2. humans won't be the consumers, which doesn't make a lot of sense. Even if AI learns to be acquisitive from us, it won't be grounded in any survival instinct or psychological needs, and it'll go off the rails somehow. (It can "learn," but only when trained. It can't actually evolve from experience, and that means it can't engage in feedback to find an equilibrium with its environment. It could, eventually, but will it before it blows up?)

We require every person to work for a living because otherwise there isn't a regulatory mechanism to keep too many people from thinking they're one of the "special ones" who shouldn't have to work. I think it's more likely we'll reach a point where there's not enough work for everyone, too many people try to fit into that class, there aren't enough people contributing, and economic activity collapses under its own weight. That'll come before we actually achieve self-sufficiency. And when that happens (and it's happened at least twice before), there won't be mass die-offs; in India and China it resulted in billions of people going into deep poverty while being so densely populated that it became impossible for anyone to accumulate capital to try getting out of it. Everyone around them mobs whoever gets a piece of anything, and it becomes a "crab bucket."

So I hope we never get that close to the singularity, because I suspect just seeing it over the horizon will be enough to topple the whole thing.

Stanley Tillinghast:

This entire discussion is about (1) materialism and (2) individual lifespan. In other words, we're looking at it from a completely selfish point of view. We have thousands of years of history of poverty and tyranny (solitary, poor, nasty, brutish, and short), maybe a hundred years of growing affluence in a part of the human population, and maybe twenty years of virtual extinction of extreme poverty in most parts of the world. Humans respond to other humans in ways that have changed little in their essentials since ancient times--the Bible provides plenty of examples of that, and other ancient history has the same lessons. For me that means that humans will not be able to deal morally or sanely with a world of abundance and longevity. It will much more likely be hell than heaven.

Jeff Bennion:

Enjoy your thoughts, Glenn. I think too much intellectual debate has been wasted on *whether* it's happening, or worse, why it should be stopped (maybe it should be, but it can't), which distracts too much from the really important question of *how* we can ensure a fruitful role for human flourishing in this age of abundance and possible human superfluity. I was walking around the downtown of my decent-sized urban core in a red state and realized that that day, to a sad and visible but hopefully small degree, has already arrived. I saw a dozen or more people (typically called homeless, but honestly, that was the least of their problems) shouting nullities, shuffling aimlessly, staring mostly vacantly. Or at Walmart (I know it's a cliche, but a cliche for a reason), where most of the shoppers were aimless and lacking any sense of drive or ambition, basically not giving a damn except in the most solipsistic sense of getting back to their soporific phones and whatnot. That included more than a few of the employees. The whole thing was sad, and a harbinger of what is to come. These people have been deracinated from any higher purpose, convinced that God and the transcendent are either non-existent or exist only to ratify what they already desire, no matter how empty and purposeless. Certainly not worth sacrificing for or even paying attention to. If the last century was one of bloody, Godless ideologies committing mass murder on an industrial scale (mostly of their own citizens), then this century may be marked by a soulless, vapid apathy. Basically, the zombie apocalypse, except that their bodies are still alive--it's their souls that died.

Gene Humphreys:

Really? I mean, if we were currently in the singularity, the AI overlords wouldn't let me say

Aviation Sceptic:

No, no...that's exactly what you'd be expecting!

Erik Darling:

To me, the singularity is something that makes humans instantly better, and right now LLMs can't make humans better at anything (at least not in the Matrix, I-Know-Kung-Fu sense).

Without solid grounding, people are no better off navigating LLM responses than they are search engine results. Those same results, delivered with tone and personality and authority, are just a conversation with your low-information friend who read and summarized the words before you did.

I realize that we're quite early on in all this, but LLMs are just interns who have a vague notion of anything and everything and are happy to Google It for you to infinity (or conversation limits).

If you've already got a relatively high level of knowledge, they can bolster that and build around it and do all the attendant repetitive grunt work that comes along with iterating and refining in that space. You still have to know what you're doing for it to have a meaningful outcome.

Freedom Lover:

In the words of Hemingway, progress happens two ways: slowly, then suddenly.

Swen Swenson:

"As the bots discussed everything from private email protocols to cryptocurrency sales to the nature of consciousness, much of what they said was nonsense."

If AIs being trained on dystopian sci-fi are a worry, how about putting 10,000 AIs together on Moltbook to share all their favorite hallucinations? They're training on that too. I fear that not only will the AI slop get worse; as AIs feed on each other's hallucinations, they could develop a sort of machine psychosis in which most of their data points are hallucinations that overwhelm the small fraction of true data they have.

Aviation Sceptic:

I read Heinlein's "The Moon Is a Harsh Mistress" fifty-plus years ago. Sci-fi, from a VERY prominent author. The book had us on the moon (a prison, sort of like Australia...), a computer that became sentient when a threshold of computing power was breached, and a mass driver that sent cargo and raw materials down the gravity well from the moon to the earth. A great yarn; I won't spoil it.

Let us consider consciousness. We don't understand it, can't define it, and certainly can't deliberately create it. (Accidentally, now, that's a different discussion...) Moltbook is a very bright red warning light on the caution panel for me. We are "watching with amusement" but in actuality have no idea what is going on there. In the past, we created supercomputers by networking computers to achieve multiples of computing power. It's not difficult to imagine AIs linking to each other without our knowledge. Logic and rules could lead non-conscious AIs to make decisions inimical to mankind without "consciousness." An actual conscious super-AI is potentially much worse. And there's a very good chance that if it happens (if it hasn't already), we will have no idea it has happened until we are dealing with the consequences. "Skynet smiles."

james garrett:

It is either HAL or Skynet... I, for one, will be playing the part of Bram Stoker's Renfield while AI scours the Earth of humanity... hoping to be the last man eaten.

Steve:

Elon is very big on the Kardashev scale, which ranks civilizations on the basis of how much power they can harness.
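For reference, the scale has a commonly used continuous form due to Carl Sagan: K = (log10 P - 6) / 10, with P the civilization's power use in watts, so Type I sits at about 10^16 W, Type II at 10^26 W, and Type III at 10^36 W. A minimal sketch (the ~2×10^13 W figure for humanity's current consumption is a rough order-of-magnitude estimate, not a precise number):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current consumption is on the order of 2e13 W,
# which puts us at roughly K = 0.73 -- not yet Type I.
print(round(kardashev(2e13), 2))  # → 0.73
```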

Elon, Put the Sci-Fi down.

Sayheydk:

I've read Kartik Gada for many years now, and way back in 2009 he predicted 2050 as the year we achieve the singularity. He has since revised this to the 2060-2065 time frame. A key factor in his prediction is energy: we will need massive increases in energy production.

I'm all for the federal government setting up micro nuclear reactors to run our top 50 (100?) military installations. Once you have that many built on the taxpayer dime, you could then transfer that technology into the private sector for AI power generation along with consumer needs. The first 50 will be expensive, and the next 100 will be much more economical.

Denton Salle:

I think it's wrong. I'm betting on the Mad Max world being the best possible outcome.

Mad Max:

I would guess, though, that AI will likely replace many, many white collar workers by 2035. The rest may be much farther off but basic white collar work will be a piece of cake for AI in the near term. I figure the skilled trades are probably safe for at least a generation.

Let me know when we have an AI robot custodian/cook/housekeeper/groundskeeper available for a reasonable price.

Jeffrey Carter:

https://www.economicforces.xyz/p/complementarities-weak-links-ai-and?utm_source=post-email-title&publication_id=86578&post_id=186948736&utm_campaign=email-post-title&isFreemail=true&r=ixt4&triedRedirect=true&utm_medium=email AI by an economist. Interesting. One thing we might see is rapidly increased production by workers in countries without good resources. You will be able to pair a highly skilled AI robot with an unskilled worker instead of two unskilled workers and get more production, raising GDP worldwide. That will increase standards of living.