Here are five things that I believe to be true about generative Artificial Intelligence as it exists today.
(I’ve written about a few of these points before, but it has been a couple of years. I’ll include links to those old pieces below.)
Generative AI is best understood as a satisficing technology.
Satisficing is a portmanteau of “satisfy” and “suffice.” It was coined by Herbert Simon in the 1950s.
In layman’s terms, satisficing is the process of (1) establishing the threshold where your work product is good enough, (2) working until you have reached that threshold and then (3) stopping. Think of satisficing as an alternative to maximizing — expending maximum effort to produce your very best work.
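For readers who think in code, the stopping rule is easy to sketch. Here is a minimal toy in Python (the threshold, the pool of drafts, and the scoring function are all invented for illustration):

```python
import random

def quality(draft: int) -> float:
    """Stand-in scoring function: pretend it rates a work product from 0 to 1."""
    return random.Random(draft).random()  # deterministic per draft, purely illustrative

def satisfice(candidates, threshold=0.7):
    """Take the first option that clears the 'good enough' bar, then stop."""
    for c in candidates:
        if quality(c) >= threshold:
            return c
    return None  # nothing cleared the bar

def maximize(candidates):
    """Score every option and keep only the single best one."""
    return max(candidates, key=quality)

drafts = range(100)
print(satisfice(drafts))  # stops at the first draft scoring >= 0.7: cheap, good enough
print(maximize(drafts))   # evaluates all 100 drafts: expensive, best possible
```

The point of the sketch is the shape of the loop: the satisficer’s cost depends on how quickly something clears the bar, while the maximizer always pays the full price.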
Actually-existing AI is best-suited to scenarios where good enough is all you need. The 50-page report that has to be submitted but, realistically, no one was ever going to read. Recording the minutes of a routine Zoom call. Preparing an itinerary for a family trip to Disneyland.
Making that distinction between situations where you need to do your best work and ones where you just need to go through the motions is a good way to make sense of when and if the technology can be useful.
(For researchers and organizations, it is also well-suited to a lot of tasks where machine learning was already being used. If you were already using natural language processing for sentiment analysis, then generative AI is often going to be an upgrade.)
On Generative AI and Satisficing
I’ve been thinking recently about how generative AI tools might fit into our lives. The best framework I can come up with revolves around Herbert Simon’s concept of “satisficing.”
Many of the problems with generative AI arise because Private Equity/Venture Capital/Unregulated Industrialists have a different satisficing threshold than the rest of us.
Ted Chiang wrote my favorite essay on this topic: “Will A.I. Become the New McKinsey?” I recommend it every chance I get.
I also just finished reading Megan Greenwell’s new book, Bad Company: Private Equity and the Death of the American Dream. The book is a riveting, searing portrait of the costs we all collectively pay by allowing Private Equity vultures to continue to play a rigged game.
It is no surprise that generative AI has mostly been deployed to offer worse products at lower costs across a range of industries. The satisficing threshold for good enough is not set by individual journalists or doctors, or even by managing editors or hospital administrators. It is set by owners who know nothing about the product and care about nothing other than immediate returns.
One could imagine a version of generative AI that is useful to journalists, to medical professionals, to political organizers, etc. But the future of media, medicine, politics, etc. is not developed in a vacuum by well-intentioned professionals.
This all seems plainly obvious if we imagine what actually-existing capital will likely try to use these technologies to achieve. And it is utterly invisible if we focus solely on the technologies themselves.
What happens after the ChatGPT free-trial period ends?
[Dall-E 2 prompt: a robot cash register, digital art]
We should be much more worried about technology’s second failure mode than its first.
When we imagine the future of any emerging technology, there are two distinct failure modes. One can either imagine what might happen when a technology works as advertised, but at larger scale, or one can imagine what would happen if a technology breaks down, or does not work as well as advertised.
Much of the discourse surrounding Artificial General Intelligence (or Superintelligence, the new term du jour) operates in the first failure mode. Eliezer Yudkowsky and the AI 2027 people assume that we will have superintelligent AI very soon, and insist that the results will be disastrous. Sam Altman and Dario Amodei agree that we’ll have superintelligent AI quite soon, but figure aw shucks it’ll just be incredible once we all adapt.
…Meanwhile, Grok is declaring itself MechaHitler and the Department of Defense has announced a $200 million contract with xAI.
I am not so worried about Grok becoming a superintelligent Skynet. I am quite worried about the DoD handing critical responsibilities to an unreliable large language model run by a company that constantly overpromises and underdelivers.
Two Failure Modes of Emerging Technologies
There is a pervasive sense right now that, in the field of artificial intelligence, we are living in early times. Depending on who you ask, AI is some mix of exciting, inevitable, and scary. But all agree that it is real, it is here, and it is growing. The present is merely prelude.
This is going to amplify dangerous conspiracy theories. If you keep talking about these systems as “artificially intelligent,” don’t be surprised when people find signs from God in there.
QAnon and the January 6th Big Lie both predate ChatGPT. Both were flimsy fucking theories. The QAnon crowd thought JFK Jr. was secretly alive and ready to reveal himself. They clung to the belief that DC pizza joint Comet Ping Pong was holding child sex slaves in cages in its basement. Comet Ping Pong does not have a basement. The January 6 conspiracists thought that Democrats controlled Georgia’s voting machines, despite Republicans being, y’know, in charge of the whole damn state government.
Kashmir Hill had a story last month about people looking to chatbots for answers and falling down conspiratorial rabbit holes. My friend Ben Riley has written on the topic as well. All of this is terrible. None of it is surprising. All of it will get worse. I do not see how it gets better anytime soon. (It surely won’t get better in the absence of government regulation.)
There are a lot of things that I like about Arvind Narayanan and Sayash Kapoor’s recent paper on “AI as Normal Technology.” One reason why I would much prefer that we refer to LLMs as “Machine Learning” instead of “Artificial Intelligence” is that, if you tell people you’re building digital god, some of them will surely believe you.
What's in a name? "AI" versus "Machine Learning"
Today I want to fuss over language for a bit. I’ve begun to suspect that the term “Artificial Intelligence” manages to obscure more than it reveals.
There is a bubble. It isn’t going to burst anytime soon.
I have pretty firmly planted my flag in the AI skeptics camp. I don’t use AI for writing or for teaching — not because of some grand moral opposition, but because I don’t find it the least bit useful to my workflow.
I don’t think generative AI is pure vaporware, but I also don’t think it will ultimately qualify as a general purpose technology. When the dust settles, I suspect it will be transformative in roughly the same ways that the word processor was transformative.
But if I’m right about that, then AI is currently in a massive financial bubble. The multi-billion-dollar valuations, the spending spree on talent and on chips and on gigawatt-scale data centers… It all feels a lot like the late dotcom-era glut of broadband investment.
The returns are simply never going to recoup the investment costs.
Ed Zitron has been the loudest and clearest AI critic on this point. He has argued at great length, for a couple of years now, that this is a financial bubble and it is about to burst.
The one point where I pretty strongly disagree with Zitron is that I don’t expect this bubble to burst anytime soon. “The market can stay irrational longer than you can stay solvent.” We are living in exceptionally irrational times. Just look at Tesla’s overvalued stock. Just look at Bitcoin’s recovery.
The entire stock market is being propped up by companies whose valuation is anchored to the AI futurity bubble. When that pops, it is going to be a cataclysmic event for the whole finance sector. And that means, in turn, that the whole finance sector will pull out every trick to keep the system running a little longer.
We should be clear-eyed and critical of these technologies. We should be skeptical of the promises made by carnival barkers like Sam Altman. But let’s put the brakes on predictions of the bubble’s imminent demise.
On AI agents: how are these digital butlers supposed to get paid?
I’ve been hearing and reading a lot about AI agents lately.
Artificial Intelligence is a bad term, but “machine learning” is arguably just as anthropomorphic. The machines are solving pattern-matching problems, but they aren’t learning in any meaningful sense, any more (or less) than we use them to estimate, say, a model of the macro-economy. “Large Language Model” is an accurate description.
Two failure modes. I like that; we’ve seen it. I just finished the first book of Kim Stanley Robinson’s Mars Trilogy. I doubt I’ll slog through the later ones. He wrote it in the early 90s and his book is set in... wait for it... 2025. Or it begins there. The assumption is that the Space Shuttle program went incredibly well, that they found a way to push lots of those big fuel tanks into orbit and used them to build big space habitats, etc., including the Mars ship that launches with 100 scientists aboard in 2025. (At the last minute, the first man to walk on Mars, several years previous, is added to the crew.) So all things go incredibly nominally and it all ends up looking like Elon’s fantasy. Except for the civil war in space, of course.
There's so much fantasy about what the future will be (much of it not even masquerading as science fiction, as this book does), while hardly anyone figures on the ordinary slippage, the missed deadlines, and the things that fall apart unexpectedly as big projects, like an ordinary human life, move forward. What's scary is that things do fall apart, and they fall apart in direct relation to how grandiose our expectations are.
If the AI bubble is bigger than the dotcom bubble and the housing finance bubble (“let me give your parakeet a $3M mortgage, nothing down, no documents required”) and it's going to last longer, that makes me fear the reckoning will be several times worse than the Great Recession. Not a happy prospect, given that my current income is entirely a combination of the full faith and credit of the U.S. government, which we've entrusted to the guy who wants to fire the head of the Federal Reserve, and pension instruments completely invested in a financial system based on the stability of that government.
Breathe deep. Simplify. Maybe it will hold on until my sell-by date in the late 2030s or early 40s.