I’ve been thinking recently about how generative AI tools might fit into our lives. The best framework I can come up with revolves around Herbert Simon’s concept of “satisficing.”
Satisfice is a portmanteau of “satisfy” and “suffice.” Simon won the 1978 Nobel Prize in Economics for his work on the topic. He disputed the then-common assumption that people behave approximately like perfectly rational economic agents, gathering unlimited information to make optimal decisions. In fact, he showed that doing so would itself be irrational, because of the opportunity cost of limitless information-gathering.
In layman’s terms, we satisfice by (1) figuring out what conditions must be met and what information must be gained in order to reach a good-enough (satisfactory) decision, (2) researching until we reach that threshold, and then (3) settling on what we have found. Simon argued that this type of bounded rationality was a better model of actual human decision-making than the rational-actor models that the economists of his day trafficked in.
Satisficing can seem identical to laziness. When I plan a vacation or purchase a new appliance, I resist the urge to spend countless hours researching the perfect hotel or trying to find the perfect dishwasher. I do enough research to find an option that suits my needs, then I stop looking. For example, I bought a grill a couple of years ago. I looked on Wirecutter. I decided which of their recommendations most fit my needs (gas or charcoal? Do I need a smart grill? How big?). I checked one other website to see if the recommendations lined up. And then I confirmed it was available at the nearby hardware store. Done. It is possible that a few more hours of research would’ve yielded a better grill or a slightly cheaper one. But not so much better or cheaper as to be worth the effort. Boom. Satisficed.
I’d like to be better at satisficing in my writing practice. I have, at present, four different essays outlined that I cannot write yet, because I still have to read another book or two to feel confident enough to speak on the subject. (There is perpetually more reading to do, and I am the type of academic who feels constant pressure over not having read enough.) Realistically, I am pretty sure it would be fine for me to just write the essays on Substack, read the books later, and then write follow-up pieces if I learn something new. That would be more productive behavior. But I always psyche myself out. I am pretty sure I would be a better writer and scholar if I got better at satisficing.
Herbert Simon developed the concept of satisficing during the golden era of broadcast media. It seems extra-relevant to social life today, because there is always more research one could do. We have endless information available on the internet. Much of it is wrong or outdated, so even if you find what you’re looking for, you might feel the urge to keep researching to make sure it is right. And that is often, ultimately, a bad use of your time. It is a good habit to know what good-enough looks like, and then move on to the next task once you’ve gotten there.
Which, in turn, brings me to the primary category of use-cases where I feel legitimate enthusiasm for developments in generative AI: I suspect ChatGPT is going to be marvelous for satisficing behavior and detrimental for everything else.
I have been (and still largely am) pretty skeptical of many of the claims about ChatGPT as a productivity revolution. The examples always seem so bland.
For instance, I keep hearing about how ChatGPT and/or AI-powered Bing can plan a trip to Disney World or recommend a meal plan, with recipe and ingredients list. And that’s true. They can do all that. You’ll want to double-check that the trip plan isn’t outdated and the meal plan isn’t a hallucination, but it is pretty astonishing how much better these LLMs are than their predecessors from just a few years ago.
I’ve written before that it’s best to understand generative AI tools as cliche-generators. The AI isn’t going to give you the optimal Disney World itinerary; it’s going to give you basically the same trip that everyone takes. It isn’t going to recommend the ideal recipe for your tastes; it’s just going to suggest something that works.
And that sounds great, because both of those tasks are obnoxious time-sinks. (Yes, please, recommend a basic meal that my kids might eat! Offer me the same bog-standard Disney vacation that everyone else eventually settles on!)
But there’s an important distinction between these satisficing-appropriate tasks and a lot of the ambitious dream-speak in this topic area.
Consider the Writers Strike. One of the main sticking points is that the studios are clearly imagining a near-future where Generative AI has replaced all those pesky humans.
Can a Generative AI produce a sitcom script? Yeah, it can. Will the script be any good? Meh. It will be, at best, average. Maybe it will be a creative kind of average, by remixing tropes from different genres (“Shakespeare in spaaaaaaaaace!”), but it’s still going to be little more than a rehash.
Keep in mind that the studios see no problem with meh television scripts. If it’s cheaper to produce, they’ll sacrifice quality for profitability. (Side note: have you read Ted Chiang’s latest New Yorker essay, “Will A.I. Become the New McKinsey?” It’s the best articulation of this perspective that I have seen.)
There are plenty of reasons to support the Writers Guild of America right now. But the simplest, most selfish reason is that the WGA is all that stands between you and a future of much shittier entertainment. A good writers’ room does a hell of a lot better than satisficing and cliche-generation.
Likewise, as I’ve mentioned before, AI ultimately isn’t going to replace your doctor or your lawyer. I can think of few scenarios that AI is less well-suited for than diagnosing a potentially fatal disease or working out tricky legal details. An AI could surely diagnose the simple stuff, or write a basic contract. So can WebMD and LegalZoom. That doesn’t mean you’ll want to rely on it when you need something more than satisficing.
Where I think this will be most transformative is in online productivity tools. We are probably approaching a future where Microsoft unveils a legitimately awesome next-generation Clippy. It will help you make charts in Excel, suggest PowerPoint slides, and offer genre-specific writing advice. It will simplify a bunch of grinding, obnoxious tasks, providing good-enough solutions when that’s exactly what you need.
Over time, generative AI will probably change writing in much the same way that word processing programs did. The best-case scenario is that we develop a more robust appreciation of cliches along the way. The trouble with cliches, for a writer, is that they’re lazy. They’re uncreative. The benefit of cliches is that they are easily understood by the reader and easy for the author to produce. One ought to lean more heavily on cliches when engaged in a satisficing-level writing task (writing memos that no one will ever read, for example). Doing so is efficient. Save your energy for the moments when it matters.
My other, more dystopian instinct is that there simply isn’t going to be enough money in online productivity tools to justify all the money that has been invested in building the AI future. OpenAI burned through $540 million developing ChatGPT last year. Sam Altman has suggested they’ll need $100 billion to develop the AI of his dreams. There is not $100 billion+ in revenue to be found in Clippy-but-awesome.
So I still suspect we’re on a bad trajectory, toward AI content farms and misinformation factories. Toward everyone getting just a bit ruder to one another as we are burdened by the downstream effects of AI’s hallucination problem. Toward the second failure mode of emerging technologies, where we ignore the bugs and limitations of these tools and incorporate them into social systems where they are wildly inappropriate.
Over time, the trajectory of every new technology bends toward money. There are reasons to be excited about the ways this new technology might simplify our lives. It’s going to make satisficing so much easier, and that is often just what we need. But we should also watch the emerging revenue models closely.
The best way to influence the development of generative AI isn’t to ban LLMs; it’s to put restrictions on how they are monetized.
Great piece. The last line is a real lightbulb moment. Constricting monetization opportunities is a clever regulatory route.
And I love the connect to satisficing. The perfect explanation of LLM as tool.
I definitely think the TV industry sees opportunities to use AI to generate the satisfictory(?) script, and then hire a writer to punch it up - much cheaper than asking a team of writers to spend weeks generating the original ideas. Netflix has built its business model on producing a quantity of satisficing programming, with occasionally great series mixed in, and lots of that wouldn't be much worse with punched-up AI. I imagine AI being most useful for writing in genres where the writing is meant to be more instrumental than artful: kids' shows, cooking shows, reality TV, home improvement, etc. - and most of all, advertising copy.
Supporting the WGA is a great way to constrict monetization!