Bullet Points: Techbros-telling-stories edition
Sam Altman, geoengineering, and AI obscuring decision-making, oh my!
Hi folks, I have three quick items to share before the weekend.
(1) I have a new article in The Atlantic, reacting to Sam Altman’s latest manifesto. “It’s time to stop taking Sam Altman at his word.”
The TL;DR version is that the business model of OpenAI isn’t actually ChatGPT as a product. It’s stories about what ChatGPT might one day become. And, if you read Altman’s “The Intelligence Age” closely, what really stands out is how fantastical the stories really are.
Altman insists that, in just a few years, Generative AI will “solve all of physics.” We should maybe keep in mind that Sam Altman is not a physicist! He’s an entrepreneur. He is remarkably talented at performing the role of Silicon-Valley-visionary. But we really ought to stop equating tech-visionary-bluster with actual scientific knowledge.
At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future so we don’t get too caught up in the underwhelming details of the present. Why focus on how AI is being used to harass and exploit children when you can imagine the ways it will make your life easier? It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.
I think the piece came together quite well. Please give it a look.
(2) And speaking of tech bluster that should not be confused with actual scientific knowledge…
I am still fuming-mad about a New York Times article that was published during climate week, “Silicon Valley Renegades Pollute the Skies to Save the Planet.” (I’m not mad at the Times for publishing it. I’m mad at <gestures wildly>)
The article is about a solar geoengineering startup called Make Sunsets, run by a Y Combinator alum (of course) who read Neal Stephenson’s Termination Shock and decided it was an instruction manual. Make Sunsets has decided to disrupt the solar geoengineering industry, rushing ahead of the academic research on the topic and just launching balloons full of sulfur dioxide into the atmosphere (because #YOLO amirite?)
Now on the one hand, this is a tiny startup and they will probably fail and they are certainly, at the moment, too small to actually ruin the climate.
But on the other hand, this was so goddamn obvious that I predicted it would happen over two years ago in this goddamn newsletter. Can the techbros please surprise us in a good way for once, just as a fucking treat?!?!?!?
Back in July 2022, when this newsletter had only ~500 readers and I was still getting a feel for the format, I wrote a review of Stephenson’s Termination Shock. It was a nested book review (bundled together with a critique of Sam Altman’s blithe techno-optimism, in fact), because I didn’t yet have the confidence to just say “I read a book and now I’m going to yell about it in your inbox.” (Lol. Also LMFAO.)
Anyway, here is the relevant passage:
[I enjoyed reading the book but…] It’s also a reckless book. We’d arguably be better off if he had never written it.
The trouble is that tech billionaires take Neal Stephenson entirely too seriously. Snow Crash was the inspiration for Second Life and the Metaverse. Jeff Bezos hatched his idea for Blue Origin over a cup of coffee with Stephenson. Cryptonomicon was an inspiration for a lot of the cryptography community that went on to become early bitcoin enthusiasts. It feels sometimes like Neal Stephenson books ought to come with a warning label: “this is a fictional dystopia, not an instruction manual.” (H/T Cyd Harrell)
The problem with Stephenson’s story is that, in real science and engineering scenarios, you never have everything go according to plan. The premise underlying the book asks the reader to take a leap of faith on two types of science and engineering. First, we have to believe that the science of geoengineering is rock-solid. Second, we have to believe the science of real-time climate modeling and forecasting has been basically perfected. You need your climate models to be extremely good in order to forecast what the effects of geoengineering will be. And you need the geoengineering not to have any surprising downstream consequences that the engineers couldn’t predict. You particularly need this because “termination shock” is itself a warning – once you start this process at scale, you cannot end it without disastrous consequences. You had better be right.
Actual science is just a lot messier than it looks in Stephenson’s books. It is far too easy to put too much faith in precision computer models. We have built an entire digital economy atop the fiction that the data fueling surveillance capitalism isn’t mostly garbage. None of it works as well as its evangelists claim. We privatize the rewards and socialize the risks, resulting in a tech billionaire-class whose most abundant gift is their unearned confidence.
Can climate modelers really offer precision-accurate predictions of how sulfur dioxide “acupuncture” on the stratosphere would work? It’s a fun simplifying assumption for a novel, but a terrifying risk to take in reality. Never once in Termination Shock’s 700 pages do Stephenson’s characters have to deal with the assumptions of a model being wrong.
Geoengineering would absolutely be a minefield of unintended consequences. It has never been attempted before. We are incapable of testing it at scale without, y’know, actually pulling the trigger and trying. The degree to which we just don’t fucking know what the unintended impacts of geoengineering would be is off the charts here. The models are based on two major volcanic eruptions, with limited contemporaneous data collection. We’re starting from an N of TWO! Model it all you want, but those models will be based on assumptions that can only be refined once we’ve pulled the trigger on the giant silver bullets.
Neal Stephenson just has too many diehard fans who roughly match the profile of the rogue billionaire in his story. Is Termination Shock going to lead some ex-Googler to launch his own rogue geoengineering scheme? Probably not. But… yeah, it might.
Now here we are, in 2024, facing 1,000-year hurricanes that wipe out inland North Carolina mountain towns, and the response from the tech billionaire class is (1) fuck it let’s just build the Neal Stephenson sulfur dioxide rocket and see what happens, along with (2) …uh, maybe ChatGPT can solve this if we give Sam Altman all of the monies?
They are unserious people with too much money and power. And they are so utterly predictable. I hate it here. Someone stop the ride; I want to get off.
(3) Some reading recommendations…
-I read Arvind Narayanan and Sayash Kapoor’s new book, AI Snake Oil over the weekend. It is excellent. Highly recommended.
-Henry Farrell’s most recent substack essay, “After software eats the world, what comes out the other end?” also left me filling a notepad with related thoughts. I feel like this one should be read in tandem with Ted Chiang’s August 31 New Yorker piece, “Why A.I. Isn’t Going to Make Art.”
There’s an overlap between these three, involving using technology to help us make choices vs using technology to avoid making choices altogether. Dan Davies’s book The Unaccountability Machine belongs smack dab in the middle of that Venn diagram.
Eventually I’ll write a proper essay spelling out my thinking on the matter. But I suspect the month of October is going to turn into unmitigated chaos, so I’m planting this flag and offering these reading suggestions now, with the intention of following up and sharing my own thinking once things have quieted down.
Until next time,
-DK
Thanks for the reminder about AI Snake Oil. It’s been on my radar for a while. I just bought it.
Have you been reading Matt Levine's bit on Sam Altman? I love it. As usual, he doesn't explicitly express disapprobation; on the contrary, he couches his remarks in a tone of admiration. I hope you will forgive me for quoting the Altman passage from his September 30th newsletter in its entirety:
"The place you want to reach in your career is where you work for a company and you are like “you know what, I am just so rich, I don’t want you to pay me anymore, it’s fine, I’ll work for free,” and your bosses are like “nope, sorry, we insist, we cannot allow you to work here for less than $10 billion.” And then you’re like “ohhhhhhhh fine, fine, fine, I do love working here, and if there’s really no other way, I guess I will take the $10 billion.” Nothing remotely like this has ever happened to me in my life but here’s Sam Altman:
'On Thursday, Altman told some employees that there were “good reasons” he shouldn’t take equity, though he didn’t elaborate. And he said investors were pushing for an equity grant to align his financial interests with those of OpenAI, said someone who heard the comments.
Altman also said a Wednesday news report that he might get a 7% stake in the new OpenAI was “ludicrous.”'
We talked about this last week: There have been reports that, as OpenAI becomes a for-profit company, it might give Altman (its co-founder and chief executive officer, who currently owns no equity) a 7% stake worth about $10 billion. I surmised that this was not something he wanted, but something the investors wanted: “He is the founder and CEO of a hot startup, and the founder and CEO of a hot startup is supposed to own equity. Not just for his sake — not just so that he can be rich — but to align incentives.” And here is Altman saying that: He doesn’t want the $10 billion, but the investors are insisting.
Nobody in history has ever been better at, like, business negging than Sam Altman. He got OpenAI to a $150 billion valuation in part by going around saying “oh no, nobody should allow us to build our product, we’re going to destroy humanity,” and now he is allegedly going to get handed a $10 billion stake in OpenAI because he’s going around saying “oh no, nobody should give me equity, that’s ludicrous.”"