Book Review: "More Everything Forever"
A book that takes Silicon Valley's unserious ideas seriously, and then delightfully tears them apart.
The trick with writing about the ideological project of Silicon Valley lies in taking patently unserious ideas seriously. This requires some real artistry and balance. You have to simultaneously make clear to the reader why these ideas are farcical, while also highlighting why they nonetheless merit attention. It often requires explaining and exploring the ideas with greater clarity than the originating authors themselves, since many of Silicon Valley’s most verbose thinkers are just horrendous at writing.
Call it the “Curtis Yarvin problem.” Curtis Yarvin is influential among tech elites. Billionaires take him seriously. So does our current Vice President. Curtis Yarvin is also pathetic. The billionaire technologists mostly take him seriously because his central message is that billionaire technologists are very special geniuses, and that we should put them in control of everything and have faith in their every impulse, even their most shallow and racist impulses. It turns out that this is the sort of thing billionaire technologists quite enjoy hearing.
So the Curtis Yarvin problem is: (1) There’s this guy you’ve never heard of. (2) He’s kind of the worst. (3) Let’s pay attention to him, because he’s influential. (4) At first glance, his ideas seem ridiculous. But if you really examine them in detail, they’ll seem even more ridiculous. (5) Wait, why did we bother to pay attention to him? Oh right, because people with way too much power listen to him. That’s awful.
I’ve spent years trying to master this trick. I think I’ve gotten passably good at it. But I can tell you from experience that it ain’t easy.
Adam Becker’s new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity is a masterclass in threading this particular needle. I cannot recommend it highly enough. The book is just a delightful solution to the Curtis Yarvin problem, writ large. (Though Yarvin himself only merits a couple pages.)
Becker introduces his readers to many of the big personalities and big ideas that drive Silicon Valley thinking. Eliezer Yudkowsky. Nick Bostrom. Ray Kurzweil. William MacAskill. Toby Ord. He documents how their work supports, and receives support from, the tech billionaire class — Marc Andreessen, Elon Musk, Sam Bankman-Fried, Sam Altman, Peter Thiel. And he carefully demolishes each of their claims.
There’s a nice rhythm to each chapter. Becker takes the singularitarians and the rationalists, the effective altruists and effective accelerationists seriously. He reads them and explains them more clearly than they often explain themselves. And then he tears apart the science and the reasoning supporting their claims. Becker has a doctorate in astrophysics, and it shows. This is the sort of book I would want to write if my doctorate were in astrophysics instead of political science.
Chapter 2 (“Machines of Loving Grace”) contains the single most thorough demolition of Ray Kurzweil’s singularity nonsense that I have ever seen put to page. Becker is kind and generous to Kurzweil on a human level, explaining how the untimely death of Kurzweil’s father led him to construct an entire philosophy around how technologists could finally conquer and transcend death. But Becker is unsparing in his critique of the singularitarian misreading of Moore’s Law and the imagined exponential pace of tech innovation. He makes plain that this is simply a category error: technological innovation is not increasing at an exponential rate. It only appears that way because our recollection of past innovations is logarithmic. “The inverse of the exponential function is the logarithm.” (…) “The fate of Moore’s Law,” he writes, “is the fate of all exponential trends: they end, just as Moore himself said.”
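(A bit of back-of-the-envelope algebra, mine rather than Becker’s, makes the inversion concrete: if the perceived distance to an event t years in the past scales like p = log(t), then inverting gives t = e^p, so milestones that feel evenly spaced in memory actually sit at geometrically spaced dates. Steady progress viewed through a logarithmic lens reads as exponential acceleration.)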
Chapter 4 takes apart longtermism and the rationalist community, and their shared commitment to faux scientism. They dress their reasoning up in a veneer of rigor by assigning probabilities to statements. But the probabilities are completely made up! It’s toy reasoning, flogged until it becomes unrecognizable as such:
“Longtermists, then, are making arguments with incredibly strong conclusions—funding AI safety research is trillions of times more cost-effective than preventing the spread of malaria! Saving a billion people today isn’t as good as a minuscule chance of saving 10^52 people who might exist someday!—based on arguments that rely on very small probabilities and that fall apart if those probabilities are wrong. And their estimates of those crucial probabilities are based on very little. Weigh that against the overwhelming evidence that there are people alive today who are in need, and the whole idea of longtermism looks shaky.” (170)
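(To see how made-up probabilities run the whole show, here is the arithmetic with an invented number of my own, not Becker’s: grant even a one-in-a-quadrillion chance, 10^-15, that an AI safety donation is what saves those 10^52 future people. The expected value is 10^37 lives, which outweighs saving a billion living people, 10^9, by twenty-eight orders of magnitude. Any made-up probability above zero wins the ledger for the far future, which is precisely why the probabilities never get scrutinized.)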
Chapter 5 (“Dumpster Fire Space Utopia”) is on par with Zach and Kelly Weinersmith’s A City on Mars in explaining why Elon Musk’s Mars dream simply won’t happen anytime this century. It also includes one of the best dunks on Sam Altman that I have ever seen:
In 2023, Altman and Ilya Sutskever said that AGI would solve global warming. (…) “I think once we have a really powerful superintelligence, addressing climate change will not be particularly difficult for a system like that (…) You know, if you think about a system where you can say, ‘Tell me how to make a lot of clean energy cheaply,’ ‘Tell me how to efficiently capture carbon,’ and then ‘Tell me how to build a factory to do this at planetary scale’—if you can do that, you can do a lot of other things too.”
Altman is so confident in this “plan”—solving global warming by asking a nonexistent and ill-defined AGI for three wishes—that he’s willing to gamble our climate and our future on it.
The book is an elegant solution to the practical challenge of introducing readers to what Émile Torres and Timnit Gebru have termed “the TESCREAL bundle” (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism) without exhausting them before they begin. I’ve always found “TESCREAL” to be a bit of an intimidating term — it hints at a truly foreboding amount of inscrutable reading.
Take the Extropians, as one example. I am familiar with Extropianism because of my read-all-the-WIRED-magazines project. The Extropians believe we will soon conquer death, through a mix of cryonics and brain uploading. They are very silly, and all of their scientific claims stink of quiet desperation. They also get a lot of interest from rich technologists who can toss a few million dollars at a longshot scheme to live forever.
Should you closely read up on the Extropians? God no.
But should you read a book that introduces the Extropians and then provides ample ammunition to dismiss them as fundamentally unserious individuals who think the scientific method operates via Tinkerbell mechanics (just clap hard enough and we can invent anything!)? Hell yeah. Absolutely.
That’s what Becker accomplishes in More Everything Forever. It’s a serious treatment of unserious ideas, with a nice mix of science-dunks and social commentary.
A student of mine recently asked why I don’t assign the effective accelerationists or Curtis Yarvin in my History of the Digital Future class. And the basic answer is the same as why I don’t assign Balaji’s awful book: I like my students. I wouldn’t want to make them read Balaji et al. That would be, well, mean.
But I sure could have them read More Everything Forever. That would do the trick nicely, I think.