I’ve been reading a lot about Longtermism this week.
Longtermist philosopher William MacAskill has a book coming out soon, and he’s been on quite the media blitz. You can read an adapted excerpt here, in the New York Times, or read his Time Magazine cover story, or his Foreign Affairs article, or listen to Ezra Klein interview him, or read Gideon Lewis-Kraus’s in-depth profile in the New Yorker.
There’s something deeply troubling about Longtermism. I don’t mind it as a philosophical thought-experiment, but it has adopted the trappings of a social movement (one that is remarkably popular with rich technologists like Elon Musk), and we ought to ask some hard questions about who is promoting it and what it ultimately aims to achieve.
It calls to mind an essay I wrote a couple of years ago, just before the pandemic, about the 10,000 Year Clock and the deceptive appeal of long-term thinking. My criticism of the Clock is, I think, equally applicable to Longtermism. Here is the intro to the piece:
THERE IS A clock being constructed in a mountain in Texas. The clock will tick once a year, marking time over the next 10,000 years. The clock is an art installation. It is intended as a monument to long-term thinking, meant to inspire its visitors to be mindful of their place in the long arc of history. I think it is a monument to something else: a profound failure of the imagination. The clock is a testament to willful blindness, as today’s tech barons whistle past the grim realities of the oncoming catastrophe that is man-made climate destabilization. Even worse: It is a reminder that social chaos is never evenly distributed.
The 10,000 Year Clock is a project of the Long Now Foundation. The Long Now Foundation isn’t quite the same as Longtermism, but it isn’t all that different either.
The Long Now Foundation is very West Coast and very late 1990s tech. It’s a Stewart Brand/Kevin Kelly/Danny Hillis project. They are tech accelerationists and techno-optimists — meaning they believe (1) that the pace of technological change continues to accelerate, (2) that these changes are, on balance, positive and ought to be promoted/supported, and (3) that we are currently in a fulcrum-point or inflection point in human history. They seek to improve the course of human events by inviting people to think on the timescale of centuries and millennia, so we can build a society that lasts.
Longtermism is very Oxford/Cambridge and is steeped in moral philosophy. William MacAskill gives off a strong Chidi-Anagonye-with-a-Scottish-brogue vibe. Longtermists are also tech accelerationists and techno-optimists. They also believe we are at a fulcrum point in human history. But they add in a layer of utilitarian calculus, arguing:
(1) Future people have the same moral worth as people living today. (MacAskill writes, “Future people are utterly disenfranchised… They are the true silent majority.”)
(2) If we succeed in spreading the light of consciousness throughout the cosmos, there will be trillions upon trillions of future people, so their interests far outweigh our own.
(3) We thus ought to focus on preventing “existential risks” — asteroid strikes, bioweapons, and (especially) hostile artificial intelligence — that could be extinction-level events. (A toy version of that arithmetic follows below.)
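To make the flavor of that calculus concrete, here is a deliberately crude back-of-the-envelope sketch. Every number in it is an illustrative assumption of mine, not a figure from MacAskill or anyone else:

```python
# Toy illustration of the Longtermist expected-value move.
# Every number below is a made-up assumption, used only to show the arithmetic.

people_alive_today = 8e9           # roughly the current world population
potential_future_people = 1e18     # hypothetical tally if humanity spreads and persists
extinction_risk_reduction = 1e-6   # shaving one-millionth off the chance of extinction

expected_future_lives = potential_future_people * extinction_risk_reduction

print(f"expected future lives 'saved': {expected_future_lives:.0e}")  # 1e+12
print(f"everyone alive today:          {people_alive_today:.0e}")     # 8e+09
```

On those assumptions, a one-in-a-million dent in extinction risk “outweighs” everyone alive today by a factor of more than a hundred, and that arithmetic, rather than anything about actual living people, is what does the work in the argument.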
So, like I said, Long Now-ism isn’t quite Longtermism, and vice versa. But the two perspectives have a certain familial resemblance and ultimately suffer from the same flaws.
The tradeoff in focusing on “existential risks” is that it serves as cover for ignoring non-existential risks. Wealth inequality is a non-existential risk. Racism and sexism are non-existential risks. Even climate change, according to many Longtermists, is a non-existential risk.
It reminds me of a passage in the 10,000 Year Clock piece that, frankly, still pisses me off. The Clock was originally proposed by Danny Hillis, an early pioneer in the field of parallel computing. By 2010, he had developed two passions — the Clock of the Long Now and cancer research. In a 2010 TED talk, he explained how his new biotech company, Applied Proteomics, was poised to make scientific breakthroughs that would cure cancer.
In a 2011 WIRED interview, Hillis was asked how he could justify focusing on the Clock instead of Applied Proteomics, the biotech startup he had cofounded. “I think this is the most important thing I can work on,” Hillis replied. “More than cancer. Over the long run, I think this will make more difference to more people.”
Cancer, after all, is a problem of the here and now. As John Maynard Keynes said, “In the long run, we are all dead.”
If we take the Longtermist/Long Now perspective seriously, then it is absolutely true that *when* we cure cancer just isn’t that important. 500 years from now, it simply won’t matter when the breakthroughs occurred. Every life saved will have long since ended.
But I have friends and loved ones who have died of cancer. I’d like there to be a cure sooner rather than later thankyouverymuch. Our world—the world you and I inhabit today—will be improved far more by potential breakthroughs in cancer research than by a fucking art project in a mountain owned by Jeff Bezos.
(Incidentally, Applied Proteomics sold off its assets in 2018, without ever developing a commercially viable application. That was the same year Hillis and his colleagues began assembling the Clock inside a mountain on Jeff Bezos’ West Texas ranch.)
A second problem with Longtermism is that it is premised upon preparing the world for the inevitable arrival of Artificial General Intelligence (AGI). And, if you read the actual literature on AI from university researchers who aren’t trying to oversell their achievements to get another funding round in Silicon Valley, there’s an extremely strong argument that we’re nowhere close to AGI, and that it may very well never exist.
The breakthroughs in AI research have been of the “stochastic parrot” variety. Throw enough computational power into a neural net and you can create programs that can win at chess, or at Go, or write convincing text, or render sophisticated digital art. But achieving those individual benchmark successes has no relationship to building an overarching general machine intelligence. The number of completely unsolved, still unknown hurdles is immense.
This points to a key divide in the field of AI ethics. There’s one version of AI ethics that engages in what Lee Vinsel calls “critihype.” It assumes that today’s AI can do everything its promoters say it can do, and that world-changing breakthroughs in AI are just around the corner. From that premise, it asks us to ponder the ethical ramifications of perfect, society-wide AGI.
The other version of AI ethics looks at the actual biases and limitations of AI as it exists today. We’re basically burning down a rainforest so computers can make animated pictures for us. We’re training algorithmic models based on the inputs of historically racist policing patterns and then treating the outputs as though they are objective and beyond questioning. A lot of the major problems in the actual deployment of AI in society stem from the very fact that the reality of these systems doesn’t come anywhere close to their hype. (h/t Timnit Gebru, Emily Bender, and their peers and coauthors who are doing just fantastic critical work in this area).
I’m sure it’s a fun thought experiment for Oxford philosophers to debate the implications of perfect AGI for human life and moral agency. And I’d be fine with that if it were limited to a classroom exercise. But the Longtermists are trying to play in a much larger arena, and they are being used as intellectual cover by the likes of Elon Musk and Peter Thiel. We should turn a critical eye towards the assumptions and biases in their thinking. There’s a reason why Emile Torres calls Longtermism “the world’s most dangerous secular credo.” This stuff gets dangerous if applied at scale.
Longtermism’s concept of temporality seems completely bonkers. Longtermists appear to think nothing of the past and everything of the future. It’s a strange miscalibration.
Consider, for a moment, the Bubonic Plague. It killed about half the population of Europe, the single most devastating population-scale event in recorded human history. This was about 750 years ago. How often do you give thought to the people who were lost? Do you ever mourn them?
(Of course you don’t. The past is the past.)
Now consider a hypothetical from science fiction. William Gibson’s two most recent books (The Peripheral and Agency) occur in two time periods — one in the near future, the other in the far future. Gibson’s far future is a techno-optimist paradise. It is filled with the future tech that today’s most wild-eyed futurists only dream about. Heads-up displays! Working robots that you can pilot with full telepresence! Functional seasteads! It is a world of abundance and wealth and fantastical artistry. But it is also a world that is notably… empty.
Separating the two time periods, the reader learns, is “The Jackpot.” The Jackpot is “no one thing … multi-causal, with no particular beginning and no end. More a climate than an event, so not the way apocalypse stories liked to have a big event … No comets crashing, nothing you could really call a nuclear war. Just everything else, tangled in the changing climate: droughts, water shortages, crop failures, honeybees gone … antibiotics doing even less than they already did.” 80% of the global population dies as a result. The survivors, guilt-ridden, describe making it through as having won the jackpot.
From the perspective of Longtermism, how ought we to calculate the effects of Gibson’s Jackpot? Given another century or two, the world would repopulate, just as it did after the Black Plague. And the technological breakthroughs would remain. We would be much closer to posthumanism, to extending the light of consciousness among the stars. Over a long enough timespan, the survivors’ guilt would fade. The people of the far future do not care about us. (Nor should they. The past is the past.)
We should be clear about the value judgments Longtermism is smuggling in. The Jackpot is an extreme hypothetical version of compounded, non-existential risks. Economic inequality is not an existential risk. Public health is not an existential risk. Climate catastrophes are (most likely) not an existential risk. These are matters that deeply matter to people who exist today. They deeply matter for people’s children and grandchildren. Poverty is path dependent: deprivation now shapes the options available to the generations that follow. But these risks do not matter in the longest term. (They will, after all, one day become the past. And the past is the past.)
The central problem I see with Longtermism – the part that makes it actively dangerous – is the way it prioritizes “existential risk” to the exclusion of all other types of risk. Asteroid strikes or a robot apocalypse could end humanity. They are existential risks. Climate change likely will not end humanity. It will just severely reduce humanity. It will wipe out island nations, but it will not wipe out everyone.
And, to borrow another Gibsonism, the devastation will not be evenly distributed. The Elon Musks and Peter Thiels of the world expect that they and their peer networks will be shielded from the worst effects of climate destabilization. And they may very well be correct. Wealth and status are a suit of armor. That armor does not make its wearers invincible, but the protection is really pretty durable.
Taken to its logical conclusion, Longtermism could be a justification for some outright ghoulish behavior. Should we put any regulatory limits on the new billionaire space-race? A Longtermist could easily conclude that any government standing in the way of Elon Musk’s interstellar ambitions is a moral abomination — directly harming the potential well-being of trillions of future humans.
It’s a recipe for our current billionaire tech barons to treat The Jackpot as a promise, rather than a warning.
Rejecting Longtermism does not mean we shouldn’t care about the future. All it really requires is that we apply a discount rate to the future — one that increases over time. Just as we mourn tragedies that occurred 50 years ago more deeply than we do tragedies that occurred 750 years ago, our responsibility to people who will live 50 years from now is greater than our responsibility to those who will live 750 years from now.
Applying a discount rate to the moral worth of (imagined) future humans allows us to consider the long arc of humanity, but not to the detriment of the lived experience of those actually alive today. And it makes intuitive sense, because the moral worth of past humans is, of course, effectively zero. (The past, after all, being the past.) In fifty years, those of us who have died by then will have no moral standing, and everyone alive at that point will have full moral standing.
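One way to picture that alternative is a toy sketch of a moral weight that fades as the effective discount rate grows with distance. The functional form and the constants here are illustrative assumptions of mine, not a worked-out moral theory:

```python
# Toy sketch: a yearly moral discount whose rate itself rises the further out we look.
# The functional form and constants are illustrative assumptions only.

def moral_weight(years_ahead: int, base_rate: float = 0.005, growth: float = 0.0001) -> float:
    """Weight of 1.0 for the present, shrinking faster the more distant the year."""
    weight = 1.0
    for year in range(years_ahead):
        yearly_rate = base_rate + growth * year  # the discount rate increases over time
        weight /= 1.0 + yearly_rate
    return weight

for t in (0, 50, 200, 750):
    print(f"{t:>4} years from now: weight ≈ {moral_weight(t):.3g}")
# Today gets full weight; 50 years out still counts for a lot; 750 years out is effectively zero.
```

The exact curve does not matter; the point is only that moral weight fades with distance instead of holding constant across millennia.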
William MacAskill offers the following “common sense” thought experiment for why imagined future people should matter just as much as current people:
Suppose that I drop a glass bottle while hiking. If I don’t clean it up, a child might cut herself on the shards. Does it matter when the child will cut herself — a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs.
This… is a bit off, though. If I drop a glass bottle and a kid immediately cuts herself on the shards, I feel the full measure of guilt from having directly caused harm. If I drop a glass bottle, the shards sit there for a century with no one picking them up, and then a child cuts herself, I still feel some portion of guilt for having directly caused harm, but I also wonder why no moral agent in the intervening century stepped in to handle a bit of cleanup. (Did we completely defund the Park Service? We should not have done that. Are there no civil society organizations that engage in annual park cleanups? That’s terrible. We ought to have those! People drop glass bottles sometimes. There are kids who hike in that park!)
As time goes by, additional people with moral agency enter the picture. Someone could do something about the glass in the park. Eventually the responsibility for addressing such a problem transfers from me to us. (This, in a nutshell, strikes me as the difference between moral philosophy and political philosophy. It stops being a question of “how do I live a moral life?” and turns into a question of “how do we govern ourselves in a complex society?”)
I’m reminded of a passage from an unexpectedly moving Gawker piece that B.D. McClay wrote a few months ago, titled “It’s very unlikely anyone will read this in 200 years.” McClay was writing in response to Jason Stanley, a Yale philosopher and public intellectual, after he had a bit of a grandstanding public meltdown on Twitter (not his best moment, as he admitted soon after).
There was a story that went around about a teacher of mine that went like this: why, he asked his class, composed mostly of business types, is it that people build monuments? Someone in the class answers: to testify to great deeds. Really? my teacher replied. Have you ever read the text on a monument? New answer: No, we haven’t, but they testify to the human spirit. OK, my teacher said. But one day the sun is going to blow up and we’re all going to die. What use is a monument then?
I like this story and tell it often, because, to me, it is almost cheerfully bleak, but also, it is in its own way about the futility of futility. Human works mean nothing outside a human frame of reference. None of them can stand up to the sun blowing up and all life dying, because nothing can mean anything then; “meaning,” as such, will not exist. And part of that frame of reference is death and transience. The answers to the monument question are not wrong. But pigeons will shit on your monument and teenagers will make out there and the rain will fall and none of this will ever take into account who you were or what you did. You are background for the living now. It is their turn.
There is a hubris to Longtermism, and also to the 10,000 Year Clock, that I find grating. They both operate from the fundamental assumption that we are at the fulcrum-point, that we are in some sense main characters in the long arc of human history.
The Longtermists believe we are setting the stage for the next chapter in the development of humanity… that the choices we make and the institutions we build today will have impacts that reverberate for millennia to come. The Clock-builders imagine a world centuries from now where people make pilgrimages to hear the Clock’s chime.
I suspect they are wrong — that we are not at *the* fulcrum-point, but merely at *a* fulcrum-point. Our actions today matter, but not so much more than the actions of our ancestors or offspring. In the decades, centuries, millennia to come, we will be background for the living. It will be their turn.
Here’s how the 10,000 Year Clock essay ends:
There is a clock being constructed in a mountain in Texas. The clock will tick once a year, marking time over the next 10,000 years. The clock is an art installation. It is intended as a monument to long-term thinking, meant to inspire its visitors to be mindful of their place in the long arc of history.
The clock was conceived by a tech millionaire. It is funded by the world’s richest man, a tech billionaire. It is being built adjacent to his private spaceport, inside a mountain that he owns. You can visit the clock in the mountain in Texas someday. You can walk through its stainless steel doors, climb the staircase up to the clock face. You can turn the winding mechanism and hear one of Brian Eno’s chimes. The Long Now Foundation has a signup list—paid members get to jump the line—for tours that are scheduled to begin “many years into the future.” There’s another, quicker way to get in, though: Just ask Jeff Bezos for an invite when you see him at Davos, or ask a board member of the Long Now Foundation for an introduction.
If you can’t get in touch with Bezos through your personal networks, you shouldn’t worry about the 10,000-Year Clock. They wouldn’t say it so bluntly, but this art installation isn’t for you.
You have more pressing concerns in the here and now.
Longtermism, like the Clock, is funded by the world’s richest men. It is a philosophy that inscribes their business decisions with meaning and world-historic importance. It is meant to assuage the powerful. From a Longtermist perspective, it doesn’t matter if Tesla mistreats factory workers, or if Palantir lies about its predictive capabilities. What matters is that these Great Men, these hero-inventors, be encouraged and rewarded for their ambitions. They are extending the light of consciousness throughout the cosmos, warding off existential risks, providing bounteous gifts to the far-future of humanity.
As a classroom thought-experiment, Longtermism gives me little to object to. (It isn’t the best exercise, but it’s certainly not the worst.) We should build a better world for the people alive today. We should also try to pass a better world on to our children, and teach them values that encourage them to do the same. And we should guard against existential risks. Of course we should. These are non-controversial insights. Simple and uncomplicated.
We should recognize Longtermism as something more pernicious, though. It is a philosophy that says we need not concern ourselves with the fates of people living today, their dignity, or the injustices they face, because those people matter no more nor less than the people who will live millennia from now. It is a philosophy that instructs our privileged elites to imagine themselves at the fulcrum of history and ignore the suffering they might cause on their path to greatness. It is a philosophy that imagines that, centuries from now, people will still tell the tales of this era, and of the great men (always men. Always.) who set the course of the future.
That’s a recipe for cruelty, for suffering, for social harm.
The Jackpot is a warning, not a promise. Distrust any philosophical tradition that would suggest otherwise.
[edited August 16 to fix a couple typos]
The most amazing thing about this philosophy to me is - and tell me if this is something they actually address - what would our world be like if people at any point in the past had this philosophy? All of the major changes that brought our world to what it is today and that we generally think are good (and a lot of those we think are bad, tbh) were made by people who were trying to improve people's lives then and there, or two generations down at the outside. This idea of "it doesn't matter if we don't solve racism now because 750 years from now, who will care" well I don't know, are we assuming someone *will* solve racism somewhere between now and 750 years from now? Because if they don't then the people 750 years from now will absolutely care. And if someone does, why did they, according to this philosophy?
I think it would be really cool for AGI to exist, and I don't see a reason it shouldn't exist at some point given *we* exist, so clearly it's at least theoretically doable and I think it's a big assumption to think that *only* biological systems can do it when we can emulate so many other things biological systems do. But when I look at people today banking everything on AGI being invented sometime soon, or going for cryonics in the hope it will buy them immortality, I can't help but think of Leonardo da Vinci. If you were in Leonardo da Vinci's day and were betting on helicopters or airplanes being invented within a generation, or his work being *close enough* that it was worth throwing all the money at because, like, what's more valuable than people flying? If there's even a tiny chance it's still worth putting a lot of resources into, right?... Would you be right? Seems to me you'd be centuries off, even with all the money in the world the technology and knowledge just *wasn't there* for helicopters and airplanes to be invented just then. Having the idea wasn't enough. Having explored general ideas of how you could do it, it turns out, wasn't enough when you also needed things like motors and electricity and other ideas that were on completely separate technology tracks, to use a metaphor.
So, it seems to me, "it's theoretically possible and it would be really great if it happened and there's no reason to believe it can't eventually happen" isn't sufficient to justify investment decisions in the here and now. You do need to consider feasibility and put a discount on an uncertain long-term in favor of the more certain short-term.
Obligations to future generations in a consequentialist framework are covered by Derek Parfit in his classic Reasons and Persons (1984). Parfit essentially argues that our moral obligations to unborn and unforeseen generations are limited in part by the fact that their identities are unknown and partially dependent upon our actions today: there can be no full obligation to theoretical people with no fixed identity, which seems to be a tenet of the longtermism described here. In general, Parfit's reasoned analysis of the ways that common-sense understandings of harm as 'harm-done-to-a-particular-person' can be incorporated into a consequentialist matrix would make a good riposte to much of the justifications for disregarding current harm. I suspect it should also be part of this discussion.