30 Comments

The most amazing thing about this philosophy to me is - and tell me if this is something they actually address - what our world would be like if people at any point in the past had held it. All of the major changes that brought our world to what it is today, and that we generally think are good (and a lot of the ones we think are bad, tbh), were made by people who were trying to improve people's lives then and there, or two generations down at the outside. As for the idea that "it doesn't matter if we don't solve racism now because 750 years from now, who will care" - well, I don't know, are we assuming someone *will* solve racism somewhere between now and 750 years from now? Because if they don't, the people 750 years from now will absolutely care. And if someone does, why did they bother, according to this philosophy?

I think it would be really cool for AGI to exist, and I don't see a reason it shouldn't exist at some point: *we* exist, so it's at least theoretically doable, and it's a big assumption to think that *only* biological systems can do it when we can emulate so many other things biological systems do. But when I look at people today banking everything on AGI being invented sometime soon, or going in for cryonics in the hope it will buy them immortality, I can't help but think of Leonardo da Vinci. Suppose you were in Leonardo's day and were betting on helicopters or airplanes being invented within a generation, or on his work being *close enough* that it was worth throwing all the money at - because, like, what's more valuable than people flying? If there's even a tiny chance, it's still worth putting a lot of resources into, right? Would you have been right? Seems to me you'd have been centuries off. Even with all the money in the world, the technology and knowledge just *wasn't there* for helicopters and airplanes to be invented then. Having the idea wasn't enough. Having explored general ideas of how you might do it, it turns out, wasn't enough either, when you also needed things like motors and electricity - ideas that were on completely separate technology tracks, to use a metaphor.

So, it seems to me, "it's theoretically possible, it would be really great if it happened, and there's no reason to believe it can't eventually happen" isn't sufficient to justify investment decisions in the here and now. You also need to consider feasibility and put a discount on an uncertain long term in favor of the more certain short term.
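To put rough numbers on that discounting point, here's a minimal sketch of the expected-value arithmetic I have in mind (every figure in it - the payoffs, the 0.1% success probability, the 3% discount rate, the 300-year horizon - is a made-up illustration, not anything from the post):

```python
# A toy comparison of a fairly certain near-term payoff against a
# huge-but-unlikely long-term payoff under exponential discounting.
# All numbers here are illustrative assumptions, not claims from the post.

def present_value(payoff: float, probability: float,
                  years: float, rate: float) -> float:
    """Probability-weighted payoff, discounted back to the present."""
    return payoff * probability / (1 + rate) ** years

# Near term: modest payoff, high confidence, short horizon.
near = present_value(payoff=1.0, probability=0.9, years=5, rate=0.03)

# Long term: enormous payoff, tiny probability, centuries away.
far = present_value(payoff=1_000_000.0, probability=0.001, years=300, rate=0.03)

print(f"near-term value: {near:.3f}")  # ~0.776
print(f"long-term value: {far:.3f}")   # ~0.141

# Even a million-unit payoff shrinks fast under any nonzero discount
# rate; set rate=0 (no discounting) and the far-future bet dominates.
```

Whether a nonzero discount rate is morally justified is exactly what's in dispute, of course - but the arithmetic shows how much the whole case hangs on that one parameter.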

Nov 1, 2022 · Liked by Dave Karpf

Obligations to future generations in a consequentialist framework are covered by Derek Parfit in his classic Reasons and Persons (1984). Parfit essentially argues that our moral obligations to unborn and unforeseen generations are limited in part by the fact that their identities are unknown and partly dependent upon our actions today: there can be no full obligation to theoretical people with no fixed identity, which seems to be a tenet of the longtermism described here. In general, Parfit's reasoned analysis of the ways that common-sense understandings of harm as 'harm-done-to-a-particular-person' can be incorporated into a consequentialist matrix would make a good riposte to many of the justifications for disregarding current harm. I suspect it should also be part of this discussion.

To be fair to Keynes, he said, "In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again." He saw the long run as a cop-out too.

This is well said. I couldn't say before why the longtermism stuff bothered me so, but you're right! It's as much of a cop-out as saying we'll all be dead by 2100.

I think y’all should be sure to read the NYT excerpt. I think this piece conflates a lot of ideas and trends, and misunderstands longtermism. This isn’t a defense of Bezos or Musk or any other idiot.

And for the record, it's impossible to cure cancer. You can cure some cancers, but not the whole concept.

I think the author of this piece would be pleased with the effect criticism like this has had on Will MacAskill. In his most recent book, Will strongly endorses addressing climate change now because of its unambiguous negative effects on future people. He also strongly encourages people to work on the social issues mentioned in the piece. His reasoning is that avoiding the "lock-in" of bad values (authoritarianism, racism, inequality, factory farming, etc.) has profound implications for the values of any future human civilization and the future people in it. Thanks for a thoughtful piece.

Interesting insights. It's basically a philosophy that allows shallow, vain (albeit wealthy) people to paper over their tax evasion and ill-treatment of workers with promises of undefined future breakthroughs to benefit humanity. So, a shell game to disguise their greed and selfishness. Bezos's ex-wife is the only billionaire who got it right: she gave billions to charities and continues to donate billions to people in need in the present. The rest refuse to accept their mortality or their inevitable insignificance in the 'long term'.

This is such an incredibly sloppy reading, I don't even know where to begin. Your targets are just all over the place. Arguments aren't really made; it's mainly guilt by association. But just to take one, let's flip your main script: why not focus exclusively on ultra-short-term thinking instead? Why on earth bother with basic scientific research, or even a cure for cancer, when more than half the world's population lives well below the poverty line?

It's hard not to hear an echo of early Christianity in longtermism. Early Christians argued that there was no point in improving anyone's lot in this world when Jesus Christ and his dad were going to return presently and deliver every believer into a heavenly paradise. Our world was, to use a modern metaphor, a virtual reality, and those who trusted in the Lord would see the veil brushed aside, behold God's truth, and participate in God's peace. Some early Christians sought martyrdom, for in the long term we are all dead and our souls will be judged by God.

By the time Christianity was adopted as the official religion of the Roman Empire, it had changed quite a bit, and becoming an official state religion changed it further.

It feels like this is all arising from a strongly deontological inclination... am I wrong, or is there something you think is still problematic from a consequentialist standpoint?

Looking at your take on discounting, for instance, I can't help but think it's a bit of a post hoc contrivance to justify what one already feels is right, based on personal experience or whatever. Usually, a virtue in moral reasoning, imo, is being able to divest yourself of that personal experience. The logical end of not doing so is that what could otherwise be serious moral analysis becomes nothing more than a survey with a sample of one.

"Just as we mourn tragedies that occurred 50 years ago more deeply than we do tragedies that occurred 750 years ago"

We do? I don't, and don't see why I should. For a concrete example, I care far more about the suffering of Giordano Bruno than about antivaxxers on ventilators.

In his book Reasons and Persons, Derek Parfit argues against the notion of a discount rate, using examples such as radioactive waste or leaving broken shards of glass in the undergrowth where some unsuspecting person might step on them in the future.

MacAskill sounds like a forced-birther (anti-abortion, "pro-life") on hyper mega steroids.

There is no moral obligation to hypothesized persons who don't exist, and certainly no moral obligation to bring people into existence. There is a moral obligation to leave a livable world to future generations -- which includes people who exist now and whatever people do happen to exist in the future ... and as things are going now there won't be all that many of them.

Taking longtermism to its logical conclusion would require putting all of our resources into the impossible task of preventing the heat death of the universe, which guarantees human extinction. Longtermism leads to such an absurd result because it isn't logically sound.

In capitalism, organized crime has the competitive advantage, because it pays no taxes, follows no laws, and can offer you a deal you dare not refuse. Where does all its money get invested? We could use geothermal energy to halt the use of oil, but that isn't likely to happen, because it would upset the status quo and oil's hold on geopolitics. So our future is likely to be an authoritarian thug state run by the likes of Putin or China's Xi. And these wonderful beings are going to project consciousness across the galaxy, while the rest of us are their disposable playthings, slaves, or armies. Since capitalism reigns supreme, I suppose our short-term future is geoengineered and genetically modified. Only species that are worth money will survive; all else goes to the trash bin. The genes of plants and animals will be the property of corporations. Trash the planet and engineer the hell out of it, for a buck. Psychopaths and sociopaths rule supreme.

Oct 31, 2022 · edited Oct 31, 2022

Fascinating and well said.

There are people who 'believe' in (are convinced of) the coming of 'technical utopia'. It is yet another example of how we humans tend to conflate relative intelligence with 'absolute' intelligence.

The human mind, including those of the tech billionaires and everyone else, has an architecture built/evolved for efficiency and speed, both of the individual and of the group. For such speed, we need stable convictions that are executed 'without deliberation', and thus we need to be able to create (stable) convictions and keep them. There is very little in our brains that makes sure these convictions are any good or correct (though their effects may weed damaging ones out in the long term), and the systems we have for making them factual (science, etc.) are weak. Conspiracy theorists are not the only ones in whom facts and logic have only a minimal effect on convictions; it holds for all of us. Hence the belief in things like 'AGI around the corner' (which indeed is definitely not the case, but the facts do not really matter).

The second element is the moral issue of the worth of current versus future 'people'. That is the old dilemma of actively offering up one to save many, and here we make an ethical choice for which there is no formula, no facts, no rules, no unassailable truth - nothing that can help us decide. Many moral philosophers (and the Abrahamic teachings) have held that the act of sacrificing even one is so despicable that it invalidates any 'win' you get from it; saving one, on the other hand, is an act that in itself translates to saving all. The end does not justify the means, because the means also 'create' the end. But there is no 'proof' here, and nothing that can convince people not to believe otherwise and make these calculations. The believers in 'sacrifice the current people to save the future ones' are, however, fundamentally not that different from religions that practiced human sacrifice, or from beliefs in 'improvement' that led to genocide (e.g. Nazism).

The mix of the two (an end-times-like belief in imminent technical utopia, and the morality of counting) is indeed rather toxic: (unfounded) techno-optimism increases the devaluation of current lives and the overvaluation of future lives, and thereby indirectly influences our ethical choice in the matter (without actually addressing the underlying moral issue).

If we have to characterise our current age, it is the age in which we will be confronted not so much with technical utopia as with the realisation that human intelligence is severely limited, and that not taking this into account produces disasters in the here and now, without any certainty that they will have been 'worth it' in the future.

Your main problem with the philosophy seems to be aesthetic. You haven't presented any real argument against it; you just don't like that the type of people who promote it are "billionaire tech barons", rich technologists, "Great Men", geniuses, hero-inventors, and the many other terms you used. I'm sure you were shaking with anger writing this screed, but take a deep breath, dude, it's going to be okay. Not everyone can be famous; some people are just Internet politics professors.

Your deep-seated resentment against exceptional white men is understandable, seeing as you are a mediocre white man. But try to channel your anger into sports or something, not into reactionary tirades against the very concept of thinking about the future.
