30 Comments

The most amazing thing about this philosophy to me is - and tell me if this is something they actually address - what would our world be like if people at any point in the past had held this philosophy? All of the major changes that brought our world to what it is today, the ones we generally think are good (and a lot of the ones we think are bad, tbh), were made by people who were trying to improve people's lives then and there, or two generations down at the outside. This idea of "it doesn't matter if we don't solve racism now, because 750 years from now, who will care?" - well, I don't know, are we assuming someone *will* solve racism somewhere between now and 750 years from now? Because if they don't, the people 750 years from now will absolutely care. And if someone does solve it, why did they bother, according to this philosophy?

I think it would be really cool for AGI to exist, and I don't see a reason it shouldn't exist at some point, given that *we* exist - so it's clearly at least theoretically doable, and I think it's a big assumption that *only* biological systems can do it when we can emulate so many other things biological systems do. But when I look at people today banking everything on AGI being invented sometime soon, or going in for cryonics in the hope it will buy them immortality, I can't help but think of Leonardo da Vinci. Suppose you were in Leonardo da Vinci's day and were betting on helicopters or airplanes being invented within a generation, or on his work being *close enough* that it was worth throwing all the money at - because, like, what's more valuable than people flying? If there's even a tiny chance, it's still worth putting a lot of resources into, right?... Would you be right? Seems to me you'd be centuries off. Even with all the money in the world, the technology and knowledge just *weren't there* for helicopters and airplanes to be invented just then. Having the idea wasn't enough. Having explored general ideas of how you could do it, it turns out, wasn't enough when you also needed things like motors and electricity and other ideas that were on completely separate technology tracks, to use a metaphor.

So, it seems to me, "it's theoretically possible, it would be really great if it happened, and there's no reason to believe it can't eventually happen" isn't sufficient to justify investment decisions in the here and now. You do need to consider feasibility, and put a discount on an uncertain long term in favor of the more certain short term.
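To make that last point concrete, here is a minimal expected-value sketch in Python. Every number in it (the payoffs, probabilities, and 3% discount rate) is an invented assumption for illustration, not anything from the comment itself; the point is only that once you discount for both time and feasibility, a near-certain short-term project can outweigh a vastly larger but distant, improbable moonshot.

```python
# A minimal sketch: discount a payoff for both uncertainty and time.
# All numbers below are illustrative assumptions, not real estimates.

def discounted_expected_value(payoff, p_success, years, annual_discount):
    """Expected value of a future payoff, discounted back to the present."""
    return payoff * p_success / ((1 + annual_discount) ** years)

# A modest, near-certain improvement delivered within a decade...
short_term = discounted_expected_value(
    payoff=1_000, p_success=0.9, years=10, annual_discount=0.03)

# ...versus a huge payoff that, like da Vinci's helicopter, may be
# centuries away and contingent on technology that does not exist yet.
moonshot = discounted_expected_value(
    payoff=1_000_000, p_success=0.01, years=300, annual_discount=0.03)

print(f"short term: {short_term:.1f}")  # ~669.7
print(f"moonshot:   {moonshot:.1f}")    # ~1.4
```

On these (again, invented) numbers, the "what's more valuable than people flying?" bet loses by two orders of magnitude - the commenter's feasibility argument in miniature.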


I hardly ever read the comments because you hardly ever see a comment as well argued as this one.


I do read the comments because I've found that they occasionally contain gems like that.

Nov 1, 2022 · Liked by Dave Karpf

Obligations to future generations in a consequentialist framework are covered by Derek Parfit in his classic Reasons and Persons (1984). Parfit essentially argues that our moral obligations to unborn and unforeseen generations are limited in part by the fact that their identities are unknown and partially dependent upon our actions today: there can be no full obligation to theoretical people with no fixed identity, which seems to be a tenet of the longtermism described here. In general, Parfit's reasoned analysis of the ways that common-sense understandings of harm as 'harm-done-to-a-particular-person' can be incorporated into a consequentialist matrix would make a good riposte to many of the justifications for disregarding current harm. I suspect it should also be part of this discussion.


Thank you ... I am now reading it!


To be fair to Keynes, he said, “In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” He saw the long run as a cop out too.


This is well said. I couldn’t say before why the longtermism stuff bothered me so, but you’re right! It’s as much of a cop out as saying we’ll be dead by 2100.


I think y’all should be sure to read the NYT excerpt. I think this piece conflates a lot of ideas and trends, and misunderstands longtermism. This isn’t a defense of Bezos or Musk or any other idiot.

And for the record, it's impossible to cure cancer. You can cure some cancers, not the whole concept.


Ok. Wrong, wrong. Ok.

Wrong in theory but perhaps correct in practice.


I think the author of this piece would be pleased with the effect criticism like this has had on Will MacAskill. In his most recent book, Will strongly endorses addressing climate change now because of its unambiguous negative effects on future people. He also strongly encourages people to work on the social issues mentioned in the piece. His reasoning is that avoiding the "lock-in" of bad values (authoritarianism, racism, inequality, factory farming, etc.) has profound implications for the values of any future human civilization and the future people in it. Thanks for a thoughtful piece.


Interesting insights. It's basically a philosophy that allows shallow, vain (albeit wealthy) people to paper over their tax evasion and ill treatment of workers with promises of undefined future breakthroughs to benefit humanity. So, a shell game to disguise their greed and selfishness. Bezos's ex-wife is the only billionaire who got it right: she gave billions to charities and continues to donate billions to people in need in the present. The rest refuse to accept their mortality or their inevitable insignificance in the 'long term'.


Actually, actual long-termists in the EA movement really do give away lots of money for X-risk mitigation. I can understand it being seen as a convenient way to excuse present behavior for your archetypical Ayn Randian tech titan, but look at what Samuel Bankman-Fried is doing with his money, for instance, or Dustin Moskovitz and Cari Tuna. It's really not that strange: more investment, on the margin, in things like pandemic prevention and AI safety research, given how neglected they currently are.

Dec 22, 2022 · Liked by Dave Karpf

This comment did not age well.


This is such an incredibly sloppy reading, I don't even know where to begin. Your targets are just all over the place. Arguments aren't really made; it's mainly guilt by association. But just to take one, let's flip your main script: why not focus exclusively on ultra-short-term thinking instead? Why on earth bother with basic scientific research, or even a cure for cancer, when more than half the world's population lives well below the poverty line?


Projection.


It's hard not to hear an echo of early Christianity in longtermism. Early Christians argued that there was no point in improving anyone's lot in this world when Jesus Christ and his dad were going to return presently and deliver every believer into a heavenly paradise. Our world was, to use a modern metaphor, a virtual reality, and those who trusted in the Lord would see the veil brushed aside, behold God's truth, and participate in God's peace. Some early Christians sought martyrdom, for in the long term we are all dead and our souls will be judged by God.

By the time Christianity was adopted as the official religion of the Roman Empire, it had changed quite a bit, and becoming an official state religion changed it further.


It feels like this is all arising from a strongly deontological inclination... am I wrong, or is there something that you think is still problematic from a consequentialist standpoint?

Looking at your take on discounting, for instance, I can't help but think it's a bit of a post hoc contrivance to justify what one already feels is right, based on personal experience or whatever. Usually, a virtue in moral reasoning, imo, is being able to divest yourself of that personal experience. The logical end of not doing so is that what could otherwise be serious moral analysis becomes nothing more than a survey with a sample of one.


The problem is that longtermism can justify any level of present immorality in light of an arbitrarily large number of future individuals. Why does it matter if we accept cannibalism and divide humanity into predators and prey when in the long run we have a shiny, wonderful future to look forward to, and in those distant years people will laugh at those who objected and would have wasted all that meat? Perhaps this viewpoint is correct, and there is no reasonable moral argument that we shouldn't have some humans raising others as meat and dining on their flesh. After all, this can be justified in that it does not wipe out the human race completely and so those myriad future individuals have greater moral weight than those penned, slaughtered and butchered.

There are worse things than cannibalism, but if one puts those future generations on the scale, no current moral argument against them can prevail so long as humanity is not driven to extinction.


I take this response as a denial then that there is anything wrong about this from a purely consequentialist standpoint. It's just your classic greatest good for the greatest number, the greatest good being "an arbitrarily large number of future individuals". I don't disagree with your conclusion then, for the same reason I'd pull the lever switching the train away from the track with five people, onto the one with one. Or push the fat man off the bridge.

Regardless, I think it's worth saying that it's hardly things like cannibalism that make sense in this context. Perhaps someone argues for it - but someone else could easily make a different argument against it, claiming it would bring social decline, accelerating the apocalypse or whatever. My point is, there is massive "sign uncertainty": one could just as easily posit a .01 increase in risk as a decrease.

The actual proposals and ideas out there don't have this uncertainty: investing more in pandemic prevention, preemptive vaccine development, lobbying governments to denuclearize through START agreements, etc. These are the common-sense things the marginal dollar ought to go to - they are far from being funded sufficiently on any account. If you don't make up some kind of weird discount for the value of future people, they can be seen as having the massive importance they do. We don't make up discounts for, say, the people starving to death in Somalia in order to justify buying Starbucks. We just accept that we are in the wrong and should and could do more. Yes, that may logically entail we should be surviving on potatoes. So be it. We ought to buckle down and buy potatoes.


"I'd pull the lever switching the train away from the track with five people, onto the one with one. Or push the fat man off the bridge."

Sane people don't generally consider these to be morally equivalent.


Agreed. As an "Internet politics professor", I'm not sure Dave is familiar with basic math, but he will not be pleased to discover that even with his discounting solution, future humans may still be worth far more moral consideration than present humans, because the human population can grow exponentially.

I'm very curious what discount rate he thinks is appropriate for weighing the value of future humans. I have a feeling it will conveniently be slightly higher than the population growth rate.
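For what it's worth, the arithmetic behind this jab is easy to check with a toy sketch; the 1% growth rate and 0.5% discount rate below are invented for illustration. The discounted weight of the population in year t is N0 * ((1+g)/(1+r))^t, which grows without bound whenever the growth rate g exceeds the discount rate r.

```python
# A toy sketch of discounting vs. exponential population growth.
# The 1% growth and 0.5% discount rates are illustrative assumptions.

def discounted_weight(n0, growth, discount, year):
    """Discounted moral weight of the population alive in a future year."""
    return n0 * ((1 + growth) / (1 + discount)) ** year

n0 = 8e9  # roughly the number of people alive today
for year in (50, 200, 750):
    print(year, f"{discounted_weight(n0, 0.01, 0.005, year):.3g}")
# With growth (1%) above the discount rate (0.5%), the weights keep
# rising: ~1.03e10, ~2.16e10, ~3.31e11. Only a discount rate above
# the growth rate makes present people dominate the sum.
```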


Longtermism discounts current consequences on the basis of hypothesized massive future consequences. It's logical sleight of hand.

By the logic of longtermism we should put all our resources into preventing the future extinction of the human race, even though preventing it is physically impossible due to the 2nd law of thermodynamics and the resultant heat death of the universe.


"Just as we mourn tragedies that occurred 50 years ago more deeply than we do tragedies that occurred 750 years ago"

We do? I don't, and don't see why I should. For a concrete example, I care far more about the suffering of Giordano Bruno than about antivaxxers on ventilators.

In his book Reasons and Persons, Derek Parfit argues against the notion of a discount rate, using examples such as radioactive waste or leaving broken shards of glass in the undergrowth where some unsuspecting person might step on them in the future.


MacAskill sounds like a forced-birther (anti-abortion, "pro-life") on hyper mega steroids.

There is no moral obligation to hypothesized persons who don't exist, and certainly no moral obligation to bring people into existence. There is a moral obligation to leave a livable world to future generations -- which includes people who exist now and whatever people do happen to exist in the future ... and as things are going now there won't be all that many of them.

Taking longtermism to its logical conclusion would require putting all of our resources into the impossible task of preventing the heat death of the universe, which guarantees human extinction. Longtermism leads logically to such an absurd result because it isn't logically sound.


In capitalism, organized crime has the competitive advantage, because they pay no taxes, follow no laws, and can offer you a deal you dare not refuse. Where does all their money get invested? We could use geothermal to halt the use of oil, but that isn't likely to happen because it would upset the status quo and their hold on geopolitics. So our future is likely to be an authoritarian thug state run by the likes of Putin or China's Xi. And these wonderful beings are going to project consciousness across the galaxy, while the rest of us are their disposable playthings or slaves or armies. Since capitalism reigns supreme, I suppose our short-term future is geoengineered and genetically modified. Only species that are worth money will survive; all else to the trash bin. The genes of plants and animals will be the property of corporations. Trash the planet and engineer the hell out of it, for a buck. Psychopaths and sociopaths rule supreme.

Oct 31, 2022 · edited Oct 31, 2022

Fascinating and well said.

There are people who 'believe' (are convinced of) the coming of 'technical utopia'. It is yet another example of how we humans tend to conflate relative intelligence with 'absolute' intelligence.

The human mind, including the minds of the tech billionaires and everyone else, has an architecture built/evolved for efficiency and speed, both for the individual and for the group. For such speed, we need stable convictions that are executed 'without deliberation'. And thus we need to be able to create (stable) convictions and keep them. There is very little in our brains that makes sure these convictions are anything good/correct (though their effects may weed damaging ones out in the long term), and the systems we have to make them factual (science, etc.) are weak. Conspiracy theorists are not the only ones on whom facts and logic have only a minimal effect; it holds for all of us. Hence the belief in things like 'AGI around the corner' (which indeed is definitely not the case, but the facts do not really matter).

The second element is the moral issue of the worth of current versus future 'people'. That is the old dilemma of actively offering up one to save many, and here we make an ethical choice for which there is no formula, no facts, no rules, no unassailable truth, nothing that can help us decide. Many moral philosophers (including the Abrahamic teachings) have held that the act of sacrificing even one is so despicable that it invalidates any 'win' you get from it. Saving one, on the other hand, is an act that in itself translates to saving all. The end does not justify the means, because the means also 'create' the end. But there is no 'proof' here, and nothing that can convince people who believe otherwise and make these calculations. The believers in 'sacrifice the current people to save the future ones' are, however, fundamentally not that different from religions that practiced human sacrifice, or from beliefs about 'improvement' that led to genocide (e.g. Nazism).

The mix of both (an end-of-times-like belief in technical utopia soon, and the moral-of-counting) is indeed a rather toxic mix, because (unfounded) techno-optimism increases the devaluation of current lives and the overvaluation of future lives, and thereby indirectly influences our ethical choice in the matter (without actually addressing the underlying moral issue).

If we have to characterise our current age, it is the age in which we will be confronted not so much with technical utopia, but with the realisation that human intelligence is severely limited and that not taking that into account produces disasters in the here and now without any certainty that they will have been 'worth' it in the future.


Your main problem with the philosophy seems to be aesthetic. You haven't presented any real argument against it, you just don't like that the type of people who promote it are "billionaire tech barons", rich technologists, "Great Men", geniuses, hero-inventors, and the many other terms you used. I'm sure you were shaking in anger writing this screed, but take a deep breath dude, it's going to be okay. Not everyone can be famous, some people are just Internet politics professors.

Your deep-seated resentment against exceptional white men is understandable, seeing as you are a mediocre white man. But try to channel your anger into sports or something, not into reactionary tirades against the very concept of thinking about the future.


Hi Ludex, personal attacks are not a good look for you. You are an intelligent person and your legitimate points can stand on their own.

Dec 22, 2022 · edited Dec 22, 2022

I'm not so sure about the second claim ... I think their comment stands in evidence against it.


Did you read the same article as the rest of us? This is a moral argument. It makes no argument about beauty. It doesn't say that ignoring present-day problems is bad because it is ugly. This article says that ignoring present-day problems, because they look unimportant from far enough in the future, is immoral.

You'll have to provide some textual evidence that this moral argument is based on resentment as opposed to moral concerns.
