Discussion about this post

Caravelle

The most amazing thing about this philosophy to me is - and tell me if this is something they actually address - the question of what our world would be like if people at any point in the past had held it. All of the major changes that brought our world to what it is today, the ones we generally think are good (and a lot of those we think are bad, to be honest), were made by people who were trying to improve people's lives then and there, or two generations down at the outside. As for this idea of "it doesn't matter if we don't solve racism now because 750 years from now, who will care" - well, I don't know, are we assuming someone *will* solve racism somewhere between now and 750 years from now? Because if nobody does, then the people 750 years from now will absolutely care. And if someone does, why did they bother, according to this philosophy?

I think it would be really cool for AGI to exist, and I don't see a reason it shouldn't exist at some point: given that *we* exist, it's clearly at least theoretically doable, and I think it's a big assumption that *only* biological systems can do it when we can emulate so many other things biological systems do. But when I look at people today banking everything on AGI being invented sometime soon, or going in for cryonics in the hope it will buy them immortality, I can't help but think of Leonardo da Vinci. Suppose you were living in Leonardo da Vinci's day and bet on helicopters or airplanes being invented within a generation, or on his work being *close enough* that it was worth throwing all the money at - because, like, what's more valuable than people flying? If there's even a tiny chance, it's still worth putting a lot of resources into, right?... Would you have been right? It seems to me you'd have been centuries off. Even with all the money in the world, the technology and knowledge just *wasn't there* for helicopters and airplanes to be invented just then. Having the idea wasn't enough. Having explored general ideas of how you could do it, it turns out, wasn't enough either, when you also needed things like motors and electricity and other ideas that were on completely separate technology tracks, to use a metaphor.

So, it seems to me, "it's theoretically possible and it would be really great if it happened and there's no reason to believe it can't eventually happen" isn't sufficient to justify investment decisions in the here and now. You do need to consider feasibility and discount the uncertain long term in favor of the more certain short term.

Scarborough

Obligations to future generations in a consequentialist framework are covered by Derek Parfit in his classic Reasons and Persons (1984). Parfit essentially argues that our moral obligations to unborn and unforeseen generations are limited in part by the fact that their identities are unknown and partially dependent upon our actions today: there can be no full obligation to theoretical people with no fixed identity, though such an obligation seems to be a tenet of the longtermism described here. In general, Parfit's reasoned analysis of the ways that common-sense understandings of harm as 'harm-done-to-a-particular-person' can be incorporated into a consequentialist matrix would make a good riposte to many of the justifications for disregarding current harm. I suspect it should also be part of this discussion.

