I have an antipathy towards the concept of longtermism. For that reason I have read and listened to the longtermist arguments from Will MacAskill – and I look forward to reading his forthcoming book on the subject – and read and listened to other longtermist thinkers, most frequently through the 80,000 Hours podcast. Most recently I read the article by Émile Torres criticising longtermism and the response by Avital Balwit defending it, and although I can’t pretend to be as smart as any of these people, I do want to think more systematically about why I believe longtermism is not compelling.
What is Longtermism?
As I understand it, the case for longtermism rests on three premises. First, future people matter. Our lives surely matter just as much as those lived thousands of years ago, so why shouldn't the lives of people living thousands of years hence matter equally? Second, the future could be vast. Absent catastrophe, most people who will ever live have not yet been born. Third, our actions may predictably influence how well this long-term future goes. In sum, it may be our responsibility to ensure future generations get to survive and flourish.
My initial antipathy arises from the fact that longtermism's ethical component is grounded in utilitarianism, combined with my belief that utilitarianism is an inadequate basis for ethical assessment. However, I take issue with all three of these premises. The future could be vast, or it could not; believing either is reasonable, but neither is compelling. I also have a strong objection to claims to be able to predict the future, and particularly to claims to be able to predict the future impact of our present actions; I do not believe this is possible in any meaningful way.
When I examine the chain of reasoning behind longtermism, however, it is the first premise which I find peculiarly weak: MacAskill's idea that "People in the future, people in a century's time, a thousand years' time, a million years' time – their interests count just as much morally speaking as ours do today", and that we therefore have the same moral obligations towards them as we do to our contemporaries.
The rest of the longtermist argument depends entirely on this premise, but I think the premise itself rests on rhetorical sleight-of-hand rather than philosophical argument. It leans heavily on Peter Singer's child-in-the-pond thought experiment, a cornerstone of the effective altruism movement. Without that experiment, the longtermist argument has little to no force, so it's worth giving it in full here, taken from Singer's website, The Life You Can Save:
On your way to work, you pass a small pond. On hot days, children sometimes play in the pond, which is only about knee-deep. The weather’s cool today, though, and the hour is early, so you are surprised to see a child splashing about in the pond. As you get closer, you see that it is a very young child, just a toddler, who is flailing about, unable to stay upright or walk out of the pond. You look for the parents or babysitter, but there is no one else around. The child is unable to keep her head above the water for more than a few seconds at a time. If you don’t wade in and pull her out, she seems likely to drown. Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago, and get your suit wet and muddy. By the time you hand the child over to someone responsible for her, and change your clothes, you’ll be late for work. What should you do?
The answer is intuitive to almost everybody: you should help the child. Singer then points out that “5.4 million children under 5 years old died in 2017, with a majority of those deaths being from preventable or treatable causes,” and argues that we owe an equal ethical duty to those children, no matter that they are not in a pond in front of us, but are perhaps thousands of miles away. But do we?
How do we determine moral obligations?
While I recognise its simple brilliance, I have never found Singer's thought experiment convincing; the initial answer is obvious to me, but I find the extension of that answer far less compelling. Although I accept that a "view from nowhere" is necessary for making moral assessments, I do not believe it is sufficient for establishing moral obligations. Instead, I believe that there are three grounds necessary for establishing moral obligations.
The first is Awareness: we have no moral obligation to relieve suffering that we do not know about. This is inherent to Singer’s thought experiment, since its assumption is that our obligation only comes into being when we become aware, e.g. when we see the child in the pond; and the full force of the argument only emerges when Singer brings to our awareness the suffering of other children in other places.
The second is Capability: we have no moral obligation to relieve suffering if it is not possible for us to discharge that obligation. This circumscribes what we are obliged specifically to do; I might have a moral obligation to help fight an epidemic in a far-flung country, but I am not obliged personally to fly in medical supplies if I do not know how to fly a plane. Likewise I cannot jump into a pond a continent away, and so I must rely on charities as proxies.
The third is Relation: our moral obligation to relieve suffering is relative to our relationship with the suffering party, even in cases where our moral assessment is absolute. This point is more complicated and more controversial, but if we look at the two previous grounds we can see that any moral obligation relies on the possibility of a relationship with the suffering party – epistemically in the first case, physically in the second.
This is because moral obligations arise from the relations between moral actors – even in the case of religious morals, where a god takes the role of one of the moral actors – and as a result the force of those obligations relies on the strength of those relations. Where our relations are weaker – whether due to physical distance, temporal distance, or anything else – our obligations are weaker.
Although (under utilitarianism) I could relieve more suffering by giving money for medical supplies in Afghanistan than by giving it to my neighbour whose pension has not arrived and who faces starvation, I am more obliged to give that money to my neighbour; while if I were a member of the Afghan diaspora, newly arrived in a host country, my obligation would likely be greater to my community in the old country than to my community in the new country.
Our moral obligation to those far distant is a cosmopolitan extension of our moral obligation to those near to us, and if we undermine our local moral obligation, we undermine our distant moral obligation at the same time. We can say with confidence that different people have the same moral worth; however, establishing our moral obligations to those different people does not rely solely on their moral worth, but also on these other grounds.
Singer’s thought experiment is a sleight of hand, because he establishes the three conditions above as part of the scenario of the experiment, and then assumes that the conclusion of the experiment also applies when those conditions do not exist. Unfortunately that’s not how experiments work, and to some extent this illustrates the general problem with thought experiments in philosophy. They are useful tools, but not for every purpose.
Do “future people” have moral worth?
The longtermist argument takes Singer's point and extends it, claiming that we owe an equal ethical duty to children – to people in general – even if they are thousands of years away. MacAskill therefore argues that it is both intuitive and uncontroversial that the interests of people in the future count as much as those of people alive today, no matter how far in the future we're thinking; but I do not find this intuitive, and therefore I find it controversial.
I think that most people would also not find this intuitive, since they are being asked to believe that they have moral obligations to people of whom they have no awareness and to whom they consequently have no relation, and where it is not clear what capability they have to help those people – not just because those people are far distant (thousands if not millions of years away), but because those people – and this seems absolutely crucial to me – do not exist.
I suspect MacAskill also realises that this is not especially intuitive since he draws on another thought experiment (from the work of Derek Parfit) to demonstrate it.
You’re kind of off on a hike on a trail that’s not very often used and at one point you open a glass bottle and you drop it and it smashes on the trail and you think… You’re not really sure whether you are going to tidy up after yourself or not. Not many people use it and you decide, “It’s not worth the effort. I’m going to just leave the broken glass there.” And you start to walk on. But then you suddenly get this call. You pick it up and it’s a Skype call and you answer it. It turns out it’s from someone 100 years in the future and they’re saying that actually they walked on this very path and they accidentally put their hand on this glass and they cut themselves and so they’re now asking you politely if you can actually just tidy up after yourself so that they won’t have a bloody hand. Now you might be taken aback in various ways by getting this video call, but I think one thing that you wouldn’t think is, “Oh, I’m not going to tidy up the glass. I don’t care about this person on the phone. They’re a future person. They’re in a hundred years’ time. Their interest sort of is of no concern to me.” That’s just not what we would naturally think at all. If you think that you’ve harmed someone, it really doesn’t matter when that occurs.
Once again, this is sleight of hand. The thought experiment grants you awareness of the suffering, posits your capability to do something about it, and establishes a relationship between you and the suffering party. As with Singer’s experiment, MacAskill then assumes that, absent these things, you would have the same moral obligation; but it should be clear that the reason he has baked them in is precisely because, absent these things, you do not.
Further sleight of hand occurs when MacAskill extends his argument to the moral actors involved. In the experiment we are given a future individual to whom we have a moral obligation, but it does not follow that we are subject to the same obligation to their entire civilisation. This is not to say that we owe nothing to the future; only that this argument does not establish what we owe, or how we should act toward the future.
What do we owe the future?
Longtermists are the Ghost of Christmas Future, come to warn us to change our ways; but this seems to be a rhetorical rather than a philosophical position. Longtermists are extremely invested in longtermism, which is to be expected; the implication behind longtermism – not made explicit, as far as I know – is that failing to be a longtermist is condemning billions of (future) people to misery, so anybody arguing against longtermism is a moral monster.
In which case, consider me monstrous. I would go so far as to say that future people – as individuals – will have moral worth, but they do not currently have moral worth because they do not exist, and therefore I can have no moral relationship to them. However I do accept that there will be future people (I see them arriving all the time), that I am not wholly uninterested in them, and that I can influence the world in which they will arrive.
Where I do agree with MacAskill is on the question of values; preserving the values that we think are important is the prerequisite for the type of society that we would want to live in. But I assume that future people will not share my values no matter how hard I try, and it would come as a huge surprise to me if they did, because I see how different my values are from those who came before me, and I see how material conditions shape those values.
I am not sure where this leaves me. It seems to me that longtermism is not required to take sensible actions to preserve the near future, by which I mean the period in which I will live; and longtermism is not required for me to try to pass my values down to the following generations with which I have actual contact. It seems to me, in fact, that longtermism is not really required at all; certainly I don’t hear any voices from the future clamouring for it.
One Comment
Thank you for the thought-provoking read. I learnt recently when looking into epigenetics and intergenerational trauma that female foetuses have all of their eggs (I didn't know), and so the genes of three generations can exist within the same body at the same time, with the experiences of the grandmother potentially directly impacting the genes of the grandchild. That for me was massive in terms of grasping the connection to the past and the future. I feel longtermism to be quite a spiritual topic to be honest, on connection and what that means more broadly.