MacAskill and Effective Altruism
- QALYs (Quality-Adjusted Life Years), which factor in both life quality and life expectancy, attempt to compare outcomes we normally cannot conceptualise together (e.g. the impact of AIDS relief vs that of preventing blindness).
- MacAskill proposes thinking beyond that: we should also think marginally and counterfactually.
- Think marginally because you want to maximise the value of your contribution to a cause. The average doctor in the UK may do a lot of good, but in 2017, becoming yet another doctor probably has little marginal impact.
- Think counterfactually because you want to consider what the best way of achieving your ends is. Could someone else do your job just as well, while you do more good elsewhere? A common example is millennials taking up lucrative banking jobs so they can donate larger sums to charity.
- MacAskill recognises that such jobs can sometimes have negative social effects, and so proposes doing work that is positive (though probably less effective) or morally neutral instead.
- Strangely, some of the results of MacAskill’s logic are quite counter-intuitive. E.g. buying Fairtrade could be worse than buying regular goods because the marginal effects are less significant: the money goes to richer areas, where farmers can afford to meet the certification standards.
- The author points out that MacAskill avoids the problem of structural defects and boxes us into the existing capitalist framework, neglecting the injustices of the system.
- What about the algorithms/models used in EA? Why can’t values such as “justice” or “self-determination” be considered? If they can, how? And how do we factor in the making of social change?
- Considerations of power are neglected. EA would be comfortable for those living amongst the well-off in the status quo, but only for them.
- Furthermore, with EA, the uncertainty surrounding certain values means that models must, at some point, fall back on commonsensical understandings of what to do. In which case, what exactly does EA bring to the table?
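The QALY comparison mentioned at the top of these notes can be sketched in a few lines. All the figures below are illustrative assumptions, not real cost-effectiveness data; the point is only to show how QALYs let two incommensurable-seeming interventions be compared on one scale.

```python
# Minimal sketch of a QALY comparison. All numbers are hypothetical,
# invented for illustration; they are not real cost-effectiveness figures.

def qalys(years_gained: float, quality_weight: float) -> float:
    """QALYs = life-years gained x quality weight (0 = death, 1 = full health)."""
    return years_gained * quality_weight

def qalys_per_dollar(years_gained: float, quality_weight: float, cost: float) -> float:
    """Cost-effectiveness: QALYs produced per dollar spent."""
    return qalys(years_gained, quality_weight) / cost

# Two hypothetical interventions, now comparable on a single scale:
blindness_surgery = qalys_per_dollar(years_gained=30, quality_weight=0.4, cost=1000)
aids_treatment = qalys_per_dollar(years_gained=10, quality_weight=0.9, cost=5000)

print(blindness_surgery > aids_treatment)  # prints True with these made-up numbers
```

On these (made-up) figures, the blindness intervention wins; the framework reduces "which suffering matters more?" to arithmetic, which is exactly what the objections below take issue with.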
The Problem of X-Risks and AI
- Existential risks (or X-risks) are risks that “permanently and drastically curtail humanity’s potential”: the worst of the worst.
- Philosopher Nick Bostrom estimates that there could be as many as 10^52 future persons over the course of humanity, if we continue to expand.
- Given that, preventing an X-risk seems to dwarf any consideration of preventing malaria, or even, say, a genocide today. Even if the probability of reducing an X-risk is minimal, doing so remains of far greater expected value simply because of the size of the potential damage.
- So following EA, should we be pumping money into AI-risk research (since many consider AI a potential threat to humankind) instead of into humanitarian work today?
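The expected-value reasoning above can be made explicit. The 10^52 figure is Bostrom's estimate cited in these notes; the probability of averting extinction and the size of the humanitarian programme are deliberately extreme assumptions, chosen to show that the astronomical stakes swamp even an absurdly small probability.

```python
# Sketch of the expected-value logic behind prioritising X-risk reduction.
# The 10^52 figure is Bostrom's estimate; everything else is an assumption
# picked purely for illustration.

future_persons = 10**52        # Bostrom's estimate of potential future persons
p_avert_extinction = 1e-10     # assumed: a vanishingly small chance your work helps

# Expected lives saved = probability of success x lives at stake
ev_xrisk = p_avert_extinction * future_persons

# Compare with a present-day intervention assumed to save lives with certainty:
ev_humanitarian = 1_000_000    # assumed: an enormously successful aid programme

print(ev_xrisk > ev_humanitarian)  # prints True: 10^42 expected lives vs 10^6
```

Even with a one-in-ten-billion chance of success, the X-risk bet wins by 36 orders of magnitude, which is exactly why EA's logic seems to pull funding away from present-day suffering.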
Effective Altruism: a failed utilitarianism-lite?
- EA stems from Singer’s argument in “Famine, Affluence and Morality”, but is much less demanding than hard utilitarianism. MacAskill does not advocate doing as much good as possible, just being effective in the good we do. That seems fair, but it’s hardly a moral insight.
- Also, how far should we take the logic of being effective? Why stop where MacAskill does, and not take it further? After all, living a fairly cushy lifestyle, even while donating a fair amount to charity, is less effective than it could be. So why accept EA instead of hard utilitarianism?
- The problem of personal considerations remains a problem for EA, as it is for utilitarianism. If you were an Effective Altruist, why even bother consoling a friend who is feeling sad, when doing so clearly carries the opportunity cost of time that could do greater good elsewhere? Seems silly, right?
- Or, on the contrary, would being a banker involved in dodgy trading be justified because it brings high wages, a large chunk of which you could then donate?
- (All of this implies an absurdity of utilitarianism which hollows out individual morality!)
- For both Singer and MacAskill, moral arbitrariness should not be a reason for caring about one person more than another. Yet this seems bizarre: the friends, families, etc. that we have spring from arbitrariness, and nonetheless such arbitrariness comes to hold significance in our personal experiences.
- Furthermore, arbitrariness is also an issue within EA itself. Why are some forms of suffering less worthy, just because of your “arbitrary” QALY measurements? Why would it be fine for me to neglect a suffering human in Greece just because Greece is richer than Somalia?
- EA ultimately exists within an individualistic framework, one which assumes that only a few individuals will follow it whilst life goes on around us. How tame this is, though! And beyond that, then what?