The logic of Effective Altruism is plain and simple: if it is in your capacity to save someone, and doing so carries only a small moral cost to you, you should do what you can to help, and as effectively as possible. The problems of EA, however, stem not from the formulation of such a principle, but rather from its gross simplification, void of a coherent moral framework and myopic in its considerations.
How utilitarian do you want to be?
One criticism directed towards utilitarianism is that it is "too demanding": it claims that morality requires you to maximise good in all (or most, as per rule-utilitarianism) circumstances, and this is impossible to achieve. The same, arguably, could be applied to EA.
Setting aside the issues concerning this argument vis-a-vis utilitarianism, I think this would be an unfair criticism of Effective Altruism. In almost every article I have read on the subject, the advocates permit individual altruists some scope for living a comfortable life. Furthermore, Singer, in the Boston Review, implies that there could be some scope for acting less "effectively" than otherwise. For example, he notes that parents whose child has died from leukaemia should feel absolutely within their rights to donate to a charity researching leukaemia cures.
This, it seems, is quite a problem for Effective Altruism. After all, the persuasiveness of EA's picture of the genuine altruist, one who acts, donates, and gives not on the basis of feeling good but of actually doing good, diminishes quite quickly. If we are fine with donating to leukaemia charities out of our own emotional attachment to our deceased child, then why is it not fine to donate to some other inefficient charity because it has invoked the same kind of visceral emotions? Why would donating to the leukaemia charity be justified when the QALYs gained are likely to be much lower than those from donating to a famine fund?
Maybe, in Singer's defence, the death of your own child is significantly more important and evokes exceptionally strong feelings, and thus it is understandable that we donate to the cause of leukaemia rather than to any other. Yet from an objective, third-person perspective, the death of your own child is no more significant than the thousands dying from a famine. So why should you even feel more strongly about your child in the first place? The only reasonable explanation, it appears, would be to concede that we are, after all, just human, and it is only human to slip into such mistakes.
This, in my view, is precisely the problem with EA. It seeks to maintain both some of our basic, non-consequentialist moral instincts (say, reciprocity towards friends and family, or abiding by the law) and the idea of maximising good, yet it falls short of explaining how the two are to be weighed against each other. Why does MacAskill find it problematic to engage in unethical banking in order to reap large dividends to donate? Note that this was discouraged even if we assume that the world will go on absent our involvement: someone else would do the job anyway, and whatever banking crisis is going to exist will exist with or without me. Implicitly, it seems, MacAskill commits himself to some kind of moral sentiment, namely a disdain for complicity in wrongdoing. If so, how do we weigh such sentiments against the utilitarian framework of EA? Or consider this: why not sacrifice all of my time and money, even to the extent of depriving my dependents of a decent Western standard of living, so that I could save some people in poorer countries? After all, even if I can't afford to send my child to school and 80% of my income has gone, we would still be significantly better off than the dying Bengali. They will die if I don't donate. My family will still live (albeit probably just above malnourishment). Clearly, a bad life > no life at all; the two are not of comparable moral significance. Why, then, should I not spend everything on giving to effective charities?
Rule Utilitarianism to save the day?
Perhaps, following rule-utilitarianism, we might say that such actions generally lead to bad outcomes in the long run. If I work in unethical banking for too long, I might get absorbed into the role and become too indulgent in personal gain. Or banking contributes to global poverty and my efforts legitimise the practice. Or, if I leave my family deprived of resources, I might feel terrible, and the damaged relationships would harm my dedication to good causes.
Setting aside the implausibility of the claims (we would tend to think that our complicity in a global financial crisis would be the driver of our guilt, not anything listed above), I think there are two broad problems with this.
The first problem concerns uncertainty. Namely, we cannot determine on an a priori basis what would or would not be likely to yield the best outcomes. The best we can do is generalise from history, yet the future is always going to be open to uncertainties vast and numerous. I am not going to address whether such uncertainties weaken the theory in principle, but I consider them to raise severe practical concerns. Especially where Effective Altruists deal with poverty alleviation, the dynamic nature of globalisation, coupled with ever-changing political currents, makes the future especially uncertain for us.
Why is this problematic? Simply put, it could drive the EA movement towards alleviating only those harms it knows to be "certain" and solvable, following a rule-utilitarian logic that assumes a static future. Yet many potentially great risks, not necessarily X-risks but perhaps just another disease, are unknown to us today. This is precisely the "shallow pond thinking" one would fall into by accepting this modification of EA, which attempts to solve its philosophical weaknesses.
The second issue concerns buy-in: rule-utilitarianism really isn't how anyone considers morality, or, for that matter, how Effective Altruists have articulated their positions either. But perhaps we've been thinking of morality in the wrong way for too long; maybe we should, from now on, only do those actions which tend to yield the most good. I conjecture that only a minute few would actually be convinced by this argument. Already, our moral sentiments are guided towards the intrinsic value of an action, from which utilitarianism distances itself. To impose a rule-based condition distances it further. Few, I think, would take this seriously; most would instead opt for a simpler view of EA, which runs into the troubles aforementioned. And even if rule-utilitarian EA were adopted, researching the unknown future and making the corresponding calculations would be well-nigh impossible to do efficaciously.
What is Effective Altruism good for then?
I believe that the reason Effective Altruism has picked up momentum amongst middle-class millennials in the West is precisely its simplicity. Do good, do the most good, and we can all live a better life. Overall, I have no doubt that it has been a force for good in raising awareness, acting as a growing social movement. At the very least, I think governments and other bodies will look into how effective their policies or recommendations are in solving poverty.
This social movement, however, is no more than a slogan asking us to care more. Its principles are ultimately inconsistent and counter-intuitive, and its practical applications can fall into simplistic and Eurocentric thinking. The complexities of moral philosophy underpinning the first problem, which Singer conveniently glosses over, are unlikely to be resolved soon. Nor do they need to be: we would have a much better system of care and aid long before society contemplates the issues I have attempted to outline here. Thus the main concern lies with its over-simplification in execution. Effective Altruism must, for now, be good at being good.