Why effective altruism is a dangerous philosophy
Two ways of reading Sam Bankman-Fried — both are damning.
In this newsletter:
(1) I share my column about how Sam Bankman-Fried exposes the perils of effective altruism, and represents something approaching a professional-managerial class interpretation of Batman.
(2) A few reading recommendations.
How Sam Bankman-Fried's fall exposes the perils of effective altruism
This article was published at MSNBC with all links (of which there are many):
The spectacular collapse of FTX, the big cryptocurrency exchange founded by the wunderkind crypto titan Sam Bankman-Fried, wiped out about a million people’s investments and has dealt a massive blow to trust in cryptocurrency. But it has also done tremendous damage to a philosophy that Bankman-Fried championed: effective altruism.
Effective altruism is a niche but influential theory of how to do good in the world. It’s buzzy in Silicon Valley, Oxford University and in certain corners of progressive data analysis, and its advocates have tens of billions of dollars at their disposal. But in the hands of Bankman-Fried (commonly known as SBF), effective altruism was neither effective nor altruistic. Instead, he illustrated how the do-gooder ideology can serve as a sleek vehicle for immense social harm.
Now the downfall of SBF could contribute to the downfall of effective altruism — or at least do irreversible damage to its mainstream reputation as a virtuous movement. That’s because SBF doesn’t just have to answer for allegedly defrauding customers, wiping out countless people's investments and life savings. He also has to answer for how this happened on his watch as an evangelist for a philosophy that’s about being exceptionally good.
But if one pays close attention to the more unsettling ideas behind how effective altruism works, it should become apparent how this whole debacle unfolded.
Effective altruists claim they strive to use reason and evidence to do the most good possible for the most people. Influenced by utilitarian ethics, they’re fond of crunching numbers to determine sometimes counterintuitive ideas for maximizing a philanthropic act’s effects by focusing on “expected value,” which they believe can be calculated by multiplying the value of an outcome by the probability of it occurring.
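A minimal sketch of that calculation, with hypothetical outcomes and numbers chosen purely for illustration:

```python
# Expected value as effective altruists describe it: multiply each
# possible outcome's value by its probability and sum the results.
# All outcomes and figures here are hypothetical, for illustration only.

def expected_value(outcomes):
    """Sum of value * probability over all (value, probability) pairs."""
    return sum(value * probability for value, probability in outcomes)

# A sure, modest benefit: help 1,000 people with certainty.
sure_thing = [(1_000, 1.0)]
# A long shot: a 0.1% chance of helping 10 million people, else nothing.
long_shot = [(10_000_000, 0.001), (0, 0.999)]

print(expected_value(sure_thing))  # 1000.0
print(expected_value(long_shot))   # higher expected value, despite the odds
```

On this arithmetic the long shot "wins" (an expected 10,000 people helped versus 1,000), which is how expected-value reasoning can produce the counterintuitive conclusions described above.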
SBF belonged to the “longtermist” sect of effective altruism, which focuses on events that could pose a long-term existential threat to humanity, like pandemics or the rise of runaway artificial intelligence. The reasoning for this focus is that more people will exist in the future than exist today, and thus the potential to do more good for more people is greater. He also adopted one of the movement’s signature strategies for effecting social change called “earning to give,” in which generating high income is more important than what kind of job one takes, because it enables people to give away more money for philanthropy. As a college student, SBF had lunch with William MacAskill, the most prominent intellectual advocate for effective altruism in the world, and then reportedly went into finance, and then crypto, based on the idea that it would allow him to donate more money. SBF had said he planned to give almost all of his vast wealth away.
SBF’s entire brand was tied up in effective altruism. The media buzz and fascination with SBF as an ascetic character helped attract investors and develop his reputation as an exception to the rule of shadiness and lawlessness in the world of crypto. His advertisements for FTX included his pledge to give away money. Journalists often noted his unkempt appearance, his Toyota Corolla, and the fact that he had roommates as signs of his apparent lack of interest in indulging in his own wealth. (They paid less attention to the fact that he lived in a multi-million dollar penthouse in the Bahamas.) Unlike most other players in crypto, SBF actively sought government regulation of his industry. He set up a foundation, the FTX Future Fund, advised by MacAskill, to distribute millions in grants. (The fund’s leadership team has recently resigned and says the organization is unlikely to be able to honor many of its committed grants.)
But as SBF faces allegations of fraud and has overseen the overnight evaporation of a million people's assets, his belief system is receiving new scrutiny. His shocking admissions to a Vox reporter a couple of weeks ago, in a conversation that he later said he did not realize was on the record, provide a window into the reckoning effective altruism is now facing.
In an online conversation with the reporter, SBF referred to his past bids to appear regulator-friendly as “just PR,” and he disavowed some of his previous statements about ethics. When the reporter asked whether he was being honest in past interviews when he said he would not do certain bad things for a greater good, such as running a tobacco company, he responded with a cryptic “heh.” At one particularly jaw-dropping point, SBF responded affirmatively to a question about whether his “ethics stuff” was “mostly a front.”
There’s some ambiguity in this and other parts of SBF’s exchange with the reporter, but broadly speaking, his responses can be interpreted in two ways.
The first possibility is that he’s confessing that his entire set of ethical commitments — including effective altruism — is a ruse. In this scenario, SBF is admitting that he’s a cynical exploiter of effective altruism for his personal enrichment.
The second possibility is that he’s saying that he’s extremely committed to effective altruism, and that he would be willing to do anything — including unsavory things — in order to get to what he saw as the greatest good. The "front" SBF would be referring to is the pretense that he's constrained by standard moral boundaries. In this scenario, he is an effective altruism extremist willing to cross any line.
Remarkably, both scenarios are plausible — and damning.
The reason the first possibility — cynical exploiter — is plausible is that when you look at SBF more closely, he's not terribly different from many captains of industry. He received plaudits for asking for crypto to be regulated by the government, but in reality he was seeking out weak regulators that he could boss around. He did set up a philanthropic fund, but the money it distributed was a mere fraction of the company's value — not necessarily different in function from the social responsibility operation of a typical corporation. And if he were a true longtermist, then why would he secretly donate as much money to Republicans as he gave publicly to Democrats, when Republicans are climate denialists and would oppose the kinds of robust government regulation that would guard against future pandemics and irresponsible development of AI technology?
The second possibility — extremist true believer — is also plausible because SBF, the vegan son of two consequentialist academics, demonstrated interest in utilitarian thinking and radical commitments to human and animal welfare as a student before even entering the job market. His first venture in crypto, before FTX, involved hiring and getting funding from the effective altruism community, and giving profits to its causes. He maintained a meaningful and financially consequential relationship with MacAskill as he became a billionaire. And before the collapse of FTX, he and his effective altruist-identifying colleagues openly talked about how they were inclined to take unfathomable levels of risk in their work in order to maximize total human happiness.
While these two scenarios reflect different outlooks on the world, both expose something alarming about effective altruism. It is a belief system that bad faith actors can hijack with tremendous ease and one that can lead true believers to horrifying ends-justify-the-means extremist logic. The core reason that effective altruism is a natural vehicle for bad behavior is that its cardinal demands do not require adherents to shun systems of exploitation or to change them; instead, it incentivizes turbo-charging them.
Defenders of effective altruism will say that some of its proponents have warned against doing harmful things to achieve a greater long-term good. But the reality is that the core philosophy allows it, and as a culture it encourages it.
The value proposition of this community is to think of morality through the prism of investment, using expected value calculations and cost-effectiveness criteria to funnel as much money as possible toward perceived good causes. It's an outlook that breeds a bizarre blend of elitism, insularity and apathy to the root causes of problems. This is a movement that encourages quant-focused intellectual snobbery and a distaste for people who are skeptical of suspending moral intuition and considerations of the real world. The key to unlocking righteousness is to "shut up and multiply." This is a movement in which promising young people are talked out of pursuing government jobs and talked into lucrative private sector jobs because of the importance of "earning to give." This is a movement whose adherents are mainly people who went to elite schools in the West, and who view rich people who individually donate money to the right portfolio of places as the saviors of the world. It's almost like a professional-managerial class interpretation of Batman.
The point is not that effective altruism does no good in the world. That is certainly not the case — it has funneled plenty of money to worthwhile causes, from malaria bed nets to animal welfare to pandemic preparedness. And some of its nonbillionaire adherents who have given away huge portions of their income have admirably subjected themselves to levels of material discomfort or modesty that most people in their position would never even consider. The impulse to do a lot of good and to be rigorous about it is a virtuous one and should indeed be encouraged. The problem is that this specific school of thought is a breeding ground for a fanaticism that ignores and often intensifies the sources of the very problems it’s purportedly trying to address.
Mainstream effective altruism displays no understanding of how modern capitalism — the system that it eagerly chooses to participate in — can explain extreme destitution in the Global South or the vulnerability of our society to pandemics. This crowd seems clueless about the reality that funding research into protecting against dangerous artificial intelligence will be impotent unless we structure our society and economy to prize public safety over capital's incentive to innovate for profit. If longtermists want to mitigate climate change, they should probably be radically reappraising an economic system that incentivizes short-sighted hyper-extractionism and perpetual growth.
To understand the myopia bred by effective altruism, look no further than the decision of MacAskill, the intellectual leader of the movement, to agree to work with a fund attached to a giant crypto exchange. Even without the fraud allegations, that’s a baffling decision for a professional ethicist. Many economists and social scientists have described cryptocurrency as a pure gambling product that resembles a Ponzi scheme. (It also has a giant carbon footprint.) Why would it make sense for a moral project to attach itself to such a source of revenue other than a blind interest in getting its hands on tons of money? And how can a movement that prides itself on foreseeing long-term problems be trusted to assess risk if they can't even see the perils of attaching themselves to a shady, unregulated financial product?
The question of how to do good cannot be divorced from questions of what is just and where power resides. This is a matter of morality: people concerned with doing good should be thinking about themselves not just as individual investors but as citizen-participants of systems that distribute suffering in the world unequally for reasons that are not natural but largely man-made. This is a matter of efficacy: if you want to alleviate suffering but you're encouraging people to go blindly into highly lucrative and thus often-predatory sectors of law, finance, tech, consulting and real estate, you're throwing gas on a lot of the fires you're trying to put out. This is a matter of practicality: building mass political movements on the left and instituting policy regimes that transform our commitments to each other is a far more plausible theory of doing good than relying on all the richest people in the world voluntarily converting into a collective of trolley-game loving philanthropist-monks.
In an interview with The New York Times’ Andrew Ross Sorkin on Wednesday, SBF tried to downplay his comments in the Vox interview. He said he “forgot” he was speaking to a reporter and said that he does believe in the causes that effective altruism stands for. He suggested that he thought FTX had to play up its focus on doing good in order to be a major business player. “I wish the world did not work this way,” he lamented.
SBF’s true motives will never be knowable. And it is entirely possible that his worldview involves some blend of the two ways of reading his quotes about his ethics being a front. Perhaps he got into the game to do good, but when the astonishing amount of money started flowing in, his ego and a desire to influence the world with his newfound power took hold, and he became more reckless and selfish because of it. In any case, what we do know is that he was guided by a philosophy that didn't caution him against his catastrophic mistakes, but very well may have primed him for them.
If you’re interested in more background on effective altruism, I strongly recommend reading this New Yorker profile on the leading lights of the movement and their questionable ideas. I also found Jon Ben-Menachem’s discussion of some of the intellectual and institutional history of effective altruism helpful.
If you’re interested in more background on the collapse of FTX, you may enjoy this Q&A I did with crypto skeptic Stephen Diehl in November.
> SBF belonged to the “longtermist” sect of effective altruism, which focuses on events that could pose a long-term existential threat to humanity, like pandemics or the rise of runaway artificial intelligence. The reasoning for this focus is that more people will exist in the future than exist today, and thus the potential to do more good for more people is greater.
The many people who may or will exist in the future are the most common motivation for longtermism, but you can make the argument for reducing existential risks on non-utilitarian grounds (e.g. https://archive.ph/UnOsH#selection-2243.0-2249.413 or https://www.erichgrunewald.com/posts/a-kantian-view-on-extinction). Like, I wonder how much of the disagreement is moral versus empirical. I think most people, if they thought there was a ~1/6 chance of a catastrophe on a scale that would cause human extinction this year, would be in favour of making an effort to prevent it, longtermist or not.
> He also adopted one of the movement’s signature strategies for effecting social change called “earning to give,” in which generating high income is more important than what kind of job one takes, because it enables people to give away more money for philanthropy.
EAs wouldn't say that generating a high income is more important than what kind of job one takes. We'd say it's one among many avenues towards impact, which suits some people better than others, and which absolutely does not give a license to pursue a harmful career (see e.g. https://80000hours.org/articles/harmful-career/).
> The first possibility is that he’s confessing that his entire set of ethical commitments — including effective altruism — is a ruse. In this scenario, SBF is admitting that he’s a cynical exploiter of effective altruism for his personal enrichment.
This interpretation seems to be wrong, see this new interview (https://archive.ph/uwawF): "[Question:] You shocked a lot of people when you referred in a recent interview to the 'dumb game that we woke Westerners play'. My understanding is that you were talking about corporate social responsibility and E.S.G., not about effective altruism, right? [Answer:] That’s right."
> Defenders of effective altruism will say that some of its proponents have warned against doing harmful things to achieve a greater long-term good. But the reality is that the core philosophy allows it, and as a culture it encourages it.
I don't think it's as easy as that -- I don't think you can just say "you're not allowed to do harmful things to achieve a greater [long-term] good". (The "long-term" seems incidental.) There are tons of situations where basically everyone endorses doing a (weakly) harmful thing to achieve a (greatly) greater good. For example, if you're in a situation where you can lie to save 1B people from terrible suffering, then I bet most people think it's not only acceptable, but obligatory to lie. If so, the ends clearly do sometimes justify the means, and doing harmful things (lying) is sometimes permissible to achieve a greater good (saving 1B people from terrible suffering).
So it becomes a matter of where to draw the line. That argument is harder than just denouncing people for letting the ends justify the means. (For the record, I absolutely don't think SBF's ends justified his means, and I think had he asked basically any EA whether he should commit fraud to save Alameda, they would've told him to please not do that.)
> The core reason that effective altruism is a natural vehicle for bad behavior is that its cardinal demands do not require adherents to shun systems of exploitation or to change them; instead, it incentivizes turbo-charging them. [...] The problem is that this specific school of thought is a breeding ground for a fanaticism that ignores and often _intensifies_ the sources of the very problems it's purportedly trying to address.
The FTX debacle is definitely a failure and likely net-negative for pandemic prevention at least (in addition to all the customers that were harmed, of course). But can you give some examples of EAs intensifying the amount of malaria in the world, or extreme poverty, or animal suffering?
> This crowd seems clueless about the reality that funding research into protecting against dangerous artificial intelligence will be impotent unless we structure our society and economy to prize public safety over capital's incentive to innovate for profit.
It's not that we're clueless about capitalism, it's that we don't think getting rid of capitalism is (a) remotely plausible with the resources/influence we have or (b) likely to reduce catastrophic risks from AI.
> And how can a movement that prides itself on foreseeing long-term problems be trusted to assess risk if they can't even see the perils of attaching themselves to a shady, unregulated financial product?
It's unclear to me what exactly MacAskill should've predicted. Should he have suspected that SBF was committing fraud in particular? That seems really hard to have known, given that basically all investors, several of whom had billions at stake, didn't suspect it. MacAskill did consider the possibility that FTX may crash (link: https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation), but this doesn't seem like the sort of peril you want to avoid at all costs.
I do think EAs didn't really consider the hypothesis that FTX would crash due to unambiguously criminal activities like fraud in particular. What does that tell us about EAs and long-term problems? I think not much about EAs' abilities to evaluate them, but maybe it suggests there are long-term problems that EAs haven't even considered -- unknown unknowns.
> [I]f you want to alleviate suffering but you're encouraging people to go blindly into highly lucrative and thus often-predatory sectors of law, finance, tech, consulting and real estate, you're throwing gas on a lot of the fires you're trying to put out.
EA has deemphasised earning to give since at least 2013 (link: https://80000hours.org/2013/06/why-earning-to-give-is-often-not-the-best-option/). See e.g. 80,000 Hours's mistakes page (link: https://80000hours.org/about/credibility/evaluations/mistakes/#mistakes-concerning-our-research-and-ideas) (from 2015): "We let ourselves become too closely associated with earning to give. This became especially obvious in August 2014 when we attended Effective Altruism Global in San Francisco, and found that many of the attendees – supposedly the people who know us best – saw us primarily as the people who advocate for earning to give. We’ve always believed, however, that earning to give is just one strategy among many, and think that a minority of people should pursue it. The cost is that we’ve put off people who would have been interested in us otherwise."