Dec 7, 2022 (edited Dec 8, 2022)

> SBF belonged to the “longtermist” sect of effective altruism, which focuses on events that could pose a long-term existential threat to humanity, like pandemics or the rise of runaway artificial intelligence. The reasoning for this focus is that more people will exist in the future than exist today, and thus the potential to do more good for more people is greater.

The many people who may or will exist in the future are the most common motivation for longtermism, but you can also make the argument for reducing existential risks on non-utilitarian grounds (e.g. https://archive.ph/UnOsH#selection-2243.0-2249.413 or https://www.erichgrunewald.com/posts/a-kantian-view-on-extinction). I wonder how much of the disagreement here is moral versus empirical. I think most people, if they thought there was a ~1/6 chance this year of a catastrophe on a scale that would cause human extinction, would be in favour of making an effort to prevent it, longtermist or not.

> He also adopted one of the movement’s signature strategies for effecting social change called “earning to give,” in which generating high income is more important than what kind of job one takes, because it enables people to give away more money for philanthropy.

EAs wouldn't say that generating a high income is more important than what kind of job one takes. We'd say it's one among many avenues towards impact, which suits some people better than others, and which absolutely does not give a license to pursue a harmful career (see e.g. https://80000hours.org/articles/harmful-career/).

> The first possibility is that he’s confessing that his entire set of ethical commitments — including effective altruism — is a ruse. In this scenario, SBF is admitting that he’s a cynical exploiter of effective altruism for his personal enrichment. 

This interpretation seems to be wrong; see this new interview (https://archive.ph/uwawF): "[Question:] You shocked a lot of people when you referred in a recent interview to the 'dumb game that we woke Westerners play'. My understanding is that you were talking about corporate social responsibility and E.S.G., not about effective altruism, right? [Answer:] That’s right."

> Defenders of effective altruism will say that some of its proponents have warned against doing harmful things to achieve a greater long-term good. But the reality is that the core philosophy allows it and as a culture it encourages it.

I don't think it's as easy as that -- I don't think you can just say "you're not allowed to do harmful things to achieve a greater [long-term] good". (The "long-term" seems incidental.) There are tons of situations where basically everyone endorses doing a (weakly) harmful thing to achieve a (much) greater good. For example, if you're in a situation where you can lie to save 1B people from terrible suffering, then I bet most people would think it not only acceptable but obligatory to lie. If so, the ends clearly do sometimes justify the means, and doing harmful things (lying) is sometimes permissible in order to achieve a greater good (saving 1B people from terrible suffering).

So it becomes a matter of where to draw the line. Making that argument is harder than just denouncing people for letting the ends justify the means. (For the record, I absolutely don't think SBF's ends justified his means, and I think had he asked basically any EA whether he should commit fraud to save Alameda, they would've told him to please not do that.)

> The core reason that effective altruism is a natural vehicle for bad behavior is that its cardinal demands do not require adherents to shun systems of exploitation or to change them; instead, it incentivizes turbo-charging them. [...] The problem is that this specific school of thought is a breeding ground for a fanaticism that ignores and often _intensifies_ the sources of the very problems it's purportedly trying to address.

The FTX debacle is definitely a failure, and likely net-negative for pandemic prevention at least (in addition to all the customers who were harmed, of course). But can you give some examples of EAs increasing the amount of malaria in the world, or extreme poverty, or animal suffering?

> This crowd seems clueless about the reality that funding research into protecting against dangerous artificial intelligence will be impotent unless we structure our society and economy to prize public safety over capital's incentive to innovate for profit.

It's not that we're clueless about capitalism; it's that we don't think getting rid of capitalism is (a) remotely plausible with the resources/influence we have or (b) likely to reduce catastrophic risks from AI.

> And how can a movement that prides itself on foreseeing long-term problems be trusted to assess risk if they can't even see the perils of attaching themselves to a shady, unregulated financial product?

It's unclear to me what exactly MacAskill should've predicted. Should he have suspected that SBF was committing fraud in particular? That seems really hard to have known, given that basically all investors, several of whom had billions at stake, didn't suspect it. MacAskill did consider the possibility that FTX might crash (link: https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation), but this doesn't seem like the sort of peril you want to avoid at all costs.

I do think EAs didn't really consider the hypothesis that FTX would crash due to unambiguously criminal activity like fraud. What does that tell us about EAs and long-term problems? I think not much about EAs' ability to evaluate them, but maybe it suggests there are long-term problems that EAs haven't even considered -- unknown unknowns.

> [I]f you want to alleviate suffering but you're encouraging people to go blindly into highly lucrative and thus often-predatory sectors of law, finance, tech, consulting and real estate, you're throwing gas on a lot of the fires you're trying to put out.

EA has deemphasised earning to give since at least 2013 (link: https://80000hours.org/2013/06/why-earning-to-give-is-often-not-the-best-option/). See e.g. 80,000 Hours's mistakes page (link: https://80000hours.org/about/credibility/evaluations/mistakes/#mistakes-concerning-our-research-and-ideas) (from 2015): "We let ourselves become too closely associated with earning to give. This became especially obvious in August 2014 when we attended Effective Altruism Global in San Francisco, and found that many of the attendees – supposedly the people who know us best – saw us primarily as the people who advocate for earning to give. We’ve always believed, however, that earning to give is just one strategy among many, and think that a minority of people should pursue it. The cost is that we’ve put off people who would have been interested in us otherwise."
