Altruistic Organizations Should Consider Counterfactuals When Hiring
Counterfactuals matter. When you’re deciding whether to take a job, you should care about who would take it if you didn’t, and how much worse they would do it than you would.
This matters from the other side too: employers should consider counterfactuals when deciding who to hire. Suppose you’re an employer considering a promising candidate. What would that candidate do if you didn’t hire them, and how much good would that do compared to working for you?
If a particular candidate cares a lot about improving the lives of sentient beings, they’d probably do something valuable even if they didn’t get hired, and this should count as a consideration against hiring them.
This comes into play for organizations that hire many altruistically minded people but also hire people who might not focus on doing good if they took a different job. For example, GiveWell hires some analysts who might otherwise go into consulting, and MIRI hires researchers who might otherwise go into academia and work on relatively unimportant math problems. These employees end up doing way more good than they would otherwise. But GiveWell and MIRI also hire many people who care a lot about improving the world and would do a lot of good even if they worked somewhere else. The only benefit of hiring these people comes from the differential between the good they do there and the good they would do elsewhere.¹
Now, this is not to say you should never hire EAs or other altruists. There are two big reasons why hiring altruists still makes sense in many cases:
- You don’t have any alternative candidates worth hiring, or finding such a candidate would require a large investment.
- A particular altruistic candidate looks sufficiently better than the alternative candidate that the difference between the altruistic candidate’s value at your organization and their value elsewhere still exceeds the value the alternative candidate would add at your organization.
That second reason may be a bit confusing, so let’s delve into it further. When a candidate chooses to work for you instead of somewhere else, presumably that’s because they will do more good working for you. The amount of good you do by hiring them is determined by how much good they do directly minus how much good they would have done if you hadn’t hired them. Let’s say they would do X good working for you and Y good working elsewhere, so you add Z = X - Y value to the world by hiring them. If your alternative non-altruistic candidate would do less than Z good by working for you, it’s still worth it to hire the more altruistic applicant.
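A quick illustration with made-up numbers (purely hypothetical, not estimates for any real organization): suppose the altruistic candidate would do X = 10 units of good working for you and Y = 7 units working elsewhere. Then

$$
Z = X - Y = 10 - 7 = 3.
$$

If the alternative, non-altruistic candidate would do 2 units of good working for you (and roughly none otherwise), hiring the altruist still adds more value, since 3 > 2. If the alternative candidate would instead do 4 units, hiring them does more good, even though the altruistic candidate is better at the job.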
How much does this consideration matter in practice? It probably depends on each organization’s particular circumstances. I would expect that this wouldn’t change your decision about who to hire in the overwhelming majority of cases, but it may make a difference sometimes.
Notes
1. This doesn’t just apply to people within the EA community; I’m certainly not claiming that only EAs do things that are effective. It applies to anyone who would do valuable things otherwise.