Where I Am Donating in 2024
Summary
It’s been a while since I last put serious thought into where to donate. I’m putting real thought into it this year, and I’m changing my mind on some things.
I now give higher priority to existential risk (especially AI risk), and lower priority to animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I’ve managed to reason myself out of those emotions.
Within x-risk:
- AI is the most important source of risk.
- There is a disturbingly high probability that alignment research won’t solve alignment by the time superintelligent AI arrives. Policy work seems more promising.
- Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.
In the rest of this post, I will explain:
- Why I prioritize x-risk over animal-focused longtermist work and global priorities research.
- Why I prioritize AI policy over AI alignment research.
- My beliefs about what kinds of policy work are best.
Then I provide a list of organizations working on AI policy, my evaluation of each of them, and where I plan to donate.
Cross-posted to the Effective Altruism Forum.