I'm wary of increasing government expertise on AI
Many people in AI safety, especially in AI policy, want to increase government expertise on AI. For example, they want to place people with AI research experience in relevant positions within government. That may not be a good idea.
People who better understand AI can write more useful regulations. However, people with relevant expertise (such as ML researchers) tend to be less in favor of strong regulations and more in favor of accelerating AI development.[1] We need regulations to prevent misaligned AI from killing everyone, and to prevent other kinds of catastrophes. If government expertise goes up, then, all else equal, we will get fewer such regulations, not more.
If we do get strong regulations, those regulations will turn out better if AI experts help write them. But we don’t get that by increasing expertise in general; we need AI expertise combined with an understanding of why powerful AI is dangerous.
Government expertise on AI safety matters more than expertise on AI in general. But even there, I’m worried. The most legible AI safety “experts” are the ones who work at AI companies, where strong incentives pressure them to believe that the alignment problem is solvable and that companies shouldn’t be regulated too hard. The sorts of people whom I would most want to see in government AI safety roles[2] don’t have job titles like “Senior Alignment Researcher at OpenAI”; their titles are more like “Independent Researcher Guy (gender-neutral) Who Posts on LessWrong” or “Guy Who Dropped Out of High School and Started an AI Safety Nonprofit but Has Never Published an ML Paper”.
I don’t have a great answer for what to do here. It’s important for government to have expertise on AI, but naive efforts to increase government expertise may also increase the probability that AI kills everyone.
One answer: Work on educating policy-makers specifically about AI risks, as Palisade Research does. Educating non-experts about AI risk seems less fraught than trying to get experts hired.
Another answer: Try to increase government willingness to regulate AI, not government expertise. Right now, there is not much political will for strong regulations. Without political will, nothing happens. Most of the orgs I considered donating to this year work on increasing willingness to regulate AI.
Notes
1. ML researchers enjoy doing ML research, and cognitive dissonance often prevents them from believing that ML research could be harmful, or even destroy the world. According to a 2025 Pew poll, 58% of the public and 56% of AI experts say they’re concerned that the government won’t go far enough in regulating AI. That’s good to see. But the same poll also shows that AI experts are less worried than the public about the dangers of AI. I fear that experts will favor regulations that don’t impede AI progress, which will ultimately do nothing to prevent extinction.
2. At least with regard to their expertise, not necessarily their skill at navigating bureaucracy.