We won't solve non-alignment problems by doing research
Introduction
Even if we solve the AI alignment problem, we still face non-alignment problems: all the other existential problems that AI may bring.
People have written research agendas on various imposing problems that we are nowhere close to solving, and that we may need to solve before developing ASI. An incomplete list of topics: misuse; animal-inclusive AI; AI welfare; S-risks from conflict; gradual disempowerment; risks from malevolent actors; moral error.
The standard answer to these problems, the one that most research agendas take for granted, is “do research”. Specifically, do research in the conventional way: create a research agenda, explore some research questions, and fund other people to work on those questions.
If transformative AI arrives within the next decade, then we won’t solve non-alignment problems by doing research on how to solve them.