Published on October 20, 2025 5:02 PM GMT
TL;DR: AI Safety as a cause-area has grown to a substantial size within Effective Altruism. To avoid neglect of other cause-areas and to help the field grow more efficiently, I advocate running cause-area specific conferences. They could shape a shared identity for the field, lower access barriers for non-EA talent, and strengthen connections to the broader ecosystem.
AI Safety as an archetype
Over the last few years, the AI Safety field has been growing rapidly. As a result, the topic has become more prevalent within the broader Effective Altruism community. This has, for example, led to 80,000 Hours shifting their focus towards safely navigating the transition to AGI.[1] Many local EA groups are experiencing a similar trend, with discussions becoming more and more focused on AI Safety. In my opinion, this has two disadvantages:
- People who are not involved with AI Safety as a cause area find it harder to make space for their topics within local EA gatherings.
- Coordination and onboarding of non-EA AI Safety people is made more difficult than it has to be.
When I say "space for topics within EA", I also include the very principles of Effective Altruism. With this interpretation in mind, the Centre for Effective Altruism (CEA) is indeed trying to combat the first disadvantage by returning to a principles-first approach to EA community building. I support this step and think it may help other cause-areas and EA principles avoid neglect. However, it also implies less support for AI Safety. I therefore advocate holding more international or national AI Safety-specific gatherings. I am particularly excited about conferences in a style similar to EAG(x) conferences. I believe such conferences have several benefits over EAG(x) conferences, including the following:
- The barriers to accessing the field are drastically reduced.
  - People who are curious about the trajectory of AI are much more likely to stumble upon AI Safety by chance if the field has independent visibility than to first discover it through EA.
  - Local events and conferences are often visible to broader audiences, for example through local event-sharing platforms. Labelling them as AI Safety events can help attract people specifically interested in AI.
  - On a similar note, the visibility of AI Safety events could mean that more people learn about making AI safe, rather than just getting hyped about capabilities.
  - People who do not want to identify with EA can more easily join the discourse on AI Safety at dedicated events.
- Having topic-specific events helps create an identity for the field. If gatherings mainly happen at EAG(x) conferences, this aspect is lost; people attending EAG(x) are more likely to feel part of the EA community instead.
- Having a big conference on an AI-related topic in town might attract representatives of AI labs, researchers, and politicians who work on AI. Even if they do not become directly involved in making AI development safe, connecting with them can still be very valuable for the community.
That said, I think that such conferences should happen in addition to EAG(x) conferences, rather than replacing them. I also don't think that the field of AI Safety should become completely detached from EA. I believe that sharing the same principles of doing good and using reason when making decisions is very beneficial to the AI Safety field. Additionally, people who first join either of the two communities might benefit greatly from joining the other as well. Understanding the core EA motivation seems valuable to anyone in the AI Safety field, and vice versa, many people might be most effective in their career when working on making AGI safe.
While I currently only see AI Safety as a potentially dominating topic that could eat EA, I think the benefits of broader cause-area specific gatherings could apply to other cause-areas just the same.
