The Number of AI Safety Researchers Has Grown Significantly

This post analyzes the growth of the technical and non-technical AI safety fields in terms of the number of organizations and full-time equivalents (FTEs). For 2025, the technical AI safety field is estimated at about 600 FTEs and the non-technical field at about 500 FTEs, for a total of 1,100. Technical AI safety, especially LLM safety and interpretability research, has grown exponentially since 2020. Non-technical AI safety work, such as AI policy and governance, has also been accelerating, with a linear model fitting its growth well. The post lists the relevant organizations and their headcounts in detail and provides charts of the underlying data.

📊 **The AI safety field is expanding rapidly**: The data show that the number of AI safety researchers has grown markedly in recent years. Technical AI safety is estimated at roughly 600 full-time equivalents (FTEs) and non-technical AI safety at roughly 500 FTEs, for a total of 1,100. Compared with the 2022 estimate, the field has more than doubled in size, reflecting growing attention and investment.

📈 **Technical AI safety is growing exponentially**: Since 2020, both the number of technical AI safety research organizations and the number of FTEs have followed an exponential trend, with annual growth rates of roughly 24% and 21% respectively. Among the subfields of technical AI safety research, misc technical AI safety research, LLM safety, and interpretability are the three categories with the most organizations and FTEs.

🌐 **Non-technical AI safety is developing steadily**: The non-technical AI safety field, including AI policy, governance, and advocacy, has also expanded significantly. Although its growth is closer to linear, the pace has picked up since 2023. Google Scholar data show that the number of researchers with the "AI governance" tag has risen from 45 to more than 300, strong evidence that the field is flourishing.

🔬 **Research priorities are becoming clearer**: Within technical AI safety, LLM safety and interpretability are currently the two most active directions, attracting the most organizations and researchers. This reflects the frontier and challenges of current AI development, as well as the rising demand for safe and understandable AI systems.

🗓️ **Data tracking and model predictions**: The post analyzes data from 2010 to 2025, using scatter plots and fitted models (exponential for the technical field, linear for the non-technical field) to chart the field's trajectory. The updated data and models show that past predictions were somewhat off, underscoring the importance of continuing to track and update the data.

Published on September 27, 2025 5:03 PM GMT

Summary

The goal of this post is to analyze the growth of the technical and non-technical AI safety fields in terms of the number of organizations and number of FTEs working in these fields.

In 2022, I estimated that there were about 300 FTEs (full-time equivalents) working in the field of technical AI safety research and 100 on non-technical AI safety work (400 in total).

Based on updated data and estimates from 2025, I estimate that there are now approximately 600 FTEs working on technical AI safety and 500 FTEs working on non-technical AI safety (1100 in total).

Note that this post is an updated version of my old 2022 post Estimating the Current and Future Number of AI Safety Researchers.

Technical AI safety field growth analysis

The first step in analyzing the growth of the technical AI safety field is to create a spreadsheet listing the names of known technical AI safety organizations, when they were founded, and an estimated number of FTEs at each organization. The technical AI safety dataset contains 70 organizations working on technical AI safety and a total of 645 FTEs working at them (68 organizations and 620 FTEs still active in 2025).
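
As a concrete illustration of how such a spreadsheet can be queried, here is a minimal sketch in Python; the column names and the three example rows are illustrative placeholders, not the full dataset.

```python
# Minimal sketch of the dataset structure (three illustrative rows, not the full dataset).
import pandas as pd

orgs = pd.DataFrame(
    [
        # name, founded, closed (NaN if still active), category, FTEs
        ("Machine Intelligence Research Institute (MIRI)", 2000, 2024, "Agent foundations", 10),
        ("Anthropic", 2021, float("nan"), "Interpretability", 40),
        ("METR", 2023, float("nan"), "Evals", 31),
    ],
    columns=["name", "founded", "closed", "category", "ftes"],
)

def active_in(df: pd.DataFrame, year: int) -> pd.DataFrame:
    """Rows for organizations founded by `year` and not yet closed in `year`."""
    closed = df["closed"].fillna(year + 1)  # treat still-active orgs as closing after `year`
    return df[(df["founded"] <= year) & (closed > year)]

active = active_in(orgs, 2025)
print(len(active), "active organizations,", active["ftes"].sum(), "FTEs")
```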

Then I created two scatter plots, one showing the number of active technical AI safety research organizations and one showing the estimated number of FTEs working at them. On each graph, the x-axis covers the years 2010 to 2025 and the y-axis is the number of active organizations or the estimated total FTEs at those organizations. I also fit models to the scatter plots and found that an exponential model fit both the organization and FTE data best.
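
For reference, here is a minimal sketch of this kind of fit using scipy; the year-by-year counts below are placeholder values for illustration, not the actual counts behind Figures 1 and 2.

```python
# Sketch of fitting an exponential model to yearly organization counts.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(2010, 2026)
# Placeholder counts for illustration only; the real values come from the dataset.
org_counts = np.array([3, 3, 4, 4, 5, 6, 10, 11, 11, 12, 14, 21, 33, 42, 57, 68])

def exp_model(t, a, b):
    # a = modeled count in 2010, b = continuous (per-year) growth exponent
    return a * np.exp(b * (t - 2010))

(a, b), _ = curve_fit(exp_model, years, org_counts, p0=(3.0, 0.2))
print(f"implied annual growth rate: {np.exp(b) - 1:.1%}")
```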

Figure 1: Scatter plot showing estimates for the number of technical AI safety research organizations by year from 2010 to 2025 with an exponential curve to fit the data.
Figure 2: Scatter plot showing the estimated number of technical AI safety FTEs by year from 2010 to 2025 with an exponential curve to fit the data.

The two graphs show relatively slow growth from 2010 to 2020; around 2020, the number of technical AI safety organizations and FTEs begins to increase rapidly, and it has continued growing rapidly through today (2025).

The exponential models describe a 24% annual growth rate in the number of technical AI safety organizations and a 21% annual growth rate in the number of technical AI safety FTEs.
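
For concreteness, the annual rate follows directly from the fitted exponent: if the model is $N(t) = a e^{b(t - t_0)}$, one year of growth multiplies the count by $e^{b}$. The exponents below are back-solved from the stated rates rather than read off the figures:

$$\text{annual growth} = e^{b} - 1, \qquad b = \ln(1.24) \approx 0.215 \ (\text{organizations}), \qquad b = \ln(1.21) \approx 0.191 \ (\text{FTEs}).$$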

I also created graphs showing the number of technical AI safety organizations and FTEs by category. The top three categories by number of organizations and FTEs are Misc technical AI safety research, LLM safety, and interpretability.

Misc technical AI safety research is a broad category consisting mostly of empirical AI safety research that is not focused purely on LLM safety, such as scalable oversight, adversarial robustness, and jailbreaks, as well as research that spans several areas and is difficult to place in a single category.

Figure 3: Number of technical AI safety organizations in each category in each year from 2010 to 2025.
Figure 4: Estimated number of technical AI safety FTEs in each category in each year from 2010 to 2025.

Non-technical AI safety field growth analysis

I also applied the same analysis to a dataset of non-technical AI safety organizations. The non-technical AI safety landscape, which includes fields like AI policy, governance, and advocacy, has also expanded significantly. The non-technical AI safety dataset contains 45 organizations working on non-technical AI safety and a total of 489 FTEs working at them.

The graphs plotting the growth of the non-technical AI safety field show an acceleration in the rate of growth around 2023, though a linear model fits the data well over the years 2010 to 2025.
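
A linear fit is even simpler to reproduce; the sketch below uses numpy's polyfit, again with placeholder counts rather than the actual non-technical dataset.

```python
# Sketch of fitting a linear model to yearly non-technical organization counts.
import numpy as np

years = np.arange(2010, 2026)
# Placeholder counts for illustration only; the real values come from the non-technical dataset.
org_counts = np.array([2, 3, 4, 6, 8, 9, 10, 12, 16, 19, 21, 23, 27, 33, 41, 45])

slope, intercept = np.polyfit(years, org_counts, deg=1)
print(f"~{slope:.1f} additional active organizations per year; "
      f"fitted 2025 value: {slope * 2025 + intercept:.0f}")
```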

Figure 5: Scatter plot showing estimates for the number of non-technical AI safety organizations by year from 2010 to 2025 with a linear model to fit the data.
Figure 6: Scatter plot showing the estimated number of non-technical AI safety FTEs by year from 2010 to 2025 with a linear curve to fit the data.

In the previous post from 2022, I counted 45 researchers on Google Scholar with the AI governance tag. There are now over 300 researchers with the AI governance tag, evidence that the field has grown.

I also created graphs showing the number of non-technical AI safety organizations and FTEs by category.

Figure 7: Number of non-technical AI safety organizations in each category in each year from 2010 to 2025.
Figure 8: Estimated number of non-technical AI safety FTEs in each category in each year from 2010 to 2025.

Acknowledgements

Thanks to Ryan Kidd from SERI MATS for sharing data on AI safety organizations, which was useful for writing this post.

Appendix

Old and new dataset and model comparison

The following graph compares the old dataset and model from the 2022 post Estimating the Current and Future Number of AI Safety Researchers with the updated dataset and model.

The old model is the blue line and the new model is the orange line.

The old model predicts a value of 484 active technical FTEs in 2025 and the true value is 620. The percentage error between the predicted and true value is 22%.
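
Spelled out, the error calculation is:

$$\text{percentage error} = \frac{|620 - 484|}{620} = \frac{136}{620} \approx 22\%.$$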

Technical AI safety organizations table

| Name | Founded | Year of Closure | Category | FTEs |
|---|---|---|---|---|
| Machine Intelligence Research Institute (MIRI) | 2000 | 2024 | Agent foundations | 10 |
| Future of Humanity Institute (FHI) | 2005 | 2024 | Misc technical AI safety research | 10 |
| Google DeepMind | 2010 | | Misc technical AI safety research | 30 |
| GoodAI | 2014 | | Misc technical AI safety research | 5 |
| Jacob Steinhardt research group | 2016 | | Misc technical AI safety research | 9 |
| David Krueger (Cambridge) | 2016 | | RL safety | 15 |
| Center for Human-Compatible AI | 2016 | | RL safety | 10 |
| OpenAI | 2016 | | LLM safety | 15 |
| Truthful AI (Owain Evans) | 2016 | | LLM safety | 3 |
| CORAL | 2017 | | Agent foundations | 2 |
| Eleuther AI | 2020 | | LLM safety | 5 |
| NYU He He research group | 2021 | | LLM safety | 4 |
| MIT Algorithmic Alignment Group (Dylan Hadfield-Menell) | 2021 | | LLM safety | 10 |
| Anthropic | 2021 | | Interpretability | 40 |
| Redwood Research | 2021 | | AI control | 10 |
| Alignment Research Center (ARC) | 2021 | | Theoretical AI safety research | 4 |
| Lakera | 2021 | | AI security | 3 |
| SERI MATS | 2021 | | Misc technical AI safety research | 20 |
| Constellation | 2021 | | Misc technical AI safety research | 18 |
| NYU Alignment Research Group (Sam Bowman) | 2022 | 2024 | LLM safety | 5 |
| Center for AI Safety (CAIS) | 2022 | | Misc technical AI safety research | 5 |
| Fund for Alignment Research (FAR) | 2022 | | Misc technical AI safety research | 15 |
| Conjecture | 2022 | | Misc technical AI safety research | 10 |
| Aligned AI | 2022 | | Misc technical AI safety research | 2 |
| Apart Research | 2022 | | Misc technical AI safety research | 10 |
| Epoch AI | 2022 | | AI forecasting | 5 |
| AI Safety Student Team (Harvard) | 2022 | | LLM safety | 5 |
| Tegmark Group | 2022 | | Interpretability | 5 |
| David Bau Interpretability Group | 2022 | | Interpretability | 12 |
| Apart Research | 2022 | | Misc technical AI safety research | 40 |
| Dovetail Research | 2022 | | Agent foundations | 5 |
| PIBBSS | 2022 | | Interdisciplinary | 5 |
| METR | 2023 | | Evals | 31 |
| Apollo Research | 2023 | | Evals | 19 |
| Timaeus | 2023 | | Interpretability | 8 |
| London Initiative for AI Safety (LISA) and related programs | 2023 | | Misc technical AI safety research | 10 |
| Cadenza Labs | 2023 | | LLM safety | 3 |
| Realm Labs | 2023 | | AI security | 6 |
| ACS | 2023 | | Interdisciplinary | 5 |
| Meaning Alignment Institute | 2023 | | Value learning | 3 |
| Orthogonal | 2023 | | Agent foundations | 1 |
| AI Security Institute (AISI) | 2023 | | Evals | 50 |
| Shi Feng research group (George Washington University) | 2024 | | LLM safety | 3 |
| Virtue AI | 2024 | | AI security | 3 |
| Goodfire | 2024 | | Interpretability | 29 |
| Gray Swan AI | 2024 | | AI security | 3 |
| Transluce | 2024 | | Interpretability | 15 |
| Guide Labs | 2024 | | Interpretability | 4 |
| Aether research | 2024 | | LLM safety | 3 |
| Simplex | 2024 | | Interpretability | 2 |
| Contramont Research | 2024 | | LLM safety | 3 |
| Tilde | 2024 | | Interpretability | 5 |
| Palisade Research | 2024 | | AI security | 6 |
| Luthien | 2024 | | AI control | 1 |
| ARIA | 2024 | | Provably safe AI | 1 |
| CaML | 2024 | | LLM safety | 3 |
| Decode Research | 2024 | | Interpretability | 2 |
| Meta superintelligence alignment and safety | 2025 | | LLM safety | 5 |
| LawZero | 2025 | | Misc technical AI safety research | 10 |
| Geodesic | 2025 | | CoT monitoring | 4 |
| Sharon Li (University of Wisconsin Madison) | 2020 | | LLM safety | 10 |
| Yaodong Yang (Peking University) | 2022 | | LLM safety | 10 |
| Dawn Song | 2020 | | Misc technical AI safety research | 5 |
| Vincent Conitzer | 2022 | | Multi-agent alignment | 8 |
| Stanford Center for AI Safety | 2018 | | Misc technical AI safety research | 20 |
| Formation Research | 2025 | | Lock-in risk research | 2 |
| Stephen Byrnes | 2021 | | Brain-like AGI safety | 1 |
| Roman Yampolskiy | 2011 | | Misc technical AI safety research | 1 |
| Softmax | 2025 | | Multi-agent alignment | 3 |
| Scott Niekum (University of Massachusetts Amherst) | 2018 | | RL safety | 4 |
| Total (70 organizations) | | | | 645 |

Non-technical AI safety organizations table

| Name | Founded | Category | FTEs |
|---|---|---|---|
| Centre for Security and Emerging Technology (CSET) | 2019 | research | 20 |
| Epoch AI | 2022 | forecasting | 20 |
| Centre for Governance of AI (GovAI) | 2018 | governance | 40 |
| Leverhulme Centre for the Future of Intelligence | 2016 | research | 25 |
| Center for the Study of Existential Risk (CSER) | 2012 | research | 3 |
| OpenAI | 2016 | governance | 10 |
| DeepMind | 2010 | governance | 10 |
| Future of Life Institute | 2014 | advocacy | 10 |
| Center on Long-Term Risk | 2013 | research | 5 |
| Open Philanthropy | 2017 | research | 15 |
| Rethink Priorities | 2018 | research | 5 |
| UK AI Security Institute (AISI) | 2023 | governance | 25 |
| European AI Office | 2024 | governance | 50 |
| Ada Lovelace Institute | 2018 | governance | 15 |
| AI Now Institute | 2017 | governance | 15 |
| The Future Society (TFS) | 2014 | advocacy | 18 |
| Centre for Long-Term Resilience (CLTR) | 2019 | governance | 5 |
| Stanford Institute for Human-Centered AI (HAI) | 2019 | research | 5 |
| Pause AI | 2023 | advocacy | 20 |
| Simon Institute for Longterm Governance | 2021 | governance | 10 |
| AI Policy Institute | 2023 | governance | 1 |
| The AI Whistleblower Initiative | 2024 | whistleblower support | 5 |
| Machine Intelligence Research Institute | 2024 | advocacy | 5 |
| Beijing Institute of AI Safety and Governance | 2024 | governance | 5 |
| ControlAI | 2023 | advocacy | 10 |
| International Association for Safe and Ethical AI | 2024 | research | 3 |
| International AI Governance Alliance | 2025 | advocacy | 1 |
| Center for AI Standards and Innovation (U.S. AI Safety Institute) | 2023 | governance | 10 |
| China AI Safety and Development Association | 2025 | governance | 10 |
| Transformative Futures Institute | 2022 | research | 4 |
| AI Futures Project | 2024 | advocacy | 5 |
| AI Lab Watch | 2024 | watchdog | 1 |
| Center for Long-Term Artificial Intelligence | 2022 | research | 12 |
| SaferAI | 2023 | research | 14 |
| AI Objectives Institute | 2021 | research | 16 |
| Concordia AI | 2020 | research | 8 |
| CARMA | 2024 | research | 10 |
| Encode AI | 2020 | governance | 7 |
| Safe AI Forum (SAIF) | 2023 | governance | 8 |
| Forethought Foundation | 2018 | research | 8 |
| AI Impacts | 2014 | research | 3 |
| Cosmos Institute | 2024 | research | 5 |
| AI Standards Labs | 2024 | governance | 2 |
| Center for AI Safety | 2022 | advocacy | 5 |
| CeSIA | 2024 | advocacy | 5 |
| Total (45 organizations) | | | 489 |

 


