AI Safety Bootcamp in Singapore: Technology and Governance in Equal Measure

 

The recently held Machine Learning for Good (ML4Good) bootcamp in Singapore offered participants an opportunity to engage deeply with AI safety. The bootcamp covered a broad range of topics, from foundational machine learning concepts to AI-safety-specific techniques (such as interpretability and RLHF), complemented by hands-on practical sessions. It also addressed AI governance, including interpretation of recent AI legislation. The curriculum was organized mainly around an AI safety guide, with a packed daily schedule of lectures, technical workshops, and discussions. The bootcamp also included personal career planning guidance and a two-day project phase, encouraging participants to explore directions of interest within AI safety. Its successful delivery, notably as the first edition held in Asia, contributed to knowledge dissemination and talent development in the AI safety field.

🌟 **A comprehensive AI safety curriculum**: The ML4Good Singapore bootcamp offered a broad AI safety knowledge base, spanning general machine learning techniques (such as optimizers and Transformers), core AI safety issues (such as capability evaluation, risk forecasting, and policy-making), and specific technical directions (such as interpretability and RLHF). This structured learning path, combining lectures with hands-on practice, aimed to help participants build a well-rounded understanding of the AI safety field.

🤝 **An emphasis on community building and collaboration**: Beyond teaching technical skills and knowledge, the bootcamp actively cultivated a positive community atmosphere. Group activities, encouragement of mutual support, and career planning guidance fostered exchange and collaboration among participants. This people-centered experience let participants feel the support and belonging of a community while learning about AI safety, laying a solid network foundation for future AI safety work.

💰 **A milestone for AI safety talent development in the Asia-Pacific region**: As ML4Good's first bootcamp in Asia, the successful Singapore event carries real significance. It markedly lowered the cost and barriers for Asia-Pacific participants to access AI safety training, opening up new ground for knowledge dissemination and talent cultivation in the region. This suggests that attention and resources in the AI safety field are expanding to more regions.

💡 **Flexible career development paths**: The bootcamp emphasized that entering AI safety does not necessarily require a deep academic background (such as a master's degree or PhD); even beginners can contribute by joining existing projects or doing non-technical work (such as field building). This offers a more flexible and practical career outlook for those intending to move into AI safety, encouraging broader participation in this important field.

Published on October 6, 2025 6:49 PM GMT

Introduction

This is my personal report on the recently held Machine Learning for Good (ML4Good) bootcamp in Singapore, Sept 20-28, 2025.

ML4Good provides intensive in-person bootcamps to upskill people in AI safety. The bootcamps have been held in various parts of the world (mainly Europe and LatAm). ML4Good Singapore was, to the best of my knowledge, the first ML4Good bootcamp in Asia. You can find more information at their page.

There have been similar posts in the LW community as well (for example, see this and this).

Curriculum and Schedules

The bootcamp covered a wide range of topics related to AI safety, including general machine learning techniques (e.g., optimizers, Transformers), core safety issues such as capability evaluation, risk forecasting, and policy-making, and specific technical directions such as interpretability and RLHF.

Our main book is the AI Safety Atlas written by CeSIA. The first three chapters were prerequisites for the bootcamp, and my impression is that the course was organized around those chapters.

A usual day started at 9 AM and formally ended at 7:30 PM. A single day usually consisted of a mix of lectures, hands-on technical sessions, and other workshops in the format of discussions. The mix varied; for example, our first day was mostly filled with hands-on sessions, whereas on some other days lectures and discussions were more common.

Besides the lecture-style sessions, we also had one-on-one sessions between participants, as well as career planning. In the one-on-one session, each participant was assigned a partner and given some time to talk through their career plans and exchange feedback. Career planning was run by the instructors, who helped the participants solidify their career plans and provided feedback as well.

The last major component of the bootcamp was the final project. All participants were given roughly 2 days (10 hours) to work on AI-safety-related topics of their interest. A large number of participants worked together to set up accountability systems for their current or future AI safety endeavors (e.g. fellowships, field building), and the rest did a mixture of governance and technical work on quite diverse topics, e.g. eval awareness, control, and red-teaming, to name a few. I did my project with another participant on the topic of interpretability of speech-augmented models.

Instructors

We had the following very wonderful people as our instructors:

Valerie Pang from the Singapore AI Safety Hub (SASH) acted as the main coordinator of the event (and a special thanks to Jia Yang for offering her place as the venue for the second day!).

We were also grateful to have Tekla Emborg (Future of Life Institute, governance) and Mike Zijdel (Catalyze, startup incubation) as external speakers.

Participants

There were 14 participants in the program. From ASEAN countries, we had people from Indonesia, the Philippines, Malaysia, and Singapore. There were also some participants from Taiwan, China, and Japan. The participants' backgrounds were somewhat diverse.

The Good Things

The Good Things that Might be Improved

Final Remarks

I wasn't really confident earlier about how, and whether, I should go into AI safety, but the camp provided enough of a nudge for me to start spending more time on AI safety. One major thing I learned was that I could probably start very early in AI safety without needing an advanced background (an MSc/PhD or expertise in some topic of AI safety). It seems that there are a lot of good introductory projects out there, and even I can contribute to something non-technical, such as field building, with good potential impact.

I mentioned the vibe a lot because, personally, the people were a major net-positive contributor to my experience! I probably would lean less towards working on AI safety if I felt the community was unwelcoming, but my experience has been the opposite so far.

I am very happy to recommend this camp to anyone interested in AI safety and will be interested to see more such initiatives, especially in the region.

Notes: Special thanks to all ML4Good Singapore organizers and participants who made the event possible, and hence allowed me to write this post. Also special thanks to Jia Yang, Harry, Valerie, and Sasha for their feedback on this post.




