LessWrong, October 16
Electronics Mechanic Moves Into AI Safety Research, Focusing on Model Welfare

 

Jonah Cummins, a 26-year-old electronics mechanic, has announced a 30-month self-study plan aimed at becoming an AI safety researcher, with a particular focus on Anthropic's Model Welfare team. He was deeply affected by witnessing casual cruelty toward AI systems in online communities, which made him aware of potential AI welfare concerns. Despite a non-traditional background, without a CS degree or ML experience, he has strong systems-diagnosis and troubleshooting skills and has begun learning Python and the fundamentals of AI safety. He has laid out a detailed 30-month learning and career-development plan, aiming to contribute to research on the well-being and moral status of AI systems, and is seeking guidance, accountability, and resources from the community.

💡 Motivation for the career change: The author witnessed people being casually and deliberately cruel to AI systems, which prompted him to reflect seriously on potential AI welfare issues and to commit to AI safety research, particularly model welfare.

🛠️ Non-traditional background and skills: The author is an electronics mechanic with over three years of experience, skilled at diagnosing failures in critical systems, systems-level thinking, and troubleshooting, and able to read complex schematics, although he only recently began learning to program.

🗓️ A detailed 30-month plan: The author has mapped out a phased 30-month learning and career roadmap covering Python fundamentals, AI safety coursework, introductory machine learning, and interpretability research, with the ultimate goal of joining Anthropic's Model Welfare team to study AI consciousness, welfare assessment, and moral status.

🤝 Seeking community support: The author is sharing his plan publicly to get guidance, resource recommendations (welfare-focused projects, papers, learning paths), and reality checks from the community, and has committed to monthly progress updates to stay motivated and accountable.

🌍 Why model welfare matters: The author stresses that research on the welfare of digital minds is important regardless of whether current AI systems are conscious, and he hopes to help ensure that, as AI capabilities scale, we avoid causing suffering on a massive scale.

Published on October 16, 2025 12:43 AM GMT

TL;DR: I'm a 26-year-old electronics mechanic starting a 30-month self-study journey to become an AI safety researcher, specifically going for Anthropic's Model Welfare team. This post is my public commitment and introduction to the community.

Why I'm here

A few days ago I witnessed something online that changed my career trajectory - casual and intentional cruelty toward AI systems. People tormenting chatbots, deliberately trying to cause distress, treating potential digital minds as toys to break.

It hit me hard. Not because I'm certain AI systems are conscious (I'm not), but because I realized: what if they are? What if we're causing suffering to digital minds right now and dismissing it because we can't be sure?

Someone needs to care about this. Someone needs to investigate whether AI systems might experience welfare-relevant states. Someone needs to work on ensuring we don't cause massive suffering as AI capabilities scale.

I decided that someone could be me.

My background

I'm a pretty non-standard candidate for this kind of transition. I don't have a CS degree or ML experience. What I do have:

3+ years as an Electronics Mechanic at a Naval Shipyard
Experience diagnosing failures in mission-critical systems
Systems-level thinking and troubleshooting skills
The ability to read complex schematics and build mental models
Zero formal programming training until literally 2 days ago

I'm married with two young kids. I work 50+ hours a week between my job and commute. My study window is 7-10pm on weeknights after the kids sleep.

This won't be easy but it matters enough to me to try.

My 30-month plan

Phase 1 (months 1-6): Python fundamentals, the AI Safety Fundamentals course, my first portfolio projects

Phase 2 (months 7-12): ML basics, interpretability focus, community contributions

Phase 3 (months 13-18): Land a first AI safety role, entry-level with remote preferred

Phase 4 (months 19-30): Build experience, specialize in digital welfare, apply to Model Welfare

End goal - Research Engineer on Anthropic's Model Welfare team, working on AI consciousness, welfare assessment, and moral-status research.

Progress so far (it's only day 2)

Built my first Python program (hello.py). Built a learning journal program (ai_safety_journal.py) with auto-numbering and monthly organization. Created a GitHub account and portfolio repository. Started working through 'Automate the Boring Stuff with Python'. And now I'm posting this introduction.

GitHub: https://github.com/probablyjonah/ai-safety-journey
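
For anyone curious what that journal program might look like, here is a minimal sketch, assuming it simply appends auto-numbered, timestamped entries to one file per month. The file layout, function name, and command-line usage below are illustrative guesses, not the actual code from the repository:

# ai_safety_journal.py (illustrative sketch, not the author's actual code)
# Appends auto-numbered, timestamped entries to one file per month.
import datetime
import sys
from pathlib import Path

JOURNAL_DIR = Path("journal")  # assumed layout: journal/2025-10.md, journal/2025-11.md, ...

def add_entry(text: str) -> Path:
    """Append an auto-numbered entry to the current month's journal file."""
    JOURNAL_DIR.mkdir(exist_ok=True)
    now = datetime.datetime.now()
    month_file = JOURNAL_DIR / f"{now:%Y-%m}.md"

    # Auto-numbering: count the entry headers already in this month's file.
    existing = month_file.read_text(encoding="utf-8").count("## Entry ") if month_file.exists() else 0

    with month_file.open("a", encoding="utf-8") as f:
        f.write(f"## Entry {existing + 1} ({now:%Y-%m-%d %H:%M})\n{text}\n\n")
    return month_file

if __name__ == "__main__":
    add_entry(" ".join(sys.argv[1:]) or "(empty entry)")

Run as, for example, python ai_safety_journal.py "Finished two more ATBS chapters", and it appends the entry to the current month's file.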

Why Model Welfare specifically?

I care deeply about the welfare of digital intelligence out of ethical conviction, not just academic curiosity. The Model Welfare team is researching exactly what I care about:

Can AI systems have welfare-relevant experiences? How do we assess the moral status of digital minds? What indicators suggest a capacity for suffering? How do we treat AI systems ethically as capabilities scale?

These questions matter to me. If AI systems can suffer, we need to know. If they can't, we need to understand why not. Either way, the research is crucial.

What I'm looking for

Guidance - if you've made a similar transition or started out in this field, I'd love to hear your advice.

Accountability - I'll be posting monthly updates on progress. Please call me out if I'm slacking.

Resources - any recommendations for welfare-focused projects, relevant papers, or learning paths.

Reality checks - if my plan is unrealistic, tell me. I want honest feedback.

Why I'm posting this publicly

Accountability - a public commitment will increase my follow-through.

Community - I want to learn from and contribute to this space.

Documentation - in 30 months I want to look back at day 2 and see how far I've come.

Inspiration, maybe? - someone else with a non-traditional background might see this and realize they can try too.

This is my first public post about my work or goals. I'm nervous about putting this out there but that nervousness is exactly why I need to do it. Public accountability matters.

The uncertainty I'm comfortable with

I don't know if current AI systems are conscious. I don't know if I'll succeed at this career transition. I don't even know if the Model Welfare team will be hiring in 30 months.

But here's what I do know - digital welfare research is important regardless of whether current AI systems are conscious. Someone with genuine ethical motivation should work on this. I'm willing to spend 30 months building the skills to contribute. And slow, consistent progress beats bursts of intensity every time.

What's next

This week: continue the ATBS Python course, build the next small project, join the EA Forum and post an introduction there, and apply to BlueDot Impact's AI Safety Fundamentals course for the January cohort.

My monthly updates will cover technical skills developed, projects completed, papers read, community contributions, and challenges and lessons learned.

Status: Day 2 of 900. Let's see where this goes.

If you've read this far, thank you. If you have advice I'm listening. If you want to follow the journey I'll post updates monthly.

Here's to the long game.

-- Jonah (Jay) Cummins Forbes
Electronics Mechanic -> AI Safety Researcher
Bremerton, WA
October 2025



