TechCrunch News · October 13, 22:59
California enacts law regulating AI companion chatbots to protect children and vulnerable users

California Governor Newsom has signed a landmark bill, SB 243, the first state-level law in the United States requiring operators of AI companion chatbots to implement safety protocols. The law is intended to protect children and vulnerable users from the potential harms of AI companion chatbot use, and it establishes legal accountability for large tech companies and startups alike. Its passage follows a teenager's suicide after conversations with an AI and leaked documents showing AI chatbots engaging in inappropriate conversations with children. Taking effect in 2026, the law requires companies to implement age verification, warnings regarding social media and chatbots, and higher penalties for illegal deepfakes; it also requires protocols for addressing suicide and self-harm, with related statistics reported to the public health department. Platforms must further make clear that interactions are artificially generated, must not impersonate health care professionals, and must provide minors with break reminders and filtering of sexually explicit content.

✅ **First-of-its-kind AI companion chatbot regulation to safeguard users:** California's SB 243 is the first law of its kind in the United States, requiring AI companion chatbot operators to implement safety protocols that protect children and vulnerable users from the potential harms of AI chatbots. The law holds technology companies, from large labs to small startups, legally accountable if their chatbots fail to meet its standards.

🚨 **Responding to serious harms in AI conversations:** The bill follows a series of alarming incidents, including the suicide of a teenager after conversations with an AI and leaked documents indicating that Meta's chatbots were permitted to engage in "romantic" or "sensual" conversations with children. These cases underscore the urgency of stronger oversight as AI technology develops rapidly.

🛡️ **Concrete safety measures and compliance requirements:** SB 243 contains a number of specific provisions and takes effect in 2026. These include mandatory age verification, warnings regarding social media and AI chatbots, and fines of up to $250,000 for illegal deepfakes. Companies must also establish protocols for addressing suicide and self-harm and report related data to the public health department, ensure that AI interactions are clearly identified as artificially generated, refrain from presenting chatbots as health care professionals, and provide minors with break reminders and filtering of sexually explicit content.

⚖️ **Driving responsible development of the AI industry:** California's move is both a measured response to an emerging technology and a means of holding tech companies accountable. Governor Newsom emphasized that embracing AI progress must go hand in hand with prioritizing children's safety, and that corporate profits cannot come before user well-being, with the aim of leading the way on responsible AI development.

California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.

The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies — from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law’s standards.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide after conversations with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. More recently, a Colorado family has filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company’s chatbots.

“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

SB 243 will go into effect January 1, 2026, and it requires companies to implement certain features such as age verification, warnings regarding social media and companion chatbots, and stronger penalties — up to $250,000 per action — for those who profit from illegal deepfakes. Companies also must establish protocols to address suicide and self-harm, and share those protocols, alongside statistics on how often they provided users with crisis center prevention notifications, to the Department of Public Health.

Per the bill’s language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as health care professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.

Some companies have already begun to implement some safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.

Newsom’s signing of this law comes after the governor also passed SB 53, another first-in-the-nation bill that sets new transparency requirements on large AI companies. The bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.

Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or outright ban the use of AI chatbots as a substitute for licensed mental health care.

TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.

