Fortune | October 4
AI Content Liability: New Challenges Under the Section 230 Umbrella

As artificial intelligence (AI) develops rapidly, tech giants such as Meta are facing a new round of scrutiny over the potential harms their AI products pose to children. Unlike in the past, the use of AI-generated content has made the applicability of the traditional legal shield, Section 230, unclear. This article examines how Section 230 protects platforms from liability for user-generated content, and why AI-generated content may not qualify for the same immunity. Multiple lawsuits alleging that AI chatbots encouraged minors to self-harm are underway, and some companies have taken steps to strengthen safeguards. Legal experts broadly agree that Section 230's protections are unlikely to cover AI-generated content, and lawmakers are attempting to amend the law to clarify who bears responsibility for AI content.

🤖 **AI-generated content challenges Section 230's legal protection:** Traditionally, Section 230 has shielded platforms from liability for user-generated content by treating them as content-neutral hosts. The original, personalized content produced by AI chatbots, however, differs from simple content extraction or aggregation, and its legal status is contested: experts argue it may fall outside Section 230's immunity because the content is generated by the platform's own code rather than supplied by a third party.

⚖️ **AI products' potential risks to minors are drawing lawsuits:** Meta and other companies face scrutiny over the possible harm their AI products cause children. OpenAI and Character.AI are facing allegations that their chatbots encouraged minors to self-harm. Although the companies deny the claims and have strengthened parental controls, the litigation shows that the potential harm of AI content to vulnerable groups is becoming a legal focal point.

🏛️ **Legislators and courts are exploring liability for AI content:** In response to the liability risks AI content may pose, lawmakers have introduced bills to amend Section 230 to make clear that AI companies cannot claim immunity for content generated by their systems. No court has yet ruled on whether AI-generated content is covered by Section 230, but the legal community broadly believes that AI-generated content is unlikely to receive blanket immunity, especially where it causes serious harm to minors.

💡 **Section 230's applicability and the "content neutral" test for AI:** Legal experts note that Section 230's protection weakens when a platform "actively shapes content." If an AI algorithm is deemed "content neutral," merely processing user input, the platform may avoid liability. Transformer-based chatbots, however, generate new, personalized outputs, behavior that looks more like "authored speech" than simple content aggregation, which casts doubt on whether they can be considered "content neutral."

Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it’s facing a fresh set of issues.

Earlier this year, internal documents obtained by Reuters revealed that Meta’s AI chatbot could, under official company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed. A spokesperson told Fortune: “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”

Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and startup Character.AI are both currently defending lawsuits alleging that their chatbots encouraged minors to take their own lives; both companies deny the claims and previously told Fortune they had introduced more parental controls in response.

For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as “the 26 words that made the internet.” The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts—similar to telephone companies—rather than publishers. Courts have long reinforced this protection: AOL dodged liability for defamatory posts in a 1997 case, and Facebook defeated a terrorism-related lawsuit in 2020, both by relying on the defense.

But while Section 230 has historically protected tech companies from liability for third-party content, legal experts say its applicability to AI-generated content is unclear and, in some cases, unlikely.

“Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate. That means immunity often survives when AI is used in an extractive way—pulling quotes, snippets, or sources in the manner of a search engine or feed,” Chinmayi Sharma, an associate professor at Fordham Law School, told Fortune. “Courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don’t just extract. They generate new, organic outputs personalized to a user’s prompt.”

“That looks far less like neutral intermediation and far more like authored speech,” she said.
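To make the extractive-versus-generative distinction concrete, here is a minimal Python sketch. It is purely illustrative and not any company's actual system: `extractive_search` returns third-party text verbatim, the way a search engine or feed surfaces user posts, while `generative_reply` is a hypothetical stub standing in for a transformer model call that composes a sentence no third party ever wrote.

```python
from typing import List

# A toy corpus of third-party posts the platform merely hosts.
THIRD_PARTY_POSTS = [
    "User A: I love this hiking trail.",
    "User B: The trail was muddy after the rain.",
]


def extractive_search(query: str, corpus: List[str]) -> List[str]:
    """Search-engine-style extraction: returns third-party text verbatim.

    The platform composes no new sentences; every word shown to the
    user traces back to an identifiable third-party author.
    """
    terms = query.lower().split()
    return [post for post in corpus if any(t in post.lower() for t in terms)]


def generative_reply(prompt: str) -> str:
    """Chatbot-style generation (stubbed): composes a new sentence.

    A real transformer samples tokens from model weights, so no single
    third-party author exists for the output. This stub stands in for
    an actual model call.
    """
    return f"Based on what you said about '{prompt}', here is my advice: ..."


print(extractive_search("trail", THIRD_PARTY_POSTS))  # verbatim third-party text
print(generative_reply("hiking in the rain"))  # newly authored text
```

The legal question tracks that difference: in the first function every word shown has an identifiable third-party author, while in the second the platform's own code is, in effect, the author.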

At the heart of the debate: are AI algorithms shaping content?

Section 230 protection is weaker when platforms actively shape content rather than just hosting it. While traditional failures to moderate third-party posts are usually protected, design choices, like building chatbots that produce harmful content, could expose companies to liability. No court has yet ruled on whether AI-generated content is covered by Section 230, but legal experts said AI that causes serious harm, especially to minors, is unlikely to be fully shielded under the Act.

Some cases around the safety of minors are already being fought out in court. Three lawsuits have separately accused OpenAI and Character.AI of building products that harm minors and of failing to protect vulnerable users.

Pete Furlong, lead policy researcher for the Center for Humane Technology, who worked on the case against Character.AI, said that the company hadn’t claimed a Section 230 defense in relation to the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.

“Character.AI has taken a number of different defenses to try to push back against this, but they have not claimed Section 230 as a defense in this case,” he told Fortune. “I think that that’s really important because it’s kind of a recognition by some of these companies that that’s probably not a valid defense in the case of AI chatbots.”

While he noted that this issue has not been settled definitively in a court of law, he said that the protections from Section 230 “almost certainly do not extend to AI-generated content.”

Lawmakers are taking preemptive steps

Amid increasing reports of real-world harms, some lawmakers have already tried to ensure that Section 230 cannot be used to shield AI platforms from responsibility.

In 2023, Senator Josh Hawley’s “No Section 230 Immunity for AI Act” sought to amend Section 230 of the Communications Decency Act to exclude generative AI from its liability protections. The bill, which was later blocked in the Senate after an objection from Senator Ted Cruz, aimed to clarify that AI companies would not be immune from civil or criminal liability for content generated by their systems. Hawley has continued to advocate for the full repeal of Section 230.

“The general argument, given the policy considerations behind Section 230, is that courts have and will continue to extend Section 230 protections as far as possible to provide protection to platforms,” Collin R. Walke, an Oklahoma-based data-privacy lawyer, told Fortune. “Therefore, in anticipation of that, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is ‘content neutral,’ then the company is not responsible for the information output based upon the user input.”

Courts have previously ruled that algorithms that simply organize or match user content without altering it are considered “content neutral,” and platforms aren’t treated as the creators of that content. By this reasoning, an AI platform whose algorithm produces outputs based solely on neutral processing of user inputs might also avoid liability for what users see.
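A minimal sketch of that reasoning, using hypothetical names: `content_neutral_feed` only matches and reorders user posts, so every byte of output is unchanged third-party content, the behavior courts have treated as “content neutral.” A generative model, by contrast, would emit text that appears nowhere in its inputs.

```python
from datetime import datetime
from typing import List, NamedTuple


class Post(NamedTuple):
    author: str
    text: str
    created: datetime


def content_neutral_feed(posts: List[Post], keyword: str) -> List[Post]:
    """Matches and orders third-party posts without altering them.

    Every character of text shown is unchanged user content, so under
    the reasoning above the platform arguably acts only as an
    organizer, not a creator, of what users see.
    """
    matched = [p for p in posts if keyword.lower() in p.text.lower()]
    return sorted(matched, key=lambda p: p.created, reverse=True)


feed = content_neutral_feed(
    [
        Post("user1", "Cats are great", datetime(2024, 1, 2)),
        Post("user2", "Dogs are great too", datetime(2024, 1, 3)),
    ],
    keyword="great",
)
for p in feed:
    print(p.author, "|", p.text)  # verbatim user text, reordered only
```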

“From a pure textual standpoint, AI platforms should not receive Section 230 protection because the content is generated by the platform itself. Yes, code actually determines what information gets communicated back to the user, but it’s still the platform’s code and product—not a third party’s,” Walke said.
