TechCrunch News · August 29
Anthropic updates its user data policy, requiring users to choose whether their conversations are used for AI training

Anthropic is making a major change to how it handles user data, requiring all Claude users to decide by September 28 whether to allow their conversations to be used for AI model training. Previously, Anthropic did not train its models on consumer chat data; it now wants to use user conversations and coding sessions to train its AI systems, and may extend data retention to five years. The policy change applies to Claude Free, Pro, and Max users (including Claude Code), while enterprise customers are unaffected. Anthropic frames the move as a matter of user choice intended to improve model safety and capabilities, but analysts see it mainly as a way to obtain high-quality training data and stay competitive. The design of the change may also lead users to consent to data sharing without realizing it, raising concerns about informed consent and privacy protection, especially as data policies across the AI industry draw growing scrutiny.

🤖 Anthropic is requiring users to choose by September 28 whether their Claude conversations can be used for AI model training, a major shift in its data policy. Previously the company did not train models on consumer chat data; it now wants to improve its AI systems using user conversations and coding sessions, and to extend data retention to five years.

💼 The policy update applies only to users of Anthropic's consumer products, namely Claude Free, Pro, and Max (as well as Claude Code). Business customers using Claude Gov, Claude for Work, Claude for Education, or the API are unaffected, mirroring how OpenAI shields its enterprise customers from data training policies.

💡 Anthropic attributes the change to user choice, saying it will help improve model safety, detect harmful content, and strengthen capabilities such as coding, analysis, and reasoning. Analysts, however, argue that the key driver is access to vast amounts of high-quality conversational data to sharpen its edge against rivals like OpenAI and Google.

⚠️ The design of the policy change has raised concerns about informed consent. New users choose their preference at signup, but existing users see a pop-up with the training permission enabled by default, which may lead them to click accept without fully understanding the terms. This echoes growing scrutiny of AI industry data policies and regulators' warnings against "surreptitiously changing terms of service."

⚖️ As the AI industry matures, shifting data policies and user privacy protection have become focal points. Companies such as Anthropic and OpenAI face scrutiny over their data retention practices, alongside user confusion about how their data is used. Amid rapid technological iteration, ensuring that users genuinely understand and consent to data-use terms is a major challenge for the industry.

Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own.

But first, what’s changing: previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it’s extending data retention to five years for those who don’t opt out.

That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer" or their input was flagged as violating its policies, in which case a user's inputs and outputs might be retained for up to two years.

By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, mirroring how OpenAI protects its enterprise customers from data training policies.

So why is this happening? In that post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Users will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”

In short, help us help you. But the full truth is probably a little less selfless.

Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.

Beyond the competitive pressures of AI development, the changes would also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.

In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

What’s alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.

In fairness, everything is moving quickly now, so as the tech changes, privacy policies are bound to change. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies’ other news. (You wouldn’t think Tuesday’s policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)

But many users don’t realize the guidelines to which they’ve agreed have changed because the design practically guarantees it. Most ChatGPT users keep clicking on “delete” toggles that aren’t technically deleting anything. Meanwhile, Anthropic’s implementation of its new policy follows a familiar pattern.

How so? New users will choose their preference during signup, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button with a much tinier toggle switch for training permissions below in smaller print – and automatically set to "On."

As observed earlier today by The Verge, the design raises concerns that users might quickly click "Accept" without noticing they're agreeing to data sharing.

Meanwhile, the stakes for user awareness couldn’t be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden Administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.”

Whether the commission — now operating with just three of its five commissioners — still has its eye on these practices today is an open question, one we’ve put directly to the FTC.
