Regulating AI Chatbots: Seeking a Balance Between Safety and Privacy

The United States Senate held a hearing on the harms of AI chatbots, prompting a broader discussion of AI governance. The article argues that while AI's potential harms deserve attention, policymakers should avoid heavy-handed measures such as mass surveillance or universal client-side scanning. The author stresses that technology companies can protect teens through better design while respecting the privacy and autonomy of all users, notes that negative uses make up a very small share of real-world chatbot use, and calls for more technical experts and affected communities to be included in policymaking. The article also frames privacy itself as a safety control, advocates end-to-end encryption as the key protection for private conversations, and explores how privacy-preserving techniques and design choices, rather than content censorship, can reduce risk.

🔒 **Privacy and safety go together:** The article stresses that privacy is itself a safety control. Private chat logs should be protected by end-to-end encryption by default; if law enforcement needs access, it should obtain it through judicial authorization rather than blanket monitoring or client-side scanning. This is consistent with the existing legal frameworks that protect phones, health records, and other sensitive information.

⚖️ **Technical design over mandatory screening:** The author argues that chatbot risks should be managed through technical design rather than mass identity verification or content filtering, for example by refining model design, limiting features that create predictable risk, and providing device-level parental controls that balance teen protection with adult freedom.

💡 **Inclusive expertise and user perspectives:** The hearing was one-sided and did not invite technical experts or communities affected by AI (such as autistic people) to speak. The article calls for more diverse perspectives in policymaking so that solutions are effective and fair, rather than letting a focus on a few extreme cases burden the ordinary use of the vast majority of users.

🌐 **Responsible AI design principles:** The article advocates a "privacy first" design philosophy: build AI systems so that user data is private by default, then layer safety features on top. Researching and applying privacy-preserving machine learning techniques such as differential privacy and federated learning is key to achieving this.

🗣️ **Protecting free expression and anonymity:** The article notes that any content policy conditioned on identity verification or on inspecting private messages risks infringing lawful expression and the right to anonymity. Teens should be kept safe without curtailing adults' right to think and communicate freely.

Published on September 29, 2025 2:18 PM GMT

Crosspost from Hugging Face blog. Read the original here

On September 16, the United States Senate held a hearing called “Examining the Harm of AI Chatbots”. It took a real and urgent problem that calls for meaningful AI governance and framed it in a way that will make real solutions harder. The stories mentioned in the hearing were devastating, and they deserve genuine attention and empathy. Parents who lost children to alleged harms from AI products deserve action that actually reduces risk, not a new architecture of mass surveillance dressed up as safety. I believe it’s fully possible for tech companies to protect teens and respect the dignity and autonomy of everyone else at the same time. We do not need blanket age verification, client-side scanning, or the normalization of companies and governments reading private chats by default. The hearing moved the public conversation toward those heavy-handed ideas, and in my opinion that is a grievous mistake, one that is anathema to the values of a democratic society.

Let us start with the facts the public did not hear in much detail at this month’s hearing. OpenAI has announced that it is building an age-prediction system and a more restrictive ChatGPT experience for users under 18. The new system ostensibly includes additional parental controls, such as blackout hours and limits on sexual content, along with heightened crisis protocols. Those changes were announced on September 16, 2025, the same day as the hearing, and they represent a major shift. They aim to separate minors from adults by default and to apply stronger safeguards to teens, while leaving adults with broader freedom. More information about these changes was announced on September 29. At face value, OpenAI’s solution seems like a sensible response to the alleged harms. However, if we are going to debate policy, we should start from what platforms are actually doing, since blunt new mandates can collide with or duplicate these emerging controls, as Sam Altman himself said in his statement.

The conversation also needs perspective on what people actually use these systems for. OpenAI’s own large-scale usage study published this Monday found that less than 3 percent of ChatGPT use fell into therapy-adjacent or companionship-like categories. Specifically, “relationships and personal reflection” accounted for about 1.9 percent, and “games and role-play” accounted for 0.4 percent. That is an important number, because the loudest fears concentrate on a tiny fraction of actual use, which can lead lawmakers to massively over-correct in ways that burden or endanger the far larger set of benign and productive uses.

Additionally, the hearing itself was extremely one-sided in ways that matter. Senators heard from grieving parents and advocacy groups, but not from a single autistic self-advocate or a machine learning engineer who could explain what is technically feasible and what will backfire. That omission is especially glaring because many of the families who have come forward describe autistic teens who were especially vulnerable to compulsive, ruminative chat patterns and to role-play spirals that felt real. That profile should have been front and center. The witness list did not include an autistic person or a technical expert who has shipped or red-teamed a large model. Megan Garcia, the mother of Sewell Setzer III, mentioned Noam Shazeer several times in her testimony, and he’s a co-defendant in her lawsuit against Character Technologies and Google. It would have been very interesting to see one of the authors of the Transformer paper explain the rudimentary functions of large language models to Senators, and even apologize to Megan Garcia, the way Mark Zuckerberg did during 2024’s KOSA hearing. If the goal is real harm reduction, the right experts and affected communities must be in the room alongside bereaved families and loved ones.

Privacy is not an obstacle to safety; in fact, privacy is a safety control. The United States already recognizes that certain communications deserve the strongest protection the law can give. The Supreme Court has held that police may not search a phone without a warrant, and that obtaining location histories from carriers is a search that usually requires a warrant. Riley v. California and other rulings like it exist for a reason. Phones and chat histories contain the most intimate details of a life. In my opinion, stare decisis on privacy does not change because a conversation happens to involve an AI chatbot. If someone is suspected of a crime, get a judge to approve a warrant and seize the device or the account; America has a Fourth Amendment for a reason. Preemptive monitoring, broad data retention, and client-side scanning invert that logic and make everyone less safe, especially when a company adopts them preemptively rather than being coerced into them by legislation.

The First Amendment matters here too. The Court has made clear that speech on the internet enjoys full First Amendment protection and that anonymous speech has a long and protected tradition in the United States and in other democratic countries. Any policy that conditions access to general-purpose information tools on proof of identity, or that pressure-tests “suspicious” speech in private messages, is a policy that burdens lawful expression and anonymity. Teens deserve special care, but that care should not strip adults of the right to read, think, and speak without showing papers.

That brings me to the core of what should change. We should set a bright-line rule for private messaging and personal chat logs. End-to-end encryption should be the default, and scanning should not live on a user’s device. If law enforcement needs access, it should use targeted warrants and existing forensic tools that work on seized devices. That is how we already handle the most sensitive data in other contexts, including privileged communications and health information. HIPAA is not perfect, but it encodes the idea that medical records are private by default with limited, lawful exceptions. The Kovel doctrine extends attorney-client confidentiality to necessary third-party consultants. We should treat all LLM chats in a similar spirit, for everyone. You do not have to weaken everyone’s locks to catch a burglar. You investigate the burglar instead.
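To make that bright line concrete, here is a minimal sketch, assuming the PyNaCl library and leaving key distribution and authentication aside (real messengers handle those with protocols like Signal’s), of what end-to-end encrypting a single chat message looks like. The point is architectural: the relay server only ever handles ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Assumes `pip install pynacl`; key exchange and authentication are out of scope.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"This conversation stays between us.")

# The server only relays `ciphertext`; it cannot read the plaintext.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"This conversation stays between us."
```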

Technically, this is possible. The field of privacy-preserving machine learning is not science fiction. CrypTen, an open-source framework from Meta, shows how secure multi-party computation and related techniques can support model training or inference without exposing plaintext data. The field also includes work on differential privacy, federated learning, secure enclaves, and homomorphic encryption. Additionally, Anthropic published a paper in June 2025 about confidential inferencing for AI models, covering both the model’s weights and the inputs and outputs of user chats. Apple is arguably the boldest player here, with Private Cloud Compute for Apple Intelligence. None of these is a silver bullet, and many are not production-ready for everything, but they prove a simple point. We can build useful AI systems while keeping message content encrypted in transit and at rest, and while minimizing exposure during processing. Even major messaging companies are describing ways to add AI features that do not require turning off end-to-end encryption across the board. The right design is privacy first, then safety features layered on top.
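To give one concrete flavor of the techniques named above, here is a hedged sketch of the Laplace mechanism from differential privacy, using only NumPy. A platform could publish an aggregate safety statistic, say how many conversations triggered a crisis protocol in a week, with calibrated noise, so the figure is useful at the population level while bounding what it reveals about any individual. The count and epsilon below are illustrative, not drawn from any real deployment.

```python
# Minimal sketch of the Laplace mechanism for differential privacy (NumPy only).
# Goal: release an aggregate count without revealing whether any one user's
# conversation is in the data. All values are illustrative.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise scaled to sensitivity/epsilon to a numeric query result."""
    noise_scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=noise_scale)

# Example: hypothetical number of conversations that triggered a crisis protocol.
true_count = 1204      # raw count (illustrative)
sensitivity = 1.0      # one user changes the count by at most 1
epsilon = 0.5          # privacy budget: smaller means more noise, more privacy

noisy_count = laplace_mechanism(true_count, sensitivity, epsilon)
print(f"published count: {noisy_count:.0f}")
```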

If end-to-end encryption is the bright line, what should platforms do on the platform side? Start by fixing the features that create predictable risk. The most basic mitigation has already been implemented by CharacterAI and other LLM providers: every conversation reminds the user that the LLM is just that, an LLM. ChatGPT’s interface warns that it can make mistakes, and CharacterAI tells users “this is not real”. Public-interest groups and policy shops have begun to outline risk-based frameworks that scale the level of age assurance and friction by feature, not by sweeping content bans. That is the right direction.

We also need to stop pretending that mandatory age verification is either risk-free or narrow. Most proposals require either government IDs, biometrics like face scans, or behavioral profiles that are sensitive in their own right. Those programs create honeypots for identity theft, expose which sites people visit, and collapse the boundary between our offline and online lives. The risks are not hypothetical. A major identity verification provider recently exposed credentials that opened access to troves of sensitive user documents. Additionally, client-side scanning has already been recognized by security and civil liberties experts as a backdoor that can be repurposed beyond its initial scope. Once the scanning code runs on your device, there is no principled way to limit what it is used to find next. That is why Apple’s decision to drop its 2021 on-device scanning proposal and expand iCloud end-to-end encryption was a win for users. It reduced the attack surface and respected the idea that private means private.

If you watched the September 16th hearing, you likely came away believing that AI chat harms flow from a lack of identity checks and a lack of content filters, or from outright malice by Big Tech entrepreneurs. I think that framing confuses symptoms with causes. The case studies that animate the debate on chatbot harms often feature a teen who found a path into obsessive or romanticized conversation loops and a system that never slowed down, reframed, or routed out. The right fix is to rigorously test models before launch (and to grant complete safe harbor to third-party researchers who conduct adversarial research). Safety should not look like scanning everyone’s messages.

So, in light of all the information discussed, here is my practical blueprint, one that respects both safety and freedom for people of all ages.

Start by drawing a constitutional line around private chats. Make end-to-end encryption the default, full stop. If someone wants to read my messages, they should get a warrant from a judge. The Supreme Court has already told us that phones and digital records deserve that level of protection, so we do not need a new surveillance layer to reinvent probable cause.

AI companies should also regulate design, not ideas. Aim the rules at the features that drive risk: long, immersive role-play, unsolicited stranger messaging, late-night spirals, or going live. Require real risk assessments, teen-safe defaults, and published crisis protocols that are easy to audit. Give families device-level tools such as blackout hours that do not require handing a government ID to a vendor, and that keep sensitive settings in the home rather than in a corporate dashboard.

Big Tech should also invest in privacy-preserving safety research. Fund the work that lets us keep content sealed while still spotting system-level hazards: private inference, secure aggregation, formal verification. Research will not erase every tradeoff, yet it will raise the baseline and shrink the moments when privacy and safety feel like a zero-sum choice.
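As a toy illustration of what “device-level tools that keep sensitive settings in the home” could mean in practice, here is a hedged sketch of a blackout-hours check that runs entirely on the device: the schedule lives in a local file the family controls, no identity document is collected, and nothing is reported to a vendor. The file path, format, and function names are hypothetical.

```python
# Toy sketch of a device-level blackout-hours check. The schedule lives in a
# local file controlled by the family; nothing is sent to a server and no
# identity document is involved. Path, format, and names are hypothetical.
import json
from datetime import datetime, time
from pathlib import Path

# Hypothetical local config, e.g. {"start": "22:00", "end": "06:30"}
CONFIG_PATH = Path.home() / ".chat_blackout.json"

def blackout_active(now: datetime | None = None) -> bool:
    """Return True if the current local time falls inside the configured blackout window."""
    if not CONFIG_PATH.exists():
        return False  # no schedule configured, nothing to enforce
    cfg = json.loads(CONFIG_PATH.read_text())
    start = time.fromisoformat(cfg["start"])
    end = time.fromisoformat(cfg["end"])
    current = (now or datetime.now()).time()
    if start <= end:
        return start <= current < end          # window within a single day
    return current >= start or current < end   # window crosses midnight

if blackout_active():
    print("Chat is paused until the blackout window ends.")
```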

Finally, and arguably most importantly, civil society needs to be bluntly honest about what speech rights we are willing to trade to protect youth. The internet’s speech protections did not fall from the sky; they were argued into existence, defended in court, and paid for by people who risked their jobs and safety to write under a pen name. Anonymous and adult access to lawful content are still rights. Any rule that turns a passport into the price of entry for a conversation, or that rummages through private messages before they are sent, demands the most exacting justification. That bar has not been met. Say it plainly. No doxxing. No swatting. No quiet handoffs of private chat logs to anyone. Encryption is encryption is encryption.

If we teach a generation that the lock on their diary clicks open for companies or the state on a whim, we will not earn their trust back when the next crisis arrives. The old rule of justice applies here as well. It is better to accept that a few bad actors may slip through than to build a system that treats every innocent person like a suspect. We accept a harder job for investigators so that the rest of us can think, read, and speak without fear, and I think that’s a worthy sacrifice.


