The Verge - Artificial Intelligence, October 6
Sora update strengthens control over AI likenesses to address deepfake concerns

Sora recently rolled out an update that lets users more finely control how and where AI-generated versions of themselves ("cameos") appear in the app. The move aims to address the misinformation risks posed by deepfake content and to respond to user concerns about a flood of AI-generated material. The update lets users bar their AI likeness from appearing in politics-related videos, prohibit it from saying certain words, or keep it out of particular scenarios. Users can also set preferences for their AI likeness, such as always wearing a specific item. While these safeguards have been welcomed, the article notes that the ability to work around AI restrictions, and the weakness of other safety features such as watermarks, mean continued improvement is needed.

At the core of the Sora update is stronger user control over AI-generated likenesses ("cameos"). Users can now proactively set restrictions, such as blocking their AI likeness from appearing in political content, preventing it from saying certain words, or even keeping it out of videos featuring specific items (such as mustard), allowing more flexible management of how their digital likeness is presented.

The update comes against the backdrop of a potential flood of AI-generated content, especially deepfakes. Sora has been described as "a TikTok for deepfakes," letting users create 10-second videos featuring AI-generated versions of themselves or others. The platform's previously loose cameo controls had raised concerns; OpenAI CEO Sam Altman, for example, appeared in a series of mocking videos.

Beyond restricting where an AI likeness can appear and what it can say, users can also add positive preferences for their likeness, such as requiring it to wear a particular accessory, like a "#1 Ketchup Fan" ball cap, in every video, giving users a more personalized and creative way to manage their image.

Although the Sora team is working to strengthen safety measures, including improving watermarking, the article notes that someone will always find a way around AI restrictions. Past experience shows that AI systems like ChatGPT and Claude have been caught offering guidance on dangerous topics. Continued monitoring and iteration on safety mechanisms therefore remain essential.

The Sora team has promised to keep "hillclimbing" on the robustness of its restrictions and to introduce additional controls in the future, so that users can better manage their digital identity and the content generated on the platform. This signals the platform's responsiveness to user concerns and its plans for future development.

A frame from a Sora 2-generated video.

Sora now lets you rein in your AI doubles, giving you more say on how and where deepfake versions of you make an appearance on the app. The update lands as OpenAI hurries to show it actually cares about its users' concerns as an all-too-predictable tsunami of AI slop threatens to take over the internet.

The new controls are part of a broader batch of weekend updates meant to stabilize Sora and manage the chaos brewing in its feed. Sora is essentially “a TikTok for deepfakes,” a place to make 10-second videos of pretty much anything, including AI-generated versions of yourself or others (voice included). OpenAI calls these virtual appearances “cameos.” Critics call them a looming misinformation disaster. 

Bill Peebles, who heads the Sora team at OpenAI, said users can now restrict how AI-generated versions of themselves can be used in the app. For example, you could prevent your AI self from appearing in videos involving politics, stop it from saying certain words, or — if you hate mustard — stop it from showing up anywhere near the hellish condiment. 

OpenAI staffer Thomas Dimson said users can also add preferences for their virtual doubles, such as, for example, making them "wear a '#1 Ketchup Fan' ball cap in every video."

The safeguards are welcome, but the history of AI-powered bots like ChatGPT and Claude offering up tips on explosives, cybercrime, or bioweapons suggests someone, somewhere will probably figure out a way around them. People have already skirted one of Sora's other safety features, a feeble watermark. Peebles said the company is also "working on" improving that.

Peebles said Sora will continue “to hillclimb on making restrictions even more robust,” and “will add new ways for you to stay in control” in the future. 

In the week since the app launched, Sora has been complicit in filling the internet with AI-generated slop. The loose cameo controls (pretty much a yes or no to groups like mutuals, people you approve, or "everyone") were a particular problem. The unwitting star of the platform, none other than OpenAI CEO Sam Altman, illustrated the danger, appearing in a variety of mocking videos that show him stealing, rapping, or even grilling a dead Pikachu.

