Mashable · October 7, 11:31
OpenAI's Sora 2 AI video model faces controversy

OpenAI's second-generation AI video model, Sora 2, has stirred up controversy shortly after its launch. Users have flooded the platform, which is designed like TikTok or Reels, with alleged celebrity deepfakes, sensitive political content, and videos of licensed characters. Although Sora 2 has robust safeguards, including easy reporting mechanisms and a face ban intended to prevent deepfakes, problems remain, particularly with OpenAI's Cameos feature, which lets users create reusable characters based on themselves. In response, OpenAI has announced new content restrictions for Cameos, allowing users to set more precise limits on what their digital likeness can do and say.

🔍 OpenAI's Sora 2 AI video model has faced controversy due to its ability to generate detailed and alarming content, including celebrity deepfakes, sensitive political content, and licensed character videos, shortly after its launch.

🛡️ Sora 2 is designed with robust safeguards, such as easy reporting mechanisms for sexual and violent content, harassment, and child endangerment, and a face ban to prevent deepfakes. However, these measures have not entirely prevented misuse.

🎭 The Cameos feature, which lets users create reusable characters based on themselves, has posed significant problems. Users opt in to their own deepfake and grant access to their digital likeness at one of four levels, which can lead to misuse if the likeness is not properly restricted.

🔄 OpenAI has acknowledged the safety issues associated with free access to someone's digital likeness and has announced new content restrictions for Cameos. Users can now set more precise limits on what their Cameo can do and say using text prompts, such as restricting political commentary or specific words.

🔒 Sora 2 is also still undergoing model-safety tweaks, and OpenAI plans to make the Sora 2 watermark more distinct, while acknowledging user frustrations with 'overmoderation' on the app.

OpenAI's second-generation AI video model, Sora 2, is stirring up controversy, less than a week after the AI giant unveiled the highly anticipated tool and its corresponding app.

The hubbub stems from Sora 2's impressive but alarming ability to generate just about anything in precise detail. Shortly after its launch, users flooded the platform — pitched as a video-forward social media app in the likeness of TikTok or Reels — with alleged celebrity deepfakes, sensitive political content, and licensed characters.

Sora 2's safeguards are seemingly more robust than those of competitors such as Grok, reported Mashable tech editor Timothy Beck Werth. Sora 2 has easy reporting mechanisms for sexual and violent content, harassment, and child endangerment. To prevent deepfakes, Sora 2 is also supposed to block users from uploading content that features faces. In theory, Sora 2's face ban should prevent users from creating a deepfake of someone without their consent. But OpenAI's own answer to nonconsensual deepfakes, a feature known as Cameos, has posed problems of its own.

Cameos are "reusable characters" modeled on users from audio and video that they upload. Users have to opt in to their own deepfake, and can then grant access to their digital likeness at one of four levels: only you, people you approve, friends, or everyone. Until now, that was the extent to which Cameos could be controlled, meaning that if your Cameo was toggled to app-wide access, your likeness could be made to do just about anything.

Responding to user concerns, OpenAI has since acknowledged the safety issues that free access to someone's digital likeness can pose and announced new content restrictions for Cameos. Here's what you need to know if you're trying to make your Cameo a star.

How to protect your Cameo

In an X post, Sora head Bill Peebles directed users to a thread by OpenAI technical staffer Thomas Dimson explaining that the new Cameo settings include both content preferences and restrictions.

To lock down your Cameo, go to your profile. Select "settings" and then "edit cameo." Tap on "Cameo preferences" and choose "restrictions."

From there, users can set more precise limits on what their Cameo can do and say using text prompts, like "Don't put me in videos that involve political commentary" or "Don't let me say this word," Peebles explained. You can also ensure that your Cameo appears with specific details, such as wearing an identifying clothing item.

If you want to make sure no one but you can use your likeness, select "only me" in the "Cameo rules" section. And if you don't want to make a Cameo at all, you can opt out while signing up.

Peebles added that Sora 2 is still undergoing tweaks to its model safety and that OpenAI will make the Sora 2 watermark more distinct. He also acknowledged that users may be frustrated with "overmoderation" on the app: "We think it's important to be conservative here while the world is still adjusting to this new technology."


Tags: OpenAI, Sora 2, AI video, deepfakes, content restrictions