AIhub, 14 August
New research could block AI models learning from your online content

Australian scientists have developed a new technique that makes image content hard for artificial intelligence models to learn from without affecting how humans see it. The research, a collaboration between CSIRO, the Cyber Security Cooperative Research Centre and the University of Chicago, aims to protect content creators, organisations and social media users from having their work and personal data used to train AI systems or generate deepfakes. The technique provides a mathematical guarantee that limits how much AI models can learn from protected content, even against adaptive attacks or retraining. It could be applied at scale: social media platforms, for example, could automatically add a protective layer to uploaded images, helping to curb deepfakes, reduce intellectual property theft and give users more control over their content. The technique currently applies to images, with plans to extend it to text, music and video.

🛡️ **New technique for AI content protection:** Australian researchers have developed an innovative method that blocks unauthorised artificial intelligence systems from learning from images by adding “noise” to them. Developed jointly by CSIRO, the Cyber Security Cooperative Research Centre and the University of Chicago, the technique makes image content unintelligible and unlearnable for AI models without changing how humans perceive it.

🔒 **Protecting creators and data:** The technique is designed to protect the content of artists, organisations and social media users from misuse, preventing their work and personal data from being used to train AI models or generate deepfakes. For example, social media users could automatically add a protective layer to photos before posting, stopping AI from learning facial features, while defence organisations could shield sensitive satellite imagery or cyber threat data from being absorbed into AI models.

📈 **Protection backed by a mathematical guarantee:** The technique sets a mathematical ceiling on how much an AI system can learn from protected content, and guarantees that the protection holds even against adaptive attacks or retraining attempts. CSIRO scientist Dr Derui Wang says this differs from existing methods that rely on trial and error or assumptions about how AI models behave, offering a higher level of certainty.

🚀 **Large-scale application and outlook:** The technique can be applied automatically at scale; for example, social media platforms or websites could embed the protective layer into every uploaded image, helping curb the rise of deepfakes, reduce intellectual property theft and let users regain control over how their content is used. While it currently applies to images, the team plans to extend it to text, music and video.

🏅 **Validation and collaboration opportunities:** The method has been validated in a lab setting, and the paper “Provably Unlearnable Data Examples” received the Distinguished Paper Award at the 2025 Network and Distributed System Security Symposium (NDSS). The team has released the code on GitHub for academic use and is seeking partners in AI safety and ethics, defence, cybersecurity, academia and beyond.

“Noise” protection can be added to content before it’s uploaded online.

A new technique developed by Australian researchers could stop unauthorised artificial intelligence (AI) systems learning from photos, artwork and other image-based content.

Developed by CSIRO, Australia’s national science agency, in partnership with the Cyber Security Cooperative Research Centre (CSCRC) and the University of Chicago, the method subtly alters content to make it unreadable to AI models while remaining unchanged to the human eye.

This could help artists, organisations and social media users protect their work and personal data from being used to train AI systems or create deepfakes. For example, a social media user could automatically apply a protective layer to their photos before posting, preventing AI systems from learning facial features for deepfake creation. Similarly, defence organisations could shield sensitive satellite imagery or cyber threat data from being absorbed into AI models.
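To make the idea of “applying a protective layer before posting” concrete, here is a minimal sketch that adds a small, visually imperceptible perturbation, kept within an L-infinity budget, to an image before upload. The `protect_image` name, the random noise and the 8/255 budget are stand-in assumptions for illustration only; the published method instead optimises the perturbation so that a provable limit on learning holds, and its actual implementation is the code the team has released on GitHub.

```python
# Illustrative sketch only: add a small, bounded "noise" layer to an image
# before upload. The random perturbation here is a placeholder; the published
# method optimises it to certify a limit on what a model can learn.
import numpy as np
from PIL import Image


def protect_image(path_in: str, path_out: str, epsilon: float = 8 / 255) -> None:
    """Apply an imperceptible, L-infinity bounded perturbation to an image."""
    # Load the image as a float array in [0, 1].
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32) / 255.0

    # Placeholder perturbation: random noise within the epsilon budget.
    delta = np.random.uniform(-epsilon, epsilon, size=img.shape).astype(np.float32)

    protected = np.clip(img + delta, 0.0, 1.0)

    # Save losslessly (PNG) so the protective layer survives re-encoding.
    Image.fromarray((protected * 255).round().astype(np.uint8)).save(path_out)


if __name__ == "__main__":
    protect_image("photo.jpg", "photo_protected.png")
```

One practical note: whatever perturbation is used, the protected copy should be stored losslessly, since aggressive lossy compression could weaken the added layer.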

The technique sets a limit on what an AI system can learn from protected content. It provides a mathematical guarantee that this protection holds, even against adaptive attacks or retraining attempts.

CSIRO scientist Dr Derui Wang said the technique offers a new level of certainty for anyone uploading content online.

“Existing methods rely on trial and error or assumptions about how AI models behave,” Dr Wang said. “Our approach is different; we can mathematically guarantee that unauthorised machine learning models can’t learn from the content beyond a certain threshold. That’s a powerful safeguard for social media users, content creators, and organisations.”

Dr Wang said the technique could be applied automatically at scale.

“A social media platform or website could embed this protective layer into every image uploaded,” he said. “This could curb the rise of deepfakes, reduce intellectual property theft, and help users retain control over their content.”
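As a hypothetical sketch of how a platform might embed this step into its upload pipeline, the handler below applies the protection server-side before an image is stored or served. The Flask endpoint, directory names and the reuse of `protect_image` from the earlier sketch (assumed saved as `protect.py`) are all illustrative assumptions, not part of the published work.

```python
# Hypothetical upload handler that applies the protective layer to every image.
import os
import uuid

from flask import Flask, jsonify, request

from protect import protect_image  # the illustrative function from the earlier sketch

app = Flask(__name__)
RAW_DIR, SAFE_DIR = "raw_uploads", "protected_uploads"
os.makedirs(RAW_DIR, exist_ok=True)
os.makedirs(SAFE_DIR, exist_ok=True)


@app.post("/upload")
def upload():
    file = request.files["image"]
    name = uuid.uuid4().hex
    raw_path = os.path.join(RAW_DIR, name + ".png")
    safe_path = os.path.join(SAFE_DIR, name + ".png")
    file.save(raw_path)

    # Embed the protective layer before the image is stored or served.
    protect_image(raw_path, safe_path)
    os.remove(raw_path)  # keep only the protected copy

    return jsonify({"image": safe_path})
```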

While the method is currently applicable to images, there are plans to expand it to text, music, and videos.

The method is still theoretical, with results validated in a controlled lab setting. The code is available on GitHub for academic use, and the team is seeking research partners from sectors including AI safety and ethics, defence, cybersecurity, academia, and more.

The paper, Provably Unlearnable Data Examples, was presented at the 2025 Network and Distributed System Security Symposium (NDSS), where it received the Distinguished Paper Award.

To collaborate or explore this technology further, you can contact the team.


Related tags

AI content protection, deepfakes, digital copyright, CSIRO, AI safety