MIT Technology Review » Artificial Intelligence · September 27, 03:27
Using AI to Identify Child Sexual Abuse Images

With the rapid advance of generative AI, production of child sexual abuse material (CSAM) has surged, posing a serious challenge to law enforcement. The US Department of Homeland Security's Cyber Crimes Center has awarded a contract to Hive AI to use the company's software to distinguish AI-generated images from material depicting real victims. The aim is to process and analyze the enormous volume of digital content with automated tools so that investigative resources can be concentrated on cases involving real victims, improving efficiency and protecting vulnerable people. Hive AI's detection algorithm is trained for general use and can recognize pixel-level patterns characteristic of AI-generated images, offering new technical support in the fight against CSAM.

📈 **AI tackles the surge in CSAM:** Advances in generative AI have driven a sharp rise in child sexual abuse material (CSAM), making investigations harder. The US Department of Homeland Security's Cyber Crimes Center is trialing Hive AI's software, which uses AI to distinguish AI-generated images from content depicting real victims.

🎯 **Focusing on real victims to optimize resources:** The flood of CSAM makes it difficult for law enforcement to judge whether images are genuine, hampering the prioritization of ongoing abuse cases. The AI image-detection tool is intended to ensure that investigative resources go to cases involving real victims, maximizing the program's impact and protecting vulnerable individuals.

💡 **Hive AI's capabilities and applications:** Hive AI's software can determine whether content was AI-generated; its general-purpose detection algorithm recognizes pixel patterns characteristic of AI-generated images. Beyond CSAM detection, the technology can also be applied to other content-moderation tasks, such as flagging violence, spam, and sexual material, and identifying celebrities.

📄 **Why the government skipped competitive bidding:** The contract was awarded to Hive AI without a competitive bidding process, based mainly on Hive AI's strong record in AI image detection. A University of Chicago study found that Hive's detection tool outperformed four other detectors at identifying AI-generated art, and the company's work with the Pentagon on deepfake identification has also earned it credibility.

Generative AI has enabled the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing.

The Department of Homeland Security’s Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco–based Hive AI for its software, which can identify whether a piece of content was AI-generated.

The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo told MIT Technology Review that he could not discuss the details of the contract, but he confirmed that it involves the use of the company's AI detection algorithms for child sexual abuse material (CSAM).

The filing quotes data from the National Center for Missing and Exploited Children that reported a 1,325% increase in incidents involving generative AI in 2024. “The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently,” the filing reads.

The first priority of child exploitation investigators is to find and stop any abuse currently happening, but the flood of AI-generated CSAM has made it difficult for investigators to know whether images depict a real victim currently at risk. A tool that could successfully flag real victims would be a massive help when investigators try to prioritize cases.

Identifying AI-generated images “ensures that investigative resources are focused on cases involving real victims, maximizing the program’s impact and safeguarding vulnerable individuals,” the filing reads.

Hive AI offers AI tools that create videos and images, as well as a range of content moderation tools that can flag violence, spam, and sexual material and even identify celebrities. In December, MIT Technology Review reported that the company was selling its deepfake-detection technology to the US military. 

For detecting CSAM, Hive offers a tool created with Thorn, a child safety nonprofit, which companies can integrate into their platforms. This tool uses a “hashing” system, which assigns unique IDs to content known by investigators to be CSAM, and blocks that material from being uploaded. This tool, and others like it, have become a standard line of defense for tech companies. 
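
To make the hashing approach concrete, here is a minimal sketch of the general idea, with every detail assumed for illustration: the article does not describe the Hive/Thorn tool's internals, and production systems use perceptual hashes that survive re-encoding and resizing, whereas this sketch uses a plain cryptographic hash only to stay self-contained.

```python
import hashlib

# Hypothetical database of IDs that investigators have assigned to
# known material. The digest below is a made-up placeholder.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def should_block_upload(file_bytes: bytes) -> bool:
    """Return True if the upload's hash matches a known flagged ID."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

A platform integrating such a tool would run this check in its upload path and reject any file that matches, which is why hash matching has become a standard first line of defense.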

But these tools simply identify a piece of content as CSAM; they don’t detect whether it was generated by AI. Hive has created a separate tool that determines whether images in general were AI-generated. Though it is not trained specifically to work on CSAM, according to Guo, it doesn’t need to be.

“There’s some underlying combination of pixels in this image that we can identify” as AI-generated, he says. “It can be generalizable.” 

This tool, Guo says, is what the Cyber Crimes Center will be using to evaluate CSAM. He adds that Hive benchmarks its detection tools for each specific use case its customers have in mind.
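
For a rough sense of what the inference step of such a generalizable detector looks like, here is a minimal sketch assuming a hypothetical pretrained binary classifier that emits one "AI-generated" logit per image; the model, preprocessing, and output shape are all assumptions, since Hive's detector is proprietary and the article does not describe its architecture.

```python
# Hypothetical sketch: scoring an image with a generic binary
# "AI-generated vs. real" classifier. Nothing here reflects Hive's
# actual, proprietary system.
import torch
from PIL import Image
from torchvision import transforms

PREPROCESS = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input size
    transforms.ToTensor(),
])

def prob_ai_generated(model: torch.nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that the image is AI-generated."""
    x = PREPROCESS(Image.open(image_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        logit = model(x)  # assumed output shape: [1, 1]
    return torch.sigmoid(logit).item()
```

In a triage pipeline of the kind the filing describes, images scoring low on this probability would be the ones most likely to depict real victims, and therefore the most urgent to escalate to human investigators.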

The National Center for Missing and Exploited Children, which participates in efforts to stop the spread of CSAM, did not respond to requests for comment on the effectiveness of such detection models in time for publication. 

In its filing, the government justifies awarding the contract to Hive without a competitive bidding process. Though parts of this justification are redacted, it primarily references two points also found in a Hive presentation slide deck. One involves a 2024 study from the University of Chicago, which found that Hive’s AI detection tool outranked four other detectors in identifying AI-generated art. The other is its contract with the Pentagon for identifying deepfakes. The trial will last three months. 
