Communications of the ACM - Artificial Intelligence · September 18
AI-Generated Content Is Becoming Indistinguishable from the Real Thing; Technology and Education Must Meet the Challenge Together

With the arrival of powerful AI video generation models such as Google's Veo 3, hyper-realistic AI content can now be produced quickly and at very low cost, and is difficult to distinguish from real content. This has raised concerns about a flood of misinformation. The industry is responding with digital watermarking (such as Google's SynthID and the C2PA coalition) to identify AI-generated content, but widespread, consistent adoption of watermarks remains a challenge. Social media platforms such as Instagram, YouTube, and TikTok have begun rolling out content labels and disclosure requirements, though enforcement is uneven. Experts argue that improving users' AI literacy and critical thinking is essential: even with technical safeguards in place, open-source AI models can be abused to produce fakes that are hard to detect, and users are not prepared for that.

🚀 **AI-generated content is closing in on reality, and telling real from fake has become a serious challenge**: With advanced models such as Google's Veo 3, generating hyper-realistic video, audio, and dialogue has become dramatically cheaper and faster. An AI-made ad that took just three days and $400 held its own against traditional ads that cost enormous sums and took months to produce. AI-generated content has reached a level of realism that ordinary users cannot reliably distinguish from reality, posing a fundamental threat to authenticity in the digital world.

💧 **Digital watermarking is the first line of defense, but adoption is the key**: Digital watermarking is the main hope for identifying AI content. Google's SynthID embeds watermarks invisible to the human eye into AI-generated content and has already been applied to content at the ten-billion scale. OpenAI, Microsoft, and others participate in the C2PA coalition, which aims to build an open standard for content provenance. These techniques show real promise, but their effectiveness depends on large-scale, consistent application, and that goal remains far off.

📢 **Platform moderation and user literacy are both in play, but with limited effect**: Platforms such as Instagram, YouTube, and TikTok have taken steps like adding "Made with AI" labels or requiring creators to disclose AI content, but enforcement standards vary across platforms, blunting the effect. Experts note that as AI fakery grows more sophisticated and harder to spot, users must strengthen their critical thinking and source-verification skills. AI literacy education is seen as an important way to sharpen users' judgment, but open-source AI models let malicious users evade watermarks and platform rules and produce fakes that are hard to trace, a future for which users are unprepared.

💡 **Open-source AI models pose a lingering risk; a perfect solution remains distant**: Despite watermarking and platform moderation, open-source AI models give malicious actors a way around these controls. They can use open models that carry no watermark, or whose watermark can be removed, making it extremely difficult to separate real from fake. Experts believe current methods for detecting AI-generated content are not enough to keep ordinary users from being fooled by advanced text-to-video models; reliably distinguishing real from synthetic remains a major challenge.

Change comes at you fast.

In late May 2025, Google announced its Veo 3 video generation model at its I/O conference. Early testers were stunned by the photorealistic video, audio, and dialogue that the model produced.

Just a month later, a single director used Veo 3 to create a hyper-realistic AI-generated ad for betting platform Kalshi. The ad took three days to create and cost $400 in Veo 3 credits. It aired during the NBA Finals beside ads that took months to create and cost hundreds of thousands of dollars.

Veo 3 isn’t the first AI video generation platform. But its power has given rise to a sobering fact: in the right (or wrong) hands, generative AI tools can now produce images and videos that are indistinguishable from reality.

These tools are cheap and usable by anyone who can type a prompt into a chat window. That may lead to an unprecedented wave of AI-generated content across social media platforms and digital channels that is so realistic it’s hard to tell if it’s real or not. In fact, AI slop already has a strong presence on platforms like YouTube, Pinterest, and Instagram, as John Oliver explained on a recent episode of Last Week Tonight.

This raises the question: What, if anything, can be done about this?

A Big (Synthetic) Problem

How big of a problem is the proliferation of hyper-realistic synthetic images and video generated by AI? According to experts, it’s enormous.

“It is now almost impossible to tell, in the digital world, what is real and what is artificial,” said Paolo Giudici, a professor of statistics at Italy’s University of Pavia.

AI image and video models can now produce content for distribution on social media that most users “would not question as fake,” said Mike Perkins, a researcher at the British University Vietnam who has done work on synthetic content.

In a sense, Pandora’s box has been opened. The AI tools that generate hyper-realistic synthetic content are not going away. In fact, as Veo 3 proves, they’re getting better, faster.

So, the first, and biggest, effort to address what is digitally real and what is not starts at the tool level. And the primary way that is being tackled right now is through watermarking.

Some AI labs are using, or participating in, digital watermarking efforts to indicate that an image is AI-generated by adding data to the image file itself the moment it is generated by an AI tool. The image essentially carries a digital “badge” that indicates it is AI-generated.

One of the top lab-run watermarking initiatives is SynthID from Google. SynthID embeds machine-readable watermarks that can’t be seen by the human eye into generated content. It is now automatically added to all content produced by Google’s generative AI models, including Veo. Google is reported to have already watermarked more than 10 billion pieces of content with SynthID.
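To make the idea concrete, here is a toy sketch of how a machine-readable mark can ride along invisibly in pixel data: it hides an identifier in the least-significant bits of an image, a change the eye cannot perceive. SynthID's actual technique is proprietary and far more robust, so everything below (the `embed`/`extract` helpers, the `AI-GEN` tag) is illustrative only, not Google's method.

```python
# Toy "invisible watermark": hide an identifier in the least-significant
# bits (LSBs) of pixel values. Flipping an LSB changes a pixel by at most
# 1/255, which is imperceptible. Illustrative only; NOT SynthID's scheme.
import numpy as np

WATERMARK = "AI-GEN"  # hypothetical identifier, not a real standard

def embed(pixels: np.ndarray, message: str) -> np.ndarray:
    """Write each bit of `message` into the LSBs of successive pixels."""
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    flat = pixels.flatten()  # flatten() returns a copy
    if len(bits) > flat.size:
        raise ValueError("image too small to hold the message")
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # clear LSB, set bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_chars: int) -> str:
    """Read the LSBs back out and reassemble the hidden message."""
    bits = pixels.flatten()[: n_chars * 8] & 1
    chunks = (bits[i : i + 8] for i in range(0, bits.size, 8))
    return bytes(int("".join(map(str, c)), 2) for c in chunks).decode()

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in
marked = embed(image, WATERMARK)
assert extract(marked, len(WATERMARK)) == WATERMARK
```

A naive LSB mark like this vanishes after JPEG re-encoding, resizing, or cropping; surviving such edits is precisely the hard part that production schemes like SynthID are built to solve.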

Other labs, like OpenAI and Microsoft, participate in C2PA, the Coalition for Content Provenance and Authenticity, an open standard for watermarking AI-generated content. The C2PA initiative seeks to create a standardized way to track the origin of digital content. It allows cryptographically signed metadata to be attached to digital assets, identifying the tools that created them.
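The provenance idea can be sketched in a few lines: hash the asset, wrap the hash and the generating tool's name in a claim, and sign the claim so any tampering is detectable. The real C2PA specification uses COSE signatures and certificate chains embedded in a JUMBF container; the `make_manifest`/`verify_manifest` helpers and the model name below are hypothetical stand-ins, not the C2PA API.

```python
# Minimal sketch of the provenance concept behind C2PA: bind the generating
# tool's identity to a hash of the asset, then sign that claim.
# Hypothetical helpers; the real spec uses COSE signatures in JUMBF.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(asset: bytes, tool: str, key: Ed25519PrivateKey) -> dict:
    """Create a signed claim tying `tool` to this exact asset."""
    claim = json.dumps(
        {"generator": tool, "asset_sha256": hashlib.sha256(asset).hexdigest()}
    ).encode()
    return {"claim": claim, "signature": key.sign(claim)}

def verify_manifest(asset: bytes, manifest: dict, public_key) -> bool:
    """Check the signature, then check the asset hash still matches."""
    try:
        public_key.verify(manifest["signature"], manifest["claim"])
    except InvalidSignature:
        return False  # claim was forged or altered
    claim = json.loads(manifest["claim"])
    return claim["asset_sha256"] == hashlib.sha256(asset).hexdigest()

key = Ed25519PrivateKey.generate()
asset = b"...rendered video bytes..."  # stand-in for a generated asset
manifest = make_manifest(asset, "example-video-model", key)
assert verify_manifest(asset, manifest, key.public_key())
assert not verify_manifest(asset + b"x", manifest, key.public_key())  # tampered
```

Verification fails if either the claim or the asset bytes change, which is the property the standard is after: a reader can trust a provenance label exactly as far as it trusts the signer.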

Watermarking shows plenty of promise, said Giudici. Efforts like SynthID and C2PA are becoming more sophisticated and cost-effective.

But there’s still an obvious problem.

Watermarking works, but it requires universal, consistent application at scale to fully address the problem. And we are nowhere near that.

Policing Falls Short

To fill the gaps in watermarking’s coverage, some social media platforms are also taking steps to combat AI-generated content.

Instagram automatically attaches a “Made with AI” tag to AI-generated content flagged by its systems or by the user uploading it. YouTube requires creators to disclose when a video they upload contains AI-generated content. TikTok requires users to label AI-generated content.

But a familiar problem quickly rears its ugly head: for platform-led policing to be effective, every platform needs to do it consistently. Based on the flood of AI-generated content already available across these platforms, this is something that is decidedly not happening today.

That leaves a burden on users to address the issue. The problem, said Perkins, is that many users are unable to identify AI-generated content.

“I believe we are now at the tipping point where the signs of fakes are becoming so small that we need to rely on critical thinking of viewers rather than any visible artifacts or problems with the content,” he said. That raises the importance of verifying the source of consumed information, a skill in short supply if the proliferation of online misinformation is any indicator.

The best bet, Perkins said, may be more education at the user level. When social media users know how good AI-generated content has become, they can be better prepared to handle their consumption.

“Having some form of AI literacy is important so that users realize what is possible, and then know what to watch out for,” he said.

Then there’s the final, perhaps largest, elephant in the room.

Watermarking is inconsistently applied. Platform- and user-led policing is inconsistently applied. But even if both somehow worked perfectly, that wouldn’t eliminate hyper-realistic AI-generated content that’s designed to mislead, due in part to open-source AI.

Open-source AI allows users to run models that generate images and video locally on a robust machine, without restrictions or guardrails imposed by the model creator.

That means, said Perkins, that malicious actors can find open-source models that don’t have watermarking and use them to generate non-watermarked content. Even if an open-source model creator adds watermarking later, bad actors can simply run a version of the model—or another one—without any safeguards.

While Perkins recommends that platforms implement more robust policies and technical checks, and that users become more AI-aware and literate, at the end of the day open-source models put a perfect solution for distinguishing what’s real from what’s not out of reach.

And that’s a future for which users are not ready.

“It’s my opinion,” said Perkins, “that the current methods for detection of AI-generated image and video content are entirely insufficient to prevent the average user from being fooled by the new generation of text-to-video diffusion models.”

Logan Kugler is a freelance technology writer based in Tampa, FL. He is a regular contributor to Communications and has written for nearly 100 major publications.
