Fortune | FORTUNE September 18
The C2PA Content Provenance Standard: The Trade-off Between Trust and Privacy

 

The content provenance standard C2PA and its Content Credentials feature, now supported on Google's Pixel 10 phones, have recently drawn attention. The standard aims to build trust in online information by attaching digital labels that trace a piece of media's creator, production process, and any AI involvement. However, a new report argues that while C2PA promotes trust, it may also threaten user privacy. The report contends that C2PA is widely misunderstood: it is not designed to detect deepfakes or copyright infringement. Instead, it builds a new layer of media infrastructure that can generate large amounts of shareable data about creators and link to commercial, government, and even biometric identity systems. C2PA's openness also brings manipulation risks and the potential exclusion of small creators and marginalized voices, raising broad questions about who is trusted and who decides.

✨ **The C2PA standard aims to make digital content more trustworthy**: Through "Content Credentials," it attaches hard-to-tamper metadata labels to photos, videos, and audio, recording in detail a piece of content's creator, production process, and whether AI was involved, helping users judge the authenticity of information and build trust in online content.

⚠️ **The trust mechanism may raise privacy concerns**: Although C2PA is meant to strengthen trust, a report notes that the standard is easily misunderstood and is not a universal deepfake detector. Its openness could generate large amounts of shareable data about creators, potentially linked to commercial, government, and even biometric identity systems, posing risks to user privacy.

⚖️ **Potential bias and manipulation risks in "trust lists"**: C2PA relies on "trust lists" and a compliance program to verify participants. Small outlets, independent journalists, and individual creators who fail to make the list risk having their work dismissed. Moreover, the framework's openness means bad actors can attach credentials in misleading ways, increasing the potential for information manipulation.

🌐 **Responsibilities and risks for businesses and consumers**: Businesses adopting C2PA need to treat it as part of privacy and data governance, with policies for how the data is collected, shared, and secured. Consumers face possible identity exposure, since C2PA metadata can include timestamps, geolocation, editing details, and even connections to identity systems, over which they have little control or awareness.

Last week, Google said its new Pixel 10 phones will ship with a feature aimed at one of the biggest questions of the AI era: Can you trust what you see? The devices now support the Coalition for Content Provenance and Authenticity (C2PA), a standard backed by Google and other heavyweights like Adobe, Microsoft, Amazon, OpenAI and Meta. At its core is something called Content Credentials—essentially a digital nutrition label for photos, videos, or audio. The metadata tag, which can’t easily be tampered with, shows who created a piece of media, how it was made, and whether AI played a role. 
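The "nutrition label" idea can be sketched in a few lines. The Python below is an illustrative model only, not the real C2PA format (actual manifests are signed binary structures carried in JUMBF boxes with X.509 certificate chains); the field names are hypothetical stand-ins for the kinds of assertions Content Credentials record.

```python
import hashlib
import json

def make_content_credential(media_bytes: bytes, creator: str,
                            tool: str, ai_used: bool) -> dict:
    """Build a simplified, illustrative content-credential record."""
    return {
        "claim_generator": tool,   # software that produced the claim
        "author": creator,         # who created the media
        "ai_generated": ai_used,   # whether AI played a role
        # A hash binds the credential to the exact media bytes,
        # so any later edit to the file is detectable.
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }

photo = b"\x89PNG...raw image bytes..."
cred = make_content_credential(photo, "Jane Doe", "Pixel 10 Camera", ai_used=False)
print(json.dumps(cred, indent=2))

# Editing the media breaks the binding:
assert hashlib.sha256(photo + b"edit").hexdigest() != cred["content_hash"]
```

The hash binding is what makes the label hard to tamper with: the credential travels with a fingerprint of the exact bytes it describes.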

Over a year ago, I reported that TikTok would automatically label all realistic AI-generated content created using TikTok Tools with Content Credentials. And the standard actually predates the current generative AI boom: the C2PA was founded in February 2021 by a group of technology and media companies to create an open, interoperable standard for digital content provenance (the origin and history of a piece of content) to build trust in online information.

But a new report from the World Privacy Forum, a data-privacy nonprofit, warns that this growing push for trust could put privacy on the line. The group argues C2PA is widely misunderstood: it doesn’t detect deepfakes or flag potential copyright infringement. Instead, it’s quietly laying down a new technical layer of media infrastructure—one that generates vast amounts of shareable data about creators and can link to commercial, government, or even biometric identity systems.

Because C2PA is an open framework, its metadata is designed to be replicated, ingested, and analyzed across platforms. That raises thorny questions: Who decides what counts as "trustworthy"? For example, C2PA relies on developing "trust lists" and a compliance program to verify participants. But if small media outlets, indie journalists, or independent creators don't make the list, their work could be penalized or dismissed. In theory, any creator can attach credentials to their work and apply to C2PA to become a trusted entity. But to get full "trusted status," the creator often needs a recognized certificate authority, must meet criteria that are not fully public, and must navigate verification. According to the report, this risks sidelining marginalized voices, even as policymakers — including a New York state lawmaker — push for "critical mass" adoption.
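The two-tier outcome described above can be made concrete with a small sketch. The trust list and signer names below are hypothetical; the point is that a credential can verify correctly and still come from an entity outside the approved list.

```python
# Hypothetical trust list of approved signers; real C2PA trust lists are
# built from certificate authorities and a compliance program.
TRUST_LIST = {"Adobe", "Google", "BBC News"}

def trust_status(signer: str, signature_valid: bool) -> str:
    """Separate the two questions a verifier asks: is the signature
    cryptographically valid, and is the signer on the trust list?"""
    if not signature_valid:
        return "invalid"
    return "trusted" if signer in TRUST_LIST else "valid-but-unrecognized"

print(trust_status("Google", True))             # trusted
print(trust_status("Indie Journalist", True))   # valid-but-unrecognized
print(trust_status("Indie Journalist", False))  # invalid
```

The "valid-but-unrecognized" bucket is the scenario the report worries about: the cryptography checks out, but platforms may still downgrade the work because the signer isn't on the list.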

But inclusion on these "trust lists" isn't the only concern. The report also warns that C2PA's openness cuts the other way: the framework can be too easy to manipulate, since so much depends on the discretion of whoever attaches the credentials—and there's little to stop bad actors from applying them in misleading ways.

“A lot of people think, oh, this is a content labeling system, they’re not necessarily cognizant of all of the layers of identifiable information that might be baked in here,” said Kate Kaye, deputy director of the World Privacy Forum and co-author of the report. She emphasized that C2PA isn’t just a simple label on a piece of media — it creates a stream of data that can be ingested, stored, and linked to identity information across countless systems.

All of this matters for both corporate entities and consumers. For example, Kaye stressed that businesses might not realize that C2PA falls into privacy and data governance and requires policies around how it’s collected, shared, and secured. Also, researchers have already shown it’s possible to cryptographically sign forged images. So while companies may embrace C2PA to gain credibility — they also assume new obligations, potential liabilities, and dependence on a trust system controlled by Big Tech players.
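The signed-forgery point is worth making precise. In the sketch below, HMAC stands in for the X.509 signatures real C2PA uses, and the key is hypothetical; verification succeeds on a fabricated image because a signature only attests to who signed the bytes and that they haven't changed since signing, not that the content depicts reality.

```python
import hashlib
import hmac

SIGNING_KEY = b"attacker-held-key"  # hypothetical key material

def sign(media: bytes) -> str:
    # Compute a MAC over the media bytes.
    return hmac.new(SIGNING_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, signature: str) -> bool:
    # Constant-time comparison against a freshly computed MAC.
    return hmac.compare_digest(signature, sign(media))

forged = b"composited image of an event that never happened"
sig = sign(forged)

# Verification passes: the math proves provenance of the bytes,
# not the truthfulness of the picture.
assert verify(forged, sig)
```

This is why the report stresses that C2PA is a provenance layer, not a truth detector.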

For consumers, there are definitely privacy and identity exposure issues. C2PA metadata can include timestamps, geolocation, details on editing, and even connections to identity systems (including government IDs), but consumers may have little control or awareness that this is being captured. It’s technically opt-in—but if you don’t opt in, your content could be marked less trustworthy. And in the case of TikTok, for example, users are automatically opted in (other platforms like Meta and Adobe are adopting C2PA, but generally as opt-in for creators).
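One mitigation the privacy concerns suggest is redaction before sharing. The manifest and field names below are hypothetical, but they mirror the categories of sensitive metadata listed above (timestamps, location, identity links):

```python
SENSITIVE_FIELDS = {"timestamp", "gps_latitude", "gps_longitude", "identity_ref"}

def redact(manifest: dict) -> dict:
    """Return a copy of a credential manifest with privacy-sensitive
    fields stripped out before the record is shared downstream."""
    return {k: v for k, v in manifest.items() if k not in SENSITIVE_FIELDS}

manifest = {
    "author": "Jane Doe",
    "ai_generated": False,
    "timestamp": "2025-09-18T10:00:00Z",
    "gps_latitude": 37.77,
    "gps_longitude": -122.42,
    "identity_ref": "gov-id://...",  # hypothetical link into an identity system
}
public = redact(manifest)
print(public)  # only "author" and "ai_generated" survive
```

Whether platforms offer consumers this kind of control over their own credentials is exactly the governance question the report raises.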

Overall, there are a lot of power dynamics at play, Kaye said. “Who is trusted and who isn’t and who decides – that’s a big, open-ended thing right now.” But the burden to figure it out isn’t on consumers, she emphasized: Instead, it’s on businesses and organizations to think carefully about how they implement C2PA, with appropriate risk assessments.

With that, here’s the rest of the AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Exclusive: Former Google DeepMind researchers secure $5 million seed round for new company to bring algorithm-designing AI to the masses – by Jeremy Kahn

Big Tech companies pledge $42 billion in U.K. investments as U.S. President Donald Trump begins state visit – by Beatrice Nolan

Nvidia shares drop, China tech surges as Beijing tries to push homegrown AI chips – by Nicholas Gordon

Why OpenAI’s $300 billion deal with Oracle has set the ‘AI bubble’ alarm bells ringing – by Beatrice Nolan

AI IN THE NEWS

Nvidia and Intel announce sweeping partnership to co-develop AI infrastructure and personal computing products. The deal, which includes Nvidia taking a $5 billion stake in Intel, brings together two longtime rivals at a moment when demand for AI computing is exploding. “This historic collaboration tightly couples NVIDIA’s AI and accelerated computing stack with Intel’s CPUs and the vast x86 ecosystem — a fusion of two world-class platforms,” Nvidia CEO Jensen Huang said. “Together, we will expand our ecosystems and lay the foundation for the next era of computing.”

Meta raises its bets on smart glasses with an AI assistant. According to the New York Times, Meta is doubling down on smart glasses after selling millions since their debut four years ago. At its annual developer conference this week, the company unveiled three new models — including the $799 Meta Ray-Ban Display, which features a tiny screen in the lens, app controls via a wristband, and a built-in AI voice assistant. Meta also introduced an upgraded Ray-Ban model and a sport version made with Oakley. But the rollout wasn’t flawless: onstage, Mark Zuckerberg’s demo faltered when the glasses failed to deliver a recipe and place a call.

China's DeepSeek says its hit model cost just $294,000 to train. Reuters reported today that Chinese AI startup DeepSeek is back in the spotlight after months of relative quiet, with new details on how it trained its reasoning-focused R1 model. A recent Nature article co-authored by founder Liang Wenfeng revealed the system cost just $294,000 to train using 512 of Nvidia’s China-only H800 chips — a striking contrast with U.S. firms like OpenAI, whose training runs cost well over $100 million. But questions remain: U.S. officials said that DeepSeek has had access to large volumes of restricted H100 chips, despite export controls, and the company has now formally acknowledged it also used older A100s in early development. The revelations may reignite debate over AI "scaling laws" and whether massive clusters of the most advanced AI chips are really necessary to train cutting-edge AI models. It also highlights ongoing geopolitical tensions over access to Nvidia's chips.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco. Apply to attend here.

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

50% 

Half of Americans are now more worried than excited about AI’s growing role in daily life — up from just 37% in 2021, according to a new Pew Research study. Only 10% say they’re more excited than concerned, while 38% feel both equally.

A majority say they want more control over how AI shows up in their lives. Larger shares believe AI will erode — not enhance — people’s creativity and relationships. Still, many are fine with AI lending a hand on everyday tasks.

Americans draw a clear line: most reject AI in personal domains like religion or matchmaking, but are more open to its use in data-heavy fields like weather forecasting or medical research. And while most say it’s important to know whether images, video or text come from AI or humans, many admit they can’t reliably tell the difference.
