Fortune | FORTUNE · two days ago, 21:28
AI Video Creation Will Reshape the Internet Content Ecosystem


Five years ago, I predicted that 95% of internet content would be AI-generated by 2025. It’s hard to measure where we are exactly, but we’ll get close to that in the coming years.

This week, we integrated OpenAI’s Sora 2 into Synthesia. Businesses can now drop cinematic b-roll into the AI avatar videos they already make on our platform. As I’ve been playing around with OpenAI’s new model, I feel more confident than ever that the next two years of the internet will be weird, creative, sloppy, and brilliant, all at once. It’ll force a reset on how brands, employees, and customers think about what makes a video good.

We’ve seen this before. When large language models went mainstream, the early reaction was euphoria, quickly followed by anxiety that the web would drown in auto-generated text. The same pattern is repeating in video, only faster and louder. Right now, we’re in the mass experimentation phase. People are pushing the tech to its limits. Next comes the product phase where businesses turn novelty into workflows that actually matter.

With powerful video models now available to anyone with a browser, the internet will soon flood with high-volume, high-intensity clips engineered for shock, irony, or dopamine. It’ll be entertaining—and exhausting—to tell what’s filmed and what’s synthesized.

But once the novelty fades, no one will care how a video was made. Every media shift goes through this cycle. Painters debated cameras. Print debated digital. Provenance debates burn hot, then cool, as audiences standardize on value. Was it worth my time? Did I learn something? Did it help me decide faster? We’ll stop caring if a video came from an iPhone or a GPU, except for select formats like news or sports.

Until then, things will get very weird, very fast. As production costs collapse, we’ll see an explosion of AI-generated “watchbait”: short, disposable clips optimized for the swipe. Platforms will rush to release AI-native creative tools, give them away, fill feeds, and monetize the engagement loop with ads.

Expect synthetic news broadcasts, infinite “live” shows, personalized storylines, and cinematic worlds that never existed. To navigate that, we’ll need better media literacy. Authenticity rails like watermarking that survives edits, cryptographic provenance, and clear platform-level disclosures.
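To make the "cryptographic provenance" idea concrete, here is a minimal sketch of how a platform might attest where a clip came from by signing a hash of its bytes. This is a hypothetical illustration, not any real standard's API: production provenance systems (C2PA, for example) use certificate-based signatures and manifests embedded in the file, whereas this sketch uses HMAC with a shared key purely to stay self-contained.

```python
import hashlib
import hmac

# Stand-in for a real signing key held by the attesting platform.
PLATFORM_KEY = b"example-signing-key"

def attest(video_bytes: bytes, origin: str) -> dict:
    """Produce a provenance record binding a content hash to a declared origin."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    payload = f"{digest}|{origin}".encode()
    signature = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return {"sha256": digest, "origin": origin, "signature": signature}

def verify(video_bytes: bytes, record: dict) -> bool:
    """Check that the bytes and declared origin still match the signed record."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    payload = f"{digest}|{record['origin']}".encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(
        expected, record["signature"]
    )

clip = b"\x00\x01fake-video-bytes"
record = attest(clip, origin="synthetic:text-to-video")
print(verify(clip, record))            # True: untouched clip verifies
print(verify(clip + b"edit", record))  # False: any edit breaks the attestation
```

Note the limitation the sketch exposes: a plain hash-based attestation fails the moment the file is re-encoded or trimmed, which is exactly why the article calls for watermarking that survives edits alongside cryptographic provenance.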

In the enterprise world, the impact of Sora 2 will be slower. Companies don’t trade in eight-second AI slop; they buy measurable business outcomes. To be useful, generative video must do more than shock. It has to drive measurable results: faster creation, lower costs, higher engagement. That means integrating models into full workflows—editing, guardrails, translation, versioning, collaboration, distribution, and analytics.

We’re already seeing it. One of the world’s largest food and beverage companies used this new integration to update their safety training videos across facilities. Instead of a standard AI avatar listing hygiene rules, they deployed a walking, talking chocolate bar delivering the same message. It won’t win Cannes, but it gets attention and makes the message stick.

So, let’s be clear: despite the froth, AI audio and video are here to stay. What sounded like science fiction in 2021 is now shipping in product updates. ChatGPT’s voice mode has redefined human-computer interaction. Younger generations are relying more and more on social video and AI searches for entertainment and information, rather than the open web, causing an 8% drop in Wikipedia traffic year-over-year. In the video realm, we’ve crossed the “good enough” threshold for most business cases, and “great” is in sight.

As video production costs approach those of text, the new competitive unit is clarity, creativity and trust. Our Sora 2 integration moves in that direction—not just a fun demo, but a tool to help teams tell better stories, faster.

The frenzy will pass. The flood will recede. What will remain is the same question every creator has asked for a century: What’s the best way to say what we need to say?

More often than not, the answer will be AI video.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
