Fortune | FORTUNE | Nov. 7, 02:07
Cybersecurity Challenges and Responses in the AI Era

As artificial intelligence advances at breakneck speed, the cybersecurity field faces unprecedented opportunities and challenges. Enterprises are rushing to embrace AI tools while worrying about data leaks and new security risks. Chief information security officers (CISOs) are caught in a dilemma: hold back AI adoption to preserve security, or allow it and risk exposing data. Most AI tools today are not yet fully autonomous, but autonomous AI agents are expected to be deployed at scale within a few years, and without effective safeguards the consequences could be severe. The article argues that, amid the AI wave, enterprises urgently need to strengthen their security defenses to meet an increasingly serious threat landscape.

🛡️ AI-driven innovation and the security dilemma: As enterprises adopt AI to drive innovation and efficiency, they face unprecedented security challenges. Employees may use public AI tools (such as ChatGPT and Gemini) without authorization and feed sensitive or regulated data into them, leaving chief information security officers (CISOs) caught between two imperatives: pushing innovation forward and guarding against data leaks. Embracing AI therefore requires building far more robust security policies.

📉 Lagging security controls: The traditional controls enterprises use to manage data access and risk are falling short against the new threats AI introduces. The diversity and rapid iteration of AI tools make it extremely difficult to track where sensitive information lives, who can access it, and how AI tools might expose it. Companies in less regulated industries face particular pressure: unlike large tech firms or regulated entities, they often lack the standing to put AI adoption on hold, and are more easily "trampled" by the risks AI brings.

⏳ The coming risk of autonomous AI agents: The article notes that most AI tools today are still "knowledge systems" and can be effectively contained. Once AI agents begin executing tasks autonomously and interacting with one another, however, current security measures will not suffice. Such agents are expected to be deployed widely across enterprises within a few years, and companies that have not built the corresponding safeguards by then will face enormous risk. Laying the groundwork for AI security capabilities now is therefore essential.

🌐 AI security research and enterprise collaboration: To address AI-driven security risks, AI security startups such as Cyera have established dedicated research labs that study how data and AI systems interact inside large organizations. By tracking where sensitive data goes, who accesses it, and what exposure AI tools might create, this research helps enterprises anticipate threats and shape their security strategies. This kind of collaboration between research and industry is a positive force for the development of the broader AI security ecosystem.

As the wife of a cybersecurity pro, I can’t help but pay attention to how AI is changing the game for those on the digital front lines—making their work both tougher and smarter at the same time. I often joke with my husband that “we need him on that wall” (a nod to Jack Nicholson’s famous A Few Good Men monologue), so I’m always tuned in to how AI is transforming both security defense and offense.

That’s why I was curious to jump on a Zoom with AI security startup Cyera’s co-founder and CEO Yotam Segev and Zohar Wittenberg, general manager of Cyera’s AI security business. Cyera’s business, not surprisingly, is booming in the AI era: its ARR has surpassed $100 million in less than two years, and the company’s valuation is now over $6 billion, thanks to surging demand from enterprises scrambling to adopt AI tools without exposing sensitive data or running afoul of new security risks. The company, which is on Fortune’s latest Cyber 60 list of startups, has a roster of clients that includes AT&T, PwC, and Amgen.

“I think about it a bit like Levi’s in the gold rush,” said Segev. Just as every gold digger needed a good pair of jeans, every enterprise company needs to adopt AI securely, he explained. 

The company also recently launched a new research lab to help companies get ahead of the fast-growing security risks created by AI. The team studies how data and AI systems actually interact inside large organizations—tracking where sensitive information lives, who can access it, and how new AI tools might expose it. 
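For readers who want a feel for what that kind of tracking involves, here is a minimal Python sketch of the general idea: walk a file tree, flag files that match simple patterns for regulated data, and record a crude access signal. Everything in it (the patterns, the paths, the reporting) is an illustrative assumption on my part, not Cyera’s actual methodology.

```python
import os
import re
import stat

# Illustrative patterns for regulated data; real classifiers are far more robust.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_tree(root):
    """Walk a directory tree, flag files that contain sensitive-looking data,
    and record whether each flagged file is world-readable (a crude access proxy)."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue  # unreadable file: skip rather than crash the scan
            hits = [label for label, rx in PATTERNS.items() if rx.search(text)]
            if hits:
                world_readable = bool(os.stat(path).st_mode & stat.S_IROTH)
                findings.append((path, hits, world_readable))
    return findings

if __name__ == "__main__":
    for path, hits, exposed in scan_tree("."):
        flag = " [world-readable]" if exposed else ""
        print(f"{path}: {', '.join(hits)}{flag}")
```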

I must say I was surprised to hear Segev describe the current state of AI security as “grim,” leaving CISOs—chief information security officers—caught between a rock and a hard place. One of the biggest problems, he and Wittenberg told me, is that employees are using public AI tools such as ChatGPT, Gemini, Copilot, and Claude either without company approval or in ways that violate policy—like feeding sensitive or regulated data into external systems. CISOs, in turn, face a tough choice: block AI and slow innovation, or allow it and risk massive data exposure.
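One common middle path between those two options is a redaction gateway that screens prompts before they leave the company. The Python sketch below is hypothetical: the regex rules and the send_to_model stub are invented for illustration, and real data-loss-prevention systems rely on trained classifiers rather than a handful of patterns.

```python
import re

# Illustrative redaction rules; production DLP relies on trained classifiers.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
]

def redact(prompt):
    """Replace sensitive-looking spans and count how many were removed."""
    total = 0
    for rx, token in REDACTIONS:
        prompt, n = rx.subn(token, prompt)
        total += n
    return prompt, total

def gated_send(prompt, send_to_model):
    """Redact and log first, then forward. `send_to_model` stands in for
    whatever external AI call the company permits (a hypothetical stub here)."""
    clean, hits = redact(prompt)
    if hits:
        print(f"policy log: redacted {hits} sensitive span(s) before sending")
    return send_to_model(clean)

if __name__ == "__main__":
    echo_model = lambda p: f"(model received) {p}"
    print(gated_send("Ticket from jane@example.com, SSN 123-45-6789", echo_model))
```

The ordering is the whole point of the sketch: redact and log first, forward second, so sensitive spans never leave by default.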

“They know they’re not going to be able to say no,” said Segev. “They have to allow the AI to come in, but the existing visibility controls and mitigations they have today are way behind what they need them to be.” Regulated organizations in industries like healthcare, financial services, or telecom are actually in a better position to slow things down, he explained: “I was meeting with a CISO for a global telco this week. She told me, ‘I’m pushing back. I’m holding them at bay. I’m not ready.’ But she has that privilege, because she’s a regulated entity, and she has that place in the company. When you go one step down the list of companies to less regulated entities, they’re just being trampled.”

For now, companies aren’t in too much hot water, Wittenberg said, because most AI tools aren’t yet fully autonomous. “It’s just knowledge systems at this point—you can still contain them,” he explained. “But once we reach the point where agents take action on behalf of humans and start talking to each other, if you don’t do anything, you’re in big trouble.” He added that within a couple of years, those kinds of AI agents will be deployed across enterprises.
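One way to picture what “containing” an agent might mean is a deny-by-default gate: every action the agent proposes is checked against an explicit allowlist and a call budget before it runs. The sketch below is a hypothetical illustration; the action names and policy table are invented, not any real agent framework’s API.

```python
# Deny-by-default gate: every action an agent proposes is checked against an
# explicit allowlist and a call budget before it runs. Names are illustrative.
ALLOWED_ACTIONS = {
    "search_docs": {"max_calls": 100},
    "draft_email": {"max_calls": 10},
    # "send_email" is deliberately absent: drafting is allowed, sending is not.
}

class PolicyViolation(Exception):
    pass

class ActionGate:
    def __init__(self, policy):
        self.policy = policy
        self.calls = {}  # per-action usage counters

    def authorize(self, action):
        rule = self.policy.get(action)
        if rule is None:
            raise PolicyViolation(f"action '{action}' is not on the allowlist")
        if self.calls.get(action, 0) >= rule["max_calls"]:
            raise PolicyViolation(f"action '{action}' exceeded its call budget")
        self.calls[action] = self.calls.get(action, 0) + 1

gate = ActionGate(ALLOWED_ACTIONS)
gate.authorize("draft_email")        # permitted and counted
try:
    gate.authorize("send_email")     # blocked: never allowlisted
except PolicyViolation as err:
    print(f"blocked: {err}")
```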

“Hopefully the world will move at a pace that we can build security for it in time,” he said. “We’re trying to make sure that we’re ready, so we can help organizations protect it before it becomes a disaster.”

Yikes, right? To borrow from A Few Good Men again, I wonder if companies can really handle the truth: when it comes to AI security, they need all the help they can get on that wall.

Also, a small self-promotional moment: Yesterday I published a new Fortune deep-dive profile on OpenAI’s Greg Brockman — the engineer-turned-power-broker behind its trillion-dollar AI infrastructure mission. It’s a wild story, and one of my favorite stories I worked on this year. I hope you’ll check it out!

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Meet the power broker of the AI age: OpenAI’s ‘builder-in-chief’ helping to turn Sam Altman’s trillion-dollar data center dreams into reality, by Sharon Goldman

Microsoft, freed from relying on OpenAI, joins the race for ‘superintelligence’—and AI chief Mustafa Suleyman wants to ensure it serves humanity, by Sharon Goldman

The under-the-radar factor that helped Democrats win in Virginia, New Jersey, and Georgia, by Sharon Goldman

Exclusive: Voice AI startup Giga raises $61 million to take on customer service automation, by Beatrice Nolan

OpenAI’s new safety tools are designed to make AI models harder to jailbreak. Instead, they may give users a false sense of security, by Beatrice Nolan

AI IN THE NEWS

Mark Zuckerberg and Priscilla Chan have restructured their philanthropy to focus on AI and science. The New York Times reported today that Mark Zuckerberg and Priscilla Chan’s philanthropy, the Chan Zuckerberg Initiative, is going all-in on AI. Once known for its sweeping ambitions to fix education and social inequality, CZI announced a major restructuring to focus squarely on AI-driven scientific research through a new organization called the Chan Zuckerberg Biohub Network. The group even acquired the team behind AI startup Evolutionary Scale, naming its chief scientist Alex Rives as head of science. It’s a boomerang move for Rives: When I interviewed him about Evolutionary Scale last year, he explained that he had led a research cohort known as Meta’s “AI protein team,” which was disbanded in August 2023 as part of Mark Zuckerberg’s “year of efficiency” that led to over 20,000 layoffs at Meta. Undeterred, he immediately spun up a startup, Evolutionary Scale, with a core group of his former Meta colleagues to continue their work building large language models that, instead of generating text, images, or video, generate recipes for entirely new proteins.

Apple is reportedly finalizing a deal to pay Google about $1 billion per year to use a 1.2-trillion-parameter AI model to power a major overhaul of Siri. According to Bloomberg, after testing models from Google, OpenAI, and Anthropic, Apple has chosen Google’s technology to help rebuild Siri’s underlying system. The partnership would give Apple access to Google’s massive AI infrastructure, enabling more capable, conversational versions of Siri and new features expected to launch next spring. Both companies declined to comment publicly. While the hope is reportedly to use the technology as an interim solution until Apple’s own models are powerful enough, my colleague Jeremy Kahn and I both wonder if this might ultimately signal that Apple has given up trying to compete in the AI model game with its own native technology for Siri.

OpenAI CFO Sarah Friar clarifies comment, says company isn’t seeking government backstop. CNBC reported that OpenAI CFO Sarah Friar clarified late Wednesday that the company is not seeking a government “backstop” for its massive infrastructure buildout, walking back remarks she made earlier at the Wall Street Journal’s Tech Live event. Friar said her comments about a potential federal guarantee “muddied the point,” explaining that she meant the U.S. and private sector must both invest in AI as a national strategic asset. Her clarification comes as OpenAI faces scrutiny over how it will finance more than $1.4 trillion in data center and chip commitments despite reporting roughly $13 billion in revenue this year. CEO Sam Altman has brushed off concerns, calling AI infrastructure the foundation of America’s technological strength.

AI CALENDAR

Nov. 10-13: Web Summit, Lisbon. 

Nov. 19: Nvidia third-quarter earnings.

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

82%

That's how many CISOs face pressure from boards or executives to increase efficiency using AI-driven automation, according to the 2025 CISO Pressure Index, a new Nagomi Security survey of 100 chief information security officers.

Other key findings included: 

59% of CISOs say they fear AI attacks more than any other threat over the next 12 months.

47% expect agentic AI to be their top concern within the next two to three years.

80% of CISOs say they are under high or extreme pressure right now, and 87% report that pressure has climbed over the past year.

Fortune Brainstorm AI returns to San Francisco Dec. 8–9 to convene the smartest people we know—technologists, entrepreneurs, Fortune Global 500 executives, investors, policymakers, and the brilliant minds in between—to explore and interrogate the most pressing questions about AI at another pivotal moment.

Register here.
