VentureBeat · October 29, 22:24
Challenges and Opportunities of AI in Security Operations

As artificial intelligence advances rapidly, security operations face the challenge of balancing AI's potential against the need for human oversight. Agentic AI is reshaping security operations, but success hinges on pairing automation with accountability. This article examines AI's role in improving efficiency, building trust, countering attackers, developing skills, managing identities, and compliance reporting, and stresses the importance of a solid data foundation. The ultimate goal is not to replace security analysts but to build a stronger security posture through human-machine collaboration.

🤖 **Balancing efficiency and accountability:** AI can dramatically improve security analysts' efficiency, for example cutting investigation time from 60 minutes to 5, but the key is identifying which tasks suit automation and which require human judgment. Automatically deciding to take a system offline or quarantine an endpoint, for instance, risks business disruption and therefore needs human validation.

🤝 **Building trust and transparency:** Security teams need to understand the basis of AI decisions, including the data examined, the patterns identified, and the alternative explanations ruled out. This transparency builds trust in AI recommendations, creates opportunities for continuous improvement, and keeps a human in the loop for complex decisions.

⚔️ **AI under attack-defense asymmetry:** Attackers use AI without constraints, rapidly developing exploits and lowering the barrier to entry. Defenders must use AI carefully, deploying it securely so it does not itself become a vulnerability, learning from attackers' techniques while keeping the necessary guardrails in place.

🧠 **Skill development and maintaining core competencies:** As AI takes on more routine work, security professionals' core skills risk atrophying. Organizations need deliberate skill-development strategies that balance AI-driven efficiency with manual investigation exercises, cross-training, and career paths that preserve core competencies.

🆔 **Identity and permission management for agentic AI:** An estimated 1.3 billion AI agents by 2028 will each need an identity, permissions, and governance. Overly permissive agents pose enormous risk and could be exploited for destructive actions. Tool-based access control and governance frameworks are needed to manage these agents' identities and permissions.

Presented by Splunk, a Cisco Company


As AI rapidly evolves from a theoretical promise to an operational reality, CISOs and CIOs face a fundamental challenge: how to harness AI's transformative potential while maintaining the human oversight and strategic thinking that security demands. The rise of agentic AI is reshaping security operations, but success requires balancing automation with accountability.

The efficiency paradox: Automation without abdication

The pressure to adopt AI is intense. Organizations are being pushed to reduce headcount or redirect resources toward AI-driven initiatives, often without fully understanding what that transformation entails. The promise is compelling: AI can reduce investigation times from 60 minutes to just 5 minutes, potentially delivering 10x productivity improvements for security analysts.

However, the critical question isn't whether AI can automate tasks — it's which tasks should be automated and where human judgment remains irreplaceable. The answer lies in understanding that AI excels at accelerating investigative workflows, but remediation and response actions still require human validation. Taking a system offline or quarantining an endpoint can have massive business impact. An AI making that call autonomously could inadvertently cause the very disruption it's meant to prevent.
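
To make that split concrete, here is a minimal sketch, in Python, of a dispatch policy that auto-executes read-only investigative steps but queues disruptive response actions for analyst approval. The action names, dataclass, and queue are illustrative assumptions, not any product's API.

```python
# Illustrative policy: investigative steps run autonomously; disruptive
# response actions wait for a human. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ENRICH_ALERT = auto()         # read-only: safe to automate
    CORRELATE_LOGS = auto()       # read-only: safe to automate
    QUARANTINE_ENDPOINT = auto()  # disruptive: needs a human
    TAKE_SYSTEM_OFFLINE = auto()  # disruptive: needs a human

# Actions with business-impact potential are never auto-executed.
REQUIRES_HUMAN_APPROVAL = {Action.QUARANTINE_ENDPOINT, Action.TAKE_SYSTEM_OFFLINE}

@dataclass
class ProposedAction:
    action: Action
    target: str
    rationale: str  # the AI's stated reason, surfaced to the analyst

def dispatch(proposal: ProposedAction, approval_queue: list) -> str:
    """Auto-execute safe investigative steps; route disruptive ones to a human."""
    if proposal.action in REQUIRES_HUMAN_APPROVAL:
        approval_queue.append(proposal)  # analyst reviews before anything happens
        return "pending_human_approval"
    return "auto_executed"

queue: list = []
print(dispatch(ProposedAction(Action.CORRELATE_LOGS, "host-42", "related auth failures"), queue))
print(dispatch(ProposedAction(Action.QUARANTINE_ENDPOINT, "host-42", "beaconing to known C2"), queue))
```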

The goal isn't to replace security analysts but to free them for higher-value work. With routine alert triage automated, analysts can focus on red team/blue team exercises, collaborate with engineering teams on remediation, and engage in proactive threat hunting. There's no shortage of security problems to solve — there's a shortage of security experts to address them strategically.

The trust deficit: Showing your work

While confidence in AI's ability to improve efficiency is high, skepticism about the quality of AI-driven decisions remains significant. Security teams need more than just AI-generated conclusions — they need transparency into how those conclusions were reached.

When AI determines an alert is benign and closes it, SOC analysts need to understand the investigative steps that led to that determination. What data was examined? What patterns were identified? What alternative explanations were considered and ruled out?

This transparency builds trust in AI recommendations, enables validation of AI logic, and creates opportunities for continuous improvement. Most importantly, it maintains the critical human-in-the-loop for complex judgment calls that require nuanced understanding of business context, compliance requirements, and potential cascading impacts.
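
One way to operationalize those questions is to require the agent to emit a structured trace alongside every verdict, which the SOC can render and review before trusting a closure. The schema below is a sketch under that assumption; the field names mirror the questions above and are not any standard.

```python
# Illustrative "show your work" record emitted with an AI verdict.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class InvestigationTrace:
    alert_id: str
    verdict: str  # e.g. "benign", "malicious"
    data_examined: list = field(default_factory=list)
    patterns_identified: list = field(default_factory=list)
    alternatives_ruled_out: list = field(default_factory=list)

trace = InvestigationTrace(
    alert_id="ALRT-1029",
    verdict="benign",
    data_examined=["auth logs (24h)", "EDR process tree", "proxy logs"],
    patterns_identified=["logins match the user's usual geography and hours"],
    alternatives_ruled_out=["credential stuffing: no failed-login burst before success"],
)

# The analyst reviews this trace before accepting the alert closure.
print(json.dumps(asdict(trace), indent=2))
```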

The future likely involves a hybrid model where autonomous capabilities are integrated into guided workflows and playbooks, with analysts remaining involved in complex decisions.

The adversarial advantage: Fighting AI with AI — carefully

AI is a double-edged sword in security. While we're carefully implementing AI with appropriate guardrails, adversaries face no such constraints. AI lowers the barrier to entry for attackers, enabling rapid exploit development and vulnerability discovery at scale. What was once the domain of sophisticated threat actors could soon be accessible to script kiddies armed with AI tools.

The asymmetry is striking: defenders must be thoughtful and risk-averse, while attackers can experiment freely. If we make a mistake implementing autonomous security responses, we risk taking down production systems. If an attacker's AI-driven exploit fails, they simply try again with no consequences.

This creates an imperative to use AI defensively, but with appropriate caution. We must learn from attackers' techniques while maintaining the guardrails that prevent our AI from becoming the vulnerability. The recent emergence of malicious MCP (Model Context Protocol) supply chain attacks demonstrates how quickly adversaries exploit new AI infrastructure.
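
One generic guardrail against that class of supply chain risk is to pin the agent tooling you load (MCP servers included) to content hashes recorded when the artifact was vetted, and refuse anything unpinned or modified. The sketch below is a local integrity check you would maintain yourself, not a feature of MCP; the names and digest are placeholders.

```python
# Deny-by-default integrity pinning for agent tooling (sketch).
import hashlib
from pathlib import Path

PINNED_ARTIFACTS: dict = {
    # name -> SHA-256 digest recorded when the artifact was vetted (placeholder)
    "filesystem-server": "<digest recorded at review time>",
}

def may_load(name: str, artifact: Path) -> bool:
    """Load only artifacts whose current hash matches the vetted pin."""
    expected = PINNED_ARTIFACTS.get(name)
    if expected is None:
        return False  # unpinned tooling never loads
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected
```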

The skills dilemma: Building capabilities while maintaining core competencies

As AI handles more routine investigative work, a concerning question emerges: will security professionals' fundamental skills atrophy over time? This isn't an argument against AI adoption — it's a call for intentional skill development strategies. Organizations must balance AI-enabled efficiency with programs that maintain core competencies. This includes regular exercises that require manual investigation, cross-training that deepens understanding of underlying systems, and career paths that evolve roles rather than eliminate them.

The responsibility is shared. Employers must provide tools, training, and culture that enable AI to augment rather than replace human expertise. Employees must actively engage in continuous learning, treating AI as a collaborative partner rather than a replacement for critical thinking.

The identity crisis: Governing the agent explosion

Perhaps the most underestimated challenge ahead is identity and access management in an agentic AI world. IDC estimates 1.3 billion agents by 2028 — each requiring identity, permissions, and governance. The complexity compounds exponentially.

Overly permissive agents represent significant risk. An agent with broad administrative access could be socially engineered into taking destructive actions, approving fraudulent transactions, or exfiltrating sensitive data. The technical shortcuts engineers take to "just make it work" — granting excessive permissions to expedite deployment — create vulnerabilities that adversaries will exploit.

Tool-based access control offers one path forward, granting agents only the specific capabilities they need. But governance frameworks must also address how LLMs themselves might learn and retain authentication information, potentially enabling impersonation attacks that bypass traditional access controls.
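
What tool-based access control can look like in practice: each agent identity receives an explicit allowlist of capabilities, and everything else, including unknown agents, is denied by default. A minimal sketch follows; the agent names, tool names, and registry are hypothetical.

```python
# Deny-by-default tool registry for agents (sketch).
def search_logs(query: str) -> str:
    return f"log results for {query!r}"  # stand-in implementation

def quarantine_endpoint(host: str) -> str:
    return f"quarantined {host}"  # stand-in implementation

TOOLS = {"search_logs": search_logs, "quarantine_endpoint": quarantine_endpoint}

TOOL_GRANTS = {
    "triage-agent":   frozenset({"search_logs"}),          # read-only scope
    "response-agent": frozenset({"quarantine_endpoint"}),  # narrowly scoped
    # Note what is absent: no agent holds a broad "admin" capability.
}

def invoke_tool(agent_id: str, tool_name: str, *args) -> str:
    """Execute a tool only if this agent was explicitly granted it."""
    granted = TOOL_GRANTS.get(agent_id, frozenset())  # unknown agent gets nothing
    if tool_name not in granted:
        raise PermissionError(f"{agent_id!r} is not granted {tool_name!r}")
    return TOOLS[tool_name](*args)

print(invoke_tool("triage-agent", "search_logs", "failed logins on host-42"))
# invoke_tool("triage-agent", "quarantine_endpoint", "host-42")  # PermissionError
```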

The path forward: Start with compliance and reporting

Amid these challenges, one area offers an immediate, high-impact opportunity: continuous compliance and risk reporting. AI's ability to consume vast amounts of documentation, interpret complex requirements, and generate concise summaries makes it ideal for compliance and reporting work that has traditionally consumed enormous amounts of analyst time. This represents a low-risk, high-value entry point for AI in security operations.
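
As an illustration of that entry point, the sketch below maps (document, requirement) pairs through a model call to produce evidence summaries for analyst review. `llm_summarize` is a stand-in for whatever model API your stack provides, and the documents and requirement are hypothetical.

```python
# Sketch: generate compliance evidence summaries for human review.
def llm_summarize(text: str, requirement: str) -> str:
    # Stand-in: in practice, call your model of choice with a prompt like
    # "Summarize how this document addresses <requirement>."
    return f"[summary of {len(text)} chars against {requirement!r}]"

def compliance_findings(documents: dict, requirements: list) -> list:
    """Map every (requirement, document) pair to a short evidence summary."""
    findings = []
    for req in requirements:
        for name, text in documents.items():
            findings.append(f"{req} / {name}: {llm_summarize(text, req)}")
    return findings

docs = {"access-policy.md": "...", "ir-runbook.md": "..."}
for line in compliance_findings(docs, ["PCI DSS 8.2 (authentication)"]):
    print(line)  # reviewed by an analyst before filing, never auto-submitted
```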

The data foundation: Enabling the AI-powered SOC

None of these AI capabilities can succeed without addressing the fundamental data challenges facing security operations. SOC teams struggle with siloed data and disparate tools. Success requires a deliberate data strategy that prioritizes accessibility, quality, and unified data context. Security-relevant data must be immediately available to AI agents without friction, properly governed to ensure reliability, and enriched with metadata that provides the business context AI cannot infer on its own.
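
A minimal sketch of that enrichment step, assuming a simple asset-inventory lookup: raw events are joined with owner, criticality, and environment metadata before any agent reasons over them. The inventory and its fields are illustrative.

```python
# Sketch: attach business context to raw events before agents see them.
ASSET_INVENTORY = {
    "host-42": {"owner": "payments-team", "criticality": "high", "env": "prod"},
    "host-77": {"owner": "qa", "criticality": "low", "env": "staging"},
}

def enrich(event: dict) -> dict:
    """Join an event with inventory metadata; flag unknown assets."""
    meta = ASSET_INVENTORY.get(event.get("host"), {"criticality": "unknown"})
    return {**event, "asset": meta}

print(enrich({"host": "host-42", "signature": "suspicious PowerShell activity"}))
```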

Closing thought: Innovation with intentionality

The autonomous SOC is emerging — not as a light switch to flip, but as an evolutionary journey requiring continuous adaptation. Success demands that we embrace AI's efficiency gains while maintaining the human judgment, strategic thinking, and ethical oversight that security requires.

We're not replacing security teams with AI. We're building collaborative, multi-agent systems where human expertise guides AI capabilities toward outcomes that neither could achieve alone. That's the promise of the agentic AI era — if we're intentional about how we get there.


Tanya Faddoul is VP Product, Customer Strategy and Chief of Staff for Splunk, a Cisco Company. Michael Fanning is Chief Information Security Officer for Splunk, a Cisco Company.

Cisco Data Fabric provides the needed data architecture, powered by the Splunk Platform — unified data fabric, federated search capabilities, comprehensive metadata management — to unlock the full potential of AI in the SOC. Learn more about Cisco Data Fabric.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

