Radar 09月29日
AI安全新挑战与机遇
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

随着人工智能,尤其是自主型AI,在企业系统中的深度嵌入,安全领域正经历重大转变。AI系统,特别是企业工作流程中不可或缺的AI助手,正成为攻击者的首要目标。Zenith的Michael Bargury展示了影响ChatGPT、Gemini和Microsoft Copilot等主要AI平台的未知'零点击'漏洞。同时,NVIDIA的AI红队揭示了大型语言模型对恶意输入的独特脆弱性。尽管AI扩展提升了生产力,但也增加了对敏感数据的访问,形成了新的攻击面和更复杂的供应链防御。传统安全原则如应用安全实践仍至关重要,但威胁建模因AI系统而变得更加复杂。黑帽USA 2025推出了MAESTRO、NIST AI风险管理框架和OWASP代理安全十大项目,为理解和应对AI特定安全风险提供了结构化方法。安全专业人员需平衡传统基础与AI特定安全挑战的新专业知识,重新评估安全态势。

🔒 AI系统,尤其是企业工作流程中不可或缺的AI助手,正成为攻击者的首要目标。Zenith的Michael Bargury展示了影响ChatGPT、Gemini和Microsoft Copilot等主要AI平台的未知'零点击'漏洞,凸显了AI助手在强大安全措施下可能成为系统入侵的媒介。

🌐 AI安全呈现矛盾:组织为提升生产力扩展AI能力,必须增加这些工具对敏感数据和系统的访问权限,这创造了新的攻击面和更复杂的供应链防御。NVIDIA的AI红队揭示了大型语言模型对恶意输入的独特脆弱性,并展示了利用这些内在弱点的多种新型攻击技术。

🛠 传统安全原则如应用安全实践对于AI安全仍然至关重要,但更具挑战性。Kudelski Security的研究员Nathan Hamiel和Nils Amiet展示了AI驱动开发工具如何无意中重新引入众所周知的漏洞,表明基本应用安全实践在AI安全中是根本性的。

📊 威胁建模因AI系统而变得更加复杂,需要新的框架来应对。黑帽USA 2025推出了MAESTRO、NIST AI风险管理框架和OWASP代理安全十大项目,为理解和应对AI特定安全风险提供了结构化方法,帮助安全专业人员更好地识别和缓解新兴威胁。

🧭 安全专业人员需平衡传统基础与AI特定安全挑战的新专业知识,重新评估安全态势。组织必须通过这一新视角重新评估其安全态势,同时考虑传统漏洞和新兴的AI特定威胁,以适应AI带来的安全变化。

The security landscape is undergoing yet another major shift, and nowhere was this more evident than at Black Hat USA 2025. As artificial intelligence (especially the agentic variety) becomes deeply embedded in enterprise systems, it’s creating both security challenges and opportunities. Here’s what security professionals need to know about this rapidly evolving landscape.

AI systems—and particularly the AI assistants that have become integral to enterprise workflows—are emerging as prime targets for attackers. In one of the most interesting and scariest presentations, Michael Bargury of Zenity demonstrated previously unknown “0click” exploit methods affecting major AI platforms including ChatGPT, Gemini, and Microsoft Copilot. These findings underscore how AI assistants, despite their robust security measures, can become vectors for system compromise.

AI security presents a paradox: As organizations expand AI capabilities to enhance productivity, they must necessarily increase these tools’ access to sensitive data and systems. This expansion creates new attack surfaces and more complex supply chains to defend. NVIDIA’s AI red team highlighted this vulnerability, revealing how large language models (LLMs) are uniquely susceptible to malicious inputs, and demonstrated several novel exploit techniques that take advantage of these inherent weaknesses.

However, it’s not all new territory. Many traditional security principles remain relevant and are, in fact, more crucial than ever. Nathan Hamiel and Nils Amiet of Kudelski Security showed how AI-powered development tools are inadvertently reintroducing well-known vulnerabilities into modern applications. Their findings suggest that basic application security practices remain fundamental to AI security.

Looking forward, threat modeling becomes increasingly critical but also more complex. The security community is responding with new frameworks designed specifically for AI systems such as MAESTRO and NIST’s AI Risk Management Framework. The OWASP Agentic Security Top 10 project, launched during this year’s conference, provides a structured approach to understanding and addressing AI-specific security risks.

For security professionals, the path forward requires a balanced approach: maintaining strong fundamentals while developing new expertise in AI-specific security challenges. Organizations must reassess their security posture through this new lens, considering both traditional vulnerabilities and emerging AI-specific threats.

The discussions at Black Hat USA 2025 made it clear that while AI presents new security challenges, it also offers opportunities for innovation in defense strategies. Mikko Hypponen’s opening keynote presented a historical perspective on the last 30 years of cybersecurity advancements and concluded that security is not only better than it’s ever been but poised to leverage a head start in AI usage. Black Hat has a way of underscoring the reasons for concern, but taken as a whole, this year’s presentations show us that there are also many reasons to be optimistic. Individual success will depend on how well security teams can adapt their existing practices while embracing new approaches specifically designed for AI systems.

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

AI安全 人工智能 自主型AI 零点击漏洞 大型语言模型 威胁建模 NIST AI风险管理框架 OWASP代理安全十大项目
相关文章