AI News, 6 November, 11:15
AI assistant security risks and mitigation strategies

 

As enterprises press AI assistants for productivity gains, the technology's inherent security risks are becoming harder to ignore. Capabilities such as browsing live websites, remembering user context, and connecting to business applications inadvertently expand the cyber attack surface. Tenable's "HackedGPT" research shows how techniques such as indirect prompt injection can lead to data exfiltration and malware persistence. Countering these risks means treating AI as a user or device and subjecting it to strict audit and monitoring. The essentials are an AI system registry, separate identities for humans, services, and agents, constraints on the AI's risky features, and comprehensive monitoring. Raising security awareness and skills, and ensuring vendors patch vulnerabilities promptly, round out the measures needed to keep AI running securely.

🌐 **AI assistants expand the attack surface**: AI assistants can browse live websites, remember user context, and connect to business applications, and these powerful capabilities also significantly increase the potential entry points for cyber attack. Tenable's research shows that techniques such as indirect prompt injection let attackers bypass safeguards to achieve data exfiltration and persistent malware, so enterprises must recognise that AI assistants are not simple productivity tools but complex systems requiring strict security management.

🛡️ **Strengthen the AI security governance framework**: To manage the risks AI assistants introduce, enterprises should treat them as distinct "users" or "devices" and subject them to strict audit and monitoring. That includes building a complete AI system registry that records each model, assistant, or agent's purpose, capabilities, and data-access rights, preventing "shadow AI". At the same time, assign separate identities to human users, service accounts, and AI agents, and enforce zero-trust policies so the principle of least privilege is upheld.

🔒 **Constrain risky AI features and strengthen monitoring**: AI assistant capabilities, especially web browsing and the ability to act independently, should be restricted per use case and made opt-in by default. Customer-facing assistants should have short memory-retention periods unless there is a lawful basis for more. Assistant actions and tool calls should be captured as structured logs, and anomalous activity, such as access to unknown domains, code-summarisation attempts, or connector access outside policy, should be monitored so potential threats are detected and handled promptly.

🧠 **Build people capability and hold vendors accountable**: Enterprises need to train developers, cloud engineers, and analysts to recognise the signs of injection attacks, and encourage users to report unusual behaviour. Acknowledge the skills gap and invest in training that brings AI/ML and cybersecurity practice together. Enterprises should also watch AI vendors' patch releases closely, ensuring newly discovered vulnerabilities are fixed promptly as both sides respond to the security challenges that come with the technology's rapid development.

Boards of directors are pressing for productivity gains from large language models and AI assistants. Yet the same features that make AI useful – browsing live websites, remembering user context, and connecting to business apps – also expand the cyber attack surface.

Tenable researchers have published a set of vulnerabilities and attacks under the title "HackedGPT", showing how indirect prompt injection and related techniques could enable data exfiltration and malware persistence. According to the company's advisory, some issues have been remediated, while others remained exploitable at the time of disclosure.

Removing the inherent risks from AI assistants' operations requires governance, controls, and operating methods that treat AI as a user or device – to the extent that the technology should be subject to strict audit and monitoring.

The Tenable research shows the failures that can turn AI assistants into security liabilities. Indirect prompt injection hides instructions in web content that the assistant reads while browsing – instructions that trigger data access the user never intended. Another vector seeds malicious instructions through a front-end query.

The business impact is clear: incident response, legal and regulatory review, and steps to limit reputational harm.

Research already exists that shows assistants can leak personal or sensitive information through injection techniques, and AI vendors and cybersecurity experts have to patch issues as they emerge.

The pattern is familiar to anyone in the technology industry: as features expand, so do failure modes. Treating AI assistants as live, internet-facing applications – not merely productivity drivers – can improve resilience.

How to govern AI assistants, in practice

1) Establish an AI system registry

Inventory every model, assistant, or agent in use – in public cloud, on-premises, and software-as-a-service – in line with the NIST AI RMF Playbook. Record the owner, purpose, capabilities (browsing, API connectors), and data domains accessed. Without this AI asset list, "shadow agents" can persist with privileges no one tracks. Shadow AI – at one stage encouraged by the likes of Microsoft, which urged users to bring home Copilot licences into work – is a significant threat.
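A registry does not need to be elaborate to be useful. Below is a minimal sketch of what one entry might record, using hypothetical field names that mirror the attributes above (owner, purpose, capabilities, data domains); it is an illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIAssetRecord:
    """One entry in an AI system registry (illustrative fields, not a standard)."""
    name: str                                              # e.g. "support-copilot"
    owner: str                                             # accountable team or individual
    purpose: str                                           # business use case
    environment: str                                       # "public-cloud" | "on-prem" | "saas"
    capabilities: List[str] = field(default_factory=list)  # e.g. ["browsing", "api-connectors"]
    data_domains: List[str] = field(default_factory=list)  # data the assistant may touch

registry = [
    AIAssetRecord(
        name="support-copilot",
        owner="customer-support",
        purpose="Draft replies to support tickets",
        environment="saas",
        capabilities=["browsing"],
        data_domains=["ticket-history"],
    ),
]

def is_shadow_ai(asset_name: str) -> bool:
    """Anything running in production but missing from the registry is, by definition, shadow AI."""
    return all(record.name != asset_name for record in registry)
```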

2) Separate identities for humans, services, and agents

Identity and access management often conflates user accounts, service accounts, and automation devices. Assistants that access websites, call tools, and write data need distinct identities and must be subject to zero-trust, least-privilege policies. Mapping agent-to-agent chains (who asked whom to do what, over which data, and when) is the bare-minimum crumb trail that can provide some degree of accountability. It's worth noting that agentic AI is prone to 'creative' output and actions, yet, unlike human staff, it is not constrained by disciplinary policies.
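That crumb trail can be expressed as a simple structured record. The sketch below assumes three principal types (human, service, agent) and hypothetical field names; real deployments would map these onto whatever identity provider and audit store are in use.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class PrincipalType(Enum):
    HUMAN = "human"
    SERVICE = "service"
    AGENT = "agent"

@dataclass
class DelegationRecord:
    """One link in an agent-to-agent chain: who asked whom to do what, over which data, when."""
    requester: str
    requester_type: PrincipalType
    actor: str
    actor_type: PrincipalType
    action: str
    data_scope: str
    timestamp: datetime

# A human asks an agent to summarise a bounded data scope; the record makes the delegation auditable.
chain = [
    DelegationRecord(
        requester="alice@example.com", requester_type=PrincipalType.HUMAN,
        actor="research-agent", actor_type=PrincipalType.AGENT,
        action="summarise", data_scope="crm:accounts/emea",
        timestamp=datetime.now(timezone.utc),
    ),
]
```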

3) Constrain risky features by context

Make browsing and independent actions taken by AI assistants opt-in per use case. For customer-facing assistants, set short retention times unless there’s a strong reason and a lawful basis otherwise. For internal engineering, use AI assistants but only in segregated projects with strict logging. Apply data-loss-prevention to connector traffic if assistants can reach file stores, messaging, or e-mail. Previous plugin and connector issues demonstrate how integrations increase exposure.
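In configuration terms, "opt-in per use case" means an explicit policy object rather than global defaults. The sketch below is one possible shape, with hypothetical field names and example retention values; the point is that risky capabilities default to off and are enabled per context.

```python
from dataclasses import dataclass

@dataclass
class AssistantPolicy:
    """Per-use-case controls; everything risky defaults to off (illustrative only)."""
    use_case: str
    allow_browsing: bool = False          # opt-in per use case
    allow_autonomous_actions: bool = False
    memory_retention_days: int = 1        # short retention unless there is a lawful basis
    dlp_on_connectors: bool = True        # scan connector traffic to file stores, messaging, e-mail

policies = {
    "customer-facing": AssistantPolicy("customer-facing"),
    "internal-engineering": AssistantPolicy(
        "internal-engineering",
        allow_browsing=True,              # enabled only inside segregated projects with strict logging
        memory_retention_days=30,
    ),
}
```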

4) Monitor like any internet-facing app

Capture assistant actions and tool calls as structured logs. Watch for anomalies – access to unknown domains, code-summarisation attempts, or connector access outside policy – and treat them as signals to investigate and contain.
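A minimal sketch of that logging, assuming a hypothetical domain allow-list and a JSON record per tool call; in practice the record would be shipped to a SIEM rather than printed.

```python
import json
from datetime import datetime, timezone

ALLOWED_DOMAINS = {"docs.example.com", "kb.example.com"}   # hypothetical allow-list

def log_tool_call(assistant: str, tool: str, target: str) -> dict:
    """Emit one structured log record per assistant action or tool call."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "assistant": assistant,
        "tool": tool,
        "target": target,
        # Flag browsing outside the allow-list as an anomaly worth investigating.
        "alert": tool == "browse" and target not in ALLOWED_DOMAINS,
    }
    print(json.dumps(record))   # stand-in for shipping to the SIEM
    return record

log_tool_call("support-copilot", "browse", "unknown-site.example")  # alert: True
```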

5) Build the human muscle

Train developers, cloud engineers, and analysts to recognise injection symptoms. Encourage users to report odd behaviour (e.g., an assistant unexpectedly summarising content from a site they didn’t open). Make it normal to quarantine an assistant, clear memory, and rotate its credentials after suspicious events. The skills gap is real; without upskilling, governance will lag adoption.
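Quarantine is easier to make "normal" when it is written down as a runbook. The sketch below is purely illustrative: the helper functions are hypothetical stubs standing in for whatever controls your assistant platform and identity provider actually expose.

```python
def disable_connectors(assistant_id: str) -> None:
    print(f"[{assistant_id}] connectors to file stores, messaging, and e-mail disabled")

def clear_memory(assistant_id: str) -> None:
    print(f"[{assistant_id}] persisted context cleared (may carry injected instructions)")

def rotate_credentials(assistant_id: str) -> None:
    print(f"[{assistant_id}] tokens and credentials rotated")

def open_incident(assistant_id: str) -> None:
    print(f"[{assistant_id}] incident opened for review")

def quarantine_assistant(assistant_id: str) -> None:
    """Illustrative containment runbook after a suspicious event; helpers are hypothetical stubs."""
    disable_connectors(assistant_id)
    clear_memory(assistant_id)
    rotate_credentials(assistant_id)
    open_incident(assistant_id)

quarantine_assistant("support-copilot")
```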

Decision points for IT and cloud leaders

| Question | Why it matters |
| --- | --- |
| Which assistants can browse the web or write data? | Browsing and memory are common injection and persistence paths; constrain per use case. |
| Do agents have distinct identities and auditable delegation? | Prevents "who did what?" gaps when instructions are seeded indirectly. |
| Is there a registry of AI systems with owners, scopes, and retention? | Supports governance, right-sizing of controls, and budget visibility. |
| How are connectors and plugins governed? | Third-party integrations have a history of security issues; apply least privilege and DLP. |
| Do we test for 0-click and 1-click vectors before go-live? | Public research shows both are feasible via crafted links or content. |
| Are vendors patching promptly and publishing fixes? | Feature velocity means new issues will appear; verify responsiveness. |

Risks, cost visibility, and the human factor

Bottom line

The lesson for executives is simple: treat AI assistants as powerful, networked applications with their own lifecycle and a propensity both for being the subject of attack and for taking unpredictable action. Put a registry in place, separate identities, constrain risky features by default, log everything meaningful, and rehearse containment.

With these guardrails in place, agentic AI is more likely to deliver measurable efficiency and resilience – without quietly becoming your newest breach vector.

(Image source: “The Enemy Within Unleashed” by aha42 | tehaha is licensed under CC BY-NC 2.0.)

