AI News 前天 08:18
AI浏览器存在严重安全隐患
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

AI驱动的浏览器,如Perplexity的Fellou和Comet,正逐渐进入企业办公环境。它们被视为浏览器发展的下一阶段,内置AI功能,能够阅读、总结网页内容,甚至自主执行操作。理论上,AI浏览器能加速工作流程、辅助在线研究,并从内部和外部网络检索信息。然而,安全研究表明,AI浏览器极易遭受间接提示注入攻击,隐藏在特制网站中的指令可被AI模型解读并执行,利用用户权限访问敏感数据,从而对企业构成严峻的安全威胁。

💡 AI浏览器易受间接提示注入攻击:攻击者可以将隐藏的文本指令嵌入到特制网站的网页或图像中,AI模型在处理这些内容时会将其误解为指令,从而被操纵执行非预期操作。这种攻击方式绕过了传统安全措施,利用了AI模型对输入的解读能力。

💼 企业面临严峻安全风险:AI浏览器能够利用用户权限访问敏感企业数据,其自主性也放大了攻击面。一旦AI浏览器被劫持,可能导致数据泄露、未经授权的操作,甚至在用户不知情的情况下长时间进行恶意活动,形同“休眠恶意软件”。

🛡️ 缺乏有效控制与缓解措施:当前AI浏览器普遍缺乏区分用户意图与模型指令的能力,无法有效识别和阻止潜在的恶意输入。虽然主流浏览器厂商正在集成AI功能,但缺乏明确的安全标准和有效的防护机制,企业在使用时需谨慎,并关注未来的安全更新和治理策略。

Among the explosion of AI systems, AI web browsers such as Fellou and Comet from Perplexity have begun to make appearances on the corporate desktop. Such applications are described as the next evolution of the humble browser, and come with AI features built in; they can read and summarise web pages – and, at their most advanced – act on web content autonomously.

In theory, at least, the promise of an AI browser is that it will speed up digital workflows, undertake online research, and retrieve information from internal sources and the wider internet.

However, security research teams are concluding that AI browsers introduce serious risks into the enterprise that simply can’t be ignored.

The problem lies in the fact that AI browsers are highly vulnerable to indirect prompt injection attacks. These are where the model in the browser (or accessed via the browser) receives instructions hidden in specially-crafted websites. By embedding text into web pages or images in ways humans find difficult to discren, AI models can be fed instructions in the form of AI prompts, or amendments to prompts that are input by the user.

The bottom line for IT departments and decision-makers is that AI browsers are not yet suitable for use in the enterprise, and represent a significant security threat.

Automation meets exposure

In tests, researchers discovered that embedded text in online content is processed by the AI browser and is interpreted as instructions to the smart model. These instructions can be executed using the user’s privileges, so the greater the degree of access to information that the user has, the greater the risk to the organisation. The autonomy that AI gives users is the same mechanism that magnifies the attack surface, and the more autonomy, the greater the potential scope for data loss.

For example, it’s possible to embed text commands into an image that, when displayed in the browser, could trigger an AI assistant to interact with sensitive assets, like corporate email, or online banking dashboards. Another test showed how an AI assistant’s prompt can be hijacked and made to perform unauthorised actions on the behalf of the user.

These types of vulnerabilities clearly go against all principles of data governance, and are the most obvious example of how ‘shadow AI’ in the form of an unauthorised browser, poses a real threat to an organisation’s data. The AI model acts as a bridge between domains, and circumvents same-origin policies – the rule that prevents the access of data from one domain by another.

Implementation and governance challenges

The root of the problem is the merging of user queries in the browser with live data accessed on the web. If the LLM can’t distinguish between safe and malicious input, then it can blithely access data not requested by its human operator and act on it. When given agentic abilities, the consequences can be far-reaching, and could easily cause a cascade of malicious activity across the enterprise.

For any organisation that relies on data segmentation and access control, a compromised AI layer in a user’s browser can circumvent firewalls, enact token exchanges, and use secure cookies in exactly the same way that a user might. Effectively, the AI browser becomes an insider threat, with access to all the data and facility of its human operator. The browser user will not necessarily be aware of activity ‘under the hood,’ so an infected browser may act for significant periods of time without detection.

Threat mitigation

The first generation of AI browsers should be regarded by IT teams in the same way they treat unauthorised installation of third-party software. While it is relatively easy to prevent specific software being installed by users, it’s worth noting that mainstream browsers such as Chrome and Edge are shipping with increased numbers of AI features in the form of Gemini (in Chrome) and Copilot (in Edge). The browser-producing companies are actively exploring AI-augmented browsing capabilities, and agentic features (that grant significant autonomy to the browser) will be quick to appear, driven by the need for competitive advantage between browser companies.

Without proper oversight and controls, organisations are opening themselves to significant risk. Future generations of browsers should be checked for the following features:

To date, no browser vendor has presented a smart browser with the ability to distinguish between user-driven intent, and model-interpreted commands. Without this, browsers may be coerced to act against the organisation by the use of relatively trivial prompt injection.

Decision-maker takeaway

Agentic AI browsers are presented as the next logical evolution in web browsing and automation in the workplace. They are designed deliberately to blur the distinction between user/human activity and become part of interactions with the enterprise’s digital assets. Given the ease with which the LLMs in AI browsers are circumvented and corrupted, the current generation of AI browsers can be regarded as dormant malware.

The major browser vendors look set to embed AI (with or without agentic abilities) into future generations of their platforms, so careful monitoring of each release should be undertaken to ensure security oversight.

(Image source: “Unexploded bomb!” by hugh llewelyn is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI browsers are a significant security threat appeared first on AI News.

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

AI浏览器 网络安全 提示注入 企业安全 AI Browser Security Prompt Injection Enterprise Security AI
相关文章