Information Age · September 29
Beware of AI risks inside your organisation

As AI adoption spreads, employees' unauthorised use of external AI tools, known as 'shadow AI', has become one of the top five emerging risks facing organisations worldwide. These tools can cause data leaks and compliance problems, because employees may unknowingly feed sensitive information into public AI platforms. Businesses must set clear AI policies, strengthen employee training, and use tools such as Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB) to monitor and prevent unauthorised AI use. The key is a proactive, AI-first strategy: explicitly defining allowed and prohibited tools, embedding AI reviews into third-party procurement, and raising employees' AI literacy.

🔍 Shadow AI refers to employees using external AI tools without authorisation; such tools may carry security vulnerabilities that lead to data leakage and compliance risk.

📊 According to Gartner, 79 per cent of cybersecurity leaders suspect employees are misusing approved generative AI tools, and 69 per cent report that prohibited tools are still in use, pointing to serious gaps in how organisations oversee AI usage.

🛡️ To counter the shadow AI threat, businesses need clear AI governance policies, including defining which tools are allowed and which are prohibited, and setting AI-specific data handling rules.

🧠 Raising employees' AI literacy is essential: educate staff on real-world risks and teach them to innovate operational processes responsibly, not just efficiently.

🔧 Deploying monitoring tools such as Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB) helps businesses detect and prevent unauthorised AI use and protect sensitive data.

By Jon Bance on Information Age - Insight and Analysis for the CTO

When it comes to security risks, the era of AI presents a new battlefield for data breaches and leaks. However, while AI is undoubtedly playing a major part in cyberattacks and defence alike, it's the hidden, unregulated tools within your organisation that may pose an equally significant data loss risk. Unsanctioned employee use of external AI tools, dubbed 'shadow AI', has, like shadow IT before it, become one of the top five emerging risks facing organisations globally, according to Gartner's Quarterly Emerging Risk Report.

This isn't a distant scenario: your teams are already using unsecured tools to boost productivity, regardless of existing IT policies.

This ultimately leads your organisation straight to critical data exposure. Nefarious cyber actors don’t need to steal sensitive data when your employees are giving it away to publicly accessible tools. Businesses must urgently implement AI policies and focus on training their workforce, not just to effectively capitalise on new technologies, but also to mitigate risks currently being introduced to their network.

The hidden threat of shadow AI

When an organisation doesn't have an approved framework of AI tools in place, employees will commonly turn to whichever applications they can find for everyday tasks. By now, everyone is aware of generative AI, whether they actively use it or not, but without a proper ruleset in place, routine employee actions can quickly become security nightmares.

This can be anything from employees pasting sensitive client information or proprietary code into public generative AI tools, to developers downloading promising open-source models from unverified repositories. Third-party vendors are already quietly integrating AI-boosted features into software your teams use, without formal notification. From a security perspective, individuals and entire teams alike are choosing to integrate custom AI solutions to solve immediate problems, bypassing company cybersecurity reviews entirely.

The numbers agree. Gartner’s recent 2025 Cybersecurity Innovations in AI Risk Management and Use survey highlighted that 79 per cent of cybersecurity leaders suspect employees are misusing approved GenAI tools, and yet 69 per cent reported that prohibited tools are still being used anyway. Perhaps most alarmingly, 52 per cent believe custom AI is being built without any risk checks, a recipe for intellectual property leakage and severe compliance breaches.

Most organisations lack awareness

The root cause of turning to shadow AI isn't malicious intent. Unlike cyber actors aiming to disrupt and exploit weaknesses in business infrastructure for a hefty payout, employees aren't intentionally leaking data outside your organisation. AI is simply an accessible, powerful tool that many find exciting. In the absence of clear policies, training and oversight, and under growing pressure to deliver faster and at greater scale, people will naturally seek the most effective support to get the job done.

Teams are constantly being pushed to increase output and efficiency. Companies may trust their employees to perform, but that trust doesn't always come with clear AI governance or visibility of access for IT teams. Yet even with more prohibitive policies in place, employees will still find workarounds to meet their targets. Shadow AI isn't just a technology problem; it's a problem of process and culture as well.

Building a proactive AI-first strategy

A balanced, strategic approach to these challenges requires more than direction from your IT team; it must come directly from the C-suite. Codifying your AI governance policies should be a priority: you cannot manage what you haven't defined. Establish clear, practical rules for which tools are acceptable in your organisation and which aren't, including AI-specific data handling rules, and embed AI reviews into third-party procurement.
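To make the policy point concrete, governance rules become enforceable once they are machine-readable. The sketch below is a minimal, hypothetical illustration in Python: the tool names, data classes and the check_tool helper are all assumptions invented for the example, not a reference to any real product list or standard.

```python
# Minimal sketch of a machine-readable AI tool policy.
# Tool names and data classes are illustrative assumptions, not recommendations.

APPROVED_TOOLS = {
    "internal-copilot": {"data_classes": {"public", "internal"}},
    "vendor-llm-enterprise": {"data_classes": {"public"}},
}
PROHIBITED_TOOLS = {"free-public-chatbot", "unverified-oss-model"}

def check_tool(tool: str, data_class: str) -> str:
    """Return a policy decision for using `tool` with data of `data_class`."""
    if tool in PROHIBITED_TOOLS:
        return "deny: tool is prohibited"
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        # Unknown tools route to the procurement / AI review process.
        return "escalate: tool not yet reviewed"
    if data_class not in policy["data_classes"]:
        return f"deny: {data_class} data not permitted in {tool}"
    return "allow"

if __name__ == "__main__":
    print(check_tool("internal-copilot", "internal"))         # allow
    print(check_tool("vendor-llm-enterprise", "confidential"))  # deny
    print(check_tool("new-ai-plugin", "public"))              # escalate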

Regardless, you cannot protect against what you can’t see. Tools like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), which detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensuring these alerts connect directly to your SIEM and defining clear processes for escalation and correction are also key for maximum security.
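As a rough illustration of what such monitoring involves, the sketch below scans a web-proxy log for requests to known generative AI domains and emits alert records that could be forwarded to a SIEM. The domain list, log format and forward_to_siem stub are assumptions made for the example; commercial DLP and CASB products implement this far more robustly.

```python
# Hedged sketch: flag proxy-log requests to generative AI domains for SIEM review.
# The domain list and CSV log format are illustrative assumptions.
import csv
import json

GENAI_DOMAINS = {"chat.example-ai.com", "api.example-llm.com"}  # hypothetical

def forward_to_siem(alert: dict) -> None:
    # Stub: a real deployment would post this to the SIEM's ingestion endpoint.
    print(json.dumps(alert))

def scan_proxy_log(path: str) -> None:
    """Read a CSV proxy log (timestamp,user,domain,bytes_out) and raise alerts."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in GENAI_DOMAINS:
                forward_to_siem({
                    "rule": "unsanctioned-genai-access",
                    "user": row["user"],
                    "domain": row["domain"],
                    "bytes_out": int(row["bytes_out"]),  # large uploads matter most
                    "timestamp": row["timestamp"],
                })

if __name__ == "__main__":
    scan_proxy_log("proxy.log")  # hypothetical log file
```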

AI literacy must develop in tandem with this, integrated directly into company culture. This means educating teams on real-world risks and on how to innovate operational processes responsibly, not just efficiently. The most effective way to combat shadow AI in your organisation is to provide a better, safer and more secure alternative. Fostering a collaborative culture that openly shares AI best practice is also essential: don't just say 'no' to public tools; provide an avenue of 'yes, and here's how you do it securely.'
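One way to operationalise 'yes, and here's how you do it securely' is a sanctioned path that strips obvious sensitive patterns before text ever leaves the network. The sketch below is a minimal, assumption-laden illustration; the regex rules are examples only and would need tuning to your organisation's own data classes.

```python
# Minimal sketch: redact common sensitive patterns before text reaches any
# external AI tool. Patterns are illustrative, not a complete DLP ruleset.
import re

REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[REDACTED_CARD]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_API_KEY]"),
]

def redact(text: str) -> str:
    """Apply each redaction rule in turn and return the sanitised text."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Contact jane.doe@corp.com, api_key=sk_live_123abc"
    print(redact(sample))
    # -> "Contact [REDACTED_EMAIL], [REDACTED_API_KEY]"
```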

The first step is assessing readiness

A professional readiness assessment must be your first step: it identifies the gaps in your organisation and charts a path to building the right, resilient foundation. This includes an overview of your current technology and AI environment, including any hidden risks, and a review of existing policies and monitoring capabilities. Prioritising AI use cases that can deliver tangible value without compromising control is key.

Building an AI roadmap that balances innovation with governance and security is critical before opening the floodgates and bringing shadow AI into the light. When it comes to new and emerging technologies, your business mindset shouldn't just be about what these tools can do, but about how you can best control them within your organisation.

Jon Bance is chief operating officer at Leading Resolutions.

Read more

Only 22% of IT staff fully understand capabilities of AI tools – AI is being explored across multiple sectors, but IT staff surveyed by SolarWinds were found to be struggling to use the tools to their full capability

Why knowledge is the ultimate weapon in the Information Age – Learn how to build a human knowledge-first approach to AI, so that your organisation can run on the best information possible

The post Are you really ready for AI? Exposing shadow tools in your organisation appeared first on Information Age.

