AI News · September 25
Generative AI adoption spreads across retail as security costs climb


📊 According to a report from cybersecurity firm Netskope, the retail sector has broadly adopted generative AI: 95% of organisations now use such applications, up sharply from 73% a year ago. The surge shows retailers racing to adopt AI rather than risk falling behind.

🔒 However, this AI boom carries serious security risk. By weaving AI tools into their operations, organisations have created a large new surface for cyberattacks and data leaks. The very ability of generative AI to process information is also its biggest weakness: sensitive data is being fed in, with company source code accounting for 47% of data policy violations and regulated data for 39%.

🔄 The sector is shifting from chaotic early adoption to a more controlled, corporate-led approach. Staff use of personal AI accounts has fallen from 74% to 36%, while use of company-approved GenAI tools has climbed from 21% to 52%, a sign that businesses recognise the danger of "shadow AI" and are trying to rein it in.

👑 On the retail desktop, ChatGPT remains king, used by 81% of organisations, but Google Gemini (60%) and Microsoft's Copilot tools (56% and 51%) are close behind. ChatGPT's popularity has dipped for the first time, while Microsoft 365 Copilot usage has surged, likely thanks to its deep integration with the productivity tools employees use every day.

🚫 In response, a growing number of retailers are banning apps they deem too risky; ZeroGPT is blocked by 47% of organisations because it stores user content and has even been caught redirecting data to third-party sites. This newfound caution is pushing retail towards more serious, enterprise-grade generative AI platforms.

🌐 These enterprise platforms (such as OpenAI via Azure and Amazon Bedrock, each used by 16% of retail companies) offer far greater control, letting companies host models privately and build their own custom tools. But they are no silver bullet; a simple misconfiguration could inadvertently connect a powerful AI directly to a company's core assets, creating the potential for a catastrophic breach.

🖥️ The threat is not limited to employees using AI in the browser: the report finds that 63% of organisations now connect directly to OpenAI's API, embedding AI deep into backend systems and automated workflows.

📱 This AI-specific risk is part of a wider, troubling pattern of poor cloud security hygiene. Attackers increasingly use trusted names to deliver malware, knowing employees are more likely to click a link from a familiar service. Microsoft OneDrive is the most common culprit, with 11% of retailers hit by malware from the platform each month, while developer hub GitHub is used in 9.7% of attacks.

📎 The long-standing problem of employees using personal apps at work keeps adding fuel to the fire. Social media sites such as Facebook and LinkedIn are present in nearly every retail environment (96% and 94% respectively), alongside personal cloud storage accounts. When employees upload files to personal apps, 76% of the resulting policy violations involve regulated data.

🛡️ For security leaders in retail, casual generative AI experimentation is over. Netskope's findings are a warning that organisations must act decisively: gain full visibility of all web traffic, block high-risk applications, and enforce strict data protection policies that control what information can be sent where.

The retail industry is among the leaders in generative AI adoption, but a new report highlights the security costs that accompany it.

According to cybersecurity firm Netskope, the retail sector has all but universally adopted the technology, with 95% of organisations now using generative AI applications. That’s a huge jump from 73% just a year ago, showing just how fast retailers are scrambling to avoid being left behind.

However, this AI gold rush comes with a dark side. As organisations weave these tools into the fabric of their operations, they are creating a massive new surface for cyberattacks and sensitive data leaks.

The report’s findings show a sector in transition, moving from chaotic early adoption to a more controlled, corporate-led approach. There’s been a shift away from staff using their personal AI accounts, which has more than halved from 74% to 36% since the beginning of the year. In its place, usage of company-approved GenAI tools has more than doubled, climbing from 21% to 52% in the same timeframe. It’s a sign that businesses are waking up to the dangers of “shadow AI” and trying to get a handle on the situation.

In the battle for the retail desktop, ChatGPT remains king, used by 81% of organisations. Yet, its dominance is not absolute. Google Gemini has made inroads with 60% adoption, and Microsoft’s Copilot tools are hot on its heels at 56% and 51% respectively. ChatGPT’s popularity has recently seen its first-ever dip, while Microsoft 365 Copilot’s usage has surged, likely thanks to its deep integration with the productivity tools many employees use every day.

Beneath the surface of retail's generative AI adoption lies a growing security nightmare. The very thing that makes these tools useful – their ability to process information – is also their biggest weakness. Retailers are seeing alarming amounts of sensitive data being fed into them.

The most common type of data exposed is the company’s own source code, making up 47% of all data policy violations in GenAI apps. Close behind is regulated data, like confidential customer and business information, at 39%.

In response, a growing number of retailers are simply banning apps they deem too risky. The app most frequently finding itself on the blocklist is ZeroGPT, with 47% of organisations banning it over concerns it stores user content and has even been caught redirecting data to third-party sites.

This newfound caution is pushing the retail industry towards more serious, enterprise-grade generative AI platforms from major cloud providers. These platforms offer far greater control, allowing companies to host models privately and build their own custom tools.

Both OpenAI via Azure and Amazon Bedrock are tied for the lead, with each being used by 16% of retail companies. But these are no silver bullets; a simple misconfiguration could inadvertently connect a powerful AI directly to a company’s crown jewels, creating the potential for a catastrophic breach.

The threat isn’t just from employees using AI in their browsers. The report finds that 63% of organisations are now connecting directly to OpenAI’s API, embedding AI deep into their backend systems and automated workflows.
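The report doesn't describe any particular integration, but as a minimal sketch, a backend that calls a GenAI API directly can at least strip obvious regulated data before a prompt leaves the network boundary. The payload shape below mirrors OpenAI's Chat Completions request body; the redaction patterns and model name are illustrative placeholders, not a complete DLP ruleset:

```python
import re

# Hypothetical pre-send filter: mask obvious regulated data (email
# addresses, card-like numbers) before a prompt is sent to an external
# API. The two patterns are illustrative, not an exhaustive DLP policy.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

def build_payload(user_prompt: str) -> dict:
    # Shape mirrors OpenAI's Chat Completions request body; the prompt
    # is redacted before it is ever placed in the outgoing payload.
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": redact(user_prompt)}],
    }

if __name__ == "__main__":
    payload = build_payload("Refund card 4111 1111 1111 1111 for jane@example.com")
    print(payload["messages"][0]["content"])
```

In a real deployment this filtering would typically sit in a dedicated DLP or secure web gateway layer rather than in application code, but the principle is the same: sensitive fields are masked before any third-party service sees them.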

This AI-specific risk is part of a wider, troubling pattern of poor cloud security hygiene. Attackers are increasingly using trusted names to deliver malware, knowing that an employee is more likely to click a link from a familiar service. Microsoft OneDrive is the most common culprit, with 11% of retailers hit by malware from the platform every month, while the developer hub GitHub is used in 9.7% of attacks.

The long-standing problem of employees using personal apps at work continues to pour fuel on the fire. Social media sites like Facebook and LinkedIn are used in nearly every retail environment (96% and 94% respectively), alongside personal cloud storage accounts. It’s on these unapproved personal services that the worst data breaches happen. When employees upload files to personal apps, 76% of the resulting policy violations involve regulated data.

For security leaders in retail, casual generative AI experimentation is over. Netskope’s findings are a warning that organisations must act decisively. It’s time to gain full visibility of all web traffic, block high-risk applications, and enforce strict data protection policies to control what information can be sent where.
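The "block high-risk applications" step above can be sketched as a simple gateway-style classifier: deny known risky GenAI domains, allow sanctioned platforms, and send everything unknown to a review queue. The domain lists here are illustrative assumptions, not Netskope policy or real blocklists:

```python
# Illustrative app-control check of the kind a secure web gateway
# applies to GenAI traffic. Domains below are placeholders.
BLOCKED = {"zerogpt.com"}                   # apps deemed too risky to allow
SANCTIONED = {"azure.com", "bedrock.aws"}   # company-approved platforms

def classify(domain: str) -> str:
    """Return the policy verdict for a destination domain."""
    domain = domain.lower().strip(".")
    if domain in BLOCKED:
        return "block"
    if domain in SANCTIONED:
        return "allow"
    return "review"  # unknown GenAI traffic goes to a review queue

if __name__ == "__main__":
    for d in ("ZeroGPT.com", "azure.com", "newchatapp.ai"):
        print(d, "->", classify(d))
```

A default-to-review posture matters here: blocklists alone lag behind new apps, so unrecognised traffic should be surfaced for inspection rather than silently allowed.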

Without adequate governance, the next innovation could easily become the next headline-making breach.


The post Generative AI in retail: Adoption comes at high security cost appeared first on AI News.
