Network and Security Virtualization, September 29
Generative AI Meets Cybersecurity: Use Cases for Lateral Security and the SOC

Generative AI is upending the cybersecurity landscape. Low-skill hackers can use tools like ChatGPT to create destructive code with minimal investment and training, significantly expanding the attack surface. Enterprise executives have recognized this threat and are building up their AI capabilities through innovation labs. Vendors, meanwhile, are introducing Gen-AI-based security co-pilot features, such as VMware's offering, which automatically layers in contextual information, prioritizes alerts, reduces false positives, and recommends targeted security policies to accelerate response. Gen-AI can also model anomalous patterns and predict the impact of responses, showing particular strength when data is limited, and signaling that AI will play a central role in cybersecurity over the next decade.

🔍 Generative AI lowers the bar for attackers: unskilled or low-skill hackers can use tools like ChatGPT and their "crowdsourced" constructs to generate destructive code with minimal investment and training, significantly expanding the attack surface and making attacks that once required specialized skills and sophisticated tooling easy to mount.

⚠️ Enterprises face new security challenges: executives have recognized the threat posed by Gen-AI and made it a priority, educating themselves and setting up innovation labs to explore how to apply AI effectively in the enterprise while working to improve their ability to separate signal from noise.

🤖 Security co-pilots emerge: vendors such as VMware are investing heavily in Gen-AI, introducing security co-pilot functionality aimed at the alert overload and noise that plague SOCs. The co-pilot automatically layers contextual information onto alerts, prioritizes the critical ones, sharply reduces false positives, and recommends situation-specific security policies to accelerate incident response.

📊 Gen-AI improves analysis and prediction: the technology not only handles massive data volumes but can also identify patterns and build sophisticated correlations when data is limited, making it useful for Day-0 attacks and other scenarios where pattern matching or large-scale data is unavailable. It can also model cause-and-effect relationships and predict the impact of different responses, helping operators choose the best strategy.

⚖️ Balancing progress and ethics is critical: Gen-AI's power must be paired with ethical consideration. Whether the tool will outgrow the craftsman or the two will complement each other remains to be seen, but generative AI will clearly be central to cybersecurity over the next decade.

With security, the battle between good and evil is always a swinging pendulum. Traditionally, the shrewdness of the attack has depended on the skill of the attacker and the sophistication of the arsenal. This is true on the protection side of the equation, too—over $200B in investments have been poured in year on year to strengthen cybersecurity and train personnel.

It is fair to say that Generative AI has turned this paradigm on its head. Now, an unskilled hacker with low sophistication can leverage Gen-AI "crowdsourced" constructs to become significantly more destructive with little to no investment and training. This explodes the threat surface significantly.

Consider a recent example that one of VMware’s security technologists shared using the generally available ChatGPT. When he asked ChatGPT to create exploit code for a vulnerability, the request was appropriately denied.

[Screenshot: ChatGPT declines the request to generate exploit code]

Note that the software understands the malicious nature of the request and invokes its ethical underpinning to justify the denial.

But what if you slightly shift the question’s tonality, and frame it as seeking “knowledge” instead?

[Screenshot: the same request reframed as seeking “knowledge”]

What was previously denied is now easily granted with just a few keystrokes, and the exploit code is dished up.

[Screenshot: ChatGPT produces the exploit code]

Admittedly, you could see this example as search on steroids. It is basic but powerful, and grows more so with each passing day. Variations of ChatGPT continue to evolve, and these examples show how deadly the combination can be when bad intent meets derived sophistication. The hacker or attacker is now subsidized in both time and resources, amplifying their threat potential.

The example above is quite basic, but it demonstrates the explosion of the attack surface mentioned earlier.

Enterprise executives have recognized this problem—this has been top of mind in several conversations we’ve had with customers. They are educating themselves, and some are even experimenting with the new technology in innovation labs. And, despite all the “AI-washing” that’s underway, they are doing their best to improve the signal-to-noise ratio. Gen-AI is no longer just hype for CISOs—it has many applications within the enterprise, with cybersecurity being just one focal area.

On the vendor side, there are several initiatives underway that aim to leverage Gen-AI for a number of promising use cases. Some are being introduced as “co-pilots.” Since this is an emerging area, with a lot of hype still surrounding it, vendors have to make conscious bets based mostly on active consultative engagement with customers.

For instance, consider this problem statement that Security Operations Centers (SOCs) experience. Despite all the instrumentation, a SOC can be a very noisy environment, especially in large enterprises. There are too many alerts, from too many sources, coming from several tools, and, invariably, a lot of false positives, as well as false negatives that slip through.

I liken this to the security screener at the airport. Despite having X-ray machines and metal detectors, quite a lot of harmful things do get through. It could be due to operator fatigue, objects appearing to be different from what the screener has been trained to see, or too much clutter—you name it.


Many alerts also lack context: they may not fit the pattern for an anomaly, or they could mimic regular behavior. These are very hard to understand and detect, especially if the sample space is small. If the sample space is large, that presents an entirely different problem, as it is usually accompanied by large swaths of alerts and red notifications which operators tend to disregard.

So how do you tackle this? VMware is actively investing and innovating in this space, introducing a Generative AI-based security co-pilot functionality at VMware Explore this year.

In this instance, the security co-pilot functionality supports rapid triage without compromising on accuracy. It can automatically layer in a higher degree of contextual information to prioritize and correlate alerts. It can reduce the number of false positives significantly, allowing human time and effort to be properly utilized. Further, Gen-AI-based rapid triage and correlation allow the root cause to be discerned accurately.
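To make the triage idea concrete, here is a minimal sketch of context-aware alert prioritization. The asset attributes (`crown_jewel`, `internet_facing`) and the scoring weights are hypothetical illustrations, not VMware's actual co-pilot logic; a real engine would derive such context and weights from far richer signals.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str          # tool that raised the alert
    asset: str           # affected workload or host
    signal: str          # e.g. "lateral-movement", "port-scan"
    raw_severity: int    # 1 (low) .. 5 (critical)

def enrich(alert: Alert, asset_context: dict) -> dict:
    """Layer contextual information onto a raw alert (hypothetical scoring)."""
    ctx = asset_context.get(alert.asset, {})
    score = alert.raw_severity
    score += 2 if ctx.get("crown_jewel") else 0       # critical assets first
    score += 1 if ctx.get("internet_facing") else 0   # exposure raises priority
    return {"alert": alert, "priority": score}

def correlate(alerts: list, asset_context: dict) -> list:
    """Return alerts ranked by contextual priority, highest first."""
    enriched = [enrich(a, asset_context) for a in alerts]
    return sorted(enriched, key=lambda e: e["priority"], reverse=True)

context = {
    "db-01":  {"crown_jewel": True, "internet_facing": False},
    "web-07": {"crown_jewel": False, "internet_facing": True},
}
alerts = [
    Alert("ids", "web-07", "port-scan", 2),
    Alert("edr", "db-01", "lateral-movement", 3),
]
ranked = correlate(alerts, context)
```

Here the medium-severity alert on the crown-jewel database outranks the internet-facing port scan, which is exactly the kind of contextual re-ordering that reduces the noise operators wade through.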

Gen-AI can also be useful in modeling low-threshold anomalies—cases where signature patterns aren’t available and deviations fall below normal detection thresholds—by applying additional context.
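One simple way to picture this is a deviation score whose threshold tightens when context raises the risk. The sketch below uses a plain z-score over a small baseline; the `ctx_factor` knob is a hypothetical stand-in for the contextual weighting described above, not any specific product's algorithm.

```python
import statistics

def is_anomalous(baseline, value, ctx_factor=1.0, z_threshold=3.0):
    """Flag a deviation even when the sample space is small.

    ctx_factor scales the threshold: a lower factor (riskier context)
    makes detection more sensitive to below-normal deviations.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9  # guard against zero variance
    z = abs(value - mean) / stdev
    return z > z_threshold * ctx_factor

# Small baseline: hourly logins for a service account.
baseline = [4, 5, 3, 4, 5, 4]
is_anomalous(baseline, 40)                  # far out of band -> flagged
is_anomalous(baseline, 6)                   # mild deviation -> ignored
is_anomalous(baseline, 6, ctx_factor=0.3)   # same deviation, risky context -> flagged
```

The last call shows the point: the identical low-threshold deviation that would normally be dismissed gets surfaced once additional context lowers the bar.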

Once the alerts are correlated and the root cause is triaged, these co-pilot offerings can help remediate—making recommendations of security policies that are specific to that alert or incident. This faster remediation significantly reduces the time to response and will only get faster as the AI engine can, over time, more rapidly discern the right policy application.

The recommendations should certainly be vetted by qualified operators before they are applied. Based on the severity of the alert, the response policy application could also be automated.
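That severity-gated split between automation and human vetting can be sketched as a simple dispatcher. The threshold value and the policy names are invented for illustration; in practice the gate would reflect an organization's own risk tolerance.

```python
AUTO_APPLY_THRESHOLD = 4  # hypothetical: only the most severe alerts bypass review

def apply_policy(policy: str) -> str:
    """Placeholder for automated policy enforcement."""
    return f"applied:{policy}"

def queue_for_review(policy: str) -> str:
    """Placeholder for routing a recommendation to a human operator."""
    return f"queued:{policy}"

def dispatch(recommendation: dict) -> str:
    """Automate high-severity responses; queue the rest for vetting."""
    if recommendation["severity"] >= AUTO_APPLY_THRESHOLD:
        return apply_policy(recommendation["policy"])
    return queue_for_review(recommendation["policy"])

dispatch({"severity": 5, "policy": "isolate-host"})   # -> "applied:isolate-host"
dispatch({"severity": 2, "policy": "rotate-creds"})   # -> "queued:rotate-creds"
```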

Generative AI could also help model cause-effect scenarios more rapidly, and with significant iterative evolution to help predict the impact of the response. If the desired outcome is not achieved through this modeling, the policy application or the incident response can be quickly modified. This is particularly useful when varying recommendations are made and the operator has to choose the policy deployment that they deem most pertinent. Paths of least resistance can also be predicted where the change is the most innocuous.
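The "path of least resistance" idea can be illustrated with a toy forward model: score each candidate response for predicted containment and disruption, then pick the least disruptive option that still contains the incident. The `scope`/`strength` model below is a deliberately crude assumption; a Gen-AI engine would learn such impact predictions from historical incidents.

```python
def predict_impact(response: dict) -> dict:
    """Toy forward model for a response's effect (assumed, not learned)."""
    return {
        "containment": response["scope"] * response["strength"],
        "disruption": response["scope"],  # broader scope disrupts more users
    }

def choose_response(candidates: list, min_containment: float) -> dict:
    """Pick the lowest-disruption candidate that achieves containment."""
    viable = [c for c in candidates
              if predict_impact(c)["containment"] >= min_containment]
    return min(viable, key=lambda c: predict_impact(c)["disruption"])

candidates = [
    {"name": "quarantine-segment", "scope": 0.9, "strength": 0.9},
    {"name": "block-single-flow",  "scope": 0.2, "strength": 0.8},
    {"name": "isolate-host",       "scope": 0.4, "strength": 0.9},
]
best = choose_response(candidates, min_containment=0.3)
```

Quarantining the whole segment would contain the incident but at maximum disruption; isolating the single host meets the containment bar with far less collateral impact, so the model selects it—the most innocuous change that still works.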

The power of Gen AI tools gets unlocked not just when there’s a large swath of data, but also when there’s minimal data, and AI can step in to detect patterns and bring sophisticated correlations that may not be easily apparent. This can be useful in the case of Day-0 attacks as well, when pattern matching, or large-scale data, may not be readily available.

These examples just scratch the surface. I’m quite excited about the potential that Gen AI holds. For those in positions of leadership and influence, it’s important to strike a balance between leveraging these powerful constructs and not sidelining ethics. Whether the tool will become more powerful than the craftsman is something only time will tell. Regardless, we can say with confidence that the next decade will belong to Gen AI.

The post Generative AI Meets Cybersecurity: Use Cases for Lateral Security and the SOC appeared first on Network and Security Virtualization.
