TechCrunch News · September 18
AI security firm Irregular raises $80 million, focusing on emerging risks

AI security firm Irregular recently announced $80 million in funding, led by Sequoia Capital and Redpoint Ventures. The company focuses on identifying and addressing emerging risks and behaviors in AI models. Irregular tests models intensively in elaborate simulated environments, including network simulations in which AI plays both attacker and defender, to evaluate a model's security defenses. Its SOLVE framework is widely used in the industry to score a model's vulnerability-detection ability and has been applied in security evaluations of Claude 3.7 Sonnet and OpenAI's o3 and o4-mini models. The new funding will support Irregular's more ambitious goal: identifying potential risks before they spread widely, in response to growing AI security challenges.

🛡️ **AI security firm Irregular raises $80 million**: The round, led by Sequoia Capital and Redpoint Ventures, underscores the market's intense focus on AI security. The funding will support Irregular in further developing its AI security evaluation technology and capabilities.

🚀 **Focused on identifying and addressing emerging risks**: Irregular's core goal is to identify and anticipate risks and anomalous behaviors that have not yet surfaced in AI models deployed in the real world. The company builds elaborate simulated environments in which AI plays both attacker and defender, uncovering potential security vulnerabilities before a model is released.

🔧 **An industry-leading evaluation framework in wide use**: Irregular's SOLVE framework is an industry-recognized methodology for scoring a model's vulnerability-detection ability, and it has been applied in security evaluations of prominent AI models, including Claude 3.7 Sonnet and OpenAI's o3 and o4-mini. This attests to Irregular's expertise and influence in AI security evaluation.

🌐 **Addressing the security challenges of human-AI and AI-AI interaction**: As interactions between humans and AI, and among AI systems themselves, multiply, existing security stacks face new challenges. Irregular was founded to address the security risks created by rapidly advancing AI capabilities and to keep AI applications secure.

On Wednesday, AI security firm Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round valued Irregular at $450 million.

“Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction,” co-founder Dan Lahav told TechCrunch, “and that’s going to break the security stack along multiple points.”

Formerly known as Pattern Labs, Irregular is already a significant player in AI evaluations. The company’s work is cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI’s o3 and o4-mini models. More generally, the company’s framework for scoring a model’s vulnerability-detection ability (dubbed SOLVE) is widely used within the industry.

While Irregular has done significant work on models’ existing risks, the company is fundraising with an eye towards something even more ambitious: spotting emergent risks and behaviors before they surface in the wild. The company has constructed an elaborate system of simulated environments, enabling intensive testing of a model before it is released.

“We have complex network simulations where we have AI both taking the role of attacker and defender,” says co-founder Omer Nevo. “So when a new model comes out, we can see where the defenses hold up and where they don’t.”

Security has become a point of intense focus for the AI industry as the potential risks posed by frontier models have grown and new threats have emerged. OpenAI overhauled its internal security measures this summer, with an eye towards potential corporate espionage.

At the same time, AI models are increasingly adept at finding software vulnerabilities — a power with serious implications for both attackers and defenders.

For the Irregular founders, it’s the first of many security headaches caused by the growing capabilities of large language models.

“If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models,” Lahav says. “But it’s a moving target, so inherently there’s much, much, much more work to do in the future.”

