Microsoft AI News 09月20日
Azure AI Foundry:构建企业级AI代理的安全与治理蓝图
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

本文是“Agent Factory”系列博客的第六篇,深入探讨了企业如何通过Azure AI Foundry构建值得信赖的AI代理。面对日益增长的AI应用和潜在风险,企业需要一个分层、系统化的方法来确保AI的安全、合规和可控。文章提出了一个包含身份识别、数据保护、内置控制、风险评估和持续监控的信任蓝图,并详细介绍了Azure AI Foundry如何通过Entra Agent ID、Prompt Shields、风险评估工具、网络隔离以及与Microsoft Purview和Defender等工具的集成,来支持企业实现这一蓝图,从而加速AI从试点到生产的落地。

🛡️ **构建信任的AI代理蓝图:** 文章强调,随着AI代理从原型走向核心业务系统,信任成为企业AI面临的关键挑战。Azure AI Foundry提供了一个分层的安全、安全和治理流程,帮助企业构建可信赖的AI代理,其核心在于整合身份识别、数据保护、内置控制、风险评估和持续监控。

🔑 **关键要素:独特的身份与内置控制:** 每个AI代理都应拥有唯一的“Entra Agent ID”,以便追踪和管理,防止代理失控增长(agent sprawl)。同时,内置的防护措施至关重要,包括跨提示注入分类器(能扫描提示文档、工具响应、邮件触发器等)、防止不当工具调用、高风险操作、敏感数据丢失的控制,以及危害和风险过滤器、事实核查(groundedness checks)和受保护内容检测。

🧪 **风险评估与持续监控:** 在部署前后及生产过程中,持续的评估是必不可少的。Azure AI Foundry支持运行危害和风险检查、事实核查评分以及受保护内容扫描。通过Azure AI Red Teaming Agent和PyRIT工具包,可以模拟对抗性提示,在AI代理上线前发现并加固潜在漏洞,确保其韧性。

🔒 **数据主权与合规:** Azure AI Foundry允许企业将其自有Azure资源(如文件存储、搜索、对话历史)集成,确保数据在租户边界内处理,并受企业自身的安全、合规和治理控制。此外,与Microsoft Purview的集成,使得AI代理能够遵守Purview的敏感度标签和DLP策略,确保数据保护贯穿AI输出。

🌐 **集成与实践:** Azure AI Foundry通过与Microsoft Defender XDR集成,将警报和建议直接呈现在代理环境中,便于开发人员和管理员识别问题,安全运营中心团队也能利用现有工作流程进行调查。文章还提到了与Credo AI和Saidot等治理协作工具的集成,以映射评估结果至EU AI Act和NIST AI RMF等框架,展示负责任的AI实践和法规遵从性。

Azure AI Foundry brings together security, safety, and governance in a layered process enterprises can follow to build trust in their agents.

This blog post is the sixth out of a six-part blog series called Agent Factory which shares best practices, design patterns, and tools to help guide you through adopting and building agentic AI.

Trust as the next frontier

Trust is rapidly becoming the defining challenge for enterprise AI. If observability is about seeing, then security is about steering. As agents move from clever prototypes to core business systems, enterprises are asking a harder question: how do we keep agents safe, secure, and under control as they scale?

The answer is not a patchwork of point fixes. It is a blueprint. A layered approach that puts trust first by combining identity, guardrails, evaluations, adversarial testing, data protection, monitoring, and governance.

Why enterprises need to create their blueprint now

Across industries, we hear the same concerns:

  • CISOs worry about agent sprawl and unclear ownership.
  • Security teams need guardrails that connect to their existing workflows.
  • Developers want safety built in from day one, not added at the end.

These pressures are driving the shift left phenomenon. Security, safety, and governance responsibilities are moving earlier into the developer workflow. Teams cannot wait until deployment to secure agents. They need built-in protections, evaluations, and policy integration from the start.

Data leakage, prompt injection, and regulatory uncertainty remain the top blockers to AI adoption. For enterprises, trust is now a key deciding factor in whether agents move from pilot to production.

What safe and secure agents look like

From enterprise adoption, five qualities stand out:

  • Unique identity: Every agent is known and tracked across its lifecycle.
  • Data protection by design: Sensitive information is classified and governed to reduce oversharing.
  • Built-in controls: Harm and risk filters, threat mitigations, and groundedness checks reduce unsafe outcomes.
  • Evaluated against threats: Agents are tested with automated safety evaluations and adversarial prompts before deployment and throughout production.
  • Continuous oversight: Telemetry connects to enterprise security and compliance tools for investigation and response.

These qualities do not guarantee absolute safety, but they are essential for building trustworthy agents that meet enterprise standards. Baking these into our products reflects Microsoft’s approach to trustworthy AI. Protections are layered across the model, system, policy, and user experience levels, continuously improved as agents evolve.

How Azure AI Foundry supports this blueprint

Azure AI Foundry brings together security, safety, and governance capabilities in a layered process enterprises can follow to build trust in their agents.

  • Entra Agent ID
    Coming soon, every agent created in Foundry will be assigned a unique Entra Agent ID, giving organizations visibility into all active agents across a tenant and helping to reduce shadow agents.
  • Agent controls
    Foundry offers industry first agent controls that are both comprehensive and built in. It is the only AI platform with a cross-prompt injection classifier that scans not just prompt documents but also tool responses, email triggers, and other untrusted sources to flag, block, and neutralize malicious instructions. Foundry also provides controls to prevent misaligned tool calls, high risk actions, and sensitive data loss, along with harm and risk filters, groundedness checks, and protected material detection.
  • Risk and safety evaluations
    Evaluations provide a feedback loop across the lifecycle. Teams can run harm and risk checks, groundedness scoring, and protected material scans both before deployment and in production. The Azure AI Red Teaming Agent and PyRIT toolkit simulate adversarial prompts at scale to probe behavior, surface vulnerabilities, and strengthen resilience before incidents reach production.
  • Data control with your own resources
    Standard agent setup in Azure AI Foundry Agent Service allows enterprises to bring their own Azure resources. This includes file storage, search, and conversation history storage. With this setup, data processed by Foundry agents remains within the tenant’s boundary under the organization’s own security, compliance, and governance controls.
  • Network isolation
    Foundry Agent Service supports private network isolation with custom virtual networks and subnet delegation. This configuration ensures that agents operate within a tightly scoped network boundary and interact securely with sensitive customer data under enterprise terms.
  • Microsoft Purview
    Microsoft Purview helps extend data security and compliance to AI workloads. Agents in Foundry can honor Purview sensitivity labels and DLP policies, so protections applied to data carry through into agent outputs. Compliance teams can also use Purview Compliance Manager and related tools to assess alignment with frameworks like the EU AI Act and NIST AI RMF, and securely interact with your sensitive customer data under your terms.
  • Microsoft Defender
    Foundry surfaces alerts and recommendations from Microsoft Defender directly in the agent environment, giving developers and administrators visibility into issues such as prompt injection attempts, risky tool calls, or unusual behavior. This same telemetry also streams into Microsoft Defender XDR, where security operations center teams can investigate incidents alongside other enterprise alerts using their established workflows.
  • Governance collaborators
    Foundry connects with governance collaborators such as Credo AI and Saidot. These integrations allow organizations to map evaluation results to frameworks including the EU AI Act and the NIST AI Risk Management Framework, making it easier to demonstrate responsible AI practices and regulatory alignment.

Blueprint in action

From enterprise adoption, these practices stand out:

  1. Start with identity. Assign Entra Agent IDs to establish visibility and prevent sprawl.
  2. Built-in controls. Use Prompt Shields, harm and risk filters, groundedness checks, and protected material detection.
  3. Continuously evaluate. Run harm and risk checks, groundedness scoring, protected material scans, and adversarial testing with the Red Teaming Agent and PyRIT before deployment and throughout production.
  4. Protect sensitive data. Apply Purview labels and DLP so protections are honored in agent outputs.
  5. Monitor with enterprise tools. Stream telemetry into Defender XDR and use Foundry observability for oversight.
  6. Connect governance to regulation. Use governance collaborators to map evaluation data to frameworks like the EU AI Act and NIST AI RMF.

Proof points from our customers

Enterprises are already creating security blueprints with Azure AI Foundry:

  • EY uses Azure AI Foundry’s leaderboards and evaluations to compare models by quality, cost, and safety, helping scale solutions with greater confidence.
  • Accenture is testing the Microsoft AI Red Teaming Agent to simulate adversarial prompts at scale. This allows their teams to validate not just individual responses, but full multi-agent workflows under attack conditions before going live.

Learn more

Did you miss these posts in the Agent Factory series?

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

Azure AI Foundry AI安全 AI治理 企业AI AI代理 信任AI Azure Microsoft Entra Agent ID Prompt Injection Data Protection Risk Assessment Compliance Azure AI Agent Factory Responsible AI AI Foundry Security Safety Governance Enterprise AI AI Agents Trustworthy AI
相关文章