Microsoft is integrating AI agents into its core platform, exemplified by Microsoft Foundry and Agent 365. This shift makes zero-trust security principles central to agent design, especially around identity, permissions, and network access: agents are increasingly treated as part of production infrastructure and must be secured accordingly. Zero trust means trusting nothing by default, whether users, models, tools, or external content.

Microsoft Foundry offers capabilities such as AI Gateway and Prompt Shields to help implement a zero-trust model, creating an environment where every action is checked, limited, logged, and tied to a verified identity. The article illustrates the risks with real-world scenarios, such as refund agents and browsing agents, which are exposed to prompt injection and financial abuse. To counter these threats, Microsoft provides scoped permissions, a governed tools catalog, and policy enforcement. Prompt Shields and AI Gateway defend against prompt injection attacks arriving through both user prompts and web content. Under a zero-trust model, the platform decides which tools execute and how, rather than trusting the model to do so.
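The core zero-trust idea above, that every tool call is denied by default unless it is tied to a verified identity and an explicitly granted scope, can be sketched as follows. This is an illustrative sketch only, not the Microsoft Foundry or AI Gateway API; all names (`AgentIdentity`, `ToolCall`, `gate_tool_call`, the scope strings) are hypothetical.

```python
# Hypothetical zero-trust gate for agent tool calls: deny by default,
# allow only verified identities with the required scope, log everything.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

@dataclass
class AgentIdentity:
    agent_id: str
    verified: bool                      # e.g. backed by a platform credential
    scopes: set = field(default_factory=set)

@dataclass
class ToolCall:
    tool: str
    required_scope: str
    args: dict

def gate_tool_call(identity: AgentIdentity, call: ToolCall) -> bool:
    """Execute a tool only if the identity is verified AND holds the scope."""
    if not identity.verified:
        log.warning("DENY %s: unverified identity %s", call.tool, identity.agent_id)
        return False
    if call.required_scope not in identity.scopes:
        log.warning("DENY %s: missing scope %s", call.tool, call.required_scope)
        return False
    log.info("ALLOW %s for %s", call.tool, identity.agent_id)
    return True

# A refund agent scoped to read orders, but never granted refund issuance:
agent = AgentIdentity("refund-agent-01", verified=True, scopes={"orders:read"})
print(gate_tool_call(agent, ToolCall("read_order", "orders:read", {"id": 1})))    # True
print(gate_tool_call(agent, ToolCall("issue_refund", "refund:issue", {"id": 1}))) # False
```

The key design choice is that the gate sits in the platform, outside the model: even if a prompt injection convinces the model to request `issue_refund`, the call fails because the scope was never granted.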
