AI agent delegation via MCP has gaps a Murderbot could walk through

The introduction of the Model Context Protocol (MCP), an open standard developed by Anthropic, has advanced the data-sharing capabilities of AI agents and the systems they interact with, but the question of how to secure these interactions against rogue agents and a host of other threats remains open.
Gluu Founder and CEO Michael Schwartz presented his vision for secure AI agent authorization in a talk titled “Golem to Murderbot: Challenges with Agentic Security Delegation via MCP” at the MCP Dev Summit 2026 in New York City.
The Hebrew story of the Golem from late antiquity raises the question of how an automated actor can be relied on to carry out the intent of the person who automates it. In the story, the Golem becomes unruly as it carries out its task. The "truth" (emet, the inscription that animates the Golem, and the equivalent for an AI agent of its mission) becomes more unstable with each change in its network context.
Fortunately, a "kill switch" is built into the Golem: erasing a single letter of the inscription turns emet ("truth") into met ("dead") and stops it.
In Murderbot, a series of books by Martha Wells adapted into an Apple TV series, the "Company" that controls "SecUnits" like Murderbot uses a "governor module," a software component that monitors them and "punishes" policy violations. The title character has gone rogue and hacked its governor module, but must fool a second oversight mechanism, the "HubSystem," into believing everything is in order by feeding data back to it.
In the scenario the story depicts, Murderbot's presence as a security agent is what reduces risk to the point where the operation is insurable.
Automation and risk reduction
Schwartz argues that some people treat zero trust in a typical agentic AI flow as a matter of enforcing security at an MCP gateway: because the gateway is a chokepoint, "we should be good."
"But this would imply that all the traffic is trusted beyond the gateway, which is sort of the definition of what zero trust seeks to avoid in the first place," Schwartz says.
Instead, "each service needs a Governor Module," in the form of a policy engine embedded with that service. Each engine would produce decision logs, multiplying the volume of security data and demanding "more operational leverage" for humans to turn that data into security actions.
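The per-service idea can be sketched in a few lines of Python. This is a minimal illustration of the pattern Schwartz describes, not MCP or Gluu code; the class, request fields, and policies below are all hypothetical:

```python
import json
import time

class GovernorModule:
    """Illustrative per-service policy engine: evaluates every request
    locally and appends a decision log entry for each one."""

    def __init__(self, service_name, policies):
        self.service_name = service_name
        self.policies = policies      # callables: request dict -> bool
        self.decision_log = []        # in practice, streamed to a log sink

    def authorize(self, request):
        allowed = all(policy(request) for policy in self.policies)
        self.decision_log.append(json.dumps({
            "service": self.service_name,
            "agent": request.get("agent"),
            "action": request.get("action"),
            "decision": "permit" if allowed else "deny",
            "ts": time.time(),
        }))
        return allowed

# Hypothetical service that only permits reads by agents acting
# on behalf of a human principal.
gov = GovernorModule("billing-api", [
    lambda r: r.get("action") == "read",
    lambda r: r.get("on_behalf_of") is not None,
])

print(gov.authorize({"agent": "agent-42", "action": "read",
                     "on_behalf_of": "alice"}))   # True
print(gov.authorize({"agent": "agent-42", "action": "write",
                     "on_behalf_of": "alice"}))   # False
```

Note that even this toy version emits one log line per decision, which is exactly the scaling problem Schwartz flags: every embedded engine adds another stream of decision data for humans to act on.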
Schwartz then explained that human authentication is largely solved by mechanisms like passkeys and digital wallets, and even software authentication is, for the most part, functionally solved.
Authorization is another matter. From an enterprise perspective, the question is: “under what conditions is access allowed?”
The answer may depend on things that have nothing to do with the properties of an AI agent requesting data on behalf of a human. Schwartz gives the example of data governed by agreements between different organizations.
Authorization therefore needs to move beyond role-based access control to policies that can account for context and complexity.
This leads to his case for using Cedar as “a policy syntax that is analyzable.”
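Cedar expresses authorization as declarative permit/forbid statements that tooling can statically analyze. As a hedged illustration of the kind of context-aware policy discussed above, here is a short Cedar-style policy; the entity types, action name, and context attribute are invented for this example and do not come from the talk:

```cedar
// Hypothetical policy: permit a delegated AI agent to read records in a
// partner-shared dataset only while the cross-organization data-sharing
// agreement recorded in the request context is still active.
permit (
  principal in AgentGroup::"delegated-readers",
  action == Action::"ReadRecord",
  resource in Dataset::"partner-shared"
)
when { context.dataSharingAgreement == "active" };
```

The `when` clause is where conditions with "nothing to do with the properties of the agent," such as inter-organizational agreements, can be checked, and because the syntax is analyzable, policies like this can be validated and audited rather than buried in application code.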
Schwartz concluded his talk by presenting the concept of GovOps, an operating model for enterprise governance through risk management, accountability and transparency.
“Identity is the key for accountability,” Schwartz says, “not authorization.”
The presentation is posted to Schwartz's Identerati Office Hours channel.
Article Topics
AI agents | authentication | authorization | enterprise | Gluu | identity access management (IAM)