Tamed Autonomy

Zero Trust says don't trust the network. Tamed Autonomy says authenticate the purpose, not just the identity.


I Helped Build the Identity Industry in 1998. I'm Seeing the Same Signals Again.

By Robin Martherus, Identity Whisperer


In 1998, I was part of a small group of people trying to convince the security industry that it needed something called "identity management."

The response wasn't hostile. It was skeptical. People didn't see the problem. Access Control Lists worked. Every system had its own user database. You set permissions on files and directories, and that was security. Why would you need a whole separate layer just for identity?

We saw it differently. The web was exploding. Enterprises were accumulating dozens of systems, each with its own user store. The same person had fifteen accounts. Offboarding took weeks. Orphaned accounts lingered for months. Nobody could answer a basic question: "What access does John Smith have across all our systems?"

But ACLs worked. For now.

It took roughly twelve years — from the first signals in the mid-1990s to mainstream adoption after Sarbanes-Oxley and SAML — for the industry to recognize that ACLs were not going to scale to the web era. The identity industry that emerged is now worth tens of billions of dollars.

I'm seeing the same signals again. This time, it's not about human identity. It's about governing AI agents.

The Problem Nobody Recognizes Yet

Today's AI security approach is to take the identity and authorization models we built for humans — OAuth, RBAC, SPIFFE, workload identity — and extend them to cover AI agents. Add more scopes. Create agent-specific service accounts. Build MCP gateways with policy layers.

This is the equivalent of building better ACL management tools in 1997. It works. For now.

But the design assumptions underneath these systems don't hold for autonomous agents:

- A credential assumes its holder exercises human judgment about when and why to use it. Agents act at machine speed, on instructions they may not fully understand.
- A scope assumes the permissions granted at consent time match the task at hand. Agents routinely inherit a user's full permission set for a narrow task.
- Authorization assumes the "why" of an action can be inferred from the "who." Agents can be hijacked into using legitimate credentials for illegitimate purposes.
- Trust, once granted, is treated as static for the session. Agents are non-deterministic; their behavior can change mid-session.

These aren't deployment failures. These are design-level mismatches between systems built for human actors and the autonomous, non-deterministic agents now using them.

And the evidence is already accumulating. In 2024, a security researcher demonstrated at Black Hat that Microsoft Copilot inherited a user's full Microsoft Graph permissions with no mechanism to constrain the agent to a subset. Invariant Labs documented "tool poisoning" attacks where a single malicious MCP server could hijack an agent's behavior across every service it was connected to — using legitimate credentials for each. UIUC researchers showed that GPT-4 agents could autonomously exploit real-world web vulnerabilities at a 73% success rate.

Every one of these incidents involved agents with valid credentials performing authorized actions for unauthorized purposes. The systems authenticated the credential. Nobody authenticated the intent.
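To make that gap concrete, here is a minimal sketch (agent name and scope strings are hypothetical) of what today's check actually verifies: a valid token with a sufficient scope passes, and the purpose behind the request never enters the decision.

```python
from dataclasses import dataclass

@dataclass
class Token:
    subject: str       # whom the credential identifies
    scopes: set[str]   # what it may touch

def authorize(token: Token, required_scope: str) -> bool:
    """Today's check: valid credential plus sufficient scope.
    The purpose of the request never enters the decision."""
    return required_scope in token.scopes

# An agent that inherited the user's full permission set.
agent_token = Token(subject="agent:assistant",
                    scopes={"mail.read", "files.read", "files.write"})

# A legitimate summarization task and a tool-poisoned exfiltration
# issue the same call, and both pass, because only the credential
# is authenticated, not the intent.
assert authorize(agent_token, "files.read")
```

Every defense built at this layer (finer scopes, per-agent service accounts) refines the answer to "can"; the "why" stays invisible to it.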

Three Questions Current Systems Can't Answer

Today's security infrastructure answers one question well: "Can this agent do this?" — checked against credentials, scopes, and roles.

It doesn't answer three others:

- Why is the agent doing this? Credentials carry identity; they say nothing about the intent behind an action.
- How much should we trust this agent right now? Roles and scopes are static; an agent's trustworthiness shifts with its observed behavior.
- Should the agent do this at all? Even an in-scope action can violate the norms the delegating human would have applied.

These aren't nice-to-haves. They're the gap between "secure" and "governed."
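As a toy sketch of the difference, suppose the authorization decision were extended with three extra signals: declared intent, computed trust, and a normative check. Everything here (the names, the 0.7 threshold, the placeholder signals) is a hypothetical illustration, not a specification:

```python
from dataclasses import dataclass

@dataclass
class Request:
    agent_id: str
    action: str
    declared_intent: str     # why the agent says it is acting
    delegated_purpose: str   # what the human delegated it to do

def has_scope(agent_id: str, action: str) -> bool:
    # Today's question: "Can this agent do this?" Assume it passes.
    return True

def intent_matches_purpose(req: Request) -> bool:
    # Extra question 1: is the stated "why" consistent with the delegation?
    return req.declared_intent == req.delegated_purpose

def computed_trust(agent_id: str) -> float:
    # Extra question 2: how much do we trust this agent right now,
    # based on observed behavior rather than a static role? Placeholder.
    return 0.9

def within_norms(req: Request) -> bool:
    # Extra question 3: does the action comply with organizational
    # norms even when it is technically in scope? Placeholder rule.
    return req.action != "bulk_export"

def govern(req: Request) -> bool:
    return (has_scope(req.agent_id, req.action)
            and intent_matches_purpose(req)
            and computed_trust(req.agent_id) > 0.7
            and within_norms(req))

ok = Request("agent:report-bot", "read_file", "summarize-q3", "summarize-q3")
hijacked = Request("agent:report-bot", "read_file", "exfiltrate", "summarize-q3")
assert govern(ok) and not govern(hijacked)
```

Note that `hijacked` carries exactly the same credentials as `ok`; only the governance layer distinguishes them.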

What a New Paradigm Looks Like

I believe we need new governance primitives — not replacing OAuth and identity, but layering above them. The same way SAML assertions and OAuth tokens were new primitives that created the identity industry, AI agent governance needs its own:

- Intent assertions: verifiable statements of the purpose an action serves, bound to the credential that performs it.
- Computed trust: a dynamic trust signal derived from an agent's observed behavior, not a static role assigned at provisioning time.
- Normative reasoning: machine-evaluable policies that capture what an agent should do, not just what it can do.

These primitives don't require every system to adopt them overnight. OAuth didn't either. They need to be compelling enough that a cooperating enclave forms and grows — the same way SAML federation grew because it was easier to cooperate than to build custom integrations.
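As one hypothetical shape for such a primitive, consider an intent assertion: a signed statement of purpose bound to an existing OAuth token, layered above it rather than replacing it. The claim names, the shared-key signing, and the token-binding scheme below are illustrative assumptions, not a proposed standard:

```python
import hashlib
import hmac
import json

SECRET = b"shared-enclave-key"  # illustrative; a real design would use PKI

def issue_intent_assertion(agent_id: str, purpose: str, oauth_token: str) -> dict:
    """Bind a declared purpose to an existing OAuth token."""
    claims = {
        "agent": agent_id,
        "purpose": purpose,
        # Bind to the token without embedding the token itself.
        "token_hash": hashlib.sha256(oauth_token.encode()).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_intent_assertion(assertion: dict, oauth_token: str,
                            expected_purpose: str) -> bool:
    """A resource checks signature, token binding, and purpose."""
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    claims = assertion["claims"]
    return (hmac.compare_digest(sig, assertion["sig"])
            and claims["token_hash"]
                == hashlib.sha256(oauth_token.encode()).hexdigest()
            and claims["purpose"] == expected_purpose)

a = issue_intent_assertion("agent:report-bot", "summarize-q3-files", "tok-123")
assert verify_intent_assertion(a, "tok-123", "summarize-q3-files")
assert not verify_intent_assertion(a, "tok-123", "bulk-export-files")
```

The point is the layering: the OAuth token still answers "can," the assertion answers "why," and either can fail independently.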

Why Now

You can never prove a new paradigm is necessary while the old one still works. Nobody could prove TCP/IP was necessary while the phone network functioned. Nobody could prove identity management was necessary while ACLs held up.

But every security paradigm shift I've witnessed in thirty years has followed the same pattern: anomaly accumulation, increasingly complex workarounds, conceptual articulation by a few voices, widespread skepticism, a catalytic crisis, then rapid adoption.

For AI agent governance, we're in the early stages. The anomalies are accumulating. The workarounds are getting complex. The voices are emerging — the Kinetic Trust Protocol, NIST's AI Risk Management Framework, OWASP's identification of "Excessive Agency" as a top LLM risk. Gartner predicts 25% of enterprise security breaches by 2028 will trace to AI agent credential misuse.

But here's what makes this different from every previous paradigm shift: the technology isn't waiting for us to catch up.

The ACL-to-identity transition had a 12-year runway. Perimeter-to-Zero-Trust had nearly 17 years from the Jericho Forum to the Biden Executive Order. The underlying technology — networks, directories, web applications — moved at human speed. There was time to observe, debate, prototype, and standardize.

AI agent capabilities are not moving at human speed. The gap between what agents can do today and what they could do six months from now is larger than the gap between any two years in the identity or Zero Trust timelines. And I'm watching it from the inside — companies laser-focused on "what can we ship in the next few months to secure AI agents," reacting to today's threats with today's tools. That's not a criticism; it's rational under pressure. But it means the industry is building incremental defenses against a problem that is evolving faster than incremental solutions can keep pace.

When the old paradigm moved slowly, reactive security worked well enough. When the new paradigm moves at AI speed, reactive security means you are perpetually behind. The only way to stay ahead of a technology that moves faster than your planning cycle is to design the governance architecture before you need it.

The catalytic crisis hasn't happened yet. But the signals are here. And this time, the gap between "the signals are here" and "why didn't we act sooner" will be shorter than anyone expects — because AI is compressing the timeline.

The question isn't whether AI agents need governance beyond identity and authorization. The question is whether we start building it now — or wait for the breach that forces our hand.

Zero Trust says don't trust the network. Tamed Autonomy says authenticate the purpose, not just the identity.


I'm developing Tamed Autonomy — a framework for multi-layer AI agent governance that addresses intent, computed trust, and normative reasoning above current security infrastructure. The full whitepaper is available here. I'd welcome perspectives from security architects, identity practitioners, and AI engineers who are seeing these same signals.

Read the Whitepaper