A scalability crisis in human oversight threatens to render AI agent security controls ineffective, as users face thousands of authorisation requests that they will simply approve without review, according to a whitepaper from the OpenID Foundation. The paper warns that these vulnerabilities are emerging now rather than remaining theoretical future problems.
The document identifies five critical security gaps already appearing as agents gain autonomy: agent identity fragmentation where companies build separate identity systems instead of common standards; user impersonation risks with no method to distinguish whether actions came from humans or agents; recursive delegation creating unmanageably complex permission chains when agents spawn sub-agents; browser control bypass where agents controlling screens sidestep traditional security checks entirely; and the fundamental scalability problem where consent becomes meaningless.
Tobin South led development of the whitepaper for the OpenID Foundation, in collaboration with the Artificial Intelligence Identity Management Community Group and Stanford’s Loyal Agents Initiative. The authors stress that these are live vulnerabilities surfacing as agents begin to operate autonomously, not speculative future concerns.
The scalability crisis presents the most immediate threat. As AI agents proliferate, a single user could manage dozens of agents making thousands of daily decisions, creating what the whitepaper terms “consent fatigue”: users reflexively approve requests without proper review. This reflexive approval paradoxically undermines the security that human oversight is meant to provide, transforming security controls into what South described as “security theatre.”
Browser-based AI agents present particularly acute dangers. Systems like OpenAI’s Operator control computers by manipulating visual interfaces directly. The document warns these agents “bypass all traditional API-based authorisation controls” by operating at the presentation layer rather than through authenticated APIs, making their actions nearly impossible to distinguish from the user’s own interactions.
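To make the distinction concrete, here is a minimal sketch, with hypothetical request fields and scopes, of why a server can police an agent that calls an API with its own token, but sees a browser-driving agent as the user themselves:

```python
# Minimal sketch (hypothetical names) of the presentation-layer gap: a server
# can inspect an agent's token on an API call, but an agent driving the
# browser arrives with the human's own session and looks like a normal click.

from dataclasses import dataclass

@dataclass
class Request:
    path: str
    session_cookie: str               # present on browser traffic
    bearer_token: str | None = None   # present only on API traffic

def authorise(req: Request) -> str:
    if req.bearer_token:
        # API path: the token names an identifiable client or agent whose
        # scopes the server can enforce, audit, or revoke.
        return f"API call authorised against token {req.bearer_token!r}"
    # Presentation-layer path: an agent automating the UI reuses the human's
    # session, so the server sees an ordinary user request.
    return "Browser request: indistinguishable from the user's own click"

print(authorise(Request("/transfer", "sess-123", bearer_token="agent-token-abc")))
print(authorise(Request("/transfer", "sess-123")))  # agent via UI automation
```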
The recursive delegation problem compounds as agents gain autonomy. When a primary agent delegates tasks to specialised sub-agents discovered in real time, authorisation chains grow complex, with no clear mechanism to progressively narrow permissions at each step. A compromised agent wielding delegated authority from a human could trigger cascading failures across entire networks of sub-agents, each operating with fractions of the original user’s permissions but collectively representing significant risk.
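The progressive narrowing the paper calls for can be illustrated with a short sketch; the Grant type, agent names, and scope strings below are invented for the example, not taken from the whitepaper:

```python
# A sketch of scope attenuation along a delegation chain: each delegation may
# keep only a subset of the parent's scopes, so a compromised sub-agent holds
# the narrowest grant, and the chain records who delegated to whom.

from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    holder: str
    scopes: frozenset[str]
    chain: tuple[str, ...] = ()  # audit trail of the delegation path

    def delegate(self, sub_agent: str, requested: set[str]) -> "Grant":
        narrowed = self.scopes & frozenset(requested)  # attenuate, never widen
        if not narrowed:
            raise PermissionError(f"{sub_agent}: no overlapping scopes to delegate")
        return Grant(sub_agent, narrowed, self.chain + (self.holder,))

user = Grant("alice", frozenset({"calendar:read", "mail:read", "mail:send"}))
planner = user.delegate("planner-agent", {"calendar:read", "mail:send"})
mailer = planner.delegate("mail-subagent", {"mail:send", "mail:read"})

print(mailer.scopes)  # frozenset({'mail:send'}): mail:read cannot be reclaimed
print(mailer.chain)   # ('alice', 'planner-agent')
```

Note that the sub-agent’s request for mail:read is silently dropped because its parent never held it; widening is impossible by construction.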
Agent identity fragmentation threatens interoperability as vendors develop proprietary systems. The whitepaper warns this “would reduce developer velocity by forcing repeated one-off integrations” and “compromise security by creating multiple security models, each with different risks and vulnerabilities.” Without common standards, agents require separate identities for each system, creating both usability problems and expanded attack surfaces.
User impersonation represents a fundamental accountability gap. Today’s AI agents often act indistinguishably from the humans they represent, creating scenarios where nobody can determine whether an action came from a person or their agent. The document emphasises the need to move from impersonation to explicit “on-behalf-of” flows where agents prove their delegated scope whilst remaining identifiable as distinct from the users they represent.
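One established mechanism for such flows is the “act” (actor) claim from OAuth 2.0 Token Exchange (RFC 8693): the token names the human as subject but identifies the agent as the acting party. The sketch below uses invented issuer, scope, and agent identifiers, and the whitepaper does not prescribe this exact encoding:

```python
# Hedged sketch of an "on-behalf-of" token using the RFC 8693 "act" claim,
# so resource servers can authorise and log the agent distinctly from the user.

import json

token_claims = {
    "iss": "https://idp.example.com",          # hypothetical issuer
    "sub": "alice",                            # the human being represented
    "scope": "calendar:read mail:send",        # delegated, narrowed permissions
    "act": {"sub": "agent://travel-planner"},  # the agent actually acting
    "exp": 1767225600,
}

def acting_party(claims: dict) -> str:
    # If an "act" claim is present, the request came from a delegate,
    # not from the subject acting directly.
    actor = claims.get("act", {}).get("sub", claims["sub"])
    return f"{actor} acting on behalf of {claims['sub']}"

print(acting_party(token_claims))
# agent://travel-planner acting on behalf of alice
print(json.dumps(token_claims, indent=2))
```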
The Model Context Protocol (MCP) leads adoption as the key framework for connecting language models to external tools. MCP initially launched without authentication and added security features only after community pressure. The whitepaper recommends OAuth 2.1 as the standard framework, but notes that it works only within single organisations, not across the trust boundaries where agents increasingly operate.
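OAuth 2.1 makes PKCE mandatory for the authorisation-code flow. The sketch below, with hypothetical endpoints and client identifiers, shows the shape of the handshake an MCP-style client would prepare, and why it presumes a single authorisation server:

```python
# Minimal PKCE preparation for an OAuth 2.1 authorisation-code flow
# (hypothetical endpoints and client id; not from the whitepaper).

import base64, hashlib, secrets
from urllib.parse import urlencode

def pkce_pair() -> tuple[str, str]:
    # code_verifier: random secret kept by the client;
    # code_challenge: its SHA-256 digest, sent up front.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
authorize_url = "https://auth.example.com/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "mcp-tool-client",               # hypothetical client id
    "redirect_uri": "http://localhost:8765/callback",
    "scope": "tools:invoke",
    "code_challenge": challenge,
    "code_challenge_method": "S256",
})
print(authorize_url)
# The client later redeems the returned code together with `verifier` at the
# token endpoint. Note that trust is anchored to one authorisation server,
# which is the single-organisation limit the whitepaper flags.
```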
Current frameworks prove inadequate for agents acting on behalf of multiple users simultaneously. OAuth was designed for individual user authorisation, so an agent deployed in a shared environment such as a team chat channel operates under one person’s permissions. It may therefore surface information that only some team members should see, because there is no standardised way to confine it to the subset of permissions that all users share.
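A short sketch of the missing primitive, with invented permission data, shows how intersecting members’ permissions would avoid the disclosure:

```python
# Sketch of the multi-user gap (hypothetical data): an agent in a shared
# channel should act under the *intersection* of all members' permissions,
# not under the full grant of whoever installed it.

team_permissions = {
    "alice": {"docs:read", "finance:read", "hr:read"},
    "bob":   {"docs:read", "finance:read"},
    "carol": {"docs:read"},
}

def channel_scopes(members: list[str]) -> set[str]:
    # Only permissions that every member holds are safe in the channel.
    return set.intersection(*(team_permissions[m] for m in members))

print(channel_scopes(["alice", "bob"]))           # {'docs:read', 'finance:read'}
print(channel_scopes(["alice", "bob", "carol"]))  # {'docs:read'}
# Acting under alice's grant alone would expose hr and finance data to carol.
```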
Enterprise systems require immediate changes. The document proposes extending identity management protocols to treat AI agents as first-class entities with their own lifecycles, permissions and audit trails separate from human users. Without this, companies cannot track which agent performed which action or revoke access when agents are compromised or decommissioned.
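As an illustration of what treating agents as first-class entities could mean in practice, here is a hedged sketch with a hypothetical schema, not a scheme the document specifies, of an agent identity carrying its own lifecycle state and audit trail:

```python
# Sketch of an agent as a first-class identity: its own lifecycle,
# permissions, and audit trail, distinct from the human who owns it, so
# actions can be traced to the agent and access revoked independently.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Lifecycle(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"
    DECOMMISSIONED = "decommissioned"

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                    # the human principal, kept distinct
    scopes: set[str]
    state: Lifecycle = Lifecycle.ACTIVE
    audit_log: list[str] = field(default_factory=list)

    def record(self, action: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {self.agent_id}: {action}")

    def revoke(self) -> None:
        self.state = Lifecycle.DECOMMISSIONED
        self.scopes.clear()
        self.record("access revoked")

agent = AgentIdentity("agent://expense-bot", owner="alice", scopes={"finance:read"})
agent.record("read Q3 expense report")
agent.revoke()  # compromise or decommissioning cuts access without touching alice
print(agent.state, agent.audit_log)
```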
South said the document addresses three distinct groups: AI implementers receive technical guidance on foundational standards; enterprises receive strategies for governance and integration; and consumer platforms receive foundations for user trust and scalable consent. The whitepaper’s conclusion emphasises urgency: “Successfully navigating this future depends on a concerted, collaborative effort across the industry.”