In the growing ecosystem of agent-enabled integrations, securing access is not optional — it’s foundational. The Model Context Protocol (MCP) now includes a formal layer of authentication and authorization, but its nuances, particularly for enterprise adoption, are still evolving.
In this post, I’ll unpack the current spec, explain the standards it builds on, explore a reference architecture using enterprise IdPs like Keycloak, Okta, or Microsoft Entra ID, discuss the operational burden of configuring every MCP server, and walk through an end-to-end example that shows how it all fits together.
Why Authentication Matters in MCP
Think of MCP as a new kind of universal socket for software agents. Just like the wall socket in your house delivers electricity to all kinds of appliances, MCP delivers access to tools that agents can plug into and use. But imagine if every socket in your house looked identical and there was no way to check who plugged into it or how much power they were drawing. That would be a disaster — your toaster might short-circuit your entire house, or worse, someone could plug in a device that steals power.
MCP without authentication is essentially that: a universal interface without a lock. Authentication ensures that only the right appliances — here, the right agents — plug in, and only under the right conditions. It is the foundation of trust, the mechanism by which a server can distinguish between a legitimate business workflow and a rogue agent trying to exfiltrate data. Without it, every higher-level security control is meaningless.
From a technical standpoint, authentication in MCP is built on OAuth 2.1, the widely adopted framework that governs secure access for APIs, mobile apps, and now agent workflows. The June 2025 MCP specification requires servers to defer trust decisions to established identity providers (IdPs), effectively saying: “Don’t reinvent the lock; reuse the world’s most battle-tested system for it.”
The Current MCP Authentication Model
The June 2025 MCP spec formalizes authentication around the OAuth 2.1 Authorization Code flow with PKCE (Proof Key for Code Exchange). That may sound intimidating, so let’s begin with an analogy. Imagine a hotel that requires guests to show both a booking confirmation and a room key card to access their room. The booking system represents the authorization server (your IdP, like Keycloak or Entra ID), while the key card represents the access token. The key card is only issued if the booking is verified, and it only works for the specific room (the resource) that was reserved. If you try to use it on another room, the lock rejects it.
In MCP terms, an agent (the guest) tries to use a tool (the hotel room) by making a request to the server (the hotel front desk). The server denies access until the agent shows a token (the key card) issued by the trusted authorization server (the booking system). The token is specific to that server — thanks to Resource Indicators (RFC 8707), which MCP mandates.
For those who want the details: when an MCP client calls a server without valid credentials, the server responds with 401 Unauthorized. Importantly, this isn’t just a rejection. The response carries a WWW-Authenticate header that points the client to the server’s Protected Resource Metadata (PRM, defined in RFC 9728), a document that tells the client which authorization server to trust, what scopes it needs, and what kind of token to bring back. The client then kicks off the OAuth 2.1 Authorization Code flow with PKCE: it generates a cryptographic code verifier and challenge, redirects to the IdP’s authorization endpoint, and eventually receives an access token scoped for that MCP server. That token is sent back with the MCP request, and if it is valid, the server processes the call.
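For readers who want to see the moving parts, here is a minimal client-side sketch in Python. The server URL, client ID, redirect URI, and callback handling are hypothetical, and the MCP message framing is reduced to a single JSON-RPC POST; the point is only to show the 401, the PRM lookup, the PKCE exchange, and the retry with a Bearer token.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

import requests

MCP_URL = "https://sales-data.mcp.example.com/mcp"       # hypothetical MCP server
REDIRECT_URI = "https://assistant.example.com/callback"  # hypothetical client callback
CLIENT_ID = "sales-assistant"                             # hypothetical registered client

# 1. Call the server without credentials: expect 401 plus a pointer to its
#    Protected Resource Metadata (served at the RFC 9728 well-known path).
resp = requests.post(MCP_URL, json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
assert resp.status_code == 401
prm = requests.get(
    "https://sales-data.mcp.example.com/.well-known/oauth-protected-resource"
).json()
issuer = prm["authorization_servers"][0]   # the IdP this server trusts
resource = prm["resource"]                 # identifier the token must be bound to
scope = " ".join(prm.get("scopes_supported", []))

# 2. Discover the IdP's endpoints from its published metadata
#    (path handling simplified; see RFC 8414 / OpenID Connect Discovery).
idp = requests.get(f"{issuer}/.well-known/openid-configuration").json()

# 3. Build the PKCE verifier/challenge and the authorization request.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
challenge = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode()).digest()
).rstrip(b"=").decode()
authorize_url = idp["authorization_endpoint"] + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": scope,
    "resource": resource,                  # RFC 8707 Resource Indicator
    "code_challenge": challenge,
    "code_challenge_method": "S256",
})
# ...the authorization request completes and the IdP redirects back to
# REDIRECT_URI with an authorization code...
code = "<authorization code delivered to the callback>"

# 4. Exchange code + verifier for a token bound to this MCP server, then retry.
token = requests.post(idp["token_endpoint"], data={
    "grant_type": "authorization_code",
    "code": code,
    "code_verifier": verifier,
    "redirect_uri": REDIRECT_URI,
    "client_id": CLIENT_ID,
    "resource": resource,
}).json()["access_token"]

resp = requests.post(MCP_URL, headers={"Authorization": f"Bearer {token}"},
                     json={"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
```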
This separation of concerns — letting an IdP issue the token while the MCP server only verifies it — is critical for scale. It means enterprises don’t need to bolt custom authentication logic into every MCP server. Instead, they can rely on the same centralized identity systems already in use for APIs and apps.
The Origins of OAuth — and Why It Fits MCP
To understand why OAuth is a good fit for MCP, it helps to go back to its roots. OAuth was created in 2007 when developers faced a problem: how to let users grant one service access to their data on another service without handing over their passwords. The original spark came from developers working with Twitter and the social bookmarking site Ma.gnolia, who needed a way for users to let third-party apps act on their accounts without giving those apps their passwords.
The solution was elegant: instead of sharing credentials, users would authorize apps to act on their behalf by issuing tokens. These tokens were scoped (they could only do certain things) and revocable (users could pull them back anytime). Over time, OAuth grew into the industry standard for delegated access, adopted everywhere from Google APIs to banking systems. OAuth 2.0 became the dominant version, and now OAuth 2.1 refines the model by cleaning up ambiguities and adopting best practices like PKCE by default.
The fit for MCP is almost uncanny. MCP, by design, enables one system (an agent) to act on behalf of a user or enterprise by invoking tools and APIs. That’s exactly the use case OAuth was built for. Just as early OAuth stopped developers from sharing passwords across apps, MCP needs to stop agents from bypassing security boundaries when discovering and invoking tools. OAuth tokens give us a way to grant just-enough, time-limited, auditable access to MCP servers, while IdPs give us a way to centrally enforce policy and identity.
MCP With Enterprise IdPs
Let’s take a step back and picture this as a city with guarded gates. The MCP server is a building filled with sensitive tools. The IdP (say Keycloak, Okta, or Entra ID) is the central passport office. Agents are citizens trying to enter the building to do work. Without a passport and a visa stamped for the right building, no one gets in.
Technically, this plays out as follows. The enterprise configures its IdP as an OAuth authorization server. That means it knows about the MCP clients (the agents) and about the protected resources (the MCP servers). Each MCP server is registered as a Resource Server with its own identifier (audience) and scopes (what operations are permitted).
When an agent attempts to call a tool, the MCP server pushes back with PRM, pointing the agent to the IdP. The agent runs the Authorization Code + PKCE flow, gets a token, and returns. The MCP server checks the token’s signature, its audience, its scopes, and expiry. If all checks out, the request is allowed.
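Here is a rough sketch of those server-side checks in Python, using the PyJWT library. The issuer, audience, JWKS path, and required scopes are placeholders; a production server would also need to handle errors, key caching, and revocation.

```python
import jwt                    # PyJWT
from jwt import PyJWKClient

ISSUER = "https://idp.example.com/realms/acme"    # hypothetical Keycloak realm issuer
AUDIENCE = "sales-data.mcp.acme.com"              # this MCP server's resource identifier
REQUIRED_SCOPES = {"sales:read", "summary:generate"}

# Keycloak publishes realm signing keys here; other IdPs use a different JWKS path.
jwks = PyJWKClient(f"{ISSUER}/protocol/openid-connect/certs")

def authorize(bearer_token: str) -> dict:
    """Validate signature, issuer, audience, expiry, and scopes; return the claims."""
    signing_key = jwks.get_signing_key_from_jwt(bearer_token)
    claims = jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,     # reject tokens minted for other MCP servers
        issuer=ISSUER,         # reject tokens from untrusted IdPs
    )                          # PyJWT rejects expired tokens by default
    granted = set(claims.get("scope", "").split())
    if not REQUIRED_SCOPES.issubset(granted):
        raise PermissionError(f"missing scopes: {REQUIRED_SCOPES - granted}")
    return claims
```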
This model is consistent with enterprise best practices: central identity, delegated trust, and scoped tokens.
The Challenge of Configuring Every MCP Server
A useful analogy here is airports. Every airline flying into an airport doesn’t build its own passport control; instead, the airport provides a centralized customs and immigration checkpoint. Imagine the chaos if every airline had to build and staff its own control desks, each with its own rules. Passengers would be frustrated, security would be inconsistent, and the system would break down.
MCP faces a similar challenge. The current specification requires every MCP server to advertise its Protected Resource Metadata and to point unauthenticated clients to it whenever they try to access the server without credentials. This metadata must name the right authorization server, include the right scopes, and define the supported token types. In other words, each MCP server must know its own security policy and be correctly configured to communicate it.
This creates two problems. First, it is operationally heavy. In an enterprise with hundreds of MCP servers — many wrapping existing APIs or tools — the burden of configuring, testing, and maintaining correct authentication metadata for each one is enormous. Second, it is unreliable. Not every server implementation supports the latest authentication methods, and even if it does, inconsistencies in configuration can lead to gaps in enforcement. The weakest link becomes the entry point for compromise.
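To make the burden concrete, here is roughly what a single server’s Protected Resource Metadata might contain, using field names from RFC 9728; the values are illustrative.

```python
# Roughly what each MCP server must publish at
# /.well-known/oauth-protected-resource (field names from RFC 9728).
PROTECTED_RESOURCE_METADATA = {
    "resource": "https://sales-data.mcp.acme.com",         # this server's identifier (audience)
    "authorization_servers": [
        "https://idp.acme.com/realms/acme",                 # the IdP that issues its tokens
    ],
    "scopes_supported": ["sales:read", "summary:generate"], # the operations it exposes
    "bearer_methods_supported": ["header"],                 # how the token must be presented
}
# Multiply this by hundreds of servers, keep every entry consistent with the IdP,
# and keep it current as scopes and issuers change: that is the operational burden.
```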
The solution is not to push this burden onto every MCP server, but to centralize it in a platform layer that acts as an authentication gateway. Such a platform sits between agents and servers, intercepting unauthorized requests, handling the OAuth flows, and attaching the correct tokens before forwarding the request to the underlying MCP server.
From the agent’s perspective, the flow is seamless; from the enterprise’s perspective, policies are consistent, logging and auditing are unified, and server teams no longer have to become experts in authentication flows. This mirrors how API gateways and service meshes centralize security for REST and gRPC.
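As a sketch of the idea, here is a deliberately small gateway in Python using Flask. The IdP endpoint, client identifiers, and upstream URLs are assumptions, and it uses the client credentials grant as a stand-in for the full interactive flow; a real gateway would also authenticate the calling agent, enforce policy, and log every request.

```python
import time

import requests
from flask import Flask, Response, request

app = Flask(__name__)

TOKEN_ENDPOINT = "https://idp.acme.com/realms/acme/protocol/openid-connect/token"
GATEWAY_CLIENT_ID = "mcp-gateway"
GATEWAY_CLIENT_SECRET = "<loaded from a secret store>"
UPSTREAMS = {"sales-data": "https://sales-data.internal.acme.com/mcp"}
_token_cache: dict[str, tuple[str, float]] = {}   # upstream -> (token, expiry)

def token_for(upstream: str) -> str:
    """Fetch (and cache) a token scoped to one upstream MCP server."""
    token, exp = _token_cache.get(upstream, ("", 0.0))
    if time.time() < exp - 30:
        return token
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "client_credentials",
        "client_id": GATEWAY_CLIENT_ID,
        "client_secret": GATEWAY_CLIENT_SECRET,
        "resource": UPSTREAMS[upstream],           # RFC 8707: bind the token to this server
    }).json()
    _token_cache[upstream] = (resp["access_token"], time.time() + resp["expires_in"])
    return _token_cache[upstream][0]

@app.post("/mcp/<upstream>")
def proxy(upstream: str) -> Response:
    """Attach the right Bearer token and forward the MCP request unchanged."""
    upstream_resp = requests.post(
        UPSTREAMS[upstream],
        headers={"Authorization": f"Bearer {token_for(upstream)}",
                 "Content-Type": "application/json"},
        data=request.get_data(),
    )
    return Response(upstream_resp.content, status=upstream_resp.status_code,
                    content_type=upstream_resp.headers.get("Content-Type", "application/json"))
```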
Example Scenario: Sales Data Assistant
Imagine Acme Corp has built an AI assistant that helps sales teams by summarizing data stored in a secure internal database. The database is wrapped behind an MCP server called SalesDataMCP, and Acme uses Keycloak as its IdP.
When the assistant starts, it tries to call generate_summary on SalesDataMCP. The server denies the request and points the agent at its Protected Resource Metadata, which names Keycloak as the authorization server. The agent generates a code verifier and challenge and redirects to Keycloak’s authorization endpoint. Because this is a machine agent acting as a confidential client, Keycloak authenticates the client itself rather than prompting a human, then issues an authorization code. The agent exchanges the code, the PKCE verifier, and its client credentials at Keycloak’s token endpoint and receives an access token scoped for sales-data.mcp.acme.com with sales:read and summary:generate.
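Decoded, the payload of that access token might look roughly like the following; the exact claims depend on how the realm and client are configured, so treat the values as illustrative.

```python
# Illustrative payload of the access token Keycloak might issue (values are made up;
# the exact claims depend on realm and client configuration).
access_token_claims = {
    "iss": "https://idp.acme.com/realms/acme",   # issuing realm (the IdP)
    "aud": "sales-data.mcp.acme.com",            # the only MCP server that may accept it
    "azp": "sales-assistant",                    # the agent (client) it was issued to
    "scope": "sales:read summary:generate",      # exactly what it is allowed to do
    "exp": 1750000000,                           # short-lived expiry (Unix time)
    "iat": 1749999700,                           # issued-at timestamp
}
```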
Now armed with the token, the agent calls SalesDataMCP again, this time with the Bearer token in the header. The server verifies the token’s signature using Keycloak’s public keys, checks that the audience matches, confirms the scopes, and processes the request. The assistant generates a sales summary for the user, and both the token issuance and the tool invocation are logged in Keycloak and the MCP server for audit.
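In code, that authorized call might look something like this simplified sketch; the MCP transport and message framing are reduced to a single JSON-RPC POST, and the tool arguments are invented for illustration.

```python
import requests

access_token = "<token obtained from Keycloak above>"   # placeholder

# Simplified sketch of the authorized tool invocation.
resp = requests.post(
    "https://sales-data.mcp.acme.com/mcp",
    headers={"Authorization": f"Bearer {access_token}",
             "Content-Type": "application/json"},
    json={
        "jsonrpc": "2.0",
        "id": 3,
        "method": "tools/call",
        "params": {"name": "generate_summary",
                   "arguments": {"region": "EMEA", "quarter": "Q2"}},
    },
)
print(resp.json())
```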
From the outside, this feels seamless: the assistant simply works. But under the hood, every step — from cryptographic proof with PKCE to scope validation — ensures that only the right agent with the right permissions could invoke the tool.
Where the Spec Still Needs to Go
MCP’s adoption of OAuth is the right move, but the spec still leaves gaps that enterprises need to close. Christian Posta and others have pointed out that the spec blurs the lines between resource and authorization servers, making implementation more complex than it needs to be. Clearer guidance on separation would improve interoperability and reduce friction for enterprise deployments.
Another area ripe for improvement is standardizing the discovery of IdP endpoints. Today, servers must return PRM, but in large environments, this needs to be complemented with well-known metadata discovery, so clients can adapt dynamically without manual configuration.
Finally, the spec should strengthen requirements around Resource Indicators, token scoping, and machine authentication models like SPIFFE/SPIRE. This would ensure that non-human agents can authenticate robustly without kludges, and that tokens cannot be misused across servers.
Platforms, in turn, must not stop at the spec. They need to enforce short-lived tokens, implement runtime policy engines that evaluate scopes and context, and integrate with enterprise conditional access frameworks. Authentication is not just about the lock — it’s about the rules for when, why, and how the lock opens.
Conclusion
The promise of MCP is to become the universal interface for agents to safely interact with enterprise systems. Authentication is the bedrock that makes this possible. By aligning with OAuth 2.1 and PKCE, MCP taps into a mature, battle-tested system of delegated access. But the spec still needs sharper edges, and enterprises must bring the operational discipline to make it real.
If we get this right, we create an ecosystem where agents act with precision and accountability, tools are shielded behind strong identity boundaries, and enterprises can embrace agentic AI without sacrificing security. MCP gives us the socket. OAuth gives us the lock. The gateway gives us the customs checkpoint that ensures everything flows consistently. Now it’s up to us to wire them together into a system that can scale with both trust and safety.
Visit boomi.com/mcp to learn how Boomi makes MCP enterprise-ready
This blog was written with the assistance of AI.