Model Context Protocol (MCP) enterprise deployments demand robust input validation, role-based access controls for tool execution, parameter schema validation, rate limiting, and comprehensive audit trails. Tool authorization should occur before execution to prevent unauthorized operations. Moving MCP from development environments to production systems requires authentication and authorization frameworks that extend beyond simple API keys or basic authentication.
We are currently implementing this at scale, and today we will walk you through the key considerations we have been weighing and some of the implementation details under discussion. Our implementation is not yet complete, so some decisions are still in progress, but we can paint a picture that will equip you to take on this challenge in your own organization.
The challenge with AI systems is that they operate with a level of autonomy that renders traditional access control models insufficient; you’re not only controlling what a user can do, but also what an AI agent can do on behalf of users, systems, or automated processes.
The fundamental problem is delegation. When a model invokes a tool, whose authority is it operating under? The user who initiated the conversation? The application that’s hosting the model? The system account that’s running the MCP server?
Without clear answers to these questions, you end up with either overprivileged AI systems that can do anything or underprivileged systems that can’t do anything useful.
MCP’s authentication model needs to handle multiple identity contexts simultaneously. A single request chain might involve:
User authentication—who initiated this?
Application authentication—what system is making this request?
Delegation authentication—what authority has been granted to the AI agent?
Each layer serves a different purpose and requires a different validation approach.
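To make that concrete, here is a minimal sketch of how the three contexts might surface as separate claims in a decoded token. The claim names ("sub", "azp", and the RFC 8693 "act" claim) are common conventions rather than a prescription; your identity provider's token layout will differ.

```python
from dataclasses import dataclass

# Illustrative sketch: "sub" and "azp" are standard OIDC claims and "act" is
# the RFC 8693 actor claim, but real token layouts vary by identity provider.
@dataclass
class RequestIdentity:
    user: str         # who initiated the conversation
    application: str  # what system is hosting the model
    delegation: str   # what authority was granted to the AI agent

def extract_identity(claims: dict) -> RequestIdentity:
    """Split a decoded token into the three identity layers, each of which
    is validated by a different component downstream."""
    return RequestIdentity(
        user=claims["sub"],                               # user authentication
        application=claims["azp"],                        # application authentication
        delegation=claims.get("act", {}).get("sub", ""),  # delegation (actor) claim
    )
```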
Traditional OAuth flows assume that human users click through consent screens and make deliberate authorization decisions. MCP clients are often AI agents that need to discover and access capabilities programmatically, without human intervention at runtime. The result is a fundamental mismatch that requires extending OAuth patterns to support dynamic client registration and automated consent management.
Dynamic client registration enables MCP clients to automatically register themselves with authorization servers and request the necessary scopes for the capabilities they require. This approach is essential for scalable MCP deployments where it is not feasible to manually configure every possible client-server relationship.
The client registration process becomes a negotiation. The client declares what capabilities it needs, the server responds with what it’s willing to grant, and ongoing interactions are governed by the established scope boundaries.
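As an illustration of that negotiation, the following sketch sends an RFC 7591-style dynamic registration request. The endpoint URL, client name, and scope strings are placeholders rather than values from our implementation:

```python
import requests

# Hypothetical registration endpoint and scope names; RFC 7591 defines the
# request/response shape, but your authorization server dictates the policy.
REGISTRATION_ENDPOINT = "https://auth.example.com/oauth/register"

registration = {
    "client_name": "inventory-agent",
    "grant_types": [
        "client_credentials",
        "urn:ietf:params:oauth:grant-type:token-exchange",
    ],
    "scope": "mcp:tools:read mcp:resources:inventory",  # what the client asks for
}

resp = requests.post(REGISTRATION_ENDPOINT, json=registration, timeout=10)
resp.raise_for_status()
granted = resp.json()
# The server answers with what it is actually willing to grant; subsequent
# interactions are bounded by this negotiated scope set.
print(granted["client_id"], granted.get("scope"))
```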
The OAuth integration includes sophisticated On-Behalf-Of token exchange for service-to-service authentication. When an AI agent needs to call downstream services, it can exchange its current token for service-specific tokens.
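Here is a minimal sketch of such an exchange using the RFC 8693 token exchange grant; the token endpoint and audience values are placeholders:

```python
import requests

# Sketch of an RFC 8693 token exchange, the mechanics behind On-Behalf-Of
# flows. Endpoint and audience values are placeholders.
TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"

def exchange_for_downstream(current_token: str, downstream_audience: str) -> str:
    """Trade the agent's current token for a service-specific token while
    preserving the original identity chain."""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": current_token,            # the token the agent holds
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": downstream_audience,           # the service being called
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]
```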
The challenge is maintaining security while enabling automation. Dynamic registration can’t be a free-for-all. You need policies that govern which clients can register, what scopes they can request, and under what conditions their access can be revoked. Conceptually, you can think of it as a series of policies leading up to authorization: policies are applied during registration and authentication, and then authorization is applied when the request reaches the MCP gateway.
With this in mind, it is critical to think beyond OBO tokens. They work great for the first hop, the first level of indirection, but what happens from there? We are discussing MCP today, but we also need to consider agentic communication in general: agents that call other agents, which in turn call tools.
Consider the following fictitious future scenario: an HR agent needs to schedule new-hire training for a new program manager who starts tomorrow. To do so, it calls a learning agent, which in turn needs to query a database to check availability.
Hence, we have an HR agent fetching an OBO token to call the learning agent, which now needs to call the MCP gateway to request the database tool to look up the schedule. The OBO token that came from the HR agent doesn’t have the proper audience, so the MCP gateway denies the request. And the learning agent cannot fetch a fresh OBO token of its own, because it is already holding one.
What do we do now? We are discussing token exchange services with our cybersecurity team. The idea is that you, as a service, can reach out to another service with the token you have and request access, obtaining a new token in return. This type of service could be a central need in the new agentic world.
In the future, you may need to make requests to a service (or agent) that is under the control of a different IdP, and having an exchange service in place gives you a central location to handle and govern this type of behavior.
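As a sketch of what calling such an exchange service might look like, the learning agent from the earlier scenario could swap the HR-issued token for one the MCP gateway will accept. The service, its API shape, and the audience name are all hypothetical:

```python
import requests

# Hypothetical central token exchange service for multi-hop agent chains.
# Exchange policy and cross-IdP trust live in this one governed service.
EXCHANGE_SERVICE = "https://token-exchange.internal.example.com/exchange"

def hop_token(incoming_token: str, target_audience: str) -> str:
    """Swap a token with the wrong audience for one the next hop accepts."""
    resp = requests.post(EXCHANGE_SERVICE, json={
        "token": incoming_token,       # the OBO token received from the HR agent
        "audience": target_audience,   # e.g. "mcp-gateway"
    }, timeout=10)
    if resp.status_code == 403:
        raise PermissionError("exchange policy denied this hop")
    resp.raise_for_status()
    return resp.json()["token"]
```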
Unlike human users who log in and out, AI agents might run continuously for extended periods or spin up and down dynamically based on workload. Agent lifecycles require sophisticated token refresh strategies, scope adjustment mechanisms, and the ability to gracefully handle authorization failures without disrupting ongoing AI conversations.
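One such strategy, sketched minimally below, is to refresh proactively before expiry so in-flight conversations never see an expired token, and to surface a clear failure when refresh is impossible. The structure is illustrative, not our production design:

```python
import time

# Minimal sketch of proactive token refresh for a long-running agent.
class TokenManager:
    def __init__(self, fetch_token, skew_seconds: int = 60):
        self._fetch = fetch_token   # callable returning (token, expires_at)
        self._skew = skew_seconds   # refresh this many seconds before expiry
        self._token, self._expires_at = self._fetch()

    def get(self) -> str:
        if time.time() >= self._expires_at - self._skew:
            try:
                self._token, self._expires_at = self._fetch()
            except Exception as exc:
                # Degrade gracefully: surface an authorization failure rather
                # than silently presenting an expired token downstream.
                raise RuntimeError("token refresh failed; re-authorization required") from exc
        return self._token
```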
Scopes provide a solution to the AI authorization problem by defining granular, composable permissions that can be dynamically assigned and revoked. Unlike traditional role-based systems where permissions are relatively static, scope-based systems allow for fine-grained, context-aware access control that adapts to the specific needs of each AI interaction.
In an MCP context, scopes operate at multiple levels:
Tool scopes—control which functions an agent can invoke
Resource scopes—control what data can be accessed
Parameter scopes—control which arguments can be passed to tools and resources
This multi-level approach allows you to create sophisticated permission models. An agent may have the scope to read customer data and invoke email tools, but not have the scope to access customer financial information or send emails to external addresses.
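Here is a sketch of what such a layered scope set could look like. The scope grammar is invented for illustration, since MCP does not standardize scope naming:

```python
# Illustrative scope grammar (the naming is ours, not an MCP standard)
# spanning the three levels: tool, resource, and parameter scopes.
AGENT_SCOPES = {
    "tool:email.send",                     # may invoke the email tool...
    "tool:crm.read_customer",              # ...and read customer records
    "resource:customers/profile:read",     # profile data is readable
    # note the absence of "resource:customers/financials:read"
    "param:email.send:recipient_domain=example.com",  # internal recipients only
}

def allowed(scope: str) -> bool:
    return scope in AGENT_SCOPES

assert allowed("tool:email.send")
assert not allowed("resource:customers/financials:read")
```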
Scopes become particularly powerful when combined with dynamic evaluation. Rather than pre-calculating all possible permissions, scope evaluation can happen at request time based on the current context:
The user involved
The specific operation requested
The current system state
The conversation history
Dynamic scope evaluation allows for conditional permissions that adapt to circumstances—elevated scopes during incident response, restricted scopes during maintenance windows, or specialized scopes for specific business processes.
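A rough sketch of request-time evaluation over that context follows; the state names and rules are illustrative policy examples:

```python
from dataclasses import dataclass

# Sketch of request-time scope evaluation. The context fields mirror the
# inputs listed above; the rules are illustrative policies.
@dataclass
class RequestContext:
    user: str
    operation: str
    system_state: str        # e.g. "normal", "incident", "maintenance"
    conversation_turns: int  # stand-in for conversation history

def effective_scopes(base_scopes: set[str], ctx: RequestContext) -> set[str]:
    scopes = set(base_scopes)
    if ctx.system_state == "incident":
        scopes.add("tool:diagnostics.elevated")  # elevated during incident response
    if ctx.system_state == "maintenance":
        scopes.discard("tool:deploy.invoke")     # restricted during maintenance windows
    return scopes
```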
The key architectural decision is whether scopes are evaluated by MCP servers themselves, by the central proxy/registry layer, or by dedicated authorization services. Each approach has tradeoffs in terms of performance, consistency, and operational complexity. However, the critical requirement is that scope evaluation must be fast enough to avoid adding significant latency to AI interactions while being comprehensive enough to enforce meaningful security boundaries.
This is a question that has come up for us, where Sales and Marketing are at odds with technology and security. Security would ask that you give back no information at all about a tool, API, process, or resource that the caller does not have access to. In this scenario, however, Sales may want the agent to respond with an answer such as: "Your current subscription does not provide access to the get_historical_pricing toolset. Please contact your sales rep if you think this information would be valuable to your workflow."
Answering authorization at the proxy/registry layer makes this challenging, but if you let the downstream MCP server respond with a 403 (not authorized) instead of a 401 (not authenticated) or 404 (not found), the registry layer can respond with the explanation that this tool is available for purchase.
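A minimal sketch of that translation at the registry layer, with wording and mappings that illustrate the business policy described above:

```python
# Sketch of the registry/proxy layer translating downstream status codes
# into caller-facing messages; the wording is illustrative.
def translate_response(status_code: int, tool_name: str) -> str:
    if status_code == 401:
        return "Authentication required."
    if status_code == 403:
        # Authenticated but not entitled: safe to explain, per business policy.
        return (f"Your current subscription does not provide access to the "
                f"{tool_name} toolset. Please contact your sales rep if you "
                f"think this would be valuable to your workflow.")
    if status_code == 404:
        return "Tool not found."  # reveals nothing about restricted tools
    return "Request completed."
```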
The moment you expose MCP servers beyond your internal network, you enter an entirely different security landscape. What worked fine for trusted internal systems becomes woefully inadequate when dealing with external clients, public registries, or third-party integrations. The stakes are higher because you’re not just exposing data, you’re exposing executable capabilities that AI systems can invoke autonomously.
Public MCP exposure amplifies every possible security concern. Tool invocation becomes a potential vector for remote code execution. Resource access becomes a potential channel for data exfiltration. Even prompt templates become potential injection attack surfaces.
MCP servers can’t trust that clients will send well-formed requests, that tool parameters will be within expected ranges, or that resource queries will be reasonable in scope. Every input needs validation, sanitization, and rate limiting.
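A minimal sketch combining all three, with an invented parameter schema and arbitrary limits:

```python
import time
from collections import deque

# Illustrative limits; tune for your actual workload.
RATE_WINDOW_SECONDS, RATE_LIMIT = 60, 100
_recent_calls: dict[str, deque] = {}

def check_rate(client_id: str) -> bool:
    """Sliding-window rate limit per client."""
    now = time.time()
    calls = _recent_calls.setdefault(client_id, deque())
    while calls and now - calls[0] > RATE_WINDOW_SECONDS:
        calls.popleft()  # drop entries outside the window
    if len(calls) >= RATE_LIMIT:
        return False
    calls.append(now)
    return True

def validate_params(params: dict) -> dict:
    """Schema-style validation and sanitization for a hypothetical search tool."""
    query = params.get("query")
    if not isinstance(query, str) or len(query) > 1_000:
        raise ValueError("query must be a string under 1,000 characters")
    limit = params.get("limit", 10)
    if not isinstance(limit, int) or not 1 <= limit <= 100:
        raise ValueError("limit must be an integer between 1 and 100")
    return {"query": query.strip(), "limit": limit}
```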
Input validation is complicated by the fact that AI agents might generate edge cases that human users would never attempt: unusual parameter combinations, extreme values, or rapid-fire request patterns that expose race conditions.
AI agents are incredibly creative at finding unintended ways to achieve their goals, and they’ll happily exploit logical flaws in permission systems. If there’s a way to chain tool calls to attain elevated privileges or combine resource access to extract sensitive data, an AI system will eventually find it.
Authorization bypass prevention requires defense-in-depth approaches where multiple independent authorization checks prevent any single failure from compromising security.
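Here is a sketch of what layered checks can look like. Each layer evaluates independently, and the specific policies are illustrative stand-ins:

```python
# Sketch of defense-in-depth: four independent checks, each of which can
# deny the request on its own. The policies shown are illustrative.
def gateway_policy_allows(identity: str, tool: str) -> bool:
    return identity != "anonymous"              # central gateway policy

def server_scope_allows(scopes: set, tool: str) -> bool:
    return f"tool:{tool}" in scopes             # server-level scope check

def tool_acl_allows(identity: str, tool: str) -> bool:
    acl = {"db.lookup": {"learning-agent"}}     # per-tool allow list
    return identity in acl.get(tool, set())

def parameter_policy_allows(params: dict) -> bool:
    return len(str(params)) < 10_000            # crude argument-level constraint

def authorize(identity: str, scopes: set, tool: str, params: dict) -> bool:
    # Every layer must independently agree; no check trusts another's result.
    return (gateway_policy_allows(identity, tool)
            and server_scope_allows(scopes, tool)
            and tool_acl_allows(identity, tool)
            and parameter_policy_allows(params))
```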
The challenges with auditing and monitoring are unprecedented. Traditional security monitoring assumes human-scale activity patterns and recognizable attack signatures. AI agents generate significantly higher volumes of requests with less predictable patterns, making it more challenging to distinguish legitimate behavior from attacks.
You need specialized monitoring approaches that can track AI agent behavior, detect anomalies in tool usage patterns, and provide forensic trails that make sense in the context of AI decision-making.
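As one illustration (the baselining approach and threshold are ours, not a standard technique), you can flag agents whose recent tool usage diverges sharply from their own historical pattern:

```python
from collections import Counter

# Sketch of AI-aware anomaly detection: flag an agent whose tool-usage
# pattern drifts sharply from its own baseline. Threshold is illustrative.
def usage_drift(baseline: Counter, recent: Counter) -> float:
    """Fraction of recent calls to tools the agent never touched before."""
    total = sum(recent.values()) or 1
    unusual = sum(n for tool, n in recent.items() if baseline[tool] == 0)
    return unusual / total

baseline = Counter({"crm.read": 950, "email.send": 50})
recent = Counter({"crm.read": 20, "db.export": 80})  # sudden new behavior
if usage_drift(baseline, recent) > 0.5:
    print("alert: agent tool-usage pattern shifted; review forensic trail")
```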
When integrating with external MCP servers or allowing third-party tools, robust vetting policies become essential. Organizations need frameworks for evaluating external capabilities, managing trust relationships, and enforcing consistent security policies across diverse toolsets.
You can have a look at Anthropic’s MCP partners and see that many companies are jumping on the MCP train, hosting MCP servers that you can connect to directly over the Internet. Anthropic performs some vetting before granting access to its portal, but you and your organization need to determine your own comfort level. Stumbling across a random MCP server address should certainly be a red flag, and you should hesitate before enabling it in, say, your instance of GitHub Copilot.
Policy enforcement should occur at multiple layers:
Gateway level—Central policy enforcement for all MCP traffic
Server level—Server-specific policies and validation
Tool level—Individual tool authorization and parameter validation
Downstream level—Service-specific authorization and data access controls
The dismissive “it’s just a JSON API” attitude is responsible for more MCP security failures than any technical vulnerability. Treating MCP as “just another REST API” ignores the fundamental differences in how AI agents interact with systems compared to human users or traditional applications.
AI agents don’t have the same cognitive limitations as humans. They can:
Make thousands of requests per second
Explore every possible parameter combination systematically
Probe for vulnerabilities in ways that humans simply can’t
Operate without human intuition about what constitutes “reasonable” behavior
This means that rate limiting, input validation, and abuse detection systems need to be designed for fundamentally different threat models.
The JSON API perspective also underestimates the complexity of tool composition and chaining. While each MCP call may appear to be a simple API request, AI agents can combine multiple tool calls in sophisticated ways to achieve complex goals. It is important to evaluate the security implications of any individual tool in the context of all other available tools, which creates combinatorial complexity that’s easy to underestimate.
Perhaps most importantly, the “just a JSON API” mindset ignores the autonomy aspect of AI agents. When a human user makes an API call, there’s a human in the loop who can recognize when something goes wrong and intervene. When an AI agent makes MCP calls, the intervention has to be built into the system itself through automated monitoring, circuit breakers, and fail-safe mechanisms.
The unified authentication system provides comprehensive audit logging that’s essential for governance.
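A minimal sketch of such a logger, reflecting the features listed below; the field names are illustrative:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("mcp.audit")

def audit(event: str, user: str, tool: str, client_ip: str,
          success: bool, error: str | None = None) -> None:
    """Emit one structured audit record per security-relevant event."""
    audit_log.info(json.dumps({
        "ts": time.time(),   # timestamp
        "event": event,      # e.g. "auth_attempt", "tool_invocation"
        "user": user,        # identifier only, no sensitive data
        "tool": tool,        # tool/endpoint accessed
        "ip": client_ip,
        "success": success,
        "error": error,      # error details, sanitized upstream
        # deliberately absent: tokens, signatures, raw tool parameters
    }))

audit("tool_invocation", "user-123", "db.lookup", "10.0.0.7", success=True)
```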
Key features of the audit system:
Authentication attempts (success/failure)
User information (without sensitive data)
Tool/endpoint access
IP addresses and timestamps
Error details
Never logs signatures or tokens
Never logs sensitive parameters
The lesson isn’t that MCP is impossibly dangerous, but that it requires security thinking that’s appropriate to the actual threat model rather than defaulting to patterns designed for simpler use cases. OAuth provides a solid foundation, but it must be extended and adapted for the unique challenges of AI agent authorization and the autonomous, high-volume, exploratory nature of AI interactions with MCP systems.
MCP security represents a fundamental shift from protecting APIs to governing autonomous AI agents. It's a challenge that demands entirely new approaches to authentication, authorization, and monitoring.
The core insight driving our implementation is that traditional security models, designed for predictable human users, are insufficient for AI systems that operate with unprecedented autonomy, scale, and creativity. While OAuth provides a solid foundation, it requires significant extension through dynamic client registration, sophisticated token exchange services for agent-to-agent communication, and fine-grained scope-based authorization that adapts to context in real-time.
Moving MCP to production requires defense-in-depth architectures with multiple authorization layers, AI-specific input validation, and specialized monitoring that can detect anomalous behavior patterns at machine scale. The “just a JSON API” mindset is not only inadequate but also dangerous. It ignores the combinatorial complexity that emerges when AI agents can chain tools in sophisticated ways and the autonomous nature that removes human intervention from the security equation.
Organizations deploying MCP at scale require comprehensive audit logging, robust policy enforcement across gateway-to-downstream layers, and governance frameworks for third-party integrations that strike a balance between business requirements and security imperatives. While our implementation remains a work-in-progress, with decisions still being finalized, the framework we’ve outlined—encompassing identity delegation, multi-layered authentication, dynamic authorization, and AI-aware monitoring—provides a roadmap for enterprises ready to tackle the security challenges of the agentic AI era.
The time to start building these capabilities is now, before AI agent deployments outpace the security infrastructure needed to govern them safely.
The FactSet authors of this article are Tony Piazza, Principal Software Architect, and Telmuun Enkhbold, Machine Learning Operation Engineer.
This blog post is for informational purposes only. The information contained in this blog post is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this article.