Enterprise MCP (Model Context Protocol) — Part One

By FactSet Insight  |  August 28, 2025

Model Context Protocol (MCP) is a JSON-based communication standard that defines how AI models interact with external systems. Over the past few months, MCP has gained rapid traction, emerging as a critical piece of infrastructure for AI applications. Anthropic made MCP open source on November 25, 2024. Read Introducing the Model Context Protocol for an overview of MCP.

What Makes MCP So Attractive?

The answer lies in the fundamental problem that MCP addresses: AI model isolation, the gap between a model’s reasoning and its actual capabilities in the real world. Without standardized external access, language models are essentially sophisticated chatbots trapped in a sandbox, unable to read your files, query your databases, or interact with your systems in a meaningful way.

MCP bridges that gap by providing a standard interface between AI systems and real-world tools. Even more critically, it addresses a key scalability issue. Before MCP, every organization had to build custom integration layers, reinventing the wheel with bespoke APIs and one-off connectors that didn’t scale and couldn’t interoperate.

What makes MCP particularly crucial is its timing: we're at an inflection point where AI capabilities are advancing faster than the infrastructure needed to support them. Models are capable of handling complex, multi-step tasks, but they need access to real-world data and tools to be useful. MCP provides that access layer without requiring every team to become experts in AI integration. A developer can build an MCP server once and immediately make their tools available to any MCP-compatible AI system, whether it's Claude, a custom application, or future models yet to be built.
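To make that concrete, here is a minimal sketch of such a server using the official MCP Python SDK's FastMCP helper. The server name and the get_quote tool are illustrative placeholders:

```python
# A minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The server name and the get_quote tool are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("quote-service")

@mcp.tool()
def get_quote(ticker: str) -> str:
    """Return the most recent price for the given ticker symbol."""
    # A real implementation would query a market data service here.
    return f"{ticker}: 123.45"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Any MCP-compatible client can now discover and invoke get_quote through standard capability negotiation, with no client-specific glue code.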

From an enterprise perspective, MCP is important because it creates a path toward governed and scalable AI deployment. Instead of shadow IT, where several teams build their own AI integrations, MCP allows you to establish centralized tool registries, implement consistent authentication and authorization, and maintain visibility into how AI systems interact with your infrastructure. The protocol’s standardization means you are not dependent on any particular vendor’s ecosystem; you are building reusable assets that can work across different AI platforms and evolve as the technology landscape changes.

Most importantly, MCP shifts the conversation from “how do we connect AI to our systems” to “what capabilities should we expose and how do we govern them responsibly.” That is the conversation organizations need to have right now.

Most discussions about MCP get bogged down in the technical specifics of the protocol itself, when the harder question is how to govern access to tools at scale. Once you move beyond proof-of-concept MCP implementations and start deploying multiple MCP servers across teams, you quickly encounter the classic distributed-systems problem: multiple teams building overlapping capabilities with inconsistent interfaces and conflicting assumptions, resulting in authentication headaches and security concerns. The same challenges that led us to API gateways, service meshes, and developer portals apply equally to MCP deployments, but with the added complexity that your consumers are AI models rather than human developers.

Tool collisions are inevitable when multiple teams develop MCP servers independently. Two teams might both create “send_email” tools with subtly different parameter requirements or implement overlapping database query capabilities with varying assumptions about authentication. The protocol itself does not prevent this; it’s a governance challenge that requires organizational solutions.

The key is treating tool descriptions as API contracts with centralized management. Just as you would not let teams deploy conflicting REST endpoints, you should not allow teams to deploy conflicting tool definitions. Establish consistent naming conventions, parameter standards, and description templates that teams must follow when exposing capabilities through MCP.
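As a sketch of what enforcement might look like, the following registry-side check validates tool names against a hypothetical team_domain_verb convention and requires a few description sections. The convention and section names are organizational policy choices, not MCP requirements:

```python
# Registry-side check for tool naming and description standards. The
# team_domain_verb naming convention and required description sections
# are hypothetical organizational policies, not MCP requirements.
import re

NAME_PATTERN = re.compile(r"^[a-z]+_[a-z]+_[a-z_]+$")  # e.g. comms_email_send
REQUIRED_SECTIONS = ("Purpose:", "Parameters:", "Errors:")

def validate_tool(name: str, description: str) -> list[str]:
    """Return a list of policy violations; an empty list means the tool passes."""
    violations = []
    if not NAME_PATTERN.match(name):
        violations.append(f"name '{name}' does not follow team_domain_verb")
    for section in REQUIRED_SECTIONS:
        if section not in description:
            violations.append(f"description is missing a '{section}' section")
    return violations

# A bare "send_email" tool with a one-line description fails both checks.
print(validate_tool("send_email", "Sends an email."))
```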

It also means creating a process to enforce these standards. One approach is to store embeddings of every tool description in a vector database and score each new description's uniqueness against all existing ones. Sample prompts that should select a given tool can then be run as searches across those embeddings, and the results incorporated into the uniqueness score.
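Here is a minimal sketch of that uniqueness scoring, assuming the sentence-transformers library for embeddings; the model name and the similarity threshold are illustrative choices:

```python
# Scoring a candidate tool description's uniqueness against the registry.
# Assumes the sentence-transformers library; the model name and the 0.85
# similarity threshold are illustrative choices.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

registered = [
    "Send an email to one or more recipients with a subject and body.",
    "Query the trades database for executions matching a filter.",
]

def uniqueness_score(candidate: str) -> float:
    """Return 1 minus the max cosine similarity to any registered description."""
    corpus = model.encode(registered, normalize_embeddings=True)
    cand = model.encode([candidate], normalize_embeddings=True)[0]
    return 1.0 - float((corpus @ cand).max())  # dot product = cosine similarity

score = uniqueness_score("Send a message via email to a list of addresses.")
if score < 0.15:  # similarity above 0.85: likely a collision
    print(f"Too similar to an existing tool (uniqueness={score:.2f}); flag for review.")
```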

Tool descriptions also need to be written for AI consumption, not human documentation. Models interpret descriptions literally and lack the contextual knowledge to resolve ambiguities that humans would handle intuitively. A poorly described tool might work fine in testing but fail unpredictably when models encounter undocumented edge cases. Good governance includes review processes for tool descriptions, testing with actual AI models, and iterative refinement based on real usage patterns.
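The difference is easiest to see side by side. Both descriptions below are hypothetical, but the second spells out formats, ranges, and failure modes that a model cannot infer on its own:

```python
# Two hypothetical descriptions for the same tool. Models read these
# literally, so the second resolves ambiguities a human would handle
# intuitively: identifier format, date semantics, and failure behavior.
VAGUE = "Gets trades for an account."

EXPLICIT = (
    "Purpose: Return executed trades for a single account.\n"
    "Parameters: account_id (string, e.g. 'ACC-1042'); start_date and "
    "end_date (ISO 8601 dates, inclusive, at most a 90-day range).\n"
    "Errors: Fails with NOT_FOUND if the account does not exist; returns "
    "an empty list, not an error, when no trades match."
)
```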

Ultimately, the real challenge is building sustainable registries and governance frameworks that will outlast any particular implementation. The solution is to think systematically about centralized tool registries, proxied access patterns, and governance structures that can scale across enterprise environments while maintaining the flexibility that makes MCP valuable in the first place.

Central Tool Registry

The logical conclusion of multi-team MCP governance is a central tool registry, a single source of truth for all available tools, resources, and prompts across your organization. The goal of a tool registry is to provide the operational visibility and control needed to run AI systems in production environments. Without centralized registration, you have no visibility into which capabilities exist, who is using them, or how to coordinate changes across dependent systems.
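A registry can start as a simple data model. The fields below are one possible shape, assuming each entry tracks ownership, versioning, and a routable backend; none of this is prescribed by the protocol itself:

```python
# A minimal central-registry record. Field names are illustrative; the
# point is that every tool has an owner, a version, and a routable home.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisteredTool:
    name: str          # globally unique, policy-validated tool name
    description: str   # the AI-facing contract, reviewed like an API spec
    owning_team: str   # who answers when the tool misbehaves
    version: str       # bumped on any contract change
    server_url: str    # backend MCP server that implements the tool

registry: dict[str, RegisteredTool] = {}

def register(tool: RegisteredTool) -> None:
    """Reject name collisions at registration time rather than at runtime."""
    if tool.name in registry:
        raise ValueError(f"tool name collision: {tool.name}")
    registry[tool.name] = tool
```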

Proxied Access

A central registry naturally leads to proxied access patterns, where client applications connect to a single MCP endpoint that routes requests to appropriate backend servers based on tool names, authentication context, or business rules. This proxy layer becomes your control plane for AI capabilities, the place where you implement rate limiting, access controls, audit logging, and dependency management.

The proxy pattern also enables sophisticated routing logic that would be impossible with direct server connections. You might route database tools to read replicas during business hours and primary instances for critical operations, or automatically failover to backup implementations when primary servers are unavailable. The proxy can also implement request transformation, parameter validation, and response normalization to smooth over inconsistencies between different backend servers.
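A sketch of that routing logic might look like the following; the endpoints, the business-hours rule, and the db_ prefix convention are all illustrative assumptions:

```python
# Proxy-side routing: choose a backend MCP endpoint per tool call. The
# endpoints, business-hours window, and db_ prefix are all illustrative.
from datetime import datetime

REPLICA = "https://db-replica.internal/mcp"
PRIMARY = "https://db-primary.internal/mcp"
DEFAULT = "https://tools.internal/mcp"

def route(tool_name: str, critical: bool = False) -> str:
    """Return the backend endpoint that should serve this tool call."""
    if tool_name.startswith("db_"):
        business_hours = 9 <= datetime.now().hour < 17
        # Routine reads go to a replica during business hours to shed load;
        # critical operations always hit the primary instance.
        return REPLICA if business_hours and not critical else PRIMARY
    return DEFAULT
```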

From a client perspective, proxied access significantly simplifies configuration management. Instead of maintaining connection details for dozens of MCP servers, applications connect to a single proxy endpoint and discover all available capabilities through standard MCP capability negotiation.

Controller/Worker MCP

The controller/worker pattern extends proxied access by creating hierarchical MCP deployments where a controller server aggregates capabilities from multiple worker servers. This pattern is particularly useful for organizations with complex deployment topologies, where different geographic regions, security zones, or business units run their own MCP clusters while still participating in a global capability registry.

Controller servers handle capability discovery, request routing, and result aggregation, while worker servers focus on implementing domain-specific tools and resources. This separation of concerns allows teams to maintain autonomy over their implementations while participating in centralized governance frameworks. The controller can implement organization-wide policies, such as authentication, authorization, rate limiting, and audit logging, while workers focus on business logic.

The pattern also enables interesting hybrid deployments where some capabilities are available globally while others are restricted to specific contexts. A controller server might expose general-purpose tools to all clients while routing sensitive operations to worker servers that implement additional security controls.
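The sketch below captures the shape of that hierarchy: a controller builds a global capability map from its workers and enforces a clearance check before routing to restricted ones. The Worker class stands in for real MCP client sessions, and the restricted flag is an illustrative security policy:

```python
# A controller aggregating capabilities from worker MCP servers. The
# Worker class stands in for real MCP client sessions; the restricted
# flag is an illustrative security policy.
from typing import Callable

class Worker:
    def __init__(self, name: str, tools: dict[str, Callable], restricted: bool = False):
        self.name, self.tools, self.restricted = name, tools, restricted

class Controller:
    def __init__(self, workers: list[Worker]):
        # Global capability map: tool name -> owning worker.
        self.routes = {tool: w for w in workers for tool in w.tools}

    def list_tools(self, cleared: bool = False) -> list[str]:
        """Expose general tools to all clients; restricted ones only to cleared clients."""
        return [t for t, w in self.routes.items() if cleared or not w.restricted]

    def call(self, tool: str, cleared: bool = False, **kwargs):
        worker = self.routes[tool]
        if worker.restricted and not cleared:
            raise PermissionError(f"{tool} requires additional clearance")
        return worker.tools[tool](**kwargs)
```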

Conclusion

MCP entered the AI arena last November and has risen, largely unchallenged, into the daily vocabulary of AI workflows as an essential part of the agentic stack. Today, we briefly explored the role of enterprise governance and its importance given the vast array of tools you may want to introduce to your AI workflows. The effort to accurately articulate tool capabilities to LLMs, create mechanisms to rate that metadata, and iterate on it will prove valuable. Investing time to experiment with model tool choice will pay off in sustained accuracy and quality over time. A central registry enables this type of governance and lends itself to a proxied controller/worker pattern, enabling centralized routing and other infrastructural benefits that far outweigh the alternative: an ungoverned free-for-all.

The FactSet authors of this article are Tony Piazza, Principal Software Architect, and Telmuun Enkhbold, Machine Learning Operations Engineer.

FactSet clients have access to insightful technology articles on the FactSet Developer Hub.

Want to gain access to a network of industry experts, developers, architects and technologists? Request access to the FactSet Developer Hub.

This blog post is for informational purposes only. The information contained in this blog post is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this article.
