
How to Use Model Context Protocol the Right Way

by Markus Mueller
Published Apr 11, 2025

Executive Summary

The Model Context Protocol (MCP) is emerging as a foundational enabler for the next generation of intelligent, autonomous digital systems. As AI agents evolve from passive copilots to proactive actors capable of executing context-aware decisions, the limitations of traditional integration and API interaction models become increasingly apparent. Enterprises striving to harness the full power of agentic AI need new infrastructure — not just more powerful models, but a new way for these models to interact with tools, data, and enterprise logic.

MCP offers a standardized protocol designed to meet these demands. It creates a common language for AI agents to dynamically discover, inspect, and invoke tools without requiring custom code or hardcoded interfaces. This capability is transformative: it significantly lowers the barrier to entry for intelligent automation, introduces flexibility into traditionally rigid integration workflows, and opens the door to AI-native platforms that are composable, adaptive, and secure by design.

Despite its clear potential, MCP is still in its early stages. Organizations wanting to use MCP for their AI solutions must be careful to address security, governance, observability, and lifecycle management concerns. And in spite of the hype, implementers must remember that the protocol alone is not sufficient. Real enterprise value depends on the emergence of end-to-end platforms that bring together high-quality data, robust and trustworthy tools, and structured, policy-driven environments in which AI agents can operate safely and effectively. MCP accelerates the realization of such platforms, but its success hinges on ecosystem collaboration, tooling maturity, and standardization efforts.

This blog offers a clear-eyed view of what MCP is, what it enables, and what still needs to happen for it to become a cornerstone of enterprise AI infrastructure.

Introduction: The Need for a New Interface Layer

Over the last twenty years, APIs have evolved from isolated developer tools to foundational elements of enterprise architecture. They enable service reuse, system modularity, and integration at scale. Yet, they were never designed with AI in mind. They were built for deterministic consumers — systems that follow predictable paths and known patterns. As AI agents gain the ability to reason, plan, and act independently, this architectural foundation begins to show its age.

The emergence of large language models and agent frameworks introduces new expectations for software systems. No longer are we building applications where human users drive every interaction. Instead, increasingly, we are designing systems where intelligent agents interact with APIs on our behalf — selecting tools dynamically, adapting to new information, and coordinating across distributed systems to complete complex tasks.

In this landscape, traditional API interactions fall short. Hard-coded API integrations are brittle. Workflow engines designed for static control flows struggle with dynamic goal-oriented behavior. Also, documentation alone is not enough to make an API useful to an agent that must reason about its capabilities in real time. What is needed is a protocol that speaks the language of agents: structured, machine-readable, introspective (capable of understanding its own state), and consistent across tools.

This is where MCP enters the picture. The Model Context Protocol defines a standard way for AI agents to discover, inspect, and invoke tools exposed by MCP servers. Tools in this context are not limited to APIs; they may include database queries, file system interactions, or even natural language prompts. The value MCP brings is not in replacing existing APIs, but in abstracting and unifying them behind a common interaction pattern that is accessible to intelligent systems. In the words of Nate Jones, “It’s like the difference between giving someone step-by-step directions and handing them a map. With directions, they’re stuck following your exact path. With a map, they can find their own way — and maybe even discover better routes you hadn’t thought of.”

From a business perspective, MCP promises faster time-to-value for AI initiatives, greater reuse of existing integration assets, and a smoother transition from traditional IT to AI-enhanced operations. From a technical standpoint, it provides a clean separation between agent logic and backend capabilities, enabling more modular, secure, and adaptable system design. In the end, MCP gives developers the ability to create more meaningful agents — that is, agents capable of working with broader types of data and performing more complex tasks.

The protocol is rapidly gaining traction. Organizations such as Anthropic, OpenAI, Microsoft, and Salesforce are experimenting with or actively adopting MCP-compatible interfaces. Communities have begun curating tool registries, and open-source implementations are evolving rapidly. The excitement is palpable — but so are the gaps. In the sections that follow, we explore both the promise and the limitations of MCP, and we outline how enterprises can begin engaging with this emerging standard in a meaningful and sustainable way.

Understanding MCP: A Protocol, Not a Platform

MCP is best understood as a lightweight coordination layer that bridges the gap between intelligent agents and the tools they need to perform useful work. The protocol follows a client-server architecture, where agents (clients) connect to MCP servers that expose one or more tools. These tools are defined using JSON schemas that describe the expected inputs and outputs of each function, along with human-readable metadata that helps agents choose the right tool for the job.
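To make the tool-description pattern above concrete, here is a minimal sketch of such a descriptor in Python. The tool name, description text, and argument names are illustrative assumptions; only the overall shape (a JSON Schema `inputSchema` plus human-readable metadata) follows the pattern described above. The validation helper is likewise a simplified stand-in for full JSON Schema validation.

```python
# Hypothetical MCP tool descriptor: JSON Schema inputs plus
# human-readable metadata that helps an agent pick the right tool.
get_order_status = {
    "name": "get_order_status",  # identifier the agent invokes
    "description": "Look up the fulfillment status of a customer order.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order identifier"},
        },
        "required": ["order_id"],
    },
}

def validate_args(tool: dict, args: dict) -> list[str]:
    """Return the required arguments missing from args (empty means valid)."""
    schema = tool["inputSchema"]
    return [k for k in schema.get("required", []) if k not in args]
```

Because the descriptor is plain, machine-readable data, an agent can inspect it at runtime to decide whether the tool fits its current goal and what arguments it must supply.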

Communication between client and server is based on JSON-RPC 2.0, a well-established, transport-agnostic protocol that enables method invocation over various channels including stdio, HTTP, and WebSocket. This flexibility allows MCP to support both local and remote deployment scenarios. For example, tools running on a developer’s laptop can be exposed to a local agent using stdio, while cloud-based APIs can be invoked via HTTP endpoints protected by access controls.
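The JSON-RPC 2.0 framing described above can be sketched in a few lines. The `tools/call` method and its `name`/`arguments` parameters follow the MCP pattern; the specific tool name and order ID are illustrative assumptions.

```python
import json

def make_rpc_request(req_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request as a single JSON line,
    suitable for newline-delimited stdio framing."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# A tool invocation as an MCP client might send it over stdio or HTTP.
request = make_rpc_request(
    1,
    "tools/call",
    {"name": "get_order_status", "arguments": {"order_id": "A-1001"}},
)

parsed = json.loads(request)
```

Because the envelope is identical regardless of transport, the same agent logic can talk to a local process over stdio or a remote server over HTTP without modification.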

Crucially, MCP is not an alternative to REST or GraphQL. It sits one layer above those protocols. Traditional APIs can be wrapped as MCP tools, allowing them to participate in agentic workflows without changing their underlying implementation. In this sense, MCP provides a unified execution layer that complements existing API strategies while making them more accessible to AI systems.
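The wrapping idea above can be sketched as a thin dispatch layer: an existing REST call sits behind a tool handler, unchanged. Here `fetch_order` is a stubbed stand-in for a real HTTP request (e.g. via `requests`), so the example stays self-contained; the tool name and response fields are assumptions for illustration.

```python
def fetch_order(order_id: str) -> dict:
    """Stand-in for an existing REST endpoint; in practice this would be
    an HTTP call to the unchanged backend service."""
    return {"order_id": order_id, "status": "shipped"}  # stubbed response

# Registry mapping tool names to wrapped backend capabilities.
TOOL_HANDLERS = {
    "get_order_status": lambda args: fetch_order(args["order_id"]),
}

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch an agent's tool invocation to the wrapped backend."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return {"isError": True, "content": f"unknown tool: {name}"}
    return {"isError": False, "content": handler(arguments)}
```

The backend API keeps its existing contract; only the dispatch layer is new, which is what lets legacy services participate in agentic workflows without modification.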

However, MCP is not a complete platform. It does not provide identity management, policy enforcement, or monitoring out of the box. Nor does it define how tools should be governed, versioned, or retired. These responsibilities must be handled by the surrounding infrastructure. For enterprises, this means that the true value of MCP will only be realized in the context of a broader architecture that includes:

  • High-quality, accessible data that agents can reason about
  • Well-documented, reliable tools exposed via MCP
  • Identity and access control systems that ensure safe usage
  • Observability and logging mechanisms for auditing and debugging

In other words, MCP is the wiring. The intelligence and control must come from what is connected at either end. Organizations that treat MCP as a drop-in solution will be disappointed. Those that see it as an enabler for richer, more dynamic AI platforms will unlock its full potential.

Strategic Use Cases Across the Enterprise

Enterprises exploring MCP will find opportunities in several domains, each of which benefits from the protocol’s ability to decouple agents from the systems they interact with. In integration and automation, MCP transforms rigid workflows into flexible, composable environments. AI agents can invoke business processes as tools, using real-time context to determine what actions to take and in what order. This enables more responsive, resilient operations where processes adapt rather than fail in the face of change. Where determinism is required, existing integration processes can still provide it — especially where those processes underpin quality management, security, or other certifications.

In data management, MCP brings new capabilities to ETL pipelines and analytics environments. By exposing pipeline components as tools, organizations can empower agents to validate schemas, optimize load strategies, or troubleshoot data quality issues. The protocol also supports more sophisticated metadata management, where lineage, constraints, and usage patterns are available in real time to inform agent decisions.

AI management is another natural fit for MCP. As organizations deploy more agents into production, the need to observe, control, and configure their behavior becomes paramount. MCP can expose control surfaces as tools — start, stop, configure, monitor — enabling human operators and supervisory agents to manage fleets of agents in a consistent way. This capability will be essential for future architectures built on decentralized collections of AI agents that work together to solve complex problems.
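A control surface of this kind can be sketched as a small set of lifecycle operations that an MCP server could expose as tools. The `AgentFleet` class and its method names are illustrative assumptions, not a real API; a production version would back these operations with actual orchestration infrastructure.

```python
class AgentFleet:
    """Hypothetical control surface for a fleet of agents; each method
    maps naturally onto an MCP tool (start, stop, monitor)."""

    def __init__(self) -> None:
        self.running: set[str] = set()

    def start(self, agent_id: str) -> str:
        self.running.add(agent_id)
        return f"{agent_id}: started"

    def stop(self, agent_id: str) -> str:
        self.running.discard(agent_id)
        return f"{agent_id}: stopped"

    def monitor(self) -> dict:
        """Report which agents are currently running."""
        return {"running": sorted(self.running)}

fleet = AgentFleet()
fleet.start("agent-1")
```

Exposing these operations through one consistent protocol is what lets a supervisory agent manage heterogeneous agents the same way it invokes any other tool.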

API management also benefits. MCP enables a shift from human-first to agent-first API ecosystems. By registering APIs as tools with associated schemas and metadata, organizations can track how agents use their capabilities, apply policies dynamically, and gain insights into how AI systems interact with their infrastructure.

These use cases are not hypothetical. Early adopters are already proving the model. In developer tools like Replit and Sourcegraph, MCP enables context-aware code navigation and automated development workflows. The patterns are emerging, and the advantages are becoming clear.

Governance, Security, and the Road Ahead

As with any technology innovation, new capabilities bring new risks. Enterprises must weigh the challenges MCP raises before it can be considered enterprise-ready. Chief among these are security and governance.

The initial specification lacked native support for authentication or access control, placing the burden of enforcing these concerns on tool developers. This creates inconsistencies and increases the risk of misconfiguration. A second revision of the specification has since added rudimentary support for authentication. Yet experts have already pointed out where it still falls short of the authentication and authorization requirements commonly found in enterprises today.

Enterprises need stronger guarantees. They need a clear separation between the MCP server, which exposes tools, and the systems that handle identity, authorization, and audit. OAuth 2.1 and OIDC integration must be well-defined. Support for federated identity providers and token-based access should be a first-class concern. Without these elements, MCP will remain difficult to deploy in zero-trust or compliance-sensitive environments.
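The separation argued for above can be sketched as a wrapper that checks a caller's token scopes before any tool runs, keeping authorization outside the MCP server itself. Here `verify_token` is a stubbed stand-in for real OAuth 2.1/OIDC token introspection against an identity provider; the token value and scope name are assumptions for illustration.

```python
def verify_token(token: str) -> set[str]:
    """Stand-in for token introspection at an identity provider;
    returns the set of scopes granted to the token."""
    known_tokens = {"agent-token": {"orders:read"}}  # stubbed IdP data
    return known_tokens.get(token, set())

def authorized_call(token: str, required_scope: str, tool, args: dict) -> dict:
    """Enforce scope-based access before dispatching to the tool,
    so the MCP server never has to make authorization decisions."""
    if required_scope not in verify_token(token):
        return {"isError": True, "content": "forbidden"}
    return {"isError": False, "content": tool(args)}
```

Because the policy check lives in the wrapper, the same enforcement point can front every tool uniformly, which is the kind of clean separation zero-trust deployments require.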

Governance is another area not covered by the MCP specification. There is no standard for defining tool ownership, managing versions, or setting policies around usage. There are no built-in metrics or logging standards to monitor behavior or detect anomalies. Enterprises will need to layer governance infrastructure on top of MCP — or, more likely, wait for vendors to deliver MCP-compatible platforms that integrate these features out of the box.

The good news is that this work is already underway, including here at Boomi. Core platform capabilities can be leveraged to provide MCP governance. For example, an API gateway can be used to wrap MCP servers, validate tool schemas, and apply rate limits and access controls. Tool registries are beginning to emerge that allow organizations to publish, discover, and deprecate tools with versioning and metadata. These developments point to a future where MCP is not just a protocol, but part of a larger ecosystem of interoperable, governed, AI-ready components.

Conclusion: Building the Future, Together

The Model Context Protocol represents a shift in how we think about integration, orchestration, and system design. It abstracts complexity, promotes reuse, and enables AI agents to interact with enterprise systems in a standardized, secure, and scalable way. But it is also a work in progress. The promise is there, but the path to enterprise adoption requires collaboration across vendors, open-source contributors, and enterprise architects.

Organizations that want to lead in the era of agentic AI must start laying the groundwork now. They should begin exploring MCP in controlled settings, wrapping existing APIs and tools, and experimenting with agent-based workflows. They must also engage in the standards process — contributing use cases, identifying gaps, and pushing for security, governance, and observability as first-class citizens in the protocol.

Ultimately, MCP alone will not make AI agents successful. The real differentiator will be the platforms that surround it — platforms that curate high-quality tools, enforce robust policies, and provide the data, metadata, and visibility that intelligent systems need to act with precision and accountability. MCP is the interface. The work happens beyond it. And it is the organizations that understand this distinction that will define the future of AI in the enterprise.

For more information on how MCP was created, check out the Latent Space podcast episode featuring MCP co-authors David Soria Parra and Justin Spahr-Summers.
