MCP explained with an example

Recently, the Model Context Protocol (MCP), a concept coined by Anthropic for connecting LLMs with external tools, has been gaining serious traction among developers. In this post, we will try to understand it in some depth.
Before we jump into MCP, let’s first understand:
What is tool calling?
Tool calling is a mechanism in AI systems where LLMs invoke external tools or APIs to perform specific tasks. These tools can range from simple functions like retrieving data from a database to more complex operations such as sending emails or interacting with third-party services.
Example: AI-Assisted Flight Booking
Imagine you’re using an AI assistant to plan a trip. You say:
“Book me a flight from New York to San Francisco on October 15th.”
Here’s how tool-calling would work:
- Model Identifies the Need for a Tool — The AI detects that it requires a flight booking tool to fulfill your request.
- Tool Invocation — The model calls the flight booking API, providing the relevant details (origin, destination, date).
- Tool Execution — The flight booking system searches for available flights and returns the results.
- Model Response — The AI processes the results and replies: “Here are the available flights: Flight A at 9 AM, Flight B at 3 PM. Which one would you like to book?”
This process enables AI models to act as intelligent intermediaries, automating tasks that require external data retrieval and processing.
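To make the flow above concrete, here is a minimal, framework-agnostic sketch of a tool-calling loop in Python. The flight-search tool schema, the search_flights function, and the stubbed model decision are all invented for illustration; a real application would let an LLM provider's function-calling API make that decision instead of the stub.

```python
# 1. Describe the tool to the model (a JSON-schema-style definition).
FLIGHT_TOOL = {
    "name": "search_flights",
    "description": "Search available flights between two cities on a date.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "description": "YYYY-MM-DD"},
        },
        "required": ["origin", "destination", "date"],
    },
}

def search_flights(origin: str, destination: str, date: str) -> list[dict]:
    """Pretend flight-booking API. A real tool would call an external service."""
    return [
        {"flight": "Flight A", "departs": "9 AM"},
        {"flight": "Flight B", "departs": "3 PM"},
    ]

def fake_model_decision(user_message: str) -> dict:
    """Stand-in for the LLM deciding it needs a tool and emitting arguments."""
    return {
        "tool": "search_flights",
        "arguments": {"origin": "New York", "destination": "San Francisco",
                      "date": "2025-10-15"},
    }

# 2-4. Invoke the tool, execute it, and turn the result into a reply.
call = fake_model_decision("Book me a flight from New York to San Francisco on October 15th.")
if call["tool"] == "search_flights":
    results = search_flights(**call["arguments"])
    options = ", ".join(f"{r['flight']} at {r['departs']}" for r in results)
    print(f"Here are the available flights: {options}. Which one would you like to book?")
```

The key idea is that the model never fetches flights itself; it only decides which tool to call and with what arguments, while the application runs the tool and feeds the result back.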
Challenges in Tool-Calling
While tool calling is powerful, it comes with several challenges:
- Fragmentation — Different tools have different APIs and integration requirements. For example, a flight booking API might be structured differently from a hotel booking API, leading to inconsistent implementations (see the sketch after this section).
- Lack of Standardization — Without a common protocol, each AI application must implement its own integration logic, leading to redundant effort.
- Context Limitations — Tool calling is typically restricted to predefined tools. If a new tool becomes available, the AI model may not know how to use it unless it is explicitly updated.
- Error Handling — If a tool fails (e.g., the flight booking API is down), the AI may not handle the failure gracefully, leading to a poor user experience.
What is a protocol?
A protocol is a set of rules or standards that define how systems or devices communicate and interact with each other. It ensures consistency, interoperability, and efficient data exchange across different platforms.
A common example of a protocol is HTTP (Hypertext Transfer Protocol), which defines how web browsers and servers communicate to fetch and display web pages. Another example is SMTP (Simple Mail Transfer Protocol), used for sending emails between servers.
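Before moving on, here is a short sketch of the fragmentation problem from the challenges list above: two hypothetical booking services that do similar jobs but expose different shapes, so each one needs its own glue code inside the AI application. Both client classes and their methods are invented for illustration.

```python
# Two hypothetical booking services with incompatible interfaces.
class FlightAPI:
    def search(self, frm: str, to: str, date: str) -> list[dict]:
        return [{"id": "FL-1", "price_usd": 320}]

class HotelAPI:
    def find_rooms(self, city: str, check_in: str, check_out: str, guests: int) -> dict:
        return {"results": [{"hotel": "Hotel X", "rate": 180}]}

# Without a common protocol, the AI application needs bespoke glue for each tool:
def call_flight_tool(args: dict) -> list[dict]:
    return FlightAPI().search(args["origin"], args["destination"], args["date"])

def call_hotel_tool(args: dict) -> dict:
    return HotelAPI().find_rooms(args["city"], args["check_in"],
                                 args["check_out"], args.get("guests", 1))

# ...and this adapter work gets repeated in every application that wants these tools.
print(call_flight_tool({"origin": "NYC", "destination": "SFO", "date": "2025-10-15"}))
print(call_hotel_tool({"city": "SFO", "check_in": "2025-10-15", "check_out": "2025-10-17"}))
```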
Enter the Model Context Protocol (MCP)
MCP: a protocol for LLMs to interact with tools
At its core, MCP is an open protocol designed to standardize how AI/LLM applications interact with external systems, such as databases, tools, and resources.
Think of it as a universal language that allows AI models to seamlessly integrate with the tools and data they need to function effectively.
The motivation behind MCP stems from the idea that models are only as good as the context we provide them. A year ago, most AI applications required users to manually copy-paste or type in context. But today, AI systems are evolving to have direct hooks into your data and tools, making them more powerful and personalized. MCP was created to enable this seamless integration.
How is MCP Different from Tool Calling?
Before MCP, AI systems relied heavily on tool calling, where models would invoke specific tools to perform tasks. While tool calling is useful, it suffers from the problems we discussed above.
MCP, on the other hand, introduces a standardized interface for AI applications to interact with tools, prompts, and resources. Here’s how it differs:
Standardization: MCP provides a unified protocol for AI applications to connect with any MCP-compatible server. This eliminates the need for custom integrations.
Separation of Concerns: MCP separates tools, resources, and prompts into distinct components, each controlled by different entities (model, application, or user).
Dynamic Context: MCP allows models to dynamically discover and use tools and resources, making AI systems more adaptive and context-aware.
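To illustrate the dynamic-context point, here is a sketch of a client discovering a server's tools at runtime. It is based on the official MCP Python SDK (the mcp package); exact module paths and signatures can differ between SDK versions, and server.py is a placeholder for whichever MCP server you run.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch an MCP server as a subprocess and talk to it over stdio.
server = StdioServerParameters(command="python", args=["server.py"])  # placeholder server

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The client does not need the tool list baked in at build time;
            # it asks the server what it offers.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```

Because the tool list comes from the server at connection time, adding a new tool to the server makes it available to every connected client without changing the client's code.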
Advantages of MCP
Seamless Integration: Once your AI application is MCP-compatible, it can connect to any MCP server without additional work. This reduces development time and complexity.
Scalability: MCP enables enterprises to centralize their AI development. For example, one team can manage a vector database as an MCP server, while other teams can build AI applications on top of it.
Rich Context: MCP allows AI models to access richer context, making them more powerful and personalized. For example, an AI assistant can pull data from your CRM, GitHub, or local file system without requiring manual input.
Open Ecosystem: MCP is an open protocol, meaning anyone can build and share MCP servers. This fosters collaboration and innovation across the AI community.
A Simple Example of MCP in Action
Let’s say you’re using an AI-powered coding assistant like Cursor or Windsurf. You’re working on a GitHub repository and want to triage issues. Here’s how MCP makes this process seamless:
- Client-Server Interaction: The AI application (client) connects to an MCP server for GitHub.
- Tool Invocation: The model decides to invoke the list_issues tool to pull in all the issues from the repository.
- Resource Access: The server fetches the issues and sends them back to the client as a resource.
- Prompt Interpolation: The model uses a predefined prompt to summarize and prioritize the issues based on your past interactions.
- Action Execution: The model then invokes another tool to add the top issues to your Asana project, all without manual intervention.
This entire process is possible because MCP provides a standardized way for the AI application to interact with GitHub.
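Assuming a GitHub MCP server that exposes a list_issues tool, the client side of steps 2 and 3 might look roughly like this; the server launch command, the tool's argument names, and the omitted Asana step are assumptions for illustration, and real servers may differ.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command for a GitHub MCP server; real servers ship their own.
github_server = StdioServerParameters(command="github-mcp-server", args=[])

async def triage(repo: str) -> None:
    async with stdio_client(github_server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Steps 2-3: invoke the list_issues tool and get the issues back.
            result = await session.call_tool("list_issues", arguments={"repo": repo})
            print(result.content)
            # Steps 4-5 (summarizing with a prompt, pushing to Asana) would be
            # further prompt and tool calls driven by the model, omitted here.

asyncio.run(triage("my-org/my-repo"))
```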
MCP vs. Tool Calling: Key Differences

| Aspect | Traditional tool calling | MCP |
| --- | --- | --- |
| Integration | Custom, per-tool integrations with inconsistent APIs | One standardized protocol; any MCP-compatible client can talk to any MCP server |
| Tool availability | Restricted to tools wired in ahead of time | Tools, resources, and prompts discovered dynamically at runtime |
| Separation of concerns | Integration logic lives inside each AI application | Tools, resources, and prompts are distinct components controlled by the model, application, or user |
| Ecosystem | Redundant integration effort across teams | Open ecosystem of shareable MCP servers |
Why MCP Matters for the Future of AI
MCP is more than just a protocol — it’s a foundational layer for building intelligent agents. As AI models become more capable, the ability to dynamically discover and use tools and resources will be critical. MCP enables agents to evolve over time, discovering new capabilities and adapting to new tasks without requiring manual updates.
For example, imagine an AI agent that can automatically discover and integrate with a Grafana server to monitor your logs, even if it wasn’t programmed to do so initially. This level of adaptability is what makes MCP so powerful.
How to use the Model Context Protocol for your AI?
This is a little involved, so I will just share an overview to give you a feel for the process:
- Set Up an MCP Server: Build or use an existing MCP server that exposes tools, resources, or prompts. For example, a GitHub MCP server could provide tools like list_issues or create_pull_request (a minimal server sketch follows this list).
- Connect an MCP Client: Use an MCP-compatible AI application (client) like Claude or Cursor. The client connects to the MCP server to access its tools and resources.
- Invoke Tools or Resources: The AI model dynamically decides when to invoke tools (e.g., fetching data) or use resources (e.g., attaching files) based on the task. For example, you could ask, “List open issues in my GitHub repo,” and the model would call the list_issues tool from the GitHub MCP server.
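As a sketch of step 1, a tiny GitHub-flavored MCP server could look like this using the FastMCP helper from the official Python SDK; the list_issues tool and its hard-coded data are illustrative only, and a real server would call the GitHub API.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-demo")

@mcp.tool()
def list_issues(repo: str) -> list[dict]:
    """Return open issues for a repository (hard-coded here for illustration)."""
    return [
        {"number": 42, "title": "Login page crashes on Safari", "labels": ["bug"]},
        {"number": 43, "title": "Add dark mode", "labels": ["enhancement"]},
    ]

if __name__ == "__main__":
    # Serve over stdio so MCP clients such as Claude Desktop or Cursor can connect.
    mcp.run()
```

An MCP-compatible client (step 2) then connects to this server and can discover and call list_issues (step 3) without any GitHub-specific glue code inside the application.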
Conclusion
MCP is more than just another AI integration framework — it’s a game-changer for how AI applications interact with external tools and data. By introducing standardization, dynamic context discovery, and seamless tool integration, MCP eliminates the inefficiencies of traditional tool calling and unlocks a future where AI agents can adapt, evolve, and autonomously discover new capabilities.
For developers, businesses, and AI researchers, adopting MCP isn’t just an upgrade — it’s a necessary step toward building smarter, more autonomous AI applications. The question isn’t whether MCP will shape the future of AI — it’s how soon you’ll start using it to revolutionize your AI workflows.