MCP using LangChain with any AI model

How to integrate Model Context Protocol with LangChain

Model Context Protocol (MCP) servers are all over the internet, and a huge wave of new ones is on the way. In this tutorial, we will explore how to connect MCP servers to LangChain.

Let’s get started!

First of all, install the LangChain MCP adapters package:

pip install langchain-mcp-adapters

Next up, update your LangChain and LangGraph packages, along with the LangChain integration package for your AI model provider.

In this tutorial, we will be using Google's free Gemini API key, so we will also update the LangChain Gemini integration:

%pip install -qU langchain-google-genai langgraph langchain

The next steps are quite easy. We will create a custom MCP server that exposes mathematics tools and then integrate it with LangChain.

Here is an example taken directly from the langchain-mcp-adapters GitHub repository.

Create a file named `math_server.py`:

# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
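Before wiring this server into LangChain, note that the tool bodies are ordinary Python functions, so you can sanity-check the logic the agent will rely on with a quick standalone sketch (no MCP involved):

```python
# Standalone copies of the two tool bodies (no MCP server required):
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

# The query "(2+3)x4" we will send to the agent later should
# resolve to multiply(add(2, 3), 4):
print(multiply(add(2, 3), 4))  # 20
```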

Now create `mcp_serve.py` with the content below:

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
import asyncio
from langchain_google_genai import ChatGoogleGenerativeAI


model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-001")

# Use a raw string so the backslashes in the Windows path
# are not treated as escape sequences
server_params = StdioServerParameters(
    command="python",
    args=[r"C:\Users\datas\OneDrive\Desktop\math_server.py"],
)

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # Get tools
            tools = await load_mcp_tools(session)

            # Create and run the agent
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what is (2+3)x4"})
            return agent_response

# Run the async function
if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)

Here is a brief overview of what the code snippet above does:

  1. Uses Gemini + LangGraph agent — integrates Google’s gemini-2.0-flash model with a ReAct-style agent for reasoning.
  2. Calls an external Python tool server — spawns math_server.py via stdio and loads its math tools.
  3. Solves the math query — the agent answers (2+3)x4 by combining Gemini’s reasoning with the server’s add and multiply tools.
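The dict returned by `ainvoke` carries a `messages` list, and the model's final answer is the content of the last message. A minimal sketch of pulling it out, using a `SimpleNamespace` stand-in for a real LangChain message object (real runs return message classes from langchain-core, which also expose a `.content` attribute):

```python
from types import SimpleNamespace

def final_answer(agent_response: dict) -> str:
    # The last message in a LangGraph agent response holds the final answer
    return agent_response["messages"][-1].content

# Stand-in response; a real run returns LangChain message objects instead
fake_response = {"messages": [SimpleNamespace(content="(2+3)x4 = 20")]}
print(final_answer(fake_response))  # (2+3)x4 = 20
```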

Do remember that this MCP support only works with chat LLMs that support tool calling.

Run

python mcp_serve.py

And now your MCP server is working with LangChain.

How to use an existing MCP server with LangChain?

Quite easy: in the `mcp_serve.py` file, change the command and the arguments to the ones given in the GitHub repo of the particular MCP server you want to use.

For example:

For the Filesystem MCP server, the `server_params` snippet would be:

server_params = StdioServerParameters(
    command="npx",
    args=[
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir",
    ],
)
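If you want to combine several servers at once (say, the math server above plus the filesystem server), langchain-mcp-adapters also provides a `MultiServerMCPClient` that accepts a dict of per-server connection configs. Here is a sketch of such a config, with illustrative paths and the client call kept in comments since it needs the servers to actually be reachable:

```python
# Per-server connection configs for MultiServerMCPClient
# (from langchain_mcp_adapters.client); paths are illustrative.
server_config = {
    "math": {
        "command": "python",
        "args": ["math_server.py"],
        "transport": "stdio",
    },
    "filesystem": {
        "command": "npx",
        "args": [
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/Users/username/Desktop",
        ],
        "transport": "stdio",
    },
}

# Usage sketch (requires the servers to be available):
# from langchain_mcp_adapters.client import MultiServerMCPClient
# client = MultiServerMCPClient(server_config)
# tools = await client.get_tools()

print(sorted(server_config))  # ['filesystem', 'math']
```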

I hope you try out MCP servers with LangChain and build the generative AI application you have in mind.


MCP using LangChain with any AI model was originally published in Data Science in Your Pocket on Medium, where people are continuing the conversation by highlighting and responding to this story.
