Model Context Protocol (MCP) using Ollama

MCP Servers using Local LLMs tutorial

Photo by Lorenzo Herrera on Unsplash

Model Context Protocol (MCP) servers are widely seen as the next big step in AI: by giving models standardized access to external tools and data, they promise to make AI agents far more capable than today's chat-only assistants.


MCP, or Model Context Protocol, was released by Anthropic in late 2024. It defines a standard way for an LLM to connect to external software (databases, file systems, web search, and so on) and operate it through tool calls.

But there is a catch.

Most MCP servers are built and demonstrated around Claude, especially the Claude Desktop application, which comes with its own limitations.


Is there a way we can run MCP servers using local LLMs?

Yes. In this step-by-step tutorial, we will explore how to use MCP servers with local LLMs running on Ollama.

Let’s get started!


Setting up Ollama

Step 1: Install Ollama on your local system.
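On Windows and macOS, download the installer from ollama.com. On Linux, the official one-line install script does the job; either way, a quick version check confirms the CLI is on your PATH (the first command below is Linux-only):

curl -fsSL https://ollama.com/install.sh | sh
ollama --version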

Step 2: Pull an LLM that supports tool calling. How do you know which ones do? Browse the Ollama model library and look for models tagged with 'tools'.

How to pull? Open a terminal (cmd on Windows) and run (here, for qwen2.5):

ollama run qwen2.5
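If you prefer to download the model without starting an interactive chat, `ollama pull` does the same fetch, and `ollama list` then confirms the model is available locally:

ollama pull qwen2.5
ollama list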

Prepare config.json

config.json stores the information about the different MCP servers we will be using. Below is a sample config.json that I am using for now.

Note: As I am on Windows, I need to provide the full path to uvx, the database file, etc., with double backslashes as path separators. If you are using macOS or Linux, this is simpler: just use the command name (e.g., uvx).

{
  "globalShortcut": "Ctrl+Space",
  "mcpServers": {
    "sqlite": {
      "command": "C:\\Users\\datas\\anaconda3\\Scripts\\uvx.exe",
      "args": ["mcp-server-sqlite", "--db-path", "C:\\Users\\datas\\OneDrive\\Desktop\\Car_Database.db"]
    },
    "ddg-search": {
      "command": "C:\\Users\\datas\\anaconda3\\Scripts\\uvx.exe",
      "args": ["duckduckgo-mcp-server"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\datas\\OneDrive\\Desktop\\ollama-mcp"
      ]
    }
  }
}

Save the above JSON in some JSON file (e.g., `local.json`) and copy its full path.
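For reference, on macOS or Linux the same config can be written with bare command names, as noted above (a minimal sketch; the database and folder paths here are placeholders you would replace with your own):

{
  "globalShortcut": "Ctrl+Space",
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/path/to/Car_Database.db"]
    },
    "ddg-search": {
      "command": "uvx",
      "args": ["duckduckgo-mcp-server"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/ollama-mcp"]
    }
  }
}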

Setting up MCPHost

Our local LLM is now ready to use MCP servers. Next, let's set up MCPHost, the CLI host that connects Ollama models to those servers.

Step 1: Install Go.

Step 2: Open CMD and run this command.

go install github.com/mark3labs/mcphost@latest
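A quick note: `go install` drops the mcphost binary into your Go bin directory, typically %USERPROFILE%\go\bin on Windows or ~/go/bin on macOS/Linux, so make sure that directory is on your PATH. You can then check that both tools resolve (on macOS/Linux, use `which mcphost` instead of `where`):

go version
where mcphost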

Step 3: Start MCPHost using the command below, providing the path to the local.json file you created above:

mcphost -m ollama:qwen2.5 --config "C:\Users\datas\OneDrive\Desktop\local.json"

You are now ready to use MCP servers using local LLMs in Ollama.

Ask your local LLM questions that involve the connected software; it should be able to answer by calling the right tools. Keep in mind that if you are using a model with weak tool calling, you may need to be very specific about which tool it should use.
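For example, prompts along these lines exercise the three servers configured above (illustrative only; the table, file, and folder names are assumptions based on the sample config):

"Using the sqlite server, list the tables in the database and show the first five rows of the cars table."
"Using the ddg-search server, search the web for the latest Ollama release and summarize it."
"Using the filesystem server, list the files in the ollama-mcp folder."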

Thank you so much! I hope you try it out.


