
What Is MCP (Model Context Protocol)? The Standard That Makes AI Actually Useful at Work

Kofi Mensah
Inference Economics & Hardware Architect
Electrical Engineer | Hardware Systems Architect | 8+ Years in GPU/AI Optimization | ARM & x86 Specialist
Reading time: 12 min
Published: March 31, 2026
Updated: March 31, 2026
[Figure: abstract network diagram of an AI model connecting to tools and data sources, illustrating the MCP architecture]

Key Takeaways

  • MCP is the USB-C of AI. Before MCP, every AI tool needed a custom integration with every data source. MCP is the universal standard — one protocol, all connections.
  • 97 million installs in 16 months. Released by Anthropic in November 2024, MCP crossed 97 million installs in March 2026. Every major AI provider now ships MCP-compatible tooling.
  • Your data stays local by default. MCP servers run on your machine. The AI model calls the server to get what it needs — your files, calendar, email, database — without those files being uploaded to any cloud.
  • The shift from chat to action. MCP is what makes AI models genuinely useful at work. Without it, AI can only talk about your work. With it, AI can interact with your actual systems.

The Problem MCP Solves

Before November 2024, getting an AI to interact with your actual work looked like this:

You copy-paste a chunk of your code into Claude. You describe your database schema in natural language. You summarise your emails. You manually feed the AI context about your project because the AI has no way to access any of it directly.

This is tedious, error-prone, and has a fundamental ceiling — you can only give an AI what you can fit in its context window, and only what you remember to include.

MCP (Model Context Protocol) is Anthropic’s solution to this problem, released in November 2024 and now adopted by every major AI provider. It is an open standard that defines how AI models communicate with external tools and data sources — so instead of you acting as the bridge between AI and your systems, MCP creates a direct, standardised connection.

Direct Answer: What is MCP (Model Context Protocol)? Model Context Protocol (MCP) is an open standard released by Anthropic in November 2024 that defines how AI models connect to external tools, data sources, and services. It allows an AI model to securely call a local or remote server to fetch files, query databases, run code, interact with APIs, and access calendar and email data — without the user manually copying that information into the chat. MCP crossed 97 million installs in March 2026 and is supported by Claude, ChatGPT, Gemini, and all major AI providers. MCP servers can run locally on your device, meaning your data stays on your machine.


How MCP Works: The Three Components

MCP has three parts that work together:

1. MCP Host

The AI application you are using — Claude Desktop, Cursor IDE, Zed, or any MCP-compatible client. The host is what the user interacts with.

2. MCP Client

A component inside the host that speaks the MCP protocol. When the AI decides it needs external data, the client sends a standardised request to the appropriate server.

3. MCP Server

A lightweight program — running locally on your machine or remotely — that exposes specific tools and data to the AI. A filesystem MCP server gives the AI read/write access to your files. A database MCP server lets it query your Postgres or SQLite database. A calendar MCP server lets it read and create calendar events.

The key insight: the MCP server is the gatekeeper. You decide what each server exposes. The AI can only access what you explicitly grant it access to, via the server you configure.

User → Claude Desktop (MCP Host)
           ↓ AI decides it needs your project files
       MCP Client sends request
           ↓
       Filesystem MCP Server (running locally)
           ↓ reads ~/Projects/myapp/
       Returns file contents to Claude
           ↓
       Claude responds with actual context from your real files

No files were uploaded to Anthropic. No data left your machine. The AI got what it needed through a local server call.
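For the curious, the messages behind that exchange are plain JSON-RPC 2.0. A tool call from the client to the filesystem server looks roughly like this (the tool name and file path are illustrative; the exact names depend on the server):

{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "/Users/yourname/Projects/myapp/README.md" }
  }
}

The server replies with a result whose content is returned to the model as text:

{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      { "type": "text", "text": "# myapp\nA small demo project..." }
    ]
  }
}

The standardisation is the point: every MCP server speaks this same request and response shape, which is why one client can talk to any server.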


Why This Matters for Sovereignty

Before MCP, using AI with sensitive data meant one of two things: either you manually summarised your data (losing detail) or you uploaded it to the AI provider’s cloud (losing control).

MCP creates a third option: the AI calls a local server that you control, which returns only the specific data requested, which is then processed by the model. For Claude, that processing happens in Anthropic’s cloud — but the raw data (your files, your database, your email) never leaves your local MCP server unless you explicitly expose it.

For users running local models via Ollama, MCP creates full end-to-end sovereignty: local model, local MCP server, local data. Nothing leaves your machine at any point.


The Most Useful MCP Servers in 2026

Filesystem MCP Server

Gives Claude access to read and write files in directories you specify.

# Install via npm
npm install -g @modelcontextprotocol/server-filesystem

# Add to Claude Desktop config (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):
{
  "mcpServers": {
    "filesystem": {
      "command": "mcp-server-filesystem",
      "args": ["/Users/yourname/Documents", "/Users/yourname/Projects"]
    }
  }
}

Once configured, you can ask Claude: “Read my project README and suggest improvements” — and it will actually read the file, not ask you to paste it.

Git MCP Server

Lets Claude interact with your Git repositories — read commit history, diff changes, understand branch structure.

npm install -g @modelcontextprotocol/server-git

PostgreSQL / SQLite MCP Server

Gives Claude read access to your database. You ask it questions; it writes and runs the SQL.

npm install -g @modelcontextprotocol/server-postgres
# Set connection string: postgresql://localhost/mydb
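The connection string is passed when the server is launched. A minimal Claude Desktop entry might look like this (the database URL is a placeholder; adjust it to your own host, database, and credentials):

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}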

Brave Search MCP Server

Gives Claude real-time web search capability without leaving the conversation.

npm install -g @modelcontextprotocol/server-brave-search
# Requires a free Brave Search API key

Google Drive / Calendar MCP Server

Connects Claude to your Google Workspace — with your explicit authorisation.

GitHub MCP Server

Lets Claude interact with GitHub repos, issues, pull requests, and code reviews.
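A typical setup supplies a personal access token through an environment variable. Here is a sketch of the Claude Desktop entry, assuming the reference @modelcontextprotocol/server-github package (the token value is a placeholder; check the server's README for the exact variable name it expects):

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    }
  }
}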

Obsidian MCP Server (community)

Gives Claude read/write access to your Obsidian vault — your personal knowledge base becomes an AI-accessible resource.


Setting Up MCP With Claude Desktop: 20-Minute Guide

Step 1: Install Claude Desktop. Download it from claude.ai/download; MCP support is built in.

Step 2: Create the config file

# macOS
mkdir -p "$HOME/Library/Application Support/Claude"
touch "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

# Windows
# %APPDATA%\Claude\claude_desktop_config.json

Step 3: Add your MCP servers

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/Documents",
        "/Users/yourname/Projects"
      ]
    },
    "git": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-git"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your_api_key_here"
      }
    }
  }
}

Step 4: Restart Claude Desktop. The servers will start automatically. You will see a hammer icon (🔨) in the chat interface showing which MCP tools are available.

Step 5: Test it. Type: “List the files in my Documents folder” — Claude will call the filesystem server and return the actual listing.


MCP With Local Models (Ollama)

For full sovereignty, you can run MCP with Ollama instead of Claude. Several MCP-compatible clients support local Ollama models:

Continue.dev (VS Code/JetBrains extension) supports both Ollama and MCP:

// .continue/config.json
{
  "models": [{
    "provider": "ollama",
    "model": "llama3.3:8b",
    "title": "Local Llama"
  }],
  "mcpServers": [{
    "name": "filesystem",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/your/projects"]
  }]
}

Jan.ai — a local-first AI desktop app with built-in MCP support and Ollama integration.

This combination — Ollama + MCP + local servers — is the maximum sovereignty configuration. Your model runs locally, your data stays local, and the AI’s tool calls never leave your machine.


Building Your Own MCP Server

The MCP SDK makes building custom servers straightforward. A simple example that exposes your team’s internal wiki:

from mcp.server import Server
from mcp.types import TextContent, Tool
import mcp.server.stdio

server = Server("internal-wiki")

@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="search_wiki",
            description="Search the internal knowledge base",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"}
                },
                "required": ["query"]
            }
        )
    ]

def search_your_wiki(query: str) -> str:
    # Placeholder: replace with your real wiki search (database query, API call, etc.)
    return f"Results for '{query}' would appear here."

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "search_wiki":
        results = search_your_wiki(arguments["query"])
        return [TextContent(type="text", text=results)]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        # run() also needs initialization options describing the server
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options(),
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

This turns your private internal wiki into an AI-accessible resource — queryable by Claude or any MCP-compatible model — without exposing it to any external service.
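To make the server available to Claude Desktop, register it in claude_desktop_config.json like any other MCP server. The script path below is a placeholder for wherever you save the file:

{
  "mcpServers": {
    "internal-wiki": {
      "command": "python",
      "args": ["/path/to/wiki_server.py"]
    }
  }
}

The same command-and-args pattern works in any MCP-compatible client that launches stdio servers.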


MCP vs RAG: When to Use Which

Both MCP and RAG (Retrieval-Augmented Generation) solve the problem of giving AI access to external data. They are complementary, not competing:

                  MCP                                   RAG
Best for          Real-time data, tools, actions        Large document libraries
Data freshness    Always current (calls live systems)   Depends on index refresh schedule
Actions           Yes (can write, not just read)        No (read-only)
Setup complexity  Low (npm install + config)            Medium (embedding pipeline needed)
Sovereign option  Yes (local MCP servers)               Yes (local embeddings + Ollama)

Use MCP when you need the AI to interact with live systems — your current files, running databases, live APIs. Use RAG when you need the AI to search across a large, relatively static document corpus.

Many production deployments use both: MCP for real-time tool calls, RAG for searching the knowledge base.


FAQ

Is MCP safe to use with sensitive data? Yes, with local MCP servers. The server only exposes what you configure it to expose, and runs on your machine. The AI model (if using Claude) processes the returned data in Anthropic’s cloud — if that is a concern, use a local Ollama model with MCP for full local processing.

Does MCP work with models other than Claude? Yes. MCP is an open standard now supported by OpenAI, Google Gemini, and most major AI providers. It also works with local Ollama models via compatible clients like Continue.dev and Jan.ai.

Can MCP servers write data, not just read it? Yes. MCP supports both read and write tools. A filesystem server can create, edit, and delete files. A database server can run INSERT and UPDATE queries. You control exactly what actions each server exposes.

How many MCP servers can I run simultaneously? As many as you need. Each server runs as a separate process and can be active concurrently. Claude Desktop shows all available tools from all active servers in the hammer menu.

What is the difference between MCP and function calling/tool use? Function calling (OpenAI’s original implementation) and MCP both let AI models call external tools. MCP provides a standardised protocol so the same server works with any MCP-compatible client — you build the server once and it works with Claude, ChatGPT, Gemini, and local models. Function calling required a separate implementation per AI provider.
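That portability comes from every server advertising its tools in the same schema. When a client calls tools/list, the wiki server sketched earlier would answer with something like this (abbreviated):

{
  "tools": [
    {
      "name": "search_wiki",
      "description": "Search the internal knowledge base",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": { "type": "string", "description": "Search query" }
        },
        "required": ["query"]
      }
    }
  ]
}

Any MCP-compatible client can read that description and decide when to call the tool, with no provider-specific integration code.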



About the Author

Kofi Mensah

Inference Economics & Hardware Architect

Electrical Engineer | Hardware Systems Architect | 8+ Years in GPU/AI Optimization | ARM & x86 Specialist

Kofi Mensah is a hardware architect and AI infrastructure specialist focused on optimizing inference costs for on-device and local-first AI deployments. With expertise in CPU/GPU architectures, Kofi analyzes real-world performance trade-offs between commercial cloud AI services and sovereign, self-hosted models running on consumer and enterprise hardware (Apple Silicon, NVIDIA, AMD, custom ARM systems). He quantifies the total cost of ownership for AI infrastructure and evaluates which deployment models (cloud, hybrid, on-device) make economic sense for different workloads and use cases. Kofi's technical analysis covers model quantization, inference optimization techniques (llama.cpp, vLLM), and hardware acceleration for language models, vision models, and multimodal systems. At Vucense, Kofi provides detailed cost analysis and performance benchmarks to help developers understand the real economics of sovereign AI.

