Model Context Protocol (MCP) Explained for Developers
What MCP is, why it exists, how servers and clients work together, and when you would actually build one.
Model Context Protocol is a standard for connecting AI assistants to external tools and data sources. Instead of building a custom integration for every AI tool and every data source, MCP gives both sides a common interface — the AI client knows how to talk to any MCP server, and an MCP server exposes tools in a format any MCP client can use.
The problem it solves
Before MCP, every AI integration was bespoke. A code editor that wanted Claude to read files had to implement file reading itself. A chat interface that wanted Claude to query a database had to build the database connector. The AI tool and the data source had to be developed together or not at all.
MCP separates the two concerns. An MCP server exposes a set of tools (functions the AI can call). An MCP client (the AI assistant) discovers what tools are available and calls them when needed. The client does not need to know what database engine the server is using. The server does not need to know which AI model is calling it.
The three building blocks
Tools — functions the AI can call. A tool has a name, a description, and a JSON schema for its parameters. The AI decides when to call a tool based on the description.
{
  "name": "read_file",
  "description": "Read the contents of a file at the given path",
  "inputSchema": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "Absolute path to the file" }
    },
    "required": ["path"]
  }
}
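The inputSchema is not just documentation: the client and model use it to construct valid arguments, and a server typically validates incoming arguments against it before executing. A minimal, stdlib-only sketch of that validation step — `check_args` is a hypothetical helper; a production server would use a full JSON Schema validator:

```python
import json

def check_args(schema: dict, arguments: dict) -> list[str]:
    """Check required keys and string types against an inputSchema.
    (Hypothetical helper -- real servers would use a JSON Schema library.)"""
    errors = []
    for key in schema.get("required", []):
        if key not in arguments:
            errors.append(f"missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in arguments and spec.get("type") == "string" \
                and not isinstance(arguments[key], str):
            errors.append(f"argument {key!r} must be a string")
    return errors

schema = json.loads("""{
  "type": "object",
  "properties": {
    "path": { "type": "string", "description": "Absolute path to the file" }
  },
  "required": ["path"]
}""")

print(check_args(schema, {"path": "/etc/hosts"}))  # []
print(check_args(schema, {}))                      # reports the missing "path"
```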
Resources — data the AI can read. Unlike tools (which do something), resources expose content — a file, a database record, a config value. The client fetches a resource by its URI.
Prompts — reusable prompt templates stored on the server. Less commonly used than tools, but useful when you want to provide consistent phrasing for specific operations.
Most MCP servers focus on tools. Resources and prompts are optional.
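A resource read is an ordinary JSON-RPC request: the client names the resource by URI and the server returns its contents. A sketch of what that request looks like on the wire — the `file:///` URI here is illustrative; each server defines its own URI schemes:

```python
import json

# JSON-RPC request a client sends to fetch a resource by URI.
# The URI below is illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "file:///etc/hosts"},
}
print(json.dumps(request))
```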
How a tool call works
1. The AI client starts the MCP server and asks what tools are available (tools/list).
2. The server responds with a list of tool definitions.
3. The client passes the tool list to the AI model as part of its context.
4. The model decides to call a tool and returns a structured tool call (name + arguments).
5. The client sends the tool call to the server (tools/call).
6. The server executes the tool and returns the result.
7. The client passes the result back to the model.
8. The model continues generating, now with the tool result in context.
Steps 4–8 can happen multiple times in a single conversation turn. A model might call three tools in sequence before generating its final response.
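Both exchanges in that loop are JSON-RPC requests. A sketch of the two messages the client sends, reusing the read_file tool defined above (the id values and arguments are illustrative):

```python
import json

# The client asks the server what tools exist.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# After the model returns a structured tool call, the client forwards
# it to the server as a tools/call request.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/etc/hosts"},
    },
}

for msg in (list_request, call_request):
    print(json.dumps(msg))
```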
Transport: how clients and servers communicate
MCP servers can communicate over two transports:
stdio — the client starts the server as a child process and communicates over stdin/stdout. This is the standard for local tools. Claude Code and Claude Desktop use stdio for locally configured servers.
HTTP with SSE — the server runs as a network service and the client connects over HTTP. Used for remote servers or when multiple clients need to share one server instance.
Most developers building their first MCP server start with stdio — it requires no network setup and is easier to debug.
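Over stdio, the framing is simple: one JSON-RPC message per line, written to stdout and read from stdin. A minimal sketch of that framing, using an in-memory buffer to stand in for the pipe:

```python
import io
import json

def write_message(stream, message: dict) -> None:
    # stdio transport framing: one JSON-RPC message per line.
    stream.write(json.dumps(message) + "\n")

def read_message(stream) -> dict:
    return json.loads(stream.readline())

# Simulate the child process's stdin/stdout with an in-memory buffer.
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
pipe.seek(0)
print(read_message(pipe)["method"])  # tools/list
```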
A minimal MCP server in Python
import asyncio
from datetime import datetime, timezone

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

app = Server("my-tools")

@app.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="get_timestamp",
            description="Return the current UTC timestamp as an ISO 8601 string",
            inputSchema={"type": "object", "properties": {}},
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "get_timestamp":
        ts = datetime.now(timezone.utc).isoformat()
        return [TextContent(type="text", text=ts)]
    raise ValueError(f"Unknown tool: {name}")

async def main() -> None:
    # stdio_server() yields the read/write streams; the server runs on them.
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
Install the SDK:
pip install mcp
Run it:
python server.py
The server starts, waits for a client connection over stdio, responds to tools/list with the get_timestamp tool, and executes it when called.
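The tool body itself is plain Python, so the timestamp expression the handler uses can be sanity-checked outside the server before wiring it into MCP:

```python
from datetime import datetime, timezone

# The same expression the get_timestamp handler returns.
ts = datetime.now(timezone.utc).isoformat()
print(ts)  # an ISO 8601 string ending in the UTC offset +00:00
```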
Register a server with Claude Desktop
Add a server to Claude Desktop's config file:
{
  "mcpServers": {
    "my-tools": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
Config file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Restart Claude Desktop. The tools from your server are now available in every conversation.
When to build an MCP server
An MCP server is worth building when you have a data source or system that Claude should be able to query or act on during a conversation. Common examples:
- A company's internal documentation or knowledge base
- A custom database or API that Claude should be able to search
- File system operations in a specific directory
- CI/CD system status checks
- Anything you currently copy-paste into Claude repeatedly
If you find yourself consistently pasting the same kind of data into Claude before asking questions about it, that data source is a good candidate for an MCP server.
When not to build one
MCP adds latency and complexity. For one-off tasks, pasting the data directly is faster than building an integration. For simple prompt templates that do not need external data, a skill file is lighter-weight and easier to share.
The full walkthrough for registering a server is in the Add a Custom MCP Server to Claude Code tutorial.