What is MCP?
The Model Context Protocol (MCP) is an open standard introduced by Anthropic in late 2024 that defines how AI agents connect to external tools and data sources. Before MCP, every agent framework invented its own way to define and call tools, leading to fragmentation and vendor lock-in. MCP solves this with a universal protocol: tools are exposed as MCP servers (lightweight processes that declare their capabilities), and agents connect to them as MCP clients. Think of it like USB for AI: one standard interface that works everywhere.

Why MCP became the de facto standard:

- Before MCP: Each framework (LangChain, AutoGen, Semantic Kernel) had its own tool format. Tools built for one couldn't be reused in another.
- After MCP: A tool built as an MCP server works with any MCP-compatible client — Claude, ChatGPT, Cursor, VS Code, or your own agent.
- Adoption: Within months of its release, MCP gained support from Anthropic, OpenAI, Google, Microsoft, and most major agent frameworks.
From Hardcoded Tools to MCP
In the previous section, you built a weather agent with a tool defined directly inside the agent code. That works for a single agent, but what happens when you want a second agent to use the same weather tool? Or when a different team wants to add a new tool without touching your agent code? You'd have to copy the tool definition, keep the copies in sync, and redeploy every agent whenever a tool changes. This doesn't scale.

MCP solves this by separating tools from agents. Instead of defining tools inside your agent, you run them as independent MCP servers over HTTP. Any agent can connect, discover available tools, and call them, without knowing how they're implemented.

Here's what changes when you move the weather tool from hardcoded to MCP:

| | Hardcoded Tool (intro) | MCP Server |
|---|---|---|
| Where tool lives | Inside agent code | Separate process on localhost:8002 |
| Discovery | Agent knows tools at compile time | Agent queries tools/list at runtime |
| Reuse | Copy-paste to other agents | Any MCP client connects |
| Updates | Redeploy the agent | Restart the server — agents pick it up |
| Protocol | Framework-specific | Standard JSON-RPC over HTTP |

Here's the end-to-end flow:
- MCP server starts on a port and declares its tools (name, description, parameter schema)
- Agent connects as an MCP client and discovers available tools via tools/list
- User sends a query; the LLM sees the tool descriptions and decides which to call
- Agent calls the tool through the MCP protocol with structured parameters
- MCP server executes and returns a structured result
- LLM uses the result to continue reasoning or respond to the user
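
The discovery and call steps above are plain JSON-RPC 2.0 messages. As a sketch, these are the two requests a client sends; the method names come from the MCP specification, while the tool name and arguments are illustrative:

```typescript
// JSON-RPC 2.0 request the client sends to discover tools (MCP "tools/list").
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// JSON-RPC 2.0 request to invoke a discovered tool (MCP "tools/call").
// The tool name and arguments here are illustrative.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "get_weather",
    arguments: { city: "Paris" },
  },
};
```

Because both messages are standard JSON-RPC, any HTTP client can talk to any MCP server; no framework-specific wire format is involved.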
A single mcp.registerTool() call is all it takes to make a tool available over the protocol. Any MCP client that connects to this server will automatically discover get_weather and know how to call it.
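
As a sketch, here is the entry a registered weather tool would contribute to a tools/list response. The field names (name, description, inputSchema) follow the MCP tool schema; the description text and parameters are illustrative:

```typescript
// Illustrative tools/list entry for a registered get_weather tool.
// Clients read this at runtime to learn the tool's name, purpose,
// and the JSON Schema of its parameters.
const getWeatherTool = {
  name: "get_weather",
  description:
    "Get the current weather for a city. Use for questions about " +
    "current conditions, not historical climate data.",
  inputSchema: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. 'Paris'" },
    },
    required: ["city"],
  },
};
```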
Tool Design Principles
Principle 1: Clear, Descriptive Names
Bad names:

- process (process what?)
- fetch (fetch what?)
- do_thing (what thing?)

Good names follow a pattern:

```
[verb]_[noun]_[context]
```
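
A few illustrative renames that apply the pattern (the specific names are hypothetical):

```typescript
// Hypothetical renames from vague names to [verb]_[noun]_[context] names.
const renames: Record<string, string> = {
  process: "process_refund_request",
  fetch: "fetch_order_status",
  do_thing: "create_support_ticket",
};
```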
Principle 2: Comprehensive Descriptions
The description is the most important part of your tool. It must answer:

- What does this tool do?
- When should the agent use it?
- When NOT to use it (distinguish from similar tools)
- What format are inputs/outputs?
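As an illustration of that checklist, here is a weak description next to a stronger one for a hypothetical order-lookup tool (the ID format and return fields are made up for the example):

```typescript
// Too vague: the agent cannot tell when (or when not) to call this.
const weakDescription = "Gets order info.";

// Answers what, when, when NOT, and the input/output format.
const strongDescription =
  "Look up the status of a single order by its order ID " +
  "(format: ORD-12345). Use when the user asks about a specific " +
  "order's shipping or delivery status. Do NOT use for general " +
  "shipping policy questions; use search_knowledge_base for those. " +
  "Returns JSON with status, carrier, and estimated delivery date.";
```

The "Do NOT use" clause is what keeps the agent from confusing this tool with a similar one, which matters as soon as a server exposes more than one tool.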
Principle 3: Simple Parameter Schemas
Research shows: tool parameter complexity significantly affects agent accuracy.

| Parameter Count | Agent Accuracy |
|---|---|
| 1-3 parameters | 90%+ correct usage |
| 4-6 parameters | 75-85% correct usage |
| 7+ parameters | 60-70% correct usage |
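To make the contrast concrete, here is a sketch of a simple schema next to an over-complex one, using JSON Schema as in the MCP tool format (the specific parameters are illustrative):

```typescript
// Simple schema (2 parameters): easy for the model to fill correctly.
const simpleSchema = {
  type: "object",
  properties: {
    query: { type: "string", description: "Search keywords" },
    category: { type: "string", description: "Optional category filter" },
  },
  required: ["query"],
};

// Over-complex schema (8 parameters): accuracy drops. Split into
// several focused tools, or collapse rarely-used knobs into defaults.
const complexSchema = {
  type: "object",
  properties: {
    query: { type: "string" },
    category: { type: "string" },
    sort_by: { type: "string" },
    sort_order: { type: "string" },
    page: { type: "number" },
    page_size: { type: "number" },
    include_archived: { type: "boolean" },
    locale: { type: "string" },
  },
  required: ["query"],
};
```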
Principle 4: Consistent Return Formats
Every tool on a server should return the same standard response envelope, so that:
- Agent knows what to expect
- Easy to check success/failure
- Consistent error handling
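
A sketch of such an envelope (the field names are a common convention, not something mandated by MCP):

```typescript
// Hypothetical success and error envelopes sharing one shape:
// a success flag, a data payload, and a structured error (or null).
const ok = {
  success: true,
  data: { temperature: 18, conditions: "clear" },
  error: null,
};

const failed = {
  success: false,
  data: null,
  error: { code: "CITY_NOT_FOUND", message: "No weather data for 'Atlantis'" },
};
```

An agent can check `success` first and branch on `error.code` without parsing free-form error strings, which keeps error handling identical across every tool on the server.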
Build Your Own MCP Server
The weather server has one tool. A real production server has many. This customer support MCP server exposes four tools that work together: an agent connecting to it can answer FAQs, look up accounts, create tickets, and track orders, all through the same MCP protocol.

Search Knowledge Base
The first tool an agent reaches for when a user asks a question. It searches help articles by keyword and optional category, returning matches with confidence scores. The description explicitly tells the agent when to use it (“general questions, how-to, troubleshooting”) so it doesn’t call customer lookup for a simple FAQ.

Customer Support Agent

The full example ties everything together: three MCP servers (knowledge base, customer info, incident tickets), a LangChain agent that discovers tools from all of them, and thread-based memory so the conversation persists across turns. The agent discovers every tool via tools/list and the LLM picks the right one per query.
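
As a sketch of the knowledge-base tool's core logic, here is a plain function doing naive keyword matching with a confidence score. The articles, function name, and scoring are all hypothetical; a real server would register this behavior behind search_knowledge_base via mcp.registerTool():

```typescript
// Hypothetical help articles the tool searches over.
interface Article {
  title: string;
  body: string;
  category: string;
}

const articles: Article[] = [
  {
    title: "Reset your password",
    body: "Go to settings and choose reset password.",
    category: "account",
  },
  {
    title: "Track an order",
    body: "Use your order ID to track shipping status.",
    category: "orders",
  },
];

// Naive search: the confidence score is the fraction of query terms
// found in the article; an optional category narrows the candidates.
function searchKnowledgeBase(query: string, category?: string) {
  const terms = query.toLowerCase().split(/\s+/);
  return articles
    .filter((a) => !category || a.category === category)
    .map((a) => {
      const text = `${a.title} ${a.body}`.toLowerCase();
      const hits = terms.filter((t) => text.includes(t)).length;
      return { title: a.title, confidence: hits / terms.length };
    })
    .filter((r) => r.confidence > 0)
    .sort((a, b) => b.confidence - a.confidence);
}
```

Returning a ranked list with explicit confidence scores (rather than a single "best" answer) lets the LLM decide whether a match is good enough or whether it should fall back to another tool.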