Why Tools Matter
In the previous section, we saw a simple agent with one tool. Real applications need dozens or hundreds of tools. The quality of your tool design directly impacts agent reliability. Poor tool design leads to:
- Agents selecting wrong tools
- Excessive API calls (cost, latency)
- Confusing error messages
- Unpredictable behavior
Good tool design, by contrast, gives you:
- Accurate tool selection (>90%)
- Minimal agent steps
- Clear error handling
- Predictable, testable behavior
Function Calling Basics
Before we dive into advanced patterns, let’s understand the mechanics; a code sketch of this loop follows the list below.

How LLMs Use Tools:
- LLM receives tool descriptions (names, descriptions, parameters)
- LLM analyzes user query and available tools
- LLM decides which tool(s) to use and with what parameters
- LLM returns structured data indicating tool choice
- You execute the tool and return results
- LLM uses results to continue or respond
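To make the loop concrete, here is a minimal sketch using the OpenAI Python SDK's Chat Completions tool-calling interface. The get_weather tool, its stub implementation, and the model name are illustrative assumptions; the same flow applies to any provider's function-calling API.

```python
import json
from openai import OpenAI

client = OpenAI()

# 1. Describe the tool to the model: name, description, JSON Schema parameters.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city. Use when the user asks about current weather conditions.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name, e.g. 'Paris'"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> dict:
    # Stub implementation - a real system would call a weather API here.
    return {"city": city, "temperature_c": 18, "conditions": "partly cloudy"}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# 2-4. The model analyzes the query and either answers directly or returns a structured tool call.
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
message = response.choices[0].message

if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # 5. You execute the tool and send the result back to the model.
    result = get_weather(**args)
    messages.append(message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})

    # 6. The model uses the tool result to produce the final answer.
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```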
Model Context Protocol (MCP) Introduction
MCP is an open standard for connecting AI systems to data sources and tools. Think of it as “USB for AI” - a universal connector.

Why MCP Matters: you define a tool once behind an MCP server, and any MCP-compatible client can discover and call it. A tool definition and server setup sketch follows the list of benefits below.

MCP Benefits:
- Standardization: Same protocol for all tools
- Tool Discovery: Agents can list available tools dynamically
- Error Handling: Consistent error format
- Security: Built-in authentication/authorization
- Composability: Tools can call other tools
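Tool Definition and Server Setup: the sketch below uses the FastMCP helper from the official MCP Python SDK to define a weather tool and serve it over streamable HTTP. The get_weather and get_forecast tools, their canned data, and the port setting are illustrative assumptions; exact constructor options may vary by SDK version.

```python
# weather_server.py - minimal weather MCP server sketch (assumes the official `mcp` Python SDK).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather", port=8002)

@mcp.tool()
def get_weather(city: str) -> dict:
    """Get current weather conditions for a city.

    Use this when the user asks about the weather right now. Returns the
    temperature in Celsius and a short description of conditions.
    """
    # Stub data - a real server would call a weather API here.
    return {"city": city, "temperature_c": 18, "conditions": "partly cloudy"}

@mcp.tool()
def get_forecast(city: str, days: int = 3) -> list[dict]:
    """Get a multi-day weather forecast for a city (1-7 days)."""
    return [{"city": city, "day": i + 1, "high_c": 20, "low_c": 12} for i in range(days)]

if __name__ == "__main__":
    # Serves JSON-RPC over streamable HTTP, by default at /mcp.
    mcp.run(transport="streamable-http")
```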
Running the Weather MCP Server
The weather MCP server sketched above is a small, runnable example that demonstrates MCP’s power. You can use it with any agent framework:
- OpenAI Agent SDK
- LangGraph
- Google Gemini SDK
- Claude Agent SDK
- Any MCP-compatible client
The server runs at http://localhost:8002 and exposes the /mcp endpoint for JSON-RPC communication.
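An MCP-compatible client can connect to that endpoint, initialize a session, and discover the available tools. Here is a minimal sketch assuming the client helpers in the official MCP Python SDK; module paths may differ between SDK versions.

```python
# client_discovery.py - connect to the weather MCP server and list its tools.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    async with streamablehttp_client("http://localhost:8002/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # tools/list: discover available tools without hardcoding them in the agent.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # tools/call: invoke a discovered tool by name.
            result = await session.call_tool("get_weather", {"city": "Paris"})
            print(result.content)

asyncio.run(main())
```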
Key MCP Advantages Demonstrated:
- Tool Discovery: Agents can query tools/list to discover available tools without hardcoding
- Pluggability: Add/remove tools by starting/stopping MCP servers - no code changes needed in your agent
- Framework Independence: The same MCP server works with OpenAI, Anthropic, Google, or any other framework
- Separation of Concerns: Tool implementation is separate from agent logic - teams can work independently
- Reusability: Write the tool once, use it across all your AI applications
Tool Design Principles
Principle 1: Clear, Descriptive Names
Bad names:
- process (process what?)
- fetch (fetch what?)
- do_thing (what thing?)

Good names follow the pattern [verb]_[noun]_[context], for example search_knowledge_base, lookup_customer, or create_support_ticket (the tools we build later in this section).
Principle 2: Comprehensive Descriptions
The description is the most important part of your tool. It must answer:
- What does this tool do?
- When should the agent use it?
- When NOT to use it (distinguish from similar tools)
- What format are inputs/outputs?
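For illustration, compare a weak description with one that answers all four questions, using the knowledge-base search tool from the customer support example later in this section. The wording and the referenced tool names are suggestions, not a fixed template.

```python
# Weak description: says roughly what the tool does, but not when to use it,
# when not to, or what it returns.
bad_description = "Searches the knowledge base."

# Comprehensive description: what, when, when NOT, and input/output format.
good_description = """Search the internal support knowledge base for help articles.

Use this when the customer asks a how-to or troubleshooting question that a
documented article could answer. Do NOT use it to look up customer accounts
or order status (use lookup_customer or get_order_status instead).

Input: a natural-language query string (e.g. "reset my password").
Output: up to 5 articles, each with a title, URL, and relevance score (0-1).
"""
```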
Principle 3: Simple Parameter Schemas
Research shows: Tool parameter complexity significantly affects agent accuracy.

| Parameter Count | Agent Accuracy |
|---|---|
| 1-3 parameters | 90%+ correct usage |
| 4-6 parameters | 75-85% correct usage |
| 7+ parameters | 60-70% correct usage |
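As an illustration, here is an over-parameterized schema next to a simplified one for a hypothetical search_products tool; the field names and defaults are assumptions, not a prescribed API.

```python
# Too many knobs: 8 parameters the agent must reason about on every call.
complex_schema = {
    "query": "string",
    "category": "string",
    "min_price": "number",
    "max_price": "number",
    "sort_by": "string",
    "sort_order": "string",
    "page": "integer",
    "page_size": "integer",
}

# Simpler: 3 parameters, with sensible server-side defaults for everything else.
simple_schema = {
    "query": "string (required) - natural-language product search",
    "category": "string (optional) - product category to filter by",
    "limit": "integer (optional, default 10) - max results to return",
}
```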
Principle 4: Consistent Return Formats
Standard response envelope:
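A minimal sketch of such an envelope; the field names follow a common convention and the example values are illustrative.

```python
# Success case
success_response = {
    "success": True,
    "data": {"order_id": "A1234", "status": "shipped"},
    "error": None,
}

# Failure case - same shape, so the agent always checks the same fields.
error_response = {
    "success": False,
    "data": None,
    "error": {
        "code": "ORDER_NOT_FOUND",
        "message": "No order with id 'A1234'. Ask the customer to confirm the order number.",
    },
}
```

With a consistent envelope: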
- Agent knows what to expect
- Easy to check success/failure
- Consistent error handling
Practical Example: Building a Customer Support Tool Set
Let’s build a realistic set of tools for customer support using MCP. The set has four tools: a Knowledge Base Search Tool, a Customer Lookup Tool, a Support Ticket Creation Tool, and an Order Status Tool. Their schemas and stub implementations are sketched below.
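The sketch below defines the four tools as a single MCP server. Tool names, parameters, and canned return values are illustrative; a real implementation would call your knowledge base, CRM, ticketing, and order systems.

```python
# support_server.py - customer support tool set as an MCP server (sketch).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-support")

@mcp.tool()
def search_knowledge_base(query: str, limit: int = 5) -> dict:
    """Search support articles for how-to and troubleshooting questions.

    Do NOT use for account, ticket, or order lookups. Returns article titles,
    URLs, and relevance scores (0-1).
    """
    return {"success": True, "data": {"articles": []}, "error": None}

@mcp.tool()
def lookup_customer(email: str) -> dict:
    """Look up a customer account by email address.

    Returns profile, plan, and account status. Use before creating a ticket
    so the ticket can be linked to the right account.
    """
    return {"success": True, "data": {"customer_id": "c_123", "plan": "pro"}, "error": None}

@mcp.tool()
def create_support_ticket(customer_id: str, subject: str, description: str, priority: str = "normal") -> dict:
    """Create a support ticket for issues the knowledge base can't resolve.

    priority is one of: low, normal, high, urgent.
    """
    return {"success": True, "data": {"ticket_id": "t_789", "status": "open"}, "error": None}

@mcp.tool()
def get_order_status(order_id: str) -> dict:
    """Get shipping and payment status for a single order by its order ID.

    Do NOT use for general account questions (use lookup_customer instead).
    """
    return {"success": True, "data": {"order_id": order_id, "status": "shipped"}, "error": None}

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```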
Tool Implementation Patterns

Pattern 1: Tool Consolidation
Problem: The agent has to make 5 sequential calls to get complete data. Solution: consolidate related lookups into one tool that returns the complete picture, as in the sketch below.
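A sketch of the consolidation. The get_customer_overview tool and the stubbed backend lookups are hypothetical; the point is that one call replaces the five-call chain.

```python
# Before: the agent chains 5 calls (lookup_customer, get_orders, get_tickets,
# get_subscription, get_recent_activity) to understand one customer.
# After: one consolidated tool returns everything in a single call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-support")

# Hypothetical backend lookups, stubbed for illustration.
def _profile(email): return {"email": email, "name": "Jane Doe", "plan": "pro"}
def _recent_orders(email): return [{"order_id": "o_1", "status": "shipped"}]
def _open_tickets(email): return []
def _recent_activity(email): return [{"event": "login", "days_ago": 2}]

@mcp.tool()
def get_customer_overview(email: str) -> dict:
    """Get a complete customer overview in one call: profile, recent orders,
    open support tickets, and recent activity. Prefer this over separate
    lookups when the request needs the full picture."""
    return {
        "success": True,
        "data": {
            "profile": _profile(email),
            "recent_orders": _recent_orders(email),
            "open_tickets": _open_tickets(email),
            "recent_activity": _recent_activity(email),
        },
        "error": None,
    }
```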
Pattern 2: Semantic Enrichment

Don’t just return raw data - add context that helps the agent. A basic tool returns bare values; an enriched version of the same tool adds interpretation the agent would otherwise have to compute, as in the comparison below.
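The comparison uses the order-status tool from the support example above; the status values, dates, and derived fields are illustrative.

```python
from datetime import date

# Basic tool: returns raw data and leaves all interpretation to the agent.
def get_order_status_basic(order_id: str) -> dict:
    return {
        "success": True,
        "data": {"order_id": order_id, "status": "in_transit", "expected_delivery": "2025-07-02"},
        "error": None,
    }

# Enriched tool: same data, plus context the agent would otherwise have to infer.
def get_order_status_enriched(order_id: str) -> dict:
    expected = date(2025, 7, 2)
    days_left = (expected - date.today()).days
    return {
        "success": True,
        "data": {
            "order_id": order_id,
            "status": "in_transit",
            "expected_delivery": expected.isoformat(),
            "days_until_delivery": days_left,
            "is_delayed": days_left < 0,
            "summary": f"Order {order_id} is in transit, expected in {days_left} day(s).",
        },
        "error": None,
    }
```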
Pattern 3: Graceful Error Handling

Tools should never throw exceptions to the agent. Always return structured errors, as in the sketch below.
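A sketch of the pattern: catch failures inside the tool and return the same response envelope, with an error message that tells the agent what to do next. The URL, error codes, and messages are illustrative.

```python
import requests

def get_order_status(order_id: str) -> dict:
    """Get shipping status for an order; never raises - always returns the envelope."""
    try:
        resp = requests.get(f"https://orders.example.com/api/orders/{order_id}", timeout=5)
        resp.raise_for_status()
        return {"success": True, "data": resp.json(), "error": None}
    except requests.HTTPError:
        if resp.status_code == 404:
            return {"success": False, "data": None,
                    "error": {"code": "ORDER_NOT_FOUND",
                              "message": f"No order '{order_id}'. Ask the customer to double-check the order number."}}
        return {"success": False, "data": None,
                "error": {"code": "ORDER_SERVICE_ERROR",
                          "message": "The order service returned an error. Try again or escalate to a human agent."}}
    except requests.RequestException:
        return {"success": False, "data": None,
                "error": {"code": "ORDER_SERVICE_UNAVAILABLE",
                          "message": "Could not reach the order service. Tell the customer to try again shortly."}}
```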
Check Your Understanding

- Tool Design: You need a tool to search products. What should you name it and what should the description include?
- Parameter Complexity: Your tool has 8 parameters. What should you do?