What Actually Happens When Agents Connect
Forget the marketing diagrams. Here's how MCP integration actually works:
Client Spawns Server Process: Your MCP client (Claude Desktop, custom app, whatever) spawns an MCP server as a subprocess. Not a web service, not a daemon - a plain old process that talks JSON-RPC over stdio.
Handshake Dance: Client sends an initialize request with its capabilities. Server responds with its available tools, resources, and prompts. This is where schema mismatches bite you.
Request/Response Cycle: Client calls server methods using JSON-RPC 2.0. Server executes and returns results. Sounds simple until you deal with timeouts, errors, and state management.
Connection Dies Eventually: Process crashes, pipes break, or someone kills the connection. Your integration needs to handle this gracefully or suffer random failures.
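The lifecycle above can be sketched in a few lines. This is a minimal illustration, not the MCP SDK: the protocolVersion string and server command are placeholders, and a real client also negotiates capabilities and handles notifications before doing anything useful.

```python
import json
import subprocess

def make_initialize(client_name: str, client_version: str, msg_id: int = 1) -> str:
    """Build the initialize request as one newline-delimited JSON-RPC message.

    MCP's stdio transport frames each message as a single line of JSON.
    The protocolVersion here is illustrative; use whatever your SDK targets.
    """
    request = {
        "jsonrpc": "2.0",
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
        "id": msg_id,
    }
    return json.dumps(request) + "\n"

def spawn_server(command: list[str]) -> subprocess.Popen:
    """Spawn the MCP server as a plain subprocess speaking JSON-RPC on stdio."""
    return subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
```

When the subprocess dies, reads on stdout return EOF and writes raise a broken-pipe error; that's the "connection dies eventually" step, and it's where your restart logic lives.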
The Three Integration Patterns That Actually Work
Pattern 1: Direct Tool Calling
## Server exposes tools, client calls them directly
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "database_query",
    "arguments": {"sql": "SELECT * FROM users WHERE id = ?", "params": [123]}
  },
  "id": 1
}
This works for simple, stateless operations. Database queries, API calls, file operations. Breaks down when you need to maintain state across calls or handle long-running operations.
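A bare-bones client-side call for this pattern looks something like the sketch below. The send and recv callables stand in for the server's stdio pipes (hypothetical helpers, not SDK API); the point is that replies must be matched by id, because notifications and replies to other requests can interleave on the same pipe.

```python
import json
from typing import Callable

def call_tool(send: Callable[[str], None], recv: Callable[[], str],
              name: str, arguments: dict, msg_id: int) -> dict:
    """Issue a tools/call request and block until the matching response arrives.

    send/recv abstract the server's stdin/stdout; messages without our id
    (server notifications, other in-flight replies) are skipped here, though
    a real client would dispatch them instead of dropping them.
    """
    request = {
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
        "id": msg_id,
    }
    send(json.dumps(request) + "\n")
    while True:
        message = json.loads(recv())
        if message.get("id") == msg_id:
            return message
```

Note there's no timeout in this sketch: if the server hangs, so do you. That's the state-management cliff mentioned above.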
Pattern 2: Resource-Based Access
## Server exposes resources, client reads them
{
  "jsonrpc": "2.0",
  "method": "resources/read",
  "params": {
    "uri": "postgres://localhost/mydb/users/123"
  },
  "id": 2
}
Better for data access patterns. Server handles connection pooling, caching, and state management. Client just reads resources by URI. Works until your URIs get complex or you need write operations.
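The server owns pooling and freshness in this pattern, but clients often want a naive per-URI cache on top. A sketch (ResourceReader and its rpc_call parameter are illustrative names, not SDK API), mostly to show where the pain starts: invalidation is on you, which is exactly where complex URIs and write operations break the model.

```python
class ResourceReader:
    """Client-side wrapper for resources/read with a naive per-URI cache."""

    def __init__(self, rpc_call):
        # rpc_call(method, params) -> result dict; transport is abstracted away.
        self._rpc_call = rpc_call
        self._cache = {}

    def read(self, uri: str, use_cache: bool = True):
        """Return the resource at uri, reusing a prior result when allowed."""
        if use_cache and uri in self._cache:
            return self._cache[uri]
        result = self._rpc_call("resources/read", {"uri": uri})
        self._cache[uri] = result
        return result
```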
Pattern 3: Prompt Templates with Context
## Server provides prompt templates with dynamic context
{
  "jsonrpc": "2.0",
  "method": "prompts/get",
  "params": {
    "name": "analyze_user_behavior",
    "arguments": {"user_id": 123, "time_range": "7d"}
  },
  "id": 3
}
Most flexible for AI workflows. Server builds context-aware prompts, client feeds them to language models. Requires careful prompt engineering and context management.
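Once the server returns a prompt, the client still has to turn it into model input. A minimal sketch, assuming every message carries a text content block; real prompts/get results can also carry images or embedded resources, which this deliberately skips.

```python
def flatten_prompt(result: dict) -> str:
    """Flatten a prompts/get result into plain text for a model call.

    Only text content blocks are handled; anything else is silently dropped,
    which is the kind of shortcut that bites you in production.
    """
    lines = []
    for message in result.get("messages", []):
        content = message.get("content", {})
        if content.get("type") == "text":
            lines.append(f'{message.get("role", "user")}: {content.get("text", "")}')
    return "\n".join(lines)
```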
Authentication Patterns (Because Security Matters)
Environment Variables (Simple, Insecure)
export DATABASE_URL="postgresql://user:pass@localhost/db"
export API_KEY="sk-totally-not-leaked-in-logs"
Fine for development, terrible for production. Credentials leak through process lists, logs, and error messages.
Configuration Files (Better, Still Not Great)
{
  "auth": {
    "type": "oauth2",
    "client_id": "your-client-id",
    "token_file": "/secure/path/token.json"
  }
}
At least credentials aren't in environment variables. Still need to handle token refresh, file permissions, and secret rotation.
Runtime Token Exchange (Production-Ready)
## Server requests tokens when needed
{
  "jsonrpc": "2.0",
  "method": "auth/get_token",
  "params": {"scope": "database.read", "ttl": 3600},
  "id": 4
}
Client manages authentication, server requests tokens as needed. Handles expiration, rotation, and scope limitation properly.
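Client-side, the token-exchange pattern usually ends up as a small cache keyed by scope. A sketch, with fetch standing in for whatever actually performs the exchange (a hypothetical callable, not a standard API); the skew keeps in-flight requests from racing the expiry deadline.

```python
import time

class TokenBroker:
    """Client-side token cache for the runtime token-exchange pattern.

    fetch(scope, ttl) -> (token, expires_at), where expires_at is on the
    time.monotonic() clock. Tokens are refreshed `skew` seconds early so
    a request issued just before expiry doesn't arrive with a dead token.
    """

    def __init__(self, fetch, skew: float = 30.0):
        self._fetch = fetch
        self._skew = skew
        self._tokens = {}  # scope -> (token, expires_at)

    def get(self, scope: str, ttl: int = 3600) -> str:
        cached = self._tokens.get(scope)
        now = time.monotonic()
        if cached and now < cached[1] - self._skew:
            return cached[0]
        token, expires_at = self._fetch(scope, ttl)
        self._tokens[scope] = (token, expires_at)
        return token
```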
State Management (Where Things Get Messy)
Stateless Operations: Every request is independent. Simple to implement, hard to optimize. Your database takes a beating from connection overhead.
Connection Pooling: Server maintains database connections, caches results, handles cleanup. Much faster, but now you have state to manage and memory leaks to debug.
Session State: Track user sessions, workflow state, partial results. Essential for complex workflows, nightmare for debugging. Sessions leak, state goes stale, and correlation gets lost.
The MCP TypeScript SDK handles some of this automatically, but you'll still need to think about state lifecycle and cleanup.
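A minimal idle-TTL session store shows the cleanup half of the problem. This is a sketch of the idea, not production code: no locking, no persistence, and someone still has to schedule the sweep.

```python
import time

class SessionStore:
    """In-memory session state with idle-TTL eviction.

    Sessions not touched within ttl seconds get dropped on the next sweep;
    skip the sweep and a long-lived server leaks state exactly as described.
    """

    def __init__(self, ttl: float = 900.0):
        self._ttl = ttl
        self._sessions = {}  # session_id -> (state, last_touched)

    def touch(self, session_id: str, state: dict) -> None:
        """Create or refresh a session, resetting its idle clock."""
        self._sessions[session_id] = (state, time.monotonic())

    def get(self, session_id: str):
        entry = self._sessions.get(session_id)
        return entry[0] if entry else None

    def sweep(self) -> int:
        """Evict stale sessions; returns how many were dropped."""
        now = time.monotonic()
        stale = [sid for sid, (_, touched) in self._sessions.items()
                 if now - touched > self._ttl]
        for sid in stale:
            del self._sessions[sid]
        return len(stale)
```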
Error Handling (Because Everything Breaks)
Network Errors: JSON-RPC over stdio is reliable until the process dies. Implement restart logic or accept occasional failures.
Schema Validation: Servers change their schemas, clients send invalid requests. Use JSON Schema validation and version your APIs properly.
Timeout Handling: Long-running operations timeout, clients give up waiting. Implement async patterns or chunked responses for large operations.
Graceful Degradation: When integrations fail, what happens? Fall back to cached data, return errors, or crash? Plan for this upfront.
## Typical error response that you'll see a lot
{
  "jsonrpc": "2.0",
  "error": {
    "code": -32602,
    "message": "Invalid params",
    "data": {"expected": "string", "got": "null", "field": "query"}
  },
  "id": 5
}
Production integrations need comprehensive error handling, retry logic with exponential backoff, and monitoring for failure patterns. The MCP specification covers standard error codes, but real errors are always more creative.
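The retry half is worth sketching, because the common mistake is retrying everything. Exponential backoff with full jitter, applied only to transport-level failures (the names here are illustrative, not from any SDK); a JSON-RPC error like -32602 is deterministic, and retrying it just burns time.

```python
import random
import time

def call_with_backoff(fn, retries: int = 4, base_delay: float = 0.5,
                      max_delay: float = 8.0,
                      retriable=(ConnectionError, TimeoutError)):
    """Retry a flaky call with exponential backoff and full jitter.

    Delay grows as base_delay * 2**attempt, capped at max_delay, with a
    random jitter so a crowd of clients doesn't retry in lockstep.
    Non-retriable exceptions propagate immediately.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except retriable:
            if attempt == retries:
                raise  # out of retries; surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```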
Error Handling Architecture: Proper error handling requires circuit breakers, retry logic, timeouts, and comprehensive monitoring to track failure patterns across distributed agent networks.