MCP (Model Context Protocol) is Anthropic's attempt to standardize how AI apps access external data and tools. Released in November 2024, it's gaining traction because it solves a real problem: every AI integration currently requires its own custom code.
The Reality Check
MCP is early-stage tech with a small but growing ecosystem. Companies like Block, Apollo, Replit, and Zed are experimenting with it, but most implementations are still prototypes, not production systems.
When to Use It:
- You want Claude Desktop to access your local files
- Building internal tools that AI assistants need to query
- Prototyping AI integrations without custom API work
- You're already comfortable with TypeScript/Node.js
When to Skip It:
- You need something battle-tested for production
- Your team isn't familiar with TypeScript
- You're building simple one-off integrations
- You need to integrate with non-MCP AI systems
How It Actually Works
MCP servers expose three things to AI apps:
- Resources - Files, database records, API responses (read-only data)
- Tools - Functions the AI can call (database writes, API calls, file operations)
- Prompts - Reusable templates for common AI tasks
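To make the three primitives concrete, here's a sketch of what a client's requests look like on the wire. The method names (`resources/read`, `tools/call`, `prompts/get`) follow the MCP spec; the URI, tool name, and prompt name are made up for illustration:

```typescript
// Hypothetical JSON-RPC requests an MCP client sends, one per primitive.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

// Resource: read-only data, addressed by URI.
const readResource: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "resources/read",
  params: { uri: "file:///notes.txt" },
};

// Tool: a function call whose arguments the AI fills in.
const callTool: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "create_ticket", arguments: { title: "Fix login bug" } },
};

// Prompt: a reusable template fetched by name.
const getPrompt: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "prompts/get",
  params: { name: "summarize_thread", arguments: { length: "short" } },
};

console.log([readResource, callTool, getPrompt].map((r) => r.method).join(", "));
```

The point is that all three are just JSON-RPC methods; the client doesn't care whether the server is backed by files, a database, or an API.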
Under the hood it's JSON-RPC 2.0. Local servers talk over stdin/stdout (no network overhead); remote deployments use HTTP. It's basically GraphQL but for AI context.
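The stdio framing is simple: one JSON-RPC message per line. Here's a toy encode/decode sketch of that framing (this is not the SDK's transport, just the wire format as I understand it from the spec):

```typescript
// One JSON-RPC message per newline-terminated line on stdin/stdout.
function encode(msg: object): string {
  return JSON.stringify(msg) + "\n"; // newline delimits messages
}

function decode(line: string): { jsonrpc: string; id?: number; method?: string } {
  return JSON.parse(line);
}

const wire = encode({ jsonrpc: "2.0", id: 1, method: "tools/list" });
const parsed = decode(wire.trim());
console.log(parsed.method); // prints "tools/list"
```

Because it's just lines on stdin/stdout, you can debug a local server by piping JSON into it from a shell, no HTTP client needed.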
They built this because every AI integration was becoming a custom snowflake. Instead of writing OAuth flows for Gmail, Slack, GitHub, etc., you write one MCP server and it works with any MCP client. In theory.
TypeScript SDK: The Good and Bad
The Good:
- Full type safety if you're already using TypeScript
- Handles all the protocol complexity for you
- Works with Node.js 18+
- Has working examples you can actually run
The Bad:
- Documentation assumes you know how MCP works (you definitely don't on first try)
- Error messages are garbage - "Invalid request" tells you nothing about which field is wrong
- Still version 1.x - breaking changes every update
- Limited real-world production examples beyond GitHub's implementation
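The usual workaround for those vague "Invalid request" errors is to validate tool arguments yourself before the SDK sees them, so you at least learn which field is wrong. A minimal sketch, with a hypothetical `create_ticket` schema:

```typescript
// Validate tool arguments up front and name the offending field,
// instead of letting a bare "Invalid request" bubble up.
type FieldSpec = { type: "string" | "number"; required: boolean };

function validateArgs(
  schema: Record<string, FieldSpec>,
  args: Record<string, unknown>,
): string[] {
  const errors: string[] = [];
  for (const [field, spec] of Object.entries(schema)) {
    const value = args[field];
    if (value === undefined) {
      if (spec.required) errors.push(`missing required field "${field}"`);
      continue;
    }
    if (typeof value !== spec.type) {
      errors.push(`field "${field}" should be ${spec.type}, got ${typeof value}`);
    }
  }
  return errors;
}

// Hypothetical schema for a create_ticket tool.
const ticketSchema: Record<string, FieldSpec> = {
  title: { type: "string", required: true },
  priority: { type: "number", required: false },
};

console.log(validateArgs(ticketSchema, { priority: "high" }));
// → [ 'missing required field "title"', 'field "priority" should be number, got string' ]
```

Run the validator at the top of each tool handler and return the error list as the tool's result; the model can often fix its own call once it's told what was wrong.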
Anyway, enough bitching about the docs. Here's what actually works in practice...
Most people are still just fucking around with this - I wouldn't bet production on it yet. GitHub's MCP server is pretty much the only example that actually matters for real work. Everyone else is building internal tools and hoping they work.
The SDK is fast enough unless you're doing heavy database queries. Then you'll wait 2-3 seconds per operation and wonder why you didn't just build a REST API.
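If you're stuck with slow queries, a TTL cache in front of the tool handler takes the sting out of repeat calls. A sketch, where `slowQuery` is a stand-in for your real database call:

```typescript
// TTL cache so repeat tool calls don't pay the multi-second query cost each time.
const cache = new Map<string, { value: string; expires: number }>();
const TTL_MS = 60_000;

async function slowQuery(sql: string): Promise<string> {
  await new Promise((r) => setTimeout(r, 50)); // pretend this takes seconds
  return `rows for: ${sql}`;
}

async function cachedQuery(sql: string): Promise<string> {
  const hit = cache.get(sql);
  if (hit && hit.expires > Date.now()) return hit.value; // cache hit: no DB round-trip
  const value = await slowQuery(sql);
  cache.set(sql, { value, expires: Date.now() + TTL_MS });
  return value;
}

async function main(): Promise<void> {
  const a = await cachedQuery("SELECT * FROM tickets");
  const b = await cachedQuery("SELECT * FROM tickets"); // served from cache
  console.log(a === b); // prints "true"
}
main();
```

This only helps for read-heavy workloads, of course; anything that writes still pays full price, and you need to pick a TTL your data can tolerate.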
The real nightmare isn't performance - it's figuring out what data to expose without breaking security or overwhelming the AI with garbage. We gave Claude access to our entire PostgreSQL database once. It spent forever trying to understand our schema and then asked if we really needed 47 different user-related tables. Learned the hard way to expose curated views, not raw tables.
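In code, that lesson amounts to an allowlist: only curated views become resources, and raw tables never do. A sketch with made-up view names and a hypothetical `db:///` URI scheme:

```typescript
// Allowlist of curated views; raw tables are never exposed as resources.
const EXPOSED_VIEWS = new Set(["active_users_v", "open_tickets_v"]);

type Relation = { name: string; kind: "table" | "view" };

function exposableResources(relations: Relation[]): string[] {
  return relations
    .filter((r) => r.kind === "view" && EXPOSED_VIEWS.has(r.name))
    .map((r) => `db:///${r.name}`); // hypothetical resource URI scheme
}

const schema: Relation[] = [
  { name: "users", kind: "table" },         // raw table: hidden
  { name: "user_secrets", kind: "table" },  // definitely hidden
  { name: "active_users_v", kind: "view" }, // curated view: exposed
  { name: "open_tickets_v", kind: "view" },
];

console.log(exposableResources(schema));
// → [ 'db:///active_users_v', 'db:///open_tickets_v' ]
```

The views do double duty: they keep secrets out of the model's context and they pre-join the schema into something the AI can actually reason about.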