It's @anthropic-ai/sdk on npm. Anthropic's official TypeScript client for talking to Claude without pulling your hair out. About 2 million weekly downloads because apparently other people also got tired of janky wrappers.
Switched to this after LangChain decided to break our chat app again. LangChain has this fun habit of fucking up function calling every few weeks with zero warning. Watching production chat return 400s to paying customers gets old real fast.
Why It Doesn't Suck Like Everything Else
Streaming doesn't randomly shit the bed - Uses Server-Sent Events like a normal person. Third-party wrappers buffer everything then vomit it all at once. This one actually streams like you'd expect.
Had OpenAI's SDK drop connections mid-response on our customer support chat. Spent 4 hours debugging only to find out their SSE implementation is hot garbage. This one hasn't pulled that shit yet.
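Rough sketch of what streaming looks like (model name and prompt are placeholders, key comes from ANTHROPIC_API_KEY):

```ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // picks up ANTHROPIC_API_KEY from the environment

// messages.stream() gives you an event emitter on top of the raw SSE stream
const stream = client.messages.stream({
  model: 'claude-3-5-sonnet-latest', // swap in whatever model you actually use
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Summarize this support ticket.' }],
});

stream.on('text', (text) => process.stdout.write(text)); // tokens as they arrive
const finalMessage = await stream.finalMessage(); // the assembled message at the end
```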
TypeScript types don't lie to your face - Sounds basic but you'd be shocked how many SDKs get this wrong. Vercel's AI SDK swore `max_tokens` was optional. Spoiler alert: it fucking wasn't. API kept spitting out 400s. The official types for model names and parameters actually work in IntelliSense without making you want to rage quit.
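Concretely (same `client` as the streaming sketch above): delete the `max_tokens` line below and tsc yells at you before the API ever gets a chance to 400.

```ts
const msg = await client.messages.create({
  model: 'claude-3-5-sonnet-latest',
  max_tokens: 1024, // required by the types; omit it and the compiler complains
  messages: [{ role: 'user', content: 'ping' }],
});

// msg.content is a typed array of content blocks, not `any`
console.log(msg.content);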
Function calling doesn't require a computer science degree - Got database integration working in 30 minutes instead of the 2 days I wasted on LangChain's agent clusterfuck. The tool use examples actually copy-paste and work. Imagine that.
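Here's the basic tool use shape. The `lookup_order` tool and its schema are mine, made up for illustration, but the request structure is the SDK's standard pattern:

```ts
const response = await client.messages.create({
  model: 'claude-3-5-sonnet-latest',
  max_tokens: 1024,
  tools: [
    {
      name: 'lookup_order', // hypothetical tool for illustration
      description: 'Fetch an order from the database by its ID',
      input_schema: {
        type: 'object',
        properties: { order_id: { type: 'string' } },
        required: ['order_id'],
      },
    },
  ],
  messages: [{ role: 'user', content: 'Where is order 58812?' }],
});

// if Claude decided to call the tool, there's a tool_use block in the content
const toolUse = response.content.find((block) => block.type === 'tool_use');
if (toolUse && toolUse.type === 'tool_use') {
  console.log(toolUse.name, toolUse.input); // run your real lookup with this input
}
```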
Error messages aren't complete garbage - When stuff breaks, you get `BadRequestError` or `RateLimitError` with actual request IDs for support tickets. No more "oops something went wrong" bullshit.
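Catching those looks roughly like this (error subclasses hang off the default export; the request ID rides along on the error's headers per the SDK README):

```ts
import Anthropic from '@anthropic-ai/sdk';

// assumes the same `client` from the earlier sketches
try {
  await client.messages.create({
    model: 'claude-3-5-sonnet-latest',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'ping' }],
  });
} catch (err) {
  if (err instanceof Anthropic.RateLimitError) {
    // 429s get a dedicated class, so backoff logic doesn't need string matching
    console.error('rate limited:', err.status, err.message);
  } else if (err instanceof Anthropic.BadRequestError) {
    // headers carry the request ID to paste into your support ticket
    console.error('bad request:', err.status, err.message, err.headers);
  } else {
    throw err; // don't swallow what you can't explain
  }
}
```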
Runtime Support (What Actually Works)
Runs on Node 18+, Bun, Deno, and Cloudflare Workers. Browser support exists with `dangerouslyAllowBrowser: true`, but please don't be the asshole who commits API keys to git and ends up on r/badcode.
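If you genuinely need it in a browser, the least-bad setup I know of is pointing `baseURL` at your own proxy that injects the real key server-side. The proxy URL here is made up:

```ts
import Anthropic from '@anthropic-ai/sdk';

// browser code: the real API key never ships in the bundle
const client = new Anthropic({
  apiKey: 'placeholder', // dummy value; your proxy swaps in the real key
  baseURL: 'https://your-app.example/api/anthropic', // hypothetical proxy endpoint
  dangerouslyAllowBrowser: true, // you're opting into the footgun explicitly
});
```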
Cloudflare Workers has a 10MB memory limit that Claude can blow past with large responses. Everything else works fine though.
How It's Built (For The Curious)
Built on the fetch API with retry logic and exponential backoff. Sometimes it retries 400 errors way longer than it should, but whatever, it mostly works.
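Both knobs are constructor options, and you can override them per request. Values here are arbitrary:

```ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  maxRetries: 5, // default is 2; retried with exponential backoff
  timeout: 60_000, // ms budget for the whole request
});

// per-request overrides ride along as the second argument
await client.messages.create(
  {
    model: 'claude-3-5-sonnet-latest',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'ping' }],
  },
  { maxRetries: 0, timeout: 20_000 }, // fail fast on this one call
);
```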
Has auto-pagination for batch endpoints and timeout scaling for large token requests. The examples directory has working code you can copy.
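The auto-pagination is the for-await kind: the iterator fetches subsequent pages behind your back instead of making you juggle cursors. Field names below match the batches API as I understand it:

```ts
// assuming the `client` from earlier; walks every page of the batches list
for await (const batch of client.messages.batches.list({ limit: 20 })) {
  console.log(batch.id, batch.processing_status);
}
```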