Cold Start Performance Reality Check

Cold starts mean your API is basically dead in the water for the first few seconds. No persistent processes, no warm servers - just you, Lambda, and the sound of users bouncing while AWS decides to wake up your function.

Every time Lambda downloads 600KB of Express garbage, I lose money and patience. Hono's 12KB isn't just a nice-to-have - it's the difference between a usable API and watching users bail during that painful first request.

Bundle Size Impact on Cold Starts

These numbers are from actual production deployments, not synthetic benchmarks:

  • Cloudflare Workers: Hono ~100ms, Express 2+ seconds of pure suffering
  • AWS Lambda: Hono ~500ms, Express hits 3-4 seconds (and sometimes just times out)
  • Vercel Edge: Same story, different platform

Cold starts are random as hell - same function can take 200ms or 3 seconds depending on AWS's mood, your region, and whether Mercury is in retrograde. But smaller bundles consistently perform better across this chaos.

Common Cold Start Performance Killers

Here's what'll fuck your cold starts (learned the hard way):

  • AWS SDK v2 - This ancient piece of shit adds 100MB. Use v3 or hate your life
  • Database connections at startup - PostgreSQL pooling doesn't work in serverless. Found this out at 3am
  • 128MB Lambda memory - Whoever set this default should be fired. Use 512MB-1GB or watch everything crawl
  • Prisma at startup - 200MB+ schema loading killed production twice for me

GitHub issues are full of people who imported their entire ORM at the module level and wondered why Lambda timed out. Don't be that person.

Runtime Setup Options

Cloudflare Workers - Fast as hell but picky:

import { Hono } from 'hono'
import { compress } from 'hono/compress'

const app = new Hono()
app.use('*', compress())

export default app

No filesystem, no Node.js APIs, no bullshit. Perfect for APIs that just serve JSON and don't need to read files.

Bun - Benchmarks look amazing, reality is messier:

import { Hono } from 'hono'
import { serveStatic } from 'hono/bun'

const app = new Hono()
app.use('/static/*', serveStatic({ root: './public' }))

export default {
  port: 3000,
  fetch: app.fetch,
}

Bun breaks on minor version updates. Only use if you enjoy debugging mysterious runtime failures.

Node.js - Boring and reliable:

import { serve } from '@hono/node-server'
import { Hono } from 'hono'

const app = new Hono()
serve(app)

Slower startup, but shit actually works. Use this unless you have a specific reason not to.

Memory Management on Serverless

Express drags in 31 dependencies that each have their own dependency hell. Hono has zero. This shit matters more than you think.

Memory fuckups that'll kill your Lambda:

  • Global variables that grow - Stored user data in global scope, OOMed after 2 hours
  • Buffering 50MB responses - Tried to return a full CSV, Lambda just died
  • Prisma schema loading - 200MB consumed before handling a single request

app.get('/export-data', async (c) => {
  // Don't load this shit unless you need it
  const { generateReport } = await import('./report-generator')
  
  return c.stream(async (stream) => {
    await generateReport(stream)
  })
})

Lazy imports saved my ass when CSV exports started OOMing Lambda. Load heavy shit only when you actually need it.

Router Performance Characteristics

Hono has multiple routing strategies because one size doesn't fit all. Express uses one router that gets sluggish around 100+ routes - noticed this when our admin panel started feeling like molasses.

  • RegExpRouter: Default, compiles to regex patterns
  • LinearRouter: Simple sequential matching for small apps
  • SmartRouter: Picks the best strategy automatically

Express routing gets slower with every route you add. I saw admin dashboards with 200+ routes take 50ms just to figure out which handler to call.

Hono vs Other Frameworks Performance

Hono's routing stays consistent whether you have 10 routes or 1000. Express... doesn't.

Real-world framework comparison (these numbers actually matter in production):

| Performance Factor | Hono | Express | Fastify | Reality Check |
|---|---|---|---|---|
| Cold Start | ~200ms | 3-6 seconds | 1-2 seconds | Cold starts will ruin your demo |
| Bundle Size | 12KB | 579KB | 200KB | Every KB costs money on Lambda |
| Request Throughput | 50k+ req/s | 15k req/s | 80k+ req/s | Benchmarks are bullshit - your app is different |
| Learning Curve | Moderate | Low | Medium | Express devs are everywhere for a reason |
| Ecosystem | Developing | Extensive | Mature | Express has 50,000 middleware, Hono has 50 |
| Edge Runtime | Native | No | Limited | Edge sounds cool until you can't use half your libs |
| Developer Pool | Limited | Large | Growing | Good luck hiring Hono experts |
| Stability | Beta-ish | Rock solid | Production | Express has been battle-tested for a decade |

Common Hono Performance Fuckups

Middleware Performance Disasters

Middleware runs in order before your route handler. Fuck up the order, fuck up your performance. Learned this when health checks started taking 200ms because auth middleware was running on everything.

// This'll kill your performance
app.use('*', authenticateUser()) // Auth runs on health checks like an idiot
app.use('*', validateRequest()) // Validation on static files? Really?

// Do this instead
app.use('/api/*', authenticateUser()) // Auth only where you need it
app.get('/health', (c) => c.text('OK')) // Health checks should be fast AF

It took me 3 days to track that down. Don't be me.

Middleware mistakes that'll ruin your day:

  • Database queries in CORS middleware (why the fuck would you do this?)
  • JWT verification on public endpoints
  • Parsing request bodies on GET routes

Response Streaming or Die

Tried to return a 50MB CSV once. Lambda just died with an OOM error. Streaming saved my ass and probably my job.

app.get('/export', async (c) => {
  // This'll OOM your Lambda real quick
  // const data = await loadAllData()
  // return c.json(data) // 💀
  
  // Stream that shit instead
  return c.stream(async (stream) => {
    for await (const row of getDatabaseRows()) {
      await stream.write(JSON.stringify(row) + '\n')
    }
  })
})

Database Connections on Edge Are a Nightmare

Connection pooling doesn't work on edge because everything dies after each request. Found this out when production went down because I tried to use a traditional Postgres pool.

This doesn't work on edge (learned the hard way):

// Connection pools are a lie on edge
import { Pool } from 'pg'
const pool = new Pool({ max: 20 }) // This does nothing

app.get('/users', async (c) => {
  const client = await pool.connect() // Times out randomly
  const result = await client.query('SELECT * FROM users')
  client.release()
  return c.json(result.rows)
})

Use HTTP APIs instead:

// This actually works on edge
app.get('/users', async (c) => {
  const response = await fetch('https://db-api.example.com/users', {
    headers: { 'Authorization': `Bearer ${DB_TOKEN}` }
  })
  return c.json(await response.json())
})

PlanetScale, Supabase, and Turso use HTTP because TCP connections are fucked on edge. Just accept it and move on.

HTTP Caching (Don't Fuck This Up)

Caching user data globally is a great way to show Alice's profile to Bob. I've seen this happen in production and it's not pretty.

Cache this stuff all day:

// Static config data
app.get('/api/config', (c) => {
  c.header('Cache-Control', 'public, max-age=3600')
  return c.json({ version: '1.0', features: ['auth', 'api'] })
})

NEVER cache user data:

// Seriously, don't cache this
app.get('/api/profile', async (c) => {
  // c.header('Cache-Control', 'max-age=3600') // 💀 RIP privacy
  const user = await getCurrentUser(c)
  return c.json(user)
})

Performance Debugging at 3am

Timing middleware saved my ass when trying to figure out which endpoint was fucking up our response times:

app.use('*', async (c, next) => {
  const start = Date.now()
  await next()
  const ms = Date.now() - start
  
  // Only log the slow shit
  if (ms > 100) {
    console.log(`SLOW AS FUCK: ${c.req.method} ${c.req.path} - ${ms}ms`)
  }
})

Production Deployment (Where Everything Breaks)

Environment detection that actually works:

// This breaks on edge runtimes
if (process.env.NODE_ENV === 'production') { /* nope */ }

// This works everywhere
const isDev = typeof process !== 'undefined' && process.env?.NODE_ENV !== 'production'

Memory leaks that'll kill your function:

  • Global arrays that never stop growing (did this once, OOMed after 4 hours)
  • Event listeners you forgot to clean up
  • Database connections on Node.js that never close
  • Timers that keep running after requests end

// This'll OOM your function eventually
const requestLog = [] // Grows forever

app.use('*', (c, next) => {
  requestLog.push(c.req.path) // Memory leak in action
  return next()
})

If your function works fine for an hour then dies, you probably have a memory leak in global scope.

Performance FAQ

Q: Why does my Hono app keep OOMing?

A: Your app is probably loading some heavy shit globally or leaking memory like a sieve.

Most common fuckups:

  • Storing user data in global variables (why would you do this?)
  • Loading Prisma schema at startup instead of when needed
  • Trying to buffer 50MB responses instead of streaming
  • Importing all of AWS SDK v2 instead of just the parts you need

Fix: Lazy load heavy dependencies and stream large responses. Also, stop using global variables for user data.
Q: Should I migrate from Express to Hono?

A: If you're getting paged at 3am because of Lambda timeouts, yeah probably.

Migration makes sense when:

  • Lambda cold starts are killing your demos
  • 128MB functions keep OOMing
  • AWS bills are getting stupid expensive

I've seen cold starts drop from 4+ seconds to under 200ms, but migration isn't free. Plan 2-6 weeks because your middleware won't work and auth patterns are different. Worth it if serverless performance is actually fucking you.

Q: Why does my Cloudflare Worker randomly timeout?

A: Worker timeouts are the worst kind of random bullshit:

  1. CPU time limits - Free tier gives you 10ms, paid gets 50ms. Any heavy computation will blow past this.
  2. Subrequest limits - 50 HTTP calls (free) or 1000 (paid) per invocation. Hit an API-heavy route under load and you're done.

The Worker dashboard shows you died of a CPU timeout but doesn't tell you why. Use wrangler tail to see what actually broke.
Q: How do I know if migration is worth the pain?

A: Don't trust synthetic benchmarks - test your actual workload:

  1. Deploy both versions to identical infra
  2. Run realistic traffic patterns (not just hello world)
  3. Measure real user metrics, not req/sec numbers

Use Artillery, k6, or whatever load testing tool you trust.

Hono wins big for:

  • Microservices that don't need Express's 50,000 middleware packages
  • Serverless functions that cold start constantly
  • Edge deployments where every KB matters
  • APIs that just serve JSON without complex middleware chains
Q: Why is my "optimized" API still slow as shit?

A: Usually it's middleware doing stupid things:

  • JWT verification on health checks - why are you authenticating /health?
  • Logging huge request bodies - JSON.stringify() on 10MB payloads kills performance
  • Database connections per request - connection pooling doesn't work on edge, use HTTP APIs

Middleware order matters. Put cheap stuff (CORS) before expensive stuff (auth), and scope expensive middleware to routes that actually need it.

Q: Does edge deployment actually help or is it marketing bullshit?

A: Depends on what you're building.

Edge helps with:

  • Serving cached data globally - static responses are fast AF
  • Simple database reads - if your DB has edge replicas
  • Real-time apps - low latency actually matters

Edge sucks for:

  • Complex queries - your database is still in us-east-1
  • Heavy computation - CPU limits are brutal
  • Apps tied to specific regions - edge can't fix geography

I've seen read-heavy APIs go from 800ms to sub-100ms globally. Write-heavy apps see minimal improvement because database writes still go to one region.

Q: How do I figure out why my API is slow?

A: Timing middleware is your best friend for debugging performance:

app.use('*', async (c, next) => {
  const start = Date.now()
  await next()
  const ms = Date.now() - start
  if (ms > 100) console.log(`SLOW AS HELL: ${c.req.path} took ${ms}ms`)
})

Platform debugging:

  • Cloudflare Workers: dashboard shows you died but not why
  • AWS Lambda: CloudWatch has the details if you can find them

Time your database queries separately. If your DB query takes 2 seconds, optimizing middleware is pointless.

Q: What breaks when migrating from Express to Hono?

A: Middleware is where everything goes to shit:

  • Sessions don't work - edge runtimes are stateless
  • File operations fail - no filesystem on edge
  • Custom middleware breaks - different patterns, different APIs
  • Nested routers are different - Express Router vs Hono routing

Express uses Node.js APIs, Hono uses Web Standards. Everything from parsing requests to handling files works differently. Took me 6 weeks to migrate a non-trivial app, not the 2 weeks I planned.
