Docker is your best bet for Bun in production. Serverless works too, but Docker gives you control and Bun's startup speed means containers boot fast anyway.
Multi-Stage Build That Actually Works
Here's a Dockerfile that I've used in production without major disasters. The key is Bun's `bun build --compile` feature, which bundles your code and the Bun runtime into a single executable:
## Build stage - this gets thrown away
FROM oven/bun:1.2.21-alpine AS builder
WORKDIR /app
## Copy lockfiles first for better Docker layer caching
COPY bun.lockb package.json ./
RUN bun install --frozen-lockfile
## Copy source and build standalone binary
COPY . .
RUN bun build --compile --minify ./src/index.ts --outfile server
## Production stage - tiny final image
FROM gcr.io/distroless/cc-debian12:nonroot
WORKDIR /app
COPY --from=builder /app/server ./
EXPOSE 3000
ENTRYPOINT ["./server"]
This gives you a ~50MB final image vs ~200MB+ if you just throw the full Bun runtime in there. Distroless images are nice because there's no shell or package manager for attackers to abuse.
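For reference, the `src/index.ts` that the Dockerfile compiles doesn't need to be anything fancy. Here's a minimal sketch (not my actual app) that just answers on the port the image `EXPOSE`s:

```typescript
// src/index.ts - minimal Bun HTTP server; listens on PORT or falls back to 3000
const server = Bun.serve({
  port: Number(process.env.PORT ?? 3000),
  fetch(req) {
    const url = new URL(req.url);
    if (url.pathname === "/") return new Response("ok");
    return new Response("not found", { status: 404 });
  },
});

console.log(`listening on :${server.port}`);
```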
Real-world gotcha: If you're using native modules (like database drivers), the compile step might fail. In that case, skip the compilation and use a normal runtime image:
FROM oven/bun:1.2.21-alpine
WORKDIR /app
## Same layer-caching trick: lockfile and manifest first, then source
COPY bun.lockb package.json ./
RUN bun install --production
COPY . .
CMD ["bun", "run", "src/index.ts"]
Performance Reality Check
Containers start noticeably faster - usually 2-3x quicker than Node.js in my testing, sometimes more depending on how much work your app does at startup. This actually matters for auto-scaling where containers spin up and down frequently.
Memory usage is generally better but YMMV. My API server uses somewhere around 30-50MB with Bun vs 50-80MB with Node.js for the same workload - depends on what your app is doing. JavaScriptCore's garbage collector seems less aggressive but I've still seen memory creep in long-running containers.
But: Don't expect miracles. If your app is I/O bound (database calls, external APIs), the runtime won't matter much. The speed benefits show up in CPU-intensive work and startup time.
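One thing I do recommend regardless: log memory usage periodically so you can see creep before the container gets OOM-killed. A minimal sketch - the 30-second interval is arbitrary, and Bun supports Node's `process.memoryUsage()`:

```typescript
// Log RSS and heap usage every 30 seconds so your log aggregator can graph it.
const MB = 1024 * 1024;

setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(
    JSON.stringify({
      msg: "memory",
      rss_mb: Math.round(rss / MB),
      heap_used_mb: Math.round(heapUsed / MB),
    }),
  );
}, 30_000);
```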
Security Hardening (Don't Skip This)
Production containers need basic security:
- Run as non-root: The distroless image does this automatically
- No shell access: Can't `docker exec` into a distroless container (feature, not bug)
- Resource limits: Always set memory/CPU limits in your orchestrator
- Secrets via environment variables: Never embed API keys in the image
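On that last point about secrets: fail fast at startup if a required variable is missing, instead of finding out on the first request. A small sketch - the variable names (`DATABASE_URL`, `API_KEY`) are just placeholders:

```typescript
// Check required secrets at startup and crash loudly if any are missing.
const required = ["DATABASE_URL", "API_KEY"]; // placeholders - use your real keys

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(", ")}`);
  process.exit(1);
}
```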
## Kubernetes example with resource limits
resources:
  requests:
    memory: "64Mi"
    cpu: "50m"
  limits:
    memory: "128Mi"
    cpu: "200m"
The compiled binary approach is nice because you don't have a `node_modules` directory with 500 packages that might have vulnerabilities. Everything's baked into one executable.
Things That Actually Break
After running Bun in Docker for 8+ months:
- File watching doesn't work in containers - don't use `--hot` in production (duh)
- Alpine vs Debian base images - Alpine is smaller but some native modules break
- Platform architecture issues - Building on M1 Mac for x86 servers needs `--platform linux/amd64`
- **Memory leaks in long-running containers** - I've had to restart containers periodically, though recent versions seem more stable
Pro tip: Always test your exact Docker build on the target platform. I've been burned by builds that work locally but fail in production because of architecture differences.
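One way to make that routine is a tiny smoke test you point at the built container (locally or in CI) before shipping. A sketch - it assumes the container is already running and mapped to port 3000, or that you set a `SMOKE_URL` variable:

```typescript
// smoke-test.ts - hit the running container and fail if it doesn't answer.
// Run with: bun run smoke-test.ts
const base = process.env.SMOKE_URL ?? "http://localhost:3000";

const res = await fetch(base + "/");
if (!res.ok) {
  console.error(`Smoke test failed: ${res.status} ${res.statusText}`);
  process.exit(1);
}
console.log(`Smoke test passed against ${base}`);
```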
More shit that will break in production:
- Environment variables get loaded differently than in Node.js (Bun auto-loads `.env` files, for one) - caused a 30-minute outage when our config wasn't read properly
- Some process monitoring tools don't recognize Bun processes correctly
- Log aggregation can get confused by Bun's different process title
- Auto-restart scripts written for Node.js might not handle Bun's exit codes the same way
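Most of the restart-script weirdness is easier to live with if the app exits deliberately instead of getting killed. Here's a sketch of explicit signal handling with `Bun.serve` - stop taking new connections on SIGTERM, then exit 0 so whatever supervises the container sees a clean shutdown:

```typescript
// Shut down cleanly on SIGTERM/SIGINT so orchestrators and restart scripts
// see an intentional exit code instead of a killed process.
const server = Bun.serve({
  port: Number(process.env.PORT ?? 3000),
  fetch() {
    return new Response("ok");
  },
});

async function shutdown(signal: string) {
  console.log(`received ${signal}, draining connections`);
  await server.stop(); // stop accepting new connections; in-flight requests finish
  process.exit(0);     // explicit exit code for whatever supervises the container
}

process.on("SIGTERM", () => void shutdown("SIGTERM"));
process.on("SIGINT", () => void shutdown("SIGINT"));
```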