Getting Gleam Running and Building Your First HTTP Server

Install Gleam Without Getting Fucked by Dependencies

First things first - get Gleam and Erlang installed. Gleam runs on the BEAM VM (same as WhatsApp's backend), so you need Erlang first.

macOS (the easy way):

brew install gleam

Ubuntu/Debian (the annoying way):

sudo apt-get install erlang-nox
## Check latest release first, I'm using v1.3.2 but grab whatever's newest
## Official releases: https://github.com/gleam-lang/gleam/releases
## Installation guide: https://gleam.run/getting-started/installing/
wget https://github.com/gleam-lang/gleam/releases/download/v1.3.2/gleam-v1.3.2-x86_64-unknown-linux-musl.tar.gz
tar -xzf gleam-v1.3.2-x86_64-unknown-linux-musl.tar.gz
sudo mv gleam /usr/local/bin/

Make sure it works:

gleam --version
## Should show v1.3.2 or whatever version you downloaded

If you get "command not found", check your PATH. If you get weird Erlang errors like "ERTS not found" or "beam.smp: No such file", you probably need to install the full Erlang/OTP package, not just the runtime. Ubuntu's erlang-nox is the minimal one that actually works. WSL2 users: the Windows PATH fucks with everything - use export PATH="/usr/local/bin:$PATH" in your .bashrc. More troubleshooting tips here.

Create a New Project That Actually Works

gleam new todo_api
cd todo_api
gleam add mist wisp wisp_mist gleam_http gleam_json gleam_erlang

Latest Gleam with these packages works fine, so don't bother pinning versions while you're learning - but pin them once you're shipping. I once had our build break because of an automatic update to gleam_json; now I pin everything in production. Here's what you just installed:

  • gleam_http: HTTP types so you don't have to write them
  • mist: The HTTP server that doesn't suck
  • wisp: Web framework with middleware and cookies
  • wisp_mist: The adapter that runs Wisp handlers on Mist
  • gleam_json: JSON handling without wanting to die
  • gleam_erlang: Access to Erlang/OTP goodies

Check the Gleam package index for the latest versions and Hex docs for API documentation.
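
Pinning lives in gleam.toml and takes hex-style version requirements - == pins exactly. The version numbers below are illustrative; use whatever gleam add resolved for you:

[dependencies]
wisp = "== 1.2.0"
wisp_mist = "== 1.0.0"
mist = "== 2.0.0"
gleam_http = "== 3.7.0"
gleam_json = "== 2.0.0"
gleam_erlang = "== 0.27.0"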

Why BEAM Matters (And Why Your Node.js Backend is Slow)

Gleam runs on BEAM - the same virtual machine powering WhatsApp's 2 billion users. While you're manually managing connection pools in Express, BEAM gives you:

  • Lightweight processes - millions of them, each isolated with its own heap and garbage collection
  • Preemptive scheduling, so one slow request can't starve everything else
  • Fault tolerance - supervisors restart crashed processes instead of taking down the whole app
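
Cheap processes are the headline feature. A toy sketch using gleam_erlang's process module (process.start with a linked flag in the 0.x releases; newer versions rename it to process.spawn):

import gleam/erlang/process
import gleam/list

pub fn main() {
  // Spawn 10,000 concurrent processes - each costs a few KB of memory,
  // not an OS thread
  list.range(1, 10_000)
  |> list.each(fn(_) { process.start(fn() { process.sleep(1000) }, True) })
}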

The catch? Cold builds are slow as hell - the first gleam build compiles every dependency, so budget 30-60 seconds minimum, sometimes way longer if the build gods hate you that day. Incremental builds after that are quick. See performance tips for optimization strategies.

Build Your First HTTP Server

Replace src/todo_api.gleam with this:

import gleam/erlang/process
import gleam/http.{Get}
import gleam/string_tree
import mist
import wisp
import wisp_mist

pub fn main() {
  wisp.configure_logger()
  
  // Generate random secret - you'll need a real one in production
  let secret_key_base = wisp.random_string(64)
  
  let assert Ok(_) =
    wisp_mist.handler(handle_request, secret_key_base)
    |> mist.new
    |> mist.port(8000)
    |> mist.start_http
  
  process.sleep_forever()
}

fn handle_request(req: wisp.Request) -> wisp.Response {
  case wisp.path_segments(req) {
    [] ->
      wisp.ok()
      |> wisp.html_body(string_tree.from_string("<h1>Todo API is running</h1>"))
    ["api", "v1", "todos"] -> handle_todos(req)
    _ -> wisp.not_found()
  }
}

fn handle_todos(req: wisp.Request) -> wisp.Response {
  case req.method {
    Get -> {
      let json =
        "{\"todos\": [{\"id\": \"1\", \"title\": \"Learn Gleam\", \"completed\": false}]}"
      // json_body sets the content-type: application/json header for us
      wisp.ok()
      |> wisp.json_body(string_tree.from_string(json))
    }
    _ -> wisp.method_not_allowed([Get])
  }
}

Start it up:

gleam run

Visit http://localhost:8000 in your browser to see it working. Hit /api/v1/todos for your first JSON response. Check the Wisp documentation for more routing patterns and the Mist server guide for configuration options.
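
Or poke it from a terminal:

curl -i localhost:8000/
curl -i localhost:8000/api/v1/todos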

The Secret Key Gotcha That'll Ruin Your Weekend

Don't forget the secret_key_base in production. Took me way too long to figure out that sessions break when you restart without a persistent secret key. Users kept getting logged out. Oops. Read more about session management in Wisp and security best practices.

In production, load it from the environment (using envoy, which we add later for database config):

import envoy
import gleam/io
import gleam/string

let secret_key_base = case envoy.get("SECRET_KEY_BASE") {
  Ok(key) ->
    // Gleam case clauses don't support guards, so check the length separately
    case string.length(key) >= 64 {
      True -> key
      False -> panic as "SECRET_KEY_BASE too short, need 64+ chars"
    }
  Error(_) -> {
    io.println("WARNING: Using insecure random key, set SECRET_KEY_BASE")
    wisp.random_string(64)
  }
}

JSON Handling That Won't Make You Want to Quit

Create src/app/models/todo_item.gleam (todo is a reserved word in Gleam, so the module can't be named todo):

import gleam/dynamic
import gleam/json

pub type Todo {
  Todo(id: String, title: String, completed: Bool)
}

pub fn todo_to_json(item: Todo) -> json.Json {
  json.object([
    #("id", json.string(item.id)),
    #("title", json.string(item.title)),
    #("completed", json.bool(item.completed)),
  ])
}

pub fn create_todo_decoder() -> dynamic.Decoder(Todo) {
  dynamic.decode3(
    Todo,
    dynamic.field("id", dynamic.string),
    dynamic.field("title", dynamic.string),
    dynamic.field("completed", dynamic.bool),
  )
}
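
To actually run that decoder over a request body, gleam_json's decode (the classic dynamic-decoder API) takes the raw string plus the decoder. A sketch - parse_todo is our own name:

import app/models/todo_item.{type Todo}
import gleam/json

pub fn parse_todo(body: String) -> Result(Todo, json.DecodeError) {
  // Parses the string as JSON, then runs our decoder over the result
  json.decode(from: body, using: todo_item.create_todo_decoder())
}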

Testing Your API (And Why It'll Probably Break)

Test your API locally once it's running - you'll hit the usual issues like CORS errors from browsers and JSON decode failures when mobile clients send strings as numbers.

Common issues you'll hit:

  • CORS errors: "Access blocked by CORS policy" - browsers hate you by default
  • JSON decode failures: Mobile clients love sending strings as numbers
  • Connection refused: "ECONNREFUSED 127.0.0.1:8000" - server crashed, check the logs
  • 500 errors: "Internal Server Error" - something fucked up in your code

The decoder gives you garbage error messages like "decode error at $.id: expected int, found string". Mobile apps love sending {"id": "123"} when you expect {"id": 123}. You'll spend way too long figuring out string vs number bullshit until you add proper error handling. iOS stringifies everything, Android sometimes sends actual numbers, sometimes doesn't. It's a shitshow.
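
One way to cope while staying on the dynamic API: accept both shapes by trying the int decoder first and stringifying the result. A sketch - lenient_id is our own name:

import gleam/dynamic
import gleam/int
import gleam/result

// Decodes both {"id": "123"} and {"id": 123} to the same String value
fn lenient_id() -> dynamic.Decoder(String) {
  dynamic.any([
    dynamic.string,
    fn(value) { dynamic.int(value) |> result.map(int.to_string) },
  ])
}

Swap it in for dynamic.string on the id field of your decoder.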

Check out the JSON decoding guide for better error handling patterns and the mobile API best practices for dealing with inconsistent client behavior.

Adding PostgreSQL Because In-Memory Lists Suck


Get PostgreSQL Running

You need a real database. SQLite is fine for prototyping, but PostgreSQL won't randomly corrupt your data. See database comparison guide for other options.

gleam add pog envoy

pog is the PostgreSQL client that doesn't suck. envoy reads environment variables without making you want to quit. Check the database connectivity guide for connection patterns.

Spin Up PostgreSQL (Docker is Easiest)

## Start PostgreSQL in Docker - memorize this command
docker run --name todo_db \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=todo_api \
  -p 5432:5432 \
  -d postgres

If Docker gives you "port already in use", something else is using port 5432. Kill it with sudo lsof -ti:5432 | xargs kill -9 or use a different port like -p 5433:5432. On macOS, Postgres.app loves to squat on 5432. See Docker networking guide for port management tips.
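
Quick sanity check that the container is actually accepting connections:

docker exec todo_db pg_isready -U postgres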

Create your .env file:

DATABASE_URL=postgresql://postgres:password@localhost:5432/todo_api
SECRET_KEY_BASE=generate_a_real_64_character_secret_here
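
Don't literally ship that placeholder - generate a real secret (64 hex characters):

openssl rand -hex 32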

Database Connection Pool Hell

Create src/app/database.gleam:

import envoy
import pog
import gleam/result

pub type DatabaseError {
  ConfigError(String)
  ConnectionFailed(String)
}

pub fn get_db_config() -> Result(String, DatabaseError) {
  case envoy.get("DATABASE_URL") {
    Ok(url) -> Ok(url)
    Error(_) -> Error(ConfigError("DATABASE_URL not set. Check your .env file."))
  }
}

pub fn create_pool(pool_size: Int) -> Result(pog.Config, DatabaseError) {
  use db_url <- result.try(get_db_config())

  case pog.url_config(db_url) {
    Ok(config) ->
      Ok(config |> pog.pool_size(pool_size) |> pog.default_timeout(5000))
    Error(_) ->
      Error(ConfigError("Invalid DATABASE_URL format"))
  }
}

Connection pools work until they don't. Start with 10 connections or whatever seems reasonable. Bump it up when you start getting "no available connections" errors. BEAM apps eat 200MB base memory, plus another 50MB per 10 connections. Found this out when our monitoring started alerting at 4am. Read more about connection pool tuning and BEAM memory management.
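
Wiring the pool into your app looks something like this - a sketch assuming pog's connect, which turns a Config into a Connection (init_db is our own name):

import app/database
import gleam/result
import pog

pub fn init_db() -> Result(pog.Connection, database.DatabaseError) {
  // Start small and bump the size when "no available connections" shows up
  use config <- result.try(database.create_pool(10))
  Ok(pog.connect(config))
}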

Create the Database Schema

Create schema.sql:

CREATE TABLE todos (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  title TEXT NOT NULL,
  completed BOOLEAN DEFAULT FALSE,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Apply it:

## With Docker (mount the schema file or copy-paste)
docker exec -i todo_db psql -U postgres -d todo_api < schema.sql

## Local PostgreSQL
psql -d todo_api -f schema.sql

## If you get \"relation already exists\" - you ran it twice, ignore it

Don't add indexes yet. You don't know which queries will be slow until you have real data. Check PostgreSQL performance tips and the indexing guide when you have production data.
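
When production data does surface a slow query, the fix is one line - for example, if filtering by completion status turns out to be hot:

CREATE INDEX todos_completed_idx ON todos (completed);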

Wire Up Database Operations

Update your Todo model to work with the database. Replace src/app/models/todo_item.gleam:

import gleam/dynamic
import gleam/json

pub type Todo {
  Todo(id: String, title: String, completed: Bool, created_at: String)
}

pub fn todo_decoder() -> dynamic.Decoder(Todo) {
  // Database rows come back positionally, not as maps with named fields
  dynamic.decode4(
    Todo,
    dynamic.element(0, dynamic.string),
    dynamic.element(1, dynamic.string),
    dynamic.element(2, dynamic.bool),
    dynamic.element(3, dynamic.string),
  )
}

pub fn todo_to_json(item: Todo) -> json.Json {
  json.object([
    #("id", json.string(item.id)),
    #("title", json.string(item.title)),
    #("completed", json.bool(item.completed)),
    #("created_at", json.string(item.created_at)),
  ])
}

Basic Database Queries That Work

Create src/app/db.gleam for database operations:

import app/models/todo_item.{type Todo}
import pog

pub fn list_todos(db: pog.Connection) -> List(Todo) {
  // Cast the uuid and timestamp columns to text so the string decoder works
  let sql =
    "SELECT id::text, title, completed, created_at::text FROM todos ORDER BY created_at DESC"

  case pog.query(sql) |> pog.returning(todo_item.todo_decoder()) |> pog.execute(db) {
    Ok(response) -> response.rows
    // Returning [] swallows the error - fine for a demo, use a Result in real code
    Error(_) -> []
  }
}

pub fn create_todo(db: pog.Connection, title: String) -> Result(Todo, String) {
  let sql =
    "INSERT INTO todos (title) VALUES ($1) RETURNING id::text, title, completed, created_at::text"

  case
    pog.query(sql)
    |> pog.parameter(pog.text(title))
    |> pog.returning(todo_item.todo_decoder())
    |> pog.execute(db)
  {
    Ok(response) ->
      case response.rows {
        [new_todo] -> Ok(new_todo)
        _ -> Error("Failed to create todo")
      }
    Error(_) -> Error("Database error")
  }
}

Test It Out

Update your main handler to use the database:

import app/db
import app/models/todo_item
import gleam/json
import gleam/list

fn handle_todos(req: wisp.Request, conn: pog.Connection) -> wisp.Response {
  case req.method {
    Get -> {
      let body =
        db.list_todos(conn)
        |> list.map(todo_item.todo_to_json)
        |> json.preprocessed_array
        |> json.to_string_tree

      wisp.ok()
      |> wisp.json_body(body)
    }
    _ -> wisp.method_not_allowed([Get])
  }
}
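
That only covers GET. A sketch of the create side, using wisp's require_json plus the db.create_todo from above (handle_create_todo is our own name):

import app/db
import app/models/todo_item
import gleam/dynamic
import gleam/json

fn handle_create_todo(req: wisp.Request, conn: pog.Connection) -> wisp.Response {
  // Parses the request body as JSON, replying 400 on garbage for us
  use body <- wisp.require_json(req)

  case dynamic.field("title", dynamic.string)(body) {
    Ok(title) ->
      case db.create_todo(conn, title) {
        Ok(item) ->
          wisp.json_response(json.to_string_tree(todo_item.todo_to_json(item)), 201)
        Error(_) -> wisp.internal_server_error()
      }
    Error(_) -> wisp.bad_request()
  }
}

Wire it into handle_todos by matching Post in the case on req.method.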

Now you can:

## Start with database
DATABASE_URL=postgresql://postgres:password@localhost:5432/todo_api gleam run

## Create and list todos (POST works once you've wired in the Post branch above)
curl -X POST localhost:8000/api/v1/todos -H "Content-Type: application/json" -d '{"title": "Test"}'
curl localhost:8000/api/v1/todos

Common database errors you'll hit:

  • Connection refused: PostgreSQL isn't running
  • Authentication failed: Wrong username/password in DATABASE_URL
  • Relation does not exist: You forgot to run the schema
  • Connection pool exhausted: Too many concurrent requests

PostgreSQL logs will slowly eat your disk space. Set up log rotation or prepare for midnight panic when disk space hits zero. Learned this the hard way when our API died in the middle of the night because PostgreSQL filled up our disk - I think it was like 20GB of query logs. The error was just "ENOSPC: no space left on device" - super helpful.

Set up log rotation and monitoring early. Use pgTune for basic configuration optimization and PostgreSQL monitoring tools for production setups.

Common REST API Development Issues

Q

How do I handle database connection failures gracefully?

A

Database connections fail. A lot. Use Result types and expect disappointment:

import gleam/erlang/process
import gleam/io

pub fn with_db_retry(
  db_operation: fn() -> Result(a, DatabaseError),
) -> Result(a, DatabaseError) {
  case db_operation() {
    Ok(result) -> Ok(result)
    Error(ConnectionFailed(_)) -> {
      // Log it, wait briefly, and retry once
      io.println("Database operation failed, retrying...")
      process.sleep(100)
      db_operation()
    }
    Error(other) -> Error(other)
  }
}

Pog handles most connection recovery automatically. Log the failures and move on with your life.

Q

Why do I get "process crashed" errors when handling JSON requests?

A

Your JSON decoder expects one thing, mobile app sends another. Classic:

// Wrong - will crash on invalid JSON structure
let assert Ok(item) = create_todo_decoder()(dynamic.from(json))

// Right - handle decode errors gracefully
case create_todo_decoder()(dynamic.from(json)) {
  Ok(item) -> handle_valid_todo(item)
  Error(decode_errors) -> {
    let error_msg = "Invalid JSON: " <> string.inspect(decode_errors)
    wisp.bad_request() |> wisp.json_body(error_response(error_msg))
  }
}

Never use let assert in production. Pattern match on the Result or your app will crash when mobile clients inevitably send garbage JSON. Learned this when our API started crashing every time someone used an old version of our mobile app.

Q

How do I implement proper authentication for my API?

A

No built-in auth. Roll your own JWT or use sessions:

import gleam/http/request
import gleam/result
import gleam/string

// `jwt` here stands in for whichever JWT package you pick (e.g. gwt on Hex)
pub fn authenticate_request(req: Request) -> Result(User, AuthError) {
  use auth_header <- result.try(
    request.get_header(req, "authorization")
    |> result.replace_error(MissingAuthHeader),
  )

  use token <- result.try(case string.starts_with(auth_header, "Bearer ") {
    True -> Ok(string.drop_left(auth_header, 7))
    False -> Error(InvalidAuthFormat)
  })

  jwt.verify(token, your_jwt_secret)
  |> result.try(decode_user_from_claims)
  |> result.replace_error(InvalidToken)
}

For production, use OAuth 2.0 or something that's been security-audited. Don't get creative with auth.

Q

What's the best way to handle file uploads in Gleam APIs?

A

Wisp parses multipart form data for you and streams each upload to a temp file on disk. Handle uploads with proper size limits and validation (simplifile - gleam add simplifile - stats the temp file here):

import gleam/list
import simplifile

pub fn handle_file_upload(req: Request) -> Response {
  // formdata.files is a list of #(field_name, UploadedFile) pairs;
  // each UploadedFile holds a file_name and a temp file path
  use formdata <- wisp.require_form(req)

  case list.key_find(formdata.files, "file") {
    Ok(file) -> {
      // Validate file size (e.g., max 10MB) before touching the contents
      case simplifile.file_info(file.path) {
        Ok(info) ->
          case info.size > 10_000_000 {
            True -> wisp.entity_too_large()
            False -> process_uploaded_file(file.file_name, file.path)
          }
        Error(_) -> wisp.internal_server_error()
      }
    }
    Error(_) -> wisp.bad_request() |> wisp.json_body(error_json("No file provided"))
  }
}

Validate file types or users will upload executables and pwn your server. Store files outside your web root. I've seen someone upload a PHP shell disguised as a JPEG - don't be that guy who gets owned.

Q

How can I add request rate limiting to prevent abuse?

A

Implement a simple in-memory rate limiter using ETS tables or Redis for distributed rate limiting:

import gleam/int

// get_client_ip, get_current_minute, get_request_count, and
// increment_request_count are your own helpers, backed by ETS or Redis
pub fn rate_limit_middleware(
  req: Request,
  handle_request: fn(Request) -> Response,
  limit_per_minute: Int,
) -> Response {
  let client_ip = get_client_ip(req)
  let current_minute = get_current_minute()
  let key = client_ip <> ":" <> int.to_string(current_minute)

  // Gleam case clauses have no guards, so compare before matching
  case get_request_count(key) >= limit_per_minute {
    True ->
      wisp.response(429)
      |> wisp.set_header("retry-after", "60")
      |> wisp.json_body(error_json("Rate limit exceeded"))
    False -> {
      increment_request_count(key)
      handle_request(req)
    }
  }
}

For production, use Redis. In-memory rate limiting goes to shit the moment you have multiple servers - our limits went to hell during a traffic spike, and we spent 30 minutes wondering why users were getting blocked randomly before realizing the counters weren't shared between servers.

Q

Why are my API responses slow compared to other frameworks?

A

BEAM/Gleam isn't optimized for raw throughput but for consistent latency under load. Profile your specific bottlenecks:

  1. Database queries: Use indexes, connection pooling, and query optimization
  2. JSON serialization: Large objects can be expensive to serialize
  3. Memory allocation: Avoid creating many large temporary objects
  4. External API calls: Use connection pooling and timeouts

Use Observer to find actual bottlenecks. BEAM isn't fast per request, but it handles thousands of concurrent connections without dying. We handle tens of thousands of WebSocket connections - try that with Express.
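
Observer ships with Erlang/OTP. One way to attach it, assuming you started the app as a named node (node names here are illustrative):

## Attach a remote shell to the running node, then launch the Observer GUI
erl -sname debug -remsh todo_api@localhost
1> observer:start().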

Q

How do I implement proper API versioning?

A

URL-based versioning is the most straightforward approach in Gleam:

pub fn handle_request(req: Request) -> Response {
  case wisp.path_segments(req) {
    ["api", "v1", ..rest] -> handle_v1_routes(req, rest)
    ["api", "v2", ..rest] -> handle_v2_routes(req, rest)
    _ -> wisp.not_found()
  }
}

fn handle_v1_routes(req: Request, segments: List(String)) -> Response {
  case segments {
    ["todos"] -> legacy_todo_handler(req)
    _ -> wisp.not_found()
  }
}

fn handle_v2_routes(req: Request, segments: List(String)) -> Response {
  case segments {
    ["todos"] -> new_todo_handler_with_pagination(req)
    _ -> wisp.not_found()
  }
}

Support the old version for at least a year. Document deprecation dates or clients will never migrate.

Q

How do I handle CORS properly for browser clients?

A

Implement CORS middleware that handles preflight requests and sets appropriate headers:

pub fn cors_middleware(
  req: Request,
  handle_request: fn(Request) -> Response,
) -> Response {
  // Handle preflight OPTIONS requests
  case req.method {
    Options -> {
      wisp.ok()
      |> add_cors_headers
    }
    _ -> {
      handle_request(req)
      |> add_cors_headers
    }
  }
}

fn add_cors_headers(response: Response) -> Response {
  response
  |> wisp.set_header("access-control-allow-origin", "*")
  |> wisp.set_header("access-control-allow-methods", "GET, POST, PUT, DELETE, OPTIONS")
  |> wisp.set_header("access-control-allow-headers", "content-type, authorization")
  |> wisp.set_header("access-control-max-age", "86400")
}

For production, replace * with actual domains. CORS isn't security theater, wildcards defeat the purpose. I spent 4 hours one afternoon debugging why our admin panel couldn't call the API until I realized Chrome was blocking it because we had * in production. Such a stupid mistake.

Q

What's the best way to structure validation for complex API inputs?

A

Create custom validation functions that compose well and provide clear error messages:

pub type ValidationError {
  Required(String)
  InvalidFormat(String)
  TooLong(String, Int)
  TooShort(String, Int)
}

pub fn validate_create_todo(data: CreateTodo) -> Result(CreateTodo, List(ValidationError)) {
  let title_result = validate_title(data.title)
  
  case title_result {
    Ok(_) -> Ok(data)
    Error(errors) -> Error(errors)
  }
}

fn validate_title(title: String) -> Result(String, List(ValidationError)) {
  // Each validate_* helper appends an error to the list when its check fails
  let errors =
    []
    |> validate_required("title", title)
    |> validate_max_length("title", title, 255)
    |> validate_min_length("title", title, 1)

  case errors {
    [] -> Ok(title)
    _ -> Error(errors)
  }
}

Composable validation that actually tells users which field is fucked up. Revolutionary.
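
The validate_* helpers are yours to write. A minimal sketch of one, matching the error-accumulating shape above:

fn validate_required(
  errors: List(ValidationError),
  field: String,
  value: String,
) -> List(ValidationError) {
  // Prepend an error when the field is empty, otherwise pass the list through
  case value {
    "" -> [Required(field), ..errors]
    _ -> errors
  }
}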

Q

How do I add logging and monitoring to my Gleam API?

A

Use structured JSON logging so when shit breaks, at least you can parse the logs properly:

import gleam/http
import gleam/io
import gleam/json

// get_iso_timestamp and get_user_agent are your own helpers
pub fn log_api_request(
  req: Request,
  response_time_ms: Int,
  status_code: Int,
) -> Nil {
  json.object([
    #("timestamp", json.string(get_iso_timestamp())),
    #("method", json.string(http.method_to_string(req.method))),
    #("path", json.string(req.path)),
    #("status_code", json.int(status_code)),
    #("response_time_ms", json.int(response_time_ms)),
    #("user_agent", json.string(get_user_agent(req))),
  ])
  |> json.to_string
  |> io.println
}

For metrics, add a /metrics endpoint. Prometheus format or whatever your monitoring tool expects.

Gleam Web Framework and Server Options

| Feature | Wisp | Mist | Lustre | Cowboy Adapter |
| --- | --- | --- | --- | --- |
| Type | Web framework | HTTP server | Frontend framework | Erlang server adapter |
| Primary Use | REST APIs, web services | Low-level HTTP handling | Browser applications | High-performance HTTP |
| Learning Curve | Easy - simple middleware | Medium - lower level | Medium - Elm-like patterns | Hard - Erlang knowledge needed |
| Middleware Support | ✅ Built-in middleware system | ❌ Manual implementation | ❌ Frontend focus | ✅ Via Erlang ecosystem |
| JSON Handling | ✅ Built-in helpers | ⚠️ Manual implementation | ✅ For frontend data | ⚠️ Manual implementation |
| Static File Serving | ✅ Built-in middleware | ❌ Manual implementation | ✅ Asset compilation | ✅ Via cowboy_static |
| WebSocket Support | ❌ Not built-in | ✅ Full WebSocket API | ✅ Frontend WebSockets | ✅ Full WebSocket support |
| Request Routing | ✅ Pattern matching helpers | ❌ Manual implementation | ❌ Frontend routing | ❌ Manual implementation |
| Session Management | ✅ Signed cookies | ❌ Manual implementation | ❌ Frontend only | ⚠️ Via Erlang libraries |
| CORS Support | ⚠️ Manual middleware | ❌ Manual headers (pain) | ❌ Browser handles | ❌ Documentation lies about this |
| Production Maturity | ✅ Production ready | ✅ Battle tested | ⚠️ Still evolving | ✅ Very mature |
| Performance | Good for APIs | Excellent | N/A (frontend) | Excellent |
| Community Size | Growing | Small but active | Small | Large (Erlang) |
