If you're coming from any other ecosystem, Docker is probably your comfort zone. Good news: Gleam in Docker is straightforward and gives you the deployment experience you're used to.
Basic Dockerfile That Actually Works
Skip the Alpine Linux approach - it breaks Erlang crypto in weird ways. Use Debian slim instead:
FROM erlang:27-slim
# Install Gleam from official releases
RUN wget https://github.com/gleam-lang/gleam/releases/download/v1.12.0/gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz \
&& tar -xzf gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz \
&& mv gleam /usr/local/bin/ \
&& rm gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz
WORKDIR /app
COPY . .
RUN gleam deps download
RUN gleam build
EXPOSE 8000
CMD ["gleam", "run"]
Reality check: This Dockerfile works but rebuilds like ass. Because COPY . . runs before gleam deps download, touching a single source file busts the layer cache and re-fetches every dependency. For local dev, just mount your source:
docker run -v $(pwd):/app -p 8000:8000 your-gleam-app
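Either way, a .dockerignore keeps the build context small so COPY . . invalidates the cache less often and your images don't pick up local junk. A minimal sketch (entries are suggestions, adjust to your repo):

```
# Local build output and VCS metadata have no business in the image
build/
.git/
*.log
.env
```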
Multi-Stage Build for Production
Single-stage builds ship your entire development environment to production. A multi-stage build separates the build environment from the runtime environment, so only the compiled artefacts land in the final image:
# Build stage
FROM erlang:27-slim AS builder
RUN wget https://github.com/gleam-lang/gleam/releases/download/v1.12.0/gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz \
&& tar -xzf gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz \
&& mv gleam /usr/local/bin/
WORKDIR /app
COPY gleam.toml manifest.toml ./
RUN gleam deps download
COPY . .
RUN gleam build
# Runtime stage
FROM erlang:27-slim
WORKDIR /app
COPY --from=builder /app/build /app/build
COPY --from=builder /app/_gleam_artefacts /app/_gleam_artefacts
EXPOSE 8000
# Shell-form CMD so the wildcard actually expands (exec-form gets no shell).
# your_app is a placeholder for your package name from gleam.toml.
CMD erl -pa _gleam_artefacts/dev/lib/*/ebin -noshell -eval "gleam@@main:run(your_app)"
Watch the fuck out: The _gleam_artefacts path changes between versions. I learned this when v1.11.0 changed the build structure and our CI shit the bed for 2 hours. Pin your Gleam version and check what gleam build actually outputs before you push to prod.
Web Apps With Wisp
Most Gleam web apps use Wisp for HTTP handling. Wisp's architecture is based on middleware composition, similar to Express.js or Ring. Here's a basic setup that handles static files and routing:
import gleam/erlang/process
import gleam/string_tree
import mist
import wisp.{type Request, type Response}
import wisp_mist

pub fn main() {
  wisp.configure_logger()
  // Signing key for cookies and the like. Generate once and pass it
  // in via an environment variable in production.
  let secret_key_base = wisp.random_string(64)

  let assert Ok(_) =
    wisp_mist.handler(handle_request, secret_key_base)
    |> mist.new
    |> mist.port(8000)
    |> mist.start_http

  process.sleep_forever()
}

fn handle_request(req: Request) -> Response {
  use <- wisp.log_request(req)
  use <- wisp.serve_static(req, under: "/static", from: "./priv/static")

  case wisp.path_segments(req) {
    [] ->
      wisp.html_response(string_tree.from_string("<h1>Hello production!</h1>"), 200)
    ["health"] ->
      wisp.json_response(string_tree.from_string("{\"status\":\"ok\"}"), 200)
    _ -> wisp.not_found()
  }
}
Production Gotchas:
- Always include a /health endpoint for load balancer health checks
- Serve static files through a reverse proxy (nginx or Caddy) in production, not Wisp
- Wisp logs to stdout by default, which works great with Docker logging drivers
- Configure proper CORS headers for browser-based API access
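On the CORS bullet: as far as I know Wisp doesn't ship a CORS middleware (the cors_builder package exists if you want something thorough), but a hand-rolled one is only a few lines. A minimal sketch — with_cors is a hypothetical name and the wildcard origin is for illustration only:

```gleam
import gleam/http
import wisp.{type Request, type Response}

// Hypothetical middleware in the usual `use <- ...` style: answers OPTIONS
// preflights directly and stamps CORS headers on every response.
pub fn with_cors(req: Request, handler: fn() -> Response) -> Response {
  let response = case req.method {
    http.Options -> wisp.ok()
    _ -> handler()
  }
  response
  |> wisp.set_header("access-control-allow-origin", "*")
  |> wisp.set_header("access-control-allow-methods", "GET, POST, OPTIONS")
  |> wisp.set_header("access-control-allow-headers", "content-type")
}
```

Drop it near the top of handle_request with use <- with_cors(req), and swap the wildcard for your real origin before shipping.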
Environment Variables and Config
Don't hardcode configuration values. Use envoy for twelve-factor app environment variable handling:
import envoy
import gleam/int
import gleam/result

pub type Config {
  Config(port: Int, database_url: String)
}

pub fn get_config() -> Config {
  let port = envoy.get("PORT") |> result.try(int.parse) |> result.unwrap(8000)
  let db_url = envoy.get("DATABASE_URL") |> result.unwrap("sqlite:db.sqlite3")
  Config(port: port, database_url: db_url)
}
Docker Compose for Local Development:
version: '3.8'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - PORT=8000
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
Deployment Platforms That Just Work
Fly.io (Recommended): Has first-class BEAM support. Its private networking and built-in DNS service discovery give BEAM clustering libraries exactly what they need, so clustering is close to automatic.
flyctl auth login
flyctl launch
flyctl deploy
Creates a fly.toml that usually works out of the box. Fly's health checks and rolling deployments play nicely with BEAM apps.
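For reference, a hand-written fly.toml for the app above might look like this. The app name and region are placeholders; compare against what flyctl launch generates and the current Fly docs:

```toml
app = "your-gleam-app"      # placeholder
primary_region = "iad"       # pick your own region

[http_service]
  internal_port = 8000
  force_https = true

  # Hit the /health endpoint from the Wisp example
  [[http_service.checks]]
    method = "GET"
    path = "/health"
    interval = "15s"
    timeout = "2s"
    grace_period = "10s"
```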
Railway: Works but treats your app like any other container. No special BEAM features like automatic clustering or hot deployments.
Render: Same as Railway. Works fine but you lose BEAM-specific operational benefits.
Don't Use: Heroku (expensive and they don't understand BEAM's process model), Vercel (serverless doesn't make sense for stateful BEAM applications), AWS Lambda (you lose all the concurrency benefits of BEAM's actor model).