Gleam Production Deployment - AI-Optimized Reference
Configuration
Docker Production Setup
Base Image Requirements:
- Use erlang:27-slim (Debian-based). Never use Alpine Linux - its musl libc commonly breaks Erlang's crypto, causing cryptographic failures in production.
- Debian slim images weigh 200-400MB, versus 30-50MB for optimized BEAM releases.
Working Production Dockerfile:
FROM erlang:27-slim AS builder
# wget and CA certificates are not guaranteed to be present in slim images
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates \
 && rm -rf /var/lib/apt/lists/*
# Install a pinned Gleam release (the musl build is statically linked and runs on Debian)
RUN wget https://github.com/gleam-lang/gleam/releases/download/v1.12.0/gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz \
 && tar -xzf gleam-v1.12.0-x86_64-unknown-linux-musl.tar.gz \
 && mv gleam /usr/local/bin/
WORKDIR /app
COPY gleam.toml manifest.toml ./
RUN gleam deps download
COPY . .
# Produce a self-contained Erlang shipment with a generated entrypoint script
RUN gleam export erlang-shipment

FROM erlang:27-slim
WORKDIR /app
COPY --from=builder /app/build/erlang-shipment /app
EXPOSE 8000
CMD ["/app/entrypoint.sh", "run"]
Critical Warning: the layout of Gleam's build output (including the _gleam_artefacts directories) changes between Gleam versions. Using gleam export erlang-shipment shields you from most of this, but still pin the Gleam version and verify the exported shipment starts locally before deploying to production.
Web Application Configuration
Wisp Framework Setup:
import gleam/erlang/process
import gleam/string_tree
import mist
import wisp.{type Request, type Response}
import wisp_mist

pub fn main() {
  wisp.configure_logger()
  let secret_key_base = wisp.random_string(64)
  // CRITICAL: bind to "0.0.0.0", not "localhost", or the server is
  // unreachable from outside the container.
  // Note: older Wisp versions expose this adapter as wisp.mist_handler.
  let assert Ok(_) =
    wisp_mist.handler(handle_request, secret_key_base)
    |> mist.new
    |> mist.bind("0.0.0.0")
    |> mist.port(8000)
    |> mist.start_http
  process.sleep_forever()
}

fn handle_request(req: Request) -> Response {
  use <- wisp.log_request(req)
  use <- wisp.serve_static(req, under: "/static", from: "./priv/static")
  case wisp.path_segments(req) {
    [] -> wisp.ok() |> wisp.html_body(string_tree.from_string("<h1>Hello production!</h1>"))
    ["health"] -> wisp.ok() |> wisp.json_body(string_tree.from_string("{\"status\":\"ok\"}"))
    _ -> wisp.not_found()
  }
}
Production Requirements:
- Always include a /health endpoint for load balancer health checks
- Serve static files through a reverse proxy (nginx/Caddy), not Wisp
- Configure proper CORS headers for browser API access (see the sketch after this list)
- Wisp logs to stdout by default - compatible with Docker logging drivers
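A minimal CORS middleware sketch, assuming the wisp Request/Response types and gleam/http/response.set_header; the origin and header values are placeholders to adapt:
import gleam/http/response
import wisp.{type Request, type Response}

// Wrap a handler and attach CORS headers to every response.
fn with_cors(req: Request, handler: fn(Request) -> Response) -> Response {
  handler(req)
  |> response.set_header("access-control-allow-origin", "https://app.example.com")
  |> response.set_header("access-control-allow-headers", "content-type, authorization")
}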
Environment Configuration
Using Envoy for Environment Variables:
import envoy
import gleam/int
import gleam/result

pub type Config {
  Config(port: Int, database_url: String)
}

pub fn get_config() -> Config {
  let port = envoy.get("PORT") |> result.unwrap("8000") |> int.parse() |> result.unwrap(8000)
  let db_url = envoy.get("DATABASE_URL") |> result.unwrap("sqlite:db.sqlite3")
  Config(port: port, database_url: db_url)
}
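A usage sketch wiring this config into server startup, reusing handle_request and the wisp_mist/mist names from the Wisp example above:
pub fn main() {
  let config = get_config()
  let assert Ok(_) =
    wisp_mist.handler(handle_request, wisp.random_string(64))
    |> mist.new
    |> mist.bind("0.0.0.0")
    |> mist.port(config.port)
    |> mist.start_http
  process.sleep_forever()
}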
Docker Compose Development Setup:
version: '3.8'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - PORT=8000
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
Deployment Platform Comparison
Recommended: Fly.io
- First-class BEAM support with automatic clustering
- Built-in service discovery for BEAM nodes
- Understands BEAM health checks and rolling deployments
- Commands:
flyctl auth login && flyctl launch && flyctl deploy
Acceptable: Railway/Render
- Treats Gleam as generic container
- No BEAM-specific features (clustering, hot deployments)
- Works but loses operational benefits
Avoid: Heroku, Vercel, AWS Lambda
- Heroku: Expensive, doesn't understand BEAM process model
- Vercel/Lambda: Serverless incompatible with stateful BEAM applications
- Loses all concurrency benefits of BEAM's actor model
BEAM Releases vs Docker
Docker Benefits:
- Familiar deployment model
- Works with existing CI/CD pipelines
- Easy local development parity
BEAM Release Benefits:
- Hot code updates without downtime
- Built-in process supervision and monitoring
- Automatic clustering support
- Smaller footprint (30-50MB release vs 200-400MB Docker image)
- Better startup times
Critical Warnings
Common Deployment Failures
Docker Build Failures:
- Problem: "gleam: command not found" - most base images don't include Gleam
- Solution: Install Gleam manually in Dockerfile
Application Crashes:
- Problem: App starts but crashes immediately with "no match of right hand side"
- Root Cause: Environment variables not set OR binding to localhost instead of 0.0.0.0
- Solution: Always bind to 0.0.0.0 for container networking (see the sketch below)
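A small sketch using the envoy package; the BIND variable name is just an example, with 0.0.0.0 as the container-friendly default:
import envoy
import gleam/result

// Default to 0.0.0.0 so the server is reachable through Docker's port mapping;
// override with BIND=127.0.0.1 for local-only runs.
fn bind_interface() -> String {
  envoy.get("BIND") |> result.unwrap("0.0.0.0")
}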
Port Conflicts:
- Problem: "Port already in use" during development
- Root Cause: Zombie Docker containers squatting on ports
- Nuclear Solution:
docker-compose down && docker system prune -a
Database Connection Failures:
- Problem: Connections work locally but fail in containers
- Root Cause: Container networking or SSL requirements
- Solution: Use container service names, enable SSL for cloud databases
// Use container service names
let db_url = "postgres://user:pass@postgres:5432/myapp" // docker-compose
let db_url = "postgres://user:pass@host:5432/myapp?sslmode=require" // cloud
Memory Issues:
- Problem: Memory usage grows until container killed
- Root Cause: BEAM GC settings not tuned for container limits
- Solution: Cap BEAM memory via erts_alloc settings in vm.args. One option is a super carrier sized to roughly 80% of the container limit:
+MMscs 410 # super carrier size in MB (example for a 512MB container)
Verify the exact erts_alloc flags against the Erlang documentation for your OTP version - allocator options change between releases.
Production Gotchas
Hot Reloading in Docker:
- File watching broken through Docker volumes on macOS/Windows
- Use Docker only for production testing, run locally for development
File Path Issues:
- Absolute vs relative paths break between environments
- Use the priv/ directory for static assets and build it into the Docker image (see the sketch below)
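A minimal sketch, assuming wisp.priv_directory and an application named my_app in gleam.toml, that resolves static assets from priv/ at runtime instead of relying on a relative path:
import wisp.{type Request, type Response}

// Resolve the application's priv/ directory at runtime; "my_app" must match
// the name field in gleam.toml.
fn serve_assets(req: Request, handler: fn() -> Response) -> Response {
  let assert Ok(priv) = wisp.priv_directory("my_app")
  use <- wisp.serve_static(req, under: "/static", from: priv <> "/static")
  handler()
}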
Release Build Conflicts:
- Mix releases and Gleam dependency resolution sometimes conflict
- Explicitly list Gleam dependencies in mix.exs
BEAM Releases (Advanced Deployment)
OTP Release Configuration
What Are Releases:
- Self-contained bundle with application, dependencies, and minimal Erlang runtime
- Enables hot code updates, OTP supervision, distributed clustering
- Used by WhatsApp, Discord for production scaling
Mix Release Setup:
# mix.exs - typically requires the mix_gleam archive so Mix can compile Gleam code
defmodule MyGleamApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_gleam_app,
      version: "0.1.0",
      language: :gleam,
      deps: deps(),
      releases: [
        my_gleam_app: [
          include_executables_for: [:unix],
          applications: [runtime_tools: :permanent]
        ]
      ]
    ]
  end

  defp deps do
    []
  end
end
Build Commands:
gleam build # Build Gleam app first
mix release # Create release
_build/dev/rel/my_gleam_app/bin/my_gleam_app start # Run release
Hot Code Updates
Development Hot Reload:
# Terminal 1: Start release
_build/dev/rel/my_gleam_app/bin/my_gleam_app start
# Terminal 2: Make changes and hot-reload
gleam build
_build/dev/rel/my_gleam_app/bin/my_gleam_app rpc ":code.soft_purge(:my_gleam_module)" # rpc evaluates Elixir expressions
_build/dev/rel/my_gleam_app/bin/my_gleam_app rpc ":code.load_file(:my_gleam_module)"
Production Hot Updates:
Stock mix release does not generate hot-upgrade (appup/relup) artifacts. For true zero-downtime hot upgrades, build upgrade releases with a tool such as Distillery, then install them with the release script's upgrade command, e.g.:
_build/prod/rel/my_gleam_app/bin/my_gleam_app upgrade "0.2.0" # zero downtime, requires relup files
Critical Limitations:
- Hot upgrades require careful state management
- Not possible for database schema changes or major refactoring
- Start with rolling deployments, add hot upgrades when needed
Clustering Configuration
Node Discovery:
# Start nodes
PORT=8000 RELEASE_DISTRIBUTION=name RELEASE_NODE=app1@192.168.1.100 _build/prod/rel/my_gleam_app/bin/my_gleam_app start
PORT=8001 RELEASE_DISTRIBUTION=name RELEASE_NODE=app2@192.168.1.101 _build/prod/rel/my_gleam_app/bin/my_gleam_app start
Gleam Clustering Code:
import gleam/erlang/atom
import gleam/erlang/node
import gleam/io
import gleam/list

pub fn connect_to_cluster() {
  let nodes = ["app1@192.168.1.100", "app2@192.168.1.101"]
  nodes
  |> list.each(fn(node_name) {
    // node.connect takes an Atom; atom.create_from_string may be named
    // atom.create in newer gleam_erlang versions.
    case node.connect(atom.create_from_string(node_name)) {
      Ok(_) -> io.println("Connected to " <> node_name)
      Error(_) -> io.println("Failed to connect to " <> node_name)
    }
  })
}
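A follow-up sketch for verifying cluster membership (for example from a health endpoint), using an external call to erlang:nodes/0:
import gleam/erlang/atom.{type Atom}
import gleam/list

// erlang:nodes/0 returns the atoms of all currently connected nodes.
@external(erlang, "erlang", "nodes")
fn connected_nodes() -> List(Atom)

pub fn cluster_size() -> Int {
  list.length(connected_nodes()) + 1 // +1 for the local node
}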
Load Balancing: BEAM automatically distributes processes across nodes using location transparency - no additional configuration required.
Monitoring and Operations
BEAM Runtime Monitoring
Remote Console Access:
_build/prod/rel/myapp/bin/myapp remote # Mix releases use "remote"; Distillery-built releases use "remote_console"
Runtime Inspection Commands:
# Memory usage breakdown
:recon_alloc.memory(:allocated)
# Find memory-hungry processes
:recon.proc_count(:memory, 10)
# Find CPU-intensive processes
:recon.proc_count(:reductions, 10)
# Trace function calls (carefully!)
:recon_trace.calls({YourModule, :your_function, :return_trace}, 10)
Performance Impact Warning: Debugging tools can impact performance - use for specific issues, not constant monitoring.
Structured Logging
Production Logging Setup:
import gleam/http
import gleam/http/request
import gleam/io
import gleam/json
import gleam/result
import wisp.{type Request}

pub fn log_request(req: Request, response_time: Int) -> Nil {
  json.object([
    #("timestamp", json.string(timestamp())), // timestamp() is an application-specific helper
    #("method", json.string(http.method_to_string(req.method))),
    #("path", json.string(req.path)),
    #("response_time_ms", json.int(response_time)),
    #("user_agent", json.string(request.get_header(req, "user-agent") |> result.unwrap("unknown"))),
  ])
  |> json.to_string()
  |> io.println()
}
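A sketch of wiring this into a handler by measuring elapsed milliseconds with an external call to erlang:system_time/1; log_request is the function defined above, and the atom helper name may differ between gleam_erlang versions:
import gleam/erlang/atom.{type Atom}
import wisp.{type Request, type Response}

// erlang:system_time/1 returns the current time in the requested unit.
@external(erlang, "erlang", "system_time")
fn system_time(unit: Atom) -> Int

// Wrap a handler, time it, and emit one structured log line per request.
fn with_request_logging(req: Request, handler: fn() -> Response) -> Response {
  // atom.create_from_string may be named atom.create in newer gleam_erlang
  let ms = atom.create_from_string("millisecond")
  let started = system_time(ms)
  let response = handler()
  log_request(req, system_time(ms) - started)
  response
}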
Log Aggregation Options:
- Docker: Use --log-driver=fluentd or --log-driver=syslog
- Kubernetes: Automatic cluster logging (usually an ELK stack)
- Fly.io: flyctl logs streams from all instances
- Cloud: Built-in aggregation (CloudWatch, Google Cloud Logging, Azure Monitor)
Application Metrics
BEAM System Metrics:
import gleam/erlang/atom.{type Atom}
import gleam/json
import gleam/string_tree
import wisp.{type Response}

fn handle_metrics_request() -> Response {
  let metrics = json.object([
    #("memory_total", json.int(erlang_memory(atom.create_from_string("total")))),
    #("process_count", json.int(erlang_system_info(atom.create_from_string("process_count")))),
    #("uptime_seconds", json.int(get_uptime())), // get_uptime() is an application-specific helper
  ])
  wisp.ok()
  |> wisp.json_body(string_tree.from_string(json.to_string(metrics)))
}

// erlang:memory/1 and erlang:system_info/1 return integers for these keys
@external(erlang, "erlang", "memory")
fn erlang_memory(kind: Atom) -> Int

@external(erlang, "erlang", "system_info")
fn erlang_system_info(item: Atom) -> Int
Database Monitoring (treat this as pseudocode - verify whatever pool-introspection API your PostgreSQL client/pog version actually exposes):
// Shape of the connection-pool metrics worth exposing
pub fn get_db_metrics() -> DbMetrics {
let pool_info = pog.pool_info(db)
DbMetrics(
active_connections: pool_info.active,
idle_connections: pool_info.idle,
total_connections: pool_info.total,
)
}
Circuit Breaker Pattern (illustrative pseudocode - gleam_otp does not ship a circuit_breaker module, so use a community package or hand-roll the pattern; a simpler retry fallback is sketched after this block):
let api_breaker = circuit_breaker.new(
failure_threshold: 5,
recovery_timeout: 30_000, // 30 seconds
)
pub fn call_external_api(data: String) -> Result(Response, Error) {
circuit_breaker.call(api_breaker, fn() {
http.post("https://api.example.com/endpoint", data)
})
}
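Not a circuit breaker, but a simpler bounded-retry fallback you can hand-roll when no breaker package is available; call_api is any application function returning a Result:
import gleam/erlang/process

fn with_retries(attempts: Int, call_api: fn() -> Result(a, e)) -> Result(a, e) {
  case call_api() {
    Ok(value) -> Ok(value)
    Error(err) if attempts <= 1 -> Error(err)
    Error(_) -> {
      process.sleep(500) // fixed 500ms backoff between attempts
      with_retries(attempts - 1, call_api)
    }
  }
}
Call it as with_retries(3, fn() { call_external_api(data) }).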
Health Checks and Alerting
Comprehensive Health Check:
pub type HealthStatus {
  Healthy
  Unhealthy
}

// The check_* functions are application-specific and return Result values.
fn comprehensive_health_check() -> HealthStatus {
  case #(
    check_database_connection(),
    check_external_api_reachable(),
    check_memory_usage_acceptable(),
  ) {
    #(Ok(_), Ok(_), Ok(_)) -> Healthy
    _ -> Unhealthy
  }
}
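A sketch of exposing this check from the /health route, reusing the wisp Request/Response types from the earlier examples (a 503 status is arguably more precise if your Wisp version offers a generic status constructor):
fn handle_health(_req: Request) -> Response {
  case comprehensive_health_check() {
    Healthy -> wisp.ok()
    Unhealthy -> wisp.internal_server_error()
  }
}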
SLA-Based Alerting Thresholds:
- Response time p95 > 500ms for 5 minutes
- Error rate > 1% for 2 minutes
- Health check failing for 30 seconds
Don't Alert On:
- Individual process crashes (BEAM restarts automatically)
- Memory usage spikes (BEAM GC handles it)
- Temporary database connection failures (pools retry)
Incident Response
Process Crash Investigation:
# Inspect a suspicious process (pid taken from proc_count/proc_window output)
:recon.info(pid)
# Memory-hungry processes
:recon.proc_window(:memory, 10, 5000) # Top 10 by memory growth over a 5-second sampling window
# Kill runaway process
Process.exit(pid, :kill) # Supervisor restarts it
Graceful vs Hard Restart:
# Graceful restart (waits for connections)
_build/prod/rel/myapp/bin/myapp restart
# Hard restart (immediate)
_build/prod/rel/myapp/bin/myapp stop
_build/prod/rel/myapp/bin/myapp start
Golden Rule: BEAM is designed for failure recovery - don't hesitate to kill processes or restart services.
Deployment Strategies
Blue-Green Deployment
# Deploy to green environment
flyctl deploy --app myapp-green
# Test green environment
curl https://myapp-green.fly.dev/health
# Switch traffic by repointing DNS/certificates or your load balancer at the
# green app (flyctl does not provide an app rename command)
Rolling Deployment
# fly.toml
[deploy]
strategy = "rolling"
max_unavailable = 1
Canary Deployment
import gleam/int
import gleam/result

// Route ~5% of traffic to the new code path; hash is an application-supplied
// stable hash of the user id (e.g. erlang:phash2 via an external function)
fn should_use_new_feature(user_id: String) -> Bool {
  let bucket = hash(user_id) |> int.remainder(100) |> result.unwrap(0)
  bucket < 5
}
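A usage sketch inside a request handler; new_checkout_flow and existing_checkout_flow are hypothetical application functions, and Request/Response are the wisp types used throughout:
fn handle_checkout(req: Request, user_id: String) -> Response {
  case should_use_new_feature(user_id) {
    True -> new_checkout_flow(req)
    False -> existing_checkout_flow(req)
  }
}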
Strategy Selection:
- Rolling deployments: Sufficient for most applications
- Blue-green: High-traffic applications requiring zero downtime
- Canary: Critical business logic changes requiring gradual rollout
Decision Criteria
Docker vs BEAM Releases
Use Docker When:
- Team familiar with container deployments
- Need CI/CD pipeline compatibility
- Simple web applications without clustering needs
- Development/staging environment parity important
Use BEAM Releases When:
- Need hot code updates in production
- Requiring automatic process supervision
- Building distributed/clustered applications
- Memory and performance optimization critical
- Leveraging BEAM's fault tolerance features
Platform Selection Decision Matrix
Platform | BEAM Support | Clustering | Cost | Complexity |
---|---|---|---|---|
Fly.io | Excellent | Automatic | Low | Low |
Railway | Basic | Manual | Low | Low |
Render | Basic | Manual | Medium | Low |
AWS/GCP | Manual | Manual | High | High |
Heroku | Poor | No | Very High | Medium |
Recommendation: Start with Fly.io for BEAM-optimized deployment, fall back to Railway/Render for simpler needs.
Resource Planning
Memory Requirements:
- Development: 256MB sufficient
- Production (small): 512MB-1GB
- Production (high traffic): 2GB+ with proper BEAM tuning
CPU Requirements:
- BEAM excels at I/O-bound workloads
- Single CPU core handles thousands of concurrent connections
- Scale horizontally rather than vertically
Storage Requirements:
- Releases: 30-50MB final image size
- Development containers: 200-400MB
- Database: Plan for growth, use connection pooling
This reference provides actionable deployment guidance while preserving critical operational intelligence for AI-powered decision making and implementation.
Useful Links for Further Investigation
Production Deployment Resources
Link | Description |
---|---|
Gleam Writing Guide - Deployment Section | Actually tells you how to structure projects instead of handwaving. Skip to the deployment section if you just need to ship code. |
Wisp Web Framework Examples | Code that actually works in production. Look at these before writing your own Dockerfile from scratch. |
Gleam Package Index Production Apps | The Gleam package index itself is built in Gleam and deployed to production. Source code shows real-world deployment patterns including SQLite + LiteFS and Fly.io deployment. |
Erlang OTP Release Documentation | Deep dive into BEAM release structure and deployment. Essential for understanding hot code updates, clustering, and advanced deployment patterns. |
Official Gleam Docker Images | Pre-built Docker images with Gleam and Erlang/OTP installed. Available in multiple variants including alpine and debian-slim versions. |
Docker Multi-Stage Build Best Practices | Official Docker guidance for production builds. Particularly relevant sections on minimizing image size and security considerations. |
BEAM in Docker: Memory and Performance | Technical deep-dive into BEAM virtual machine behavior in containerized environments, including memory management and process scheduling. |
Fly.io Elixir/BEAM Application Guide | Best deployment platform for BEAM apps. Their guide works for Gleam too, and they actually understand BEAM clustering. |
Railway Gleam Deployment Guide | Step-by-step deployment process for Railway platform, including environment variables, database connections, and custom Docker configurations. |
Google Cloud Run BEAM Applications | Serverless deployment patterns for BEAM applications, though note that this loses many BEAM concurrency benefits. |
Recon: Erlang Production Debugging | The tool that saves your ass when production breaks at 3am. Learn it before you need it. |
Observer and Runtime System Monitoring | Built-in BEAM tools for system monitoring, process visualization, and performance analysis. Works with any BEAM language including Gleam. |
Prometheus BEAM Metrics | Prometheus metrics collection for BEAM applications. Essential for production monitoring and alerting systems. |
BEAM Telemetry and Observability | Standard telemetry library for BEAM ecosystem. Provides hooks for metrics, logging, and distributed tracing. |
Pog PostgreSQL Client | Production-ready PostgreSQL client for Gleam with connection pooling, prepared statements, and type-safe queries. |
SQLight SQLite Client | SQLite client for Gleam applications. Good for smaller deployments or applications with embedded database requirements. |
LiteFS Distributed SQLite | Distributed SQLite solution used by the Gleam package index. Allows SQLite to work in clustered deployments with automatic replication. |
Mix Release Documentation | Elixir's release building tool, compatible with Gleam applications. Covers hot code updates, configuration management, and production releases. |
Distillery Legacy Release Tool | Older but well-documented release tool for BEAM applications. Useful for understanding release concepts and advanced deployment patterns. |
GitHub Actions BEAM CI/CD | Automated testing and deployment workflows for BEAM applications, including Gleam support and cross-platform testing. |
Envoy Environment Variables | Cross-platform environment variable handling for Gleam. Essential for production configuration management without hardcoded values. |
BEAM Security Best Practices | Official Erlang/OTP security documentation covering SSL/TLS configuration, certificate handling, and cryptographic best practices. |
Docker Security for Production | Docker security guidelines relevant to BEAM application deployment, including user privileges, secrets management, and network security. |