Why Your Memory Monitoring Is Full of Shit

Four different systems, four different numbers, one dead pod

Here's what happened to me last month: a production API showing 400MB usage in Datadog, 500MB in kubectl top, 650MB in Prometheus - and it got OOMKilled at a 512MB limit. Which number do you trust? None of them, because they're all measuring different things and none of them show what actually kills your pod.

The OOM killer doesn't give a damn about your pretty dashboards. It counts everything that gets charged to the cgroup - every byte of memory-mapped files, every network buffer, every piece of kernel memory allocated on behalf of your container. Your monitoring? It's mostly showing RSS memory and calling it a day.

I spent two solid days debugging this before I realized `kubectl top` was basically useless for OOM debugging. It samples every 15-30 seconds and only shows physical memory that's currently in RAM. Miss that 5-second memory spike during garbage collection? Too bad, your pod's dead and you'll never see what killed it in the metrics.

What Actually Counts Toward Your Memory Limit

Your app thinks it's using 400MB. kubectl top agrees. But the OOM killer counted 900MB. Here's what it saw that you didn't:

Memory-mapped files: Your app loads some huge JSON config file using mmap() - maybe 180MB, could be more. It doesn't show up in heap monitoring, but it counts toward your limit. The first time the app actually touches those mapped pages? Boom, suddenly you're way over.

Kernel socket buffers: Got tons of open connections? I had a service where each connection held a few dozen KB of kernel buffer memory - trivial per connection, but with thousands of connections it adds up to hundreds of MB that never shows up in any application monitoring.

Java non-heap memory: JVM metaspace, code cache, direct memory, compressed class space. Your 1GB heap might have 500MB of additional JVM memory that you've never monitored.

Node.js hidden memory: V8 external memory, Buffer pools, native addon memory. `process.memoryUsage()` does report external and arrayBuffers, but most dashboards only chart the heap numbers and miss half your actual memory usage.

The only way to see what the OOM killer sees:

kubectl exec pod -- cat /sys/fs/cgroup/memory.current
kubectl exec pod -- cat /sys/fs/cgroup/memory.max
## (cgroup v2 paths; on cgroup v1 read /sys/fs/cgroup/memory/memory.usage_in_bytes
##  and /sys/fs/cgroup/memory/memory.limit_in_bytes instead)

Everything else is just guessing. This is documented in the cgroup memory documentation, but most Kubernetes troubleshooting guides skip this crucial detail. The Red Hat memory management guide explains these differences in detail, and the Linux memory statistics documentation covers the underlying mechanisms.
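
If you don't want to remember which cgroup version a given node runs, a tiny wrapper like this picks the right files - a sketch, with the pod name as a placeholder and assuming the container image has a shell:

## Read usage and limit regardless of cgroup version
POD="your-pod"
if kubectl exec "$POD" -- test -f /sys/fs/cgroup/cgroup.controllers; then
    ## cgroup v2
    kubectl exec "$POD" -- cat /sys/fs/cgroup/memory.current /sys/fs/cgroup/memory.max
else
    ## cgroup v1
    kubectl exec "$POD" -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes /sys/fs/cgroup/memory/memory.limit_in_bytes
fi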

All the different ways your memory gets counted

Linux memory accounting is a clusterfuck of layers that all count against your cgroup: RSS, page cache, memory-mapped files, kernel allocations (socket buffers, slab, stacks), and shared memory.
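
To see how those layers add up for one container, dump the relevant counters straight from memory.stat. This is a sketch using cgroup v2 field names; on v1 the names differ (mapped_file instead of file_mapped, rss instead of anon):

## Break usage down into the layers that count against the limit (cgroup v2)
kubectl exec pod -- cat /sys/fs/cgroup/memory.current
kubectl exec pod -- cat /sys/fs/cgroup/memory.stat | grep -E "^(anon|file|file_mapped|kernel_stack|sock|slab|shmem) "
## anon        = heap/stack pages (what most dashboards show)
## file        = page cache charged to this cgroup
## file_mapped = memory-mapped files
## kernel_stack, sock, slab = kernel memory charged to you
## shmem       = shared memory / tmpfs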

Real Example: The JSON File That Killed Production

Had an e-commerce API that kept dying every few hours. Monitoring showed 700MB usage with 1GB limits - should be fine, right? Wrong.

The app was memory-mapping these massive JSON catalog files during product updates. Monitoring only saw RSS memory (the 700MB). The kernel saw RSS + memory-mapped files = way over the limit.

Took me three fucking days and probably 50 Stack Overflow tabs to figure out I needed to check what the kernel actually sees:

kubectl exec pod -- cat /proc/1/smaps | grep -A 5 catalog
## Size: like 320MB of memory-mapped files not showing up anywhere

kubectl exec pod -- cat /sys/fs/cgroup/memory/memory.stat
## (cgroup v1 path and field names; on v2 it's /sys/fs/cgroup/memory.stat with anon / file_mapped)
## rss 734003200          ← monitoring showed this (700MB)
## mapped_file 335544320  ← monitoring completely ignored this (320MB)
## Total: We were over by like 50MB, maybe more

Fixed it by streaming the JSON instead of memory-mapping the whole thing. Three outages to figure out that literally all our monitoring was lying to us.

cgroup v2 Broke My Stable Workloads

Upgraded to Kubernetes 1.31 - and with it, node images that default to cgroup v2 - and suddenly pods that ran fine for months started getting OOMKilled. Same code, same limits, same everything, except cgroup v2 counts memory differently.

cgroup v2 is "more accurate" which is a polite way of saying it counts a bunch of shit that v1 ignored. Your 800MB pod that was totally stable before? Now it's using 850MB+ because kernel stack memory and network buffers suddenly count toward your limit.

## Check if you're on cgroup v2
kubectl exec pod -- cat /sys/fs/cgroup/cgroup.controllers
## If this file exists, you're on v2

## See the extra memory v2 counts
kubectl exec pod -- cat /sys/fs/cgroup/memory.stat | grep -E "kernel|sock|slab"
## kernel_stack 4194304   ← 4MB now counted (was free in v1)
## sock 8192000          ← 8MB network buffers now counted  
## slab 125829120        ← 120MB kernel memory now counted

Translation: your pods need 10-15% higher memory limits after upgrading to Kubernetes 1.31+. I found this out during a cluster upgrade that took down half our goddamn services at once.
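
If you'd rather not hand-edit manifests for that bump, kubectl can patch the limit in place - the deployment and container names here are placeholders, and the ~15% number is the rule of thumb above, not gospel:

## 800Mi limit that was stable on cgroup v1 → ~920Mi after the v2 upgrade
kubectl set resources deployment/my-api -c api --limits=memory=920Mi
## Then watch whether the OOMKills actually stop
kubectl get events --field-selector reason=OOMKilling -w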

The 15-Second Gap That Kills Your Pods

Your monitoring samples every 15-30 seconds. Memory spikes last 5 seconds. Guess what happens?

10:15:00 - 512MB (monitoring sample: looks fine)
10:15:05 - 1.2GB spike during garbage collection (no sample)
10:15:06 - OOMKilled
10:15:15 - Pod restarting (next sample shows restart, not spike)

You'll never see the spike in your dashboards. The pod just "randomly" dies.

Here's how to catch the spikes that kill your pods:

POD_NAME="your-dying-pod"
## Grab the limit once - it doesn't change (cgroup v2 paths; see the v1 note above)
## If memory.max prints "max", the pod has no limit and this math doesn't apply
LIMIT=$(kubectl exec $POD_NAME -- cat /sys/fs/cgroup/memory.max)
while kubectl get pod $POD_NAME > /dev/null 2>&1; do
    MEM=$(kubectl exec $POD_NAME -- cat /sys/fs/cgroup/memory.current)
    PERCENT=$(( MEM * 100 / LIMIT ))
    echo "$(date '+%H:%M:%S'): ${PERCENT}%"
    [ $PERCENT -gt 95 ] && echo "SPIKE DETECTED!"
    sleep 0.1
done

Run this before your pod dies and you'll actually see what kills it.
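
One caveat: each kubectl exec round-trip takes a lot longer than 100ms, so the loop above samples slower than the sleep suggests. If the image has a shell, push the loop into the container so the sampling rate is real - a sketch, assuming cgroup v2 paths and a sleep that accepts fractional seconds (busybox and GNU coreutils both do):

## Sample inside the container - no kubectl round-trip per sample
kubectl exec your-dying-pod -- sh -c '
  LIMIT=$(cat /sys/fs/cgroup/memory.max)
  while true; do
    echo "$(date +%H:%M:%S) $(cat /sys/fs/cgroup/memory.current) / $LIMIT"
    sleep 0.1
  done'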

Cloud Provider Memory Overhead (aka Hidden Taxes)

AWS EKS taxes you 50-200MB per pod for VPC CNI networking, instance metadata service, and CloudWatch agents. They don't mention this shit in the pricing calculator, of course.

Google GKE Autopilot forces memory limits based on your requests. Request 256MB, get 512MB limit automatically. But then Stackdriver monitoring eats 40-120MB of that without asking.

Azure AKS has similar CNI overhead plus whatever the hell the Azure Monitor agent feels like consuming that week.

Your 512MB pod suddenly needs 700MB limits on managed Kubernetes. Factor this in or watch your pods die mysterious deaths.

Quick Diagnostic Commands That Actually Work

Skip the fancy monitoring. Use these when shit hits the fan:

## See what the OOM killer sees
kubectl exec pod -- cat /sys/fs/cgroup/memory.current
kubectl exec pod -- cat /sys/fs/cgroup/memory.max

## Find memory-mapped files (common hidden memory) - 6+ digit kB sizes = 100MB+
kubectl exec pod -- cat /proc/1/smaps | grep -E -B 1 -A 3 "Size: +[0-9]{6,}"

## Check for memory pressure before OOM (cgroup v2 only)
kubectl exec pod -- cat /sys/fs/cgroup/memory.pressure

## Get the real memory breakdown
kubectl exec pod -- cat /proc/1/status | grep -E "VmRSS|VmSize"

Stop guessing. Start measuring what actually matters.

Hidden Memory: What's Actually Killing Your Pods

The stuff your monitoring doesn't see but the OOM killer counts

You know that feeling when your pod dies with plenty of apparent memory headroom? That's hidden memory consumers laughing at your monitoring setup.

I've debugged enough of these to recognize the patterns. Your Java app reports 800MB heap usage, looks fine. But the JVM also allocated 400MB of direct memory for NIO operations, 150MB of metaspace for classes, 64MB of code cache for JIT compilation, and 200MB for compressed class space. Total: 1.6GB. Your limit: 1GB. Mystery solved.
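
A quick way to see this gap on a live pod: compare what the JVM reports for its heap against what the cgroup is actually charging you. A sketch, assuming jcmd ships in the image (full JDK images yes, slim JRE images often not) and cgroup v2 paths:

## What the JVM thinks vs what the OOM killer counts
kubectl exec java-pod -- jcmd 1 GC.heap_info
kubectl exec java-pod -- cat /sys/fs/cgroup/memory.current
## If memory.current sits far above the heap numbers, the difference is the
## non-heap memory described above: metaspace, code cache, direct buffers, thread stacks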

The OOM killer doesn't discriminate. It counts everything charged to your cgroup - RSS, page cache, memory-mapped files, kernel allocations, shared memory, the works. Your pretty dashboards? They're showing you maybe 60% of the story.

The Big Three Hidden Memory Killers

1. High-connection services eat kernel memory for breakfast

Had a WebSocket service with like 3000 concurrent connections. App looked fine at 300MB but the kernel was eating another 200MB+ for socket buffers. Each connection needs about 8KB of kernel memory that doesn't show up anywhere in your app monitoring.

## See kernel memory allocations
kubectl exec pod -- cat /sys/fs/cgroup/memory.stat | grep sock
## sock 209715200  ← 200MB of socket buffers (hidden from app)
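
You can also sanity-check socket memory from inside the pod's network namespace without any cgroup spelunking - /proc/net/sockstat reports TCP buffer usage in 4KB pages (the numbers in the comment are illustrative, not from a real incident):

## Rough socket-buffer check from inside the pod
kubectl exec pod -- cat /proc/net/sockstat
## TCP: inuse 2987 orphan 3 tw 41 alloc 3012 mem 51200
## "mem" is in 4KB pages, so 51200 pages ≈ 200MB of TCP buffers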

2. Memory-mapped files are invisible memory bombs

App was memory-mapping these huge database index files, maybe 400MB worth. Shows 0MB in RSS initially. First time the app actually touches that data? Boom - suddenly you're way over your fucking limit.

## Find memory-mapped files that might be killing you
kubectl exec pod -- cat /proc/1/smaps | grep -E -B 1 -A 1 "Size: +[0-9]{6,}"
## Look for files with 6+ digit sizes (100MB+)

3. Language runtime overhead that nobody talks about

Java: Your 1GB heap has 500MB+ of metaspace, code cache, and direct memory on top.
Node.js: V8 external memory, Buffer pools, and native addons add 200MB+ easily.
Python: C extensions and module caches that never show up in memory profiling.
Go: CGO allocations and runtime overhead the Go runtime doesn't track.

[Diagram: Kubernetes memory management architecture]

The Commands That Actually Help

Your pod is dying? Run these commands first:

## See total memory usage (what the OOM killer sees)
kubectl exec pod -- cat /sys/fs/cgroup/memory.current

## Check for big memory-mapped files
kubectl exec pod -- cat /proc/1/smaps | grep -E -B1 -A1 "Size: +[0-9]{6,}"

## Java apps: get the full JVM memory picture (needs -XX:NativeMemoryTracking=summary)
kubectl exec java-pod -- jcmd 1 VM.native_memory summary

## Node.js: see hidden external memory
kubectl exec node-pod -- node -e "console.log(process.memoryUsage())"

Real talk: If you're spending more than 30 minutes figuring out what's killing your pods, you're doing it wrong. These commands give you 90% of the answers in 5 minutes.

The Stuff That Happens Between Monitoring Samples

Your monitoring samples every 15-30 seconds. Garbage collection spikes last 3-5 seconds. Math time: you're missing most of the spikes that kill your pods.

Most memory spikes I've debugged fall into three categories:

1. Garbage collection doubling memory temporarily

Java major GC can double your memory usage for 3-5 seconds. Your monitoring never sees it, but the OOM killer sure does.

2. Request burst processing

50 concurrent API requests hit your pod. Each needs 20MB to process. Suddenly you need 1GB more memory for 10 seconds. Pod dies, monitoring shows nothing unusual.

3. Startup memory allocation

Database connection pools, caches warming up, initial data loads. Memory usage spikes 5x during startup, then settles down. If your startup exceeds your limit, you'll never get past initialization.

How to Actually Fix This Mess

Stop trying to predict memory spikes. Start accounting for them:

For Java apps: Set memory limits to 2x your heap size. JVM needs that much for non-heap memory + GC overhead.

For Node.js: Add 200MB+ to whatever your app "uses" for V8 external memory and buffers.

For startup spikes: Set init containers or startup probes with higher memory limits, then reduce after initialization.

For request bursts: Use memory circuit breakers - reject new requests when memory usage hits 85%.

Or just do what I do now after getting burned way too many times: set memory limits to 150-200% of what your monitoring says you need. Stop trying to optimize every last goddamn megabyte. Your uptime is way more important than shaving $50 off your AWS bill.
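
For the Java "2x heap" rule specifically, you can make the ceiling explicit instead of hoping: cap the big non-heap pools so heap + non-heap has a known upper bound. A sketch with standard HotSpot flags - the sizes are placeholders, take the real numbers from jcmd output:

## Set via JAVA_TOOL_OPTIONS (or your pod spec env) so the JVM picks it up
JAVA_TOOL_OPTIONS="-Xmx1g -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=256m -Xss512k"
## Rough ceiling: 1g heap + 256m metaspace + 256m direct + (threads × 512k stacks) + code cache
## Set the container memory limit comfortably above that number, not at it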

Memory Monitoring Tools: What Actually Works vs What's Broken

kubectl top pod
  • Shows: RSS memory only
  • Misses: memory-mapped files, kernel allocations, runtime overhead (30-50% of actual usage)
  • When to use: never for OOM debugging
  • Reality check: useless for debugging, fine for basic monitoring

Application metrics
  • Shows: language-specific heap/runtime stats
  • Misses: everything outside the runtime (JVM non-heap, V8 external, etc.)
  • When to use: development & tuning
  • Reality check: tells you 40-60% of the story

Prometheus/cAdvisor
  • Shows: detailed cgroup stats
  • Misses: short-term spikes (15-60s sampling)
  • When to use: long-term monitoring, alerting
  • Reality check: good for trends, bad for incidents

cgroup memory.current
  • Shows: everything the OOM killer sees - this is the truth
  • Misses: nothing
  • When to use: when your pod is dying
  • Reality check: the only number that matters

When Multiple Pods Die at Once

The cascading failures that ruin your day

Multiple pods getting OOMKilled simultaneously isn't bad luck - it's system-level memory pressure cascading through your infrastructure. I've seen this take down entire services when one batch job decided to eat all available memory on a node.

Your pods have individual memory limits, but there are three other layers that can kill them: the node runs out of physical RAM, the QoS cgroup hits its collective limit, or the system processes start hogging memory. Individual pod limits mean nothing when the whole foundation is unstable.

[Diagram: Linux cgroups architecture]

Node Memory Pressure: When Your Math Is Wrong

Had a 16GB node, bunch of pods with 512MB limits, thought I had plenty of headroom. Wrong as hell.

Turns out the system was using 3.8GB for containerd, dockerd, kubelet, systemd services, kernel memory, and all the other shit that makes a server actually work. My "6GB headroom" was actually 2.2GB, and during traffic spikes, that wasn't nearly enough.

## Check what's actually using memory on your nodes
free -h
ps aux --sort=-%mem | head -10
slabtop -o | head -5

## Don't trust the Kubernetes allocatable numbers
kubectl describe node | grep -A 5 "Allocatable"
## That's just Kubernetes math, not reality

Reality check: Reserve 25-30% of node memory for system overhead, not the 10% that most tutorials suggest. The Kubernetes documentation is too optimistic about actual memory usage patterns in production environments.
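
The knobs for that reservation live on the kubelet, not in your pod specs. These are standard kubelet flags (the same fields exist in KubeletConfiguration); the numbers are the assumption - size them from what free and ps actually show on your nodes:

## kubelet flags for a 16GB node (illustrative sizes)
--system-reserved=memory=2Gi           ## OS, systemd, containerd, sshd
--kube-reserved=memory=1Gi             ## kubelet, CNI, node agents
--eviction-hard=memory.available<500Mi
## Allocatable = capacity - system-reserved - kube-reserved - eviction threshold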

QoS Cgroup Limits: The Silent Pod Killer

Your individual pods are fine, but collectively your QoS class is over its limit. Kubernetes groups pods by QoS class (Guaranteed, Burstable, BestEffort) and sets aggregate limits. When the Burstable QoS group hits its 8GB limit, it starts killing random Burstable pods - even ones that are individually under their resource limits.

## Check if QoS-level pressure is killing your pods
kubectl get events --field-selector reason=OOMKilling -o json | \
  jq -r '.items[] | select(.involvedObject.kind=="Pod") | .message' | \
  grep -c "QoS" || echo "Individual pod limits exceeded"

Most people don't even know this exists because it's not well documented in the Kubernetes troubleshooting guides. Found out the hard way when our web APIs kept dying during data processing jobs, even though both had appropriate individual limits. The cgroup memory controller documentation explains this behavior, but it's buried in kernel documentation that most developers never read.
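
If you want to see the QoS-level numbers yourself, they live on the node, not in the pod. With the systemd cgroup driver on cgroup v2 the paths look roughly like this - treat them as a starting point, since the layout varies by cgroup driver and version:

## On the node (SSH in, or use a node debug pod)
cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/memory.current
cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/memory.max
## memory.max may just say "max" unless the kubelet actually reserves QoS memory;
## if it's a number and current is crowding it, the whole Burstable class is under pressure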

Batch Job Memory Interference

Web API pods getting killed when batch jobs start? Classic memory interference pattern. The batch job spikes to 1.8GB, node memory pressure kicks in, Kubernetes looks around and decides your Burstable web API pods are more expendable than the Guaranteed batch job.

Simple fix: Don't run memory-intensive workloads on the same nodes as latency-sensitive services.

## Keep batch jobs away from web services
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-processor
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: workload-type
                operator: In
                values: ["batch"]
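
The other half of that affinity rule is labeling the nodes, otherwise the batch pods have nowhere to schedule - node names below are placeholders:

## Dedicate some nodes to batch work
kubectl label nodes batch-node-1 batch-node-2 workload-type=batch
## Verify where the batch pods actually landed
kubectl get pods -o wide | grep batch-processor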

What Actually Works

Stop trying to optimize every last MB. Reserve 30% more memory than you think you need:

  • 25-30% for system overhead
  • 25% buffer for memory spikes
  • Higher limits for memory-intensive apps to avoid QoS pressure

Separate your workloads by memory profile:

  • Latency-sensitive services on dedicated nodes
  • Batch jobs on separate nodes with higher memory capacity
  • Don't mix memory-intensive and memory-sensitive workloads

Monitor at the node level, not just individual pods:
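
Nothing fancy required - the point is to look at node totals and node conditions, not just per-pod graphs (standard kubectl output, no extra tooling assumed):

## Node-level view, not pod-level
kubectl top nodes
kubectl describe nodes | grep -A 8 "Allocated resources"
## Node conditions flip before the cascading OOMKills start
kubectl describe nodes | grep -i "MemoryPressure"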

The goal isn't perfect memory utilization - it's keeping your services running. Better to "waste" 30% of your memory than to have cascading failures that take down multiple services simultaneously and wake you up at 3am. Your uptime matters more than your AWS bill.

Look, I've debugged enough of these memory disasters to know what actually works. Stop trying to optimize every last fucking megabyte. Give your pods enough memory to handle reality, not your perfect little test environment. Monitor what the kernel sees, not what your pretty dashboards show. And for fuck's sake, separate your batch jobs from your web services.

OOMKilled Debugging: Real Problems, Real Solutions

Q

kubectl top says I'm using 400MB but got OOMKilled at 512MB limit. What the fuck?

A

kubectl top lies. It only shows RSS memory - what's physically in RAM right now. The OOM killer counts everything: memory-mapped files, kernel allocations, page cache, the works.

Your pod that "uses" 400MB might actually be using 700MB once you include the 200MB JSON file it memory-mapped and the 100MB of kernel socket buffers for your database connections.

Quick check:

## See what the OOM killer actually sees
kubectl exec pod -- cat /sys/fs/cgroup/memory.current
kubectl exec pod -- cat /sys/fs/cgroup/memory.max
## If current keeps climbing toward max, there's your problem

Fix: Set limits to 150-200% of what kubectl top shows. Stop trying to optimize every last fucking megabyte.

Q

My Java app heap is 800MB but got OOMKilled at 1GB limit. JVM bug?

A

Nope. The JVM uses way more memory than just the heap:

  • Metaspace (class metadata): 150MB+
  • Code cache (JIT compiled code): 64MB
  • Direct memory (ByteBuffers): 256MB
  • Compressed class space: 32MB
  • Thread stacks: 200 threads × 1MB = 200MB

Your "1GB heap" JVM is actually using about 1.7GB total.

Check it:

kubectl exec java-pod -- jcmd 1 VM.native_memory summary
## (needs the JVM started with -XX:NativeMemoryTracking=summary)
## Look at the non-heap sections: Metaspace, Code, Thread, Internal

Fix: Set memory limits to 2x your heap size. Always.

Q

My pod was stable for months, now it's OOMKilled after Kubernetes upgrade. Nothing changed.

A

You upgraded to Kubernetes 1.31+, and most likely to node images that default to cgroup v2 (cgroup v1 went into maintenance mode in 1.31). cgroup v2 counts kernel memory that cgroup v1 ignored, so your stable 800MB pod now reports 850MB+ because kernel stack memory and network buffers suddenly count toward your limit.

Check cgroup version:

kubectl exec pod -- cat /sys/fs/cgroup/cgroup.controllers
## If this file exists, you're on cgroup v2

Fix: Increase memory limits by 10-15% after Kubernetes 1.31+ upgrades.

Q

Multiple pods on the same node got OOMKilled simultaneously. Coordinated attack?

A

Node-level memory pressure. The node ran out of physical RAM and started killing pods based on QoS priority. Your individual pod limits don't matter when the whole node is out of memory.

Check node memory:

kubectl describe node <node-name> | grep -A 10 "Allocated resources"
kubectl top node <node-name>
## Compare allocated vs actual usage

Common causes:

  • System processes eating more memory than reserved
  • DaemonSet pods (monitoring, logging) growing over time
  • Memory-intensive batch jobs consuming all available memory

Fix: Reserve 25-30% of node memory for system overhead, not the typical 10%.

Q

My Node.js app shows 100MB heap usage but uses 500MB total memory. Memory leak?

A

Node.js has tons of hidden memory outside the V8 heap:

  • External memory (C++ objects): 200MB+
  • Array buffers (binary data): 150MB+
  • Buffer pools: 100MB+
  • Native addons: 50MB+

Check all Node.js memory types:

kubectl exec node-pod -- node -e "console.log(process.memoryUsage())"
## Look at external, arrayBuffers, not just heapUsed

Fix: Add 200-300MB to whatever your V8 heap monitoring shows.

Q

Same workload uses 200MB more memory on AWS EKS than self-managed cluster. What gives?

A

EKS has additional memory overhead:

  • VPC CNI networking: 50-100MB per pod
  • CloudWatch Container Insights: 30-80MB per pod
  • Instance metadata service: 10-50MB per pod
  • EBS volume attachments: 5-15MB per volume

Total EKS tax: 100-250MB per pod compared to self-managed.

Fix: Account for managed Kubernetes overhead. Your 512MB self-managed limits need to be more like 650-750MB on managed platforms.

Q

Pod keeps getting OOMKilled but monitoring shows gradual memory growth, no spikes.

A

Your monitoring samples every 15-30 seconds. Memory spikes during garbage collection or request bursts last 3-5 seconds. You're missing the spikes that kill your pods.

Catch the spikes:

## Monitor memory every 100ms to catch spikes (cgroup v2 paths)
POD_NAME="dying-pod"
LIMIT=$(kubectl exec $POD_NAME -- cat /sys/fs/cgroup/memory.max)
while kubectl get pod $POD_NAME >/dev/null 2>&1; do
  MEM=$(kubectl exec $POD_NAME -- cat /sys/fs/cgroup/memory.current)
  PERCENT=$((MEM * 100 / LIMIT))
  echo "$(date '+%H:%M:%S'): ${PERCENT}%"
  [ $PERCENT -gt 95 ] && echo "SPIKE!"
  sleep 0.1
done

Fix: Set limits high enough to absorb temporary spikes, or implement memory circuit breakers to reject requests at 85% memory usage.

Q

My web API pods get killed when batch jobs start, even with different resource limits. Why?

A

Memory interference. The batch job creates node-level memory pressure, Kubernetes looks around and decides your Burstable web API pods are more expendable than the Guaranteed batch job.

Check QoS classes:

kubectl get pods -o custom-columns="NAME:.metadata.name,QOS:.status.qosClass"
## Eviction order under node pressure: BestEffort goes first, then Burstable, Guaranteed last

Fix: Separate workloads by memory profile. Don't run batch jobs and latency-sensitive services on the same nodes.

Q

How do I know if this is an individual pod problem or system-wide memory pressure?

A

Check the timing of OOMKilled events:

kubectl get events --field-selector reason=OOMKilling --sort-by='.lastTimestamp'
## Multiple pods killed within minutes = system pressure
## Single pod consistently killed = individual pod problem

Individual pod: Increase that pod's memory limit
System pressure: Check node memory, QoS limits, workload separation
