Why I Stopped Running Full Kubernetes Everywhere

I was an idiot and ran up my AWS bill to something completely fucked - several thousand bucks, I don't even want to think about it - trying to run a simple PHP app on Kubernetes because I thought it would look "professional" on my resume. Turns out enterprise buzzwords don't pay the rent.

Why Your Server Keeps Running Out of RAM

So here's the thing nobody tells you about Kubernetes: it eats RAM like it's going out of style. The official docs say 4GB minimum but that's just for the control plane to not immediately crash. I found this out the hard way when my cheap DigitalOcean box kept dying with OOMKilled errors every few hours.

When you're trying to run anything on a Raspberry Pi or some crappy VPS, Kubernetes eats like half your RAM just sitting there doing nothing.

My stupid edge project was bleeding money on AWS - like close to a grand a month for 20 different locations running full K8s. When I switched to K3s on Raspberry Pis, it dropped to maybe a couple hundred a month after buying all the hardware. That's rent money right there.

The official K3s requirements are way more reasonable:

  • K3s server: needs maybe 2GB RAM, works fine with whatever CPU you've got
  • K3s agent: can run on 512MB RAM or less if you're not doing anything crazy
  • Regular Kubernetes: wants 4GB+ just to boot up, before you run anything useful
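
Getting started matches the footprint. A minimal sketch of the stock install flow straight from get.k3s.io (the token path shown is the default server location):

## Single-node K3s server
curl -sfL https://get.k3s.io | sh -

## Verify it came up
sudo k3s kubectl get nodes

## Token that agents need to join (on the server)
sudo cat /var/lib/rancher/k3s/server/node-token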

My Local Development Pain Points

I used to run Minikube on my MacBook Pro. It had 16GB RAM which sounds like a lot until Docker Desktop and Minikube eat like 6-7GB just sitting there. Throw in VS Code, Chrome with approximately 47 tabs, and Slack, and my laptop starts sounding like it's about to take off.

What actually works without killing your laptop:

  • K3d: K3s running in Docker containers. Starts fast, uses maybe 1GB RAM for a whole 3-node cluster
  • Kind: Good for CI/CD pipelines, terrible for daily development because it's slow as molasses
  • Colima: Kicked Docker Desktop off my Mac, uses way less RAM
  • Podman: No stupid daemon running in the background, works great on Linux

With K3d I can run a whole cluster, VS Code, and Chrome without my laptop having a nervous breakdown. Try that with full Kubernetes and you're screwed.
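
For reference, standing up that K3d cluster is two commands (a minimal sketch using k3d's stock CLI; the cluster name is arbitrary):

## 3-node cluster: 1 server + 2 agents
k3d cluster create dev --agents 2

## k3d wires up your kubeconfig automatically
kubectl get nodes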

Real gotcha: The K3s installer sometimes fails silently and you waste an hour wondering why kubectl isn't working. Always check the logs first: journalctl -u k3s
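
My standard triage when the install "worked" but kubectl doesn't (plain systemd plus the kubectl that ships inside the k3s binary):

## Is the service actually running?
sudo systemctl status k3s

## Tail the logs for the real error
sudo journalctl -u k3s -f

## If the service looks fine, check the node itself
sudo k3s kubectl get nodes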

IoT Projects That Actually Work

I've deployed containers on everything from Raspberry Pi Zeros (512MB RAM) to industrial PCs in factories. Here's what I learned:

Skip orchestration entirely for simple IoT:

  • Single container? Just use systemd and docker run (minimal example after this list). Don't overcomplicate it.
  • Multiple containers? Docker Compose works fine and uses way less resources.
  • Need clustering? That's when K3s makes sense.
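
For the single-container case, the whole "orchestration" layer can be one command (a sketch; the image name is a placeholder):

## One container, restarts on failure, survives reboots
docker run -d --restart unless-stopped --name sensor-reader myimage:latest

For the multi-container case, docker compose up -d with restart policies in the compose file gets you the same durability.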

Real lesson from the IoT deployments I've done: power consumption is a real pain when you're running stuff remotely. A Pi running K3s draws like 5-10W max. Full Kubernetes on some beefy x86 machine? That's 50-100W easy. If you're running on solar or battery, full K8s will drain your power budget fast.

Why Everyone Thinks They Need Kubernetes

Here's the truth: most teams don't need Kubernetes. They think they do because:

  1. Job requirements list it - "Must know K8s" is on every DevOps job posting
  2. Conference hype - Every talk is about scaling to Netflix levels
  3. Vendor marketing - Cloud providers make more money selling managed K8s
  4. Resume-driven development - Engineers want K8s experience

But if you're running 5 services with 10,000 users, Docker Swarm or even systemd will work fine. I've seen teams spend 6 months setting up Kubernetes for a PHP app that could run on a $5 VPS.

When to Use What (Based on Real Experience)

Use simpler alternatives when:

  • Your server has less than 4GB RAM - Full K8s will eat everything
  • It's just you or a small team - Complex orchestration is overkill
  • You're prototyping - Don't waste time on enterprise bullshit you don't need
  • You're paying the power bill - Edge deployments, IoT, dev laptops
  • You want to understand what's actually happening - Less magic, more learning

Use full Kubernetes when:

  • You have >100 services - At this scale, you need the complexity
  • Multiple teams - Namespaces, RBAC, and multi-tenancy actually matter
  • Compliance requirements - Enterprise security policies, auditing
  • You already have K8s experts - Don't fix what isn't broken

Skip orchestration entirely for:

  • Single applications - systemd + docker run works fine
  • Static sites - Just use a CDN, what are you doing?
  • Databases - Run PostgreSQL directly, not in containers
  • Legacy apps that don't containerize well - Stop forcing it

Most projects start simple and get complex over time. Begin with Docker Compose, move to K3s when you need clustering, upgrade to full Kubernetes when you actually hit its limits. Don't start with the enterprise solution for a side project.

Lightweight Kubernetes Distributions: Resource Consumption & Features

| Distribution | Min RAM | CPU Usage | Disk Space | Key Strengths | Best For | Production Ready |
|---|---|---|---|---|---|---|
| K3s | 2GB server / 512MB agent | barely uses any | around 6GB | Single binary, works on ARM, no etcd bullshit | Edge computing, IoT, actual work | ✅ Actually works in production |
| K0s | around 1GB | pretty low | maybe 6GB | Zero dependencies, modular, auto-update | Simple deployments, CI/CD | ✅ Production-grade |
| Talos Linux | 512MB or less | very low | 3GB-ish | Immutable OS, API-driven, security-focused | Security-critical stuff | ✅ Production-grade |
| MicroK8s | 500MB+ | low-ish | 6GB+ | Snap packaging, addon ecosystem (if you like Snap hell) | Ubuntu servers (unfortunately) | ✅ Works but Snap is garbage |
| K3d | 1GB+ | moderate | 3-4GB | Docker-in-Docker, multi-node simulation | Development, testing, CI/CD | ⚠️ Development only |
| Kind | 1-2GB | moderate | 4GB+ | Official K8s testing, slow startup | K8s development, conformance | ⚠️ Testing/development |
| Minikube | 2GB+ | higher usage | 8GB+ | VM-based, lots of drivers, dashboard | Local development, learning | ⚠️ Development focus |

Alternatives That Don't Require a YAML Degree

Kubernetes isn't the only way to run containers. Sometimes the best orchestration is no orchestration at all. Here's what I actually use when K8s is way too much.

Podman: Docker Without the Daemon Bullshit

Podman doesn't run a daemon. That means when you're not running containers, it uses zero resources. Docker Desktop sitting in your system tray? That's 2GB RAM gone even when idle.

I switched to Podman after Docker daemon shit the bed again and took out all my containers. Maybe the third time that month? With Podman, each container is its own process. One crashes, the others don't give a fuck and keep running.
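
The switch is cheap because the CLI is intentionally Docker-compatible (a sketch; most day-to-day commands map one-to-one):

## Same syntax as docker run
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

## Plenty of people just alias it and move on
alias docker=podman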

Why I like Podman:

  • No daemon - zero idle overhead, and no single process that takes every container down with it
  • Rootless containers - less attack surface, no sudo for day-to-day work
  • Docker-compatible CLI - your existing muscle memory mostly just works

Podman has some rough edges though. The networking can be weird as hell on some distros, Docker Compose files sometimes break for no reason, and the docs are written for people who already know Docker really well. But when it works, it's solid and won't randomly die on you.

Setting Up Podman the Right Way

Skip podman-compose. It's janky and breaks randomly.

Gotcha: Podman networking is still weird as hell on some distros. If containers can't talk to each other, try podman network create and explicitly assign networks (example after the setup below).

Instead of compose, use systemd to manage your containers:

## Generate systemd service files from running containers
podman run -d --name myapp nginx:alpine
podman generate systemd --files --name myapp

## Enable the service
sudo cp container-myapp.service /etc/systemd/system/
sudo systemctl enable container-myapp.service
sudo systemctl start container-myapp.service

This creates proper systemd services that start at boot, restart on failure, and integrate with Linux logging. Way more reliable than compose files.
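
For the networking gotcha above, a minimal sketch (the network and image names are placeholders):

## Put containers on an explicit user-defined network
podman network create appnet
podman run -d --name api --network appnet myapi:latest
podman run -d --name worker --network appnet myworker:latest
## On recent Podman (netavark + aardvark-dns), containers on the same
## user-defined network resolve each other by container name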

Docker Swarm: Actually Pretty Good

Swarm isn't dead, whatever the Kubernetes fanboys tell you. I use it for small production deployments because it's simple and actually works.

Swarm mode is just Docker with clustering. No separate control plane, no etcd, no bullshit. You run docker swarm init and suddenly you have orchestration.

Why Swarm doesn't suck:

  • It's built into Docker - no separate control plane, no etcd to babysit
  • It speaks compose - docker stack deploy takes a docker-compose.yml you already have
  • Overlay networking is built in - multi-node networking without CNI plugin roulette
  • Raft is included - 3/5/7 manager nodes give you HA without extra moving parts

I deployed a WordPress site with Redis and MySQL using Swarm. Total setup time: 20 minutes. Trying the same with Kubernetes would take hours.

## Create a 3-node Swarm cluster
docker swarm init --advertise-addr 192.168.1.100

## On worker nodes:
docker swarm join --token <token> 192.168.1.100:2377

## Deploy your app:
docker stack deploy -c docker-compose.yml wordpress
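
Day-two operations stay just as terse. A sketch, assuming the stack above; the wordpress_web name is hypothetical, since stack services get named <stack>_<service>:

## See what's running in the stack
docker stack services wordpress

## Scale a service
docker service scale wordpress_web=3

## Tail logs
docker service logs -f wordpress_web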

The main downside? Nobody gives a shit about Swarm on resumes. But if you need something that works without giving you headaches, Swarm is solid.

Just Use systemd (Seriously)

For single-node deployments, systemd beats every container orchestrator on resource usage. It's already running on your Linux box, knows how to restart services, handles dependencies, and integrates with logging.

I run several production services this way. A Go web app, a PostgreSQL container, and a Redis container. All managed by systemd, zero orchestration overhead.

## /etc/systemd/system/webapp.service
[Unit]
Description=Web Application Container
After=docker.service
Requires=docker.service

[Service]
## docker run stays attached and never calls sd_notify, so notify would hang the start
Type=simple
ExecStart=/usr/bin/docker run --rm --name webapp -p 80:8080 myapp:latest
ExecStop=/usr/bin/docker stop webapp
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Why systemd works:

  • Reliability - systemd has restarted services for decades, it knows what it's doing
  • Dependencies - Start database before web app, systemd handles ordering
  • Logging - journalctl -u webapp shows all your app logs
  • Resource limits - Built-in cgroup controls for CPU/memory limits (drop-in sketch below)
  • Zero overhead - systemd is already running anyway.
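
Those limits are a drop-in file away (a sketch; the path and numbers are examples, but MemoryMax and CPUQuota are standard systemd directives). One caveat: with Docker, the daemon owns the container's cgroup, so limits on the wrapper service only constrain the docker client process - pass --memory/--cpus to docker run for the container itself, or use Podman, where the container actually runs under the service's cgroup:

## /etc/systemd/system/webapp.service.d/limits.conf
[Service]
MemoryMax=512M
CPUQuota=50%

## Or, for Docker-managed containers, limit at the container level:
## docker run --memory=512m --cpus=0.5 ...

Run sudo systemctl daemon-reload and restart the service to apply.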

When You Don't Need Containers At All

Sometimes the best container solution is no containers. If you're running a simple web app, a binary + systemd might be better than containerizing everything.

Consider skipping containers for:

  • Single binary applications - Go/Rust apps that compile to static binaries
  • Simple Python/Node apps - pip install or npm install works fine
  • Legacy applications - Don't force containerization, just run them normally
  • Databases - PostgreSQL installed directly is faster and easier to backup

I've seen teams containerize a PHP app that could run fine with Apache + mod_php. They spent weeks debugging volume mounts and networking when apt install apache2 php would work better.

I watched this team spend like 3 months trying to containerize some ancient .NET app - hardcoded file paths everywhere, registry bullshit, probably written in 2003. They could've just slapped it on Windows Server and called it a day.

My Actual Recommendations

For development: Use K3d if you need K8s compatibility, Colima if you just want Docker without Docker Desktop.

For single-node production: systemd + Docker containers. It's boring, reliable, and uses minimal resources.

For multi-node but simple: Docker Swarm. Dead simple clustering without K8s complexity.

For edge/IoT: K3s on devices with >2GB RAM, systemd + containers on smaller devices.

For "enterprise": Just use regular Kubernetes. Don't fight it, you'll need the features eventually.

The key insight: start simple, add complexity when you actually need it. Don't begin with enterprise solutions for side projects. Don't containerize everything because it's trendy. Use the right tool for the job, even if that tool is boring.

Questions You're Probably Asking

Q: Am I fucking myself by not using real Kubernetes?

A: These alternatives scale better than you'd think:

  • K3s/K0s: I've seen production clusters with 100+ nodes running fine
  • Docker Swarm: Can handle thousands of nodes, though Docker kind of gave up on it
  • Podman: Single-node mostly, but you can cluster with systemd if you're masochistic

I've personally run K3s clusters with 50+ nodes and they work great. Unless you're running Netflix or Google scale, you probably won't hit the limits. Start simple and migrate when you actually hit problems, not when you think you might. Moving from K3s to full Kubernetes later is way easier than debugging K8s bullshit from day one.

Q: What if this lightweight shit breaks and I get blamed?

A: Lightweight doesn't mean single-point-of-failure:

  • K3s HA: Supports embedded etcd or external database clustering
  • K0s HA: Multi-controller setup with automated leader election
  • Docker Swarm: Raft consensus with 3/5/7 manager node configurations
  • Podman: Systemd clustering with shared storage or database

Production HA pattern for K3s:

## Controller nodes (3 for HA)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-init" sh -
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> INSTALL_K3S_EXEC="server --server https://first-node:6443" sh -

## Agent nodes join the cluster (K3S_URL makes the installer set up an agent, not another server)
curl -sfL https://get.k3s.io | K3S_URL=https://first-node:6443 K3S_TOKEN=<token> sh -

Reality check: HA sounds great until you try to cluster over satellite internet that drops out every 20 minutes. Sometimes a single solid node beats trying to sync across flaky connections.

Q: Will compliance lose their minds over this?

A: Lightweight doesn't mean insecure:

  • Talos Linux: Immutable OS with FIPS 140-2 validation
  • Podman: Rootless containers, SELinux integration, no daemon attack surface
  • Firecracker: VM-level isolation for multi-tenant edge computing

Compliance reality: Most compliance frameworks (SOC2, HIPAA) care about data handling and access controls, not whether you're using the latest enterprise buzzword. Simpler systems have fewer ways to screw up compliance.

Security advantage of lightweight: Smaller attack surface, fewer components to patch, simpler security auditing. Talos Linux demonstrates that security-focused lightweight platforms can exceed traditional Kubernetes security posture.

Q: How do I know if something's fucked?

A: Monitoring approaches scale with platform complexity:

  • Prometheus + Grafana: Works across all Kubernetes-compatible platforms
  • Native monitoring: Cloud platforms provide integrated monitoring
  • Lightweight monitoring: VictoriaMetrics uses 10x less memory than Prometheus
  • Edge monitoring: Telegraf + InfluxDB for resource-constrained environments

Observability for Podman deployments:

## Container metrics via systemd
systemctl status webapp.service
journalctl -fu webapp.service

## Resource monitoring
podman stats
podman top webapp

Edge monitoring reality: Bandwidth and storage suck at edge locations. You need local aggregation and filtering before sending metrics to central systems, assuming your satellite internet doesn't drop out for hours.

Gotcha: Don't rely on cloud monitoring for edge deployments. Your edge nodes will look "down" when they're actually fine but just can't phone home.

Q: Can I actually run databases on this stuff?

A: Stateful workloads work well with proper planning:

  • K3s: Full StatefulSet support with local storage or Longhorn distributed storage
  • Docker Swarm: Volume management with constraints for database placement
  • Podman: Direct volume mounts with systemd service dependencies
  • Edge databases: SQLite and other embedded databases are often better suited than distributed systems

Production database patterns:

## K3s StatefulSet example
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Edge database reality: For edge deployments, managed cloud databases with local caching usually work better than trying to run PostgreSQL on a Pi that might lose power randomly.

I lost like 2 weeks of sensor data when a Pi running PostgreSQL corrupted its SD card during some power outage. Should've used a real SSD instead of that cheap SD card bullshit.

Q: What about networking? Docker networking is already a nightmare

A: Networking complexity varies by platform:

  • K3s: Flannel CNI by default, supports Calico, Cilium for advanced use cases
  • K0s: Calico CNI, supports custom network providers
  • Docker Swarm: Built-in overlay networking, external network integration
  • Podman: Host networking, CNI plugins, systemd network management

Edge networking challenges:

  • Intermittent connectivity: Design for offline operation with eventual consistency
  • Bandwidth constraints: Minimize cross-network traffic, local processing priority
  • Firewall restrictions: Many edge locations have restrictive network policies
  • VPN/tunneling: Secure connectivity back to central systems

Network efficiency pattern: Use ingress controllers or load balancers that understand resource constraints. Traefik and Envoy Proxy both provide lightweight ingress options.

Q: How do I migrate from our current K8s clusterfuck?

A: Migration strategies depend on current platform:

From full Kubernetes to K3s:

  1. Application assessment: Identify Kubernetes-specific dependencies
  2. Resource audit: Calculate actual vs. available resources
  3. Gradual migration: Move non-critical workloads first
  4. Validation testing: Ensure application compatibility

From Docker Compose to orchestration:

  1. Swarm mode: Enable swarm on existing Docker setup
  2. Podman transition: Replace Docker daemon with Podman
  3. K3s adoption: Migrate compose files to Kubernetes manifests (kompose sketch below)
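
For that last step, one way to bootstrap the conversion is kompose - treat its output as a draft to clean up by hand, not production-ready manifests (a sketch; assumes kompose is installed):

## Convert a compose file into Kubernetes manifests
kompose convert -f docker-compose.yml

## Review the generated YAML, then apply it to the K3s cluster
k3s kubectl apply -f .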

Migration timeline reality: Simple applications migrate in 2-4 weeks. Complex applications with Kubernetes-specific features may require 2-3 months for proper testing and validation.

Q: Will my team need to learn completely new shit?

A: Skill requirements by platform:

  • K3s/K0s: Existing Kubernetes knowledge transfers directly
  • Docker Swarm: Docker expertise + basic orchestration concepts
  • Podman: Container knowledge + Linux systems administration
  • Edge platforms: System administration + networking fundamentals

Training approach: Start with platforms closest to existing team skills. Teams comfortable with Docker can adopt Swarm immediately. Teams with Kubernetes experience can transition to K3s with minimal additional training.

Resource investment: Lightweight platforms typically reduce training overhead compared to full Kubernetes. Learning K3s takes weeks vs. months for full Kubernetes expertise.

Q: Are we killing our careers by not using K8s?

A: Market reality check: Understanding multiple orchestration approaches makes engineers more valuable, not less. The industry increasingly values practical engineering decisions over following trends.

Career development advantages:

  • Broader skills: Experience with multiple platforms increases versatility
  • Problem-solving focus: Matching tools to requirements rather than following fashion
  • Operational excellence: Deep understanding of resource management and efficiency
  • Edge expertise: Growing market demand for edge computing skills

Hiring perspective: Companies value engineers who can make appropriate technology choices. Understanding when NOT to use complex tools demonstrates senior-level thinking.

Container Runtimes & Non-Kubernetes Alternatives Comparison

| Platform | Architecture | Min Resources | Orchestration Model | Best Use Case | Learning Curve |
|---|---|---|---|---|---|
| Podman + Systemd | Daemonless, rootless | 256MB RAM, 0.5 CPU | systemd service management | Single-node production, development | Low (if Linux familiar) |
| Docker Swarm | Manager/Worker nodes | 512MB RAM, 1 CPU | Declarative services, stacks | Multi-node Docker deployments | Low (Docker knowledge) |
| Firecracker MicroVMs | MicroVM per container | 5MB per VM + app | Individual microVM lifecycle | Secure multi-tenant edge | Medium |
| LXD System Containers | System containers | 1GB RAM, 1 CPU | VM-like lifecycle management | Legacy app containerization | Medium |
| Systemd + Containers | Native systemd | 128MB RAM, 0.3 CPU | Service unit management | Maximum efficiency single-node | Low (systemd knowledge) |
| Colima | Lima VM runtime | 2GB RAM, 1 CPU | Docker API compatible | macOS development replacement | Low (Docker familiar) |
| Podman Desktop | VM-based (Windows/Mac) | 1GB RAM, 1 CPU | GUI + CLI management | Desktop development | Low |
| Nomad (Non-K8s) | Agent-based cluster | 512MB RAM, 1 CPU | Job scheduling, multi-runtime | Mixed workloads, polyglot | Medium |
