From Docker to Kubernetes - Why Minikube Exists

If you've been running Docker containers and think you're ready for Kubernetes, here's some bad news: you're not. Kubernetes is a different beast entirely. Minikube exists to bridge that gap by running a complete Kubernetes cluster on your laptop.

What Runs Inside That VM

When you run minikube start, it spins up a VM (or container) with the entire Kubernetes control plane. This isn't just Docker - it's a full orchestration system with multiple components working together: the API server, etcd, scheduler, and controller manager.
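
You can see those pieces for yourself once the cluster is up. Assuming kubectl is pointed at the Minikube context, the control plane shows up as ordinary pods:

kubectl get pods -n kube-system
## Expect kube-apiserver-minikube, etcd-minikube, kube-scheduler-minikube,
## kube-controller-manager-minikube, plus coredns, kube-proxy, and storage-provisioner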

Why Teams Actually Use Minikube

Learning Without Breaking Things

The Kubernetes documentation recommends Minikube as the starting point because it gives you a real cluster to experiment with. You can break deployments, crash services, and mess up networking without affecting anyone else. I've seen developers go from Docker Compose to production Kubernetes in a few weeks using Minikube to understand the concepts.

Reasonable Resource Usage

Minikube officially needs 2GB RAM and 2 CPUs, but that's optimistic. In practice, you want 4GB+ unless you enjoy watching pods take forever to start. Still, it's way lighter than running a multi-node cluster on your laptop.

CI/CD Testing

Most CI platforms support Minikube because it's cheaper than spinning up real clusters for every test run. I've worked on projects where our integration tests ran against Minikube - it's close enough to real Kubernetes for most scenarios.

Predictable Failures

Unlike some local K8s tools, when Minikube breaks, it usually breaks in documented ways. There's a GitHub issue for almost every error you'll encounter, which beats debugging mysterious failures in production.

Resource Reality Check

The docs say 2GB RAM and 2 CPUs. That'll work, but you'll hate life. On a 2GB machine, I've sat there forever waiting for it to start.

What you actually need:

  • 4+ CPUs - Less than this and everything takes forever
  • 4-8GB RAM - Tried 2GB once, pods kept getting killed
  • 30GB+ disk space - Images pile up fast
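
If you want to allocate those resources explicitly instead of taking the defaults, it's a couple of flags (the numbers below mirror the list above; tune them for your machine):

## Allocate resources when creating the cluster
minikube start --cpus=4 --memory=8192 --disk-size=30g

## Or make them the defaults for future clusters
minikube config set cpus 4
minikube config set memory 8192
## Config changes only apply to a fresh cluster
minikube delete && minikube start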

They say you can run 100+ pods. Sure, if your definition of "running" is pretty loose. Gets slow around 20 pods doing real work.

Common Ways Minikube Breaks

VirtualBox dies with every macOS update. Ventura broke VirtualBox 6.x for months. I spent a whole Friday trying to demo something, only to find that Tuesday's update had killed the kernel extensions again.

The Docker driver randomly stops working. You're trying to show a coworker something and minikube start just sits there. Docker Desktop updated overnight and disabled WSL2 integration. I wasted hours on that one.

Forgetting minikube tunnel. Deploy a LoadBalancer service, everything looks good, spend forever debugging networking. Then realize tunnel isn't running. Every damn time.

Running out of space mid-demo. "No space left on device" while people are watching your screen. Docker images filled up the default 20GB. Now I run docker system prune regularly.

Recent versions get flaky after a few days if you leave them running. I think it's a CoreDNS memory leak, but I'm not totally sure. Just restart Minikube when kubectl starts timing out.

The troubleshooting guide is actually pretty good - bookmark it because you'll need it.

Minikube vs The Other Local K8s Options (That Actually Matter)

Minikube

  • Key features: The original. Eats resources like crazy - needs 4GB minimum. Takes forever to start but stays up once running. minikube tunnel crashes constantly, which is annoying as hell.
  • Startup time: 45 seconds to "guess I'll make coffee"
  • Memory usage: At least 2GB, often 3GB+ when doing real work
  • Best for: Learning K8s on a machine with 8GB+ RAM. It's what all the tutorials assume you're using anyway.
  • Install: brew install minikube works, but you'll spend 30 minutes configuring drivers

Kind

  • Key features: Fast and great for CI. Runs the cluster in Docker containers, which is clever, but if Docker breaks, you're stuck. Usually starts in about 30 seconds.
  • Startup time: 30 seconds if Docker cooperates
  • Memory usage: Docker plus another gig or so
  • Best for: When Docker is already eating half your system resources. Starts stupid fast.
  • Install: brew install kind and you're done in 5 minutes

K3s

  • Key features: Actually lightweight. Rancher built it for edge workloads, so it uses far less RAM, runs fine on weak laptops, and its ARM support isn't broken like Minikube's.
  • Startup time: Pretty fast, around 20 seconds
  • Memory usage: Usually under a gig, sometimes more
  • Best for: Laptops from 2018, when you're tired of waiting 3 minutes for everything to boot. It also works on M1 Macs without weird compatibility issues.
  • Install: curl -sfL https://get.k3s.io | sh - and it actually works as advertised

Docker Desktop

  • Key features: Has K8s built in. The GUI is nice, but it's bloated and kills your battery. Fine if you already live in Docker Desktop.
  • Startup time: A minute, sometimes never
  • Memory usage: Eats 2GB even when idle
  • Best for: People already using Docker Desktop who don't want to install another thing.
  • Install: Download the GUI, install, enable Kubernetes in settings

MicroK8s

  • Key features: Ubuntu's take. Works great on Ubuntu, poorly everywhere else. Snap packages either work perfectly or not at all.
  • Startup time: Fast on Ubuntu, broken elsewhere
  • Memory usage: Not measured precisely, but roughly 1-2GB
  • Best for: Skip it unless you're all Ubuntu, all the time.
  • Install: snap install microk8s on Ubuntu; good luck everywhere else

Minikube Features That Actually Matter

Driver Options and Reality

Minikube supports multiple drivers, and each has different trade-offs. After running this across different teams and environments, here's what you need to know:

Container Drivers:

  • Docker - The default on most platforms. Runs the whole cluster as a container inside Docker, so there's no separate VM to manage. Fastest option, but it breaks whenever Docker Desktop does.
  • Podman - The rootless alternative. It works, but it's less battle-tested than the Docker driver.

VM Drivers:

  • VirtualBox - Slow but predictable. When other drivers fail, VirtualBox usually works.
  • VMware - Fast and reliable if you have VMware Fusion or Workstation installed.
  • Hyper-V - Windows only, requires admin rights, networking setup is complex.
  • KVM - Linux only, excellent performance once libvirt permissions are sorted out.

Start with Docker driver. If that gives you trouble, VirtualBox is your reliable fallback.
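
Picking and pinning a driver is a one-liner either way; a quick sketch:

## Try the Docker driver first
minikube start --driver=docker

## If it misbehaves, fall back to VirtualBox (switching drivers needs a fresh cluster)
minikube delete
minikube start --driver=virtualbox

## Make your choice the default for future clusters
minikube config set driver docker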

Addons That Actually Help

The addon system is one of Minikube's best features. One minikube addons enable command gets you functionality that would take hours to configure manually.

Essential addons:

Dashboard - Web UI for browsing your cluster. Actually useful for debugging when kubectl gets tedious.

Ingress - NGINX ingress controller. Essential if you want to expose services properly instead of using NodePort.

Metrics Server - Enables kubectl top and horizontal pod autoscaling. Required for any performance monitoring.

Useful for specific needs:

Registry - Local Docker registry at localhost:5000. Handy for development, though you'll forget the port number constantly.

Registry Creds - Automatically configures credentials for AWS ECR, GCP Container Registry. Saves you from ImagePullBackOff errors.

Skip unless you have specific needs:

Istio - Service mesh is serious overkill for a single-node development cluster.

EFK Stack - Elasticsearch will consume all your RAM. Use it only if you're specifically testing logging setups.

Dashboard and Ingress are the most commonly used because they solve immediate development needs.
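
Enabling and checking them is a one-liner each; the names below are the built-in addon names:

minikube addons list                    ## what's available and what's enabled
minikube addons enable dashboard
minikube addons enable ingress
minikube addons enable metrics-server
minikube dashboard                      ## opens the web UI in your browser
kubectl top pods                        ## works once metrics-server has been running a minute or two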

Networking Basics

Kubernetes networking takes some getting used to, and Minikube has its own quirks:

Service Types

ClusterIP - Internal cluster access only. This works reliably and is perfect for backend services.

NodePort - Exposes services on high-numbered ports (30000+). Use minikube ip to get the cluster IP, then access your service at that IP plus the NodePort.

LoadBalancer - This is where it gets tricky. LoadBalancer services need minikube tunnel running to get external IPs.

The minikube tunnel Command

This command creates network routes so LoadBalancer services get real IP addresses instead of staying "pending" forever.

The catch is it needs sudo privileges on Linux and macOS, so you'll get prompted for your password. You need to keep this running in a separate terminal while you're using LoadBalancer services.

If you forget to run tunnel and wonder why your service isn't accessible, that's usually the problem. On Windows, tunnel can be inconsistent - NodePort is often more reliable there.
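
A minimal LoadBalancer round trip, using a throwaway nginx deployment as the example (the names are made up):

## Terminal 1: something to expose
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=LoadBalancer --port=80

## Terminal 2: keep this running; it asks for sudo on Linux/macOS
minikube tunnel

## Terminal 1 again: EXTERNAL-IP should flip from <pending> to a real address
kubectl get svc hello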

Storage Limitations

Persistent volumes in Minikube use hostPath storage, which has some important implications:

  • Data survives pod restarts and recreations
  • Data gets wiped when you run minikube delete
  • No real backup or replication - it's just files on the host
  • Fine for development, don't rely on it for anything important

Storage classes exist but they all use the same underlying hostPath mechanism, so the class you choose doesn't affect durability or performance.
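
A minimal sketch of claiming storage, assuming Minikube's default "standard" storage class (the claim name is made up):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

## Binds to a hostPath-backed volume inside the VM:
## survives pod restarts, gone after minikube delete
kubectl get pvc demo-data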

Development Workflow: The Actually Useful Stuff

Image Development Loop:

## The old way (slow)
docker build -t myapp .
docker push myregistry/myapp
kubectl set image deployment/myapp myapp=myregistry/myapp

## The Minikube way (fast)
eval $(minikube docker-env)
docker build -t myapp .
kubectl set image deployment/myapp myapp=myapp:latest

This minikube docker-env trick is useful - it builds images directly into the cluster's Docker daemon, so there's nothing to push. Two caveats: the deployment's imagePullPolicy needs to be IfNotPresent or Never (otherwise Kubernetes still tries to pull from a registry), and if the tag doesn't change, run kubectl rollout restart deployment/myapp to pick up the new build.

Port Forwarding That Actually Works:

## Quick access to services
minikube service myservice --url

## Traditional kubectl still works
kubectl port-forward svc/myservice 8080:80

Registry Addon for Local Images:

minikube addons enable registry
## Registry available at localhost:5000
## Push/pull like any other registry
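
Pushing through it usually needs a port-forward first, depending on your driver (the image name is just an example):

kubectl port-forward -n kube-system service/registry 5000:80 &
docker build -t localhost:5000/myapp:dev .
docker push localhost:5000/myapp:dev
## Then reference localhost:5000/myapp:dev in your deployment spec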

Performance Reality Check

They say you can run 100+ pods. Sure, if your definition of "running" is "using RAM and doing nothing."

Real performance:

  • Driver matters: Docker > VMware > VirtualBox
  • Need RAM: 8GB minimum, 16GB to stay sane
  • Workload type: 20 nginx pods ≠ 20 Postgres pods

Started getting slow around 15-20 pods with real workloads. By the time we hit 25 or so, kubectl commands started timing out.

We tried running like 40 microservices for integration tests once. Build times went from maybe 10 minutes to forever. Switched to Kind and it was way faster, though I don't remember the exact times.

Bottom line: Minikube is great for learning and development, terrible for load testing. Use k6 or JMeter against real clusters for performance work.

Frequently Asked Questions

Q

What are the minimum system requirements for Minikube?

A

Docs say 2 CPUs and 2GB RAM. That'll work, but it's painful. On a 2GB machine, I've waited forever for minikube start. For real development, you want 4 CPUs and 4GB RAM.

You also need a virtualization platform: Docker Desktop, VirtualBox, VMware, or a hypervisor like Hyper-V. On Windows, don't enable Hyper-V and install VirtualBox at the same time; they conflict with each other.

Q

How do I fix "minikube start" hanging or failing?

A

This happens constantly. First thing to try:

minikube delete
minikube start

If that doesn't work:

  • Check if virtualization is enabled in BIOS (Intel VT-x or AMD-V)
  • Try a different driver: minikube start --driver=docker or --driver=virtualbox
  • On Windows, disable Hyper-V if you're using VirtualBox (they don't play nice)
  • Look for your error message in GitHub issues - there's usually a thread about it

For debugging:

minikube logs
minikube status  

Sometimes it just takes multiple tries. Had setups fail 3 times then work perfectly on the 4th try with no changes. No idea why.

Q

Can I run multiple Minikube clusters simultaneously?

A

Yeah, but your laptop will hate you. Use profiles: minikube start -p dev and minikube start -p staging. Each profile eats its own chunk of RAM, so running 3 clusters means 3x the resource usage. Switch contexts with minikube profile dev.

Tried running 4 profiles on a 16GB MacBook - it works but everything crawls. Better to use separate namespaces in one cluster when possible.
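
The profile workflow end to end looks like this (profile names are whatever you pick):

minikube start -p dev
minikube start -p staging
minikube profile list               ## shows all profiles and their status
kubectl config use-context staging  ## each profile gets its own kubectl context
minikube delete -p staging          ## tear one down without touching the others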

Q

How do I access my applications running in Minikube?

A

This trips up everyone starting out. Here's what actually works:

For quick testing:

minikube service myapp --url
## Opens browser or shows URL

For development:

kubectl port-forward svc/myapp 8080:80
## Access at localhost:8080

For LoadBalancer services:

minikube tunnel  # Keep this running
## Your LoadBalancer gets a real IP

For NodePort (when you're desperate):

minikube ip  # Get cluster IP
## Access at <cluster-ip>:<node-port>
Q

Why is my Minikube cluster using so much memory?

A

Minikube reserves the full memory allocation upfront, whether it's using it or not. The default 2GB gets claimed by the VM immediately.

minikube config set memory 4096  # Set to 4GB
minikube delete && minikube start  # Restart required

Check actual usage with minikube ssh then free -h. It's usually only using 500-600MB, but the full 2GB is reserved. That's how VMs work - the memory is "gone" from your system even if the guest isn't really using it all.

Q

How do I update Minikube to the latest version?

A

Download the latest binary from GitHub releases or use package managers:

  • macOS: brew upgrade minikube
  • Windows: choco upgrade minikube
  • Linux: Download and replace binary in /usr/local/bin/
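
On Linux, the download-and-replace step looks roughly like this (the URL is the standard release location; grab the right architecture from the releases page):

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
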
Q

Can I use Minikube with corporate proxy/firewall?

A

Yes, configure proxy settings:

minikube start --docker-env HTTP_PROXY=http://proxy:8080 \
               --docker-env HTTPS_PROXY=https://proxy:8080 \
               --docker-env NO_PROXY=localhost,127.0.0.1
Q

How do I persist data between Minikube restarts?

A

Use PersistentVolumes and PersistentVolumeClaims. Minikube's hostPath provisioner persists data in the VM. For development, mount host directories: minikube mount /host/path:/minikube/path.
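
In practice the mount looks like this (paths are placeholders; the command has to stay running for the mount to exist):

minikube mount $HOME/project-data:/data &
## Pods can then use /data as a hostPath volume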

Q

What's the difference between `minikube stop` and `minikube pause`?

A
  • minikube pause freezes all processes but keeps the VM running, allowing quick resume
  • minikube stop shuts down the VM completely, requiring full startup on next use
  • Pause/unpause is faster (5-10 seconds) vs stop/start (30-60 seconds)
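
In practice:

minikube pause      ## freeze workloads, keep the VM up
minikube unpause    ## back in a few seconds
minikube stop       ## shut the VM down completely
minikube start      ## full startup again
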
Q

Can I run Minikube on a remote server and access it locally?

A

Yes, but requires additional configuration:

  • Start Minikube with --apiserver-ips=<server-ip>
  • Copy kubeconfig from server to local machine
  • Update server IP in kubeconfig
  • Ensure firewall allows access to API server port (typically 8443)
Q

How do I troubleshoot "ImagePullBackOff" errors in Minikube?

A

Common causes and solutions:

  • Private registry: Configure registry credentials with registry-creds addon
  • Local images: Use minikube image load <image> or point to Minikube's Docker daemon
  • Network issues: Check connectivity and DNS resolution
  • Image name: Verify image exists and name is correct
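
For the local-image case, which is the most common one in Minikube, the fix looks like this (the image name is an example):

## Option 1: load an already-built image into the cluster
minikube image load myapp:dev

## Option 2: build straight against Minikube's Docker daemon
eval $(minikube docker-env)
docker build -t myapp:dev .

## Either way, set imagePullPolicy to IfNotPresent or Never in the pod spec
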
Q

Is Minikube suitable for production workloads?

A

No. Minikube is for development only - a single-node cluster with security settings that were never meant for production. For production, use managed services (EKS, GKE, AKS) or a real production distribution. I've seen people try to run Minikube in production; it ends badly.
Q

How do I configure Minikube to use more CPU cores?

A

Set CPU allocation:

  • During start: minikube start --cpus=4
  • Permanently: minikube config set cpus 4
  • Requires cluster restart to take effect
Q

Can I use custom Kubernetes versions with Minikube?

A

Yes, Minikube supports the latest Kubernetes release plus the 6 previous minor versions. Specify a version with minikube start --kubernetes-version=v1.34.0. Check the Minikube release notes for the exact range your Minikube version supports.

Q

How do I backup and restore a Minikube cluster?

A

Minikube doesn't provide built-in backup. Use Kubernetes-native tools:

  • Export resources: kubectl get all --all-namespaces -o yaml > backup.yaml (note that "all" skips ConfigMaps, Secrets, and CRDs, so export those separately if you need them)
  • Use velero for comprehensive backups
  • For development, version control your YAML manifests and recreate clusters as needed
