Understanding Pod Security Admission

Pod Security Policies got nuked in Kubernetes 1.25. Pod Security Admission is the replacement that actually works... most of the time.

PSA enforces security standards at the namespace level using simple labels. Unlike PSPs, which required you to have a fucking PhD in RBAC to understand what was happening, PSA runs as a built-in admission controller. This means PSA doesn't rely on those godawful webhooks that would randomly decide to take a shit during your Friday deployments.

PSA shipped as alpha in Kubernetes 1.22, has been enabled by default since 1.23, and went stable in 1.25 - so if your cluster is anywhere close to current, it's there. Whether it's actually doing anything useful or just lurking in the background waiting to ruin your Friday deployment is anyone's guess.

Pod Security Standards Levels

PSA implements three security levels with increasing restrictions:

Privileged: No restrictions whatsoever. Your pods can do whatever the hell they want - mount the host, run as root, execute random binaries. Use this for system workloads and those ancient legacy apps that were built back when security wasn't even a thought.

Baseline: Blocks the really obvious security disasters like privileged containers and host networking. Most semi-modern apps can probably run under baseline without you wanting to throw your laptop out the window.

Restricted: Full paranoid mode. Pods must run as non-root, drop every capability, explicitly disable privilege escalation, and use a RuntimeDefault seccomp profile. Good luck getting anything from before 2021 to work with this - you'll be debugging security contexts until the heat death of the universe.

PSA Enforcement Modes

PSA supports three enforcement modes that can be applied independently to each namespace, which sounds flexible until you realize it's just three different ways for things to break:

  • enforce: Actually rejects pods that violate the security standard. Failed deployments supposedly return "clear" error messages (spoiler: they don't, you'll still be scratching your head wondering wtf went wrong).
  • audit: Lets pods run but creates logs that nobody reads. Great for compliance checkboxes.
  • warn: Shows warnings to users who will promptly ignore them and deploy anyway.

You're supposed to start with audit and warn modes first to see what breaks, then enable enforcement once you've "fixed the obvious violations." Ha. What actually happens is you enable audit mode, get flooded with about 847 violations, realize your entire infrastructure was built on the assumption that everything runs as root, and then quietly go back to privileged mode.

Most teams enable audit mode at their target security level, spend about 3 days trying to make sense of the thousands of violations, give up, and just exempt everything. I may or may not have done this myself... multiple times.
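
Before you flip enforce on anywhere, there's one trick worth knowing: a server-side dry-run of the label tells you which existing pods would violate a level without changing anything. A minimal check (the --all flag hits every namespace, so expect a wall of warnings):

## Preview what restricted would complain about, without actually enforcing it
kubectl label --dry-run=server --overwrite namespace --all \
  pod-security.kubernetes.io/enforce=restricted

The API server returns a warning per namespace listing the pods that would get rejected, which is a much faster damage report than wading through audit logs.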

Why Pod Security Policies Were Garbage

PSPs had so many fucking design issues that even the Kubernetes maintainers eventually said "fuck it, we're starting over":

  • The RBAC relationships were so convoluted that figuring out which policy applied to which pod was like solving a Rubik's cube blindfolded
  • Policy selection logic was about as transparent as a brick wall - pods would randomly get different policies and nobody knew why
  • Every single policy needed its own special YAML file and a matching RBAC binding, because apparently complexity is a feature
  • Error messages were written by someone who clearly hated users and wanted them to suffer

PSA fixes this clusterfuck by using explicit namespace labels - no more guessing games about which policy applies. But here's the catch: PSA trades PSP's fine-grained per-pod control for simpler namespace-level policies. Works fine for most people, but if you need different security settings for each pod in the same namespace, well... you're basically fucked.

PSA is easier to configure than PSPs, but that doesn't mean it's easy to implement. The namespace-level enforcement completely fucks over applications that need mixed security requirements in the same namespace, which is basically every real-world application.

How PSA Plays with Other Kubernetes Security

PSA works with other Kubernetes security stuff like security contexts, network policies, and RBAC. It validates pods early in the admission process, so you get immediate feedback when shit breaks instead of finding out later when your app mysteriously crashes.

PSA error messages are supposedly clearer than PSP's were. When a pod gets rejected, the error is supposed to tell you which security control failed. "Supposed to" being the key phrase here - sometimes the errors are still cryptic as hell and about as helpful as a chocolate teapot.

I spent 6 fucking hours debugging why Prometheus node-exporter wouldn't start only to discover it needed to mount /proc from the host. The error message? "violates PodSecurity restricted:v1.29" - thanks for nothing, Kubernetes. Real helpful. I could have gotten more useful information from a Magic 8-Ball.

Here's some free advice: start with privileged mode everywhere and gradually tighten restrictions. Don't be like me and jump straight to restricted mode thinking you're some kind of security hero. I made that mistake once and spent the entire next Monday fixing 47 different apps that suddenly couldn't write to temp directories anymore. My coffee got cold about 15 times that day.

Actually Implementing PSA (And What Goes Wrong)

Is PSA Even Enabled? Let's Find Out The Hard Way

PSA supposedly comes enabled by default in Kubernetes 1.23+, but "enabled by default" in Kubernetes-land is about as reliable as a promise from a politician. Half the time it's there but not doing anything, and the other half it's secretly breaking your shit. Let me save you some time - here's how to check if you actually have it:

## This command will probably return nothing useful
kubectl get pods -n kube-system -l component=kube-apiserver -o yaml | grep PodSecurity

If that doesn't work (and it probably won't because nothing in Kubernetes is ever simple), try creating a privileged pod in a restricted namespace. If it fails, PSA is working. If it succeeds, PSA is either disabled, misconfigured, or just silently ignoring you because why the fuck not.
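
If you'd rather test it directly, here's the dumb-but-reliable version of that check: a throwaway namespace with restricted enforcement should reject a plain busybox pod, since it has none of the required securityContext fields. The namespace and pod names here are just placeholders:

## Make a throwaway namespace with restricted enforcement
kubectl create namespace psa-check
kubectl label namespace psa-check pod-security.kubernetes.io/enforce=restricted

## A bare pod has no hardened securityContext, so a working PSA setup rejects it on the spot
kubectl run psa-probe --image=busybox --restart=Never -n psa-check -- sleep 3600

## Clean up either way
kubectl delete namespace psa-check

If the pod gets created anyway, PSA isn't enforcing anything on your cluster.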

Cloud provider reality check (aka why managed Kubernetes isn't actually easier):

  • EKS: Supposedly enabled in 1.23+, but this is AWS we're talking about - they somehow managed to make even the simple stuff complicated. At least it works most of the time.
  • GKE: Works fine unless you're using Autopilot, in which case it becomes this weird mystery box where Google decides what's secure for you. Thanks Google, I definitely needed someone else making decisions about my cluster.
  • AKS: Enabled by default and immediately starts fighting with Azure Policy like two drunk guys arguing about who's going to break your deployments first. Microsoft managed to create two competing security systems that hate each other.

The Namespace Label Dance

PSA uses namespace labels, which sounds simple until you actually try to implement it and discover all the ways it can fuck you over. Here's what the docs conveniently forget to mention:

apiVersion: v1
kind: Namespace
metadata:
  name: production-workloads
  labels:
    # This will break your legacy apps immediately
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.29

    # This creates logs nobody reads
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: v1.29

    # This creates warnings developers ignore
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: v1.29

Gotcha #1: If your namespace name has special characters, PSA might completely shit the bed in mysterious ways. I learned this one the hard way with a namespace called "test-app_v2" that worked fine for months until I tried to enable PSA and it just... didn't. Spent 2 hours debugging before I tried renaming the namespace. Apparently underscores are the devil.

Gotcha #2: Version pinning is supposed to prevent surprises during cluster upgrades. In practice, you'll forget to update the versions for like 6 months, then during a Tuesday morning cluster upgrade to 1.28, you'll get mysterious errors about "unsupported PSA version" and spend 3 hours wondering why the fuck your monitoring pods won't start. Pro tip: that's when you discover you pinned everything to v1.24 and forgot to update it.

Version Pinning (Because Kubernetes Loves Breaking Changes)

Always pin your versions unless you enjoy surprises during cluster upgrades:

## Pin everything or suffer the consequences
pod-security.kubernetes.io/enforce-version: v1.29
pod-security.kubernetes.io/audit-version: v1.29
pod-security.kubernetes.io/warn-version: v1.29

What they don't tell you in the docs: if you don't pin versions, PSA defaults to "latest", which tracks whatever version your API server happens to be running. That means a cluster upgrade can suddenly start enforcing new policy checks that nobody tested, and it's guaranteed to break something important at 3 AM on a Friday when you're already three beers deep into your weekend. Trust me on this one.
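
When you finally do remember to bump the pins, don't do it namespace by namespace by hand. A rough sketch of a bulk update, assuming every PSA-labeled namespace should move to the same version (adjust the target to whatever you're upgrading to):

## Bump pinned PSA versions on every namespace that has an enforce label
for ns in $(kubectl get namespaces -l pod-security.kubernetes.io/enforce -o name); do
  kubectl label "$ns" --overwrite \
    pod-security.kubernetes.io/enforce-version=v1.29 \
    pod-security.kubernetes.io/audit-version=v1.29 \
    pod-security.kubernetes.io/warn-version=v1.29
done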

Cluster-Wide Configuration (For Masochists)

You can set cluster-wide defaults, which sounds like a brilliant idea until you realize it applies to literally fucking everything and you can't take it back easily:

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: baseline  # Don't go straight to restricted
      enforce-version: v1.29
      audit: restricted
      audit-version: v1.29
      warn: restricted
      warn-version: v1.29
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system, kube-public, kube-node-lease]

WARNING: This configuration file goes in /etc/kubernetes/ on your control plane nodes. If you fuck up the YAML syntax even slightly, your entire cluster won't start and you'll be explaining to management why the production cluster is down because of a missing comma. Seriously, have backups and test this shit in dev first. I've seen people brick clusters with bad admission controller configs and it's not fun for anyone.
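
The config file also does nothing on its own - the API server has to be pointed at it with --admission-control-config-file. A hedged sketch of the wiring on a kubeadm-style setup, where kube-apiserver runs as a static pod and /etc/kubernetes/psa-config.yaml is a path I made up:

## Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm-style static pod)
spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/psa-config.yaml
    # ...existing flags stay as they are...
    volumeMounts:
    - name: psa-config
      mountPath: /etc/kubernetes/psa-config.yaml
      readOnly: true
  volumes:
  - name: psa-config
    hostPath:
      path: /etc/kubernetes/psa-config.yaml
      type: File

Managed control planes (EKS, GKE, AKS) don't let you touch any of this, so there you're stuck with namespace labels.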

The Exemption List (Your Escape Hatch)

Some namespaces need to be exempt because they run system-level stuff that needs to break all the rules:

exemptions:
  namespaces:
    - kube-system          # Obviously
    - kube-public          # Ditto
    - kube-node-lease      # Don't touch this
    - istio-system         # Istio does what it wants
    - cert-manager         # Needs privileges for DNS challenges
    - monitoring           # Prometheus is basically root anyway
    - gitlab-runner        # CI/CD needs to do sketchy things

You'll discover more namespaces that need exemptions as things break. Keep a list.
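
If you can't edit the API server config at all (managed control planes again), the closest you get to an exemption list is labeling those namespaces privileged yourself. A quick sketch - the namespace list is just an example:

## Poor man's exemption list when you can't touch AdmissionConfiguration
for ns in istio-system cert-manager monitoring gitlab-runner; do
  kubectl label namespace "$ns" --overwrite pod-security.kubernetes.io/enforce=privileged
done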

The "Gradual Rollout" Lie (aka Why Documentation is Fiction)

The docs recommend this nice, neat phased approach that looks great in powerpoint slides. Here's what actually happens in the real world:

Phase 1 (Discovery):

kubectl label namespace production pod-security.kubernetes.io/audit=restricted

You'll get approximately 4,847 audit violations in the first hour. Nobody has time to read them all, and half of them will be duplicates anyway.
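
If you actually want to read those violations instead of counting them, the useful detail lands in API server audit log annotations, not in events. A sketch that assumes file-based audit logging is enabled and writes to /var/log/kubernetes/audit/audit.log (your path and audit policy will differ):

## Pull namespace, pod, and the reason out of PSA audit annotations
jq 'select(.annotations["pod-security.kubernetes.io/audit-violations"] != null)
    | {ns: .objectRef.namespace, name: .objectRef.name,
       why: .annotations["pod-security.kubernetes.io/audit-violations"]}' \
  /var/log/kubernetes/audit/audit.log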

Phase 2 (User Education):

kubectl label namespace production pod-security.kubernetes.io/warn=restricted

Developers see warnings and ignore them. Nothing changes.

Phase 3 (Enforcement):

kubectl label namespace production pod-security.kubernetes.io/enforce=restricted

Everything breaks. You spend the next week debugging why your monitoring stack can't start.

Phase 4 (Reality):

kubectl label --overwrite namespace production pod-security.kubernetes.io/enforce=privileged

You temporarily disable everything to get deployments working again.

Monitoring PSA (Or Trying To)

PSA generates events and logs that are supposed to help you track compliance:

## This returns way too much information
kubectl get events --field-selector reason=FailedCreate

## This might actually be useful
kubectl describe pod your-broken-pod

The error messages are cryptic at best. "violates PodSecurity restricted:v1.29" tells you absolutely fucking nothing about what specifically is wrong. You'll end up reading the pod spec line by line like you're looking for buried treasure, muttering "what the hell could it possibly be mad about now?" until you find that one random security context setting buried 30 lines deep.
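
One thing that does help: a server-side dry-run replays your manifest through admission without creating anything, so you get the enforce rejection or warn-mode warnings up front instead of mid-rollout. The filename here is whatever your deployment manifest actually is:

## Run the manifest through admission (including PSA) without creating anything
kubectl apply -f your-deployment.yaml --dry-run=server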

What Actually Breaks

Here's what will definitely fail when you enable restricted mode:

  1. Anything running as root (which is most legacy apps)
  2. Pods without security contexts (basically everything from before 2020)
  3. Init containers that need privileges (half your Helm charts)
  4. Monitoring agents (they need host access)
  5. CI/CD runners (they need to do privileged builds)
  6. Legacy Java apps (they shit themselves without write access to temp directories)
  7. Database containers (MySQL/PostgreSQL containers expect to create files as root)

Plan for at least 2-3 weeks of fixing broken deployments after enabling enforcement, and that's if you're lucky and don't have too much legacy garbage. I once spent 4 fucking hours debugging why Prometheus couldn't start, only to discover it needed write access to /tmp. The audit logs? They just told me "violates PodSecurity" - real helpful, thanks for nothing Kubernetes. I could have diagnosed the problem faster by reading tea leaves.
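
For the temp-directory problem specifically, the fix that usually works is keeping the locked-down filesystem but mounting an emptyDir wherever the app insists on writing. A hedged sketch of a restricted-compliant pod; the name, image, and UID are placeholders you'd swap for your own:

## Restricted-compliant pod that still gets a writable /tmp via emptyDir
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: your-legacy-app:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}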

Common Pod Security Admission Questions

Q: Why are my pods being rejected by PSA?

A: PSA enforces security constraints that many applications weren't designed to meet. Common violations include:

  • Running containers as root user (UID 0)
  • Missing or incomplete security contexts
  • Using privileged containers for system access
  • Mounting host directories or using restricted volume types
  • Allowing privilege escalation (enabled by default in many container runtimes)

Fix this by adding proper security contexts to your pod specs, or just say "fuck it" and set the namespace to privileged mode if your app legitimately needs root access and you don't have time to rewrite the entire application stack.

Q: How do I fix CI/CD pipeline issues with PSA?

A: CI/CD systems are fundamentally incompatible with PSA because they need to do sketchy shit like mounting Docker sockets and running privileged builds. PSA takes one look at this and goes "absolutely fucking not."

Quick fix (what everyone actually does because we have deadlines):

kubectl label namespace ci-system pod-security.kubernetes.io/enforce=privileged

"Proper" longer-term approaches (that nobody actually implements because life is short):

  • Using rootless build tools like kaniko or buildah (good luck getting them to work correctly on your first try... or your tenth) - rough sketch after this list
  • Running builds in dedicated privileged namespaces (defeats the entire fucking purpose of PSA but whatever)
  • Remote build services that don't run in the cluster (expensive, complicated, and your networking team will hate you)
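
For the kaniko route, here's roughly what an in-cluster build pod looks like; the repo and registry are placeholders, and note that kaniko still runs as root inside its own container, so plan on baseline (not restricted) for the build namespace:

## Kaniko build pod: no privileged flag, no Docker socket
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/your-org/your-app.git
    - --destination=registry.example.com/your-app:latest
    securityContext:
      privileged: false    # the whole point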

Q: How do I temporarily disable PSA enforcement?

A: When you need to quickly restore service during an outage:

## Set namespace to privileged mode
kubectl label --overwrite namespace your-namespace pod-security.kubernetes.io/enforce=privileged

## Or remove enforcement labels entirely
kubectl label namespace your-namespace pod-security.kubernetes.io/enforce-

This fixes your immediate problem but you'll probably forget to re-enable security later. Most people document "temporary" privileged namespaces and then they stay that way forever.

Q: Why do monitoring tools fail with PSA enabled?

A: Monitoring agents like Prometheus node exporter, Datadog agents, and log collectors need to read host metrics, mount /proc, and generally do all the things that PSA considers evil. PSA takes one look at your monitoring stack and says "fuck no" to basically everything.

How people actually fix this in the real world:

  1. Dedicated monitoring namespace: Exempt the monitoring namespace and call it a day (what 95% of people do)
  2. Baseline security level: Try baseline and hope it doesn't break your monitoring (spoiler: it probably will anyway)
  3. eBPF-based tools: Fancy new monitoring that doesn't need root access (if you can afford the licensing and have 6 months to migrate everything)
  4. Remote monitoring: Use external SaaS services (expensive but works, until your CFO sees the bill)

Most people just exempt the monitoring namespace because debugging why node-exporter can't read /proc/stats at 2 AM on a Saturday while your on-call alerts are going off is not how anyone wants to spend their weekend. I spent 4 hours on this exact issue once - turns out node-exporter needs hostNetwork: true and hostPID: true to function, which PSA restricted mode blocks faster than you can say "incident response."

kubectl label namespace monitoring pod-security.kubernetes.io/enforce=privileged

Q: Can I just ignore the warnings and deploy anyway?

A: Yes, warnings don't block deployments. But ignoring them is like ignoring the check engine light in your car - everything works fine until it suddenly doesn't.

The warn mode is basically PSA's way of saying "this is stupid but I'll allow it." Eventually you'll need to fix it or accept that your security posture is garbage.

Q: How do I debug "violates PodSecurity" errors?

A: The error messages are intentionally vague because Kubernetes hates you. Here's what actually helps:

## Get the full error message (still won't help much)
kubectl describe pod your-broken-pod

## Check what security level is enforced
kubectl get namespace your-namespace -o yaml | grep pod-security

## Compare against what your pod is trying to do
kubectl get pod your-broken-pod -o yaml | grep -A 20 securityContext

Most common violations:

  • runAsUser: 0 (running as root) - Error: "spec.securityContext.runAsUser: 0"
  • Missing runAsNonRoot: true - Error: "spec.securityContext.runAsNonRoot: false"
  • privileged: true anywhere - Error: "spec.containers[0].securityContext.privileged: true"
  • Capabilities that PSA doesn't like - Error: "spec.containers[0].securityContext.capabilities.add[0]: SYS_ADMIN"

Pro tip: The pod probably runs as root. It's always running as root.
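
A one-liner that saves some scrolling: it dumps the pod-level securityContext plus each container's, which is usually enough to spot the root user or the missing runAsNonRoot (pod name is a placeholder):

## Pod-level securityContext, then one line per container
kubectl get pod your-broken-pod -o jsonpath='{.spec.securityContext}{"\n"}{range .spec.containers[*]}{.name}{": "}{.securityContext}{"\n"}{end}'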

Q: Does this actually make my cluster more secure?

A: PSA prevents some obvious security mistakes, but it's mostly security theater. If your main concern is "developers accidentally running privileged containers," then yes, it helps.

If your threat model includes "sophisticated attackers who have already gained access to your cluster," then PSA is about as useful as a screen door on a submarine.

Real security comes from proper RBAC, network policies, and not running random containers from the internet.

Q: What's the difference between this and the old Pod Security Policies?

A: PSPs were a nightmare to configure and debug. PSA is much simpler but also less flexible.

PSPs: Required PhD in Kubernetes RBAC to understand which policy applied to which pod. Debugging took hours.

PSA: Uses simple namespace labels. When it breaks, you know immediately what's wrong.

Trade-off: PSPs could do complex per-pod rules. PSA is one-size-fits-all per namespace.

Q: How long does it take to fix all the violations?

A: Plan for 2-6 months if you have any legacy applications, and that's being optimistic. Here's what actually happens in the real world:

  • Week 1: Discover that literally everything violates restricted policies because your entire infrastructure was built in 2018
  • Week 2: Fix the easy stuff (those 3 new microservices that actually have proper security contexts)
  • Week 3-8: Fight tooth and nail with legacy applications and third-party Helm charts that were written by people who thought security was optional
  • Week 9: Give up, exempt 80% of your namespaces, and start drinking heavily
  • Week 10: Declare victory with baseline enforcement on 3 namespaces and hope management doesn't ask too many questions

The audit phase will show you 500+ violations on day one. I tried to fix them all once - biggest mistake of my career. Spent 3 months debugging security contexts and ended up with more gray hair. Just fix the critical apps and exempt the rest. Life's too short to debug why a 5-year-old Java app can't write to /tmp, and your sanity is worth more than perfect security compliance.

Q: Can I use PSA with service meshes like Istio?

A: Istio does whatever it wants and ignores most security policies. Put Istio in its own exempt namespace and don't ask questions:

kubectl label namespace istio-system pod-security.kubernetes.io/enforce=privileged

Same goes for most CNI plugins, ingress controllers, and anything else that considers itself "infrastructure."

PSA Security Level Comparison

Security Control | Privileged | Baseline | Restricted | Notes
--- | --- | --- | --- | ---
Privileged Containers | ✅ Allowed | ❌ Blocked | ❌ Blocked | Common in CI/CD systems
Host Network/PID/IPC | ✅ Allowed | ❌ Blocked | ❌ Blocked | Required by some service meshes
Host Path Volumes | ✅ Allowed | ❌ Blocked | ❌ Blocked | Monitoring tools often need this
Root User (UID 0) | ✅ Allowed | ✅ Allowed | ❌ Blocked | Many legacy images default to root
Privilege Escalation | ✅ Allowed | ✅ Allowed | ❌ Blocked | Container runtime default
Volume Types | ✅ All types | ⚠️ Limited | ⚠️ Very limited | Check specific volume requirements
Security Context | ⚪ Optional | ⚪ Optional | ✅ Required | Often omitted in older manifests

Migration from PSPs: Welcome to Hell

The Hard Truth About PSP to PSA Migration

Pod Security Policies got fucking nuked in Kubernetes 1.25, so if you're upgrading past 1.24, you don't have a choice in the matter. You're migrating whether you like it or not, and trust me, you won't like it.

The official migration guide makes it sound like this gentle, smooth transition. It's not. It's more like switching from Windows to Linux while blindfolded and on fire - everything you thought you knew is wrong and half your shit won't work.

Pre-Migration: Discovering the Damage

First, figure out what PSPs you actually have and whether they're doing anything:

## See what PSPs exist (probably none)
kubectl get psp

## Find out which pods actually use PSPs
kubectl get pods --all-namespaces -o custom-columns="NAMESPACE:.metadata.namespace,NAME:.metadata.name,PSP:.metadata.annotations.kubernetes\.io/psp"

Reality check: Most organizations discover they have PSPs configured but nothing is actually fucking using them. Turns out those years of PSP configuration were just elaborate security theater - all show, no substance.

If you do have active PSPs, mapping them to PSA levels is "fun":

  • Your most restrictive PSP: Probably maps to baseline, not restricted (restricted is stricter than you think)
  • Your "standard" PSP: Maps to privileged
  • Your permissive PSP: Also maps to privileged

The Migration Process (And What Actually Happens)

Enable Audit Mode First

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.29

What actually happens: You get about 10,000 audit violations in the first 30 minutes and realize every single goddamn pod in your cluster violates "restricted" policies. This is normal and expected, but it still feels like getting punched in the face.

Try to Analyze the Damage

## This returns way too much shit to be useful, and only catches enforce-mode rejections anyway
kubectl get events --all-namespaces --field-selector reason=FailedCreate

## Audit-mode violations only land in the API server audit log (if you can even find it)
journalctl -u kube-apiserver | grep "pod-security"

What you'll discover (spoiler alert, it's all bad news):

  • Your entire monitoring stack runs as root because monitoring is basically spyware
  • Every single init container needs privileges to do god knows what
  • Half your Helm charts were written in 2018 by people who thought security was someone else's problem and just assume everything runs as root
  • Your CI/CD pipeline is basically a privilege escalation playground that would make hackers weep with joy

Attempt Some Quick Fixes

You'll try to fix a few pods to be compliant:

## Before: Works
spec:
  containers:
  - name: app
    image: your-legacy-app:latest

## After: Passes restricted, but now the app can't write to disk
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: your-legacy-app:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL

Result: Your app breaks because it can't write logs, create temp files, or do any of the normal things apps do. Check the security context docs for what actually works (spoiler: it's complicated).

The Great Exemption

After a week of fighting with security contexts, you give up and exempt everything:

## The nuclear option
kubectl label --overwrite namespace production pod-security.kubernetes.io/enforce=privileged
kubectl label --overwrite namespace staging pod-security.kubernetes.io/enforce=privileged
kubectl label --overwrite namespace development pod-security.kubernetes.io/enforce=privileged

PSP Cleanup (Finally Something Easy)

## This probably fails because you don't have cluster admin
kubectl delete psp --all

## This might actually work - kubectl doesn't do wildcards, so list and filter instead
kubectl get clusterroles,clusterrolebindings -o name | grep psp | xargs kubectl delete

Migration Challenges (The Real List)

Everything Runs as Root: Legacy applications, third-party containers, basically anything from before 2020 when security was just a vague suggestion. Fixing this requires rebuilding containers from scratch or just accepting that you'll have privileged namespaces everywhere.

Monitoring Agents Are Privileged by Design: Node exporters, log collectors, security scanners - they all need host access to spy on your system. Put them in exempt namespaces and move on with your life.

Init Containers: Every single fucking Helm chart has init containers that need to chown files or set permissions because apparently container images can't be built properly. These will all break under restricted policies. I spent an entire afternoon debugging why our PostgreSQL database wouldn't start after enabling baseline PSA - turns out the init container was trying to chown /var/lib/postgresql/data as user 0, which baseline blocks. The error? "violates PodSecurity baseline:v1.28: forbidden syscalls" - thanks for nothing, Postgres Helm chart. Four hours of my life I'll never get back tracing through initdb scripts.
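
One hedged workaround for those chown-style init containers: let the kubelet set volume ownership with fsGroup and run the main container as the right UID, instead of running an init container as root. The sketch below assumes the Debian-based postgres image's UID 999 and a made-up PVC name; real Helm charts expose this through their own securityContext values:

## Let fsGroup handle data-dir ownership instead of a root init container
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 999       # postgres UID in the official Debian-based image; adjust to yours
    fsGroup: 999         # volume files get this group at mount time
  containers:
  - name: postgres
    image: postgres:16
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres-data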

Volume Mounts: Applications expect to write to /tmp, /var/log, and other paths. With readOnlyRootFilesystem: true, nothing works. Check the volume documentation for alternatives.

CI/CD is Fundamentally Incompatible: Docker-in-Docker, mounting Docker sockets, running privileged builds - CI/CD systems violate every security principle PSA enforces. Consider buildpacks or Tekton for more secure builds.

Troubleshooting: When Everything Breaks

Decoding PSA Error Messages

## The error message is useless
kubectl describe pod broken-app
## \"violates PodSecurity restricted:v1.29: forbidden syscalls\"

## What you actually need to check
kubectl get pod broken-app -o yaml | grep -A 50 securityContext

Common violations and their actual meanings:

  • \"securityContext.runAsUser: 0\" = Your app runs as root, which is forbidden
  • \"allowPrivilegeEscalation: true\" = Docker's default, blocked in restricted mode
  • \"capabilities.add: [SYS_ADMIN]\" = Your app wants godmode privileges
  • \"volumes: hostPath\" = Your app wants to mount host directories

Testing PSA Configuration

## Create a throwaway namespace for testing
kubectl create namespace psa-test
kubectl label namespace psa-test pod-security.kubernetes.io/enforce=restricted

## Try to deploy something simple
kubectl run test-pod --image=nginx --namespace=psa-test
## Watch it fail spectacularly

The Performance Lie

The docs claim PSA has "minimal performance impact." This is technically true - PSA itself is fast. But the troubleshooting, debugging, and constant namespace relabeling will consume weeks of engineering time.

Real performance impact:

  • Engineering velocity: -50% for first month
  • Time to deploy new apps: +300% (due to security context debugging)
  • Incident response time: +200% (due to "is this a PSA issue?" questions)

Migration Timeline: Expectations vs Reality

What the fucking docs say: 6-8 weeks for complete migration (lol)
What actually happens in the real world:

  • Week 1: Enable PSA, everything catches fire, nobody can deploy anything, all hands on deck to unfuck the situation
  • Month 2-4: Fight with security contexts like you're wrestling a bear, maybe fix 20% of violations if you're lucky and have unlimited coffee
  • Month 5-6: Give up on most apps, exempt 80% of namespaces to privileged mode, pretend this was always the plan
  • Month 7+: New applications might get baseline enforcement if developers remember and if the stars align
  • Month 12: Declare victory with 3 namespaces actually using baseline mode, update your resume

How to measure "success":

  • What management wants to hear: "All applications running in restricted mode with zero security exceptions"
  • What actually happens: "We stopped getting paged about PSA blocking deployments at 3 AM"
  • What goes in the post-mortem: "PSA migration caused 4 production outages, 17 missed deployments, and 2 engineers to question their career choices before we exempted most workloads to privileged"
