
Understanding Pod Security Admission

Pod Security Policies were a clusterfuck. If you've ever spent 3 hours debugging why your pod wouldn't start because of some obscure RBAC binding you forgot about, you know what I'm talking about. PSA fixes that mess by ditching the complex policy-to-pod mapping nightmare that made PSPs impossible to debug.

PSA has been enabled by default since Kubernetes 1.23, but it starts in "privileged" mode globally - meaning it doesn't actually enforce anything until you configure namespace labels. This is actually smart because otherwise every cluster upgrade would break half your workloads.
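
To see what that opt-in looks like, here's a minimal sketch that turns on enforcement for a single namespace (team-a is a made-up name; anything you don't label stays at the privileged default):

kubectl label namespace team-a \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted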

Why Pod Security Policies Had to Die

PSPs were deprecated in Kubernetes 1.21 and completely removed in 1.25. Good riddance. Here's why they sucked:

  • RBAC hell - You had to create ClusterRoles, RoleBindings, and ServiceAccounts just to figure out which policy applied to your pod. When it broke, good luck debugging that maze. I spent an entire Friday tracing through RBAC bindings trying to figure out why our monitoring pods stopped working after someone "cleaned up" some old ServiceAccounts.
  • Policy selection mystery - Multiple PSPs could match your pod, and Kubernetes picked one using logic that nobody could predict. I've seen production outages caused by the "wrong" PSP getting selected after an innocent namespace cleanup.
  • Cryptic errors - Error messages like unable to validate against any pod security policy told you absolutely nothing about what was actually wrong. You'd spend hours trying to figure out if it was RBAC, the policy itself, or some other random thing.
  • No testing - You couldn't test policies without potentially breaking production. Every PSP change was a "hope and pray" moment, usually followed by "oh shit, that broke the logging daemonset."

The day PSPs stopped working in our 1.25 upgrade, staging went down for 3 hours because someone missed migrating the monitoring namespace. That's when I learned PSA the hard way.

[Diagram: Pod Security Standards migration from PSP to PSA]

How PSA Actually Works

PSA is a validating admission controller built into the Kubernetes API server. Unlike PSPs that required RBAC gymnastics, PSA uses simple namespace labels to define what's allowed. That's it. No ClusterRoles, no bindings, just labels.

[Diagram: Kubernetes admission controller flow]

When you try to create a pod, PSA checks it against the namespace's security profile before the pod even gets to the scheduler. If your pod violates the policy, you get an error immediately instead of watching it fail mysteriously later.
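
The rejection comes back synchronously from the API server. Against a restricted namespace it looks roughly like this (pod and container names are made up):

Error from server (Forbidden): error when creating "pod.yaml": pods "debug-pod" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "debug" must set securityContext.allowPrivilegeEscalation=false), runAsNonRoot != true (pod or container "debug" must set securityContext.runAsNonRoot=true)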

The key difference: PSA doesn't validate existing pods. Your old shit keeps running even if it violates the new policy. This saved my ass during our migration - we could enable PSA gradually without taking down production workloads.

[Diagram: Pod Security Standards overview]

The Three Security Profiles

PSA gives you three security levels. Pick your poison:

Privileged - Anything goes. Your containers can be root, mount the host filesystem, access the host network - basically do whatever the fuck they want. Use this for system pods (kube-proxy, CNI drivers) and when you absolutely need privileged access. Most legacy apps end up here because nobody wants to fix their shit.

Baseline - Blocks the obviously dangerous stuff like privileged containers and host networking, but still allows root users. This is where most production workloads live because developers haven't figured out how to run their apps as non-root yet. It's the "good enough" security level.

Restricted - The paranoid security level. Forces non-root users, blocks privilege escalation, requires a seccomp profile, and drops all Linux capabilities (only NET_BIND_SERVICE can be added back). Your app probably won't work with this unless it was designed with security in mind from day one. Great for new cloud-native apps, nightmare for legacy stuff.
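
For reference, here's a minimal sketch of a pod that passes restricted - the image and names are placeholders, the point is which securityContext fields have to be explicit:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-ok
spec:
  securityContext:
    runAsNonRoot: true          # restricted: required at pod or container level
    seccompProfile:
      type: RuntimeDefault      # restricted: RuntimeDefault or Localhost
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image that runs as a non-root user
    securityContext:
      allowPrivilegeEscalation: false     # restricted: must be false on every container
      capabilities:
        drop: ["ALL"]                     # restricted: drop everything (only NET_BIND_SERVICE may be added back)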

Why PSA Actually Works

Here's what makes PSA better than the PSP shitshow:

  • Namespace labels are explicit - You can kubectl get namespace -o yaml and immediately see what security policy applies. No more guessing which of your 47 PSPs is actually being used.
  • Built into the API server - No more webhook timeouts taking down your deployments. PSA runs inside kube-apiserver, so it's as reliable as the rest of Kubernetes.
  • Three enforcement modes - enforce blocks stuff, audit logs violations, warn shows warnings to users. You can enable audit mode first to see what would break before you enforce anything.
  • Better error messages - Instead of cryptic PSP errors, you get messages like violates PodSecurity "restricted:latest": securityContext.runAsNonRoot != true. Still not great, but way better than before.

The downside: PSA is namespace-level only. You can't have dev and prod workloads in the same namespace with different security requirements anymore. But honestly, if you were doing that with PSPs, you were probably doing it wrong anyway.


Pod Security Admission vs Alternative Solutions

| Feature | Pod Security Admission | OPA Gatekeeper | Falco | Pod Security Policies |
|---|---|---|---|---|
| Installation | Built-in since K8s 1.23 | Requires separate deployment | Requires separate deployment | Built-in (deprecated) |
| Policy Language | Fixed security profiles | Rego language | YAML rules | Kubernetes YAML |
| Granularity | Namespace-level | Pod/resource-level | Runtime events | Pod-level with RBAC |
| Performance Impact | Minimal (built-in) | Low (admission webhook) | CPU hog that your ops team will hate | Minimal (built-in) |
| Learning Curve | Low (3 predefined profiles) | High (good luck finding Rego developers) | Medium (rule configuration) | High (RBAC complexity) |
| Policy Flexibility | Limited to 3 standard profiles | Unlimited custom policies | Event-based detection | Moderate flexibility |
| Maintenance | None (part of Kubernetes) | Requires updates and monitoring | Requires daemon management | None (deprecated) |
| Use Case | Standard pod security | Custom compliance policies | Runtime threat detection | Legacy pod security |
| Cloud Provider Support | Universal | Widely supported | Widely supported | Legacy (removed) |

How PSA Actually Works Under the Hood

PSA is a validating admission controller built into the Kubernetes API server. Unlike webhook-based admission controllers that can timeout and fuck up your deployments, PSA runs inside kube-apiserver itself, so it's as reliable as the rest of your control plane.

The Admission Process (When Your Pod Gets Judged)

When you kubectl apply a pod, here's what happens:

  1. API server gets your request - Usually from kubectl, but could be Helm, GitOps, or whatever
  2. Auth check - Are you allowed to create pods in this namespace? Basic RBAC stuff.
  3. Mutating admission - Other controllers might add sidecar containers, inject secrets, whatever
  4. PSA validation - This is where PSA looks at your pod and decides if it meets the namespace security policy
  5. Rejection or acceptance - If PSA doesn't like your pod, you get an error and nothing gets created. If it's cool, your pod gets stored in etcd.

The key thing: PSA checks your pod after mutations but before it gets stored. So if another admission controller adds a privileged sidecar, PSA will catch that and block the whole thing.

[Diagram: Pod Security Admission process flow]

Namespace Labels: The New Way to Control Security

Instead of the PSP RBAC nightmare, PSA uses simple namespace labels. That's it. Just labels on your namespace:

## What I actually use in prod (pin the version or K8s updates will screw you)
apiVersion: v1
kind: Namespace
metadata:
  name: production-workloads
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.29  # ALWAYS pin this
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.29
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: v1.29

Pro tip: Always pin the version (v1.29) or Kubernetes upgrades will change your security policies without warning. I learned this the hard way when a cluster upgrade from 1.28 to 1.29 started blocking pods that worked fine before.

The three modes save your ass during rollout:

  • enforce - Blocks violating pods (use this in prod once you're sure)
  • audit - Logs violations to the audit log (great for seeing what would break)
  • warn - Shows warnings to users but allows the pod (perfect for dev environments)

You can see which policy applies with kubectl get namespace -o yaml - no more guessing like with PSPs.
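
A couple of commands I lean on for this (production-workloads is just the namespace from the example above):

# One column per PSA mode, across every namespace
kubectl get namespaces -L pod-security.kubernetes.io/enforce,pod-security.kubernetes.io/audit,pod-security.kubernetes.io/warn

# Typical rollout: audit + warn first, flip enforce only once the noise stops
kubectl label --overwrite namespace production-workloads \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted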

Cloud Provider Reality Check

Amazon EKS - Works fine, PSA enabled by default in privileged mode. Makes sense since AWS needs their system pods to do whatever they want. EKS 1.23+ handles this properly. No surprises, no weird edge cases, just works.

Google GKE - Standard clusters work normally. Autopilot is different though - forces baseline security and you can't override it to privileged for user workloads. Great until your legacy app needs root, then you're fucked and have to refactor everything or switch to Standard.

Azure AKS - This is where it gets annoying. AKS has PSA AND Azure Policy for Kubernetes, and they can fight each other. You'll get confusing errors about which policy is blocking your pod. Microsoft's typical "let's add another layer of complexity" approach. Pick one system, not both, or you'll spend your weekend debugging policy conflicts.

[Diagram: Pod Security Standards implementation in AWS EKS]

Performance: PSA Doesn't Slow You Down

PSA is fast because it's built into kube-apiserver, not some external webhook that might be slow or unavailable:

  • Sub-millisecond validation - PSA policy checks are simple comparisons, not complex logic
  • No network calls - Unlike webhook admission controllers that can timeout and break your deployments
  • Scales with your cluster - More API server replicas = more PSA capacity
  • Cached policies - PSA caches namespace policies in memory, so it doesn't re-read labels every time

In practice, PSA adds less than 1ms to pod creation. You won't notice it unless you're creating thousands of pods per second, and if you are, PSA performance is the least of your problems.

What PSA Actually Checks

PSA examines your pod's security context and compares it against the namespace policy. The main things it looks at: privileged mode, host namespaces (hostNetwork, hostPID, hostIPC), hostPath volumes and host ports, Linux capabilities, allowPrivilegeEscalation, runAsNonRoot and runAsUser, seccomp profiles, and (for restricted) which volume types you're allowed to use.

Common gotcha: If your pod doesn't specify a security context, PSA evaluates it exactly as written - nothing fills in the required fields for you. For restricted mode you must explicitly set runAsNonRoot: true and allowPrivilegeEscalation: false (and if you set runAsUser, it can't be 0). Forgetting this gets you violates PodSecurity "restricted:latest": securityContext.runAsNonRoot != true.
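
The cheapest way to catch this before it bites is a server-side dry run, which pushes the manifest through admission (PSA included) without creating anything - pod.yaml is a placeholder:

# Runs full admission including PSA, persists nothing; violations come back as errors or warnings
kubectl apply --dry-run=server -f pod.yaml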

Debugging and Monitoring

PSA is way better than PSPs for debugging, but you still need to know where to look:

The error messages are actually readable now, unlike PSP's cryptic bullshit. You'll get messages like violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false which actually tell you what's wrong.
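
Two checks that cover most debugging sessions (namespace name is a placeholder):

# Preview what enforcing restricted would flag, including already-running pods, without changing anything
kubectl label --dry-run=server --overwrite namespace production-workloads \
  pod-security.kubernetes.io/enforce=restricted

# Pods blocked at the ReplicaSet level don't fail loudly; the PodSecurity message hides in Warning events
kubectl get events -n production-workloads --field-selector type=Warning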


Frequently Asked Questions

Q

What is the difference between Pod Security Standards and Pod Security Admission?

A

Pod Security Standards define three security profiles (Privileged, Baseline, Restricted) that specify what pod configurations are allowed. Pod Security Admission is the built-in admission controller that enforces these standards by validating pods against the profiles. Think of PSS as the policies and PSA as the enforcement mechanism.

Q

Do I need to install anything to use Pod Security Admission?

A

No installation is required. PSA has been enabled by default in Kubernetes since version 1.23. However, it starts with privileged mode globally, so you need to configure namespace labels to enforce stricter security policies. Cloud providers like EKS, GKE, and AKS include PSA in their managed Kubernetes services.

Q

Can Pod Security Admission work alongside other admission controllers?

A

Yes. PSA works alongside other admission controllers such as OPA Gatekeeper and custom validating/mutating webhooks, and with runtime tools like Falco. PSA handles standard pod security enforcement while those tools implement additional policies, compliance rules, or runtime monitoring.

Q

How does Pod Security Admission affect existing workloads?

A

Your existing pods keep running - PSA doesn't kill running workloads when you enable it.

But here's the gotcha: when those pods restart (node drain, rolling update, crash), they'll get blocked if they violate the new security policy. I've seen teams enable PSA enforcement on Friday afternoon and come back Monday to half their apps down because pods couldn't restart over the weekend. Always test with audit mode first - kubectl get events --field-selector type=Warning shows you what would break.

Q

What happens to Pod Security Policies when I upgrade to Kubernetes 1.25+?

A

PSPs just disappear. Poof. Gone. In 1.25 the PodSecurityPolicy API was removed from Kubernetes entirely, so PSP manifests stop applying (you'll hit errors like no matches for kind "PodSecurityPolicy"), and whatever enforcement or defaulting your PSPs provided silently stops happening. This caught a lot of teams off guard. Have your PSA migration plan ready BEFORE you upgrade, or you'll be firefighting broken deployments.
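
If you haven't upgraded yet, take a quick inventory first - this only works while the PodSecurityPolicy API still exists (pre-1.25 clusters):

# Lists the PSPs you'll need to map onto PSA profiles before upgrading
kubectl get podsecuritypolicies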

Q

Can I use different security levels for different applications in the same namespace?

A

Nope. PSA is namespace-level only. Every pod in the namespace gets the same security policy. This forces you to organize your workloads properly - no more mixing dev and prod workloads in the same namespace with different security requirements. If you need fine-grained control, either create more namespaces or add OPA Gatekeeper for custom policies. Most teams end up with many more namespaces after implementing PSA.

Q

How do I know which security profile my applications need?

A

Enable audit mode with the restricted profile first and check your audit logs for violations. You'll quickly see what breaks. Legacy apps usually fail restricted mode because they run as root or need write access to the root filesystem. If you see errors like securityContext.runAsNonRoot != true, you know your app needs to be fixed or moved to baseline. Start restrictive and work your way down until your apps actually run.
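
Where those violations actually land depends on how your API server audit logging is configured; assuming a file backend at a typical (made-up) path, something like this works:

# PSA stamps violations into audit events as pod-security.kubernetes.io/audit-violations annotations
grep 'pod-security.kubernetes.io/audit-violations' /var/log/kubernetes/kube-apiserver-audit.log | tail -n 20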

Q

Does Pod Security Admission impact cluster performance?

A

PSA has minimal performance impact since it's a built-in admission controller with no external dependencies. Validation typically adds less than 1 millisecond to pod creation latency. Unlike webhook-based admission controllers, PSA doesn't introduce network calls or external service dependencies that could affect cluster stability.

Q

Can I just disable this shit when it's blocking my deployment?

A

You can set the namespace to pod-security.kubernetes.io/enforce: privileged to turn off enforcement temporarily. But don't make this your permanent solution. The security restrictions exist because production incidents suck worse than fixing your YAML. If you're constantly hitting PSA violations, your apps probably have bigger security problems anyway.
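
The escape hatch looks like this (my-app is a placeholder namespace) - just remember to put the real policy back:

kubectl label --overwrite namespace my-app pod-security.kubernetes.io/enforce=privileged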

Q

What cloud providers support Pod Security Admission?

A

All the big cloud providers have PSA working, though each one has its own special way of making it complicated. AWS EKS (1.23+) keeps it simple - PSA works like vanilla Kubernetes. Google GKE Standard works fine, but Autopilot forces baseline mode and won't let you go privileged for user workloads (learned this when trying to run an old monitoring stack). Azure AKS: where Microsoft took a simple concept and added 3 layers of complexity because why make it easy? They have PSA AND Azure Policy fighting each other, so you'll get confusing errors about which system is blocking your pods.

Q

Why the hell do my pods keep getting rejected?

A

Check if you pinned the version wrong: pod-security.kubernetes.io/enforce-version: v1.29. If that's missing or wrong, PSA uses different rules than you expect. Also, both your pod AND containers need security contexts for restricted mode. And don't forget init containers and ephemeral containers - they need to follow the policy too. The error messages are better than PSP's garbage, but they're still cryptic half the time. violates PodSecurity "restricted:latest": securityContext.runAsNonRoot != true means you forgot to set runAsNonRoot: true in your security context.

Q

Is Pod Security Admission suitable for regulatory compliance?

A

PSA handles the real security problems; for auditor checkbox-checking you'll need OPA Gatekeeper and a lot of patience. The three standard profiles close the obvious security holes, but compliance frameworks love specific requirements that PSA doesn't know about, so most teams end up adding Gatekeeper for the compliance theater while PSA does the actual security work.

Q

Can I create custom security profiles beyond the three standard ones?

A

No, PSA only supports the three predefined profiles (Privileged, Baseline, Restricted) as defined by the Pod Security Standards. If you need custom security policies beyond these profiles, consider using OPA Gatekeeper for additional validation or implementing custom admission webhooks alongside PSA.

Q

How does Pod Security Admission handle init containers and ephemeral containers?

A

PSA validates all container types within a pod, including init containers, ephemeral containers, and sidecar containers. All containers must comply with the namespace's security profile. This means that if your main application container is compliant but an init container violates the policy, the entire pod will be rejected.
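
A quick sketch of that failure mode, with placeholder names: the app container is compliant, but the init container carries no securityContext, so the whole pod gets rejected under restricted:

apiVersion: v1
kind: Pod
metadata:
  name: init-gotcha
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  initContainers:
  - name: setup
    image: registry.example.com/setup:1.0   # placeholder
    # no securityContext: allowPrivilegeEscalation and capabilities are missing,
    # so PSA rejects the whole pod even though the main container below is fine
  containers:
  - name: app
    image: registry.example.com/app:1.0     # placeholder
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]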

Q

What's the migration path from Pod Security Policies to Pod Security Admission?

A

Follow the official migration guide but here's what they don't tell you: start with audit mode in ONE namespace first, not your entire cluster. I watched a team enable PSA audit globally and get flooded with thousands of violation logs from 47 different PSPs they'd forgotten existed. Check your audit logs for a week, fix the obvious violations, then enable warn mode so users see what's coming. Only then enable enforcement, and do it namespace by namespace. Don't touch PSPs until PSA is rock solid - I've seen teams accidentally delete their PSPs during "cleanup" and lose all security for 6 hours until they figured out what happened.
