Look, I've been burned by security tools before. I spent six months with Twistlock burning through 15% CPU and alerting every time someone ran `ps aux`. Falco is different - I've been running it in production for 2+ years, and the worst performance hit I've seen is 7% CPU on a database node that was already struggling. The CNCF graduation (February 2024) gave our compliance team confidence, plus you can actually audit what it's doing since it's open source.
## What Falco Actually Does
Falco sits on your Linux systems watching syscalls through eBPF (or kernel modules if you're stuck on older kernels). When someone tries to escape a container, escalate privileges, or run a reverse shell, Falco catches it in real-time and sends you an alert that actually means something. The modern eBPF driver uses CO-RE which means it works across kernel versions without recompilation.
The key difference from other tools: Falco knows the difference between your application doing legitimate work and someone trying to pwn your system. I've seen it catch everything from cryptominers to privilege escalation attempts that other tools missed completely. Check out the detection capabilities - it's not just another log parser.
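To make that concrete, here's a sketch of a detection in Falco's YAML rule format - a simplified take on the kind of "shell in a container" rule the default ruleset ships with. The macro definitions below are illustrative, written out so the fragment is self-contained, not copied verbatim from the official rules:

```yaml
# Illustrative macros - the official ruleset defines richer versions of these.
- macro: spawned_process
  condition: evt.type in (execve, execveat) and evt.dir = <

- macro: shell_procs
  condition: proc.name in (bash, sh, zsh, ash, dash)

# Simplified rule: alert when an interactive shell starts inside a container.
- rule: Terminal Shell in Container
  desc: Detect a shell spawned with an attached terminal inside a container
  condition: spawned_process and container and shell_procs and proc.tty != 0
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name cmdline=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```

The condition language is what does the heavy lifting: Falco evaluates those filter expressions against every matching syscall event, which is how it separates your app's normal exec activity from someone popping a shell.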
## Real Production Experience
**CPU overhead:** Despite marketing claims of near-zero overhead, I've measured 1-3% CPU usage on busy nodes, and the impact scales with your syscall volume. Version 0.38.x had memory leaks under high syscall volumes - make sure you're on 0.39+ if you're monitoring chatty applications. The current stable version as of September 2025 is 0.41.x, which includes significant performance improvements and better eBPF probe reliability.
**Memory usage:** Starts around 50MB but scales with your rule complexity. I've seen it climb to 200MB+ on nodes with aggressive custom rules and verbose logging enabled. Monitor the built-in metrics if you're running tight on resources - Falco exposes Prometheus endpoints starting from 0.38.
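If you want those metrics, this is roughly the falco.yaml fragment to enable them - key names follow the 0.38+ config schema, but double-check them against the sample falco.yaml shipped with your version:

```yaml
# falco.yaml fragment - enable internal metrics (0.38+)
metrics:
  enabled: true
  interval: 1h            # how often metrics are also emitted as a Falco event
  output_rule: true
  resource_utilization_enabled: true

# expose /metrics for Prometheus scraping via the built-in webserver
webserver:
  enabled: true
  listen_port: 8765
  prometheus_metrics_enabled: true
```

Scrape `:8765/metrics` and watch the memory and event-drop counters; that's how you catch rule-induced bloat before it pages you.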
**Event volume:** Can handle thousands of events per second, but your SIEM integration will be the bottleneck. I learned this when our intern deployed a crypto miner and Falco sent 47,000 alerts in 8 minutes, completely destroying our Splunk cluster. That was a fun Monday morning. Check the event dropping documentation if you're seeing `ratelimit` errors.
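You can at least control how Falco reacts when the kernel ring buffer drops events, via the `syscall_event_drops` section of falco.yaml. The values below are illustrative starting points, not recommendations:

```yaml
# falco.yaml fragment - what to do when syscall events are dropped
syscall_event_drops:
  actions:
    - log          # write drops to Falco's own log
    - alert        # also emit a Falco alert about the drops
  rate: 0.03333    # token-bucket rate for drop notifications (~1 every 30s)
  max_burst: 1
  threshold: 0.1   # ignore windows where fewer than 10% of events dropped
```

Rate-limiting the drop notifications themselves is the point: without it, a drop storm becomes its own alert storm.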
## The Three Driver Options (And Which to Use)
**Modern eBPF (recommended):** Uses CO-RE technology so it works across kernel versions without recompilation. Requires kernel 5.8+ with BTF support enabled. This is what you want unless you have a specific reason not to use it. Default since Falco 0.38.0, with significant stability improvements in 0.40+.

**Classic eBPF:** Still requires kernel headers but more compatible than Modern eBPF. Use this if Modern eBPF doesn't work on your distro. The libs repository has performance comparisons between drivers.

**Kernel Module:** Maximum compatibility but requires root and kernel headers. Only use this if eBPF completely fails on your system. Check the host installation docs for kernel module setup.
Pro tip: The Modern eBPF driver failed to load on our RHEL 7.6 nodes constantly - I saw `bpf_map_create failed: Operation not permitted` errors for 3 days before realizing our kernel was compiled without BTF support. Always have kernel headers installed as a fallback, and check the troubleshooting guide when you see those cryptic eBPF errors.
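Rather than letting Falco guess, pin the driver explicitly via the `engine` section of falco.yaml (available from 0.38 onward; the buffer preset shown is just an example value). A quick sanity check for BTF support is whether `/sys/kernel/btf/vmlinux` exists on the node:

```yaml
# falco.yaml fragment - pin the driver instead of relying on defaults
engine:
  kind: modern_ebpf      # alternatives: ebpf, kmod
  modern_ebpf:
    buf_size_preset: 4   # example per-CPU ring buffer sizing, tune for your load
```

Pinning `kind` makes failures loud and immediate at startup instead of letting a node silently fall back to a driver you didn't test.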
## Integration Hell (And How to Avoid It)
Falco has 50+ output integrations through Falcosidekick but most are community plugins with varying quality. Here's what actually works in production:
- **Slack/Teams:** Works great for initial setup, becomes noise after a week
- **Elasticsearch:** Solid if you already have ELK, pain in the ass to set up from scratch - see the Elastic integration guide
- **S3:** Cheap storage for compliance logging
- **Webhook:** Most flexible - build your own integration using the webhook documentation
Don't try to send every alert to your SIEM initially. Start with critical alerts only or you'll get buried in false positives. The rule adoption guide has good strategies for tuning.
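Here's a sketch of Helm values for the falcosecurity/falco chart that follows that advice - Falcosidekick enabled, everything below warning filtered out before it leaves the node, and a placeholder webhook address you'd swap for your own endpoint:

```yaml
# values.yaml sketch for the falcosecurity/falco Helm chart
falcosidekick:
  enabled: true
  config:
    # drop anything below "warning" before it ever reaches your SIEM
    minimumpriority: warning
    webhook:
      # placeholder - point this at your own receiver
      address: https://alerts.example.internal/falco
```

Filtering at the Falcosidekick layer is cheap insurance: it's the difference between 47,000 alerts hitting Splunk and a few dozen that matter.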
## Kubernetes Deployment Reality
The Kubernetes operator is still in tech preview (as of 0.41.0) and I've seen it crash on YAML edge cases. Stick with the official Helm charts unless you like debugging operator logs at 3am.
This actually works:

```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco --namespace falco-system --create-namespace
```
The DaemonSet approach ensures every node gets monitored, but watch out for kernel header issues on mixed node types. Auto-scaling can break if new node AMIs don't have headers pre-installed. Check the EKS deployment guide for AWS-specific gotchas, or the falcoctl documentation for artifact management.
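One way to handle mixed node pools is to gate the DaemonSet with a node label you control, so Falco only schedules where you know the driver loads. The label name below is our own convention, not a chart default, and `driver.kind` is the chart-level counterpart of the falco.yaml engine setting:

```yaml
# values.yaml sketch - keep the DaemonSet off nodes that can't run the driver
driver:
  kind: modern_ebpf
tolerations:
  - operator: Exists   # run on tainted nodes too (e.g. dedicated node pools)
nodeSelector:
  # hypothetical label, applied by your node provisioning to known-good AMIs
  falco.example.com/supported: "true"
```

New AMIs then fail closed: until provisioning labels them, they simply don't run Falco, instead of crash-looping a pod with missing kernel headers.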
But getting Falco installed is just the beginning. The real challenge is keeping it running reliably in production and dealing with all the ways it can break - which is where we're headed next.