Overview
Falco is the CNCF-graduated open-source runtime security engine for cloud-native workloads. Where image scanning and admission policies catch known-bad images before they run, Falco watches what containers are actually doing at runtime — monitoring system calls, file operations, network connections, and process executions as they happen.
A single Falco deployment gives you:
- Container escape detection — alerts when a process breaks out of its cgroup or mounts the host filesystem
- Privilege escalation detection — catches sudo, setuid, and capability abuse inside containers
- Cryptominer and reverse shell detection — identifies unexpected outbound connections and CPU-hungry processes
- Sensitive file access — alerts on reads to /etc/shadow, /proc/*/mem, and cloud credential files
- MITRE ATT&CK mapping — default rules are tagged to ATT&CK tactics and techniques out of the box
- Audit log correlation — Kubernetes Audit Sink integration for control-plane event enrichment
Architecture Overview
| Component | Role | Deployment |
|---|---|---|
| Falco (kernel driver) | Intercepts syscalls via eBPF or kernel module | DaemonSet on every node |
| Falco rules engine | Evaluates conditions against syscall stream | In-process with Falco |
| Falcosidekick | Fans out alerts to Slack, Splunk, Datadog, PagerDuty, webhooks | Deployment (1–2 replicas) |
| Falcosidekick-UI | Real-time alert dashboard | Optional Deployment |
Who Should Use This Guide
- Platform and DevSecOps engineers hardening Kubernetes production clusters
- Security engineers who need runtime visibility beyond static scanning
- SOC teams extending SIEM coverage into containerized workloads
- Red/blue teams testing container escape detections in lab environments
Requirements
Cluster Requirements
| Requirement | Detail |
|---|---|
| Kubernetes version | 1.26+ (1.28+ recommended for eBPF driver stability) |
| Node OS | Ubuntu 20.04/22.04, RHEL 8/9, Amazon Linux 2023, Debian 11/12 |
| Kernel version | 5.8+ for eBPF driver; 4.14+ for kernel module fallback |
| Node privileges | privileged: true DaemonSet required (syscall interception needs host PID namespace) |
| Helm | 3.x |
Networking Requirements
- Outbound HTTPS from Falcosidekick to your alerting targets (Slack, PagerDuty, SIEM webhook endpoint)
- Internal cluster DNS for service discovery between Falco and Falcosidekick
Cloud Managed Clusters (EKS/AKS/GKE): The eBPF driver works on managed clusters without custom kernel modules. On GKE, use the auto driver strategy (driver.kind: auto) and ensure nodes run Container-Optimized OS (COS) 89+.
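To sanity-check the kernel requirement before installing, here is a small shell sketch. The kernel_ok helper is purely illustrative (it is not part of Falco or the chart), and the commented kubectl line at the end assumes you have cluster access:

```shell
# Compare a kernel version string against the 5.8 eBPF floor using sort -V.
min="5.8"
kernel_ok() {
  # Succeeds when $1 >= $min under version sort.
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

kernel_ok "5.15.0-101-generic" && echo "eBPF OK"            # typical Ubuntu 22.04 kernel
kernel_ok "4.14.355" || echo "fall back to the kernel module driver"

# Against a live cluster, list each node's kernel:
# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.nodeInfo.kernelVersion}{"\n"}{end}'
```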
Step 1: Add the Falco Helm Repository
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

Verify the chart is available:
helm search repo falcosecurity/falco
# NAME CHART VERSION APP VERSION DESCRIPTION
# falcosecurity/falco 4.x.x 0.40.x Falco — Runtime Security

Create a dedicated namespace:
kubectl create namespace falco

Step 2: Deploy Falco with Helm
Create a falco-values.yaml override file. This configures the eBPF driver (preferred over the kernel module for modern clusters) and enables JSON output for downstream ingestion.
# falco-values.yaml
driver:
kind: ebpf # Use eBPF — no kernel module compilation required
ebpf:
    leastPrivileged: false # false runs the probe privileged; set true to grant only CAP_BPF/CAP_PERFMON on 5.8+ kernels
falco:
json_output: true # Structured JSON alerts — required for Falcosidekick
json_include_output_property: true
log_level: info
priority: debug # Capture DEBUG+ during initial tuning; raise to WARNING in prod
# Output channels
stdout_output:
enabled: true # Always keep stdout; used by log aggregators
grpc:
enabled: true # gRPC API for Falcosidekick connection
grpc_output:
enabled: true
falcosidekick:
enabled: true
replicaCount: 2
webui:
enabled: true # Optional real-time UI at port 2802
replicaCount: 1
config:
slack:
webhookurl: "" # Set via secret — see Step 4
minimumpriority: "warning"
webhook:
address: "" # Your SIEM ingest endpoint
minimumpriority: "notice"
# Resource limits — tune to your node size
resources:
limits:
cpu: 1000m
memory: 1024Mi
requests:
cpu: 100m
    memory: 512Mi

Deploy Falco:
helm install falco falcosecurity/falco \
--namespace falco \
--values falco-values.yaml \
  --set driver.loader.initContainer.enabled=true

Verify the DaemonSet is running on all nodes:
kubectl get daemonset -n falco
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE
# falco 3 3 3 3 3
kubectl get pods -n falco
# NAME READY STATUS RESTARTS AGE
# falco-abcde 2/2 Running 0 2m
# falco-fghij 2/2 Running 0 2m
# falco-klmno 2/2 Running 0 2m
# falco-falcosidekick-xxx 1/1 Running 0 2m
# falco-falcosidekick-yyy 1/1 Running 0 2m

Step 3: Verify Falco Is Detecting Events
Test detection with a controlled trigger. Open a shell into a running pod and perform a sensitive action:
# Trigger: reading /etc/shadow inside a container
kubectl run test-pod --image=ubuntu --restart=Never -- sleep 3600
kubectl exec -it test-pod -- cat /etc/shadow

Check Falco logs on the node where test-pod is scheduled:
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=20

You should see an alert like:
{
"output": "Warning Read sensitive file (/etc/shadow) by user=root ...",
"priority": "Warning",
"rule": "Read sensitive file untrusted",
"source": "syscall",
"tags": ["T1003", "filesystem", "mitre_credential_access"],
"time": "2026-03-16T12:00:00.000000000Z",
"output_fields": {
"container.id": "abc123",
"container.name": "test-pod",
"k8s.ns.name": "default",
"k8s.pod.name": "test-pod",
"proc.name": "cat",
"user.name": "root"
}
}

Clean up the test pod:
kubectl delete pod test-pod

Step 4: Configure Falcosidekick Alerting
Falcosidekick routes Falco alerts to 60+ output targets. Store sensitive values as Kubernetes Secrets rather than in the Helm values file.
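Every output below accepts a minimumpriority setting, evaluated against Falco's fixed priority ladder (emergency down to debug). The passes helper here is a hypothetical sketch of that threshold check, not Falcosidekick's actual code:

```shell
# Falco priorities, highest first; minimumpriority=X delivers X and everything above it.
ladder="emergency alert critical error warning notice informational debug"

passes() { # passes <alert_priority> <minimumpriority>
  seen=""
  for p in $ladder; do
    [ "$p" = "$1" ] && seen=yes
    # When we reach the threshold, pass only if the alert priority was seen at or above it.
    if [ "$p" = "$2" ]; then [ -n "$seen" ]; return; fi
  done
  return 1
}

passes critical warning && echo "critical clears a warning threshold"
passes notice warning || echo "notice is filtered at a warning threshold"
```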
Slack Alerting
Create a Slack incoming webhook in your workspace (Slack → Apps → Incoming Webhooks), then store it as a Secret:
kubectl create secret generic falcosidekick-secrets \
--namespace falco \
  --from-literal=slack-webhookurl="https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"

Update falco-values.yaml to reference the secret:
falcosidekick:
config:
slack:
webhookurl: "" # Leave empty — use secretEnv below
extraEnv:
- name: SLACK_WEBHOOKURL
valueFrom:
secretKeyRef:
name: falcosidekick-secrets
          key: slack-webhookurl

Upgrade the release:
helm upgrade falco falcosecurity/falco \
--namespace falco \
  --values falco-values.yaml

Generic Webhook (SIEM / Splunk HEC / Elastic)
For Splunk HTTP Event Collector:
falcosidekick:
config:
webhook:
address: "https://splunk.internal:8088/services/collector"
customheaders: "Authorization:Splunk YOUR_HEC_TOKEN"
      minimumpriority: "notice"

For Elasticsearch, use Falcosidekick's dedicated output, which ships alerts directly to the cluster:
falcosidekick:
config:
elasticsearch:
hostport: "https://elasticsearch.internal:9200"
index: "falco-alerts"
      minimumpriority: "notice"

Step 5: Write Custom Falco Rules
Default Falco rules cover common threats, but your workloads need tailored rules. Custom rules are loaded from a ConfigMap and hot-reloaded without restarting the DaemonSet.
Rule Structure
# Every Falco rule has: rule, desc, condition, output, priority, tags
- rule: Unexpected Outbound Connection from Web Pod
desc: >
Detects any outbound TCP connection from a web-tier pod to a destination
outside the allowed CDN/API range. Potential C2 or data exfiltration.
condition: >
outbound and
k8s.ns.name = "production" and
k8s.pod.labels.tier = "web" and
not fd.sip in (allowed_web_destinations)
output: >
Unexpected outbound from web pod
(pod=%k8s.pod.name ns=%k8s.ns.name dest=%fd.rip:%fd.rport
proc=%proc.name user=%user.name)
priority: WARNING
  tags: [network, mitre_command_and_control, T1071]

Macros and Lists
Macros and lists keep rules DRY and easy to maintain:
# Allowed outbound destinations for web pods (CIDR or IP list)
- list: allowed_web_destinations
items:
- "104.18.0.0/16" # Cloudflare CDN range
- "172.217.0.0/16" # Google APIs
- "10.0.0.0/8" # Internal cluster range
# Reusable macro: process is making an outbound TCP connection
- macro: outbound
condition: >
(evt.type = connect and evt.dir = < and
    fd.typechar = 4 and fd.sip != "0.0.0.0")

Commonly Needed Custom Rules
Detect shell spawned inside a container:
- rule: Shell Spawned in Container
desc: Alert when a shell process is started in a running container.
condition: >
spawned_process and
container and
shell_procs and
not proc.pname in (shell_spawning_binaries)
output: >
Shell spawned in container
(pod=%k8s.pod.name ns=%k8s.ns.name
shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
priority: WARNING
tags: [process, container, mitre_execution, T1059]
- list: shell_procs
items: [bash, sh, zsh, fish, dash, ksh]
- list: shell_spawning_binaries
  items: [containerd-shim, runc, tini, dumb-init]

Detect access to cloud credential files:
- rule: Cloud Credentials Read in Container
desc: >
Detects reads of AWS/GCP/Azure credential files. Could indicate a
compromised container attempting to escalate to cloud IAM.
condition: >
open_read and
container and
(fd.name startswith "/root/.aws" or
fd.name startswith "/home/.aws" or
fd.name = "/var/run/secrets/kubernetes.io/serviceaccount/token" or
fd.name startswith "/.config/gcloud") and
not proc.name in (allowed_credential_readers)
output: >
Cloud credential file read in container
(pod=%k8s.pod.name ns=%k8s.ns.name file=%fd.name
proc=%proc.name user=%user.name)
priority: CRITICAL
tags: [filesystem, credentials, mitre_credential_access, T1552]
- list: allowed_credential_readers
  items: [aws-sdk, gcloud, kubectl]

Loading Custom Rules via ConfigMap
kubectl create configmap falco-custom-rules \
--namespace falco \
  --from-file=custom-rules.yaml=./my-custom-rules.yaml

Add to falco-values.yaml:
falco:
rulesFile:
- /etc/falco/falco_rules.yaml
- /etc/falco/falco_rules.local.yaml
- /etc/falco/custom-rules.d/custom-rules.yaml
extraVolumes:
- name: custom-rules
configMap:
name: falco-custom-rules
extraVolumeMounts:
- name: custom-rules
mountPath: /etc/falco/custom-rules.d
      readOnly: true

Upgrade and verify rules loaded:
helm upgrade falco falcosecurity/falco --namespace falco --values falco-values.yaml
# Check rules were parsed without errors
kubectl logs -n falco -l app.kubernetes.io/name=falco | grep -i "loading rules"

Step 6: Tune Alert Noise
Fresh Falco deployments generate a high volume of alerts, especially in development namespaces. Noise reduction is essential before using Falco for production alerting.
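Before tuning, find out which rules are actually noisy. This sketch counts alerts per rule using only grep and sort, so it runs anywhere; the inlined sample lines stand in for real kubectl logs output:

```shell
# Count alerts per rule name from Falco's JSON stdout stream.
sample='{"priority":"Warning","rule":"Read sensitive file untrusted"}
{"priority":"Warning","rule":"Read sensitive file untrusted"}
{"priority":"Notice","rule":"Terminal shell in container"}'

printf '%s\n' "$sample" \
  | grep -o '"rule":"[^"]*"' \
  | sort | uniq -c | sort -rn

# Against the live DaemonSet:
# kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=10000 \
#   | grep -o '"rule":"[^"]*"' | sort | uniq -c | sort -rn
```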
Strategy 1: Namespace Scoping
Suppress alerts for known-noisy namespaces using a macro:
- macro: trusted_namespaces
condition: >
k8s.ns.name in (kube-system, kube-public, cert-manager,
    monitoring, logging, istio-system)

Then append and not trusted_namespaces to rules you don't need for infrastructure pods:
- rule: Shell Spawned in Container
condition: >
spawned_process and container and shell_procs and
not proc.pname in (shell_spawning_binaries) and
    not trusted_namespaces # <- added

Strategy 2: Override Default Rules with append
Instead of disabling default rules globally, append exceptions for specific workloads:
# Append to the default "Write below root" rule to exclude your backup agent
- rule: Write below root
append: true
condition: >
and not (
proc.name = "restic" and
fd.name startswith "/backup"
    )

Strategy 3: Priority Filtering per Output
Route only high-priority alerts to PagerDuty and allow Slack to receive lower-priority events for investigation:
falcosidekick:
config:
pagerduty:
routingkey: ""
minimumpriority: "critical" # Only CRITICAL to PD
slack:
webhookurl: ""
minimumpriority: "warning" # WARNING+ to Slack
elasticsearch:
hostport: ""
      minimumpriority: "notice" # Everything to SIEM

Strategy 4: Validate Rules Offline with falco --dry-run
Before deploying custom rules to a cluster, validate them against the Falco binary:
docker run --rm \
-v $(pwd)/my-custom-rules.yaml:/etc/falco/custom-rules.yaml:ro \
falcosecurity/falco:latest \
falco --dry-run -r /etc/falco/falco_rules.yaml \
  -r /etc/falco/custom-rules.yaml

Step 7: Integrate with Kubernetes Audit Logs
Falco can consume the Kubernetes Audit Log stream to detect control-plane attacks — exec into pods, secret reads, RBAC changes, and more — separately from syscall events.
Enable the k8s_audit Plugin
# Add to falco-values.yaml
falco:
plugins:
- name: k8saudit
library_path: libk8saudit.so
open_params: "http://:9765/k8s-audit"
init_config:
sslCertificate: ""
- name: json
library_path: libjson.so
  load_plugins: [k8saudit, json]

Configure the Kubernetes Audit Webhook
On managed clusters (EKS/AKS/GKE), configure the Audit Log to forward to Falco's HTTP endpoint. For self-managed clusters, edit the kube-apiserver manifest:
# /etc/kubernetes/manifests/kube-apiserver.yaml — add audit flags
- --audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
- --audit-webhook-batch-max-size=10
- --audit-webhook-batch-max-wait=5s

# /etc/kubernetes/audit-webhook.yaml
apiVersion: v1
kind: Config
clusters:
- name: falco-audit
cluster:
    server: http://falco.falco.svc.cluster.local:9765/k8s-audit # Falco's k8saudit listener (expose port 9765 via the chart's services value), not Falcosidekick
users:
- name: ""
contexts:
- context:
cluster: falco-audit
user: ""
name: default-context
current-context: default-context

Relevant audit rules to watch for:
- rule: K8s Secret Accessed by Non-System User
desc: A secret was read by a user who is not a system account.
condition: >
ka.verb in (get, list) and
ka.target.resource = "secrets" and
not ka.user.name startswith "system:"
output: >
Secret accessed
(user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace
verb=%ka.verb userAgent=%ka.useragent)
priority: WARNING
source: k8s_audit
  tags: [k8s, credentials, mitre_credential_access]

Step 8: Access the Falcosidekick UI
If you enabled falcosidekick.webui.enabled: true, expose the UI for investigation:
kubectl port-forward -n falco svc/falco-falcosidekick-ui 2802:2802

Open http://localhost:2802 in your browser. The UI shows:
- Real-time alert stream with severity colour coding
- Alert counts by rule, namespace, pod, and node
- Top-N pods generating the most alerts (noise identification)
- Timeline view for alert correlation
For persistent access, create an Ingress or LoadBalancer service — and add authentication (basic auth via ingress annotations or an OAuth2 proxy), since the UI ships with only minimal built-in credentials.
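If you go the Ingress route, one option is basic auth via ingress-nginx annotations. A sketch, assuming ingress-nginx is installed, the hostname is yours to change, and a Secret named falcosidekick-ui-auth already holds an htpasswd file (kubectl create secret generic falcosidekick-ui-auth --from-file=auth):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: falcosidekick-ui
  namespace: falco
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: falcosidekick-ui-auth
    nginx.ingress.kubernetes.io/auth-realm: "Falcosidekick UI"
spec:
  ingressClassName: nginx
  rules:
    - host: falco-ui.example.internal   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: falco-falcosidekick-ui
                port:
                  number: 2802
```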
Step 9: Production Hardening Checklist
Before treating Falco as a production security control, complete these steps:
- [ ] Driver confirmed as eBPF (not kernel module) on all production nodes
- [ ] JSON output enabled and flowing to SIEM
- [ ] Falcosidekick high-availability: at least 2 replicas with PodDisruptionBudget
- [ ] Custom rules reviewed and tested in a non-prod namespace first
- [ ] Noisy default rules suppressed or scoped to production namespaces only
- [ ] PagerDuty/alerting tested end-to-end with a deliberate rule trigger
- [ ] Falco DaemonSet resource limits tuned — CPU spike expected on busy nodes
- [ ] RBAC: Falco ServiceAccount has read-only access to pods/nodes/namespaces
- [ ] Falco version pinned in Helm values — auto-upgrades can change rule behavior
- [ ] Audit log integration tested (exec into pod triggers k8s_audit rule)
- [ ] Alert fatigue baseline: P1/P2 alert volume < 5/day per cluster in steady state
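The high-availability item above can be sketched as a PodDisruptionBudget. The label selector assumes the default Helm release name falco and the chart's standard falcosidekick labels, so verify them with kubectl get pods -n falco --show-labels first:

```yaml
# Keep at least one Falcosidekick replica available during node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: falco-falcosidekick
  namespace: falco
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: falcosidekick
```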
Troubleshooting
Falco Pod Is CrashLooping
The most common cause is a missing kernel driver. Check the init container logs:
kubectl logs -n falco <falco-pod-name> -c falco-driver-loader

If you see ERROR: kernel version not supported, switch from ebpf to modern_ebpf or kmod:
driver:
  kind: modern_ebpf # CO-RE-based probe, no per-kernel build; needs kernel 5.8+ with BTF. Use kmod for older kernels

No Alerts Being Generated
- Confirm json_output: true in falco-values.yaml
- Verify gRPC output and Falcosidekick are configured and the gRPC port (5060) is reachable
- Lower priority to debug temporarily and trigger a rule manually (Step 3)
# Check gRPC connectivity from Falcosidekick to Falco
kubectl exec -n falco deploy/falco-falcosidekick -- \
  wget -qO- http://falco.falco.svc.cluster.local:5060 || echo "gRPC port unreachable"

Rule Syntax Errors
Falco logs rule parse errors at startup:
kubectl logs -n falco <falco-pod-name> | grep -E "ERROR|WARN|rule"Common mistakes:
- Undefined macro referenced before it is declared — reorder rule files
- Missing and between condition clauses
- Quoting issues inside items: lists — use bare strings, not quoted
Next Steps
With Falco running and alerting, consider these enhancements:
- Falco Talon — automated response plugin that can kill pods, isolate namespaces, or trigger Lambda functions when a rule fires
- Prometheus metrics — Falco exposes a /metrics endpoint; scrape it with your existing Prometheus stack to build SLIs around alert rates
- Policy-as-Code alignment — pair Falco (runtime) with OPA/Gatekeeper (admission) for defence-in-depth: Gatekeeper blocks bad configs at admission, Falco catches runtime anomalies that slip through
- Falco Rules Repository — the falco-rules GitHub repository maintains community rules for common attack scenarios; subscribe to releases for rule updates
- Cloud provider integration — Falco Cloud (SaaS) or self-hosted Falco with AWS CloudTrail / GCP Cloud Audit Logs plugins extends coverage beyond the cluster to cloud API calls
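For the Prometheus item above, a sketch of a scrape config. It assumes you enabled the chart's metrics support (metrics.enabled=true) and that Falco serves /metrics on its default web server port 8765; adjust the relabeling to your own conventions:

```yaml
scrape_configs:
  - job_name: falco
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods labeled as Falco by the Helm chart.
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        regex: falco
        action: keep
      # Point the scrape at the assumed metrics port.
      - source_labels: [__address__]
        regex: ([^:]+)(?::\d+)?
        replacement: $1:8765
        target_label: __address__
```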