Here's what you probably deployed to production:
- AllowAllAuthenticator (anyone can connect)
- AllowAllAuthorizer (anyone can do anything)
- JMX on port 7199 with zero authentication
- Internode encryption disabled
- Client encryption optional (so everyone skips it)
- system_auth keyspace with SimpleStrategy RF=1
Real-world impact: I've personally seen this configuration get breached in under 20 minutes during pen tests. Port scan finds 7199 open, JMX connects without credentials, attacker dumps the entire system_auth keyspace, game over.
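For reference, this is roughly what those defaults look like in a stock cassandra.yaml (keys and defaults drift a little between versions, so treat this as a sketch, not gospel):
## Stock cassandra.yaml defaults (sketch - verify against your version)
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
server_encryption_options:
  internode_encryption: none
client_encryption_options:
  enabled: false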
The Attack Path That Ruins Your Week
Here's exactly how your "secure" Cassandra cluster gets owned:
## 1. Port scan finds open JMX (takes 2 seconds)
nmap -p 7199 your-cassandra-host.com
## 2. Connect to JMX without authentication (because defaults)
jconsole service:jmx:rmi:///jndi/rmi://your-cassandra-host:7199/jmxrmi
## 3. Execute arbitrary operations via StorageService MBean
## Can trigger compactions, flush data, even shut down nodes
## 4. If that fails, brute force the default 'cassandra' user
cqlsh -u cassandra -p cassandra your-cassandra-host
## 5. Create new superuser, because why not
CREATE ROLE attacker WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'owned123';
Why this works every fucking time: Default configs prioritize "easy to set up" over "won't get you fired". Authentication ships disabled by default, and most teams never bother changing it until compliance starts asking questions.
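Want to know if somebody already ran step 5 against you? Once you have a cqlsh session, the roles table tells you exactly who holds superuser (a quick audit query; system_auth.roles exists on Cassandra 2.2+):
## List every role and flag any superuser you don't recognize
SELECT role, is_superuser, can_login FROM system_auth.roles;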
Emergency Hardening (Do This Right Now)
Step 1: Check How Fucked You Are
## Check what auth you're actually using
grep -E "(authenticator|authorizer)" /etc/cassandra/cassandra.yaml
## If you see these, you're wide open:
## authenticator: AllowAllAuthenticator
## authorizer: AllowAllAuthorizer
## Check JMX exposure (this will make you cry)
netstat -tlnp | grep 7199
## Check if encryption is actually enabled
grep -A3 -E "(server_encryption_options|client_encryption_options)" /etc/cassandra/cassandra.yaml
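Also worth checking: whether the stock cassandra/cassandra login (or no login at all) gets a stranger a working session from a host that shouldn't have access. The -e flag just runs one statement and exits:
## If this comes back with your keyspaces, so can anyone else on the network
cqlsh -u cassandra -p cassandra your-cassandra-host -e 'DESCRIBE KEYSPACES;'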
Step 2: Enable Real Authentication (Takes 30 minutes, saves your job)
WARNING: If you're on Cassandra 4.0.1 through 4.0.4, authentication has a memory leak that'll crash your nodes under load. Upgrade to 4.0.5+ first or you'll create new problems while fixing security.
## cassandra.yaml - Stop being a security disaster
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
role_manager: CassandraRoleManager
## Fix the system_auth keyspace (SimpleStrategy RF=1 is a single point of failure)
## Do this BEFORE enabling auth or you'll lock yourself out
cqlsh -u cassandra -p cassandra
-- Use your actual datacenter name(s) and replication factor; 'datacenter1' is just an example
ALTER KEYSPACE system_auth WITH REPLICATION = {
  'class': 'NetworkTopologyStrategy',
  'datacenter1': 3
};
## Run this on EVERY node after changing replication - schedule it off-peak on busy clusters
nodetool repair --full system_auth
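Not sure what your datacenter names actually are? nodetool will tell you; whatever shows up there is what goes in the ALTER KEYSPACE above:
## Datacenter names appear as 'Datacenter: <name>' headers
nodetool status | grep Datacenter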
## Change the default password (cassandra/cassandra is embarrassing)
ALTER ROLE cassandra WITH PASSWORD = 'something-that-isnt-cassandra123';
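Better yet, stop using the default account entirely: create a dedicated admin role, log back in as that role, and demote the built-in one (the role name and password below are placeholders, obviously):
CREATE ROLE dba_admin WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'use-a-real-secret-here';
-- Reconnect as dba_admin before running this - you can't alter your own superuser status
ALTER ROLE cassandra WITH SUPERUSER = false AND LOGIN = false;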
Pro tip: Test authentication on one node first. I've seen teams enable auth, restart all nodes, then realize they fucked up the replication settings and locked themselves out of the entire cluster. Recovery requires single-node mode and manual keyspace surgery.
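A minimal per-node rollout sketch, assuming a systemd install where the service is just called cassandra (adjust to your setup):
## Per-node sequence - do one node at a time and verify before moving on
nodetool drain                       ## flush memtables, stop accepting traffic
sudo systemctl restart cassandra     ## pick up the new cassandra.yaml
nodetool status                      ## wait until this node shows UN again
cqlsh -u cassandra -p 'your-new-password' -e 'LIST ROLES;'   ## confirm auth actually works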
Step 3: Lock Down JMX (Before Someone Scripts This Attack)
## cassandra-env.sh - JMX that won't get you owned
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.access.file=/etc/cassandra/jmxremote.access"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=true"
## Bind to localhost only (not 0.0.0.0 like an idiot)
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=127.0.0.1"
Create the auth files (this always breaks the first time):
## /etc/cassandra/jmxremote.password
jmx_admin your_actual_password_not_password123
## /etc/cassandra/jmxremote.access
jmx_admin readwrite
## Lock down permissions or Cassandra won't start with:
## "Error: Password file read access must be restricted"
chmod 600 /etc/cassandra/jmxremote.password
chown cassandra:cassandra /etc/cassandra/jmxremote.*
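After a restart, nodetool is the quickest way to prove JMX auth is actually being enforced; it passes JMX credentials with -u/-pw (and once jmxremote.ssl is on, you'll also need nodetool --ssl plus a truststore in its JVM options):
## Should fail without credentials...
nodetool status
## ...and work with them
nodetool -u jmx_admin -pw 'your_actual_password_not_password123' status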
Debugging JMX failures (because this shit never works the first time):
## If Cassandra won't start after enabling JMX auth:
tail -f /var/log/cassandra/system.log | grep -i jmx
## Common error: "Cannot bind to RMI port"
## Fix: check nothing else is already bound to 7199 and that the advertised hostname is reachable
## If JConsole can't connect with SSL:
## Add this debug flag to see what's actually failing
JVM_OPTS="$JVM_OPTS -Djavax.net.debug=ssl"
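And from a box outside your management network, confirm the port is no longer reachable at all, using the same scan the attacker would run:
## Expect 'closed' or 'filtered' here, not 'open'
nmap -p 7199 your-cassandra-host.com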
Authentication Bypass Through JMX (The Other Attack Vector)
While CVE-2025-24860 gets the headlines, JMX exploitation remains the #1 way clusters get owned. Default configurations expose JMX on port 7199 with zero authentication. Here's what attackers do:
## Connect to exposed JMX port
jconsole service:jmx:rmi:///jndi/rmi://target:7199/jmxrmi
## Execute arbitrary operations via MBeans (pseudocode for any JMX client)
invoke("org.apache.cassandra.db:type=StorageService", "forceKeyspaceCleanup", ...)
## Poke at the system_auth tables through table-level MBeans (pseudocode)
invoke("org.apache.cassandra.db:type=Tables,keyspace=system_auth,table=roles", ...)
Production reality: Exposed JMX ports are basically free root shells. Attackers don't need fancy zero-days when JMX lets them fuck with your database directly. Your mileage may vary, but I've seen this attack work against default Cassandra configs every single time.
JMX Hardening That Actually Works:
## cassandra-env.sh - on top of the Step 3 settings (auth, password/access files, SSL)
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl.need.client.auth=true"
## If your tooling genuinely needs remote JMX, advertise it only on a management interface
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.0.1.100"
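Belt and suspenders: firewall 7199 so only the management subnet can reach it even if someone fat-fingers the JMX flags later. A hedged iptables sketch, assuming 10.0.1.0/24 is your management network like the example above:
## Allow JMX only from the management subnet, drop everything else
iptables -A INPUT -p tcp --dport 7199 -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 7199 -j DROP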
SSL/TLS Configuration That Doesn't Suck
Most teams enable "SSL" and call it secure. Then they use self-signed certificates, disable hostname verification, and wonder why man-in-the-middle attacks work flawlessly.
Actually Secure TLS Configuration:
## cassandra.yaml - Enterprise TLS settings
## Note: the ${...} placeholders need your config management or entrypoint script to
## substitute them - vanilla cassandra.yaml does not expand environment variables on its own
server_encryption_options:
  internode_encryption: all
  keystore: /etc/cassandra/keystore.p12
  keystore_password: ${KEYSTORE_PASSWORD}
  truststore: /etc/cassandra/truststore.p12
  truststore_password: ${TRUSTSTORE_PASSWORD}
  protocol: TLSv1.3
  cipher_suites:
    - TLS_AES_256_GCM_SHA384
    - TLS_CHACHA20_POLY1305_SHA256
  require_client_auth: true
  store_type: PKCS12

client_encryption_options:
  enabled: true
  optional: false  # Never use optional in production
  keystore: /etc/cassandra/keystore.p12
  truststore: /etc/cassandra/truststore.p12
  protocol: TLSv1.3
  require_client_auth: true
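Building the PKCS12 stores is the part nobody documents. A hedged sketch with openssl and keytool, assuming your CA handed you node.crt, node.key, and ca.crt (filenames are placeholders):
## Bundle the node's cert and key into the keystore
openssl pkcs12 -export -in node.crt -inkey node.key -certfile ca.crt \
  -name cassandra-node -out /etc/cassandra/keystore.p12 -passout pass:"$KEYSTORE_PASSWORD"
## Put the CA into the truststore so nodes and clients can verify each other
keytool -importcert -file ca.crt -alias rootca -keystore /etc/cassandra/truststore.p12 \
  -storetype PKCS12 -storepass "$TRUSTSTORE_PASSWORD" -noprompt
## Same permission rules as the JMX files - lock them down
chmod 600 /etc/cassandra/*.p12 && chown cassandra:cassandra /etc/cassandra/*.p12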
Certificate reality: Your certificates WILL expire at the worst possible time. Set up monitoring or get comfortable with 3am phone calls and angry customers:
## Certificate monitoring (check every 30 minutes or whatever works for your setup)
*/30 * * * * /usr/local/bin/cert-check.sh /etc/cassandra/keystore.p12 30 || logger "Cert check failed again"
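There's nothing magic in cert-check.sh; here's roughly what a version of it can look like (assumes GNU date, keytool's default English date output, and the keystore password reaching cron via KEYSTORE_PASSWORD - adapt to your environment):
#!/usr/bin/env bash
## Usage: cert-check.sh <keystore.p12> <days> - exits non-zero if any cert expires within <days>
set -euo pipefail
keystore="$1"
days="${2:-30}"
limit=$(( $(date +%s) + days * 86400 ))
## keytool prints one "Valid from: ... until: <date>" line per certificate
keytool -list -v -keystore "$keystore" -storetype PKCS12 -storepass "$KEYSTORE_PASSWORD" \
  | sed -n 's/.*until: //p' \
  | while read -r expiry; do
      if (( $(date -d "$expiry" +%s) < limit )); then
        echo "cert in $keystore expires $expiry" >&2
        exit 1
      fi
    done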
Don't be an idiot: Use real CA-signed certificates. Self-signed certs in production are just delayed security incidents. I've seen Let's Encrypt + cert-manager work well, or use internal PKI if your company actually has one that doesn't suck.
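Once client encryption is live, sanity-check the handshake from any box with openssl (9042 is the default native transport port; adjust if you've moved it). With require_client_auth the handshake may get rejected without a client cert, but you'll still see the server's certificate chain and the negotiated protocol:
## Should show your CA-signed cert chain and negotiate TLSv1.3
openssl s_client -connect your-cassandra-host.com:9042 -tls1_3 </dev/null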