Redis has been the standard for caching for years. It works great until it doesn't, and then you're debugging at 2am wondering why your app is throwing weird connection errors.
The node-redis client has improved a lot, especially since v4. The Redis documentation covers the basics but somehow misses all the stuff that breaks in production.
Basic connection setup
I'm running node-redis v4.something - way better than the callback nightmare of v3. Still has its moments though.
```javascript
import { createClient } from 'redis';

const client = createClient({
  url: process.env.REDIS_URL || 'redis://localhost:6379',
  socket: {
    connectTimeout: 10000, // Default timeout killed us in AWS
    keepAlive: true,
    family: 4 // Docker + IPv6 = bad time
  }
});

client.on('error', (err) => {
  console.error('Redis connection failed:', err);
});

client.on('reconnecting', () => {
  console.log('Redis reconnecting...');
});

await client.connect();
```
That 10 second timeout saved me during our last deployment. All connections were timing out with the default setting - network was just slightly slower than expected. Bumped it up and the errors stopped. Might have been coincidence but I'm not changing it back.
But getting one connection to work is just the beginning. Once you have multiple users hitting your app simultaneously, that single connection becomes a bottleneck. Connection pooling becomes critical.
Connection pools (because one connection isn't enough)
Connection pools sound optional until you get real traffic and everything falls apart. One connection works fine for local development, but we started getting ECONNRESET errors once we had like 30 concurrent users.
```javascript
// This kills performance under load: every request shares one connection
const sharedClient = createClient({ url: process.env.REDIS_URL });

// Better - use a proper pool library like generic-pool
import genericPool from 'generic-pool';

const pool = genericPool.createPool({
  create: () => {
    const client = createClient({ url: process.env.REDIS_URL });
    return client.connect().then(() => client);
  },
  destroy: (client) => client.quit()
}, {
  min: 2,
  max: 8
});

// Get a connection when needed, and always release it
async function getCached(key) {
  const client = await pool.acquire();
  try {
    return await client.get(key);
  } finally {
    pool.release(client);
  }
}
```
Without pooling, you're either funneling every request through one connection or opening a fresh one per request. Both work fine until traffic picks up. Generic-pool is what I use for this.
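The acquire / try / finally dance is easy to get wrong, and one missed release slowly drains the pool. A small helper guarantees the release; the helper name here is mine, and it works with anything exposing `acquire()`/`release()`:

```javascript
// Acquire a client, run the callback, and always release - even on errors.
async function withPooledClient(pool, fn) {
  const client = await pool.acquire();
  try {
    return await fn(client);
  } finally {
    await pool.release(client);
  }
}

// Usage sketch with the generic-pool instance from above:
// const value = await withPooledClient(pool, (client) => client.get('some-key'));
```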
Security basics
Redis security was terrible in early versions - no auth by default. Now you can use ACLs and TLS, but lots of people still run Redis with default settings. The Redis security checklist covers the basics.
```javascript
const secureClient = createClient({
  username: 'app-user',
  password: process.env.REDIS_PASSWORD,
  socket: {
    tls: true,
    rejectUnauthorized: true
  }
});
```
Create a dedicated user for your app with minimal permissions. The default Redis user has admin access which you don't want.
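A sketch of creating that user with redis-cli; the username, password, key pattern, and command list here are placeholders to adapt:

```
ACL SETUSER app-user on >change-this-password ~cache:* +get +set +del +expire
```

New ACL users start disabled with no keys and no commands, so this grants `app-user` access only to keys matching `cache:*` and only those four commands; everything else stays denied.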
Security matters, but so does availability. If your single Redis instance can't handle the load, you'll need to think about horizontal scaling.
Clustering (when one Redis isn't enough)
Redis clustering splits your data across multiple Redis instances. Sounds great until you realize replication between nodes is asynchronous, which means your data might be slightly out of sync across the cluster.
```javascript
import { createCluster } from 'redis';

const cluster = createCluster({
  rootNodes: [
    { url: 'redis://redis-1.internal:6379' },
    { url: 'redis://redis-2.internal:6379' },
    { url: 'redis://redis-3.internal:6379' }
  ]
});

cluster.on('nodeError', (error, address) => {
  console.error(`Node ${address} is being a problem:`, error);
});

await cluster.connect();
```
Redis cluster doesn't guarantee strong consistency. Data might be slightly different between nodes, especially during network partitions. Spent one late night debugging disappearing user sessions - turned out the cluster was rebalancing during a network issue. Session got written to one node but another node didn't see it for 30+ seconds.
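The routing itself is deterministic: Redis Cluster maps every key to one of 16,384 hash slots via CRC16(key) mod 16384 (the CRC-16/XMODEM variant), and if the key contains a {hash tag}, only the tag is hashed. A sketch of that mapping:

```javascript
// CRC-16/XMODEM (poly 0x1021, init 0) - the variant Redis Cluster uses.
function crc16(str) {
  let crc = 0;
  for (let i = 0; i < str.length; i++) {
    crc ^= str.charCodeAt(i) << 8;
    for (let bit = 0; bit < 8; bit++) {
      crc = (crc & 0x8000)
        ? ((crc << 1) ^ 0x1021) & 0xffff
        : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Map a key to its cluster slot, honoring {hash tag} semantics:
// if the key contains a non-empty {...} section, only that part is hashed.
function hashSlot(key) {
  const open = key.indexOf('{');
  if (open !== -1) {
    const close = key.indexOf('}', open + 1);
    if (close !== -1 && close > open + 1) {
      key = key.slice(open + 1, close);
    }
  }
  return crc16(key) % 16384;
}
```

This is why hash tags matter: keys like `{user:123}.name` and `{user:123}.email` hash to the same slot, so they land on the same node and multi-key operations on them still work in cluster mode.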
Pipelining
Node-redis v4+ does automatic pipelining, which batches commands issued in the same event-loop tick into a single network round trip. Actually works well and makes things faster.
```javascript
// These get batched automatically into one round trip
const [user, session] = await Promise.all([
  client.get('user:123'),
  client.hGetAll('session:abc')
]);
```
One of those features that actually works as advertised.
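For bulk reads there's also `client.mGet`, and with very large key lists it can be worth chunking so no single command balloons. A sketch; the batch size of 100 is arbitrary and `chunk` is my own helper:

```javascript
// Split a key list into fixed-size batches.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch: one MGET per batch, the batches pipelined via Promise.all.
// const results = (await Promise.all(
//   chunk(keys, 100).map((batch) => client.mGet(batch))
// )).flat();
```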
Handling Redis failures
Redis will fail. Network issues happen. When Redis is down, don't let it take your entire app with it.
```javascript
async function getCachedData(key) {
  try {
    return await client.get(key);
  } catch (error) {
    console.warn('Redis is being difficult, falling back to database');
    return await database.get(key); // Slower but reliable
  }
}
```
Always have a fallback. Cache failures should slow things down, not break everything.
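One refinement: if Redis is down for a while, paying a timeout on every single request hurts. A small circuit breaker skips Redis entirely after repeated failures, then probes again after a cooldown. This is a sketch of my own; the class name, thresholds, and integration are not part of node-redis:

```javascript
// Minimal circuit breaker: opens after N consecutive failures,
// half-opens (allows one probe) after the cooldown elapses.
class CircuitBreaker {
  constructor({ failureThreshold = 3, cooldownMs = 5000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null;
  }

  // false while the breaker is open (still cooling down)
  allowRequest(now = Date.now()) {
    if (this.openedAt === null) return true;
    if (now - this.openedAt >= this.cooldownMs) {
      // half-open: let one attempt through and reset the count
      this.openedAt = null;
      this.failures = 0;
      return true;
    }
    return false;
  }

  recordSuccess() {
    this.failures = 0;
    this.openedAt = null;
  }

  recordFailure(now = Date.now()) {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) this.openedAt = now;
  }
}

// Usage sketch inside getCachedData:
//   if (!breaker.allowRequest()) return database.get(key);
//   try { const v = await client.get(key); breaker.recordSuccess(); return v; }
//   catch (e) { breaker.recordFailure(); return database.get(key); }
```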