Pattern #1: The Global Array That Never Stops Growing
This is the classic newbie mistake that takes down production apps:
// This will eventually crash your app
const logs = [];
app.post('/api/log', (req, res) => {
logs.push({ message: req.body.message, timestamp: Date.now() });
res.json({ success: true });
});
Why it breaks: That logs array lives forever and grows with every request. After 50,000 log entries, you're using hundreds of megabytes just for logs. After a million, you're dead.
The fix: Add limits and rotation.
const MAX_LOGS = 1000;
const logs = [];
app.post('/api/log', (req, res) => {
logs.push({ message: req.body.message, timestamp: Date.now() });
// Keep only the latest 1000 entries
if (logs.length > MAX_LOGS) {
logs.splice(0, logs.length - MAX_LOGS);
}
res.json({ success: true });
});
I've seen this exact pattern kill three different production apps. The logs array consumed 2GB of RAM before the app crashed with "JavaScript heap out of memory".
Pattern #2: Event Listeners That Live Forever
Node.js event emitters leak memory when you keep adding listeners without removing them:
// Memory leak waiting to happen
const { EventEmitter } = require('events');
const emitter = new EventEmitter();
function attachUser(userId) {
emitter.on('notification', (data) => {
sendToUser(userId, data);
});
}
// Every user connection adds another listener
// After 10,000 users, you have 10,000 listeners
The symptoms: Your app starts normally but gets slower over time. Eventually Node.js warns about memory leaks:
(node:1234) MaxListenersExceededWarning: Possible EventEmitter memory leak detected.
11 notification listeners added. Use emitter.setMaxListeners() to increase limit
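Before that warning ever fires, you can confirm the leak yourself by watching the listener count climb. A minimal diagnostic sketch using the standard EventEmitter API:
// Log how many 'notification' listeners are attached right now
setInterval(() => {
console.log('notification listeners:', emitter.listenerCount('notification'));
}, 30000);
// A count that only ever goes up while users come and go is your leak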
The fix: Remove listeners when you're done with them.
function attachUser(userId) {
const handler = (data) => sendToUser(userId, data);
emitter.on('notification', handler);
// Return cleanup function
return () => emitter.removeListener('notification', handler);
}
// Usage: store the cleanup function and call it when user disconnects
const cleanup = attachUser('user123');
// Later...
cleanup();
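In a real app, "later" is usually the moment the connection closes. Here's a sketch of that wiring, assuming a WebSocket server built with the ws package; the getUserIdFromRequest helper is hypothetical:
const { WebSocketServer } = require('ws');
const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', (ws, req) => {
const userId = getUserIdFromRequest(req); // hypothetical auth helper
const removeListener = attachUser(userId);
// Drop the notification listener the moment the socket closes
ws.on('close', removeListener);
});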
Pattern #3: Database Connections Left Open
This one killed my app on Black Friday. Heavy traffic + connection leaks = database refusing new connections:
// DON'T DO THIS - leaks database connections
async function getUser(id) {
const client = await pool.connect();
const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
// Forgot to release the connection
return result.rows[0];
}
After as few as ten leaked requests (the default node-postgres pool holds just 10 connections), your connection pool is exhausted. New requests hang forever waiting for a connection that never comes back. This is documented in the PostgreSQL pooling guide and affects MySQL connections too.
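You can at least turn that silent hang into a loud error by capping how long a request waits for a free connection. A configuration sketch, assuming node-postgres; the numbers are illustrative:
const { Pool } = require('pg');
const pool = new Pool({
max: 10,                       // connections in the pool
connectionTimeoutMillis: 5000, // fail fast instead of waiting forever for a free connection
idleTimeoutMillis: 30000,      // close connections that sit idle too long
});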
The fix: Always release in a finally block.
async function getUser(id) {
const client = await pool.connect();
try {
const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
return result.rows[0];
} finally {
client.release(); // Always executes
}
}
Pro tip: For one-off queries, use pool.query() from pg (built on pg-pool) or similar helpers that acquire and release the connection for you. This pattern is recommended in the Node.js database best practices and Sequelize documentation:
// This handles connection management for you
const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
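The one case where you still need to check out a client yourself is a transaction, because every statement has to run on the same connection, and the try/finally rule applies there too. A sketch assuming node-postgres (the table and columns are made up):
async function transferCredits(fromId, toId, amount) {
const client = await pool.connect();
try {
await client.query('BEGIN');
await client.query('UPDATE users SET credits = credits - $1 WHERE id = $2', [amount, fromId]);
await client.query('UPDATE users SET credits = credits + $1 WHERE id = $2', [amount, toId]);
await client.query('COMMIT');
} catch (err) {
await client.query('ROLLBACK');
throw err;
} finally {
client.release(); // Return the connection no matter what happened
}
}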
Pattern #4: Timers That Never Die
Timeouts and intervals that outlive their usefulness:
// Creates a timer for every user session
const sessions = {};
function startUserSession(sessionId) {
const timer = setInterval(() => {
pingUser(sessionId);
}, 30000);
// Timer keeps running even after user disconnects
sessions[sessionId] = { id: sessionId, timer };
}
The fix: Clear timers when sessions end.
function startUserSession(sessionId) {
const timer = setInterval(() => {
pingUser(sessionId);
}, 30000);
sessions[sessionId] = {
id: sessionId,
timer,
cleanup: () => clearInterval(timer)
};
return sessions[sessionId];
}
function endUserSession(sessionId) {
const session = sessions[sessionId];
if (session) {
session.cleanup(); // Clear the timer
delete sessions[sessionId];
}
}
Debugging Tools That Actually Work
Option 1: Chrome DevTools (Best for development)
Start your app with debugging enabled according to the official Node.js debugging guide:
node --inspect app.js
Open Chrome and go to chrome://inspect. Click "inspect" next to your Node process, as documented in the Chrome DevTools memory profiling guide.
Go to the Memory tab, take a heap snapshot, run some traffic, then take another snapshot. Compare the two to see what's growing, using the standard memory analysis techniques.
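"Run some traffic" can be as simple as a throwaway script hammering the leaky endpoint from Pattern #1. A sketch that assumes Node 18+ (for the global fetch) and an app listening on port 3000:
// hammer.js - fire a few thousand requests at the /api/log endpoint
const ENDPOINT = 'http://localhost:3000/api/log';
async function hammer(requests = 5000) {
for (let i = 0; i < requests; i++) {
await fetch(ENDPOINT, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message: `test message ${i}` }),
});
}
}
hammer().then(() => console.log('done'));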
Option 2: Clinic.js (Best for production profiling)
Install clinic.js: npm install -g clinic
# Profile your app under load
clinic doctor -- node app.js
# Generate load in another terminal
# Then stop clinic with Ctrl+C
# Open the HTML report it generates
Clinic Doctor shows you memory usage over time, event loop delay, and CPU usage. For pinpointing which functions allocate the most memory, Clinic HeapProfiler generates an allocation flame graph.
Option 3: Automatic heap dumps before a crash
Add this to your app to write a heap snapshot when heap usage climbs past a threshold:
const v8 = require('v8');
// Write a heap snapshot when memory usage gets high
setInterval(() => {
const usage = process.memoryUsage();
const heapUsedMB = usage.heapUsed / 1024 / 1024;
if (heapUsedMB > 400) { // Adjust threshold as needed
const filename = `heap-${Date.now()}.heapsnapshot`;
// Note: writeHeapSnapshot is synchronous and blocks the event loop while it runs
const heapSnapshot = v8.writeHeapSnapshot(filename);
console.log(`Heap dump written to ${heapSnapshot}`);
}
}, 60000);
Memory Leak Prevention in Code Reviews
Look for these red flags in pull requests:
- Global arrays or objects that grow over time
- Event listeners added without corresponding removal
- Database queries without proper connection handling
- Timers/intervals without cleanup
- Large objects held in closures during async operations
- Caches without size limits or TTL (see the sketch after this list)
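That last one deserves a closer look, because an unbounded cache is just Pattern #1 wearing a disguise. A minimal bounded-cache sketch (the 500-entry cap is arbitrary; for anything serious, reach for a library like lru-cache):
const MAX_ENTRIES = 500;
const cache = new Map();
function cacheSet(key, value) {
// Evict the oldest entry once the cap is reached (Maps iterate in insertion order)
if (cache.size >= MAX_ENTRIES) {
const oldestKey = cache.keys().next().value;
cache.delete(oldestKey);
}
cache.set(key, value);
}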
The golden rule: For every resource you allocate, have a plan to clean it up.
Memory leaks aren't magic - they're just references that stick around longer than they should. Master these patterns and you'll write Node.js code that runs for months without issues.