Python's garbage collector sometimes fails at cleanup. Objects stick around eating memory until your app dies with MemoryError: unable to allocate array of size X. External profilers are either too slow or miss the allocations that actually matter.
What tracemalloc Actually Does
tracemalloc hooks into Python's memory allocation and records what matters:
- Where memory gets allocated: every time Python creates an object, tracemalloc records the complete stack trace. No more guessing which line is eating your RAM.
- How much memory each part uses: it shows you exactly which function is allocating 2GB of dictionaries.
- Memory growth over time: take snapshots and compare them. Growing allocations = memory leaks.
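A minimal sketch of that snapshot-and-compare workflow, using the standard-library tracemalloc API. The "leak" here is a deliberately retained list standing in for whatever your real code holds onto:

```python
import tracemalloc

tracemalloc.start()  # begin recording allocations with stack traces

before = tracemalloc.take_snapshot()

# Simulated leak: allocations that stay alive between snapshots.
leaked = [dict.fromkeys(range(100)) for _ in range(1000)]

after = tracemalloc.take_snapshot()

# compare_to() returns StatisticDiff objects sorted by size delta,
# so the lines responsible for growth come first.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

Run this and the list comprehension's file and line number land at the top of the output, size delta attached. That diff between two snapshots, rather than a single absolute reading, is what turns "memory is growing" into "this line is growing."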
When External Tools Fall Short
I've debugged production leaks where memory_profiler made the app 10x slower and py-spy couldn't see Python's internal allocations. tracemalloc runs in-process with roughly 30% overhead (the docs say 30%; it feels like more) - painful but workable for debugging.
The killer feature is detailed stack traces. When your Flask app starts eating memory after 6 hours, tracemalloc tells you exactly which view function and line is the problem. External profilers just dump useless aggregate data on you.
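To actually get those detailed traces, pass a frame count to start() (the default is 1 frame, which is nearly useless for pinpointing a view function) and ask for statistics grouped by traceback. The handler below is a hypothetical stand-in for a leaky view:

```python
import tracemalloc

# Record up to 25 stack frames per allocation; the default of 1
# only tells you the allocating line, not who called it.
tracemalloc.start(25)

def handler():
    # Stand-in for a leaky view function (hypothetical).
    return [bytearray(1024) for _ in range(500)]

cache = handler()

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("traceback")[0]  # biggest allocation site
print(f"{top.count} blocks, {top.size / 1024:.1f} KiB")
for line in top.traceback.format():
    print(line)  # full call chain down to the allocating line
```

The formatted traceback reads like a normal Python traceback, so you can follow it from the request entry point down to the exact line allocating the memory.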
Production Reality Check
You need tracemalloc when:
- Memory usage keeps growing in long-running services
- Your app works locally but crashes in production after hours/days
- Memory profilers are too slow or miss Python-specific allocations
- You need to debug without deploying debug builds
Don't leave it running 24/7 in production - the performance hit adds up. Turn it on when things break, get your data, turn it off.
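One way to follow that on/off discipline is to wrap the whole thing in a helper that starts tracing, grabs a snapshot, and stops before returning. This is a sketch; in a real service you'd trigger it from a signal handler or an admin endpoint rather than running a workload inline, and the workload here is hypothetical:

```python
import tracemalloc

def capture_top_allocations(limit=10):
    """Trace briefly, sample, then stop - avoids paying the
    overhead 24/7. Sketch only; the workload is a stand-in."""
    tracemalloc.start()
    try:
        # Stand-in for letting real traffic run for a while.
        workload = [list(range(1000)) for _ in range(100)]
        snapshot = tracemalloc.take_snapshot()
    finally:
        tracemalloc.stop()  # tracing off again: no steady-state cost
    # The snapshot keeps its data even after tracing stops.
    return snapshot.statistics("lineno")[:limit], workload

stats, _ = capture_top_allocations()
for stat in stats:
    print(stat)
```

If you'd rather trace from process start without touching code, CPython also supports the PYTHONTRACEMALLOC environment variable and the -X tracemalloc command-line option - same on-demand spirit, just decided at launch.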