Aider is a terminal-based coding assistant that doesn't make you tab between browser and editor like some kind of caveman. It reads your codebase, edits files directly, and commits everything to git automatically. Think of it as GitHub Copilot meets ChatGPT but actually integrated with your development workflow.
I've been using it for about 6 months now. It's weird at first - the auto-commit thing scared the shit out of me - but now I can't go back to manual copy-paste.
Works with pretty much every language. Python, JavaScript, Rust, whatever. If you can git commit it, Aider can probably edit it. Been around long enough to have 37k+ GitHub stars and people actually use this thing daily. The community is active enough that you'll get real answers when things break.
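If you want to kick the tires, install is basically a one-liner. This assumes you have Python 3 and pip; `aider-chat` is the PyPI package name (the `aider` name was taken):

```shell
# Install Aider into your current Python environment
python -m pip install aider-chat

# Run it from the root of any git repo and it picks up the codebase from there
cd /path/to/your/repo
aider
```
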
What Makes It Different
It Reads Your Whole Codebase: Aider builds a repository map so it knows what functions exist where. Copilot only sees whatever file you have open. Aider actually reads your whole damn codebase and understands how your controller depends on that random utility class three directories deep. It uses tree-sitter to parse code structure (earlier versions used ctags).
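You can look at the repo map yourself instead of taking my word for it. If memory serves, there's a flag that dumps it and exits (treat the exact flag name as an assumption and check `aider --help` if it complains):

```shell
# Print the repository map Aider would send to the model, then exit
aider --show-repo-map
```
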
Git Integration That Actually Works: Every change gets auto-committed with a descriptive message. Sounds scary at first but you'll love it. Easy to `git reset --hard` when things go sideways, which they will.
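Here's the escape hatch in practice. This is a self-contained sketch in a throwaway repo, with a hand-made commit standing in for the one Aider just made that you hate:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name you

# A known-good state
echo 'def login(): ...' > app.py
git add . && git commit -qm "working login"

# Pretend this next commit is the one Aider auto-made for you
echo 'def login(): broken' > app.py
git add . && git commit -qm "refactor login flow"

# Nuke the AI commit, keep everything before it
git reset -q --hard HEAD~1
cat app.py   # prints: def login(): ...
```

Because every Aider change is its own commit, rollback is always exactly this granular: one `reset` per bad idea.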
Model Choice Freedom: Works with 50+ language models. I mainly use Claude and sometimes try DeepSeek when I'm broke. Claude costs more but saves me time because I'm not re-running prompts. Also supports OpenAI, local models, and Azure OpenAI.
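Switching models is just a flag. The model aliases below are examples rather than gospel; what actually works depends on which API keys you've exported, and I believe there's a `--list-models` search to check what Aider recognizes:

```shell
# Anthropic (needs ANTHROPIC_API_KEY in the environment)
aider --model sonnet

# DeepSeek, for when the budget says so (needs DEEPSEEK_API_KEY)
aider --model deepseek

# Search the model names Aider knows about for a provider
aider --list-models openai/
```
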
Terminal-First Design: Runs in your terminal, works with any editor. No VSCode extensions to break, no IDE dependencies. If you're comfortable with git commands, you'll be fine.
Voice Commands: Built-in voice-to-code supposedly works. Haven't tried it yet - talking to my computer feels weird.
Visual Context: Feed it screenshots and URLs. Point it at a design mockup or error message and watch it figure out the fix.
What Actually Happens
Aider ranks high on SWE-Bench, which tests AI tools against real GitHub issues. It's legitimately good at understanding code context, but don't expect miracles. Complex refactoring still needs human oversight, and it'll cheerfully commit a function that returns None instead of handling your edge case.
The auto-commit thing freaks people out initially. Trust the process - having granular git history of AI changes is way better than manually tracking what worked and what didn't. Check out the commit examples to see what automated commit messages look like.
Aider struggles with:
- Monorepos where everything imports everything (it gets confused about dependencies)
- Generated code that looks like human code (migrations, Prisma schemas, etc.)
- Codebases with inconsistent naming conventions (it picks up bad patterns)
- Projects with complex build systems (Docker multi-stage builds break its understanding)
- Binary files and compiled languages without proper tooling
- Projects with non-standard directory structures
We had some weird OAuth issue after using Aider. Took us forever to figure out it was a regex that got 'optimized.' Safari users couldn't log in for like 2 days. I still don't fully understand what that regex was doing, but apparently it was important.
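I can't show you our actual regex, but here's the shape of the failure: a character class that looks redundant gets "simplified" away, and a whole category of inputs silently stops matching. Everything below is a hypothetical stand-in, not our real validation code:

```shell
orig='^[A-Za-z0-9._%+-]+@example\.com$'   # the ugly-but-correct original
tidy='^[A-Za-z0-9]+@example\.com$'        # the "optimized" version in Aider's diff
addr='first.last@example.com'             # a perfectly normal user

echo "$addr" | grep -Eq "$orig" && echo "original: login ok"
echo "$addr" | grep -Eq "$tidy" || echo "optimized: login broken"
```

Both regexes look sane in a diff at 5 p.m. on a Friday, which is the whole problem. Moral: read the diffs on anything regex-shaped before you let the auto-commit stand.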