Continue runs in VS Code and JetBrains IDEs. It's got 28k stars on GitHub because it solves the one thing every developer hates about Copilot: you're not locked into Microsoft's models. It's open source, Apache 2.0 licensed, and you can swap in any AI model you want.
Why Open Source Actually Matters Here
Unlike Copilot (owned by Microsoft) or Cursor (venture-funded startup), Continue is Apache 2.0 licensed. This means you can audit the code, modify it, and you're not fucked if the company changes direction or gets acquired.
Here's why that matters: I use GPT-4 for heavy refactoring, Claude when I need it to actually think, and local Ollama models when I'm working on shit I can't send to the cloud. It also works with Gemini, Azure OpenAI, and Bedrock. Copilot? You get GPT and you'll fucking like it.
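That model switching lives in Continue's config file. Here's a sketch of what a multi-model setup like mine looks like in `config.yaml` - the field names follow the config reference, but treat the exact model IDs and keys as placeholders for whatever you actually run:

```yaml
# ~/.continue/config.yaml -- sketch; check the config reference for exact keys
name: my-setup
version: 0.0.1
models:
  - name: GPT-4 (heavy refactoring)
    provider: openai
    model: gpt-4o                    # placeholder model ID
    apiKey: YOUR_OPENAI_KEY
  - name: Claude (actual thinking)
    provider: anthropic
    model: claude-3-5-sonnet-latest  # placeholder model ID
    apiKey: YOUR_ANTHROPIC_KEY
  - name: Local Llama (private code)
    provider: ollama                 # talks to an Ollama server on localhost
    model: llama3.1:8b
```

You flip between them from the model dropdown in the sidebar. When the Ollama entry is selected, nothing leaves your machine.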
The Four Ways Continue Works (And When Each One Breaks)
Continue has four modes, each with a different hit rate:
Agent Mode - Tell it "implement user authentication" and it tries to do the whole thing. Works maybe 70% of the time for simple stuff, 30% for anything complex. When it nails it, you feel like a wizard. When it fucks up, you'll spend longer cleaning up its mess than doing the job yourself. Usually it gets confused about context, hallucinates imports that don't exist, or gets stuck in a loop trying to fix something that wasn't broken.
Chat - Ask it about your code. "Explain this function" or "why is this query slow" and it actually knows what you're talking about. The context awareness doesn't suck - it reads your actual project files instead of making you copy-paste everything like ChatGPT.
Inline Edit - Highlight code, tell it what to change, boom. This is the feature that actually works. Gets it right about 85% of the time for simple edits. Way better than the copy-paste dance with ChatGPT.
Autocomplete - Tab completion like Copilot, but you pick the model. Quality totally depends on what model you're running. Local models are slow as fuck but keep your code private. Cloud models are fast but send your shit to OpenAI.
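Because autocomplete fires on every keystroke, you usually want a small, fast model there even if chat uses a big one. A hedged sketch - `roles` is how the YAML config scopes a model to autocomplete, and the model name here is just an example of a small local coder model:

```yaml
models:
  - name: Fast local autocomplete
    provider: ollama
    model: qwen2.5-coder:1.5b   # example: tiny coder model, quick even on CPU
    roles: [autocomplete]       # only used for tab completion, not chat
```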
Team Setup (And Why Continue Hub Actually Works)
Continue Hub solves the "every developer configures things differently" problem. You can set team-wide model defaults, shared prompts, and API keys without micromanaging individual setups. Unlike most enterprise dashboards, it's actually useful.
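Hub configs are composed from shared blocks, so "team-wide defaults" means publishing a block once and referencing it everywhere. Roughly - and the org and block slugs below are made up for illustration - a team assistant that pulls in a shared model and prompt looks like:

```yaml
# Hub assistant config -- sketch; the slugs below are hypothetical
name: acme/team-assistant
version: 0.0.1
models:
  - uses: anthropic/claude-3-5-sonnet   # shared model block from the Hub
prompts:
  - uses: acme/code-review-prompt       # hypothetical team prompt block
```

Everyone on the team gets the same models and prompts without touching their local config.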
Here's the killer feature nobody talks about: MCP tools. Continue can create Linear tickets from your code, read GitLab repos, or grab docs from Confluence without you switching tabs. MCP (Model Context Protocol) is a standard Anthropic built that lets AI tools talk to external services, and Continue works with tons of MCP servers - databases, files, APIs, whatever. No other coding AI does this shit.
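Wiring an MCP server into Continue is a few lines of config. A sketch using the reference filesystem server (the package name is the official MCP example server; the path is a placeholder):

```yaml
mcpServers:
  - name: Project files
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/your/project"   # directory the server is allowed to read
```

Agent Mode can then call the server's tools directly; Linear, GitLab, and Confluence servers plug in the same way with a different command.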
Reality check: Enterprise adoption works because Continue doesn't lock you into one vendor. You can run local models, use your Azure deployment, or mix whatever works. Our compliance team loves this flexibility - no vendor lock-in means no vendor risk.
I learned this the hard way when GitHub Copilot went down for 6 hours last month and our whole team was dead in the water. With Continue, if OpenAI shits the bed, you just switch to Claude or your local models and keep coding.
Additional Resources:
- Continue Quick Start Guide - Interactive tutorial covering all features
- Complete configuration reference - YAML config documentation
- Model provider setup guides - Step-by-step for different AI providers
- Ollama local model integration - Using Llama 3.1 locally
- Continue Discord community - Active support and troubleshooting
- Continue GitHub Issues - Bug reports and feature requests