The free tier's 5 searches per day are designed to piss you off into upgrading. I hit that limit by 10 AM when researching anything that matters - like the time a Next.js update broke our build pipeline and I needed to figure out whether it was our code or yet another framework fuckup.
This isn't some marketing gimmick - the search limit really does kill productivity. I learned that the hard way researching the SVB collapse: hit the free limit after 3 queries and had to wait until the next day to continue, because I wasn't about to pay $20/month to read about bank failures. That resolve lasted exactly one more incident, when TypeScript 5.3 dropped and broke half our type definitions.
What You Actually Get for $20/Month
You get access to GPT-4, Claude 3.5, and Gemini in one interface. This is actually convenient - I use GPT-4 when I need creative bullshit or I'm debugging weird edge cases, Claude when I want fewer hallucinations and more accurate code analysis, and Gemini for analyzing screenshots of error messages. Beats paying for ChatGPT Plus and Claude Pro separately at $40/month total.
The search results include actual citations to real sources, unlike ChatGPT, which just makes up links that 404. I've never caught Perplexity fabricating a source, though it sometimes cites some random dev blog from 2019 instead of current documentation. Real-time web access is the killer feature - especially when you're debugging something that broke 2 hours ago and Stack Overflow hasn't caught up yet.
File Upload Works (Mostly)
You can upload PDFs and it'll analyze them right in the interface. Works great for research papers and financial documents. Terrible for complex spreadsheets or anything with weird formatting. I've had it completely miss tables in PDFs while confidently analyzing the wrong data - like the time I uploaded a PostgreSQL performance report and it treated the column headers as actual metrics.
The file analysis combines your document with current web search, which is useful when you're reading old reports and want to know what's changed since publication. Saved me hours when analyzing outdated market research from Q1 2024 - it pulled in current data automatically instead of me having to cross-reference 15 different sources.
But here's the gotcha nobody mentions: file upload has a 50MB limit that isn't documented anywhere obvious. Found this out trying to upload a complex quarterly report - no error message, just silent failure. Wasted 20 minutes thinking my internet was broken before realizing the file was 62MB.
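If you're scripting uploads or just tired of guessing, a pre-flight size check turns that silent failure into a loud one. This is a minimal sketch assuming the ~50MB cap I ran into actually holds - the limit constant, the helper name, and the file name are my own, not anything Perplexity documents:

```typescript
import { statSync } from "node:fs";

// ~50MB - observed behavior, not a documented limit; adjust if it changes
const ASSUMED_UPLOAD_LIMIT_BYTES = 50 * 1024 * 1024;

function assertUploadable(path: string): void {
  const { size } = statSync(path);
  if (size > ASSUMED_UPLOAD_LIMIT_BYTES) {
    // Perplexity drops oversized files without any error, so fail loudly here instead
    const mb = (size / 1024 / 1024).toFixed(1);
    throw new Error(`${path} is ${mb}MB; uploads over ~50MB appear to fail silently`);
  }
}

assertUploadable("q3-report.pdf"); // hypothetical file - the 62MB one would throw here
```

Thirty seconds of scripting beats twenty minutes of blaming your router.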
Multiple AI Models in Practice
GPT-4: Good for creative bullshit and complex reasoning when you need to think through weird edge cases. Sometimes verbose as hell and tries too hard to be helpful when you just want a straight answer. Model availability changes without warning - GPT-4 went dark for 6 hours last month during a critical deadline with zero notification. Classic.
Claude: Better at following instructions precisely without going off on tangents. Doesn't hallucinate random function names or make up APIs that don't exist. My go-to for anything requiring accuracy, like when I need to understand someone else's shitty code documentation.
Gemini: Handles images well when it feels like working. Performance varies wildly - sometimes brilliant at reading screenshots, sometimes can't tell the difference between a console error and a success message. Great for analyzing error screenshots but completely useless for scanned documents.
The ability to switch models mid-conversation is actually useful. Start with Claude for research, switch to GPT-4 for creative work or brainstorming solutions, use Gemini if you need image analysis. This flexibility beats being locked into one model's quirks and blind spots - at least when they're all actually available.
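For what it's worth, my routing heuristic fits in a switch statement. This is just my own shorthand for the workflow above - the task labels and pickModel() don't correspond to any actual Perplexity setting or API:

```typescript
// Personal routing heuristic, not a Perplexity feature
type Task = "code-analysis" | "brainstorming" | "screenshot-triage";

function pickModel(task: Task): "Claude" | "GPT-4" | "Gemini" {
  switch (task) {
    case "code-analysis":
      return "Claude"; // precise, doesn't invent APIs
    case "brainstorming":
      return "GPT-4"; // creative, verbose, good for weird edge cases
    case "screenshot-triage":
      return "Gemini"; // image analysis, availability permitting
  }
}

console.log(pickModel("code-analysis")); // -> "Claude"
```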