AWS CLI is the command-line tool for managing AWS resources without losing your mind. I manage 200+ AWS services from one terminal instead of juggling browser tabs like a maniac. The web console is fine for learning, but try provisioning 50 EC2 instances by clicking through dropdowns - you'll want to throw your laptop out the window.
Version 2 Fixed Everything That Was Broken
AWS CLI v2 is a complete rewrite that actually works. Version 1 was a Python dependency nightmare - half my debugging time was spent figuring out why boto3 was conflicting with system Python. Version 2 embeds its own Python runtime, so you don't have to deal with that bullshit anymore.
The big changes that actually matter:
- No more Python hell: Embedded interpreter means no dependency conflicts
- Proper authentication: SSO integration that doesn't require storing keys everywhere (quick sketch after this list)
- Sensible output: YAML support and pagination that doesn't break your terminal
- Auto-complete that works: Unlike v1's broken tab completion
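Here's what the v2 SSO flow looks like as a minimal sketch - the profile name is a placeholder, and the first command walks you through your start URL, account, and role interactively:

aws configure sso                       # one-time interactive setup
aws sso login --profile my-sso-profile  # opens a browser to authenticate
aws s3 ls --profile my-sso-profile      # short-lived credentials, no long-lived keys on disk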
Warning: v1 and v2 have different authentication behaviors. I learned this the hard way when my CI/CD pipeline broke during the migration. Always check your automation before switching.
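A quick way to audit what a box or image is actually running before you flip anything over (version strings here are illustrative):

aws --version
# v2 reports its embedded runtime, e.g. aws-cli/2.x.x Python/3.x.x ...
# v1 reports aws-cli/1.x.x plus whatever system Python it's riding on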
Every AWS Service in One Place
AWS has 200+ services and every single one is accessible through the CLI. Instead of learning 47 different web interfaces, you learn one command pattern: aws [service] [action] [options].
Some examples that save me daily:
aws s3 ls
- List S3 buckets instead of clicking through the slow-ass S3 console
aws ec2 describe-instances
- See all your EC2 instances without waiting for the EC2 page to load
aws iam list-users
- Check IAM users without navigating IAM's maze of menus
The consistency is actually helpful once you get used to it. If you know how to list S3 buckets, you can figure out how to list RDS instances or Lambda functions.
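If you want to see the symmetry for yourself, these are the equivalent listing commands for a few other services (all real subcommands, nothing exotic):

aws rds describe-db-instances   # RDS's flavor of "list my stuff"
aws lambda list-functions       # same idea for Lambda
aws dynamodb list-tables        # and for DynamoDB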
Automation That Actually Works
Practically every CI/CD tool on the planet shells out to AWS CLI because it's the most reliable way to script AWS operations. I've used it alongside GitHub Actions, GitLab CI, Jenkins, and Terraform - it just works everywhere.
The exit codes are standardized, so your scripts can actually handle errors properly. When something fails, you get a real error message instead of a generic "something went wrong" from a web interface.
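Here's the shape of that error handling in a deploy script - a minimal sketch, with the bucket and artifact names as placeholders:

# abort the deploy if the upload fails; the CLI's nonzero exit code drives the branch
if ! aws s3 cp ./app.zip s3://my-deploy-bucket/app.zip; then
    echo "Upload failed, aborting deploy" >&2
    exit 1
fi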
Real production lesson: Always set explicit regions in your automation. AWS CLI will guess your region and guess wrong at the worst possible moment. I spent 3 hours debugging a deployment that was creating resources in us-east-1 instead of eu-west-1 because I forgot --region in one command.
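The fix is boring but effective: pass --region on every call your automation makes. The bucket and region below are placeholders for whatever your deployment actually targets:

aws s3 mb s3://my-deploy-artifacts --region eu-west-1
aws ec2 describe-instances --region eu-west-1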
Another gotcha: AWS CLI versions can break in subtle ways. Always pin CLI versions in Docker images - we've had pipelines fail when AWS pushed updates that changed authentication behavior. I learned this the hard way when a minor update broke our deployment scripts that had worked fine for months.
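One way to pin it in a Docker image - the release number below is illustrative; AWS publishes versioned bundles at this URL pattern, so use whichever version you've actually tested against:

# Dockerfile snippet: install a pinned AWS CLI v2 release instead of whatever is latest
RUN curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.15.30.zip" -o awscliv2.zip \
    && unzip -q awscliv2.zip \
    && ./aws/install \
    && rm -rf aws awscliv2.zip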
Pro tip: Set AWS_DEFAULT_REGION in your shell profile. I'm tired of accidentally creating resources in the wrong region because I forgot --region.
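Two lines in ~/.bashrc (or ~/.zshrc) and the guessing stops - the region here is just an example:

export AWS_DEFAULT_REGION=eu-west-1
export AWS_REGION=eu-west-1   # some tools and SDKs read this variant instead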
The Output Format You'll Actually Use
Default JSON output looks like garbage, but it's pipeable to jq for filtering. The --output table format is what you want for human-readable results. YAML output is useful if you're feeding data into other tools.
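A few flavors of the same idea (what comes back depends on what's in your account):

aws ec2 describe-instances --output table          # human-readable grid
aws s3api list-buckets --output yaml               # YAML, handy for other tools
aws s3api list-buckets | jq -r '.Buckets[].Name'   # raw JSON piped through jq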
Pro tip: Learn JMESPath queries with the --query parameter. The syntax looks like someone sneezed on a keyboard. I hate it, but it trims the output so well that I learned it anyway:
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,State.Name]' --output table
The --query filter runs client-side, so it doesn't shrink what AWS sends over the wire - what it does is cut the output down to exactly the fields your script needs instead of dumping everything through jq afterwards. Worth the pain once you stop cursing at the bracket syntax.
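And when you genuinely want less data coming back from AWS, pair it with a service-side filter - here's a sketch combining EC2's --filters (applied on AWS's end) with --query (applied by the CLI):

aws ec2 describe-instances \
    --filters 'Name=instance-state-name,Values=running' \
    --query 'Reservations[].Instances[].[InstanceId,InstanceType]' \
    --output table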