I'm not gonna sugarcoat this - this is where shit gets real. Done this setup probably 60+ times, and it's never gone smoothly. Ever. But I've learned which commands actually work and which ones send you down 4-hour rabbit holes of pain.

Phase 1: Don't Fuck Up The Basics
Linux Users: You Have It Easy (Relatively)
If you're on Ubuntu 22.04, copy this exactly:
## This works every time, no exceptions
sudo apt update && sudo apt upgrade -y
## Build tools - without these, everything breaks later
sudo apt install -y build-essential curl wget git vim
## Python dev headers - trust me, you need these
sudo apt install -y python3-dev python3-pip python3-venv
## Check it worked
python3 --version # Should show 3.10.x
If any of that shits the bed, your system is borked and you need to fix it before wasting more time.
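If you'd rather not eyeball version strings, here's a tiny sketch that fails loudly when the interpreter is too old. The (3, 10) floor is just an assumption based on Ubuntu 22.04's default Python; adjust to taste.

```python
# check_python.py - fail loudly if the interpreter is too old.
# The (3, 10) minimum is an assumption based on Ubuntu 22.04's default.
import sys

def python_ok(minimum=(3, 10)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

if __name__ == "__main__":
    if python_ok():
        print(f"OK: Python {sys.version_info.major}.{sys.version_info.minor}")
    else:
        sys.exit(f"Too old: {sys.version.split()[0]} - need 3.10+")
```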
Windows Users: Welcome to Hell (But Manageable Hell)
WSL2 is the only way to stay sane:
- Install Python from python.org - check "Add to PATH" or hate yourself later
- Get Git from gitforwindows.org
- Enable WSL2: Open PowerShell as admin, run:
wsl --install -d Ubuntu-22.04
- Restart your computer - yes, you actually have to reboot like it's 1995
Reality check: This will take 30+ minutes. Go make coffee.
macOS: Pretty But Painful
Homebrew is mandatory, everything else is optional:
## Install Homebrew - you'll be asked for your password 47 times
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
## Actually useful stuff
brew install python@3.11 git wget
## Verify it worked
which python3 # Apple Silicon: /opt/homebrew/bin/python3 (Intel Macs: /usr/local/bin/python3)

Phase 2: GPU Drivers (AKA: Enter the Thunderdome)
This is where dreams go to die. NVIDIA drivers are simultaneously essential and the most fragile thing on your system.
Linux: The \"Easy\" Path
## Check if your GPU is even recognized
nvidia-smi
If that command works, you're golden. If not:
## Nuclear option that usually works
sudo ubuntu-drivers autoinstall
sudo reboot # Yes, you have to restart
If it still doesn't work: Welcome to driver hell. Spent an entire goddamn weekend in 2023 fighting NVIDIA driver 535.86. Installed perfectly, then refused to acknowledge CUDA existed. Solution? Downgraded to 535.54 and it magically worked. Sometimes newer means more broken. Check NVIDIA's docs, sacrifice a small animal, try again.
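When you're scripting around flaky drivers, it helps to check for nvidia-smi without exploding. This is a hedged sketch that wraps the real tool rather than replacing it, and degrades gracefully to None on machines with no working driver:

```python
# gpu_check.py - sketch: probe for a working NVIDIA driver, never crash
import shutil
import subprocess

def driver_version():
    """Return the NVIDIA driver version string, or None if nvidia-smi
    is missing or fails (i.e., you're in driver hell)."""
    if shutil.which("nvidia-smi") is None:
        return None
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=driver_version",
             "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10,
        )
    except (OSError, subprocess.TimeoutExpired):
        return None
    if out.returncode != 0:
        return None
    return out.stdout.strip() or None

if __name__ == "__main__":
    v = driver_version()
    print(f"Driver: {v}" if v else "No working NVIDIA driver found")
```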
Windows: Download Roulette
Go to NVIDIA's site and download whatever they recommend. It'll probably work. Maybe. If it doesn't, uninstall everything and try again.
Pro tip: Use DDU if you need to completely nuke old drivers.
Phase 3: Conda (Because pip Will Eventually Betray You)

Install Miniconda (Not the Full Anaconda Bloatware)
## Download the installer
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
## Install it silently
bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3
## Initialize conda (this modifies your .bashrc)
$HOME/miniconda3/bin/conda init bash
source ~/.bashrc
## Verify it worked
conda --version
Create Your AI Environment (And Pray It Doesn't Break)
## Create environment with specific Python version
conda create -n ai-dev python=3.11 -y
conda activate ai-dev
## Base packages that don't conflict (yet)
conda install -c conda-forge numpy pandas matplotlib jupyter -y
## Check that it actually worked
python -c \"import numpy; print('NumPy version:', numpy.__version__)\"
When this inevitably fucks up: Nuke the environment (conda env remove -n ai-dev) and start fresh. Trust me, it's faster than debugging conda's temperamental dependency resolver. Learned this after wasting 6 hours trying to unfuck a corrupted environment that took 10 minutes to recreate. Don't be as stubborn as me.
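Half of "broken environment" reports are really "wrong environment activated." Here's a quick sketch that checks whether you're actually inside the env you think you are; the "ai-dev" name matches the environment created above.

```python
# env_check.py - sketch: confirm the shell actually activated ai-dev
import os
import sys

def active_conda_env():
    """Return the active conda env name, or None if conda isn't active."""
    return os.environ.get("CONDA_DEFAULT_ENV")

def running_inside(env_name):
    """True if this interpreter lives inside the named conda env."""
    return (active_conda_env() == env_name
            or f"envs/{env_name}" in sys.prefix.replace("\\", "/"))

if __name__ == "__main__":
    env = active_conda_env()
    print(f"Active env: {env}" if env
          else "No conda env active - did you forget conda activate?")
```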
Phase 4: The Framework Gauntlet (Where Everything Goes Wrong)
This is where boys become men and men become alcoholics. Installing AI frameworks is like playing Russian roulette with dependency hell. One wrong version number and you're reinstalling your entire OS.


TensorFlow: Google's Gift and Curse
TensorFlow 2.15 with GPU support:
## This command looks simple. It's not.
pip install tensorflow[and-cuda]==2.15.0
## Test if you got lucky
python -c \"import tensorflow as tf; print('TF version:', tf.__version__); print('GPUs:', tf.config.list_physical_devices('GPU'))\"
If you see GPUs listed: Celebrate. You beat the odds.
If you see an empty list: Welcome to CUDA hell. Check your CUDA version compatibility.
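Before diving into that hell, ask TensorFlow itself which CUDA it was compiled against. This sketch uses tf.sysconfig.get_build_info() and degrades gracefully if TF won't even import (CPU-only builds may not report CUDA keys at all):

```python
# tf_build_check.py - sketch: ask TF which CUDA it was built against
def tf_cuda_report():
    """Return a human-readable line about TF's CUDA build, or an
    explanation string if TF isn't importable. Never raises."""
    try:
        import tensorflow as tf
    except ImportError as e:
        return f"TensorFlow not importable: {e}"
    info = tf.sysconfig.get_build_info()  # keys vary between builds
    cuda = info.get("cuda_version", "unknown")
    cudnn = info.get("cudnn_version", "unknown")
    return f"TF {tf.__version__} built for CUDA {cuda}, cuDNN {cudnn}"

if __name__ == "__main__":
    print(tf_cuda_report())
```

Compare that CUDA version against what nvidia-smi reports; a mismatch explains the empty GPU list.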
PyTorch: Facebook's \"Easier\" Alternative
## Go to pytorch.org and get the exact command for your CUDA version
## DO NOT GUESS. I repeat: DO NOT GUESS.
## For CUDA 12.1 (check with nvidia-smi):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
## Test it
python -c \"import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')\"
If CUDA shows False: Your PyTorch version doesn't match your CUDA version. Uninstall and try again with the right index URL.
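The mental model, sketched as code: the driver's supported CUDA version must be at least the version the framework was built against (newer drivers run older CUDA builds, not the other way around). That rule of thumb is an assumption, not gospel, but it catches the common mismatch. torch.version.cuda gives the built-with side; the nvidia-smi header shows what the driver supports.

```python
# cuda_match.py - hedged sketch: compare built-with vs driver CUDA versions
def parse_cuda(version):
    """'12.1' -> (12, 1). Raises ValueError on garbage input."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def driver_covers(built_with, driver_supports):
    """True if a driver supporting `driver_supports` can run a framework
    built against `built_with` (newer drivers cover older builds)."""
    return parse_cuda(driver_supports) >= parse_cuda(built_with)
```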
JAX: For Masochists Only
## Only install if you enjoy pain
pip install \"jax[cuda12_pip]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
JAX is fast when it works. It works 60% of the time, every time.
Phase 5: The Supporting Cast (More Things to Break)
## Data science basics (these usually work)
pip install scikit-learn seaborn plotly
## NLP stack (Hugging Face ecosystem)
pip install transformers datasets tokenizers accelerate
## Computer vision tools
pip install opencv-python Pillow albumentations
## MLOps tools (prepare for config hell)
pip install mlflow wandb tensorboard optuna
## Web frameworks (for demos that impress nobody)
pip install fastapi uvicorn streamlit gradio
Reality Check: At least 3 of these will have dependency conflicts. Use pip install --no-deps if you need to force something, then deal with the consequences later.
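When the conflicts hit, check what actually got installed before blaming pip. A minimal sketch using the stdlib's importlib.metadata (the package names in the loop are just the ones from this phase):

```python
# dep_report.py - sketch: see what actually got installed
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

if __name__ == "__main__":
    for name in ["scikit-learn", "transformers", "opencv-python",
                 "mlflow", "fastapi"]:
        v = installed_version(name)
        print(f"{name}: {v or 'MISSING'}")
```

Note that this takes distribution names (opencv-python), not import names (cv2); the two rarely match when you need them to.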
Phase 6: VS Code (The One Thing That Usually Works)
Download VS Code - it's free and doesn't suck.
Essential extensions that won't slow everything down:
- Python Extension Pack - for Python development (duh)
- Jupyter - for notebook integration that actually works
- GitHub Copilot - $10/month but worth every penny
- Remote - SSH - for when you need to work on actual servers
- Docker - for containerized development (optional but recommended)

Setting up the Python interpreter:
- Open VS Code in any folder
- Ctrl+Shift+P → "Python: Select Interpreter"
- Pick ~/miniconda3/envs/ai-dev/bin/python
- If it's not there, your conda environment is broken
Jupyter Setup (For Interactive Development)
## Install JupyterLab (not the old Jupyter Notebook)
pip install jupyterlab ipywidgets
## Optional git integration (buggy but sometimes useful)
pip install jupyterlab-git
## Start it up
jupyter lab
Jupyter will open in your browser at localhost:8888. If it doesn't, check the terminal output for the actual URL with the token.
Pro tip: Use Jupyter for exploration, VS Code for actual development. Don't write production code in notebooks.
Phase 7: Docker and MLOps Setup
Install Docker
Linux:
## Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
## Add user to docker group
sudo usermod -aG docker $USER
newgrp docker
## Install NVIDIA Container Toolkit for GPU access
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
## Wire the toolkit into Docker, then restart
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Windows/macOS: Install Docker Desktop with GPU support enabled. On Windows, Docker Desktop loves to randomly reset its file sharing permissions, breaking mounted volumes. I've rebuilt containers more times than I can count because of this. Pro tip: check Docker settings first before assuming your code is broken.
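Since the daemon dies silently so often, it's worth a fail-fast probe before you assume your code is broken. A sketch that checks both that the docker CLI exists and that the daemon actually answers:

```python
# docker_check.py - sketch: fail fast before blaming your code
import shutil
import subprocess

def docker_ready():
    """Return True if the docker CLI exists and the daemon responds."""
    if shutil.which("docker") is None:
        return False
    try:
        out = subprocess.run(["docker", "info"],
                             capture_output=True, timeout=15)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return out.returncode == 0

if __name__ == "__main__":
    print("Docker OK" if docker_ready()
          else "Docker missing or daemon not running")
```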
Set Up Git and GitHub
## Configure Git
git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
## Generate SSH key for GitHub
ssh-keygen -t ed25519 -C "your.email@example.com"
## Add SSH key to ssh-agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
## Display public key (add this to GitHub)
cat ~/.ssh/id_ed25519.pub
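If cat complains, confirm the key actually exists before pasting anything into GitHub. A trivial sketch, assuming the default ed25519 path from the ssh-keygen command above:

```python
# key_check.py - sketch: confirm the key pair exists before pasting it
from pathlib import Path

def public_key(path="~/.ssh/id_ed25519.pub"):
    """Return the public key text, or None if it doesn't exist yet."""
    p = Path(path).expanduser()
    return p.read_text().strip() if p.is_file() else None

if __name__ == "__main__":
    key = public_key()
    print(key if key else "No key found - run ssh-keygen first")
```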
Phase 8: Cloud Integration (Optional)
AWS CLI Setup
## Install AWS CLI (pip gets you v1; AWS's bundled installer gives you v2)
pip install awscli
## Configure AWS (requires AWS account)
aws configure
Google Cloud Setup
## Install Google Cloud CLI (it's not on pip - use Google's installer script)
curl https://sdk.cloud.google.com | bash
## Restart your shell so gcloud lands on PATH
exec -l $SHELL
## Initialize gcloud
gcloud init
The Moment of Truth: Testing Your Frankenstack
Time to see if all this pain was worth it. Save this script and run it:
## test_environment.py - Your moment of truth
def test_import(name, import_as=None):
    """Test if a package imports without exploding"""
    import_name = import_as or name
    try:
        __import__(import_name)
        print(f"✅ {name} works")
        return True
    except ImportError as e:
        print(f"❌ {name} failed: {e}")
        return False

print("🧪 Testing your AI environment...")
print("=" * 50)

## Critical packages
tests = [
    ('numpy', 'numpy'),
    ('pandas', 'pandas'),
    ('tensorflow', 'tensorflow'),
    ('torch', 'torch'),
    ('sklearn', 'sklearn'),
    ('transformers', 'transformers'),
    ('cv2', 'cv2'),  # OpenCV is always annoying
]

failed = []
for pkg, imp in tests:
    if not test_import(pkg, imp):
        failed.append(pkg)
print("=" * 50)

## GPU torture test
print("🔥 GPU Status Check:")
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices('GPU')
    print(f"TensorFlow sees {len(gpus)} GPU(s)")
    import torch
    cuda_available = torch.cuda.is_available()
    print(f"PyTorch CUDA: {'✅ Available' if cuda_available else '❌ Broken'}")
    if cuda_available:
        print(f"PyTorch GPU count: {torch.cuda.device_count()}")
        print(f"Current GPU: {torch.cuda.get_device_name()}")
except Exception as e:
    print(f"💀 GPU test exploded: {e}")
print("=" * 50)

if not failed:
    print("🎉 Holy shit, everything works!")
    print("Time to train some models and lose money on compute.")
else:
    print(f"💥 {len(failed)} packages failed: {failed}")
    print("Welcome to dependency hell. Good luck.")
Run it:
python test_environment.py
If everything shows green checkmarks: Congratulations, you beat the odds.
If you see red X's: Time to debug. Check the error messages, Google furiously, maybe cry a little.
That's it. You now have an AI development environment that works (hopefully) and will break spectacularly when you least expect it. Enjoy!