Look, I used to think supply chain attacks were this theoretical thing that happened to other people. Then SolarWinds got hit, and suddenly I'm spending my weekends auditing every build script we've ever written.
SolarWinds (2020) - The One That Woke Everyone Up
This one still keeps me up at night. Russian state hackers got into SolarWinds' build environment - not the source repository, the actual build servers - and injected malicious code into the product during compilation. For nine months, SolarWinds was basically a government-approved malware distribution service.
The compromised update was downloaded by as many as 18,000 organizations worldwide, including multiple U.S. government agencies.
The scary part? Everything looked legitimate. The malware was properly signed with SolarWinds' certificates, passed all security scans, and behaved like normal software until it was time to phone home. Even the Pentagon got infected through normal software updates.
What actually happened: attackers compromised the build environment and swapped in malicious source files while the code was being compiled. Not the repo - the build server itself. The SUNBURST backdoor was inserted during the build process and the original files were put back afterward, so version control never showed a thing.
The attack showed why CI/CD systems are prime targets: they have broad access across infrastructure, and everyone implicitly trusts what comes out of them. CrowdStrike's analysis of the SUNSPOT implant revealed just how precisely the malware targeted the build process.
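The defense that would have caught this is almost boring: reproducible builds. Build the same commit on two runners that don't share infrastructure, then diff the artifact hashes - if one build host is injecting code at compile time, the outputs won't match. Here's a minimal sketch of the comparison step; the paths and the dual-runner setup are yours to supply, and it only pays off if your build is deterministic (timestamps, build IDs, and archive ordering all have to be stripped first).

```python
# Minimal sketch: compare artifact hashes from two independent builds of the
# same commit. If one build host is injecting code at compile time, the
# artifacts will differ. Directory layout here is hypothetical.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts don't load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def compare_builds(dir_a: Path, dir_b: Path) -> bool:
    """Return True only if both build outputs contain identical files."""
    files_a = {p.relative_to(dir_a): sha256_of(p) for p in dir_a.rglob("*") if p.is_file()}
    files_b = {p.relative_to(dir_b): sha256_of(p) for p in dir_b.rglob("*") if p.is_file()}
    if files_a.keys() != files_b.keys():
        print("Build outputs contain different files - investigate before release.")
        return False
    mismatches = [str(name) for name in files_a if files_a[name] != files_b[name]]
    for name in mismatches:
        print(f"HASH MISMATCH: {name}")
    return not mismatches

if __name__ == "__main__":
    # Usage: python compare_builds.py ./artifacts-runner-a ./artifacts-runner-b
    sys.exit(0 if compare_builds(Path(sys.argv[1]), Path(sys.argv[2])) else 1)
```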
Codecov (2021) - When Code Coverage Became Secret Harvesting
This one hit close to home because we were using Codecov too. Their bash uploader got modified to steal environment variables from every CI/CD run. Two months. That's how long every build leaked secrets before anyone noticed.
The Codecov incident report shows how supply chain attacks can stay hidden for months. CISA's cybersecurity advisory helped organizations assess potential exposure.
The worst part was watching the incident reports roll in. HashiCorp, Twilio, dozens of companies I recognized - all compromised through the same vector. We spent a weekend rotating every credential that might have touched a Codecov build.
What made it so effective was the perfect trust relationship. We're installing a tool specifically to analyze our code, giving it access to our entire build environment, and assuming it'll only do what it's supposed to do. Classic supply chain exploitation - abuse existing trust relationships.
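The lesson I actually took away: never pipe a remote script into bash without checking it against a digest pinned somewhere the attacker can't also modify. A rough sketch of that check - the uploader URL is the one Codecov used at the time, but treat the checksum handling as illustrative rather than their exact documented procedure.

```python
# Sketch: fetch a third-party CI script and refuse to run it unless its SHA-256
# matches a digest reviewed and committed into this repo. Pinning the digest
# next to your code matters - fetching it from the same server as the script
# just means the attacker updates both.
import hashlib
import subprocess
import urllib.request

SCRIPT_URL = "https://codecov.io/bash"
EXPECTED_SHA256 = "replace-with-the-digest-you-reviewed-and-committed"

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

def main() -> None:
    script = fetch(SCRIPT_URL)
    actual = hashlib.sha256(script).hexdigest()
    if actual != EXPECTED_SHA256:
        raise SystemExit(f"Uploader digest {actual} doesn't match the pinned value - aborting.")
    # Only hand the script to bash after the digest check passes.
    subprocess.run(["bash", "-s"], input=script, check=True)

if __name__ == "__main__":
    main()
```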
The NPM Package That Ruined My Weekend (2021)
You know that feeling when you're debugging something for hours and then realize the problem isn't your code? That was me when the ua-parser-js package got compromised.
This package gets downloaded millions of times a week - basically every JavaScript project uses it directly or indirectly. Someone stole the maintainer's NPM credentials and pushed versions with cryptocurrency miners. If you had automatic dependency updates enabled (and you should), congratulations, you just deployed malware to production.
I spent two days figuring out why our staging environment was suddenly slow as hell, then another day explaining to leadership why our deployment pipeline downloaded and ran mining software. Fun times.
The scary thing about this attack was how perfectly it exploited our automation. The NPM security advisory and detailed analysis from Snyk showed the scope. We built all these great systems for automatic updates, continuous deployment, fast iterations - and the attackers just rode that wave straight into production.
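If you're wondering whether you were exposed, the lockfile will tell you. Here's a quick sketch that walks package-lock.json looking for the versions flagged in the advisory - 0.7.29, 0.8.0, and 1.0.0 as I remember them, but verify that list against the advisory itself before relying on it.

```python
# Sketch: flag any resolved ua-parser-js version that's on a deny-list.
# Handles both the old nested "dependencies" layout and the newer "packages"
# layout of package-lock.json. Verify the version list against the advisory.
import json
import sys

PACKAGE = "ua-parser-js"
BAD_VERSIONS = {"0.7.29", "0.8.0", "1.0.0"}

def walk(deps: dict, hits: list, path: str = "") -> None:
    """Recurse through a lockfile v1 'dependencies' tree."""
    for name, meta in deps.items():
        if name == PACKAGE and meta.get("version") in BAD_VERSIONS:
            hits.append(f"{path}{name}@{meta['version']}")
        walk(meta.get("dependencies", {}), hits, path=f"{path}{name} > ")

def main(lockfile: str) -> None:
    with open(lockfile) as f:
        lock = json.load(f)
    hits: list = []
    walk(lock.get("dependencies", {}), hits)
    # Lockfile v2/v3 also lists every installed package under "packages",
    # keyed by its node_modules path.
    for name, meta in lock.get("packages", {}).items():
        if name.endswith(f"/{PACKAGE}") and meta.get("version") in BAD_VERSIONS:
            hits.append(f"{name}@{meta['version']}")
    if hits:
        print("Known-bad ua-parser-js versions in the lockfile:")
        print("\n".join(hits))
        sys.exit(1)
    print("No known-bad ua-parser-js versions found.")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "package-lock.json")
```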
The Current Stuff That's Breaking My Brain
This new wave of attacks is getting smarter. The Ultralytics computer vision library got hit in December 2024 - same playbook, but aimed squarely at AI/ML workflows. Attackers know those teams have expensive GPU clusters and less mature security practices.
The compromised releases looked completely normal until you actually put them to work, then they'd quietly run a cryptominer in the background. Clever timing - who's monitoring resource usage during a training run? It's expected to be high.
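The boring mitigation is hash-pinning: if pip will only install artifacts whose digests you've already reviewed and committed, a freshly published malicious release can't ride an automatic upgrade into your training environment. A sketch of generating those pins - the package spec and directory are placeholders, not a claim about which ultralytics versions are safe.

```python
# Sketch: emit a requirements line with pinned SHA-256 hashes, suitable for
# `pip install --require-hashes -r requirements.txt`, which rejects anything
# that doesn't match. Fetch the wheels you want to review first, e.g.:
#   pip download "ultralytics==8.3.40" --no-deps -d ./wheels
# (the version above is only a placeholder for whatever you've vetted).
import hashlib
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def hashed_requirement(spec: str, wheel_dir: str) -> str:
    """spec looks like 'ultralytics==8.3.40'; wheel_dir holds the downloaded wheels."""
    name = spec.split("==")[0].lower().replace("-", "_")
    wheels = [p for p in Path(wheel_dir).glob("*.whl") if p.name.lower().startswith(name)]
    hashes = " ".join(f"--hash=sha256:{sha256(p)}" for p in wheels)
    return f"{spec} {hashes}"

if __name__ == "__main__":
    # Usage: python pin_hashes.py "ultralytics==8.3.40" ./wheels >> requirements.txt
    print(hashed_requirement(sys.argv[1], sys.argv[2]))
```

With every dependency pinned that way, a compromised release only reaches you after a human has looked at the diff and updated the hash.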
Why Most Security Solutions Miss the Point
Look, I get it. Security vendors need to sell something. But most of their solutions don't address the actual problem.
"We use HashiCorp Vault!" Cool, how do your build systems authenticate to Vault? Oh, with credentials stored in environment variables? Amazing.
"We scan our dependencies!" For what, known CVEs? These attacks don't show up in vulnerability databases. They're compromising legitimate packages and adding new malicious code.
"We Use Private Docker Registries"
Awesome, except your CI/CD system needs credentials to access those registries. And if those credentials leak (which they will), attackers can pull every base image you rely on and push poisoned ones that every downstream build inherits.
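One cheap control that genuinely helps: pin base images to immutable digests, so a poisoned tag in the registry can't quietly change what you build on. A sketch of a pre-build check - the repo layout it scans is a guess at yours.

```python
# Sketch: fail the build if any Dockerfile pulls a base image by mutable tag
# instead of an immutable @sha256 digest.
import re
import sys
from pathlib import Path

FROM_RE = re.compile(
    r"^\s*FROM\s+(?:--platform=\S+\s+)?(?P<image>\S+)(?:\s+AS\s+(?P<alias>\S+))?",
    re.IGNORECASE,
)

def unpinned_images(dockerfile: Path) -> list[str]:
    aliases: set[str] = set()   # names of earlier build stages
    findings: list[str] = []
    for line in dockerfile.read_text().splitlines():
        match = FROM_RE.match(line)
        if not match:
            continue
        image = match.group("image")
        # `FROM builder` can reference an earlier stage in a multi-stage build;
        # those aren't registry pulls, so skip them.
        if image.lower() not in aliases and "@sha256:" not in image:
            findings.append(f"{dockerfile}: {image}")
        if match.group("alias"):
            aliases.add(match.group("alias").lower())
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    findings = [f for df in root.rglob("Dockerfile*") for f in unpinned_images(df)]
    if findings:
        print("Base images not pinned to a digest:")
        print("\n".join(findings))
        sys.exit(1)
    print("All FROM lines are pinned to immutable digests.")
```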
"Our Builds Run in Isolated Containers"
Sure, but what happens when the container has network access and cloud credentials? Container isolation doesn't help when the malware is designed to steal secrets and exfiltrate data during the build process.
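At a minimum, know what's sitting in the build environment waiting to be grabbed. A crude sketch that fails the job when the usual long-lived cloud credential variables are present - the list is a starting point, not a complete inventory.

```python
# Sketch: fail the build if long-lived cloud credentials are exposed as
# environment variables, where any build-time malware can read them.
# Extend the pattern list for your own stack.
import os
import sys

LONG_LIVED_PATTERNS = (
    "AWS_SECRET_ACCESS_KEY",           # static AWS keys - prefer OIDC role assumption
    "GOOGLE_APPLICATION_CREDENTIALS",  # GCP service-account key files
    "AZURE_CLIENT_SECRET",             # static Azure app registration secrets
)

def leaky_variables() -> list[str]:
    return sorted(name for name in os.environ
                  if any(pattern in name for pattern in LONG_LIVED_PATTERNS))

if __name__ == "__main__":
    found = leaky_variables()
    if found:
        print("Long-lived credentials present in the build environment:")
        print("\n".join(found))
        sys.exit(1)
    print("No obvious long-lived cloud credentials in the environment.")
```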
The Real 2025 Threat Landscape
Supply chain attacks are exploding - Sonatype's report says they've increased over 700% in the last few years. 2025 has been particularly brutal for CI/CD security. ENISA's threat landscape report confirms the trend.
The scary part is attackers are getting smarter. Recent GitHub Actions compromises show they understand fork network vulnerabilities, how to abuse pull_request_target triggers, and how to evade audit logs by manipulating git tags. They're targeting specific CI/CD tools, container registries, and build infrastructure that lots of companies depend on.
GitHub Actions attacks keep coming - we've seen another popular Action compromised every few months. Attackers know that one compromised popular Action can give them access to thousands of repositories instantly, and dependency confusion attacks alone hit thousands of organizations before the package registries added better protections.
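The single highest-value fix here is still pinning third-party Actions to full commit SHAs instead of tags that a compromised maintainer (or attacker) can move after the fact. A rough audit sketch - it's a plain text scan of workflow files, so treat its findings as a starting point rather than a verdict.

```python
# Sketch: flag workflow steps that reference third-party Actions by mutable
# tag or branch instead of a full 40-character commit SHA, and call out
# pull_request_target triggers that deserve a closer look.
import re
import sys
from pathlib import Path

USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w./-]+)")
FULL_SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def audit(workflow: Path) -> list[str]:
    findings = []
    text = workflow.read_text()
    if "pull_request_target" in text:
        findings.append(f"{workflow}: uses pull_request_target - review what the job can reach")
    for action, ref in USES_RE.findall(text):
        if action.startswith("./"):
            continue  # local composite actions already live in this repo
        if not FULL_SHA_RE.match(ref):
            findings.append(f"{workflow}: {action}@{ref} is not pinned to a commit SHA")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".github/workflows")
    findings = [f for wf in sorted(root.glob("*.y*ml")) for f in audit(wf)]
    print("\n".join(findings) if findings else "All third-party Actions are SHA-pinned.")
    sys.exit(1 if findings else 0)
```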
AI-powered attacks are analyzing your build patterns to find the optimal time and place to inject malicious code. They understand that your 3am deployment has less oversight than your Tuesday afternoon release. The Ultralytics attack specifically targeted AI/ML workflows because they knew those engineers have access to expensive GPU infrastructure.
Cloud-native environments introduced new attack vectors nobody thought about. Kubernetes service accounts, ECR permissions, serverless execution roles - every abstraction layer is another opportunity for privilege escalation. OIDC token hijacking is becoming the new credential theft.
What's Actually at Stake
When your CI/CD gets compromised:
- Attackers get persistence: They don't need to maintain access to individual servers when they control the deployment pipeline
- Lateral movement is trivial: CI/CD credentials often have broad access across your entire infrastructure
- Detection is delayed: Malicious changes look like legitimate deployments in your logs
- Recovery is expensive: You need to audit every deployment, rotate every credential, and rebuild trust with customers
Data breaches are getting more expensive - IBM's Cost of a Data Breach Report put the global average at $4.88 million in 2024. Ponemon Institute research shows supply chain breaches cost 19% more than average. That's just the direct costs, not the months of cleanup hell. Supply chain attacks are particularly devastating because of their wide blast radius across multiple organizations. The Codecov incident alone potentially exposed credentials for hundreds of companies simultaneously.
But the real cost isn't money. It's the months of recovery time, the customer churn, and the realization that your entire software delivery process can't be trusted.
Time for Some Honesty
Most companies are running CI/CD systems that were designed for convenience, not security. We prioritized developer productivity over security controls, and now we're paying for it.
The good news? The fixes aren't that complicated. You just need to stop treating CI/CD security as an afterthought and start treating it like the critical infrastructure it actually is.