Why Anchore Engine Was Deprecated and What Replaced It

Anchore Engine Deprecation Notice

Anchore Engine died in January 2023 after years of being a pain in the ass to maintain. Not because it was a bad idea - actually the opposite. Anchore learned from Engine's nightmare architecture and built something way better: Syft for SBOM generation and Grype for vulnerability scanning that actually works.

The official migration guide provides detailed transition steps, while the Anchore Engine GitHub repository contains the final migration recommendations from the development team.

The Architecture That Led to Deprecation

Engine was a monolithic piece of shit that needed PostgreSQL, multiple containers, and way too much infrastructure. It worked sometimes but had fatal flaws:

Resource Heavy: Engine ate PostgreSQL for breakfast and shit out OOM errors. "Basic" deployment my ass - it needed 4GB RAM minimum and routinely demanded more during scans. The deployment guide was like reading War and Peace, except more depressing.

Complex as Hell: Managing catalogs, policy engines, analyzers, and API servers that all failed in their own special ways. Each service had its own config files, scaling quirks, and creative ways to break at 3am. The architecture docs read like a Rube Goldberg machine manual, and the troubleshooting guides were longer than most novels.

Slow as Molasses: Want to add a new package manager? Good luck coordinating changes across five different services that hate each other. Development moved at the speed of bureaucracy. The community forums were basically a support group for people dealing with Engine's architectural sins.

CI/CD Nightmare: Sure, it had REST APIs, but integrating them was like performing surgery with a chainsaw. Startup times alone could kill your build pipeline. The CI/CD integrations needed more config than a Space Shuttle launch.

The Modern Replacement: Syft + Grype

Instead of that monolithic monster, Anchore split the functionality into two tools that actually work:

Syft generates SBOMs from container images without being a prima donna about it. Fast, lightweight, and supports CycloneDX and SPDX formats that actually work with other tools. Handles 25+ package ecosystems and private registries without throwing a fit.

Grype scans for vulnerabilities and doesn't suck at it. Takes SBOMs from Syft or scans images directly. Works with multiple output formats, custom templates, and has GitHub Actions that won't break your workflows.
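
For a feel of how the two tools compose, here's a minimal sketch (the image name is a placeholder): Syft emits an SBOM and pipes it straight into Grype for scanning, no services involved.

## Generate an SBOM with Syft and scan it with Grype in one shot
syft myapp:latest -o syft-json | grype --fail-on high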

Splitting the tools fixed all the shit everyone hated about Engine:

  • Actually fast scans: Grype finishes in minutes instead of the hour+ clusterfuck Engine put you through
  • No more database babysitting: Zero PostgreSQL maintenance, corruption nightmares, or memory leaks
  • CI/CD that doesn't suck: No waiting for services to start or random API timeouts ruining your day
  • Deploy and forget: Install two binaries and you're done - no Docker Compose hell
  • Development that moves: Adding package managers doesn't require coordinating 5 services that hate each other

What You Lose in Migration

Look, before you get all excited about migrating, here's what Engine had that the CLI tools don't (spoiler: you probably don't need most of it):

Web UI and Dashboard: Engine's web UI was garbage anyway - half the links were broken and it crashed regularly under load. Most teams ended up building Grafana dashboards because the built-in UI was so unreliable.

Centralized Policy Management: Engine's policy system was powerful in theory, nightmare to maintain in practice. Complex JSON policies that nobody understood, and debugging policy failures meant digging through PostgreSQL logs. Most organizations only used basic severity thresholds anyway.

User Management and RBAC: Engine's RBAC was overkill for most use cases. Teams spent weeks configuring permissions that could be handled with proper CI/CD access controls. The CLI tools integrate with whatever auth system you already have.

Persistent Scan History: Engine hoarded everything in PostgreSQL like a digital pack rat. Constant vacuum operations, regular corruption, and "historical data" that was mostly digital garbage. Once you implement continuous scanning that actually works, you won't miss those old scan results collecting dust.

Repository Watch Lists: Engine's registry monitoring was like a drunk security guard - worked when it felt like it. Random failures, missed images, cryptic error messages. A cron job is more reliable than this piece of shit feature ever was.

What You Gain in Migration

Performance: Our Engine deployment was a nightmare - 15-20 minutes scanning a Node.js app when it didn't completely shit the bed. Grype does the same thing in under 2 minutes. Engine OOM-killed itself constantly, didn't matter how much RAM we threw at the bastard.

Reliability: No more 3am pages because PostgreSQL corrupted again and I had to rebuild from backups like some kind of digital archaeologist. No more "analysis stuck" mysteries that required nuking the entire service cluster. Each scan is isolated - fails fast with actual error messages instead of cryptic database voodoo.

Maintenance: Went from weekly PostgreSQL maintenance windows (because it always broke) to zero maintenance. No monitoring 5+ services that hated each other, no log rotation eating our disk, no "catalog service disconnected" errors at random times.

Integration: Engine's API was bipolar - worked fine then randomly threw 500 errors under load. CLI tools are boring in the best way - exit code 0 means success, anything else means it failed. No polling APIs waiting for "analysis_status": "analyzed" like some kind of masochist.
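
Because everything comes back as an exit code, gating a pipeline is a one-liner. A minimal sketch (image name and threshold are placeholders):

## Fail the build if anything at or above the threshold is found
if grype myapp:latest --fail-on high; then
  echo "scan passed"
else
  echo "scan failed policy gate"
  exit 1
fi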

SBOM Standards: Syft's SBOMs work with every scanner we've tested. Engine's custom format only worked with Engine because of course it did. Want to try a different vulnerability scanner? Feed it the same SBOM instead of being locked into Anchore's ecosystem.
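
To see what that portability looks like in practice, a small sketch (file and image names are placeholders): generate one standards-based SBOM and reuse it for scanning instead of re-analyzing the image every time.

## One SBOM, many consumers
syft myapp:latest -o spdx-json > myapp.spdx.json
grype sbom:myapp.spdx.json --fail-on high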

Migration Strategy: Don't Make Our Mistakes

Most teams try to rebuild Engine's exact functionality with the CLI tools. Don't make our mistake - you'll end up with something more fucked up than the original.

What We Did Wrong: Wasted three months building databases to store scan results, a shitty web UI nobody used, and orchestration to recreate Engine's service mesh of doom. The result was more complex than Engine and broke twice as often.

What Actually Works: Use the CLI tools where they don't suck - CI/CD pipelines and automation. Your existing monitoring can handle alerts, your existing databases can handle persistence if you really need it.

Don't drag out the dual-running phase, either. We kept both systems going for months "for safety" and it was a complete nightmare. Validate the CLI tools against Engine, then rip the bandaid off and cut over. You'll realize you never needed 80% of Engine's "features" once you have tools that just fucking work.

Alright, enough bitching. Here's the technical breakdown so you can plan your escape from Engine hell:

Comparison Table

| Feature | Anchore Engine | Syft + Grype | Migration Notes |
| --- | --- | --- | --- |
| SBOM Generation | Integrated analysis service | Syft CLI tool | Syft is faster and supports more formats (SPDX, CycloneDX) |
| Vulnerability Scanning | Policy engine with DB | Grype CLI tool | Grype updates vulnerability data automatically, no DB management |
| Deployment Architecture | Multi-service (API, Catalog, Policy) | Single binaries | Eliminates service mesh complexity |
| Database Requirements | PostgreSQL required | None (stateless) | Major operational simplification |
| Resource Requirements | 4GB+ RAM, persistent storage | ~100MB RAM per scan | 95%+ resource reduction |
| Web UI | Built-in dashboard | None (CLI only) | Need external UI or dashboards if required |
| User Management | Built-in RBAC | None | Integrate with existing auth systems |
| Policy Management | Centralized policy engine | Config files + external orchestration | More flexible but requires external coordination |
| Scan History | PostgreSQL storage | Stateless (external storage needed) | Build history tracking if needed |
| Registry Monitoring | Built-in registry polling | External orchestration required | Use CI/CD or cron jobs |
| API Access | REST API with full CRUD | CLI only (can wrap in API) | Build API wrapper if needed |
| Performance | 5-15 minutes for full scan | 30 seconds to 2 minutes total | 10x+ speed improvement |
| CI/CD Integration | Complex (service dependencies) | Native CLI integration | Much simpler pipeline integration |
| Container Support | Docker, basic OCI | Docker, OCI, Singularity, archives | Broader format support |
| Package Ecosystem Coverage | Limited package managers | 25+ package managers | Better language coverage |
| SBOM Standards | Custom format primarily | SPDX 2.3, CycloneDX 1.6, Syft JSON | Industry standard formats |
| Maintenance Overhead | High (services, DB, updates) | Minimal (binary updates only) | Operational burden nearly eliminated |
| Cost of Operation | Infrastructure + maintenance | Compute only during scans | Significant cost reduction |

Step-by-Step Migration Process

Modern DevSecOps Pipeline: Unlike Engine's complex service architecture, the CLI tools integrate seamlessly into existing CI/CD workflows with simple command-line calls.

Migrating from Engine to Syft and Grype is pretty straightforward once you stop trying to recreate Engine's nightmare architecture. Here's how to do it without losing your mind:

Phase 1: Install and Test the CLI Tools

Install Syft and Grype next to your existing Engine deployment so you can compare results before pulling the plug. The installation guides actually work unlike Engine's setup docs.

## Install both tools (macOS/Linux)
curl -sSfL https://get.anchore.io/syft | sudo sh -s -- -b /usr/local/bin
curl -sSfL https://get.anchore.io/grype | sudo sh -s -- -b /usr/local/bin

## Test on the same images you currently scan with Engine
syft your-registry/your-image:tag -o cyclonedx-json > sbom.json
grype your-registry/your-image:tag --fail-on medium

You can also install via Homebrew, Docker, Chocolatey, or GitHub releases. The container images work with private registries without the usual auth headaches.
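
If the curl-pipe-to-shell install isn't your thing, a couple of the alternatives mentioned above look roughly like this (the registry path is a placeholder):

## Homebrew
brew install syft grype

## Or run Grype from its container image without installing anything locally
docker run --rm anchore/grype:latest myregistry.com/myapp:tag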

Compare results between Engine and the CLI tools. Grype will find way more vulnerabilities because it's not broken like Engine's package detection that missed half your dependencies.
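
A quick way to put numbers on that comparison is to tally Grype's findings by severity and hold them up against your last Engine report. A sketch assuming Grype's JSON output (field names may shift between versions):

## Count vulnerabilities per severity from Grype's JSON output
grype your-registry/your-image:tag --output json | \
  jq '[.matches[].vulnerability.severity] | group_by(.) | map({key: .[0], value: length}) | from_entries'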

Shit that will break during your first migration attempt (learned this the hard way):

  • Different vulnerability counts: Grype finds vulns Engine missed. Your security team will panic about the "new" vulnerabilities - warn them or they'll think you broke everything. Took us 2 hours to convince our CISO this was actually good news.
  • New package types found: Engine missed packages in weird locations. Syft finds everything, including transitive deps Engine ignored. Our "clean" production images suddenly had 73 vulnerabilities. That was a fun Monday morning.
  • False positive differences: Grype has different false positive patterns. The docs say "few hours" to tune ignore rules. Bullshit. We spent like 3-4 weeks getting the ignore rules right, maybe longer. I lost track after rewriting them the fourth time.

Phase 2: Migrate CI/CD Pipelines

Replace Engine's shitty API calls with CLI tools in your pipelines. This is the easiest win because CLI tools don't randomly timeout or return cryptic errors.

Before (Engine API approach):

## Example Engine API workflow (replace ENGINE_HOST with your actual hostname)
ENGINE_HOST="your-engine-host:8228"

## Add image to Engine for scanning
curl -X POST "http://${ENGINE_HOST}/v1/images" \
  -H "Content-Type: application/json" \
  -d '{"tag":"myapp:latest"}'

## Wait for analysis to complete
while [ "$(curl -s http://${ENGINE_HOST}/v1/images/myapp:latest | jq -r '.analysis_status')" != "analyzed" ]; do
  echo "Waiting for analysis..."
  sleep 30
done

## Get vulnerability results
curl "http://${ENGINE_HOST}/v1/images/myapp:latest/vuln/os" | jq '.'

After (CLI approach):

## Generate SBOM and scan in one step (way simpler than Engine's bullshit)
grype myapp:latest --fail-on medium --output json > vulnerability-report.json

## Two-step approach if you want more control or need the SBOM for other tools
syft myapp:latest -o cyclonedx-json > sbom.json
grype sbom:sbom.json --fail-on medium --output json > vulnerability-report.json

CLI approach is simpler and faster. Engine's analysis took forever when it worked - usually it hung with "Analysis in progress..." until you killed it in frustration. Grype finishes in minutes or fails fast with error messages that actually mean something.

Phase 3: Replace Repository Monitoring

Engine's registry monitoring was garbage anyway. Here's how to replace it with stuff that actually works:

GitHub Actions (for GitHub Container Registry):
The Anchore Container Scan action provides pre-built workflows, while custom integrations allow for more control:

name: Container Security Scan
on:
  schedule:
    - cron: '0 6 * * *'  # Daily at 6 AM
  workflow_dispatch:

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Install Grype
        run: |
          curl -sSfL https://get.anchore.io/grype | sudo sh -s -- -b /usr/local/bin

      - name: Scan latest images
        run: |
          # Scan production images
          grype ghcr.io/yourorg/app:latest --fail-on high
          grype ghcr.io/yourorg/api:latest --fail-on high

Check GitHub's security guide and workflow examples for SARIF reporting if you need that integration.
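
If you want findings in the GitHub Security tab, here's a hedged sketch of the SARIF route as extra steps for the job above (assumes code scanning is enabled on the repo):

      - name: Scan and produce SARIF
        run: grype ghcr.io/yourorg/app:latest -o sarif > results.sarif

      - name: Upload results to code scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif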

Kubernetes CronJob (for registry polling):
Use Kubernetes CronJobs with the official container images for automated scanning:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: vulnerability-scanner
spec:
  schedule: "0 6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: grype
            image: anchore/grype:latest
            command:
            - /bin/sh
            - -c
            - |
              grype registry.yourorg.com/app:latest --output json > /shared/scan-results.json
            volumeMounts:
            - name: shared
              mountPath: /shared
          volumes:
          - name: shared
            emptyDir: {}
          restartPolicy: OnFailure

Follow K8s security practices and Pod Security Standards unless you want your scanning jobs to be attack vectors.
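
As a starting point, here's a minimal hardening sketch to merge into the grype container spec in the CronJob above (values are assumptions; adjust to your cluster's Pod Security settings, and note the scanner still needs a writable directory for its DB cache):

            securityContext:
              runAsNonRoot: true
              runAsUser: 65534
              allowPrivilegeEscalation: false
              capabilities:
                drop: ["ALL"]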

GitLab CI Scheduled Pipeline:
GitLab's container scanning integrates well with Grype through scheduled pipelines:

container-scan:
  image:
    name: anchore/grype:latest
    entrypoint: [""]  # clear the image entrypoint so GitLab can run the script section
  script:
    - grype $CI_REGISTRY_IMAGE:latest --output json > vulnerability-report.json
  artifacts:
    paths:
      - vulnerability-report.json  # Grype's JSON isn't JUnit XML, so keep it as a plain artifact
  only:
    - schedules

GitLab has security dashboard integration and custom templates that don't completely suck.

Phase 4: Handle Policy Migration

Policy Migration Pain: Engine's policy system was complex as hell. Grype uses config files like a normal tool.

Engine's policies were about as sophisticated as a Rube Goldberg machine. Here's how to migrate them without going insane:

Engine Policy Analysis:
First, export your existing Engine policies to see what clusterfuck of rules you've accumulated:

## Export all policies from Engine (good luck parsing this shit)
curl \"http://your-engine-host:8228/v1/policies\" | jq '.' > engine-policies.json

Most Engine policies were just:

  1. Vulnerability thresholds (block High/Critical vulns)
  2. Package blacklists (ban specific shitty packages)
  3. License restrictions (no GPL because lawyers)
  4. Secret detection (find API keys idiots committed)
  5. File content rules (block suspicious files)

Grype Policy Translation:
Grype handles policies through configuration files and command-line flags:

## ~/.grype.yaml
ignore:
  # Equivalent to Engine package blacklist
  - package:
      name: libssl1.1
      version: 1.1.1k-r0
    vulnerability: CVE-2021-3711

  # Equivalent to Engine vulnerability exceptions
  - vulnerability: CVE-2022-12345
    fix-state: wont-fix

## Equivalent to Engine severity thresholds
fail-on-severity: "high"

## Equivalent to Engine license policies (requires manual checking)
match:
  java:
    using-cpes: false  # Reduces false positives

Policy Automation Example:

#!/bin/bash
## policy-enforcement.sh - Way simpler than Engine's policy nightmare

## Run Grype with org policy (actually works unlike Engine)
grype "$1" \
  --config /opt/security/grype-policy.yaml \
  --fail-on high \
  --output json > scan-results.json || { echo "Vulnerability threshold exceeded"; exit 1; }

## Custom license checking (because lawyers hate GPL)
syft "$1" -o json | jq -r '.artifacts[].licenses[]?.value' 2>/dev/null | \
  grep -E "(GPL|AGPL)" && echo "License violation detected" && exit 1

## Secret detection (for devs who commit passwords like idiots)
## Note: these paths only resolve on the host when scanning a directory source (e.g. dir:/src)
syft "$1" -o json | jq -r '.artifacts[].locations[].path' 2>/dev/null | \
  xargs -I {} grep -l "password\|api[_-]key" {} 2>/dev/null && \
  echo "Potential secret detected" && exit 1

echo "Policy check passed (miracle)"

Phase 5: Migrate Reporting and Dashboards

Engine's web UI was garbage anyway. Here's what actually works for dashboards:

Grafana Integration:

## Store scan results in InfluxDB (replace the URL with your actual instance)
## Note: InfluxDB's write endpoint expects line protocol, not CSV
grype myapp:latest --output json | \
  jq -r '.matches[] | "vulns,severity=\(.vulnerability.severity),package=\(.artifact.name) value=1"' | \
  curl -XPOST "http://influxdb:8086/write?db=security" --data-binary @-

Security Dashboard with ELK Stack:

## Send results to Elasticsearch (way more reliable than Engine's DB corruption)
grype myapp:latest --output json | \
  curl -X POST "http://elasticsearch:9200/vulnerability-scans/_doc" \
  -H "Content-Type: application/json" -d @-

Simple Web UI with JSON files:

## Generate static reports (crude, but it works)
mkdir -p /var/www/html/scans/$(date +%Y-%m-%d)
grype myapp:latest --output json > "/var/www/html/scans/$(date +%Y-%m-%d)/myapp.json"
syft myapp:latest --output cyclonedx-json > "/var/www/html/scans/$(date +%Y-%m-%d)/myapp-sbom.json"

Phase 6: Handle Advanced Use Cases

Air-Gapped Environments:
Both tools work offline, but require different preparation than Engine:

## On a machine with internet access, download the vulnerability database
grype db update

## Copy the local DB cache to the offline environment
cp -r ~/.cache/grype/db /path/to/offline/environment/

## In the air-gapped environment, point Grype at the copied cache and disable update checks
export GRYPE_DB_CACHE_DIR=/opt/grype-db
export GRYPE_DB_AUTO_UPDATE=false
cp -r /path/to/offline/environment/db/. "$GRYPE_DB_CACHE_DIR"/
grype myimage:tag

High-Volume Scanning:
Engine's database could become a bottleneck under load. CLI tools scale differently:

## Parallel scanning (Engine could never handle this load)
mkdir -p results
printf '%s\n' image1:tag image2:tag image3:tag | \
  xargs -P 10 -I {} sh -c 'grype "$1" --output json > "results/$(echo "$1" | tr "/:" "__").json"' _ {}

Enterprise Integration:
Large organizations often need centralized vulnerability data:

## Push results to central system (replace with your actual security API)
grype myapp:latest --output json | \
  curl -X POST "https://security-api.company.com/vulnerability-scans" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" -d @-

Here's the thing: Syft and Grype are building blocks, not drop-in replacements for Engine's monolithic mess. You'll end up with simpler infrastructure that actually integrates with your existing tools instead of fighting them.

The FAQ below covers questions everyone asks during migration, based on teams who've already escaped Engine hell.

Frequently Asked Questions

Q

Can I migrate gradually from Engine to Syft/Grype?

A

Yeah, and you should definitely do this unless you enjoy pain. Install Syft and Grype next to your existing Engine setup and start using them for new stuff while Engine limps along with your old integrations. The CLI tools don't have persistent state, so they won't corrupt each other like Engine's services did. Compare results side-by-side before you kill Engine completely.

Q

Will I lose historical scan data during migration?

A

Engine's scan history is stuck in PostgreSQL hell, and there's no way to import it into the CLI tools (because they're not idiotic enough to need persistent state). You could export the data if you hate yourself, but parsing those normalized tables is like solving a Rubik's cube blindfolded.

Most teams discover they never looked at that historical data anyway - it was just eating disk space. Once you have continuous scanning that doesn't randomly break, you won't miss that archive of ancient scan results.

Q

What about my existing Engine policies?

A

You gotta translate Engine policies to Grype config files. Basic stuff (vuln thresholds, package exceptions) maps over easily. Complex policies need external scripts because Grype isn't trying to be everything to everyone. Export your Engine policies with curl "http://your-engine-host:8228/v1/policies" first to see what rules you actually have. Most orgs find out they can ditch 80% of their policy complexity and nobody notices.

Q

How do I handle the lack of web UI?

A

Syft and Grype are CLI tools that don't try to be shitty web apps. Your options:

  • Grafana dashboards (most teams go this route)
  • ELK stack for searching results
  • Custom web UI if you enjoy building things
  • CI/CD reporting (GitLab, GitHub security tabs)

Most teams prefer this over Engine's built-in UI that crashed half the time anyway. At least now it integrates with your existing monitoring instead of being another thing to babysit.

Q

What's the performance difference in practice?

A

Night and day difference. Engine took 5-15 minutes on good days, sometimes hours when PostgreSQL decided to shit itself. Syft generates an SBOM in under a minute, Grype scans it in another minute or two.

For a typical Node.js image:

  • Engine: 15+ minutes when it worked, infinite time when it hung (which was often)
  • Syft + Grype: 2-3 minutes total

Now you can scan every build instead of just production releases because it doesn't take forever.

Q

Do I need to change my CI/CD pipelines significantly?

A

Not really. CLI tools actually fit in CI/CD pipelines instead of fighting them. Replace API calls with CLI commands and you're basically done.

Biggest change is scans are fast enough to run inline instead of the async polling bullshit Engine forced on you.

Q

What about vulnerability database management?

A

Grype just handles it. Checks for updates before each scan, downloads new data automatically. No manual database management, no PostgreSQL maintenance windows, no feed sync failures at 2am.

Vulnerability DB is cached locally and updated incrementally. Subsequent scans work offline unless there are new updates to grab.
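
If you want to poke at it anyway, the relevant commands are short. A sketch (behavior as documented for recent Grype versions):

## Check and refresh the local vulnerability DB manually
grype db status
grype db update

## In CI, skip the update check and rely on a pre-warmed cache
export GRYPE_DB_AUTO_UPDATE=false
grype myapp:latest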

Q

Can I still scan running containers and registries?

A

Yeah, they support everything Engine did plus more:

  • Docker daemon: grype myapp:latest
  • Container registry: grype registry:myregistry.com/myapp:tag
  • Local archives: grype docker-archive:image.tar
  • OCI image layouts: grype oci-dir:/path/to/layout
  • Filesystems: grype dir:/path/to/files

Registry monitoring needs external orchestration (cron, scheduled CI/CD), but at least it's predictable unlike Engine's registry polling that worked when it felt like it.

Q

How do I handle air-gapped environments?

A

Both work offline, way simpler than Engine's feed management nightmare:

  1. Download vuln database: grype db update (takes forever on slow connections, sorry)
  2. Copy ~/.cache/grype/db/ to your air-gapped environment
  3. Set GRYPE_DB_AUTO_UPDATE=false or Grype hangs trying to phone home
  4. Scan normally: grype myimage:tag

If the DB cache gets fucked, just delete ~/.cache/grype and start over. Way simpler than Engine's feed sync that broke if you looked at it wrong.

Q

What if I need the policy engine features?

A

Grype's policy system is way simpler but handles most real use cases through config files and ignore rules. Complex policies need wrapper scripts, but honestly most of Engine's policy complexity was overkill.

Most orgs discover they prefer Grype's straightforward approach over Engine's policy engine that required a PhD to understand.

Q

Will Anchore continue supporting the CLI tools?

A

Yeah, Syft and Grype are what Anchore actually cares about now. They're the foundation of Anchore Enterprise, so they're not going anywhere.

Engine is dead and buried - no more security updates, bug fixes, nothing. The CLI tools are the official replacement that actually evolved instead of just accumulating technical debt.

Q

What about integration with Anchore Enterprise?

A

If you're eyeing Anchore Enterprise, the CLI tools are literally what it's built on. Migrating to Syft/Grype now means less pain later if you need Enterprise features.

Enterprise adds centralized policy management, web UI that doesn't suck, compliance reporting, and multi-tenancy on top of the same scanning engines.

Q

How do I migrate 50+ pipelines efficiently?

A

Template approach works best:

  1. Create standard Grype configs for your org
  2. Build wrapper scripts that match your Engine API calls
  3. Update pipelines in batches, compare results against Engine first
  4. Use feature flags to toggle between old and new scanning

CLI tools are consistent unlike Engine's API that had different quirks depending on which service was handling your request. Solve it once, apply everywhere.
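
As an example of the wrapper-script idea, here's a hypothetical scan-image.sh that standardizes the call sites so each pipeline only changes one line (script name, outputs, and defaults are placeholders):

#!/bin/bash
## scan-image.sh <image> [severity-threshold] - drop-in replacement for old Engine API call sites
set -euo pipefail
IMAGE="$1"
THRESHOLD="${2:-high}"
SAFE_NAME="$(basename "$IMAGE" | tr ':' '_')"

## SBOM for downstream tooling; the Grype exit code gates the pipeline
syft "$IMAGE" -o cyclonedx-json > "sbom-${SAFE_NAME}.json"
grype "$IMAGE" --fail-on "$THRESHOLD" --output json > "scan-${SAFE_NAME}.json"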

Need more help? The links below have official docs, community guides, and stories from teams who've already escaped Engine hell.

Essential Migration Resources