What is Jsonnet and Why You Actually Need It

Updated September 2025

Managing Kubernetes configs starts innocent enough.

You've got 3 services, each needs a deployment YAML. No big deal. Then it's 10 services. Then 25. Before you know it, you're copy-pasting the same YAML blocks like a madman, changing service names and ports, and praying you didn't miss anything.

I missed a fucking quote mark in one ConfigMap and took down 6 services in staging. Spent 3 hours debugging it because they all looked identical. That's when I finally learned Jsonnet.

Jsonnet is Google's attempt to fix this disaster.

They created it in 2014 because even Google got tired of copy-pasting JSON. It's basically JSON with programming features: variables, functions, loops, inheritance. Instead of maintaining 50 nearly-identical YAML files, you write code that generates them.
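Here's the smallest possible taste (a toy sketch, nothing Kubernetes-specific yet; the names are made up):

local port = 8080;
local mkUrl(host) = 'http://%s:%d' % [host, port];

{
  dev: { url: mkUrl('localhost') },
  prod: { url: mkUrl('api.example.com') },
}

Change port once, both URLs update. That's the entire pitch, scaled up.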

The Actual Problem This Solves

Here's what happens: you start with 5 services, each needs deployment + service + ingress YAML.

That's 15 files. No problem. Then product wants 20 services. Now it's 60 files. CEO talks about "microservice architecture" and suddenly you're managing configs for like 35 services. That's over 100 YAML files that are 90% identical.

Need to change resource limits across all services? Hope you like editing 35 files and not fucking up a single one. Because I promise you'll mess up at least 3 and spend your afternoon debugging why some random service won't start.

// This generates all your service configs from one template
local makeService(name, port, replicas) = {
  deployment: {
    metadata: { name: name },
    spec: {
      replicas: replicas,
      template: {
        spec: {
          containers: [{
            name: name,
            image: 'us-central1-docker.pkg.dev/myproject/%s:latest' % name,
            ports: [{ containerPort: port }],
            resources: if std.endsWith(name, '-prod') then {
              limits: { memory: '2Gi', cpu: '1' }
            } else {
              limits: { memory: '512Mi', cpu: '0.5' }
            }
          }]
        }
      }
    }
  },
  service: {
    metadata: { name: name },
    spec: {
      selector: { app: name },
      ports: [{ port: port, targetPort: port }]
    }
  }
};

// Generate configs for all services
{
  'user-service.json': makeService('user-service', 8080, 3),
  'order-service.json': makeService('order-service', 8081, 5),
  'payment-service.json': makeService('payment-service', 8082, 2)
}
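Compile it with something like jsonnet -m out/ services.jsonnet and you get one file per key. Since kubectl accepts JSON as happily as YAML, you can apply those files directly without converting anything.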

Why Not Just Use Helm?

Helm is fine if you're just installing community charts. But the moment you need custom logic, Go templates become fucking miserable. Ever tried debugging `{{ range $key, $val := .Values.env }}{{ if ne $val "" }}{{ $key }}={{ $val }}{{ end }}{{ end }}`? I have. At 3am. While production was down. Fun times.

Jsonnet gives you actual programming. Need a loop? Write a loop. Need conditionals? Write if/else. Need to import shared code? Import it. No template gymnastics required.
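To make that concrete, here's the env-var loop from the Helm snippet above written in plain Jsonnet (envVars is a hypothetical input object, not part of any real chart):

local envVars = { NODE_ENV: 'production', API_URL: 'https://api.example.com', DEBUG: '' };

{
  // keep only non-empty values, emit Kubernetes-style env entries
  env: [
    { name: k, value: envVars[k] }
    for k in std.objectFields(envVars)
    if envVars[k] != ''
  ],
}

A comprehension with a filter, readable at 3am. That's the difference.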

The Implementation Mess

Here's where Jsonnet gets annoying. There are 3 different implementations and picking the wrong one will waste your day:

C++ (original): slow as hell, has security warnings, and is basically abandoned. Don't use it.

Go (go-jsonnet): use this one. It's way faster than C++, includes a linter, and actually gets updates. When you run brew install jsonnet, this is what you get. The current version is 0.20.0 because they're slow at releasing updates.

Sjsonnet (JVM): Databricks made this because they needed something faster for their massive configs. Startup time sucks for small stuff but it flies through big codebases. Unless you're generating thousands of files, stick with Go.

Who Actually Uses This Thing

Databricks has something like 40,000 lines of Jsonnet generating hundreds of thousands of YAML files. They switched because managing Spark configs manually was making their engineers hate life.

Grafana built an entire ecosystem around Jsonnet for monitoring stacks. Their Tanka project is basically "Jsonnet for Kubernetes": it handles the kubectl integration so you don't have to convert everything manually. Way cleaner than fighting with Kustomize overlays.

The community is small but the people who use it really use it. Not like Helm where half the charts are abandoned.

When to Bother With This

Use Jsonnet if:

  • You've got 15+ services with nearly-identical configs
  • You need actual programming logic in your configs
  • Copy-pasting YAML is driving you insane
  • You want to version control your config generation like real code

Don't use Jsonnet if:

  • You have like 5 services that never change
  • Your team thinks learning new tools is "unnecessary complexity"
  • Kustomize already solves your problems
  • You need the massive Helm chart ecosystem

The break-even point is roughly "am I spending more time copy-pasting YAML than I would learning Jsonnet?" For most small projects, probably not.

How Jsonnet Stacks Up Against Everything Else

| | Jsonnet | Helm | Kustomize | Terraform |
|---|---|---|---|---|
| What is it | JSON with programming | Go templates from hell | YAML patches | Infrastructure code |
| Learning curve | Few weeks if you code | Template debugging will break you | 5 minutes if you know YAML | Actually reasonable |
| Debugging pain | Import paths are cancer | Template errors make no sense | Usually pretty clear | Plan shows what breaks |
| Real programming | Yes: functions, loops, etc. | Template gymnastics only | Nope, just overlays | Yes, but for infra |
| Community stuff | Growing but tiny | Charts for everything | Just examples | Modules for everything |
| kubectl integration | Compile then apply | Template then apply | Built right in | Not applicable |
| When it breaks | Compile-time errors | Runtime cluster failures | Apply errors (clear) | Plan catches most shit |
| Speed | Go: decent, C++: awful | Pretty fast | Instant | Depends on APIs |

Getting This Thing Working

Installation is easy. Everything else will make you question your life choices. The import system alone ate 6 hours of my weekend trying to figure out why paths worked from one directory but not another.

Installation That Actually Works

macOS (Easy Mode):

brew install jsonnet

Linux (Usually Fine):

## Ubuntu/Debian
sudo apt-get install jsonnet

## CentOS/RHEL - good luck, compile from source

Windows:
Use WSL2. Seriously. I wasted 3 hours trying to compile this on Windows before giving up. MSYS2 is pain.

Python/CI:

pip install gojsonnet

Gets you the CLI plus Python bindings. Actually works, unlike most Python packages. Don't use versions before 0.20.x - they crash on anything real.

Pick the Right One or Hate Your Life

C++ (original) - Don't. It's slow and has security warnings. Basically abandoned.

Go (go-jsonnet) - Use this. Way faster than C++, includes linter, actually maintained. This is what brew install jsonnet gives you. Current version is 0.20.0 (they're slow at releases).

Sjsonnet (JVM) - Databricks built this for massive codebases. Starts slow but flies through big files. Unless you're building thousands of configs, stick with Go.

First Steps That Won't Immediately Break

Create a simple hello.jsonnet:

{
  message: "Hello, world!",
  environment: std.extVar("env"),
  timestamp: std.extVar("timestamp")
}

Run it:

## This will fail with "External variable not defined"
jsonnet hello.jsonnet

## This actually works
jsonnet --ext-str env=dev --ext-str timestamp=1640995200 hello.jsonnet

Output:

{
   "environment": "dev",
   "message": "Hello, world!",
   "timestamp": "1640995200"
}

Things That Will Waste Your Day

Import Path Hell
Imports resolve relative to the importing file first, then against whatever -J library paths you pass, so behavior changes with your directory layout. You'll have:

configs/base/common.jsonnet
configs/environments/prod.jsonnet  

And prod.jsonnet imports ../base/common.jsonnet. It works until someone moves a file or reuses the snippet elsewhere, and then every relative path breaks at once. Always use -J:

jsonnet -J configs environments/prod.jsonnet
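With the library root pinned by -J configs, prod.jsonnet can import against that root instead of doing ../ gymnastics. A sketch using the layout above (assuming common.jsonnet is a plain object):

// configs/environments/prod.jsonnet
local common = import 'base/common.jsonnet';

common + { environment: 'prod' }

Now the import survives files being moved or run from any directory, as long as the -J flag stays consistent.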

External Variables Are Required
Use std.extVar("something") without providing it? Death. No defaults, no mercy:

## Dies with "External variable not defined"
jsonnet config.jsonnet

## Actually works
jsonnet --ext-str env=prod config.jsonnet
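The one escape hatch: top-level arguments do support defaults, unlike external variables. A minimal sketch:

// config.jsonnet - a function at the top level; env defaults to 'dev'
function(env='dev') {
  environment: env,
  replicas: if env == 'prod' then 3 else 1,
}

Run jsonnet --tla-str env=prod config.jsonnet, or with no flags at all to get the default.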

Error Messages Suck
Complex files = useless error messages. You'll see "Field does not exist: invalidField" on line 47 but the problem is on line 12 in a completely different file. Break big files into smaller pieces and test individually.

The Formatter Is Unforgiving
The formatter (jsonnetfmt in go-jsonnet, jsonnet fmt in the C++ build) rewrites your style whether you like it or not. Embrace it or be miserable.

VS Code Setup (Actually Useful)

Install the Jsonnet extension. It provides:

  • Syntax highlighting
  • Basic error detection
  • Formatting on save (uses jsonnet fmt)

Add this to your VS Code settings:

{
  "[jsonnet]": {
    "editor.formatOnSave": true,
    "editor.tabSize": 2
  }
}

How I Actually Use This

After fighting with imports for a weekend, here's what works:

1. Templates go in lib/

// lib/service.jsonnet
function(name, port, image) {
  deployment: { /* deployment stuff */ },
  service: { /* service stuff */ }
}

2. Configs in configs/

// configs/production.jsonnet
local makeService = import 'service.jsonnet';  // resolved via -J lib
{
  'api-deploy.yaml': std.manifestYamlDoc(
    makeService('api', 8080, 'myapp/api:v1.2.3').deployment
  )
}

3. Build it

## -S writes the YAML strings raw instead of JSON-quoting them
jsonnet -J lib -S -m generated/ configs/production.jsonnet

Took forever to get the paths right. Now adding a service takes 30 seconds instead of 30 minutes editing YAML.

CI/CD Integration

Your GitHub Actions or GitLab CI should look like:

## .github/workflows/configs.yml
- name: Generate configs
  run: |
    for env in dev staging prod; do
      mkdir -p generated/$env
      jsonnet -J lib -S --ext-str environment=$env \
        -m generated/$env/ configs/$env.jsonnet
    done

- name: Validate YAML
  run: |
    find generated/ -name '*.yaml' -exec yamllint {} \;

- name: Apply to cluster
  run: |
    kubectl apply -R -f generated/prod/

Took forever to figure out that -R flag for recursive application. The kubectl docs don't exactly spell this out.

Questions You'll Have After It Breaks

Q: Why is this so fucking slow?

A: You're using the C++ version. Don't. Get go-jsonnet with brew install jsonnet. Way faster. If you're dealing with thousands of files, Sjsonnet is faster but startup time kills you during development.

Q: How do I fix "External variable not defined: X"?

A: Stop being clever with defaults. Either provide it:

jsonnet --ext-str environment=prod config.jsonnet

Or don't use std.extVar(). Use a local variable instead. External vars are for stuff that actually changes between runs.

Q: Can I output YAML instead of JSON?

A: Jsonnet only outputs JSON. Use std.manifestYamlDoc() inside your code:

{
  'deployment.yaml': std.manifestYamlDoc({
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    // ... etc
  })
}

Then use -S -m output-dir/ so the YAML strings get written raw instead of JSON-quoted.

Q: Why do imports break randomly?

A: Imports resolve relative to the importing file, then fall back to -J library paths, so relative ../ paths break the moment files move or get imported from somewhere new. Always use -J:

## Don't do this
cd configs && jsonnet environments/prod.jsonnet

## Do this
jsonnet -J configs environments/prod.jsonnet

Q: How do I debug "Field does not exist" errors?

A: Error messages are useless for complex files. Break things into smaller pieces:

jsonnet -e 'import "lib/service.jsonnet"'
jsonnet -e 'local svc = import "lib/service.jsonnet"; svc("test", 8080)'

Also try jsonnet-lint (it ships with go-jsonnet); it catches some stuff runtime errors miss.
Q: Should I ditch Helm for this?

A: Depends how much you hate Go templates. Using community charts? Stay with Helm. Writing custom charts with complex logic? Jsonnet is way cleaner. Tanka lets you mix Jsonnet with Helm charts if you can't decide.

Q: How do I handle secrets without putting them in source code?

A: Don't put secrets in Jsonnet files. Use external secret operators or reference secret names:

{
  env: [{
    name: 'DATABASE_PASSWORD',
    valueFrom: { secretKeyRef: { name: 'db-secrets', key: 'password' } }
  }]
}

Jsonnet generates the config structure; Kubernetes or HashiCorp Vault provides the actual secrets at runtime.

Q: Can I use this with GitOps or will it break everything?

A: It works fine with ArgoCD and Flux. Generate your configs in CI/CD:

## In your pipeline
- name: Generate configs
  run: jsonnet -m generated/ configs/prod.jsonnet

- name: Commit generated files
  run: |
    git add generated/
    git commit -m "Update generated configs"
    git push

Then point ArgoCD at the generated/ directory. The only gotcha is you need to commit the generated files, which some people hate.

Q: How long to learn this shit?

A: Basic usage? A day if you code. Object inheritance and imports? About a week of fighting with it. The real time sink is debugging import paths and external variables. Error messages are trash, and the community is tiny: Stack Overflow has maybe 200 Jsonnet questions. You'll be reading GitHub issues for everything.
Q: Is performance actually that bad?

A: The C++ version is trash. go-jsonnet is fine for a few hundred configs. Sjsonnet is fast but startup time sucks for dev. On my machine with ~200 configs: go-jsonnet takes 2 seconds, C++ takes 8, and Sjsonnet's startup alone is 0.5s. If yours is slow, you're using C++ or have circular imports.

Q: Should I use this for 5 services?

A: You're overthinking it. Kustomize is fine for simple stuff. Jsonnet pays off at 15+ services or when you need real logic. Break-even: "Am I spending more time copy-pasting YAML than learning Jsonnet?" Usually no for small projects.

Advanced Stuff (Once You Stop Fighting Imports)

After you spend a weekend debugging path issues and give up on formatting preferences, Jsonnet becomes actually useful. Here's what makes it worth the pain.

Object Inheritance (How It Actually Works)

This is where Jsonnet shines compared to YAML copy-paste hell. You can build configuration hierarchies that don't make you want to quit:

// Base service template that every team uses
local BaseService(name, port) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment', 
  metadata: {
    name: name,
    labels: {
      app: name,
      team: 'platform',  // Default team
      environment: std.extVar('env')
    }
  },
  spec: {
    replicas: 1,  // Safe default
    selector: { matchLabels: { app: name }},
    template: {
      metadata: { labels: { app: name }},
      spec: {
        containers: [{
          name: name,
          image: '%s:latest' % name,
          ports: [{ containerPort: port }],
          resources: {
            requests: { memory: '128Mi', cpu: '100m' },
            limits: { memory: '256Mi', cpu: '200m' }
          }
        }]
      }
    }
  }
};

// Frontend team customizes with their specific needs  
local FrontendService(name, port) = BaseService(name, port) + {
  metadata+: { 
    labels+: { team: 'frontend' }  // Override team label
  },
  spec+: {
    replicas: 3,  // Frontend needs more replicas
    template+: { 
      spec+: {
        containers: [super.containers[0] + {
          resources: {
            requests: { memory: '256Mi', cpu: '200m' },
            limits: { memory: '512Mi', cpu: '500m' }  // Frontend is hungrier
          },
          env: [
            { name: 'NODE_ENV', value: 'production' },
            { name: 'API_URL', value: 'https://api.%s.com' % std.extVar('env') }
          ]
        }]
      }
    }
  }
};

The + and super syntax is weird but powerful. Override specific fields, keep the rest. Way cleaner than Helm template hell.
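If the merge semantics feel fuzzy, here's the whole idea in miniature (a toy sketch, nothing Kubernetes-specific):

local base = { replicas: 1, labels: { app: 'web', team: 'platform' } };
local prod = base + { replicas: 3, labels+: { team: 'frontend' } };

// prod == { replicas: 3, labels: { app: 'web', team: 'frontend' } }
// plain `+` replaces a field; `+:` merges into the inherited object instead
prod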

Building Your Own Helper Functions

The built-in standard library has useful stuff like std.map(), std.filter(), and std.join(), but you'll want your own functions:

// lib/k8s.jsonnet - Your team's Kubernetes helpers
// (assumes the BaseService template from earlier is saved as lib/base-service.jsonnet)
local BaseService = import 'base-service.jsonnet';

{
  // Build a ConfigMap from an object of { filename: contents }.
  // Pure Jsonnet can't list a directory, so pass the files in explicitly,
  // e.g. { 'app.conf': importstr 'files/app.conf' }.
  configMap(name, files):: {
    apiVersion: 'v1',
    kind: 'ConfigMap',
    metadata: { name: name },
    data: files,
  },

  // Generate all the boring service mesh annotations
  istioService(name, port):: BaseService(name, port) + {
    metadata+: {
      annotations+: {
        'sidecar.istio.io/inject': 'true',
        'traffic.sidecar.istio.io/includeInboundPorts': std.toString(port)
      }
    }
  },

  // Because nobody remembers the exact YAML for horizontal pod autoscaling
  hpa(name, minReplicas=2, maxReplicas=10, targetCPU=70):: {
    apiVersion: 'autoscaling/v2',
    kind: 'HorizontalPodAutoscaler',
    metadata: { name: name + '-hpa' },
    spec: {
      scaleTargetRef: {
        apiVersion: 'apps/v1',
        kind: 'Deployment', 
        name: name
      },
      minReplicas: minReplicas,
      maxReplicas: maxReplicas,
      metrics: [{
        type: 'Resource',
        resource: {
          name: 'cpu',
          target: { type: 'Utilization', averageUtilization: targetCPU }
        }
      }]
    }
  }
}
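Using the library then looks something like this (the paths and files/app.conf are assumptions carried over from the sketch above):

local k8s = import 'lib/k8s.jsonnet';

{
  'api-hpa.yaml': std.manifestYamlDoc(k8s.hpa('api', minReplicas=3, maxReplicas=20)),
  'api-config.yaml': std.manifestYamlDoc(
    k8s.configMap('api-config', { 'app.conf': importstr 'files/app.conf' })
  ),
}

Compile with jsonnet -J . -S -m out/ as before; named arguments mean nobody has to remember the HPA parameter order.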

How Companies Actually Use This

Databricks: 40k+ lines of Jsonnet. Shared libraries with team overrides. Works at scale.

Grafana: Published libraries for common stuff like Prometheus configs. Teams share instead of reinventing.

Multi-repo setup:

  • Common libraries in config-lib repo, versioned
  • Service repos import specific versions
  • Platform team maintains libs, service teams customize
  • Renovate keeps versions updated
  • Switched from Helm because debugging templates at 2am sucked

Performance at Scale

C++: Don't. It's slow enough to hurt productivity.

Go: Fine for most teams. Our 50-service setup compiles in 3 seconds on CI, 2 seconds locally. Memory usage is ~200MB. Slow? Check for circular imports.

Sjsonnet: Fast for massive configs, but 0.5s startup kills dev workflow. Use in CI only.

Optimization tricks:

  • Cache compiled files
  • Parallel builds for independent configs
  • Only rebuild what changed

GitOps Integration

ArgoCD:

## Generate manifests in CI
jsonnet -J lib -S -m generated/prod/ configs/prod.jsonnet
git add generated/ && git commit -m "Update configs" && git push

ArgoCD watches generated/ and deploys. Downside: committing generated files. Upside: GitOps sees exactly what deploys.

Flux + Kustomize:
Use Jsonnet for base configs, Kustomize for env differences. Cleaner git history, more complex setup.

Monitoring Automation

This is where it gets actually useful. Generate Prometheus alerts from service definitions:

// makeDeployment below is your deployment template from lib/ (like makeService earlier)
local service = {
  name: 'user-service',
  port: 8080,
  sla: { latency_p99: 500, error_rate: 0.01 }
};

{
  // Kubernetes deployment
  'user-service-deploy.yaml': std.manifestYamlDoc(
    makeDeployment(service.name, service.port)
  ),

  // Prometheus alerts based on SLA
  'user-service-alerts.yaml': std.manifestYamlDoc({
    apiVersion: 'monitoring.coreos.com/v1',
    kind: 'PrometheusRule',
    metadata: { name: service.name + '-alerts' },
    spec: {
      groups: [{
        name: service.name,
        rules: [
          {
            alert: service.name + 'HighLatency',
            expr: 'histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{job=\"%s\"}[5m])) by (le)) > %g' % [service.name, service.sla.latency_p99 / 1000],
            'for': '5m',
            labels: { severity: 'warning' },
            annotations: {
              summary: '%s p99 latency above %dms' % [service.name, service.sla.latency_p99]
            }
          }
        ]
      }]
    }
  })
}

Change the SLA in one place and both the deployment and the alerts update. I spent a month converting 200 Helm charts to this, and the maintainability alone made the initial pain worth it.

Integration with Other Stuff

Terraform: Generate terraform.json from Jsonnet for infrastructure.

OPA/Gatekeeper: Generate policies that match your service patterns.

Istio: Generate VirtualServices and DestinationRules that stay in sync.

Real power: change service definition once, everything updates consistently.
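The Terraform case needs no extra tooling because Terraform reads *.tf.json natively. A hedged sketch; the resource, variable, and record names are all made up:

// dns.jsonnet - generate Terraform JSON from the same service definition
local service = { name: 'user-service', domain: 'users.example.com' };

{
  resource: {
    aws_route53_record: {
      [service.name]: {
        zone_id: '${var.zone_id}',
        name: service.domain,
        type: 'CNAME',
        ttl: 300,
        records: ['ingress.internal.example.com'],
      },
    },
  },
}

Then jsonnet dns.jsonnet > dns.tf.json and Terraform picks it up like any other config file.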
