The Reality of Linear's Built-In GitHub Integration
Linear's GitHub integration handles the basics: PR created, issue moves to "In Progress"; PR merged, issue moves to "Done". But if you're running anything more complex than a personal blog, you'll hit limitations fast.
The sync breaks with cherry-picks, fails with merge conflicts, and has no concept of deployment environments. When our staging deploy failed but the PR was already merged, Linear marked the issue as "Done" while the feature was completely broken. That's when we realized the built-in integration is designed for simple workflows, not production systems.
Building Bulletproof Automation That Scales
After burning two weeks on Linear's documentation (which glosses over the hard parts), here's the automation architecture that actually works for engineering teams shipping to production:
**Layer 1: Enhanced GitHub Actions Integration**
Instead of relying on Linear's webhook interpretation, we control the entire flow with custom GitHub Actions.
This gives us visibility into deployment status across environments and prevents the "false positive" completions that plague the built-in sync.
```yaml
# .github/workflows/production-deploy.yml
name: Production Deployment with Linear Integration

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository (full history so the commit range resolves)
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Extract Linear issue IDs from commits
        id: linear-issues
        run: |
          # Get all commit messages and extract Linear IDs
          issues=$(git log --oneline ${{ github.event.before }}..${{ github.event.after }} | grep -o 'LIN-[0-9]\+' | sort | uniq | tr '\n' ',' | sed 's/,$//')
          echo "issues=$issues" >> $GITHUB_OUTPUT

      - name: Update issues to "Deploying"
        if: steps.linear-issues.outputs.issues != ''
        run: |
          IFS=',' read -ra ISSUE_ARRAY <<< "${{ steps.linear-issues.outputs.issues }}"
          for issue in "${ISSUE_ARRAY[@]}"; do
            curl -X POST https://api.linear.app/graphql \
              -H "Authorization: ${{ secrets.LINEAR_API_KEY }}" \
              -H "Content-Type: application/json" \
              -d "{\"query\": \"mutation { issueUpdate(id: \\\"$issue\\\", input: { stateId: \\\"${{ vars.LINEAR_DEPLOYING_STATE_ID }}\\\" }) { success } }\"}"
          done

      - name: Deploy to production
        id: deploy
        run: |
          # Your deployment logic here
          echo "Deploying to production..."

      - name: Mark issues as completed on successful deploy
        if: success() && steps.linear-issues.outputs.issues != ''
        run: |
          IFS=',' read -ra ISSUE_ARRAY <<< "${{ steps.linear-issues.outputs.issues }}"
          for issue in "${ISSUE_ARRAY[@]}"; do
            curl -X POST https://api.linear.app/graphql \
              -H "Authorization: ${{ secrets.LINEAR_API_KEY }}" \
              -H "Content-Type: application/json" \
              -d "{\"query\": \"mutation { issueUpdate(id: \\\"$issue\\\", input: { stateId: \\\"${{ vars.LINEAR_DONE_STATE_ID }}\\\" }) { success } }\"}"
          done

      - name: Create incident issue on deployment failure
        if: failure()
        run: |
          curl -X POST https://api.linear.app/graphql \
            -H "Authorization: ${{ secrets.LINEAR_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d "{\"query\": \"mutation { issueCreate(input: { teamId: \\\"${{ vars.LINEAR_INCIDENTS_TEAM_ID }}\\\", title: \\\"Production deployment failed - $(date)\\\", description: \\\"Deployment workflow failed. Check: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\\\", priority: 1 }) { issue { id url } } }\"}"
```
The key insight? Don't trust Git state to determine deployment state. Control the entire lifecycle through your CI/CD pipeline.
**Layer 2: Smart Webhook Processing**
We spent an entire Saturday debugging why our webhook kept timing out during deployments. Turns out Linear's webhook system can't handle 50+ commits hitting it at once.
Their docs don't mention this lovely limitation.
Linear's webhooks are unreliable during high-traffic periods (we learned this during a 200+ commit deployment). The solution is a webhook processor that queues incoming events and handles failures gracefully, using Bull for reliability:
```javascript
// webhook-processor.js - Handles Linear webhooks reliably
const express = require('express');
const Queue = require('bull');

const webhookQueue = new Queue('Linear webhook processing');
const app = express();

// Queue webhooks instead of processing immediately
app.post('/webhook/linear', express.json(), (req, res) => {
  webhookQueue.add('process', req.body, {
    attempts: 5,
    backoff: {
      type: 'exponential',
      delay: 2000,
    },
  });
  res.status(200).send('Queued');
});

// Process webhooks with proper error handling
webhookQueue.process('process', async (job) => {
  const { data: webhook } = job;
  try {
    await processLinearWebhook(webhook);
  } catch (error) {
    if (error.response?.status === 429) {
      throw new Error('Rate limited - will retry');
    }
    throw error;
  }
});

async function processLinearWebhook(webhook) {
  switch (webhook.type) {
    case 'Issue':
      if (webhook.action === 'update' && webhook.data.state.name === 'Done') {
        // Trigger deployment to staging
        await triggerStagingDeploy(webhook.data.id);
      }
      break;
    // Handle other webhook types
  }
}

// Accept webhooks on whatever port your infrastructure expects
app.listen(process.env.PORT || 3000);
```
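The processor leaves `triggerStagingDeploy` to your own deploy tooling. One possible shape, as a sketch only: kick off a GitHub Actions run through the `workflow_dispatch` REST endpoint. The repo, token, and workflow file name (`staging-deploy.yml`) below are placeholders, not part of the setup above:

```javascript
// Hypothetical helper: trigger a staging deploy via GitHub's workflow_dispatch API.
// GITHUB_REPO ("owner/repo"), GITHUB_TOKEN, and the workflow file name are placeholders.
async function triggerStagingDeploy(issueId) {
  const response = await fetch(
    `https://api.github.com/repos/${process.env.GITHUB_REPO}/actions/workflows/staging-deploy.yml/dispatches`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.GITHUB_TOKEN}`,
        'Accept': 'application/vnd.github+json',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        ref: 'main',
        inputs: { linear_issue_id: issueId }, // surfaced to the workflow as an input
      }),
    }
  );

  if (!response.ok) {
    throw new Error(`workflow_dispatch failed: ${response.status}`);
  }
}
```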
**Layer 3: Environment-Aware Status Updates**
The missing piece in most Linear automations is environment awareness. Issues should have different states for "deployed to staging" vs "deployed to production". Here's how we handle multi-environment deployments:
```yaml
# Environment-specific status updates
- name: Update Linear for staging deploy
  run: |
    curl -X POST https://api.linear.app/graphql \
      -H "Authorization: ${{ secrets.LINEAR_API_KEY }}" \
      -H "Content-Type: application/json" \
      -d "{\"query\": \"mutation { issueUpdate(id: \\\"$issue_id\\\", input: { stateId: \\\"${{ vars.LINEAR_STAGING_STATE_ID }}\\\", labels: [\\\"deployed-staging\\\"] }) { success } }\"}"

- name: Update Linear for production deploy
  run: |
    curl -X POST https://api.linear.app/graphql \
      -H "Authorization: ${{ secrets.LINEAR_API_KEY }}" \
      -H "Content-Type: application/json" \
      -d "{\"query\": \"mutation { issueUpdate(id: \\\"$issue_id\\\", input: { stateId: \\\"${{ vars.LINEAR_DONE_STATE_ID }}\\\", labels: [\\\"deployed-production\\\"] }) { success } }\"}"
```
The Gotchas That Will Bite You
Rate Limiting Pain
Linear's 1,500 requests per hour limit (about 25 per minute) sounds generous until you're batch-processing 100 commits during a big merge.
The limit is per user, so multiple workflows using the same API key compete for the same quota and cause random failures.
I've personally spent four hours debugging a webhook that failed because Linear's API was having "a moment"; it started working again after their status page claimed everything was fine, and I'm still not sure what actually happened.
The solution: implement request queueing and use separate API keys for different workflow types (deployment vs. incident management vs. reporting), so one noisy workflow can't starve the others.
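A minimal sketch of what that client-side queueing can look like: space calls out to stay under the per-minute budget and back off on 429s. The interval and retry values below are assumptions to tune, not Linear-documented numbers:

```javascript
// Simple client-side throttle for Linear API calls: space requests out and
// back off on 429s. MIN_INTERVAL_MS and retry count are assumptions to tune.
const MIN_INTERVAL_MS = 2500; // roughly 24 requests/minute, under the hourly cap
let lastCallAt = 0;

async function throttledLinearRequest(query, apiKey, retries = 3) {
  const wait = Math.max(0, lastCallAt + MIN_INTERVAL_MS - Date.now());
  if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
  lastCallAt = Date.now();

  const response = await fetch('https://api.linear.app/graphql', {
    method: 'POST',
    headers: { 'Authorization': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });

  if (response.status === 429 && retries > 0) {
    // Back off and retry instead of failing the whole workflow
    await new Promise((resolve) => setTimeout(resolve, 30000));
    return throttledLinearRequest(query, apiKey, retries - 1);
  }
  return response.json();
}
```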
State ID Hell
Linear uses UUIDs for state IDs (e.g., stateId: "f47ac10b-58cc-4372-a567-0e02b2c3d479"), and they're different for every team. You can't hardcode them in your workflows. Instead, query them dynamically:
```bash
# Get state IDs for your team using Linear's GraphQL endpoint
curl -X POST "https://api.linear.app/graphql" \
  -H "Authorization: ${{ secrets.LINEAR_API_KEY }}" \
  -H "Content-Type: application/json" \
  -d '{"query": "query { team(id: \"your-team-id\") { states { nodes { id name } } } }"}' | jq '.data.team.states.nodes'
```
Store these as GitHub repository variables (or secrets) and reference them in your workflows, as the `vars.*` lookups above do.
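If you'd rather not copy UUIDs into repository variables at all, an alternative (at the cost of one extra API call per run, which counts against the rate limit above) is to resolve state IDs by name at runtime. A sketch, with the team ID and state name as placeholders:

```javascript
// Hypothetical helper: look up a workflow state ID by name at runtime so
// nothing is hardcoded. teamId and stateName are placeholders for your own values.
async function getStateId(teamId, stateName, apiKey) {
  const query = `query { team(id: "${teamId}") { states { nodes { id name } } } }`;
  const response = await fetch('https://api.linear.app/graphql', {
    method: 'POST',
    headers: { 'Authorization': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  const state = data.team.states.nodes.find((node) => node.name === stateName);
  if (!state) throw new Error(`No state named "${stateName}" on team ${teamId}`);
  return state.id;
}
```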
Webhook Replay Attacks
If your webhook endpoint is public and you never check signatures, anyone can trigger fake deployment updates by replaying webhook payloads. We found this out the hard way when a competitor started spamming our deployment webhooks with fake "issue completed" events; it took us a whole afternoon to figure out why our deployment board showed everything as done when nothing had actually shipped. Always validate webhook authenticity using HMAC signatures:
```javascript
// Add webhook signature validation
const crypto = require('crypto');

function validateLinearWebhook(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(payload)
    .digest('hex');

  return crypto.timingSafeEqual(
    Buffer.from(signature, 'hex'),
    Buffer.from(expectedSignature, 'hex')
  );
}
```
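To wire that into the Layer 2 endpoint, hang on to the raw request body (the HMAC is computed over the exact bytes) and check the signature before queueing anything. The sketch below is a variant of the earlier route, not an additional one; the header name and secret variable are assumptions to adjust for your setup:

```javascript
// Sketch: validate the signature before queueing. The raw body is captured via
// express.json's verify hook; the 'linear-signature' header name is an assumption.
app.post(
  '/webhook/linear',
  express.json({ verify: (req, res, buf) => { req.rawBody = buf; } }),
  (req, res) => {
    const signature = req.get('linear-signature') || '';
    if (!validateLinearWebhook(req.rawBody, signature, process.env.LINEAR_WEBHOOK_SECRET)) {
      return res.status(401).send('Invalid signature');
    }
    webhookQueue.add('process', req.body, { attempts: 5 });
    res.status(200).send('Queued');
  }
);
```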
Performance Impact and Monitoring
Our automation adds something like 30 seconds to each deployment (mostly API calls to Linear).
That's acceptable for production deployments but painful for development environments. We only run the full automation on main branch deploys.
Monitor your Linear API usage with custom metrics. Linear doesn't provide usage dashboards, so track API response times and rate limit headers yourself:
```javascript
// Track Linear API performance
const linearApiCall = async (query) => {
  const start = Date.now();
  try {
    const response = await fetch('https://api.linear.app/graphql', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ query }),
    });

    // Log rate limit info
    console.log('Rate limit remaining:', response.headers.get('X-RateLimit-Remaining'));
    console.log('API response time:', Date.now() - start, 'ms');

    return response.json();
  } catch (error) {
    console.error('Linear API failed:', error);
    throw error;
  }
};
```
This setup has probably saved our team around 15 hours per week. That's just a rough estimate, but everyone noticed they weren't constantly updating issue status anymore.
The initial setup took us about three days, mostly because we kept hitting weird edge cases that weren't in the docs, but it pays for itself within two weeks.