GitHub Actions in 2026: Faster Pipelines, Smaller Bills
Most teams treat their CI pipeline as a black box that occasionally fails. A few hours of optimization can cut your CI time by 40-60% and your GitHub Actions bill by a similar margin. Here's exactly how to do it.
Anurag Verma
8 min read
The GitHub Actions bill arrives at the end of the month and most teams look at it, wince slightly, and move on. CI time feels like a fixed cost of software development — a 12-minute pipeline is just what 12-minute pipelines cost.
It isn’t. Most pipelines doing real work can be cut to 4-6 minutes with caching and parallelization. The spend follows the time.
Here’s a methodical approach to optimization, starting with the changes that consistently produce the biggest gains.
Start With Data: What Is Actually Slow?
Before optimizing anything, look at the GitHub Actions timing breakdown. In the Actions UI, click on a recent run and expand each job. The per-step timing tells you where to focus.
The most common time sinks, roughly in order of frequency:
- Dependency installation (npm install, pip install, go mod download) — 2-5 minutes
- Build steps that don’t need to re-run (unchanged packages in a monorepo)
- Tests running sequentially that could run in parallel
- Docker image builds that pull a fresh base every time
- Jobs that block each other unnecessarily
Fix these roughly in this order. The others rarely matter until you’ve addressed the top few.
Cache Dependencies Aggressively
This is the single highest-impact change for most pipelines.
Node.js:
- name: Setup Node
  uses: actions/setup-node@v4
  with:
    node-version: '22'
    cache: 'npm'   # this one line handles cache setup and restoration
- name: Install dependencies
  run: npm ci
The cache: 'npm' option in setup-node uses the package-lock.json hash as the cache key. If the lockfile hasn't changed, npm ci completes in under 10 seconds instead of 90-120 seconds, because package tarballs are restored from the cached ~/.npm directory instead of downloaded from the registry.
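If the lockfile doesn't sit at the repository root, as in a monorepo, point the cache at it with setup-node's cache-dependency-path input. A minimal sketch, assuming a hypothetical packages/ workspace layout:

- uses: actions/setup-node@v4
  with:
    node-version: '22'
    cache: 'npm'
    cache-dependency-path: packages/*/package-lock.json   # glob over workspace lockfiles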
Python:
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.12'
    cache: 'pip'
- run: pip install -r requirements.txt
Go:
- uses: actions/setup-go@v5
  with:
    go-version: '1.22'
    cache: true   # on by default in setup-go v4+; caches the Go module and build caches
Custom cache keys when the above isn’t enough:
- uses: actions/cache@v4
  with:
    path: |
      ~/.npm
      node_modules
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-
The restore-keys fallback is important: if the exact cache key misses (a package changed), it restores the closest previous cache instead of starting from nothing. With a warm ~/.npm, npm ci is typically 30-50% faster than a cold install. One caveat: npm ci deletes node_modules before installing, so the node_modules entry above only pays off with plain npm install; with npm ci, the ~/.npm entry is doing the work.
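One way to make the node_modules entry pay off anyway is to skip the install entirely on an exact cache hit, using the cache action's cache-hit output. A sketch:

- uses: actions/cache@v4
  id: npm-cache
  with:
    path: node_modules
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
- if: steps.npm-cache.outputs.cache-hit != 'true'   # exact key match means node_modules is already correct
  run: npm ci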
Parallelize Jobs That Can Run Independently
Look at your pipeline graph. Most teams run test, lint, and type-check sequentially in a single job. They can run in parallel.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { cache: 'npm' }
      - run: npm ci
      - run: npm run lint
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { cache: 'npm' }
      - run: npm ci
      - run: npm run typecheck
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { cache: 'npm' }
      - run: npm ci
      - run: npm run test
  deploy:
    needs: [lint, typecheck, test]   # only deploy if all three pass
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4    # deploy.sh lives in the repo
      - run: ./deploy.sh
The npm ci runs three times instead of once. But because each is cached, each takes under 10 seconds. Total wall time for three parallel jobs is roughly max(lint, typecheck, test) + deployment, compared to lint + typecheck + test + deployment in a sequential pipeline. For a 12-minute pipeline, this typically cuts 4-6 minutes.
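If the tripled checkout-and-install boilerplate bothers you, a local composite action can hold the shared steps. A sketch, using a hypothetical path .github/actions/node-setup/action.yml:

# .github/actions/node-setup/action.yml
name: node-setup
description: Shared Node setup with cached install
runs:
  using: composite
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: '22'
        cache: 'npm'
    - run: npm ci
      shell: bash   # composite run steps must declare a shell explicitly

Each job then shrinks to actions/checkout, uses: ./.github/actions/node-setup, and its one real command.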
Matrix Builds for Cross-Platform or Multi-Version Testing
When you need to test across Node versions, Python versions, or operating systems, matrix builds run cases in parallel:
jobs:
  test:
    strategy:
      matrix:
        node: ['20', '22']
        os: [ubuntu-latest, windows-latest]
      fail-fast: false   # don't cancel all matrix jobs when one fails
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
          cache: 'npm'
      - run: npm ci
      - run: npm test
This runs 4 combinations in parallel (2 Node versions × 2 OSes). The fail-fast: false option means a failure on Windows won’t cancel the Linux runs, which is usually what you want — you want to see all failures, not just the first.
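The same matrix mechanism can also shard one slow suite across parallel jobs. A sketch assuming Jest 28+ and its --shard flag; most modern test runners have an equivalent:

jobs:
  test:
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
      fail-fast: false
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '22', cache: 'npm' }
      - run: npm ci
      - run: npx jest --shard=${{ matrix.shard }}/4   # each job runs a quarter of the suite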
Skip CI on Docs-Only Changes
Every documentation or README commit that runs a full 10-minute test suite is wasted spend. Path filtering lets you skip:
on:
  push:
    branches: [main]
    paths-ignore:
      - '**/*.md'
      - 'docs/**'
      - '.github/CODEOWNERS'
  pull_request:
    paths-ignore:
      - '**/*.md'
      - 'docs/**'
One caveat: if a workflow skipped by path filtering contains required status checks, the PR reports them as pending and can't merge, so keep required checks out of path-filtered workflows or satisfy them with a no-op workflow. Alternatively, trigger selective jobs based on what changed:
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      backend: ${{ steps.filter.outputs.backend }}
      frontend: ${{ steps.filter.outputs.frontend }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            backend:
              - 'api/**'
              - 'requirements.txt'
            frontend:
              - 'web/**'
              - 'package.json'
  test-backend:
    needs: changes
    if: needs.changes.outputs.backend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12', cache: 'pip' }
      - run: pip install -r requirements.txt
      - run: pytest api/
  test-frontend:
    needs: changes
    if: needs.changes.outputs.frontend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '22', cache: 'npm' }
      - run: npm ci
      - run: npm test
For a monorepo with multiple packages, this can cut 60-70% of unnecessary CI runs on PRs that touch only one package.
Docker Build Caching
Docker image builds are often the biggest single time sink in pipelines that deploy containerized services. A 10-layer image that rebuilds from scratch on every push takes 4-8 minutes. Caching can reduce this to 30-60 seconds for builds where only the application code changed.
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: .
    push: ${{ github.event_name == 'push' }}
    tags: myapp:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
type=gha stores Docker layer cache in GitHub Actions cache storage. The mode=max option caches all intermediate layers, not just those in the final image. This is the lowest-friction option: no external registry needed.
For larger teams hitting the GitHub Actions cache limit (10 GB per repository, with least-recently-used entries evicted first), use a registry-based cache:
cache-from: type=registry,ref=ghcr.io/myorg/myapp:buildcache
cache-to: type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max
Also make sure your Dockerfile is ordered to maximize layer reuse:
# Dependency layer: changes rarely
COPY package.json package-lock.json ./
RUN npm ci
# App code: changes frequently
COPY src/ ./src/
RUN npm run build
Dependencies before source code is the oldest Docker optimization and still the most impactful. If your Dockerfile copies everything first and then installs, every code change invalidates the dependency cache. Note the full npm ci above: the old --only=production flag is deprecated (npm now spells it --omit=dev), and skipping devDependencies before npm run build breaks any build that relies on dev-only tooling. Do the production-only install in a separate stage.
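A minimal multi-stage sketch along those lines, assuming the build emits to dist/ with a dist/index.js entry point (both hypothetical):

# Build stage: full install, devDependencies available to the bundler
FROM node:22-slim AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build

# Runtime stage: production dependencies only, no build toolchain
FROM node:22-slim
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]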
Larger Runners for Compute-Intensive Steps
GitHub's default runner (ubuntu-latest) is a 2-core machine for private repositories; public repositories get 4 cores. For steps that benefit from parallelism, such as Webpack builds, TypeScript compilation of large codebases, or test runners that parallelize across cores, a 4-core or 8-core runner often cuts that step's time in half.
GitHub offers larger runners on paid plans. The label is whatever you assign when creating the runner in your organization settings; ubuntu-latest-4-cores is a common convention:
jobs:
  build:
    runs-on: ubuntu-latest-4-cores
The cost per minute is 2-4x higher, but if a 2-core 8-minute build becomes a 4-core 4-minute build, the minute cost is the same and the wall time is half. For steps where users are waiting (like a PR status check), the tradeoff is usually worth it.
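Apply larger runners selectively, paying for extra cores only where they help. A sketch, again assuming an org-level runner labeled ubuntu-latest-4-cores:

jobs:
  lint:
    runs-on: ubuntu-latest            # 2-core is plenty; linting is rarely CPU-bound
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { cache: 'npm' }
      - run: npm ci
      - run: npm run lint
  build:
    runs-on: ubuntu-latest-4-cores    # larger runner only for the CPU-bound step
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { cache: 'npm' }
      - run: npm ci
      - run: npm run build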
Putting It Together: Before and After
A typical Node.js pipeline that runs tests, lints, type-checks, builds, and deploys:
Before: Sequential, no caching.
checkout: 30s
npm install: 90s
lint: 60s
typecheck: 120s
test: 180s
build: 90s
deploy: 60s
Total: 630s (~10.5 min)
After: Parallel jobs, caching enabled, Docker layer cache.
(parallel)
lint job: checkout + cached install + lint = 80s
typecheck job: checkout + cached install + typecheck = 140s
test job: checkout + cached install + test = 200s
(all parallel, wall time = max of the three = 200s, ~3.3 min)
deploy job (after all pass):
checkout + cached Docker build + deploy = 90s
Total wall time: 290s (~5 min)
Just under five minutes versus ten and a half. The code that runs is the same. The only difference is ordering and caching.
Watching Spend
GitHub Actions bills per job, with each job's runtime rounded up to the nearest minute. Parallelizing jobs reduces wall time but doesn't reduce billed minutes; it redistributes them, and adds a little overhead through duplicated setup steps and per-job rounding.
Actions minutes are free for public repositories; the Free plan includes 2,000 minutes/month for private ones. Splitting a 9-minute sequential job into three parallel 3-minute jobs still consumes roughly 9 minutes of compute, plus three checkouts and three installs instead of one. Caching keeps that duplicated setup cheap.
For teams on paid plans where cost is a concern, the path to lower spend is caching (reduces time per job) plus path filtering (skips jobs entirely on irrelevant changes). Parallelization is about developer experience, not about cost reduction.
Track your monthly minutes in Settings → Billing → Actions. Set a spending limit to avoid surprise charges if a bug causes workflows to loop.
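Two workflow-level guards help with exactly that runaway case. A minimal sketch (the concurrency group name and the 15-minute cap are arbitrary choices):

concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true    # a new push cancels the now-stale run for the same ref

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15       # a hung job stops billing after 15 minutes instead of the 6-hour default
    steps:
      - uses: actions/checkout@v4
      - run: npm test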