We migrated our website from Astro content collections to a fully dynamic SSR architecture with Supabase as the data layer. The migration took 3 days. It changed everything about how we manage content, deploy updates, and think about our website's architecture.
This is not a tutorial. This is the full, honest breakdown of why we made every decision, what we got right, what we got wrong, and the exact architecture that powers codercops.com today. If you are building a content-heavy site and trying to decide between static generation, SSR, or some hybrid approach, this post will save you weeks of research.
I am sharing our actual code, our real performance numbers, and the mistakes that cost us time. Because when I was researching this migration, every blog post I found was either a toy example or a marketing piece from a CMS vendor. Neither was useful.
Why Astro (And Why Not Next.js)
Let me get the framework debate out of the way first, because someone is going to ask.
We evaluated Next.js 15, Remix 3, SvelteKit, and Astro 5 before building the current version of codercops.com. Here is why Astro won:
Content-first architecture. Our website is 80% content pages -- blog posts, project case studies, team bios, service descriptions. Astro was designed from the ground up for content-heavy sites. Next.js can do content sites, but it was designed for applications. That difference shows up in developer experience, bundle size, and performance.
Island architecture. We have some interactive components (the contact form, the mobile navigation, the newsletter signup). With Astro, these are isolated "islands" of JavaScript in a sea of static HTML. The rest of the page ships zero JS. With Next.js, even a simple blog post includes the React runtime.
Bundle size reality check. Here is a real comparison from our testing:
| Metric | Astro 5 (our build) | Next.js 15 (equivalent) |
|---|---|---|
| Blog post page JS | 0 KB (no islands) | 87 KB (React runtime) |
| Blog post page with interactive widget | 12 KB (island only) | 94 KB (React + component) |
| Homepage total JS | 18 KB | 112 KB |
| Time to Interactive (3G) | 1.2s | 2.8s |
| Lighthouse Performance | 98-100 | 88-94 |
I am not saying Next.js is bad. We use it for client projects all the time. But for a content-driven agency website, Astro is the better fit by a wide margin.
Hybrid rendering. Astro lets us mix SSR and static rendering at the page level. Our contact page and 404 page are pre-rendered at build time. Our blog posts and project pages are server-rendered from Supabase at request time. We make this decision per-page with a single line of code:
// Static page (rendered at build time)
// src/pages/contact.astro
export const prerender = true;

// Dynamic page (rendered per request from Supabase)
// src/pages/blog/[slug].astro
// No prerender export = SSR by default (output: 'server')

The Architecture: How It All Fits Together
Here is the full picture of how codercops.com works:
┌─────────────────────────────────────────────────────────────┐
│ Content Authors (us) │
│ Write MDX/MD files in VS Code │
└─────────────────┬───────────────────────────────────────────┘
│ git push
v
┌─────────────────────────────────────────────────────────────┐
│ Content Repo (codercops-agency-content) │
│ │
│ blog/ projects/ team/ │
│ ├── post1.mdx ├── project1.md ├── member1.md │
│ ├── post2.mdx ├── project2.md ├── member2.md │
│ └── ... └── ... └── ... │
└─────────────────┬───────────────────────────────────────────┘
│ GitHub Actions trigger
v
┌─────────────────────────────────────────────────────────────┐
│ Sync Pipeline (GitHub Actions) │
│ │
│ 1. Validate frontmatter (required fields, types) │
│ 2. Render MDX/MD to HTML (marked + highlight.js) │
│ 3. Extract headings for table of contents │
│ 4. Compute SHA-256 hash of each file │
│ 5. Compare with stored hashes in Supabase │
│ 6. Upload only changed files (delta sync) │
│ 7. Upload images to Supabase Storage │
│ 8. Trigger Vercel redeploy │
└─────────────────┬───────────────────────────────────────────┘
│
v
┌─────────────────────────────────────────────────────────────┐
│ Supabase (hfyczaxyaafgxntbodpy) │
│ │
│ Tables: │
│ ├── blog_posts (title, slug, content_html, metadata...) │
│ ├── projects (title, slug, description, tech_stack...) │
│ ├── team_members (name, role, bio, avatar...) │
│ └── page_views (slug, count, last_viewed) │
│ │
│ Edge Functions: │
│ ├── newsletter-subscribe │
│ └── increment-page-views │
│ │
│ Storage: │
│ └── content-images/ (blog and project images) │
└─────────────────┬───────────────────────────────────────────┘
│ Supabase JS client queries
v
┌─────────────────────────────────────────────────────────────┐
│ Website (codercops-agency-website on Vercel) │
│ │
│ Astro 5 (output: 'server') │
│ ├── src/lib/supabase.ts (client singleton) │
│ ├── src/lib/content.ts (data access layer) │
│ ├── src/pages/blog/[slug].astro (SSR) │
│ ├── src/pages/contact.astro (prerendered) │
│ └── src/pages/404.astro (prerendered) │
│ │
│ Deployed to Vercel (Serverless Functions for SSR) │
└─────────────────────────────────────────────────────────────┘There are three separate systems working together: the content repo (where we write), Supabase (where data lives), and the website (where it renders). Let me walk through each one.
The Content Repo: Git as the Source of Truth
Our content lives in a separate Git repository: codercops-agency-content. It has a simple structure:
codercops-agency-content/
├── blog/
│ ├── post-slug-here.mdx
│ ├── another-post.mdx
│ └── ...
├── projects/
│ ├── project-name.md
│ └── ...
├── team/
│ ├── anurag-verma.md
│ └── ...
├── scripts/
│ ├── sync-to-supabase.js
│ └── validate-content.js
├── .github/
│ └── workflows/
│ └── validate-and-deploy.yml
└── package.json

Why a separate repo? Two reasons:
Content changes should not trigger website rebuilds. When we fix a typo in a blog post, we do not want to rebuild the entire Astro site. The sync script pushes the change to Supabase, and the next request to that page gets the updated content. Zero downtime, no redeploy needed.
Different permissions. We can give content writers access to the content repo without exposing the website codebase. This matters when you scale.
Content Format
Blog posts are MDX files with YAML frontmatter. Here is what a typical post looks like:
---
title: "The Post Title"
description: "A 120-180 character description."
pubDate: 2026-02-21
author: "Anurag Verma"
image: "https://images.unsplash.com/photo-xxx?w=1200&h=630&fit=crop"
tags: ["Tag1", "Tag2", "Tag3"]
category: "Category Name"
subcategory: "Subcategory"
featured: false
---
The post content goes here. Standard Markdown with MDX extensions.
## Headings Become Table of Contents
The sync script extracts all h2 and h3 headings to build a table of contents
automatically. No manual TOC needed.

Projects and team members use plain Markdown (.md) with simpler frontmatter.
The Sync Pipeline: SHA-256 Delta Sync
This is the piece I am most proud of. The sync script is a Node.js script that runs in GitHub Actions whenever content changes are pushed.
Why Delta Sync Matters
When we first built the sync script, it was dumb: on every push, it re-synced every file to Supabase. With 100+ blog posts, this took 45 seconds and hit Supabase rate limits on the free tier. Not great.
The fix was SHA-256 content hashing. The script computes a hash of each file's content and compares it with the hash stored in Supabase. If the hash matches, the file has not changed, and we skip it.
Here is the core logic:
// scripts/sync-to-supabase.js (simplified)
import { createHash } from 'crypto';
import { createClient } from '@supabase/supabase-js';
import { marked } from 'marked';
import hljs from 'highlight.js';
import * as fs from 'fs';
import * as path from 'path';
import matter from 'gray-matter';
const supabase = createClient(
process.env.SUPABASE_URL,
process.env.SUPABASE_SERVICE_ROLE_KEY
);
// Configure marked with syntax highlighting.
// Note: marked v5+ removed the built-in `highlight` option; on newer
// versions, wire highlight.js through the marked-highlight package instead.
marked.setOptions({
highlight: (code, lang) => {
if (lang && hljs.getLanguage(lang)) {
return hljs.highlight(code, { language: lang }).value;
}
return hljs.highlightAuto(code).value;
},
});
function computeHash(content) {
return createHash('sha256').update(content).digest('hex');
}
function extractHeadings(html) {
// Assumes headings carry id attributes -- marked needs the
// marked-gfm-heading-id extension (or a custom renderer) to emit them.
const headings = [];
const regex = /<(h[23])[^>]*id="([^"]*)"[^>]*>(.*?)<\/\1>/gi;
let match;
while ((match = regex.exec(html)) !== null) {
headings.push({
level: parseInt(match[1].charAt(1)),
id: match[2],
text: match[3].replace(/<[^>]*>/g, ''),
});
}
return headings;
}
async function syncBlogPosts() {
const blogDir = path.join(process.cwd(), 'blog');
const files = fs.readdirSync(blogDir).filter(
(f) => f.endsWith('.mdx') || f.endsWith('.md')
);
// Get existing hashes from Supabase
const { data: existing } = await supabase
.from('blog_posts')
.select('slug, content_hash');
const existingMap = new Map(
(existing || []).map((row) => [row.slug, row.content_hash])
);
let synced = 0;
let skipped = 0;
for (const file of files) {
const raw = fs.readFileSync(path.join(blogDir, file), 'utf-8');
const hash = computeHash(raw);
const slug = file.replace(/\.(mdx|md)$/, '');
// Skip if content has not changed
if (existingMap.get(slug) === hash) {
skipped++;
continue;
}
// Parse frontmatter and render content
const { data: frontmatter, content } = matter(raw);
const contentHtml = marked.parse(content);
const headings = extractHeadings(contentHtml);
// Upsert to Supabase
const { error } = await supabase
.from('blog_posts')
.upsert({
slug,
title: frontmatter.title,
description: frontmatter.description,
content_html: contentHtml,
pub_date: frontmatter.pubDate,
author: frontmatter.author,
image: frontmatter.image,
tags: frontmatter.tags,
category: frontmatter.category,
subcategory: frontmatter.subcategory,
featured: frontmatter.featured || false,
headings: JSON.stringify(headings),
content_hash: hash,
updated_at: new Date().toISOString(),
}, { onConflict: 'slug' });
if (error) {
console.error(`Failed to sync ${slug}:`, error);
process.exit(1);
}
synced++;
console.log(`Synced: ${slug}`);
}
console.log(`Done. Synced: ${synced}, Skipped: ${skipped}`);
}
syncBlogPosts();

Sync Performance
With our current content volume (170+ blog posts, 12 projects, 6 team members):
| Scenario | Time | API Calls |
|---|---|---|
| Full sync (all files changed) | ~35 seconds | ~190 |
| Typical push (1-3 files changed) | ~3 seconds | ~5 |
| No changes (hash match) | ~1.5 seconds | 1 (fetch hashes) |
The delta sync reduced our average sync time by 92%. On the Supabase free tier, this matters because you have limited API calls.
The Data Layer: src/lib/content.ts
This is the bridge between Supabase and our Astro pages. When we migrated from Astro content collections, the biggest challenge was that our page templates expected data in Astro's getEntry() / getCollection() format. Instead of rewriting every template, we built a compatibility layer that returns data in the same shape.
// src/lib/content.ts
import { supabase } from './supabase';
export interface BlogPost {
slug: string;
data: {
title: string;
description: string;
pubDate: Date;
author: string;
image: string;
tags: string[];
category: string;
subcategory: string;
featured: boolean;
};
body: string;
headings: Array<{ level: number; id: string; text: string }>;
}
export async function getAllBlogPosts(): Promise<BlogPost[]> {
const { data, error } = await supabase
.from('blog_posts')
.select('*')
.order('pub_date', { ascending: false });
if (error) throw new Error(`Failed to fetch blog posts: ${error.message}`);
return (data || []).map(transformPost);
}
export async function getBlogEntry(slug: string): Promise<BlogPost | null> {
const { data, error } = await supabase
.from('blog_posts')
.select('*')
.eq('slug', slug)
.single();
if (error) {
if (error.code === 'PGRST116') return null; // Not found
throw new Error(`Failed to fetch post ${slug}: ${error.message}`);
}
return transformPost(data);
}
export async function getFeaturedPosts(limit = 3): Promise<BlogPost[]> {
const { data, error } = await supabase
.from('blog_posts')
.select('*')
.eq('featured', true)
.order('pub_date', { ascending: false })
.limit(limit);
if (error) throw new Error(`Failed to fetch featured posts: ${error.message}`);
return (data || []).map(transformPost);
}
export async function getPostsByTag(tag: string): Promise<BlogPost[]> {
const { data, error } = await supabase
.from('blog_posts')
.select('*')
.contains('tags', [tag])
.order('pub_date', { ascending: false });
if (error) throw new Error(`Failed to fetch posts by tag: ${error.message}`);
return (data || []).map(transformPost);
}
function transformPost(row: any): BlogPost {
return {
slug: row.slug,
data: {
title: row.title,
description: row.description,
pubDate: new Date(row.pub_date),
author: row.author,
image: row.image,
tags: row.tags || [],
category: row.category,
subcategory: row.subcategory,
featured: row.featured,
},
body: row.content_html,
headings: typeof row.headings === 'string'
? JSON.parse(row.headings)
: row.headings || [],
};
}

The key insight here is the transformPost function. It maps Supabase's flat row structure into the nested { slug, data, body } shape that Astro content collections use. This meant our Astro page templates barely needed any changes during the migration.
The Supabase Client
The client setup is minimal:
// src/lib/supabase.ts
import { createClient } from '@supabase/supabase-js';
const supabaseUrl = import.meta.env.SUPABASE_URL;
const supabaseAnonKey = import.meta.env.SUPABASE_ANON_KEY;
export const supabase = createClient(supabaseUrl, supabaseAnonKey);

We use the anon key (not the service role key) in the website. Row Level Security (RLS) policies on Supabase ensure that the public can only read published content. The service role key is only used in the sync script running in GitHub Actions.
SSR on Vercel: Performance Reality
With output: 'server' in our Astro config, every page request hits Vercel's serverless function, which queries Supabase, renders the HTML, and sends it back.
Here are the real response times:
// astro.config.mjs
import { defineConfig } from 'astro/config';
import vercel from '@astrojs/vercel';
export default defineConfig({
output: 'server',
adapter: vercel({
edgeMiddleware: true,
}),
});

Response Time Breakdown
| Page Type | Supabase Query | Astro Render | Total TTFB | Cached (Vercel Edge) |
|---|---|---|---|---|
| Blog post (single) | 30-50ms | 15-25ms | 50-80ms | 5-10ms |
| Blog listing (all) | 60-100ms | 20-30ms | 80-130ms | 5-10ms |
| Blog listing (filtered by tag) | 40-70ms | 15-25ms | 60-100ms | 5-10ms |
| Project page | 25-40ms | 10-20ms | 40-65ms | 5-10ms |
| Homepage | 50-80ms | 25-35ms | 80-120ms | 5-10ms |
Those numbers are from Vercel's US East region querying our Supabase instance (also US East). Supabase query time is the dominant factor. Astro's server-side rendering is fast -- usually under 30ms.
Vercel Edge Caching
We set cache headers on our SSR responses so that Vercel's edge network caches pages:
// src/pages/blog/[slug].astro
---
import { getBlogEntry } from '../../lib/content';
const { slug } = Astro.params;
const post = await getBlogEntry(slug);
if (!post) {
return Astro.redirect('/404');
}
// Cache for 5 minutes at the edge, revalidate in background
Astro.response.headers.set(
'Cache-Control',
'public, s-maxage=300, stale-while-revalidate=600'
);
---

With this setup, the first visitor to a page gets an SSR response (50-130ms). Every subsequent visitor in the next 5 minutes gets an edge-cached response (5-10ms). After 5 minutes, the page is revalidated in the background while the stale version is served. Content updates propagate within 5-10 minutes without any manual cache purging.
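Rather than repeating the header string on every page, these values can be centralized in a small helper so all SSR routes share one caching policy. This is an illustrative sketch, not code from the site -- the helper name and defaults are ours:

```javascript
// A tiny helper for consistent edge-cache headers across SSR pages
// (illustrative only -- the function name and defaults are our own).
function cacheControl({ sMaxage = 300, swr = 600 } = {}) {
  // sMaxage: seconds the edge may serve the response without revalidating
  // swr: extra window in which a stale copy is served while revalidation
  //      happens in the background
  return `public, s-maxage=${sMaxage}, stale-while-revalidate=${swr}`;
}

// Usage inside .astro frontmatter:
// Astro.response.headers.set('Cache-Control', cacheControl());
// Astro.response.headers.set('Cache-Control', cacheControl({ sMaxage: 60, swr: 120 }));
```

Changing the site-wide cache policy then becomes a one-line edit instead of a find-and-replace across every page.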
Supabase as a Headless CMS: The Honest Review
Let me be clear: Supabase is not a CMS. It is a database with an API layer. We are using it as a CMS, and that comes with tradeoffs.
What Works Well
SQL queries for content. This is the killer feature. With Astro content collections, if you wanted "all blog posts tagged 'AI' sorted by date, limited to 10" you had to load all posts and filter in JavaScript. With Supabase, it is a single efficient query:
const { data } = await supabase
.from('blog_posts')
.select('slug, title, description, pub_date, image, tags')
.contains('tags', ['AI'])
.order('pub_date', { ascending: false })
.limit(10);

Row Level Security. We have RLS policies that ensure the anon key can only read published posts (where pub_date <= now()). Draft posts are invisible to the website until their publish date. No application-level logic needed.
-- RLS policy for blog_posts
CREATE POLICY "Public can read published posts"
ON blog_posts FOR SELECT
USING (pub_date <= now());

Real-time capabilities. We do not use this yet, but we could add a real-time "currently reading" counter or live comments without changing the architecture.
Free tier. Our entire content layer runs on Supabase's free tier. 500MB database, 1GB storage, 50,000 monthly active users. For an agency website, that is more than enough.
What Does Not Work Well
No visual editor. Content authors write in VS Code, commit to Git, and wait for the sync pipeline. There is no "log into the CMS and click edit" experience. This is fine for our team (we are all developers), but it would not work for a client who needs non-technical editors.
Manual schema management. Every time we add a new field to our content format, we need to manually update the Supabase table schema, update the sync script, update the content.ts layer, and update the Astro templates. With Contentful or Sanity, the schema is defined in one place.
No content preview. We cannot preview a blog post before publishing it. The workflow is: write in VS Code, push to a branch, wait for the sync script, check the staging site. With a proper CMS, you get instant preview.
No image optimization pipeline. We rely on Unsplash URLs with query parameters for image sizing. A real CMS like Contentful gives you automatic image optimization, resizing, and format conversion via their CDN.
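Even the manual approach can be made less error-prone with a small URL builder instead of hand-editing query strings. A sketch -- the helper function is ours; the w/h/fit/q/auto parameters are Unsplash's standard Imgix-style ones:

```javascript
// Build a sized Unsplash image URL programmatically.
// (Sketch -- the helper is our own; w, h, fit, q, and auto=format are
// Unsplash's Imgix-style query parameters.)
function unsplashUrl(base, { w = 1200, h = 630, fit = 'crop', q = 75 } = {}) {
  const url = new URL(base);
  url.searchParams.set('w', String(w));
  url.searchParams.set('h', String(h));
  url.searchParams.set('fit', fit);
  url.searchParams.set('q', String(q));
  url.searchParams.set('auto', 'format'); // let Unsplash negotiate WebP/AVIF
  return url.toString();
}
```

It is not a real optimization pipeline, but it at least keeps the sizing parameters consistent across posts.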
Comparison with Alternatives
| Feature | Supabase (our setup) | Contentful | Sanity | Strapi |
|---|---|---|---|---|
| Cost (our usage) | $0/month | $300+/month | $99+/month | $0 (self-hosted) |
| Content editing | VS Code + Git | Visual editor | Visual editor (GROQ) | Admin panel |
| Query flexibility | Full SQL | GraphQL/REST | GROQ queries | REST/GraphQL |
| Real-time | Built-in | Webhooks | Built-in | Webhooks |
| Image optimization | None (manual) | Built-in CDN | Built-in CDN | Plugin |
| Self-hostable | Yes | No | No | Yes |
| Learning curve | Medium (SQL) | Low | Medium (GROQ) | Low |
| Vendor lock-in | Low | High | Medium | Low |
We chose our DIY approach because: (1) $0/month matters when you are bootstrapping, (2) we wanted full SQL query power, and (3) Git-based content fits our workflow. If we had non-technical content editors, we would go with Sanity.
The Migration: 3 Days From Content Collections to Supabase
Here is how the 3-day migration went, for anyone considering a similar move.
Day 1: Database Schema and Sync Script
We created the Supabase tables and wrote the initial sync script. The schema mirrors the frontmatter fields:
CREATE TABLE blog_posts (
id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
slug TEXT UNIQUE NOT NULL,
title TEXT NOT NULL,
description TEXT,
content_html TEXT NOT NULL,
pub_date TIMESTAMPTZ NOT NULL,
author TEXT NOT NULL DEFAULT 'Anurag Verma',
image TEXT,
tags TEXT[] DEFAULT '{}',
category TEXT,
subcategory TEXT,
featured BOOLEAN DEFAULT false,
headings JSONB DEFAULT '[]',
content_hash TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
CREATE INDEX idx_blog_posts_pub_date ON blog_posts(pub_date DESC);
CREATE INDEX idx_blog_posts_tags ON blog_posts USING gin(tags);
CREATE INDEX idx_blog_posts_category ON blog_posts(category);
CREATE INDEX idx_blog_posts_featured ON blog_posts(featured);

The sync script took about 4 hours to write and test. Most of that time was spent on edge cases: posts with special characters in slugs, MDX components that needed special handling, and code blocks with YAML frontmatter inside them (meta, I know).
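For the special-characters-in-slugs edge case specifically, a normalization step like the following handles accented letters and punctuation. This is a hypothetical sketch of the idea, not the exact code in our sync script:

```javascript
// Normalize a content filename into a URL-safe slug.
// (Hypothetical sketch -- our sync script's actual rules may differ.)
function normalizeSlug(filename) {
  return filename
    .replace(/\.(mdx|md)$/, '')       // strip the file extension
    .toLowerCase()
    .normalize('NFKD')                // decompose accented characters
    .replace(/[\u0300-\u036f]/g, '')  // drop the combining marks
    .replace(/[^a-z0-9]+/g, '-')      // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, '');         // trim leading/trailing hyphens
}
```

Running this at sync time guarantees every row in blog_posts has a slug that is safe to put in a URL, regardless of how the file was named.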
Day 2: Content Layer and Page Templates
We wrote src/lib/content.ts and updated every page that used getCollection() or getEntry() from astro:content. The compatibility layer made this mostly mechanical:
// Before (Astro content collections)
import { getCollection, getEntry } from 'astro:content';
const posts = await getCollection('blog');
const post = await getEntry('blog', slug);
// After (Supabase via content.ts)
import { getAllBlogPosts, getBlogEntry } from '../../lib/content';
const posts = await getAllBlogPosts();
const post = await getBlogEntry(slug);

The data shapes are identical, so templates did not need changes. The only breaking change was removing getStaticPaths() from dynamic routes, since SSR does not use it.
Day 3: GitHub Actions, Testing, and Edge Cases
We set up the GitHub Actions workflow, tested the full sync pipeline, and handled edge cases:
# .github/workflows/validate-and-deploy.yml
name: Validate and Deploy Content

on:
  push:
    branches: [main]
    paths:
      - 'blog/**'
      - 'projects/**'
      - 'team/**'

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: node scripts/validate-content.js

  sync:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: node scripts/sync-to-supabase.js
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.SUPABASE_SERVICE_ROLE_KEY }}

The validation script checks that every content file has valid frontmatter with all required fields, correct types, and no duplicate slugs. It fails the pipeline if anything is wrong, preventing bad data from reaching Supabase.
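The validation script itself is not reproduced in this post, but the checks it performs can be sketched like this -- the required-field list matches our frontmatter; the function and variable names are illustrative:

```javascript
// validate-content.js (sketch) -- reject bad frontmatter before sync.
// Assumes files were already parsed with gray-matter into
// { slug, frontmatter } pairs. Names here are illustrative.
const REQUIRED_FIELDS = ['title', 'description', 'pubDate', 'author', 'tags'];

function validateEntries(entries) {
  const errors = [];
  const seen = new Set();
  for (const { slug, frontmatter } of entries) {
    if (seen.has(slug)) errors.push(`duplicate slug: ${slug}`);
    seen.add(slug);
    for (const field of REQUIRED_FIELDS) {
      if (frontmatter[field] === undefined) {
        errors.push(`${slug}: missing required field "${field}"`);
      }
    }
    if (frontmatter.tags !== undefined && !Array.isArray(frontmatter.tags)) {
      errors.push(`${slug}: "tags" must be an array`);
    }
  }
  return errors; // an empty array means the content is valid
}

// In CI: if (validateEntries(allEntries).length > 0) process.exit(1);
```

Exiting non-zero on any error is what makes the `validate` job gate the `sync` job in the workflow above.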
Challenges and Things That Bit Us
Astro 5 Scoped Styles and Dark Mode
Astro scopes styles by adding a unique data attribute to each component (like data-astro-cid-abc123) and rewriting the component's selectors to target that attribute. This works great until you need to override those styles for dark mode.
We had a contact form with scoped styles that looked fine in light mode. In dark mode, the input backgrounds needed to change, but the scoped styles had higher specificity than our global dark mode styles. The fix was adding :global() wrappers for dark mode overrides:
/* This does NOT work in Astro scoped styles */
.dark input {
background: #1a1a2e;
}
/* This DOES work */
:global(.dark) input {
background: #1a1a2e;
}

It took us 2 hours to figure this out. The Astro docs mention it, but it is easy to miss.
TypeScript Strictness with Supabase
Supabase's generated types are good but not perfect. When you query with .select('*'), TypeScript knows the return type. But when you use .select('slug, title, description') to fetch only certain columns, the return type is still the full row type. This means you can accidentally access columns you did not select:
// This compiles fine but crashes at runtime
const { data } = await supabase
.from('blog_posts')
.select('slug, title');
// TypeScript thinks data[0].content_html exists, but it does not
console.log(data[0].content_html); // undefined at runtime

We worked around this by always using .select('*') for data access functions and defining explicit return types in our content layer.
SSR Cold Starts on Vercel
When a Vercel serverless function has not been called recently, the first request experiences a cold start. For our Astro SSR pages, cold starts add 200-400ms to the response time. Subsequent requests are fast (50-130ms).
This is noticeable but not critical. The fix is Vercel's edge caching -- after the first request, the edge cache serves responses in 5-10ms. We also keep the function warm by pinging it every 5 minutes from an uptime monitor.
What We Would Do Differently
If we were starting over today, here is what we would change:
1. Start with Supabase from day one. We originally used Astro content collections (file-based content) and migrated to Supabase 6 months later. The migration was not hard, but we would have saved time by starting with Supabase.
2. Build a content preview system. We should have a /preview/[slug] route that reads content directly from the Git branch (before sync) and renders it. Right now, we have to push to the content repo and wait for the sync to see how a post looks.
3. Use Supabase's generated TypeScript types from the start. We wrote manual type definitions for our content layer. We should have used supabase gen types typescript to generate types from the database schema and kept them in sync.
4. Add full-text search earlier. We currently filter posts by tags and categories. Full-text search using Postgres's tsvector would make the blog much more useful. It is on our roadmap for Q2 2026.
5. Set up a staging environment. We deploy content changes directly to production. A staging branch that syncs to a separate Supabase project would let us catch issues before they reach users.
Performance Summary
Here are the Lighthouse scores for codercops.com as of March 2026:
| Page | Performance | Accessibility | Best Practices | SEO |
|---|---|---|---|---|
| Homepage | 99 | 100 | 100 | 100 |
| Blog listing | 98 | 100 | 100 | 100 |
| Blog post | 99 | 100 | 100 | 100 |
| Contact | 100 | 100 | 100 | 100 |
| About | 99 | 100 | 100 | 100 |
The 98-100 performance scores are a direct result of Astro's zero-JS-by-default approach. Blog posts ship no JavaScript at all unless they have interactive components. The contact page ships only the 8KB of JS needed for the form validation and submission.
The Stack at a Glance
| Layer | Technology | Why |
|---|---|---|
| Framework | Astro 5 | Content-first, island architecture, hybrid rendering |
| Hosting | Vercel | Serverless SSR, edge caching, zero-config deploys |
| Database | Supabase (Postgres) | SQL queries, RLS, free tier, real-time ready |
| Content Format | MDX / Markdown | Developer-friendly, version controlled, portable |
| Sync | Custom Node.js script | SHA-256 delta sync, full control |
| CI/CD | GitHub Actions | Free, integrated with both repos |
| Styling | Astro scoped CSS + CSS custom properties | No runtime CSS-in-JS |
| Edge Functions | Supabase Edge Functions | Newsletter, page views |
This stack costs us effectively $0/month in infrastructure (Vercel hobby plan, Supabase free tier, GitHub Actions free tier). The only cost is the domain name.
Building a content-heavy website and trying to choose the right architecture? At CODERCOPS, we build high-performance websites with modern stacks tailored to your content workflow. Check out our other engineering deep dives or reach out to discuss your project.