We deployed the same API endpoint to Supabase Edge Functions, Vercel Serverless Functions, Vercel Edge Functions, Fly.io containers, and AWS Lambda. The gap between the fastest and slowest cold start was more than 200x. But the cheapest option was not the fastest. And the fastest option was not the best for most use cases.

I am tired of "it depends" blog posts. You know the type -- they compare three technologies, list pros and cons, and conclude with "choose the right tool for the job." That is useless advice. You already knew that.

This post is different. We ran actual benchmarks, measured real latency numbers, calculated real costs at different traffic levels, and built a concrete decision framework based on workload type. By the end, you will know exactly which option to pick for your next project -- not because I said so, but because the numbers say so.

The Contenders: Clear Definitions

Before we dive into benchmarks, let me define these terms precisely. The internet uses them loosely, and that causes confusion.

Edge Functions

What they are: Lightweight functions that run at CDN edge locations, physically close to your users. They run in V8 isolates (lightweight sandboxes inside the same JavaScript engine that powers Chrome) instead of full containers, which lets them start in milliseconds instead of seconds.

Runtime: Deno (Supabase Edge Functions, Deno Deploy), V8 isolates (Cloudflare Workers, Vercel Edge Functions)

Key constraint: Limited runtime APIs. No Node.js fs module, no native binaries, limited CPU time (usually 10-50ms of CPU time per request, though wall-clock time can be longer for I/O).

Examples: Supabase Edge Functions, Cloudflare Workers, Vercel Edge Functions, Deno Deploy

Serverless Functions

What they are: Functions that run in a single region, spin up on demand, and scale to zero when idle. They run in full containers or microVMs, which means you get the complete Node.js, Python, or Go runtime.

Runtime: Node.js, Python, Go, Java, .NET -- virtually any language

Key constraint: Cold starts. When a function has not been invoked recently, it needs to spin up a new container, which adds 200-3000ms of latency.

Examples: Vercel Serverless Functions, AWS Lambda, Google Cloud Functions, Azure Functions

Containers

What they are: Full OS-level isolation with an always-running (or scale-to-zero) process. You control the entire runtime environment, including the operating system, installed packages, and background processes.

Runtime: Anything you can put in a Docker container

Key constraint: You manage more infrastructure. You pay for idle time (unless using scale-to-zero, which reintroduces cold starts).

Examples: Fly.io, Railway, AWS ECS/Fargate, Google Cloud Run, Render

The Benchmark Setup

Here is exactly what we tested, so you can reproduce it.

The Endpoint

A realistic API endpoint that:

  1. Receives a JSON payload (a page view event with page_path, visitor_id, metadata)
  2. Validates the payload (checks required fields, sanitizes input)
  3. Queries a Postgres database (inserts the event, then queries the last 10 events for that page)
  4. Returns a transformed JSON response (the inserted event + recent activity)

This is not a "hello world" benchmark. It involves JSON parsing, validation logic, database I/O, and response serialization -- things a real API does.
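As a concrete sketch, the validation step (step 2) might look like the TypeScript below. The field names come from the payload description above; the specific rules (leading-slash check, length caps) are assumptions for illustration, not the benchmark's actual code.

```typescript
// Hypothetical validation/sanitization for the benchmarked page-view payload.
interface PageViewEvent {
  page_path: string;
  visitor_id: string;
  metadata?: Record<string, unknown>;
}

function validatePayload(body: unknown): PageViewEvent {
  if (typeof body !== "object" || body === null) {
    throw new Error("payload must be a JSON object");
  }
  const b = body as Record<string, unknown>;
  if (typeof b.page_path !== "string" || !b.page_path.startsWith("/")) {
    throw new Error("page_path must be a string starting with '/'");
  }
  if (typeof b.visitor_id !== "string" || b.visitor_id.length === 0) {
    throw new Error("visitor_id is required");
  }
  return {
    page_path: b.page_path.slice(0, 2048), // cap length as basic sanitization
    visitor_id: b.visitor_id.slice(0, 128),
    metadata:
      typeof b.metadata === "object" && b.metadata !== null
        ? (b.metadata as Record<string, unknown>)
        : undefined,
  };
}
```

The same function body ran unchanged on every platform; only the surrounding handler signature differed per runtime.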

The Database

A Supabase Postgres instance running in us-east-1 (AWS Virginia). All functions connected to the same database to keep the comparison fair.

Important note: Edge functions connecting to a database in a single region lose some of their geographic advantage. The function runs near the user, but the database query still crosses the network to wherever the database lives. This is a real-world constraint that most edge computing benchmarks conveniently ignore.

Test Locations

We ran tests from three locations using a custom benchmarking script:

  • Virginia, US (same region as the database)
  • London, UK (cross-Atlantic)
  • Mumbai, India (cross-continent)

What We Measured

  • Cold start time: Time from first request (after 30+ minutes idle) to response received
  • Warm response time (P50): Median response time after the function is warm
  • P95 latency: The 95th percentile -- the worst-case experience for 1 in 20 users
  • P99 latency: The 99th percentile -- tail latency that can kill user experience
  • Throughput: Maximum sustained requests per second before errors
  • Memory usage: Peak memory during the benchmark

Each test ran 10,000 requests over 5 minutes with 50 concurrent connections (after an initial warm-up phase for cold start measurement).
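For reference, the P50/P95/P99 figures reported below can be computed from raw latency samples with the nearest-rank method; whether the benchmark script used exactly this formula is an assumption.

```typescript
// Nearest-rank percentile over raw latency samples (milliseconds).
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b); // ascending numeric sort
  const rank = Math.ceil((p / 100) * sorted.length);   // 1-based nearest rank
  return sorted[Math.max(0, rank - 1)];
}

// Example: 100 samples of 1..100ms
const samples = Array.from({ length: 100 }, (_, i) => i + 1);
percentile(samples, 50); // 50
percentile(samples, 95); // 95
percentile(samples, 99); // 99
```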

The Results

Cold Start Times

This is where the differences are most dramatic.

| Platform | Cold Start (Virginia) | Cold Start (London) | Cold Start (Mumbai) |
|---|---|---|---|
| Supabase Edge Functions | 12ms | 14ms | 18ms |
| Vercel Edge Functions | 8ms | 11ms | 15ms |
| Cloudflare Workers | 6ms | 9ms | 12ms |
| Vercel Serverless (Node.js) | 280ms | 310ms | 340ms |
| AWS Lambda (Node.js) | 350ms | 390ms | 420ms |
| Fly.io (256MB, scale-to-zero) | 1,200ms | 1,250ms | 1,300ms |
| Fly.io (256MB, always-on) | 0ms | 0ms | 0ms |

The takeaway: Edge functions have effectively zero cold starts (6-18ms is imperceptible to users). Serverless functions have noticeable but tolerable cold starts (280-420ms). Containers with scale-to-zero have painful cold starts (1.2+ seconds). Always-on containers have no cold starts, but you pay for idle time.

The gap I mentioned in the intro: Cloudflare Workers cold start (6ms) vs Fly.io scale-to-zero cold start (1,300ms from Mumbai). Same API, same logic, more than 200x slower to start.

Warm Response Times (P50, P95, P99)

Once everything is warm, the playing field levels -- but geography matters.

From Virginia (same region as database):

| Platform | P50 | P95 | P99 |
|---|---|---|---|
| Supabase Edge Functions | 24ms | 38ms | 52ms |
| Vercel Edge Functions | 22ms | 35ms | 48ms |
| Vercel Serverless | 18ms | 28ms | 42ms |
| AWS Lambda | 20ms | 32ms | 45ms |
| Fly.io (256MB) | 15ms | 22ms | 35ms |

From London:

| Platform | P50 | P95 | P99 |
|---|---|---|---|
| Supabase Edge Functions | 95ms | 120ms | 145ms |
| Vercel Edge Functions | 92ms | 115ms | 140ms |
| Vercel Serverless | 105ms | 130ms | 155ms |
| AWS Lambda | 108ms | 135ms | 160ms |
| Fly.io (256MB, us-east) | 110ms | 138ms | 165ms |

From Mumbai:

| Platform | P50 | P95 | P99 |
|---|---|---|---|
| Supabase Edge Functions | 160ms | 195ms | 230ms |
| Vercel Edge Functions | 155ms | 190ms | 225ms |
| Vercel Serverless | 175ms | 210ms | 250ms |
| AWS Lambda | 180ms | 215ms | 255ms |
| Fly.io (256MB, us-east) | 182ms | 220ms | 260ms |

The takeaway: When the function and database are co-located, containers are the fastest (15ms P50 for Fly.io). But from distant locations, edge functions win by roughly 10-25ms because the function execution happens closer to the user -- only the database query crosses the network.

Here is the critical insight: edge functions with a single-region database give you edge-speed function execution but still-central-database latency. The total response time is dominated by the database round trip, not the function execution. Edge functions shine when they do not need a database (auth validation, redirects, A/B testing) or when the database is replicated to multiple regions.
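A back-of-the-envelope model makes the point. The round-trip numbers below are illustrative assumptions, not measurements from our benchmark:

```typescript
// Toy latency model: user -> function RTT, plus one function -> database
// RTT per query, plus compute time. All inputs and the result are in ms.
function totalLatencyMs(
  userToFnRtt: number,
  fnToDbRtt: number,
  dbQueries: number,
  computeMs: number,
): number {
  return userToFnRtt + fnToDbRtt * dbQueries + computeMs;
}

// Mumbai user, edge function nearby, single-region database in us-east-1
// (two queries: the insert plus the recent-events select):
totalLatencyMs(5, 70, 2, 5);  // 150 -- dominated by the DB round trips

// Same user, serverless function co-located with the database:
totalLatencyMs(180, 1, 2, 5); // 187 -- dominated by the user round trip
```

Moving the function to the edge only shaves off the last-mile portion; the database trips set the floor.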

Throughput

| Platform | Max Requests/Second | Errors at Max Load |
|---|---|---|
| Supabase Edge Functions | 850 req/s | 0.1% after 900 |
| Vercel Edge Functions | 1,100 req/s | 0.2% after 1,200 |
| Vercel Serverless | 600 req/s | 0.5% after 700 |
| AWS Lambda | 1,000 req/s | 0.1% after 1,100 |
| Fly.io (256MB) | 400 req/s | 2% after 450 |

The takeaway: Serverless and edge platforms handle bursts better because they auto-scale instantly. Containers need time to scale up (or you need to over-provision). For spiky traffic patterns, serverless wins. For steady traffic, containers are fine.

Note: The Fly.io numbers are for a single 256MB machine. Adding more machines or using larger instances would increase throughput proportionally.

The Cost Comparison

This is where it gets interesting. Performance is one thing; cost is what determines real-world decisions.

Monthly Cost at Different Traffic Levels

At 100,000 requests/month:

| Platform | Monthly Cost | Notes |
|---|---|---|
| Supabase Edge Functions | $0 | Free tier: 500K invocations/month |
| Vercel Edge Functions | $0 | Included in Hobby plan |
| Cloudflare Workers | $0 | Free tier: 100K requests/day |
| Vercel Serverless | $0 | Included in Hobby plan |
| AWS Lambda | $0 | Free tier: 1M requests/month |
| Fly.io (256MB, always-on) | $3.19 | $0.0000036/s for shared-cpu-1x |
| Google Cloud Run | $0 | Free tier: 2M requests/month |

At 1,000,000 requests/month (1M):

| Platform | Monthly Cost | Notes |
|---|---|---|
| Supabase Edge Functions | $0-$10 | Pro plan includes 2M invocations |
| Vercel Edge Functions | $20 | Pro plan at $20/month |
| Cloudflare Workers | $5 | $5/month for 10M requests |
| Vercel Serverless | $20 | Pro plan at $20/month |
| AWS Lambda | $0.40 | $0.20/1M requests + compute |
| Fly.io (256MB, always-on) | $3.19 | Same machine handles the load |
| Google Cloud Run | $7.50 | Per-request + compute pricing |

At 10,000,000 requests/month (10M):

| Platform | Monthly Cost | Notes |
|---|---|---|
| Supabase Edge Functions | $25-$40 | Pro plan + overage |
| Vercel Edge Functions | $20-$45 | Pro plan + bandwidth overage |
| Cloudflare Workers | $5 | Still $5/month -- incredible value |
| Vercel Serverless | $20-$60 | Pro plan + function execution overage |
| AWS Lambda | $4.50 | $0.20/1M + compute charges |
| Fly.io (256MB x 3, always-on) | $9.57 | Three machines for redundancy |
| Google Cloud Run | $35 | Per-request + compute pricing |

The takeaway: At low traffic (under 1M requests), everything is basically free. At high traffic (10M+), Cloudflare Workers and AWS Lambda are remarkably cheap. Fly.io stays cheap for steady, predictable traffic because you pay per machine, not per request -- the bill does not grow with request volume.
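The per-request vs per-machine distinction is easy to sketch. The rates below are placeholders in the spirit of the tables above, not any provider's exact pricing:

```typescript
// Per-request pricing grows linearly with volume; per-machine pricing is flat.
function perRequestCostUsd(requests: number, pricePerMillionUsd: number): number {
  return (requests / 1_000_000) * pricePerMillionUsd;
}

function perMachineCostUsd(machines: number, monthlyPerMachineUsd: number): number {
  return machines * monthlyPerMachineUsd;
}

// At 10M requests/month with a hypothetical $0.45/M rate vs one $3.19 machine:
perRequestCostUsd(10_000_000, 0.45); // 4.5
perMachineCostUsd(1, 3.19);          // 3.19 -- flat, no matter the volume
```

The crossover point depends entirely on your traffic shape: per-request pricing wins when the machine would sit idle, per-machine pricing wins once the machine is kept busy.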

The hidden cost nobody talks about: Developer time. Cloudflare Workers and edge functions have a different runtime than Node.js. You cannot use npm install express and call it a day. The development and debugging experience is different, and that difference costs time. For a small team, the developer productivity cost of an unfamiliar platform can exceed the infrastructure savings.

The Use Case Matrix

Based on our benchmarks and real project experience, here is when to use what.

Edge Functions: Best For

| Use Case | Why Edge |
|---|---|
| Auth token validation | No database needed, runs at edge, sub-10ms |
| A/B testing / feature flags | Decision logic at edge, no origin round trip |
| URL redirects and rewrites | Edge-speed routing |
| Request header manipulation | Add/remove headers before hitting origin |
| Simple API gateways | Rate limiting, CORS, auth checks |
| Geolocation-based responses | Edge has access to user's location |
| HTML/JSON transformation | Modify responses before they reach the user |

Avoid edge functions for: Heavy computation, large npm dependencies, database-heavy workloads with a single-region database, anything requiring Node.js-specific APIs.
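To make the first row concrete, here is a minimal edge-style handler that validates a bearer token with no database call, using the Fetch API Request/Response types these runtimes share. The token format check is a made-up stand-in for real verification (e.g. checking a JWT signature with Web Crypto).

```typescript
// Hypothetical edge middleware: reject requests without a plausible token
// before they ever reach the origin. No database, no Node-specific APIs.
function handleAuth(request: Request): Response {
  const auth = request.headers.get("authorization") ?? "";
  const token = auth.startsWith("Bearer ") ? auth.slice("Bearer ".length) : "";

  // Placeholder check -- a real deployment would verify a signature here.
  if (!token.startsWith("v1.") || token.length < 10) {
    return new Response("unauthorized", { status: 401 });
  }
  return new Response("ok", { status: 200 });
}
```

Because this touches only the request headers, it runs comfortably inside the 10-50ms CPU budget mentioned earlier.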

Serverless Functions: Best For

| Use Case | Why Serverless |
|---|---|
| API endpoints with database access | Full Node.js, connection pooling, ORM support |
| Webhook handlers | Scale to zero between events, handle bursts |
| Form submissions | Occasional traffic, no need for always-on |
| Scheduled tasks (cron) | Built-in cron support on most platforms |
| File processing (small) | Up to 50MB files, 10-second processing |
| Third-party API integrations | Full npm ecosystem available |

Avoid serverless for: Long-running tasks (>30s), WebSocket connections, high-frequency polling, tasks requiring persistent state, large file processing.

Containers: Best For

| Use Case | Why Containers |
|---|---|
| WebSocket / real-time connections | Persistent connections need persistent servers |
| Image/video processing | CPU-intensive, needs native binaries (ffmpeg, sharp) |
| Machine learning inference | GPU access, model loading, large memory |
| Background job workers | Long-running processes, queue consumers |
| Stateful services | In-memory caches, session management |
| Full-stack apps | When you need a "normal" server |

Avoid containers for: Highly spiky traffic (you will over-provision or have slow scale-up), simple API endpoints (over-engineering), anything that can run at the edge.

Developer Experience Comparison

Performance and cost are important, but developer experience determines how fast your team ships. Here is how the platforms compare on DX.

Local Development

| Platform | Local Dev Story |
|---|---|
| Supabase Edge Functions | supabase functions serve -- good, uses Deno runtime locally |
| Vercel Edge Functions | vercel dev -- decent, but edge runtime quirks appear only in production |
| Cloudflare Workers | wrangler dev -- excellent, accurate local simulation with Miniflare |
| Vercel Serverless | vercel dev -- good, standard Node.js |
| AWS Lambda | SAM CLI or SST -- clunky but functional, cold starts differ locally |
| Fly.io | Standard Docker -- docker compose up, identical to production |

Winner: Fly.io and Cloudflare Workers. Fly.io because Docker is Docker -- your local environment IS your production environment. Cloudflare because Miniflare accurately simulates the Workers runtime.

Deployment

| Platform | Deployment | Time to Production |
|---|---|---|
| Supabase Edge Functions | supabase functions deploy | 15-30 seconds |
| Vercel Edge/Serverless | git push (auto-deploy) | 30-90 seconds |
| Cloudflare Workers | wrangler deploy | 10-20 seconds |
| AWS Lambda | SAM/CDK/Serverless Framework | 60-180 seconds |
| Fly.io | fly deploy | 60-120 seconds |

Winner: Cloudflare Workers. Deploys globally in under 20 seconds. Vercel is a close second because git push deployment is hard to beat for team workflows.

Debugging and Logging

| Platform | Logging | Debugging |
|---|---|---|
| Supabase Edge Functions | Dashboard logs, supabase functions logs | console.log + local dev |
| Vercel | Dashboard logs, log drains | Limited -- console.log mainly |
| Cloudflare Workers | wrangler tail (live logs) | Excellent -- live tail + source maps |
| AWS Lambda | CloudWatch (verbose, hard to search) | X-Ray tracing (powerful but complex) |
| Fly.io | fly logs (live tail) | SSH into running machines |

Winner: Fly.io for debugging (you can SSH into the machine and use normal debugging tools). Cloudflare for logging (live log tailing is brilliant).

Error Handling

This is where edge functions show their weakness. When something goes wrong in a Cloudflare Worker or Vercel Edge Function, the error messages are often cryptic. "Script exceeded CPU time limit" does not tell you which function call was slow. "Network connection lost" does not tell you why the database connection failed.

Serverless functions and containers give you full stack traces, familiar error messages, and standard debugging tools. For complex applications, this matters more than the performance difference.

What We Use at CODERCOPS (And Why)

After running these benchmarks and using all of these platforms in production for client projects, here is what we actually deploy:

Supabase Edge Functions

For: Auth webhooks, input validation, lightweight API endpoints that sit close to the Supabase database. When the project already uses Supabase, Edge Functions are the natural choice for server-side logic that does not need the full Node.js runtime.

Vercel Serverless Functions

For: Most of our API endpoints, especially those powering Next.js and Astro applications. The git push deployment, preview deployments, and tight framework integration make it the default choice.

Fly.io Containers

For: Any project requiring WebSockets, background processing, or heavy computation. We run our internal monitoring tools on Fly.io because they need persistent connections and in-memory state.

What We Do NOT Use

Cloudflare Workers for database-heavy workloads. Despite Workers being incredibly fast and cheap, the runtime limitations and debugging experience make them harder to work with for complex API endpoints. We use Workers for edge middleware (auth checks, redirects, header manipulation) but not for primary application logic.

AWS Lambda directly. The DX is not worth it for our team size. If we need Lambda-like functionality, we use Vercel Serverless (which runs on Lambda under the hood) with a much better developer experience.

The Decision Framework

Here is how we decide for client projects. Answer these questions:

Question 1: Does your function need a database?

  • No (auth validation, redirects, feature flags) --> Edge Function. No contest. Sub-10ms cold starts, global distribution, usually free.
  • Yes, but read-only with cached/replicated data --> Edge Function with a global database (PlanetScale, Turso, Neon with read replicas)
  • Yes, read-write to a single-region database --> Serverless Function co-located with the database. Edge functions gain you nothing if the DB round trip dominates.

Question 2: How spiky is your traffic?

  • Very spiky (0 to 1000 req/s in seconds) --> Serverless or Edge. They auto-scale instantly.
  • Steady (predictable load) --> Container. Pay-per-machine pricing is cheaper than pay-per-request at scale.
  • Near-zero with occasional bursts --> Serverless. Scale-to-zero saves money during idle periods.

Question 3: How long does your function run?

  • Under 50ms CPU time --> Edge Function
  • Under 30 seconds --> Serverless Function
  • Over 30 seconds --> Container

Question 4: What is your team's experience?

  • Familiar with Deno/Web APIs --> Edge functions are a natural fit
  • Familiar with Node.js --> Serverless functions (Vercel/AWS Lambda)
  • Familiar with Docker --> Containers (Fly.io/Railway)
  • Small team, want simplicity --> Vercel Serverless (best DX overall)

Question 5: What is your budget?

  • $0 (early stage/hobby) --> Supabase Edge Functions or AWS Lambda free tier
  • Under $20/month --> Vercel Pro (includes both Edge and Serverless)
  • Over $20/month, predictable traffic --> Fly.io containers (best cost efficiency at scale)
  • Over $20/month, unpredictable traffic --> Cloudflare Workers ($5/month for 10M requests)
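Condensed into code, the framework above might look like the sketch below. The thresholds mirror questions 1-3; the type and field names are inventions for illustration, and a real decision also weighs team experience and budget (questions 4 and 5).

```typescript
type Platform = "edge" | "serverless" | "container";

interface Workload {
  needsDatabase: boolean;
  dbGloballyReplicated: boolean; // read replicas / global database
  maxRuntimeSeconds: number;     // Question 3
  cpuMsPerRequest: number;       // edge runtimes cap CPU time
  traffic: "spiky" | "steady";   // Question 2
}

function choosePlatform(w: Workload): Platform {
  if (w.maxRuntimeSeconds > 30) return "container";  // too long for serverless
  const edgeFriendlyDb = !w.needsDatabase || w.dbGloballyReplicated;
  if (w.cpuMsPerRequest <= 50 && edgeFriendlyDb) return "edge";
  if (w.traffic === "steady") return "container";    // pay-per-machine wins
  return "serverless";                               // co-locate with the DB
}
```

For example, a spiky read-write API against a single-region Postgres lands on serverless, while a WebSocket server (effectively unbounded runtime) lands on a container.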

The Hybrid Approach: What Most Production Systems Actually Need

Here is the thing nobody says in comparison posts: most production systems use multiple compute platforms. A single API can have:

  • Edge functions at the CDN layer for auth, rate limiting, and geolocation
  • Serverless functions for the main API endpoints
  • Containers for background jobs, WebSocket servers, and heavy processing

This is not over-engineering. This is using the right tool for each layer of your stack.

For example, one of our client projects (a SaaS analytics platform) uses:

  • Vercel Edge Functions for bot detection and rate limiting at the edge
  • Vercel Serverless Functions for the REST API (dashboard queries, user management)
  • Supabase Edge Functions for real-time event ingestion (close to the Supabase database)
  • Fly.io container for the WebSocket server that pushes live updates to connected dashboards

Each component runs on the platform best suited for its workload. The total infrastructure cost is $47/month for a system handling 2M+ requests/month. If we had put everything on a single platform, we would either be paying more (containers for everything) or fighting runtime limitations (edge functions for everything).

What Changed Since Last Year

If you read cloud computing comparisons from 2024-2025, a few things have changed:

  1. Edge function runtimes got better. Supabase Edge Functions now support more Web APIs, and Cloudflare Workers added database connectors (Hyperdrive) that pool connections and reduce latency to centralized databases.

  2. Serverless cold starts improved. Vercel and AWS invested heavily in snapshot-based warm-up. Cold starts for Node.js functions dropped from 500-800ms to 250-350ms.

  3. Container platforms added scale-to-zero. Fly.io and Railway now offer scale-to-zero for machines, making containers viable for low-traffic workloads. But cold starts for containers (1-2 seconds) are still much slower than serverless (200-300ms).

  4. Pricing converged. Most platforms now have generous free tiers. The cost difference matters only at scale (1M+ requests/month).

Conclusion: Stop Overthinking, Start Shipping

After running all these benchmarks, here is the honest truth: for 80% of projects, the choice does not matter. If you are building a startup with under 100K requests/month, pick the platform your team knows best, ship fast, and optimize later.

The performance differences we measured -- 15ms here, 25ms there -- are imperceptible to users browsing a website. The cold start differences matter more, but only for functions that are invoked infrequently.

Where the choice DOES matter:

  • Global user base with latency-sensitive operations --> Edge functions
  • Cost-sensitive at scale (10M+ requests) --> Cloudflare Workers or Fly.io
  • Complex applications with many dependencies --> Serverless or Containers
  • Real-time features --> Containers

Pick one. Ship. Measure. Adjust if the numbers tell you to. Do not spend a week benchmarking compute platforms for an app that gets 100 requests per day.


Need Help Choosing Your Stack?

At CODERCOPS, we build on all of these platforms and have strong opinions (backed by data) about which to use when. If you are starting a new project and want to get the infrastructure right from the beginning, reach out to us.

We have built production systems on Supabase Edge Functions, Vercel Serverless, Cloudflare Workers, and Fly.io -- and we know where each one shines and where each one falls over. Check out more of our engineering deep-dives on the blog.
