WebAssembly was supposed to bring C++ to the browser. That was the pitch in 2017 — run native code alongside JavaScript, make games and video editors work on the web. And it did that. But something more interesting happened along the way.
WebAssembly escaped the browser. It is now running on servers, at the edge, inside databases, as plugin runtimes, and as a universal binary format. And most developers have not noticed, because WASM does not have a marketing team. There is no VC-funded startup shouting about it on Twitter. It is just quietly becoming infrastructure.
At CODERCOPS, we started paying attention to WASM about two years ago when we noticed Cloudflare Workers, Supabase Edge Functions, and Figma's plugin system all using it under the hood. The more we looked, the more we found it everywhere. This post covers the five production use cases that convinced us WASM is not a curiosity — it is the future of portable compute.
A Quick Primer: What WASM Actually Is
If you only vaguely know what WebAssembly is, here is the 60-second version.
WebAssembly (WASM) is a binary instruction format — a compact bytecode that runs in a sandboxed virtual machine. You write code in Rust, Go, C, or other languages, compile it to .wasm, and run it anywhere there is a WASM runtime.
Think of it like the JVM (Java Virtual Machine), but:
- Language-agnostic: Rust, Go, C/C++, Python, Swift, Kotlin, and dozens more can compile to WASM
- Near-native performance: Typically 70-95% of native speed, far faster than interpreted languages
- Sandboxed by default: WASM modules cannot access the filesystem, network, or memory of the host unless explicitly granted
- Tiny: A WASM binary is typically 100KB-5MB, compared to 50-500MB for a Docker container
WASI (WebAssembly System Interface) is the standard that gives WASM modules access to system resources — files, network, environment variables. WASI is to WASM what POSIX is to C: a standard interface for interacting with the operating system.
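To make WASI concrete, here is a minimal sketch of ours using only the Rust standard library. Compiled with the wasm32-wasip1 target (wasm32-wasi on older toolchains), its environment and stdout access go through WASI, and the host decides what to grant. The `GREET_NAME` variable is our invention for illustration; the same source also compiles and runs natively, which makes it easy to test.

```rust
use std::env;

// Pure logic kept separate from I/O so it is testable on any target.
fn greeting(name: Option<String>) -> String {
    match name {
        Some(n) if !n.is_empty() => format!("Hello, {}!", n),
        _ => "Hello, WASI!".to_string(),
    }
}

fn main() {
    // Under WASI, environment variables must be explicitly granted by
    // the host, e.g. `wasmtime run --env GREET_NAME=World app.wasm`.
    let name = env::var("GREET_NAME").ok();
    println!("{}", greeting(name));
}
```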
The Component Model is the newest addition. It lets WASM modules define typed interfaces and compose together, like microservices but at the function call level. A Rust module can call a Go module can call a Python module, all running in the same process with no serialization overhead.
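The in-process, typed composition the Component Model enables can be sketched, loosely, with plain Rust traits. This is our analogy, not Component Model tooling: in the real thing, each implementation could be a separate .wasm component written in a different language, linked through a typed (WIT) interface.

```rust
// Illustrative analogy only: typed interfaces, direct calls,
// no serialization between the "components".
trait Tokenizer {
    fn tokenize(&self, text: &str) -> Vec<String>;
}

trait Counter {
    fn count(&self, tokens: &[String]) -> usize;
}

struct WhitespaceTokenizer;
impl Tokenizer for WhitespaceTokenizer {
    fn tokenize(&self, text: &str) -> Vec<String> {
        text.split_whitespace().map(str::to_string).collect()
    }
}

struct SimpleCounter;
impl Counter for SimpleCounter {
    fn count(&self, tokens: &[String]) -> usize {
        tokens.len()
    }
}

// Composition is a direct, typed function call: no JSON, no sockets.
fn word_count(t: &dyn Tokenizer, c: &dyn Counter, text: &str) -> usize {
    c.count(&t.tokenize(text))
}
```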
Traditional Container:
OS → Container Runtime → Linux Kernel → Your App (50-500MB)
Cold start: 100ms - 5 seconds
WASM Module:
OS → WASM Runtime → Your App (100KB - 5MB)
Cold start: <1ms - 10ms

That cold start difference is not a typo. It is the reason WASM is eating into use cases that containers used to own.
Use Case 1: Edge Computing
This is the most mature WASM use case outside the browser, and it is already in production at massive scale.
How It Works
Cloudflare Workers, Fastly Compute, and similar platforms run your code at the network edge — in hundreds of data centers worldwide, close to your users. But they cannot spin up Docker containers at each location. Containers are too large, too slow to start, and require too much memory.
WASM solves all three problems. A WASM module is small enough to distribute to 300+ edge locations, starts in under a millisecond, and runs in a secure sandbox with minimal memory overhead.
Cloudflare Workers runs WASM under the hood through the V8 engine (the same engine that powers Chrome and Node.js). You can write Workers in JavaScript/TypeScript (which V8 compiles to machine code) or compile Rust/Go/C to WASM and run that directly.
// A Cloudflare Worker written in Rust, compiled to WASM
use worker::*;
#[event(fetch)]
async fn main(req: Request, env: Env, _ctx: Context) -> Result<Response> {
let url = req.url()?;
// Route based on path
match url.path() {
"/api/transform" => {
let body = req.bytes().await?;
let transformed = transform_image(&body)?; // CPU-intensive, runs at native speed
Response::from_bytes(transformed)
}
_ => Response::error("Not Found", 404),
}
}

Real-World Scale
Shopify uses WASM for storefront customization. Merchants write custom logic (pricing rules, shipping calculations, product recommendations) that runs at the edge, close to the shopper. Processing happens in under 5ms instead of round-tripping to a central server.
Fastly powers 10% of internet traffic through its edge platform. Their Compute platform runs customer WASM modules at the edge for content personalization, A/B testing, and security rules.
Performance numbers from real deployments:
| Metric | Docker Container (Lambda) | WASM at Edge |
|---|---|---|
| Cold start | 100-3000ms | 0.5-5ms |
| p99 latency (same region) | 50-200ms | 5-20ms |
| p99 latency (cross-region) | 150-500ms | 5-30ms |
| Memory per instance | 128MB-10GB | 2-128MB |
| Binary size | 50-500MB | 100KB-10MB |
The edge computing use case alone makes WASM worth paying attention to. If your application serves users globally and latency matters, the table above shows why: edge WASM cuts p99 latency by roughly an order of magnitude and cold starts by far more than that.
Use Case 2: Plugin Systems
This is the use case that surprised us most. WASM turns out to be an excellent sandbox for running untrusted third-party code.
The Problem with Plugins
Every platform that supports plugins faces the same dilemma: you want to let third-party developers extend your product, but you cannot let them crash your process, access your data, or mine cryptocurrency on your servers.
Traditional approaches have serious tradeoffs:
| Approach | Pros | Cons |
|---|---|---|
| Separate process | Full isolation | Slow IPC, high memory overhead |
| Docker container | Good isolation | Very slow startup, huge overhead |
| JavaScript eval | Fast | Security nightmare |
| Scripting language (Lua) | Fast, safe | Limited ecosystem, unfamiliar to devs |
| WASM sandbox | Fast, safe, polyglot | Newer tooling |
WASM gives you the speed of eval with the security of containers. A WASM module runs in a sandbox that has no access to the host filesystem, network, or memory. You explicitly grant capabilities through WASI, and the module can only do what you allow.
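A language-level sketch of that capability idea (ours, not a real WASI or runtime API): the host constructs the capability set, and the plugin can only act through what that set exposes. Anything not granted is simply not callable.

```rust
// Illustrative sketch of capability-based sandboxing, not a WASI API.
struct Capabilities {
    allow_log: bool,
    allowed_hosts: Vec<String>,
}

impl Capabilities {
    fn log(&self, msg: &str) -> Result<(), String> {
        if self.allow_log {
            println!("[plugin] {}", msg);
            Ok(())
        } else {
            Err("log capability not granted".to_string())
        }
    }

    fn fetch(&self, host: &str) -> Result<(), String> {
        if self.allowed_hosts.iter().any(|h| h == host) {
            Ok(()) // a real host would perform the request here
        } else {
            Err(format!("network access to {} not granted", host))
        }
    }
}

// The "plugin" receives capabilities from the host; it cannot
// reach the filesystem or network any other way.
fn run_plugin(caps: &Capabilities) -> Result<(), String> {
    caps.log("starting")?;
    caps.fetch("api.example.com")?; // fails unless explicitly allowed
    Ok(())
}
```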
Figma: The Gold Standard
Figma's plugin system is the best-known example. Figma plugins run in a WASM sandbox inside the application. This means:
- Plugins cannot access your filesystem or other tabs
- Plugins cannot make unauthorized network requests
- A buggy plugin cannot crash Figma — the sandbox catches it
- Plugins start instantly (no Docker spin-up)
Figma's plugin API exposes a controlled set of functions for manipulating the design canvas. The WASM sandbox ensures plugins can only call those functions — nothing else.
Envoy Proxy: WASM Filters
Envoy, the proxy server used by Istio and most service mesh implementations, supports WASM-based filters for custom request processing. Instead of recompiling Envoy with C++ extensions (the old way), you write a filter in Rust or Go, compile to WASM, and load it dynamically.
// Envoy WASM filter in Rust
use proxy_wasm::traits::*;
use proxy_wasm::types::*;
struct RateLimiter;
impl HttpContext for RateLimiter {
fn on_http_request_headers(&mut self, _: usize, _: bool) -> Action {
let client_ip = self.get_http_request_header("x-forwarded-for")
.unwrap_or_default();
if self.is_rate_limited(&client_ip) {
self.send_http_response(429, vec![], Some(b"Rate limited"));
return Action::Pause;
}
Action::Continue
}
}

This filter runs inside Envoy at near-native speed, with no risk of crashing the proxy. You deploy it without restarting Envoy. That is a game-changer for infrastructure teams.
Database UDFs (User-Defined Functions)
SingleStore, Supabase, and several other databases now support WASM-based user-defined functions. Instead of writing stored procedures in SQL or PL/pgSQL, you write functions in Rust, compile to WASM, and run them inside the database engine.
The advantage: complex data transformations (JSON parsing, text analysis, geospatial calculations) run at near-native speed, right next to the data, without round-tripping to an application server.
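The shape of a good WASM UDF is a pure function: value in, value out, no I/O. Here is a plain-Rust sketch of ours, with no specific database API assumed, of the kind of per-row text transformation an engine could run at near-native speed next to the data:

```rust
// A pure function suitable as a WASM UDF: it touches no I/O, so it
// can run safely inside the database engine's sandbox.
fn slugify(title: &str) -> String {
    let mut slug = String::with_capacity(title.len());
    let mut last_dash = true; // suppress leading dashes
    for ch in title.chars() {
        if ch.is_ascii_alphanumeric() {
            slug.push(ch.to_ascii_lowercase());
            last_dash = false;
        } else if !last_dash {
            slug.push('-');
            last_dash = true;
        }
    }
    if slug.ends_with('-') {
        slug.pop(); // trim a trailing dash
    }
    slug
}
```

Registered as a UDF, this would let queries normalize titles in SQL without round-tripping rows to an application server.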
Use Case 3: Serverless Functions
This use case directly competes with AWS Lambda, Google Cloud Functions, and Azure Functions. The pitch is simple: WASM functions start faster and cost less.
Spin by Fermyon
Spin is a serverless platform built on WASM. You write functions in Rust, Go, JavaScript, Python, or other supported languages, and Spin runs them as WASM modules.
// A Spin HTTP function in Rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;
#[http_component]
fn handle_request(req: Request) -> anyhow::Result<impl IntoResponse> {
let body = serde_json::json!({
"message": "Hello from WASM!",
"method": req.method().to_string(),
"path": req.uri().path(),
});
Ok(Response::builder()
.status(200)
.header("content-type", "application/json")
.body(serde_json::to_string(&body)?)
.build())
}

Cold Start Comparison
This is where WASM serverless genuinely shines. Cold start times:
| Platform | Language | Cold Start (p50) | Cold Start (p99) |
|---|---|---|---|
| AWS Lambda | Node.js | 200ms | 800ms |
| AWS Lambda | Python | 250ms | 1000ms |
| AWS Lambda | Java | 3000ms | 8000ms |
| Cloudflare Workers | JS/WASM | 0ms | 5ms |
| Spin (Fermyon) | Rust/WASM | 0.5ms | 3ms |
| wasmCloud | Rust/WASM | 1ms | 5ms |
The difference is not incremental — it is categorical. A Lambda function takes 200-8000ms to cold start. A WASM function takes 0.5-5ms. Depending on the pairing, that is anywhere from roughly 40x to several thousand times faster.
For user-facing APIs where cold starts directly impact user experience, this matters enormously. A payment processing endpoint that cold starts in 3 seconds versus 1 millisecond is the difference between a completed purchase and an abandoned cart.
wasmCloud: Distributed WASM
wasmCloud takes WASM serverless further by enabling distributed WASM applications. Components run across multiple nodes, communicate through typed interfaces, and can move between machines transparently. Think Kubernetes for WASM, but without the complexity.
# wasmCloud application manifest
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: order-processor
spec:
components:
- name: order-api
type: component
properties:
image: ghcr.io/myorg/order-api:0.1.0
traits:
- type: spreadscaler
properties:
instances: 5
- name: payment-processor
type: component
properties:
image: ghcr.io/myorg/payment:0.1.0
traits:
- type: spreadscaler
properties:
instances: 3

Use Case 4: Embedded Runtimes
WASM is becoming the universal extension mechanism for existing software. Instead of building a plugin API in your language of choice, embed a WASM runtime and let users extend your software in any language.
SQLite Extensions
The SQLite ecosystem is gaining WASM extension support; libsql, the SQLite fork mentioned later in this post, runs WASM user-defined functions inside the engine. The pattern: write custom functions in Rust, compile them to WASM, and load them into the database:
// Custom SQLite scalar function in Rust, compiled to WASM (the API shown is illustrative)
use sqlite_wasm::*;
#[sqlite_entrypoint]
fn register(db: &Connection) -> Result<()> {
db.create_scalar_function("distance", 4, |ctx| {
let lat1: f64 = ctx.get(0)?;
let lon1: f64 = ctx.get(1)?;
let lat2: f64 = ctx.get(2)?;
let lon2: f64 = ctx.get(3)?;
// Haversine formula
let r = 6371.0; // Earth radius in km
let dlat = (lat2 - lat1).to_radians();
let dlon = (lon2 - lon1).to_radians();
let a = (dlat / 2.0).sin().powi(2)
+ lat1.to_radians().cos() * lat2.to_radians().cos()
* (dlon / 2.0).sin().powi(2);
let c = 2.0 * a.sqrt().asin();
Ok(r * c)
})?;
Ok(())
}

Now you can use it in SQL:

SELECT name, distance(lat, lon, 28.6139, 77.2090) AS km FROM stores ORDER BY km LIMIT 10;
Game Mod Engines
Several game engines use WASM as a modding runtime. Mods written in any language that compiles to WASM can extend the game without risking crashes or security issues. The WASM sandbox prevents mods from accessing the filesystem, making unauthorized network calls, or corrupting game state.
Supabase Edge Functions
Supabase Edge Functions run on Deno, which uses V8 under the hood. V8 can execute WASM modules alongside JavaScript. This means you can write performance-critical parts of your Edge Functions in Rust, compile to WASM, and call them from TypeScript:
// Supabase Edge Function calling a WASM module
import { serve } from "https://deno.land/std/http/server.ts"
import { instantiate } from "./image_processor.wasm.js" // Generated bindings
serve(async (req) => {
const imageBytes = new Uint8Array(await req.arrayBuffer())
// CPU-intensive image processing runs as WASM (near-native speed)
const wasm = await instantiate()
const processed = wasm.resize_image(imageBytes, 800, 600)
return new Response(processed, {
headers: { "content-type": "image/webp" },
})
})

The JavaScript handles HTTP and orchestration. The Rust-compiled WASM handles compute-intensive work. Each plays to its strength.
Use Case 5: Cross-Platform Libraries
This is the use case with the most long-term potential. Write a library once, compile it to WASM, and run it everywhere — browsers, Node.js, Python, Ruby, Go, edge functions, mobile apps.
The "Write Once, Run Anywhere" Promise (For Real This Time)
Java promised this. It did not fully deliver. WASM is actually delivering it, because WASM is a compile target, not a language. You do not need to write Java. You write in whatever language you want, and it runs wherever WASM runs.
Real example: a validation library
// lib.rs — Write once in Rust
use wasm_bindgen::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize)]
pub struct ValidationResult {
pub valid: bool,
pub errors: Vec<String>,
}
#[wasm_bindgen]
pub fn validate_email(email: &str) -> JsValue {
let result = if email.contains('@') && email.contains('.') {
ValidationResult { valid: true, errors: vec![] }
} else {
ValidationResult {
valid: false,
errors: vec!["Invalid email format".into()],
}
};
serde_wasm_bindgen::to_value(&result).unwrap()
}

This compiles to WASM and runs in:
- Browsers: Via wasm-bindgen and JavaScript imports
- Node.js: Via @aspect/wasm-node or native WASM support
- Python: Via wasmtime-py bindings
- Ruby: Via wasmtime-rb bindings
- Edge functions: Directly in Cloudflare Workers or Deno
- Mobile: Via native WASM runtimes on iOS and Android
One codebase. One set of tests. One source of truth. Six or more runtime targets.
Libraries Already Doing This
Several popular libraries ship WASM builds for cross-platform support:
- swc (JavaScript/TypeScript compiler): Written in Rust, compiled to WASM for browser-based tools
- Prisma (database ORM): Query engine compiled to WASM for edge deployment
- Resvg (SVG renderer): Rust library with WASM builds for browser and server
- tree-sitter (parser): C library compiled to WASM for browser-based code editors
- libsql (SQLite fork): Compiled to WASM for browser-embedded databases
Language Support for WASM
Not all languages compile to WASM equally. Here is an honest assessment:
| Language | WASM Support | Binary Size | Performance | Ecosystem Maturity | Best For |
|---|---|---|---|---|---|
| Rust | Excellent | Small (100KB-2MB) | Near-native | Mature | Everything WASM |
| C/C++ | Excellent | Small | Near-native | Mature | Legacy code, game engines |
| Go | Good | Large (2-10MB) | Good | Improving | Backend services |
| AssemblyScript | Good | Very small | Good | Growing | TypeScript developers |
| Swift | Experimental | Medium | Good | Early | iOS developers |
| Kotlin | Experimental | Large | Decent | Early | Android developers |
| Python | Via Pyodide | Very large (15MB+) | Slow | Niche | Data science in browser |
| C#/.NET | Good (Blazor) | Large (5-15MB) | Good | Mature | .NET shops |
Our recommendation: Rust is the best language for WASM. Its ownership model eliminates garbage collection overhead (WASM does not have a built-in GC), its binaries are small, and its tooling (wasm-pack, wasm-bindgen) is the most mature.
If your team does not know Rust, AssemblyScript is a good starting point. It looks like TypeScript, compiles to WASM, and produces small binaries. The learning curve is minimal for JavaScript developers.
Getting Started: Your First WASM Module in Rust
Here is a complete walkthrough to go from zero to a working WASM module:
1. Install the Tools
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Add WASM target
rustup target add wasm32-unknown-unknown
# Install wasm-pack (builds and packages WASM modules)
cargo install wasm-pack

2. Create the Project
cargo new --lib hello-wasm
cd hello-wasm

3. Configure Cargo.toml
[package]
name = "hello-wasm"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
wasm-bindgen = "0.2"

4. Write the Code
// src/lib.rs
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub fn fibonacci(n: u32) -> u64 {
if n <= 1 {
return n as u64;
}
let mut a: u64 = 0;
let mut b: u64 = 1;
for _ in 2..=n {
let temp = a + b;
a = b;
b = temp;
}
b
}
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
format!("Hello, {}! This greeting was generated by Rust compiled to WebAssembly.", name)
}

5. Build and Use
# Build for web
wasm-pack build --target web
# The output is in pkg/
# - hello_wasm_bg.wasm (the binary)
# - hello_wasm.js (JavaScript glue code)
# - hello_wasm.d.ts (TypeScript types)

<!-- Use in a web page -->
<script type="module">
import init, { fibonacci, greet } from './pkg/hello_wasm.js'
async function run() {
await init()
console.log(fibonacci(50)) // 12586269025
console.log(greet('World')) // Hello, World! This greeting was...
}
run()
</script>

That is it. You have a Rust function running in the browser as WASM. The fibonacci(50) call executes at near-native speed — roughly an order of magnitude faster than an equivalent JavaScript implementation, per the benchmarks below.
Performance Benchmarks: WASM vs Everything
Here are real-world benchmarks from our testing and published benchmarks from the WASM community:
WASM vs Native (Rust compiled to WASM vs native Rust)
| Benchmark | Native Rust | WASM (wasmtime) | Overhead |
|---|---|---|---|
| Fibonacci(40) | 0.8ms | 1.1ms | 37% |
| JSON parse (1MB) | 2.3ms | 3.1ms | 35% |
| Image resize (4K) | 45ms | 58ms | 29% |
| SHA-256 hash (10MB) | 18ms | 22ms | 22% |
| Regex matching (100K strings) | 12ms | 16ms | 33% |
In tests like these, WASM runs at roughly 70-85% of native speed. For most use cases, that overhead is invisible.
WASM vs JavaScript (in V8)
| Benchmark | JavaScript | WASM | Speedup |
|---|---|---|---|
| Fibonacci(40) | 12ms | 1.1ms | 10.9x |
| JSON parse (1MB) | 8ms | 3.1ms | 2.6x |
| Image resize (4K) | 320ms | 58ms | 5.5x |
| SHA-256 hash (10MB) | 95ms | 22ms | 4.3x |
| Regex matching (100K strings) | 45ms | 16ms | 2.8x |
For CPU-intensive tasks, WASM is 3-10x faster than JavaScript. For I/O-bound tasks, the difference is smaller because the bottleneck is not computation.
WASM vs Docker Containers (Cold Start + Execution)
| Metric | Docker (Alpine) | WASM (Spin) | Difference |
|---|---|---|---|
| Image size | 50MB | 2MB | 25x smaller |
| Cold start | 500ms | 1ms | 500x faster |
| Memory (idle) | 30MB | 2MB | 15x less |
| Request/sec (simple API) | 8,000 | 45,000 | 5.6x more |
| Startup to first request | 800ms | 3ms | 266x faster |
The cold start difference is the killer feature. For serverless workloads where functions scale to zero and cold start on demand, WASM is in a different league.
What Is Still Hard
WASM is not perfect. Here is what still trips people up:
Debugging
Debugging WASM is painful compared to debugging native code or JavaScript. Browser DevTools support WASM debugging with source maps, but the experience is clunky. Stepping through Rust code compiled to WASM in Chrome DevTools works, but it is slow and sometimes confusing.
Server-side WASM debugging is even worse. println! debugging is often your best tool. That is not great.
Tooling Maturity
The WASM toolchain is improving rapidly, but it is not at the level of Docker or npm. Building WASM modules requires understanding compiler targets, linker flags, and runtime-specific quirks. wasm-pack hides most of this for Rust, but other languages have rougher edges.
Limited Standard Library
WASM does not have built-in networking, filesystem access, or threading (though WASI is fixing this). If your code needs to make HTTP requests or read files, you depend on the host runtime to provide those capabilities. This is by design (security), but it means you cannot just compile any existing program to WASM and expect it to work.
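In practice this pushes WASM modules toward a pure in/out design: the host performs the I/O and hands the module bytes to transform. A small sketch of ours showing the module side of that split:

```rust
// The module side: pure transformation over bytes, no I/O.
// The host side (not shown) would read the file or fetch the URL
// and pass the bytes in; that division is what keeps the sandbox intact.
fn checksum(data: &[u8]) -> u32 {
    // A toy rolling checksum standing in for real processing
    // (hashing, parsing, image work) that needs no system access.
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

fn summarize(data: &[u8]) -> String {
    format!("{} bytes, checksum {:08x}", data.len(), checksum(data))
}
```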
Garbage Collection
Languages with garbage collectors (Go, Python, Java) produce larger WASM binaries because they have to include their GC runtime. Rust and C do not have this problem, which is one reason they produce much smaller WASM modules.
The WASM GC proposal (now shipping in Chrome and Firefox) adds built-in garbage collection to the WASM spec, which will help languages like Kotlin and Dart produce smaller modules. But adoption is still early.
Ecosystem Fragmentation
The WASM ecosystem is fragmented across multiple standards and runtimes:
- wasmtime (Bytecode Alliance): The reference implementation
- wasmer: Focus on package management and ease of use
- wazero: Go-native runtime (no CGo dependency)
- V8: Chrome and Node.js engine (runs WASM alongside JavaScript)
- SpiderMonkey: Firefox engine
Code compiled for one runtime usually works on others, but there are edge cases. WASI standardization is helping, but we are not at "compile once, run anywhere" yet — more like "compile once, run on most things with minor adjustments."
Why CODERCOPS Is Watching This Closely
We are not building everything in WASM yet. For most web applications, JavaScript and TypeScript remain the pragmatic choice. But we are evaluating WASM for specific use cases in our stack:
Edge Function workloads: When we need CPU-intensive processing at the edge — image transformations, content parsing, validation logic — WASM lets us write it in Rust and run it at near-native speed on Cloudflare Workers.
Plugin systems for client products: Several clients have asked us to build extensibility into their platforms. WASM gives us a secure sandbox for third-party code without the overhead of Docker containers.
Performance-critical libraries: When a JavaScript implementation is too slow (complex parsing, data transformation, cryptographic operations), we write it in Rust and compile to WASM. The library works in both browser and server contexts.
Our prediction: within three years, WASM will be a standard part of the web development toolchain, the way TypeScript became standard despite initial skepticism. You do not need to learn Rust today (though it helps). But you should understand what WASM enables, because your tools and platforms are increasingly built on it.
Want to Explore WASM for Your Product?
At CODERCOPS, we help teams evaluate whether WASM is the right fit for their performance-critical workloads. Whether you need edge computing, a plugin system, or cross-platform libraries, we can help you assess the tradeoffs and build a proof of concept.
What we bring to the table:
- Rust and WASM development experience
- Edge computing architecture design
- Performance benchmarking and optimization
- Integration with existing JavaScript/TypeScript stacks
If you are curious whether WASM could solve a performance problem in your stack, reach out to us. We will give you an honest assessment — including when WASM is not the right answer.
For more deep dives into emerging technology, check out our engineering blog.