Why We Chose WebAssembly Over Cloud APIs
Late one evening, our team was staring at a spreadsheet of cloud bills and latency histograms. Every spike in traffic meant another conversation about autoscaling, cold starts, and whether we really needed to ship PDFs across the ocean just to merge two pages. That night we asked a simple question: what if the browser could do the heavy lifting instead?
The moment that changed our architecture
Ai2Done was born from a belief that productivity tools should not require you to trust a third-party server with your contracts, invoices, or family photos. Cloud APIs are powerful, but they introduce a permanent dependency: your bytes leave your machine, traverse networks you do not control, and land on disks you will never audit. For many workflows that is acceptable; for our users it was a deal-breaker.
WebAssembly gave us a credible path to run real code—compiled from Go, the same language we use on the server—inside the user’s tab. Instead of sketching a thin client that uploads files to a black box, we could ship a portable binary that executes locally, with predictable performance and no surprise egress fees.
Why not “just” use JavaScript?
We love JavaScript for UI and orchestration, but moving megabytes of PDF logic or cryptography into hand-written JS would have been slower to ship and harder to keep in sync with our Go tooling. WASM lets us reuse battle-tested libraries (think pdfcpu-style pipelines) and share types and tests across the stack. The browser becomes a host, not a rewrite target.
A minimal bridge in JS loads the .wasm module, passes file handles and progress callbacks, and renders results. The rest stays in Go:
// Conceptual shape: WASM exports a thin API; the real work stays in internal/tools.
func ProcessPDF(input []byte) ([]byte, error) {
	// merge, split, encrypt — all deterministic, all local
	merged, err := mergePages(input) // placeholder for the real internal/tools pipeline
	if err != nil {
		return nil, err
	}
	return merged, nil
}
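On the Go side, the wiring to that JS bridge might look roughly like this. This is a sketch, not our actual export surface: it assumes the standard library's syscall/js package, a build with GOOS=js GOARCH=wasm, and a hypothetical processPDF name on the JS global; error handling is simplified to a string for illustration.

```go
//go:build js && wasm

package main

import "syscall/js"

func main() {
	// Expose a single entry point to JavaScript. The bridge calls
	// window.processPDF(uint8Array) and gets a Uint8Array back.
	js.Global().Set("processPDF", js.FuncOf(func(this js.Value, args []js.Value) any {
		// Copy the input bytes from the JS Uint8Array into Go memory.
		input := make([]byte, args[0].Get("length").Int())
		js.CopyBytesToGo(input, args[0])

		output, err := ProcessPDF(input)
		if err != nil {
			return js.ValueOf(err.Error()) // simplified: real code would return a structured result
		}

		// Copy the result back out as a fresh Uint8Array.
		dst := js.Global().Get("Uint8Array").New(len(output))
		js.CopyBytesToJS(dst, output)
		return dst
	}))

	// Block forever so the Go runtime stays alive for future calls from JS.
	select {}
}
```

The key property is that bytes cross the boundary by copy, never by upload: both CopyBytesToGo and CopyBytesToJS move data between the JS heap and the WASM linear memory inside the same tab.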
What we gained
Privacy by construction. When processing happens in WASM, we do not need a privacy policy that says “we might process your files”—we can say “we cannot,” because the bytes never leave the device. That is not marketing; it is physics.
Latency where it matters. For batch operations, the network round-trip often dominates. Local WASM removes that variable entirely. Progress bars reflect real CPU work, not queue depth in someone else’s region.
Cost predictability. Our infrastructure bill scales with traffic to the site, not with gigabytes of user documents. That alignment keeps us focused on product quality instead of metering every conversion.
Trade-offs we accept honestly
WASM is not free. Initial download size matters, so we compress with Brotli, lazy-load per tool, and set sensible file limits so users get friendly messages instead of tab crashes. Memory ceilings in the browser are real; we document them and surface progress so users know the app is working, not frozen.
We still use the server for things that belong there: authentication, search indexing, and static asset delivery. The split is intentional: server for identity and discovery, client for transformation.
Looking ahead
Choosing WASM over cloud APIs was not a bet against the cloud—it was a bet on user sovereignty. Every time we ship a new tool, we ask whether it can run locally first. That discipline keeps Ai2Done fast, honest, and aligned with the people who rely on it every day. If you are building in this space, we hope our path gives you permission to question the default upload-to-process pipeline—and to let the browser do more than it gets credit for.