Building WebAssembly Modules in Go for Real Performance Gains

How to compile Go to WebAssembly, shrink those bloated binaries, and actually call Go functions from the browser without losing your mind.


Go can compile directly to WebAssembly. That means you can run Go code in a browser, on edge runtimes, or in any Wasm-compatible host. The toolchain support has been there since Go 1.11, and with the newer GOOS=wasip1 GOARCH=wasm target in Go 1.21, it’s become much more practical for production use.

But “it compiles to Wasm” doesn’t mean it’s fast by default. Go’s runtime, garbage collector, and standard library all get bundled into the .wasm binary. A hello-world module can be 2MB+. If you care about web performance, you need to know what’s happening under the hood and how to control it.

This post covers how Go compiles to Wasm, where the bottlenecks are, and what you can do about them.

How Go compiles to WebAssembly

Go supports two Wasm targets:

  • GOOS=js GOARCH=wasm — targets the browser via a JavaScript glue layer
  • GOOS=wasip1 GOARCH=wasm — targets the WASI standard (Go 1.21+)

The browser target is what you want for web apps. The WASI target works for server-side runtimes like Wasmtime or Wazero.

Here’s the simplest browser-targeted build:

GOOS=js GOARCH=wasm go build -o main.wasm main.go

And the Go code:

package main

import (
	"syscall/js"
)

func add(this js.Value, args []js.Value) interface{} {
	a := args[0].Int()
	b := args[1].Int()
	return a + b
}

func main() {
	c := make(chan struct{})
	js.Global().Set("goAdd", js.FuncOf(add))
	// Block forever so the Go runtime stays alive
	<-c
}

A few things to note. The syscall/js package is Go’s bridge to JavaScript. js.FuncOf wraps a Go function so JavaScript can call it. The blocking channel at the end keeps the Go runtime alive — without it, the Wasm module exits immediately and your exported functions vanish.

You load this in the browser with the wasm_exec.js file that ships with your Go installation (at $(go env GOROOT)/misc/wasm/wasm_exec.js, or $(go env GOROOT)/lib/wasm/wasm_exec.js from Go 1.24 onward).

The binary size problem

That simple add function compiles to roughly 2.5MB of Wasm. Why? Because Go bundles its entire runtime: goroutine scheduler, garbage collector, and chunks of the standard library.

For a web context, that’s painful. Users download that binary before anything runs.

Here’s what you can do about it:

Use TinyGo. TinyGo is an alternative Go compiler built for constrained environments. It produces much smaller Wasm binaries because it uses a simpler runtime and a different garbage collector.

tinygo build -o main.wasm -target wasm main.go

The same add function compiles to around 30KB with TinyGo. That’s roughly 80x smaller.

The tradeoff: TinyGo doesn’t support the full Go standard library. Reflection is limited. Some packages like net/http won’t work. If you’re writing computation-heavy modules that don’t need the full stdlib, TinyGo is the obvious pick.

Strip debug info with standard Go. If you stick with the standard compiler:

GOOS=js GOARCH=wasm go build -ldflags="-s -w" -o main.wasm main.go

The -s flag strips the symbol table, -w strips DWARF debug info. This saves a few hundred KB. Not transformative, but free.

Compress the binary. Serve the .wasm file with Brotli or gzip compression. Wasm binaries compress well — you’ll often see 60-70% reduction.

Passing complex data between Go and JavaScript

The syscall/js API is low-level. There’s no automatic serialization of Go structs to JavaScript objects. You handle the conversion yourself.

Here’s a more realistic example — a function that processes a slice of data and returns results:

package main

import (
	"encoding/json"
	"syscall/js"
)

type Record struct {
	Name  string  `json:"name"`
	Score float64 `json:"score"`
}

type Result struct {
	Count   int     `json:"count"`
	Average float64 `json:"average"`
}

func processRecords(this js.Value, args []js.Value) interface{} {
	raw := args[0].String()

	var records []Record
	if err := json.Unmarshal([]byte(raw), &records); err != nil {
		// Return errors as JSON too, so the JS side always receives a string.
		out, _ := json.Marshal(map[string]string{"error": err.Error()})
		return js.ValueOf(string(out))
	}

	if len(records) == 0 {
		out, _ := json.Marshal(Result{})
		return js.ValueOf(string(out))
	}

	var total float64
	for _, r := range records {
		total += r.Score
	}

	result := Result{
		Count:   len(records),
		Average: total / float64(len(records)),
	}

	out, _ := json.Marshal(result)
	return js.ValueOf(string(out))
}

func main() {
	c := make(chan struct{})
	js.Global().Set("processRecords", js.FuncOf(processRecords))
	<-c
}

The pattern is: receive JSON strings from JavaScript, unmarshal into Go types, process, marshal back. It’s clunky, but it works and keeps the boundary explicit.

The performance cost is in the serialization. For hot paths, you want to pass typed arrays instead. js.CopyBytesToGo and js.CopyBytesToJS let you move byte slices without JSON overhead:

func processBytes(this js.Value, args []js.Value) interface{} {
	jsArray := args[0]
	buf := make([]byte, jsArray.Get("byteLength").Int())
	js.CopyBytesToGo(buf, jsArray)

	// Process buf directly
	for i := range buf {
		buf[i] = buf[i] * 2
	}

	result := js.Global().Get("Uint8Array").New(len(buf))
	js.CopyBytesToJS(result, buf)
	return result
}

This is faster for large payloads. No JSON parsing, no allocation churn.

WASI: Go Wasm outside the browser

The wasip1 target is interesting for different reasons. It lets you run Go code in sandboxed Wasm runtimes on the server. This matters for plugin systems, edge computing, and multi-tenant execution environments where you need isolation without containers.
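Concretely, a wasip1 module is just an ordinary Go program: WASI wires up stdin, stdout, args, and the clock through the host. A minimal "plugin" might uppercase whatever the host pipes in (this example is mine, not from any runtime's docs), built with GOOS=wasip1 GOARCH=wasm go build -o plugin.wasm main.go:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// upperLines uppercases each input line. The pure logic is kept
// separate from I/O so it is easy to test on any platform.
func upperLines(lines []string) []string {
	out := make([]string, len(lines))
	for i, l := range lines {
		out[i] = strings.ToUpper(l)
	}
	return out
}

// Under WASI, os.Stdin and os.Stdout are provided by the host runtime
// (Wasmtime, wazero, ...), so plain Go I/O works unmodified.
func main() {
	var lines []string
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	for _, l := range upperLines(lines) {
		fmt.Println(l)
	}
}
```

The same binary runs under any WASI runtime, for example wasmtime run plugin.wasm < input.txt, and the program compiles and runs unchanged as a native binary too.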

Wazero is a Wasm runtime written entirely in Go — no CGo dependencies. That makes it straightforward to embed in Go applications.

Here’s a Go host that runs a Wasm module:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	wasmBytes, err := os.ReadFile("plugin.wasm")
	if err != nil {
		log.Fatal(err)
	}

	mod, err := r.Instantiate(ctx, wasmBytes)
	if err != nil {
		log.Fatal(err)
	}

	result, err := mod.ExportedFunction("compute").Call(ctx, 42)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Result: %d\n", result[0])
}

The context.Context integration is standard Go. You can pass deadlines and cancellation, which means you can enforce execution timeouts on untrusted Wasm modules. If you’re not familiar with how context works in Go, I wrote about it in What is Context in Go?.

Because wazero is pure Go, it compiles and runs anywhere Go runs. This matters when you’re building a web service that needs to execute user-provided logic safely — think a plugin system where users define custom data transformations as Wasm modules.

Performance: Go Wasm vs native Go

Wasm is not as fast as native code. The Go Wasm compiler generates code that runs inside a virtual machine. Expect roughly 2-5x slowdown compared to native Go for CPU-bound work.

Where Go-compiled Wasm makes sense:

  • Offloading computation to the browser, which beats a network round-trip to your server
  • Sharing logic between server and client with identical Go code
  • Sandboxed execution of untrusted code without container overhead
  • Plugin systems where you load and run Go modules dynamically

Where it doesn’t:

  • Pure DOM manipulation (JavaScript is faster, full stop)
  • Latency-sensitive hot paths that need native speed
  • Anything requiring the full Go standard library if you’re using TinyGo

Goroutines in Wasm

Goroutines work in Wasm, but they’re cooperative. The browser target doesn’t have real threads (unless you use SharedArrayBuffer and Web Workers). The Go scheduler runs goroutines on a single thread, yielding at specific points.

time.Sleep, channel operations, and other blocking calls yield correctly. But CPU-bound goroutines won’t be preempted as aggressively as they would in native Go.

func heavyWork(this js.Value, args []js.Value) interface{} {
	results := make(chan int, 10)

	go func() {
		sum := 0
		for i := 0; i < 1000000; i++ {
			sum += i
		}
		results <- sum
	}()

	return js.ValueOf(<-results)
}

This works. But both the calling goroutine and the worker goroutine share one OS thread. If you need real parallelism in the browser, you need to spin up multiple Wasm instances in Web Workers. Understanding how goroutines map to threads helps here.

Practical tips for Go Wasm performance

Keep the API surface small. Export a few functions. Each js.FuncOf wrapper has overhead, and each one you create should eventually be freed with its Release method.

Minimize GC pressure. Allocations in Wasm trigger Go’s garbage collector, which is slower in the Wasm context. Reuse buffers. Use sync.Pool where it makes sense — I covered some patterns for this previously.
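A minimal sketch of the buffer-reuse idea (function names are illustrative, not from any API): a pooled bytes.Buffer serves as scratch space so a hot exported function doesn’t allocate a fresh slice on every call:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool recycles scratch buffers across calls, so the garbage
// collector sees far fewer short-lived allocations in hot paths.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// doubleBytes is a stand-in for a hot exported function: it processes
// input through pooled scratch space instead of allocating per call.
func doubleBytes(in []byte) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	for _, b := range in {
		buf.WriteByte(b * 2)
	}
	// Copy out: the pooled buffer will be reused by the next caller.
	return append([]byte(nil), buf.Bytes()...)
}

func main() {
	fmt.Println(doubleBytes([]byte{1, 2, 3})) // [2 4 6]
}
```

The final copy is the price of safety: returning buf.Bytes() directly would hand the caller memory that the next Get may scribble over.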

Profile with the browser’s Wasm profiler. Chrome DevTools shows Wasm function calls in the Performance tab. Use it.

Batch operations. Instead of calling a Go Wasm function 1000 times from JavaScript, pass all 1000 items in one call. The boundary crossing is expensive.

Go hybrid. Use Go Wasm for compute-heavy tasks (parsing, validation, data transformation) and JavaScript for everything else. Don’t try to build your entire frontend in Go Wasm. I’ve seen people try. It’s not great.

When to use Go for Wasm

If you have a Go codebase with business logic that needs to run in a browser, compiling to Wasm is far simpler than rewriting in JavaScript. If you’re building a plugin system with Wazero, you get sandboxing with no CGo dependency. These are real, practical wins.

For green-field web-only projects where binary size is the top priority, Rust or AssemblyScript will produce smaller binaries. But if your team already writes Go and you need shared logic across server and client, Go’s Wasm support is solid and improving with each release.

The wasip1 target in particular opens up architectural patterns worth paying attention to: edge functions, database UDFs, sandboxed user code execution — all written in Go, all running safely inside a Wasm sandbox. I’d bet this side of Go Wasm gets more interesting than the browser side over the next couple of years.