Gecit: How a Go tool bypasses DPI censorship with eBPF and TUN interfaces
Governments and ISPs use deep packet inspection to block websites. A DPI box sits on the wire, reads your TLS ClientHello, sees the hostname in the SNI field, and kills the connection. Gecit is a Go tool that fights back by fragmenting those TCP packets before the DPI device can read them. On Linux it uses eBPF sock-ops programs. On macOS and Windows it creates a TUN interface and rewrites packets in userspace. The DPI box sees gibberish fragments. The destination server reassembles them normally. Connection goes through.
What grabbed me about Gecit is the range of low-level primitives it stitches together in Go: eBPF program loading, raw packet construction, TUN device management, all wired up with goroutines. Let’s dig in.
How DPI bypass works at the TCP level
DPI devices look for the SNI (Server Name Indication) field in TLS ClientHello messages. SNI contains the hostname you’re connecting to, and it’s sent in plaintext during the handshake. That’s the fingerprint DPI uses to decide whether to kill your connection.
The bypass is almost embarrassingly simple: split the TCP segment carrying the ClientHello into smaller pieces. No single fragment contains the full SNI. The DPI box can’t extract a hostname from half a hostname. Meanwhile, the destination server doesn’t care — TCP reassembly handles this automatically.
On Linux, Gecit pulls this off at the socket level through eBPF. On macOS and Windows, it intercepts packets via a TUN interface and chops them up before they hit the real network stack.
eBPF sock-ops on Linux: controlling TCP from Go
Gecit’s Linux implementation uses eBPF sock_ops programs. These hook into the kernel’s TCP stack and can tweak socket-level parameters, including the Maximum Segment Size (MSS). Lower the MSS, and the kernel fragments outgoing TCP data into smaller segments on its own. You don’t touch the packets yourself.
The project uses cilium/ebpf, which is the go-to Go library for eBPF. If you’ve loaded eBPF programs from Go before, the shape of this will be familiar. If you haven’t, this is a clean example to learn from.
Here’s a simplified version of loading and attaching an eBPF sock-ops program:
```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Load the compiled eBPF program from an ELF object file.
	spec, err := ebpf.LoadCollectionSpec("sockops.o")
	if err != nil {
		log.Fatalf("loading eBPF spec: %v", err)
	}

	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		log.Fatalf("creating eBPF collection: %v", err)
	}
	defer coll.Close()

	// Attach the sock_ops program to the root cgroup.
	prog, ok := coll.Programs["mss_modifier"]
	if !ok {
		log.Fatal("program mss_modifier not found in ELF object")
	}

	l, err := link.AttachCgroup(link.CgroupOptions{
		Path:    "/sys/fs/cgroup",
		Attach:  ebpf.AttachCGroupSockOps,
		Program: prog,
	})
	if err != nil {
		log.Fatalf("attaching to cgroup: %v", err)
	}
	defer l.Close()

	log.Println("eBPF sock_ops program attached, lowering MSS on outgoing connections")

	// Wait for an interrupt signal; the deferred Close calls detach everything.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	<-sig
	log.Println("detaching eBPF program")
}
```
The eBPF C program itself hooks into BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB, which fires when a TCP connection is established. At that point it slams a low MSS value onto the socket, forcing the kernel to fragment the ClientHello across multiple segments.
cilium/ebpf handles the ugly parts: loading bytecode into the kernel, setting up maps, managing lifetimes. Gecit uses go generate with bpf2go to compile the C source and produce Go bindings at build time:
```go
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go -target amd64 sockops sockops.c
```
This spits out a sockopsObjects struct with typed fields for every eBPF program and map. You get compile-time safety, which means you won’t accidentally reference a program that doesn’t exist in the ELF. That’s a nice guardrail when you’re working this close to the kernel.
TUN interface on macOS and Windows: userspace packet rewriting
macOS and Windows don’t have eBPF sock-ops. So Gecit takes the harder route: create a TUN (network tunnel) interface, route traffic through it, and rewrite packets in userspace before sending them on their way.
Go has solid TUN support through libraries like wireguard-go’s tun package. The flow:
- Create a TUN device
- Set up routing rules so traffic to blocked IPs goes through the TUN
- Read raw packets off the TUN device
- Find TCP segments carrying ClientHello data and fragment them
- Write the fragments out through a raw socket
Here’s what the packet parsing looks like:
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// parseTCPPayload extracts the TCP payload from a raw IPv4 packet.
func parseTCPPayload(pkt []byte) (srcPort, dstPort uint16, payload []byte, err error) {
	if len(pkt) < 20 {
		return 0, 0, nil, fmt.Errorf("packet too short for IPv4 header")
	}
	ihl := int(pkt[0]&0x0f) * 4
	totalLen := int(binary.BigEndian.Uint16(pkt[2:4]))
	if totalLen > len(pkt) {
		return 0, 0, nil, fmt.Errorf("total length exceeds packet size")
	}
	if pkt[9] != 6 { // Not TCP
		return 0, 0, nil, fmt.Errorf("not TCP")
	}
	if len(pkt) < ihl+20 {
		return 0, 0, nil, fmt.Errorf("packet too short for TCP header")
	}
	tcpHeader := pkt[ihl:]
	srcPort = binary.BigEndian.Uint16(tcpHeader[0:2])
	dstPort = binary.BigEndian.Uint16(tcpHeader[2:4])
	dataOffset := int(tcpHeader[12]>>4) * 4
	payloadStart := ihl + dataOffset
	if payloadStart > totalLen {
		return srcPort, dstPort, nil, nil
	}
	payload = pkt[payloadStart:totalLen]
	return srcPort, dstPort, payload, nil
}

// isTLSClientHello checks if a TCP payload starts with a TLS ClientHello.
func isTLSClientHello(payload []byte) bool {
	if len(payload) < 6 {
		return false
	}
	// ContentType: Handshake (0x16), then version, then HandshakeType: ClientHello (0x01)
	return payload[0] == 0x16 && payload[5] == 0x01
}
```
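For a sense of what the DPI box itself is doing, here's a hedged sketch of walking a ClientHello to find the server_name extension. Offsets follow the ClientHello layout shared by TLS 1.2 and 1.3 (RFC 8446); this is an illustration, not Gecit's code, and it assumes the payload holds one complete record:

```go
package main

import "encoding/binary"

// extractSNI walks a TLS ClientHello and returns the hostname from the
// server_name extension, or false if none is found.
func extractSNI(payload []byte) (string, bool) {
	if len(payload) < 9 || payload[0] != 0x16 {
		return "", false
	}
	rec := payload[5:] // skip record header: type(1) version(2) length(2)
	if rec[0] != 0x01 { // HandshakeType: ClientHello
		return "", false
	}
	hsLen := int(rec[1])<<16 | int(rec[2])<<8 | int(rec[3])
	if len(rec) < 4+hsLen {
		return "", false
	}
	b := rec[4 : 4+hsLen]
	if len(b) < 35 {
		return "", false
	}
	b = b[34:] // skip legacy_version(2) + random(32)
	sidLen := int(b[0])
	b = b[1:]
	if len(b) < sidLen+2 {
		return "", false
	}
	b = b[sidLen:] // skip session_id
	csLen := int(binary.BigEndian.Uint16(b))
	b = b[2:]
	if len(b) < csLen+1 {
		return "", false
	}
	b = b[csLen:] // skip cipher_suites
	cmLen := int(b[0])
	b = b[1:]
	if len(b) < cmLen+2 {
		return "", false
	}
	b = b[cmLen:] // skip compression_methods
	extLen := int(binary.BigEndian.Uint16(b))
	b = b[2:]
	if len(b) < extLen {
		return "", false
	}
	b = b[:extLen]
	for len(b) >= 4 {
		typ := binary.BigEndian.Uint16(b)
		l := int(binary.BigEndian.Uint16(b[2:]))
		if len(b) < 4+l {
			return "", false
		}
		if typ == 0 { // extension type 0 = server_name
			data := b[4 : 4+l]
			// list length(2) + name type(1) + name length(2) + name
			if len(data) < 5 {
				return "", false
			}
			nameLen := int(binary.BigEndian.Uint16(data[3:5]))
			if len(data) < 5+nameLen {
				return "", false
			}
			return string(data[5 : 5+nameLen]), true
		}
		b = b[4+l:]
	}
	return "", false
}
```

Once the ClientHello is split across segments, this walk fails at the first missing byte, which is the whole point of the bypass.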
The TUN approach is way more work than eBPF. You’re parsing raw IP and TCP headers, computing checksums, tracking sequence numbers. But it’s portable. The same Go code runs on macOS and Windows with only the TUN creation logic differing per platform.
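The checksum part, at least, is mechanical. Here's the standard Internet checksum (RFC 1071) over an IPv4 header, which has to be recomputed for every fragment you emit; a sketch, not Gecit's actual helper:

```go
package main

import "encoding/binary"

// ipv4Checksum computes the Internet checksum (RFC 1071) over an IPv4
// header: sum the 16-bit big-endian words, fold the carries back in,
// and take the one's complement. The checksum field (bytes 10-11) must
// be zeroed by the caller before computing.
func ipv4Checksum(hdr []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(hdr); i += 2 {
		sum += uint32(binary.BigEndian.Uint16(hdr[i : i+2]))
	}
	if len(hdr)%2 == 1 { // pad a trailing odd byte with zero
		sum += uint32(hdr[len(hdr)-1]) << 8
	}
	for sum > 0xffff {
		sum = (sum >> 16) + (sum & 0xffff)
	}
	return ^uint16(sum)
}
```

The same routine, fed a pseudo-header plus the TCP segment, also produces the TCP checksum each fragment needs.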
Go’s encoding/binary package is a great fit for wire-format parsing like this. And since Go compiles to a single static binary, deploying Gecit is just “download and run.” No runtime dependencies to wrestle with.
Platform abstraction with build tags
Gecit needs completely different implementations for each OS. Go’s build tags make this straightforward:
```go
// file: bypass_linux.go
//go:build linux

package bypass

func Start(config Config) error {
	// Load eBPF program, attach to cgroup.
	return startEBPF(config)
}
```

```go
// file: bypass_darwin.go
//go:build darwin

package bypass

func Start(config Config) error {
	// Create TUN device, start packet processing loop.
	return startTUN(config)
}
```

```go
// file: bypass_windows.go
//go:build windows

package bypass

func Start(config Config) error {
	return startTUN(config)
}
```
Each file compiles only on its target OS. Everything else in the codebase calls bypass.Start() and doesn’t think about the implementation. WireGuard-Go uses the same pattern, and it works well for this kind of thing.
If you want more on how Go handles platform-specific code and managing long-running operations that need cleanup (exactly what Gecit does with eBPF attach/detach), the post on using Go’s context package covers related ground.
Goroutines for packet processing
The TUN implementation needs to read packets continuously, decide what to do with them, and write them back out. Go’s concurrency model fits this naturally:
```go
func processPackets(tunDev tun.Device, rawConn net.PacketConn) error {
	buf := make([]byte, 1500)
	for {
		n, err := tunDev.Read(buf)
		if err != nil {
			return fmt.Errorf("reading from TUN: %w", err)
		}
		pkt := buf[:n]

		_, dstPort, payload, err := parseTCPPayload(pkt)
		if err != nil {
			// Not TCP or malformed — forward as-is.
			writeRaw(rawConn, pkt)
			continue
		}

		if dstPort == 443 && isTLSClientHello(payload) {
			fragments := fragmentTCP(pkt, 40) // Split into 40-byte payload chunks
			for _, frag := range fragments {
				writeRaw(rawConn, frag)
			}
			continue
		}

		writeRaw(rawConn, pkt)
	}
}
```
The hot path is a tight loop: read, check, write. No per-packet allocations beyond the initial buffer. For a DPI bypass tool, latency matters. Users shouldn’t notice the proxy is there. Go’s goroutine scheduler handles the blocking I/O reads without burning OS threads, and you can spin up separate goroutines for reading and writing if throughput becomes a concern.
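If reads and writes do need to be decoupled, a channel between a reader goroutine and the writer is the natural shape. A minimal sketch (names and structure are mine, not Gecit's):

```go
package main

// pipeline connects a packet source to a packet sink through a channel,
// applying process to each packet. process may return several packets,
// e.g. the fragments of one ClientHello segment.
func pipeline(in <-chan []byte, process func([]byte) [][]byte) <-chan []byte {
	// Buffered so a momentarily slow writer doesn't immediately stall reads.
	out := make(chan []byte, 64)
	go func() {
		defer close(out)
		for pkt := range in {
			for _, p := range process(pkt) {
				out <- p
			}
		}
	}()
	return out
}
```

A reader goroutine feeds `in` from the TUN device while the main loop drains `out` into the raw socket; closing `in` shuts the whole pipeline down cleanly.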
Error wrapping with %w matters here. If the TUN device disappears, you want that error to carry context so the caller can decide whether to retry or bail. We’ve written more about this in the post on error handling best practices in Go.
Why Go works well for this
I keep coming back to Go for networking tools like this, and Gecit is a good example of why.
Single binary distribution. People in censored regions need to download one file and run it. No Python virtualenvs. No npm. GOOS=windows GOARCH=amd64 go build produces a Windows executable from your Linux laptop.
Cross-compilation is painless. Gecit targets three operating systems. One CI pipeline, three binaries. Done.
The standard library handles raw networking. net, encoding/binary, syscall — you can parse and construct packets without pulling in C dependencies. That matters when you’re trying to keep your build simple and your binary portable.
eBPF tooling is mature. cilium/ebpf is battle-tested and actively maintained. It papers over the messy bits (map management, verification errors, kernel version quirks) so your Go code stays readable.
If you’re interested in more Go networking projects, the post on building gRPC services tackles a different slice of Go’s networking capabilities.
Try it yourself
Gecit’s codebase is small enough to read in an afternoon. Clone the repo and look at the platform-specific files. The eBPF C source is under 50 lines.
If you’re in a region where DPI blocks the websites you need, tools like Gecit can restore access. And if you’re a Go developer who wants to understand eBPF sock-ops, TUN interfaces, or raw TCP manipulation, these are techniques that show up everywhere: VPNs, load balancers, network monitors. Worth learning once and carrying with you.