Building a WeChat Bot Platform in Go: How Openilink Hub Does It
WeChat has over a billion users. If you’re building automation around it — message forwarding, AI auto-reply, bot management — you need something solid sitting between your application logic and WeChat itself. Openilink Hub is a self-hosted WeChat bot management platform written in Go. It relays messages through WebSockets and webhooks, supports passkey authentication via WebAuthn, and ships SDKs for seven languages.
I’m less interested in the feature list, though. What caught my attention is how the project uses Go to solve the gnarly parts: concurrent message routing, WebSocket lifecycle management, webhook delivery that doesn’t fall over, and LLM integration that stays clean.
The architecture: Go as the message backbone
Openilink Hub is a relay. Messages arrive from WeChat, the Go backend processes them, and they fan out to registered webhook endpoints or connected WebSocket clients. The server also handles AI auto-reply by forwarding messages to whatever LLM provider you’ve configured.
The fan-out router at the center looks something like this:
package main

import (
    "context"
    "log"
    "sync"
)

// MessageRouter fans out incoming messages to all registered handlers
type MessageRouter struct {
    mu       sync.RWMutex
    handlers []func(ctx context.Context, msg Message) error
}

type Message struct {
    From    string `json:"from"`
    To      string `json:"to"`
    Content string `json:"content"`
    Type    string `json:"type"`
}

func (r *MessageRouter) Register(h func(ctx context.Context, msg Message) error) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.handlers = append(r.handlers, h)
}

func (r *MessageRouter) Dispatch(ctx context.Context, msg Message) {
    r.mu.RLock()
    defer r.mu.RUnlock()

    var wg sync.WaitGroup
    for _, h := range r.handlers {
        wg.Add(1)
        go func(handler func(ctx context.Context, msg Message) error) {
            defer wg.Done()
            if err := handler(ctx, msg); err != nil {
                log.Printf("handler error: %v", err)
            }
        }(h)
    }
    wg.Wait()
}
A read-write mutex protecting a slice of handlers, with goroutine-per-handler dispatch. Simple and effective. Each handler runs concurrently, so a slow webhook endpoint won’t block WebSocket delivery or AI processing. The sync.RWMutex lets any number of dispatches read the handler list concurrently; registering a new handler takes the write lock, so it waits for in-flight dispatches to finish but never corrupts the slice.
If you’ve spent time with Go’s concurrency primitives, none of this will surprise you. But it’s satisfying to see it applied cleanly in a real project.
WebSocket management with gorilla/websocket
Real-time message delivery means WebSockets, and WebSockets in Go means managing a lot of concurrent connections carefully. Each connection typically needs two goroutines — one reading, one writing.
Here’s a simplified version of the hub pattern this kind of architecture uses:
package ws

import (
    "log"
    "sync"

    "github.com/gorilla/websocket"
)

type Client struct {
    conn *websocket.Conn
    send chan []byte
}

type Hub struct {
    mu      sync.RWMutex
    clients map[*Client]bool
}

func NewHub() *Hub {
    return &Hub{
        clients: make(map[*Client]bool),
    }
}

func (h *Hub) Register(c *Client) {
    h.mu.Lock()
    defer h.mu.Unlock()
    h.clients[c] = true
}

func (h *Hub) Unregister(c *Client) {
    h.mu.Lock()
    defer h.mu.Unlock()
    if _, ok := h.clients[c]; ok {
        delete(h.clients, c)
        close(c.send)
    }
}

func (h *Hub) Broadcast(message []byte) {
    h.mu.RLock()
    defer h.mu.RUnlock()
    for client := range h.clients {
        select {
        case client.send <- message:
        default:
            // Client buffer full, drop message and clean up.
            // Unregister needs the write lock, so run it in a
            // goroutine to avoid deadlocking on the RLock we hold.
            log.Println("client buffer full, removing")
            go h.Unregister(client)
        }
    }
}

// WritePump runs in its own goroutine per client
func (c *Client) WritePump() {
    defer c.conn.Close()
    for msg := range c.send {
        if err := c.conn.WriteMessage(websocket.TextMessage, msg); err != nil {
            return
        }
    }
}
The send channel per client is the buffer. If a client can’t keep up, the select with a default case drops the message rather than blocking the broadcast. For a message relay platform, this matters a lot. One laggy consumer shouldn’t punish everyone else.
I like how WritePump uses channel closure for shutdown. When the hub unregisters a client and closes its send channel, the range loop in WritePump ends naturally. No explicit stop signal, no boolean flag. Just channel semantics doing what they’re supposed to do.
Webhook delivery with retry logic
Webhooks fail. Endpoints go down, networks hiccup, DNS resolves to nothing. You need retries with backoff, and you need them to respect cancellation.
Here’s the webhook deliverer:
package webhook

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "math"
    "net/http"
    "time"
)

type Deliverer struct {
    client     *http.Client
    maxRetries int
}

func NewDeliverer(timeout time.Duration, maxRetries int) *Deliverer {
    return &Deliverer{
        client: &http.Client{
            Timeout: timeout,
        },
        maxRetries: maxRetries,
    }
}

func (d *Deliverer) Send(ctx context.Context, url string, payload any) error {
    body, err := json.Marshal(payload)
    if err != nil {
        return fmt.Errorf("marshal payload: %w", err)
    }

    for attempt := 0; attempt <= d.maxRetries; attempt++ {
        if attempt > 0 {
            backoff := time.Duration(math.Pow(2, float64(attempt-1))) * time.Second
            select {
            case <-time.After(backoff):
            case <-ctx.Done():
                return ctx.Err()
            }
        }

        req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
        if err != nil {
            return fmt.Errorf("create request: %w", err)
        }
        req.Header.Set("Content-Type", "application/json")

        resp, err := d.client.Do(req)
        if err != nil {
            continue // retry on network error
        }
        resp.Body.Close()

        if resp.StatusCode >= 200 && resp.StatusCode < 300 {
            return nil
        }
        // Don't retry client errors (4xx) — they won't succeed
        if resp.StatusCode >= 400 && resp.StatusCode < 500 {
            return fmt.Errorf("webhook returned %d", resp.StatusCode)
        }
    }
    return fmt.Errorf("webhook delivery failed after %d attempts", d.maxRetries+1)
}
Two things worth calling out. First, http.NewRequestWithContext means the delivery respects the parent context. Server shutting down? Stop retrying. Second, 4xx errors bail immediately — retrying a 400 Bad Request won’t fix anything, but a 503 might clear up in a few seconds.
The select on time.After and ctx.Done() is the standard Go pattern for interruptible sleep. If you’ve worked with context cancellation in Go, you’ve written this exact block before.
Passkey authentication with WebAuthn
One choice I find genuinely interesting here: Openilink Hub uses passkey authentication instead of passwords. The go-webauthn/webauthn library implements the WebAuthn spec, and integrating it in Go means satisfying a small interface:
package auth

import (
    "github.com/go-webauthn/webauthn/webauthn"
)

type User struct {
    ID          []byte
    Name        string
    DisplayName string
    Credentials []webauthn.Credential
}

func (u *User) WebAuthnID() []byte                         { return u.ID }
func (u *User) WebAuthnName() string                       { return u.Name }
func (u *User) WebAuthnDisplayName() string                { return u.DisplayName }
func (u *User) WebAuthnCredentials() []webauthn.Credential { return u.Credentials }
Registration and login each have two steps — the server generates a challenge, then verifies the response. The library handles the crypto. Your job is persisting users and credentials.
This is where Go’s interface model really shines. The webauthn.User interface is four methods. You slap those on whatever user struct your app already has. No inheritance hierarchies, no framework to wire up. It just works.
AI auto-reply: plugging in LLM providers
The AI chatbot feature routes incoming messages to an LLM and sends the response back through WeChat. Since the project supports multiple providers, you need an abstraction that isn’t going to get in the way:
package ai

import (
    "context"
)

type Provider interface {
    Complete(ctx context.Context, prompt string) (string, error)
}

type AutoReplier struct {
    provider Provider
}

func NewAutoReplier(p Provider) *AutoReplier {
    return &AutoReplier{provider: p}
}

func (a *AutoReplier) Reply(ctx context.Context, incoming string) (string, error) {
    return a.provider.Complete(ctx, incoming)
}
A one-method interface, dependency injection through the constructor, and the caller picks the implementation. Plug in OpenAI, a local Ollama instance, whatever. The AutoReplier is blissfully unaware. If you want to dig into how to configure components like these flexibly, the functional options pattern is worth a read.
Running it with Docker
Getting it running is straightforward:
docker pull openilink/openilink-hub:latest
docker run -d -p 8080:8080 openilink/openilink-hub:latest
Go compiles the whole thing into a single static binary, so Docker images stay small. The React frontend is served by the same binary, embedded with go:embed, which is a pattern I’ve come to love in self-hosted Go apps. One artifact, one process, done.
Why this project is worth reading
What I appreciate about Openilink Hub is how many real production patterns show up in one codebase. Fan-out routing with goroutines and sync.RWMutex. WebSocket connection management with buffered channels and clean shutdown. Context-aware HTTP retries with exponential backoff. Interface-based abstractions that stay small. WebAuthn integration that leans on Go’s implicit interface satisfaction.
If you’re building a bot platform, message relay, or any kind of WeChat automation in Go, these patterns port directly into your own code. And even if WeChat isn’t your domain, the concurrency and networking patterns here are worth studying on their own.
The full source is on GitHub: Openilink Hub repository.