How Karpov Gateway structures a multi-service Go API gateway
Karpov Gateway splits its REST API gateway into eight separate Go binaries — auth, billing, gateway, music, pool, quota, worker, and a QQ Music gateway — each with its own main.go under gateway/cmd/. That structure tells you a lot about how the project thinks about Go service boundaries. Let’s look at the patterns it uses and what Go developers can take away from them.
Eight binaries, one module
The gateway/cmd/ directory has eight entry points:
- gateway/cmd/auth/main.go
- gateway/cmd/billing/main.go
- gateway/cmd/gateway/main.go
- gateway/cmd/music/main.go
- gateway/cmd/pool/main.go
- gateway/cmd/qqmusic-gateway/main.go
- gateway/cmd/quota/main.go
- gateway/cmd/worker/main.go
This follows the standard Go project layout where each binary gets its own directory under cmd/. Each main.go is a thin bootstrap file that wires dependencies and starts the service. The real logic lives in gateway/internal/.
The important split is HTTP at the edge and gRPC behind it. The gateway binary is the Gin-facing edge service: it registers the REST routes under /v1/*, handles proxying, and talks to the backend services. The auth, music, pool, quota, and billing binaries are started as gRPC services; qqmusic-gateway can also run the stack in one process with the backend services listening on ports like :9001 through :9005. Separate binaries mean you can deploy and scale them independently. Run the all-in-one gateway during development, then split services across containers when production needs it.
This is a well-worn Go pattern for monorepo microservices. The internal/ package prevents other modules from importing your business logic, and each cmd/ binary picks what it needs. go build ./cmd/auth gives you just the auth binary with no extra baggage. Clean.
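A thin entry point in this style might look something like the sketch below, where registerAuthService is a hypothetical stand-in for the wiring that gateway/internal/ would actually provide (the port is also illustrative, mirroring the :9001 range mentioned above):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

// registerAuthService stands in for the real registration that would live in
// gateway/internal/auth; a real main.go would wire config, DB, and handlers here.
func registerAuthService(s *grpc.Server) {}

func main() {
	lis, err := net.Listen("tcp", ":9001")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerAuthService(srv)
	log.Println("auth service listening on :9001")
	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
```

The point is how little lives here: parse config, listen, register, serve. Everything interesting stays importable and testable under internal/.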
Auth internals: TOTP, OAuth, and API keys
The auth service is the most complex piece. Its code lives under gateway/internal/auth/, and the file names tell the story:
- gateway/internal/auth/totp.go — TOTP two-factor authentication
- gateway/internal/auth/oauth/service.go — OAuth2 flow handling (with Linux.do as a provider)
- gateway/internal/auth/apikey.go and gateway/internal/auth/apikey_pgrepo.go — API key generation and PostgreSQL storage
- gateway/internal/auth/password.go and gateway/internal/auth/strength.go — password hashing and strength validation
- gateway/internal/auth/session.go — session management
- gateway/internal/auth/ratelimit.go — rate limiting
- gateway/internal/auth/hibp/hibp.go — Have I Been Pwned integration for compromised password checks
The split between apikey.go (domain logic) and apikey_pgrepo.go (PostgreSQL repository) is textbook Go. The interface lives in gateway/internal/auth/apikey_repo.go, and the Postgres implementation fulfills it. Testing becomes easy — swap in a mock repository without touching the business logic.
The same pattern shows up in the OAuth subsystem. gateway/internal/auth/oauth/repo_mem.go is the in-memory implementation, gateway/internal/auth/oauth/repo_pg.go is Postgres. Define a small interface, write two implementations, pick one at startup. No framework needed for dependency injection.
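To make the shape concrete, here is a minimal sketch of that pattern. The type and method names are illustrative, not the project's actual code:

```go
package auth

import (
	"context"
	"errors"
)

// APIKey is an invented stand-in for whatever apikey.go actually defines.
type APIKey struct {
	ID     string
	UserID string
	Hash   string
}

// APIKeyRepo is the contract the domain logic depends on. A file like
// apikey_pgrepo.go would hold the Postgres implementation of an interface
// shaped like this one.
type APIKeyRepo interface {
	Save(ctx context.Context, k APIKey) error
	FindByID(ctx context.Context, id string) (APIKey, error)
}

// memRepo is the in-memory flavor, handy in tests.
type memRepo struct {
	keys map[string]APIKey
}

func newMemRepo() *memRepo { return &memRepo{keys: make(map[string]APIKey)} }

func (m *memRepo) Save(_ context.Context, k APIKey) error {
	m.keys[k.ID] = k
	return nil
}

func (m *memRepo) FindByID(_ context.Context, id string) (APIKey, error) {
	k, ok := m.keys[id]
	if !ok {
		return APIKey{}, errors.New("api key not found")
	}
	return k, nil
}
```

Business logic only ever sees APIKeyRepo, so swapping Postgres for the map is a one-line change at startup.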
OAuth state management
gateway/internal/auth/oauth/state.go handles OAuth state tokens — the random values you pass to an OAuth provider and verify on callback to prevent CSRF attacks. gateway/internal/auth/oauth/identity.go defines the identity struct that comes back from the provider, and gateway/internal/auth/oauth/provider.go abstracts over different OAuth providers.
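A minimal sketch of how state handling often looks, assuming an in-memory store with TTLs (the project's state.go may well store state elsewhere, such as in a cookie or Redis):

```go
package oauth

import (
	"crypto/rand"
	"encoding/base64"
	"sync"
	"time"
)

type stateStore struct {
	mu     sync.Mutex
	states map[string]time.Time // token -> expiry
}

func newStateStore() *stateStore {
	return &stateStore{states: make(map[string]time.Time)}
}

// New generates a random state token and remembers it with a TTL.
func (s *stateStore) New(ttl time.Duration) (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	tok := base64.RawURLEncoding.EncodeToString(buf)
	s.mu.Lock()
	s.states[tok] = time.Now().Add(ttl)
	s.mu.Unlock()
	return tok, nil
}

// Consume verifies the callback's state and deletes it so it can't be replayed.
func (s *stateStore) Consume(tok string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	exp, ok := s.states[tok]
	delete(s.states, tok)
	return ok && time.Now().Before(exp)
}
```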
The provider abstraction is worth stealing. In this repo the interface is concrete about what the OAuth service needs:
```go
type Provider interface {
	Name() string
	DisplayName() string
	OAuth2Config() *oauth2.Config
	FetchProfile(ctx context.Context, tok *oauth2.Token) (*Profile, error)
	Validate(p *Profile) error
}
```
The current implementation is LinuxDoProvider, including provider-specific validation such as trust-level checks. The design still leaves a clean path for future providers: add a provider that satisfies the same interface, register it, and the service in gateway/internal/auth/oauth/service.go can orchestrate the flow without learning provider-specific details.
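To illustrate that path, here is a hypothetical GitHub provider. Nothing GitHub-related exists in the repo, and the Profile fields are invented for the sketch (the real identity type lives in identity.go):

```go
package oauth

import (
	"context"
	"errors"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/github"
)

// Profile mirrors the role of the identity struct in identity.go; the
// fields here are invented for this sketch.
type Profile struct {
	Username string
}

// GitHubProvider is hypothetical: it shows how a new provider would
// satisfy the same interface.
type GitHubProvider struct {
	cfg *oauth2.Config
}

func NewGitHubProvider(clientID, clientSecret, redirectURL string) *GitHubProvider {
	return &GitHubProvider{cfg: &oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		RedirectURL:  redirectURL,
		Endpoint:     github.Endpoint,
		Scopes:       []string{"read:user"},
	}}
}

func (g *GitHubProvider) Name() string                 { return "github" }
func (g *GitHubProvider) DisplayName() string          { return "GitHub" }
func (g *GitHubProvider) OAuth2Config() *oauth2.Config { return g.cfg }

func (g *GitHubProvider) FetchProfile(ctx context.Context, tok *oauth2.Token) (*Profile, error) {
	// A real implementation would call the provider's user API with tok.
	return nil, errors.New("not implemented in this sketch")
}

func (g *GitHubProvider) Validate(p *Profile) error {
	if p == nil || p.Username == "" {
		return errors.New("incomplete profile")
	}
	return nil // provider-specific checks (like trust level) would go here
}
```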
Rate limiting and credential pools
gateway/internal/auth/ratelimit.go handles per-API-key rate limiting, not login throttling. The edge gateway wires it through APIKeyRateLimitMiddleware, so quota enforcement happens around API key usage: requests per minute, daily limits, and retry-after information. The project uses Redis as a backing store, which is the standard approach for distributed rate limiting — each request increments shared state with a TTL, and you reject requests once the limit is hit.
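The core of that approach fits in a few lines. Here is a fixed-window sketch using github.com/redis/go-redis/v9; the key format, window size, and limiter flavor are assumptions, not the project's actual scheme:

```go
package ratelimit

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// Allow increments the per-key counter for the current minute window and
// rejects once the limit is reached. Production code would typically do
// the INCR and EXPIRE atomically via a pipeline or Lua script.
func Allow(ctx context.Context, rdb *redis.Client, apiKey string, limit int64) (bool, error) {
	window := time.Now().Unix() / 60
	key := fmt.Sprintf("rl:%s:%d", apiKey, window)

	n, err := rdb.Incr(ctx, key).Result()
	if err != nil {
		return false, err
	}
	if n == 1 {
		// First hit in this window: set the TTL so the counter expires.
		rdb.Expire(ctx, key, time.Minute)
	}
	return n <= limit, nil
}
```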
The gateway/cmd/pool/main.go binary manages an encrypted credential pool. This is the service that stores and rotates credentials for upstream APIs the gateway proxies to. I like that credentials live in a separate service with encryption at rest. That’s a real security boundary, not just organizational tidiness. The pool service likely uses Go’s crypto packages for encryption and exposes a gRPC interface for the gateway to fetch credentials when proxying requests.
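Since that last sentence is a guess about the internals, here is what encryption at rest with the standard library's AES-GCM typically looks like. Treat it purely as a sketch of the technique, not the pool service's code:

```go
package pool

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// seal encrypts a plaintext credential under a 32-byte key, prepending the
// random nonce to the ciphertext so decryption can recover it.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```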
gRPC for inter-service communication
The project uses gRPC for communication between services. This is a natural fit in Go — google.golang.org/grpc is mature and code generation from protobuf definitions gives you type-safe clients and servers with minimal effort. The gateway binary calls into auth, billing, quota, and pool over gRPC, while exposing a REST API to external clients via Gin.
External clients talk HTTP/JSON. Internal services talk protobuf. You get the ergonomics of REST for consumers and the performance of gRPC internally. The grpc-go repository has solid examples if you want to dig deeper.
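The edge shape is easy to sketch. Assuming a generated gRPC client with a hypothetical CheckKey method (invented for this example), a Gin route that fronts it looks roughly like this:

```go
package edge

import (
	"context"
	"net/http"

	"github.com/gin-gonic/gin"
)

// AuthClient stands in for a protobuf-generated gRPC client; the method
// and its signature are invented for this sketch.
type AuthClient interface {
	CheckKey(ctx context.Context, key string) (bool, error)
}

// registerRoutes shows the edge pattern: HTTP/JSON in, gRPC call out.
func registerRoutes(r *gin.Engine, auth AuthClient) {
	r.GET("/v1/ping", func(c *gin.Context) {
		ok, err := auth.CheckKey(c.Request.Context(), c.GetHeader("Authorization"))
		if err != nil || !ok {
			c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid key"})
			return
		}
		c.JSON(http.StatusOK, gin.H{"status": "ok"})
	})
}
```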
Bootstrap and service wiring
gateway/internal/auth/bootstrap.go handles initial setup of the auth service — creating default admin users, running migrations, seeding data. I really like this pattern: a dedicated bootstrap function that runs once at startup, completely separate from request handling.
In Go, this usually looks like a function that takes a database connection and a config struct, does its work, and returns an error. No magic. No framework lifecycle hooks.
```go
func main() {
	cfg := loadConfig() // hypothetical helper: env/flags into a config struct
	db := connectDB(cfg)
	if err := auth.Bootstrap(db, cfg); err != nil {
		log.Fatal(err)
	}
	// start server
}
```
Explicit initialization in main() with clear error handling. You can read it top to bottom and know exactly what happens at startup.
What Go developers can learn
Karpov Gateway is a good case study in structuring a multi-service Go application. The patterns that matter most:
- One cmd/ directory per binary keeps entry points thin and deployment flexible.
- Interface-based repositories (apikey_repo.go defining the contract, apikey_pgrepo.go implementing it) make testing painless.
- In-memory and Postgres implementations side by side (oauth/repo_mem.go for tests, oauth/repo_pg.go for production) mean you're never blocked on infrastructure during development.
- Dedicated bootstrap logic in bootstrap.go keeps one-time setup separate from request handlers.
- The REST-externally, gRPC-internally split gives you the best of both worlds without fighting either protocol.
If you’re building an API gateway in Go, or any multi-service system, this project’s file structure alone is worth studying. It tells you how to split responsibilities before you read a single line of code. For more on Go project patterns, check out how AList structures its storage providers or how functional options can simplify service configuration.