Traefik: The reverse proxy that makes Kubernetes ingress simple
If you’ve ever configured nginx or HAProxy for a containerised application, you know the pain. Every time you add a service, you edit config files. Every time you scale, you update backends. Traefik changes that entirely.
Traefik is a cloud-native reverse proxy and load balancer written in Go. It watches your infrastructure and configures itself automatically. Add a new Docker container? Traefik picks it up. Deploy to Kubernetes? It reads your ingress resources. No config file edits required.
Why Traefik Works So Well
Traditional proxies need you to define every route. Traefik flips this model. It connects to providers such as Docker, Kubernetes, Consul, etcd, and ZooKeeper, then discovers services and creates routes on the fly.
Here’s what makes it stand out:
- Automatic HTTPS: Built-in Let’s Encrypt support. Certificates are generated and renewed without manual intervention.
- Dynamic configuration: Routes update in real time as services come and go.
- Multiple providers: Works with containers, orchestrators, and service meshes.
- Dashboard: A web UI to see your routes and services at a glance.
Running Traefik with Docker
Let’s start with a simple Docker setup. Create a docker-compose.yml:
version: '3'

services:
  traefik:
    image: traefik:v3.0
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  whoami:
    image: traefik/whoami
    labels:
      - "traefik.http.routers.whoami.rule=Host(`whoami.localhost`)"
Run docker-compose up and hit http://whoami.localhost. Traefik automatically detected the whoami service through Docker labels. No manual routing config.
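If whoami.localhost doesn't resolve on your machine, you can point a request at the proxy and set the Host header yourself. Here's a minimal Go client sketch that does exactly that, assuming Traefik is listening on localhost:80 as configured above:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Traefik routes on the Host header, so send the request to the proxy
    // and set the header to match the router rule.
    req, err := http.NewRequest(http.MethodGet, "http://localhost", nil)
    if err != nil {
        log.Fatal(err)
    }
    req.Host = "whoami.localhost"

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}

The whoami container echoes back the request details, so the response also shows the forwarding headers Traefik added on the way through.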
Using Traefik as a Load Balancer in Go
You might want to test Traefik’s load balancing with your own Go services. Here’s a simple HTTP server that reports its instance ID:
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    // Each replica reports its own ID so you can watch Traefik spread the load.
    instanceID := os.Getenv("INSTANCE_ID")
    if instanceID == "" {
        instanceID = "unknown"
    }

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello from instance: %s\n", instanceID)
    })

    // Health endpoint that Traefik or Kubernetes can probe.
    http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        w.Write([]byte("OK"))
    })

    log.Printf("Starting server on :8080 (instance: %s)", instanceID)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Deploy multiple instances with different INSTANCE_ID values and Traefik will round-robin requests across them. Round robin is the default; you can add sticky sessions or health checks through service labels.
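To see the round-robin in action, a small Go client can tally which instance answered each request. This is a sketch, and app.localhost is a placeholder for whatever Host rule you attach to the router:

package main

import (
    "bufio"
    "fmt"
    "log"
    "net/http"
    "strings"
)

func main() {
    counts := map[string]int{}

    for i := 0; i < 20; i++ {
        req, err := http.NewRequest(http.MethodGet, "http://localhost", nil)
        if err != nil {
            log.Fatal(err)
        }
        // Placeholder host; match it to your router's Host(...) rule.
        req.Host = "app.localhost"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }

        // The server above replies with "Hello from instance: <id>".
        line, _ := bufio.NewReader(resp.Body).ReadString('\n')
        resp.Body.Close()
        counts[strings.TrimSpace(line)]++
    }

    // With round-robin, the counts should come out roughly even.
    for instance, n := range counts {
        fmt.Printf("%-40s %d\n", instance, n)
    }
}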
Kubernetes Ingress Made Simple
Traefik shines with Kubernetes. It reads Ingress resources and IngressRoute custom resources. Here’s a basic setup:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-go-service
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-go-service
                port:
                  number: 8080
Traefik watches for changes. Scale your deployment up or down, and the load balancer updates automatically. This is especially useful when building microservice architectures where services frequently change.
Adding HTTPS with Let’s Encrypt
One of Traefik’s best features is automatic TLS. Configure it in your static config:
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web
Now any service can request a certificate by adding a label:
labels:
  - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
Traefik handles the ACME challenge, stores the certificate, and renews it before expiry. And if your Go services shut down gracefully and expose a health endpoint, they slot in neatly: Traefik's health checks stop sending traffic to instances that are failing or going away.
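On the Go side, a shutdown handler that drains in-flight requests pairs well with this setup. Here's a minimal sketch, not Traefik-specific, that exposes a /health endpoint and exits cleanly on SIGTERM:

package main

import (
    "context"
    "errors"
    "log"
    "net/http"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })

    srv := &http.Server{Addr: ":8080", Handler: mux}

    // Stop accepting new work when the orchestrator sends SIGTERM or SIGINT.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
    defer stop()

    go func() {
        if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
            log.Fatal(err)
        }
    }()

    <-ctx.Done()

    // Give in-flight requests a grace period; Traefik stops routing to the
    // instance once it disappears from the provider.
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    if err := srv.Shutdown(shutdownCtx); err != nil {
        log.Printf("shutdown: %v", err)
    }
}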
Middleware for Cross-Cutting Concerns
Traefik supports middleware for common tasks. Rate limiting, authentication, compression, and headers can all be configured without touching your Go code:
package main

import (
    "log"
    "net/http"
)

// Your Go service stays simple.
func main() {
    http.HandleFunc("/api/data", func(w http.ResponseWriter, r *http.Request) {
        // Traefik already handled auth, rate limiting, and compression.
        // Just focus on business logic.
        w.Header().Set("Content-Type", "application/json")
        w.Write([]byte(`{"status": "ok"}`))
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}
Configure rate limiting in Traefik:
labels:
  - "traefik.http.middlewares.ratelimit.ratelimit.average=100"
  - "traefik.http.middlewares.ratelimit.ratelimit.burst=50"
  - "traefik.http.routers.myapp.middlewares=ratelimit"
This keeps your Go services focused on business logic. Infrastructure concerns live in the infrastructure layer.
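To check that the limiter is doing its job, you can hammer the route and count how many requests come back with 429 Too Many Requests, which is what Traefik's rate-limit middleware returns once the burst is exhausted. A rough sketch, with myapp.localhost standing in for your router's host rule:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    ok, limited := 0, 0

    for i := 0; i < 200; i++ {
        req, err := http.NewRequest(http.MethodGet, "http://localhost/api/data", nil)
        if err != nil {
            log.Fatal(err)
        }
        // Placeholder host; match it to your router's Host(...) rule.
        req.Host = "myapp.localhost"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()

        if resp.StatusCode == http.StatusTooManyRequests {
            limited++
        } else {
            ok++
        }
    }

    // If everything comes back 200, lower the average or fire requests concurrently.
    fmt.Printf("served: %d, rate limited: %d\n", ok, limited)
}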
When to Choose Traefik
Traefik fits well when:
- You’re running containers and need dynamic routing
- You want automatic HTTPS without managing certificates
- You’re building microservices that scale frequently
- You need a load balancer that understands your orchestrator
For static deployments where routes rarely change, nginx might be simpler. But for modern cloud-native Go applications, Traefik removes a lot of operational overhead.
The project is actively maintained and has excellent documentation. If you’re building Go services for Kubernetes or Docker, it’s worth exploring.