The Go Engineer repo teaches Go through actual projects — concurrency, design patterns, data structures — not another TODO app tutorial.

Learn Go by Building Things, Not Reading About Them


Most Go tutorials follow the same formula: explain syntax, show a “Hello World,” maybe build a TODO app, call it a day. You finish and still have no idea how to structure a real Go project. The Go Engineer skips that playbook. It’s a collection of hands-on Go projects organized by topic — concurrency, data structures, design patterns, the standard library — and you learn by writing real code, not reading about it.

I spent time going through the repository and want to share what makes it worth your attention, and what Go patterns you’ll absorb along the way.

How it’s organized

The repo breaks learning into modules:

  • Fundamentals: types, control flow, functions
  • Data structures: linked lists, stacks, queues, trees implemented in Go
  • Concurrency: goroutines, channels, sync primitives
  • Design patterns: factory, singleton, observer, strategy in idiomatic Go
  • Standard library: net/http, encoding/json, io, os
  • Projects: real backend services that pull everything together

Each section has its own directory with Go files you can read, run, and modify. This matters more than you’d think. Go has a strong opinion about project layout, and seeing it applied consistently builds good habits before you even realize it’s happening.

Data structures in Go: more than academic exercises

I like that the project implements classic data structures using Go’s type system. This is where beginners get comfortable with structs, pointers, and methods — the real building blocks, not toy examples.

Here’s a stack implementation similar to what you’ll find in the repo:

package stack

import "errors"

// Stack is a generic LIFO data structure.
type Stack[T any] struct {
	items []T
}

// Push adds an element to the top of the stack.
func (s *Stack[T]) Push(item T) {
	s.items = append(s.items, item)
}

// Pop removes and returns the top element.
func (s *Stack[T]) Pop() (T, error) {
	var zero T
	if len(s.items) == 0 {
		return zero, errors.New("stack is empty")
	}
	top := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return top, nil
}

// Peek returns the top element without removing it.
func (s *Stack[T]) Peek() (T, error) {
	var zero T
	if len(s.items) == 0 {
		return zero, errors.New("stack is empty")
	}
	return s.items[len(s.items)-1], nil
}

// Size returns the number of elements.
func (s *Stack[T]) Size() int {
	return len(s.items)
}

A few idiomatic Go things happening here are worth calling out:

  • Generics ([T any]): introduced in Go 1.18, they let you build a stack that works with any type. No more interface{} casting.
  • Error returns instead of panics on an empty stack: this is the Go way, and fighting it will make your life miserable.
  • Pointer receivers (*Stack[T]): Push and Pop modify the stack, so they need a pointer. Size could technically use a value receiver, but keeping receivers consistent is cleaner.
  • The zero value trick: var zero T gives you a typed zero value to return alongside the error, a pattern you’ll use constantly with generics.
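To see that API from the caller's side, here's a short usage sketch, with a trimmed-down copy of the stack inlined so the file runs on its own:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal inline copy of the generic stack above, so this runs standalone.
type Stack[T any] struct {
	items []T
}

func (s *Stack[T]) Push(item T) {
	s.items = append(s.items, item)
}

func (s *Stack[T]) Pop() (T, error) {
	var zero T
	if len(s.items) == 0 {
		return zero, errors.New("stack is empty")
	}
	top := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return top, nil
}

func main() {
	s := &Stack[string]{}
	s.Push("first")
	s.Push("second")

	// LIFO: the last value pushed comes off first.
	v, err := s.Pop()
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // second

	// Popping an empty stack returns the error rather than panicking.
	s.Pop()
	_, err = s.Pop()
	fmt.Println(err) // stack is empty
}
```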

Building data structures from scratch teaches you how Go manages memory with slices, when pointers matter, and how to design clean APIs. If you want to see how Go slices behave in surprising ways, check out this post on slice internals.
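One of those surprises in miniature: a sub-slice shares its parent's backing array, so an append that fits within capacity silently overwrites the parent.

```go
package main

import "fmt"

func main() {
	a := []int{1, 2, 3, 4}
	b := a[:2]        // b shares a's backing array (len 2, cap 4)
	b = append(b, 99) // fits in capacity, so it writes into a's array
	fmt.Println(a)    // [1 2 99 4] — a[2] was overwritten through b
}
```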

Concurrency patterns that actually matter

The concurrency section is where this project earns its keep. Go’s concurrency model — goroutines and channels — is one of the language’s best features, but it trips people up constantly. I’ve seen experienced developers from other languages write Go concurrency code that’s subtly broken in ways they don’t discover until production.

Here’s a worker pool pattern similar to what the repo covers:

package main

import (
	"fmt"
	"sync"
)

func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		fmt.Printf("Worker %d processing job %d\n", id, job)
		results <- job * 2
	}
}

func main() {
	const numWorkers = 3
	const numJobs = 10

	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)

	var wg sync.WaitGroup

	// Start workers
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, jobs, results, &wg)
	}

	// Send jobs
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)

	// Wait for all workers, then close results
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for result := range results {
		fmt.Printf("Result: %d\n", result)
	}
}

Things to notice:

Directional channels — jobs <-chan int is receive-only, results chan<- int is send-only. The compiler enforces this, so you can’t accidentally send on a receive-only channel. That’s a compile-time guarantee most languages don’t give you.
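A minimal illustration of what the compiler enforces: a bidirectional channel converts implicitly to a directional one at the call site, and each function can only use its side.

```go
package main

import "fmt"

// send sees ch as send-only; trying to receive from ch inside
// this function would be a compile error, not a runtime bug.
func send(ch chan<- int) {
	ch <- 42
}

// receive sees ch as receive-only; sending here would not compile.
func receive(ch <-chan int) int {
	return <-ch
}

func main() {
	ch := make(chan int, 1) // bidirectional; narrows at each call site
	send(ch)
	fmt.Println(receive(ch)) // 42
}
```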

sync.WaitGroup coordinates when all workers finish. You Add(1) before launching each goroutine and Done() when it completes. Mess up the ordering and you’ll get races or deadlocks.

Closing channels is where bugs hide. close(jobs) signals workers to stop. close(results) lets the final range loop terminate. Forgetting to close a channel is probably the single most common concurrency bug in Go code I’ve reviewed.

Buffered channels — make(chan int, numJobs) — mean senders don’t block immediately. The buffer size choice has real performance implications.

This pattern shows up everywhere in Go backend services: processing HTTP requests, running database queries in parallel, batch processing. Get comfortable with it early.
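In real services these pools usually also need to shut down early. Here's a sketch of the same pattern extended with context.Context for cancellation — the names are mine, not the repo's:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// process stands in for real work.
func process(job int) int { return job * 2 }

func worker(ctx context.Context, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case <-ctx.Done():
			return // canceled: stop even if jobs remain
		case job, ok := <-jobs:
			if !ok {
				return // jobs channel closed and drained
			}
			results <- process(job)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // in a server, cancel would fire on shutdown or timeout

	jobs := make(chan int, 5)
	results := make(chan int, 5)

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go worker(ctx, jobs, results, &wg)
	}

	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```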

Design patterns: Go’s take on the classics

The Gang of Four design patterns were written with Java and C++ in mind. Go doesn’t have classes or inheritance, so the patterns look different — sometimes radically so. The repo shows how to translate them into idiomatic Go using interfaces and composition.

Here’s a strategy pattern:

package main

import "fmt"

// Compressor defines the strategy interface.
type Compressor interface {
	Compress(data []byte) ([]byte, error)
}

// GzipCompressor implements Compressor using gzip.
type GzipCompressor struct{}

func (g GzipCompressor) Compress(data []byte) ([]byte, error) {
	fmt.Println("Compressing with gzip")
	// Real implementation would use compress/gzip
	return data, nil
}

// ZstdCompressor implements Compressor using zstd.
type ZstdCompressor struct{}

func (z ZstdCompressor) Compress(data []byte) ([]byte, error) {
	fmt.Println("Compressing with zstd")
	// Real implementation would use a zstd library
	return data, nil
}

// FileProcessor uses a Compressor strategy.
type FileProcessor struct {
	compressor Compressor
}

func NewFileProcessor(c Compressor) *FileProcessor {
	return &FileProcessor{compressor: c}
}

func (fp *FileProcessor) Process(data []byte) ([]byte, error) {
	return fp.compressor.Compress(data)
}

func main() {
	data := []byte("some file content")

	// Swap strategies without changing FileProcessor
	gzipProcessor := NewFileProcessor(GzipCompressor{})
	gzipProcessor.Process(data)

	zstdProcessor := NewFileProcessor(ZstdCompressor{})
	zstdProcessor.Process(data)
}

The Go idiom that matters here is implicit interface satisfaction. GzipCompressor and ZstdCompressor never declare that they implement Compressor. They just have the right method signature. The compiler checks at call sites. If you’re coming from Java or C# where you write implements explicitly, this feels weird at first. Then it feels freeing.
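If you'd rather have the compiler verify the relationship up front instead of at a call site, a common Go convention is a blank-identifier assertion, shown here with a stripped-down copy of the interface:

```go
package main

import "fmt"

type Compressor interface {
	Compress(data []byte) ([]byte, error)
}

type GzipCompressor struct{}

func (GzipCompressor) Compress(data []byte) ([]byte, error) {
	return data, nil // stand-in for compress/gzip
}

// Compile-time check: this line fails to build if GzipCompressor
// ever stops satisfying Compressor.
var _ Compressor = GzipCompressor{}

func main() {
	out, _ := GzipCompressor{}.Compress([]byte("hello"))
	fmt.Println(string(out)) // hello
}
```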

This also shows Go’s preference for composition over inheritance. FileProcessor has a Compressor, it doesn’t extend one. If you’re coming from an OOP language, this mental shift is one of the biggest adjustments you’ll make. The functional options pattern is another example of how Go handles configuration without the class hierarchies you might be used to.
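For reference, the functional options pattern in miniature — Server and its fields here are hypothetical, just to show the shape:

```go
package main

import "fmt"

// Server is a hypothetical type configured via functional options.
type Server struct {
	addr    string
	timeout int // seconds
}

// Option mutates a Server during construction.
type Option func(*Server)

func WithAddr(addr string) Option {
	return func(s *Server) { s.addr = addr }
}

func WithTimeout(seconds int) Option {
	return func(s *Server) { s.timeout = seconds }
}

// NewServer applies defaults first, then each option in order.
func NewServer(opts ...Option) *Server {
	s := &Server{addr: ":8080", timeout: 30}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := NewServer(WithTimeout(5))
	fmt.Println(s.addr, s.timeout) // defaults survive unless overridden
}
```

Callers pass only the options they care about, and adding a new option later doesn't break any existing call site.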

The standard library focus

One of the repo’s best decisions is dedicating an entire section to Go’s standard library. Too many beginners jump straight to third-party frameworks without realizing how much Go gives you out of the box.

net/http alone can handle most backend web services. encoding/json handles serialization. io provides composable readers and writers. testing gives you unit tests, benchmarks, and fuzzing. That’s a lot of ground covered before you go get anything.

Learning the standard library before reaching for external dependencies is a habit that separates Go developers who write maintainable code from those who don’t. The repo reinforces this by building projects with minimal dependencies first, only introducing external packages when they genuinely add something.

What you’ll walk away with

After working through the repo, a beginner should have a working understanding of:

  • Go’s type system: structs, interfaces, generics, and how they compose
  • Error handling done properly: returning errors, wrapping with fmt.Errorf
  • Concurrency primitives: goroutines, channels, sync.Mutex, sync.WaitGroup, context.Context
  • Project structure: how to organize packages, when to export vs. keep private
  • Testing: table-driven tests, the testing package
  • Standard library fluency: net/http, encoding/json, io, os, and sort
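The table-driven testing style mentioned above looks like this — Slugify is a made-up function for illustration, and in a real _test.go file the loop would live inside a Test function and report failures with t.Errorf instead of panicking:

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a hypothetical function under test.
func Slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(s)), " ", "-")
}

func main() {
	// Each case is a row in the table; the loop is the test body.
	cases := []struct {
		name, in, want string
	}{
		{"simple", "Hello World", "hello-world"},
		{"trims whitespace", "  Go  ", "go"},
		{"already clean", "ok", "ok"},
	}
	for _, c := range cases {
		if got := Slugify(c.in); got != c.want {
			panic(fmt.Sprintf("%s: got %q, want %q", c.name, got, c.want))
		}
	}
	fmt.Println("all cases passed")
}
```

Adding a new test becomes adding one row, which is why the style scales so well.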

If you want problems that test these skills from a different angle, the Advent of Code series on this blog pairs well — especially for getting comfortable with algorithmic thinking in Go.

Who this is for

If you’re coming from Python, JavaScript, or Java, this project-based approach will teach you Go faster than reading documentation. You write code, hit compiler errors, fix them, and build intuition for how Go wants you to think. Go is opinionated, and working with those opinions (rather than against them) is half the battle.

If you already know Go basics but haven’t built anything real, the projects section will force you to combine everything. Building a backend service that uses concurrency, proper error handling, and the standard library in one codebase — that’s where things click.

The repo is on GitHub. Clone it, pick a section, and start writing code. You won’t learn Go by reading about it. I tried.