Stop Committing YAML Blindly: Debug GitHub Actions Locally with Go
If you’ve ever pushed a commit just to see if your GitHub Actions workflow works, you know the pain. Change a line of YAML. Push. Wait. Fail. Repeat. It’s slow, it’s frustrating, and it makes you feel like you’re debugging with fmt.Println, except worse because each iteration takes minutes.
ci-debugger is a Go CLI tool that attacks this problem directly. It parses your GitHub Actions workflow files, spins up Docker containers locally, and lets you set breakpoints on individual steps. You pause execution, inspect the environment, figure out what went wrong. No pushing to remote.
The tool is written in Go and uses several patterns worth stealing. Let me walk through how it works.
The problem with GitHub Actions debugging
GitHub Actions workflows live in YAML. There’s no local runtime. No step-through debugger. No way to inspect state mid-run. The feedback loop is miserable:
- Edit .github/workflows/ci.yml
- Commit and push
- Wait for a runner to pick up the job
- Read logs
- Go back to step 1
Tools like act let you run workflows locally using Docker, which helps. ci-debugger goes further by adding breakpoints — you can pause at any step and drop into the container.
If you’ve built CLI tools in Go before, you’ll recognize many of the patterns here. If not, you might want to check out how to build CLI tools in Go first.
Parsing workflow YAML in Go
ci-debugger needs to parse GitHub Actions YAML files and understand their structure: jobs, steps, environment variables, uses directives, and run commands. Go’s gopkg.in/yaml.v3 package does the heavy lifting.
Here’s a simplified version of the parsing. The tool defines Go structs that map to the GitHub Actions YAML schema:
```go
package workflow

import (
	"os"

	"gopkg.in/yaml.v3"
)

type Workflow struct {
	Name string         `yaml:"name"`
	On   interface{}    `yaml:"on"`
	Jobs map[string]Job `yaml:"jobs"`
}

type Job struct {
	RunsOn string `yaml:"runs-on"`
	Steps  []Step `yaml:"steps"`
}

type Step struct {
	Name string            `yaml:"name"`
	Uses string            `yaml:"uses,omitempty"`
	Run  string            `yaml:"run,omitempty"`
	With map[string]string `yaml:"with,omitempty"`
	Env  map[string]string `yaml:"env,omitempty"`
}

func Parse(path string) (*Workflow, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var wf Workflow
	if err := yaml.Unmarshal(data, &wf); err != nil {
		return nil, err
	}
	return &wf, nil
}
```
Straightforward struct-tag-based deserialization. The interface{} type for On is worth calling out — GitHub Actions supports multiple trigger formats (on: push, on: [push, pull_request], and the map form), so the parser has to stay flexible. In production code, you’d follow up with a type switch to normalize the value.
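The normalization is only a few lines. Here's a hypothetical normalizeOn helper (the name and error handling are mine, not ci-debugger's) that flattens all three trigger forms into a slice of event names:

```go
package main

import "fmt"

// normalizeOn flattens the three trigger forms GitHub Actions accepts:
// a single string (on: push), a sequence (on: [push, pull_request]),
// and a map of event names to filters. yaml.v3 decodes these into
// string, []interface{}, and map[string]interface{} respectively.
func normalizeOn(on interface{}) ([]string, error) {
	switch v := on.(type) {
	case string:
		return []string{v}, nil
	case []interface{}:
		events := make([]string, 0, len(v))
		for _, e := range v {
			s, ok := e.(string)
			if !ok {
				return nil, fmt.Errorf("unexpected event entry of type %T", e)
			}
			events = append(events, s)
		}
		return events, nil
	case map[string]interface{}:
		events := make([]string, 0, len(v))
		for name := range v {
			events = append(events, name)
		}
		return events, nil
	default:
		return nil, fmt.Errorf("unsupported 'on' value of type %T", on)
	}
}
```

Downstream code then works with a plain []string and never touches interface{} again.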
One pattern ci-debugger uses: it iterates over the parsed steps and matches them against user-defined breakpoints by step name or index. That’s where the debugging logic kicks in.
Docker integration with the Go SDK
The real power comes from Docker integration. ci-debugger uses the Docker SDK for Go (github.com/docker/docker/client) to create and manage containers that simulate the CI environment.
Here’s a simplified example: creating a container, executing a workflow step inside it, and pausing for a breakpoint:
```go
package runner

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

type Runner struct {
	cli         *client.Client
	containerID string
}

func NewRunner(ctx context.Context, image string) (*Runner, error) {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		return nil, fmt.Errorf("creating docker client: %w", err)
	}
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: image,
		Cmd:   []string{"sleep", "infinity"},
		Tty:   true,
	}, nil, nil, nil, "")
	if err != nil {
		return nil, fmt.Errorf("creating container: %w", err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
		return nil, fmt.Errorf("starting container: %w", err)
	}
	return &Runner{cli: cli, containerID: resp.ID}, nil
}

func (r *Runner) ExecStep(ctx context.Context, command string) error {
	execConfig := container.ExecOptions{
		Cmd:          []string{"/bin/sh", "-c", command},
		AttachStdout: true,
		AttachStderr: true,
	}
	execID, err := r.cli.ContainerExecCreate(ctx, r.containerID, execConfig)
	if err != nil {
		return fmt.Errorf("creating exec: %w", err)
	}
	resp, err := r.cli.ContainerExecAttach(ctx, execID.ID, container.ExecAttachOptions{})
	if err != nil {
		return fmt.Errorf("attaching to exec: %w", err)
	}
	defer resp.Close()
	_, err = io.Copy(os.Stdout, resp.Reader)
	return err
}

func (r *Runner) Cleanup(ctx context.Context) error {
	return r.cli.ContainerRemove(ctx, r.containerID, container.RemoveOptions{Force: true})
}
```
A few things worth noting:
The sleep infinity trick. The container starts with sleep infinity as its command, which keeps it alive while the tool sends individual steps via ContainerExecCreate. This is a common pattern when you need a long-lived container you send commands to incrementally rather than running one command and exiting.
Context threading. Every Docker SDK call takes a context.Context. This matters because when a user hits a breakpoint, the tool needs to pause execution without killing the container. You can use context.WithCancel to wire up cleanup when the user quits the debug session. If you want a refresher on context, check out what is context in Go.
Error wrapping with %w. The code uses fmt.Errorf with %w consistently, which lets callers use errors.Is and errors.As to inspect errors up the chain. For a CLI tool, this matters — you want to tell users what failed. Docker not running? Image not found? Permission denied? Each of those needs a different message.
The breakpoint mechanism
The breakpoint implementation is where things get interesting. When the runner hits a step marked as a breakpoint, it needs to:
- Pause the step execution loop
- Give the user an interactive shell into the container
- Resume when the user says so
Here’s a simplified version:
```go
package runner

import (
	"bufio"
	"context"
	"fmt"
	"os"
)

type Breakpoint struct {
	StepIndex int
	StepName  string
}

// Step here mirrors the workflow package's Step type.
func (r *Runner) RunWithBreakpoints(ctx context.Context, steps []Step, breakpoints []Breakpoint) error {
	bpSet := make(map[int]bool)
	for _, bp := range breakpoints {
		bpSet[bp.StepIndex] = true
	}
	for i, step := range steps {
		fmt.Printf("▶ Step %d: %s\n", i, step.Name)
		if step.Run != "" {
			if err := r.ExecStep(ctx, step.Run); err != nil {
				return fmt.Errorf("step %d failed: %w", i, err)
			}
		}
		if bpSet[i] {
			fmt.Printf("⏸ Breakpoint hit at step %d: %s\n", i, step.Name)
			fmt.Println("  Container ID:", r.containerID)
			fmt.Println("  Run: docker exec -it", r.containerID, "/bin/sh")
			fmt.Print("  Press Enter to continue...")
			reader := bufio.NewReader(os.Stdin)
			_, _ = reader.ReadString('\n')
		}
	}
	return nil
}
```
The breakpoint set uses a map[int]bool for O(1) lookups. It's simple, and it's a pattern you see constantly in Go in place of scanning a slice on every check.
The tool prints the container ID and a docker exec command so you can open a separate terminal and poke around. A more polished version could use ContainerExecCreate with Tty: true and AttachStdin: true to embed an interactive shell directly in the CLI, piping os.Stdin and os.Stdout through the Docker SDK’s hijacked connection.
Environment variable injection
GitHub Actions workflows lean hard on environment variables: GITHUB_SHA, GITHUB_REF, secrets, matrix values. ci-debugger injects these into the container so steps behave the way they would on a real runner.
In Go, you pass these through the Docker SDK’s Config.Env field when creating the container:
```go
func buildEnvSlice(workflow *Workflow, job *Job, step *Step) []string {
	env := []string{
		"CI=true",
		"GITHUB_ACTIONS=true",
		"GITHUB_WORKSPACE=/github/workspace",
	}
	// Job-level env
	for k, v := range job.Env {
		env = append(env, fmt.Sprintf("%s=%s", k, v))
	}
	// Step-level env overrides job-level
	for k, v := range step.Env {
		env = append(env, fmt.Sprintf("%s=%s", k, v))
	}
	return env
}
```
Notice the order: step-level env vars come after job-level ones. Docker uses the last occurrence when there are duplicates, so this gives step-level variables precedence. That matches how GitHub Actions actually behaves.
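If you want to verify that precedence, a hypothetical resolveEnv helper (for illustration, not part of ci-debugger) can collapse the slice the same way Docker does, with the last occurrence of a key winning:

```go
package main

import "strings"

// resolveEnv collapses a KEY=VALUE slice into a map, letting later
// entries overwrite earlier ones, the same last-wins behavior Docker
// applies to duplicate keys in Config.Env.
func resolveEnv(env []string) map[string]string {
	resolved := make(map[string]string, len(env))
	for _, kv := range env {
		if k, v, ok := strings.Cut(kv, "="); ok {
			resolved[k] = v
		}
	}
	return resolved
}
```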
Building the CLI with Cobra
ci-debugger uses Cobra for its CLI, which is the go-to choice for Go CLI tools. The command structure is clean:
```
ci-debugger run --workflow .github/workflows/ci.yml --breakpoint 3
ci-debugger run --workflow .github/workflows/ci.yml --breakpoint "Run tests"
```
The --breakpoint flag accepts either a step index or a step name. Handling this is simple:
```go
func parseBreakpoint(value string, steps []Step) (int, error) {
	// Try as integer index first
	if idx, err := strconv.Atoi(value); err == nil {
		if idx < 0 || idx >= len(steps) {
			return 0, fmt.Errorf("step index %d out of range (0-%d)", idx, len(steps)-1)
		}
		return idx, nil
	}
	// Try as step name
	for i, s := range steps {
		if strings.EqualFold(s.Name, value) {
			return i, nil
		}
	}
	return 0, fmt.Errorf("no step found matching %q", value)
}
```
strings.EqualFold for case-insensitive matching is a nice touch. Users shouldn’t have to remember exact casing of step names. If you’re interested in structuring a Go CLI with Cobra, there’s a detailed post on building CLI tools with Cobra.
What’s worth stealing from this project
ci-debugger is a good case study for several Go patterns:
Struct-tag-based YAML parsing gives you a clean mapping between external config formats and Go types. The Docker SDK usage, particularly the ContainerCreate + sleep infinity + ContainerExecCreate pattern for running individual commands, gives you far more fine-grained control over the container lifecycle than shelling out to docker run. Context propagation through long-running operations keeps cancellation and cleanup straightforward. And the Cobra CLI design shows how to accept flexible input (integers or strings) for the same flag without overcomplicating things.
The project is early-stage, but the core idea is solid: bring the debugging experience closer to what you’d expect from a real development workflow instead of the push-and-pray cycle that everyone seems to have accepted as normal.
If you’re building CI-related tooling in Go, the Docker SDK integration pattern here is the thing I’d focus on. Check out the ci-debugger repo and give it a shot next time a workflow file has you staring at logs for the fifth time in a row.