Rclone is written in Go — here is what you can learn from it
Rclone calls itself “rsync for cloud storage.” It syncs files to and from Google Drive, S3, Dropbox, Backblaze B2, OneDrive, Azure Blob Storage, SFTP, and about 70 other backends. It handles encryption, chunked uploads, bandwidth throttling, and even mounts remote storage as a FUSE filesystem.
But what makes rclone interesting for us is how it’s built. The codebase is a masterclass in Go interface design, and there’s a lot to learn from the patterns it uses. Let’s break down the Go techniques that power rclone.
The fs.Fs Interface: One Interface to Rule Them All
Rclone supports dozens of cloud storage providers — S3, Google Cloud Storage, Azure Files, Dropbox, WebDAV, FTP, OpenStack Swift, and many more. How do you write sync logic that works across all of them? You define a common interface.
At the heart of rclone is the fs.Fs interface:
// Fs is the interface a cloud storage system must provide
type Fs interface {
Info
// List the objects and directories in dir into entries
List(ctx context.Context, dir string) (DirEntries, error)
// NewObject finds the Object at remote
NewObject(ctx context.Context, remote string) (Object, error)
// Put uploads to the remote path with the modTime given of the given size
Put(ctx context.Context, in io.Reader, src ObjectInfo, options ...OpenOption) (Object, error)
// Mkdir creates the directory if it doesn't exist
Mkdir(ctx context.Context, dir string) error
// Rmdir removes the directory
Rmdir(ctx context.Context, dir string) error
}
Every backend — whether it talks to Google Drive’s REST API or an SFTP server — implements this interface. The sync engine never knows or cares which provider it’s working with. It just calls List, Put, NewObject, and so on.
This is a textbook application of Go’s implicit interface satisfaction. A backend like backend/s3 doesn’t declare “I implement fs.Fs.” It just has the right methods, and the compiler checks the rest. If you’ve read about Go interfaces and how they work, you’ll recognise this pattern.
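A common companion to implicit satisfaction is the compile-time interface check. Rclone's backends typically end with a var block along these lines (a sketch — the exact set of interfaces asserted varies by backend), so a missing or misspelled method fails the build instead of surfacing at runtime:
// Compile-time assertions: the compiler verifies that *Fs has every
// method these interfaces require, at zero runtime cost.
var (
	_ fs.Fs     = (*Fs)(nil)
	_ fs.Copier = (*Fs)(nil)
)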
Optional Capabilities with Interface Upgrades
Not all storage backends have the same features. S3 supports server-side copy. Google Drive supports moving files. FTP doesn’t support either. How does rclone handle this without polluting the base interface?
It uses optional interfaces — a pattern sometimes called “interface upgrades”:
// Mover is an optional interface for Fs
type Mover interface {
// Move src to this remote using server-side move operations
Move(ctx context.Context, src Object, remote string) (Object, error)
}
// Copier is an optional interface for Fs
type Copier interface {
// Copy src to this remote using server-side copy operations
Copy(ctx context.Context, src Object, remote string) (Object, error)
}
Then, at runtime, rclone checks whether a backend supports a given capability using type assertions:
func serverSideCopy(ctx context.Context, fdst fs.Fs, src fs.Object, remote string) (fs.Object, error) {
do, ok := fdst.(fs.Copier)
if !ok {
return nil, errors.New("server-side copy not supported")
}
return do.Copy(ctx, src, remote)
}
This is a powerful pattern. The base fs.Fs stays small and clean. Backends opt in to extra capabilities by implementing additional interfaces. The caller checks at runtime with a type assertion. No flags, no config booleans, no inheritance hierarchy.
Rclone defines about 20 of these optional interfaces: Purger, Mover, DirMover, PublicLinker, Abouter, UserInfoer, and more. Each one represents a capability that a backend may or may not have.
This approach lets you extend a system without modifying existing code — which maps nicely to the Open/Closed Principle. If you’ve worked with functional options in Go, you’ll appreciate this kind of extensible design.
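And when the type assertion fails, the caller can degrade gracefully by moving the bytes through the client instead. Here's a hedged sketch of that fallback — roughly what rclone's operations layer does, leaning on the fact that fs.Object also satisfies fs.ObjectInfo (copyObject itself is illustrative, not rclone's API):
// copyObject prefers server-side copy and falls back to a streamed
// download/re-upload when the backend lacks the Copier capability.
// Simplified: assumes the destination name matches src.Remote().
func copyObject(ctx context.Context, fdst fs.Fs, src fs.Object, remote string) (fs.Object, error) {
	if do, ok := fdst.(fs.Copier); ok {
		return do.Copy(ctx, src, remote) // fast path: bytes never touch the client
	}
	rc, err := src.Open(ctx) // slow path: stream through this machine
	if err != nil {
		return nil, err
	}
	defer rc.Close()
	return fdst.Put(ctx, rc, src) // src doubles as the fs.ObjectInfo metadata
}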
Backend Registration with init()
Rclone has 70+ backends. How does it wire them all together? It uses Go’s init() function and a global registry.
Each backend package registers itself on import:
// In backend/s3/s3.go
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers",
NewFs: NewFs,
Options: []fs.Option{
// ... provider-specific config options
},
})
}
The fs.Register function appends the backend to a global registry slice:
var registry []*RegInfo
func Register(info *RegInfo) {
registry = append(registry, info)
}
Then blank imports — collected in an umbrella package that main pulls in — register every backend:
import (
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/b2"
// ... many more
)
Each blank import triggers the init() function, which registers the backend. By the time main() runs, all backends are available.
This is the same plugin pattern used by Go’s database/sql package and image decoders. It’s simple and effective, but be aware of the tradeoff: init() functions run before main(), so registration errors can be hard to debug. The Go team has documented this pattern and its caveats.
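The same pattern drops into your own projects with very little code. A minimal, self-contained sketch (all names here are illustrative, not rclone's):
package registry

import "fmt"

// Backend describes one pluggable implementation.
type Backend struct {
	Name  string
	NewFs func(path string) (any, error)
}

var backends = map[string]*Backend{}

// Register is called from each plugin's init(). Panicking on a
// duplicate name mirrors what database/sql does for drivers.
func Register(b *Backend) {
	if _, dup := backends[b.Name]; dup {
		panic(fmt.Sprintf("registry: duplicate backend %q", b.Name))
	}
	backends[b.Name] = b
}

// Find resolves a backend by name at runtime, e.g. from a config file.
func Find(name string) (*Backend, error) {
	b, ok := backends[name]
	if !ok {
		return nil, fmt.Errorf("registry: unknown backend %q", name)
	}
	return b, nil
}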
Concurrency: Parallel Transfers with Goroutines and Semaphores
Syncing thousands of files one at a time would be painfully slow. Rclone runs transfers in parallel using goroutines, controlled by a bounded semaphore.
The core idea is straightforward. Rclone uses a token-based approach to limit concurrency:
package main
import (
"context"
"fmt"
"sync"
)
type Semaphore struct {
tokens chan struct{}
}
func NewSemaphore(n int) *Semaphore {
return &Semaphore{
tokens: make(chan struct{}, n),
}
}
func (s *Semaphore) Acquire(ctx context.Context) error {
select {
case s.tokens <- struct{}{}:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
func (s *Semaphore) Release() {
<-s.tokens
}
func main() {
sem := NewSemaphore(4) // max 4 concurrent transfers
var wg sync.WaitGroup
files := []string{"file1.txt", "file2.txt", "file3.txt", "file4.txt", "file5.txt", "file6.txt"}
for _, f := range files {
wg.Add(1)
go func(name string) {
defer wg.Done()
ctx := context.Background()
if err := sem.Acquire(ctx); err != nil {
fmt.Printf("cancelled: %s\n", name)
return
}
defer sem.Release()
fmt.Printf("transferring: %s\n", name)
// simulate transfer work here
}(f)
}
wg.Wait()
}
Rclone’s actual implementation is more sophisticated — it uses a transfer manager that tracks in-progress operations, handles retries with exponential backoff, and respects context cancellation throughout. But the core pattern is the same: buffered channels as semaphores, goroutines for parallelism, and context.Context for cancellation.
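For a feel of the retry side, here's a minimal retry-with-backoff sketch (retryWithBackoff is an illustrative helper, not rclone's API — its real pacer adds jitter and per-backend rate-limit awareness):
// retryWithBackoff retries op up to attempts times, doubling the wait
// between tries and bailing out early if the context is cancelled.
func retryWithBackoff(ctx context.Context, attempts int, op func() error) error {
	delay := 100 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		select {
		case <-time.After(delay):
			delay *= 2
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}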
The --transfers flag controls how many parallel transfers rclone runs (default is 4). The --checkers flag controls parallel hash-checking goroutines. Both use this same bounded concurrency approach.
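If you'd rather not hand-roll the semaphore, golang.org/x/sync/errgroup offers the same bounded fan-out plus error propagation. Rclone keeps its own machinery, but for new code a sketch like this is often all you need:
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	var g errgroup.Group
	g.SetLimit(4) // at most 4 goroutines in flight, like --transfers=4

	files := []string{"file1.txt", "file2.txt", "file3.txt", "file4.txt", "file5.txt"}
	for _, f := range files {
		f := f // capture for the closure (not needed from Go 1.22 on)
		g.Go(func() error {
			fmt.Printf("transferring: %s\n", f)
			return nil // real transfer work (and its error) goes here
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("sync failed:", err)
	}
}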
If you want to understand how context cancellation works in practice, check out what is context in Go.
The io.Reader Pipeline for Encryption and Streaming
Rclone supports client-side encryption via its crypt backend. Files get encrypted before they leave your machine. The implementation leans heavily on Go’s io.Reader composition.
Rather than reading a file into memory, encrypting it, and then uploading it, rclone wraps readers:
// Simplified version of how rclone chains readers
func encryptedUpload(ctx context.Context, dst fs.Fs, plaintext io.Reader, objectInfo fs.ObjectInfo, key []byte) error {
// Wrap the plaintext reader with an encrypting reader
encrypted, err := newEncrypter(plaintext, key)
if err != nil {
return err
}
// The dst.Put call streams from the encrypted reader
// No full file buffered in memory
_, err = dst.Put(ctx, encrypted, objectInfo)
return err
}
The newEncrypter returns an io.Reader that encrypts bytes on the fly as they’re read. The upload function just sees an io.Reader. It doesn’t know or care that encryption is happening.
This is the io.Reader composability pattern that makes Go’s I/O model so effective. You can stack readers for encryption, compression, progress tracking, and bandwidth limiting — all without buffering the entire file.
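To make that concrete, here's a tiny runnable example of one such layer — a byte-counting reader of the kind you'd use for progress reporting (countingReader is illustrative, not rclone's type):
package main

import (
	"fmt"
	"io"
	"strings"
)

// countingReader wraps another io.Reader and tracks bytes read —
// the same wrapping trick works for progress bars and bandwidth accounting.
type countingReader struct {
	r io.Reader
	n int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	c.n += int64(n)
	return n, err
}

func main() {
	src := strings.NewReader("hello, streaming world")
	counted := &countingReader{r: src}

	// The consumer sees only an io.Reader; it has no idea counting is happening.
	data, _ := io.ReadAll(counted)
	fmt.Printf("read %d bytes: %s\n", counted.n, data)
}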
Rclone’s crypt backend uses NaCl secretbox for file content encryption and scrypt for key derivation. File names get encrypted too, using EME (ECB-Mix-ECB) wide-block encryption.
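If you want to see the primitive itself, here's a minimal secretbox round trip. This shows only the seal/open calls — rclone's actual crypt format splits files into chunks and manages nonces per chunk, which this sketch ignores:
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	var key [32]byte
	var nonce [24]byte
	rand.Read(key[:])   // real code derives the key (e.g. via scrypt) and checks errors
	rand.Read(nonce[:]) // a nonce must never repeat for the same key

	// Seal appends the ciphertext after the nonce so both travel together.
	sealed := secretbox.Seal(nonce[:], []byte("secret file contents"), &nonce, &key)

	var n [24]byte
	copy(n[:], sealed[:24])
	plain, ok := secretbox.Open(nil, sealed[24:], &n, &key)
	fmt.Println(ok, string(plain))
}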
FUSE Filesystem Mount
One of rclone’s more impressive features is rclone mount, which presents remote cloud storage as a local filesystem. It uses bazil.org/fuse, a Go FUSE library, to implement this.
The mount code implements FUSE callbacks that translate filesystem operations into fs.Fs method calls:
// Simplified FUSE read handler
func (f *File) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
// Open the remote object
reader, err := f.obj.Open(ctx, &fs.SeekOption{Offset: req.Offset})
if err != nil {
return err
}
defer reader.Close()
buf := make([]byte, req.Size)
n, err := io.ReadFull(reader, buf)
if err == io.ErrUnexpectedEOF || err == io.EOF {
	err = nil // a short read at end of file is expected, not an error
}
resp.Data = buf[:n]
return err
}
When you cat /mnt/remote/file.txt, the kernel sends a FUSE read request. Rclone translates that into an HTTP range request to your cloud provider. The file is streamed back through the FUSE layer. Your application thinks it’s reading a local file.
This works on Linux and macOS. On Windows, rclone uses WinFsp instead.
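Trying it yourself is a one-liner once you've configured a remote (here named remote):
rclone mount remote:path /mnt/remote
The mountpoint should be an empty directory; on Linux, fusermount -u /mnt/remote unmounts it cleanly.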
What You Can Take Away
Rclone is a large project, but the Go patterns it relies on are applicable to any codebase:
- Small core interfaces, optional capabilities via type assertions. Keep your base interface minimal. Let implementations opt in to extra features.
- Self-registering plugins with init() and blank imports. Great for extensible systems where you want to add new implementations without changing the core.
- Bounded concurrency with buffered channels. Simple, effective, and easy to reason about.
- io.Reader composition for streaming pipelines. Avoid buffering entire files in memory. Chain readers instead.
These aren’t rclone-specific tricks. They’re idiomatic Go patterns that work well in many contexts. If you’re building anything that talks to multiple backends or handles file I/O at scale, rclone’s source code is worth reading.
You can browse the full source at github.com/rclone/rclone and the official docs at rclone.org.