Every Go concurrency course covers goroutines, channels, mutexes, WaitGroups, and maybe semaphores. Very few touch
`sync.Cond`. It’s either treated as advanced or dismissed as unnecessary because “just use channels.” But there’s a whole class of coordination problem where channels are the wrong tool and `sync.Cond` is exactly right. Once you see the pattern, you’ll recognize it everywhere — and you’ll stop reaching for `time.Sleep` polling loops when you shouldn’t.

This is a bonus lesson. Not because the topic is minor, but because it builds on mutexes (Lesson 6) and you really need to understand lock ownership before `sync.Cond` clicks.

The first time I looked at `sync/atomic`, it felt like a niche tool for systems programmers writing lock-free data structures. Turns out it’s one of the most practically useful packages in the standard library — and most Go developers reach for it way too late, after they’ve already built something with a mutex and then profiled it into submission.

Atomic operations are CPU-level instructions that read or modify memory as a single, indivisible unit. There’s no scheduler window between the read and the write. That means multiple goroutines can operate on the same memory location without a lock — and without any blocking. When your shared state is a counter, a flag, or a single configuration value, atomics are almost always the right tool.
Go doesn’t have a built-in enum keyword. What it has is
`iota`, a constant counter that resets to zero at the start of each `const` block and increments with every constant declaration. It sounds underwhelming. In practice it gives you typed enums, bitmask permissions, and self-maintaining constant sequences — all without any runtime overhead.

The Problem
The naive approach to enums in Go is plain integer constants or string constants. Both work, but neither gives you type safety:
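A sketch of what that looks like (the `Status` values and `describe` helper are invented for illustration): because the constants are plain `int`s, any integer in the program passes for one.

```go
package main

import "fmt"

// Naive "enum": untyped integer constants. Nothing ties these
// values together, and nothing stops a caller from passing 42.
const (
	StatusPending  = 0
	StatusActive   = 1
	StatusArchived = 2
)

// describe accepts any int at all — the compiler cannot tell
// a status apart from a count, an index, or a typo.
func describe(status int) string {
	switch status {
	case StatusPending:
		return "pending"
	case StatusActive:
		return "active"
	case StatusArchived:
		return "archived"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(StatusActive)) // "active"
	fmt.Println(describe(42))           // compiles fine: "unknown"
}
```

A defined type (`type Status int`) plus `iota` closes that hole: `describe(Status)` then rejects a bare `42` at compile time unless the caller converts it explicitly.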
There’s a class of production bugs I see over and over — and the cause is almost always the same: nothing is telling the program to slow down. A spike in traffic arrives, every goroutine blasts outbound, and suddenly you’ve got five hundred simultaneous connections against a Postgres instance that’s configured for a hundred. The database starts rejecting connections. The application throws errors. Everyone’s paged at 2am.
The fix isn’t complicated. It’s a semaphore — a primitive that limits how many concurrent operations are in flight at once. Go doesn’t ship a dedicated semaphore type, but it doesn’t need to. A buffered channel of the right size is a semaphore, and that insight unlocks a whole class of resource-limiting patterns.
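A minimal sketch of that insight (the limit and the bookkeeping are illustrative): acquire is a send into a buffered channel, release is a receive, and the channel’s capacity is the concurrency limit.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const maxInFlight = 3
	sem := make(chan struct{}, maxInFlight) // a buffered channel is a semaphore

	// Bookkeeping just to demonstrate the limit is enforced.
	var mu sync.Mutex
	inFlight, peak := 0, 0

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire: blocks once maxInFlight ops are running
			defer func() { <-sem }() // release: frees a slot for the next goroutine

			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()

			// ... the rate-limited work goes here (DB query, outbound call) ...

			mu.Lock()
			inFlight--
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(peak <= maxInFlight) // true: never more than 3 concurrent
}
```

Sizing the buffer at, say, 90 against a Postgres instance configured for 100 connections is exactly the guard rail the 2am page was missing.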
Go has two visibility levels: exported (starts with a capital letter) and unexported (doesn’t). Most engineers use only these two. But there’s a third option that the language gives you for free, and it’s more useful than most people realize. The
`internal` directory enforces that certain packages can only be imported by code within your own module — and the compiler, not documentation or convention, does the enforcing.

Exported symbols are API commitments. Once something is exported and external code is depending on it, changing it is a breaking change. The
`internal` package is the escape hatch: share code across multiple packages in your own codebase without accidentally publishing an API surface you’ll have to maintain forever.

Transactions are the part of database programming where “it’s fine most of the time” really isn’t good enough. A buggy SELECT just returns wrong data. A buggy transaction can leave your database in a half-written state — an order placed without inventory decremented, money debited without the transfer completing, a user created without their profile record. The bugs are subtle, often don’t manifest in testing, and only show up in production when two things happen at the same time.
There’s a pattern that comes up constantly in backend services: make N concurrent calls, collect all their results, and if any one of them fails, cancel the rest and return the error. This is the “all succeed or all cancel” pattern — and before
`errgroup`, implementing it correctly required a non-trivial amount of boilerplate involving `WaitGroup`, error channels, and manual context cancellation. People got it wrong often enough that `errgroup` was created specifically to handle it.

I have a strongly held opinion about timeouts: if you’re making a network call, a database query, or waiting on any external resource without a timeout, you’ve written a production bug. It just hasn’t fired yet. The network will eventually hang. The database will eventually have a slow query. The external API will eventually stop responding. And when it does, your goroutine will wait. And wait. And wait — holding a connection, a file descriptor, a slot in your worker pool — until the process runs out of resources or someone restarts it.
In Java or C#, you declare that a class implements an interface. You write
`implements Runnable`, and the compiler ties that class to that interface forever. Go doesn’t work that way. A type satisfies an interface the moment it has the right methods — no declaration, no explicit relationship. This sounds like a minor syntactic difference, but it changes how you design systems in ways that compound over time.

The Problem
When interfaces are declared by the implementor (the Java way), you end up with a few recurring problems.
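The contrast is easiest to see in code. In this sketch (the types are invented for illustration), the consumer declares the interface it needs, and `EmailSender` satisfies it without ever naming it:

```go
package main

import "fmt"

// Notifier is declared by the CONSUMER, describing only what it needs.
type Notifier interface {
	Notify(msg string) error
}

// EmailSender never mentions Notifier — it satisfies the interface
// purely by having a method with the right signature.
type EmailSender struct{ Addr string }

func (e EmailSender) Notify(msg string) error {
	fmt.Printf("email to %s: %s\n", e.Addr, msg)
	return nil
}

// alert depends on the small interface, not the concrete type,
// so tests can pass in a fake without any mocking framework.
func alert(n Notifier, msg string) error { return n.Notify(msg) }

func main() {
	// Usable as a Notifier with no "implements" declaration anywhere.
	_ = alert(EmailSender{Addr: "ops@example.com"}, "disk almost full")
}
```

Because `EmailSender` owes nothing to `Notifier`, the interface can live in the consuming package, stay minimal, and be added long after the concrete type was written — none of which is possible when the implementor must declare the relationship up front.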
Goroutine leaks are the memory leaks of concurrent Go. They’re slow, invisible, and tend to surface only under load — or worse, only after days of continuous running when the service has accumulated tens of thousands of stuck goroutines. I’ve debugged two production incidents that traced back to leaks in code that had been in production for months, completely unnoticed during normal load.
The root cause is almost always the same: someone started a goroutine and didn’t give it a way to exit. Not a way to exit eventually — a way to exit in every possible code path.
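A sketch of that discipline using a done channel (a `context.Context` works the same way); the `worker` shape is mine, but the point is that both `select` arms are exit paths, so there is no branch where the goroutine blocks forever:

```go
package main

import "fmt"

// worker exits when its input channel is closed OR when stop is closed.
// Every possible code path out of the loop is covered.
func worker(jobs <-chan int, stop <-chan struct{}, done chan<- int) {
	sum := 0
	for {
		select {
		case j, ok := <-jobs:
			if !ok { // producer finished: exit path #1
				done <- sum
				return
			}
			sum += j
		case <-stop: // caller gave up: exit path #2
			done <- sum
			return
		}
	}
}

func main() {
	jobs := make(chan int)
	stop := make(chan struct{})
	done := make(chan int)

	go worker(jobs, stop, done)

	jobs <- 1
	jobs <- 2
	close(jobs) // without this (or close(stop)), the goroutine would leak

	fmt.Println(<-done) // 3
}
```

The leak version of this code is the same loop with a bare `j := <-jobs` and no `stop`: if the producer forgets to close the channel on one error path, the goroutine waits there for the life of the process.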