Advanced Go (Golang) Guide: Concurrency, Generics, Performance & Production Patterns
Deep dive into advanced Go programming: goroutines, channels, context, generics, sync primitives, memory model, pprof profiling, table-driven tests, REST APIs with Gin/Chi, GORM vs sqlx vs pgx, multi-stage Dockerfiles, and Go vs Rust vs Node.js comparison.
- Goroutines start with ~2KB stacks vs ~1MB for a typical OS thread, making them orders of magnitude lighter; channels are the preferred communication mechanism
- context propagates cancellation, deadlines, and request-scoped values across goroutines
- Go 1.18+ generics: type parameters + interface constraints eliminate code duplication
- sync package: Mutex, RWMutex, WaitGroup, Once, Pool for shared state
- pprof + trace pinpoint CPU hotspots and memory leaks
- Table-driven tests + benchmarks + fuzz testing are the Go testing trifecta
- Multi-stage Dockerfiles shrink images from ~1GB to ~20MB
- Don't communicate by sharing memory; share memory by communicating
- errors.Is / errors.As / fmt.Errorf %w form a complete error wrapping chain
- Interfaces should be defined at the consumer, not the producer — keep them small
- sync.Pool reduces GC pressure for frequently allocated temporary objects
- Go memory model: no visibility guarantee without synchronization primitives
- go test -race should be in every CI pipeline
1. Goroutines and Channels: Deep Dive
Goroutines are the cornerstone of Go's concurrency model. Managed by the Go runtime, they start with only ~2KB of stack space (growing dynamically), making it practical to run millions simultaneously. Channels are typed conduits between goroutines, following the CSP (Communicating Sequential Processes) model.
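As a first taste of how cheaply goroutines and channels compose, here is a minimal worker pool; workerPool and its signature are illustrative sketches, not a library API:

```go
package main

import (
	"fmt"
	"sync"
)

// workerPool fans work out to n goroutines reading from a shared jobs
// channel and collects results on a second channel. Closing jobs is the
// shutdown signal; the WaitGroup lets us close results exactly once,
// after every worker has drained its share.
func workerPool(n int, jobs <-chan int, process func(int) int) <-chan int {
	results := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs { // loop exits when jobs is closed and drained
				results <- process(j)
			}
		}()
	}
	go func() {
		wg.Wait()
		close(results)
	}()
	return results
}

func main() {
	jobs := make(chan int, 5)
	for i := 1; i <= 5; i++ {
		jobs <- i
	}
	close(jobs)
	sum := 0
	for r := range workerPool(3, jobs, func(n int) int { return n * n }) {
		sum += r
	}
	fmt.Println("sum of squares:", sum) // 55
}
```

The pattern scales by changing one number (n); the channels provide both the work queue and the synchronization.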
Unbuffered vs Buffered Channels
package main
import (
"fmt"
"time"
)
func main() {
// Unbuffered channel — sender blocks until receiver is ready
unbuffered := make(chan int)
go func() {
fmt.Println("Sending...")
unbuffered <- 42 // blocks here until main() receives
fmt.Println("Sent!")
}()
time.Sleep(100 * time.Millisecond)
val := <-unbuffered
fmt.Println("Received:", val)
// Buffered channel — sender only blocks when buffer is full
buffered := make(chan string, 3)
buffered <- "a" // does not block
buffered <- "b" // does not block
buffered <- "c" // does not block
// buffered <- "d" // would block — buffer full
fmt.Println(<-buffered) // a
fmt.Println(<-buffered) // b
fmt.Println(<-buffered) // c
// Range over channel (close signals completion)
ch := make(chan int, 5)
for i := 1; i <= 5; i++ {
ch <- i
}
close(ch) // must close to stop range
for v := range ch {
fmt.Print(v, " ") // 1 2 3 4 5
}
fmt.Println()
}
select Statement and Done Patterns
package main
import (
"fmt"
"time"
)
// Fan-in: merge multiple channels into one
func fanIn(ch1, ch2 <-chan string) <-chan string {
out := make(chan string)
go func() {
defer close(out)
for {
select {
case v, ok := <-ch1:
if !ok { ch1 = nil } else { out <- v }
case v, ok := <-ch2:
if !ok { ch2 = nil } else { out <- v }
}
if ch1 == nil && ch2 == nil {
return
}
}
}()
return out
}
// Done channel pattern — signal goroutines to stop
func worker(done <-chan struct{}, id int) {
for {
select {
case <-done:
fmt.Printf("Worker %d stopping\n", id)
return
default:
// do work
fmt.Printf("Worker %d working\n", id)
time.Sleep(200 * time.Millisecond)
}
}
}
func main() {
done := make(chan struct{})
for i := 1; i <= 3; i++ {
go worker(done, i)
}
time.Sleep(500 * time.Millisecond)
close(done) // broadcasts stop to ALL goroutines listening on done
time.Sleep(100 * time.Millisecond)
// Timeout with select
ch := make(chan int)
select {
case v := <-ch:
fmt.Println("Got:", v)
case <-time.After(1 * time.Second):
fmt.Println("Timed out")
}
}
2. Context Package: Cancellation, Deadlines, and Values
context.Context is the standard mechanism for propagating cancellation signals, deadlines, and request-scoped values across API boundaries. By convention it is passed as the first argument, and virtually all I/O operations, database queries, and HTTP requests should accept one.
package main
import (
"context"
"database/sql"
"fmt"
"time"
)
// Always pass context as first argument
func fetchUser(ctx context.Context, db *sql.DB, id int) (string, error) {
var name string
// QueryRowContext respects cancellation/deadline
err := db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = $1", id).Scan(&name)
return name, err
}
func main() {
// WithCancel — manual cancellation
ctx, cancel := context.WithCancel(context.Background())
defer cancel() // always defer cancel to release resources
go func() {
time.Sleep(2 * time.Second)
cancel() // cancel from another goroutine
}()
<-ctx.Done() // blocks until cancel() fires; a one-case select adds nothing
fmt.Println("Cancelled:", ctx.Err()) // context.Canceled
// WithTimeout — auto-cancel after duration
ctxTO, cancelTO := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancelTO()
// Simulate slow operation
resultCh := make(chan string, 1)
go func() {
time.Sleep(1 * time.Second) // slower than timeout
resultCh <- "result"
}()
select {
case r := <-resultCh:
fmt.Println("Got:", r)
case <-ctxTO.Done():
fmt.Println("Timeout:", ctxTO.Err()) // context.DeadlineExceeded
}
// WithDeadline — cancel at absolute time
deadline := time.Now().Add(3 * time.Second)
ctxDL, cancelDL := context.WithDeadline(context.Background(), deadline)
defer cancelDL()
fmt.Println("Deadline set at:", deadline)
fmt.Println("Time remaining:", time.Until(ctxDL.Deadline()))
// WithValue — attach request-scoped data (use typed keys!)
type contextKey string
const userIDKey contextKey = "userID"
ctxVal := context.WithValue(context.Background(), userIDKey, 42)
userID := ctxVal.Value(userIDKey).(int)
fmt.Println("User ID from context:", userID)
}
3. Interface Design and Embedding
Go interfaces are satisfied implicitly — a type need not declare that it implements an interface; method signature matching is sufficient. Best practice is to define small interfaces at the consumer, not large interfaces at the producer. Interface embedding composes multiple interfaces.
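Because satisfaction is implicit, a refactor can silently break an implementation. A zero-cost compile-time assertion guards against this; the sketch below uses a simplified UserStore shape for brevity:

```go
package main

import "fmt"

type UserStore interface {
	GetUser(id int) (string, error)
}

type PostgresUserStore struct{}

func (s *PostgresUserStore) GetUser(id int) (string, error) { return "Alice", nil }

// Compile-time check: the build fails if *PostgresUserStore ever stops
// implementing UserStore. Costs nothing at runtime (the value is discarded).
var _ UserStore = (*PostgresUserStore)(nil)

func main() {
	var s UserStore = &PostgresUserStore{}
	name, _ := s.GetUser(1)
	fmt.Println(name) // Alice
}
```

This `var _ Iface = (*Impl)(nil)` idiom is widely used next to type definitions in production codebases.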
package main
import (
"fmt"
"io"
"strings"
)
// Small, focused interfaces (the Go way)
type Reader interface {
Read(p []byte) (n int, err error)
}
type Writer interface {
Write(p []byte) (n int, err error)
}
// Embedding composes interfaces
type ReadWriter interface {
Reader
Writer
}
// Stringer is satisfied by any type with a String() string method
type Stringer interface {
String() string
}
// Define interfaces at the point of use
type UserStore interface {
GetUser(id int) (User, error)
CreateUser(u User) error
}
type User struct {
ID int
Name string
Email string
}
// Concrete implementation
type PostgresUserStore struct {
// db *sql.DB
}
func (s *PostgresUserStore) GetUser(id int) (User, error) {
// real DB query here
return User{ID: id, Name: "Alice", Email: "alice@example.com"}, nil
}
func (s *PostgresUserStore) CreateUser(u User) error {
// real DB insert here
return nil
}
// Function accepts interface, not concrete type
func printUser(store UserStore, id int) {
u, err := store.GetUser(id)
if err != nil {
fmt.Println("Error:", err)
return
}
fmt.Printf("User: %+v\n", u)
}
// Type assertion and type switch
func describe(i interface{}) {
switch v := i.(type) {
case int:
fmt.Printf("int: %d\n", v)
case string:
fmt.Printf("string: %q\n", v)
case Stringer:
fmt.Printf("Stringer: %s\n", v.String())
default:
fmt.Printf("unknown: %T\n", v)
}
}
func main() {
store := &PostgresUserStore{}
printUser(store, 1)
// *strings.Reader implements io.Reader but has no Write method,
// so it does NOT satisfy io.ReadWriter; use bytes.Buffer for that
var r io.Reader = strings.NewReader("hello")
_ = r
describe(42)
describe("hello")
}
4. Error Wrapping: fmt.Errorf %w, errors.Is, errors.As
Go 1.13 introduced the standard error wrapping mechanism. fmt.Errorf with the %w verb wraps errors while preserving the chain. errors.Is checks if a specific error value exists in the chain; errors.As extracts an error of a specific type from the chain.
package main
import (
"errors"
"fmt"
)
// Sentinel errors — compare with errors.Is
var (
ErrNotFound = errors.New("not found")
ErrPermission = errors.New("permission denied")
ErrDatabase = errors.New("database error")
)
// Custom error type for structured errors
type ValidationError struct {
Field string
Message string
}
func (e *ValidationError) Error() string {
return fmt.Sprintf("validation error: %s — %s", e.Field, e.Message)
}
func getUser(id int) (string, error) {
if id <= 0 {
// Wrap with context using %w
return "", fmt.Errorf("getUser: invalid id %d: %w", id, ErrNotFound)
}
if id == 999 {
dbErr := fmt.Errorf("connection refused")
// multiple %w verbs in a single fmt.Errorf call require Go 1.20+
return "", fmt.Errorf("getUser: DB query failed: %w", fmt.Errorf("%w: %w", ErrDatabase, dbErr))
}
return "Alice", nil
}
func updateUser(id int, name string) error {
if len(name) == 0 {
return &ValidationError{Field: "name", Message: "cannot be empty"}
}
if id <= 0 {
return fmt.Errorf("updateUser: %w", ErrPermission)
}
return nil
}
func main() {
// errors.Is — checks the entire error chain
_, err := getUser(-1)
if errors.Is(err, ErrNotFound) {
fmt.Println("Not found:", err)
}
_, err = getUser(999)
if errors.Is(err, ErrDatabase) {
fmt.Println("Database issue:", err)
}
// errors.As — extract concrete type from chain
err = updateUser(1, "")
var valErr *ValidationError
if errors.As(err, &valErr) {
fmt.Printf("Field: %s, Message: %s\n", valErr.Field, valErr.Message)
}
err = updateUser(-1, "Alice")
if errors.Is(err, ErrPermission) {
fmt.Println("Permission denied:", err)
}
// Unwrap manually
wrapped := fmt.Errorf("outer: %w", fmt.Errorf("inner: %w", ErrNotFound))
fmt.Println("Unwrap chain:", errors.Unwrap(errors.Unwrap(wrapped)))
}
5. Generics in Go 1.18+: Type Parameters and Constraints
Generics allow writing type-safe code that works across multiple types without duplication. Type parameters are declared in square brackets, and constraints are defined using interfaces. The golang.org/x/exp/constraints package provides common constraints like Ordered and Integer.
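Since Go 1.21 the standard library's cmp package exports cmp.Ordered, so hand-rolling the constraint below (or importing golang.org/x/exp/constraints) is often unnecessary. A short sketch using it; Clamp is an illustrative helper, not a stdlib function:

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

// Clamp limits v to the range [lo, hi] for any ordered type.
func Clamp[T cmp.Ordered](v, lo, hi T) T {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	fmt.Println(Clamp(15, 0, 10))      // 10
	fmt.Println(Clamp(-2.5, 0.0, 1.0)) // 0
	fmt.Println(Clamp("m", "a", "f"))  // f

	// The slices package (Go 1.21) is itself built on generics
	s := []int{3, 1, 2}
	slices.Sort(s)
	fmt.Println(s, slices.Contains(s, 2)) // [1 2 3] true
}
```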
package main
import (
"fmt"
)
// Constraint: any type that supports < and >
type Ordered interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 |
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 |
~float32 | ~float64 | ~string
}
// Generic function with type parameter T constrained to Ordered
func Min[T Ordered](a, b T) T {
if a < b {
return a
}
return b
}
func Max[T Ordered](a, b T) T {
if a > b {
return a
}
return b
}
// Map, Filter, Reduce — generic functional utilities
func Map[T, U any](s []T, f func(T) U) []U {
result := make([]U, len(s))
for i, v := range s {
result[i] = f(v)
}
return result
}
func Filter[T any](s []T, pred func(T) bool) []T {
var result []T
for _, v := range s {
if pred(v) {
result = append(result, v)
}
}
return result
}
func Reduce[T, U any](s []T, init U, f func(U, T) U) U {
acc := init
for _, v := range s {
acc = f(acc, v)
}
return acc
}
// Generic Stack data structure
type Stack[T any] struct {
items []T
}
func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }
func (s *Stack[T]) Pop() (T, bool) {
if len(s.items) == 0 {
var zero T
return zero, false
}
v := s.items[len(s.items)-1]
s.items = s.items[:len(s.items)-1]
return v, true
}
func (s *Stack[T]) Len() int { return len(s.items) }
// Generic Set
type Set[T comparable] map[T]struct{}
func NewSet[T comparable](items ...T) Set[T] {
s := make(Set[T])
for _, item := range items {
s[item] = struct{}{}
}
return s
}
func (s Set[T]) Contains(v T) bool { _, ok := s[v]; return ok }
func (s Set[T]) Add(v T) { s[v] = struct{}{} }
func main() {
fmt.Println(Min(3, 5)) // 3
fmt.Println(Min("apple", "banana")) // apple
fmt.Println(Max(3.14, 2.71)) // 3.14
nums := []int{1, 2, 3, 4, 5}
doubled := Map(nums, func(n int) int { return n * 2 })
fmt.Println(doubled) // [2 4 6 8 10]
evens := Filter(nums, func(n int) bool { return n%2 == 0 })
fmt.Println(evens) // [2 4]
sum := Reduce(nums, 0, func(acc, n int) int { return acc + n })
fmt.Println(sum) // 15
var s Stack[string]
s.Push("a")
s.Push("b")
v, _ := s.Pop()
fmt.Println(v, s.Len()) // b 1
set := NewSet("go", "rust", "python")
fmt.Println(set.Contains("go")) // true
fmt.Println(set.Contains("java")) // false
}
6. sync Package: Mutex, RWMutex, WaitGroup, Once, Pool
The sync package provides low-level synchronization primitives. Use Mutex when shared state cannot be elegantly expressed with channels. RWMutex allows multiple concurrent reads but exclusive writes. WaitGroup waits for a set of goroutines to finish. Once ensures initialization runs exactly once. Pool caches objects to reduce GC pressure.
package main
import (
"fmt"
"sync"
"sync/atomic"
)
// Mutex — protect shared state
type SafeCounter struct {
mu sync.Mutex
count int
}
func (c *SafeCounter) Increment() {
c.mu.Lock()
defer c.mu.Unlock()
c.count++
}
func (c *SafeCounter) Value() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.count
}
// RWMutex — multiple concurrent readers, one exclusive writer
type Cache struct {
mu sync.RWMutex
data map[string]string
}
func (c *Cache) Get(key string) (string, bool) {
c.mu.RLock() // multiple goroutines can hold RLock simultaneously
defer c.mu.RUnlock()
v, ok := c.data[key]
return v, ok
}
func (c *Cache) Set(key, val string) {
c.mu.Lock() // exclusive lock — no readers while writing
defer c.mu.Unlock()
c.data[key] = val
}
// WaitGroup — wait for all goroutines to finish
func processItems(items []string) {
var wg sync.WaitGroup
for _, item := range items {
wg.Add(1)
go func(i string) {
defer wg.Done()
fmt.Println("Processing:", i)
}(item)
}
wg.Wait() // blocks until all Done() calls
fmt.Println("All items processed")
}
// Once — singleton initialization
var (
instance *Cache
once sync.Once
)
func GetCache() *Cache {
once.Do(func() {
instance = &Cache{data: make(map[string]string)}
fmt.Println("Cache initialized once")
})
return instance
}
// Pool — reuse objects, reduce allocations
var bufPool = sync.Pool{
New: func() interface{} {
return make([]byte, 0, 1024) // allocate 1KB buffer
},
}
func processRequest(data string) {
buf := bufPool.Get().([]byte)[:0] // reset length, keep capacity
buf = append(buf, data...)
fmt.Printf("Processed %d bytes\n", len(buf))
// Put after use. A defer bufPool.Put(buf[:0]) would evaluate buf before
// a growing append reassigns it, silently returning the old backing array.
bufPool.Put(buf)
}
// Atomic operations — lock-free for simple counters
var atomicCounter int64
func incrementAtomic() {
atomic.AddInt64(&atomicCounter, 1)
}
func main() {
c := &SafeCounter{}
var wg sync.WaitGroup
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
c.Increment()
}()
}
wg.Wait()
fmt.Println("Counter:", c.Value()) // always 1000
cache := GetCache()
cache.Set("user:1", "Alice")
v, _ := cache.Get("user:1")
fmt.Println("Cached:", v)
processItems([]string{"a", "b", "c", "d"})
processRequest("hello world")
for i := 0; i < 100; i++ {
wg.Add(1)
go func() { defer wg.Done(); incrementAtomic() }()
}
wg.Wait()
fmt.Println("Atomic counter:", atomic.LoadInt64(&atomicCounter))
}
7. Go Memory Model and Race Conditions
The Go memory model defines when a goroutine is guaranteed to observe writes made by another goroutine. Without synchronization primitives, concurrent reads and writes to the same variable constitute a data race with undefined behavior. Use go test -race or go build -race to enable the race detector.
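The proverb "share memory by communicating" can be made concrete: confine the variable to a single goroutine that owns it, and let others mutate or read it only through channels. The counter type below is an illustrative sketch, not a standard-library API:

```go
package main

import "fmt"

// counter owns its state inside one goroutine; no mutex, no data race.
type counter struct {
	inc chan struct{}
	get chan int
}

func newCounter() *counter {
	c := &counter{inc: make(chan struct{}), get: make(chan int)}
	go func() {
		n := 0 // confined to this goroutine, never accessed directly elsewhere
		for {
			select {
			case <-c.inc:
				n++
			case c.get <- n: // publishing n is itself a channel operation
			}
		}
	}()
	return c
}

func main() {
	c := newCounter()
	done := make(chan struct{})
	for i := 0; i < 100; i++ {
		go func() {
			c.inc <- struct{}{}
			done <- struct{}{}
		}()
	}
	for i := 0; i < 100; i++ {
		<-done
	}
	fmt.Println("count:", <-c.get) // 100
}
```

The owning goroutine runs for the program's lifetime here; a real implementation would add a done channel to stop it.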
// DATA RACE — DO NOT DO THIS
// var counter int
// for i := 0; i < 1000; i++ {
// go func() { counter++ }() // race: concurrent writes!
// }
// Fix 1: Use a Mutex
// Fix 2: Use atomic operations
// Fix 3: Use a channel
package main
import (
"fmt"
"sync"
"sync/atomic"
)
// Happens-before relationship examples:
// 1. Channel send happens-before the corresponding channel receive
// 2. sync.Mutex.Lock() happens-before the corresponding Unlock()
// 3. sync.WaitGroup.Done() happens-before Wait() returns
// Channel as synchronization (no mutex needed)
func safeWithChannel() {
ch := make(chan int, 1)
var data int
// Writer goroutine
go func() {
data = 42 // write
ch <- 1 // signal: write is complete
}()
<-ch // receive happens-after send, so data=42 is visible
fmt.Println("Data:", data)
}
// Detecting races: build/test with -race flag
// go build -race ./...
// go test -race ./...
// sync/atomic for simple lock-free counters
type AtomicCounter struct {
n int64
}
func (c *AtomicCounter) Inc() { atomic.AddInt64(&c.n, 1) }
func (c *AtomicCounter) Load() int64 { return atomic.LoadInt64(&c.n) }
func (c *AtomicCounter) Store(v int64) { atomic.StoreInt64(&c.n, v) }
func (c *AtomicCounter) CAS(old, new int64) bool {
return atomic.CompareAndSwapInt64(&c.n, old, new)
}
// Double-checked locking with sync.Once (correct pattern)
var (
dbInstance *DatabaseClient
dbOnce sync.Once
)
type DatabaseClient struct{ url string }
func GetDB(url string) *DatabaseClient {
dbOnce.Do(func() {
dbInstance = &DatabaseClient{url: url}
})
return dbInstance
}
func main() {
safeWithChannel()
var c AtomicCounter
var wg sync.WaitGroup
for i := 0; i < 10000; i++ {
wg.Add(1)
go func() { defer wg.Done(); c.Inc() }()
}
wg.Wait()
fmt.Println("Atomic count:", c.Load()) // always 10000
db1 := GetDB("postgres://localhost/mydb")
db2 := GetDB("postgres://other/db") // ignored — Once already ran
fmt.Println("Same instance:", db1 == db2) // true
}
8. Profiling Go Apps: pprof and trace
Go has powerful built-in profiling tools. net/http/pprof exposes CPU, memory, goroutine, and other profiles via HTTP endpoints. runtime/pprof writes profiles programmatically. go tool trace provides fine-grained execution tracing with goroutine scheduling visibility.
package main
import (
"log"
"net/http"
_ "net/http/pprof" // blank import registers /debug/pprof handlers
"os"
"runtime"
"runtime/pprof"
"runtime/trace"
)
func main() {
// --- HTTP pprof server (for long-running services) ---
// Simply import net/http/pprof and start an HTTP server
go func() {
log.Println(http.ListenAndServe("localhost:6060", nil))
}()
// Then capture profiles:
// CPU: go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
// Heap: go tool pprof http://localhost:6060/debug/pprof/heap
// Goroutines: http://localhost:6060/debug/pprof/goroutine?debug=2
// Web UI: go tool pprof -http=:8080 profile.pb.gz
// --- File-based CPU profile (for CLI tools / benchmarks) ---
cpuFile, _ := os.Create("cpu.prof")
defer cpuFile.Close()
pprof.StartCPUProfile(cpuFile)
defer pprof.StopCPUProfile()
// --- Heap profile at program exit ---
defer func() {
heapFile, _ := os.Create("heap.prof")
defer heapFile.Close()
runtime.GC() // force GC before heap snapshot
pprof.WriteHeapProfile(heapFile)
}()
// --- Execution trace ---
traceFile, _ := os.Create("trace.out")
defer traceFile.Close()
trace.Start(traceFile)
defer trace.Stop()
// Analyze: go tool trace trace.out
// --- Memory stats ---
var stats runtime.MemStats
runtime.ReadMemStats(&stats)
log.Printf("Alloc: %v MiB", stats.Alloc/1024/1024)
log.Printf("TotalAlloc: %v MiB", stats.TotalAlloc/1024/1024)
log.Printf("Sys: %v MiB", stats.Sys/1024/1024)
log.Printf("NumGC: %v", stats.NumGC)
// your application logic here...
}
pprof Command Reference
# Capture 30-second CPU profile from running service
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
# Capture heap profile
go tool pprof http://localhost:6060/debug/pprof/heap
# Open interactive web UI with flame graphs
go tool pprof -http=:8080 cpu.prof
# Analyze profile in CLI
(pprof) top10 # top 10 functions by CPU time
(pprof) top10 -cum # top 10 by cumulative time
(pprof) list main. # show line-by-line for main package
(pprof) web # open SVG in browser
(pprof) pdf # export to PDF
# Run benchmarks with profiling
go test -bench=BenchmarkMyFunc -cpuprofile=cpu.prof -memprofile=mem.prof
go tool pprof -http=:8080 cpu.prof
# Execution trace
go test -trace=trace.out
go tool trace trace.out
# Check for goroutine leaks
curl http://localhost:6060/debug/pprof/goroutine?debug=2
9. Go Testing Patterns: Table-Driven Tests, Benchmarks, Fuzz Testing
Go's testing package natively supports unit tests, benchmarks, and (1.18+) fuzz tests. Table-driven testing is idiomatic Go: define a slice of test cases and run subtests with t.Run. Benchmarks measure performance with testing.B. Fuzz testing auto-generates inputs to discover edge cases.
package calculator_test
import (
"testing"
"errors"
)
func Divide(a, b float64) (float64, error) {
if b == 0 {
return 0, errors.New("division by zero")
}
return a / b, nil
}
// Table-driven tests
func TestDivide(t *testing.T) {
t.Parallel() // run in parallel with other tests
tests := []struct {
name string
a, b float64
want float64
wantErr bool
}{
{name: "basic division", a: 10, b: 2, want: 5},
{name: "negative numbers", a: -6, b: 3, want: -2},
{name: "float result", a: 1, b: 3, want: 0.3333333333333333},
{name: "divide by zero", a: 5, b: 0, wantErr: true},
{name: "zero numerator", a: 0, b: 5, want: 0},
}
for _, tt := range tests {
tt := tt // capture range variable for t.Parallel()
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
got, err := Divide(tt.a, tt.b)
if (err != nil) != tt.wantErr {
t.Errorf("Divide(%v, %v) error = %v, wantErr %v", tt.a, tt.b, err, tt.wantErr)
return
}
if !tt.wantErr && got != tt.want {
t.Errorf("Divide(%v, %v) = %v, want %v", tt.a, tt.b, got, tt.want)
}
})
}
}
// Benchmark
func BenchmarkDivide(b *testing.B) {
b.ReportAllocs() // report memory allocations
for i := 0; i < b.N; i++ {
_, err := Divide(1000, 7)
if err != nil {
b.Fatal(err)
}
}
}
// Fuzz test (Go 1.18+)
func FuzzDivide(f *testing.F) {
// Seed corpus
f.Add(10.0, 2.0)
f.Add(-5.0, 3.0)
f.Add(0.0, 1.0)
f.Fuzz(func(t *testing.T, a, b float64) {
// Invariant: if b != 0, result * b should approximate a
result, err := Divide(a, b)
if b == 0 {
if err == nil {
t.Errorf("expected error when dividing by zero")
}
return
}
if err != nil {
t.Errorf("unexpected error: %v", err)
}
_ = result
})
}
Test Helpers and Mocking
package main
import (
"encoding/json"
"errors"
"net/http"
"net/http/httptest"
"os"
"testing"
)
// Mock interface for testing
type MockUserStore struct {
users map[int]string
}
func (m *MockUserStore) GetUser(id int) (string, error) {
if name, ok := m.users[id]; ok {
return name, nil
}
return "", errors.New("not found")
}
// httptest — test HTTP handlers without starting a server
func TestUserHandler(t *testing.T) {
handler := func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{"name": "Alice"})
}
req := httptest.NewRequest(http.MethodGet, "/users/1", nil)
rec := httptest.NewRecorder()
handler(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("expected 200, got %d", rec.Code)
}
var body map[string]string
json.NewDecoder(rec.Body).Decode(&body)
if body["name"] != "Alice" {
t.Errorf("expected Alice, got %s", body["name"])
}
}
// TestMain for setup/teardown
func TestMain(m *testing.M) {
// Setup: start DB, seed data, etc.
setupTestDB()
// Run tests
exitCode := m.Run()
// Teardown
teardownTestDB()
os.Exit(exitCode)
}
// Stub helpers — wire these to your real test infrastructure
func setupTestDB() {}
func teardownTestDB() {}
10. Building REST APIs with Gin, Chi, and Echo
The Go ecosystem has several excellent HTTP frameworks. Gin is the most popular high-performance framework with built-in routing, middleware, JSON binding, and validation. Chi is a lightweight router compatible with standard net/http. Echo offers a Gin-like API with excellent performance. The standard net/http is sufficient for simple use cases.
// go get github.com/gin-gonic/gin
package main
import (
"net/http"
"strconv"
"github.com/gin-gonic/gin"
)
type User struct {
ID int `json:"id"`
Name string `json:"name" binding:"required,min=2,max=50"`
Email string `json:"email" binding:"required,email"`
}
type UserService struct {
users map[int]User
nextID int
}
func NewUserService() *UserService {
return &UserService{users: make(map[int]User), nextID: 1}
}
func main() {
r := gin.Default() // includes Logger and Recovery middleware
svc := NewUserService()
// Middleware
r.Use(func(c *gin.Context) {
c.Header("X-Request-ID", "req-123")
c.Next()
})
v1 := r.Group("/api/v1")
{
v1.GET("/users", func(c *gin.Context) {
// Query params
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
limit, _ := strconv.Atoi(c.DefaultQuery("limit", "10"))
_ = page
_ = limit
var users []User
for _, u := range svc.users {
users = append(users, u)
}
c.JSON(http.StatusOK, gin.H{"data": users, "total": len(users)})
})
v1.GET("/users/:id", func(c *gin.Context) {
id, err := strconv.Atoi(c.Param("id"))
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid id"})
return
}
user, ok := svc.users[id]
if !ok {
c.JSON(http.StatusNotFound, gin.H{"error": "user not found"})
return
}
c.JSON(http.StatusOK, user)
})
v1.POST("/users", func(c *gin.Context) {
var u User
// ShouldBindJSON validates based on struct tags
if err := c.ShouldBindJSON(&u); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
u.ID = svc.nextID
svc.nextID++
svc.users[u.ID] = u
c.JSON(http.StatusCreated, u)
})
v1.DELETE("/users/:id", func(c *gin.Context) {
id, _ := strconv.Atoi(c.Param("id"))
delete(svc.users, id)
c.Status(http.StatusNoContent)
})
}
r.Run(":8080") // listen on :8080
}
Chi Router Example (net/http Compatible)
// go get github.com/go-chi/chi/v5
package main
import (
"encoding/json"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
)
func main() {
r := chi.NewRouter()
// Standard middleware
r.Use(middleware.RequestID)
r.Use(middleware.RealIP)
r.Use(middleware.Logger)
r.Use(middleware.Recoverer)
r.Use(middleware.Compress(5))
r.Route("/api/v1", func(r chi.Router) {
r.Route("/users", func(r chi.Router) {
r.Get("/", listUsers)
r.Post("/", createUser)
// URL parameters with chi.URLParam
r.Route("/{id}", func(r chi.Router) {
r.Get("/", getUser)
r.Put("/", updateUser)
r.Delete("/", deleteUser)
})
})
})
http.ListenAndServe(":8080", r)
}
func listUsers(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode([]map[string]string{{"id": "1", "name": "Alice"}})
}
func createUser(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(map[string]string{"status": "created"})
}
func getUser(w http.ResponseWriter, r *http.Request) {
id := chi.URLParam(r, "id")
json.NewEncoder(w).Encode(map[string]string{"id": id})
}
func updateUser(w http.ResponseWriter, r *http.Request) {
json.NewEncoder(w).Encode(map[string]string{"status": "updated"})
}
func deleteUser(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNoContent)
}
11. Database: GORM vs sqlx vs pgx
Each library has a distinct focus: GORM is a full-featured ORM ideal for rapid development; sqlx is a thin wrapper over database/sql preserving SQL control; pgx is a native PostgreSQL driver delivering the best performance with full PG type support.
GORM — Full-Featured ORM
// go get gorm.io/gorm gorm.io/driver/postgres
package main
import (
"gorm.io/driver/postgres"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"time"
)
type User struct {
gorm.Model // embeds ID, CreatedAt, UpdatedAt, DeletedAt
Name string `gorm:"not null;size:100"`
Email string `gorm:"uniqueIndex;not null"`
Age int
Posts []Post `gorm:"foreignKey:UserID"`
}
type Post struct {
gorm.Model
Title string `gorm:"not null"`
Content string `gorm:"type:text"`
UserID uint
}
func main() {
dsn := "host=localhost user=postgres password=secret dbname=myapp port=5432"
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
Logger: logger.Default.LogMode(logger.Info),
})
if err != nil {
panic("failed to connect to database")
}
// Auto-migrate
db.AutoMigrate(&User{}, &Post{})
// Create
user := User{Name: "Alice", Email: "alice@example.com", Age: 30}
result := db.Create(&user)
if result.Error != nil {
panic(result.Error)
}
// Read with associations
var u User
db.Preload("Posts").First(&u, user.ID)
// Update
db.Model(&u).Updates(User{Name: "Alice Smith", Age: 31})
// Or update specific field
db.Model(&u).Update("email", "alice.smith@example.com")
// Delete (soft delete with gorm.Model)
db.Delete(&u)
// Hard delete
db.Unscoped().Delete(&u)
// Find with conditions
var users []User
db.Where("age > ?", 25).
Order("name ASC").
Limit(10).
Offset(0).
Find(&users)
// Raw SQL
db.Raw("SELECT id, name FROM users WHERE email = ?", "alice@example.com").Scan(&u)
// Transaction
db.Transaction(func(tx *gorm.DB) error {
if err := tx.Create(&User{Name: "Bob", Email: "bob@example.com"}).Error; err != nil {
return err // rollback
}
return nil // commit
})
// Connection pool
sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(10)
sqlDB.SetMaxOpenConns(100)
sqlDB.SetConnMaxLifetime(time.Hour)
}
sqlx — Thin Wrapper, Full SQL Control
// go get github.com/jmoiron/sqlx
package main
import (
"context"
"github.com/jmoiron/sqlx"
_ "github.com/lib/pq" // PostgreSQL driver
)
type User struct {
ID int `db:"id"`
Name string `db:"name"`
Email string `db:"email"`
}
func main() {
db, err := sqlx.Connect("postgres",
"host=localhost user=postgres password=secret dbname=myapp sslmode=disable")
if err != nil {
panic(err)
}
defer db.Close()
ctx := context.Background()
// Get single row into struct
var u User
err = db.GetContext(ctx, &u, "SELECT * FROM users WHERE id = $1", 1)
// Select multiple rows into slice
var users []User
err = db.SelectContext(ctx, &users, "SELECT * FROM users WHERE age > $1 ORDER BY name", 25)
// Named queries (cleaner for inserts/updates)
_, err = db.NamedExecContext(ctx,
"INSERT INTO users (name, email) VALUES (:name, :email)",
&User{Name: "Alice", Email: "alice@example.com"})
// NamedQuery for SELECT
rows, err := db.NamedQueryContext(ctx,
"SELECT * FROM users WHERE name = :name",
map[string]interface{}{"name": "Alice"})
if err == nil {
defer rows.Close()
for rows.Next() {
var user User
rows.StructScan(&user)
}
}
// Transaction
tx, err := db.BeginTxx(ctx, nil)
if err != nil {
panic(err)
}
defer tx.Rollback() // no-op if committed
_, err = tx.ExecContext(ctx, "UPDATE users SET name = $1 WHERE id = $2", "Alice Updated", 1)
if err != nil {
return
}
tx.Commit()
_ = u
_ = users
_ = err
}
pgx — Native PostgreSQL Driver, Best Performance
// go get github.com/jackc/pgx/v5
package main
import (
"context"
"fmt"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgxpool"
)
type User struct {
ID int32
Name string
Email string
}
func main() {
ctx := context.Background()
// Connection pool (recommended for production)
pool, err := pgxpool.New(ctx, "postgres://postgres:secret@localhost/myapp")
if err != nil {
panic(err)
}
defer pool.Close()
// Configure pool
config, _ := pgxpool.ParseConfig("postgres://postgres:secret@localhost/myapp")
config.MaxConns = 20
config.MinConns = 5
pool2, _ := pgxpool.NewWithConfig(ctx, config)
defer pool2.Close()
// Query single row
var u User
err = pool.QueryRow(ctx,
"SELECT id, name, email FROM users WHERE id = $1", 1).
Scan(&u.ID, &u.Name, &u.Email)
if err != nil {
if err == pgx.ErrNoRows { // prefer errors.Is(err, pgx.ErrNoRows) in production code
fmt.Println("No user found")
}
}
// Query multiple rows
rows, err := pool.Query(ctx, "SELECT id, name, email FROM users ORDER BY id")
defer rows.Close()
for rows.Next() {
var user User
rows.Scan(&user.ID, &user.Name, &user.Email)
fmt.Printf("%+v\n", user)
}
// Exec (insert, update, delete)
tag, err := pool.Exec(ctx,
"INSERT INTO users (name, email) VALUES ($1, $2)",
"Alice", "alice@example.com")
fmt.Println("Rows affected:", tag.RowsAffected())
// Batch queries (send multiple queries in one roundtrip)
b := &pgx.Batch{}
b.Queue("SELECT 1")
b.Queue("SELECT 2")
results := pool.SendBatch(ctx, b)
defer results.Close()
// Transaction
tx, _ := pool.Begin(ctx)
defer tx.Rollback(ctx)
tx.Exec(ctx, "UPDATE users SET name = $1 WHERE id = $2", "Alice Smith", 1)
tx.Commit(ctx)
_ = err
_ = u
}
GORM vs sqlx vs pgx Comparison
| Feature | GORM | sqlx | pgx |
|---|---|---|---|
| Abstraction | High (ORM) | Low (SQL) | Lowest (native driver) |
| Performance | Medium | Good | Best |
| SQL Control | Partial | Full | Full |
| Migrations | Built-in AutoMigrate | External tool | External tool |
| PG-specific types | Limited | Limited | Full support |
| Connection pool | Via database/sql | Via database/sql | Native pgxpool |
| Best for | CRUD-heavy admin tools | Apps needing SQL control | High-perf production apps |
12. Dockerfile for Go: Multi-Stage Builds
Multi-stage builds are the Go containerization best practice. The build stage uses the full Go image (~1GB) to compile the binary; the final stage copies only the binary into a minimal scratch or distroless image, shrinking the final image to ~10-20MB.
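Layer caching also depends on what enters the build context: `COPY . .` invalidates the cache whenever any copied file changes. A .dockerignore keeps irrelevant files out; this is a typical sketch, adjust the entries to your repository:

```
# .dockerignore — keep the build context small and cache-friendly
.git
.gitignore
*.md
bin/
dist/
coverage.out
docker-compose.yml
```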
# syntax=docker/dockerfile:1
# ---- Build Stage ----
FROM golang:1.22-alpine AS builder
# Install build dependencies
RUN apk add --no-cache git ca-certificates tzdata
WORKDIR /build
# Cache dependencies separately (layer caching optimization)
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build the binary
# CGO_ENABLED=0 produces a static binary (no glibc dependency)
# -ldflags trims debug info for smaller binary
# -trimpath removes file system paths from binary
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
-ldflags="-w -s -X main.version=1.0.0 -X main.buildTime=$(date -u +%Y%m%dT%H%M%S)" \
-trimpath \
-o /app/server \
./cmd/server
# ---- Runtime Stage (scratch = minimal, ~0MB base) ----
FROM scratch
# Copy timezone data and CA certificates for HTTPS
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the binary
COPY --from=builder /app/server /server
# Non-root user (scratch: use numeric UID)
USER 65534:65534
EXPOSE 8080
ENTRYPOINT ["/server"]
---
# Alternative: use distroless for debugging capability
FROM gcr.io/distroless/static-debian12:nonroot AS runtime
COPY --from=builder /app/server /server
USER nonroot:nonroot
EXPOSE 8080
ENTRYPOINT ["/server"]
docker-compose.yml for Local Development
version: "3.9"
services:
api:
build:
context: .
dockerfile: Dockerfile
target: builder # stop at builder stage for dev
command: go run ./cmd/server
ports:
- "8080:8080"
- "6060:6060" # pprof
environment:
- DATABASE_URL=postgres://postgres:secret@db:5432/myapp
- REDIS_URL=redis://cache:6379
- LOG_LEVEL=debug
volumes:
- .:/build:cached # live reload with air
depends_on:
db:
condition: service_healthy
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- pg_data:/var/lib/postgresql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
cache:
image: redis:7-alpine
ports:
- "6379:6379"
migrate:
image: migrate/migrate
volumes:
- ./migrations:/migrations
command:
[
"-path", "/migrations",
"-database", "postgres://postgres:secret@db:5432/myapp?sslmode=disable",
"up"
]
depends_on:
db:
condition: service_healthy
volumes:
pg_data:
13. Go vs Rust vs Node.js: Backend Comparison
Each language excels in different scenarios. Go achieves the best balance between development speed and runtime performance. Rust delivers maximum performance and memory safety with a steep learning curve. Node.js has the largest ecosystem and is ideal for I/O-heavy and real-time applications.
| Dimension | Go | Rust | Node.js |
|---|---|---|---|
| Runtime Performance | Excellent (compiled, GC) | Best (no GC, zero-cost) | Good (V8 JIT) |
| Memory Usage | Low (GC managed) | Lowest (manual) | Medium (V8 heap) |
| Concurrency Model | Goroutines + Channels (CSP) | async/await + Tokio | Event loop (single-threaded) |
| Learning Curve | Gentle (simple syntax) | Steep (ownership/borrow) | Low (JS foundation) |
| Compile Speed | Very fast (seconds) | Slow (minutes for large) | None (interpreted) |
| Ecosystem | Medium, backend-focused | Growing, crates.io | Huge, npm 2M+ packages |
| Deployment | Single static binary | Single static binary | Requires Node.js runtime |
| Best For | Microservices, APIs, DevOps | Systems, WebAssembly | Real-time, full-stack JS |
# Performance benchmark reference (approx, varies by workload):
# HTTP requests/sec (hello world):
# Rust (axum/hyper): ~400,000 req/s
# Go (fasthttp): ~350,000 req/s
# Go (net/http): ~150,000 req/s
# Node.js (uWebSockets.js): ~280,000 req/s
# Node.js (Fastify): ~60,000 req/s
# Memory footprint (idle REST API):
# Rust: ~3 MB
# Go: ~15 MB
# Node.js: ~60 MB
# Startup time:
# Rust: ~5ms
# Go: ~10ms
# Node.js: ~200ms (without bundling)
# Choose Go when:
# - Team productivity > absolute performance
# - Building microservices or Kubernetes operators
# - Need fast compilation + easy deployment
# - Writing CLIs, gRPC services, data pipelines
# Choose Rust when:
# - Zero-copy, zero-allocation is required
# - Building OS kernels, embedded systems, WebAssembly
# - Memory safety without GC pauses is critical
# Choose Node.js when:
# - Team is JavaScript-native
# - Building real-time apps (Socket.IO, SSE)
# - Sharing code between frontend and backend
# - Rapid prototyping with the npm ecosystem
Advanced Go Commands Quick Reference
| Command | Purpose |
|---|---|
| go test -race ./... | Run tests with race detector |
| go test -bench=. -benchmem | Benchmarks with memory stats |
| go test -fuzz=FuzzXxx | Run fuzz test |
| go test -coverprofile=c.out && go tool cover -html=c.out | Generate HTML coverage report |
| go tool pprof -http=:8080 cpu.prof | pprof web UI with flame graph |
| go build -gcflags="-m" ./... | Print escape analysis |
| go vet ./... | Static analysis for common mistakes |
| golangci-lint run | Run 50+ linters (install separately) |
| go mod graph \| dot -Tpng > deps.png | Visualize dependency graph |
| go generate ./... | Run //go:generate directives |
Conclusion
Go's advanced features — goroutine/channel concurrency, context cancellation propagation, generics, sync primitives, and pprof profiling — together form a powerful production-ready toolbox. Understanding the memory model and race conditions is foundational for correct concurrent code. Table-driven tests and benchmarks ensure correctness and performance. Multi-stage Dockerfiles make deployment lightweight and efficient.
Whether building high-throughput microservices, Kubernetes operators, CLI tools, or data pipelines, Go provides everything needed with minimal runtime overhead, excellent readability, and an outstanding toolchain. Master these advanced patterns, and you will be confident building scalable, maintainable Go systems.
Frequently Asked Questions
What is the difference between buffered and unbuffered channels in Go?
Unbuffered channels (make(chan T)) block the sender until a receiver is ready, and block the receiver until a sender sends — they provide synchronization. Buffered channels (make(chan T, n)) allow up to n values to be sent without a receiver being ready. Use unbuffered channels for synchronization and rendezvous, buffered for decoupling producer/consumer speeds.
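A minimal runnable sketch of both behaviors (the produce helper and its capacity of 3 are illustrative, not from the guide):

```go
package main

import "fmt"

// produce shows a buffered channel decoupling the producer: it can
// enqueue up to cap(ch) values before any receiver exists.
func produce(n int) <-chan int {
	ch := make(chan int, n) // buffered: sends below don't block
	for i := 1; i <= n; i++ {
		ch <- i // with an unbuffered channel this would deadlock (no receiver yet)
	}
	close(ch)
	return ch
}

func main() {
	// Buffered: the producer has already finished before we receive.
	for v := range produce(3) {
		fmt.Println(v)
	}

	// Unbuffered: send and receive must rendezvous, so the sender
	// needs its own goroutine.
	sync := make(chan string)
	go func() { sync <- "hello" }() // blocks until main receives
	fmt.Println(<-sync)
}
```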
How does the Go context package work for cancellation and deadlines?
The context package propagates cancellation signals, deadlines, and request-scoped values down a call chain. context.WithCancel returns a cancel function you must call to release resources. context.WithDeadline and context.WithTimeout automatically cancel at a specific time. Always pass context.Context as the first argument and check ctx.Err() or ctx.Done() in long-running operations.
How do Go generics work and when should I use them?
Go generics (Go 1.18+) use type parameters with constraints defined by interfaces. Syntax: func Map[T, U any](s []T, f func(T) U) []U. Use generics for type-safe data structures (stacks, queues, sets), utility functions (Map, Filter, Reduce), and APIs that need to work across multiple types without code duplication. Avoid generics for simple cases where an interface suffices.
What is the Go memory model and how do I avoid race conditions?
The Go memory model defines when one goroutine is guaranteed to observe writes made by another. Without synchronization (channels, sync.Mutex, sync/atomic), writes may not be visible across goroutines. Use the -race flag (go test -race) to detect data races. Prefer channels for communication and sync.Mutex for protecting shared state.
How do I profile a Go application with pprof?
Import net/http/pprof in your main package to expose /debug/pprof endpoints. Capture a CPU profile with: go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30. For heap profiles use /debug/pprof/heap. Analyze interactively with the pprof CLI or go tool pprof -http=:8080 for a web UI with flame graphs.
What is the difference between GORM, sqlx, and pgx for database access in Go?
GORM is a full-featured ORM with associations, hooks, migrations, and auto-join — great for rapid development but adds abstraction overhead. sqlx is a thin extension over database/sql providing struct scanning and named queries while keeping raw SQL control. pgx is a native PostgreSQL driver with superior performance, full PostgreSQL type support, and connection pooling via pgxpool. Choose sqlx or pgx for performance-critical apps and GORM for CRUD-heavy admin tools.
How do I write table-driven tests in Go?
Table-driven tests define a slice of test cases (structs with input, expected output, and description), then loop over them calling t.Run for subtests. This pattern reduces repetition, makes adding new cases trivial, and provides clear failure messages. Use testify/assert for readable assertions, and run with go test -v to see each subtest name.
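A compact sketch of the pattern using only the standard testing package (Abs is a hypothetical function under test):

```go
package main

import "testing"

// Abs is the function under test.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func TestAbs(t *testing.T) {
	tests := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 5, 5},
		{"negative", -5, 5},
		{"zero", 0, 0},
	}
	for _, tt := range tests {
		tt := tt // capture range variable (needed before Go 1.22)
		t.Run(tt.name, func(t *testing.T) {
			if got := Abs(tt.in); got != tt.want {
				t.Errorf("Abs(%d) = %d, want %d", tt.in, got, tt.want)
			}
		})
	}
}
```

Each case gets its own subtest name in go test -v output, and adding a case is a one-line change.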
How does Go compare to Rust and Node.js for backend development?
Go offers the best balance: faster than Node.js with true parallelism, simpler than Rust with a garbage collector, and excellent tooling. Rust delivers maximum performance and memory safety without GC but has a steep learning curve. Node.js has the largest ecosystem and is ideal for I/O-heavy real-time apps. Choose Go for microservices, APIs, and DevOps tooling; Rust for systems programming and WebAssembly; Node.js for teams with heavy JavaScript expertise.