Beyond State: A Practical Introduction to Event Sourcing in Go


For decades, the dominant approach to persisting data has been state-oriented. We create, read, update, and delete (CRUD) records in a database. When a user's details change, we run an UPDATE statement, overwriting the old data. This approach is simple and well-understood, but it has a fundamental drawback: it erases history. We know the current state of our data, but we lose the rich context and intent behind how it got there. Why was this order's status changed to "returned"? When was this product's price last adjusted? Answering these questions with a state-oriented model is often difficult or impossible.

Event Sourcing offers a powerful alternative. Instead of storing just the current state, we store the full sequence of immutable events that have ever occurred. The state of our application becomes a derivative of this event history. Think of it like a bank account: a traditional database might only store your current balance ($150), but an event-sourced system stores the entire transaction ledger (+$200 deposit, -$50 withdrawal). You can always calculate the current balance from the ledger, but you can never reconstruct the ledger from just the balance.
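
To make the analogy concrete, here is a minimal Go sketch (the event shape and amounts are invented for illustration) that derives a balance by folding over a ledger of transaction events:

package main
 
import "fmt"
 
// LedgerEvent is an immutable fact about the account: a positive amount
// is a deposit, a negative amount is a withdrawal. Amounts are in cents
// to avoid floating-point rounding.
type LedgerEvent struct {
    Description string
    Amount      int
}
 
// Balance derives the current state by replaying the ledger in order.
func Balance(ledger []LedgerEvent) int {
    total := 0
    for _, e := range ledger {
        total += e.Amount
    }
    return total
}
 
func main() {
    ledger := []LedgerEvent{
        {Description: "deposit", Amount: 20000},
        {Description: "withdrawal", Amount: -5000},
    }
    fmt.Println(Balance(ledger)) // 15000 cents, i.e. $150
}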

This article explores the principles of Event Sourcing, its significant advantages for complex domains, and how to implement the core concepts in Go.


What is Event Sourcing? 📖

Event Sourcing is an architectural pattern where the single source of truth is a log of domain events. An "event" is a record of something that happened in the business domain. The state of an entity, called an Aggregate, is derived by replaying these events in order.

Core Components:

  • Events: These are immutable facts about the past, named in the past tense. For example, OrderPlaced, ItemAddedToCart, or PaymentProcessed. They contain all the necessary data to understand what happened.
  • Commands: A command is a request to perform an action. For example, PlaceOrder or AddItemToCart. A command is processed by an Aggregate and may result in one or more events being generated, or it may be rejected.
  • Aggregate: An Aggregate is a consistency boundary that processes commands and produces events. It's a business object (like an Order or ShoppingCart) that encapsulates state and enforces business rules. Its state is never persisted directly; it's always rebuilt from its history of events.
  • Event Store: This is the database that stores the events. It's an append-only log, optimized for writing new events and reading the full stream of events for a specific aggregate.

The typical flow is: Client -> Command -> Aggregate -> Event(s) -> Event Store


The Benefits of Thinking in Events 📈

Adopting Event Sourcing is a paradigm shift, but it unlocks several powerful capabilities that are difficult to achieve with traditional CRUD systems.

1. Complete Audit Trail

Since every state change is recorded as an immutable event, you have a perfect, unchangeable log of everything that has ever happened in the system. This is invaluable for debugging ("What series of actions led to this bug?"), auditing ("Show me all changes related to this user"), and business intelligence ("What is the most common time for cart abandonment?").

2. Temporal Queries & Time Travel

Because the state is a derivative of the event log, you can reconstruct the state of any aggregate at any point in time. You simply stop replaying events at a specific timestamp or event number. This "time travel" ability is incredibly powerful for historical analysis and understanding how your data has evolved.

3. Decoupling and Projections (CQRS)

This is arguably the most significant benefit. The event stream is the source of truth for writes, but it's not always the most efficient format for reads. With Event Sourcing, you can create multiple, highly-optimized read models (or Projections) by listening to the stream of events. For example, a single OrderPlaced event could be used to:

  • Update a relational database table that backs the customer's order history page.
  • Update a search index like Elasticsearch to make the order searchable.
  • Feed data into a reporting dashboard.

This separation of read and write models is the core idea behind Command Query Responsibility Segregation (CQRS), a pattern that pairs naturally with Event Sourcing.
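
As a sketch of what a projection can look like in Go (the OrderPlaced event, the read-model fields, and the in-memory map are all hypothetical; a real projection would typically write to a database or search index):

package main
 
import "fmt"
 
// OrderPlaced is a hypothetical domain event emitted by the write side.
type OrderPlaced struct {
    OrderID    string
    CustomerID string
    Total      float64
}
 
// OrderHistoryRow is a denormalized read model backing an order-history page.
type OrderHistoryRow struct {
    OrderID string
    Total   float64
}
 
// OrderHistoryProjection consumes events and keeps its read model up to date.
// Other projections (a search index, a reporting dashboard) would consume the
// same events independently.
type OrderHistoryProjection struct {
    byCustomer map[string][]OrderHistoryRow
}
 
func NewOrderHistoryProjection() *OrderHistoryProjection {
    return &OrderHistoryProjection{byCustomer: make(map[string][]OrderHistoryRow)}
}
 
// HandleEvent updates the read model for events it cares about and ignores the rest.
func (p *OrderHistoryProjection) HandleEvent(event any) {
    switch e := event.(type) {
    case OrderPlaced:
        p.byCustomer[e.CustomerID] = append(p.byCustomer[e.CustomerID],
            OrderHistoryRow{OrderID: e.OrderID, Total: e.Total})
    }
}
 
func main() {
    projection := NewOrderHistoryProjection()
    projection.HandleEvent(OrderPlaced{OrderID: "o-1", CustomerID: "c-1", Total: 42.50})
    fmt.Println(projection.byCustomer["c-1"]) // [{o-1 42.5}]
}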


A Practical Example in Go

Let's model a simple shopping cart to see how these concepts translate into Go code. We won't build a full event store; instead, we'll focus on the aggregate and event logic.

1. Define the Events

First, we define our events as structs. We'll use an interface to handle them polymorphically.

package main
 
// Event is the interface for all domain events.
type Event interface {
    isEvent() // Marker method
}
 
// ItemAdded represents the event when an item is added to the cart.
type ItemAdded struct {
    ItemID   string
    Price    float64
    Quantity int
}
 
func (e ItemAdded) isEvent() {}
 
// CartCheckedOut represents the event when the cart is finalized.
type CartCheckedOut struct {
    CheckoutID string
}
 
func (e CartCheckedOut) isEvent() {}

2. Define the Aggregate

The ShoppingCart aggregate holds the state, which is built by applying events. Notice that, apart from its ID, it doesn't export its fields; state changes are controlled internally.

package main
 
import "fmt"
 
// ShoppingCart is our aggregate.
type ShoppingCart struct {
    ID         string
    items      map[string]int
    totalCost  float64
    checkedOut bool
}
 
// NewShoppingCart creates an empty cart.
func NewShoppingCart(id string) *ShoppingCart {
    return &ShoppingCart{
        ID:    id,
        items: make(map[string]int),
    }
}
 
// AddItem handles the "add item" command and returns the resulting event.
// In a real app, the aggregate would expose one such method per command (AddItem, Checkout, etc.).
func (c *ShoppingCart) AddItem(itemID string, price float64, quantity int) (Event, error) {
    if c.checkedOut {
        return nil, fmt.Errorf("cannot add items to a checked-out cart")
    }
    if quantity <= 0 {
        return nil, fmt.Errorf("quantity must be positive")
    }
 
    // Command is valid, produce an event
    return ItemAdded{ItemID: itemID, Price: price, Quantity: quantity}, nil
}
 
// Apply modifies the aggregate's state based on an event.
// THIS IS THE ONLY PLACE STATE SHOULD BE MUTATED.
func (c *ShoppingCart) Apply(event Event) {
    switch e := event.(type) {
    case ItemAdded:
        c.items[e.ItemID] += e.Quantity
        c.totalCost += e.Price * float64(e.Quantity)
    case CartCheckedOut:
        c.checkedOut = true
    }
}
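
The split between the two methods matters: AddItem only validates and decides, while Apply only mutates. Here is a small usage sketch of the two together (the item ID, price, and quantity are made up; persisting the event is left out):

package main
 
// addOneItem shows the command/apply split in use: the command method
// produces an event, Apply folds it into the in-memory state, and the
// event would then be appended to the event store (not shown).
func addOneItem(cart *ShoppingCart) error {
    event, err := cart.AddItem("sku-42", 19.99, 2)
    if err != nil {
        // The command was rejected: no state change, nothing is stored.
        return err
    }
    cart.Apply(event)
    return nil
}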

3. Rehydrating the Aggregate

To get the current state of a cart, we don't fetch it from a database row. We fetch its event history and replay it, event by event. This process is called rehydration.

package main
 
// RehydrateShoppingCart rebuilds the state of a cart from its event history.
func RehydrateShoppingCart(id string, events []Event) *ShoppingCart {
    cart := NewShoppingCart(id)
    for _, event := range events {
        cart.Apply(event)
    }
    return cart
}
 
// --- Conceptual Event Store ---
// type EventStore interface {
//     GetEvents(aggregateID string) []Event
//     SaveEvents(aggregateID string, events []Event) error
// }
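
The same replay loop is what enables the temporal queries mentioned earlier: to see the cart as it was at some earlier point, simply stop applying events at a chosen position in the stream. A minimal sketch (counting events rather than timestamps, purely for simplicity):

package main
 
// RehydrateShoppingCartAt rebuilds the cart as it was after the first
// upTo events, giving a point-in-time view of the aggregate.
func RehydrateShoppingCartAt(id string, events []Event, upTo int) *ShoppingCart {
    if upTo > len(events) {
        upTo = len(events)
    }
    cart := NewShoppingCart(id)
    for _, event := range events[:upTo] {
        cart.Apply(event)
    }
    return cart
}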

In a real application, a command handler would first rehydrate the aggregate, then call a command method such as AddItem, and, if an event is produced, append that event to the EventStore.
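
As a sketch of that flow, assuming the conceptual EventStore interface above were defined for real (error handling and concurrency control, such as optimistic locking on the stream version, are left out):

package main
 
// AddItemHandler wires the pieces together: load the history, rebuild the
// aggregate, execute the command, and append the resulting event.
type AddItemHandler struct {
    store EventStore
}
 
func (h AddItemHandler) Handle(cartID, itemID string, price float64, quantity int) error {
    // 1. Rehydrate the aggregate from its event history.
    cart := RehydrateShoppingCart(cartID, h.store.GetEvents(cartID))
 
    // 2. Execute the command; the aggregate enforces its business rules.
    event, err := cart.AddItem(itemID, price, quantity)
    if err != nil {
        return err
    }
 
    // 3. Persist the new event; projections can then pick it up.
    return h.store.SaveEvents(cartID, []Event{event})
}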


Challenges and Considerations 🤔

Event Sourcing is not a silver bullet. It introduces its own set of challenges:

  • Learning Curve: It requires a shift in mindset away from the familiar CRUD model.
  • Eventual Consistency: Because read models (projections) are often updated asynchronously, the system becomes eventually consistent, which can impact user experience if not handled carefully.
  • Event Versioning: Over time, the structure of your events may need to change. This requires a strategy for versioning and migrating events, such as upcasting (transforming old events to a new format on the fly).
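
To illustrate that last point, here is a minimal upcasting sketch. The ItemAddedV1/ItemAddedV2 shapes and the currency default are invented: suppose a newer version of the ItemAdded event gained a Currency field that old, already-stored events lack:

package main
 
// ItemAddedV1 is the original event shape, as it still exists in old streams.
type ItemAddedV1 struct {
    ItemID   string
    Price    float64
    Quantity int
}
 
// ItemAddedV2 is the current shape, extended with an explicit currency.
type ItemAddedV2 struct {
    ItemID   string
    Price    float64
    Currency string
    Quantity int
}
 
// upcastItemAdded transforms an old event into the current format on the fly,
// so the rest of the application only ever deals with ItemAddedV2.
func upcastItemAdded(e ItemAddedV1) ItemAddedV2 {
    return ItemAddedV2{
        ItemID:   e.ItemID,
        Price:    e.Price,
        Currency: "EUR", // assumed default for events stored before currencies were tracked
        Quantity: e.Quantity,
    }
}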

Conclusion

Event Sourcing is a powerful architectural pattern that provides a complete historical record of your data, enables robust auditing and temporal analysis, and fosters highly decoupled and scalable systems through projections. While it introduces complexity, for applications with complex business logic and a need for a strong audit trail, it offers a fundamentally more robust and flexible way to model and persist data. By thinking in events instead of state, you can build systems that not only know what their data is, but also how it came to be.


By Marko Leinikka

26 July 2025 at 03:00
