Event Sourcing Fundamentals

What event sourcing actually is and when you should use it

If you’ve built CRUD apps, you already know the usual deal: a row is the truth. A user updates their email, you overwrite users.email, and the old value disappears unless you remembered to log it somewhere.

Event sourcing flips that around. The truth is not the row. The truth is the story of what happened.

Instead of storing this:

┌──────────┬──────────────┬──────┐
│ user_id  │ email        │ plan │
├──────────┼──────────────┼──────┤
│ user_123 │ ada@work.dev │ pro  │
└──────────┴──────────────┴──────┘

You store this:

stream: user_123
┌───┬───────────────────┬──────────────────────┐
│ 1 │ user.registered   │ email: ada@gmail.com │
│ 2 │ user.emailChanged │ from:  ada@gmail.com │
│   │                   │ to:    ada@work.dev  │
│ 3 │ user.planChanged  │ from: free, to: pro  │
└───┴───────────────────┴──────────────────────┘

Same user. Different source of truth.

An event is a fact your business cares about. Not “set this field to that value.” More like “this thing happened.”

  • order.placed
  • payment.failed
  • user.emailChanged
  • document.published

The wording matters. Events should read like the logbook of your product. If a customer support person, product manager, or future-you could understand the timeline, you’re probably naming them well.

Events are append-only. You do not edit yesterday. If something changed today, you add another event.

Yesterday: payment.failed
Today: payment.retried
Today: payment.succeeded

That sounds almost too simple, which is why people keep making it complicated.

Event sourcing does not mean your app has no current state. Your UI still wants to show an order status. Your API still wants to return a user profile.

That current state is a projection: a useful view built from events. Sometimes the projection is computed on the fly. Sometimes it is saved into a read model, like a table, document, cache, or search index.

If you know Array.prototype.reduce, the idea is familiar. Start with an empty state. Apply each event. End up with the state you want to read.

Functional event sourcing usually calls that little reducer evolve, because it evolves state from one event to the next.

import type { Event, ReadEvent } from '@delta-base/toolkit';

type UserEvent =
  | Event<'user.registered', { email: string }>
  | Event<'user.emailChanged', { email: string }>
  | Event<'user.planChanged', { plan: 'free' | 'pro' }>;

type User = {
  email?: string;
  plan: 'free' | 'pro';
};

const initialUser = (): User => ({ plan: 'free' });

const evolve = (state: User, event: ReadEvent<UserEvent>): User => {
  switch (event.type) {
    case 'user.registered':
    case 'user.emailChanged':
      return { ...state, email: event.data.email };
    case 'user.planChanged':
      return { ...state, plan: event.data.plan };
  }
};

const { state: user } = await eventStore.aggregateStream<User, UserEvent>(
  'user_123',
  {
    initialState: initialUser,
    evolve,
  }
);

The projection gives you “now.” The events tell you how “now” happened.

┌────────────────────┐
│ user.registered    │
│ user.emailChanged  │
│ user.planChanged   │
└─────────┬──────────┘
          │ project / evolve
          ▼
┌────────────────────┐
│ email: ada@work.dev│
│ plan: pro          │
└────────────────────┘

There is one more word worth knowing: decide. Projection logic answers “given these events, what is the current state?” Decision logic answers “given a command and current state, what events should happen next?”

Command + State ── decide ──▶ Event(s)
Event + State ── evolve ──▶ New State

That is the decider pattern. You do not need it to understand this page, but it is the natural next step once you start putting business rules around your events. The deeper version is in Functional Event Sourcing with the Decider Pattern.
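As a taste of that next step, here is a minimal sketch of decide and evolve side by side, in plain TypeScript. The types and names are local to this example, not a library API:

```typescript
// Sketch: decide and evolve for the user stream above.
type User = { email?: string; plan: 'free' | 'pro' };
type ChangeEmail = { kind: 'changeEmail'; email: string };
type UserEvent = { type: 'user.emailChanged'; data: { email: string } };

// Command + State ── decide ──▶ Event(s)
const decide = (command: ChangeEmail, state: User): UserEvent[] =>
  command.email === state.email
    ? [] // no change, no fact to record
    : [{ type: 'user.emailChanged', data: { email: command.email } }];

// Event + State ── evolve ──▶ New State
const evolve = (state: User, event: UserEvent): User => ({
  ...state,
  email: event.data.email,
});

const state: User = { email: 'ada@gmail.com', plan: 'pro' };
const events = decide({ kind: 'changeEmail', email: 'ada@work.dev' }, state);
const next = events.reduce(evolve, state);
console.log(next); // { email: 'ada@work.dev', plan: 'pro' }
```

Notice that decide can return an empty array: if nothing actually happened, no fact gets recorded.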

CRUD is great for plenty of things. A feature flag table does not need a novel. A list of countries does not need a backstory.

But CRUD gets awkward when the story matters.

await db.orders.update(orderId, {
  status: 'cancelled',
});

Why was it cancelled? By whom? Was it paid first? Did inventory get reserved? Did we email the customer? Maybe you have columns for some of that. Maybe you have audit logs. Maybe you have five tables trying to remember the thing your main table forgot.

With events, the story is the model:

await eventStore.appendToStream('order_123', [
  {
    type: 'order.cancelled',
    data: {
      cancelledBy: 'customer',
      reason: 'delivery_too_late',
    },
  },
]);

Now the interesting business fact is not hiding in a side channel. It is the thing you saved.

Events usually live in streams. A stream is just the ordered history for one thing.

order_123
├─ order.placed
├─ payment.authorized
├─ inventory.reserved
├─ order.shipped
└─ order.delivered

order_456
├─ order.placed
├─ payment.failed
└─ order.cancelled

When someone changes order_123, you append to the order_123 stream. When you need the current order, you read that stream and project it.

That is the core loop:

command from app
decide what happened
append event to stream
build current state for reads
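The loop above can be wired up end to end against a plain in-memory array instead of a real event store. Everything in this sketch is illustrative, down to the counter domain:

```typescript
// Sketch: the whole loop in memory — decide, append, project.
type CounterEvent = { type: 'counter.incremented'; data: { by: number } };
type Counter = { value: number };

const stream: CounterEvent[] = []; // the append-only stream

// decide what happened: command + current state -> events
const decide = (command: { by: number }, _state: Counter): CounterEvent[] =>
  command.by > 0
    ? [{ type: 'counter.incremented', data: { by: command.by } }]
    : [];

// evolve: replay one event onto state
const evolve = (state: Counter, event: CounterEvent): Counter => ({
  value: state.value + event.data.by,
});

// build current state for reads
const project = (): Counter => stream.reduce(evolve, { value: 0 });

// command from app -> decide -> append event to stream
stream.push(...decide({ by: 2 }, project()));
stream.push(...decide({ by: 3 }, project()));

console.log(project()); // { value: 5 }
```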

You do not have to replay events on every page load. In real apps, you usually save projections into read models.

The projection is the transformation. The read model is where the result lives.

Events                        Read models
──────                        ───────────
order.placed       ────────▶  orders table
payment.succeeded  ────────▶  revenue dashboard
order.shipped      ────────▶  customer timeline

This is the nice part: one history can feed many useful views. The support screen, analytics dashboard, and public API can all be shaped differently without changing the facts you stored.
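A read model does not have to be fancy. Here is a sketch where the "orders table" is just a Map, and the projection applies one event at a time; all names are illustrative:

```typescript
// Sketch: one event stream feeding an in-memory read model.
type OrderEvent =
  | { type: 'order.placed'; data: { orderId: string; total: number } }
  | { type: 'order.shipped'; data: { orderId: string } };

type OrderRow = { status: 'placed' | 'shipped'; total: number };

const ordersTable = new Map<string, OrderRow>();

// The projection: apply one event to the read model.
const project = (event: OrderEvent): void => {
  switch (event.type) {
    case 'order.placed':
      ordersTable.set(event.data.orderId, {
        status: 'placed',
        total: event.data.total,
      });
      break;
    case 'order.shipped': {
      const row = ordersTable.get(event.data.orderId);
      if (row) row.status = 'shipped';
      break;
    }
  }
};

const log: OrderEvent[] = [
  { type: 'order.placed', data: { orderId: 'order_123', total: 49 } },
  { type: 'order.shipped', data: { orderId: 'order_123' } },
];
log.forEach(project);

console.log(ordersTable.get('order_123')); // { status: 'shipped', total: 49 }
```

Want a revenue dashboard too? Write a second projection over the same log. The events do not change; only the views do.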

Event sourcing also fits nicely with sync engine-style software: local-first apps, offline clients, background workers, edge replicas, mobile apps, browser tabs, whatever needs to keep its own copy of the world.

In a CRUD system, sync often means asking “what rows changed?” after the fact. That is why teams reach for update timestamps, polling, CDC pipelines, trigger tables, and other plumbing that tries to reconstruct a timeline from mutable tables.

With event sourcing, the timeline is already the database.

                 ┌──────────────┐
                 │  Event log   │
                 └──────┬───────┘
        ┌───────────────┼────────────────┐
        ▼               ▼                ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│  Web client  │ │  Mobile app  │ │ Search index │
│ position: 42 │ │ position: 39 │ │ position: 42 │
└──────────────┘ └──────────────┘ └──────────────┘

Each subscriber remembers the last event it processed. When it reconnects, it asks for everything after that position and catches up. No mystery diff. No “did this row change or did we just touch updated_at?” dance.
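The catch-up mechanic fits in a few lines. This sketch keeps the log and the checkpoint in memory; a real subscriber would persist its position:

```typescript
// Sketch: catch-up subscription via a stored position.
type StoredEvent = { position: number; type: string };

const log: StoredEvent[] = [
  { position: 1, type: 'user.registered' },
  { position: 2, type: 'user.emailChanged' },
  { position: 3, type: 'user.planChanged' },
];

// "Give me everything after my last position."
const readAfter = (position: number): StoredEvent[] =>
  log.filter((e) => e.position > position);

// A subscriber that stopped after position 1 reconnects:
let checkpoint = 1;
for (const event of readAfter(checkpoint)) {
  // ...apply the event to the local read model...
  checkpoint = event.position; // advance only after processing
}

console.log(checkpoint); // 3 — fully caught up
```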

That does not make distributed software free. Conflicts, permissions, schema changes, and deleted personal data still need real thought. But an append-only log gives every replica the same simple job: read facts in order, project the local state it needs.

Event sourcing shines when history has real product value:

  • Orders, payments, subscriptions, carts, shipments, and workflows.
  • Systems where support needs to answer “what happened?”
  • Sync engines where clients, workers, or replicas subscribe to changes.
  • Products where debugging production means replaying a real customer journey.
  • Domains where “what did we know at the time?” matters.

It is probably too much for:

  • Static reference data.
  • Simple profile settings.
  • Admin CRUD screens nobody thinks about twice.
  • Anything where the history is less valuable than the simplicity you would give up.

This is not a religion. Use it where the story matters.

What you get:

  • A real audit trail, not an afterthought.
  • Better debugging, because bugs come with a timeline.
  • The ability to rebuild new read models from old facts.
  • Business data that says what happened, not just what survived the last update.

What you pay:

  • You have to design events, not just tables.
  • You need to think about projections and eventual consistency.
  • You need a way to handle long streams, usually with snapshots later.
  • Deletes and privacy rules need care because events are meant to be immutable.

Still, the mental model is smaller than its reputation: save facts, derive state.

Do not rewrite your whole app. Pick one place where the sequence of events is already important and your current model feels a little dishonest.

A shopping cart is a good first pass:

cart.created
item.added
item.quantityChanged
coupon.applied
item.removed
cart.checkedOut

Build that one stream. Write one reducer. Make one projection for the UI.
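That one reducer might look like this sketch. The event shapes and field names are assumptions for illustration, not a prescribed schema:

```typescript
// Sketch: a reducer (evolve) for the cart stream above.
type CartEvent =
  | { type: 'cart.created'; data: {} }
  | { type: 'item.added'; data: { sku: string; qty: number } }
  | { type: 'item.quantityChanged'; data: { sku: string; qty: number } }
  | { type: 'item.removed'; data: { sku: string } }
  | { type: 'coupon.applied'; data: { code: string } }
  | { type: 'cart.checkedOut'; data: {} };

type Cart = { items: Record<string, number>; coupon?: string; checkedOut: boolean };

const evolve = (state: Cart, event: CartEvent): Cart => {
  switch (event.type) {
    case 'cart.created':
      return { items: {}, checkedOut: false };
    case 'item.added':
    case 'item.quantityChanged':
      return { ...state, items: { ...state.items, [event.data.sku]: event.data.qty } };
    case 'item.removed': {
      const { [event.data.sku]: _removed, ...items } = state.items;
      return { ...state, items };
    }
    case 'coupon.applied':
      return { ...state, coupon: event.data.code };
    case 'cart.checkedOut':
      return { ...state, checkedOut: true };
  }
};

const events: CartEvent[] = [
  { type: 'cart.created', data: {} },
  { type: 'item.added', data: { sku: 'mug', qty: 1 } },
  { type: 'item.quantityChanged', data: { sku: 'mug', qty: 2 } },
  { type: 'coupon.applied', data: { code: 'WELCOME10' } },
  { type: 'cart.checkedOut', data: {} },
];

const cart = events.reduce(evolve, { items: {}, checkedOut: false });
// cart now has items { mug: 2 }, the coupon, and checkedOut: true
```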

After that, event sourcing stops feeling like architecture astronaut stuff and starts feeling like a pretty honest way to model software: record what happened, then let the current state fall out of it.