Some CQRS Notes
March 22, 2017
Notes taken while building a CQRS application.
In this post, by CQRS I mean the Edument style, which includes aggregates, commands and events.
Use database transactions to serialize writes.
An aggregate is a transactional boundary: it spans enough state that you can ensure business rules are kept. This requires serializing access to the aggregate so that only one command is processed at a time.
A typical approach is the optimistic locking “unit of work” pattern (sketched in code after the list):
- load the aggregate from its event history (sometimes also called “hydrating” the aggregate),
- apply the command to the aggregate,
- verify that no new events have been written for the aggregate since step 1,
- if new events have been written, go back to step 1 (optionally, stop after a limited number of retries),
- persist any new events produced by the command.
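Here is a minimal sketch of that loop using Python’s built-in sqlite3 module. The events table, the hydrate helper, and the aggregate’s handle method are assumptions for illustration; the “verify” step is enforced by a UNIQUE (aggregate_id, version) constraint, so a concurrent write shows up as an IntegrityError.

```python
import sqlite3

MAX_RETRIES = 5  # optionally, stop after a limited number of retries

def process(conn: sqlite3.Connection, aggregate_id: str, command) -> None:
    for _ in range(MAX_RETRIES):
        # 1. hydrate the aggregate from its event history
        rows = conn.execute(
            "SELECT version, body FROM events"
            " WHERE aggregate_id = ? ORDER BY version",
            (aggregate_id,),
        ).fetchall()
        expected_version = rows[-1][0] if rows else 0
        aggregate = hydrate(body for _, body in rows)  # hypothetical helper

        # 2. apply the command to the aggregate
        new_events = aggregate.handle(command)  # hypothetical helper

        # 3./4. a UNIQUE (aggregate_id, version) constraint rejects the
        # write if someone else appended events since step 1
        try:
            for i, event in enumerate(new_events, start=1):
                conn.execute(
                    "INSERT INTO events (aggregate_id, version, body)"
                    " VALUES (?, ?, ?)",
                    (aggregate_id, expected_version + i, event),
                )
            conn.commit()  # 5. persist the new events
            return
        except sqlite3.IntegrityError:
            conn.rollback()  # conflict: retry from step 1
    raise RuntimeError("gave up after repeated write conflicts")
```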
A simpler approach is to use your database’s locking to ensure serialization (see the sketch after this list):
- begin an immediate transaction (in SQLite3, the immediate qualifier blocks other writers immediately, instead of waiting for the first update, insert, or delete statement),
- select the event history from the DB and load the aggregate,
- handle the command,
- write new events to the DB,
- commit.
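The same handler under this scheme, again with Python’s sqlite3 module (the table and helpers are the same illustrative assumptions as above). Because the write lock is held for the whole transaction, no version check is needed:

```python
import sqlite3

def process(conn: sqlite3.Connection, aggregate_id: str, command) -> None:
    conn.isolation_level = None  # manage transactions by hand
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    try:
        rows = conn.execute(
            "SELECT body FROM events WHERE aggregate_id = ? ORDER BY version",
            (aggregate_id,),
        ).fetchall()
        aggregate = hydrate(body for (body,) in rows)  # hypothetical helper
        for event in aggregate.handle(command):        # hypothetical helper
            conn.execute(
                "INSERT INTO events (aggregate_id, body) VALUES (?, ?)",
                (aggregate_id, event),
            )
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
```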
You are locking the entire table, but for a small system that is just starting out, this will work fine. As your system grows, you can keep the same design by partitioning aggregate data by aggregate ID into different tables.
When designing your event store, think about partitioning by aggregate ID. Remember that an aggregate stores enough data to guarantee a business rule, so partitioning event history by aggregate ID will still let you keep your rules. Taking this idea to an extreme, you could have one SQLite3 database file for each aggregate in your system.
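A sketch of that extreme case, one SQLite3 file per aggregate; the directory layout and schema here are made up for the example:

```python
import sqlite3
from pathlib import Path

def connect_for(aggregate_id: str) -> sqlite3.Connection:
    # one database file per aggregate, e.g. aggregates/screening-42.db
    path = Path("aggregates") / f"{aggregate_id}.db"
    path.parent.mkdir(exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events ("
        " version INTEGER PRIMARY KEY,"
        " body TEXT NOT NULL)"
    )
    return conn
```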
Put the aggregate ID in the event.
At first glance, it seems like wasted space. You are already writing the aggregate ID into its own (indexed) column, so why include it in the event body as well?
The reason is that view models will span multiple aggregates. Events are sent to a read model, and the read model updates its state. If the event does not include the aggregate ID in its body, the read model needs the event plus the aggregate ID in the message. Just put it in the event.
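For example, a hypothetical event could repeat the aggregate ID in its body, so the serialized event is all a read model ever needs (field names are made up):

```python
import json

event = {
    "type": "SeatReserved",
    "aggregate_id": "screening-42",  # also stored in its own indexed column
    "seat": "B7",
}

# the same self-contained body is what gets sent to read models
body = json.dumps(event)
```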
Make your events rich, even if you don’t use them now.
Commands don’t change that often. But read models change a lot.
If you make your event history rich from the get-go, you will have more flexibility in building read models that go back to the start of time.
The downside is that this uses more space. But if you are partitioning by aggregate ID and you want to reclaim space from events you don’t need, you can turn off processing for that aggregate, swap in a new SQLite3 database file with those events deleted, and then turn processing back on.
In a system that is growing and changing quickly, a rich event stream gives you more flexibility in getting data out of your system.
Put the submitted time in the command.
This is useful if you want to support off-line, disconnected commands.
Say you want a web app that someone can use when they are not connected to the internet. You could use browser local storage, save the sequence of commands, and then process them when they come back on-line. If you have any business rules that depend on the time the command was created, then you need to store that in the command.
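As a sketch, a command could carry its submission time, and any time-based rule would check that field rather than the server clock at processing time (names and the rule are made up):

```python
import time

command = {
    "type": "ReserveSeat",
    "aggregate_id": "screening-42",
    "seat": "B7",
    "submitted_at": int(time.time()),  # set when created, not when processed
}

def check_reservation_window(aggregate, command) -> None:
    # hypothetical rule: reservations close at a fixed time, judged by
    # when the command was submitted, not when it was finally processed
    if command["submitted_at"] > aggregate.closes_at:
        raise ValueError("reservations are closed")
```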
You can also hydrate read models.
I like servers that are $15 a year. The drawback is that they typically only come with 256MB of RAM. Instead of keeping read models hanging around in memory, you can build them on demand, by processing the event history.
This is a short-term strategy, because event streams get big fast, but it is simple to code and simple to get started with. You can also monitor response times and disk usage; you may be surprised at how long you can get away with this simpler approach.
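A sketch of an on-demand read model: fold the event history into the answer each time it is asked for, instead of keeping state in memory (table and event shapes are the same illustrative assumptions as above):

```python
import json
import sqlite3

def reserved_seats(conn: sqlite3.Connection, screening_id: str) -> set:
    # rebuild the read model by replaying the aggregate's events
    seats = set()
    rows = conn.execute(
        "SELECT body FROM events WHERE aggregate_id = ? ORDER BY version",
        (screening_id,),
    )
    for (body,) in rows:
        event = json.loads(body)
        if event["type"] == "SeatReserved":
            seats.add(event["seat"])
        elif event["type"] == "SeatReleased":
            seats.discard(event["seat"])
    return seats
```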
Tags: cqrs