CQRS and ES with Akka

Somewhere back in December 2015 Heiko Seeberger visited us for a Scala training. At the end of the last day he showed us Akka Persistence. Knowing the concepts of CQRS (Command-Query Responsibility Segregation) and, to some extent, ES (Event Sourcing), this fired my interest and I decided to build a demo app to try out these concepts.
In this blog post I’ll go through the code of my demo app, explain what I tried to accomplish and the reasoning behind certain decisions. The way I did things might not be the right way and most certainly is not the only way. I’m looking forward to any feedback and improvements!

Short CQRS and ES intro

In this section I will give a quick introduction to the concepts I tried to achieve with my demo app. If you'd like to know more, I can recommend this resource:

  • A CQRS Journey – Microsoft

CQRS is all about separating the writing of data (changes) from the reading (querying) of data.

[Figure: CQRS architecture overview © A CQRS Journey – Microsoft]
CQRS has several advantages:

  • Storing and reading of data can be optimized separately
  • Architecture is split up into small components, easy to reason about
  • Good fit for event-based systems / task-based UIs

Of course there are also the disadvantages:

  • Complexity of the system as a whole increases significantly
  • Architecture often leads to eventual consistency, with its own related issues
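The split itself need not be heavyweight. Below is a minimal sketch of the idea (all names are illustrative and much simpler than the Akka-based app later in this post): the write side accepts commands and emits events, while the read side consumes those events into a query-optimized view.

```scala
// A minimal CQRS sketch: commands go to the write side, queries to the read side.
final case class User(email: String, firstName: String, lastName: String)

sealed trait Command
final case class AddUser(user: User) extends Command

sealed trait Event
final case class UserAdded(user: User) extends Event

// Write side: handles commands, emits events, never serves queries.
class WriteSide {
  private var log = Vector.empty[Event]
  def handle(cmd: Command): Option[Event] = cmd match {
    case AddUser(u) =>
      val evt = UserAdded(u)
      log :+= evt
      Some(evt)
  }
  def events: Vector[Event] = log
}

// Read side: consumes events and maintains a view optimized for querying.
class ReadSide {
  private var byEmail = Map.empty[String, User]
  def apply(evt: Event): Unit = evt match {
    case UserAdded(u) => byEmail += u.email -> u
  }
  def findByEmail(email: String): Option[User] = byEmail.get(email)
}
```

In a real system the two sides live in separate processes and the events travel over a broker, which is exactly where the eventual consistency mentioned above comes from.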

The fundamental concept of Event Sourcing is to store all changes to data / state in a sequential, logbook-like manner. By replaying all events it should be possible to reconstruct the state of the system.
Event Sourcing combines well with CQRS as it can be applied for the write-side of the system and can be highly optimized to make storing changes very fast.
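The replay idea fits in a few lines. Here is a hypothetical sketch (event and state types are made up for illustration) where the current state is simply a left fold over the event log:

```scala
// Event sourcing in a nutshell: state is a left fold over the event log.
sealed trait UserEvent
final case class UserCreated(email: String) extends UserEvent
final case class UserRenamed(email: String, newName: String) extends UserEvent

// The state we want to rebuild: email -> display name.
final case class Users(names: Map[String, String])

object Replay {
  // Applying a single event to the state is a pure function.
  def applyEvent(state: Users, evt: UserEvent): Users = evt match {
    case UserCreated(email)          => Users(state.names + (email -> ""))
    case UserRenamed(email, newName) => Users(state.names + (email -> newName))
  }

  // Replaying the full log from the beginning reconstructs the current state.
  def replay(log: Seq[UserEvent]): Users =
    log.foldLeft(Users(Map.empty[String, String]))(applyEvent)
}
```

This is exactly what Akka Persistence does for us on recovery: it replays the persisted events through receiveRecover to rebuild the actor's state.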

The things I wanted to accomplish with my demo app were:

  • Store changes / updates in a logbook-like manner
  • Query a relational database on the read side
  • ‘Physically’ separate the write- and read side and use events to keep them in sync
  • Make sure the system is reliable and consistent to a reasonable extent

I started out with Akka Persistence and Cassandra for the write side based on Heiko’s demo. Camel and RabbitMQ are used to propagate changes from the write side to the SQL database at the read side. On the read side I used Slick in combination with MariaDB to store and query the data. Many thanks to my colleagues for providing some input there 😉

Because the demo app evolved in many small steps I will only show the end result and not all steps in between.

The demo app

First some background for the application. In its current state the application only supports creating new users and querying for a list of users. A user consists of their first name, last name and an email address. The email address must be unique.
The app is reachable via a REST API which, in an early form, is described here: Swagger for Akka HTTP. Everything runs locally, with the services (RabbitMQ, Cassandra and MariaDB) running in Docker containers. Remember that the source code is available on Github.

Implementing the read side

Let’s start off with the read side, because it’s the simplest.

The read side will receive updates via RabbitMQ. Receiving the current state of a user is sufficient, since the read side only has to persist and then query the data; after all, the write side has already handled the command that resulted in that state. We therefore only receive the User object as an event. As can be seen in the code below, the message also contains a MESSAGE_ID, which is persisted together with the user and can be used to prevent handling duplicate messages.
Since updates are not possible yet, we check whether the user already exists at the receiving side. If not, we add the user and confirm to Camel / RabbitMQ that the message was received correctly (the "origSender ! Ack").

Another noteworthy thing is the "val origSender = sender()" statement. Since Slick returns a Future from all of its methods, we need to capture the sending actor in a val to be able to send a response later. Futures are executed on their own ExecutionContext, which is different from the one the actor's receive method runs on. And since certain Akka values such as the sender are local to the processing of the current message, they won't be valid anymore by the time the code inside the future is executed.
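The pitfall can be demonstrated without Akka. In this sketch (all names are illustrative) a mutable "current sender" changes before the future runs, mimicking an actor that has already moved on to its next message:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object SenderCaptureDemo {
  // Simulates the actor's sender(): a value that changes between messages.
  private var currentSender: String = "alice"

  def run(): (String, String) = {
    val gate = Promise[Unit]()

    // Wrong: the closure reads currentSender only when the future runs,
    // by which time the "actor" may already be processing another message.
    val wrong: Future[String] = gate.future.map(_ => currentSender)

    // Right: capture the value into a val before going asynchronous.
    val origSender = currentSender
    val right: Future[String] = gate.future.map(_ => origSender)

    // The next "message" arrives and the sender changes; only then
    // do the futures get to run.
    currentSender = "bob"
    gate.success(())

    (Await.result(wrong, 1.second), Await.result(right, 1.second))
  }
}
```

Running this, the "wrong" future sees "bob" (the stale, changed context) while the "right" one sees the captured "alice", which is exactly why the EventReceiver below captures sender() first.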

class EventReceiver(userRepository: UserRepository) extends Consumer with ActorSettings with ActorLogging {

  override def endpointUri: String = settings.rabbitMQ.uri
  override def autoAck = false

  override def receive: Receive = {
    case msg: CamelMessage =>
      val origSender = sender()
      val body: Xor[Error, User] = decode[User](msg.bodyAs[String])
      body.fold({ error =>
        origSender ! Failure(error)
      }, { user =>
        val messageId: Long = msg.headers.get(RabbitMQConstants.MESSAGE_ID) match {
          case Some(id: Long)   => id
          case Some(id: String) => id.toLong
          case _                => -1
        }
        log.info("Event received with id {} and for user: {}", messageId, user.email)
        userRepository.getUserByEmail(user.email).foreach {
          case Some(_) => log.debug("User with email {} already exists", user.email)
          case None =>
            userRepository.createUser(UserEntity(messageSeqNr = messageId, userInfo = user)).onComplete {
              case scala.util.Success(_) => origSender ! Ack // Send ACK when storing the user succeeded
              case scala.util.Failure(t) => log.error(t, "Failed to persist user with email: {}", user.email)
            }
        }
      })
    case _ => log.warning("Unexpected event received")
  }
}

The UserRepository being used contains some methods for persisting and querying the UserEntity persistence object. Note that methods return Futures, making all database actions asynchronous. The nice thing about Slick is that you can interact with the database as if it were a collection.

class UserRepository(val databaseService: DatabaseService)(implicit executionContext: ExecutionContext)
    extends UserEntityTable {

  def getUsers(): Future[Seq[UserEntity]] = db.run(users.result)

  def getUserByEmail(email: String): Future[Option[UserEntity]] =
    db.run(users.filter(_.email === email).result.headOption)

  def createUser(user: UserEntity): Future[Long] = db.run((users returning users.map(_.id)) += user)
}

The UserEntity is a simple case class extending our existing User object, adding some fields such as the message id and insert / update timestamps. At a later stage these timestamps can help us maintain consistency or help in debugging when records were added or updated.
The id, createdAt and updatedAt fields are of type Option because they are generated by the database.

final case class UserEntity(
  id: Option[Long] = None,
  createdAt: Option[Timestamp] = None,
  updatedAt: Option[Timestamp] = None,
  messageSeqNr: Long,
  userInfo: User
)

Finally there is the UserEntityTable, an object required by Slick for mapping between database tables and the entity object. Note here the special syntax of “.?” for the optional fields mentioned for the entity above.

trait UserEntityTable {

  class Users(tag: Tag) extends Table[UserEntity](tag, "CFE_USERS") {
    def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def createdAt =
      column[Timestamp]("CREATED_AT", SqlType("timestamp not null default CURRENT_TIMESTAMP"))
    def updatedAt =
      column[Timestamp]("UPDATED_AT",
        SqlType("timestamp not null default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP"))
    def messageSeqNr = column[Long]("MSG_SEQ_NR")
    def email = column[String]("EMAIL")
    def firstName = column[String]("FIRST_NAME")
    def lastName = column[String]("LAST_NAME")

    def * =
      (id.?, createdAt.?, updatedAt.?, messageSeqNr, (email, firstName, lastName)).shaped <> ({
        case (id, createdAt, updatedAt, messageSeqNr, userInfo) =>
          UserEntity(id, createdAt, updatedAt, messageSeqNr, User.tupled.apply(userInfo))
      }, { ue: UserEntity =>
        def f1(u: User) = User.unapply(u).get
        Some((ue.id, ue.createdAt, ue.updatedAt, ue.messageSeqNr, f1(ue.userInfo)))
      })

    def idx_user = index("idx_user", email, unique = true)
  }

  protected val users = TableQuery[Users]
}

Implementing the write side

The write side consists of three actors: UserAggregate, UserRepository and EventSender. The UserAggregate will receive a command – in our case only to add new users – and decide how to handle it. The command to add the user is persisted only when the user does not yet exist and an event is sent to inform the read side.

The UserAggregate mixes in the AtLeastOnceDelivery trait. This makes it a special type of PersistentActor which expects sent messages to be confirmed. In the updateState function, the deliver function is called to send an event to a specific actor. A delivery id is generated so that the delivery can later be confirmed as successful. By persisting all messages that are sent and received, the system is made reliable and can pick up where it left off even in the event of a crash. See Akka – AtLeastOnceDelivery for details about this trait and how it works.

Besides having to capture certain values when working with Futures inside actors, the persist function needs even more attention: it must be called from within the actor's own message handling, which makes it hard to combine with the ask pattern. For that purpose the GetUsersForwardResponse case class was created.
When we receive a response from the UserRepository actor, the result is piped back to ourselves. That way it arrives as a separate message, which is processed within the actor's own context, and from there we can safely persist any objects.

object UserAggregate {
  sealed trait Evt
  final case class MsgAddUser(u: User) extends Evt
  final case class MsgConfirmed(deliveryId: Long) extends Evt
  final case class GetUsersForwardResponse(senderActor: ActorRef, existingUsers: Set[User], newUser: User)
}

class UserAggregate extends PersistentActor with AtLeastOnceDelivery with ActorLogging {

  override def receiveCommand: Receive = {
    case AddUserCmd(newUser) =>
      val origSender = sender()
      val usersFuture = userRepository ? GetUsers
      pipe(usersFuture.mapTo[Set[User]].map(GetUsersForwardResponse(origSender, _, newUser))) to self

    case GetUsersForwardResponse(origSender, users, newUser) =>
      if (users.exists(_.email == newUser.email)) {
        origSender ! UserExistsResp(newUser)
      } else {
        persist(MsgAddUser(newUser)) { persistedMsg =>
          updateState(persistedMsg)
          origSender ! UserAddedResp(newUser)
        }
      }

    case ConfirmAddUser(deliveryId) =>
      persist(MsgConfirmed(deliveryId))(updateState)

    case Confirm(deliveryId) =>
      persist(MsgConfirmed(deliveryId))(updateState)
  }

  override def receiveRecover: Receive = {
    case evt: Evt => updateState(evt)
  }

  def updateState(evt: Evt): Unit = evt match {
    case MsgAddUser(u) =>
      deliver(eventSender.path)(deliveryId => Msg(deliveryId, u))
      deliver(userRepository.path)(deliveryId => AddUser(deliveryId, u))
    case MsgConfirmed(deliveryId) =>
      confirmDelivery(deliveryId)
  }
}

The UserRepository is really simple, in the sense that it receives an AddUser command and just adds the user to its set. All necessary checks should have been performed by the UserAggregate already.

class UserRepository extends PersistentActor with ActorLogging {

  private var users = Set.empty[User]

  override def receiveCommand: Receive = {
    case GetUsers =>
      sender() ! users
    case AddUser(id, user) =>
      log.info(s"Adding $id new user with email: ${user.email}")
      persist(user) { persistedUser =>
        users += persistedUser
        sender() ! ConfirmAddUser(id)
      }
  }

  override def receiveRecover: Receive = {
    case user: User => users += user
  }
}

Finally, the event-sending part actually consists of two actors, because we are using Camel. The CamelSender is needed to set the endpoint uri and override some defaults. In the EventSender we have some logic to convert the User object to JSON and keep track of unconfirmed messages. As the UserAggregate expects a confirmation message, and the Camel confirmation message arrives separately, we use a map to keep track of the senders that still need to receive a confirmation.

class EventSender extends Actor with ActorLogging {

  private var unconfirmed = immutable.SortedMap.empty[Long, ActorPath]

  override def receive: Receive = {
    case Msg(deliveryId, user) =>
      log.info("Sending msg for user: {}", user.email)
      unconfirmed = unconfirmed.updated(deliveryId, sender().path)
      val headersMap = Map(RabbitMQConstants.MESSAGE_ID -> deliveryId, RabbitMQConstants.CORRELATIONID -> deliveryId)
      camelSender ! CamelMessage(user.asJson.noSpaces, headersMap)

    case CamelMessage(_, headers) =>
      val deliveryId: Long = headers.getOrElse(RabbitMQConstants.MESSAGE_ID, -1L).asInstanceOf[Long]
      log.info("Event successfully delivered for id {}, sending confirmation", deliveryId)
      unconfirmed.get(deliveryId).foreach { senderActor =>
        unconfirmed -= deliveryId
        context.actorSelection(senderActor) ! Confirm(deliveryId)
      }

    case Status.Failure(ex) =>
      log.error("Event delivery failed. Reason: {}", ex.toString)
  }
}

class CamelSender extends Actor with Producer with ActorSettings {
  override def endpointUri: String = settings.rabbitMQ.uri

  override def headersToCopy: Set[String] =
    super.headersToCopy + RabbitMQConstants.CORRELATIONID + RabbitMQConstants.MESSAGE_ID
}


The examples I showed you here are loosely based on Heiko Seeberger's reactive-flows project regarding the Akka HTTP setup. A demo project can easily be generated with sbt-fresh.

I hope you liked this blog post and I would love to hear your comments!

