Spring Boot Akka Event Sourcing Starter – Part 4 – Final

In this final part, I will share some possible designs that use the Spring Boot event sourcing toolkit starter, plus some remarks and action points.

What are some possible designs using the toolkit for event sourcing and CQRS services?

Using the toolkit with Apache Ignite and Kafka for event streaming:

[Figure: Overview of the design with Apache Ignite as the event store and Kafka for event streaming]

Here we do the following:

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if you need to support a microservices architecture.
  2. The read-side application can have a different data model tailored to its query needs.
  3. We use the Apache Ignite data grid as the event store, which can be scaled easily by adding more server nodes. You can also benefit from the data grid's rich features, such as distributed computations and rich SQL query support, and we use Ignite continuous queries to push newly added events to Kafka.
  4. We integrate Apache Ignite with Kafka via Kafka Connect to read the newly added events from the events cache and stream them to the read-side application and any other interested applications, such as fraud detection and reporting.
  5. Infrastructure: an Akka cluster, an Ignite cluster, and a Kafka cluster, plus service orchestration such as Kubernetes.
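As a rough illustration of step 4, a Kafka Connect worker can be pointed at the events cache using the source connector shipped in Ignite's `ignite-kafka` module. This is only a sketch: the connector class comes from that module, but the property names can differ between Ignite versions, and all paths, cache names, and topic names below are placeholders.

```properties
# ignite-source.properties -- illustrative Kafka Connect config (verify
# property names against the ignite-kafka docs for your Ignite version)
name=events-cache-source
connector.class=org.apache.ignite.stream.kafka.connect.IgniteSourceConnector
tasks.max=1
# Path to the Ignite configuration and the event-store cache to stream from
igniteCfg=/opt/ignite/config/ignite.xml
cacheName=events
# Forward cache PUT events (newly appended domain events) to this topic
cacheEvts=put
topicNames=domain-events
```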

Using the toolkit with Apache Cassandra:

[Figure: Overview of the design with Apache Cassandra as the event store]

Here we do the following:

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if you need to support a microservices architecture.
  2. We use Cassandra as the event store.
  3. We can still use Kafka Connect to stream events to other systems for read queries and other analysis and reporting needs.
  4. Infrastructure: an Akka cluster, a Cassandra cluster, and a Kafka cluster, plus service orchestration such as Kubernetes.
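For step 2, wiring Akka Persistence to Cassandra is mostly configuration. Below is a minimal, illustrative `application.conf` fragment, assuming the classic (pre-1.0) akka-persistence-cassandra plugin; key names and defaults vary by plugin version, and the contact points and keyspace names are placeholders:

```hocon
# Route the Akka Persistence journal and snapshot store to Cassandra
akka.persistence.journal.plugin = "cassandra-journal"
akka.persistence.snapshot-store.plugin = "cassandra-snapshot-store"

cassandra-journal {
  contact-points = ["127.0.0.1"]
  keyspace = "orders_journal"
}

cassandra-snapshot-store {
  contact-points = ["127.0.0.1"]
  keyspace = "orders_snapshot"
}
```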

Using the toolkit with Apache Ignite only:

If your application does not need all those complexities and is just a small-sized service, you can use Ignite alone with the toolkit to implement the write and read sides of your CQRS and event sourcing application.

[Figure: Design overview]

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if you need to support a microservices architecture.
  2. We use the Ignite data grid for the event store and for the read-side query projection, using continuous queries or cache interceptors to push newly added events to another cache holding the target read model.
  3. You can separate the read and write caches into two different cluster groups.
  4. You can still use Kafka Connect to stream events to other systems if you like.
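The projection idea in step 2 is library-agnostic: each newly appended event is folded into a denormalized read model. A minimal sketch in plain Java (no Ignite APIs; the event and model names are illustrative only):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Write-to-read projection sketch: events appended on the write side are
// folded into a denormalized read model (here: account id -> balance).
public class BalanceProjection {
    // Illustrative event type; real events would come from the event store.
    public record AmountAdded(String accountId, long amount) {}

    private final Map<String, Long> readModel = new HashMap<>();

    // In the Ignite design this method would be driven by a continuous
    // query or cache interceptor pushing each new event into the read cache.
    public void apply(AmountAdded event) {
        readModel.merge(event.accountId(), event.amount(), Long::sum);
    }

    // Replaying the whole event stream rebuilds the read model from scratch.
    public void replay(List<AmountAdded> events) {
        events.forEach(this::apply);
    }

    public long balanceOf(String accountId) {
        return readModel.getOrDefault(accountId, 0L);
    }
}
```

Because the read model is derived purely from the event stream, it can be dropped and rebuilt at any time by replaying the journal, which is what makes separating the read and write caches safe.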

Using the toolkit with Apache Ignite and Kafka Streams:

[Figure: Overview of the design with Apache Ignite and Kafka Streams]

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if you need to support a microservices architecture.
  2. We use Apache Ignite for the event store, with Kafka Connect to stream the events.
  3. We use Kafka Streams to implement the read side.

Of course there are many other possible designs; I have just shared some of them here. Now let's summarize some remarks and action points to take into consideration.

Summary notes:

  1. Event sourcing and CQRS are not a silver bullet for every need; use them only when they are really needed and when they fit the actual reasons behind them.
  2. You need distributed tracing and monitoring for your different clusters for better traceability and error handling.
  3. With Akka Persistence, you need to cover the following when using it for your domain entities:
    1. Use a split-brain resolver when using Akka clustering to avoid split-brain scenarios and to have predictable cluster partitioning behavior.
    2. Make sure not to use Java serialization, as it is really bad for the performance and throughput of your application with Akka Persistence.
    3. Think through the active-active model for cross-cluster support, due to the cluster sharding limitations around it; this is covered in the points below.
  4. When it comes to an active-active support model for your application, you have multiple options for active-active data center support, each of which comes with a latency and performance impact; nothing is for free:
    1. The Akka Persistence active-active model support extension, which is a commercial add-on: Akka-Persistance-Active-Active
    2. If you use Apache Ignite as your event store, you have two options:
      1. You can use a backing store for your data grid that supports cross-data-center replication, for example Cassandra.
      2. You can use the cross-data-center replication feature of GridGain, the commercial version of Apache Ignite.
    3. You can use Kafka cross-data-center replication to replicate your event data across multiple data centers.
    4. If you use Cassandra as the event store, you can use Cassandra's cross-data-center replication feature.
    5. In the end, you need to think through how you will handle the active-active model for your event-sourced entities and all its side effects on state replication and reconstruction, especially if you use Akka Persistence, where it will most likely not be supported without the commercial add-on, unless you implement your own solution for it.

I hope I have shared some useful insights; they are open for discussion and validation anytime.


Spring Boot Akka Event Sourcing Starter – Part 1

Here I am going to share a custom toolkit, wrapped as a Spring Boot starter around Akka Persistence, that acts as a ready-made toolkit for an event-driven, asynchronous, non-blocking flow API, event sourcing, and CQRS implementation within Spring Boot services, which can be part of a Spring Cloud microservices infrastructure. We will cover the following:

  1. Overview of the toolkit for DDD, event sourcing and CQRS implementation
  2. The integration between Akka Persistence and Spring Boot via a starter implementation, with abstractions for the entity aggregate, cluster sharding, integration testing, and flow definition
  3. A working application example that showcases how it can be used
  4. Summary of possible designs
  5. What is next and special remarks

The Overview:

Before going through the toolkit implementation, you should first go through domain-driven design, event sourcing, and CQRS principles; here is one good URL that can help you get a nice overview and understand the pros and cons of this design, when you need it, and when you do not:

Instead of implementing those patterns from scratch, I decided to use Akka Persistence to apply the core principles of event sourcing, plus my own layer on top to abstract how you define your aggregate with its command and event handling flow.

Within the toolkit, the aggregate command and flow handling is as follows:

[Figure: Aggregate command and event handling flow]

The flow definition API is as follows:

  • There is a state-changing command handler flow definition, which matches a command class type to a specific command handler
  • There are event handler definitions that match an event class type to an event handler, which executes the related logic for that event
  • There are read-only command handlers, which do not change the state of the aggregate entity; they can be used for query actions or other actions that do not mutate the entity state by appending new events

So the different semantic branches of the flow API are:

  1. If a command message is received:
    • If the command is transactional:
      1. Get the related command handler for that command type, based on the flow API definition for that aggregate and the related current flow context with the current aggregate state
      2. Execute the command handler logic, which will trigger one of the following two cases:
        • A single event is persisted, then any configurable post action is executed after persisting the event to the event store, such as post-processing and sending a response back to the sender
        • A list of events is persisted, then any configurable post action is executed after persisting the events to the event store, such as post-processing and sending a response back to the sender
    • If the command is read-only:
      • Just execute the configurable command handler for it, based on the flow API definition for that aggregate and the related current flow context with the current aggregate state, then execute any configurable post-processing actions
  2. If an event message is received:
    • Get the related event handler based on the defined flow for the aggregate, then execute it against the current flow context and aggregate state
  3. If a stop message is received:
    • It triggers a safe stop flow for the aggregate entity actor
  4. If a receive-timeout message is received:
    • It is received when an async flow is executed for a command and the aggregate entity actor's waiting-for-response mode times out, to avoid blocking the actor for a long time, which could cause starvation and performance issues
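The command and event branches above can be sketched without Akka. The following is a minimal, library-free illustration of class-based handler matching; all names are hypothetical and are not the toolkit's actual API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

// Sketch of the flow API idea: commands and events are matched to handlers
// by class type; transactional commands emit events that mutate state via
// event handlers, while read-only commands never append events.
public class AggregateFlowDemo {
    public record Deposit(long amount) {}   // state-changing command
    public record GetBalance() {}           // read-only command
    public record Deposited(long amount) {} // event

    private long balance;                                   // aggregate state
    private final List<Object> journal = new ArrayList<>(); // "persisted" events

    // Flow definition: match command/event class type to a handler.
    private final Map<Class<?>, Function<Object, List<Object>>> commandHandlers = new HashMap<>();
    private final Map<Class<?>, Consumer<Object>> eventHandlers = new HashMap<>();
    private final Map<Class<?>, Function<Object, Object>> readOnlyHandlers = new HashMap<>();

    public AggregateFlowDemo() {
        commandHandlers.put(Deposit.class,
                cmd -> List.of(new Deposited(((Deposit) cmd).amount())));
        eventHandlers.put(Deposited.class,
                evt -> balance += ((Deposited) evt).amount());
        readOnlyHandlers.put(GetBalance.class, cmd -> balance);
    }

    // Command branch: a transactional command produces events, which are
    // persisted and then applied; a read-only command just returns a result.
    public Object receive(Object cmd) {
        Function<Object, List<Object>> handler = commandHandlers.get(cmd.getClass());
        if (handler != null) {
            List<Object> events = handler.apply(cmd);
            for (Object evt : events) {
                journal.add(evt);                               // persist
                eventHandlers.get(evt.getClass()).accept(evt);  // apply to state
            }
            return events;
        }
        return readOnlyHandlers.get(cmd.getClass()).apply(cmd);
    }

    public long balance() { return balance; }
}
```

The real toolkit runs this dispatch inside a persistent actor, so persistence, post actions, and responses to the sender are asynchronous rather than the simple in-memory loop shown here.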

Now, in Part 2, we will cover the details of the Spring Boot Akka event sourcing starter, which provides the following for you:

  1. Smooth integration between Akka Persistence and Spring Boot
  2. Generic DSL for the aggregate flow definition for commands and events
  3. An abstract aggregate persistent entity actor with all the common logic in place, which can be used with concrete Spring-managed bean implementations of different aggregate entities
  4. Abstract cluster sharding runtime configuration and access via Spring Boot custom configuration, plus a generic entity broker that abstracts the cluster sharding implementation for you
