Spring Boot Akka Event Sourcing Starter – Part 4 – Final

In this final part I will share some possible designs you can build with the Spring Boot event sourcing toolkit starter, plus some remarks and action points.

What are some possible designs using the toolkit for event sourcing and CQRS services:

Using the toolkit with Apache Ignite and Kafka for event streaming:

(Diagram: event sourcing overview with the toolkit, Apache Ignite as the event store and Kafka for event streaming)

Here we do the following:

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if we need to support a micro-services architecture
  2. The read-side application can have a different data model tailored to its query needs
  3. We use the Apache Ignite data grid as the event store, which can easily be scaled by adding more server nodes; we can benefit from the data grid's rich features for computations and its rich SQL query support, and we use an Apache Ignite continuous query to push newly added events to Kafka (see the sketch after this list)
  4. We integrate Apache Ignite and Kafka via Kafka Connect to read the newly added events from the events cache and stream them to the read-side application and any other interested application such as fraud detection, reporting, etc.
  5. Infrastructure: Akka cluster, Ignite cluster, Kafka cluster, plus service orchestration such as Kubernetes
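
A minimal sketch of that continuous query idea, assuming a String-serialized "events" cache and a "domain-events" Kafka topic (the cache name, topic name and serialization are illustrative, not part of the starter):

import java.util.Properties;

import javax.cache.Cache;
import javax.cache.event.CacheEntryUpdatedListener;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventsToKafkaPublisher {

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, String> events = ignite.getOrCreateCache("events");

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // The local listener is notified for every entry added to the events cache
        // and forwards it to the Kafka topic consumed by the read side and other systems.
        ContinuousQuery<String, String> query = new ContinuousQuery<>();
        query.setLocalListener((CacheEntryUpdatedListener<String, String>) updates ->
                updates.forEach(e -> producer.send(new ProducerRecord<>("domain-events", e.getKey(), e.getValue()))));

        // Keep the returned cursor open for as long as new events should be forwarded.
        QueryCursor<Cache.Entry<String, String>> cursor = events.query(query);
    }
}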

Using the toolkit with Apache Cassandra:

(Diagram: event sourcing with the toolkit and Cassandra as the event store)

Here we do the following:

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if we need to support a micro-services architecture
  2. We use Cassandra as the event store
  3. We can still use Kafka Connect to stream events to other systems for read queries and other analysis and reporting needs
  4. Infrastructure: Akka cluster, Cassandra cluster, Kafka cluster, plus service orchestration such as Kubernetes

Using the toolkit with Apache Ignite only:

If your application does not need all this complexity and is just a small-sized service, you can use Ignite alone with the toolkit to implement both the write and read sides of your CQRS and event sourcing application.

(Diagram: event sourcing with the toolkit using Apache Ignite for both the write and read sides)

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if we need to support a micro-services architecture
  2. We use the Ignite data grid for the event store and for the read-side query projection, using a continuous query or cache interceptors to push each newly added event into another cache holding the target read model (a cache interceptor sketch follows this list)
  3. You can separate the read and write caches into two different cluster groups
  4. You can still use Kafka Connect to stream events to other systems if you like
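
A minimal sketch of the cache interceptor option, assuming String events, an "events" write cache and an "ordersReadModel" read cache (the names and the trivial projection are illustrative):

import javax.cache.Cache;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheInterceptorAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadModelProjectionInterceptor extends CacheInterceptorAdapter<String, String> {

    @Override
    public void onAfterPut(Cache.Entry<String, String> entry) {
        // Project every newly stored event into the read-model cache on the same node.
        Ignition.localIgnite()
                .getOrCreateCache("ordersReadModel")
                .put(entry.getKey(), entry.getValue()); // replace with a real read-model mapping
    }

    // The interceptor is registered on the write-side cache configuration:
    public static CacheConfiguration<String, String> eventsCacheConfiguration() {
        return new CacheConfiguration<String, String>("events")
                .setInterceptor(new ReadModelProjectionInterceptor());
    }
}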

Using the toolkit with Apache Ignite and Kafka Streams:

(Diagram: event sourcing with the toolkit, Apache Ignite and Kafka Streams on the read side)

  1. We use the event sourcing toolkit starter to define the domain write service that acts as the command side; we can also benefit from Spring Cloud if we need to support a micro-services architecture
  2. We use Apache Ignite for the event store, with Kafka Connect to stream the events
  3. We use Kafka Streams to implement the read side, as sketched below
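
A minimal Kafka Streams sketch of such a read side, assuming a String-keyed "domain-events" topic and a simple count-per-aggregate projection (topic, store name and projection are illustrative):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class ReadSideStreams {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "read-side");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Consume the event stream, group it by aggregate id and materialize a simple
        // projection (here just an event count) into a queryable state store.
        KStream<String, String> events = builder.stream("domain-events");
        KTable<String, Long> eventsPerAggregate =
                events.groupByKey().count(Materialized.as("events-per-aggregate"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}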

Of course there are many other possible designs; I have just shared a few here. Now let's summarize some remarks and action points to take into consideration.

Summary notes:

  1. Event sourcing and CQRS are not a silver bullet for every need; use them only where they are really needed and fit the actual problem at hand
  2. You need distributed tracing and monitoring across your different clusters for better traceability and error handling
  3. With Akka Persistence, you need to cover the following when using it for your domain entities:
    1. Use a split brain resolver when using Akka clustering, to avoid split-brain scenarios and to get predictable behavior when the cluster partitions
    2. Make sure not to use Java serialization, as it really hurts the performance and throughput of your application with Akka Persistence
    3. You need to think through the active-active model for cross-data-center support, due to the cluster sharding limitations in that area; this is covered in the next points below
  4. When it comes to an active-active support model for your application, you have multiple options for active-active data center support, each coming with a latency and performance impact; nothing is free anyhow:
    1. The Akka Persistence active-active model support extension, which is a commercial add-on: Akka-Persistance-Active-Active
    2. If you use Apache Ignite as your event store, you have two options:
      1. You can use a backing store for your data grid that supports cross-data-center replication, for example Cassandra
      2. You can use the cross-data-center replication feature of GridGain, the commercial version of Apache Ignite
    3. You can use Kafka cluster cross-data-center replication to replicate your event data across multiple data centers
    4. If you use Cassandra as the event store, you can use Cassandra's cross-data-center replication feature
    5. In the end you need to think through how you will handle the active-active model for your event-sourced entities and all its side effects on state replication and reconstruction, especially with Akka Persistence, where it will most likely not be supported without the commercial add-on unless you implement your own solution for it

I hope I have shared some useful insights; they are open for discussion and validation anytime.


Spring Boot Akka Event Sourcing Starter – Part 1

Here I am going to share a custom toolkit, wrapped as a Spring Boot starter around Akka Persistence, that acts as a ready-made toolkit for an event-driven, asynchronous, non-blocking flow API, event sourcing and CQRS implementation within Spring Boot services, which can be part of a Spring Cloud micro-services infrastructure. We will cover the following:

  1. Overview of the toolkit for DDD, event sourcing and CQRS implementation
  2. The integration between Akka Persistence and Spring Boot via a starter implementation, with abstractions for the entity aggregate, cluster sharding, integration testing and flow definition
  3. A working application example that showcases how it can be used
  4. Summary of possible designs
  5. What is next and special remarks

The Overview :

Before going through the toolkit implementation, you should first go through the principles of domain-driven design, event sourcing and CQRS. Here is one good URL that can help you get a nice overview and understand the pros and cons of that design, when you need it and when you do not:

Instead of implementing those patterns from scratch, I have decided to use Akka Persistence to apply the core principles of event sourcing, plus my own layer on top to abstract how you define your aggregate with its command and event handling flow.

Within the toolkit, the aggregate command and flow handling is as follows:

(Diagram: aggregate command and event handling flow)

The flow definition API is as follows:

  • There are state-changing command handler definitions which match a command class type to a specific command handler
  • There are event handlers that match an event class type to an event handler which performs the logic related to that event
  • There are read-only command handlers which do not change the state of the aggregate entity; they can be used for query actions or other actions that do not mutate the entity state by appending new events (a conceptual sketch of these handler registrations follows below)
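
To illustrate the idea only (the class and method names below are hypothetical and do not reflect the starter's actual API), the flow definition boils down to registering handlers per command and event class and dispatching on the concrete message type:

import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class FlowDefinitionSketch {

    // Minimal stand-ins for an aggregate state, a command and an event.
    static class OrderState { int total; }
    static class AddItem { final int amount; AddItem(int amount) { this.amount = amount; } }
    static class ItemAdded { final int amount; ItemAdded(int amount) { this.amount = amount; } }

    // Command handlers take (command, current state) and produce an event to persist;
    // event handlers take (event, current state) and return the updated state.
    private final Map<Class<?>, BiFunction<Object, OrderState, Object>> commandHandlers = new HashMap<>();
    private final Map<Class<?>, BiFunction<Object, OrderState, OrderState>> eventHandlers = new HashMap<>();

    FlowDefinitionSketch() {
        commandHandlers.put(AddItem.class, (cmd, state) -> new ItemAdded(((AddItem) cmd).amount));
        eventHandlers.put(ItemAdded.class, (evt, state) -> { state.total += ((ItemAdded) evt).amount; return state; });
    }

    public static void main(String[] args) {
        FlowDefinitionSketch flow = new FlowDefinitionSketch();
        OrderState state = new OrderState();

        // Dispatch a command by its class, get the resulting event (persistence omitted here),
        // then apply the event to the state through the matching event handler.
        Object event = flow.commandHandlers.get(AddItem.class).apply(new AddItem(5), state);
        state = flow.eventHandlers.get(event.getClass()).apply(event, state);

        System.out.println(state.total); // prints 5
    }
}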

So the different semantic branches of the flow API are:

  1. If a command message is received
    • if the command is transactional:
      1. Get the related command handler for that command type based on the flow API definition for that aggregate and the related current flow context with the current aggregate state
      2. Execute the command handler logic, which triggers one of the following two cases:
        • A single event is persisted, then any configurable post action is executed after persisting the event to the event store, such as post-processing and sending a response back to the sender
        • A list of events is persisted, then any configurable post action is executed after persisting the events to the event store, such as post-processing and sending a response back to the sender
    • if the command is read-only:
      • Just execute the configurable command handler for it based on the flow API definition for that aggregate and the related current flow context with the current aggregate state, then execute any configurable post-processing actions
  2. If an event message is received
    • Get the related event handler based on the defined flow for the aggregate, then execute it against the current flow context and aggregate state
  3. If a stop message is received
    • It triggers a safe stop flow for the aggregate entity actor
  4. If a receive time-out message is received
    • It is received when an asynchronous flow is being executed for a command and the aggregate entity actor's waiting-for-response mode has timed out, to avoid blocking the actor for a long time, which can cause starvation and performance issues

Now in Part 2 we will cover the Spring Boot Akka event sourcing starter in detail, covering the following for you:

  1. Smooth integration between Akka Persistence and Spring Boot
  2. Generic DSL for the aggregate flow definition for commands and events
  3. An abstract aggregate persistent entity actor with all the common logic in place, which can be used with concrete Spring-managed bean implementations of the different aggregate entities
  4. Abstract cluster sharding run-time configuration and access via Spring Boot custom configuration, plus a generic entity broker that abstracts the cluster sharding implementation for you


Spring boot with Ehcache 3 and JSR-107

Here we are going to cover how to use Ehcache 3 as the Spring caching provider in Spring Boot, based on JSR-107. Before we start, we need to highlight what JSR-107 is:

JSR-107(JCache) Annotations:

With regards to caching, Spring offers support for two sets of annotations that can be used to implement caching: the original Spring annotations and the newer JSR-107 annotations. For more information you can check:

https://spring.io/blog/2014/04/14/cache-abstraction-jcache-jsr-107-annotations-support

Steps to use Ehcache 3 with Spring Boot:

1- Create a Spring Boot Maven project

2- Add the following Maven dependencies to your pom.xml along with the Spring Boot dependencies
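
A minimal sketch of those dependencies, assuming Spring Boot's dependency management (the Ehcache version shown is only an example):

<!-- Spring's caching abstraction -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<!-- JSR-107 (JCache) API -->
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
</dependency>
<!-- Ehcache 3 as the JCache provider -->
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.4.0</version>
</dependency>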

3- Set the spring.cache.jcache.config property to point to the ehcache.xml file on the classpath, by enabling the following in the application.yml file

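A minimal sketch of that property (the file name and classpath location are the usual defaults; adjust them to your project):

spring:
  cache:
    jcache:
      config: classpath:ehcache.xml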

4- Enable caching in the Spring Boot main class
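
For example (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@EnableCaching // turns on Spring's caching abstraction, backed here by the JCache/Ehcache provider
@SpringBootApplication
public class CacheApplication {

    public static void main(String[] args) {
        SpringApplication.run(CacheApplication.class, args);
    }
}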

5- Configure your Ehcache XML file, for example as follows
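
A minimal ehcache.xml sketch (the cache alias, key/value types, expiry and sizing are illustrative):

<config xmlns="http://www.ehcache.org/v3">

    <cache alias="alerts">
        <key-type>java.lang.String</key-type>
        <value-type>java.lang.String</value-type>
        <expiry>
            <ttl unit="minutes">10</ttl>
        </expiry>
        <resources>
            <heap unit="entries">1000</heap>
            <offheap unit="MB">10</offheap>
        </resources>
    </cache>
</config>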

6- Then you can easily inject the cache manager into your bean class if you do not want to use just the simple annotations to enable caching for your operations

7- Start accessing your caches from the cache manager if you want to perform direct operations over them, as sketched below; please check EhcacheAlertsStore.java in the GitHub project for more information
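
A minimal sketch of such direct access, assuming the "alerts" cache configured above (the store class and value type are illustrative, not the EhcacheAlertsStore from the GitHub project):

import javax.cache.CacheManager;

import org.springframework.stereotype.Component;

@Component
public class AlertsStore {

    private final CacheManager cacheManager;

    // The JCache CacheManager is auto-configured by Spring Boot and can be injected directly.
    public AlertsStore(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    public void store(String id, String alert) {
        cacheManager.getCache("alerts", String.class, String.class).put(id, alert);
    }

    public String find(String id) {
        return cacheManager.getCache("alerts", String.class, String.class).get(id);
    }
}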

8- The complete code sample for testing is on GitHub, where you can run it and play with the REST APIs for cache operations via the generated run-time Swagger UI:

https://github.com/Romeh/spring-boot-web-Ehchache-3-Persistance

 


Spring boot with embedded config server via spring cloud config

When it comes to micro-services, it is quite normal to have a configuration server that all your services connect to in order to fetch their own configuration. But what if you just need to externalize your configuration and make it manageable via source control like Git, while your infrastructure is not yet ready for a micro-services deployment and operations model?


What if you have a Spring Boot app and want to use the Spring Cloud Config semantics to do the same for you? Is it possible to start an embedded Spring Cloud Config server inside your Spring Boot app to fetch its configuration remotely, from Git for example? The answer is yes, and I am going to show how:

(Diagram: Spring Boot application with an embedded Spring Cloud Config server backed by Git)

The steps needed are the following :

1- Add the following Maven dependencies to your Spring Boot app pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>

2- Add the Spring Cloud configuration pointing to your Git configuration repository in the bootstrap.yml file of your Spring Boot application:
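
A minimal bootstrap.yml sketch (the application name and repository URL are placeholders; spring.cloud.config.server.bootstrap set to true is what starts the embedded config server during bootstrap, and the application class also needs the @EnableConfigServer annotation):

spring:
  application:
    name: sample-app
  cloud:
    config:
      server:
        bootstrap: true
        git:
          uri: https://github.com/your-org/your-config-repo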

3- Add your application configuration YAML file to the Git repository, named after your Spring Boot application name

4- Start your app and you should see in the console that it is fetching its configuration from the remote Git repository

Remarks :

  1. In production you should enable HTTPS and authentication between your embedded config server and the Git repo
  2. You can use Spring Cloud Config encryption to encrypt any sensitive data in your configuration, such as passwords
  3. You should use Spring Cloud Stream with Kafka, or other options (e.g. Spring Cloud Bus), to push configuration changes and force a reload of the values without having to restart the app

References :

  1. Spring Cloud Config: https://cloud.spring.io/spring-cloud-config/
  2. Code sample on GitHub: https://github.com/Romeh/spring-boot-sample-app

How to write a Spring Boot web Maven archetype with common practices in place

Here I am sharing a custom Spring Boot web Maven archetype I have created to encapsulate all the common practices, as an example of how you can do the same in your team for common standards that might be imposed by your company or your team.


This is a Maven archetype for a Spring Boot web application which has all the common standards in place, ready for development:

  • Java 1.8+
  • Maven 3.3+
  • Spring boot 1.5.6+
  • Lombok abstraction
  • JPA with H2 for demonstration
  • Swagger 2 API documentation
  • Spring retry and circuit breaker for external service call
  • REST API model validation
  • Spring cloud config for external configuration on GIT repository
  • Cucumber and Spring Boot test for integration test
  • Jenkins Pipeline for multi branch project
  • Continuous delivery and integration standards with Sonar checks and release management
  • Support retry in sanity checks
  • Logback configuration

Installation

To install the archetype in your local repository, execute the following commands:

$ git clone https://github.com/Romeh/spring-boot-quickstart-archtype.git
$ cd spring-boot-quickstart-archtype
$ mvn clean install

Create a project

$ mvn archetype:generate \
     -DarchetypeGroupId=com.romeh.spring-boot-archetypes \
     -DarchetypeArtifactId=spring-boot-quickstart \
     -DarchetypeVersion=1.0.0 \
     -DgroupId=com.test \
     -DartifactId=sampleapp \
     -Dversion=1.0.0-SNAPSHOT \
     -DinteractiveMode=false

Test the generated app REST API via Swagger:

http://localhost:8080/swagger-ui.html

A sample app generated from that archetype can be found here:

https://github.com/Romeh/spring-boot-sample-app

 

References :

  1. https://projects.spring.io/spring-boot/
  2. https://maven.apache.org/guides/introduction/introduction-to-archetypes.html

Spring boot integration test with cucumber and Jenkins pipeline

Here I am sharing how you can integrate Cucumber for behavior-driven testing with Spring Boot integration tests, and how you can collect the reports in a Jenkins pipeline.

 

Using a sample Spring Boot app generated from my custom Spring Boot archetype, we will show a small integration test suite with Cucumber and Spring Boot.

Steps to follow are :

1- Add the Cucumber Maven dependencies to your Spring Boot pom.xml:

<!-- Cucumber-->
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>${cucumber-version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>${cucumber-version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-spring</artifactId>
    <version>${cucumber-version}</version>
    <scope>test</scope>
</dependency>

2- Define the Cucumber features in your test resources:


3- Define the feature implementations to be executed against your Spring Boot app logic:

The feature description:

The feature implementation:

4- How to execute the integration test :

You need to configure the root executor with the Cucumber runner, as follows:
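
A minimal sketch of such a runner for the info.cukes Cucumber artifacts used above (the feature path, glue package and plugins are illustrative):

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// JUnit delegates to the Cucumber runner, which discovers the feature files
// and the step definition (glue) classes.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features",
        glue = "com.sample.app.cucumber",
        plugin = {"pretty", "json:target/cucumber.json"})
public class CucumberIntegrationIT {
}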

The integration test itself is triggered via a Spring Boot integration test:
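
An illustrative step definition class: cucumber-spring bootstraps the Spring Boot test context declared here, so steps can use injected beans such as TestRestTemplate (depending on the cucumber-spring version you may also need @ContextConfiguration; the endpoint and assertions are made up, not the sample app's real API):

import static org.junit.Assert.assertEquals;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;

import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class AlertStepDefinitions {

    @Autowired
    private TestRestTemplate restTemplate;

    private int responseStatus;

    @When("^the client calls the health endpoint$")
    public void theClientCallsTheHealthEndpoint() {
        responseStatus = restTemplate.getForEntity("/health", String.class).getStatusCodeValue();
    }

    @Then("^the response status is (\\d+)$")
    public void theResponseStatusIs(int expected) {
        assertEquals(expected, responseStatus);
    }
}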

5- How to collect the test reports in the Jenkins pipeline:

A complete working sample is here:

GitHub: https://github.com/Romeh/spring-boot-sample-app

References :

  1. Cucumber: https://cucumber.io/
  2. Spring Boot: https://projects.spring.io/spring-boot/