Here we will share some possible designs for using the Spring Boot event sourcing toolkit starter, along with some remarks and action points.
What are some possible designs using the toolkit for event sourcing and CQRS services:
Now I will share a working service example of how to use the event sourcing toolkit starter in practice. In the example I will show the following:
Here I am going to share a custom toolkit, wrapped as a Spring Boot starter with Akka persistence, that acts as a ready-made toolkit for an event-driven, asynchronous, non-blocking flow API, event sourcing, and a CQRS implementation within Spring Boot services, which can be part of a Spring Cloud microservices infrastructure.
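To make the event-sourcing side concrete, here is a minimal sketch of the kind of event-sourced aggregate such a starter wraps, written against the plain Akka classic persistence Java API; `AccountAggregate`, `DepositMoney`, and `MoneyDeposited` are hypothetical names for illustration and not types from the toolkit itself:

```java
import java.io.Serializable;

import akka.persistence.AbstractPersistentActor;

// Hypothetical aggregate, shown with plain Akka persistence for illustration;
// the toolkit starter wraps this kind of entity behind its own API.
public class AccountAggregate extends AbstractPersistentActor {

    // Illustrative command and event types, not part of the starter.
    public static final class DepositMoney implements Serializable {
        public final double amount;
        public DepositMoney(double amount) { this.amount = amount; }
    }

    public static final class MoneyDeposited implements Serializable {
        public final double amount;
        public MoneyDeposited(double amount) { this.amount = amount; }
    }

    private double balance = 0;

    @Override
    public String persistenceId() {
        // One persistent event stream per aggregate instance.
        return "account-" + getSelf().path().name();
    }

    @Override
    public Receive createReceiveRecover() {
        // Rebuild in-memory state by replaying journaled events.
        return receiveBuilder()
                .match(MoneyDeposited.class, evt -> balance += evt.amount)
                .build();
    }

    @Override
    public Receive createReceive() {
        // Handle the command: persist the event, then update state and reply.
        return receiveBuilder()
                .match(DepositMoney.class, cmd ->
                        persist(new MoneyDeposited(cmd.amount), evt -> {
                            balance += evt.amount;
                            getSender().tell(balance, getSelf());
                        }))
                .build();
    }
}
```

The value of a starter on top of this is mainly to hide the actor creation, command routing, and asynchronous flow wiring behind Spring Boot configuration, so services only deal with commands, events, and queries.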
Here we are going to cover a common need in Apache Ignite: what if you want to run distributed compute jobs that do data computations or external service calls using Apache Ignite distributed closures, which have a map-reduce nature, and fail fast once one of the computations fails or returns an unexpected result? How can we do that? Below we are going to explain it.
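As a minimal sketch of that fail-fast behavior, here is one way to express it with the Ignite compute task API (a `ComputeTaskSplitAdapter` rather than raw closures); the task name, the word-length payload, and the "negative value means unexpected result" rule are all assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.ignite.IgniteException;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeJobResultPolicy;
import org.apache.ignite.compute.ComputeTaskSplitAdapter;

// Illustrative fail-fast map-reduce task: each word becomes a separate job,
// and the whole task is aborted as soon as one job throws or produces an
// unexpected (negative) value.
public class FailFastWordLengthTask extends ComputeTaskSplitAdapter<List<String>, Integer> {

    @Override
    protected Collection<? extends ComputeJob> split(int gridSize, List<String> words) {
        Collection<ComputeJob> jobs = new ArrayList<>();
        for (String word : words) {
            jobs.add(new ComputeJobAdapter() {
                @Override
                public Object execute() {
                    // Placeholder for a real data computation or external service call.
                    return word.length();
                }
            });
        }
        return jobs;
    }

    @Override
    public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) {
        // Fail fast: an exception or an unexpected result aborts the task right away.
        if (res.getException() != null || res.<Integer>getData() < 0)
            throw new IgniteException("Fail fast: one computation failed or returned an unexpected result",
                    res.getException());
        return ComputeJobResultPolicy.WAIT;
    }

    @Override
    public Integer reduce(List<ComputeJobResult> results) {
        // Reduce step: combine the per-word results on the caller node.
        return results.stream().mapToInt(r -> r.<Integer>getData()).sum();
    }
}
```

Such a task would be submitted with something like `ignite.compute().execute(new FailFastWordLengthTask(), Arrays.asList("event", "sourcing"))`, and the thrown `IgniteException` surfaces on the caller as soon as the first bad result arrives instead of waiting for all jobs.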
In this post we will share a starter project that uses the Apache Ignite data grid as an event and snapshot store, mixing the benefits of event sourcing and the data grid.
The implementation is based on the Journal plugin TCK specs provided by Akka persistence.
It mainly uses Apache Ignite with Akka persistence to provide a journal and snapshot store backed by partitioned caches, benefiting from the distributed, highly available data grid features, plus Ignite's query and data computation features, which can be used to build normalized views from the event store and run analytical jobs over them, although it is advised to keep write nodes separate from read nodes for better scalability.
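As a minimal sketch of how such a plugin can be verified against the journal TCK (assuming it registers itself under an id like `akka.persistence.journal.ignite`; the actual id comes from the starter's own configuration):

```java
import akka.persistence.CapabilityFlag;
import akka.persistence.japi.journal.JavaJournalSpec;
import com.typesafe.config.ConfigFactory;

// Sketch of a journal TCK test class; the plugin id used here is an assumption.
public class IgniteJournalTckSpec extends JavaJournalSpec {

    public IgniteJournalTckSpec() {
        super(ConfigFactory.parseString(
                "akka.persistence.journal.plugin = \"akka.persistence.journal.ignite\""));
    }

    @Override
    public CapabilityFlag supportsRejectingNonSerializableObjects() {
        // Whether non-serializable events are rejected depends on the serializer used.
        return CapabilityFlag.off();
    }
}
```

The spec comes from the `akka-persistence-tck` artifact and is ScalaTest-based, so that dependency needs to be on the test classpath.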
In this post we will show how we can do the following:
How do you guarantee that a single computation task will fail over in case of node failures in Apache Ignite?
As you know, failover support in Apache Ignite for computation tasks only covers master-slave jobs, where slave nodes perform computations and then reduce back to the master node; if a slave node fails while a slave job is executing on it, that failed slave job will fail over to another node to continue execution.
OK, but what if I need to execute just a single computation task and need a failover guarantee, maybe because it is a critical task that modifies financial data or must finish in an acceptable status (success or failure)? How can we do that? It is not supported out of the box by Ignite, but we can cover it with a small design extension using the Ignite APIs. How?
The code reference is hosted on my GitHub:
https://github.com/Romeh/failover-singlejob-ignite
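One possible way to express this with the Ignite compute task API is sketched below: the single critical job is wrapped in a `ComputeTask` whose result policy asks Ignite's failover SPI to re-map the job to another node when the executing node leaves the topology. The repository linked above may implement this differently; `SingleJobFailoverTask` and its payload are illustrative names only:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.cluster.ClusterTopologyException;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeJobResultPolicy;
import org.apache.ignite.compute.ComputeTaskAdapter;

// Illustrative sketch: a task wrapping ONE critical job that fails over
// to another node if the node running it leaves the cluster.
public class SingleJobFailoverTask extends ComputeTaskAdapter<String, String> {

    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String payload) {
        // Map the single job onto one node of the current topology.
        ClusterNode target = subgrid.get(0);
        return Collections.singletonMap(new ComputeJobAdapter() {
            @Override
            public Object execute() {
                // Placeholder for the critical computation (e.g. a financial update).
                return "PROCESSED:" + payload;
            }
        }, target);
    }

    @Override
    public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) {
        // Node failure: ask Ignite's failover SPI to re-run the job elsewhere.
        if (res.getException() instanceof ClusterTopologyException)
            return ComputeJobResultPolicy.FAILOVER;
        // Business failure: fail the whole task so the caller sees a final status.
        if (res.getException() != null)
            throw res.getException();
        return ComputeJobResultPolicy.WAIT;
    }

    @Override
    public String reduce(List<ComputeJobResult> results) {
        // Single job, single result.
        return results.get(0).getData();
    }
}
```

It would be submitted with something like `ignite.compute().execute(new SingleJobFailoverTask(), "payment-123")`; note that to keep re-execution safe, the job itself still has to be idempotent or record a checkpoint of its progress.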