Here we are going to cover how to use Ehcache 3 as the Spring caching provider in Spring Boot, based on JSR-107. Before we start, let us briefly highlight what JSR-107 is:
Regarding caching, Spring offers support for two sets of annotations that can be used to implement caching: the original Spring annotations and the newer JSR-107 annotations. For more information you can check:
Steps to use Ehcache 3 with Spring Boot:
1- Create a Spring Boot Maven project
2- Add the following Maven dependencies to your pom.xml, along with the Spring Boot dependencies:
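The typical set is the Spring cache starter, the JSR-107 (JCache) API, and Ehcache 3; a sketch of the dependency entries (versions are managed by the Spring Boot parent or should be set to your target versions):

```xml
<!-- Spring's caching abstraction -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<!-- JSR-107 (JCache) API -->
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
</dependency>
<!-- Ehcache 3, which implements JSR-107 -->
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
</dependency>
```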
3- Set the spring.cache.jcache.config property to point to the ehcache.xml file on the classpath by adding the following to your application.yml file:
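A minimal application.yml entry for this (assuming ehcache.xml sits at the root of the classpath):

```yaml
spring:
  cache:
    jcache:
      config: classpath:ehcache.xml
```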
4- Enable caching in your Spring Boot main class:
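A minimal sketch of the main class (the class name here is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching // switches on Spring's caching abstraction
public class CacheApplication {
    public static void main(String[] args) {
        SpringApplication.run(CacheApplication.class, args);
    }
}
```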
5- Configure your Ehcache XML file as follows:
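An illustrative ehcache.xml (the cache name, capacity, and TTL are example values, not the project's actual configuration):

```xml
<config xmlns="http://www.ehcache.org/v3">
    <!-- example cache named "alerts" with a heap capacity and a time-to-live -->
    <cache alias="alerts">
        <expiry>
            <ttl unit="minutes">10</ttl>
        </expiry>
        <heap unit="entries">100</heap>
    </cache>
</config>
```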
6- Then you can easily inject the cache manager into your bean class if the simple caching annotations are not enough for your operations.
7- Start accessing your caches from the cache manager if you want to perform direct operations on them, like below; please check EhcacheAlertsStore.java in the GitHub project for more information.
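For direct operations you work against the standard javax.cache API; a sketch of the idea (the bean name, cache name "alerts", and String key/value types are illustrative, the real code is in EhcacheAlertsStore.java):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class AlertsStore {

    private final CacheManager cacheManager;

    public AlertsStore(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    public void saveAlert(String id, String alert) {
        // look up the cache declared in ehcache.xml and write to it directly
        Cache<String, String> cache = cacheManager.getCache("alerts");
        cache.put(id, alert);
    }

    public String getAlert(String id) {
        Cache<String, String> cache = cacheManager.getCache("alerts");
        return cache.get(id);
    }
}
```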
8- A complete code sample for testing is on GitHub, where you can run it and play with the REST APIs for cache operations via the generated runtime Swagger UI:
- Ehcache 3.0 Documentation
- Spring Cache Abstraction
- Spring Cache Abstraction, JCache (JSR-107) annotations
Here we are going to cover a common need in Apache Ignite: what if you want to run distributed compute jobs that perform data computations or external service calls using Apache Ignite distributed closures, which have a map/reduce nature, and fail fast once one of the computations fails or returns an unexpected result? How can you do that? We explain it below.
- The main node submits a collection of Ignite callables plus the custom fail-fast reducer that we will explain in detail later
- The list of jobs is distributed among the server nodes in the current cluster topology with the same cluster group for actual execution, using the distributed parallel map/reduce execution of the Ignite compute grid in a synchronous or asynchronous non-blocking way
- Each single job returns its result or error to the fail-fast reducer, which, upon receiving the result of each compute task, determines whether it can keep collecting other results before reducing the final aggregated result, or must fail fast immediately once one of the jobs has failed or returned an unexpected result
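The core idea of the fail-fast reducer can be sketched in plain Java, independent of the Ignite API (Ignite's IgniteReducer has a similar collect/reduce contract, where returning false from collect stops the collection of further results); the class and method shapes below are illustrative, not the project's actual classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Plain-Java sketch of the fail-fast collect/reduce contract:
// collect(..) returns false to tell the caller to stop sending results.
class FailFastReducer<T> {
    private final List<T> results = new ArrayList<>();
    private final AtomicBoolean failed = new AtomicBoolean(false);

    // Called once per finished job; returning false aborts further collection.
    public synchronized boolean collect(T result, boolean jobFailed) {
        if (jobFailed) {
            failed.set(true);
            return false; // fail fast: stop waiting for the remaining jobs
        }
        results.add(result);
        return true; // keep collecting
    }

    // Called once at the end to build the aggregated response.
    public synchronized List<T> reduce() {
        if (failed.get()) {
            throw new IllegalStateException("one of the compute jobs failed");
        }
        return results;
    }
}
```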
So how is it implemented?
- The fail-fast Ignite compute grid reducer:
- A generic Ignite compute utility to trigger the map/reduce tasks in a synchronous or asynchronous non-blocking way:
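The triggering side boils down to Ignite's IgniteCompute.call and callAsync, which take the collection of callables plus a reducer; a sketch (the service class and method names are illustrative, see the GitHub project for the real utility):

```java
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.lang.IgniteFuture;
import org.apache.ignite.lang.IgniteReducer;

// Sketch: trigger the map/reduce execution over the cluster, sync or async.
public class ComputeService {

    private final Ignite ignite;

    public ComputeService(Ignite ignite) {
        this.ignite = ignite;
    }

    // Blocking execution: jobs are distributed, results flow into the reducer.
    public <T, R> R execute(Collection<IgniteCallable<T>> jobs,
                            IgniteReducer<T, R> reducer) {
        return ignite.compute().call(jobs, reducer);
    }

    // Non-blocking execution: returns a future completed with the reduced result.
    public <T, R> IgniteFuture<R> executeAsync(Collection<IgniteCallable<T>> jobs,
                                               IgniteReducer<T, R> reducer) {
        return ignite.compute().callAsync(jobs, reducer);
    }
}
```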
- The custom aggregated reducer response class:
- The single task response class:
- An example service calling the Ignite compute grid with the distributed closures; we will use the synchronous way for testing the execution:
- A unit test for the fail-fast and successful cases using a Spring Boot integration test:
- Ignite compute grid: https://apacheignite.readme.io/docs/compute-grid
- The code is on GitHub: https://github.com/Romeh/spring-boot-ignite
In this post we will share a starter project that uses the Apache Ignite data grid as an event and snapshot store, to mix the benefits of event sourcing and the data grid.
The implementation is based on the Journal plugin TCK specs provided by Akka Persistence.
It mainly uses Apache Ignite with Akka Persistence to provide a journal and snapshot store backed by partitioned caches, benefiting from the distributed, highly available data grid features plus the nice query and data computation features in Ignite. These can be used to build normalized views from the event store and run analytical jobs over them, although it is advised to keep write nodes separate from read nodes for better scalability.
Akka and Ignite used versions:
Akka version: 2.5.7+, Ignite version: 2.3.0+
- All operations required by the Akka Persistence journal plugin API are fully supported.
- It uses an Apache Ignite partitioned cache with the default number of backups set to 1, which can be changed in the reference.conf file.
Snapshot store plugin
How to use
Enable the plugins in your Akka cluster configuration:
akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
Configure the Ignite data grid properties; by default it is configured to run on localhost.
// whether to start a client or a server node to connect to the Ignite data cluster
isClientNode = false
// for ONLY testing we use localhost
// used for grid cluster connectivity
tcpDiscoveryAddresses = "localhost"
metricsLogFrequency = 0
// thread pools used by Ignite, should be sized based on the target machine specs
queryThreadPoolSize = 4
dataStreamerThreadPoolSize = 1
managementThreadPoolSize = 2
publicThreadPoolSize = 4
systemThreadPoolSize = 2
rebalanceThreadPoolSize = 1
asyncCallbackPoolSize = 4
peerClassLoadingEnabled = false
// to enable or disable durable memory persistence
enableFilePersistence = true
// used for grid cluster connectivity, change it to suit your configuration
igniteConnectorPort = 11211
// used for grid cluster connectivity , change it to suit your configuration
igniteServerPortRange = "47500..47509"
// durable memory persistence storage file system path, change it to suit your configuration
ignitePersistenceFilePath = "./data"
Now you have Ignite enabled as your journal and snapshot plugin; it will start as a server or a client node based on the configuration above.
Technical details:
The main journal implementation class is IgniteWriteJournal:
The main snapshot implementation class is IgniteSnapshotStore:
For more details feel free to dive into the code base; it is a small code base for now!
- This is a work in progress and any contribution would be really helpful; please check the project on GitHub and give a hand!
When it comes to microservices, it is quite normal to have a configuration server that all your services connect to in order to fetch their own configuration. But what if you just need to externalize your configuration and make it manageable via source control like Git, and your infrastructure is not yet ready for a microservices deployment and operations model?
What if you have a Spring Boot app and you want to use the Spring Cloud Config semantics to do the same for you? Is it possible to start an embedded Spring Cloud Config server inside your Spring Boot app so it fetches its configuration remotely, from Git for example? The answer is yes, and I am going to show how:
The steps needed are the following :
1- Add the following Maven dependency to your Spring Boot app's pom.xml:
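The config server library itself is the dependency you need (the version is normally managed by the Spring Cloud BOM):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>
```

Per the Spring Cloud documentation, for the embedded mode you also annotate your main class with @EnableConfigServer.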
2- Add the Spring Cloud configuration pointing to your Git configuration repository in the bootstrap.yml file of your Spring Boot application:
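A sketch of the bootstrap.yml (the application name and repository URI are illustrative placeholders; spring.cloud.config.server.bootstrap=true is what makes the app configure itself from its own embedded config server):

```yaml
spring:
  application:
    name: sample-app            # must match the yml file name in the Git repo
  cloud:
    config:
      server:
        bootstrap: true         # run the config server embedded in this app
        git:
          uri: https://github.com/your-account/your-config-repo   # illustrative
```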
3- Add your application configuration yml file to the Git repository, named after your Spring Boot application
4- Start your app and you should see in the console that it is fetching your yml file from the remote Git server, as shown below
- In production you should enable HTTPS and authentication between your embedded config server and the Git repo
- You can use Spring Cloud Config encryption to encrypt any sensitive data in your configuration, like passwords
- You should use Spring Cloud Stream with Kafka or other options (e.g. Spring Cloud Bus) to push configuration changes and force a reload of the values without having to restart the app
- Spring Cloud Config: https://cloud.spring.io/spring-cloud-config/
- Code sample on GitHub: https://github.com/Romeh/spring-boot-sample-app
Before we show how to apply continuous delivery and deployment using Jenkins pipeline as code, we need to refresh our minds about the differences between continuous integration, continuous delivery, and continuous deployment, as shown below:
What about the release process and versioning in continuous delivery?
Basically, I am not a big fan of the Maven release plugin, for several reasons.
Let’s consider the principles of Continuous Delivery.
- “Every build is a potential release”
- “Team need to eliminate manual bottlenecks”
- “Team need to automate wherever possible”
Current drawbacks and limitations of the Maven release approach:
- Overhead. The maven-release plugin runs 3 (!) full build and test cycles, manipulates the POM 2 times and creates 3 Git revisions.
- Not isolated. The plugin can easily get into a mess when someone else commits changes during the release build.
- Not atomic. If something goes wrong in the last step (e.g. during the artifact upload), we are facing an invalid state: we have to clean up the created Git tag and revisions and fix the wrong version in the POM manually.
So let us simplify the versioning process in an efficient way within our continuous delivery model:
- checking out the software as it is
- building, testing and packaging it
- giving it a version so it can be uniquely identified and traceable (release step)
- deploying it to an artifact repository where it can then be picked up for actual rollout on target machines
- tagging this state in SCM so it can be associated with the matching artifact
Which leads to :
- Every artifact is potentially shippable, so there is no need for a dedicated release workflow anymore!
- The delivery pipeline is significantly simplified and automated
- Traceable: it is obvious which commits are included
So how can we implement continuous delivery and conditional continuous deployment using a Jenkins pipeline? Of course you can add more stages based on your team's process, for example a performance test; here we stop at acceptance, as production would be just the same.
We will use Jenkins pipeline as code, as it gives the team more power to control their delivery process:
- When a Git commit is pushed to the development branch, it triggers the Jenkins pipeline to check out the code, build it, and unit test it; if all is green, it triggers the integration tests plus the Sonar check for your code quality gate
- If all is green, deploy to the development server using the package created in the Jenkins workspace, or from Nexus snapshots if you prefer
- Then, if there is a release candidate, which means a merge request to your master branch, it will perform the same steps as above plus create the release using the versioning approach explained above
- Push the released artifact to Nexus, deploy to acceptance, and run the automated acceptance functional tests
- Then it prompts for promotion to production, and if approved, it is deployed to production
- Of course, the deployment task must be automated if you need continuous deployment, using an automation tool like Ansible or other options
Now let us show how it is done via Jenkins pipeline code:
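The stages above can be sketched as a declarative Jenkinsfile; the stage names, Maven goals, version scheme, and deploy.sh script below are illustrative, the real Jenkinsfile lives in the GitHub repository:

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps { sh 'mvn clean verify' }
        }
        stage('Integration Test & Sonar') {
            steps { sh 'mvn verify sonar:sonar' }
        }
        stage('Release') {
            when { branch 'master' }
            steps {
                // unique, traceable version: base version plus the build number
                sh "mvn versions:set -DnewVersion=1.0.${BUILD_NUMBER}"
                sh 'mvn deploy'
                sh "git tag v1.0.${BUILD_NUMBER}"
            }
        }
        stage('Deploy to Acceptance') {
            when { branch 'master' }
            steps { sh './deploy.sh acceptance' } // hypothetical deployment script
        }
        stage('Deploy to Production') {
            when { branch 'master' }
            steps {
                input message: 'Promote to production?'
                sh './deploy.sh production' // hypothetical deployment script
            }
        }
    }
}
```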
Then you can use it by creating a multibranch project in Jenkins and configuring it to use Jenkins pipeline as code.
- Jenkins pipeline: https://jenkins.io/doc/book/pipeline/
- Jenkins file with a sample app for testing the pipeline execution on GitHub: https://github.com/Romeh/spring-boot-sample-app
Here I am sharing a custom Spring Boot web Maven archetype I have created to encapsulate all the common practices, as an example of how you can do the same in your team for common standards that could be imposed by your company or your team.
This is a Maven archetype for a Spring Boot web application which has all the common standards in place, ready for development:
- Java 1.8+
- Maven 3.3+
- Spring boot 1.5.6+
- Lombok abstraction
- JPA with H2 for demonstration
- Swagger 2 API documentation
- Spring Retry and circuit breaker for external service calls
- REST API model validation
- Spring Cloud Config for external configuration on a Git repository
- Cucumber and Spring Boot Test for integration tests
- Jenkins pipeline for multibranch projects
- Continuous delivery and integration standards with Sonar checks and release management
- Support retry in sanity checks
- Logback configuration
To install the archetype in your local repository, execute the following commands:
$ git clone https://github.com/Romeh/spring-boot-quickstart-archtype.git
$ cd spring-boot-quickstart-archtype
$ mvn clean install
Create a project
$ mvn archetype:generate \
Test the generated app's REST API via Swagger
A sample app generated from that archetype can be found here :
Here I am sharing how you can integrate Cucumber, for behavior-driven testing, with a Spring Boot integration test, and how to collect the reports in a Jenkins pipeline.
In a sample spring boot app generated from my custom spring boot archetype we will show a small integration test suite with cucumber and spring boot.
Steps to follow are:
1- Add the Cucumber Maven dependencies to your Spring Boot pom.xml:
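A sketch of the test-scoped dependencies; note that depending on your Cucumber version the group id is info.cukes (1.x, matching this Spring Boot 1.5 era) or io.cucumber (2+), and versions should be set to your target release:

```xml
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-spring</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <scope>test</scope>
</dependency>
```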
2- Define the Cucumber features in your test resources:
3- Define the feature implementations to be executed against your Spring Boot app logic:
The feature description:
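An illustrative feature file (the path, feature, and scenario below are made-up examples, not the sample project's actual feature):

```gherkin
# src/test/resources/features/sample.feature
Feature: Get a quote
  As an API user I want to retrieve a quote by its id

  Scenario: Retrieve an existing quote
    Given the application is up
    When I request the quote with id 1
    Then I receive a response with status 200
```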
The feature implementation:
4- How to execute the integration test:
You need to configure the root test executor with the Cucumber runner as follows:
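A sketch of the runner class (the class name, feature path, and glue package are illustrative; the imports match Cucumber 1.x/2.x, adjust for newer versions):

```java
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// Root executor: JUnit delegates to the Cucumber runner, which picks up the
// feature files and the step definitions (glue).
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",
        plugin = {"json:target/cucumber.json"}) // JSON report consumed by Jenkins
public class CucumberIntegrationTest {
}
```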
The integration test is triggered via a Spring Boot integration test:
5- How to collect the test reports in a Jenkins pipeline:
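With the JSON reports produced by the runner above, a pipeline stage can publish them; this sketch assumes the Jenkins "Cucumber reports" plugin is installed (the Maven goal and file pattern are illustrative):

```groovy
stage('Integration Test') {
    steps {
        sh 'mvn verify'
        // publish the Cucumber JSON reports collected from the test run
        cucumber fileIncludePattern: '**/cucumber.json'
    }
}
```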
A complete working sample is here :
- Cucumber: https://cucumber.io/
- Spring Boot: https://projects.spring.io/spring-boot/