Spring Boot with an embedded config server via Spring Cloud Config

When it comes to microservices, it is normal to have a configuration server that all your services connect to in order to fetch their own configuration. But what if you just need to externalize your configuration and manage it via source control such as Git, while your infrastructure is not yet ready for a microservices deployment and operations model?


What if you have a Spring Boot app and you want to use the Spring Cloud Config semantics to do the same for you? Is it possible to start an embedded Spring Cloud Config server inside your Spring Boot app so it fetches its configuration remotely, from Git for example? The answer is yes, and I am going to show how:


The steps needed are the following:

1- Add the following Maven dependencies to your Spring Boot app's pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>

2- Add the Spring Cloud configuration pointing to your Git configuration repository in the bootstrap.yml file of your Spring Boot application:
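As a sketch, the bootstrap.yml could embed the config server and point it at a Git repository like this (the repository URI and application name are placeholders, not the ones from the sample app); the main class also needs the @EnableConfigServer annotation:

```yaml
spring:
  application:
    name: sampleapp            # must match the config file name in the Git repo
  cloud:
    config:
      server:
        bootstrap: true        # run the config server embedded in this same app
        git:
          uri: https://github.com/your-org/your-config-repo
```

With spring.application.name set to sampleapp, the embedded server looks for sampleapp.yml (or sampleapp.properties) in that repository.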

3- Add your application's configuration YAML file to the Git repository, named after your Spring Boot application name.

4- Start your app, and you should see in the console that it is fetching your YAML file from the remote Git repository.

Remarks :

  1. In production you should enable HTTPS and authentication between your embedded config server and the Git repository.
  2. You can use Spring Cloud Config encryption to encrypt any sensitive data in your configuration, such as passwords.
  3. You can use Spring Cloud Stream with Kafka, or other options such as Spring Cloud Bus, to push configuration changes and force a reload of the values without having to restart the app.
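For the encryption remark: with an encrypt.key configured on the config server, sensitive values can be stored in Git with the {cipher} prefix and are decrypted before being handed to the application. An illustration with a made-up ciphertext:

```yaml
# application yml stored in Git; the blob below is a placeholder,
# produced via the config server's /encrypt endpoint
spring:
  datasource:
    password: '{cipher}AQAvY2f3...'
```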

References :

  1. Spring cloud config : https://cloud.spring.io/spring-cloud-config/
  2. Code sample in GitHub : https://github.com/Romeh/spring-boot-sample-app

Jenkins pipeline for continuous delivery and deployment

Before we show how to apply continuous delivery and deployment using Jenkins pipeline as code, we need to refresh our minds about the difference between continuous integration vs. continuous delivery vs. continuous deployment:


What about the release process and versioning in continuous delivery?

Basically, I am not a big fan of the Maven release plugin, for a few reasons.

Let’s consider the principles of Continuous Delivery.

  • “Every build is a potential release”
  • “Team need to eliminate manual bottlenecks”
  • “Team need to automate wherever possible”

Current drawbacks and limitations of the maven-release-plugin approach:

  • Overhead. The maven-release plugin runs 3 (!) full build and test cycles, manipulates the POM 2 times and creates 3 Git revisions.
  • Not isolated. The plugin can easily get in a mess when someone else commits changes during the release build.
  • Not atomic. If something went wrong in the last step (e.g. during the artifact upload), we are facing an invalid state. We have to clean up the created Git tag, Git revisions and fix the wrong version in the pom manually.

So let us simplify the versioning process in an efficient way within our continuous delivery model:

  • checking out the software as it is
  • building, testing and packaging it
  • giving it a version so it can be uniquely identified and traceable (release step)

  Version=Major.Minor.Patch.GitCommit-Id

  • deploying it to an artifact repository where it can then be picked for actual roll out on target machines
  • tagging this state in SCM so it can be associated with the matching artifact

Which leads to :

  • Every artifact is potentially shippable, so there is no need for a dedicated release workflow anymore!
  • The delivery pipeline is significantly simplified and automated.
  • Traceable: it is obvious which commits are included.


So how can we implement continuous delivery and conditional continuous deployment using a Jenkins pipeline? Of course, you can add more stages based on your team's process (for example, performance tests); here we stop at acceptance, as production would be just the same:

We will use Jenkins pipeline as code, as it gives the team more power to control their delivery process:

  1. When a Git commit is pushed to the development branch, it triggers the Jenkins pipeline to check out the code, build it and unit test it; if all is green, it triggers the integration tests plus a Sonar check for your code quality gate.
  2. If all is green, deploy to the development server using the package created in the Jenkins workspace, or from Nexus snapshots if you prefer.
  3. If there is a release candidate, which means a merge request to your master branch, the pipeline performs the same steps as above plus creates the release candidate using the release versioning explained above.
  4. Push the released artifact to Nexus, deploy to acceptance, and run the automated acceptance functional tests.
  5. Then prompt for production; if it is approved, it will be deployed into production.
  6. Of course, the deployment tasks must be automated if you want continuous deployment, using automation tools like Ansible or other options.

Now let us explain how it is done via the Jenkins pipeline code:
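The actual Jenkinsfile lives in the sample repository; a trimmed declarative sketch of the stages described above might look like this (stage names, branch names and shell steps are illustrative assumptions, not the exact pipeline code):

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps { sh 'mvn clean verify' }
        }
        stage('Integration Test & Sonar') {
            steps { sh 'mvn verify sonar:sonar' }
        }
        stage('Release') {
            // only release candidates (merges to master) get a real version
            when { branch 'master' }
            steps {
                // Version=Major.Minor.Patch.GitCommit-Id, as described above
                sh 'mvn versions:set -DnewVersion=1.0.0.$(git rev-parse --short HEAD)'
                sh 'mvn deploy'
            }
        }
        stage('Deploy to Acceptance') {
            when { branch 'master' }
            steps { sh './deploy.sh acceptance' }
        }
        stage('Deploy to Production') {
            when { branch 'master' }
            steps {
                input message: 'Promote to production?'  // manual approval gate
                sh './deploy.sh production'
            }
        }
    }
}
```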

Then you can use it by creating a multibranch project in Jenkins and configuring it to use the Jenkinsfile from your repository (pipeline as code).

References :

  1. Jenkins pipeline : https://jenkins.io/doc/book/pipeline/
  2. Jenkins file with a sample app for testing the pipeline execution on GitHub: https://github.com/Romeh/spring-boot-sample-app

How to write a Spring Boot web Maven archetype with common practices in place

Here I am sharing a custom Spring Boot web Maven archetype I created to encapsulate all the common practices, as an example of how you can do the same in your team for common standards imposed by your company or your team.


The Maven archetype generates a Spring Boot web application with all the common standards in place, ready for development:

  • Java 1.8+
  • Maven 3.3+
  • Spring boot 1.5.6+
  • Lombok abstraction
  • JPA with H2 for explanation
  • Swagger 2 API documentation
  • Spring retry and circuit breaker for external service call
  • REST API model validation
  • Spring cloud config for external configuration on GIT repository
  • Cucumber and Spring Boot test for integration test
  • Jenkins Pipeline for multi branch project
  • continuous delivery and integration standards with Sonar check and release management
  • Support retry in sanity checks
  • Logback configuration

Installation

To install the archetype in your local repository, execute the following commands:

$ git clone https://github.com/Romeh/spring-boot-quickstart-archtype.git
$ cd spring-boot-quickstart-archtype
$ mvn clean install

Create a project

$ mvn archetype:generate \
     -DarchetypeGroupId=com.romeh.spring-boot-archetypes \
     -DarchetypeArtifactId=spring-boot-quickstart \
     -DarchetypeVersion=1.0.0 \
     -DgroupId=com.test \
     -DartifactId=sampleapp \
     -Dversion=1.0.0-SNAPSHOT \
     -DinteractiveMode=false

Test the generated app's REST API via Swagger:

http://localhost:8080/swagger-ui.html

A sample app generated from this archetype can be found here:

https://github.com/Romeh/spring-boot-sample-app

 

References :

  1. https://projects.spring.io/spring-boot/
  2. https://maven.apache.org/guides/introduction/introduction-to-archetypes.html

Spring Boot integration tests with Cucumber and Jenkins pipeline

Here I am sharing how you can integrate Cucumber, for behavior-driven testing, with Spring Boot integration tests, and how to collect the reports in a Jenkins pipeline.

 

In a sample Spring Boot app generated from my custom Spring Boot archetype, we will show a small integration test suite with Cucumber and Spring Boot.

Steps to follow are :

1- Add the Cucumber Maven dependencies to your Spring Boot pom.xml:

<!-- Cucumber-->
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>${cucumber-version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>${cucumber-version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-spring</artifactId>
    <version>${cucumber-version}</version>
    <scope>test</scope>
</dependency>

2- Define cucumber features in your test resources :


3- Define the feature implementations to be executed against your Spring Boot app logic.

The feature description:
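A feature description is a plain-text Gherkin file under your test resources; the scenario below is a made-up illustration, not the one from the sample app:

```gherkin
Feature: Alert retrieval
  As an API client I want to fetch an alert by its id

  Scenario: Client fetches an existing alert
    Given the application has an alert with id "1"
    When the client calls GET /alerts/1
    Then the response status is 200
```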

The feature implementation:
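The implementation binds each Gherkin step to a Java method; with cucumber-spring on the classpath, the step-definition class participates in the Spring Boot test context. A sketch, where the class name, endpoint and steps are assumptions matching the illustrative feature above:

```java
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.ResponseEntity;

// Step definitions wired into the Spring Boot test context by cucumber-spring
public class AlertStepDefs {

    @Autowired
    private TestRestTemplate restTemplate;

    private ResponseEntity<String> response;

    @Given("^the application has an alert with id \"([^\"]*)\"$")
    public void theApplicationHasAnAlert(String id) {
        // seed test data here, e.g. via a repository or a POST call
    }

    @When("^the client calls GET /alerts/(\\d+)$")
    public void theClientCallsGet(int id) {
        response = restTemplate.getForEntity("/alerts/" + id, String.class);
    }

    @Then("^the response status is (\\d+)$")
    public void theResponseStatusIs(int status) {
        org.junit.Assert.assertEquals(status, response.getStatusCodeValue());
    }
}
```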

4- How to execute the integration test :

You need to configure the root executor with the Cucumber runner, as follows:
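With the info.cukes dependencies above, the root executor is a JUnit runner class along these lines (the features path, glue package and report plugin are assumptions):

```java
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// JUnit entry point: scans the feature files and runs them through the step definitions
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",                        // package holding the step definitions
        plugin = {"pretty", "json:target/cucumber.json"})  // json report picked up by Jenkins
public class CucumberIntegrationTest {
}
```

A separate glue class annotated with @SpringBootTest typically boots the application context for the steps.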

The integration test itself is then triggered via a Spring Boot integration test:

5- How to collect the test reports in a Jenkins pipeline:

A complete working sample is here:

GitHub: https://github.com/Romeh/spring-boot-sample-app

References :

  1. Cucumber: https://cucumber.io/
  2. Spring boot : https://projects.spring.io/spring-boot/

 

Implement retry in unit tests

Suppose you are doing integration or sanity testing of real application endpoints in a continuous delivery process, where you hot deploy the application and then trigger some checks and sanity tests as part of your delivery pipeline. How can you add retry logic with a delay to your unit and integration tests?

A sample app that includes a working demo can be found at:

https://github.com/Romeh/spring-boot-sample-app

So I will walk through how you can do it:

  • We will create an annotation to mark the test cases that need to be retried:

  • We will create the implementation using the JUnit rules option:

  • A sample of how you can use it:

Then, when you execute the test and your endpoint is not reachable right away, it will retry based on your retry configuration.
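The sample app wires this through a JUnit rule, but the core retry-with-delay loop that such a rule applies around a test statement can be sketched independently of JUnit (RetrySupport is a hypothetical helper name, not a class from the sample app):

```java
// Hypothetical helper: re-runs a check until it passes or attempts run out.
public class RetrySupport {

    public static void retry(int maxAttempts, long delayMillis, Runnable check) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                check.run();   // the assertion / endpoint check
                return;        // passed, stop retrying
            } catch (RuntimeException e) {
                last = e;      // remember the failure, wait before the next try
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delayMillis);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
        throw last;            // all attempts exhausted: surface the last failure
    }
}
```

A JUnit rule does the same thing around the wrapped Statement, driven by the attempts and delay configured on the marker annotation.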

Spring Boot with Apache Ignite persistent durable memory storage, plus SQL queries over the Ignite cache

In this post we will show how we can do the following :

  1. Integrate spring boot with Apache Ignite
  2. How to enable and use the persistent durable memory feature of Apache Ignite, which can persist your cache data to disk so it survives a crash or restart and you avoid data loss.
  3. How to execute SQL queries over ignite caches
  4. How to unit test and integration test ignite with spring boot
  5. Simple Jenkins pipeline reference
  6. Code repository in GitHub : GithubRepo


what is Ignite durable memory ?

Apache Ignite memory-centric platform is based on the durable memory architecture that allows storing and processing data and indexes both in memory and on disk when the Ignite Native Persistence feature is enabled. The durable memory architecture helps achieve in-memory performance with the durability of disk using all the available resources of the cluster

What are Ignite data grid SQL queries?

Ignite supports a very elegant query API with support for predicate-based scan queries, SQL queries (ANSI-99 compliant), and text queries. For SQL queries, Ignite supports in-memory indexing, so all data lookups are extremely fast. If you are caching your data in off-heap memory, the query indexes are cached in off-heap memory as well.

Ignite also provides support for custom indexing via IndexingSpi and SpiQuery class.

more information on : https://apacheignite.readme.io/docs/cache-queries

So, to have an Apache Ignite server node integrated and started in your Spring Boot app, we need to do the following:

  1. Add the following Maven dependencies to your Spring Boot app's pom file:
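The dependencies are typically ignite-core, plus ignite-spring for the Spring integration and ignite-indexing for SQL query support; a hedged sketch, with the version managed via a property:

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
</dependency>
```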

  2. Define the Ignite configuration via the Java DSL, for better portability and management, as a Spring configuration; the property values will be loaded from the application.yml file:
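A minimal sketch of such a Spring configuration class (the cache name, property key and defaults are assumptions, not the sample app's exact values):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class IgniteConfig {

    @Value("${ignite.instance-name:alertsGrid}")  // resolved from application.yml
    private String instanceName;

    @Bean
    public Ignite ignite() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName(instanceName);

        // one example cache configured via the Java DSL
        CacheConfiguration<String, String> alerts = new CacheConfiguration<>("alerts");
        cfg.setCacheConfiguration(alerts);

        // starts an Ignite server node embedded in the Spring Boot app
        return Ignition.start(cfg);
    }
}
```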

  3. Then you can just inject the Ignite instance as a Spring bean, which makes unit testing much easier.

How to enable Ignite durable memory:
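Native persistence is switched on in the data storage configuration (Ignite 2.3+ API); a sketch, including the explicit cluster activation that persistent clusters require:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DurableMemoryExample {

    public static Ignite startPersistentNode() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // persist the default data region to disk so data survives crash/restart
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);
        // clusters with persistence start inactive and must be activated explicitly
        ignite.cluster().active(true);
        return ignite;
    }
}
```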

How to use Ignite SQL queries over in memory storage:
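SQL over a cache goes through SqlFieldsQuery (or SqlQuery for typed results); a sketch assuming a hypothetical Alert value class whose queryable fields are annotated with @QuerySqlField:

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class AlertQueries {

    // returns the ids of all alerts with the given severity (illustrative schema)
    public static List<List<?>> findBySeverity(IgniteCache<String, ?> alerts,
                                               String severity) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
                "select id from Alert where severity = ?").setArgs(severity);
        return alerts.query(qry).getAll();
    }
}
```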

How to do atomic thread safe action over the same record via cache invoke API:
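IgniteCache.invoke(...) runs an EntryProcessor atomically on the entry's primary node, so a read-modify-write on a single record is thread safe without explicit locking. A sketch with a made-up counter entry:

```java
import javax.cache.processor.EntryProcessorException;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class AtomicIncrement {

    // atomically increments the counter stored under the given key
    public static int increment(IgniteCache<String, Integer> cache, String key) {
        return cache.invoke(key, new CacheEntryProcessor<String, Integer, Integer>() {
            @Override
            public Integer process(MutableEntry<String, Integer> entry, Object... args)
                    throws EntryProcessorException {
                Integer current = entry.exists() ? entry.getValue() : 0;
                entry.setValue(current + 1);   // executed atomically on the primary node
                return current + 1;
            }
        });
    }
}
```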

How to unit test Apache ignite usage in spring boot service :

How to trigger integration test with Ignite, check test resources as well :

How to run and test the application over swagger rest api :

  • build the project via maven : mvn clean install
  • you can run it from IDEA via AlertManagerApplication.java or via java -jar jarName


  • Swagger, which contains the REST API and its model documentation, will be accessible at the URL below, where you can start triggering the different REST API calls exposed by the Spring Boot app:

   http://localhost:8080/swagger-ui.html#/


  • If you stop or restart the app and query again, you will find all the entities created in the last run, so the data survived the crash and any restart.
  • You can build a portable Docker image of the whole app using the Spotify Maven Docker plugin if you wish.

 


Guarantee your single computation task finishes in case of node failures/crashes in Apache Ignite

 

How can you guarantee that a single computation task fails over in case of node failures in Apache Ignite?

As you know, failover support in Apache Ignite for computation tasks only covers master/slave jobs, where slave nodes do the computation and then reduce back to the master node; in case of a failure on a slave node where a slave job is executing, that failed slave job fails over to another node to continue execution.

OK, but what if I need to execute just a single computation task and I need a failover guarantee, maybe because it is a critical task that modifies financial data or must finish in an acceptable status (success or failure)? How can we do that? It is not supported out of the box by Ignite, but we can cover it with a small design extension using the Ignite APIs. How?

The code reference is hosted on my GitHub:

https://github.com/Romeh/failover-singlejob-ignite

Single job failover guarantee overview

Here are the main steps from the overview above:

1- Create two partitioned caches, one for single-job references and one for node ID references. In production you should back those caches with a persistence store if you need to survive a total grid crash.

2- Define an after-put interceptor on the jobs cache to record the node ID that is the primary owner and triggerer of that compute task.

3- Define an interceptor on the nodes cache to intercept after-put actions, so it can query for all pending jobs for that node ID and then submit them again into the compute grid with affinity.
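Both interceptors extend Ignite's CacheInterceptorAdapter and override onAfterPut. A trimmed sketch of the nodes-cache interceptor idea (cache names, the JobEntity schema and the resubmission body are assumptions, not the exact code from the repository):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheInterceptorAdapter;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.resources.IgniteInstanceResource;

// Fired after a failed node id is inserted into the nodes cache:
// look up that node's pending jobs and resubmit them to the compute grid.
public class FailedNodesInterceptor extends CacheInterceptorAdapter<String, String> {

    @IgniteInstanceResource
    private transient Ignite ignite;

    @Override
    public void onAfterPut(Cache.Entry<String, String> entry) {
        String failedNodeId = entry.getKey();
        ignite.cache("jobs")
              .query(new SqlFieldsQuery(
                      "select jobId from JobEntity where nodeId = ?").setArgs(failedNodeId))
              .getAll()
              .forEach(row -> resubmit((String) row.get(0)));
    }

    private void resubmit(String jobId) {
        // re-run the job on the compute grid; idempotent jobs are the optimal case
        ignite.compute().runAsync(() -> System.out.println("re-executing job " + jobId));
    }
}
```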

4- Enable event listening for node-left and node-failed events in the grid to intercept node failures.

Then let us run the show. Imagine you have a data and compute grid of two server nodes:

a- You trigger a job on node 1 that performs a sensitive action, such as a financial action, and you need to be sure it finishes in a valid state whatever happens.

b- What if primary node 1 crashes? What happens to that compute task? Without the extension highlighted above, it disappears with the wind.

c- But with this small failover extension, node 2 catches the event that node 1 left, queries the jobs cache for all jobs with that node ID, and resubmits them for computation. The optimal case is when your actions are idempotent, so they can safely be executed multiple times; otherwise, use job checkpointing to save the execution state and resume from the last saved point.

The job data model for the jobs cache, where we mark the node ID as an Ignite SQL queryable, indexed field:

How the Ignite failed-nodes cache interceptor is implemented:

How the Ignite jobs cache interceptor is implemented:

Apache ignite config :

Enable node-removal and node-failure event listening ONLY, as enabling too many events causes performance overhead:
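Listening for the two relevant discovery events can be sketched like this (the handling body, which inserts the failed node ID into the nodes cache, is illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.EventType;

public class NodeFailureListener {

    public static void register(Ignite ignite) {
        // subscribe ONLY to node-left and node-failed to keep event overhead low
        ignite.events().localListen(evt -> {
            DiscoveryEvent discoEvt = (DiscoveryEvent) evt;
            String failedNodeId = discoEvt.eventNode().id().toString();
            // insert into the failed-nodes cache; its interceptor resubmits pending jobs
            ignite.cache("failedNodes").put(failedNodeId, failedNodeId);
            return true; // keep listening
        }, EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);
    }
}
```

Remember that these event types must also be enabled on the node via IgniteConfiguration.setIncludeEventTypes(...), or the listener will never fire.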

Main App tester :

 

Testing flow :

1- First, run the first Ignite server node with the job-submission code commented out.


2- Then run the second server node, but before doing so, uncomment the code that simulates creating new jobs for computation by inserting them into the jobs cache.

3- Once you run the second node, wait about 5 seconds and then kill it by shutting it down once you see it has started to submit jobs from the code you just uncommented, for example:

intercepting for job action triggering and setting node id : f0920c5b-3655-4e85-aa60-f763a9eb1111
Executing computation logic for the request0Key

4- In the first, still-running node you will see a message that it received an event about the removal of the second node. From that event it fetches the node ID and inserts it into the failed-nodes cache, whose interceptor intercepts the after-put action, uses the node ID to query the jobs cache for still-pending jobs with the same node ID, and resubmits them for execution in the compute grid. And here we are: we caught the unfinished jobs from the crashed primary node that submitted them.

Received Node event [evt=NODE_LEFT, nodeID=TcpDiscoveryNode [id=2da3e806-72e3-415b-acd3-07b7da0eabe0, addrs=[0:0:0:0:0:0:0:1%lo0, 127.0.0.1, 192.168.1.169], sockAddrs=[/192.168.1.169:47501, /0:0:0:0:0:0:0:1%lo0:47501, /127.0.0.1:47501], discPort=47501, order=2, intOrder=2, lastExchangeTime=1510666504589, loc=false, ver=2.3.1#20171031-sha1:d2c82c3c, isClient=false]]

And you will see it fetching the pending jobs and submitting them again; for example, you will see the following in the IDEA console:

found a pending jobs for node id: c2a32b7d-1420-4e1a-8ca2-b7080e91dc22 and job id: 19Key
Executing the expiry post action for the request19Key
