  • Using JBoss Fuse Integration on JBoss EAP

    JBoss Fuse Integration is the legitimate heir of Apache ServiceMix and Progress Software Fuse. It was acquired by RedHat in 2012 and, in the beginning, it was available in two architectures: JBoss Fuse, which ran on OSGi platforms, and JBoss Fuse Service Works (FSW), which ran on Java EE platforms. The two releases dedicated to these two architectures have since been unified into a single one: JBoss Fuse Integration. It can run on OSGi platforms, like Apache Karaf, or on Java EE platforms, like JBoss EAP. This article shows how easy it is to develop and deploy Apache Camel route based services using JBoss Fuse Integration on JBoss EAP. You may find here the project which illustrates the article.

    The first step to perform, after having downloaded and installed JBoss EAP 6.4, is to add JBoss Fuse Integration 6.3 to the existing installation, as described here. This process goes through downloading JBoss Fuse Integration 6.3 from the RedHat Customer Portal, unzipping it into the existing JBoss EAP home directory and executing the install script. This process should terminate successfully and you’ll find yourself with a fully Java EE 6 compliant platform which natively includes Apache Camel, Switchyard, Riftsaw and other Fuse tools.

    The money-transfer project, which illustrates this blog post, is a Camel route performing a fairly complex integration and transformation process. Thanks to Fuse Integration and JBDS (JBoss Developer Studio), this complex service can be developed and tested in minutes. Here is what it does:

    1. XML files containing money transfer orders land in a dedicated file-system directory. Their arrival in this landing directory triggers the route execution.
    2. Once a file is received, its XML content is unmarshalled, by JAXB, into Java objects.
    3. The result is a list containing a certain number of money transfer orders. This list is split so as to separate the individual money transfer orders.
    4. Each money transfer order is converted to JSON.
    5. The result of this conversion is stored in a JMS destination on the EAP platform.
    6. A second route listens on the given JMS destination and, as soon as a message is received, it is processed. In our case, for simplicity's sake, this processing simply consists of displaying the message in the EAP log file.
    7. Different messages are logged into the EAP log file at different stages of the process, so as to trace the route execution and to facilitate debugging.

    The steps above imply the use of Enterprise Integration Patterns (EIP) like transformers, splitters, wire-taps, etc. Implementing them from scratch would take an experienced Java developer several days, perhaps a week. But using Apache Camel on EAP, provided by Fuse Integration, such an implementation is done in minutes, as the sketch below suggests. The first thing to notice is that JBDS 11.1.0.GA provides a Fuse Integration project wizard. Just click File->New->Fuse Integration Project and let yourself be guided by the wizard. It will generate a maven project with all the required dependencies and plugins. During the generation process, you may choose either to generate an empty project or a skeleton. In the latter case, you may choose between Java DSL, Blueprint DSL and Spring DSL.
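
    To give an idea, here is a minimal Java DSL sketch of the route described above. It is an illustration only: the landing directory, the JAXB context path and the JMS component configuration are assumptions, not the exact content of the money-transfer project.

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.model.dataformat.JsonLibrary;

    public class MoneyTransferRouteBuilder extends RouteBuilder
    {
      @Override
      public void configure() throws Exception
      {
        // Route 1: poll the landing directory, unmarshal, split, convert, enqueue.
        from("file:target/inbox")                           // hypothetical landing directory
          .log("Received file ${header.CamelFileName}")
          .unmarshal().jaxb("fr.simplex_software.jaxb")     // hypothetical JAXB context path
          .split(simple("${body.moneyTransfers}"))          // one exchange per money transfer order
          .marshal().json(JsonLibrary.Jackson)
          .to("jms:queue:bankQ");

        // Route 2: consume from the queue and trace each message in the log.
        from("jms:queue:bankQ")
          .log("Got JMS message: ${body}");
      }
    }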

    For those who, like me, prefer to manually craft their stuff, you may of course create a normal maven project and add to it the dependencies and plugins you need. This is the way the money-transfer project has been done. If you choose to use Spring or Blueprint DSL, JBDS offers you a Fuse Route Editor which greatly facilitates the implementation of routes. Both Spring DSL and Blueprint DSL are complex formal notations with an XML syntax, consisting of long lines of dozens of characters, and using the Fuse Route Editor to edit them can dramatically increase your productivity. A Palette located on the right side of the Fuse Integration perspective in Eclipse presents to the developer the full set of available tools: Components, Routing, Control Flow, Transformation, Miscellaneous. The developer simply selects the desired tool in the Palette and drops it on the canvas. Then the Property dialog opens and allows the developer to configure and customize the chosen tool, without even having to know the Camel syntax. The image below shows the Camel diagram of the money-transfer project, exported from the workspace to a JPG file.

    CamelContext.jpg

    To run the application, perform the following steps:

    1. Start the JBoss EAP server with the full profile, as follows:

      $JBOSS_HOME/bin/standalone.sh -c standalone-full.xml

    2. Create a JMS destination, as follows:

      $JBOSS_HOME/bin/jboss-cli.sh -c
      [standalone@localhost:9999 /] jms-queue add --queue-address=bankQ --entries=queue/bankQ

    3. Check whether the JMS queue has been created successfully, as follows:

      [standalone@localhost:9999 /] /subsystem=messaging/hornetq-server=default/jms-queue=bankQ:read-resource
      {
          "outcome" => "success",
          "result" => {
              "durable" => false,
              "entries" => ["java:/jms/queue/bankQ"],
              "selector" => undefined
          }
      }
      [standalone@localhost:9999 /]

    4. Clone the project from GitHub, as follows:

      git clone https://github.com/nicolasduminil/money-transfer.git

    5. Build and deploy the project, as follows:

      mvn clean install
      mvn -pl money-transfer-war jboss-as:deploy

    6. Now that the project is built and deployed, you can test it by copying the file src/main/resources/money-transfer.xml from the money-transfer-war project into the $JBOSS_HOME/data/temp/inbox directory. This will automatically trigger your Camel route and you can admire the results in the EAP log file.

    Congratulations, you have just successfully developed and deployed your first complex integration project on JBoss EAP. Those of you who have already deployed Camel routes in more classical environments, like Java SE, will certainly appreciate how much easier this is.

  • Using Batch Processing in Java EE 7

    This blog entry demonstrates the use of the JSR 352 specifications in Java EE 7. The JSR 352 specs define the implementation and management of batch processing. Historically, Java batch processing was the domain of the Spring Batch framework. Now, with JSR 352, batch processing has become a part of Java EE, meaning that it is standard and is implemented by any compliant application server, without any add-on or complementary libraries.

    Our demo uses Wildfly 10.1.0, the community release of the famous RedHat JBoss EAP, but things should also work in a similar manner with any other Java EE 7 compliant application server. In that case, some slight modifications to the associated maven POM files are of course required.

    Batch processing has the particularity of being able to process large quantities of data. Batches should be seen as long running processes, comparable with business processes, but without the inherent heaviness of the latter. Like business processes, batches are based on an XML notation describing workflows. But as opposed to the BPMN2 language, which is the XML notation describing business processes, the JSR 352 specifications specifically target Java platforms and, as such, provide a complete Java API guaranteeing portability.

    The BPMN2 specifications only cover the workflow design process, not the runtime, and each implementation comes with its own set of tools, which makes migrating between different products very difficult. Conversely, using a JSR 352 implementation, a workflow designed to work on Wildfly or JBoss EAP may very easily be migrated to Glassfish, WebLogic or WebSphere.

    The implementation of the JSR 352 specifications used here is JBeret which comes out-of-the-box with Wildfly 10.1.0 or higher and JBoss EAP 7.

    JSR 352 versus Spring Batch

    As already mentioned, batch processing in Java has historically been the domain of Spring Batch. So why would one need JSR 352? Well, there is a long and old debate here between Spring and Java EE and, without pretending to bring anything new or to close this debate in any way, the idea is the following: while Java EE is a set of standard specifications drafted by a non-profit international organization, named JCP (Java Community Process), whose Executive Committee groups together some of the major vendors like IBM, Oracle, RedHat, HP, etc., Spring is an open-source Java framework owned by Pivotal Software, a services company based in California.

    While developers have been implementing batch processing in Java with Spring Batch for ages, this requires integrating Spring into the application server's landscape which, depending on the application server, might be a more or less difficult process. In any case, it requires downloading and installing hundreds of Spring libraries and answering a quite complex question: what happens if hundreds of components deployed on the application server use Spring Batch? Should these components embed the Spring libraries in their own archives, or should some kind of shared library be constructed and deployed only once on the application server, such that all the components can use it? Depending on the strategy adopted here, integrating Spring with an application server can be a tough process. Not to mention the fact that, once you have integrated Spring into an application server platform, this platform is no longer supported by its vendor, as it is considered non-standard. From this point on, you're responsible for whatever may happen to your platform and, the day some obscure conflict prevents your services from running, you have no one to legally turn to. The only support in this case is the community.

    Using JSR 352 is very different in the sense that the implementation comes with the application server and is a part of its binaries. There is no add-on to download, nothing to integrate, and you don't need to define any pooling or sharing strategy. Accordingly, there shouldn't be any conflicts preventing you from running your services in production and, if this happens nevertheless, in addition to the community support, your vendor is legally responsible for solving the issue within the time limit defined by your support contract.

    Some Basic Concepts

    The basic concepts proposed by JSR 352 are very similar to those defined by Spring Batch. A batch is a job described in a specific XML notation named JSL (Job Specification Language). Each job consists of a set of steps, which should be seen as atomic stages. JSL describes a job's steps in terms of two main programming models:

    • Chunks: discrete parts of a step following a reader-processor-writer design pattern. According to this design pattern, a chunk consists of a reader which provides the chunk's input data, a processor which transforms input data into output data, and a writer which writes the chunk's output data, i.e. the processing results. Each chunk's output data becomes the next chunk's input data (see the sketch after this list).
    • Batchlets: atomic parts of a step, not requiring a set of reading, processing and writing operations. A batchlet is more atomic than a chunk, as it is not divided into different operations.
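
    As a quick illustration of the chunk model, here is a minimal sketch of the three chunk artifacts. The names are hypothetical and each class would live in its own source file:

    import java.util.List;
    import javax.batch.api.chunk.AbstractItemReader;
    import javax.batch.api.chunk.AbstractItemWriter;
    import javax.batch.api.chunk.ItemProcessor;
    import javax.inject.Named;

    @Named
    public class LineReader extends AbstractItemReader
    {
      private int count = 0;

      @Override
      public Object readItem() throws Exception
      {
        // Returning null signals the end of the input and terminates the chunk.
        return count < 3 ? "record-" + count++ : null;
      }
    }

    @Named
    public class UpperCaseProcessor implements ItemProcessor
    {
      @Override
      public Object processItem(Object item) throws Exception
      {
        return ((String) item).toUpperCase();
      }
    }

    @Named
    public class ConsoleWriter extends AbstractItemWriter
    {
      @Override
      public void writeItems(List<Object> items) throws Exception
      {
        items.forEach(System.out::println);
      }
    }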

    A job is executed through the JobOperator interface. Substitution properties may also be specified in the job's JSL definition for customization purposes. The runtime loads the batch artifacts described in JSL and runs the job on a separate thread. All steps in the job run on the same thread unless partitions or splits are used. The JSL description of the job may contain conditional logic that controls the order in which the steps run. The runtime handles your job’s conditional logic, ensuring the correct execution sequence.
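
    For example, here is a minimal sketch of starting a job through JobOperator and checking its status; the job ID matches the JSL shown later, while the substitution property is purely hypothetical:

    import java.util.Properties;
    import javax.batch.operations.JobOperator;
    import javax.batch.runtime.BatchRuntime;
    import javax.batch.runtime.JobExecution;

    public class JobStarter
    {
      public static long startBankJob()
      {
        JobOperator operator = BatchRuntime.getJobOperator();
        Properties props = new Properties();
        props.setProperty("inputFile", "/tmp/transfers.xml"); // hypothetical substitution property
        long executionId = operator.start("bank-job", props);
        // Right after start() the status is typically STARTING or STARTED.
        JobExecution execution = operator.getJobExecution(executionId);
        System.out.println("Batch status: " + execution.getBatchStatus());
        return executionId;
      }
    }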

    Another important basic notion is that of partition. A partition is defined as a discrete functional closure allowing to run a separate instance of the step’s chunk or batchlet artifacts. For example, if we want to process 100 records in a database table and our processing time is estimated at 10 minutes, then, using partitioning, we can group our records by 10, so as to have 10 partitions and reduce our processing time to 1 minute.
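
    Such a plan can be expressed programmatically with a PartitionMapper. Here is a sketch of the 10-partitions example above; the class name and the property keys are assumptions:

    import java.util.Properties;
    import javax.batch.api.partition.PartitionMapper;
    import javax.batch.api.partition.PartitionPlan;
    import javax.batch.api.partition.PartitionPlanImpl;
    import javax.inject.Named;

    @Named
    public class RecordPartitionMapper implements PartitionMapper
    {
      @Override
      public PartitionPlan mapPartitions() throws Exception
      {
        PartitionPlanImpl plan = new PartitionPlanImpl();
        plan.setPartitions(10);
        Properties[] props = new Properties[10];
        for (int i = 0; i < 10; i++)
        {
          props[i] = new Properties();
          // Each partition processes its own slice of 10 records.
          props[i].setProperty("firstRecord", String.valueOf(i * 10));
          props[i].setProperty("lastRecord", String.valueOf(i * 10 + 9));
        }
        plan.setPartitionProperties(props);
        return plan;
      }
    }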

    The Business Case

    In order to illustrate the discussion, we will consider a very classical business case: the money transfer. A bank needs to perform massive money transfers. The information comes in XML files having an associated grammar described by an XSD. Our batch is then responsible for parsing the XML files, unmarshalling the XML payload into Java domain objects, converting these domain objects into text messages and sending them to an output JMS destination. The project demonstrating the business case we are going to discuss can be found here. It is a multi-module maven project, divided as follows:

    • bank-master: the master POM
    • bank-jaxb: the JAXB module, which unmarshals the money transfer operations, described in an XML file, into Java domain objects
    • bank-facade: this project implements the facade design pattern. It is implemented as a singleton, automatically started at deployment time, which starts the workflow
    • bank-batch: this project contains the batchlet required to perform the money transfer operations
    • bank-ear: this project aims at packaging the whole application as an EAR archive. It also contains the required scripts and plugins to create and run a docker container with the Wildfly server inside and with our EAR deployed.

    Let's look in a more detailed manner at each individual module.

    The bank-master module

    This is the master module, defining the dependencies and plugins to be used, together with their associated versions, as well as listing all the other modules.

    The bank-facade module

    This is an EJB module deployed as an EJB-JAR. It contains a singleton which starts the batch. Here is the code:

    @Singleton
    @Startup
    public class BankBatchStarter
    {
      private static final Logger slf4jLogger = LoggerFactory.getLogger(BankBatchStarter.class);

      @Inject
      @ConfigProperty(name = "bank.money-transfer.batch.starter.jobID")
      private String jobID;

      @Inject
      private MoneyTransferBatchlet mtb;

      @PostConstruct
      public void onStartup()
      {
        slf4jLogger.debug("*** BankBatchStarter.onStartup(): starting job {} {}", jobID, mtb);
        BatchRuntime.getJobOperator().start(jobID, null);
      }
    }

    The code above shows a Java EE singleton which automatically starts as soon as the EAR is deployed. Once started, it executes the onStartup() method, annotated with the @PostConstruct annotation, which, in turn, retrieves the JobOperator interface and starts the batch. The batch is identified by its ID. Here we are using the Apache Deltaspike CDI library to inject properties from a property file. The private attribute jobID will be injected with the value of the property named "bank.money-transfer.batch.starter.jobID", extracted from the apache-deltaspike.properties file. This value, which is the string "bank-job", is then used to identify the required job in the JSL file batch-jobs/bank-job.xml in the META-INF directory. Here is the JSL file:

    <?xml version="1.0" encoding="UTF-8"?>
    <job id="bank-job" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
      http://xmlns.jcp.org/xml/ns/javaee/jobXML_1_0.xsd" version="1.0">
      <flow id="money-transfer">
        <step id="unmarshall-files">
          <batchlet ref="moneyTransferBatchlet" />
        </step>
      </flow>
    </job>

    Our goal here being to demonstrate the general use of batch processing in Java, and not to go into all the subtleties of JSL, we present a very simplified JSL file defining a flow, named "money-transfer", having only one step. This step, whose name is unmarshall-files, will run the batchlet identified by its CDI name, moneyTransferBatchlet. The batchlet is shown below, as it belongs to the batch module.

    The bank-batch module

    This is the batch project which contains the batchlet. Here is the code:

    @Named
    public class MoneyTransferBatchlet extends AbstractBatchlet implements Serializable
    {
      private static final long serialVersionUID = 1L;

      private static final Logger slf4jLogger =
        LoggerFactory.getLogger(MoneyTransferBatchlet.class);

      @Inject
      @ConfigProperty(name = "bank.money-transfer.source.file.name")
      private String sourceFileName;

      @Resource(name="jms/QueueConnectionFactory")
      private ConnectionFactory connectionFactory;
      @Resource(mappedName = "java:/jms/queue/BanQ")
      private Queue queue;

      public String process() throws Exception
      {
        slf4jLogger.debug("*** MoneyTransferBatchlet.process(): Running ...");
        JMSProducer producer = connectionFactory.createContext().createProducer();
        JAXBContext jaxbContext = JAXBContext.newInstance(MoneyTransfers.class);
        Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();
        InputStream is = this.getClass().getClassLoader().getResourceAsStream(sourceFileName);
        MoneyTransfers mts = (MoneyTransfers) jaxbUnmarshaller.unmarshal(is);
        for (MoneyTransfer mt : mts.getMoneyTransfers())
          producer.send(queue, new ReflectionToStringBuilder(mt).toString());
        return BatchStatus.COMPLETED.name();
      }
    }

    Like any batchlet, the one above extends the abstract class AbstractBatchlet and implements its process() method. This is the place where everything in the batch happens. After having instantiated a JMS producer, the code retrieves the XML input stream containing the money transfers to be performed and unmarshals it into Java domain objects. Then, for each money transfer in the input stream, a new text message is produced. Notice the way that the required JMS artifacts, the connection factory and the queue, are injected.

    The code could have been even simpler by directly injecting the JMSContext, instead of using the JMS connection factory to create the context and the producer. But this would have required a CDI request scope, which we don't have here as our starting point is a Java EE singleton, not a servlet.
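
    For reference, the simplified variant alluded to would look like the following sketch. It is hypothetical code: it only works where the container can provide an active request or transaction scope.

    import javax.annotation.Resource;
    import javax.inject.Inject;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class SimplifiedSender
    {
      // The container creates and closes the injected JMSContext for us,
      // but only inside an active CDI request or transaction scope.
      @Inject
      private JMSContext context;

      @Resource(mappedName = "java:/jms/queue/BanQ")
      private Queue queue;

      public void send(String text)
      {
        context.createProducer().send(queue, text);
      }
    }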

    Last but not least, here is the XML file containing the money transfer:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE xml>
    <ss:moneyTransfers xmlns:ss="http://www.simplex-software.fr/money-transfer"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.simplex-software.fr/money-transfer ../xsd/money-transfer.xsd ">
      <ss:moneyTransfer>
        <ss:sourceAccount accountID="ID0198" accountNumber="4007654329" accountType="SAVINGS"
          bankName="Société Générale" sortCode="AB234" transCode="XY">
          <ss:bankAddress cityName="Paris" countryName="France" poBox="PO1234" streetName="rue de
            Londres" streetNumber="24" zipCode="75008"/>
        </ss:sourceAccount>
        <ss:targetAccount accountID="ID0298" accountNumber="4007654330" accountType="SAVINGS" 
          bankName="ING" sortCode="SC9821" transCode="18er670">
          <ss:bankAddress cityName="Brussels" countryName="Belgium" poBox="None"
            streetName="Chaussée de Waterloo" streetNumber="49" zipCode="B1002"/>
        </ss:targetAccount>
        <ss:amount>51000.85</ss:amount>
      </ss:moneyTransfer>
    </ss:moneyTransfers>

    This is a very simple XML definition of some money transfer operations. It corresponds to a simple grammar, which is presented below.

    The bank-jaxb module

    This is a simple module which compiles the XSD grammar of the money transfer project into Java domain objects. Everything is done by the jaxb2-maven-plugin, as shown below:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jaxb2-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>xjc</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <packageName>fr.simplex_software.bank.money_transfer.jaxb</packageName>
        <outputDirectory>${basedir}/src/main/java</outputDirectory>
        <schemaDirectory>${basedir}/src/main/resources/xsd</schemaDirectory>
        <extension>true</extension>
      </configuration>
    </plugin>

    Here we are using xjc, the XSD to Java compiler, in order to compile the XSD file in the src/main/resources/xsd directory into Java domain objects belonging to the fr.simplex_software.bank.money_transfer.jaxb package. The XSD file is as follows:

    <schema xmlns="http://www.w3.org/2001/XMLSchema"
      targetNamespace="http://www.simplex-software.fr/money-transfer"
      xmlns:ss="http://www.simplex-software.fr/money-transfer"
      xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
      xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
      jaxb:extensionBindingPrefixes="xjc" jaxb:version="2.0" elementFormDefault="qualified">

      <annotation>
        <appinfo>
          <jaxb:globalBindings>
            <xjc:simple/>
          </jaxb:globalBindings>
        </appinfo>
      </annotation>
      <simpleType name="BankAccountType" final="restriction">
        <restriction base="string">
          <enumeration value="SAVINGS" />
          <enumeration value="CHECKING" />
        </restriction>
      </simpleType>

      <complexType name="BankAddress">
        <attribute name="streetName" type="string" />
        <attribute name="streetNumber" type="string" />
        <attribute name="poBox" type="string" />
        <attribute name="cityName" type="string" />
        <attribute name="zipCode" type="string" />
        <attribute name="countryName" type="string" />
      </complexType>

      <complexType name="BankAccount">
        <sequence>
          <element name="bankAddress" type="ss:BankAddress" maxOccurs="1" minOccurs="1" />
        </sequence>
        <attribute name="accountID" type="string" />
        <attribute name="accountType" type="ss:BankAccountType" />
        <attribute name="sortCode" type="string" />
        <attribute name="accountNumber" type="string" />
        <attribute name="transCode" type="string" />
        <attribute name="bankName" type="string" />
      </complexType>

      <complexType name="SourceAccount">
        <complexContent>
          <extension base="ss:BankAccount" />
        </complexContent>
      </complexType>

      <complexType name="TargetAccount">
        <complexContent>
          <extension base="ss:BankAccount" />
        </complexContent>
      </complexType>

      <complexType name="MoneyTransfer">
        <sequence>
          <element name="sourceAccount" type="ss:SourceAccount" maxOccurs="1" minOccurs="1" />
          <element name="targetAccount" type="ss:TargetAccount" maxOccurs="1" minOccurs="1" />
          <element name="amount" type="decimal" maxOccurs="1" minOccurs="1" />
        </sequence>
      </complexType>

      <complexType name="MoneyTransfers">
        <sequence>
          <element name="moneyTransfer" type="ss:MoneyTransfer"
            maxOccurs="unbounded" minOccurs="1"/>

        </sequence>
      </complexType>

      <element name="moneyTransfers" type="ss:MoneyTransfers"></element>
    </schema>

    The XML Schema above uses JAXB global bindings in order to generate the @XmlRootElement annotations. This is expressed here by the <xjc:simple/> construct and activated by the <extension>true</extension> parameter in the JAXB plugin above. Besides that, the XSD defines some complex elements, named MoneyTransfer, SourceAccount, TargetAccount and BankAddress, together with an enumerated type named BankAccountType. Finally, it defines a list of MoneyTransfer complex elements.
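
    With these bindings, xjc generates, among others, a root class along the following lines. This is an illustrative sketch, not the exact generated code; note the pluralized getter, which is the one used by the batchlet shown earlier (mts.getMoneyTransfers()):

    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.bind.annotation.*;

    @XmlRootElement(name = "moneyTransfers")
    @XmlAccessorType(XmlAccessType.FIELD)
    public class MoneyTransfers
    {
      @XmlElement(name = "moneyTransfer", required = true)
      protected List<MoneyTransfer> moneyTransfers;

      public List<MoneyTransfer> getMoneyTransfers()
      {
        if (moneyTransfers == null)
          moneyTransfers = new ArrayList<MoneyTransfer>();
        return moneyTransfers;
      }
    }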

    The bank-ear module

    This module packages the whole application as an EAR archive using the maven-ear-plugin. It also uses the docker-maven-plugin to create a docker container running the jboss/wildfly:10.1.0.Final docker image from DockerHub. Here is the associated code:

    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <configuration>
        <images>
          <image>
            <alias>wfy10</alias>
            <name>jboss/wildfly:10.1.0.Final</name>
            <run>
              <namingStrategy>alias</namingStrategy>
              <cmd>/opt/jboss/wildfly/bin/standalone.sh -c standalone-full.xml
                -b 0.0.0.0 -bmanagement 0.0.0.0</cmd>
                <volumes>
                  <bind>
                    <volume>/home/nicolas/workspace/bank-master/bank-ear/src/main/docker/customization:/opt/jboss/wildfly/customization</volume>
                    <volume>/home/nicolas/workspace/bank-master/bank-ear/target:/opt/jboss/wildfly/customization/target</volume>
                  </bind>
                </volumes>
                <ports>
                  <port>8080:8080</port>
                  <port>9990:9990</port>
                </ports>
            </run>
          </image>
        </images>
      </configuration>
      <executions>
        <execution>
          <id>docker:start</id>
          <phase>install</phase>
          <goals>
            <goal>start</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

    The code excerpt above will run the docker container, in which it will start the Wildfly application server using the standalone-full profile. Two volumes, containing a customization script and our EAR respectively, will be mounted. Here is the customization script:

    #!/bin/bash

    WILDFLY_HOME=/opt/jboss/wildfly
    JBOSS_CLI=$WILDFLY_HOME/bin/jboss-cli.sh

    echo $(date -u) "=> Creating a new JMS queue"
    $JBOSS_CLI -c "jms-queue add --queue-address=BanQ --entries=java:/jms/queue/BanQ"

    echo $(date -u) "=> Deploy application"
    $JBOSS_CLI -c "deploy wildfly/customization/target/bank-ear.ear"

    echo $(date -u) "=> Create user"
    $WILDFLY_HOME/bin/add-user.sh admin admin

    This script uses the jboss-cli utility to create a new JMS queue named BanQ, whose JNDI name is java:/jms/queue/BanQ. After that, it deploys the EAR and creates the Wildfly admin user.

    Running the demo

    Now, in order to run the demo, after having cloned the project, perform the following:

    nicolas@BEL20:~/workspace/bank-master$ mvn clean install

    [INFO] Scanning for projects...
    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Build Order:
    [INFO]
    [INFO] Bank :: The Master POM
    [INFO] Bank :: The JAXB Module
    [INFO] Bank :: The Batchlet Module
    [INFO] Bank :: The EJB JAR Module
    [INFO] Bank :: The EAR module
    ..........

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO]
    [INFO] Bank :: The Master POM ............................. SUCCESS [ 0.181 s]
    [INFO] Bank :: The JAXB Module ............................ SUCCESS [ 2.192 s]
    [INFO] Bank :: The Batchlet Module ........................ SUCCESS [ 0.101 s]
    [INFO] Bank :: The EJB JAR Module ......................... SUCCESS [ 0.247 s]
    [INFO] Bank :: The EAR module ............................. SUCCESS [ 1.417 s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 4.358 s
    [INFO] Finished at: 2018-01-09T16:46:52+01:00
    [INFO] Final Memory: 29M/417M
    [INFO] ------------------------------------------------------------------------

    The maven project has been installed, the EAR built and the docker container created and started. You can check that as follows:

    nicolas@BEL20:~/workspace/bank-master$ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    08fdb4eea384 jboss/wildfly:10.1.0.Final "/opt/jboss/wildfly/…" About a minute ago Up About a minute 0.0.0.0:8080->8080/tcp, 0.0.0.0:9990->9990/tcp wfy10

    This shows that the docker container named wfy10, based on the jboss/wildfly:10.1.0.Final image from DockerHub, is started. It listens on all the allocated IP addresses, on ports 8080 and 9990. Now we need to customize our Wildfly installation by creating the required JMS artifacts, deploying our EAR and creating the Wildfly admin user. This is done by executing the customize.sh script.

    nicolas@BEL20:~/workspace/bank-master$ docker exec -ti wfy10 ./wildfly/customization/customize.sh
    Tue Jan 9 15:49:06 UTC 2018 => Creating a new JMS queue
    Tue Jan 9 15:49:07 UTC 2018 => Deploy application
    Tue Jan 9 15:49:10 UTC 2018 => Create user
    Added user 'admin' to file '/opt/jboss/wildfly/standalone/configuration/mgmt-users.properties'
    Added user 'admin' to file '/opt/jboss/wildfly/domain/configuration/mgmt-users.properties'
    nicolas@BEL20:~/workspace/bank-master$

    Now, in order to check that everything is working as expected, you can look in the Wildfly log file. At the end of the log file, we should find the following lines:

     15:49:10,578 INFO  [fr.simplex_software.bank.session.MessageReceiver] (Thread-0 (ActiveMQ-client-global-threads-331309011)) *** MessageReceiver.onMessage(): got message fr.simplex_software.bank.money_transfer.jaxb.MoneyTransfer@5aa6d308[sourceAccount=fr.simplex_software.bank.money_transfer.jaxb.SourceAccount@12218d3f,targetAccount=fr.simplex_software.bank.money_transfer.jaxb.TargetAccount@1f73e58e,amount=51000.85]

    This shows that the batch has been successfully executed. Congratulations, you've got a docker container running a Wildfly server with our EAR inside, proving that Java batch processing and JMS/ActiveMQ work as expected.
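
    For completeness, the log line above is produced by a message-driven bean listening on the BanQ queue. It is not listed among the modules above, but a minimal sketch of it, reconstructed from the log output and therefore hypothetical, could look like this:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
      @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/BanQ"),
      @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")})
    public class MessageReceiver implements MessageListener
    {
      @Override
      public void onMessage(Message message)
      {
        try
        {
          // The batchlet sends text messages, so the body is read as a String.
          System.out.println("*** MessageReceiver.onMessage(): got message " + message.getBody(String.class));
        }
        catch (JMSException e)
        {
          throw new RuntimeException(e);
        }
      }
    }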

  • How to Secure REST APIs (JAX-RS) with OAuth 2.0, OpenID Connect, Keycloak and Docker

    This blog post aims at demonstrating some of the most modern techniques in the world of REST APIs, as follows:

    • Using JAX-RS 2.0 to develop REST APIs. The JAX-RS 2.0 implementation used here is RESTeasy 3.0.19, provided by the Wildfly 10.1.0 application server. Wildfly is the community release of the famous JBoss, one of the best-known and most widely used Java EE application servers, currently provided by RedHat under the name of JBoss EAP (Enterprise Application Platform). In its 10.1.0 release, Wildfly supports the Java EE 7 specifications level.
    • Using the Keycloak IAM (Identity and Access Management) server to secure our REST API. Keycloak is the community release of the RedHat Single Sign-On product. It encompasses lots of technology stacks like OAuth 2.0, OpenID Connect, SAML, Kerberos and many others. Here we'll be using the latest release of the Keycloak server, which is 3.4.2.
    • Using Docker containers to deploy the full solution.

    So, let’s start coding.

    The Customer Management REST API

    For the purposes of our demo, we chose a quite classical business case: a customer management service. This service exposes a REST API which may be called in order to perform CRUD operations on customers, like create, select, update and remove, as well as different multi-criteria find operations. For clarity's sake, a full maven project is provided on GitHub. Please follow this link in order to download it and to browse, build and test it.

    The project is structured in several layers, as follows:

    • The customer-management project. This is a multi-module maven project hosting the master POM.
    • The customer-management-data module. This module is the JPA (Java Persistence API) layer of the project and defines the entities required by the CRUD operations. In order to simplify things, only one such entity is defined: Customer.
    • The customer-management-repository module. The project is based on the "repository" design pattern. According to this design pattern, the data persistence and data retrieval operations are abstracted such that they are performed through a series of straightforward methods, without the need to deal with database concerns like connections, commands, cursors or readers. Using this pattern can help achieve loose coupling and can keep domain objects persistence-ignorant. Here we use the Apache Deltaspike Data module's implementation of the "repository" pattern.
    • The customer-management-facade module. This module implements the "facade" design pattern in order to provide an access layer to the enterprise data. This layer consists of a stateless session bean which exposes a set of services above the repository layer.
    • The customer-management-rest module. This module is the REST API and defines a set of JAX-RS (RESTeasy) services above the facade layer.
    • The customer-management-ear module. This is a maven module whose unique role is to package the whole application as an EAR archive. It also uses the wildfly maven plugin in order to deploy and undeploy the built EAR.

    Now, let’s look in a more detailed manner at all these modules.

    The customer-management-data artifact.

    This maven artifact is a JAR archive containing the domain objects. In our case, there is only one domain object, the Customer class. Here is the listing:

    package fr.simplex_software.customer_management.data;
    ...

    @XmlRootElement(name="customer")
    @XmlAccessorType(XmlAccessType.PROPERTY)

    @Entity
    @Table(name="CUSTOMERS")
    public class Customer implements Serializable
    {
      private static final long serialVersionUID = 1L;
      private BigInteger id;
      private String firstName;
      private String lastName;
      private String street;
      private String city;
      private String state;
      private String zip;
      private String country;

      public Customer()
      {
      }

      public Customer(String firstName, String lastName, String street,
        String city, String state, String zip, String country)
      {
        this.firstName = firstName;
        this.lastName = lastName;
        this.street = street;
        this.city = city;
        this.state = state;
        this.zip = zip;
        this.country = country;
      } 

      public Customer (Customer customer)
      {
        this (customer.firstName, customer.lastName, customer.street,
          customer.city, customer.state, customer.zip, customer.country);
      } 

      @Id
      @SequenceGenerator(name = "CUSTOMERS_ID_GENERATOR", sequenceName = "CUSTOMERS_SEQ")
      @GeneratedValue(strategy = GenerationType.SEQUENCE,
        generator = "CUSTOMERS_ID_GENERATOR")
      @Column(name = "CUSTOMER_ID", unique = true, nullable = false, length = 8)
      public BigInteger getId()
      {
        return id;
      }

      public void setId(BigInteger id)
      {
        this.id = id;
      }

      @XmlElement
      @Column(name = "FIRST_NAME", nullable = false, length = 40)
      public String getFirstName()
      {
        return firstName;
      }

      public void setFirstName(String firstName)
      {
        this.firstName = firstName;
      }
      ...

    The Customer class above uses JPA and JAXB annotations in order to define the domain object, together with the persistence and XML/JSON marshalling options. The most notable thing here is the ID generation strategy, which assumes the use of database sequences. A generator named CUSTOMERS_ID_GENERATOR is defined; it relies on a sequence named CUSTOMERS_SEQ, used to generate an ID each time a new customer is created. While this works fine with Oracle databases, it also works with the H2 in-memory database, used here for testing purposes.

    In addition to the JPA annotations, defining the database table name and columns, the JAXB annotations aim at marshalling/unmarshalling the Java entities to/from XML/JSON payloads. These conversions are required upon invocation of the REST API, as we'll see later.

    The customer-management-repository artifact.

    This artifact is the repository layer. It consists of the following interface:

    package fr.simplex_software.customer_management.repository;
    ...


    @Repository
    public interface CustomerManagementRepository
      extends EntityRepository<Customer, BigInteger>, EntityManagerDelegate<Customer>
    {
      public List<Customer> findByLastName (String lastName);
      public List<Customer> findByCountry (String country);
      public List<Customer> findByCity (String city);
      public List<Customer> findByZip (String zip);
      public List<Customer> findByState (String state);
      public List<Customer> findByStreet (String street);
      public List<Customer> findByFirstName(String firstName);
    }

    The interface above extends the Apache Deltaspike EntityRepository generic interface, passing to it the domain class, Customer, as well as the ID class, which is BigInteger. The idea here is to provide an EntityRepository for Customer domain objects, identified by BigInteger instances. That's all. We don't need to implement anything here, as all the required CRUD methods, like save(), refresh(), flush(), find(), remove(), etc., are already provided out-of-the-box. We just need to declare some customized finders with criteria, using the "findBy…" naming convention, and all these methods will be dynamically generated by the Apache Deltaspike Data module at deployment time. Awesome!
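
    As a quick usage illustration, here is a hypothetical snippet assuming the repository is injected into some managed bean; both methods shown are either inherited from EntityRepository or generated from the naming convention:

    import java.util.List;
    import javax.inject.Inject;
    import fr.simplex_software.customer_management.data.Customer;
    import fr.simplex_software.customer_management.repository.CustomerManagementRepository;

    public class CustomerRepositoryDemo
    {
      @Inject
      private CustomerManagementRepository repo;

      public void demo()
      {
        Customer customer = new Customer("Jane", "Doe", "1 Main Street", "Paris", "None", "75001", "France");
        repo.saveAndFlushAndRefresh(customer);                // provided out-of-the-box by EntityRepository
        List<Customer> parisians = repo.findByCity("Paris");  // generated from the findBy… naming convention
      }
    }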

    The repository design pattern is a very strong one, made quite popular by its initial Spring Data implementation. Our demo being a Java EE 7 one, we don't have any reason to use Spring here, hence the Apache Deltaspike Data module is a very convenient alternative. It fits much better for business cases using Java EE 7 containers, like Wildfly 10, especially when it comes to CMT (Container Managed Transactions), while Spring, not being a compliant Java EE framework, needs to integrate with an external transaction manager, with all the additional work and risks that this might involve.

    The customer-management-facade artifact.

    This artifact is the "facade" layer. It consists of the following no-interface stateless session bean:

    package fr.simplex_software.customer_management.facade;
    ...

    @Stateless
    public class CustomerManagementFacade
    {
      private static Logger slf4jLogger = 
        LoggerFactory.getLogger(CustomerManagementFacade.class);
      @Inject
      private CustomerManagementRepository repo;
      @Produces
      @PersistenceContext
      private static EntityManager entityManager; 

      public List<Customer> findByFirstName(String firstName)
      {
        return repo.findByFirstName(firstName);
      }

      public List<Customer> findByLastName(String lastName)
      {
        return repo.findByLastName(lastName);
      } 

      public List<Customer> findByCountry(String country)
      {
        return repo.findByCountry(country);
      } 

      public List<Customer> findByCity(String city)
      {
        return repo.findByCity(city);
      } 

      public List<Customer> findByZip(String zip)
      {
        return repo.findByZip(zip);
      } 

      public List<Customer> findByState(String state)
      {
        return repo.findByState(state);
      } 

      public List<Customer> findByStreet(String street)
      {
        return repo.findByStreet(street);
      } 

      public List<Customer> findAll()
      {
        return repo.findAll();
      } 

      public List<Customer> findAll(int customer, int arg1)
      {
        return repo.findAll(customer, arg1);
      } 

      public Customer findBy(BigInteger customerId)
      {
        return repo.findBy(customerId);
      } 

      ………

    As we can see in the listing above, the implementation is really very simple. The class CustomerManagementFacade uses the business delegate design pattern in order to expose our data repository: the CustomerManagementRepository is injected here and used as a business delegate to provide the desired functionality. Everything is highly simplified by using the EJB container's CMT, which automatically sets the transaction boundaries, relieving the developer of the responsibility of manually handling the commit/rollback mechanics, which is very complex in a distributed environment. Here we are using the implicit transaction attribute for enterprise beans, which is TransactionAttributeType.REQUIRED. Based on this attribute, if the calling client is running within a transaction and invokes the enterprise bean's method, the method executes within the client's transaction. If the client is not associated with a transaction, the container starts a new transaction before running the method. The Java EE platform's standard EntityManager is injected as well, such that it takes advantage of the CMT. This is as opposed to the use of Spring or other Java SE techniques, which require integrating an external JPA implementation, with all the additional work and risks that this might involve.

    The customer-management-rest artifact

    This artifact is the actual REST layer. In exactly the same way the customer-management-facade used the business delegate design pattern to expose its functionality above the repository layer, this layer uses the same pattern to expose behaviour above the facade layer. Here is the code:

    package fr.simplex_software.rest;

    @Path("/customers")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class CustomerManagementResource
    {
      @EJB
      private CustomerManagementFacade facade;
      @POST
      public Response createCustomer(Customer customer)
      {
        Customer newCustomer = facade.saveAndFlushAndRefresh(customer);
        return Response.created(URI.create("/customers/" + newCustomer.getId())).entity(newCustomer).build();
      }

      @GET
      @Path("{id}")
      public Response getCustomer(@PathParam("id") BigInteger id)
      {
        return Response.ok().entity(facade.findBy(id)).build();
      }

      @PUT
      @Path("{id}")
      public Response updateCustomer(@PathParam("id") BigInteger id, Customer customer)
      {
        Customer cust = facade.findBy(id);
        if (cust == null)
          throw new WebApplicationException(Response.Status.NOT_FOUND);
        // Copy the incoming payload and preserve the existing ID before saving.
        Customer newCustomer = new Customer(customer);
        newCustomer.setId(cust.getId());
        facade.save(newCustomer);
        return Response.ok().build();
      }

      @DELETE
      @Path("{id}")
      public Response deleteCustomer(@PathParam("id") BigInteger id)
      {
        facade.removeAndFlush(id);
        return Response.ok().build();
      }

      @GET
      @Path("firstName/{firstName}")
      public Response getCustomersByFirstName(@PathParam("firstName") String firstName)
      {
        return Response.ok().entity(facade.findByFirstName(firstName)).build();
      }

      @GET
      public Response getCustomers()
      {
        return Response.ok().entity(facade.findAll()).build();
      }
    }

    The listing above provides a customer CRUD service implemented as a REST API. It exposes GET requests to select customers or to find them based on criteria like their first name or their ID, POST requests to create new customers, PUT requests to update existing customers, DELETE requests to remove customers, etc. All this by delegating to the facade layer. Another way to implement things would have been to directly annotate this class with the @Stateless annotation, in which case our REST layer would have been the facade layer at the same time, as sketched below. We preferred to decouple the facade layer from the REST layer, which also has the advantage of exposing two different interfaces, dedicated to different kinds of clients: HTTP clients invoking the REST layer and RMI/IIOP clients invoking the facade layer directly. Notice that using CDI to directly inject the repository layer would also have been an interesting possibility, giving a third type of interface to our API.
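
    The merged alternative just mentioned would look like this sketch (a hypothetical variant, not the project's actual code):

    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Stateless
    @Path("/customers")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class MergedCustomerResource
    {
      // With @Stateless, the JAX-RS resource is itself an EJB: the repository is
      // injected directly, CMT still applies, and the separate facade layer disappears.
      @Inject
      private CustomerManagementRepository repo;

      @GET
      public Response getCustomers()
      {
        return Response.ok().entity(repo.findAll()).build();
      }
    }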

    Now, the interesting part is the security configuration of the REST API layer. It is deployed as a WAR and, hence, this is done in the web.xml file below:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
      <module-name>customer-management-rest</module-name>
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>customers</web-resource-name>
          <url-pattern>/services/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
          <role-name>customer-manager</role-name>
        </auth-constraint>
      </security-constraint>
      <login-config>
        <auth-method>KEYCLOAK</auth-method>
      </login-config>
      <security-role>
        <role-name>customer-manager</role-name>
      </security-role>
    </web-app>

    What the XML file above says is that our REST resources, whose path is /services, are protected for all the HTTP requests (GET, POST, PUT, DELETE), such that only the customer-manager role is able to invoke the associated endpoints. This role has to be defined using the KEYCLOAK security provider.

    The customer-management-ear artifact

    This artifact aims at building an EAR archive containing all the previous artifacts and deploying it on the Wildfly platform. It is also responsible for the creation of the docker containers running the Wildfly and Keycloak servers, as well as for their configuration. Everything is driven by docker-compose, a very convenient utility to handle and orchestrate docker containers. Here we are using the docker-compose plugin for maven and here is the associated YAML file:

    version: "2"
    services:
      wfy10:
        image: jboss/wildfly:10.1.0.Final
        volumes:
          - ./wfy10/customization:/opt/jboss/wildfly/customization/
        container_name: wfy10
        entrypoint: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
        ports:
          - 8080:8080
          - 9990:9990
        depends_on:
          - "keycloak"

      keycloak:
        image: jboss/keycloak:latest
        volumes:
          - ./keycloak/customization:/opt/jboss/keycloak/customization/
        container_name: keycloak
        entrypoint: /opt/jboss/docker-entrypoint.sh -b 0.0.0.0 -bmanagement 0.0.0.0
        ports:
          - 18080:8080
          - 19990:9990
        environment:
          KEYCLOAK_USER: admin
          KEYCLOAK_PASSWORD: admin

    The YAML file above uses the docker-compose file format version 2. It defines two docker containers:

    • One named keycloak, based on the image named jboss/keycloak:latest from the DockerHub registry. It will run the Keycloak 3.4.2 server, exposing the container's TCP ports 8080 and 9990 as the host's ports 18080 and 19990. A volume will be created for this container and mapped to its /opt/jboss/keycloak/customization directory; it contains some configuration scripts. Also, a user named admin, with password admin, will be created for this server.
    • A second one named wfy10, based on the docker image named jboss/wildfly:10.1.0.Final from the DockerHub registry. This container depends on the first one, as the authentication and authorization of the components deployed on the Wildfly server are performed by the Keycloak server. It exposes the container's ports 8080 and 9990 as the same host ports.

    Now, running the maven POM with this plugin:

         <plugin>
            <groupId>com.dkanejs.maven.plugins</groupId>
            <artifactId>docker-compose-maven-plugin</artifactId>
            <executions>
              <execution>
                <id>up</id>
                <phase>install</phase>
                <goals>
                  <goal>up</goal>
                </goals>
                <configuration>
                  <composeFile>
                    ${project.basedir}/src/main/docker/docker-compose.yml
                  </composeFile>
                  <detachedMode>true</detachedMode>
                </configuration>
              </execution>
              ……….

    will pull the two images from DockerHub and create the two containers. The execution might take some time, depending on the network latency. To run the demo, clone and build the project:

    mkdir tests
    cd tests
    git clone https://github.com/nicolasduminil/customer-management.git
    cd customer-management
    mvn -DskipTests clean install

    We assume here that maven is installed and correctly configured with the required repositories. Also, docker and docker-compose should be installed. Once the build process finishes, you can check the result as follows:

    nicolas@BEL20:~/workspace/customer-management$ docker ps
    CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                              NAMES
    fa25d88b75f3        jboss/wildfly:10.1.0.Final   "/opt/jboss/wildfly/…"   5 seconds ago       Up 4 seconds        0.0.0.0:8080->8080/tcp, 0.0.0.0:9990->9990/tcp     wfy10
    05f1ad1377f9        jboss/keycloak:latest        "/opt/jboss/docker-e…"   6 seconds ago       Up 4 seconds        0.0.0.0:18080->8080/tcp, 0.0.0.0:19990->9990/tcp   keycloak


    Here we can see that we have two running containers, respectively named keycloak and wfy10. Now we need to configure the servers run by these two containers such that we can use them for our purposes. The following script will do that:

    docker exec -ti keycloak keycloak/customization/customize.sh > /dev/null
    docker exec -ti wfy10 wildfly/customization/customize.sh > /dev/null
    docker restart wfy10 > /dev/null
    docker exec -ti wfy10 wildfly/customization/deploy.sh > /dev/null

    The first line of this script customizes the Keycloak server by running the customize.sh script. Here is the code:

    #!/bin/bash
    WILDFLY_HOME=/opt/jboss/keycloak
    KCADM=$WILDFLY_HOME/bin/kcadm.sh
    $KCADM config credentials --server http://localhost:8080/auth --realm master --user admin --password admin
    $KCADM create users -r master -s username=customer-admin -s enabled=true
    $KCADM set-password -r master --username customer-admin --new-password admin
    $KCADM create clients -r master -s clientId=customer-manager-client -s bearerOnly="true" \
      -s 'redirectUris=["http://localhost:8080/customer-management/*"]' -s enabled=true
    $KCADM create clients -r master -s clientId=curl -s publicClient="true" \
      -s directAccessGrantsEnabled="true" -s 'redirectUris=["http://localhost"]' -s enabled=true
    $KCADM create roles -r master -s name=customer-manager
    $KCADM add-roles --uusername customer-admin --rolename customer-manager -r master

    The Keycloak server manages a set of security realms. Each realm is a set of users, roles and clients. In order to secure our REST API, we need to create a security realm containing the required users and roles, as defined by the web.xml shown previously. This might be done in two ways:

    • Using the Keycloak web administrative console. This is probably the most general and the easiest method, but it requires manual interaction with the console.
    • Using the Keycloak server admin utility, named kcadm. This is a shell script allowing to perform, in a programmatic way, everything one could do with the admin console. This is the method we are using here, as it complies very well with docker and docker-compose.

    First, we need to log in to the Keycloak server. This is done by the config credentials command. Once logged in, a JSON file is created and will be used for the duration of the whole session. Then we create a user named customer-admin. The -r option indicates the name of the security realm. Keycloak comes with a default security realm, called master. We could create a new security realm, but we decided to use the existing one, for simplicity's sake. This new user needs a password as well, and this is set using the set-password command.

    The Keycloak server uses the notion of client, defined as an entity that can request authentication on behalf of a user. Clients support either the OpenID Connect protocol or SAML. Here we'll be using OpenID Connect. There are two types of clients:

    • Bearer-only clients. This is a term specific to the OAuth 2.0 protocol. The short explanation is that, in order to access secured services, a caller needs an OAuth 2.0 access token. This token is issued by the OAuth 2.0 implementation. Keycloak is an OAuth 2.0 implementation and, as such, is able to issue access tokens to clients, such that they are able to invoke the given services. A bearer-only client is a Keycloak client that will only verify the bearer tokens presented to it and cannot obtain access tokens itself. This means that the caller, which is a service and not a user, is never redirected to a login page, as would be the case for a web application. This is exactly what we need in a case where a service requires authentication and authorization in order to call another service.
    • Public clients. This category of clients is able to obtain an access token based on a certain form of authentication.

    In our case, we need two clients: a bearer-only client that will be used for the caller service's authentication and authorization purposes, and a public client on behalf of which we will manually get an access token by providing a username and a password. This is what happens in the script above, in the two create clients commands. The first command creates a bearer-only client, named customer-manager-client. This is the client that will be used by the caller. The second client, named curl, is a public one. It enables Direct Access Grants (another OAuth 2.0 specific term), meaning that it is able to swap a username and a password for a token.

    The last two lines of the script create the role customer-manager, specified by the security configuration in the web.xml file. Once created, this role is assigned to the user customer-admin using the add-roles command.

    The way the Keycloak documentation recommends using these two clients is as follows:

    RESULT=`curl --data "grant_type=password&client_id=curl&username=customer-admin&password=admin" \
      http://localhost:18080/auth/realms/master/protocol/openid-connect/token`
    TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`

    This example works on Linux, as it relies on the curl and sed utilities. Basically, what happens here is that the first command, the curl request, invokes the Keycloak OpenID Connect token endpoint with the grant type set to password. This triggers another OAuth 2.0 specific mechanism, called the Resource Owner Password Credentials flow, that allows obtaining an access token based on the provided credentials. This request assumes that the Keycloak OpenID Connect endpoint is accessible via port 18080 of the localhost. Next, the access token is extracted from the curl result by the sed command. This access token may then be used by the caller service in order to authenticate against the called service.

    Now our Keycloak server, run by the docker container with the same name, is configured and ready to be used. We need to configure the Wildfly server such that it takes advantage of the Keycloak authentication/authorization, and to deploy our EAR to it. This happens in the next lines of the setup script above. First, the wildfly/customization/customize.sh script is called in order to customize the Wildfly server. Here is the code:

    #!/bin/bash
    WILDFLY_HOME=/opt/jboss/wildfly
    JBOSS_CLI=$WILDFLY_HOME/bin/jboss-cli.sh
    # create the Wildfly management user
    $WILDFLY_HOME/bin/add-user.sh admin admin
    # download the Keycloak OpenID Connect adapter for Wildfly, unless already present
    if [ ! -f ./keycloak-wildfly-adapter-dist-3.4.2.Final.tar.gz ]
    then
      curl -O -s https://downloads.jboss.org/keycloak/3.4.2.Final/adapters/keycloak-oidc/keycloak-wildfly-adapter-dist-3.4.2.Final.tar.gz
    fi
    # unpack the adapter into the Wildfly home directory and install it offline
    tar xzf keycloak-wildfly-adapter-dist-3.4.2.Final.tar.gz -C $WILDFLY_HOME
    $JBOSS_CLI --file=$WILDFLY_HOME/bin/adapter-install-offline.cli

    In order to configure the Wildfly application server for Keycloak authentication, we need to download and install the Keycloak adapter for Wildfly. This is what this script does. Once downloaded, the archive is extracted into the Wildfly home directory and the adapter-install-offline.cli script is executed. We then only need to restart the Wildfly server for the new configuration to take effect and, after that, to deploy our EAR archive by means of the CLI.
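
    The deployment step itself could look like the following sketch (the deploy command is standard jboss-cli usage; the EAR path is an assumption, adapt it to the actual build output):

    # connect to the running Wildfly instance and deploy the application (path is illustrative)
    $JBOSS_CLI --connect --command="deploy customer-management/customer-management-ear/target/customer-management.ear"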

    Testing our REST API

    Now that our two docker containers are running and our two servers are configured, we are ready for testing. The last point to check is the file customer-management/customer-management-rest/src/main/webapp/WEB-INF/keycloak.json:

    {
      "realm": "master",
      "bearer-only": true,
      "auth-server-url": "http://172.18.0.2:8080/auth",
      "ssl-required": "external",
      "resource": "customer-manager-client",
      "confidential-port": 0,
      "enable-cors": true
    }

    We need to make sure that the IP address in the auth-server-url property is the public IP address of the docker container running the Keycloak server (the one we called keycloak). This is easy to check, for example, with the following command:

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' keycloak
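
    For convenience, the check and the update of keycloak.json may be scripted; here is a hedged sketch (the sed expression and the file path mirror the ones above, but adapt them to your layout, and remember to rebuild the archive afterwards):

    # capture the container address and patch the adapter configuration accordingly
    KEYCLOAK_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' keycloak)
    sed -i "s|\"auth-server-url\": \".*\"|\"auth-server-url\": \"http://$KEYCLOAK_IP:8080/auth\"|" \
      customer-management/customer-management-rest/src/main/webapp/WEB-INF/keycloak.json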

    Once this point is checked, we are ready to start our integration tests. We are using here the Maven Failsafe plugin. For example:

    cd tests/customer-manager
    mvn failsafe:integration-test
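
    A possible variant, should you want to run a single test class and have the build fail on test failures (standard Failsafe options):

    # run only the class below and check the results afterwards
    mvn -Dit.test=CustomerServiceTestIT failsafe:integration-test failsafe:verify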

    This will launch the integration test named CustomerServiceTestIT, whose listing is provided below:

    package fr.simplex_software.rest;

    import static org.junit.Assert.*;

    import java.util.List;

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.client.Entity;
    import javax.ws.rs.client.WebTarget;
    import javax.ws.rs.core.GenericType;
    import javax.ws.rs.core.HttpHeaders;
    import javax.ws.rs.core.Response;

    import org.junit.After;
    import org.junit.AfterClass;
    import org.junit.Before;
    import org.junit.BeforeClass;
    import org.junit.FixMethodOrder;
    import org.junit.Test;
    import org.junit.runners.MethodSorters;
    import org.keycloak.admin.client.Keycloak;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    // plus the import of the project's Customer entity, whose package is not shown here

    @FixMethodOrder(MethodSorters.NAME_ASCENDING)
    public class CustomerServiceTestIT
    {
      private static Logger slf4jLogger = LoggerFactory.getLogger(CustomerServiceTestIT.class);
      private static Client client;
      private static WebTarget webTarget;
      private static Customer customer = null;
      private static String token;

      @BeforeClass
      public static void init() throws Exception
      {
        token = Keycloak.getInstance("http://172.18.0.2:8080/auth", "master", "customer-admin",
          "admin", "curl").tokenManager().getAccessToken().getToken();
      }

      @Before
      public void setUp() throws Exception
      {
        client = ClientBuilder.newClient();
        webTarget = client.target("http://localhost:8080/customer-management/services/customers");
      }

      @After
      public void tearDown() throws Exception
      {
        if (client != null)
        {
          client.close();
          client = null;
        }
        webTarget = null;
      }

      @AfterClass
      public static void destroy()
      {
        token = null;
      }

      @Test
      public void test1() throws Exception
      {
        slf4jLogger.debug("*** Create a new Customer ***");
        Customer newCustomer = new Customer("Nick", "DUMINIL", "26 Allée des Sapins", "Soisy sous Montmorency",
          "None", "95230", "France");
        Response response = webTarget.request().header(HttpHeaders.AUTHORIZATION, "Bearer " +
          token).post(Entity.entity(newCustomer, "application/json"));
        assertEquals(201, response.getStatus());
        customer = response.readEntity(Customer.class);
        assertNotNull(customer);
        String location = response.getLocation().toString();
        slf4jLogger.debug("*** Location: " + location + " ***");
        response.close();
      }

      @Test
      public void test2()
      {
        String customerId = customer.getId().toString();
        slf4jLogger.debug("*** Get a Customer with ID {} ***", customerId);
        slf4jLogger.info("*** token: {}", token);
        Response response = webTarget.path(customerId).request().header(HttpHeaders.AUTHORIZATION,
          "Bearer " + token).get();
        assertEquals(200, response.getStatus());
        customer = response.readEntity(Customer.class);
        assertNotNull(customer);
        assertEquals("France", customer.getCountry());
      }

      @Test
      public void test3()
      {
        String firstName = customer.getFirstName();
        slf4jLogger.debug("*** Get a Customer by first name {} ***", firstName);
        Response response =
          webTarget.path("firstName").path(firstName).request().header(HttpHeaders.AUTHORIZATION,
          "Bearer " + token).get();
        assertEquals(200, response.getStatus());
        List<Customer> customers = response.readEntity(new GenericType<List<Customer>>(){});
        assertNotNull(customers);
        assertTrue(customers.size() > 0);
        customer = customers.get(0);
        assertNotNull(customer);
        assertEquals("France", customer.getCountry());
      }

      @Test
      public void test4()
      {
        String customerId = customer.getId().toString();
        slf4jLogger.debug("*** Update the customer with ID {} ***", customerId);
        customer.setCountry("Belgium");
        Response response = webTarget.path(customerId).request().header(HttpHeaders.AUTHORIZATION,
          "Bearer " + token).put(Entity.entity(customer, "application/json"));
        assertEquals(200, response.getStatus());
      }

      @Test
      public void test5()
      {
        String customerId = customer.getId().toString();
        slf4jLogger.debug("*** Delete the customer with ID {} ***", customerId);
        Response response = webTarget.path(customerId).request().header(HttpHeaders.AUTHORIZATION,
          "Bearer " + token).delete();
        assertEquals(200, response.getStatus());
      }

      @Test
      public void test6()
      {
        Response response = webTarget.request().header(HttpHeaders.AUTHORIZATION, "Bearer " + token).get();
        assertEquals(200, response.getStatus());
        List<Customer> customers = response.readEntity(new GenericType<List<Customer>>(){});
        assertNotNull(customers);
        assertTrue(customers.size() > 0);
        customer = customers.get(0);
        assertNotNull(customer);
        assertEquals("France", customer.getCountry());
      }
    }

    As we can see, this is a simple test calling all the endpoints of our REST API. The most notable part is the way we obtain the OAuth 2.0 access token. This happens in the init() method. It is annotated with @BeforeClass, meaning that it is executed only once, at the beginning, before the individual tests run. The Java code here is the exact equivalent of the curl/sed combination presented above. The getInstance() method obtains a handle on the Keycloak server by providing the login information, as follows:

    • The Keycloak OpenID Connect endpoint, which is http://<ip-address>:<tcp-port>/auth, where <ip-address> is the public IP address of the keycloak docker container (172.18.0.2 in our case) and <tcp-port> is the internal tcp port on which the Keycloak server listens inside the container (8080 in our case). This information should match the one in the keycloak.json file, as mentioned previously. Please notice that, while the Keycloak server is accessible from the host running the docker container at localhost:18080, from inside the docker network this address is 172.18.0.2:8080, as illustrated by the sketch after this list.
    • The name of the realm which, in our case, is master. This should also match the value in keycloak.json.
    • The credentials: the username (customer-admin) and the password (admin).
    • The name of the Keycloak client for which Direct Access Grants is enabled. In our case this is the client named curl. Please notice that it bears no relation to the command-line utility of the same name.
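
    The host-versus-container address mapping may be verified quickly with two probes against Keycloak's public realm endpoint; a sketch, assuming the port mapping and the container address described above:

    # from the docker host, through the published port
    curl -s http://localhost:18080/auth/realms/master
    # from inside the docker network, through the container address
    curl -s http://172.18.0.2:8080/auth/realms/master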

    Once logged in to the Keycloak server, we can get the TokenManager and, through it, the AccessToken and then the string representation of the access token itself. This string is stored in a static field and injected as an Authorization header with each HTTP request.

    The listing below shows a fragment of the log file created by the test execution. You can see the Authorization: Bearer header inside. We replaced the token content by "..." in order to keep things short. At the end, you should see Tests run: 6, Failures: 0, Skipped: 0.

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running fr.simplex_software.rest.CustomerServiceIT
    0    [main] DEBUG org.apache.http.impl.conn.PoolingClientConnectionManager  - Connection request: [route: {}->http://172.18.0.2:8080][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 10]
    7    [main] DEBUG org.apache.http.impl.conn.PoolingClientConnectionManager  - Connection leased: [id: 0][route: {}->http://172.18.0.2:8080][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 10]
    9    [main] DEBUG org.apache.http.impl.conn.DefaultClientConnectionOperator  - Connecting to 172.18.0.2:8080
    20   [main] DEBUG org.apache.http.client.protocol.RequestAddCookies  - CookieSpec selected: best-match
    24   [main] DEBUG org.apache.http.client.protocol.RequestAuthCache  - Auth cache not set in the context
    24   [main] DEBUG org.apache.http.client.protocol.RequestTargetAuthentication  - Target auth state: UNCHALLENGED
    24   [main] DEBUG org.apache.http.client.protocol.RequestProxyAuthentication  - Proxy auth state: UNCHALLENGED
    24   [main] DEBUG org.apache.http.impl.client.DefaultHttpClient  - Attempt 1 to execute request
    24   [main] DEBUG org.apache.http.impl.conn.DefaultClientConnection  - Sending request: POST /auth/realms/master/protocol/openid-connect/token HTTP/1.1
    25   [main] DEBUG org.apache.http.wire  -  >> "POST /auth/realms/master/protocol/openid-connect/token HTTP/1.1[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "Accept: application/json[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "Accept-Encoding: gzip, deflate[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "Content-Type: application/x-www-form-urlencoded[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "Content-Length: 73[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "Host: 172.18.0.2:8080[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "Connection: Keep-Alive[r][n]"
    26   [main] DEBUG org.apache.http.wire  -  >> "[r][n]"
    26   [main] DEBUG org.apache.http.headers  - >> POST /auth/realms/master/protocol/openid-connect/token HTTP/1.1
    26   [main] DEBUG org.apache.http.headers  - >> Accept: application/json
    26   [main] DEBUG org.apache.http.headers  - >> Accept-Encoding: gzip, deflate
    26   [main] DEBUG org.apache.http.headers  - >> Content-Type: application/x-www-form-urlencoded
    26   [main] DEBUG org.apache.http.headers  - >> Content-Length: 73
    26   [main] DEBUG org.apache.http.headers  - >> Host: 172.18.0.2:8080
    26   [main] DEBUG org.apache.http.headers  - >> Connection: Keep-Alive
    27   [main] DEBUG org.apache.http.wire  -  >> "grant_type=password&username=customer-admin&password=admin&client_id=curl"
    131  [main] DEBUG org.apache.http.wire  -  << "HTTP/1.1 200 OK[r][n]"
    132  [main] DEBUG org.apache.http.wire  -  << "Connection: keep-alive[r][n]"
    132  [main] DEBUG org.apache.http.wire  -  << "Set-Cookie: KC_RESTART=; Version=1; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Max-Age=0; Path=/auth/realms/master; HttpOnly[r][n]"
    132  [main] DEBUG org.apache.http.wire  -  << "Content-Type: application/json[r][n]"
    133  [main] DEBUG org.apache.http.wire  -  << "Content-Length: 2423[r][n]"
    133  [main] DEBUG org.apache.http.wire  -  << "Date: Thu, 04 Jan 2018 11:06:44 GMT[r][n]"
    133  [main] DEBUG org.apache.http.wire  -  << "[r][n]"
    133  [main] DEBUG org.apache.http.impl.conn.DefaultClientConnection  - Receiving response: HTTP/1.1 200 OK
    133  [main] DEBUG org.apache.http.headers  - << HTTP/1.1 200 OK
    133  [main] DEBUG org.apache.http.headers  - << Connection: keep-alive
    134  [main] DEBUG org.apache.http.headers  - << Set-Cookie: KC_RESTART=; Version=1; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Max-Age=0; Path=/auth/realms/master; HttpOnly
    134  [main] DEBUG org.apache.http.headers  - << Content-Type: application/json
    134  [main] DEBUG org.apache.http.headers  - << Content-Length: 2423
    134  [main] DEBUG org.apache.http.headers  - << Date: Thu, 04 Jan 2018 11:06:44 GMT
    153  [main] DEBUG org.apache.http.client.protocol.ResponseProcessCookies  - Cookie accepted [KC_RESTART="", version:1, domain:172.18.0.2, path:/auth/realms/master, expiry:Thu Jan 01 01:00:10 CET 1970]
    154  [main] DEBUG org.apache.http.impl.client.DefaultHttpClient  - Connection can be kept alive indefinitely
    264  [main] DEBUG org.apache.http.wire  -  << "{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI1WW1JaW50R09uZ2J0RExCYzREeUJKQVlpRk5LdUNTS1VJRVYyaUNFekZZIn0.eyJqdGkiOiI2NGJiZTAyZS00Zjk4LTRlMGEtOGFiNy1iNzVkNTQzZTFmN2QiLCJleHAiOjE1MTUwNjQwNjQsIm5iZiI6MCwiaWF0IjoxNTE1MDY0MDA0LCJpc3MiOiJodHRwOi8vMTcyLjE4LjAuMjo4MDgwL2F1dGgvcmVhbG…..

    308  [main] DEBUG org.apache.http.impl.conn.PoolingClientConnectionManager  - Connection [id: 0][route: {}->http://172.18.0.2:8080] can be kept alive indefinitely
    308  [main] DEBUG org.apache.http.impl.conn.PoolingClientConnectionManager  - Connection released: [id: 0][route: {}->http://172.18.0.2:8080][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]
    316  [main] DEBUG fr.simplex_software.rest.CustomerServiceIT  - *** Create a new Customer ***
    335  [main] DEBUG org.apache.http.impl.conn.BasicClientConnectionManager  - Get connection for route {}->http://localhost:8080
    335  [main] DEBUG org.apache.http.impl.conn.DefaultClientConnectionOperator  - Connecting to localhost:8080
    336  [main] DEBUG org.apache.http.client.protocol.RequestAddCookies  - CookieSpec selected: best-match
    336  [main] DEBUG org.apache.http.client.protocol.RequestAuthCache  - Auth cache not set in the context
    336  [main] DEBUG org.apache.http.client.protocol.RequestProxyAuthentication  - Proxy auth state: UNCHALLENGED
    336  [main] DEBUG org.apache.http.impl.client.DefaultHttpClient  - Attempt 1 to execute request
    336  [main] DEBUG org.apache.http.impl.conn.DefaultClientConnection  - Sending request: POST /customer-management/services/customers HTTP/1.1
    336  [main] DEBUG org.apache.http.wire  -  >> "POST /customer-management/services/customers HTTP/1.1[r][n]"
    336  [main] DEBUG org.apache.http.wire  -  >> "Accept-Encoding: gzip, deflate[r][n]"
    337  [main] DEBUG org.apache.http.wire  -  >> "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI1WW1JaW50R09uZ2J0RExCYzREeUJKQVlpRk5LdUNTS1VJRVYyaUNFekZZIn0.eyJqdGkiOiI2NGJiZTAyZS00Zjk4LTRlMGEtOGFiNy1iNzVkNTQzZTFmN2QiLCJleHAiOjE1MTUwNjQwNjQsIm5iZiI6MCwiaWF0IjoxNTE1MDY0MDA0LCJpc3MiOiJodHRwOi8vMTcyLjE4LjAuMjo4MDgwL2F1dGgvcmVhbG1zL21hc3RlciIsImF1…

    337  [main] DEBUG org.apache.http.wire  -  >> "Content-Type: application/json[r][n]"
    337  [main] DEBUG org.apache.http.wire  -  >> "Content-Length: 163[r][n]"
    337  [main] DEBUG org.apache.http.wire  -  >> "Host: localhost:8080[r][n]"
    337  [main] DEBUG org.apache.http.wire  -  >> "Connection: Keep-Alive[r][n]"
    337  [main] DEBUG org.apache.http.wire  -  >> "[r][n]"
    337  [main] DEBUG org.apache.http.headers  - >> POST /customer-management/services/customers HTTP/1.1
    337  [main] DEBUG org.apache.http.headers  - >> Accept-Encoding: gzip, deflate
    337  [main] DEBUG org.apache.http.headers  - >> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI1WW1JaW50R09uZ2J0RExCYzREeUJKQVlpRk5LdUNTS1VJRVYyaUNFekZZIn0.eyJqdGkiOiI2NGJiZTAyZS00Zjk4LTRlMGEtOGFiNy1iNzVkNTQzZTFmN2QiLCJleHAiOjE1MTUwNjQwNjQsIm5iZiI6MCwiaWF0IjoxNTE1MDY0MDA0LCJpc3MiOiJodHRwOi8vMTcyLjE4LjAuMjo4MDgwL2F1dGgvcmVhbG1zL21hc3RlciIsImF1…

    337  [main] DEBUG org.apache.http.headers  - >> Content-Type: application/json
    337  [main] DEBUG org.apache.http.headers  - >> Content-Length: 163
    337  [main] DEBUG org.apache.http.headers  - >> Host: localhost:8080
    337  [main] DEBUG org.apache.http.headers  - >> Connection: Keep-Alive
    337  [main] DEBUG org.apache.http.wire  -  >> "{"id":null,"firstName":"Nick","lastName":"DUMINIL","street":"26 All[0xc3][0xa9]e des Sapins","city":"Soisy sous Montmorency","state":"None","zip":"95230","country":"France"}"
    674  [main] DEBUG org.apache.http.wire  -  << "HTTP/1.1 201 Created[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Expires: 0[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Cache-Control: no-cache, no-store, must-revalidate[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "X-Powered-By: Undertow/1[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Server: WildFly/10[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Pragma: no-cache[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Location: http://localhost:8080/customer-management/services/customers/1[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Date: Thu, 04 Jan 2018 11:06:45 GMT[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Connection: keep-alive[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Content-Type: application/json[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "Content-Length: 160[r][n]"
    676  [main] DEBUG org.apache.http.wire  -  << "[r][n]"
    676  [main] DEBUG org.apache.http.impl.conn.DefaultClientConnection  - Receiving response: HTTP/1.1 201 Created
    676  [main] DEBUG org.apache.http.headers  - << HTTP/1.1 201 Created
    676  [main] DEBUG org.apache.http.headers  - << Expires: 0
    676  [main] DEBUG org.apache.http.headers  - << Cache-Control: no-cache, no-store, must-revalidate
    676  [main] DEBUG org.apache.http.headers  - << X-Powered-By: Undertow/1
    676  [main] DEBUG org.apache.http.headers  - << Server: WildFly/10
    676  [main] DEBUG org.apache.http.headers  - << Pragma: no-cache
    676  [main] DEBUG org.apache.http.headers  - << Location: http://localhost:8080/customer-management/services/customers/1
    676  [main] DEBUG org.apache.http.headers  - << Date: Thu, 04 Jan 2018 11:06:45 GMT
    676  [main] DEBUG org.apache.http.headers  - << Connection: keep-alive
    677  [main] DEBUG org.apache.http.headers  - << Content-Type: application/json
    677  [main] DEBUG org.apache.http.headers  - << Content-Length: 160
    679  [main] DEBUG org.apache.http.impl.client.DefaultHttpClient  - Connection can be kept alive indefinitely
    679  [main] DEBUG org.apache.http.wire  -  << "{"id":1,"firstName":"Nick","lastName":"DUMINIL","street":"26 All[0xc3][0xa9]e des Sapins","city":"Soisy sous Montmorency","state":"None","zip":"95230","country":"France"}"
    684  [main] DEBUG org.apache.http.impl.conn.BasicClientConnectionManager  - Releasing connection org.apache.http.impl.conn.ManagedClientConnectionImpl@3e27aa33
    684  [main] DEBUG org.apache.http.impl.conn.BasicClientConnectionManager  - Connection can be kept alive indefinitely
    684  [main] DEBUG fr.simplex_software.rest.CustomerServiceIT  - *** Location: http://localhost:8080/customer-management/services/customers/1 ***

    Congratulations: you have just configured and run two docker containers, with the Keycloak and Wildfly servers, and tested the REST API authentication/authorization with the OAuth 2.0 and OpenID Connect protocols.