Knowledge page of Ciber Netherlands

Archive for the 'Java' Category

Enterprise Integration: ActiveMQ network of brokers


ActiveMQ is a powerful and widely used open source messaging server, more commonly known as a "message broker". It is often used in Java-based enterprise integration solutions, and provides enterprise features such as clustering, multiple message stores, and the ability to use any database as a JMS persistence provider besides VM, cache, and journal persistence. For more information on ActiveMQ, visit the website [1] and check the documentation. In general, JMS message brokers are often the heart of (enterprise) integration solutions.


There is a commercially supported version available from FuseSource [2], as part of the FuseSource stack consisting of an Enterprise Service Bus (based on ServiceMix), a message broker (based on ActiveMQ), a mediation router (based on Apache Camel) and a web services stack (based on CXF).

Besides these products, FuseSource offers others such as Fuse HQ, an infrastructure monitoring tool, and Fuse IDE, an Eclipse-based visual development tool for designing Camel routes.


Concept: Network of Brokers

One of the more interesting features of ActiveMQ is the ability to create a network of brokers, thus connecting and clustering multiple instances of the message broker on different servers in a network (or a network of networks, on the internet, or somewhere in the cloud).

Networks of message brokers [3] in ActiveMQ work quite differently from more familiar models such as that of physical networks. In this article I will explain each component piece, progressively moving up in complexity from a single broker to a full-blown network. At the end you should have a basic understanding of how these networks behave and be able to reason about the interactions across different topologies. Please note that this is not intended as an ActiveMQ tutorial.


Concept: Producer-Consumer

One of the key things in understanding broker-based messaging is that the production, or sending of a message, is disconnected from the consumption of that message. The broker acts as an intermediary, serving to make the method by which a message is consumed, as well as the route that the message has travelled, orthogonal to its production. You don’t really need to understand the entire postal system to know that when you post a letter in the big orange PostNL (or whatever it is called today) box, it eventually will arrive in the little box at the front of the recipient’s house. The same idea applies here.


Producer and consumer are unaware of each other; each knows only the broker it is connected to.

Connections are shown in the direction of where they were established from (i.e. Consumer connects to Broker).
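To make the decoupling concrete, here is a toy in-memory "broker" in plain Java. This is only an illustrative sketch (nothing ActiveMQ-specific, no persistence, no acknowledgements); the point is that producer and consumer touch nothing but the broker.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy sketch of broker-based decoupling; not how ActiveMQ is implemented.
public class ToyBroker {
    private final Map<String, Queue<String>> queues = new ConcurrentHashMap<>();

    // Producer side: hand the message to the broker and move on.
    public void send(String queue, String message) {
        queues.computeIfAbsent(queue, q -> new ConcurrentLinkedQueue<>()).add(message);
    }

    // Consumer side: take a message from the broker, unaware of who produced it.
    public String receive(String queue) {
        Queue<String> q = queues.get(queue);
        return q == null ? null : q.poll();
    }

    public static void main(String[] args) {
        ToyBroker broker = new ToyBroker();
        broker.send("orders", "order-1");             // producer knows only the broker
        System.out.println(broker.receive("orders")); // prints: order-1
    }
}
```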

Out of the box, when a standard message is sent to a queue from a producer, it is sent to the broker, which persists it in its message store. By default this is KahaDB, but it can be configured to store in memory, which buys performance at the cost of reliability. Once the broker has confirmation that the message has been persisted in the journal (the terms journal and message store are often used interchangeably), it responds with an acknowledgement back to the producer. The thread sending the message from the producer is blocked during this time.


On the consumption side, when a message listener is registered or a call to receive() is made, the broker creates a subscription to that queue. Messages are fetched from the message store and passed to the consumer; this is usually done in batches, and the fetching is a lot more complex than a simple read from disk, but that’s the general idea.

The consumer will usually process the message at this stage and subsequently acknowledge that it has been consumed. The broker then updates the message store, marking the message as consumed or simply deleting it (depending on the persistence mechanism).


So what happens when there is more than one consumer on a queue? All things being equal, and ignoring consumer priorities, the broker will in this case hand out incoming messages in a round-robin manner to each subscriber.
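The round-robin hand-out can be sketched in a few lines of plain Java (a simulation of the dispatch order, not broker code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simulates the broker's default round-robin dispatch to equal-priority consumers.
public class RoundRobinDispatch {

    // Returns, per consumer, which message numbers it would receive.
    public static Map<String, List<Integer>> dispatch(List<String> consumers, int messageCount) {
        Map<String, List<Integer>> delivered = new HashMap<>();
        for (String consumer : consumers) {
            delivered.put(consumer, new ArrayList<>());
        }
        for (int msg = 1; msg <= messageCount; msg++) {
            // hand each incoming message to the next subscriber in turn
            String target = consumers.get((msg - 1) % consumers.size());
            delivered.get(target).add(msg);
        }
        return delivered;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> result = dispatch(List.of("Consumer1", "Consumer2"), 6);
        System.out.println(result.get("Consumer1")); // prints: [1, 3, 5]
        System.out.println(result.get("Consumer2")); // prints: [2, 4, 6]
    }
}
```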



Concept: Store-and-forward

Now let’s scale up to two brokers, Broker1 and Broker2. In ActiveMQ a network of brokers is set up by connecting a networkConnector to a transportConnector (think of it as a socket listening on a port). A networkConnector is an outbound connection from one broker to another.
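As a sketch of what this looks like in a broker’s activemq.xml (broker names, ports and connector names below are illustrative, not from the article):

```xml
<!-- Broker1: listens for inbound connections on its transportConnector -->
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>

<!-- Broker2: establishes an outbound networkConnector into Broker1 -->
<networkConnectors>
  <networkConnector name="toBroker1" uri="static:(tcp://broker1:61616)"/>
</networkConnectors>
```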


When a subscription is made to a queue on Broker2, that broker tells the other brokers it knows about (in our case, just Broker1) that it is interested in that queue; another subscription is now made on Broker1 with Broker2 as the consumer. As far as an ActiveMQ broker is concerned there is no difference between a standard client that consumes messages and another broker acting on behalf of a client. They are treated in exactly the same manner.

So now that Broker1 sees a subscription from Broker2, what happens? The result is a hybrid of the producer and consumer behaviours described above. Broker1 acts as the producer and Broker2 as the consumer. Messages are fetched from Broker1’s message store and passed to Broker2. Broker2 processes each message by “store”-ing it in its own journal and acknowledges consumption of that message. Broker1 then marks the message as consumed.


The simple consume case applies as Broker2 “forwards” the message to its consumers, as if the message was produced directly into it. Neither the producer nor consumer are aware that a network of brokers exists, it is orthogonal to their functionality – a key driver of this style of messaging.

Concept: Local and remote consumers

It has already been noted that, as far as a broker is concerned, all subscriptions are equal. To it there is no difference between a local “real” consumer and another broker that is going to forward messages on. Hence incoming messages will be handed out round-robin as usual. With two consumers (Consumer1 on Broker1 and Consumer2 on Broker2), if messages are produced to Broker1, both consumers will receive the same number of messages.


A networkConnector is unidirectional by default, which means that the broker initiating the connector acts as a client, forwarding its subscriptions. Broker2 in this case subscribes on behalf of its consumers to Broker1. Broker2 however will not be made aware of subscriptions on Broker1. networkConnectors can however be made duplex, such that subscriptions are passed in both directions.
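For illustration, duplex is a single attribute on the connector (names again illustrative):

```xml
<!-- subscriptions now propagate in both directions over this one connector -->
<networkConnector name="toBroker1" uri="static:(tcp://broker1:61616)" duplex="true"/>
```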

So let’s take it one step further with a network that demonstrates why it is a bad idea to connect brokers to each other in an ad-hoc manner. Let’s add Broker3 into the mix such that it connects into Broker1, and Broker2 sets up a second networkConnector into Broker3. All networkConnectors are set up as duplex.


This is a common approach people take when they first encounter broker networks and want to connect a number of brokers to each other, as they are naturally used to the internet model of network behaviour where traffic is routed down the shortest path. If we think about it from first principles, it quickly becomes apparent that this is not the best approach.

Let’s examine what happens when a consumer connects to Broker2.

  • Broker2 echoes the subscription to the brokers it knows about - Broker1 and Broker3.
  • Broker3 echoes the subscription down all networkConnectors other than the one from which the request came; it subscribes to Broker1.
  • A producer sends messages into Broker1.
  • Broker1 stores and forwards messages to the active subscriptions on its transportConnector; half to Broker2, and half to Broker3.
  • Broker2 stores and forwards to its consumer.
  • Broker3 stores and forwards to Broker2.

Eventually everything ends up at the consumer, but some messages travelled needlessly via Broker1 -> Broker3 -> Broker2, while the others took the more direct route Broker1 -> Broker2. Add more brokers into the mix, and the store-and-forward traffic grows rapidly as messages flow through any number of weird and wonderful routes.
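The extra traffic is easy to quantify. Assuming the even round-robin split described above, a little arithmetic shows the mesh doing fifty percent more store-and-forward work than the direct route:

```java
// Counts store-and-forward hops for the 3-broker mesh described above:
// half the messages go Broker1 -> Broker2 (1 forward each), the other half
// go Broker1 -> Broker3 -> Broker2 (2 forwards each).
public class MeshForwardCount {

    public static int forwards(int messages) {
        int direct = messages / 2;        // Broker1 -> Broker2
        int relayed = messages - direct;  // Broker1 -> Broker3 -> Broker2
        return direct + relayed * 2;
    }

    public static void main(String[] args) {
        System.out.println(forwards(100)); // prints: 150 (the direct topology needs only 100)
    }
}
```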


The message flow overview above shows lots of unnecessary store-and-forward.

Fortunately, it is possible to avoid this by employing other topologies, such as hub and spoke.


Better, isn’t it? A message can flow between any of the numbered brokers via the hub in a maximum of 3 hops.

You can also use a more nuanced approach, with considerations such as unidirectional networkConnectors that pass only certain subscriptions, or reducing consumer priority such that consumers further away have a lower priority than closer ones.
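As a sketch of such a nuanced connector (the destination name below is made up), ActiveMQ supports restricting which destinations a networkConnector forwards and lowering the priority of networked consumers:

```xml
<networkConnector name="toHub" uri="static:(tcp://hub:61616)"
                  decreaseNetworkConsumerPriority="true">
  <!-- forward subscriptions only for these destinations -->
  <dynamicallyIncludedDestinations>
    <queue physicalName="orders.outbound"/>
  </dynamicallyIncludedDestinations>
</networkConnector>
```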

Each network design needs to be considered separately, trading off considerations such as message load, the amount of hardware at your disposal, latency (number of hops) and reliability. When you understand how all the parts fit together and think about the overall topology from first principles, it’s much easier to work through.


[1] Apache ActiveMQ : http://activemq.apache.org

[2] FuseSource : http://www.fusesource.com

[3] Network of brokers : http://activemq.apache.org/networks-of-brokers.html


Integrating Activiti BPM with Mule and Camel



Today there are many excellent frameworks and components for developing (enterprise) integration solutions. Typical integration components are BPM (Business Process Management), an ESB (Enterprise Service Bus) and message brokers. Quite often these components complement each other, each with its own specific role, as can be seen in the image below.


The role of the BPM component can vary based on your needs or application requirements. It can perform workflow tasks, business process tasks and orchestration tasks. Although these tasks (and their implementations) may differ, there’s usually one obvious commonality: the need to interact with the other components.

As for the BPM part, there’s a new kid in town, and it’s here to stay. I’m talking about Activiti. This article shows how Activiti plays nicely with other components, such as an ESB (like Mule) and a message router (like Apache Camel).

About Activiti

Activiti is an open source workflow and Business Process Management (BPM) platform, currently part of Alfresco, replacing jBPM as its default workflow framework. Its core is a lightweight BPMN 2.0 process engine for Java, distributed under the Apache license. Activiti runs stand-alone or embedded in any Java (web) application using its excellent Spring support.

The Activiti engine provides a powerful Java API that makes it easy to deploy process definitions, implement custom logic and unit test processes. If you want to connect remotely to the Activiti engine, there’s a REST API to communicate with the process engine. But what if you want to start a new process instance by sending a JMS message, or invoke a web service from a BPMN 2.0 process? These are much more realistic scenarios.

BPM integration with Web Services

By default, the BPMN 2.0 specification provides support for web service calls via a specific web service task. The Activiti engine supports this task, but it can be a bit cumbersome to implement due to the large number of additional XML elements needed. And this task only supports SOAP web service calls.

Luckily, the Activiti community came to the rescue. In the latest releases of Activiti you’ll find two interesting contributed modules: one for Mule ESB integration and one for Apache Camel integration. If you are interested, just download the latest version and play around, or check out the Activiti source and take a look for yourself. For more information on building from source, please read the ‘Building the distribution’ wiki page.

Now let’s walk through some examples to get an overview of these modules.

About Mule

Let’s start with the Mule ESB. Mule has out-of-the-box BPM support, including an implementation for Activiti. This is not really a surprise, knowing that Activiti and MuleSoft work closely together on supporting each other’s products.

Mule integration

With the Activiti Mule ESB integration there are two deployment options:

  • Option one is to run both Mule and Activiti in one Spring container; the communication between the Mule ESB and the Activiti engine then uses the Activiti Java API.
  • The second deployment option is to run Mule ESB standalone and let it communicate with the Activiti engine through the REST API. The only difference is the Activiti connector, which is defined in the Mule configuration.

Below are examples of the Mule configuration for both deployment options.

    Embedded configuration:

        <!-- references the Spring beans exposed by the Activiti process engine -->
        <activiti:connector name="activitiConnector"
              repositoryService-ref="repositoryService"
              runtimeService-ref="runtimeService"
              taskService-ref="taskService"
              historyService-ref="historyService" />

    <!-- Rest of the code shown in the next snippets -->



    Remote configuration:

        <activiti:connector name="activitiConnector"
              activitiServerURL="http://localhost:8080/activiti-rest/service"
              username="kermit"
              password="kermit" />


    The embedded configuration references Spring beans defined in the Activiti engine’s Spring configuration, so the Mule ESB communicates with the Activiti engine directly. The remote configuration defines the location of the REST API and the authentication parameters, so Mule can use the Activiti REST API to communicate with the Activiti engine. For more information on Mule configuration, please take a look at the MuleSoft website. Access to the developer documentation requires signing up, but it is free.

    Now let’s actually do something with the Activiti connector. Let’s kick off a new process instance of a simple BPMN 2.0 process using a JMS message. First, we have to set up the Activiti connector infrastructure in Mule, which is detailed below.

    Activiti connector configuration:

    <jms:activemq-connector name="jmsConnector" brokerURL="tcp://localhost:61616"/>

    <flow name="CreateActivitiProcessFlow">

      <jms:inbound-endpoint queue="in.create" />

      <logger message="Received message #[payload]" level="INFO" />

      <activiti:create-process parametersExpression="#[payload]" />

      <jms:outbound-endpoint queue="out.create" />

    </flow>


    Note that the "flow" tags represent a Mule message flow. When a message is sent to the ‘in.create’ queue, the message is logged using the #[payload] expression. Next, the Mule ESB Activiti module is invoked to create a new process instance. In this example, the JMS message is expected to be a MapMessage and the Map is retrieved to get the process parameters with the parametersExpression. To be able to start a process instance, we have to specify a ‘processDefinitionKey’ property in the MapMessage.

    The additional properties specified in the MapMessage are all translated to process variables. Finally, the process instance is created and the newly created process instance object is sent to another JMS queue (out.create). This JMS message (an ObjectMessage) contains, among other things, the process instance ID, which can be used to retrieve information such as process variables.

    To test this example we need a bit of JMS plumbing code. If you’re interested in running the code examples yourself, you can check out the code from the Google Code repository, listed at the bottom of this article.

    In addition to creating new process instances, you can also set new process variables, signal a process instance etc. For a full overview of BPM functionalities in Mule, you can read the Mule ESB Activiti BPM documentation.


    I already mentioned that Activiti uses the BPMN 2.0 specification for modeling process definitions. This is great, since you won’t be tied to a vendor specific or closed specification.

    Mule does have its own DSL for specifying Mule message flows, but it tries to implement the Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. In fact, many tools and frameworks are based on these patterns, such as Apache Camel, ServiceMix/Fuse and Spring Integration (just to name a few).

    These patterns allow us to graphically define message flows. For more information on Enterprise Integration Patterns, take a look at http://www.eaipatterns.com. In this article, you will find some examples explained using the patterns.


    In addition to communicating with the Activiti engine from the Mule ESB, it’s also possible to send messages from a BPMN process to the Mule ESB. This opens up possibilities to send, for example, JMS messages, or to create advanced integration logic from a BPMN process. The current implementation of this functionality is limited to the embedded mode, but there’s no reason why it couldn’t be extended to also support the standalone or remote setup. Let’s look at a simple Activiti process definition containing a Mule send task.

    <definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                 xmlns:activiti="http://activiti.org/bpmn"
                 targetNamespace="http://activiti.org/bpmn20">

      <process id="helloMuleWorld">

        <startEvent id="Start" />

        <sequenceFlow sourceRef="Start" targetRef="sendToMuleTask" />

        <sendTask id="sendToMuleTask" activiti:type="mule">
          <extensionElements>
            <activiti:field name="endpointUrl">
              <activiti:string>vm://in</activiti:string>
            </activiti:field>
            <activiti:field name="language">
              <activiti:string>juel</activiti:string>
            </activiti:field>
            <activiti:field name="payloadExpression">
              <activiti:string>${processVariable1}</activiti:string>
            </activiti:field>
            <activiti:field name="resultVariable">
              <activiti:string>processVariable2</activiti:string>
            </activiti:field>
          </extensionElements>
        </sendTask>

        <sequenceFlow sourceRef="sendToMuleTask" targetRef="End" />

        <endEvent id="End" />

      </process>
    </definitions>

    In this example we send a message to the ‘in’ queue of Mule’s VM transport (an in-memory messaging transport). The message contains the value of the processVariable1 process variable, and the response (we use a request-response exchange pattern in the Mule flow configuration) is written to a new process variable named processVariable2.

    The Mule message flow configuration listening to the JVM queue looks like this.

    <flow name="HelloMuleWorldFlow">

      <vm:inbound-endpoint path="in" exchange-pattern="request-response" />

      <logger message="Received message #[payload]" level="INFO" />

      <script:component>
        <script:script engine="groovy">return 'world'</script:script>
      </script:component>

    </flow>

    The message is logged and a simple Groovy script returns a response message with the content ‘world’. This shows how easy it is to send a message from a BPMN process to the Mule ESB.

    Now let’s take a look at the Apache Camel implementation.

    About Camel

    Apache Camel is a lightweight integration framework which implements all the Enterprise Integration Patterns, so you can easily integrate different applications using the required patterns. You can use Java, Spring XML, Scala or Groovy. Besides that, custom components can be created very easily.

    You can deploy Apache Camel as a standalone application, in a web container (e.g. Tomcat or Jetty), in a Java EE application server (e.g. JBoss AS or WebSphere AS), in an OSGi environment, or in combination with a Spring container.

    Camel integration

    As mentioned before, there is another great and widely used integration framework available for use with Activiti: Apache Camel. You should understand that both Mule ESB and Apache Camel are capable of implementing a lot of similar integration logic.

    Of course, there are plenty of differences. One of the most important is that Mule is usually considered a product, while Camel is much more of a framework. Mule is a real ESB with the usual functionality; Camel is technically not an ESB, although it provides much of the same integration logic.

    Another difference is that the Camel integration always runs embedded with the Activiti engine in the same Spring configuration. So you have to define a Spring XML configuration that includes both an Activiti engine configuration and a Camel context configuration. To be able to start new process instances from Camel, the deployed process definition key is made available in the Camel context, as you can see in the following snippet.







      <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
        <property name="brokerURL" value="tcp://localhost:61616" />
      </bean>

      <bean id="camel" class="org.activiti.camel.CamelBehaviour">
        <constructor-arg index="0">
          <list>
            <bean class="org.activiti.camel.SimpleContextProvider">
              <constructor-arg index="0" value="helloCamelProcess" />
              <constructor-arg index="1" ref="camelProcess" />
            </bean>
          </list>
        </constructor-arg>
      </bean>

      <camelContext id="camelProcess" xmlns="http://camel.apache.org/schema/spring">
        <!-- scans the configured package for RouteBuilder classes; adjust to your own package -->
        <packageScan>
          <package>org.activiti.camel.route</package>
        </packageScan>
      </camelContext>

    In this configuration we create a connection to an ActiveMQ broker, which we’ll use later on. Next, a SimpleContextProvider is defined that connects a deployed process definition on the Activiti engine to a Camel context. You can define a SimpleContextProvider for each process definition that you want to connect to a Camel context. In the last part a Camel context is defined that scans for RouteBuilder classes in the configured package.

    With the infrastructure in place we can now define integration logic in a Camel RouteBuilder class using the fluent API.

    public class CamelHelloRoute extends RouteBuilder {

      @Override
      public void configure() throws Exception {

        // Route 1: consume from ActiveMQ, start a process instance, report back
        from("activemq:queue:in.create")
            .log(LoggingLevel.INFO, "Received message ${body}")
            .to("activiti:helloCamelProcess")
            .log(LoggingLevel.INFO, "Received message ${body}")
            .to("activemq:queue:out.create");

        // Route 2: implements the logic of serviceTask1 in the process
        from("activiti:helloCamelProcess:serviceTask1")
            .log(LoggingLevel.INFO, "Received message on service task ${property.var1}")
            // creates a new process variable var2 on the process instance
            .setProperty("var2", constant("world"));
      }
    }

    There are two Camel routes defined in this RouteBuilder class. The first route listens for new messages arriving on the ‘in.create’ ActiveMQ queue. The message is logged, a new instance of the ‘helloCamelProcess’ process definition is created, and the process instance ID is logged and sent to the ‘out.create’ ActiveMQ queue. So when we send a JMS MapMessage to the ‘in.create’ queue, all entries of the map are set as process variables on the new process instance, and the process instance ID ends up on the ‘out.create’ queue.

    In the second route, the Java service task logic of the ‘helloCamelProcess’ process is implemented (we’ll see in a bit how this is wired up in the BPMN 2.0 XML). First the process variable var1 is logged, and then a new process variable var2 is created on the process instance. Of course, we could implement far more complex integration logic here, like sending a JMS message or invoking a web service.

    Now let’s look at how the logic of the Java service task (serviceTask1) is delegated to this Camel route.

    <definitions targetNamespace="http://activiti.org"
                 xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                 xmlns:activiti="http://activiti.org/bpmn">

      <process id="helloCamelProcess">
        <startEvent id="start" />
        <sequenceFlow sourceRef="start" targetRef="serviceTask1" />
        <serviceTask id="serviceTask1" activiti:delegateExpression="${camel}" />
        <sequenceFlow sourceRef="serviceTask1" targetRef="waitState" />
        <receiveTask id="waitState" />
        <sequenceFlow sourceRef="waitState" targetRef="End" />
        <endEvent id="End" />
      </process>
    </definitions>

    As you can see, the Camel route delegation is really simple. We only have to reference the CamelBehaviour Spring bean (camel) we defined earlier. In the source code you can find a unit test to run the full example.


    With the availability of both integration modules, a wide range of integration options can be leveraged. The BPMN 2.0 specification already supports the web service task, and the Activiti engine adds a powerful Java service task, but now a whole range of ESB transports and enterprise integration patterns is available for you to use.


    Activiti project website : http://www.activiti.org

    BPMN 2.0 specification : http://www.omg.org/spec/BPMN/2.0/

    Mule ESB : http://www.mulesoft.org

    Apache Camel : http://camel.apache.org

    Source code: https://activiti-integration-demo.googlecode.com/svn/trunk/


    Real Time Saver

    We all have these projects where a bunch of Excel sheets are made available to you by the business: “All the business logic is in there. Have fun!” Next, you will spend days trying to figure out how everything works. You can save a lot of time by using the Excel plug-in “Trace”, made by Christopher Teh Boon Sung: http://www.christopherteh.com/trace/.

    It generates a great graphical overview of how all the cells are connected to each other. This can save you several days!

    Till Next Time


    Java development using SAP NWDI

    SAP has a lot of experience with enterprise wide software development and managed transport of software. To support component based development and transport for Java software, SAP introduced Netweaver Development Infrastructure (NWDI).


    Read more


    SoapUI usage and integration within a CI environment part II

    Best practices / lessons learned

    This section is organized in the following main parts, which are discussed one by one in more detail within the remainder of this article:

    1. Tips for an optimized SoapUI configuration when using a CM (configuration management) system

    2. Tips for creating SoapUI TestCases

    3. Tips for setting up the SoapUI Testrunner script

    4. Tips for configuring SoapUI within Hudson (CI environment)

    Read more


    SoapUI usage and integration within a CI environment Part I

    SoapUI usage and integration within a CI environment, Part I: an enumeration of best practices and lessons learned.

    The focus of this article is not to summarize the known features and functionality of SoapUI, but to share the lessons learned and best practices gathered during extensive usage of SoapUI within a webservices project.

    Service-oriented architecture (SOA) and webservices are becoming more and more popular in many IT projects. Exposing your business logic component as a webservice can be as simple as adding a few metadata annotations.

    Before making webservices available to the public, you need to make sure they function as designed. The common way to do this is by writing functional tests for your webservices. Another reason for writing functional tests is regression, i.e. ‘is everything still working as before?’. If well designed, all tests should keep succeeding as long as nothing in the domain has changed, and should detect impacting changes by failing.

    Read more


    Software quality, how do we manage?

    We write software. We try to make it as good as possible but it is hard. Complex business rules, intricate web frameworks and high-tech application servers all add to the complexity of the whole package. How do we ensure it all works flawlessly together?

    Read more


    Creating XSLT-extensions with SAXON

    Suppose you want to do some heavy lifting with XSLT (2.0), but XSLT provides no out-of-the-box functions for your problem. In that case you can create your own XSLT extensions.

    Below you will find a small demo of how to accomplish this.

    Read more


    JavaScript LinkedList and HashMap

    Lately I’ve been doing some JavaScript coding again, and the lack of two simple but very useful data structures such as LinkedList and HashMap inspired me to write my own implementations.

    Read more


    Subversion 1.5

    Today the de facto standard among (open source) versioning systems seems to be Subversion, but it wasn’t always like that. Up until a couple of years ago most projects were using CVS.

    CVS had some limitations, though. It didn’t version the moving or renaming of files or directories. In the world of Java, where refactoring of software is common, this is quite a big issue: complete directory trees might move because of a single refactoring action. Clearly CVS wasn’t up to this.

    Read more

