Now consulting independently

After a long time working as a consultant for others, I felt that the time was right to take off the training wheels and break out on my own. Earlier this month, I refocused my company, Ameliant, on independent work, delivering consulting around the Apache suite of integration technologies that I have been blogging about for some time – Camel, ActiveMQ, CXF and ServiceMix.

Today, I’m pleased to launch the company’s new site at ameliant.com. It’s now also reachable from the little leaf motif on the blog. I’m really happy with the way it turned out.

Big thanks to Luke Burford at Lunamedia for the design and development of the site. He’s also the guy who put this blog design together, which is why the designs are so complementary.

I am looking forward to this new endeavour.


Posted on March 17th, 2013 in thoughts | No Comments »

Deep testing of integrations with Camel

One of the things that often comes up in client conversations about developing integration code with Camel is what test support is available, and more to the point appropriate, for testing integrations. There is a spectrum of test types that can be performed, ranging from fully automated unit tests to full-blown multi-system, user-based “click and see the flow-through effects” tests. Camel came with comprehensive test support baked in from its very inception, but the mechanisms that are available can be used to go way beyond the standard unit test.

Unit tests

Without wanting to get academic about it, let’s define a unit test as being one that tests the logic encapsulated within a block of code without external side effects. Unit testing straightforward classes is trivial. If you want to make use of external service classes, these can be mocked using your favourite mocking library and injected into the class under test. Camel routes are a little different, in that what they define isn’t executed directly, but rather builds up a set of instructions that are handed to the Camel runtime for execution.

Camel has extensive support for testing routes defined using both the Java DSL as well as the Spring/Blueprints XML DSL. In general the pattern is:

  1. instantiate a RouteBuilder or Spring context containing the routes with a CamelContext, and start the context (this is handled for you by CamelTestSupport or CamelSpringTestSupport – see Camel testing). These should contain direct: endpoints as the inputs to the routes (consumers) and mock: endpoints as the outputs (producers).
  2. get a hold of the mock endpoints, and outline the expectations. A MockEndpoint itself uses a directed builder DSL to allow you to define a comprehensive suite of expectations, ranging from checking the number of messages received to the details of an individual message. You can make full use of Camel expressions in these tests as well.
  3. create messages that you want to feed in to the route and send them to the direct: endpoint at the top of the route under test using a ProducerTemplate.
  4. assert that the mock endpoints received the expected messages.

An example of this approach can be seen in the RssConsumerRouteBuilderTest in the horo-app I blogged about yesterday.
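
To make the shape of such a test concrete, here is a condensed sketch of the pattern. The class and endpoint names are invented for illustration (they are not taken from the horo-app), and it assumes the camel-test JUnit 4 support from Camel 2.x:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class GreetingRouteBuilderTest extends CamelTestSupport {

    // 1. the route under test, wired with a direct: input and a mock: output;
    // in a real project this would be your production RouteBuilder with the URIs injected
    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:greetings.in")
                    .transform(simple("Hello ${body}"))
                    .to("mock:greetings.out");
            }
        };
    }

    @Test
    public void shouldGreetTheBody() throws Exception {
        // 2. get hold of the mock endpoint and outline the expectations
        MockEndpoint out = getMockEndpoint("mock:greetings.out");
        out.expectedMessageCount(1);
        out.expectedBodiesReceived("Hello Camel");

        // 3. feed a test message into the route via the inherited ProducerTemplate
        template.sendBody("direct:greetings.in", "Camel");

        // 4. assert that the expectations were met
        assertMockEndpointsSatisfied();
    }
}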

There are a couple of things that you need in order to employ this approach successfully. If using Java, the RouteBuilder class that defines your routes should allow the route endpoint URIs, and any beans that touch external resources, to be injected into it – see RssConsumerRouteBuilder. The external beans can then easily be mocked as in a standard unit test.

Using the Spring DSL, we can still employ the same general approach, but we need to jump through a couple of hoops to do it. Consider what the equivalent would require. A simple route might be defined via:

    <route id="fileCopyRoute">
        <from uri="file:///some/directory"/>
        <to uri="file:///some/other/directory"/>
    </route>

You can externalise any URIs using Spring’s property support:

    <route id="fileCopyRoute">
        <from uri="${fileCopyRoute.input}"/>
        <to uri="${fileCopyRoute.output}"/>
    </route>

You could then define a PropertyPlaceholderConfigurer with a properties file that defines these properties as:

fileCopyRoute.input=file:///some/directory
fileCopyRoute.output=file:///some/other/directory

The definition of this class should be in a Spring context file separate to that of your route definitions. For testing, you would run the routes with another test XML file that defines a PropertyPlaceholderConfigurer pointing to a test properties file with the test URIs:

fileCopyRoute.input=direct:fileCopyRoute.in
fileCopyRoute.output=mock:fileCopyRoute.out

This is usually why Spring DM/Blueprint-based bundle projects split the config across (a minimum of) two context files. One (META-INF/spring/spring-context-osgi.xml) contains all of the beans that touch the OSGi runtime, including the properties mechanism, and the other (META-INF/spring/spring-context.xml) contains your physical routes. When testing, you can easily switch out the OSGi bits via another config file. This allows you to inject other bits during a unit test of the XML-based routes, or when using the camel-maven-plugin to run those routes from the command line without an OSGi container like ServiceMix.

Embedded integration tests

Sometimes, testing just the route logic isn’t enough. When I was building out the horo-app, I happily coded up my routes, tested them and deployed, only to have them blow up immediately. What happened? The objects that I was expecting to receive from the RSS component didn’t match those the component actually sent out. So I changed tack. To exercise the component as part of the route, I needed a web server to serve the file that fed the test.

Integration testing is usually pretty problematic in that you need an external system servicing your tests – and when you are in an environment where that service changes, you can break the code of other people working against the same system. But there is a solution! Sun’s Java 6 comes with an embeddable web server that you can start up as part of your integration tests.

The approach that I used was to spin up this server at the start of my test, and configure it programmatically to serve up a response suitable for my test when a certain resource was consumed. The server was started on port 0, which means that it’s up to the runtime to assign an available port on the machine when the test runs. This is very important, as it enables multiple instances of the same test to run at the same time, as is often the case on CI servers. Without it, tests would trip over each other. Similar approaches are possible using other embeddable server types, such as LDAP via ApacheDS, messaging via ActiveMQ, or databases via H2 or Derby.
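
To give a flavour of the approach, here is a minimal sketch of starting the JDK’s built-in HTTP server on port 0 and discovering which port was actually bound. The resource path and RSS payload are invented for illustration:

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

// in the test setup (e.g. a @Before method that declares throws Exception)
HttpServer server = HttpServer.create(new InetSocketAddress(0), 0); // port 0 = any free port
server.createContext("/feed.rss", new HttpHandler() {
    public void handle(HttpExchange exchange) throws IOException {
        byte[] body = "<rss version=\"2.0\"><channel/></rss>".getBytes("UTF-8");
        exchange.getResponseHeaders().add("Content-Type", "application/rss+xml");
        exchange.sendResponseHeaders(200, body.length);
        OutputStream out = exchange.getResponseBody();
        out.write(body);
        out.close();
    }
});
server.start();

// the assigned port is only known after creation; feed it into the route's endpoint URI
int port = server.getAddress().getPort();
String feedUri = "rss:http://localhost:" + port + "/feed.rss";

// ... exercise the route against feedUri, then in teardown:
server.stop(0);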

Tests that require an external resource often start failing on large projects without any changes on the programmer’s side for this exact reason – the underlying system dependencies changing. By embedding the server that you test your integration against, you decouple yourself from that dependency.

The routes in your test are then injected with the URI of the embedded resource. In my case, I whipped up an integration test version of the original unit test (RssConsumerRouteBuilderITCase) to do exactly this. Integration tests can be wired into a separate part of the Maven build lifecycle using the maven-failsafe-plugin, and use a different naming convention (*ITCase.java as opposed to *Test.java).

Usually, the way that you structure your tests to avoid duplicating the lifecycle management of these embedded backends ends up relying on a test class hierarchy, which may end up looking like:

  • CamelTestSupport
    • CamelTestSupportWithDatabase
    • CamelTestSupportWithWebserver

which I don’t really like, as you inevitably end up requiring two kinds of resource in a test. A much better option is to manage these external resources using JUnit’s @Rule annotation. This treats any object that extends the org.junit.rules.ExternalResource base class as an aspect of the test, starting and stopping it as part of the test’s lifecycle. As such, you can compose your test of as many of these dependencies as you like – all without a rigid class hierarchy.
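
A sketch of what such a rule might look like, wrapping the kind of embedded web server shown earlier (the class name is invented, not taken from the horo-app):

import java.net.InetSocketAddress;

import org.junit.rules.ExternalResource;

import com.sun.net.httpserver.HttpServer;

public class EmbeddedHttpServerResource extends ExternalResource {

    private HttpServer server;

    @Override
    protected void before() throws Throwable {
        // port 0 again, so concurrent test runs on a CI server don't collide
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.start();
    }

    @Override
    protected void after() {
        server.stop(0);
    }

    public int getPort() {
        return server.getAddress().getPort();
    }

    public HttpServer getServer() {
        return server;
    }
}

A test can then declare @Rule public EmbeddedHttpServerResource httpServer = new EmbeddedHttpServerResource(); and line up as many such rules (database, broker, directory server) as it needs, side by side.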

This approach allows you to test your integration code against a physical backend, without requiring that backend to be shared between developers. This decouples your development from the rest of the team and allows your integration tests to be run on a CI server. A huge win, as only tests that are deterministic end up being run and maintained in the long term.

#winning!


Posted on October 5th, 2012 in Camel, osgi, ServiceMix, software engineering, testing, thoughts | No Comments »

Transactional persistence with MyBatis in ServiceMix

This post has been a long time coming. A while back I cooked up a sample application, kind of a pet store for integration, that demonstrates a bunch of things that you might want to do beyond the standard bootstrap examples. This app takes the form of a horoscope aggregator, which allows you to view the last x days’ worth of horoscopes for a star sign (perhaps for the purposes of checking how accurate, or not, they were). The project is available as usual on GitHub at FuseByExample/horo-app.

The app demonstrates a number of useful things that you would typically want to do in an integration:

  • MyBatis via Spring for persistence directly from Camel routes, as well as for use directly by a web app (separate bundles)
  • database transactions via a Spring PlatformTransactionManager
  • testing of your MyBatis templates against an embedded database; the one I have used is H2, the one used by the real app is Postgres
  • templatize your Camel routes; the app consumes two separate RSS feeds in exactly the same way, though with different endpoints (see the sketch after this list)
  • unit test your routes by dependency injecting your endpoints, and using the CamelTestSupport mechanisms
  • perform “semi-integration” tests via the @Rule annotation against an embedded server within your JUnit tests, in such a way that multiple integration tests can run at the same time on the same server. This plugs into the integration test part of the Maven build lifecycle via the maven-failsafe-plugin. Incredibly useful for CI! In this case, the purpose was to test the behaviour of the camel-rss component as part of the route.
  • deploy CXF JAX-RS services alongside your Camel bundles to provide access to your data via XML and JSON using just the one mechanism
  • share expensive resources such as DataSources between bundles using metadata, such as the database name that they provide access to
  • idempotent consumption, so the same stuff doesn’t keep getting processed (saved to the database) over and over. This is saved to a JDBC IdempotentRepository in a live environment, and an in-memory one in tests.
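
As flagged in the route-templating point above, here is a sketch of the templating and endpoint-injection idea. The class is illustrative only, and deliberately simpler than the real RssConsumerRouteBuilder:

import org.apache.camel.builder.RouteBuilder;

public class FeedConsumerRouteBuilder extends RouteBuilder {

    private final String feedUri;
    private final String targetUri;

    // the same class is instantiated once per RSS feed; production wiring passes the
    // real rss: feed URI and the persistence endpoint, while tests pass direct: and mock: URIs
    public FeedConsumerRouteBuilder(String feedUri, String targetUri) {
        this.feedUri = feedUri;
        this.targetUri = targetUri;
    }

    @Override
    public void configure() throws Exception {
        from(feedUri)
            .to(targetUri);
    }
}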

Full documentation available in the README. Enjoy!


Posted on October 4th, 2012 in Camel, cxf, fusebyexample, ServiceMix | No Comments »

System Integrations as Plugins using Camel and ServiceMix

I recently had a client with a use case that I thought would be interesting to share (and which they were happy for me to talk about – no names, and the industry has been changed). Sample code and full instructions for the solution are available, as always, at FuseByExample on GitHub.

Imagine a system integration where the core logic is static, but the systems that participate in a particular process change over time. Take, as an example, a travel booking system that accepts orders for buying flights. How do you go about adding new integration logic for a particular capability, such as the booking of a flight with a new airline via its system? On top of this:

  • without changing any of the core logic around the main business process (the booking of a flight also includes taking payment)
  • with no application downtime (hot deployment)
  • enabling other development teams to define integrations for new airlines

Conceptually, these requirements could be satisfied via an application-level “plugin” solution. Your core flight booking application forms a platform alongside which you deploy capabilities specific to individual airlines. This may seem like a complicated set of requirements, but there is a set of tools that you can use to enable exactly this type of application.

Using Camel provides us with a way to easily partition integration logic into a core process (flight bookings) and sub-processes (booking a ticket with a particular airline), and to dynamically route to the right sub-process depending on the booking request. Routes can be connected within the one container by using the ServiceMix NMR to pass messages between bundles.

Deploying logic that is specific to an individual airline as an OSGi component enables the separation of code away from the core process, and provides the required dynamicity (and allows others to write that logic). The trick is in putting it all together.


Conceptual overview

OSGi bundles can be thought of as mini-applications. By basing a bundle on Spring DM/Blueprint (essentially defining a Spring-like config in a known location), we can embed a Camel context inside it and have routing logic start up when that bundle is deployed. This is a good candidate for our airline-specific booking code.

We then need to somehow advise the main application bundle that the new process is online and ready to do business by having messages routed to it. To achieve this, we make use of the OSGi service registry. For those unfamiliar with it, the registry acts as a conceptual whiteboard inside an OSGi container. Our plugin bundles can register beans in the registry as implementors of an interface. The core application that wishes to use these services looks them up in the registry to get a handle on the implementations.

To advertise the availability of a sub-process and provide the name of the Camel endpoint that accepts bookings for a particular airline, we use an interface that indicates to the main application that a plugin bundle is available to take bookings. This is placed in its own bundle so that it can be implemented by the airline-specific bundles and used by the main booking process.

public interface BookingProcessor {
        public String getAirlineCode();
        public String getBookingRouteUri();
}

Each airline bundle defines its own implementation that returns the airline code that it accepts booking messages for, and the URI for the endpoint of the Camel route that it would be listening for messages on.
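
For example, a hypothetical airline bundle might provide an implementation along these lines (the airline code and NMR endpoint URI are invented for illustration):

public class GermanAirlineBookingProcessor implements BookingProcessor {

    // booking requests whose airline matches this code get routed to this bundle
    public String getAirlineCode() {
        return "DE";
    }

    // the entry point of the Camel route defined inside this bundle
    public String getBookingRouteUri() {
        return "nmr:bookGermanAirline";
    }
}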


Bundle dependencies

This object is then registered with the OSGi service registry:

<osgi:service ref="germanAirlinePlugin" 
        interface="com.fusesource.examples.booking.spi.BookingProcessor" />

The main process bundle is then able to get a dynamic proxy to a set of BookingProcessors that may come and go:

<osgi:set id="bookingProcessors" 
        interface="com.fusesource.examples.booking.spi.BookingProcessor" 
        cardinality="0..N"/>

This set can then be injected into a bean (BookingProcessorRegistry) that makes decisions such as:

  • Is this airline currently supported by the system?
  • What is the route that can be invoked to process this booking?
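
A sketch of what such a registry bean might look like (the method names are taken from the route below; Camel’s bean binding passes the message body, at that point the flight number, as the single argument; the flight-number-to-airline matching here is invented for illustration):

import java.util.Set;

public class BookingProcessorRegistry {

    private Set<BookingProcessor> bookingProcessors;

    // injected with the <osgi:set> dynamic proxy, so membership changes as plugin bundles come and go
    public void setBookingProcessors(Set<BookingProcessor> bookingProcessors) {
        this.bookingProcessors = bookingProcessors;
    }

    public boolean isAirlineSupported(String flightNumber) {
        return findProcessor(flightNumber) != null;
    }

    public String getBookingProcessorUri(String flightNumber) {
        BookingProcessor processor = findProcessor(flightNumber);
        if (processor == null) {
            throw new IllegalArgumentException("No booking processor for " + flightNumber);
        }
        return processor.getBookingRouteUri();
    }

    private BookingProcessor findProcessor(String flightNumber) {
        for (BookingProcessor processor : bookingProcessors) {
            // simplistic matching for illustration: the airline code prefixes the flight number
            if (flightNumber.startsWith(processor.getAirlineCode())) {
                return processor;
            }
        }
        return null;
    }
}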

The Camel routing logic can then be really simple:

<route id="placeBooking">
	<from uri="jetty:0.0.0.0:9191/booking" />
	<transform>
		<simple>${headers.flightNumber}</simple>
	</transform>
	<choice>
		<when>
			<method bean="bookingProcessorRegistry" method="isAirlineSupported"/>
			<recipientList stopOnException="true" strategyRef="bookingResponseAggregationStrategy">
				<constant>direct:takePayment, direct:placeBookingWithAirline</constant>
			</recipientList>
		</when>
		<otherwise>
			<transform>
				<simple>Unable to book flight for ${body} - unsupported airline</simple>
			</transform>
		</otherwise>
	</choice>
</route>

<route id="placeBookingWithAirline">
	<from uri="direct:placeBookingWithAirline" />
	<!-- work out who to send the message to -->
	<setHeader headerName="bookingProcessor">
		<method bean="bookingProcessorRegistry" method="getBookingProcessorUri"/>
	</setHeader>
	<log message="Calling out to ${headers.bookingProcessor}"/>
	<recipientList>
		<header>bookingProcessor</header>
	</recipientList>
</route>

So to make a new airline available for bookings, you:

  1. create a new OSGi bundle
  2. register a BookingProcessor in the service registry that indicates the route that processes these bookings
  3. write the integration logic in a route that listens on that endpoint
  4. build the bundle and drop it into ServiceMix alongside the main process.

Voila! An application-specific plugin system. You can then use the Karaf web console as a mechanism to make the bundle logic available via REST to a single container, or, if you want to distribute it across a cluster, Fuse Fabric via Fuse ESB Enterprise.


Posted on May 22nd, 2012 in Camel, fusebyexample, osgi, ServiceMix | 4 Comments »

Machiavelli on software

It must be remembered that there is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new system. For the initiator has the enmity of all who would profit by the preservation of the old institutions and merely lukewarm defenders in those who would gain by the new ones.

– Machiavelli “The Prince”
