Why You Shouldn’t Use Complex Objects as HashMap Keys

I’m a big believer in learning from my mistakes, but I’m an even bigger believer in learning from other people’s mistakes.  Hopefully someone else will be able to learn from my mistakes.

This post is inspired by an issue that took me a number of days to track down and pinpoint the root cause of.  It started with NullPointerExceptions randomly being thrown in one of my applications.  I wasn't able to consistently replicate the issue, so I added an indiscriminate amount of logging to the code to see if I could track down what was going on.

What I found was that when I attempted to pull a value out of a particular HashMap, the value would sometimes be null.  This was puzzling, because after initializing the map with the keys/values there were no more calls to put(), only calls to get(), so there should have been no opportunity to put a null value in the map.

Below is a code snippet similar to (but far more concise than) the one I had been working on.


//Assumes a map field such as:
//private Map<ProductSummaryBean, ProductDetailsBean> productMap = new HashMap<>();

public void runTest() {
    ProductSummaryBean summaryBean = new ProductSummaryBean(19.95, "MyWidget", "Z332332", new DecimalFormat("$#,###,##0.00"));
    ProductDetailsBean detailBean = getProductDetailsBean(summaryBean);
    productMap.put(summaryBean, detailBean);

    //Load the same summaryBean from the DB
    summaryBean = loadSummaryBean("Z332332");

    //Pull the detailBean from the map for the given summaryBean
    detailBean = productMap.get(summaryBean);
    System.out.println("DetailBean is: " + detailBean);
}

There is a ProductSummaryBean with a short summary of the product, and a ProductDetailsBean with further product details.  The summary bean is below and contains four properties.


package com.lynden.mapdemo;

import java.text.DecimalFormat;
import java.util.Objects;

public class ProductSummaryBean {

    protected double price;
    protected String name;
    protected String upcCode;
    protected DecimalFormat priceFormatter;

    public ProductSummaryBean(double price, String name, String upcCode, DecimalFormat priceFormatter) {
        this.price = price;
        this.name = name;
        this.upcCode = upcCode;
        this.priceFormatter = priceFormatter;
    }

    public double getPrice() {
        return price;
    }

    public String getName() {
        return name;
    }

    public String getUpcCode() {
        return upcCode;
    }

    public DecimalFormat getPriceFormatter() {
        return priceFormatter;
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 79 * hash + (int) (Double.doubleToLongBits(this.price) ^ (Double.doubleToLongBits(this.price) >>> 32));
        hash = 79 * hash + Objects.hashCode(this.name);
        hash = 79 * hash + Objects.hashCode(this.upcCode);
        hash = 79 * hash + Objects.hashCode(this.priceFormatter);
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final ProductSummaryBean other = (ProductSummaryBean) obj;
        if (Double.doubleToLongBits(this.price) != Double.doubleToLongBits(other.price)) {
            return false;
        }
        if (!Objects.equals(this.name, other.name)) {
            return false;
        }
        if (!Objects.equals(this.upcCode, other.upcCode)) {
            return false;
        }
        if (!Objects.equals(this.priceFormatter, other.priceFormatter)) {
            return false;
        }
        return true;
    }

    @Override
    public String toString() {
        return "ProductBean{" + "price=" + price + ", name=" + name + ", upcCode=" + upcCode + ", priceFormatter=" + priceFormatter + '}';
    }
}


Any guesses what happens when the code above is run?

Exception in thread "main" java.lang.NullPointerException
 at com.lynden.mapdemo.TestClass.runTest(TestClass.java:34)
 at com.lynden.mapdemo.TestClass.main(TestClass.java:50)


So what happened?  The HashMap stores its keys by using the hashcodes of the key objects.  If we print out the hashcode when the ProductSummaryBean is first created, and again after it's read out of the DB, we get the following.
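A minimal sketch of that check (loadSummaryBean() is the same hypothetical DB lookup used in the first snippet):

ProductSummaryBean before = new ProductSummaryBean(19.95, "MyWidget", "Z332332", new DecimalFormat("$#,###,##0.00"));
ProductSummaryBean after = loadSummaryBean("Z332332"); //hypothetical DB lookup from the first snippet

System.out.println("SummaryBean hashcode before: " + before.hashCode());
System.out.println("SummaryBean hashcode after: " + after.hashCode());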

SummaryBean hashcode before: -298224643
SummaryBean hashcode after: -298224679


We can see that the hashcodes before and after are different, so there must be something different about the two objects.

SummaryBean before: ProductBean{priceFormatter=java.text.DecimalFormat@67500, price=19.95, name=MyWidget, upcCode=Z332332}
SummaryBean after: ProductBean{priceFormatter=java.text.DecimalFormat@674dc, price=19.95, name=MyWidget, upcCode=Z332332}


Printing out the entire objects shows that while the name, UPC code, and price are all the same, the DecimalFormat used for the price is different.  Since the DecimalFormat is part of the hashCode() calculation for the ProductSummaryBean, the hashcodes of the before and after versions of the bean turned out different.  Since the hashcodes differed, the map was not able to find the corresponding ProductDetailsBean, which in turn caused the NullPointerException.

Now one may ask, should the DecimalFormat object in the bean have been used as part of the equals() and hashCode() calculations?  In this case, probably not, but that may not be true in your case.  The safer way to go would have been to use the product's UPC code, an immutable String, as the HashMap key, avoiding the danger of the key changing unexpectedly.
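As a rough sketch of that safer approach (reusing the hypothetical helper methods from the first snippet), the map is keyed on the immutable UPC string rather than on the full bean:

Map<String, ProductDetailsBean> productMap = new HashMap<>();

ProductSummaryBean summaryBean = new ProductSummaryBean(19.95, "MyWidget", "Z332332", new DecimalFormat("$#,###,##0.00"));
productMap.put(summaryBean.getUpcCode(), getProductDetailsBean(summaryBean));

//Reloading the bean from the DB no longer matters; the String key is stable
summaryBean = loadSummaryBean("Z332332");
ProductDetailsBean detailBean = productMap.get(summaryBean.getUpcCode()); //no longer null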


Automatically Cloning a Wildfly Instance Using Chef

As we have started moving to a service-based architecture, we have been developing processes to create and configure our infrastructure in a predictable and repeatable way using Vagrant and Chef.  One challenge that we have faced is trying to replicate a production Wildfly server on a dev box, including the applications that are installed on it and their correct versions.

Ideally, we’d like the developer to be able to specify which server they want to clone when kicking off the Chef process.  Chef would then create a new Wildfly instance and download and install all the web applications running on the specified instance.

The first question Chef will need answered is: "What Wildfly servers are running on the network?"  The next question is then: "Which applications, and what versions, are installed on those servers?"

In order to answer these questions, we developed a “WildflyMonitor” web application which is installed on each of our Wildfly instances.  The application will collect information about the local Wildfly instance that it’s running on, including the names and versions of the hosted apps, and publish that information to our messaging system.  This information eventually makes it into our Wildfly Registry DB, where it is collected and organized by Wildfly instance.

A rough diagram of the architecture appears below.

[Diagram: WildflyRegistry architecture]

In the example, there are 3 Wildfly instances (lisprod01, 02, and 03) which are reporting their applications to the registry.  The table below the DB illustrates how the data is organized, first by server and then by application, with each Wildfly instance running 2 applications.  The WildflyRegistry REST service then makes this information available to any client on the network, including Chef recipes.
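For illustration, a registry response shaped the way the Chef script below expects might look like the following (the field names match what the script reads; the app names and versions are made up):

[
  {
    "serverName": "lisprod01",
    "wildflyApp": [
      { "appName": "freighttracker-1.0.2", "appRuntimeName": "freighttracker-1.0.2.war" },
      { "appName": "shipmentservice-2.3.1", "appRuntimeName": "shipmentservice-2.3.1.war" }
    ]
  }
]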


The next step is to modify the Chef script to query this REST service for the applications running on the server we want to clone.


require 'net/http'
require 'json'

url = 'http://lweb.lynden.com/WildflyRegistry/registry/services'
webapps = []

resp = Net::HTTP.get_response(URI.parse(url))
resp_text = resp.body
data = JSON.parse(resp_text)
servers = data

# Loop through each Wildfly server that was found.
# cloneServer is assumed to be set earlier (e.g. from a node attribute).
servers.each do |server|
  serverName = server["serverName"]
  # Is this the server we want to clone?
  if serverName == cloneServer then
    wildFlyApps = server["wildflyApp"]
    # Loop through each app on the server
    wildFlyApps.each do |app|
      # Create a hash of app name to app version
      if app["appRuntimeName"].end_with? ".war" then
        appTokens = app["appName"].split("-")
        myHash = Hash[ :name, appTokens[0], :version, appTokens[1] ]
        webapps.push(myHash)
      end
    end
  end
end


The snippet above shows the script contacting the REST service, looping through all the servers that were returned until the desired server to clone is found.  Once the server is found, the script loops through that server’s list of applications and creates a list of hashes with the app name mapped to its version number.

Next, the script loops through each of the apps which were discovered in the previous snippet.


webapps.each do |app|
  url = "http://nexus.lynden.com/repo/com/lynden/" + app[:name] + "/" + app[:version] + "/" + app[:name] + "-" + app[:version] + ".war"
  puts "deploying " + app[:name] + "-" + app[:version]

  fullPath = "/tmp/" + app[:name] + "-" + app[:version] + ".war"
  puts "Full path to app " + fullPath

  # Download the .war from Nexus if it isn't already on the box
  remote_file fullPath do
    source url
    action :create_if_missing
  end

  # Invoke the shell script that deploys the .war via the Wildfly CLI
  bash 'deploy_app' do
    cwd '/usr/local/wildfly/bin'
    command = '/tmp/deployWildfly.sh ' + fullPath
    code <<-EOH
      echo COMMAND #{command}
      #{command}
    EOH
  end
end



First the script constructs the URL to the web app in our Nexus repository, and downloads each web app to the /tmp folder on the server.  It then calls a shell script which deploys the applications to Wildfly utilizing the Wildfly command line interface.

The shell script which is called by Chef to perform the actual deployment to Wildfly is fairly straightforward and appears below.


#!/bin/sh
FILENAME=$1
PATH=$PATH:/usr/local/java/bin
cd /usr/local/wildfly/bin
./jboss-cli.sh -c --user=myUser --password=myPassword --command="deploy $FILENAME"


That's it!  Based on the data in our WildflyRegistry, we are able to use this Chef script and shell script to create a clone of an existing Wildfly instance running on our network.


Developing Trading Applications with the SumZero Trading API

I have open sourced a Java trading library which I have been using to develop automated trading applications for many years.  The SumZero Trading API provides the ability to develop trading applications for the equity, futures, and currency markets by utilizing the following sub-APIs:

  • Market Data API – Request real time Level 1 (NBBO) and Level 2 (Market Depth) market data
  • Broker API – Submit, execute, and monitor orders
  • Historical Data API – Request intraday and end-of-day historical market data.
  • Strategy API – Develop trading strategies to automatically place buy/sell orders based on user defined algorithms.

The library includes implementations of all of these APIs for Interactive Brokers; the Broker API additionally has an implementation for Quantitative Brokers.

The libraries are licensed under the MIT open source license and source code is available at:
https://github.com/rterp/SumZeroTrading

In future posts I will show how easy it is to connect to Interactive Brokers to request real-time market data and place a trade using the API.

twitter: @RobTerpilowski
twitter: @SumZeroTrading

Using Apache Camel and ActiveMQ to Implement Synchronous Request/Response

Implementing a synchronous request/response pattern with Apache Camel and ActiveMQ is quite a bit easier than you might expect, and it has allowed us to leverage our existing messaging infrastructure to facilitate synchronous exchanges between applications where we otherwise may have needed to create a new web service.

Below is an example of setting up two Camel endpoints which will demonstrate the request/response pattern.

First, configure the connection to the JMS message broker.  In this case, an ActiveMQ broker is created in-process.


//Assumes the standard static import for the ActiveMQ component factory:
//import static org.apache.activemq.camel.component.ActiveMQComponent.activeMQComponent;

public void startBroker() throws Exception {
    context = new DefaultCamelContext();
    context.addComponent("jms-broker", activeMQComponent("vm://localhost?broker.persistent=false"));
    buildProducerRoute(context);
    buildConsumerRoute(context);
    context.start();
}


Next, set up the producer route.  First a processor is created, which will print the body of the message.  The route will be executed when a file is dropped into the /Users/RobTerpilowski/tmp/in directory, and routed to the robt.test.queue destination.  Once the route has completed, the processor will be executed.  What we are hoping to see is that the message has been modified by the consuming endpoint by the time this route has completed.  The important piece to note here is the URL:
jms-broker:queue:robt.test.queue?exchangePattern=InOut
exchangePattern=InOut tells Camel that the route is a synchronous request/response; under the covers, Camel sets a reply-to destination on the outgoing JMS message and waits for the reply before the route continues.


public void buildProducerRoute(CamelContext context) throws Exception {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            Processor processor = (Exchange exchange) -> {
                System.out.println("PRODUCER Received response: " + exchange.getIn().getBody(String.class));
            };

            from("file:///Users/RobTerpilowski/tmp/in")
                .to("jms-broker:queue:robt.test.queue?exchangePattern=InOut")
                .process(processor);
        }
    });
}


Next, set up the consumer endpoint.  Again, a processor is created which will be run when the route has completed.  This processor will first print the message that the producer sent.  It will then replace the message with a new message saying that the original message was seen.  This endpoint will listen on the robt.test.queue and route the result to the directory /Users/RobTerpilowski/tmp/out.  When the route has completed, the processor will update the message.  If everything works correctly, the producer endpoint should be able to see the modified message.


public void buildConsumerRoute(CamelContext context) throws Exception {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            Processor processor = (Exchange exchange) -> {
                System.out.println("CONSUMER received message: " + exchange.getIn().getBody(String.class));
                exchange.getIn().setBody("I Saw it!!! It contained: " + exchange.getIn().getBody(String.class));
            };

            from("jms-broker:queue:robt.test.queue")
                .to("file:///Users/RobTerpilowski/tmp/out")
                .process(processor);
        }
    });
}

So now that the routes are set up, it’s time to send a message.  Saving a file in the specified input directory will kick things off.  The file will contain the text “HelloCamel”.


$ echo "HelloCamel" > /Users/RobTerpilowski/tmp/in/message.txt

The consumer listening on the robt.test.queue immediately sees the message arrive, and prints the message body.

CONSUMER received message: HelloCamel


The producer endpoint then receives the modified message back, with confirmation that the consumer endpoint did indeed see the message.

PRODUCER Received response: I Saw it!!! It contained: HelloCamel


Make Sure Your Server Clocks are in Sync!

As I finished my first test services utilizing the request/response pattern, I created an integration test in which they connected to a messaging broker running in-process.  Things looked great, and the services were communicating without any issues.  I then deployed the services to a Wildfly instance running locally, pointed at a messaging broker on our staging server.  This time, however, my requests were consistently timing out and never making it back from the second service.  I literally spent the entire day deconstructing each service piece by piece to see what was going on.  I then remembered something about clock synchronization in the Camel JMS documentation.  I checked both the clock on the VM and the clock on the staging server, and proceeded to do a face palm when I saw there was a four-hour difference.  Lesson learned!


twitter: @RobTerpilowski

Using Dependency Injection in a Java SE Application

It would be nice to decouple components in client applications the way we have become accustomed to doing in server-side applications, and to provide a way to use mock implementations for unit testing.

Fortunately it is fairly straightforward to configure a Java SE client application to use a dependency injection framework such as Weld.

The first step is to include the weld-se jar as a dependency in your project.  The weld-se jar is basically the Weld framework repackaged along with its dependencies as a single jar file, which is about 4MB.

    <dependency>
        <groupId>org.jboss.weld.se</groupId>
        <artifactId>weld-se</artifactId>
        <version>2.2.11.Final</version>
    </dependency>


Implement a singleton which will create and initialize the Weld container and provide a method to access a bean from the container.

import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;


public class CdiContext {

    public static final CdiContext INSTANCE = new CdiContext();

    private final Weld weld;
    private final WeldContainer container;

    private CdiContext() {
        this.weld = new Weld();
        this.container = weld.initialize();
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                weld.shutdown();
            }
        });
    }

    public <T> T getBean(Class<T> type) {
        return container.instance().select(type).get();
    }
}


Once you have the context you can then use it to instantiate a bean which in turn will inject any dependencies into the bean.

import java.util.HashMap;
import java.util.Map;


public class MainClass {

    protected String baseDir;
    protected String wldFileLocation;
    protected String dataFileDir;
    protected int timeInterval = 15;
    protected String outputFileDir;

    public void run() throws Exception {
        CdiContext context = CdiContext.INSTANCE;

        //Get an instance of the bean from the context
        IMatcher matcher = context.getBean(IMatcher.class);

        matcher.setCommodityTradeTimeMap( getDateTranslations(1, "6:30:00 AM", "6:35:00 AM", "6:45:00 AM") );

        matcher.matchTrades(wldFileLocation, dataFileDir, timeInterval, outputFileDir);
    }
}

What is great is that there are no annotations required on the interfaces or their implementing classes.  Weld will automatically find the implementation and inject it into the class where it is defined.  For example, no annotations were required on the IDataFileReader interface or its implementing classes in order to @Inject it into the Matcher class below.  Likewise, neither the IMatcher interface nor the Matcher class require annotations in order to be instantiated by the CdiContext above.

import javax.inject.Inject;

public class Matcher implements IMatcher {

    //Framework will automatically find and inject
    //an implementation of IDataFileReader

    @Inject
    protected IDataFileReader dataFileReader;

    //...
}

twitter: @RobTerpilowski
@LimitUpTrading

JPA java.lang.IllegalArgumentException: No query defined for that name (Solved)

I’ve recently been working with the Camel JPA component (http://camel.apache.org/jpa.html) for moving data from one of our SQL servers to our messaging system.

We have a number of Entity POJOs defined that contain a named query which the JPA component uses in order to query the database to select the appropriate records that need to be processed. Everything was working great and I decided to move these beans to a separate library that could be shared with other applications. However, once I did this the original application started encountering the following error.

java.lang.IllegalArgumentException: No query defined for that name [AllinboundMessagesSqlBean.findByProcessed]

I checked the classpath and the beans were in fact being found, but the named queries on the beans were not being found. It took some research, but the solution to the problem ended up proving to be very simple.

The change was to explicitly add the entity classes to the application’s persistence.xml:

<class>com.lynden.json.beans.AllinboundLoopBean</class>
<class>com.lynden.json.beans.AllinboundMessagesSqlBean</class>
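For context, here is a minimal sketch of where those entries sit in persistence.xml (the persistence-unit name is an illustrative placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="myPersistenceUnit">
        <!-- Entity classes living in the shared .jar must be listed explicitly -->
        <class>com.lynden.json.beans.AllinboundLoopBean</class>
        <class>com.lynden.json.beans.AllinboundMessagesSqlBean</class>
    </persistence-unit>
</persistence>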

Once the classes were defined in the file, the app was then able to find these entity beans that were in a separate .jar. Hopefully this will help others out there who may have run into a similar issue.

twitter: @RobTerpilowski

ClassCastException on Hibernate 4.3.x and Glassfish 4.x

I am attempting to utilize Hibernate 4.3.8 in a service that I am creating, which will be running on Glassfish 4.1.  When I attempt to read an object from the DB, such as in the example below:

Product product = entityManager.find(Product.class, 980001);

The following exception is thrown

java.lang.ClassCastException: com.lynden.allin.service.Product cannot be cast to com.lynden.allin.service.Product

At first glance this may seem a bit strange, since the 2 classes appear identical.  And they are, but the issue is that there are 2 instances of the class, loaded by different classloaders.  When the entityManager attempts to cast the object, it grabs a version of the class that the service itself doesn’t know about, since the reference the service holds was created by a different classloader.

After some searching, it appears that this is a known issue with Hibernate 4.3.6 and newer:

https://hibernate.atlassian.net/browse/HHH-9446

The solution for the time being is to downgrade Hibernate to 4.3.5 in order to avoid this issue in Glassfish.
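If you are pulling Hibernate in through Maven, the downgrade is just a version pin (assuming the standard hibernate-core artifact):

    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-core</artifactId>
        <version>4.3.5.Final</version>
    </dependency>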

twitter: @RobTerpilowski

Using Spring Boot Actuator Endpoints and Jersey Web Services

This one took me a few hours to find an easy solution for, so I thought I’d share here so it may help others.

I have been working to create a Jersey web service that will run in a Spring Boot instance, and wanted to make use of the nifty actuator endpoints that are available in Spring Boot for things such as monitoring the health of the application, listing the beans in use by the application, and shutting down the application, among other things, all of which are detailed on the Spring Boot website.

The problem is that the Jersey application will take over all URLs at the root, thus masking Spring Boot URLs such as /health, even though the application itself is not using those mappings.

The easiest solution I found was to add an application path to the Jersey application so that it listens for requests at a different URL root, such as /api/MyJerseyService, where /api is the root that Jersey will use.

Configuring this was relatively straightforward and only required an additional annotation on the AppConfig class.  Notice the @ApplicationPath("/api") annotation, specifying that Jersey should use /api as the application root.

@Configuration
@ApplicationPath("/api")
public class AppConfig extends ResourceConfig {
    public AppConfig() {
        register( UvDataResource.class );
    }
}

Now when the Spring Boot health web service is invoked at the following URL, the expected results are returned.

http://localhost:8080/health

{
    "status": "UP",
    "diskSpace": {
        "status": "UP",
        "free": 118162386944,
        "threshold": 10485760
    }
}

Meanwhile, the call to the Jersey web service produces the expected result:

http://localhost:8080/api/uvdata/AMLOPS_EQUIP_MASTER/844024

{
    "id": "844024",
    "equipmentType": "11*3",
    "serialNumber": "844024",
    "checksum": 750288259,
    "badValuesMap": {},
    "multiValueMap": {}
}

twitter: @RobTerpilowski

Writing to a NoSQL DB using Camel

We use a somewhat out-of-the-ordinary NoSQL database called “UniVerse“, produced by a company called Rocket, as our primary data store.  We have written our own lightweight, Hibernate-like ORM framework, dubbed “Siesta”, to write data to the DB from Java beans.

Camel is a great framework for implementing Enterprise Integration Patterns (EIP), and we have started making heavy use of the various Camel components in order to pass data in varying formats between internal and 3rd-party systems.  While there are a large number of components available out of the box, there are none for writing data to UniVerse.

Fortunately it is extremely easy to implement custom Camel components, and we were able to create a component to write to UniVerse with a few classes and one configuration file.

For the Camel endpoint URI, we would like to use the following format:

siesta://com.lynden.siesta.component.FreightBean?uvSessionName=XDOCK_SHARED

where:

siesta:// denotes the component scheme,

com.lynden.siesta.component.FreightBean denotes the annotated POJO that the Siesta framework will use to persist the data to UniVerse.

uvSessionName=XDOCK_SHARED tells the component which database session pool to use when connecting to the DB.
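To make the goal concrete, here is a rough sketch of how such an endpoint might be used in a route (the JMS source endpoint and queue name are made up for illustration):

import org.apache.camel.builder.RouteBuilder;

public class FreightRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        //Consume freight updates from a (hypothetical) JMS queue and persist
        //each FreightBean to UniVerse via the custom siesta component
        from("jms:queue:freight.updates")
            .to("siesta://com.lynden.siesta.component.FreightBean?uvSessionName=XDOCK_SHARED");
    }
}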


The Endpoint Class

package com.lynden.siesta.component;

import com.lynden.siesta.BaseBean;
import org.apache.camel.Consumer;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultEndpoint;
import org.apache.camel.spi.UriEndpoint;
import org.apache.camel.spi.UriParam;

/**
 * Represents a Siesta endpoint.
 */
@UriEndpoint(scheme = "siesta" )
public class SiestaEndpoint extends DefaultEndpoint {

    @UriParam
    protected String uvSessionName = "";

    Class<? extends BaseBean> siestaBean;

    public SiestaEndpoint() {
    }

    public SiestaEndpoint(String uri, SiestaComponent component) {
        super(uri, component);
    }

    public SiestaEndpoint(String endpointUri) {
        super(endpointUri);
    }

    @Override
    public Producer createProducer() throws Exception {
        return new SiestaProducer(this);
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
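        //This component is write-only, so no consumer is provided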
        return null;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }

    public void setSiestaBeanClass( Class<? extends BaseBean> siestaBean) {
        this.siestaBean = siestaBean;
    }

    public Class<? extends BaseBean> getSiestaBeanClass() {
        return siestaBean;
    }

    public String getUvSessionName() {
        return uvSessionName;
    }

    public void setUvSessionName(String uvSessionName) {
        this.uvSessionName = uvSessionName;
    }
}

The Component Class
The next step is to create a class to represent the component itself. The easiest way to do this is to extend the org.apache.camel.impl.DefaultComponent class and override the createEndpoint() method.

import com.lynden.siesta.BaseBean;
import java.util.Map;
import org.apache.camel.Endpoint;
import org.apache.camel.impl.DefaultComponent;

public class SiestaComponent extends DefaultComponent {

    @Override
    protected Endpoint createEndpoint(String uri, String path, Map<String, Object> options) throws Exception {
        SiestaEndpoint endpoint = new SiestaEndpoint(uri, this);
        setProperties(endpoint, options);

        Class<? extends BaseBean> type = getCamelContext().getClassResolver().resolveClass(path, BaseBean.class, SiestaComponent.class.getClassLoader());

        if (type != null) {
            endpoint.setSiestaBeanClass(type);
        }
        return endpoint;
    }
}

The createEndpoint method takes as arguments the URI of the component; the path, which includes the “com.lynden.siesta.component.FreightBean” portion of the URI; and finally the options, which include everything after the “?” portion of the URI.

From this method we use reflection to load the BaseBean class specified in the URI, and pass it into the SiestaEndpoint class that was created in the previous step.


The Producer Class

import com.lynden.siesta.BaseBean;
import com.lynden.siesta.EntityManager;
import com.lynden.siesta.IEntityManager;
import org.apache.camel.Exchange;
import org.apache.camel.impl.DefaultProducer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The Siesta producer.
 */
public class SiestaProducer extends DefaultProducer {
    private static final Logger LOG = LoggerFactory.getLogger(SiestaProducer.class);
    private SiestaEndpoint endpoint;
    private IEntityManager entityManager;
    private String uvSessionName;

    public SiestaProducer(SiestaEndpoint endpoint) {
        super(endpoint);
        this.endpoint = endpoint;
        uvSessionName = endpoint.getUvSessionName();
        entityManager =  EntityManager.getInstance(uvSessionName);

    }

    @Override
    public void process(Exchange exchange) throws Exception {
        BaseBean siestaBean = exchange.getIn().getBody( BaseBean.class );
        entityManager.createOrUpdate(siestaBean);
        LOG.debug( "Saving bean " + siestaBean.getClass() + " with ID: "+ siestaBean.getId() );
    }

}

The Config File

The final step is to create a configuration file in the .jar’s META-INF directory, which will allow the Camel Context to find and load the custom component.  The convention is to put a file named after the component (“siesta” in our case) in the META-INF/services/org/apache/camel/component/ directory of the component’s .jar file.

The META-INF/services/org/apache/camel/component/siesta file contains 1 line to tell the Camel Context which class to load:

class=com.lynden.siesta.component.SiestaComponent

That’s it!  With 3 relatively simple classes and a small config file, we were able to easily implement our own Camel producer using our NoSQL database as an endpoint.

twitter: @RobTerp
