Why You Shouldn’t Use Complex Objects as HashMap Keys

I’m a big believer in learning from my mistakes, but I’m an even bigger believer in learning from other people’s mistakes. Hopefully someone else will be able to learn from mine.


This post is inspired by an issue that took me a number of days to track down and pinpoint the root cause.  It started with NullPointerExceptions randomly being thrown in one of my applications.  I wasn’t able to consistently reproduce the issue, so I added copious logging to the code to see if I could track down what was going on.

What I found was that when I attempted to pull a value out of a particular HashMap, the value would sometimes be null.  This was puzzling, because after the map was initialized with its keys and values there were no more calls to put(), only calls to get(), so there should have been no opportunity to put a null value into the map.

Below is a code snippet similar to (but far more concise than) the one I had been working with.


public void runTest() {
    ProductSummaryBean summaryBean = new ProductSummaryBean(19.95, "MyWidget", "Z332332", new DecimalFormat("$#,###,##0.00"));
    ProductDetailsBean detailBean = getProductDetailsBean(summaryBean);
    productMap.put(summaryBean, detailBean);

    //Load the same summaryBean from the DB
    summaryBean = loadSummaryBean("Z332332");

    //Pull the detailBean from the map for the given summaryBean
    detailBean = productMap.get(summaryBean);
    System.out.println("DetailBean is: " + detailBean);
}

There is a ProductSummaryBean with a short summary of the product, and a ProductDetailsBean with further product details.  The summary bean appears below and contains four properties.


package com.lynden.mapdemo;

import java.text.DecimalFormat;
import java.util.Objects;

public class ProductSummaryBean {

    protected double price;
    protected String name;
    protected String upcCode;
    protected DecimalFormat priceFormatter;

    public ProductSummaryBean(double price, String name, String upcCode, DecimalFormat priceFormatter) {
        this.price = price;
        this.name = name;
        this.upcCode = upcCode;
        this.priceFormatter = priceFormatter;
    }

    public double getPrice() {
        return price;
    }

    public String getName() {
        return name;
    }

    public String getUpcCode() {
        return upcCode;
    }

    public DecimalFormat getPriceFormatter() {
        return priceFormatter;
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 79 * hash + (int) (Double.doubleToLongBits(this.price) ^ (Double.doubleToLongBits(this.price) >>> 32));
        hash = 79 * hash + Objects.hashCode(this.name);
        hash = 79 * hash + Objects.hashCode(this.upcCode);
        hash = 79 * hash + Objects.hashCode(this.priceFormatter);
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final ProductSummaryBean other = (ProductSummaryBean) obj;
        if (Double.doubleToLongBits(this.price) != Double.doubleToLongBits(other.price)) {
            return false;
        }
        if (!Objects.equals(this.name, other.name)) {
            return false;
        }
        if (!Objects.equals(this.upcCode, other.upcCode)) {
            return false;
        }
        if (!Objects.equals(this.priceFormatter, other.priceFormatter)) {
            return false;
        }
        return true;
    }

    @Override
    public String toString() {
        return "ProductBean{" + "price=" + price + ", name=" + name + ", upcCode=" + upcCode + ", priceFormatter=" + priceFormatter + '}';
    }
}

 

Any guesses what happens when the code above is run?

Exception in thread "main" java.lang.NullPointerException
 at com.lynden.mapdemo.TestClass.runTest(TestClass.java:34)
 at com.lynden.mapdemo.TestClass.main(TestClass.java:50)

 

So what happened?  The HashMap stores its keys by using the hash codes of the key objects.  If we print out the hash code when the ProductSummaryBean is first created, and again after it’s read out of the DB, we get the following.

SummaryBean hashcode before: -298224643
SummaryBean hashcode after: -298224679

 

We can see that the hash codes before and after are different, so there must be something different about the two objects.

SummaryBean before: ProductBean{priceFormatter=java.text.DecimalFormat@67500, price=19.95, name=MyWidget, upcCode=Z332332}
SummaryBean after: ProductBean{priceFormatter=java.text.DecimalFormat@674dc, price=19.95, name=MyWidget, upcCode=Z332332}

 

Printing out the entire objects shows that while the name, UPC code, and price are all the same, the DecimalFormat used for the price is different.  Since the DecimalFormat is part of the hashCode() calculation for the ProductSummaryBean, the hash codes of the before and after versions of the bean turned out different.  Because the hash code differed, the map could not find the corresponding ProductDetailsBean, which in turn caused the NullPointerException.

Now one may ask: should the DecimalFormat object in the bean have been used as part of the equals() and hashCode() calculations?  In this case, probably not, but the same risk applies to any property that can differ between logically equal objects.  The safer way to go would have been to use the product’s UPC code, an immutable String, as the HashMap key, avoiding the danger of keys changing unexpectedly.
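As a minimal sketch of that safer approach (assuming the same beans and helper methods as in the earlier snippet; the map declaration here is illustrative), keying the map by the immutable UPC string means a freshly loaded bean can still find its entry:

Map<String, ProductDetailsBean> productMap = new HashMap<>();

ProductSummaryBean summaryBean = new ProductSummaryBean(19.95, "MyWidget", "Z332332", new DecimalFormat("$#,###,##0.00"));
//Key by the immutable UPC code rather than by the whole bean
productMap.put(summaryBean.getUpcCode(), getProductDetailsBean(summaryBean));

//Reloading the bean from the DB no longer matters; the key is just the String
summaryBean = loadSummaryBean("Z332332");
ProductDetailsBean detailBean = productMap.get(summaryBean.getUpcCode());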


Automatically Cloning a Wildfly Instance Using Chef

As we have started moving to a service-based architecture, we have been developing processes to create and configure our infrastructure in a predictable and repeatable way using Vagrant and Chef.  One challenge we have faced is replicating a production Wildfly server on a dev box, including the applications installed on it, at their correct versions.

Ideally, we’d like the developer to be able to specify which server they want to clone when kicking off the Chef process.  Chef would then create a new Wildfly instance and download and install all the web applications running on the specified instance.

The first question Chef needs answered is, “What Wildfly servers are running on the network?”  The next question is, “Which applications, and which versions of them, are installed on those servers?”

In order to answer these questions, we developed a “WildflyMonitor” web application which is installed on each of our Wildfly instances.  The application will collect information about the local Wildfly instance that it’s running on, including the names and versions of the hosted apps, and publish that information to our messaging system.  This information eventually makes it into our Wildfly Registry DB, where it is collected and organized by Wildfly instance.

A rough diagram of the architecture appears below.

[Diagram: WildflyRegistry architecture]

In the example, there are three Wildfly instances (lisprod01, 02, and 03) reporting their applications to the registry.  The table below the DB illustrates how the data is organized by server and then by application, with each Wildfly instance running two applications.  The WildflyRegistry REST service then makes this information available to any client on the network, including Chef recipes.
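For reference, the payload returned by the registry’s REST service would look roughly like the following.  This is a hypothetical shape inferred from the field names the Chef script below reads (serverName, wildflyApp, appName, appRuntimeName); the application names are made up.

[
  {
    "serverName": "lisprod01",
    "wildflyApp": [
      { "appName": "FreightService-1.2.0", "appRuntimeName": "FreightService-1.2.0.war" },
      { "appName": "TrackingService-2.0.1", "appRuntimeName": "TrackingService-2.0.1.war" }
    ]
  }
]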

 

The next step is to modify the Chef recipe to query the registry’s REST service.


require 'net/http'
require 'json'

url = 'http://lweb.lynden.com/WildflyRegistry/registry/services'
webapps = []

resp = Net::HTTP.get_response(URI.parse(url))
resp_text = resp.body
data = JSON.parse(resp_text)
servers = data

# Loop through each Wildfly server that was found
servers.each do |server|
  serverName = server["serverName"]
  # Is this the server we want to clone?
  if serverName == cloneServer then
    wildFlyApps = server["wildflyApp"]
    # Loop through each app on the server
    wildFlyApps.each do |app|
      # Create a hash of app name to app version
      if app["appRuntimeName"].end_with? ".war" then
        appTokens = app["appName"].split("-")
        myHash = Hash[:name, appTokens[0], :version, appTokens[1]]
        webapps.push(myHash)
      end
    end
  end
end


The snippet above shows the script contacting the REST service and looping through the servers that were returned until the desired server to clone (cloneServer, supplied by the developer when kicking off the Chef run) is found.  Once the server is found, the script loops through that server’s list of applications and builds a list of hashes mapping each app name to its version number.

Next, the script loops through each of the apps which were discovered in the previous snippet.


webapps.each do |app|
  url = "http://nexus.lynden.com/repo/com/lynden/" + app[:name] + "/" + app[:version] + "/" + app[:name] + "-" + app[:version] + ".war"
  puts "deploying " + app[:name] + "-" + app[:version]

  fullPath = "/tmp/" + app[:name] + "-" + app[:version] + ".war"
  puts "Full path to app " + fullPath

  # Download the .war from Nexus if it isn't already present
  remote_file fullPath do
    source url
    action :create_if_missing
  end

  # Hand the downloaded .war to the Wildfly CLI wrapper script
  bash 'deploy_app' do
    cwd '/usr/local/wildfly/bin'
    command = '/tmp/deployWildfly.sh ' + fullPath
    code <<-EOH
      echo COMMAND #{command}
      #{command}
    EOH
  end
end


 

First the script constructs the URL of the web app in our Nexus repository.  It then downloads each web app to the /tmp folder on the server and calls a shell script which deploys the application to Wildfly using the Wildfly command line interface.

The shell script which is called by Chef to perform the actual deployment to Wildfly is fairly straightforward and appears below.


#!/bin/sh
FILENAME=$1
PATH=$PATH:/usr/local/java/bin
cd /usr/local/wildfly/bin
./jboss-cli.sh -c --user=myUser --password=myPassword --command="deploy $FILENAME"

 

That’s it.  Based on the data in our WildflyRegistry, we are able to use this Chef recipe and shell script to create a clone of an existing Wildfly instance running on our network.


Using Apache Camel and ActiveMQ to Implement Synchronous Request/Response

Implementing a synchronous request/response pattern with Apache Camel and ActiveMQ is quite a bit easier than you may have expected, and it has allowed us to leverage our existing messaging infrastructure to facilitate synchronous exchanges between applications where we otherwise may have needed to create a new web service.

Below is an example of setting up two Camel endpoints which will demonstrate the request/response pattern.

First, configure the connection to the JMS message broker.  In this case, an ActiveMQ broker is created in-process.


public void startBroker() throws Exception {
    context = new DefaultCamelContext();
    //activeMQComponent() is the static factory method on ActiveMQComponent
    context.addComponent("jms-broker", activeMQComponent("vm://localhost?broker.persistent=false"));
    buildProducerRoute(context);
    buildConsumerRoute(context);
    context.start();
}

 

Next, set up the producer route.  First a processor is created, which will print the body of the message.  The route will be executed when a file is dropped into the /Users/RobTerpilowski/tmp/in directory, and the file will be routed to the robt.test.queue destination.  Once the route has completed, the processor will be executed.  What we are hoping to see is that the message has been modified by the consuming endpoint by the time this route completes.  The important piece to note here is the URI:
jms-broker:queue:robt.test.queue?exchangePattern=InOut
The exchangePattern=InOut option tells Camel that the route is a synchronous request/response.


public void buildProducerRoute(CamelContext context) throws Exception {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            Processor processor = (Exchange exchange) -> {
                System.out.println("PRODUCER Received response: " + exchange.getIn().getBody(String.class));
            };

            from("file:///Users/RobTerpilowski/tmp/in")
                .to("jms-broker:queue:robt.test.queue?exchangePattern=InOut")
                .process(processor);
        }
    });
}

 

Next, set up the consumer endpoint.  Again, a processor is created which will be run when the route has completed.  This processor will first print the message that the producer sent.  It will then replace the message with a new message saying that the original message was seen.  This endpoint will listen on the robt.test.queue and route the result to the directory /Users/RobTerpilowski/tmp/out.  When the route has completed, the processor will update the message.  If everything works correctly, the producer endpoint should be able to see the modified message.


public void buildConsumerRoute(CamelContext context) throws Exception {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            Processor processor = (Exchange exchange) -> {
                System.out.println("CONSUMER received message: " + exchange.getIn().getBody(String.class));
                exchange.getIn().setBody("I Saw it!!! It contained: " + exchange.getIn().getBody(String.class));
            };

            from("jms-broker:queue:robt.test.queue")
                .to("file:///Users/RobTerpilowski/tmp/out")
                .process(processor);
        }
    });
}

So now that the routes are set up, it’s time to send a message.  Saving a file in the specified input directory will kick things off.  The file will contain the text “HelloCamel”.

 

$ echo "HelloCamel" > /Users/RobTerpilowski/tmp/in/message.txt

The consumer listening on the robt.test.queue immediately sees the message arrive and prints the message body.

CONSUMER received message: HelloCamel

 

The producer endpoint then receives the modified message back, with confirmation that the consumer endpoint did indeed see the message.

PRODUCER Received response: I Saw it!!! It contained: HelloCamel

 

Make Sure Your Server Clocks are in Sync!

As I finished my first test services that would utilize the request/response pattern, I created an integration test in which they connected to a messaging broker running in-process.  Things looked great, and the services were communicating without any issues.  I then deployed the services to a Wildfly instance running locally, pointed at a messaging broker on our staging server.  This time, however, the requests consistently timed out and never made it back from the second service.  I literally spent the entire day deconstructing each service piece by piece to see what was going on.  Then I remembered something about clock synchronization in the Camel JMS documentation: request/response over JMS is time-sensitive, so a skewed clock can make a reply look like it arrived after the request already timed out.  I checked the clock on the VM and the clock on the staging server and proceeded to do a face palm when I saw there was a four-hour difference.  Lesson learned!
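On a related note, the window the producer waits for a reply is configurable on the endpoint URI via the Camel JMS requestTimeout option (the default is 20 seconds).  A minimal sketch of the producer route from above, with an arbitrary 60-second value:

from("file:///Users/RobTerpilowski/tmp/in")
    //Wait up to 60 seconds for the reply before timing out
    .to("jms-broker:queue:robt.test.queue?exchangePattern=InOut&requestTimeout=60000")
    .process(processor);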

 

twitter: @RobTerpilowski

Using Dependency Injection in a Java SE Application

It would be nice to decouple components in client applications the way we have become accustomed to doing in server-side applications, and to provide a way to use mock implementations for unit testing.

Fortunately it is fairly straightforward to configure a Java SE client application to use a dependency injection framework such as Weld.

The first step is to include the weld-se jar as a dependency in your project.  The weld-se jar is basically the Weld framework repackaged along with its other dependencies as a single jar file, which is about 4MB.

    <dependency>
        <groupId>org.jboss.weld.se</groupId>
        <artifactId>weld-se</artifactId>
        <version>2.2.11.Final</version>
    </dependency>

 

Implement a singleton which will create and initialize the Weld container and provide a method to access a bean from the container.

import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;


public class CdiContext {

    public static final CdiContext INSTANCE = new CdiContext();

    private final Weld weld;
    private final WeldContainer container;

    private CdiContext() {
        this.weld = new Weld();
        this.container = weld.initialize();
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                weld.shutdown();
            }
        });
    }

    public <T> T getBean(Class<T> type) {
        return container.instance().select(type).get();
    }
}

 

Once you have the context you can then use it to instantiate a bean which in turn will inject any dependencies into the bean.

import java.util.HashMap;
import java.util.Map;


public class MainClass {
    protected String baseDir;
    protected String wldFileLocation;
    protected String dataFileDir;
    protected int timeInterval = 15;
    protected String outputFileDir;

    public void run() throws Exception {
        CdiContext context = CdiContext.INSTANCE;

        //Get an instance of the bean from the context
        IMatcher matcher = context.getBean(IMatcher.class);

        matcher.setCommodityTradeTimeMap( getDateTranslations(1, "6:30:00 AM", "6:35:00 AM", "6:45:00 AM") );

        matcher.matchTrades(wldFileLocation, dataFileDir, timeInterval, outputFileDir);

    }
}

What is great is that no annotations are required on the interfaces or their implementing classes.  Weld will automatically find the implementation and inject it where it is declared; i.e., no annotations were required on the IDataFileReader interface or its implementing class in order to @Inject it into the Matcher class, and likewise neither the IMatcher interface nor the Matcher class requires annotations in order to be instantiated by the CdiContext above.

public class Matcher implements IMatcher {

    //Framework will automatically find and inject
    //an implementation of IDataFileReader
    @Inject
    protected IDataFileReader dataFileReader;

    //...
}
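For completeness, here is a minimal sketch of what the injected pieces might look like.  IDataFileReader and its implementation belong to the application; the readRecords method and the CsvDataFileReader class name are hypothetical, shown purely for illustration.  With the classes packaged in a bean archive (typically by including an empty META-INF/beans.xml marker file), Weld resolves the sole implementation and injects it:

import java.util.ArrayList;
import java.util.List;

//IDataFileReader.java
public interface IDataFileReader {
    List<String> readRecords(String fileLocation);
}

//CsvDataFileReader.java - discovered by Weld as the only IDataFileReader implementation
public class CsvDataFileReader implements IDataFileReader {
    @Override
    public List<String> readRecords(String fileLocation) {
        //Real parsing logic would live here
        return new ArrayList<>();
    }
}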

twitter: @RobTerpilowski
@LimitUpTrading

JPA java.lang.IllegalArgumentException: No query defined for that name (Solved)

I’ve recently been working with the Camel JPA component (http://camel.apache.org/jpa.html) for moving data from one of our SQL servers to our messaging system.

We have a number of entity POJOs defined that contain a named query which the JPA component uses to query the database and select the appropriate records to process.  Everything was working great, and I decided to move these beans to a separate library that could be shared with other applications.  However, once I did this, the original application started encountering the following error.

java.lang.IllegalArgumentException: No query defined for that name [AllinboundMessagesSqlBean.findByProcessed]

I checked the classpath and the beans were in fact being found, but the named queries on the beans were not.  It took some research, but the solution ended up being very simple.

The change was to explicitly add the entity classes to the application’s persistence.xml:

<class>com.lynden.json.beans.AllinboundLoopBean</class>
<class>com.lynden.json.beans.AllinboundMessagesSqlBean</class>
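In context, the persistence unit would look something like the following sketch (the unit name here is a placeholder, not from the original application):

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="camelJpaUnit">
        <!-- Entities living in a separate .jar must be listed explicitly -->
        <class>com.lynden.json.beans.AllinboundLoopBean</class>
        <class>com.lynden.json.beans.AllinboundMessagesSqlBean</class>
    </persistence-unit>
</persistence>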

Once the classes were listed in the file, the app was able to find the named queries on the entity beans in the separate .jar.  Hopefully this will help others who have run into a similar issue.

twitter: @RobTerpilowski

ClassCastException on Hibernate 4.3.x and Glassfish 4.x

I am attempting to utilize Hibernate 4.3.8 in a service that I am creating, which will be running on Glassfish 4.1.  When I attempt to read an object from the DB, as in the example below:

Product product = entityManager.find(Product.class, 980001);

The following exception is thrown

java.lang.ClassCastException: com.lynden.allin.service.Product cannot be cast to com.lynden.allin.service.Product

At first glance this may seem a bit strange, since the two classes appear identical, and they are.  The issue is that there are two instances of the class loaded by different classloaders.  When the entityManager attempts the cast, it uses a version of the class that the service itself doesn’t know about, since the reference the service holds was created by a different classloader.

After some searching, it appears that this is a known issue with Hibernate 4.3.6 and newer:

https://hibernate.atlassian.net/browse/HHH-9446

The solution for the time being is to downgrade Hibernate to 4.3.5 in order to avoid this issue on Glassfish.
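If you manage Hibernate through Maven, the pin would look roughly like this (shown for the standard hibernate-entitymanager artifact; adjust the coordinates to match your build):

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <!-- 4.3.5.Final predates the classloader regression tracked in HHH-9446 -->
    <version>4.3.5.Final</version>
</dependency>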

twitter: @RobTerpilowski

Writing to a NoSQL DB using Camel

We use a somewhat out-of-the-ordinary NoSQL database called “UniVerse”, produced by a company called Rocket, as our primary data store.  We have written our own ORM framework for writing data to the DB from Java beans, dubbed “Siesta” since it is a lightweight, Hibernate-like framework.

Camel is a great framework for implementing Enterprise Integration Patterns (EIP), and we have started making heavy use of the various Camel components to pass data in varying formats between internal and third-party systems.  While a large number of components are available out of the box, there are none for writing data to UniVerse.

Fortunately it is extremely easy to implement custom Camel components, and we were able to create a component to write to UniVerse with a few classes and one configuration file.

For the Camel endpoint URI, we would like to use the following format:

siesta://com.lynden.siesta.component.FreightBean?uvSessionName=XDOCK_SHARED

where:

siesta:// denotes the component scheme,

com.lynden.siesta.component.FreightBean denotes the annotated POJO that the Siesta framework will use to persist the data to UniVerse.

uvSessionName=XDOCK_SHARED tells the component which database session pool to use when connecting to the DB.


The Endpoint Class

package com.lynden.siesta.component;

import com.lynden.siesta.BaseBean;
import org.apache.camel.Consumer;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultEndpoint;
import org.apache.camel.spi.UriEndpoint;
import org.apache.camel.spi.UriParam;

/**
 * Represents a Siesta endpoint.
 */
@UriEndpoint(scheme = "siesta" )
public class SiestaEndpoint extends DefaultEndpoint {

    @UriParam
    protected String uvSessionName = "";

    Class<? extends BaseBean> siestaBean;

    public SiestaEndpoint() {
    }

    public SiestaEndpoint(String uri, SiestaComponent component) {
        super(uri, component);
    }

    public SiestaEndpoint(String endpointUri) {
        super(endpointUri);
    }

    @Override
    public Producer createProducer() throws Exception {
        return new SiestaProducer(this);
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        // This is a producer-only component, so no consumer is provided
        return null;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }

    public void setSiestaBeanClass( Class<? extends BaseBean> siestaBean) {
        this.siestaBean = siestaBean;
    }

    public Class<? extends BaseBean> getSiestaBeanClass() {
        return siestaBean;
    }

    public String getUvSessionName() {
        return uvSessionName;
    }

    public void setUvSessionName(String uvSessionName) {
        this.uvSessionName = uvSessionName;
    }
}

The Component Class
The next step is to create a class to represent the component itself. The easiest way to do this is to extend the org.apache.camel.impl.DefaultComponent class and override the createEndpoint() method.

import com.lynden.siesta.BaseBean;
import java.util.Map;
import org.apache.camel.Endpoint;
import org.apache.camel.impl.DefaultComponent;

public class SiestaComponent extends DefaultComponent {

    @Override
    protected Endpoint createEndpoint(String uri, String path, Map<String, Object> options) throws Exception {
        SiestaEndpoint endpoint = new SiestaEndpoint(uri, this);
        setProperties(endpoint, options);

        Class<? extends BaseBean> type = getCamelContext().getClassResolver().resolveClass(path, BaseBean.class, SiestaComponent.class.getClassLoader());

        if (type != null) {
            endpoint.setSiestaBeanClass(type);
        }
        return endpoint;
    }
}

The createEndpoint method takes as arguments the URI of the component, the path (the “com.lynden.siesta.component.FreightBean” portion of the URI), and the options (everything after the “?” in the URI).

From this method we use reflection to load the BaseBean class specified in the URI, and pass it into the SiestaEndpoint class that was created in the previous step.


The Producer Class

import com.lynden.siesta.BaseBean;
import com.lynden.siesta.EntityManager;
import com.lynden.siesta.IEntityManager;
import org.apache.camel.Exchange;
import org.apache.camel.impl.DefaultProducer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The Siesta producer.
 */
public class SiestaProducer extends DefaultProducer {
    private static final Logger LOG = LoggerFactory.getLogger(SiestaProducer.class);
    private SiestaEndpoint endpoint;
    private IEntityManager entityManager;
    private String uvSessionName;

    public SiestaProducer(SiestaEndpoint endpoint) {
        super(endpoint);
        this.endpoint = endpoint;
        uvSessionName = endpoint.getUvSessionName();
        entityManager =  EntityManager.getInstance(uvSessionName);

    }

    @Override
    public void process(Exchange exchange) throws Exception {
        BaseBean siestaBean = exchange.getIn().getBody( BaseBean.class );
        entityManager.createOrUpdate(siestaBean);
        LOG.debug( "Saving bean " + siestaBean.getClass() + " with ID: "+ siestaBean.getId() );
    }

}

The Config File

The final step is to create a configuration file in the .jar’s META-INF directory, which allows the Camel context to find and load the custom component.  The convention is to put a file named after the component (“siesta” in our case) in the META-INF/services/org/apache/camel/component/ directory of the component’s .jar file.

The META-INF/services/org/apache/camel/component/siesta file contains one line telling the Camel context which class to load:

class=com.lynden.siesta.component.SiestaComponent
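With the component on the classpath, routes can then write to UniVerse like any other endpoint.  A minimal sketch (the source queue name is made up; the message body is expected to be a BaseBean subclass such as FreightBean):

from("jms:queue:freight.updates")
    .to("siesta://com.lynden.siesta.component.FreightBean?uvSessionName=XDOCK_SHARED");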

That’s it.  With three relatively simple classes and a small config file, we were able to implement our own Camel producer using our NoSQL database as an endpoint.

twitter: @RobTerp


Setting up a Nightly Build Process with Jenkins, SVN and Nexus

We wanted to set up a nightly integration build for our projects so that we could run unit and integration tests against the latest versions of our applications and their underlying libraries.  We have a number of libraries that are shared across multiple projects, and we wanted this build to run every night using the latest versions of those libraries, even if an application specified a fixed release version in its Maven pom file.  This way we would be alerted early if someone committed a change to a dependency library that could break an application when the developer upgraded that library in a future version of the application.

The chart below illustrates our dependencies between our libraries and our applications.

[Diagram: dependency structure between the shared libraries and applications]

Updating Versions Nightly

Both the Crossdock-shared and Messaging-shared libraries depend on the SiestaFramework library.  The Crossdock Web Service and CrossdockMessaging applications depend on both the Crossdock-shared and Messaging-shared libraries.  Because of this dependency structure, we wanted the SiestaFramework library built first.  The Crossdock-shared and Messaging-shared libraries could be built in parallel, but we didn’t want the builds for the Crossdock Web Service and CrossdockMessaging applications to begin until all the libraries had finished building.  We also wanted the nightly build to tag Subversion with the build date, as well as upload the artifact to our Nexus “Nightly Build” repository.  The resulting artifact would look something like SiestaFramework-NIGHTLY-20120720.jar.

Also, as I mentioned, even though the CrossdockMessaging app may specify in its pom file that it depends on version 5.0.4 of the SiestaFramework library, for the purposes of the nightly build we wanted it to use the freshly built SiestaFramework-NIGHTLY-20120720.jar version of the library.

The first problem to tackle was getting the current date into the project’s version number.  For this I started with the Jenkins Zentimestamp plugin, which lets the format of Jenkins’ BUILD_ID timestamp be changed.  I used it to specify a yyyyMMdd format for the timestamp.

[Screenshot: Zentimestamp plugin configured with the yyyyMMdd timestamp format]

The next step was to get the timestamp into the version number of the project.  I was able to accomplish this with the Maven Versions plugin, which among other things lets you override the version number in the pom file dynamically at build time.  The snippet from the SiestaFramework pom file is below.

<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>versions-maven-plugin</artifactId>
   <version>1.3.1</version>
</plugin>

At this point the Jenkins job can be configured to invoke the “versions:set” goal, passing in the new version string to use.  The ${BUILD_ID} Jenkins variable will contain the newly formatted date string.
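The invocation amounts to something like the following sketch (the exact goal string lives in the Jenkins job configuration shown below):

mvn versions:set -DnewVersion=NIGHTLY-${BUILD_ID}
mvn deploy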

[Screenshot: Jenkins build step invoking versions:set with the BUILD_ID-based version]

This will produce an artifact with the name SiestaFramework-NIGHTLY-20120720.jar


Uploading Artifacts to a Nightly Repository

Since this job needed to upload the artifact to a repository different from the Release repository defined in our project pom files, the “altDeploymentRepository” property was used to pass in the location of the nightly repository.

[Screenshot: deployment configuration using altDeploymentRepository]

The deployment portion of the SiestaFramework job specifies the location of the nightly repository, where ${LYNDEN_NIGHTLY_REPO} is a Jenkins variable containing the nightly repo URL.
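Concretely, the deploy step passes something along these lines (a sketch; the repository id “nightly” and the “default” layout are illustrative):

mvn deploy -DaltDeploymentRepository=nightly::default::${LYNDEN_NIGHTLY_REPO}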


Tagging Subversion

Finally, the Jenkins Subversion Tagging Plugin was used to tag SVN if the project was successfully built.  The plugin provides a Post-build Action for the job with the configuration section shown below.

[Screenshot: Subversion Tagging Plugin post-build action]


Dynamically Updating Dependencies

So now that the main project is set up, the dependent projects are configured in a similar way, but need to use the SiestaFramework-NIGHTLY-20120720 version of the dependency rather than whatever version they currently specify in their pom files.  This can be accomplished by changing the pom to use a property for the version number of the dependency.  For example, suppose the snippet below was the original pom file.

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>5.0.1</version>
   </dependency>
</dependencies>

It could be changed to the following to allow the SiestaFramework version to be set dynamically.

<properties>
   <siesta.version>5.0.1</siesta.version>
</properties>

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>${siesta.version}</version>
   </dependency>
</dependencies>

This version can then be overridden by the Jenkins job.  The example below shows the Jenkins configuration for the Crossdock-shared build.
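The override itself is just a system property on the Maven invocation, roughly (a sketch):

mvn deploy -Dsiesta.version=NIGHTLY-${BUILD_ID}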

[Screenshot: Jenkins configuration for the Crossdock-shared build]


Enforcing Build Order

The final step in this process is setting up a structure to enforce the build order of the projects.  The dependencies are set up in such a way that SiestaFramework needs to be built first, and the Crossdock-shared and Messaging-shared libraries can be run concurrently once SiestaFramework finishes. The Crossdock Web Service and CrossdockMessaging application jobs can be run concurrently, but not until after both shared libraries have finished.

Setting up the Crossdock-shared and Messaging-shared jobs to be built after SiestaFramework completes is pretty straightforward.  In the Jenkins job configuration for both the shared libraries, the following build trigger is added.

[Screenshot: build trigger starting the shared-library jobs after SiestaFramework completes]

For the requirement that the apps not be built until all libraries have been built, I enlisted the help of the Join Plugin, which can execute a job once all “downstream” jobs have completed.  What does this mean exactly?  Looking at the diagram below, the Crossdock-shared and Messaging-shared jobs are “downstream” from the SiestaFramework job.  Once both of those jobs complete, a Join trigger can be used to start other jobs.

[Diagram: the Crossdock-shared and Messaging-shared jobs downstream of SiestaFramework, with the Join trigger]

In this case, rather than having the Join trigger kick off the app jobs directly, I created a dummy Join job.  This way, as we add more application builds, we don’t need to keep modifying the SiestaFramework job for each new application job.

To illustrate the configuration, SiestaFramework has a new Post-build Action (below).

[Screenshot: SiestaFramework post-build Join action]

Join-Build is a Jenkins job I configured that does nothing when executed.  Our Crossdock Web Service and CrossdockMessaging applications then define their builds to trigger as soon as Join-Build has completed.

[Screenshot: application builds triggered by the Join-Build job]

In this way we are able to run builds each night which update to the latest versions of our dependencies, tag SVN, and archive the binaries to Nexus.  I’d love to hear feedback from anyone who is handling nightly builds via Jenkins and how they have handled the configuration and build issues.

twitter: @RobTerp

Creating a Deployment Pipeline with Jenkins, Nexus, Ant and Glassfish

In a previous post I discussed how we created a build pipeline using Jenkins to create application binaries and move them into our Nexus repository (blog post here).  In this post I will show how we are using Jenkins to pull a versioned binary out of Nexus and deploy it to one of our remote test, staging, or production Glassfish servers.  By remote I mean that the Glassfish instance does not live on the same box as the Jenkins CI instance, but both machines are on the same network.

In a previous post I also discussed how to set up Glassfish v3 to allow deployments pushed from remote servers (Blog post here), so if you haven’t explicitly configured your Glassfish to allow this feature, you will need to do so before you get started.

On our Jenkins CI box we have an Ant script, which will be executed by a Jenkins job manually kicked off by a user. The script defines tasks to perform the following operations:

  • Ensure all needed parameters were entered by the user. (app name, version number, admin username/password, etc).
  • Copy the specified version of the application from Nexus to a local temp directory for deployment to a remote Glassfish instance.
  • Undeploy a previous version of the app from the target Glassfish instance.  (Optional).
  • Deploy the app from the temp directory to the target Glassfish instance.
  • Record the deployment info in a deployment tracking database table.  Historical deployment info can then be viewed from a web app.

Ant Script

Below are some of the more interesting code snippets from our Ant script that will be doing the heavy lifting for our deployment pipeline.  The first code snippet below defines the Ant tasks needed for deploying and undeploying applications from Glassfish.  These Ant tasks are bundled with Glassfish, but not installed by default.  If you haven’t installed them, you will need to do so from your Glassfish update center admin page.

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

<taskdef name="sun-appserv-undeploy" classname="org.glassfish.ant.tasks.UndeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

Once we have the tasks defined, we create a new target for pulling the binary from Nexus and copying it to a temporary location from where it will be deployed to Glassfish.

<target name="copy.from.nexus">
   <echo message="copying from nexus"/>    
   <get src="http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release/com/lynden/${app.name}/${version.number}/${app.name}-${version.number}.${package.type}" dest="/tmp/${app.name}-${version.number}.war"/>
</target>

Next is a target to undeploy a previous version of the application from Glassfish.  This step is optional and only executed if the user specifies a version to undeploy from Jenkins.

<target name="undeploy.from.glassfish" if="env.Undeploy_Version">
   <echo message="Undeploying app: ${app.name}-${undeploy.version}"/>
   <echo file="/tmp/gf.txt" message="AS_ADMIN_PASSWORD=${env.Admin_Password}"/>
   <sun-appserv-undeploy name="${app.name}-${undeploy.version}" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311"/>
   <delete file="/tmp/gf.txt"/>
</target>

Next, we then define a target to do the deployment of the application to Glassfish.

<target name="deploy.to.glassfish.with.context" if="context.is.set">
    <sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war" name="${app.name}-${version.number}" force="true" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311" contextroot="${App_Context}"/>
</target>

And then finally, we define a target which invokes a servlet, passing it information such as the app name, version, and who deployed the app, so that the deployment can be recorded in our deployment database.

<target name="tag.uv.deploy.file">
   <tstamp>
      <format property="time" pattern="yyyyMMdd-HHmmss"/>
   </tstamp> 

   <!--Ampersand character for the URL -->
   <property name="A" value="&amp;"/>
   <get src="http://shuyak.lynden.com:8080/DeploymentRecorder/DeploymentRecorderServlet?app=${app.name}${A}date=${time}${A}environment=${deploy.env}${A}serverName=${server.name}${A}serverPort=${server.port}${A}adminPort=${admin.port}${A}serverType=${server.type}${A}version=${version.number}${A}who=${deploy.user}${A}contextName=${App_Context}" dest="/dev/null"/>

   <echo message="tagging deployment info to UV"/>
</target>

Jenkins Configuration

Now that we have an Ant script to perform the actions that we need to do a deployment, we set up a Jenkins job to deploy to servers in each one of our environments (test/staging/prod).
[Screenshot: Jenkins deployment jobs for the test, staging, and production environments]


In order to kick off a deployment to one of our servers, the appropriate environment is selected from the screenshot above, and the “Build Now” link is clicked which presents the user with the screen below. In this case we are deploying to a test Glassfish domain named “bjorn” on unga.lynden.com
[Screenshot: deployment job parameter entry screen]

The user can select from the drop-down lists the server they wish to deploy to and the application they wish to deploy.  The version number is a required entry in a text field; if the script can’t find the specified version in Nexus, the build will fail.  There are also optional parameters for specifying an existing version to undeploy, as well as an application context in the event the app name shouldn’t be used as the default context.


In the screenshot below we are deploying version 5.0.0 of the Crossdock App to the “bjorn” domain running on unga.lynden.com

[Screenshot: deploying Crossdock 5.0.0 to the “bjorn” domain]


Once the job completes, if we log into the bjorn Glassfish admin page on unga.lynden.com, we see that Crossdock-5.0.0 has been deployed to the server.

[Screenshot: Glassfish admin page showing Crossdock-5.0.0 deployed]


The screenshot below is an example of undeploying version 5.0.0 of Crossdock, and deploying version 5.0.1 of Crossdock.  Also, in this example, we are telling the script that we want the web context in Glassfish to be /Crossdock, rather than the default /Crossdock-5.0.1

[Screenshot: undeploying Crossdock 5.0.0 and deploying 5.0.1 with the /Crossdock context]


The screenshot of the Glassfish admin page below shows that Crossdock-5.0.0 has been uninstalled, and that Crossdock-5.0.1 is now installed with a Context Root of /Crossdock.

[Screenshot: Glassfish admin page showing Crossdock-5.0.1 with Context Root /Crossdock]


Deployment History

Finally, as I mentioned previously, the Ant script also saves the deployment information to a historical deployment table in our database, and we have written a simple web application to display this data.  The screenshot below shows all of the applications that have been deployed to our test environment (we have similar pages for staging and production as well).  Included in this information are the application name, version number, date, and who deployed it, among other miscellaneous info.

[Screenshot: deployment history for the test environment]

We can then drill into the history of a specific application by clicking the “Crossdock” link on the screen above and get a detailed history of the deployments for that application, including version numbers and dates.  We maintain more than 60 different web applications serving various purposes here at Lynden, so this has been a great tool for us to see exactly which versions of our applications are currently deployed where, as well as the history of a specific application’s deployments in the event we need to roll back to a previous version.

[Screenshot: deployment history detail for the Crossdock application]

As we have learned firsthand, Jenkins is a very useful and versatile tool that is easy to extend for purposes beyond automated builds and continuous integration.

twitter: @RobTerp

Migrating an Automated Deployment Script from Glassfish v2 to Glassfish v3

Just recently we attempted to update an Ant script, which we use to do automated deployments to Glassfish v2 servers, to deploy to our new Glassfish v3 servers instead.  This Ant script is invoked from our Jenkins automated deploy pipeline (another blog post about this later); it copies a .war file from our Nexus repository and installs it on a Glassfish instance running on a remote server.  I ran into a number of issues that prevented the Ant script from being used as-is.

The first thing this assumes is that there is a Glassfish v3 installation on the build machine from which the script is executed; the build script needs access to the “asadmin” tool in the glassfish/bin directory in order to deploy a .war file to a remote Glassfish server.

The first issue was that the Ant deploy/undeploy tasks for Glassfish are in a different location in v3.  Actually, not only is the .jar file with the Ant tasks in a different location in the newer version of Glassfish, it’s not installed by default!  In order to get the Ant tasks you’ll need to go to the Update Tool on the admin web page of your Glassfish v3 instance and install the “glassfish-ant-tasks” component.  Once you’ve done that, you can modify your Ant script to use the new Ant tasks (which are also located in a different Java package).  The code snippets from an Ant script below compare the Glassfish v2 and v3 Ant task usage.

<!-- Glassfish v2 ant tasks -->

<taskdef name="sun-appserv-deploy" classname="org.apache.tools.ant.taskdefs.optional.sun.appserv.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish/lib/sun-appserv-ant.jar"/>
   </classpath>
</taskdef>

<!-- Glassfish v3 ant tasks -->

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfishv3/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

The next change that needs to be made to the build script is the call to the sun-appserv-deploy task itself.  The “installDir” attribute has been renamed to “asinstalldir” in Glassfish v3.  A comparison of the v2 and v3 snippets is below.

<sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war"
   name="${app.name}-${version.number}" force="true"
   host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
   passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish"/>

<sun-appserv-deploy file="/tmp/${app.name}.war"
   name="${app.name}" force="true"
   host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
   passwordfile="/tmp/gf.txt" asinstalldir="/nas/apps/cisunwk/glassfish311"/>

The final task in getting this to work is to enable remote commands on each of the target Glassfish instances to which the apps will be deployed.

./asadmin enable-remote-admin --port <glassfish-admin-port>

where <glassfish-admin-port> is the admin port for that domain (4848 by default).

The last “gotcha” to keep in mind is that the Glassfish installation defined in the <sun-appserv-deploy> task via the “asinstalldir” attribute MUST be the same version as the remote target Glassfish instance to which the web app will be deployed.  At least this was the case when we specified a Glassfish v3.0 installation while deploying to a remote Glassfish v3.1 instance; when we updated asinstalldir to point to a v3.1 installation, the deployment went fine.  (v3.1 to v3.1.2 may be OK.)

Hopefully this will help others that have encountered similar issues while attempting to do deployments to remote Glassfish instances either from Jenkins, Ant or other automated processes.

Twitter: @RobTerp