Setting up a Nightly Build Process with Jenkins, SVN and Nexus

We wanted to set up a nightly integration build for our projects so that we could run unit and integration tests on the latest versions of our applications and their underlying libraries.  We have a number of libraries that are shared across multiple projects, and we wanted this build to run every night using the latest versions of those libraries, even if our applications had a specific release version defined in their Maven pom files.  This way we would be alerted early if someone added a change to one of the dependency libraries that could potentially break an application when a developer upgraded that library in a future version of the application.

The chart below illustrates the dependencies between our libraries and our applications.

[Image: dependency diagram of the libraries and applications]

Updating Versions Nightly

Both the Crossdock-Shared and Messaging-Shared libraries depend on the SiestaFramework library.  The Crossdock Web Service and CrossdockMessaging applications both depend on the Crossdock-Shared and Messaging-Shared libraries.  Because of this dependency structure we wanted the SiestaFramework library built first.  The Crossdock-Shared and Messaging-Shared libraries could be built in parallel, but we didn’t want the builds for the Crossdock Web Service and CrossdockMessaging applications to begin until all the libraries had finished building.  We also wanted the nightly build to tag Subversion with the build date and upload the artifact to our Nexus “Nightly Build” repository.  The resulting artifact would look something like SiestaFramework-NIGHTLY-20120720.jar.

Also, as I mentioned, even though the CrossdockMessaging app may specify in its pom file that it depends on version 5.0.4 of the SiestaFramework library, for the purposes of the nightly build we wanted it to use the freshly built SiestaFramework-NIGHTLY-20120720.jar version of the library.

The first problem to tackle was getting the current date into the project’s version number.  For this I started with the Jenkins Zentimestamp plugin, which allows the format of Jenkins’ BUILD_ID timestamp to be changed.  I used it to specify a format of yyyyMMdd for the timestamp.

[Image: Zentimestamp plugin configuration setting the BUILD_ID timestamp format to yyyyMMdd]
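For example, with that pattern in place a job that runs on July 20, 2012 sees a BUILD_ID value like the one below (the specific date is just an illustration):

BUILD_ID=20120720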

The next step was to get the timestamp into the version number of the project.  I was able to accomplish this using the Maven Versions plugin, which among other things allows you to override the version number in the pom file dynamically at build time.  The code snippet from the SiestaFramework pom file is below.

<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>versions-maven-plugin</artifactId>
   <version>1.3.1</version>
</plugin>

At this point the Jenkins job can be configured to invoke the “versions:set” goal, passing in the new version string to use.  The ${BUILD_ID} Jenkins variable will contain the newly formatted date string.

[Image: Jenkins job configuration invoking the versions:set goal with the ${BUILD_ID} variable]
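Expressed as a plain Maven command line, the goals the job invokes would look roughly like the sketch below; the exact goal list of our job isn’t shown here, and the generateBackupPoms flag is simply an optional convenience.

mvn versions:set -DnewVersion=NIGHTLY-${BUILD_ID} -DgenerateBackupPoms=false
mvn clean deploy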

This will produce an artifact with the name SiestaFramework-NIGHTLY-20120720.jar.


Uploading Artifacts to a Nightly Repository

Since this job needed to upload the artifact to a repository other than the Release repository defined in our project pom files, the “altDeploymentRepository” property was used to pass in the location of the nightly repository.

[Image: deployment configuration of the SiestaFramework job specifying the nightly repository via altDeploymentRepository]

The deployment portion of the SiestaFramework job specifies the location of the nightly repository, where ${LYNDEN_NIGHTLY_REPO} is a Jenkins variable containing the nightly repo URL.
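On the Maven command line this looks something like the sketch below, where the maven-deploy-plugin expects the value in id::layout::url form; the repository id “lynden-nightly” is just a placeholder chosen for illustration.

mvn clean deploy -DaltDeploymentRepository=lynden-nightly::default::${LYNDEN_NIGHTLY_REPO}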


Tagging Subversion

Finally, the Jenkins Subversion Tagging Plugin was used to tag SVN whenever the project built successfully.  The plugin provides a Post-build Action for the job with the configuration section shown below.

[Image: Subversion Tagging Plugin post-build action configuration]
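Conceptually, the plugin performs the equivalent of an svn copy from the URL that was just built to a tag URL, along the lines of the sketch below; the repository URLs shown are placeholders rather than our actual layout.

svn copy http://svnhost/repos/SiestaFramework/trunk \
         http://svnhost/repos/SiestaFramework/tags/NIGHTLY-20120720 \
         -m "Tagged by Jenkins nightly build"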


Dynamically Updating Dependencies

Now that the main project is set up, the dependent projects are configured in a similar way, but they need to use the SiestaFramework-NIGHTLY-20120720 version of the dependency rather than whatever version they currently have specified in their pom file.  This can be accomplished by changing the pom to use a property for the version number of the dependency.  For example, suppose the snippet below is from the original pom file.

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>5.0.1</version>
   </dependency>
</dependencies>

It could be changed to the following to allow the SiestaFramework version to be set dynamically.

<properties>
   <siesta.version>5.0.1</siesta.version>
</properties>

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>${siesta.version}</version>
   </dependency>
</dependencies>

This version can then be overridden by the Jenkins job. The example below shows the Jenkins configuration for the Crossdock-Shared build.

[Image: Jenkins configuration for the Crossdock-Shared build overriding the siesta.version property]
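Since a property passed on the command line takes precedence over the value defined in the pom, the job’s Maven invocation amounts to something like this sketch:

mvn clean deploy -Dsiesta.version=NIGHTLY-${BUILD_ID}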


Enforcing Build Order

The final step in this process is setting up a structure to enforce the build order of the projects.  The dependencies are set up in such a way that SiestaFramework needs to be built first, and the Crossdock-Shared and Messaging-Shared libraries can be built concurrently once SiestaFramework finishes. The Crossdock Web Service and CrossdockMessaging application jobs can also run concurrently, but not until after both shared libraries have finished.

Setting up the Crossdock-Shared and Messaging-Shared jobs to build after SiestaFramework completes is pretty straightforward.  In the Jenkins job configuration for both shared libraries, the following build trigger is added.

[Image: build trigger configuration to build after the SiestaFramework project is built]

For the requirement of not having the apps build until all libraries have built, I enlisted the help of the Join Plugin.  The Join Plugin can be used to execute a job once all “downstream” jobs have completed.  What does this mean exactly?  Looking at the diagram below, the Crossdock-Shared and the Messaging-Shared jobs are “downstream” from the SiestaFramework job.  Once both of these jobs complete, a Join trigger can be used to start other jobs.

[Image: diagram showing Crossdock-Shared and Messaging-Shared as downstream jobs of SiestaFramework]

In this case, rather than having the Join trigger kick off the app jobs directly, I created a dummy Join job.  This way, as we add more application builds, we don’t need to keep modifying the SiestaFramework job for each new application job.

To illustrate the configuration, SiestaFramework has a new Post-build Action (below).

[Image: SiestaFramework Join post-build action that triggers the Join-Build job]

Join-Build is a Jenkins job I configured that does not do anything when executed.  Then our Crossdock Web Service and CrossdockMessaging applications define their builds to trigger as soon as Join-Build has completed.

[Image: application job build trigger set to build after Join-Build completes]

In this way we are able to run builds each night that update to the latest versions of our dependencies, tag SVN, and archive the binaries to Nexus.  I’d love to hear feedback from anyone who is handling nightly builds with Jenkins, and how they have handled the configuration and build issues.

twitter: @RobTerp

Creating a Deployment Pipeline with Jenkins, Nexus, Ant and Glassfish

In a previous post I discussed how we created a build pipeline using Jenkins to create application binaries and move them into our Nexus repository. (Blog post here)  In this post I will show how we are using Jenkins to pull a versioned binary out of Nexus and deploy to one of our remote test, staging or production Glassfish servers.  By remote I mean that the Glassfish instance does not live on the same box as the Jenkins CI instance, but both machines are on the same network.

In a previous post I also discussed how to set up Glassfish v3 to allow deployments pushed from remote servers (Blog post here), so if you haven’t explicitly configured your Glassfish to allow this feature, you will need to do so before you get started.

On our Jenkins CI box we have an Ant script, which will be executed by a Jenkins job manually kicked off by a user. The script defines tasks to perform the following operations:

  • Ensure all needed parameters were entered by the user (app name, version number, admin username/password, etc.).
  • Copy the specified version of the application from Nexus to a local temp directory for deployment to a remote Glassfish instance.
  • Undeploy a previous version of the app from the target Glassfish instance.  (Optional).
  • Deploy the app from the temp directory to the target Glassfish instance.
  • Record the deployment info in a deployment tracking database table.  Historical deployment info can then be viewed from a web app.

Ant Script

Below are some of the more interesting code snippets from our Ant script that will be doing the heavy lifting for our deployment pipeline.  The first code snippet below defines the Ant tasks needed for deploying and undeploying applications from Glassfish.  These Ant tasks are bundled with Glassfish, but not installed by default.  If you haven’t installed them, you will need to do so from your Glassfish update center admin page.

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

<taskdef name="sun-appserv-undeploy" classname="org.glassfish.ant.tasks.UndeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

Once we have the tasks defined, we create a new target for pulling the binary from Nexus and copying it to a temporary location from where it will be deployed to Glassfish.

<target name="copy.from.nexus">
   <echo message="copying from nexus"/>    
   <get src="http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release/com/lynden/${app.name}/${version.number}/${app.name}-${version.number}.${package.type}" dest="/tmp/${app.name}-${version.number}.war"/>
</target>

Next is a target to undeploy a previous version of the application from Glassfish.  This step is optional and only executed if the user specifies a version to undeploy from Jenkins.

<target name="undeploy.from.glassfish" if="env.Undeploy_Version">
   <echo message="Undeploying app: ${app.name}-${undeploy.version}"/>
   <echo file="/tmp/gf.txt" message="AS_ADMIN_PASSWORD=${env.Admin_Password}"/>
   <sun-appserv-undeploy name="${app.name}-${undeploy.version}" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311"/>
   <delete file="/tmp/gf.txt"/>
</target>

Next, we define a target to deploy the application to Glassfish.

<target name="deploy.to.glassfish.with.context" if="context.is.set">
    <sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war" name="${app.name}-${version.number}" force="true" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311" contextroot="${App_Context}"/>
</target>

And finally, we define a target that invokes a servlet on another server, passing it information such as the app name, version, and who deployed the app, so that the deployment can be recorded in our deployment database.

<target name="tag.uv.deploy.file">
   <tstamp>
      <format property="time" pattern="yyyyMMdd-HHmmss"/>
   </tstamp> 

   <!--Ampersand character for the URL -->
   <property name="A" value="&amp;"/>
   <get src="http://shuyak.lynden.com:8080/DeploymentRecorder/DeploymentRecorderServlet?app=${app.name}${A}date=${time}${A}environment=${deploy.env}${A}serverName=${server.name}${A}serverPort=${server.port}${A}adminPort=${admin.port}${A}serverType=${server.type}${A}version=${version.number}${A}who=${deploy.user}${A}contextName=${App_Context}" dest="/dev/null"/>

   <echo message="tagging deployment info to UV"/>
</target>
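The script’s real top-level target isn’t shown here, but conceptually these targets are chained together with something along the lines of the sketch below; the target name and exact ordering are assumptions on my part.

<target name="deploy"
        depends="copy.from.nexus, undeploy.from.glassfish, deploy.to.glassfish.with.context, tag.uv.deploy.file"/>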

Jenkins Configuration

Now that we have an Ant script to perform the deployment actions, we set up a Jenkins job to deploy to servers in each of our environments (test/staging/prod).
[Image: Jenkins deployment jobs for the test, staging, and production environments]


To kick off a deployment to one of our servers, the appropriate environment is selected from the screenshot above, and the “Build Now” link is clicked, which presents the user with the screen below. In this case we are deploying to a test Glassfish domain named “bjorn” on unga.lynden.com.
[Image: parameterized build screen for the deployment job]

The user can select from drop-down lists the server to deploy to and the application to deploy.  The version number is a required text field; if the script can’t find the specified version in Nexus, the build will fail.  There are also optional parameters for specifying an existing version to undeploy, as well as an application context in the event the app name shouldn’t be used as the default context.
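Behind the scenes, the Jenkins build parameters are handed to the Ant script as properties and environment variables.  A rough sketch of an equivalent command-line invocation is below; the build file name, the top-level target, and the exact parameter wiring are assumptions rather than our actual job definition.

ant -f deploy.xml \
    -Dapp.name=Crossdock \
    -Dversion.number=5.0.0 \
    -Dserver.name=unga.lynden.com \
    -Dadmin.port=4848 \
    deploy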


In the screenshot below we are deploying version 5.0.0 of the Crossdock App to the “bjorn” domain running on unga.lynden.com.

[Image: deployment job parameters for deploying Crossdock 5.0.0 to the bjorn domain on unga.lynden.com]


Once the job completes, if we log into the bjorn Glassfish admin page on unga.lynden.com, we see that Crossdock-5.0.0 has been deployed to the server.

[Image: Glassfish admin page showing Crossdock-5.0.0 deployed]


The screenshot below is an example of undeploying version 5.0.0 of Crossdock and deploying version 5.0.1.  Also, in this example, we are telling the script that we want the web context in Glassfish to be /Crossdock, rather than the default /Crossdock-5.0.1.

[Image: deployment job parameters undeploying Crossdock 5.0.0 and deploying 5.0.1 with the /Crossdock context]


The screenshot of the Glassfish admin page below shows that Crossdock-5.0.0 has been uninstalled, and that Crossdock-5.0.1 is now installed with a Context Root of /Crossdock.

[Image: Glassfish admin page showing Crossdock-5.0.1 with Context Root /Crossdock]


Deployment History

Finally, as I mentioned previously, the Ant script also saves the deployment information to a historical deployment table in our database.  We have written a simple web application that displays this historical data.  The screenshot below shows all of the applications that have been deployed to our test environment (we have similar pages for Staging and Production as well).  The information includes the application name, version number, date, and who deployed it, among other miscellaneous details.

[Image: deployment history web application listing applications deployed to the test environment]

We can then drill into the history of a specific application by clicking the “Crossdock” link on the screen above and get a detailed history of the deployments for that application, including version numbers and dates.  We maintain more than 60 different web applications serving various purposes here at Lynden, so this has been a great tool for us to see exactly what versions of our applications are currently deployed where, as well as to see the deployment history of a specific application in the event we need to roll back to a previous version.

[Image: detailed deployment history for the Crossdock application]

As we have learned firsthand, Jenkins is a very useful and versatile tool that is easy to extend for purposes beyond automated builds and continuous integration.

twitter: @RobTerp

Migrating an Automated Deployment Script from Glassfish v2 to Glassfish v3

Just recently we attempted to update an Ant script that we use to do automated deployments to Glassfish v2 servers so that it could deploy to our new Glassfish v3 servers instead.  This Ant script is invoked from our Jenkins automated deploy pipeline (another blog post about this later) and copies a .war file from our Nexus repository and installs it to a Glassfish instance running on a remote server.  I ran into a number of issues that prevented the Ant script from being used “as-is”.

The first thing this assumes is that there is a Glassfish v3 installation on the build machine from which the script is executed.  The build script needs access to the “asadmin” tool in the glassfish/bin directory in order to deploy a .war file to a remote Glassfish server.

The first issue was that the Ant deploy/undeploy tasks for Glassfish are in a different location in v3.  Actually, not only is the .jar file with the Ant tasks in a different location in the newer version of Glassfish, it’s not installed by default!  In order to get the Ant tasks you’ll need to go to the Update Tool on the admin web page of your Glassfish v3 instance and install the “glassfish-ant-tasks” component.  Once you’ve done that, you can modify your Ant script to use the new Ant tasks (which are also located in a different Java package).  The code snippets below compare the Glassfish v2 and v3 Ant task usage.

<!-- Glassfish v2 Ant tasks -->
<taskdef name="sun-appserv-deploy" classname="org.apache.tools.ant.taskdefs.optional.sun.appserv.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish/lib/sun-appserv-ant.jar"/>
   </classpath>
</taskdef>

<!-- Glassfish v3 Ant tasks -->
<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfishv3/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

The next change that needs to be made to the build script is in the call to the sun-appserv-deploy task itself.  The “installDir” property has been changed to “asinstalldir” for Glassfish v3.  A comparison of the v2 vs. v3 code snippets is below.

<!-- Glassfish v2 -->
<sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war"
   name="${app.name}-${version.number}" force="true"
   host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
   passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish"/>

<!-- Glassfish v3 -->
<sun-appserv-deploy file="/tmp/${app.name}.war"
   name="${app.name}" force="true"
   host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
   passwordfile="/tmp/gf.txt" asinstalldir="/nas/apps/cisunwk/glassfish311"/>

The final task in getting this to work is to enable remote commands on each of the target Glassfish instances to which the apps will be deployed.

./asadmin enable-remote-admin --port <glassfish-admin-port>

where <glassfish-admin-port> is the admin port for that domain (4848 by default).

The last “gotcha” to keep in mind here is that the Glassfish installation referenced in the <sun-appserv-deploy> task via the “asinstalldir” property MUST be the same version as the remote target Glassfish instance to which the web app will be deployed.  At least this was the case when we attempted to point the property at a Glassfish v3.0 installation while deploying to a remote Glassfish v3.1 instance.  When we updated it to point to a v3.1 installation, the deployment went fine. (v3.1 to v3.1.2 may be OK.)

Hopefully this will help others who have encountered similar issues while attempting to do deployments to remote Glassfish instances from Jenkins, Ant, or other automated processes.

Twitter: @RobTerp