Streamlining a Deployment Pipeline with a Custom Jenkins Plug-In

In a previous post (located here) I discussed setting up a deployment pipeline with Jenkins which would retrieve a specified version of one of our web applications from our Nexus repository and deploy it to a specified Glassfish server.  One of the challenges with the Jenkins job was that most of the fields on its deployment page were text fields, allowing users to enter free-form text, which was prone to errors.  The original deployment page appears below; it allows the user to select the server and application via dropdown boxes, but leaves all other properties open to free-form text.

image1.png

The other drawback to the screen above was that each application needed to be listed in a text file which Jenkins would read in order to populate the Application dropdown box.  This meant there was a manual step involved in deploying an application that had not been deployed before.

With this in mind I decided to modify the Jenkins Dynamic Parameter Plugin to allow us to auto-populate dropdown boxes with valid values based on our environment here at Lynden. I started with this plug-in since it already had functionality to change the contents of dropdown boxes based upon user selections, i.e. the selection in one dropdown could be used to determine the contents of another dropdown box.  The requirements for this modified plug-in were that it should:

  • Get a list of applications that are in our Nexus release repository
  • Get a list of versions of the selected application from Nexus.
  • Get a list of the application versions that are currently deployed to the Glassfish instance the user has specified for deployment.
  • Enable the user to select a web context to use when deploying the app, or select “Other” if they want to enter a context that does not appear in the list.

The Server dropdown box for this new Deployment plug-in allows the user to select which instance of Glassfish they would like to deploy to.  Each line in the dropdown contains the server name, Glassfish instance name, HTTP port, and admin port of the instance.

JenkinsDeployServerSelect

Once the user has selected a server, they can then select an application from the Application dropdown box.  For this field, the Jenkins plug-in goes out to our Nexus repository and retrieves the names of all of our applications, populating the list with these values.

JenkinsDeployAppSelect
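Under the hood this is just an HTTP request against the Nexus repository's content URL.  A minimal sketch of the kind of query involved (not necessarily the plug-in's exact implementation; the host and repository names come from the Ant deployment script covered in the previous post):

# List the artifact directories under our group to discover application names;
# Nexus serves repository content as a browsable HTTP directory listing.
curl -s "http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release/com/lynden/"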

Once the application has been selected, the user can select which version of the application to deploy.  This list of versions is also obtained from Nexus by the plug-in, which displays up to 25 of the most recent versions.

JenkinsDeployAppVersionSelect
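The version list can be derived from the standard maven-metadata.xml file that Nexus maintains for each artifact.  A hedged example for the HelloWorldWeb application, again using the repository URL from the Ant deployment script:

# maven-metadata.xml contains a <versions> element listing every released version;
# the plug-in shows only the 25 most recent entries.
curl -s "http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release/com/lynden/HelloWorldWeb/maven-metadata.xml"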

Once the user has selected the version to deploy, they can optionally specify a version to undeploy.  The plug-in populates this field by looking at the Glassfish instance that the user selected and querying it to determine if any versions of the specified application are currently deployed to it.  In the example below, HelloWorldWeb 1.1.5 and 1.1.4 are both currently deployed to the Glassfish instance running on “webstg”.  This field is optional, and if a value is selected, the Deploy task will first undeploy the specified version before deploying the new version.

JenkinsDeployAppUndeployVersionSelect
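One way to see the same information from the command line (not necessarily how the plug-in gathers it) is to ask the target Glassfish instance directly with asadmin; the host and admin port here are illustrative:

# List the applications deployed to the selected instance; entries such as
# HelloWorldWeb-1.1.5 and HelloWorldWeb-1.1.4 are matched against the selected app.
asadmin --host webstg --port 4848 list-applications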

Finally, the user can select a context to use when deploying the application, which is the portion of the URL after the hostname that the app server uses to identify requests for the application.  The two most common context names are included in the drop down, which are <app-name> and <app-name>-<version-number>.  So for example, if HelloWorldWeb-1.1.3 were selected below, the application would have a URL of http://webstg.lynden.com/HelloWorldWeb-1.1.3

JenkinsDeployAppContextSelect

If a different application context is needed, the user can select “Other” from the dropdown box (in the screenshot above), and then enter an app context name in the “Other” text box below the Context drop down box.

JenkinsDeployAppContextOther

At this point the user can select the “Build” button which will invoke an Ant script detailed in the previous post, downloading the binary from Nexus, undeploying the previously deployed version from Glassfish (if specified), deploying the new version of the application to Glassfish, and then saving information about the deployment to a database which can be viewed from a separate web application.


Configuration

Since nearly all the fields are dynamically populated, there isn’t much work involved in configuring the job in Jenkins.  Below is the Configuration screen for this deployment job.  There are two parameters which must be defined for the job: the environment it will deploy to (i.e. Test, Staging or Prod) and a list of valid Glassfish instances which can be deployed to.  The location of the Maven repository and the location of the Glassfish command-line interface are defined as system properties in Jenkins.

JenkinsDeployAppConfig

By using a Jenkins plug-in to prepopulate the fields required for deploying an application, we have streamlined the deployment process and made it less prone to error.

twitter: @RobTerp

Setting up a Nightly Build Process with Jenkins, SVN and Nexus

We wanted to set up a nightly integration build with our projects so that we could run unit and integration tests on the latest version of our applications and their underlying libraries.  We have a number of libraries that are shared across multiple projects and we wanted this build to run every night and use the latest versions of those libraries even if our applications had a specific release version defined in their Maven pom file.  In this way we would be alerted early if someone added a change to one of the dependency libraries that could potentially break an application when the developer upgraded the dependent library in a future version of the application.

The chart below illustrates our dependencies between our libraries and our applications.

image

Updating Versions Nightly

Both the Crossdock-shared and Messaging-shared libraries depend on the Siesta Framework library.  The Crossdock Web Service and CrossdockMessaging applications both depend on the Crossdock-Shared and Messaging-Shared libraries.  Because of this dependency structure we wanted the SiestaFramework library built first.  The Crossdock-Shared and Messaging-Shared libraries could be built in parallel, but we didn’t want the builds for the Crossdock Web Service and CrossdockMessaging applications to begin until all the libraries had finished building.  We also wanted the nightly build to tag Subversion with the build date as well as upload the artifact to our Nexus “Nightly Build” repository.  The resulting artifact would look something like SiestaFramework-NIGHTLY-20120720.jar

Also, as I mentioned, even though the CrossdockMessaging app may specify in its pom file that it depends on version 5.0.4 of the SiestaFramework library, for the purposes of the nightly build we wanted it to use the freshly built SiestaFramework-NIGHTLY-20120720.jar version of the library.

The first problem to tackle was getting the current date into the project’s version number.  For this I started with the Jenkins Zentimestamp plugin, which allows the format of Jenkins’ BUILD_ID timestamp to be changed.  I used it to specify a format of yyyyMMdd for the timestamp.

image

The next step was to get the timestamp into the version number of the project.  I was able to accomplish this using the Maven Versions plugin, which, among other things, allows you to override the version number in the pom file dynamically at build time.  The code snippet from the SiestaFramework pom file is below.

<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>versions-maven-plugin</artifactId>
   <version>1.3.1</version>
</plugin>

At this point the Jenkins job can be configured to invoke the “versions:set” goal, passing in the new version string to use.  The ${BUILD_ID} Jenkins variable will contain the newly formatted date string.

image

This will produce an artifact with the name SiestaFramework-NIGHTLY-20120720.jar
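In command-line terms the build step boils down to something like the following (the NIGHTLY- prefix is the convention used above; the exact goal wiring lives in the Jenkins job configuration shown in the screenshot):

# Rewrite the project version in the pom before the build runs;
# BUILD_ID has already been reformatted to yyyyMMdd by the Zentimestamp plugin.
mvn versions:set -DnewVersion=NIGHTLY-${BUILD_ID}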


Uploading Artifacts to a Nightly Repository

Since this job needed to upload the artifact to a repository different from the Release repository defined in our project pom files, the “altDeploymentRepository” property was used to pass in the location of the nightly repository.

image

The deployment portion of the SiestaFramework job specifies the location of the nightly repository, where ${LYNDEN_NIGHTLY_REPO} is a Jenkins variable containing the nightly repo URL.
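The altDeploymentRepository property of the Maven deploy plugin takes an id::layout::url triple, so the equivalent command line looks roughly like this (the repository id is hypothetical):

# Push the nightly artifact to the nightly repository instead of the release
# repository defined in the project's <distributionManagement> section.
mvn deploy -DaltDeploymentRepository=lynden-nightly::default::${LYNDEN_NIGHTLY_REPO}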


Tagging Subversion

Finally, the Jenkins Subversion Tagging Plugin was used to tag SVN if the project was successfully built.  The plugin provides a Post-build Action for the job with the configuration section shown below.

image


Dynamically Updating Dependencies

Now that the main project is set up, the dependent projects are set up in a similar way, but need to be configured to use the SiestaFramework-NIGHTLY-20120720 version of the dependency rather than whatever version they currently have specified in their pom file.  This can be accomplished by changing the pom to use a property for the version number of the dependency.  For example, suppose the snippet below is from the original pom file.

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>5.0.1</version>
   </dependency>
</dependencies>

It could be changed to the following to allow the SiestaFramework version to be set dynamically.

<properties>
   <siesta.version>5.0.1</siesta.version>
</properties>

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>${siesta.version}</version>
   </dependency>
</dependencies>

This version can then be overridden by the Jenkins job. The example below shows the Jenkins configuration for the Crossdock-shared build.

image
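Because siesta.version is an ordinary Maven property, the nightly job only has to override it on the command line; a sketch of the idea (the goal list is illustrative):

# Build Crossdock-shared against tonight's freshly built SiestaFramework
# rather than the 5.0.1 release pinned in the pom.
mvn clean deploy -Dsiesta.version=NIGHTLY-${BUILD_ID}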


Enforcing Build Order

The final step in this process is setting up a structure to enforce the build order of the projects.  The dependencies are set up in such a way that SiestaFramework needs to be built first, and the Crossdock-shared and Messaging-shared libraries can be run concurrently once SiestaFramework finishes. The Crossdock Web Service and CrossdockMessaging application jobs can be run concurrently, but not until after both shared libraries have finished.

Setting up the Crossdock-shared and Messaging-shared jobs to be built after SiestaFramework completes is pretty straightforward.  In the Jenkins job configuration for both the shared libraries, the following build trigger is added.

image

For the requirement of not having the apps build until all libraries have built, I enlisted the help of the Join Plugin.  The Join Plugin can be used to execute a job once all “downstream” jobs have completed.  What does this mean exactly?  Looking at the diagram below, the Crossdock-Shared and the Messaging-Shared jobs are “downstream” from the SiestaFramework job.  Once both of these jobs complete, a Join trigger can be used to start other jobs.

image

In this case, rather than having the Join trigger kick off the app jobs directly, I created a dummy Join job.  This way, as we add more application builds, we don’t need to keep modifying the SiestaFramework job for each new application job we add.

To illustrate the configuration, SiestaFramework has a new Post-build Action (below).

image

Join-Build is a Jenkins job I configured that does not do anything when executed.  Then our Crossdock Web Service and CrossdockMessaging applications define their builds to trigger as soon as Join-Build has completed.

image

In this way we are able to run builds each night which will update to the latest version of our dependencies as well as tag SVN and archive the binaries to Nexus.  I’d love to hear feedback from anyone who is handling nightly builds via Jenkins, and how they have handled the configuration and build issues.

twitter: @RobTerp

Creating a Deployment Pipeline with Jenkins, Nexus, Ant and Glassfish

In a previous post I discussed how we created a build pipeline using Jenkins to create application binaries and move them into our Nexus repository. (Blog post here)  In this post I will show how we are using Jenkins to pull a versioned binary out of Nexus and deploy to one of our remote test, staging or production Glassfish servers.  By remote I mean that the Glassfish instance does not live on the same box as the Jenkins CI instance, but both machines are on the same network.

In a previous post I also discussed how to set up Glassfish v3 to allow deployments pushed from remote servers (Blog post here), so if you haven’t explicitly configured your Glassfish to allow this feature, you will need to do so before you get started.

On our Jenkins CI box we have an Ant script, which will be executed by a Jenkins job manually kicked off by a user. The script defines tasks to perform the following operations:

  • Ensure all needed parameters were entered by the user (app name, version number, admin username/password, etc.).
  • Copy the specified version of the application from Nexus to a local temp directory for deployment to a remote Glassfish instance.
  • Undeploy a previous version of the app from the target Glassfish instance.  (Optional).
  • Deploy the app from the temp directory to the target Glassfish instance.
  • Record the deployment info in a deployment tracking database table.  Historical deployment info can then be viewed from a web app.

Ant Script

Below are some of the more interesting code snippets from our Ant script that will be doing the heavy lifting for our deployment pipeline.  The first code snippet below defines the Ant tasks needed for deploying and undeploying applications from Glassfish.  These Ant tasks are bundled with Glassfish, but not installed by default.  If you haven’t installed them, you will need to do so from your Glassfish update center admin page.

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

<taskdef name="sun-appserv-undeploy" classname="org.glassfish.ant.tasks.UndeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

Once we have the tasks defined, we create a new target for pulling the binary from Nexus and copying it to a temporary location from where it will be deployed to Glassfish.

<target name="copy.from.nexus">
   <echo message="copying from nexus"/>    
   <get src="http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release/com/lynden/${app.name}/${version.number}/${app.name}-${version.number}.${package.type}" dest="/tmp/${app.name}-${version.number}.war"/>
</target>

Next is a target to undeploy a previous version of the application from Glassfish.  This step is optional and only executed if the user specifies a version to undeploy from Jenkins.

<target name="undeploy.from.glassfish" if="env.Undeploy_Version">
   <echo message="Undeploying app: ${app.name}-${undeploy.version}"/>
   <echo file="/tmp/gf.txt" message="AS_ADMIN_PASSWORD=${env.Admin_Password}"/>
   <sun-appserv-undeploy name="${app.name}-${undeploy.version}" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311"/>
   <delete file="/tmp/gf.txt"/>
</target>

Next, we define a target to perform the deployment of the application to Glassfish.

<target name="deploy.to.glassfish.with.context" if="context.is.set">
    <sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war" name="${app.name}-${version.number}" force="true" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311" contextroot="${App_Context}"/>
</target>

And then finally, we define a target which will call a servlet and pass information to it, such as app name, version, who deployed the app, etc., so that it can be recorded in our deployment database.

<target name="tag.uv.deploy.file">
   <tstamp>
      <format property="time" pattern="yyyyMMdd-HHmmss"/>
   </tstamp> 

   <!--Ampersand character for the URL -->
   <property name="A" value="&amp;"/>
   <get src="http://shuyak.lynden.com:8080/DeploymentRecorder/DeploymentRecorderServlet?app=${app.name}${A}date=${time}${A}environment=${deploy.env}${A}serverName=${server.name}${A}serverPort=${server.port}${A}adminPort=${admin.port}${A}serverType=${server.type}${A}version=${version.number}${A}who=${deploy.user}${A}contextName=${App_Context}" dest="/dev/null"/>

   <echo message="tagging deployment info to UV"/>
</target>

Jenkins Configuration

Now that we have an Ant script to perform the actions that we need to do a deployment, we set up a Jenkins job to deploy to servers in each one of our environments (test/staging/prod).
image


In order to kick off a deployment to one of our servers, the appropriate environment is selected from the screenshot above, and the “Build Now” link is clicked which presents the user with the screen below. In this case we are deploying to a test Glassfish domain named “bjorn” on unga.lynden.com
image

The user can select from the drop down list the server they wish to deploy to and the application they wish to deploy.  The version number is a required entry in a text field.  If the script can’t find the specified version in Nexus, the build will fail.  There are also optional parameters for specifying an existing version to undeploy as well as an application context in the event the app name shouldn’t be used as the default context.


In the screenshot below we are deploying version 5.0.0 of the Crossdock App to the “bjorn” domain running on unga.lynden.com

image
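Behind the scenes, Jenkins simply hands these form values to the Ant script as properties and environment variables.  A hedged sketch of what that invocation amounts to for the deployment above (the build file name is hypothetical; the property names match the targets shown earlier):

# Credentials arrive as environment variables (referenced as env.* in the script).
export Admin_Username=admin
export Admin_Password=changeit   # placeholder
ant -f deploy.xml deploy.to.glassfish.with.context \
    -Dapp.name=Crossdock -Dversion.number=5.0.0 \
    -Dserver.name=unga.lynden.com -Dadmin.port=4848 \
    -DApp_Context=Crossdock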


Once the job completes, if we log into the bjorn Glassfish admin page on unga.lynden.com, we see that Crossdock-5.0.0 has been deployed to the server.

image


The screenshot below is an example of undeploying version 5.0.0 of Crossdock, and deploying version 5.0.1 of Crossdock.  Also, in this example, we are telling the script that we want the web context in Glassfish to be /Crossdock, rather than the default /Crossdock-5.0.1

image


The screenshot of the Glassfish admin page below shows that Crossdock-5.0.0 has been uninstalled, and that Crossdock-5.0.1 is now installed with a Context Root of /Crossdock.

image


Deployment History

Finally, as I mentioned previously, the Ant script is also saving the deployment information to a historical deployment table in our database.  We have written a simple web application which will display this historical data.  The screenshot below shows all of the applications that have been deployed to our test environment. (We have similar pages for Staging and Production as well).  Included in this information is the application name, version number, date, and who deployed it, among some other miscellaneous info.

image

We can then drill into the history of a specific application by clicking the “Crossdock” link on the screen above and get a detailed history of the deployments for that application, including version numbers and dates.  We maintain more than 60 different web applications serving various purposes here at Lynden, so this has been a great tool for us to see exactly which versions of our applications are currently deployed where, as well as to see the deployment history of a specific application in the event we need to roll back to a previous version.

image

As we have learned firsthand, Jenkins is a very useful and versatile tool that is easy to extend for purposes beyond automated builds and continuous integration.

twitter: @RobTerp

Migrating an Automated Deployment Script from Glassfish v2 to Glassfish v3

Just recently we attempted to update an Ant script, which we had been using to do automated deployments to Glassfish v2 servers, to deploy to our new Glassfish v3 servers instead.  This Ant script is invoked from our Jenkins automated deploy pipeline process (another blog post about this later) and copies a .war file from our Nexus repository and installs it to a Glassfish instance running on a remote server.  I ran into a number of issues that prevented the Ant script from being used “as-is”.

The first thing to note is that this assumes there is a Glassfish v3 installation on the build machine from which this script is executed.  The build script needs access to the “asadmin” tool in the glassfish/bin directory in order to deploy a .war file to a remote Glassfish server.

The first issue was that the Ant deploy/undeploy tasks for Glassfish are in a different location in v3.  Actually, not only is the .jar file with the Ant tasks in a different location in the newer version of Glassfish, it’s not installed by default!  In order to get the Ant tasks you’ll need to go to the Update Tool on the admin web page of your Glassfish v3 instance and install the “glassfish-ant-tasks” component.  Once you’ve done that you can modify your Ant script to use the new Ant tasks (which are also located in a different Java package).  The code snippets below compare Glassfish v2 and v3 Ant task usage.

<!-- Glassfish v2 ant tasks -->

<taskdef name="sun-appserv-deploy" classname="org.apache.tools.ant.taskdefs.optional.sun.appserv.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish/lib/sun-appserv-ant.jar"/>
   </classpath>
</taskdef>

<!-- Glassfish v3 ant tasks -->

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfishv3/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

The next change that needs to be made to the build script is the call to the sun-appserv-deploy task itself.  The "installDir" property has been changed to "asinstalldir" for Glassfish v3.  A comparison of the v2 and v3 code snippets is below.

<sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war"
name="${app.name}-${version.number}" force="true" 
host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" 
passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish"/>

<sun-appserv-deploy file="/tmp/${app.name}.war"
name="${app.name}" force="true"
host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
passwordfile="/tmp/gf.txt" asinstalldir="/nas/apps/cisunwk/glassfish311"/>

The final task in getting this to work is to enable remote commands on each of the target glassfish instances to which the apps will be deployed.

./asadmin enable-remote-admin --port <glassfish-admin-port>

where <glassfish-admin-port> is the admin port for that domain (4848 by default).

The last “gotcha” to keep in mind here is that the Glassfish installation referenced in the <sun-appserv-deploy> task via the “asinstalldir” property MUST be the same version as the remote target Glassfish instance that the web app will be deployed to.  At least this was the case when we attempted to specify a Glassfish v3.0 installation in the asinstalldir property while deploying to a remote Glassfish v3.1 instance.  When we updated asinstalldir to point to a v3.1 installation, the deployment went fine. (v3.1 to v3.1.2 may be OK.)
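A quick sanity check before a deployment attempt is to compare the two versions with asadmin; something along these lines (the install path is the one used in the snippets above, and the placeholders follow the same convention as <glassfish-admin-port>):

# Version of the Glassfish install referenced by asinstalldir on the build box
/nas/apps/cisunwk/glassfish311/bin/asadmin version --local
# Version of the remote target instance
/nas/apps/cisunwk/glassfish311/bin/asadmin --host <remote-host> --port <glassfish-admin-port> version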

Hopefully this will help others that have encountered similar issues while attempting to do deployments to remote Glassfish instances either from Jenkins, Ant or other automated processes.

Twitter: @RobTerp

Stamping Version Number and Build Time in a Properties File with Maven

Stamping the version number and the build time of an application in a properties file so that it could be displayed by an application at runtime seemed like it should be a pretty straightforward task, although it took a bit of time to find a solution that didn’t require the timestamp, version, or ant-run plugins.

I started with a version.txt file at the default package level in src/main/resources of my project, which looks as follows.

version=${pom.version}
build.date=${timestamp}

By default the Maven resources plug-in will not do text substitution (filtering), so it will need to be enabled within the <build> section of the pom.xml file.

<resources>
   <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
   </resource>
</resources>

Maven does actually define a ${maven.build.timestamp} property which in theory could have been used in the version.txt file, rather than the ${timestamp} property, but unfortunately a bug within Maven prevents the ${maven.build.timestamp} property from getting passed to the resource filtering mechanism.  (issue: http://jira.codehaus.org/browse/MRESOURCES-99).

The workaround is to create another property within the pom.xml file and set that new property to the timestamp value; in this case, the property name is “timestamp”, which is used above in the version.txt file.  The maven.build.timestamp.format is an optional property for (obviously) defining the timestamp format.

<properties>
   <timestamp>${maven.build.timestamp}</timestamp>

   <maven.build.timestamp.format>yyyy-MM-dd HH:mm</maven.build.timestamp.format>
</properties>

Now, when the build is executed we end up with the version number and build time of the application in the version.txt file.

version=1.0.2-SNAPSHOT
build.date=2012-03-16 15:42
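To verify the substitution without running a full build, the resources phase alone is enough, since Maven writes the filtered copy to target/classes:

# Run resource filtering only, then inspect the filtered file
mvn process-resources
cat target/classes/version.txt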

twitter: @RobTerp
twitter: @LimitUpTrading

Creating a build pipeline using Maven, Jenkins, Subversion and Nexus

For a while we had been operating in the wild west when it came to building our applications and deploying them to production.  Builds were typically done straight from the developer’s IDE and manually deployed to one of our app servers.  We had a manual process in place, where the developer would perform the following steps:

  • Check all project code into Subversion and tag
  • Build the application.
  • Archive the application binary to a network drive
  • Deploy to production
  • Update our deployment wiki with the date and version number of the app that was just deployed.

The problem was that occasionally one of these steps was missed, and it always seemed to happen when we needed to either roll back to the previous version or branch from the tag to do a bugfix.  Sometimes the previous version had not been archived to the network, or the developer had forgotten to tag SVN.  We were already using Jenkins to perform automated builds, so we wanted to look at extending it further to perform release builds.

The Maven Release plug-in provides a good starting point for creating an automated release process.  We had also just started using the Nexus Maven repository and wanted to archive our binaries to it, rather than to a network drive.

The first step is to set up the project’s pom file with the Release plug-in as well as configuration information about our Nexus and Subversion repositories.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-release-plugin</artifactId>
   <version>2.2.2</version>
   <configuration>
      <tagBase>http://mks:8080/svn/jrepo/tags/Frameworks/SiestaFramework</tagBase>
   </configuration>
</plugin>

The Release plugin configuration is pretty straightforward: it takes the Subversion URL of the location where the tags for this project will reside.

The next step is to configure the SVN location where the code will be checked out from.

<scm>
   <connection>scm:svn:http://mks:8080/svn/jrepo/trunk/Frameworks/SiestaFramework</connection>
   <url>http://mks:8080/svn</url>
</scm>

The last step in configuring the project is to set up the location where the binaries will be archived to.  In our case, the Nexus repository.

<distributionManagement>
   <repository>
       <id>Lynden-Java-Release</id>
       <name>Lynden release repository</name>
       <url>http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release</url>
    </repository>
</distributionManagement>

The project is now ready to use the Maven release plug-in.  The Release plugin provides a number of useful goals.

  • release:clean – Cleans the workspace in the event the last release process was not successful.
  • release:prepare – Performs a number of operations:
    • Checks to make sure that there are no uncommitted changes.
    • Ensures that there are no SNAPSHOT dependencies in the POM file.
    • Changes the version of the application and removes SNAPSHOT from the version, i.e. 1.0.3-SNAPSHOT becomes 1.0.3.
    • Runs the project tests against the modified POMs.
    • Commits the modified POM.
    • Tags the code in Subversion.
    • Increments the version number and appends SNAPSHOT, i.e. 1.0.3 becomes 1.0.4-SNAPSHOT.
    • Commits the modified POM.
  • release:perform – Performs the release process:
    • Checks out the code using the previously defined tag.
    • Runs the deploy Maven goal to move the resulting binary to the repository.

Putting it all together

The last step in this process is to configure Jenkins to allow release builds on demand, meaning we want the user to explicitly kick off a release build for this process to take place.  We downloaded and installed the Jenkins Release plug-in in order to allow developers to kick off release builds from Jenkins.  The Release plug-in will execute tasks after the normal build has finished.  Below is a screenshot of the configuration of one of our projects.  The release build option for the project is enabled by selecting the “Configure release build” option in the “Build Environment” section.

The Maven release plug-in is activated by adding the goals to the “After successful release build” section.  (The -B option enables batch mode so that the Release plug-in will not ask the user for input, but will use defaults instead.)

image
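For reference, the goals entered in that section are equivalent to running the Release plug-in from the command line, roughly:

# Batch mode (-B) tags SVN, bumps the version, and pushes the release
# binary to Nexus without prompting for input.
mvn -B release:prepare release:perform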

Once the release option has been configured for a project there will be a “Release” icon on the left navigation menu for the project. Selecting this will kick off a build and then the Maven release process, assuming the build succeeds.

image

Finally a look at SVN and Nexus verifies that the build for version 1.0.4 of the Siesta-Framework project has been tagged in SVN and uploaded to Nexus.

image

image

The next steps for this project will be to generate release notes for release builds, and also to automate a deployment pipeline, so that developers can deploy to our test, staging and production servers via Jenkins rather than manually from their development workstations.

twitter: @RobTerp
blog: http://rterp.wordpress.com