Monitoring and Managing your Java EE application with JMX

I have recently become a big fan of the Java Management eXtensions (JMX) framework for monitoring and managing our server-side applications running on Glassfish 3.  Setting an application up to be monitored via JMX with either VisualVM or JConsole (both available as part of the JDK) is a relatively easy and straightforward process, so much so that I don’t know why we didn’t start using JMX earlier.

We have an application that runs within Glassfish and acts as a proxy between the handheld wireless devices in our warehouses and our ActiveMQ messaging server.  We wanted a way to monitor the health of this application in real time and looked at using JMX for the task.

The process is fairly painless.  Start by creating an interface with “get” methods for the data that should be available through JMX.  The interface name should be suffixed with “MXBean” if any of the interface’s methods need to return values other than primitives.  In the example below, getClients() returns a Set of MessagingProxyClientMXBean objects.  These MXBeans dive further into device-specific info and also show up as a tree structure in VisualVM (more about these beans later).  Below is the interface for exposing data related to our messaging proxy server.

public interface ProxyServerInfoMXBean {

    public int getConnectedClientCount();

    public String getProxyVersion();

    public int getLastMinuteMessageCount();

    public int getLastHourMessageCount();

    public int getLast24HourMessageCount();

    public int getSubscriberConnectionCount();

    public int getPublisherConnectionCount();

    public int getBrokerConnectionCount();

    public Set<MessagingProxyClientMXBean> getClients();

    public String getBuildDate();

}

As you may expect, the implementation of the interface for the proxy’s MXBean is pretty straightforward as well.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.log4j.Logger; // assumed: Logger.getLogger(getClass()) matches the Log4j API

public class ProxyServerInfo implements ProxyServerInfoMXBean {

    protected Map<String, IClientContext> clientMap = new HashMap<String, IClientContext>();
    protected Logger logger;
    protected MessageCounter messageCounter;

    public ProxyServerInfo(Map<String, IClientContext> clientMap) {
        this.clientMap = clientMap;
        logger = Logger.getLogger( getClass() );
        messageCounter = MessageCounter.getInstance();
    }

    @Override
    public int getConnectedClientCount() {
        return clientMap.size();
    }

    @Override
    public Set<MessagingProxyClientMXBean> getClients() {
        Set<MessagingProxyClientMXBean> beanSet = new HashSet<MessagingProxyClientMXBean>();
        for (IClientContext context : clientMap.values()) {
            beanSet.add(new MessagingProxyClient(context));
        }

        return beanSet;
    }

    @Override
    public int getLast24HourMessageCount() {
        return messageCounter.getLast24HourCount();
    }

    @Override
    public int getLastHourMessageCount() {
        return messageCounter.getLastHourMessageCount();
    }

    @Override
    public int getLastMinuteMessageCount() {
        return messageCounter.getLastMinuteMessageCount();
    }

    @Override
    public String getProxyVersion() {
        return Server.getInstance().getVersion();
    }

    @Override
    public String getBuildDate() {
        return Server.getInstance().getBuildDate();
    }

    @Override
    public int getBrokerConnectionCount() {
        return Server.getInstance().getBrokerConnectionCount();
    }

    @Override
    public int getPublisherConnectionCount() {
        return Server.getInstance().getPublisherConnectionCount();
    }

    @Override
    public int getSubscriberConnectionCount() {
        return Server.getInstance().getSubscriberConnectionCount();
    }

}

The final piece to get the MXBean to appear in JConsole is to obtain the platform MBean server and register the MXBean with it.  In the code snippet below, a unique nameString is defined, which determines the bean’s location in JConsole’s tree hierarchy.

protected MBeanServer mbs = null;
protected ProxyServerInfo proxyServerMBean;
protected String nameString = "com.lynden.messaging:type=MessagingProxyClients";

protected ObjectName mBeanName;

public void connect() {
    mbs = ManagementFactory.getPlatformMBeanServer();
    proxyServerMBean = new ProxyServerInfo(clientMap);
    try {
        mBeanName = new ObjectName(nameString);
        try {
            mbs.unregisterMBean(mBeanName);
        } catch (Exception ex) {
            // there may not already be a bean there to unregister
        }
        mbs.registerMBean(proxyServerMBean, mBeanName);
    } catch (Exception ex) {
        throw new IllegalStateException(ex);
    }
}

Launching VisualVM and connecting to Glassfish’s JMX port, the “com.lynden.messaging” node appears and shows the MessagingProxyClients info that was defined in the class and interface above.
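(For reference, Glassfish 3’s JMX connector typically listens on port 8686 by default, so a remote VisualVM/JConsole connection uses a service URL along the lines of the one below; the host name is a placeholder for your environment.)

service:jmx:rmi:///jndi/rmi://glassfish-host:8686/jmxrmi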

[Screenshot: the com.lynden.messaging node and its MXBean attributes in VisualVM]


Nesting MXBeans

As previously mentioned, MXBeans can be nested, which in this case can be used to display information about the individual clients that are connected to the proxy server.

Just like in the previous example, an MXBean interface can be created to represent the various pieces of data about the client that we want available in JMX, along with an implementation of the interface (not shown).
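As a rough sketch (the attribute names here are hypothetical; the real interface exposes whatever per-client data is useful), the client’s interface might look something like this:

public interface MessagingProxyClientMXBean {

    public String getClientId();       // hypothetical attribute

    public String getIpAddress();      // hypothetical attribute

    public long getMessagesReceived(); // hypothetical attribute

}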

Registering the MXBean for a client is similar to the above process for registering the server’s MXBean, except for the “clientNameString” variable shown below, which takes the form type=MessagingProxyClients,name=<some-client-id>.  Once that is in place, any new clients that connect to the proxy will automatically appear in JConsole without needing any sort of refresh.

String clientNameString = "com.lynden.messaging:type=MessagingProxyClients,name=";

MessagingProxyClient mpc = new MessagingProxyClient(client);
ObjectName clientName = new ObjectName(clientNameString + client.getClientId()); 
mbs.registerMBean(mpc, clientName); 
clientObjectList.add(clientNameString + client.getClientId());

When clients disconnect from the proxy, the client’s MXBeans are deregistered from JMX in a similar manner.

String clientNameString="com.lynden.messaging:type=MessagingProxyClients,name=";
mbs.unregisterMBean(new ObjectName(clientNameString + client.getClientId()));
clientObjectList.remove(clientNameString + client.getClientId());

Below is a screenshot from VisualVM of an expanded proxy server node, displaying all the connected client instances.  The right side of the window shows the various properties that are available from the client’s MXBean.

[Screenshot: the expanded proxy server node in VisualVM, listing connected clients and their MXBean properties]


JMX Operations

One final piece of JMX functionality that can be useful is the ability to execute methods on the MXBeans (called operations) from VisualVM/JConsole.  To make a method available via JMX, it only needs to be declared in the MXBean interface and implemented; it does not have to conform to the standard JavaBean “getter” format.  In our client’s MXBean, in addition to the getter methods, the bean also contains a removeClient() method, which forces the client’s connection to the proxy to close.  Below is a snippet from the client MXBean’s implementation.

@Override
public void removeClient() {
     client.setFlaggedForDisposal(true);
}
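For the operation to show up in JConsole’s “Operations” tab, the method must also be declared in the client’s MXBean interface, along these lines:

public interface MessagingProxyClientMXBean {

    // ... getter methods ...

    public void removeClient();
}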

Below is a screenshot from VisualVM showing that the method can be invoked from the bean’s “Operations” tab by clicking the “removeClient” button.

[Screenshot: the removeClient button on the bean’s “Operations” tab in VisualVM]

With this information available from our production servers via JMX, we now have a very useful tool for debugging issues as they arise in production, one that gives us greater insight than the application’s log files alone.

twitter: @RobTerp

Hacking ActiveMQ to Retrieve Connection Statistics

This hack is so awful that I am almost embarrassed to blog about it, but this was the only way to obtain the information we needed from ActiveMQ to display in an MBean in JConsole.

We have written a Java proxy server to allow Windows CE devices running a C# application to connect to our JMS message broker.  The proxy server runs within a Glassfish instance to make managing its lifecycle a bit easier.  For system diagnostics and debugging purposes we created an MBean, accessed via JMX, so we could get real-time statistics from the proxy through JConsole or VisualVM, such as the number of connected clients, message counts, and the total number of connections open from the proxy to the broker.

Getting the number of connections that the proxy had opened to the broker seemed like it *should* be a fairly straightforward task: enable statistics on the ActiveMQConnectionFactory instance by calling setStatsEnabled(true), retrieve the connection statistics from the factory via getStats(), and read the total number of open connections.  However, no matter what we did, the getStats() call always returned null, and for good reason; upon inspecting the ActiveMQConnectionFactory source code we found:

public StatsImpl getStats() {
    // TODO
    return null;
}

The ActiveMQConnectionFactory class contains a field called factoryStats of type JMSStatsImpl.  The field is protected and there are no getter methods to retrieve it, so we had to resort to drastic measures if we really wanted to know how many connections the proxy had open to the broker.  The code to get the connection information from the factory appears below.

    @Override
    public int getBrokerConnectionCount() {

        int count = -1;
        try {
            // factoryStats is protected with no accessor, so use reflection to read it
            Field factoryStatsField = ActiveMQConnectionFactory.class.getDeclaredField("factoryStats");
            factoryStatsField.setAccessible(true);
            JMSStatsImpl factoryStats = (JMSStatsImpl) factoryStatsField.get(connectionFactory);
            count = factoryStats.getConnections().length;
        } catch (Exception ex) {
            logger.error("Attempted to get Broker connection count, but failed due to: " + ex.getMessage());
        }
        return count;
    }

Even though the field is protected, reflection can be used to mark it accessible, and its value can then be read directly from the factory object.  This is not guaranteed to work in future releases of ActiveMQ, but since this is only for informational purposes in JConsole, it was good enough for what we needed.

Twitter: @RobTerp

Stamping Version Number and Build Time in a Properties File with Maven

Stamping the version number and build time of an application in a properties file, so that they can be displayed by the application at runtime, seemed like it should be a pretty straightforward task.  It took a bit of time, though, to find a solution that didn’t require the timestamp, version, or ant-run plugins.

I started with a version.txt file at the default package level in src/main/resources of my project, which looks as follows.

version=${pom.version}
build.date=${timestamp}

By default the Maven resources plug-in does not perform text substitution (filtering), so it will need to be enabled within the <build> section of the pom.xml file.

<resources>
   <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
   </resource>
</resources>

Maven does actually define a ${maven.build.timestamp} property which, in theory, could have been used in the version.txt file rather than the ${timestamp} property.  Unfortunately, a bug in Maven prevents the ${maven.build.timestamp} property from being passed to the resource filtering mechanism (issue: http://jira.codehaus.org/browse/MRESOURCES-99).

The workaround is to define a new property within the pom.xml file and set it to the timestamp value; in this case the property is named “timestamp”, which is what the version.txt file above references.  The maven.build.timestamp.format property is optional and (obviously) defines the timestamp format.

<properties>
   <timestamp>${maven.build.timestamp}</timestamp>

   <maven.build.timestamp.format>yyyy-MM-dd HH:mm</maven.build.timestamp.format>
</properties>

Now, when the build is executed we end up with the version number and build time of the application in the version.txt file.

version=1.0.2-SNAPSHOT
build.date=2012-03-16 15:42
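Reading the values back at runtime is then just a matter of loading the properties file from the classpath.  Below is a minimal sketch (the class name and output wording are mine; it assumes version.txt lands at the classpath root, which is where src/main/resources contents end up by default):

import java.io.InputStream;
import java.util.Properties;

public class VersionInfo {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // version.txt is filtered and copied from src/main/resources to the classpath root
        InputStream in = VersionInfo.class.getResourceAsStream("/version.txt");
        try {
            props.load(in);
        } finally {
            in.close();
        }
        System.out.println("version: " + props.getProperty("version"));
        System.out.println("build date: " + props.getProperty("build.date"));
    }
}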

twitter: @RobTerp
twitter: @LimitUpTrading

Creating a build pipeline using Maven, Jenkins, Subversion and Nexus.

For a while now we had been operating in the Wild West when it came to building our applications and deploying them to production.  Builds were typically done straight from the developer’s IDE and manually deployed to one of our app servers.  We had a manual process in place, where the developer would do the following steps:

  • Check all project code into Subversion and tag it.
  • Build the application.
  • Archive the application binary to a network drive.
  • Deploy to production.
  • Update our deployment wiki with the date and version number of the app that was just deployed.

The problem was that occasionally one of these steps was missed, and it always seemed to happen at a time when we needed to either roll back to the previous version or branch from the tag to do a bugfix.  Sometimes the previous version had not been archived to the network, or the developer had forgotten to tag SVN.  We were already using Jenkins to perform automated builds, so we wanted to look at extending it to perform release builds as well.

The Maven Release plug-in provides a good starting point for creating an automated release process.  We had also just started using the Nexus Maven repository and wanted to incorporate it as well, archiving our binaries to Nexus rather than to a network drive.

The first step is to set up the project’s pom file with the Release plugin, as well as configuration information for our Nexus and Subversion repositories.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-release-plugin</artifactId>
   <version>2.2.2</version>
   <configuration>
      <tagBase>http://mks:8080/svn/jrepo/tags/Frameworks/SiestaFramework</tagBase>
   </configuration>
</plugin>

The Release plugin configuration is pretty straightforward: it takes the Subversion URL of the location where the tags for this project will reside.

The next step is to configure the SVN location where the code will be checked out from.

<scm>
   <connection>scm:svn:http://mks:8080/svn/jrepo/trunk/Frameworks/SiestaFramework</connection>
   <url>http://mks:8080/svn</url>
</scm>

The last step in configuring the project is to set up the location where the binaries will be archived to.  In our case, the Nexus repository.

<distributionManagement>
   <repository>
      <id>Lynden-Java-Release</id>
      <name>Lynden release repository</name>
      <url>http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release</url>
   </repository>
</distributionManagement>
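One note (an assumption about the environment, not shown in the original configuration): if the Nexus repository requires authentication for deployment, Maven expects the credentials in a <server> entry in settings.xml whose <id> matches the repository id above.

<!-- in ~/.m2/settings.xml; username/password are placeholders -->
<server>
   <id>Lynden-Java-Release</id>
   <username>deployment-user</username>
   <password>********</password>
</server>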

The project is now ready to use the Maven release plug-in.  The Release plugin provides a number of useful goals.

  • release:clean – Cleans the workspace in the event the last release process was not successful.
  • release:prepare – Performs a number of operations:
    • Checks to make sure that there are no uncommitted changes.
    • Ensures that there are no SNAPSHOT dependencies in the POM file.
    • Changes the version of the application, removing SNAPSHOT from the version (e.g. 1.0.3-SNAPSHOT becomes 1.0.3).
    • Runs the project tests against the modified POMs.
    • Commits the modified POM.
    • Tags the code in Subversion.
    • Increments the version number and appends SNAPSHOT (e.g. 1.0.3 becomes 1.0.4-SNAPSHOT).
    • Commits the modified POM.
  • release:perform – Performs the release process:
    • Checks out the code using the previously created tag.
    • Runs the deploy Maven goal to move the resulting binary to the repository.

Putting it all together

The last step in this process is to configure Jenkins to allow release builds on demand, meaning we want the user to have to explicitly kick off a release build for this process to take place.  We downloaded and installed the Jenkins Release plug-in in order to allow developers to kick off release builds from Jenkins.  The Release plug-in executes tasks after the normal build has finished.  Below is a screenshot of the configuration of one of our projects.  The release build option for the project is enabled by selecting the “Configure release build” option in the “Build Environment” section.

The Maven release plug-in is activated by adding the goals to the “After successful release build” section; an example goal string is shown below.  (The -B option enables batch mode so that the Release plug-in will not ask the user for input, but uses defaults instead.)
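A typical goal string for that section might look like the following (illustrative; the exact goals and options can vary per project):

-B release:prepare release:perform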

[Screenshot: Jenkins project configuration with the release build options enabled]

Once the release option has been configured for a project, a “Release” icon will appear on the left navigation menu for the project.  Selecting it will kick off a build and then, assuming the build succeeds, the Maven release process.

[Screenshot: the Release icon in the Jenkins project menu]

Finally, a look at SVN and Nexus verifies that the build of version 1.0.4 of the Siesta-Framework project has been tagged in SVN and uploaded to Nexus.

[Screenshot: the 1.0.4 tag for Siesta-Framework in Subversion]

[Screenshot: the 1.0.4 artifact in the Nexus repository]

The next steps for this project will be to generate release notes for release builds, and also to automate a deployment pipeline, so that developers can deploy to our test, staging and production servers via Jenkins rather than manually from their development workstations.

twitter: @RobTerp
blog: http://rterp.wordpress.com

Issues with Server Names and Maven Subversion scm:tag goal

So I was attempting to set up the Maven SCM plug-in to tag our release builds, but encountered a nasty error when executing the scm:tag goal.  The setup in the project’s pom.xml file is below:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-scm-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <tagBase>http://svn:8080/svn/jrepo/tags/Frameworks/${project.artifactId}</tagBase>
        <tag>${project.version}</tag>
    </configuration>
</plugin>

The error that was encountered when executing the goal was:

C:\Users\robt\Java-Source\Maven-Conversion\SiestaFramework>mvn scm:tag
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building SiestaFramework 1.0.4
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-scm-plugin:1.0:tag (default-cli) @ SiestaFramework ---
[INFO] Final Tag Name: '1.0.4'
[INFO] Executing: svn --non-interactive copy --file C:\Users\robt\AppData\Local\Temp\maven-scm-1167960694.commit . http://svn:8080/svn/jrepo/tags/Frameworks/SiestaFramework/1.0.4
[INFO] Working directory: C:\Users\robt\Java-Source\Maven-Conversion\SiestaFramework
[ERROR] Provider message:
[ERROR] The svn tag command failed.
[ERROR] Command output:
[ERROR] svn: E235000: In file 'C:\csvn\requirements\subversion\subversion\libsvn_client\commit_util.c' line 474: assertion failed ((copy_mode_root && copy_mode) || ! copy_mode_root)

After some research I found some information regarding a potential issue with the SVN client when the server name used in the svn tag command is different from the server name used when the project was checked out.  This was a possibility in my case, since our SVN server can be referenced as both svn.lynden.com and mks.lynden.com.

Running the svn info command on the local project directory revealed the following information:

C:\Users\robt\Java-Source\Maven-Conversion\SiestaFramework>svn info
Path: .
Working Copy Root Path: C:\Users\robt\Java-Source\Maven-Conversion\SiestaFramework
URL: http://mks:8080/svn/jrepo/trunk/Frameworks/SiestaFramework
Repository Root: http://mks:8080/svn/jrepo
Repository UUID: 14f29ef1-cf71-606c-dc27-f3467c99571f
Revision: 4520
Node Kind: directory
Schedule: normal
Last Changed Author: robt
Last Changed Rev: 4520
Last Changed Date: 2012-02-02 18:07:34 -0800 (Thu, 02 Feb 2012)

As suspected, this proved to be the issue.  The project’s pom.xml file defined the server as “svn”, while the project directory had been checked out from “mks”.  Changing the tagBase to “mks” cleared the issue and the project could be tagged in SVN using the Maven plugin.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-scm-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <tagBase>http://mks:8080/svn/jrepo/tags/Frameworks/${project.artifactId}</tagBase>
        <tag>${project.version}</tag>
    </configuration>
</plugin>

Debugging Web Service Issues When Migrating from Glassfish Version 2 to 3.

For the most part our migration of web applications (including web services) from Glassfish version 2 to version 3 has been relatively painless. We did however encounter a strange issue with one of our server apps that has both a web service and an RMI server component. The application has been running without problems on Glassfish 2, but when we attempted to deploy to Glassfish 3 we encountered the error message shown in the screenshot below:

“Error occurred during deployment: Exception while preparing the app: Servlet CarrierBillWS implements 2 web service endpoints but must only implement 1”

[Screenshot: Glassfish deployment error dialog]

The initial thought was to go ahead and migrate the application from Java EE 5 to Java EE 6 and hope that corrected the issue.  This, however, did not change anything, and we were left with the same error.  Googling didn’t produce any useful results, so I decided to take a look through some of the Glassfish code where the exception was being thrown.

WebServicesDescriptor webServices = bundle.getWebServices();
Collection endpoints =
        webServices.getEndpointsImplementedBy(webComponentImpl);

if( endpoints.size() > 1 ) {
    String msg = "Servlet " + getWebComponentLink() +
            " implements " + endpoints.size() + " web service endpoints " +
            " but must only implement 1";
    throw new IllegalStateException(msg);
}

It appears there could possibly be a name collision with classes in the application, which wasn’t being checked for in Glassfish 2.  The web service class itself is pretty simple:

@WebService()
public class CarrierBillWS {

    protected CarrierBillWSActor actor;
    protected Logger logger;

    public CarrierBillWS() {
        logger = Logger.getLogger(getClass());
        actor = new CarrierBillWSActor(PropertyManager.getInstance().getProperty(PropKey.UV_SESSION));
    }

    @WebMethod(operationName = "reopenBill")
    public void reopenBill( String dispatchId, String locationCode ) {
        logger.info( "Web service request received to reopen carrier bill: " + dispatchId + " from location: " + locationCode );
        actor.reopenBill(dispatchId, locationCode);
    }
}

For fun, I commented out the @WebService annotation and deployed the application without any issues, with the obvious result that there was no CarrierBillWS web service listed in the Glassfish admin console, but I was at least able to deploy.  I then did a grep for “CarrierBillWS” and to my surprise found another class with the same name in one of the web service app’s dependencies.  This dependency is a .jar file that is shared across multiple applications here at Lynden, including the web service.

So why was there a class with the same name in this library?  Because the library also contains code for a client of this CarrierBill web service, which other applications depend on for communicating with the service.  The CarrierBillWS application itself depends on this .jar file for other functionality that is shared across multiple apps.  When the jar was built, it pulled down the WSDL from the web service and auto-generated the client-side classes, which included a client-side CarrierBillWS class.  This .jar was then bundled in the web service’s .war file, so Glassfish was finding both classes and complaining.

The solution to the problem was pretty simple: override some of the default values of the @WebService annotation, and then rebuild the dependency using the new WSDL.

@WebService(serviceName="CarrierBillWebService", name="CarrierBillWebService")
public class CarrierBillWS {

The new “serviceName” and “name” properties on the @WebService annotation cause the client project to generate code with a class name of CarrierBillWebService rather than CarrierBillWS.  When the .jar is built, the web service and its client-related classes in the dependent jar now have different names, and Glassfish v3 loads the application without any issues.

Since there wasn’t much info out there on this particular issue, I’m hoping that this may help someone in the future who encounters a similar problem.

Twitter: @RobTerp

Coping with unreliable wireless network connections between Windows CE and JMS using Apache MINA

Our freight tracking and cross-docking system consists of handheld barcode scanners running a C# application (written in-house) on Windows CE, and these devices needed a reliable way to communicate with each other through our backend ActiveMQ Java Message Service infrastructure.  The devices include Symbol MC9090 handheld scanners, WT4090 wrist-mount scanners, and VC5090 forklift-mounted scanners (all pictured below).

MC 9090 Handheld Device

VC 5090 Forklift Mounted Device

WT 4090 Wristmount Device

There were a number of challenges in integrating these devices, which run .NET software, with a Java messaging server.  The original route we headed down was to use the .NET Messaging Service API (NMS) for ActiveMQ, with the devices communicating directly with the message broker.  The NMS libraries are .NET assemblies that provide access to ActiveMQ via the native OpenWire protocol.  This approach, however, proved to be an ineffective solution for the following reasons:

  • We learned that the next release of the .NET Message Service API (NMS) library would drop support for Windows CE.  So even if we wanted to continue down this route, we either couldn’t, or we’d need to modify the NMS library ourselves to support our CE clients.
  • The NMS/OpenWire failover protocol was problematic on Windows CE.  Even though we were only running one production messaging broker, we were using the failover protocol on the client since it provided a way for the client to automatically reconnect to the messaging broker should the connection go down (which happens frequently on these devices).
  • Unreliable network connections.  To expand on the point above, devices are constantly taken in and out of truck trailers where the Wi-Fi connections are spotty.  Connections to the messaging system were not always properly cleaned up, and as a result excessive resources would gradually be consumed on the broker, causing instability and occasional crashes on the server side.

In light of the above challenges, we decided to develop a proxy server to handle the communication between the CE devices and the messaging broker and to act as a buffer for the frequent disconnects/reconnects the devices experience.  We had initially looked at the REST API provided by ActiveMQ, but due to its lack of support for durable subscriptions at the time (it may support them now), we ruled that option out.


Apache MINA and the MINA State Machine Libraries

The Apache MINA libraries provide a way to create network applications in an efficient and scalable way.  The framework makes use of Java NIO and features an event-driven API, which allowed for a very elegant design of the proxy server.  Even better, the MINA State Machine library (which has no official release at this time, but can be downloaded and built from the MINA Subversion repository) provided an easy way to manage the various states of the messaging clients (i.e. connected, subscribed, unsubscribed, etc.).

An ASCII-based protocol was chosen as the communication medium between the clients and the proxy server.  This way we could also use the proxy to connect Java desktop clients, whose connections to the messaging system are somewhat transient (as users stop and start the application), although not as transient as the device connections in the field.  The ASCII protocol would also allow us to tie our legacy BASIC applications, which run on the UniVerse database and application platform, into the messaging system.  Existing server-side Java applications are still allowed to talk directly to the broker because, in theory, their connections to the messaging system should be constant and stable.

We are currently hosting the proxy server within our Glassfish 3 application server with our other web applications, since it provides an easy way to manage the proxy’s lifecycle.  The chart below illustrates the current paths the various applications in our system take to access the message server.
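To give a feel for MINA’s event-driven programming model, below is a minimal, self-contained sketch of a server that accepts line-based ASCII connections (this is not our production proxy; the class name, port, and protocol strings are made up for illustration):

import java.net.InetSocketAddress;
import java.nio.charset.Charset;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class AsciiProxySketch {

    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();

        // Encode/decode the ASCII line-based protocol to and from Strings
        acceptor.getFilterChain().addLast("codec",
                new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("US-ASCII"))));

        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void messageReceived(IoSession session, Object message) {
                String line = (String) message;
                // Hypothetical command; the real protocol (and the MINA state machine
                // tracking connected/subscribed/unsubscribed states) is richer
                if (line.startsWith("SUBSCRIBE")) {
                    session.write("OK");
                }
            }

            @Override
            public void sessionClosed(IoSession session) {
                // A device dropped off the network; broker-side resources for this
                // client would be cleaned up here rather than leaking on the broker
            }
        });

        acceptor.bind(new InetSocketAddress(9999)); // port chosen arbitrarily for the sketch
    }
}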


Future directions

One obvious weakness of the design illustrated above is its lack of failover support.  Even if we were to run multiple message brokers for fault tolerance, the proxy server itself would still be a single point of failure in the system.  Support for Java and BASIC clients, as previously mentioned, will also be at the top of our list of additions to make to our messaging solution.

Twitter: @RobTerp


Freight Management System on NetBeans Rich Client Platform

Lynden is a family of transportation and logistics companies specializing in shipping to Alaska [1] and other locations worldwide.  Over land, on the water, in the air – or in any combination – Lynden has been helping customers solve transportation problems for over a century.

The Lynden Freight Management System is a NetBeans Platform application [2] which serves a dual purpose as both a planning and freight tracking tool.

  • The Planning module allows terminal managers to see all freight that is currently inbound to their location, as well as freight scheduled to depart from their location, so they can make the most efficient use of their dock space and resources.
  • The Trace module allows customer service personnel to search for customer account information, view the tracking history of any given freight item in the system as well as display any documents related to the shipment, such as bills of lading or delivery receipts.

NetBeans Platform

Lynden has benefited from the NetBeans Platform because it allows developers to focus on the business logic of our applications rather than the underlying “plumbing”.  We are able to leverage built-in support for event handling, enabling/disabling UI controls, dockable windows, and automatic updates, with minimal work compared to rolling our own framework.

We chose to go the desktop application route because we have a number of existing desktop applications this application needs to interface with, as well as a commercial set of rich UI components that we have been using for some time now.  For the initial deployment, we will be pushing the application installer out to employees’ PCs via the Landesk remote desktop administration tool.  Future updates to various modules within the application will be done via the update center functionality built into the NetBeans Platform.

Screenshots