GMapsFX 2.0.4 Released

A minor update of GMapsFX was released this week, fixing a bug that prevented the GMapsFX GoogleMapView component from being loaded into Scene Builder under certain circumstances.

The release number is 2.0.4; it is currently available from Bintray and should be available via Maven Central within the next couple of days.

twitter: @RobTerpilowski
LinkedIn: RobTerpilowski

 


SVN: Could not read status line: connection was closed by server. Solved

I’ve recently started getting the following error message when attempting to access our SVN repository from my Mac, both from the svn command line interface as well as via NetBeans.

svn: E175002: OPTIONS of 'http://svn.company.com/svn/Frameworks/': Could not read status line: connection was closed by server (http://svn.company.com)

After some research, it appears that the Web Security module of the Cisco AnyConnect VPN client can interfere with SVN clients.

The solution is to uninstall the AnyConnect client's Web Security module:

/opt/cisco/anyconnect/bin/websecurity_uninstall.sh

After uninstalling the module, both NetBeans and the command-line client are able to connect to the SVN repository as expected.

I hope this helps someone out there who may be encountering a similar issue.

Article in Futures Magazine on our Commodity Trading Program

From the December 2014 issue of Futures Magazine.

Zoi: A match made in cyber heaven

Konstantinos (Gus) Tsahas and Robert Terpilowski had similar ideas regarding trading. These partners, who have been working on trading strategies together for nearly a decade, met, of all places, on an Interactive Brokers (IB) message board, where they began sharing ideas (both have engineering backgrounds) despite being on separate coasts: Tsahas in New York and Terpilowski in Seattle.

“Around 2000, Interactive Brokers offered an API and I began developing a mean reversion strategy for the equity markets,” Terpilowski says. “I was working on software to automate that. Gus actually posted something on the message board of the API group so we hooked up and started bouncing ideas off of each other and hit it off.”

Tsahas adds, “It just turned out that we were working on the same mean reversion ideas. I was further along in some aspects, Rob was [further along] in others. Collaborating helped both of us to come up with some good strategies. That was the start.”

This collaboration continued for five years with both Terpilowski and Tsahas pursuing separate careers: Terpilowski in various civil engineering positions and Tsahas running his own business installing fiber optic lines.

By 2005 they were trading a number of different strategies, but were running into scalability issues trading individual equities, and decided to look at porting their mean reversion strategies over to futures.

“We noticed in 2005 that it was difficult to get filled in equity markets,” Tsahas says.

“The reason we chose the mean reversion strategies is we were looking at forming a CTA and felt like mean reversion was a pretty good alternative to the 70% or so of [CTAs] who were trend followers,” Terpilowski says. “That would allow people to include our strategy in their portfolio and provide a return stream that was uncorrelated to other CTAs.”

At the time Tsahas went back to school to get a financial engineering degree and entered an IB-sponsored trading contest using their methodology. “I actually came in second place using this strategy and won $50,000.”

Both would build and code the strategy. “Once we hit something that was beneficial we would share it. It was a lot of test and verify,” Tsahas says. “That is the beauty of having a second person with a different set of skills. Once I saw something that I liked or Rob saw something that he liked, we would port it over and test it ourselves.”

“That is one of the reasons we have worked so well together,” Terpilowski adds. “Being from an engineering background we basically say if it is not something that we can model and backtest it is not something that we are willing to trade.”

By 2007 they began trading proprietary money on 19 futures markets in six sectors. Their strategy attempts to capture short-term pullbacks in medium-term (two to three-week) trends.  They did not attempt to optimize their strategy on each market but rather on the entire portfolio to avoid curve-fitting.

“The inputs we use for natural gas are the same inputs that we use for the S&P 500 or gold or wheat,” Terpilowski says. “We wanted to avoid curve-fitting so we decided we are going to optimize it at the portfolio level only and we included 40 markets.”

Their strategies performed extremely well in 2007-2009, and by 2010 they began trading customer money through their CTA, Zoi Capital.

Winning trades typically last 1.5 days, while losers last 2.5 days. Zoi’s Telio program has a four-point risk management strategy. They exit the market after five days if a trade has not begun to revert to the mean.

“We want to minimize the amount of time in the market because whenever you are in the market you are assuming risk,” Terpilowski says. “We apply stop losses whenever we open a new position but they are dynamic; if volatility is higher we want to place the stops a little further away than if volatility is less to give it more room to breathe.”

However, Zoi will reduce the position size when applying wider stops. That way they maintain their risk parameter of 2.5% on every trade, but give trades room to work. They also will exit all positions if they reach a portfolio-wide drawdown of 5% in a day. They cap margin-to-equity at 25%, though it typically runs around 7% to 10%.
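The volatility-scaled sizing described above can be sketched in a few lines of code. Only the 2.5% per-trade risk figure comes from the article; the class, method names, and example numbers are illustrative assumptions.

```java
// Illustrative sketch of volatility-scaled position sizing. Only the 2.5%
// per-trade risk figure comes from the article; everything else is assumed.
public class PositionSizer {

    /**
     * Risk a fixed fraction of equity per trade: the wider the stop
     * (e.g. in higher-volatility conditions), the fewer contracts traded,
     * so dollar risk stays constant.
     *
     * @param equity       account equity in dollars
     * @param riskFraction fraction of equity risked per trade (e.g. 0.025)
     * @param stopDistance dollar loss per contract if the stop is hit
     * @return number of whole contracts to trade
     */
    public static int contracts(double equity, double riskFraction, double stopDistance) {
        double dollarRisk = equity * riskFraction;
        return (int) Math.floor(dollarRisk / stopDistance);
    }

    public static void main(String[] args) {
        // $200,000 equity at 2.5% risk: a $500-per-contract stop allows 10
        // contracts; doubling the stop distance halves the size to 5.
        System.out.println(contracts(200_000, 0.025, 500));
        System.out.println(contracts(200_000, 0.025, 1_000));
    }
}
```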

Zoi earned solid returns: 36.12% in 2010, 25.34% in 2011 and 32.66% in 2012, trading basically for friends and family and looked to raise more assets.

In 2013 Zoi got its first institutional client when Jack Schwager provided an allocation from the ADM Investor Services Diversified Strategies Fund, which he co-managed. Fittingly, Terpilowski first became interested in technical trading after reading a Jack Schwager book.

The Telio program has produced a compound annual return of 20.36% since April 2010 with a Sharpe ratio of 1.05. It is up 6.56% in 2014 through October.

Zoi is looking to expand its program to 40 markets, which should be no problem as the strategy has already been rigorously tested. As Tsahas says, “If we can’t see it in a model, validate it and make sure it is not curve fitted, we are not going to trade it.”

GMapsFX Version 1.0.0

GMapsFX 1.0.0 has been released and supports the following features.

  • Markers
  • Marker Animations
  • Info Windows
  • Polylines
  • Shapes
    • Circles
    • Polygons
    • Rectangles
  • Map State Events
    • Bounds Changed
    • Center Changed
    • Drag
    • Drag End
    • Drag Start
    • Heading Changed
    • Idle
    • MapTypeId Changed
    • Projection Changed
    • Resize
    • Tiles Loaded
    • Tilt Changed
    • Zoom Changed
  • Map UI Events
    • Click
    • Double Click
    • Mouse Move
    • Mouse Up
    • Mouse Down
    • Mouse Over
    • Mouse Out
    • Right Click

Below is a sample map displaying some of the new features, including markers, a polyline, an info window, a rectangle, and a circle, as well as events that monitor the lat/long position of the map's center as it is dragged around and update the value at the top of the UI.
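As a rough sketch of how these pieces are wired up, a map with a single marker might be created as follows. The class and method names are based on my recollection of the project's documented API at the time (including the `addMapInializedListener` spelling); verify them against the project Javadocs for your version.

```java
// Sketch of basic GMapsFX usage; method names assumed from the project docs.
import com.lynden.gmapsfx.GoogleMapView;
import com.lynden.gmapsfx.MapComponentInitializedListener;
import com.lynden.gmapsfx.javascript.object.*;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class MapApp extends Application implements MapComponentInitializedListener {

    private GoogleMapView mapView;
    private GoogleMap map;

    @Override
    public void start(Stage stage) {
        mapView = new GoogleMapView();
        // The underlying Google Map initializes asynchronously; wait for the
        // callback before creating the map or adding markers.
        mapView.addMapInializedListener(this);
        stage.setScene(new Scene(mapView));
        stage.show();
    }

    @Override
    public void mapInitialized() {
        MapOptions mapOptions = new MapOptions();
        mapOptions.center(new LatLong(47.6097, -122.3331)) // Seattle
                .mapType(MapTypeIdEnum.ROADMAP)
                .zoom(12);
        map = mapView.createMap(mapOptions);

        MarkerOptions markerOptions = new MarkerOptions();
        markerOptions.position(new LatLong(47.6097, -122.3331))
                .title("Seattle");
        map.addMarker(new Marker(markerOptions));
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```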

 


Special thanks to Geoff Capper who contributed the shape and event handling code for the latest version.

The GMapsFX.jar file can be found here
Project documentation can be found here, with Javadocs located here

Written with StackEdit.

Drag and Drop With Custom Components in JavaFX

In my last post I showed how to create custom JavaFX components with Scene Builder and FXML and how to add the custom components to a new Scene.  The screenshot below displays the custom TableView component where each row represents data about a shipping container, with a custom “Add Plan” component on the right side of the UI to add a specified container to a plan.

The functionality of this test application will now be extended so that the user can select a row from the table and drag it over to the “Add Plan” component where they can drop it. When the drop event occurs the application will show a dialog displaying the value of the “VFC #” field from the row that was dragged from the container table.

Additionally, when the user selects the row and begins dragging the cursor, we would like the application to display a translucent image of a shipping container (pictured to the left).  Once the cursor is over the Add Plan component, the component will be given a drop shadow effect, and the cursor will change to indicate to the user that the Add Plan component will accept the drop.

The Drop Source (TestTable class)

The drop source for this example is the TestTable class from the previous post.  I will show sections from this class below, and then will include the entire class at the end of this post for reference.

Below is the TestTable class which contains a TableView with the shipping container information.  The first modification to this class was to add a parentProperty listener so that we know when this component is added to its parent node.  We will need the parent node to register for drag events that will occur when the user drags outside of the bounds of this container.  Once we have the parent we can register the drag listeners.

public class TestTable extends AnchorPane {

    @FXML
    private TableView myTableView;
    private ImageView dragImageView;
    private Parent root;

    public TestTable() {
        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/com/lynden/planning/ui/TestTable.fxml"));
        fxmlLoader.setRoot(this);
        fxmlLoader.setController(this);

        try {
            fxmlLoader.load();

            parentProperty().addListener(new ChangeListener<Parent>() {
                @Override
                public void changed(ObservableValue<? extends Parent> ov, Parent oldP, Parent newP) {
                    root = newP;
                    registerDragEvent();
                }
            });
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }
    }

The first step in the registerDragEvent() method is to read in the image of the shipping container that we want to display to the user and scale it down to 100×100.  Next, we can register the OnDragDetectedEvent for the TableView.  Once this listener detects a DragEvent it adds the ImageView to the Scene if it hasn’t already been added.  It then sets the opacity, sets MouseTransparent to true, which allows the mouse events to be passed to the components underneath it, and then displays the image.

The Dragboard is then fetched from the TableView, at which time we determine which row was selected and put the value from the VFC# column into the Dragboard as its content.

 protected void registerDragEvent() {
        Image image = new Image(getClass().getResourceAsStream("/com/lynden/planning/ui/container2.png"));
        dragImageView = new ImageView(image);
        dragImageView.setFitHeight(100);
        dragImageView.setFitWidth(100);

        myTableView.setOnDragDetected(new EventHandler<MouseEvent>() {
            @Override
            public void handle(MouseEvent t) {

                AnchorPane anchorPane = (AnchorPane) myTableView.getScene().getRoot();

                if (!anchorPane.getChildren().contains(dragImageView)) {
                    anchorPane.getChildren().add(dragImageView);
                }

                dragImageView.setOpacity(0.5);
                dragImageView.toFront();
                dragImageView.setMouseTransparent(true);
                dragImageView.setVisible(true);
                dragImageView.relocate(
                        (int) (t.getSceneX() - dragImageView.getBoundsInLocal().getWidth() / 2),
                        (int) (t.getSceneY() - dragImageView.getBoundsInLocal().getHeight() / 2));

                Dragboard db = myTableView.startDragAndDrop(TransferMode.ANY);
                ClipboardContent content = new ClipboardContent();

                InboundBean inboundBean = (InboundBean) myTableView.getSelectionModel().getSelectedItem();
                content.putString(inboundBean.getVfcNumber());
                db.setContent(content);

                t.consume();
            }
        });

Next, we add the OnDragOver event handler to this component’s Parent.  This listener will re-position the shipping container image as the mouse is dragged around the UI.  If we just add the event handler to the TableView component, the position of the shipping container image will not be updated once the user drags the cursor outside of the TableView’s bounds.

//Add the drag over listener to this component's PARENT, so that the drag over events will be processed even
//after the cursor leaves the bounds of this component.
root.setOnDragOver(new EventHandler<DragEvent>() {
    public void handle(DragEvent e) {
	Point2D localPoint = myTableView.getScene().getRoot().sceneToLocal(new Point2D(e.getSceneX(), e.getSceneY()));
	dragImageView.relocate(
		(int) (localPoint.getX() - dragImageView.getBoundsInLocal().getWidth() / 2),
		(int) (localPoint.getY() - dragImageView.getBoundsInLocal().getHeight() / 2));
	e.consume();
    }
});

The last step for the drop source is to add the OnDragDone event handler so that the shipping container image disappears when the user releases the mouse button.

myTableView.setOnDragDone(new EventHandler<DragEvent>() {
    public void handle(DragEvent e) {
	dragImageView.setVisible(false);
	e.consume();
    }
});

The Drop Target (TestButton class)

The TestButton class is initialized in a similar fashion to the TestTable class.  The main difference is that the TestButton doesn’t care about its parent node, and we can just register for the drop events straight from the constructor without having to wait for the component’s parent to be set.

public class TestButton extends AnchorPane {

    @FXML
    private AnchorPane myTestButton;

    public TestButton() {
        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/com/lynden/planning/ui/TestButton.fxml"));
        fxmlLoader.setRoot(this);
        fxmlLoader.setController(this);

        try {
            fxmlLoader.load();
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }

        registerDropEvent();
    }

The first thing we would like to do is register for the OnDragOver event so that we can give feedback to the user that this component is a valid drop target.  The acceptTransferModes(TransferMode.ANY) call changes the cursor to indicate that the component accepts a drop event; we also add a drop shadow to the component to provide a bit of highlighting.

  
private void registerDropEvent() {

myTestButton.setOnDragOver(new EventHandler<DragEvent>() {
    @Override
    public void handle(DragEvent t) {
	t.acceptTransferModes(TransferMode.ANY);
	DropShadow dropShadow = new DropShadow();
	dropShadow.setRadius(5.0);
	dropShadow.setOffsetX(3.0);
	dropShadow.setOffsetY(3.0);
	dropShadow.setColor(Color.color(0.4, 0.5, 0.5));
	myTestButton.setEffect(dropShadow);
	//Don't consume the event.  Let the layers below process the DragOver event as well so that the
	//translucent container image will follow the cursor.
	//t.consume();
    }
});

If the user exits the drop target area without releasing the mouse button the drop shadow effect is cleared from the component by setting the test button’s effect to null.

 
myTestButton.setOnDragExited(new EventHandler<DragEvent>() {
    @Override
    public void handle(DragEvent t) {
	myTestButton.setEffect(null);
	t.consume();
    }
});

Finally, if the user does release the mouse button over the target, we read the content from the Dragboard and display the result in a JOptionPane (used for simplicity’s sake).

myTestButton.setOnDragDropped(new EventHandler<DragEvent>() {
    @Override
    public void handle(DragEvent t) {
	Dragboard db = t.getDragboard();
	final String string = db.getString();
	SwingUtilities.invokeLater(new Runnable() {
	    @Override
	    public void run() {
		JOptionPane.showMessageDialog(null, "Creating a New Plan For Container #: " + string);
	    }
	});
	t.setDropCompleted(true);
    }
});

Results

Below are some screenshots of the drag and drop in action.
Selecting a row and beginning to drag displays the translucent shipping container image.

Dragging the cursor over the Add Plan component causes the component to be highlighted with a DropShadow effect.

Releasing the mouse button over the Add Plan component will display a dialog containing the VFC# of the container row that was selected from the TableView.

For reference I have included both the TestTable and TestButton classes below.

 

twitter: @RobTerp

TestTable class

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package com.lynden.fx.test;

import com.lynden.fx.InboundBean;
import java.io.IOException;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
import javafx.event.EventHandler;
import javafx.fxml.FXML;
import javafx.fxml.FXMLLoader;
import javafx.geometry.Point2D;
import javafx.scene.Parent;
import javafx.scene.control.TableView;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.input.ClipboardContent;
import javafx.scene.input.DragEvent;
import javafx.scene.input.Dragboard;
import javafx.scene.input.MouseEvent;
import javafx.scene.input.TransferMode;
import javafx.scene.layout.AnchorPane;

/**
 *
 * @author ROBT
 */
public class TestTable extends AnchorPane {

    @FXML
    private TableView myTableView;
    private ImageView dragImageView;
    private Parent root;

    public TestTable() {
        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/com/lynden/planning/ui/TestTable.fxml"));
        fxmlLoader.setRoot(this);
        fxmlLoader.setController(this);

        try {
            fxmlLoader.load();

            parentProperty().addListener(new ChangeListener<Parent>() {
                @Override
                public void changed(ObservableValue<? extends Parent> ov, Parent oldP, Parent newP) {
                    root = newP;
                    registerDragEvent();
                }
            });
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }
    }

    protected void registerDragEvent() {
        Image image = new Image(getClass().getResourceAsStream("/com/lynden/planning/ui/container2.png"));
        dragImageView = new ImageView(image);
        dragImageView.setFitHeight(100);
        dragImageView.setFitWidth(100);

        myTableView.setOnDragDetected(new EventHandler<MouseEvent>() {
            @Override
            public void handle(MouseEvent t) {

                AnchorPane anchorPane = (AnchorPane) myTableView.getScene().getRoot();

                if (!anchorPane.getChildren().contains(dragImageView)) {
                    anchorPane.getChildren().add(dragImageView);
                }

                dragImageView.setOpacity(0.5);
                dragImageView.toFront();
                dragImageView.setMouseTransparent(true);
                dragImageView.setVisible(true);
                dragImageView.relocate(
                        (int) (t.getSceneX() - dragImageView.getBoundsInLocal().getWidth() / 2),
                        (int) (t.getSceneY() - dragImageView.getBoundsInLocal().getHeight() / 2));

                Dragboard db = myTableView.startDragAndDrop(TransferMode.ANY);
                ClipboardContent content = new ClipboardContent();

                InboundBean inboundBean = (InboundBean) myTableView.getSelectionModel().getSelectedItem();
                content.putString(inboundBean.getVfcNumber());
                db.setContent(content);

                t.consume();
            }
        });

        //Add the drag over listener to this component's PARENT, so that the drag over events will be processed even
        //after the cursor leaves the bounds of this component.
        root.setOnDragOver(new EventHandler<DragEvent>() {
            public void handle(DragEvent e) {
                Point2D localPoint = myTableView.getScene().getRoot().sceneToLocal(new Point2D(e.getSceneX(), e.getSceneY()));
                dragImageView.relocate(
                        (int) (localPoint.getX() - dragImageView.getBoundsInLocal().getWidth() / 2),
                        (int) (localPoint.getY() - dragImageView.getBoundsInLocal().getHeight() / 2));
                e.consume();
            }
        });

        myTableView.setOnDragDone(new EventHandler<DragEvent>() {
            public void handle(DragEvent e) {
                dragImageView.setVisible(false);
                e.consume();
            }
        });
    }
}

TestButton Class

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package com.lynden.fx.test;

import java.io.IOException;
import javafx.event.EventHandler;
import javafx.fxml.FXML;
import javafx.fxml.FXMLLoader;
import javafx.scene.effect.DropShadow;
import javafx.scene.input.DragEvent;
import javafx.scene.input.Dragboard;
import javafx.scene.input.TransferMode;
import javafx.scene.layout.AnchorPane;
import javafx.scene.paint.Color;
import javax.swing.JOptionPane;
import javax.swing.SwingUtilities;

/**
 *
 * @author ROBT
 */
public class TestButton extends AnchorPane {

    @FXML
    private AnchorPane myTestButton;

    public TestButton() {
        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/com/lynden/planning/ui/TestButton.fxml"));
        fxmlLoader.setRoot(this);
        fxmlLoader.setController(this);

        try {
            fxmlLoader.load();
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }

        registerDropEvent();
    }

    private void registerDropEvent() {

        myTestButton.setOnDragOver(new EventHandler<DragEvent>() {
            @Override
            public void handle(DragEvent t) {
                t.acceptTransferModes(TransferMode.ANY);
                DropShadow dropShadow = new DropShadow();
                dropShadow.setRadius(5.0);
                dropShadow.setOffsetX(3.0);
                dropShadow.setOffsetY(3.0);
                dropShadow.setColor(Color.color(0.4, 0.5, 0.5));
                myTestButton.setEffect(dropShadow);
                //Don't consume the event.  Let the layers below process the DragOver event as well so that the
                //translucent container image will follow the cursor.
                //t.consume();
            }
        });

        myTestButton.setOnDragExited(new EventHandler<DragEvent>() {
            @Override
            public void handle(DragEvent t) {
                myTestButton.setEffect(null);
                t.consume();
            }
        });

        myTestButton.setOnDragDropped(new EventHandler<DragEvent>() {
            @Override
            public void handle(DragEvent t) {
                Dragboard db = t.getDragboard();
                final String string = db.getString();
                SwingUtilities.invokeLater(new Runnable() {
                    @Override
                    public void run() {
                        JOptionPane.showMessageDialog(null, "Creating a New Plan For Container #: " + string);
                    }
                });
                t.setDropCompleted(true);
            }
        });

    }
}

Setting up a Nightly Build Process with Jenkins, SVN and Nexus

We wanted to set up a nightly integration build with our projects so that we could run unit and integration tests on the latest version of our applications and their underlying libraries.  We have a number of libraries that are shared across multiple projects and we wanted this build to run every night and use the latest versions of those libraries even if our applications had a specific release version defined in their Maven pom file.  In this way we would be alerted early if someone added a change to one of the dependency libraries that could potentially break an application when the developer upgraded the dependent library in a future version of the application.

The chart below illustrates our dependencies between our libraries and our applications.

image

Updating Versions Nightly

Both the Crossdock-shared and Messaging-shared libraries depend on the Siesta Framework library.  The Crossdock Web Service and CrossdockMessaging applications both depend on the Crossdock-Shared and Messaging-Shared libraries.  Because of this dependency structure we wanted the SiestaFramework library built first.  The Crossdock-Shared and Messaging-Shared libraries could be built in parallel, but we didn’t want the builds for the Crossdock Web Service and CrossdockMessaging applications to begin until all the libraries had finished building.  We also wanted the nightly build to tag Subversion with the build date as well as upload the artifact to our Nexus “Nightly Build” repository.  The resulting artifact would look something like SiestaFramework-NIGHTLY-20120720.jar

As mentioned above, even though the CrossdockMessaging app may specify in its pom file that it depends on version 5.0.4 of the SiestaFramework library, for the purposes of the nightly build we wanted it to use the freshly built SiestaFramework-NIGHTLY-20120720.jar version of the library.

The first problem to tackle was getting the current date into the project’s version number.  For this I started with the Jenkins Zentimestamp plugin, which allows the format of Jenkins’ BUILD_ID timestamp to be changed.  I used it to specify a format of yyyyMMdd for the timestamp.

image

The next step was to get the timestamp into the version number of the project.  I was able to accomplish this by using the Maven Versions plugin.  One of the things that the Versions plugin can do is to allow you to override the version number in the pom file dynamically at build time.  The code snippet from the SiestaFramework’s pom file is below.

<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>versions-maven-plugin</artifactId>
   <version>1.3.1</version>
</plugin>

At this point the Jenkins job can be configured to invoke the “versions:set” goal, passing in the new version string to use.  The ${BUILD_ID} Jenkins variable will contain the newly formatted date string.

image

This will produce an artifact with the name SiestaFramework-NIGHTLY-20120720.jar.
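On the command line, the equivalent versions:set invocation would look something like this (the NIGHTLY prefix mirrors the artifact names above; in Jenkins, ${BUILD_ID} is supplied with the Zentimestamp format):

```shell
# Override the pom version for this build only (versions-maven-plugin).
mvn versions:set -DnewVersion=NIGHTLY-${BUILD_ID}
```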


Uploading Artifacts to a Nightly Repository

Since this job needed to upload the artifact to a repository different from the Release repository defined in our project pom files, the “altDeploymentRepository” property was used to pass in the location of the nightly repository.

image

The deployment portion of the SiestaFramework job specifies the location of the nightly repository, where ${LYNDEN_NIGHTLY_REPO} is a Jenkins variable containing the nightly repo URL.
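The deploy step then looks roughly like the following. With the 2.x maven-deploy-plugin, altDeploymentRepository takes an id::layout::url triple; the repository id "nightly" here is illustrative:

```shell
# Deploy to the nightly repo instead of the pom's <distributionManagement> target.
mvn deploy -DaltDeploymentRepository=nightly::default::${LYNDEN_NIGHTLY_REPO}
```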


Tagging Subversion

Finally, the Jenkins Subversion Tagging Plugin was used to tag SVN if the project was successfully built.  The plugin provides a Post-build Action for the job with the configuration section shown below.

image


Dynamically Updating Dependencies

Now that the main project is set up, the dependent projects are configured in a similar way, but they need to use the SiestaFramework-NIGHTLY-20120720 version of the dependency rather than whatever version they currently have specified in their pom file.  This can be accomplished by changing the pom to use a property for the version number of the dependency.  For example, suppose the snippet below was the original pom file.

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>5.0.1</version>
   </dependency>
</dependencies>

It could be changed to the following to allow the SiestaFramework version to be set dynamically:

<properties>
   <siesta.version>5.0.1</siesta.version>
</properties>

<dependencies>
   <dependency>
      <groupId>com.lynden</groupId>
      <artifactId>SiestaFramework</artifactId>
      <version>${siesta.version}</version>
   </dependency>
</dependencies>

This version can then be overridden by the Jenkins job. The example below shows the Jenkins configuration for the Crossdock-shared build.

image
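Since any Maven property can be overridden on the command line, the dependent project's build just passes another -D flag alongside its own version override (the exact goals here are illustrative):

```shell
# Build Crossdock-shared against the freshly built nightly SiestaFramework.
mvn clean deploy -Dsiesta.version=NIGHTLY-${BUILD_ID}
```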


Enforcing Build Order

The final step in this process is setting up a structure to enforce the build order of the projects.  The dependencies are set up in such a way that SiestaFramework needs to be built first, and the Crossdock-shared and Messaging-shared libraries can be run concurrently once SiestaFramework finishes. The Crossdock Web Service and CrossdockMessaging application jobs can be run concurrently, but not until after both shared libraries have finished.

Setting up the Crossdock-shared and Messaging-shared jobs to be built after SiestaFramework completes is pretty straightforward.  In the Jenkins job configuration for both the shared libraries, the following build trigger is added.

image

For the requirement that the apps not build until all libraries have built, I enlisted the help of the Join Plugin, which can be used to execute a job once all “downstream” jobs have completed.  What does this mean exactly?  Looking at the diagram below, the Crossdock-Shared and the Messaging-Shared jobs are “downstream” from the SiestaFramework job.  Once both of these jobs complete, a Join trigger can be used to start other jobs.

image

In this case, rather than having the Join trigger kick off the application jobs directly, I created a dummy Join job.  That way, as we add more application builds, we don’t need to keep modifying the SiestaFramework job for each new application job.

To illustrate the configuration, SiestaFramework has a new Post-build Action (below).

image

Join-Build is a Jenkins job I configured that does nothing when executed.  Our Crossdock Web Service and CrossdockMessaging applications then define their builds to trigger as soon as Join-Build has completed.

image

In this way we are able to run builds each night that update to the latest versions of our dependencies, tag SVN, and archive the binaries to Nexus.  I’d love to hear feedback from anyone who is handling nightly builds via Jenkins, and how they have handled the configuration and build issues.

twitter: @RobTerp

Designing an Automated Trading Application on the Netbeans Rich Client Platform (Part 1)

Over the past 10 years, new opportunities have opened in the stock, futures, and currency markets that allow retail traders to produce their own automated trading strategies, something that was once only the realm of hedge funds and investment banks.  Interactive Brokers was one of the first brokerage firms to offer a Java API to its retail customers. Originally envisioned as a way for developers to augment the Interactive Brokers Trader Workstation (TWS) desktop application with features such as charting or record keeping, the API has gained popularity as a way to automate trading strategies.

In my first iteration of developing a trading strategy and software to automate the trades, I built a Java desktop application using Swing components that would monitor stocks throughout the day, place trades when certain parameters were met, and then exit the trades at the close of the trading day. The software worked well and was adequate for the strategy it was designed to trade; however, it was not extensible, and attempting to implement new trading strategies or connect to different brokerage accounts proved difficult and cumbersome.  Also, there are restrictions on how many stocks can be monitored via the broker’s data feed, so the software had to be able to accommodate real-time market data feeds from other sources in addition to the broker’s.

I was introduced to the Netbeans Rich Client Platform (RCP) a couple of years ago and have recently decided to begin porting my application to the platform due to the large number of advantages that it provides. The Netbeans RCP is built on a modular design principle, allowing the developer to define abstract APIs for features and then provide modules with different implementations of each API, so the application can select at runtime which implementation to use. Not only does it provide for a cleaner design by separating concerns, but by using the Netbeans Lookup API it also decouples the application and its various components from each other. There are numerous other features that can be leveraged, including a built-in windowing system, text editor, file explorer, toolbar, table and tree table components, as well as the Action API (just to name a few).

The trading application will make use of the RCP module system to define abstract APIs with the following functionality:

Broker API

  • Place and cancel orders for stocks, options, futures, or currencies
  • Provide event notification when orders are filled
  • Monitor cash balances in the account

Market Data API

  • Subscribe to real-time quote data for any ticker symbol
  • Subscribe to Level 2 data (market depth/order-book) for any ticker symbol

Historical Data API

  • Request historical price data for any ticker symbol

Trading Strategy API

  • Define a set of rules for entering and exiting trades
  • Use any broker, market data, and historical data API implementation in order to make trading decisions
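As a rough sketch of what these abstract APIs might look like as Java interfaces (all names and signatures here are hypothetical illustrations, not the application's actual code), the Broker API could be defined roughly like this, with a trivial in-memory stand-in to show the order-fill event flow a concrete implementation would provide:

```java
import java.util.function.Consumer;

// Hypothetical sketch of the abstract Broker API described above.
// Names and signatures are illustrative only.
public class BrokerApiSketch {

    public interface Broker {
        String placeOrder(String symbol, int quantity, double limitPrice);
        void cancelOrder(String orderId);
        void addFillListener(Consumer<String> listener); // order-filled events
        double getCashBalance();
    }

    // A trivial in-memory stand-in, just to show the event flow a real
    // implementation (e.g. one backed by a broker's Java API) would provide.
    public static String demo() {
        StringBuilder log = new StringBuilder();
        Broker broker = new Broker() {
            private Consumer<String> fillListener = id -> { };
            public String placeOrder(String symbol, int qty, double px) {
                String id = "ORD-1";
                fillListener.accept(id); // pretend the order fills immediately
                return id;
            }
            public void cancelOrder(String orderId) { }
            public void addFillListener(Consumer<String> l) { fillListener = l; }
            public double getCashBalance() { return 100_000.0; }
        };
        broker.addFillListener(id -> log.append("Filled: ").append(id));
        broker.placeOrder("AUD.NZD", 100, 1.08);
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "Filled: ORD-1"
    }
}
```

A trading strategy module would only ever program against the interface, never against a concrete broker class.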

The primary implementations of the Broker, Market Data and Historical Data API modules will use Interactive Brokers' Java API, but other implementations can also be created as Netbeans modules and imported into the trading application so that trading strategies can make use of market data from different sources if needed.

New trading strategies can be built as Netbeans modules  implementing the Trading Strategy API, where each strategy can make use of one of the implementations of the various data and broker APIs.  Utilizing the Netbeans Lookup API, strategies can query the platform to get a list of all implementations of the broker and market data APIs providing for loose coupling between the APIs and allowing the user to select which implementation to use at runtime.
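In the NetBeans platform this discovery is done with `Lookup.getDefault().lookupAll(SomeApi.class)`, with implementations registering themselves via the `@ServiceProvider` annotation in their own modules. As a JDK-only sketch of the same decoupling pattern (this is not the actual NetBeans code, and the `Broker` interface here is hypothetical), `java.util.ServiceLoader` works the same way:

```java
import java.util.ServiceLoader;

// JDK-only sketch of the lookup/discovery pattern described above.
// In the Netbeans RCP the equivalent call is
// Lookup.getDefault().lookupAll(Broker.class), with implementations
// registered via the @ServiceProvider annotation in their own modules.
public class DiscoverySketch {

    // Hypothetical API interface that broker modules would implement.
    public interface Broker {
        String name();
    }

    public static int countRegisteredBrokers() {
        int count = 0;
        // Discovers every implementation registered on the classpath
        // (via META-INF/services) without any compile-time dependency
        // on a concrete class.
        for (Broker b : ServiceLoader.load(Broker.class)) {
            System.out.println("Found broker implementation: " + b.name());
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // With no registrations on the classpath the loader simply finds
        // nothing; the calling code is unchanged either way.
        System.out.println("Implementations found: " + countRegisteredBrokers());
    }
}
```

The caller iterates whatever implementations happen to be installed, which is exactly how a strategy can offer the user a runtime choice of broker or data feed.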
Below is a diagram illustrating the organization of the various API components of the application:

In future posts I will go into more detail on how to create an API plug-in for the Netbeans RCP, as well as how to create a concrete implementation of the API. In the illustration above, the abstract broker, market data, and trading strategy APIs are installed into the RCP as plug-ins. The broker API has a single implementation at this point in time, for Interactive Brokers. The market data API has plug-ins providing implementations for real-time market data from both Yahoo Finance and Interactive Brokers. Finally, the trading strategy API has two implementations in this example. The first strategy, named "Limit Buyer", watches the prices of approximately 800 stocks and places limit orders to buy when certain conditions are met. The second strategy, named "AUD/NZD Currency Strategy", monitors the exchange rates of the Australian and New Zealand dollars and places orders to buy or sell when certain conditions are met.

At this point in time the application is functional and is using Interactive Brokers as the main brokerage as well as the market data provider. The AUD/NZD strategy is actively being traded through the application, albeit with a rudimentary user interface that publishes messages to a text area within the strategy's main tab. The screenshot below shows Interactive Brokers' "Trader Workstation" application (the large black window, itself a Java Swing app) alongside the Netbeans RCP automated trading application (the small white window with the large text area). In the screenshot the application is monitoring prices and placing trades for the Australian dollar, New Zealand dollar, Hong Kong dollar and Japanese yen.

screenshot

 

This post is just a high-level overview of the design of an RCP application for trading in the financial markets. Future parts of this series will include more information on how to implement abstract APIs and make them available to other parts of the application via the Netbeans Lookup API, as well as on working with some of the Netbeans UI components included with the platform, such as tabs, trees and tables, showing how easy it is to render the same data in these different views using the Netbeans Nodes API. I would also like to incorporate some JavaFX components into the application, such as the charting components found in the core JavaFX library, to provide a graphical representation of some of the data the strategies are monitoring; this will be a bit more user-friendly than the current large text area. The integration of JavaFX components within the application will be documented in a future post as well.

You can follow my trading related blog if you would like to see the actual trading results of the application as its being refined at:
http://LimitUpTrading.wordpress.com

twitter: @RobTerp
twitter: @LimitUpTrading

Creating a Deployment Pipeline with Jenkins, Nexus, Ant and Glassfish

In a previous post I discussed how we created a build pipeline using Jenkins to create application binaries and move them into our Nexus repository. (Blog post here)  In this post I will show how we are using Jenkins to pull a versioned binary out of Nexus and deploy to one of our remote test, staging or production Glassfish servers.  By remote I mean that the Glassfish instance does not live on the same box as the Jenkins CI instance, but both machines are on the same network.

In a previous post I also discussed how to set up Glassfish v3 to allow deployments pushed from remote servers (Blog post here), so if you haven’t explicitly configured your Glassfish to allow this feature, you will need to do so before you get started.

On our Jenkins CI box we have an Ant script, which will be executed by a Jenkins job manually kicked off by a user. The script defines tasks to perform the following operations:

  • Ensure all needed parameters were entered by the user. (app name, version number, admin username/password, etc).
  • Copy the specified version of the application from Nexus to a local temp directory for deployment to a remote Glassfish instance.
  • Undeploy a previous version of the app from the target Glassfish instance.  (Optional).
  • Deploy the app from the temp directory to the target Glassfish instance.
  • Record the deployment info in a deployment tracking database table.  Historical deployment info can then be viewed from a web app.

Ant Script

Below are some of the more interesting code snippets from our Ant script that will be doing the heavy lifting for our deployment pipeline.  The first code snippet below defines the Ant tasks needed for deploying and undeploying applications from Glassfish.  These Ant tasks are bundled with Glassfish, but not installed by default.  If you haven’t installed them, you will need to do so from your Glassfish update center admin page.

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

<taskdef name="sun-appserv-undeploy" classname="org.glassfish.ant.tasks.UndeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish311/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

Once we have the tasks defined, we create a new target for pulling the binary from Nexus and copying it to a temporary location from where it will be deployed to Glassfish.

<target name="copy.from.nexus">
   <echo message="copying from nexus"/>    
   <get src="http://cisunwk:8081/nexus/content/repositories/Lynden-Java-Release/com/lynden/${app.name}/${version.number}/${app.name}-${version.number}.${package.type}" dest="/tmp/${app.name}-${version.number}.war"/>
</target>

Next is a target to undeploy a previous version of the application from Glassfish.  This step is optional and only executed if the user specifies a version to undeploy from Jenkins.

<target name="undeploy.from.glassfish" if="env.Undeploy_Version">
   <echo message="Undeploying app: ${app.name}-${undeploy.version}"/>
   <echo file="/tmp/gf.txt" message="AS_ADMIN_PASSWORD=${env.Admin_Password}"/>
   <sun-appserv-undeploy name="${app.name}-${undeploy.version}" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311"/>
   <delete file="/tmp/gf.txt"/>
</target>

Next, we define a target to deploy the application to Glassfish.

<target name="deploy.to.glassfish.with.context" if="context.is.set">
    <sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war" name="${app.name}-${version.number}" force="true" host="${server.name}" port="${admin.port}" user="${env.Admin_Username}" passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish311" contextroot="${App_Context}"/>
</target>

And then finally, we define a target which will invoke a servlet and pass information to it, such as app name, version, who deployed the app, etc., so that it can be recorded in our deployment database.

<target name="tag.uv.deploy.file">
   <tstamp>
      <format property="time" pattern="yyyyMMdd-HHmmss"/>
   </tstamp> 

   <!--Ampersand character for the URL -->
   <property name="A" value="&amp;"/>
   <get src="http://shuyak.lynden.com:8080/DeploymentRecorder/DeploymentRecorderServlet?app=${app.name}${A}date=${time}${A}environment=${deploy.env}${A}serverName=${server.name}${A}serverPort=${server.port}${A}adminPort=${admin.port}${A}serverType=${server.type}${A}version=${version.number}${A}who=${deploy.user}${A}contextName=${App_Context}" dest="/dev/null"/>

   <echo message="tagging deployment info to UV"/>
</target>
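One thing to note about the recorder target above: Ant's `<get>` performs a raw HTTP GET, so any parameter value containing spaces or reserved characters would need to be URL-encoded first. As a hedged sketch of building such a query string safely (the parameter names mirror the Ant script, but this helper class is hypothetical, not part of our actual tooling):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: builds a URL-encoded query string from
// name/value pairs, as the DeploymentRecorderServlet URL would need
// if any value contained spaces or reserved characters.
public class QueryBuilder {

    public static String buildQuery(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("app", "Crossdock");
        params.put("version", "5.0.1");
        params.put("who", "rob terp"); // the space must be encoded
        System.out.println(buildQuery(params));
        // app=Crossdock&version=5.0.1&who=rob+terp
    }
}
```

In our case the values are all simple tokens, so the raw `<get>` URL works, but it is worth keeping in mind if usernames or context names ever contain special characters.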

Jenkins Configuration

Now that we have an Ant script to perform the actions that we need to do a deployment, we set up a Jenkins job to deploy to servers in each one of our environments (test/staging/prod).
image


In order to kick off a deployment to one of our servers, the appropriate environment is selected from the screenshot above and the “Build Now” link is clicked, which presents the user with the screen below. In this case we are deploying to a test Glassfish domain named “bjorn” on unga.lynden.com.
image

The user can select from the drop down list the server they wish to deploy to and the application they wish to deploy.  The version number is a required entry in a text field.  If the script can’t find the specified version in Nexus, the build will fail.  There are also optional parameters for specifying an existing version to undeploy as well as an application context in the event the app name shouldn’t be used as the default context.


In the screenshot below we are deploying version 5.0.0 of the Crossdock app to the “bjorn” domain running on unga.lynden.com.

image


Once the job completes, if we log into the bjorn Glassfish admin page on unga.lynden.com, we see that Crossdock-5.0.0 has been deployed to the server.

image


The screenshot below is an example of undeploying version 5.0.0 of Crossdock, and deploying version 5.0.1 of Crossdock.  Also, in this example, we are telling the script that we want the web context in Glassfish to be /Crossdock, rather than the default /Crossdock-5.0.1

image


The screenshot of the Glassfish admin page below shows that Crossdock-5.0.0 has been undeployed, and that Crossdock-5.0.1 is now installed with a Context Root of /Crossdock.

image


Deployment History

Finally, as I mentioned previously, the Ant script is also saving the deployment information to a historical deployment table in our database.  We have written a simple web application which will display this historical data.  The screenshot below shows all of the applications that have been deployed to our test environment. (We have similar pages for Staging and Production as well).  Included in this information is the application name, version number, date, and who deployed it, among some other miscellaneous info.

image

We can then drill into the history of a specific application by clicking the “Crossdock” link on the screen above and get a detailed history of the deployments for that application, including version numbers and dates. We maintain more than 60 different web applications serving various purposes here at Lynden, so this has been a great tool for us to see exactly which versions of our applications are currently deployed where, as well as to see the deployment history of a specific application in the event we need to roll back to a previous version.

image

As we have learned firsthand, Jenkins is a very useful and versatile tool that is easy to extend for purposes beyond automated builds and continuous integration.

twitter: @RobTerp

Migrating an Automated Deployment Script from Glassfish v2 to Glassfish v3

Just recently we attempted to update an Ant script which we are using to do automated deployments to Glassfish v2 servers, to deploy to our new Glassfish v3 servers instead.  This Ant script is invoked from our Jenkins automated deploy pipeline process (another blog post about this later) and copies a .war file from our Nexus repository and installs it to a Glassfish instance running on a remote server.  I ran into a number of issues that prevented the Ant script from being used “as-is”.

The first thing to note is that the script assumes a Glassfish v3 installation exists on the build machine from which it is executed. The build script needs access to the “asadmin” tool in the glassfish/bin directory in order to deploy a .war file to a remote Glassfish server.

The first issue was that the Ant deploy/undeploy tasks for Glassfish are in a different location in v3. Actually, not only is the .jar file with the Ant tasks in a different location in the newer version of Glassfish, it's not installed by default! In order to get the Ant tasks you'll need to go to the Update Tool on the admin web page of your Glassfish v3 instance and install the “glassfish-ant-tasks” component. Once you've done that you can modify your Ant script to use the new Ant tasks (which also live in a different Java package). The code snippets below compare the Glassfish v2 and v3 Ant task usage.

<!-- Glassfish v2 ant tasks -->

<taskdef name="sun-appserv-deploy" classname="org.apache.tools.ant.taskdefs.optional.sun.appserv.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfish/lib/sun-appserv-ant.jar"/>
   </classpath>
</taskdef>

<!-- Glassfish v3 ant tasks -->

<taskdef name="sun-appserv-deploy" classname="org.glassfish.ant.tasks.DeployTask">
   <classpath>
      <pathelement location="/nas/apps/cisunwk/glassfishv3/glassfish/lib/ant/ant-tasks.jar"/>
   </classpath>
</taskdef>

The next change that needs to be made to the build script is the call to the sun-appserv-deploy task itself. The "installDir" attribute has been renamed "asinstalldir" in Glassfish v3. A comparison of the v2 and v3 snippets is below.

<!-- Glassfish v2 -->
<sun-appserv-deploy file="/tmp/${app.name}-${version.number}.war"
   name="${app.name}-${version.number}" force="true"
   host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
   passwordfile="/tmp/gf.txt" installDir="/nas/apps/cisunwk/glassfish"/>

<!-- Glassfish v3 -->
<sun-appserv-deploy file="/tmp/${app.name}.war"
   name="${app.name}" force="true"
   host="${server.name}" port="${admin.port}" user="${env.Admin_Username}"
   passwordfile="/tmp/gf.txt" asinstalldir="/nas/apps/cisunwk/glassfish311"/>

The final task in getting this to work is to enable remote commands on each of the target glassfish instances to which the apps will be deployed.

./asadmin enable-remote-admin --port <glassfish-admin-port>

where <glassfish-admin-port> is the admin port for that domain (4848 by default).

The last “gotcha” to keep in mind is that the Glassfish installation referenced by the "asinstalldir" attribute of the <sun-appserv-deploy> task MUST be the same version as the remote target Glassfish instance to which the web app will be deployed. At least this was the case when we attempted to specify a Glassfish v3.0 installation in the asinstalldir property while deploying to a remote Glassfish v3.1 instance; when we updated asinstalldir to point to a v3.1 installation, the deployment went fine. (v3.1 to v3.1.2 may be OK.)

Hopefully this will help others that have encountered similar issues while attempting to do deployments to remote Glassfish instances either from Jenkins, Ant or other automated processes.

Twitter: @RobTerp

CamelOne 2012 Summary

camelone2012

I’ve just returned from the CamelOne Open Source Integration Conference in Boston, where I was invited as a speaker to present how Lynden is using ActiveMQ as part of its freight tracking infrastructure. As expected, there were a number of big announcements, and speakers from all over the world showcased very interesting projects during the two-day conference. Here are a few new projects and applications that I found may be useful here at Lynden in particular.


Fuse Fabric
I thought this was by far the most interesting project announced during the conference. Fuse Fabric is a distributed configuration, management and provisioning system for Fuse ESB, Fuse Message Broker and the Apache open source integration solutions (ActiveMQ, Camel, CXF, Karaf & ServiceMix). Basically, it provides a single point from which brokers (and other applications) can retrieve their configuration information. Under the covers it makes use of Apache ZooKeeper to store the cluster configuration and node registration.

We are currently only running 1 production message broker, but we have 8 message brokers in our test environment.  Fuse Fabric could greatly reduce the overhead of managing the configuration of those brokers.

Apache ZooKeeper
As always seems to happen at these conferences, ideas fly around among the attendees, and I come away with just as many new ideas from talking to other architects and developers outside the sessions as from attending the sessions themselves. I was explaining to someone the challenge we have managing the configurations of our 70+ web applications and services across our production, staging and test environments. His immediate response was to take a look at Apache ZooKeeper; they had been using it for the past few months to manage configuration across multiple environments with good success. As mentioned above, Fuse Fabric makes use of Apache ZooKeeper, and I'm curious to learn whether Fabric can be used beyond the integration apps it mentions (ESB, MQ, etc.) for things such as our web applications and services.

Fuse Management Console
Provides a tool for managing large-scale deployments of Fuse ESB Enterprise and Fuse MQ Enterprise. A FuseSource subscription is required to install the console, which provides the following benefits:

  • Simplified management: key status metrics and commands are accessible from a unified, centralized console
  • Automatic provisioning: dependencies between components and include files are tracked and managed
  • Profile management: simplify the creation and management of customized brokers by defining configuration profiles
  • Automated updates: updates to a single component are automatically deployed to the appropriate brokers based on the profile
  • Local or Cloud deployment: your integration infrastructure can be deployed and managed locally or in the Cloud

MQTT protocol support in ActiveMQ 5.6
MQTT protocol support has come a little late for Lynden, since we already have a solution for connecting Windows CE devices to ActiveMQ. MQ Telemetry Transport (MQTT) is a machine-to-machine protocol designed as an extremely lightweight publish/subscribe messaging transport. ActiveMQ 5.6 and Apollo 1.0 will both support MQTT.

Fuse HQ
Fuse HQ is a SOA management and monitoring system and is available with a FuseSource support subscription.  Fuse HQ takes advantage of the JMX-based reporting capabilities of the Fuse products.  Some of the features Fuse HQ provides are:

Monitor systems: comprehensive cross-stack dashboards make it easy to check the health of systems and services

  • Role-based access controls for managing user visibility
  • Visually review relationships between FuseSource servers and other managed resources
  • Explore and generate reports of real-time and historic logs, configurations, and events
  • Impact analysis and change control based on configuration tracking
  • Collect, chart and view real-time and historic metrics from hardware, network and applications without invasive instrumentation
  • Auto-discover hardware and software attributes including configuration, version numbers, memory, CPU, disk and network devices
  • Distributed monitoring allows for linear scalability of Fuse HQ server

Manage systems: execute control operations to manage the FuseSource environment

  • Role-based access controls for managing user permissions
  • External authentication leverages existing LDAP or Kerberos directories
  • Automate maintenance activities with scheduled controls and conditional responses to alerts
  • High availability allows multiple Fuse HQ servers to assume workloads in the event of a server failure

Intelligent Alerts: definable alerts proactively identify problems before they occur

  • Follow-the-sun assignment routes alerts to the appropriate person based on the time of day and users’ scheduled availability
  • Integrate traps with existing IT operations suite
  • Define alerts once and apply to large, global resource groups
  • Alerts can be multi-conditional, role-based and group-based, and can trigger recovery processes
  • Generate alerts on changes in configuration or key attributes of any managed resource

twitter: @RobTerp