GMapsFX 2.12.0 Released – Mac OSX Garbled Text Fixed!

The latest version of GMapsFX has been released and includes a major bug fix and a few enhancements.

Bug Fix

Last month I reported a bug that had been affecting users of GMapsFX on Mac OSX, where the text appeared ‘garbled’.  An example image is below.
[Screenshot: garbled text in a GMapsFX map on Mac OSX]

The underlying issue is that the JavaFX WebView component on Mac OSX renders icons rather than letters for some websites, including Google Maps.

I received a tweet from @ggeorgopoulos1 with a proposed workaround that involves programmatically injecting CSS with the correct font into the page.  I’ve incorporated the code into the latest GMapsFX library, and I am happy to say that it is working once again on Mac OSX!


[Screenshot: GMapsFX map rendering correctly on Mac OSX after the fix]


Added a setKey() method to the GoogleMap component

This will allow a key to be set in the FXML and will eliminate the need to programmatically set a key on the map object at runtime.
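For example, assuming the FXML attribute follows the standard JavaFX property-naming convention for the new setKey() method (the attribute name `key` here is my assumption, not taken from the GMapsFX docs), declaring the key in FXML might look something like this:

```xml
<?import com.lynden.gmapsfx.GoogleMapView?>

<!-- The 'key' attribute is assumed to map to the new setKey() method
     via the standard FXML property-naming convention. -->
<GoogleMapView key="YOUR_GOOGLE_MAPS_API_KEY"/>
```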


Support for Clustered Markers

Added the ability to cluster markers by utilizing the Google Maps Marker Clustering API.


Ability to Set Route Colors

Route colors can now be specified rather than having to rely on the default blue that Google Maps uses.




Garbled Text in GMapsFX on Mac OSX


I have been receiving a lot of messages the last couple of months regarding the text in GMapsFX applications looking “garbled”, as illustrated in the screenshot below.


[Screenshot: garbled text in a GMapsFX map on Mac OSX]


This appears to only affect applications running on Mac OSX.  GMapsFX makes use of the JavaFX WebView component under the hood, so I created a simple test app that loads Google Maps into a WebView to confirm that the issue was with the WebView component itself.

[Screenshot: Google Maps rendered with garbled text in a plain JavaFX WebView test app]

I filed a bug report with Oracle, but apparently this is a known issue that affects only some websites in WebView on Mac. Linux and Windows applications using the WebView component are unaffected.

The issue has been open for a couple of years now, which leads me to think that a recent Mac update may be contributing to the problem becoming more widespread.

Unfortunately a fix for this won’t be available until Java 9, and there is no workaround that I know of.



Get the Lat/Long of a Mouse Click in a GMapsFX Map

A new API has been created for mouse events in GMapsFX, which makes it easier to obtain information about mouse events occurring within the Google Map without having to interact with the underlying JavaScript API.

So now getting the Lat/Long of a mouse click is a relatively straightforward process.

GoogleMap map = googleMapView.createMap(mapOptions, false);
map.addMouseEventHandler(UIEventType.click, (GMapMouseEvent event) -> {
    LatLong latLong = event.getLatLong();
    System.out.println("Latitude: " + latLong.getLatitude());
    System.out.println("Longitude: " + latLong.getLongitude());
});


We tell the map we want to add a click UI event listener and pass in an event handler that handles a GMapMouseEvent object.  From that object, the latitude and longitude of the event can be determined.

Currently the Lat/Long are the only properties available on the GMapMouseEvent object, but additional properties will be added as demand warrants.

Below is a screenshot of one of the example applications included with the GMapsFX project that illustrates how to capture the lat/long of a mouse click.




Help! How do I Ship my Freight? Ask the Machine.

Problem: Lynden, Inc., a freight and logistics company, owns a number of operating companies that ship via different modes of transport (air, truck, sea) and also specialize in shipping different types of commodities.  How do I, as a customer, know which Lynden company to call if I want to have something shipped? Below is a table of a few of our operating companies, with the company abbreviation (referenced later in this post) and how each company moves its freight.


Company                Abbreviation   Mode of Transport
Alaska Marine Lines    AML            Sea
Lynden International   LINT           Air
Lynden Air Cargo       LAC            Air
Alaska West Express    AWE            Truck
Lynden Transport       LTIA           Truck
LTI, Inc.              LTII           Truck

Currently there is a single number that a customer can call to get directed to the appropriate company that can move the freight.  However, this has proved to be a somewhat error-prone process, as there are a number of exception cases that can dictate which company can move the freight, such as whether it is oversized or hazmat.

I wanted to see if we could gain any insight into this problem by looking at our historical shipments and possibly use various attributes about a shipment itself to make an accurate recommendation as to which company to call.  This boils down to a classification problem, i.e., how do we classify the company that ships the freight based on attributes that the customer tells us about their freight?  Classification problems are one of the areas where machine learning has been applied extensively over the last few years.

There are dozens and dozens of attributes related to our shipments which could be used to feed into a machine learning model, but to keep things simple for this experiment I chose to use the freight description field. If the customer could give a description of their freight to a predictive model, could it properly suggest which company should ship it?  The exercise would be to see if there was any predictive power at all with the description field, and if so, additional fields could potentially be added to refine the model further.

I obtained data from 50,000 historical shipments which included a short description of the goods being shipped and the abbreviation of the company that shipped the goods.  I imported this data into RStudio for analysis.



There are a number of different machine learning models to choose from for classification problems.  I selected the Naive Bayes model, which is the same model that most spam filters use to classify email as spam based on the words within the email.
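To make the idea concrete, here is a minimal bag-of-words Naive Bayes sketch in plain Java (not the R code used in the experiment; the class name, descriptions, and training data are invented for illustration).  It counts word frequencies per company, then scores a new description with Laplace-smoothed log probabilities and returns the highest-scoring company:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class NaiveBayesSketch {

    // Toy training data: freight description -> company abbreviation.
    // These descriptions are made up for illustration only.
    static final Map<String, String> TRAINING = new LinkedHashMap<>();
    static {
        TRAINING.put("frozen seafood container", "AML");
        TRAINING.put("seafood barge container", "AML");
        TRAINING.put("air parcel electronics", "LINT");
        TRAINING.put("electronics air freight", "LINT");
    }

    public static String classify(String description) {
        Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
        Map<String, Integer> docCounts = new HashMap<>();
        Set<String> vocabulary = new HashSet<>();

        // Tally how often each word appears for each company.
        for (Map.Entry<String, String> e : TRAINING.entrySet()) {
            String company = e.getValue();
            docCounts.merge(company, 1, Integer::sum);
            for (String word : e.getKey().split(" ")) {
                vocabulary.add(word);
                wordCounts.computeIfAbsent(company, k -> new HashMap<>())
                          .merge(word, 1, Integer::sum);
            }
        }

        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String company : docCounts.keySet()) {
            // log P(company) + sum of log P(word | company), with Laplace
            // smoothing so unseen words don't zero out the whole score.
            double score = Math.log(docCounts.get(company) / (double) TRAINING.size());
            Map<String, Integer> counts = wordCounts.get(company);
            int totalWords = counts.values().stream().mapToInt(Integer::intValue).sum();
            for (String word : description.split(" ")) {
                int count = counts.getOrDefault(word, 0);
                score += Math.log((count + 1.0) / (totalWords + vocabulary.size()));
            }
            if (score > bestScore) {
                bestScore = score;
                best = company;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(classify("frozen seafood")); // prints AML
    }
}
```

The R code further below does the same thing at scale, with the tm package handling the word counting and e1071 providing the Naive Bayes implementation.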

Out of curiosity I decided to use R to create a word cloud of common terms for a few of the companies, just to see what pops out.   The first cloud below contains common words found in the descriptions of freight shipped with Alaska Marine Lines (AML).



The word cloud below was generated for freight shipped with Lynden International (LINT).

Finally, the last cloud is for freight that was shipped with LTI, Inc. (LTII), and it returned an interesting combination of commodities.  Hopefully the same tankers that are used to ship sulfur are not also being used to ship wine!

Next it was time to train the model.  I took 40,000 of the records and submitted them to a Naive Bayes model that I had configured in R.  This gave the model a chance to look at the terms in each shipment and associate them with the company that shipped the freight, hopefully finding commonalities that it could use when it needed to predict the company based on the description.

I then took the remaining 10,000 records and had the model guess which company shipped the goods based solely on the freight description, and tabulated the accuracy of the model’s 10,000 predictions.

The table below illustrates what percent of the time the model selected the correct company given the description of the freight.

  • AML:  95%
  • AWE:  94%
  • LTII:   94%
  • LTIA:  83%
  • LAC:   52%

I was quite surprised by the accuracy of the results given that just one attribute of the shipment data was being used by the model.  LAC is the big outlier, and freight that should have been classified as LAC was most commonly misclassified as LINT.  Likewise freight that should have been classified as LTIA was most commonly misclassified as AML.

The next step would be to take a look at some of the other attributes of the freight and see if we can further refine the model, by possibly using fields such as origin, destination, weight, etc.

If you are curious about the R code that I wrote to perform this experiment I have included it below.

library(tm)
library(wordcloud)
library(e1071)
library(gmodels)

set.seed(123)
shipments <- read.table("ShipmentsDescOnly.csv", sep="|", header=TRUE, stringsAsFactors = FALSE)
#Shuffle the shipments up so they aren't in any particular order.
shipments <- shipments[sample(nrow(shipments)), ]
shipments$CompanyAbbreviation <- factor(shipments$CompanyAbbreviation)
# Build a word cloud for each company
aml <- subset(shipments, CompanyAbbreviation == "AML")
lint <- subset(shipments, CompanyAbbreviation == "LINT")
ltia <- subset(shipments, CompanyAbbreviation == "LTIA")
awe <- subset(shipments, CompanyAbbreviation == "AWE")
ltii <- subset(shipments, CompanyAbbreviation == "LTII")
lac <- subset(shipments, CompanyAbbreviation == "LAC")
wordcloud(aml$ShortDescription, max.words = 40, scale = c(3,0.5))
wordcloud(lint$ShortDescription, max.words = 40, scale = c(3,0.5))
wordcloud(ltia$ShortDescription, max.words = 40, scale = c(3,0.5))
wordcloud(awe$ShortDescription, max.words = 40, scale = c(3,0.5))
wordcloud(ltii$ShortDescription, max.words = 40, scale = c(3,0.5))
wordcloud(lac$ShortDescription, max.words = 40, scale = c(3,0.5))
#Clean up the description fields: lower-case and remove numbers, stop words, punctuation, and extra whitespace.
corpus <- Corpus(VectorSource(shipments$ShortDescription))
corpus.clean <- tm_map(corpus, content_transformer(tolower))
corpus.clean <- tm_map(corpus.clean, removeNumbers)
corpus.clean <- tm_map(corpus.clean, removeWords, stopwords())
corpus.clean <- tm_map(corpus.clean, removePunctuation)
corpus.clean <- tm_map(corpus.clean, stripWhitespace)
document.term.matrix <- DocumentTermMatrix(corpus.clean)
#Break the data set into a training set containing 80% of the data, and a test set with the remainder.
training.set.size <- floor(0.80 * nrow(shipments))
training.index <- sample(seq_len(nrow(shipments)), size = training.set.size)
train.labels <- shipments[training.index, ]$CompanyAbbreviation
test.labels <- shipments[-training.index, ]$CompanyAbbreviation
dtm.train <- document.term.matrix[training.index, ]
dtm.test <- document.term.matrix[-training.index, ]
corpus.train <- corpus.clean[training.index]
corpus.test <- corpus.clean[-training.index]
#Only keep terms that appear in at least 5 shipment descriptions.
shipment.dict <- findFreqTerms(dtm.train, 5)
#Convert word counts into Yes/No factors, which is what naiveBayes() expects.
convert_counts <- function(x) {
  x <- ifelse(x > 0, 1, 0)
  factor(x, levels = c(0,1), labels = c("No", "Yes"))
}
shipments.train <- DocumentTermMatrix(corpus.train, list(dictionary=shipment.dict))
shipments.test <- DocumentTermMatrix(corpus.test, list(dictionary=shipment.dict))
shipments.train <- apply(shipments.train, MARGIN = 2, convert_counts)
shipments.test <- apply(shipments.test, MARGIN = 2, convert_counts)
#Train the model with the training data.
model <- naiveBayes(shipments.train, train.labels)
#See how well the model predicts based on the test data.
prediction <- predict(model, shipments.test)
#Print the results of the prediction.
CrossTable(prediction, test.labels, prop.chisq = FALSE, prop.t = FALSE, dnn = c('predicted', 'actual'))



Why You Shouldn’t Use Complex Objects as HashMap Keys

I’m a big believer in learning from my mistakes, but I’m an even bigger believer in learning from other people’s mistakes.  Hopefully someone else will be able to learn from my mistakes.



This post is inspired by an issue that took me a number of days to track down to its root cause.  It started with NullPointerExceptions randomly being thrown in one of my applications.  I wasn’t able to consistently replicate the issue, so I added a liberal amount of logging to the code to see if I could track down what was going on.

What I found was that when I attempted to pull a value out of a particular HashMap, the value would sometimes be null, which was a bit puzzling: after initializing the map with the keys/values there were no more calls to put(), only calls to get(), so there should have been no opportunity to put a null value in the map.

Below is a code snippet similar to (but far more concise than) the one I had been working on.

public void runTest() {
    ProductSummaryBean summaryBean = new ProductSummaryBean(19.95, "MyWidget", "Z332332", new DecimalFormat("$#,###,##0.00"));
    ProductDetailsBean detailBean = getProductDetailsBean(summaryBean);
    productMap.put(summaryBean, detailBean);

    //Load the same summaryBean from the DB
    summaryBean = loadSummaryBean("Z332332");

    //Pull the detailBean from the map for the given summaryBean
    detailBean = productMap.get(summaryBean);
    System.out.println("DetailBean is: " + detailBean);
}

There is a ProductSummaryBean with a short summary of the product, and a ProductDetailsBean with further product details.  The summary bean is below and contains four properties.

package com.lynden.mapdemo;

import java.text.DecimalFormat;
import java.util.Objects;

public class ProductSummaryBean {

    protected double price;
    protected String name;
    protected String upcCode;
    protected DecimalFormat priceFormatter;

    public ProductSummaryBean(double price, String name, String upcCode, DecimalFormat priceFormatter) {
        this.price = price; = name;
        this.upcCode = upcCode;
        this.priceFormatter = priceFormatter;
    }

    public double getPrice() {
        return price;
    }

    public String getName() {
        return name;
    }

    public String getUpcCode() {
        return upcCode;
    }

    public DecimalFormat getPriceFormatter() {
        return priceFormatter;
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 79 * hash + (int) (Double.doubleToLongBits(this.price) ^ (Double.doubleToLongBits(this.price) >>> 32));
        hash = 79 * hash + Objects.hashCode(;
        hash = 79 * hash + Objects.hashCode(this.upcCode);
        hash = 79 * hash + Objects.hashCode(this.priceFormatter);
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final ProductSummaryBean other = (ProductSummaryBean) obj;
        if (Double.doubleToLongBits(this.price) != Double.doubleToLongBits(other.price)) {
            return false;
        }
        if (!Objects.equals(, {
            return false;
        }
        if (!Objects.equals(this.upcCode, other.upcCode)) {
            return false;
        }
        if (!Objects.equals(this.priceFormatter, other.priceFormatter)) {
            return false;
        }
        return true;
    }

    @Override
    public String toString() {
        return "ProductBean{" + "price=" + price + ", name=" + name + ", upcCode=" + upcCode + ", priceFormatter=" + priceFormatter + '}';
    }
}


Any guesses what happens when the code above is run?

Exception in thread "main" java.lang.NullPointerException
 at com.lynden.mapdemo.TestClass.runTest(
 at com.lynden.mapdemo.TestClass.main(


So what happened?  The HashMap stores its keys by using the hashcodes of the key objects.  If we print out the hashcode when the ProductSummaryBean is first created, and again after it’s read out of the DB, we get the following.

SummaryBean hashcode before: -298224643
SummaryBean hashcode after: -298224679


We can see that the hashcodes before and after are different, so there must be something different about the two objects.

SummaryBean before: ProductBean{priceFormatter=java.text.DecimalFormat@67500, price=19.95, name=MyWidget, upcCode=Z332332}
SummaryBean after: ProductBean{priceFormatter=java.text.DecimalFormat@674dc, price=19.95, name=MyWidget, upcCode=Z332332}


Printing out the entire objects shows that while the name, UPC code, and price are all the same, the DecimalFormat used for the price is different.  Since the DecimalFormat is part of the hashCode() calculation for the ProductSummaryBean, the hashcodes of the before and after versions of the bean turned out different.  Since the hashcode differed, the map was not able to find the corresponding ProductDetailsBean, which in turn caused the NullPointerException.

Now one may ask: should the DecimalFormat object in the bean have been used as part of the equals() and hashCode() calculations?  In this case, probably not, but that may not be true in your case.  The safer approach would have been to use the product’s UPC code as the HashMap key, avoiding the danger of the key changing unexpectedly.
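The failure mode can be reproduced with nothing but the JDK.  The sketch below (MutableKey is a hypothetical stand-in for ProductSummaryBean) shows an entry becoming unreachable once its key mutates, and the immutable-String-key alternative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MutableKeyDemo {

    // A deliberately bad key class: hashCode() and equals() depend on a
    // mutable field, just like the DecimalFormat-bearing ProductSummaryBean.
    static class MutableKey {
        String code;

        MutableKey(String code) {
            this.code = code;
        }

        @Override
        public int hashCode() {
            return Objects.hashCode(code);
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof MutableKey && Objects.equals(code, ((MutableKey) o).code);
        }
    }

    public static void main(String[] args) {
        Map<MutableKey, String> badMap = new HashMap<>();
        MutableKey key = new MutableKey("Z332332");
        badMap.put(key, "details");

        // Mutating the key after insertion changes its hashCode, so the map
        // searches under the new hash and the entry is effectively lost.
        key.code = "Z999999";
        System.out.println(badMap.get(key)); // null

        // Keying on the immutable UPC string avoids the problem entirely.
        Map<String, String> goodMap = new HashMap<>();
        goodMap.put("Z332332", "details");
        System.out.println(goodMap.get("Z332332")); // details
    }
}
```

Note that even the same object instance cannot find its own entry once its hashcode changes, which is exactly what happened with the reloaded summary bean.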






How to Import the GMapsFX Component into SceneBuilder

I recently received a question regarding how to add the GMapsFX component to SceneBuilder, so that it could be dragged and dropped onto the UI as it’s being constructed.

I thought I would address the question here since the good folks at Gluon have made it extremely easy to import custom components to SceneBuilder.

The first step is to click the small ‘gear’ icon just to the right of the Library search box and select the “JAR/FXML Manager” menu item from the popup menu.




Since the GMapsFX binaries are in the Maven Central repository, SceneBuilder can download and install the component directly from there.  Click the “Search repository” link in the Library Manager dialog box.



Next, enter “GMapsFX” into the Group or artifact ID text box, and click the “Search” button.  You should get a search result with the com.lynden:GMapsFX artifact. Select the result and click “Add JAR”.



The GoogleMapView should be the only component available in the JAR file.  Select it and click the “Import Component” button.



Finally, you should get a confirmation that the library was imported into SceneBuilder.



At this point the GoogleMapView component should be visible in the “Custom” component section of the palette, and ready to be dragged and dropped onto your UI.  Due to the way the component is constructed, a map will not display in SceneBuilder or the SceneBuilder preview tool, but the proper FXML will be generated and the map will display when the program is run.



Feel free to tweet me at:  @RobTerpilowski with any questions.


GMapsFX 2.0.5 Released

GMapsFX 2.0.5 has been released and is now available via Bintray and Maven Central.

This version contains a major bugfix where the directions pane was always enabled.  The framework has been updated to make the directions pane an option that can be toggled on or off at runtime as it is needed.

GMapsFX is a Java library that makes it extremely easy to add a Google Map to a JavaFX application.




GMaps Home:
twitter: @RobTerpilowski

Using Dependency Injection in a Java SE Application

It would be nice to decouple components in client applications the way we have become accustomed to doing in server-side applications, and to provide a way to use mock implementations for unit testing.

Fortunately it is fairly straightforward to configure a Java SE client application to use a dependency injection framework such as Weld.

The first step is to include the weld-se jar as a dependency in your project. The weld-se jar is basically the Weld framework repackaged along with its dependencies as a single jar file, which is about 4MB.



Implement a singleton which will create and initialize the Weld container and provide a method to access a bean from the container.


public class CdiContext {

    public static final CdiContext INSTANCE = new CdiContext();

    private final Weld weld;
    private final WeldContainer container;

    private CdiContext() {
        this.weld = new Weld();
        this.container = weld.initialize();
        //Shut the Weld container down cleanly when the JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                weld.shutdown();
            }
        });
    }

    public <T> T getBean(Class<T> type) {
        return container.instance().select(type).get();
    }
}


Once you have the context you can then use it to instantiate a bean which in turn will inject any dependencies into the bean.

import java.util.HashMap;
import java.util.Map;

public class MainClass {

    protected String baseDir;
    protected String wldFileLocation;
    protected String dataFileDir;
    protected int timeInterval = 15;
    protected String outputFileDir;

    public void run() throws Exception {
        CdiContext context = CdiContext.INSTANCE;

        //Get an instance of the bean from the context
        IMatcher matcher = context.getBean(IMatcher.class);

        matcher.setCommodityTradeTimeMap( getDateTranslations(1, "6:30:00 AM", "6:35:00 AM", "6:45:00 AM") );

        matcher.matchTrades(wldFileLocation, dataFileDir, timeInterval, outputFileDir);
    }
}


What is great is that there are no annotations required on the interfaces or their implementing classes. Weld will automatically find the implementation and inject it in the class where defined; i.e., there were no annotations required on the IDataFileReader interface or its implementing classes in order to @Inject it into the Matcher class. Likewise, neither the IMatcher interface nor the Matcher class require annotations in order to be instantiated by the CdiContext above.

public class Matcher implements IMatcher {

    //Framework will automatically find and inject
    //an implementation of IDataFileReader
    @Inject
    protected IDataFileReader dataFileReader;

    //...
}

twitter: @RobTerpilowski

Automatically Sorting a JavaFX TableView

With the SortedList implementation of ObservableList, automatically sorting a JavaFX TableView nearly works out of the box. In fact, it does work out of the box as long as you are adding or removing items from the list. In these circumstances, the list and the accompanying TableView will automatically resort the data. The case where the data isn’t automatically resorted is if the data in the existing rows change, but no rows are added or removed from the list.

For example, I have a TableView displaying various price information about a list of stocks, including the last price and the percent change from yesterday’s closing price. I would like to sort the list based on percent change, from lowest to highest, as the prices change in real-time.  However, since the app just loads the stocks into the list once and then never adds or removes items from the list, the TableView only auto-sorted on the initial load, but did not automatically sort as the prices were updated.

Fortunately the fix proved to be fairly minor and easy to implement.

To start with, the application uses a Stock POJO which contains the following properties to represent the data. An ObservableList of these POJOs will be the basis of the data that the TableView will render.

public class Stock implements Level1QuoteListener {

    protected StockTicker ticker;

    protected SimpleStringProperty symbol = new SimpleStringProperty("");
    protected SimpleDoubleProperty bid = new SimpleDoubleProperty(0);
    protected SimpleDoubleProperty ask = new SimpleDoubleProperty(0);
    protected SimpleDoubleProperty last = new SimpleDoubleProperty(0);
    protected SimpleDoubleProperty percentChange = new SimpleDoubleProperty(0);
    protected SimpleIntegerProperty volume = new SimpleIntegerProperty(0);
    protected SimpleDoubleProperty previousClose = new SimpleDoubleProperty(0);
    protected SimpleDoubleProperty open = new SimpleDoubleProperty(0);
}


The first step is to implement a new Callback, tying it to the property we wish to sort in the table. When this property is updated, it will trigger the table to resort itself automatically.

      Callback<Stock, Observable[]> cb = (Stock stock) -> new Observable[] {
          stock.percentChange
      };


The next step is to create a new ObservableList using the Callback above.

      ObservableList<Stock> observableList = FXCollections.observableArrayList(cb);

Finally, the last step is to create a new SortedList using the ObservableList previously created, also passing in a Comparator implementation that determines how to sort the data.

      SortedList<Stock> sortedList = new SortedList<>( observableList, 
      (Stock stock1, Stock stock2) -> {
        if( stock1.getPercentChange() < stock2.getPercentChange() ) {
            return -1;
        } else if( stock1.getPercentChange() > stock2.getPercentChange() ) {
            return 1;
        } else {
            return 0;
        }
      });

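As an aside, the if/else-if comparator is just a numeric comparison on percent change, so the same ordering can be expressed with Comparator.comparingDouble. Here is a small plain-Java sketch of that design choice (a hypothetical Quote class stands in for the Stock bean so it runs without JavaFX):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparatorSketch {

    // Minimal stand-in for the Stock bean; the real bean uses JavaFX properties.
    static class Quote {
        final String symbol;
        final double percentChange;

        Quote(String symbol, double percentChange) {
            this.symbol = symbol;
            this.percentChange = percentChange;
        }
    }

    // Sorts lowest-to-highest percent change, the same ordering the
    // if/else-if comparator in the SortedList produces.
    static List<String> sortedSymbols(List<Quote> quotes) {
        List<Quote> copy = new ArrayList<>(quotes);
        copy.sort(Comparator.comparingDouble(q -> q.percentChange));
        List<String> symbols = new ArrayList<>();
        for (Quote q : copy) {
            symbols.add(q.symbol);
        }
        return symbols;
    }

    public static void main(String[] args) {
        List<Quote> quotes = List.of(
                new Quote("IBM", 1.2),
                new Quote("AAPL", -0.5),
                new Quote("MSFT", 0.3));
        System.out.println(sortedSymbols(quotes)); // [AAPL, MSFT, IBM]
    }
}
```

Either form works with SortedList; the explicit lambda in the post simply makes the three comparison outcomes obvious.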

That’s it. The video below illustrates the auto-sorting in action as the prices of the stocks are updated in real-time.

twitter: @RobTerpilowski

Article in Futures Magazine on our Commodity Trading Program

From the December 2014 issue of Futures Magazine • Subscribe!

Zoi: A match made in cyber heaven

Konstantinos (Gus) Tsahas and Robert Terpilowski had similar ideas regarding trading. These partners, who have been working on trading strategies together for nearly a decade, met, of all places, on an Interactive Brokers (IB) message board, where they began sharing ideas (both have an engineering background) despite being on separate coasts: Tsahas in New York and Terpilowski in Seattle.

“Around 2000, Interactive Brokers offered an API and I began developing a mean reversion strategy for the equity markets,” Terpilowski says. “I was working on software to automate that. Gus actually posted something on the message board of the API group so we hooked up and started bouncing ideas off of each other and hit it off.”

Tsahas adds, “It just turned out that we were working on the same mean reversion ideas. I was further along in some aspects, Rob was [further along] in others. Collaborating helped both of us to come up with some good strategies. That was the start.”

This collaboration continued for five years with both Terpilowski and Tsahas pursuing separate careers: Terpilowski in various civil engineering positions and Tsahas running his own business installing fiber optic lines.

By 2005 they were trading a number of different strategies but running into scalability issues trading individual equities and decided to look at taking their mean reverting strategies and porting them over to futures.

“We noticed in 2005 that it was difficult to get filled in equity markets,” Tsahas says.

“The reason we chose the mean reversion strategies is we were looking at forming a CTA and felt like mean reversion was a pretty good alternative to the 70% or so of [CTAs] who were trend followers,” Terpilowski says. “That would allow people to include our strategy in their portfolio and provide a return stream that was uncorrelated to other CTAs.”

At the time Tsahas went back to school to get a financial engineering degree and entered an IB-sponsored trading contest using their methodology. “I actually came in second place using this strategy and won $50,000.”

Both would build and code the strategy. “Once we hit something that was beneficial we would share it. It was a lot of test and verify,” Tsahas says. “That is the beauty of having a second person with a different set of skills. Once I saw something that I liked or Rob saw something that he liked, we would port it over and test it ourselves.”

“That is one of the reasons we have worked so well together,” Terpilowski adds. “Being from an engineering background we basically say if it is not something that we can model and backtest it is not something that we are willing to trade.”

By 2007 they began trading proprietary money on 19 futures markets in six sectors. Their strategy attempts to capture short-term pullbacks in medium-term (two to three-week) trends.  They did not attempt to optimize their strategy on each market but rather on the entire portfolio to avoid curve-fitting.

“The inputs we use for natural gas are the same inputs that we use for the S&P 500 or gold or wheat,” Terpilowski says. “We wanted to avoid curve-fitting so we decided we are going to optimize it at the portfolio level only and we included 40 markets.”

Their strategies performed extremely well in 2007-2009 and by 2010 they began trading customer money in CTA Zoi Capital.

Winning trades typically last 1.5 days, with losers lasting 2.5 days. Zoi’s Telio program has a four-point risk management strategy. They exit the market in five days if a trade does not begin to revert back to the mean.

“We want to minimize the amount of time in the market because whenever you are in the market you are assuming risk,” Terpilowski says. “We apply stop losses whenever we open a new position but they are dynamic; if volatility is higher we want to place the stops a little further away than if volatility is less to give it more room to breathe.”

However, Zoi will reduce the position size when applying wider stops. That way they maintain their risk parameters, 2.5% on every trade, but allow trades to work. They also will exit all positions if they reach a portfolio-wide drawdown of 5% in a day. They cap margin to equity at 25%, though it typically runs around 7 to 10%.

Zoi earned solid returns: 36.12% in 2010, 25.34% in 2011 and 32.66% in 2012, trading basically for friends and family and looked to raise more assets.

In 2013 Zoi got its first institutional client when Jack Schwager provided an allocation from the ADM Investor Services Diversified Strategies Fund, of which he was co-portfolio manager. This is ironic, as Terpilowski first got interested in technical trading after reading a Jack Schwager book.

The Telio program has produced a compound annual return of 20.36% since April 2010 with a Sharpe ratio of 1.05. It is up 6.56% in 2014 through October.

Zoi is looking to expand its program to 40 markets, which should be no problem as the strategy has already been rigorously tested. As Tsahas says, “If we can’t see it in a model, validate it and make sure it is not curve fitted, we are not going to trade it.”