QuantDesk® Machine Learning Forecast
for the Week of May 1st

So far this year, both Tiebreaker (which reached a new all-time high) and BlackDog have outperformed the S&P. Although the S&P is not the natural benchmark for measuring Tiebreaker's and BlackDog's relative performance, it's still another affirmation that the technology we deploy and test on a roll-forward basis delivers. In fact, it has been delivering for about three years, since inception. To be more specific:

  • Tiebreaker has outperformed its benchmark, VMNIX (Vanguard market neutral institutional fund), by 35.55% since 9/1/2014.
  • BlackDog has outperformed its benchmark, AQR’s risk parity fund, by 26.15% since 4/1/2014.

This year, I have added additional strategies to the mix and we are delighted to see their performance unfolding.

Many astute fund managers will tell you that any successful strategy is bound to decay over time. Market dynamics change, technologies evolve, and proprietary data eventually makes its way to the masses. At Lucena, we work hard to stay ahead of the curve with innovative techniques. Today, I’d like to talk about ensemble voting, a supervised learning technique in which a single strategy utilizes multiple independent learners to deliver investment decisions.

What is Ensemble Learning?

According to Wikipedia: “In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.” The idea is to combine multiple independent weak models to deliver a higher quality forecast.
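To see why combining independent weak learners helps, here is a minimal Python sketch (generic illustration, not Lucena code): with independent models that are each right 60% of the time, a simple majority vote is right noticeably more often.

```python
from math import comb

def majority_vote_accuracy(n_models: int, p: float) -> float:
    """Probability that a majority of n independent models, each correct
    with probability p, produces the right answer (n odd)."""
    k_min = n_models // 2 + 1
    return sum(comb(n_models, k) * p**k * (1 - p)**(n_models - k)
               for k in range(k_min, n_models + 1))

print(round(majority_vote_accuracy(1, 0.6), 3))   # a single weak model: 0.6
print(round(majority_vote_accuracy(5, 0.6), 3))   # five-model vote: ~0.683
print(round(majority_vote_accuracy(25, 0.6), 3))  # improves further with more models
```

The catch, as discussed below, is independence: the gain evaporates if the "independent" models are actually correlated.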

Image 1: Ensemble voting takes into account multiple independent “opinions” derived from either the same or different observations (data) and delivers a single combined score.

It’s important to consider ensemble voting if only because, time and again, ensemble models have won prestigious machine learning competitions such as those hosted on Kaggle.

So, What is a Weak Model?

Without getting too deep into the mathematical representation of a model’s efficacy, a weak model is one in which the relationship between an observation (X) and its predicted outcome (Y) is only weakly or inconsistently predictive. There are three main components that contribute to the strength of a machine learning model:

      1. Irreducible Error – Inherent noise in the data that can’t be reduced with a better algorithm. In general, all data contains some noise. If you measure the relationship between a house’s square footage and its sale price, for example: although house prices are heavily influenced by square footage, it is not a perfect price predictor. Therefore, using square footage to forecast a house’s price carries some measurable noise.

      Image 2: A Bayesian linear regression model depicting an outcome Y from an observation X. The relationship between X and Y is represented via a linear formula, and the distance above or below the perfect outcome (represented by the red line) can be attributed to the inherent noise in the data.

      2. Bias Error – Measures the average rate of divergence of predicted outcomes from actual outcomes. By traveling back in time and empirically comparing the model’s predictions to the actual outcomes, we can assess the model’s predictive success (also called the model’s average fit). A high bias score indicates that the model is not predictive (not flexible enough to capture patterns or trends in the data) and is missing important information.

      3. Variance – Measures how different outcomes are derived from similar observations in different training data. This is, for all intents and purposes, a measure of how overfit the model is. A model with a high predictive outcome (a strong relationship between predictions and actual outcomes) during the training timeframe, but a measurably lower correlation between predictions and outcomes in another period, has a high variance error.
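These components can be illustrated with a small simulation (a generic stdlib sketch, not Lucena's models): we repeatedly refit a least-squares slope on resampled noisy data and estimate the bias and variance of its prediction at a fixed point. The expected squared error decomposes as bias² + variance + irreducible noise.

```python
import random
import statistics

random.seed(0)

TRUE_SLOPE, NOISE_SD, X0 = 3.0, 1.0, 2.0

def fit_slope(xs, ys):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Refit the model on many independently drawn training sets and record
# its prediction at a fixed query point x = X0.
preds = []
for _ in range(2000):
    xs = [random.uniform(0.5, 4.0) for _ in range(20)]
    ys = [TRUE_SLOPE * x + random.gauss(0, NOISE_SD) for x in xs]
    preds.append(fit_slope(xs, ys) * X0)

bias = statistics.mean(preds) - TRUE_SLOPE * X0   # systematic divergence
variance = statistics.variance(preds)             # sensitivity to the training data
print(f"bias ~ {bias:.3f}, variance ~ {variance:.3f}, "
      f"irreducible noise variance = {NOISE_SD ** 2:.3f}")
```

Here the estimator is nearly unbiased but has nonzero variance, and the noise term stays fixed no matter how the model is improved.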

So, what does it all mean?

First, irreducible error (noise) is a function of the data and cannot be improved with better machine learning models. The only remedy for noisy data is a different, less noisy data set. For our discussion today, let’s assume our data is predictive, with a reasonable amount of noise.

The ultimate goal of an ensemble learner is to train multiple weak models — weak whether due to high bias or high variance — and combine them to produce lower overall bias and variance. Low bias and low variance together are the goal of a supervised ensemble learner and would most likely yield a strong predictive outcome. Unfortunately, bias and variance are both functions of a model’s complexity and are somewhat inversely correlated: improving one normally comes at the expense of the other. The ensemble learner is ultimately tasked with finding the optimal trade-off between them. A more complex model (one with many factors) is more likely to fall apart out of sample (due to overfitting, i.e., high variance). Conversely, a model with too few factors is too generic and is more likely to be prone to systematic error (i.e., bias).

Image 3: Relationship between model complexity and error level. The more complex the model, the higher its variance and hence the more subject it is to overfitting. In contrast, the simpler the model, the higher its bias and the more prone it is to underfitting. Credit: Scott Fortmann-Roe http://scott.fortmann-roe.com/docs/BiasVariance.html.

By combining multiple strategies into one, we reduce the likelihood of overfitting and consequently increase the model’s robustness and ultimate success. There are various ways by which an ensemble voting strategy can combine multiple scores into one. The simplest form is to consider all votes equally. More sophisticated strategies apply rules by which they give greater weight to votes that were successful historically. Some of the common scoring methods used in supervised ensemble learning are:

  • Bagging – Averaging the scores of similar models trained on different sets of data (or timeframes).
  • Boosting – A roll-forward process in which historically successful models receive higher weight in the total score.
  • Stacking – Training a machine learning model on the outputs of various independent learners.
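The first two approaches reduce to the same weighted-combination idea. As a sketch (function name and the example weights are hypothetical, not Lucena's implementation), equal weights give a plain vote average, while track-record-based weights approximate the performance-weighted schemes:

```python
def combine_votes(votes, weights=None):
    """Combine per-model scores (e.g., in [-1, 1]) into one signal.
    With no weights, every vote counts equally; otherwise votes are
    weighted, e.g., by each model's historical accuracy."""
    if weights is None:
        weights = [1.0] * len(votes)
    total = sum(w * v for w, v in zip(weights, votes))
    return total / sum(weights)

# Three model scores; suppose the second model has the best track record.
scores = [0.2, 0.9, -0.1]
print(combine_votes(scores))                      # equal-weight average: ~0.333
print(combine_votes(scores, weights=[1, 3, 1]))   # accuracy-weighted: 0.56
```

Stacking goes one step further and learns the combining function itself from the base models' outputs.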

At Lucena, we recently deployed on QuantDesk® the ability to combine multiple independent event studies into a single backtest. All models must agree on a particular asset before making an entry selection. The strength of this approach is heavily dependent on how different the models are (or how uncorrelated they are). If we apply ensemble voting on similar models, the value is not much different than using a single model. However, if we can truly combine uncorrelated models to agree on an outcome, the likelihood of success will increase substantially.
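A minimal sketch of the "all models must agree" entry rule (model names and tickers are purely illustrative):

```python
def unanimous_entries(model_signals):
    """model_signals: dict mapping model name -> set of tickers that
    model votes to enter. Return only tickers every model agrees on."""
    sets = list(model_signals.values())
    agreed = set.intersection(*sets) if sets else set()
    return sorted(agreed)

# Three hypothetical event-study models voting on entry candidates.
signals = {
    "event_study_a": {"AAPL", "XOM", "JNJ"},
    "event_study_b": {"AAPL", "JNJ", "MSFT"},
    "event_study_c": {"JNJ", "AAPL", "GE"},
}
print(unanimous_entries(signals))  # -> ['AAPL', 'JNJ']
```

A unanimity rule is the strictest form of voting; looser variants (e.g., two out of three) trade selectivity for more frequent entries.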

Our preliminary results are rather encouraging and we are in the process of creating smart alpha and smart beta data feeds using intelligent ensemble learning techniques. Stay tuned, as more on this is soon to follow.

Image 4: Ensemble learner cross-validation backtest report. Three event study models combined into one strategy with crossover on 1/1/2016. As can be seen by the cone in the top image, out-of-sample performance is slightly above average.

Strategy Updates

As in past weeks, I wanted to briefly update you on how the model portfolios and the theme-based strategies we covered recently are performing.

Tiebreaker – Lucena’s Long/Short Equity Strategy – YTD return of 6.62% vs. benchmark of -2.51%
Tiebreaker has been forward traded since 2014 and to date it has enjoyed remarkably low volatility and boasts an impressive return of 40.45%, a low max-drawdown of 6.16%, and a Sharpe of 1.83! (You can see a more detailed view of Tiebreaker’s performance below in this newsletter.)

Image 1: Tiebreaker YTD – benchmark is VMNIX (Vanguard Market Neutral Fund Institutional Shares)
Past performance is no guarantee of future returns.

BlackDog – Lucena’s Risk Parity – YTD return of 12.27 % vs. benchmark of 5.67%
We have recently developed a sophisticated multi-sleeve optimization engine set to provide the most suitable asset allocation for a given risk profile, while respecting multi-level allocation restriction rules.
In essence, we strive to reach an optimal decision while weighing the trade-offs between two or more conflicting objectives. For example, given a wide universe of constituents, we can find a subset and its respective allocations that satisfy the following:

  • Maximize the Sharpe ratio
  • Maintain a widely diversified portfolio with allocation restrictions across certain asset classes, market sectors, and growth/value classifications
  • Restrict volatility
  • Minimize turnover

We can also determine the proper rebalance frequency and validate the recommended methodology with a comprehensive backtest.
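As a toy illustration of this kind of constrained trade-off, a brute-force grid search over three sleeves might look like the following. All numbers, sleeve names, and constraint values are hypothetical, and for simplicity the sleeves are assumed uncorrelated; a real optimizer (Lucena's included) would account for cross-correlations and far richer constraints.

```python
import itertools

# Hypothetical annualized return/volatility per sleeve (illustrative only).
sleeves = ["equity", "bonds", "gold"]
mu  = {"equity": 0.08, "bonds": 0.03, "gold": 0.05}
vol = {"equity": 0.16, "bonds": 0.05, "gold": 0.15}
MAX_PORT_VOL, MAX_SLEEVE, RF = 0.10, 0.60, 0.01

def portfolio_stats(w):
    ret = sum(w[k] * mu[k] for k in sleeves)
    var = sum((w[k] * vol[k]) ** 2 for k in sleeves)  # assumes zero correlation
    return ret, var ** 0.5

best, best_sharpe = None, float("-inf")
grid = [i / 20 for i in range(21)]
for w_eq, w_bd in itertools.product(grid, grid):
    w_au = round(1 - w_eq - w_bd, 10)
    if w_au < 0:
        continue
    w = {"equity": w_eq, "bonds": w_bd, "gold": w_au}
    if max(w.values()) > MAX_SLEEVE:      # per-sleeve allocation cap
        continue
    ret, sigma = portfolio_stats(w)
    if sigma > MAX_PORT_VOL:              # portfolio volatility restriction
        continue
    sharpe = (ret - RF) / sigma
    if sharpe > best_sharpe:
        best, best_sharpe = w, sharpe

print(best, round(best_sharpe, 2))
```

Even this crude search shows the essential shape of the problem: maximize one objective (Sharpe) subject to caps on others (volatility, concentration).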

Image 2: BlackDog YTD – benchmark is AQR’s Risk Parity Fund Class B
Past performance is not indicative of future returns.

Utilities – Large-Cap Based and Actively Managed – YTD return of 16.01% vs. benchmark of 7.25%!
I wrote about utilities last year to demonstrate how Lucena’s technology can be deployed to identify fixed-income alternatives. Since November 2016 we have been tracking our utilities portfolio, and it has been performing exceptionally well in both total return and volatility — well ahead of the S&P and its benchmark, the XLU.

Image 3: Utilities-based strategy – captured since November 2016. Benchmark is XLU – Utilities Select Sector SPDR
Past performance is not indicative of future returns.

Industrials – Large-Cap Based and Actively Managed – YTD Return of 6.65% vs. benchmark of 3.26%
I wrote about an industrials-centric portfolio in January of this year. This portfolio was designed to anticipate the administration’s strong desire to invest in infrastructure. The portfolio identifies a well-diversified set of industrial stocks intended to track and outperform the XLI (its benchmark).

Image 4: Industrials-based strategy– captured since January 27, 2017 (covered during that week’s newsletter).
Benchmark is XLI – Industrials select sector SPDR ETF
Past performance is not indicative of future returns.

Forecasting the Top 10 Positions in the S&P
Lucena’s Forecaster uses a predetermined set of 10 factors selected from a large pool of over 500. To self-adjust to the most recent data, we run a genetic algorithm (GA) process over the weekend to identify the most predictive set of factors on which our price forecasts are based. These factors (together called a “model”) are used to forecast the price, and a corresponding confidence score, of every stock in the S&P. Our machine learning algorithm travels back in time over a look-back period (or training period) and searches for historical states in which the underlying equities were similar to their current state. By assessing how prices moved forward in the past, we anticipate their projected price change and forecast their volatility.
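The historical-state search described above resembles a nearest-neighbor lookup. Here is a generic sketch — the factor names, values, and distance metric are illustrative assumptions, not Lucena's implementation:

```python
import math

def knn_forecast(history, current_state, k=3):
    """history: list of (factor_vector, next_period_return) pairs.
    Find the k past states most similar to current_state (Euclidean
    distance) and average their subsequent returns."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(history, key=lambda h: dist(h[0], current_state))[:k]
    return sum(r for _, r in nearest) / k

# Toy 2-factor history: (momentum, value) -> next-week return
history = [
    ((0.9, 0.1), 0.02), ((0.8, 0.2), 0.015), ((0.1, 0.9), -0.01),
    ((0.2, 0.8), -0.005), ((0.85, 0.15), 0.018),
]
print(knn_forecast(history, (0.88, 0.12), k=3))
```

The spread of returns among the nearest neighbors is also what makes a volatility forecast, and hence a confidence score, possible in this framing.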

Image 5: Last week’s top 10 performance vs. SP 500 benchmark.
Past performance is no guarantee of future returns.

The charts below represent the new model and the top 10 positions assessed by Lucena’s Price Forecaster.

Image 6: Default model for the coming week.

The top 10 forecast chart below delineates the ten positions in the S&P with the highest projected market-relative return combined with their highest confidence score.

Image 7: Forecasting the top 10 positions in the SPY for the coming week.
The yellow stars (0 stars meaning poorest and 5 stars meaning strongest) represent the confidence score based on the forecasted volatility, while the blue stars represent backtest scoring as to how successful the machine was in forecasting the underlying asset over the lookback period — in our case, the last 3 months.

To view a brief introduction video of all the major functions of QuantDesk, please click on the following link:

The table below presents the trailing 12-month performance and a YTD comparison between the two model strategies we cover in this newsletter (BlackDog and Tiebreaker), as well as the two ETFs representing the major US indexes (the DOW and the S&P).

Image 8: Last week’s changes, trailing 12 months, and year-to-date gains/losses.

Past performance is no guarantee of future returns.

Model Tiebreaker: Lucena’s Active Long/Short US Equities Strategy:

Tiebreaker: Paper trading model portfolio performance compared to the SPY and Vanguard Market Neutral Fund since 9/1/2014.
Past performance is no guarantee of future returns.

Model BlackDog 2X, Lucena’s Tactical Asset Allocation Strategy:

BlackDog: Paper trading model portfolio performance compared to the SPY and Vanguard Balanced Index Fund since 4/1/2014.
Past performance is no guarantee of future returns.


For those of you unfamiliar with BlackDog and Tiebreaker, here is a brief overview: BlackDog and Tiebreaker are two out of an assortment of model strategies that we offer our clients. Our team of quants is constantly on the hunt for innovative investment ideas. Lucena’s model portfolios are a byproduct of some of our best research, packaged into consumable model portfolios. The performance stats and charts presented here reflect paper-traded portfolios on our platform, QuantDesk®. Actual performance of our clients’ portfolios may vary, as it is subject to slippage and the manager’s discretionary implementation. We will be happy to facilitate an introduction with one of our clients for those of you interested in reviewing live brokerage accounts that track our model portfolios.

Tiebreaker is an actively managed long/short equity strategy. It invests in equities from the S&P 500 and Russell 1000 and is rebalanced bi-weekly using Lucena’s Forecaster, Optimizer and Hedger. Tiebreaker splits its cash evenly between its core and hedge holdings, and its hedge positions consist of long and short equities. Tiebreaker has been able to avoid major market drawdowns while still taking full advantage of subsequent run-ups. Tiebreaker is able to adjust its long/short exposure based on idiosyncratic volatility and risk. Lucena’s Hedge Finder is primarily responsible for driving this long/short exposure tilt.

Tiebreaker Model Portfolio Performance Calculation Methodology
Tiebreaker’s model portfolio performance is a paper trading simulation and assumes an opening account balance of $1,000,000 in cash. Tiebreaker started paper trading on April 28, 2014 as a cash-neutral and beta-neutral strategy; however, it was substantially modified to its current dynamic mode on 9/1/2014. Trade execution and return figures assume positions are opened at the 11:00 AM EST price quoted by the primary exchange on which the security is traded and, unless a stop is triggered, closed at the 4:00 PM EST price quoted by the same exchange. A trailing 5% stop loss is imposed, measured from the intra-week high (in the case of longs) or low (in the case of shorts). If the stop loss is triggered, the position is exited 5% below that high in the case of longs, or 5% above that low in the case of shorts, with the following modification: prior to March 1st, 2016, at times (but not at all times), if, in consultation with a client executing the strategy, it was found that the client received a less favorable price in closing out a position when a stop loss was triggered, the less favorable price was used in determining the exit price. On September 28, 2016 we applied new allocation algorithms to Tiebreaker and modified its rebalancing sequence to every two weeks (10 trading days). Since March 1st, 2016, all trades have been conducted automatically, with no modifications, based on the guidelines outlined herein. No manual modifications have been made to the stop prices. In instances where a position gaps through the trigger price, the initial post-gap trading price is used. Transaction costs are calculated as the larger of $6.95 per trade or $0.0035 per share traded.

BlackDog is a paper trading simulation of a tactical asset allocation strategy that utilizes highly liquid ETFs of large-cap and fixed-income instruments. The portfolio is adjusted approximately once per month based on Lucena’s Optimizer in conjunction with Lucena’s macroeconomic ensemble voting model. Due to BlackDog’s low volatility (half the market’s in backtesting), we leverage it 2X. By deploying twice its original cash assets, we take full advantage of its potential returns while maintaining low market-relative volatility and risk. As evidenced by the chart below, BlackDog 2X is substantially ahead of its benchmark (S&P 500).

In the past year, we covered QuantDesk’s Forecaster, Back-tester, Optimizer, Hedger and our Event Study. In future briefings, we will keep you up-to-date on how our live portfolios are executing. We will also showcase new technologies and capabilities that we intend to deploy and make available through our premium strategies and QuantDesk® our flagship cloud-based software.
My hope is that those of you who follow us closely will gain a good understanding of machine learning techniques in statistical forecasting and expertise in our suite of offerings and services.


  • Forecaster – Pattern recognition price prediction
  • Optimizer – Portfolio allocation based on risk profile
  • Hedger – Hedge positions to reduce volatility and maximize risk adjusted return
  • Event Analyzer – Identify predictable behavior following a meaningful event
  • Back Tester – Assess an investment strategy through a historical test drive before risking capital

Your comments and questions are important to us and help to drive the content of this weekly briefing. I encourage you to continue to send us your feedback, your portfolios for analysis, or any questions you wish for us to showcase in future briefings.
Send your emails to: info@lucenaresearch.com and we will do our best to address each email received.

Please remember: This sample portfolio and the content delivered in this newsletter are for educational purposes only and are NOT intended as the basis for an investment strategy. Beyond discounting market impact and not counting transaction costs, there are additional factors that can impact success. Hence, additional professional due diligence and investor insight should be applied before risking capital.

For those of you who are interested in the spreadsheet with all historical forecasts and results, please email me directly and I will gladly send you the data.

If you have any questions or comments on the above, feel free to contact me: erez@lucenaresearch.com

Have a great week!

Lucena Research brings elite technology to hedge funds, investment professionals and wealth advisors. Our Artificial Intelligence decision support technology enables investment professionals to find market opportunities and to reduce risk in their portfolio.

We employ Machine Learning technology to help our customers exploit market opportunities with precision and scientifically validate their investment strategies before risking capital.

Disclaimer Pertaining to Content Delivered & Investment Advice

This information has been prepared by Lucena Research Inc. and is intended for informational purposes only. This information should not be construed as investment, legal and/or tax advice. Additionally, this content is not intended as an offer to sell or a solicitation of any investment product or service.

Please note: Lucena is a technology company and not a certified investment advisor. Do not take the opinions expressed explicitly or implicitly in this communication as investment advice. The opinions expressed are of the author and are based on statistical forecasting based on historical data analysis. Past performance does not guarantee future success. In addition, the assumptions and the historical data based on which an opinion is made could be faulty. All results and analyses expressed are hypothetical and are NOT guaranteed. All Trading involves substantial risk. Leverage Trading has large potential reward but also large potential risk. Never trade with money you cannot afford to lose. If you are neither a registered nor a certified investment professional this information is not intended for you. Please consult a registered or a certified investment advisor before risking any capital.