
Five Lessons on Machine Learning-Based Investment Strategies

Written by FactSet Insight | Jan 5, 2021

Some have argued that financial markets are a poor choice for the application of Machine Learning (ML). These articles have focused on the prediction of market or stock returns and cite the Gaussian properties of these returns or the "noisiness" of such data as the reason for their conclusions. Often, they are written by data scientists who undoubtedly have a deep understanding of their craft but lack subject matter expertise in the problem they are trying to solve; as a result, their approaches are flawed in how they frame the problem in the first place.

In asset management, a significant and growing portion of assets is already managed using data-driven "quantitative" investment strategies. This should be the starting point for discussing ML in investment research. ML should be used as a tool by quantitatively driven finance experts to make their strategies more efficient and profitable. The benchmark for judging the successful introduction of ML should be those practitioners' current strategies, unassisted by ML.

Introducing Machine Learning in Quantitative Research

There are several steps to building data-driven investment strategies, regardless of the software or system being used. First, we need to gather disparate datasets such as company financials, broker estimates, pricing and corporate actions, classification data to group companies, and many different types of esoteric or alternative data to find hidden signals. These datasets then need to be combined, standardized, cleaned of outliers, and turned into factors with economically intuitive meaning. Analytical tools can then be used to analyze how well these factors explain the movement of stock prices and whether they have persistent value over time. Finally, these signals can be converted into portfolios using rules-based methods or more sophisticated methods like risk-based optimization.
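
A minimal pandas sketch of the standardization step; the column names, winsorization bounds, and tickers below are illustrative assumptions rather than the actual pipeline:

```python
import pandas as pd

def build_factor(panel: pd.DataFrame, raw_col: str, date_col: str = "date") -> pd.Series:
    """Winsorize a raw metric and z-score it within each monthly cross-section."""
    def _standardize(x: pd.Series) -> pd.Series:
        clipped = x.clip(x.quantile(0.01), x.quantile(0.99))  # trim extreme outliers
        return (clipped - clipped.mean()) / clipped.std()     # cross-sectional z-score
    return panel.groupby(date_col)[raw_col].transform(_standardize)

# Hypothetical monthly panel: one row per (date, ticker) with a raw valuation metric
panel = pd.DataFrame({
    "date": ["2019-08-30"] * 4,
    "ticker": ["600519", "000001", "601318", "000858"],
    "earnings_yield": [0.03, 0.08, 0.05, 0.60],  # last value is an outlier
})
panel["value_factor"] = build_factor(panel, "earnings_yield")
print(panel)
```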

Where does ML fit in? ML excels at finding patterns in data. One way to use it is to enhance our traditional data-driven investment strategies by finding and exploiting patterns in our factors, building models that explain stock performance in terms of those factors. This workflow is shown in the diagram below.

Automated Machine Learning

The difficulty becomes how to choose and implement the right types of ML algorithms. Using free tools like those available in Python or R, a novice data scientist quickly gets out of their depth and is more likely to fail than succeed. They won't have the experience to know which types of algorithms to apply to a given problem or how to train them effectively, and they can easily get trapped in an endless loop of trying different algorithms with a multitude of parameters and data permutations.

On the other hand, hiring an experienced data scientist can be expensive. There are very few who have all the necessary skills to solve financial market problems. Most likely you will require three people: a data scientist to test and validate algorithms, an engineer/coder to implement these in different environments, and a subject matter expert who understands the data and can intelligently define the problem.

Succeeding with ML requires automating the more mundane components of programming and statistics. Subject matter experts need to be empowered with sophisticated tools that allow them to tackle these problems with only minimal help from professional data scientists, in the form of product support.

For our analysis, we used DataRobot via FactSet, which allowed us to research, build, and automate various models before integrating them into actual portfolios. To learn more, watch the full webcast.

Building and Testing Our Model

To show that ML can be used to enhance traditional quant factors, we built a stock prediction model for China A shares. We compiled monthly snapshots of stock performance and diverse factor data for the CSI 800 index from December 2012 through August 2019. We set the target variable as the future one-month return of a stock and used the factors from our original portfolio model.
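
A rough sketch of how such a forward one-month return target could be built from monthly snapshots (the column names and prices are hypothetical):

```python
import pandas as pd

# Hypothetical monthly snapshots: one row per (date, ticker) with a month-end price
prices = pd.DataFrame({
    "date":   pd.to_datetime(["2019-06-28", "2019-07-31", "2019-08-30"] * 2),
    "ticker": ["600519"] * 3 + ["000001"] * 3,
    "price":  [980.0, 1010.0, 1100.0, 12.5, 12.1, 12.8],
}).sort_values(["ticker", "date"])

# Target variable: the return realized over the month *after* each snapshot date
next_price = prices.groupby("ticker")["price"].shift(-1)
prices["fwd_1m_return"] = next_price / prices["price"] - 1.0
print(prices)
```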

We systematically tested dozens of different algorithms and preprocessing permutations on the problem through a “survival of the fittest” process. First, we trained each model using a subset of historical data, then we tested the model on data it hadn’t seen before to determine its efficacy. All models were then ranked by different methods, or optimization metrics, to determine the best models for the given problem.
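
The actual search was run through DataRobot's automated pipeline; as a simplified stand-in, the scikit-learn sketch below illustrates the same idea of fitting several candidate algorithms on earlier data and ranking them on unseen data. The candidate models, synthetic data, and rank-IC metric are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_months, n_stocks, n_factors = 60, 200, 8

# Synthetic stand-in for the factor panel: exposures X and forward returns y, ordered by month
X = rng.normal(size=(n_months * n_stocks, n_factors))
y = 0.02 * X[:, 0] + rng.normal(scale=0.08, size=len(X))  # weak signal buried in noise

# Time-ordered split: train on earlier months, evaluate on later, unseen months
split = 45 * n_stocks
X_train, y_train, X_valid, y_valid = X[:split], y[:split], X[split:], y[split:]

candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
}

# Rank every candidate by an optimization metric computed on the unseen months
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    ic, _ = spearmanr(model.predict(X_valid), y_valid)  # rank IC as the metric
    scores[name] = ic

for name, ic in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:15s} validation rank IC = {ic:.3f}")
```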

After our models had been evaluated, we took the predictions from the top three models and pulled them back into our analytics model. We built equal-weighted portfolios in which we bought the top 20% of predictions and sold the bottom 20% of predictions. We then analyzed these portfolios alongside more traditional factor-based portfolios. The chart below displays the returns for these different portfolios. 
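
A sketch of the equal-weighted long/short construction described above, assuming a hypothetical table of monthly predictions:

```python
import pandas as pd

def long_short_weights(preds: pd.DataFrame) -> pd.Series:
    """Equal-weight long the top 20% of predictions and short the bottom 20%, month by month."""
    def _weights(month: pd.DataFrame) -> pd.Series:
        hi = month["prediction"].quantile(0.80)
        lo = month["prediction"].quantile(0.20)
        w = pd.Series(0.0, index=month.index)
        w[month["prediction"] >= hi] = 1.0
        w[month["prediction"] <= lo] = -1.0
        w[w > 0] /= (w > 0).sum()  # each leg sums to +/-100%
        w[w < 0] /= (w < 0).sum()
        return w
    return preds.groupby("date", group_keys=False)[["prediction"]].apply(_weights)

# Hypothetical one-month slice of model predictions
preds = pd.DataFrame({
    "date": ["2019-08-30"] * 5,
    "ticker": list("ABCDE"),
    "prediction": [0.05, 0.02, 0.00, -0.01, -0.04],
})
preds["weight"] = long_short_weights(preds)
print(preds)
```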

Lesson 1 - Don't Mix Up In-Sample and Out-Of-Sample

At first glance, our ML-based strategies appear to greatly outperform more conventional strategies. However, this is because we are looking at the entire period, including the data the models were trained on; instead, we need to evaluate these models using only new data the algorithm hasn't been trained or validated on to ensure the strategy will be successful in the future.

If we compare the in-sample results with the hold-out (out-of-sample) results, unfortunately, the ML-based models just barely outperform their more traditional peers. In one case, the ML-based model actually performs quite poorly. So ML did a great job of modeling the factor behavior during the training and validation period, but this performance would not have persisted with real money behind it. This leads us back to some of the original criticisms about applying ML to investments. We can address them by carefully constructing our problem, following the points below.

Note: The Information Coefficient (IC) is a measure of how well each factor predicts the rank of security returns. Larger positive values denote better predictive power.
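
In practice, the IC for a month is usually computed as the cross-sectional rank (Spearman) correlation between factor scores and the subsequent returns; a minimal sketch on synthetic data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
factor_scores = rng.normal(size=300)                  # one month's factor exposures
next_month_returns = 0.1 * factor_scores + rng.normal(scale=1.0, size=300)

ic, _ = spearmanr(factor_scores, next_month_returns)  # rank correlation of factor vs. returns
print(f"Information Coefficient: {ic:.3f}")
```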

Lesson 2 – Block Out the Noise and Model One Thing at a Time

Unlike typical use cases for ML, such as predicting same-store sales or the likelihood of an individual defaulting on a bank loan, stock return data is noisy. It's well known that financial time series are plagued by complex behavior including heteroskedasticity, black swans, and tail dependence. In our case, we do not seek to predict the market return, just the stocks in which to invest. To reduce the impact of these phenomena, we can focus solely on benchmark-relative or peer-relative performance, which strips out much of the noise.
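
One simple way to express this is to define the target as performance relative to the benchmark or peer group; the sketch below uses a naive equal-weighted cross-sectional mean as the reference, purely for illustration:

```python
import pandas as pd

# Hypothetical forward returns for one month
returns = pd.DataFrame({
    "date": ["2019-08-30"] * 4,
    "ticker": ["600519", "000001", "601318", "000858"],
    "fwd_1m_return": [0.06, -0.02, 0.01, 0.03],
})

# Benchmark/peer-relative target: subtract each month's cross-sectional average return
benchmark = returns.groupby("date")["fwd_1m_return"].transform("mean")
returns["relative_return"] = returns["fwd_1m_return"] - benchmark
print(returns)
```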

Lesson 3 – Simplify Your Problem Statement to Produce Better Models

Even after minimizing the noise in our stock returns, predicting the continuum of stock returns is unnecessary. For a typical long-only fund manager, knowing the actual stock returns would not change their behavior so long as the rank order of the stocks does not change. If a stock's return next month is 10% versus 11%, you would still buy it. Switching to a simple classification-based approach lets us avoid the overfitting that comes with trying to predict the actual stock return.

We tried reframing the problem: will a stock be in the top 30% of stocks in the index? To find out, we re-ran the same process with the same data and this new target.
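
A sketch of the reframed target, labeling each stock by whether its forward return falls in the top 30% of that month's cross-section (column names and data are hypothetical):

```python
import pandas as pd

panel = pd.DataFrame({
    "date": ["2019-08-30"] * 5,
    "ticker": list("ABCDE"),
    "fwd_1m_return": [0.06, -0.02, 0.01, 0.03, -0.05],
})

# 1 if the stock's forward return is in the top 30% of that month's cross-section, else 0
cutoff = panel.groupby("date")["fwd_1m_return"].transform(lambda r: r.quantile(0.70))
panel["top_30pct"] = (panel["fwd_1m_return"] >= cutoff).astype(int)
print(panel)
```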

As shown below, all three of the best classification-based models outperformed the regression-based ones in the in-sample period. Importantly, their out-of-sample performance is stable: the best of the three outperformed all other factors and was extremely consistent from month to month. It looks like we may have found a winning recipe.

Lesson 4 – Explaining Your Model Is as Important as Building It

To pitch your fund within your organization, and eventually explain to clients the merits and results of the investment strategy, you will have to explain how the model works. The difficulty here is that ML models are complicated to understand, both conceptually and in practice.

DataRobot gives us tools to explain how our ML model works. The chart below represents the feature impact, which is essentially the sensitivity of the prediction to a change in the value of a feature (or independent variable). In this case, our model is most sensitive to changes in the Value, Liquidity, Momentum, and Earnings Growth factors, as well as whether the company is a State-Owned Enterprise (SOE). The chart is scaled relative to the most important feature, so all other factors are measured against the impact of Value.
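
Outside of DataRobot, a comparable notion of feature impact can be approximated with permutation importance, scaled so the top feature reads as 1.0; the scikit-learn sketch below uses synthetic data and hypothetical factor names:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
feature_names = ["value", "liquidity", "momentum", "earnings_growth", "soe_flag"]
X = rng.normal(size=(2000, len(feature_names)))
X[:, 4] = (X[:, 4] > 0).astype(float)  # binary SOE flag
y = ((0.8 * X[:, 0] + 0.4 * X[:, 2] + rng.normal(size=2000)) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt the model?
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
impact = result.importances_mean / result.importances_mean.max()  # scale vs. top feature
for name, score in sorted(zip(feature_names, impact), key=lambda kv: -kv[1]):
    print(f"{name:16s} {score:.2f}")
```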

This chart explains the relationship of the features to the prediction. These relationships can be, and often are, nonlinear. In the case of Value, the higher a company's exposure, the higher the prediction from our model.
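
These feature-effect relationships are conceptually similar to a partial dependence plot: sweep one feature while holding the rest of the data fixed and average the model's predictions. A rough sketch on the same kind of toy classifier as above:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 5))  # column 0 plays the role of "Value"
y = ((0.8 * X[:, 0] + rng.normal(size=2000)) > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Sweep the "Value" exposure while holding everything else fixed, averaging the predictions
for v in np.linspace(-2, 2, 9):
    X_sweep = X.copy()
    X_sweep[:, 0] = v
    avg_prob = model.predict_proba(X_sweep)[:, 1].mean()
    print(f"value exposure {v:+.2f} -> average predicted probability {avg_prob:.3f}")
```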

We can then take this from a theoretical to a practical understanding and examine what the strategy traded. The graph below shows the SWS industries of the companies the model is recommending. The Y-axis shows the relative importance of an industry on a scale of 1 to 5, with 1 indicating the highest importance, and the size of each bubble represents the frequency of observations. Our model avoids Financials and Utilities while buying companies in the Electronics industry.

We then grouped our predictions into two groups based on the SOE flag that we highlighted earlier. By analyzing the correlations of the returns of the stocks in each group, we found that the model makes dramatically different recommendations depending on whether a company is state-owned. For SOEs, our model is tilted more toward value stocks, while for private companies, our model tends to invest more in growth companies.
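
A sketch of that grouping step, comparing the average factor tilts of the model's recommendations for SOEs versus private companies (tickers and exposures are made up):

```python
import pandas as pd

# Hypothetical snapshot of the model's recommended buys with factor exposures
holdings = pd.DataFrame({
    "ticker":   ["600028", "601857", "300750", "002475", "600519", "000858"],
    "soe_flag": [1, 1, 0, 0, 1, 0],
    "value":    [1.2, 0.9, -0.5, -0.3, 0.7, -0.1],
    "growth":   [-0.2, 0.1, 1.4, 1.1, 0.3, 0.8],
})

# Average factor tilt of the recommendations within each ownership group
tilts = holdings.groupby("soe_flag")[["value", "growth"]].mean()
print(tilts)
```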

Lesson 5 – Try Out Lots of Approaches and Fail Fast

Related to lessons 3 and 4: it's highly probable that you will need to iterate on many different approaches to find something that works and generalizes well. We saw that the initial problem statement did not give us what we needed out-of-sample, so we quickly reframed the problem to achieve better results. As we iterated, we used DataRobot's and FactSet's combined interpretability features to further inform our modeling decisions. For example, we could hone this model further using what we learned about the different ways it treats SOEs and non-SOEs; we could include additional information on the form the state ownership takes as a variable, or even build separate models for SOE and non-SOE stocks using different data fields for each. Equally, we might try different training periods, covering longer or shorter timespans, or transform some of our data by comparing it with historical ranges of various lengths.

This is where the ability to efficiently model multiple problem statements, input datasets, and target variables becomes really valuable. Automated ML facilitates this by not only trying out many different ML algorithms for a given modeling problem, but also increasing the speed at which the user can iterate; by quickly building and evaluating multiple ML models, users can concentrate on bringing their domain expertise to bear by testing hypotheses on how to improve their models and strategies further.

Just remember lesson 1: select your problem statement based on in-sample validation performance and check that it generalizes well using out-of-sample holdout performance prior to deployment.

Conclusion

The example presented here shows one use case of ML applied to enhance factors traditionally used to manage portfolios. The training and application of the algorithms were just one of several steps in this process. The models we built and ultimately selected were very profitable when tested out of sample and significantly outperformed more traditional models. There is no doubt that in the hands of a skillful practitioner, ML is a powerful tool. However, careful consideration is needed when framing the problem to minimize the impact of noisy data and the danger of overfitting. Understanding how ML models, and the strategies built from them, work is also key when applying ML to portfolio management.