
Using Aggregate Statistics to Strengthen ETF and Stock Selection

Technology and Tips

By Michael Amenta, CFA  |  December 20, 2016

Investors need a granular filtering process to tackle an increasingly wide and diverse universe of securities. For active managers in the equity space, this process was succinctly detailed in an article by Bijan Beheshti in May. Fortunately for fund selectors in the popular ETF space, many of the same ideas can also be applied.

In the ETF world, investors are targeting “Smart Beta” funds more than ever. However, investors don’t have to be limited to fixed labels, whether provided by the issuer or by a third-party analyst. The same popular factors—market, value, profitability, solvency, growth, efficiency, etc.—can be targeted by analyzing funds’ holdings with trusted, simple, and verifiable metrics.

For example, the 30-day standard deviation of closing NAV can quantify market risk and rank a broad universe of equity ETFs. Combined with ranked dividend yield (a value factor), return on equity (profitability), EBIT-to-interest-expense ratio (solvency), sales growth (growth), and asset turnover (efficiency), it can contribute to a multi-factor ranking scheme with custom weightings for each factor. These metrics are available through bottom-up aggregations of holdings-level data, which keeps fund metrics traceable to the individual equities and standardized across ETF data providers.
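The weighted ranking scheme described above can be sketched in a few lines of plain Python. All fund names, metric values, and weights below are hypothetical and purely illustrative; this is not FactSet's implementation, just a minimal sketch of ranking each factor across a universe and blending the ranks with custom weights.

```python
# Hypothetical funds with per-factor metrics aggregated from holdings.
# Metric names mirror the factors in the text; values are illustrative only.
funds = {
    "FUND_A": {"nav_vol": 0.8, "div_yield": 2.5, "roe": 18.0},
    "FUND_B": {"nav_vol": 1.4, "div_yield": 3.1, "roe": 12.0},
    "FUND_C": {"nav_vol": 0.6, "div_yield": 1.2, "roe": 22.0},
}

# Custom weights per factor; a negative weight penalizes metrics
# where a higher value is worse (e.g., NAV volatility as market risk).
weights = {"nav_vol": -0.4, "div_yield": 0.3, "roe": 0.3}

def rank_metric(values):
    """Return a 1-based rank for each value (higher value -> higher rank)."""
    ordered = sorted(values)
    return {v: ordered.index(v) + 1 for v in values}

def composite_scores(funds, weights):
    """Blend per-factor ranks into one weighted score per fund."""
    scores = {name: 0.0 for name in funds}
    for metric, weight in weights.items():
        ranks = rank_metric([f[metric] for f in funds.values()])
        for name, f in funds.items():
            scores[name] += weight * ranks[f[metric]]
    return scores

scores = composite_scores(funds, weights)
best = max(scores, key=scores.get)  # top-ranked fund under these weights
```

Because the weights are explicit inputs, the same scheme supports any custom emphasis across factors, and the resulting ranking is fully traceable back to the underlying metrics.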

These factors can also be grouped by ETF strategy or region of investment.

FactSet clients, launch this sample screen. This screen contains more than 50 criteria, including some of our newest data items, which can be adjusted in your local directory according to individual preferences and metric accessibility.

Similarly, a fund's stated geographic exposures, which can be opaque, can be validated by aggregating measurements of geographic revenue exposure at the individual equity level. This check provides particularly crucial insight for fund investors with a mandate to avoid specific regions.
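The bottom-up validation described above amounts to weighting each holding's regional revenue split by its portfolio weight. The holdings, weights, and region codes below are hypothetical and for illustration only; the mandate threshold is likewise an assumption.

```python
# Hypothetical holdings: portfolio weights and per-equity revenue exposure
# by region (fractions summing to 1 per equity). Illustrative numbers only.
holdings = [
    {"weight": 0.50, "rev": {"US": 0.7, "EU": 0.2, "CN": 0.1}},
    {"weight": 0.30, "rev": {"US": 0.4, "EU": 0.5, "CN": 0.1}},
    {"weight": 0.20, "rev": {"US": 0.9, "EU": 0.0, "CN": 0.1}},
]

def fund_revenue_exposure(holdings):
    """Aggregate each equity's regional revenue split, weighted by its
    portfolio weight, into a fund-level geographic exposure profile."""
    exposure = {}
    for h in holdings:
        for region, share in h["rev"].items():
            exposure[region] = exposure.get(region, 0.0) + h["weight"] * share
    return exposure

exposure = fund_revenue_exposure(holdings)

# Flag the fund if exposure to an excluded region (here, a hypothetical
# 5% cap on "CN" revenue) exceeds the mandate threshold.
violates_mandate = exposure.get("CN", 0.0) > 0.05
```

A fund whose stated country allocation looks compliant can still fail this revenue-based check, which is exactly why the bottom-up aggregation adds value over the label alone.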

Performance, expenses and fees, management tenure, and fund flows have long served vanilla selection strategies well. But to capitalize in an environment that increasingly expects precisely managed factor exposures, investors require insight into holdings-level statistics.

For example, aggregations of equity data can quickly surface funds that hold equities with strong free cash flow, adequate leverage, and/or low valuations.

Today, tailored factor exposures are available beyond the confines of stock selectors; ETF investors can also enjoy a bespoke selection process that is transparent, flexible, and testable.

Adding Dynamic, Benchmark Relative Context to Equity Screening

Aggregate metrics not only bolster the ETF selection process; they also add important context and filtering criteria in an equity screening environment. These statistics can be returned for benchmarks as well, providing a powerful tool for relative analysis when viewed alongside comparable equity metrics.

Furthermore, the choice of a benchmark for relative rankings doesn’t have to be fixed. An investor can use dynamic comparisons specific to the particular equity in a given row. For example, if a screening universe is the S&P 500, a benchmark-relative ranking could be measured against that index, but it could alternatively be flexible—the S&P 500 Automobiles industry for General Motors, and, a few rows down, the S&P 500 Household Durables industry for PulteGroup.
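The dynamic, row-specific comparison described above can be sketched by ranking each stock only against its own comparison group. The tickers, industry groups, and P/E values below are hypothetical and illustrative; the point is that each row's benchmark group can differ.

```python
# Hypothetical screening rows: each stock carries a metric (here P/E) and
# the dynamic comparison group it should be ranked against.
stocks = [
    {"ticker": "GM",  "group": "Automobiles",        "pe": 6.0},
    {"ticker": "F",   "group": "Automobiles",        "pe": 7.5},
    {"ticker": "PHM", "group": "Household Durables", "pe": 9.0},
    {"ticker": "LEN", "group": "Household Durables", "pe": 8.0},
]

def relative_ranks(stocks, metric):
    """Rank each stock within its own comparison group (1 = lowest value)."""
    by_group = {}
    for s in stocks:
        by_group.setdefault(s["group"], []).append(s)
    ranks = {}
    for members in by_group.values():
        for i, s in enumerate(sorted(members, key=lambda s: s[metric])):
            ranks[s["ticker"]] = i + 1
    return ranks

ranks = relative_ranks(stocks, "pe")
```

Here General Motors is ranked against the Automobiles group and PulteGroup against Household Durables, even though both sit in the same screening universe; swapping the `group` field swaps the benchmark without changing the screen.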

FactSet clients, launch this sample screen. This screen can be adjusted in your local directory according to individual preferences and metric accessibility.

This process has previously been limited to client-specified universe and group statistics. But comparisons can be more relevant, efficient, and reliable using pre-calculated aggregate statistics that are standardized across benchmarks and funds.

Michael Amenta, CFA

Vice President, Product Strategist, Content & Technology Solutions

Michael is responsible for guiding strategy and development for FactSet’s benchmark data feed solutions. Prior to this role, he was a Product Manager developing aggregated market statistics for countries, industries, benchmarks, and investment funds for the FactSet Market Aggregates (FMA) product group. In his prior role, Michael wrote research briefs that were cited by numerous financial publications. Michael joined FactSet in 2011 and is based in New York City. He holds a B.A. in Finance from the University of Notre Dame and is a CFA charterholder.


