
Granular Tick Data Combined with Cloud Sharing Allows for Optimized Execution Analysis

Data Science and AI

By BMLL Technologies  |  August 8, 2023

Data is considered to be the new oil, and capital markets data is no exception. The more granular market microstructure detail of Level 2 and Level 3 data is widely used in high-frequency and proprietary trading as well as market making. Yet outside of those areas, and in algorithmic trading in general, this richer, more granular data is increasingly used to drive more sophisticated analysis before and at the point of execution.

There are a number of drivers for analyzing this level of data. For example, market fragmentation has led to more complexity. Therefore, there is an appetite for more quantitative analysis to realize a true picture of addressable liquidity across markets. Regulation also catalyzes interest, especially around best execution requirements.

Yet the deeper levels of order book data have historically been neither easy to obtain nor easy to administer, given their volume and velocity. As a result, demand for managed solutions is rising.

Use Cases of Level 3 Data Analysis

Historical Level 3 order book data captures and displays the trading intentions of all market participants. It offers detailed depth-of-market information, including the bid price, ask price, quote size, the price and size of the last trade, and the high and low prices for the day.

There are a range of reasons for analyzing Level 3 data.

  • Gain a more sophisticated understanding of the markets: Confirm the identity of a potential aggressor on a trade, or the origin of orders (e.g., hidden volume revealed by iceberg orders).

  • Collect input for smart order routers in algo suites: Useful for proprietary and high-frequency trading (HFT), or to stem “liquidity fade” (i.e., quotes “disappearing” once a seller is noticed). Furthermore, some jurisdictions require full-depth order protection.

  • Observe the interactions between order books: This includes not only interaction between physical venues, but also between inter-listed securities, and spillover from futures markets into cash markets.

  • Protect users against adverse selection: Predict the likelihood that a displayed bid will disappear, given the order types that venues provide.

  • Generate analytics for estimating fill probabilities: Gain a better understanding of the true cost of trading in fragmented markets.
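To make the relationship between the two data levels concrete, the sketch below replays a hypothetical Level 3 event stream (individual order adds, modifies, and deletes) into a Level 2 view of aggregate resting size per price level. The event schema and field names here are invented for illustration and do not reflect any vendor's actual message format.

```python
from collections import defaultdict

# Hypothetical Level 3 event stream: each event describes one resting order.
events = [
    {"type": "add",    "id": 1, "side": "bid", "price": 99.98,  "size": 500},
    {"type": "add",    "id": 2, "side": "bid", "price": 99.98,  "size": 300},
    {"type": "add",    "id": 3, "side": "ask", "price": 100.02, "size": 400},
    {"type": "modify", "id": 2, "size": 200},   # partial cancel of order 2
    {"type": "delete", "id": 3},                # full cancel of order 3
    {"type": "add",    "id": 4, "side": "ask", "price": 100.01, "size": 250},
]

def replay(events):
    """Replay Level 3 events into the current map of resting orders."""
    orders = {}
    for ev in events:
        if ev["type"] == "add":
            orders[ev["id"]] = {"side": ev["side"], "price": ev["price"], "size": ev["size"]}
        elif ev["type"] == "modify":
            orders[ev["id"]]["size"] = ev["size"]
        elif ev["type"] == "delete":
            del orders[ev["id"]]
    return orders

def level2(orders):
    """Aggregate per-order state into Level 2 depth: total size per price level."""
    depth = {"bid": defaultdict(int), "ask": defaultdict(int)}
    for o in orders.values():
        depth[o["side"]][o["price"]] += o["size"]
    return depth

book = level2(replay(events))
best_bid, best_ask = max(book["bid"]), min(book["ask"])
```

Losing the per-order detail (as a plain Level 2 feed does) would hide, for example, that the 700 shares resting at the best bid belong to two separate orders, which is exactly the kind of information the aggressor- and iceberg-detection use cases above depend on.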

Faster performance of analytics and computations

While the availability of deeper levels of data can help with the above use cases, there is also an issue of performance when aggregating and storing large amounts of data. Pulling that data across the network and performing calculations on a different server tends to significantly slow things down. The efficient solution is to move the computation and the data next to each other, making it easier to churn through large volumes. This is where modern technology, such as cloud-based data sharing, can help.

With data sharing, data does not have to be copied or transferred. This relatively new approach to storing and serving big data provides an efficient way to bring the compute closer to the data. Data products can be created natively within the cloud and then made available to clients. Parallelizing the computation enables liquidity analytics (for example, time-weighted average spreads, volatilities, etc.) to be calculated from historical tick data in minutes. In addition, these analytics are packaged in easy-to-consume, high-level visualizations.
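As a minimal sketch of one of the liquidity analytics named above, the following computes a time-weighted average spread from a handful of illustrative quote ticks. The timestamps and prices are made up for the example; a production version would apply the same interval-weighting logic in parallel across far larger tick histories.

```python
# Illustrative quote ticks: (timestamp_seconds, best_bid, best_ask).
quotes = [
    (0.0, 99.98, 100.02),   # spread 0.04 held for 2s
    (2.0, 99.99, 100.02),   # spread 0.03 held for 3s
    (5.0, 99.99, 100.01),   # spread 0.02 held for 5s
]
END_TIME = 10.0

def time_weighted_avg_spread(quotes, end_time):
    """Average bid-ask spread, weighted by how long each quote was in force."""
    # Each quote persists until the next tick (or until the window ends).
    next_times = [q[0] for q in quotes[1:]] + [end_time]
    total = 0.0
    for (t, bid, ask), t_next in zip(quotes, next_times):
        total += (ask - bid) * (t_next - t)
    return total / (end_time - quotes[0][0])

twa_spread = time_weighted_avg_spread(quotes, END_TIME)  # (0.08 + 0.09 + 0.10) / 10
```

Time-weighting matters because a simple average over ticks would over-represent brief quote flickers, whereas this version weights each spread by the interval it was actually quotable.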

Accelerate research, optimize workflows, generate alpha

To help provide market participants with better access to larger and deeper levels of data, FactSet recently made its entire global archive of Level 1 Tick History across 300+ venues accessible via a cost-optimized solution in Snowflake. This enables clients to access ready-to-query, normalized data without downloading and storing petabytes of data locally. The tick data can be easily combined with any of FactSet’s other content sets available in Snowflake, such as Pricing & Corporate Actions, Symbol History, Sentiment Data, Fundamental Data, Event Transcript Data, and more.

FactSet recently partnered with BMLL Technologies to offer a unique and granular Level 2 content set that is derived from Level 3 data. Clients can now access Level 2 Data via the same symbology and APIs as the Level 1 Tick History on a common delivery platform to accelerate research, optimize workflows and trading strategies, and generate alpha at speed and scale.

 

This blog post has been written by a third-party contributor and does not necessarily reflect the opinion of FactSet. The information contained in this blog post is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this article.


BMLL Technologies

BMLL Technologies is a leading independent provider of Level 3 historical data and analytics for the world’s most advanced capital markets participants. Its Level 3 data captures and displays the trading intentions of all market participants, enabling investors to derive predictive insights. The firm empowers researchers and quants across global financial services to understand how markets behave and predict the future price movements of securities.

