
The Problem with Magic

Data Science and AI

By Steve Cohen  |  March 28, 2019

 

The legendary science fiction writer Arthur C. Clarke wrote in 1962 that “Any sufficiently advanced technology is indistinguishable from magic.” If any technology at any time in history fulfills this prophecy, it’s AI in 2019.

AI is magic in at least two ways:

  1. How does it work? Forget outsiders...experts struggle to explain its operations.
  2. What can’t it do? AI can order pizza...will it take my job? 

And this magic has consequences:

  1. If technology isn’t explainable, innovation (especially in highly regulated industries) becomes impossible. After all, compliance is difficult when methodology is hopelessly complex.
  2. If capabilities aren’t understood, firms can spend millions on products that don’t work or that they don’t need. For a perfect example, one need only look at IBM Watson’s foray into cancer research.

In short, AI has a magic problem. The technology can seemingly do anything, and it’s getting more and more difficult to explain how it does what it does. This is great for marketing agencies, but terrible for businesses. It’s especially terrible for financial services firms who hope AI can light the path to better trades and lower costs. If they can’t get AI to comply—or get a good handle on what it really does—then all that glorious potential is for naught.

I’ve spent over 20 years building AI applications, and I’ve written this piece to help you better understand the twin mysteries that surround the 21st century’s buzziest technology. If this sounds helpful—if AI is an enigma and you’re kinda lost—then you’ve come to the right place: an honest take awaits.

Demystifying AI: Mechanics

The “black box” problem is the semi-formal name for the mystery surrounding the operations of sophisticated AI systems. It arose as a consequence of the widespread adoption of the approach fundamental to nearly everything we now call AI: machine learning.

The first “AI” systems were built on explicitly programmed rules. These were libraries of input/output combinations that, generally speaking, followed this format: Given input pattern “x,” produce output “y.”

Unfortunately, the world is very complex, so there were a couple of problems.

  1. Real inputs often didn’t match the ideal inputs stored in the library.
  2. General rules often required an extraordinary number of exceptions, and the creation of an exception for every possible input was impossible.
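A toy sketch makes the rule format and these problems concrete. The rules and inputs below are invented for illustration; real systems held far larger libraries:

```python
# A hypothetical rule library in the "given input pattern x, produce output y" style.
RULES = {
    "invoice from approved vendor": "release payment",
    "wire transfer above limit": "escalate to compliance",
}

def respond(input_pattern: str) -> str:
    # The system only works when the real input matches a stored pattern exactly.
    return RULES.get(input_pattern, "no rule found")

print(respond("invoice from approved vendor"))  # -> release payment
print(respond("invoice from a new vendor"))     # -> no rule found (the brittleness problem)
```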

These limitations made the systems brittle. As a consequence, a new approach to artificial intelligence was invented—machine learning.

Machines learn in a manner much like children do: They are given examples, and from those examples, they are asked to generalize. For machines, these examples are in the form of data that has been correctly labelled by humans. And from multiple examples of the same thing—from multiple pictures of different dogs labelled “dog”—the machine infers a model of what makes a dog a dog.

Just as with children, this model is flexible and allows the machine to identify dogs it has never seen as dogs.
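A minimal sketch of that learning process, assuming scikit-learn is available; the labelled examples are toy data invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Each example is a crude description of an animal: [weight_kg, barks, whiskers_prominent],
# and each has been labelled by a human.
examples = [
    [30, 1, 0],  # labrador
    [8, 1, 0],   # terrier
    [4, 0, 1],   # house cat
    [5, 0, 1],   # another cat
]
labels = ["dog", "dog", "not dog", "not dog"]

# The machine infers a model of "what makes a dog a dog" from the labelled examples.
model = DecisionTreeClassifier().fit(examples, labels)

# The learned model generalizes to a dog it has never seen.
print(model.predict([[20, 1, 0]]))  # -> ['dog']
```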

However, just as with children, the reasoning can be difficult to understand. Children can’t articulate the mental models they use to identify dogs, and machine learning models can be even more inscrutable. This opacity only increases as the sophistication of the machine learning process increases (e.g., deep learning), and it is the basis of the black box problem.

For regulators, inexplicability is clearly problematic. Modern regulatory policy is rooted in manual processes and, therefore, demands a level of explainability from technology that is better suited to analysts than algorithms. Risk-averse by function, regulators have an understandably hard time handing the keys over to machines no one quite understands.

While the “black box” is a serious issue (especially in highly regulated industries like financial services), the good news is that there is an active community of researchers tackling it.

In Transparency Isn’t Enough, mathematician and AI researcher Gurjeet Singh outlines one of the most interesting strategies to emerge from this activity: the student/teacher method:

In this approach, a sophisticated AI application is used to decode a dataset. It produces the key insights and relationships; it creates the essential mapping of inputs to outputs. Once this phase is complete, another, simpler application is deployed. But, instead of learning from the data, this application learns from the other model, producing a streamlined version of the input/output relationship the other model created. In other words, the simpler model extracts the “rules” discovered by the other model—this is called a “rule extraction”—and it is this far more intelligible, justifiable model that actually goes into the production technology.

Complexity is, by and large, the crux of the “black box” problem. The student/teacher approach manages excessive complexity by substitution, employing a complex system to discover patterns and a simpler system to learn those patterns and apply them in a production environment.
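Here is a minimal sketch of the student/teacher idea, assuming scikit-learn and a synthetic dataset; the specific models are illustrative choices, not Singh’s actual implementation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A synthetic dataset stands in for the real data the "teacher" would decode.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# Teacher: a sophisticated, hard-to-explain model learns the input/output mapping.
teacher = GradientBoostingClassifier(random_state=0).fit(X, y)

# Student: a simple, legible model learns from the teacher's outputs rather than the raw labels.
student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(X, teacher.predict(X))

# The student's decision rules can be printed and reviewed -- the "rule extraction."
print(export_text(student))
```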

While not applicable in all circumstances, the student/teacher approach is just one of the many strategies commercial and academic researchers are developing to demystify the technology. If AI is to make real headway in financial services, this essential hurdle must be overcome.

Demystifying AI: Capabilities

What can modern AI do?

The best answer: both more and less than you think it can.

AI is, practically speaking, synonymous with machine learning, and we described above how machine learning is the process of teaching a computer system to identify patterns through examples. These patterns can be extremely sophisticated, from market fluctuations to the fur on Fluffy’s face, but the domain an algorithm can be trained to operate in is narrow. In fact, the accuracy of a given AI system is very much contingent on how limited and well-defined the domain and task are.

What does this all mean? It means that AI can be even better than humans at narrow tasks (like chess and finding cats in pictures) and much, much worse across broad, complex, and sophisticated tasks (like white collar decision making). 

So, AI excels at tightly constrained pattern recognition, and, when it comes to financial services, this capability has two primary applications: revenue generation and risk mitigation.

Revenue Generation

AI can improve the top-line revenue of financial institutions by enhancing their ability to detect historically significant patterns in the financial markets. In other words, AI systems have a knack for asset price forecasting, and Prattle’s Equity Analytics Data Feed is an excellent example of this application.

Using the historical relationship between the language used in corporate earnings calls and stock price movements, Prattle has trained an algorithm to identify the impact a given publicly traded company’s earnings call will have on its stock price. These predictions can be used by hedge funds and other asset managers to optimize their investment positions and generate much-coveted returns over market...also known as “alpha.”

Figure 1: Kroger’s 2017 Q2 Earnings Call Sentiment


Figure 1 contains the history of Prattle’s predictions—or scores—of the impact Kroger’s earnings calls have on its stock. This impact is measured as the 10-day CAR, the expected cumulative abnormal return (compared to the market) over 10 days. For the Q2 2017 call, Prattle’s algorithm detected that the language used on the call was, historically speaking, highly positive: in the 99th percentile of all calls reviewed by the system. The algorithm assigned a CAR score of 6.671 to the call, indicating that returns over market were likely.

And this is exactly what happened: Kroger’s stock spiked 3.84%, and the S&P 500 fell 0.13% over the next 10 days.[7]

While tools like Prattle’s Equity Analytics Data Feed are still in their early years, the use of AI in asset management will only grow. AI systems will only get better at pattern recognition, and financial institutions are always eager to integrate better predictive intelligence into their decision making.
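To make the 10-day CAR metric concrete, here is a minimal sketch; the daily return figures are hypothetical, and only the Kroger and S&P 500 totals quoted above come from this article:

```python
def cumulative_return(daily_returns):
    """Compound a list of daily returns into a single cumulative return."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    return total - 1.0

def car_10_day(stock_returns, market_returns):
    # Cumulative abnormal return: the stock's cumulative return minus the market's,
    # measured over the 10 trading days following the earnings call.
    return cumulative_return(stock_returns[:10]) - cumulative_return(market_returns[:10])

# Hypothetical daily returns for the 10 days after a call.
stock = [0.020, 0.005, -0.001, 0.003, 0.000, 0.004, 0.002, 0.001, 0.002, 0.001]
index = [0.001, -0.002, 0.000, -0.001, 0.000, 0.001, 0.000, 0.000, -0.001, 0.000]

print(f"10-day CAR: {car_10_day(stock, index):.2%}")
```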

Risk Mitigation

As AI’s pattern detection capabilities are discovering their niche in the top-line growth toolset, they’re also finding a home on the other side of the budget: loss reduction.

Most financial institutions consider their compliance departments to be cost centers, and anti-money laundering (AML) departments contribute an increasingly sizeable share of that burden. In fact, a study by LexisNexis reported that

The cost of AML compliance among European financial institutions…was estimated at a staggering $83.5 billion annually. AML compliance officers reported that these costs have risen 21 percent over the past two years, with another 17 percent increase expected in 2017.

One of the biggest contributors to the cost of AML departments is the time wasted investigating incorrectly flagged people and transactions. Why? Most existing AML systems operate on ancient, rules-driven technology (the kind that machine learning has displaced in most other spaces), and the scale of their inefficiency is staggering: A recent study by PwC found that over 90% of AML alerts received by large financial institutions are false positives.

False positives are a serious problem in the know your customer (KYC) and transaction monitoring portions of the AML chain, and AI can move the needle on both fronts. Recent advances in natural language processing (NLP) technology, the field of AI that specializes in human language, have made it possible for machine systems to find, identify, and unite information in structured text (e.g., watchlists, databases, etc.) and unstructured text (e.g., social media posts, news articles, etc.) to create complete profiles of the people and organizations that financial institutions do business with. Complete profiles like these make decisions on the criminality of potential or current customers and their financial activity easier and more accurate—an approach that can drastically reduce the incidence of false positives. And even a marginal decrease from current levels could provide millions in savings, as shown by a Chartis Research report. According to the same report, applying machine learning technology to AML “could slash the rate of spurious alerts by half.”
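As one small illustration of the KYC piece, here is a minimal sketch of matching a name extracted from unstructured text against a structured watchlist; the names, similarity measure (Python’s difflib), and threshold are all assumptions for illustration, not a production approach:

```python
from difflib import SequenceMatcher

# A hypothetical structured watchlist.
WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings Ltd", "Maria Gonzalez"]

def best_watchlist_match(name: str):
    # Score the extracted name against every watchlist entry and return the best match.
    scored = [(SequenceMatcher(None, name.lower(), entry.lower()).ratio(), entry)
              for entry in WATCHLIST]
    return max(scored)

# A name mention pulled from an unstructured source, such as a news article.
score, entry = best_watchlist_match("I. Petrov")

# A tuned threshold is one simple way to keep spurious alerts (false positives) down.
if score >= 0.75:
    print(f"Possible match: {entry} (similarity {score:.2f})")
else:
    print("No credible match; no alert raised.")
```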

Conclusion

AI has a magic problem, but it’s not insurmountable. The “black box” problem is getting solved, and the limits of AI will become common knowledge. For financial services firms, both mysteries can’t be dispelled soon enough: higher revenues and lower costs are on the horizon.


Steve Cohen

EVP, COO, and Co-Founder, Basis Technology

Steve co-founded Basis Technology in 1995 and oversees the firm’s business operations. With his engineering background, Steve seeks to move the dial on real-world problems by finding achievable waypoints between academic thinking and production-grade technology.

