
AI Strategies Series: Explainability

Data Science and AI

By Lucy Tancredi  |  February 14, 2024

Given the potential for generative AI hallucinations, users understandably want source links or references to show where a Large Language Model (LLM) finds its answers. In this third installment of our six-part series, we discuss a key challenge of generative AI, its lack of explainability (specifically, the inability of LLMs to cite sources), and RAG, a solution developers can implement to produce explainable output.

As you may recall from our first article, How LLMs Do—and Do Not—Work, LLMs don’t fetch answers from their training data. Rather, they generate a likely sequence of words based on language patterns they’ve learned from training data. In essence, the individual pieces of training data that feed an LLM are analogous to individual pieces of fruit fed into a blender. Once the model has been trained, what is left is like a fruit smoothie. Just as you can't extract the original whole pieces of fruit from a smoothie, LLMs can’t refer to the original source data behind generated answers.

Although an LLM may generate a source reference or URL if you ask for one, it will most likely be a hallucination. Popular LLMs have famously fabricated, or hallucinated, not only information but also legitimate-sounding scholarly citations and URLs that are allegedly sources for that information.

However, in the past year, models such as ChatGPT have gotten better at sidestepping the hallucinated-reference problem: rather than inventing a URL, they either decline to provide source URLs or suggest potential sources for the user to check. In the example below, I asked an LLM for the source of its answer that Russia is the largest country by land mass. It explained that its answers come from a mixture of data from different sources; that is, it did not pretend to have fetched the answer from a specific website.

[Screenshot: the world's largest country example. The LLM explains that its answer comes from a mixture of training data rather than a specific website.]

RAG to the Rescue

In our previous article, we discussed how RAG (Retrieval-Augmented Generation) is a programmatic solution for hallucinations. RAG comes to the rescue for explainability as well. Because retrieval-augmented generation involves fetching known answers from an existing data source, programmers know exactly where the answer came from and can provide that exact source to the user. This kind of provability—along with access to massive amounts of trusted data—will determine the leaders among technology firms leveraging the conversational strengths of generative AI to deliver fact-based (not LLM-sourced) information.
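To make the idea concrete, here is a minimal, hypothetical sketch of RAG with source citation in Python. The Document type, the keyword-matching retrieve function, and the llm callable are illustrative placeholders rather than any particular product's implementation (a production system would use a vector index and a real model API), but the key point holds: because the program selects the documents, it can return their sources with certainty.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source_url: str  # known provenance for every retrievable fact

def retrieve(query: str, store: list[Document], top_k: int = 3) -> list[Document]:
    """Toy keyword retriever; a production system would use a vector index."""
    words = query.lower().split()
    scored = sorted(store, key=lambda d: -sum(w in d.text.lower() for w in words))
    return scored[:top_k]

def answer_with_sources(query: str, store: list[Document], llm) -> dict:
    """Ask the LLM to answer only from retrieved context, then attach provenance."""
    docs = retrieve(query, store)
    context = "\n".join(d.text for d in docs)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    answer = llm(prompt)  # any chat-completion call can be plugged in here
    # The program chose the documents, so these citations are certain, not generated.
    return {"answer": answer, "sources": [d.source_url for d in docs]}
```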

LLM-enhanced products throughout FactSet are using RAG to ground answers in our trusted data. Responses to fact-based requests are retrieved from our governed databases, not generated from the Large Language Model’s training data.

For example, FactSet Mercury for junior bankers uses RAG not only to deliver accurate results but also to provide explainability via "prove it" functionality. In the screenshot below, a user asks for the top five banks in Virginia with more than 10 branches. Mercury returns the answer in chat and provides additional context or sources (in this case, a map of those bank branches) to reinforce the accuracy and trustworthiness of the answer. Users can always dive into the exact FactSet database or source that provided the answer. If Mercury doesn't have a factual answer in its database, it says so, rather than allowing the LLM to hallucinate an answer.

[Screenshot: FactSet Mercury answers the query in chat and displays a map of the supporting bank branches.]
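Continuing the hypothetical sketch above, the retrieve-or-refuse behavior the screenshot illustrates might look like the following. The grounded_answer function and its overlap threshold are invented for illustration, not Mercury's implementation; the point is that the fallback is decided in code, before the LLM is ever asked to improvise.

```python
def grounded_answer(query: str, store: list[Document], llm, min_overlap: int = 1) -> dict:
    """Answer from governed data when possible; otherwise refuse rather than guess."""
    words = query.lower().split()
    docs = retrieve(query, store)
    # Crude relevance check: does any retrieved document share words with the query?
    best_overlap = max((sum(w in d.text.lower() for w in words) for d in docs), default=0)
    if best_overlap < min_overlap:
        # No grounded fact available, so say so instead of letting the LLM improvise.
        return {"answer": "I don't have that information in my data.", "sources": []}
    return answer_with_sources(query, store, llm)
```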

Conclusion

Generative AI can help organizations increase productivity, enhance client and employee experiences, and accelerate business priorities. Understanding the implications and solutions for explainability of output from generative AI models will help organizations and individuals be effective providers and users of AI technologies.

In the meantime, watch for part four of this six-part series next week: inconsistent responses and outdated knowledge. If you missed the previous articles, check them out:

AI Strategies Series: How LLMs Do—and Do Not—Work

AI Strategies Series: 7 Ways to Overcome Hallucinations

 

This blog post is for informational purposes only. The information contained in this blog post is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this article.


Lucy Tancredi, Senior Vice President, Strategic Initiatives - Technology

Ms. Lucy Tancredi is Senior Vice President, Strategic Initiatives - Technology at FactSet. In this role, she is responsible for improving FactSet's competitive advantage and customer experience by leveraging Artificial Intelligence across the enterprise. Her team develops Machine Learning and NLP models that contribute to innovative and personalized products and improve operational efficiencies. She began her career in 1995 at FactSet, where she has since led global engineering teams that developed research and analytics products and corporate technology. Ms. Tancredi earned a Bachelor of Computer Science from M.I.T. and a Master of Education from Harvard University.
