
Reducing Hallucinations from Generative AI


By Lucy Tancredi  |  July 3, 2023

Large Language Models (LLMs) are not new, having played an important role in various AI applications for several years. One of the earliest LLMs—Google’s BERT—was introduced in 2018. However, it was the recent release of ChatGPT that sparked widespread public interest in generative AI. The newest and most sophisticated LLMs excel at generating text that is both coherent and relevant, closely resembling human conversation.

However, they can also make up—or “hallucinate”—information and present it with unwavering confidence. Generative text models aim to produce plausible text based on patterns learned from their training data. They do not natively search the Internet or other sources for accurate answers.

They also have limitations in both the scope and timeliness of their knowledge. ChatGPT's learned knowledge base comes from training data through September 2021, so it cannot accurately answer questions about current events. The high cost of training LLMs makes it prohibitive to keep them up to date with anything near daily or real-time training data. OpenAI CEO Sam Altman has said that training GPT-4 cost more than $100 million.

Retrieval-Augmented Generation

While those issues have made some corporations understandably wary of leveraging LLM technology, several methods can vastly improve the performance of LLMs. One approach that has proven particularly effective at reducing hallucinations is Retrieval-Augmented Generation (RAG) used in conjunction with vector databases. While Internet-retrieval integrations (for example, with Bing search or via ChatGPT plugins) are beginning to help here, a RAG approach allows companies to efficiently leverage LLMs with their own proprietary data or with third-party data.

How does Retrieval-Augmented Generation work? First, trusted knowledge sources are searched for relevant data. The LLM then uses those results to generate a user-friendly response. For example, searching a help documentation site could return two or three pages that contain answers to a user’s question. The LLM would then consolidate the pertinent details from each page into a single concise answer. This approach mitigates the risk of hallucinations that can occur when relying on the LLM to generate answers solely from its training data.
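As a rough illustration, the sketch below wires those two steps together in Python. The search_knowledge_base and call_llm helpers are hypothetical placeholders standing in for a real document search and a real LLM API call; the flow, not the specific functions, is the point.

# A minimal sketch of the Retrieval-Augmented Generation flow described above.
# search_knowledge_base and call_llm are hypothetical stand-ins for a real
# document search and a real LLM API.

def search_knowledge_base(question: str, top_k: int = 3) -> list[str]:
    """Retrieval step: return the top_k passages most relevant to the question."""
    # In practice this would query a search index or vector database.
    return ["Passage 1 text...", "Passage 2 text...", "Passage 3 text..."]

def call_llm(prompt: str) -> str:
    """Generation step: send the prompt to a text-generation model and return its reply."""
    return "A concise answer grounded in the supplied passages."

def answer_with_rag(question: str) -> str:
    # 1. Retrieve trusted passages relevant to the user's question.
    passages = search_knowledge_base(question)
    context = "\n\n".join(passages)
    # 2. Ask the LLM to answer using only those passages, which reduces the
    #    chance of it inventing facts from its training data alone.
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_with_rag("How do I reset my password?"))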

Vector Databases  

Vector databases play an important role in improving the performance of the RAG approach. These databases store text in a form that computers can process more efficiently. Instead of storing text as words, they represent it as embeddings: numerical vectors that capture its meaning. When a user asks a question, the question is also converted into a numerical vector. Relevant documents or passages can then be found in the vector database even when they don't share the same words.
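For illustration, the toy example below ranks stored passages against a question by cosine similarity over made-up three-dimensional vectors. Real embedding models produce vectors with hundreds or thousands of dimensions, but the matching idea is the same: similarity of meaning, not shared keywords.

# Toy vector-search example: documents and the question are represented as
# numerical vectors (embeddings), and relevance is measured by vector similarity.
# The vectors here are invented for illustration; in practice an embedding model
# produces them.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three stored passages.
document_vectors = {
    "How to reset your password": [0.9, 0.1, 0.2],
    "Billing and invoices": [0.1, 0.8, 0.3],
    "Exporting data to Excel": [0.2, 0.2, 0.9],
}

# The user's question is embedded the same way, then compared against each stored vector.
question_vector = [0.85, 0.15, 0.25]  # e.g., "I forgot my login credentials"

ranked = sorted(
    document_vectors.items(),
    key=lambda item: cosine_similarity(question_vector, item[1]),
    reverse=True,
)
print(ranked[0][0])  # the most semantically similar passage, even without matching keywords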

These methods help LLMs generate relevant responses informed by real-world knowledge. AI engineers also use other techniques to reduce hallucinations, such as lowering the “temperature” parameter, which instructs a GPT model to be less creative. Engineers can also use prompt engineering to explicitly instruct LLMs to respond with more accurate answers.
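As a simple sketch of those two controls, the snippet below uses the OpenAI Python library (its pre-1.0 interface, current when this post was written) with the temperature set to zero and a system prompt that tells the model to avoid guessing. The model name and prompt wording are illustrative choices, not a recommended configuration.

# Sketch of generation-side controls: a low temperature plus an explicit
# instruction to prefer accuracy over invention. Assumes openai.api_key is
# already configured; model name and prompts are illustrative only.

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Prompt engineering: tell the model not to guess.
        {"role": "system", "content": (
            "Answer only from the provided context. "
            "If you are not sure of the answer, say you don't know."
        )},
        {"role": "user", "content": "What is our refund policy?"},
    ],
    temperature=0,  # lower temperature -> less "creative", more deterministic output
)
print(response.choices[0].message["content"])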

Together with the RAG model, these techniques allow businesses to leverage LLMs for their strengths in language manipulation without relying on them for fact-based knowledge. They also incorporate proprietary and third-party knowledge in a more scalable and cost-effective way than re-training or fine-tuning an LLM. The end result is more accurate and relevant responses than using a traditional LLM alone.

To learn more, visit FactSet Artificial Intelligence and read our additional LLM articles.

This blog post is for informational purposes only. The information contained in this blog post is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this article.


Lucy Tancredi, Senior Vice President, Strategic Initiatives - Technology

Ms. Lucy Tancredi is Senior Vice President, Strategic Initiatives - Technology at FactSet. In this role, she is responsible for improving FactSet's competitive advantage and customer experience by leveraging Artificial Intelligence across the enterprise. Her team develops Machine Learning and NLP models that contribute to innovative and personalized products and improve operational efficiencies. She began her career in 1995 at FactSet, where she has since led global engineering teams that developed research and analytics products and corporate technology. Ms. Tancredi earned a Bachelor of Computer Science from M.I.T. and a Master of Education from Harvard University.

