Generative AI can help your organization increase productivity, enhance client and employee experiences, and accelerate business priorities over time. But there are many considerations for decision-makers to navigate. Below is a roundup of our pragmatic, real-world perspectives to help with your strategic planning. This content is ideal for chief technology officers, chief information officers, chief financial officers, and chief investment officers—along with anyone pursuing the business potential of AI.
Reducing Hallucinations from Generative AI
This article shows how you can use LLMs for their strengths in natural language understanding and generation without relying on them as a source of factual knowledge. Methods highlighted here include Retrieval-Augmented Generation (RAG) in conjunction with vector databases. These techniques allow businesses to incorporate proprietary and third-party knowledge in a more scalable and cost-effective way than re-training or fine-tuning an LLM. The result is more accurate and relevant responses than a standalone LLM can provide.
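The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal, self-contained illustration: the word-count "embedding" stands in for a real embedding model, and the in-memory list stands in for a vector database. All function names here are illustrative, not FactSet's implementation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts. Production systems use a
    # learned embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM in retrieved context rather than its parametric
    # memory, which is the mechanism that reduces hallucinations.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The assembled prompt would then be sent to an LLM; because the answer is constrained to retrieved passages, proprietary knowledge can be updated by re-indexing documents instead of re-training the model.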
Large Language Model Projects: Why It’s Strategically Important to Monitor Usage
Are you delivering LLM-powered products? It’s important to set up infrastructure to securely log the flood of new usage metrics. These logs will be extremely useful down the line, enabling you to protect proprietary data, manage costs, understand client interests, avoid redundant computation, and improve overall product performance. Read our perspective on the “what” and “why” of proper tracking and key system requirements.
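A minimal sketch of the kind of usage record such an infrastructure might capture: a per-call record of who called which model, token counts, and latency, plus a simple cost roll-up. The field names and per-token price are illustrative assumptions, not a reference to any specific system.

```python
from dataclasses import dataclass, asdict

@dataclass
class LLMUsageRecord:
    # Illustrative fields; real systems also capture timestamps,
    # request IDs, and redacted prompt metadata.
    user_id: str
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float

class UsageLogger:
    def __init__(self):
        self.records: list[dict] = []  # stand-in for a secure log store

    def log(self, rec: LLMUsageRecord) -> None:
        self.records.append(asdict(rec))

    def cost_estimate(self, price_per_1k: float = 0.002) -> float:
        # Aggregate token usage to estimate spend; price is hypothetical.
        total = sum(r["prompt_tokens"] + r["completion_tokens"]
                    for r in self.records)
        return total / 1000 * price_per_1k
```

With records like these, the same log stream supports cost dashboards, anomaly detection on proprietary-data access, and latency monitoring.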
Finding the Perfect Blend: Merging Large Language Models with Classic Machine Learning Techniques
AI technologies have the potential to significantly enhance how businesses operate, so the key to future success lies in knowing how to fuse the strengths of traditional machine learning with generative AI. Explore four main scenarios for your company to consider as it integrates LLMs into workflows.
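One common blending pattern, sketched below under stated assumptions: a lightweight classic model handles high-volume structured classification cheaply, and only ambiguous cases are escalated to an LLM. The feature names, weights, and thresholds are all hypothetical; the stand-in scorer plays the role of a trained model such as a logistic regression.

```python
import math

def classic_score(features: dict) -> float:
    # Stand-in for a trained classic ML model (e.g., logistic
    # regression): weights here are illustrative, not learned.
    weights = {"revenue_growth": 0.6, "debt_ratio": -0.4}
    z = sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))  # sigmoid -> probability-like score

def route(features: dict, low: float = 0.35, high: float = 0.65):
    # Confident predictions stay with the cheap classic model;
    # borderline cases are escalated to an LLM for richer reasoning.
    p = classic_score(features)
    if p < low or p > high:
        return ("classic", round(p, 3))
    return ("llm", round(p, 3))
```

The design choice here is cost control: the LLM is invoked only where the classic model is uncertain, which is one of the scenarios for combining the two families of techniques.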
AI Quick Bite: Embeddings and Large Language Models
Embeddings are a foundational AI concept: they capture similarity between text snippets and enable robust, accurate natural language search. FactSet uses embeddings to create smart private company comparables that provide far more value than comps created with industry classifications or curated keywords. Get the details and learn how embeddings allow computers to better understand and process human language, paving the way for more advanced Natural Language Processing and AI applications like Large Language Models.
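The comparables idea reduces to nearest-neighbor search over embedding vectors. In this minimal sketch the company names and three-dimensional vectors are invented for illustration; real embeddings come from a model and typically have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings of company descriptions (toy 3-D vectors).
COMPANIES = {
    "AcmeCloud": [0.9, 0.1, 0.0],
    "DataGrid":  [0.8, 0.2, 0.1],
    "OilCo":     [0.0, 0.1, 0.95],
}

def comparables(target: str, k: int = 1) -> list[tuple[str, float]]:
    # Rank every other company by similarity to the target's embedding.
    scores = [(name, cosine_similarity(COMPANIES[target], vec))
              for name, vec in COMPANIES.items() if name != target]
    return sorted(scores, key=lambda item: item[1], reverse=True)[:k]
```

Because similarity is computed in embedding space rather than over keywords or industry codes, two companies with different labels but similar business descriptions still surface as comps.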
We Gave Our Employees Access to ChatGPT. Here’s What Happened.
Generative AI can become a powerful tool for employees to efficiently learn and grow. But can employers minimize the risks of providing access? Yes, there’s a way. Learn about our approach and how our employees have embraced our LLM for research and learning.
How We Use LLM Technology to Supercharge Junior Banker Workflows
To boost junior banker productivity and efficiency, we delivered our beta release of an LLM-based knowledge agent, FactSet Mercury. Junior bankers can use the chatbot to access source-linked financial data, generate pitch-ready charts, and perform a SWOT analysis from a company’s 10-K. Watch how it works in this 4-minute video.
How We Use AI to Summarize Earnings Call Q&A Discussions
Our clients know that Q&A discussions in earnings calls can yield material insights. But who has time to listen and extract key themes from hundreds of calls? Large Language Models, properly engineered, can do this. Here, we share our experience and key learnings from developing an LLM capability to help your organization on its LLM journey.
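One widely used engineering pattern for summarizing long transcripts, shown here only as a generic sketch and not as FactSet's method: split the call into chunks that fit a model's context window, summarize each chunk, then combine the partial summaries in a final pass (a map-reduce style pipeline). Chunk sizes and prompt wording are illustrative.

```python
def chunk(transcript: str, max_words: int = 200) -> list[str]:
    # Split a long transcript into pieces small enough for one prompt.
    words = transcript.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def map_prompts(chunks: list[str]) -> list[str]:
    # "Map" step: one summarization prompt per chunk.
    return [f"Summarize the key themes in this earnings-call Q&A excerpt:\n{c}"
            for c in chunks]

def reduce_prompt(partial_summaries: list[str]) -> str:
    # "Reduce" step: merge the partial summaries into one final request.
    joined = "\n- ".join(partial_summaries)
    return ("Combine these partial summaries into one list of material "
            f"themes:\n- {joined}")
```

Each prompt would be sent to an LLM in turn; the chunking step is what lets the approach scale to hundreds of calls that individually exceed a model's context window.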