ChatGPT has become a technological tsunami since San Francisco-based OpenAI launched it in November 2022, attracting millions of users at a record pace and taking the spotlight in news headlines. It has also introduced the concept of Large Language Models to a much broader consumer audience. Let’s look into what LLMs are, how they are becoming more useful, and what to watch for as the technology rapidly evolves.
LLMs are meant to improve your efficiency. They can quickly perform a variety of tasks related to written content, as the short sketch after this list illustrates:
Translate text from one language to another
Answer questions from a given text
Write reports, essays, website code, and many other types of content
Simplify complex topics for wider audiences
Summarize key themes from large amounts of text
Classify text into categories
Indicate sentiment, such as whether a news report is positive or negative
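As a concrete illustration, several of these tasks can be run today with openly available models. The sketch below assumes the open-source Hugging Face transformers library and small public models; it is an example of the kinds of tasks listed above, not how ChatGPT itself is accessed.

```python
# A minimal sketch of two of the tasks above, assuming the open-source
# Hugging Face "transformers" library and small public models.
# pip install transformers torch sentencepiece

from transformers import pipeline

# Sentiment: label a sentence as positive or negative
sentiment = pipeline("sentiment-analysis")
print(sentiment("The company's quarterly results beat expectations."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Translation: English to French, using the small open-source T5 model
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("Large language models can summarize and classify text."))
# e.g. [{'translation_text': 'Les grands modèles de langue ...'}]
```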
At a basic level, LLMs compute the probability of the most appropriate next word or set of words based on the information provided, otherwise known as a prompt. For example, given the sentence “The students opened their,” the LLM will generate a list of candidate words to finish the sentence, e.g., books, laptops, exams, minds, and select the word statistically most likely to fit the context of the request, based on the patterns it learned during training.
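To make this concrete, here is a minimal sketch of that next-word calculation using the small, open-source GPT-2 model via the Hugging Face transformers library. This is an illustration under those assumptions; ChatGPT's own internals are not public, but the underlying principle is the same.

```python
# A minimal sketch of next-word probability with the open-source GPT-2 model.
# pip install transformers torch

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The students opened their"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Turn the scores for the position after the prompt into probabilities
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five most likely continuations, e.g. "books", "eyes", "own", ...
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]).strip():>10}  {prob.item():.3f}")
```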
ChatGPT is a conversational AI (or chatbot) designed to respond to a user’s question or instruction. One of the key elements that sets ChatGPT apart from other LLMs is that it was trained with Reinforcement Learning from Human Feedback (RLHF): humans provided feedback on its answers to help it generate responses that align better with human expectations.
To provide the best possible response, conversational Large Language Models must be exposed to, or trained with, large amounts of text. Throughout training, the LLM adjusts an internal set of parameters, the numerical weights that encode what it has learned. These parameters, together with generation settings such as temperature, shape the model’s responses, for example whether you receive the same simple answer to your question each time or several creative, elaborate ones.
The “Large” in Large Language Models refers to the substantial number of parameters these models contain. For example, OpenAI's GPT-3 has 175 billion parameters, roughly three orders of magnitude more than many earlier language models.
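For a sense of scale, parameters can be counted directly, since they are simply the learned weights inside the network. The sketch below (again assuming the transformers library) counts them for the small open-source GPT-2 model, which has roughly 124 million; GPT-3’s 175 billion is more than a thousand times larger.

```python
# A minimal sketch: count the learned parameters (weights) in a small model.
# pip install transformers torch

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
num_params = sum(p.numel() for p in model.parameters())
print(f"GPT-2 (small) has about {num_params:,} parameters")  # ~124 million
```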
ChatGPT is not the first LLM. A number of LLMs have been operating for several years; here are five of the many in operation.
Name | Developer | Parameters
Megatron-Turing NLG | Microsoft and Nvidia | 530 billion
BLOOM | Hugging Face | 176 billion
LLaMA | Meta | 65 billion
BARD | Google | 40 billion
GPT-J | EleutherAI | 6 billion
Our cognitive and artificial intelligence (AI) teams have been utilizing Machine Learning and AI throughout our products since 2007. FactSet has been using Large Language Models such as BLOOM, Google’s BERT, and T5 across our suite of products and services since 2018. More specifically, Large Language Models have enabled higher productivity and optimization within data extraction, natural language understanding for search and chat, text generation, text summarization, and sentiment analysis across FactSet’s digital platform.
Our use of LLMs has resonated with clients over the years. We are further investigating key areas of promise in newer LLMs such as GPT-4 and ChatGPT, for example summarization and authoring, domain-specific search, and text-to-code functionality. LLM costs, commercial availability, security, and suitability for specific use cases vary considerably, but we are committed to advancing our capabilities as the LLM market evolves.
The LLM market is hypercompetitive, which over time could provide users with better accuracy, more efficiency, and lower prices.
In our view, it’s important to remain pragmatic and “read the fine print” amid the hype and noise surrounding ChatGPT. Without a doubt, it is a material step toward a simpler user experience, but it is not perfect. For example, business institutions might find that ChatGPT provides answers drawn from disreputable sources. In addition, LLMs raise valid legal and ethical concerns, such as:
Hallucinations. ChatGPT generates statistically likely text from its training data—not verified answers extracted from reliable sources—so at times, outputs are fabricated or misleading.
Bias. Because LLMs are trained on human-generated data, bias and prejudice are inherent.
Intellectual property infringements. This is a concern with ChatGPT because the model can generate outputs that are similar or identical to existing text, potentially leading to copyright violations.
To learn more, visit FactSet Artificial Intelligence and read our March article, "Understanding ChatGPT: The Promise and Nuances of Large Language Models."
This blog post is for informational purposes only. The information contained in this blog post is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this article.