AI Case Study
ChatGPT
What Is ChatGPT?
ChatGPT is a large language model chatbot developed by OpenAI based on
GPT-3.5. It has a remarkable ability to interact in conversational dialogue form
and provide responses that can appear surprisingly human.
Large language models perform the task of predicting the next word in a series
of words.
OpenAI is also known for DALL·E, a deep-learning model that generates images from text instructions called prompts.
Microsoft is a partner and investor to the tune of $1 billion. The two companies jointly developed the Azure AI Platform.
Large Language Models
ChatGPT is a large language model (LLM). LLMs are trained on massive amounts of data to accurately predict what word comes next in a sentence. Researchers discovered that increasing the amount of training data increased the ability of the language models to do more. According to Stanford University:
“GPT-3 has 175 billion parameters and was trained on 570 gigabytes of text. For comparison, its
predecessor, GPT-2, was over 100 times smaller at 1.5 billion parameters. This increase in scale drastically
changes the behavior of the model — GPT-3 is able to perform tasks it was not explicitly trained on, like
translating sentences from English to French, with few to no training examples. This behavior was mostly
absent in GPT-2. Furthermore, for some tasks, GPT-3 outperforms models that were explicitly trained to
solve those tasks, although in other tasks it falls short.”
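The "few to no training examples" behavior the quote describes is often called few-shot prompting: example translations are placed directly in the prompt, and the model continues the pattern. Below is a rough sketch of what that looks like, assuming the legacy (pre-1.0) openai Python SDK and the text-davinci-003 model name; any GPT-3-class completion endpoint would behave similarly.

```python
# A sketch of few-shot prompting: the translation examples live in the
# prompt itself, not in any fine-tuning step. Assumes the legacy (pre-1.0)
# `openai` Python SDK; the model name is an assumption.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Translate English to French.\n"
    "English: The weather is nice today.\n"
    "French: Il fait beau aujourd'hui.\n"
    "English: Where is the train station?\n"
    "French: Où est la gare ?\n"
    "English: I would like a coffee.\n"
    "French:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model name
    prompt=prompt,
    max_tokens=20,
    temperature=0,
)
print(response.choices[0].text.strip())  # e.g. "Je voudrais un café."
```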
LLMs predict the next word in a series of words in a sentence, and the next sentences after that: something like autocomplete, but at a mind-bending scale. This ability allows them to write paragraphs and entire pages of content. But LLMs are limited in that they don't always understand exactly what a human wants. That is where ChatGPT improves on the state of the art, with Reinforcement Learning from Human Feedback (RLHF) training, described below.
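To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction using the small, openly available GPT-2 model via the Hugging Face transformers library; it stands in for GPT-3.5, which is not publicly downloadable.

```python
# A minimal sketch of next-word prediction, the core task LLMs are
# trained on. GPT-2 is used as a stand-in for ChatGPT's underlying model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The logits at the last position score every vocabulary token as a
# candidate next word; show the five most likely continuations.
top = torch.topk(logits[0, -1], k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```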
How Was ChatGPT Trained?
GPT-3.5 was trained on massive amounts of data about code and information
from the internet, including sources like Reddit discussions, to help ChatGPT
learn dialogue and attain a human style of responding.
ChatGPT was also trained using human feedback (a technique called Reinforcement Learning from Human Feedback) so that the AI learned what
humans expected when they asked a question. Training the LLM this way is
revolutionary because it goes beyond simply training the LLM to predict the
next word.
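As a simplified sketch of how the training signal changes: during RLHF fine-tuning, a sampled answer is scored by a learned reward model, with a penalty for drifting too far from the original supervised model, rather than being graded on next-word likelihood alone. All numbers and the coefficient below are made-up illustrations, not OpenAI's values.

```python
# A sketch of the RLHF training signal: the reward model's score for an
# answer, minus a KL penalty keeping the tuned policy close to the original
# supervised model. All values here are invented for illustration.
import torch

beta = 0.02  # KL penalty coefficient (assumed value)
reward_model_score = torch.tensor(1.3)  # stand-in score for one sampled answer

# Log-probabilities each model assigns to the answer's tokens.
logprobs_policy = torch.tensor([-1.2, -0.7, -2.1])  # RL-tuned model
logprobs_sft = torch.tensor([-1.0, -0.9, -1.8])     # frozen supervised model

# Per-sample KL estimate, then the scalar reward used by the RL step:
# the model is rewarded for answers humans would prefer, not merely for
# producing likely next words.
kl = (logprobs_policy - logprobs_sft).sum()
reward = reward_model_score - beta * kl
print(float(reward))
```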
A March 2022 research paper titled Training Language Models to Follow
Instructions with Human Feedback explains why this is a breakthrough
approach:
“This work is motivated by our aim to increase the positive impact of large
language models by training them to do what a given set of humans want them
to do.
By default, language models optimize the next word prediction objective, which
is only a proxy for what we want these models to do.
Our results indicate that our techniques hold promise for making language
models more helpful, truthful, and harmless.
Making language models bigger does not inherently make them better at
following a user’s intent.
For example, large language models can generate outputs that are untruthful,
toxic, or simply not helpful to the user.
In other words, these models are not aligned with their users.”
The engineers who built ChatGPT hired contractors (called labelers) to rate the outputs of the two systems, GPT-3 and the new InstructGPT (a "sibling model" of ChatGPT).
Based on those ratings, the paper notes: "InstructGPT shows small improvements in toxicity over GPT-3, but not bias."
The research paper concludes that the results for InstructGPT were positive, while also noting that there was room for improvement.
“Overall, our results indicate that fine-tuning large language models using
human preferences significantly improves their behavior on a wide range of
tasks, though much work remains to be done to improve their safety and
reliability.”
What sets ChatGPT apart from a simple chatbot is that it was specifically trained to understand
the human intent in a question and provide helpful, truthful, and harmless answers.
Because of that training, ChatGPT may challenge the premise of certain questions and disregard parts of a question that don't make sense.
Another research paper related to ChatGPT shows how the researchers trained the AI to predict what humans preferred.
The researchers noticed that the metrics used to rate the outputs of natural language
processing AI resulted in machines that scored well on the metrics, but didn’t align with what
humans expected.
The following is how the researchers explained the problem:
“Many machine learning applications optimize simple metrics which are only rough
proxies for what the designer intends. This can lead to problems, such as YouTube
recommendations promoting click-bait.”
So the solution they designed was to create an AI that could output answers optimized for what humans preferred.
To do that, they trained the AI using datasets of human comparisons between different
answers so that the machine became better at predicting what humans judged to be
satisfactory answers.
The February 2022 research paper, titled Learning to Summarize from Human Feedback, shares that training was done by summarizing Reddit posts, and that the approach was also tested on summarizing news articles.
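Comparison-based training of this kind is typically implemented as a pairwise ranking loss: the reward model learns to score the human-preferred answer above the rejected one. The toy sketch below uses a linear stand-in for the reward model and synthetic preference data; in the paper, the reward model is itself a large transformer scoring text.

```python
# A minimal, runnable sketch of reward-model training from human
# comparisons. The linear "reward model", embedding dimension, and
# synthetic preferences are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
embed_dim = 16
hidden_quality = torch.randn(embed_dim)  # the "taste" labelers apply, unknown to the model

reward_model = nn.Linear(embed_dim, 1)  # maps an answer embedding to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    # Two candidate answers per prompt; the labeler prefers whichever
    # scores higher on the hidden quality direction.
    a = torch.randn(64, embed_dim)
    b = torch.randn(64, embed_dim)
    prefer_a = (a @ hidden_quality) > (b @ hidden_quality)
    chosen = torch.where(prefer_a.unsqueeze(1), a, b)
    rejected = torch.where(prefer_a.unsqueeze(1), b, a)

    # Pairwise ranking loss: push the preferred answer's reward above
    # the rejected answer's reward.
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.3f}")  # falls well below ln(2) ≈ 0.693 as it learns
```

One reason this setup is attractive: labelers only have to rank outputs rather than write ideal answers from scratch, which makes human feedback much cheaper to collect at scale.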
Beyond the training techniques themselves, the ethical considerations and responsible use of ChatGPT have been highlighted, emphasizing the importance of maintaining transparency, privacy, and fairness when integrating AI systems into real-world scenarios. As we move forward, it is crucial for developers, businesses, and policymakers to collaborate in establishing robust guidelines and regulations that ensure the ethical deployment of ChatGPT and similar technologies.