
Week #2 Module: Large Language Models

Site: Massive Open Distance eLearning (MODeL)
Printed by: Gaudencio III Lingamen
Course: AI Essentials: Theory and Practice (21 May-18 Jun 2025)
Date: Friday, 23 May 2025, 5:59 PM
Book: Week #2 Module: Large Language Models


Table of contents

1. Objectives: What You'll Learn
2. Review: What's an LLM Again?
3. Common LLMs in the Market
4. Where Does the Magic Come From?
   4.1. Key to LLM Accuracy #1: Pre-Training
   4.2. Key to LLM Accuracy #2: Fine-Tuning
5. Clearing Up a Myth: AI Is Not Sentient
6. Introduction to Prompt Engineering


1. Objectives: What You'll Learn

By the end of this module, you’ll be able to:

Explain how LLMs work.
Understand the role of pre-training and fine-tuning in improving LLM performance.
Identify a common LLM misconception (e.g., AI sentience).
Define prompt and prompt engineering.


2. Review: What's an LLM Again?

Have you tried asking an LLM to give you a sample itinerary for your dream destination?

As a review, an LLM is a type of GenAI designed to produce human-like text. An LLM's primary function is to generate contextually appropriate continuations of a given input prompt by predicting the most probable sequence of words.

A simple way to understand this is through an example. Given the prompt:

“The capital of the Philippines is ______”

The LLM predicts and fills in the most probable next word. This predictive mechanism enables LLMs to perform a wide range of natural language processing tasks, such as summarization, translation, question answering, content generation, and dialogue simulation, with a high degree of fluency and relevance.

Examples of LLMs include ChatGPT, Google Gemini, DeepSeek, Microsoft Copilot, and other advanced AI chatbots. The next section discusses
them in greater detail.


3. Common LLMs in the Market

ChatGPT (https://chatgpt.com/)

Launched in November 2022, ChatGPT is a conversational AI developed by OpenAI. It quickly became one of the most
widely used LLMs globally, marking a major breakthrough in making large language models accessible to the general
public. Its widespread adoption was also driven by its user-friendly interface and strong contextual understanding.

Google Gemini (https://gemini.google.com/)

Originally released as Bard in early 2023, Google's flagship LLM was rebranded as Gemini in late 2023 to align with the company's broader AI model family. Gemini integrates deep language capabilities with Google's massive knowledge graph and search infrastructure. It is designed to integrate seamlessly with Google Workspace, making it a strong contender in the AI assistant space.

Microsoft Copilot (https://copilot.microsoft.com/)

Microsoft Copilot is an AI assistant powered by OpenAI’s GPT models but fine-tuned for productivity and enterprise tasks. Introduced in
2023, its deep integration with Microsoft’s productivity suite has positioned it as a practical, task-focused implementation of LLM
technology.

DeepSeek (https://chat.deepseek.com/)

DeepSeek is a newer entrant in the LLM landscape, developed by the Chinese company DeepSeek AI. Released in 2024, it gained attention for its open-source model, which demonstrated strong performance on both English and Chinese benchmarks. DeepSeek offers robust capabilities for research, enterprise, and educational applications, contributing to the growing diversity of open LLM ecosystems.

The figure referenced here (not reproduced in this text version) shows the market share of LLMs.

Source: https://firstpagesage.com/reports/top-generative-ai-chatbots/


4. Where Does the Magic Come From?

You might be wondering: how can LLMs seem to perform tasks so effortlessly—almost like magic?

LLMs are trained on massive datasets (often from the Internet), including books, articles, websites, and user-generated content.

They learn to understand language by recognizing statistical patterns between words, phrases, and sentences.

The "magic" lies in this: the more data they see, the better they become at mimicking human-like responses.
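
As a loose illustration of what "recognizing statistical patterns" means, the toy sketch below simply counts which word tends to follow which in a tiny made-up corpus. Real LLMs use large neural networks trained on far more data, but the underlying idea of learning word-to-word statistics is the same:

```python
# Toy illustration only: real LLMs use neural networks, not simple counts,
# but both learn statistical patterns about which words follow which.
from collections import Counter, defaultdict

corpus = (
    "the capital of the philippines is manila . "
    "the capital of france is paris ."
)
words = corpus.split()

# Count how often each word follows each other word (a "bigram" table).
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> tuple[str, float]:
    """Return the most frequent next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("capital"))  # ('of', 1.0)
print(predict_next("is"))       # ('manila', 0.5) or ('paris', 0.5)
```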

Furthermore, there are two key processes that contribute to their accuracy.


4.1. Key to LLM Accuracy #1: Pre-Training

Pre-training is the process where the model is shown partial text and asked to predict the next word.

In other words, the AI (essentially a computer program) learns to predict the next word by calculating a probability distribution over all possible words. It does this by being exposed to billions of text examples and repeating the prediction trillions of times, becoming more accurate over time.

For example, given a prompt like “The center of the solar system is the __,” the AI model calculates the probability of different words and selects the most likely one, such as choosing “sun” with 95% confidence.
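
The selection step can be pictured in a few lines of code. The probabilities below are made up to mirror the example in the text; a real model computes a distribution over its entire vocabulary:

```python
# Hypothetical probabilities for the prompt
# "The center of the solar system is the __"
candidates = {"sun": 0.95, "moon": 0.02, "earth": 0.02, "star": 0.01}

# Greedy decoding: pick the single most probable word.
prediction = max(candidates, key=candidates.get)
print(prediction)  # -> sun
```

In practice, models often sample from the distribution instead of always taking the top word, which is why the same prompt can produce slightly different answers.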

Essentially, an LLM is an auto-complete on steroids!


4.2. Key to LLM Accuracy #2: Fine-Tuning

LLMs then undergo fine-tuning, a process that relies on feedback from human reviewers. AI companies hire teams of human reviewers to rate the quality of the model’s responses.

This technique is also known as Reinforcement Learning from Human Feedback (RLHF) and is crucial to making AI safer, more accurate,
and more useful in real-world settings.

Reviewers evaluate AI model responses and indicate which ones are better or worse. Their feedback is then used to fine-tune (improve) AI, so
that it can give better answers when it encounters similar questions in the future.
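
To give a feel for what this feedback looks like as data, here is a simplified, hypothetical sketch of a single preference record. Real RLHF pipelines collect many such comparisons, train a separate reward model on them, and then optimize the LLM against that reward model; the names and texts below are illustrative only:

```python
# Hypothetical data shape for human feedback used in RLHF-style fine-tuning.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the human reviewer rated as better
    rejected: str  # the response the reviewer rated as worse

feedback = [
    PreferencePair(
        prompt="Explain photosynthesis to a ten-year-old.",
        chosen="Plants use sunlight to turn air and water into their food.",
        rejected="Photosynthesis is the photochemical conversion of photons...",
    ),
]

# A reward model is trained to score each `chosen` response higher than its
# `rejected` counterpart; the LLM is then fine-tuned to produce responses
# the reward model scores highly.
```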


5. Clearing Up a Myth: AI Is Not Sentient

LLMs can sound intelligent, empathetic, or emotional, but they are not conscious beings. They do not understand in the way humans do, nor
do they have awareness, goals, or feelings. Their outputs are generated through complex pattern-matching, not reasoning or intention.

So, do you still believe that AI—like ChatGPT—is sentient or has emotions? It's no different from saying that a probability distribution can feel
something. In reality, it’s all just pattern prediction, not consciousness.

In summary:

LLMs are not sentient.
They operate using probability distributions.
No internal beliefs, desires, or feelings exist within them.
Any appearance of empathy or intelligence is the result of training on human-generated text, not internal awareness.

Key takeaway: This is not a dangerous magic genie. This is statistics executed on an enormous scale.

Source: https://the-decoder.com/genai-is-just-advanced-automation-not-a-panacea-or-an-existential-threat-says-stephen-wolfram


6. Introduction to Prompt Engineering

A prompt is the input or instruction you give to an AI model (like ChatGPT) to guide its response. It can be a question, command, sentence,
or even a set of guidelines that tells the AI what you want it to do.

Examples:

Vague: "Write something about climate."


Better: "Write a paragraph explaining how climate change affects farming in the Philippines."

Therefore, prompt engineering is the skill of writing clear and specific instructions to get better results from AI tools. Think of it as giving the
AI better clues.
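
As a small sketch of how this looks in practice, the snippet below sends both prompts above to a chat model through the OpenAI Python SDK. The SDK, the model name, and the OPENAI_API_KEY environment variable are assumptions for illustration; any chat-capable LLM API would work the same way:

```python
# Minimal sketch: comparing a vague prompt with a specific one.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Write something about climate.",  # vague
    "Write a paragraph explaining how climate change "
    "affects farming in the Philippines.",  # specific
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}")
    print(response.choices[0].message.content)
    print("-" * 40)
```

The second prompt should produce a noticeably more focused answer, which is the whole point of prompt engineering.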

Simple tip: The more specific your request, the more useful the output!

For now, take a moment to reflect:

What would you want an LLM to do for you?
Do you already have a go-to prompt (perhaps your favorite prompt) that works well?

We’ll explore more advanced and/or practical uses of prompt engineering in the next module.
