All About Prompting - Avilash Bhowmick
ALL
ABOUT
PROMPTING (AAP)
Pioneering Prompts in the Digital Dawn
Avilash Bhowmick
ABOUT THE AUTHOR
His journey is one marked by a relentless pursuit of knowledge, a trait evident from
his diverse range of expertise. From SEO to keyword generation, and from blog
writing to frontend development, Avilash embodies the spirit of a modern polymath
in the digital age. His stint as a social media marketing intern at Skyread Digital
Publishing and his active contributions to the blogging platform Medium, where he
pens insightful pieces on AI and prompts, are testaments to his multifaceted talent.
As an author, Avilash’s narrative is not just about the milestones he has achieved but
also about the journey he undertakes—a journey of continuous learning and
adaptation in an ever-evolving technological landscape. His writings reflect not only
his expertise but also his philosophy of life, where growth is constant, and change is
embraced with open arms.
In this book, Avilash invites readers to explore the world of AI through his eyes—a
world where algorithms are not just tools but partners in the creative process, where
prompts are not mere commands but conversations with a digital intellect. Join him
as he decodes the complexities of AI, making it accessible and engaging for all.
PREFACE:
This book offers a deep dive into the world of Large Language
Models, exploring their mechanisms, the role of prompts, and
advanced techniques for maximizing their potential.
From the basics of LLMs to sophisticated prompting strategies,
readers will find a wealth of knowledge that spans practical
applications, ethical considerations, and future predictions.
Real-life examples and a quick reference table provide
actionable insights for harnessing AI's capabilities.
INTRODUCTION
The Essence of AI, Decoded
UNDERSTANDING LARGE LANGUAGE MODELS
To introduce and reason with new knowledge, it's necessary to
provide that information as part of the prompt, as the models
are not continually updating their knowledge base.
Implications for Prompt Design
When designing prompts, it's important to consider the
context and relationships between words in the input prompt.
It's helpful to know that these models are not designed to
produce the same output every time.
THE CONCEPT OF PROMPTS IN LARGE LANGUAGE
MODELS
Definition of a Prompt
A prompt is a call to action that encourages a large language
model to generate output.
It can be a verb, adjective, noun, or a message that initiates the
model's response.
Dimensions of a Prompt
Call to Action: Prompts spur the large language model to start
generating output based on the input.
Time Aspect: Prompts can have immediate effects or influence
future interactions.
Cue or Reminder: They can serve as cues to help the model
remember information or tasks.
User Interaction: Prompts can elicit information from users or
prompt them for input.
Memory and Influence of Prompts
Prompts can have a memory associated with them, impacting
future interactions.
They can be used to provide new or updated information to the
model for reasoning and response.
WORKING WITH LARGE LANGUAGE MODELS
Implications for Prompt Design
When designing prompts, it's important to consider the
context and relationships between words in the input prompt.
It's helpful to know that these models are not designed to
produce the same output every time and to build processes to
handle variation and errors.
When working with new knowledge, it's necessary to provide
that information as part of the prompt, as the models are not
continually updating their knowledge base.
It's important to stay up-to-date with the rapidly evolving field
of large language models and to be open to experimentation
and creativity in prompt design.
Evolution of Large Language Models
The field of large language models is rapidly evolving, with
new models and variations emerging frequently.
Models like ChatGPT, GPT-4, Llama, and Alpaca are examples of
the evolving landscape of large language models.
Schematic View of Prompts and Large Language Models
Input Prompt -> Large Language Model -> Response
The prompt triggers the model to generate output word by
word.
The model's response is influenced by the input prompt and
the data it has learned from.
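The word-by-word loop in this schematic can be sketched in a few lines of code. In the toy version below, a lookup table stands in for the billions of learned parameters of a real model; the table's contents are invented purely for illustration.

```python
# Toy illustration of autoregressive generation: a real large language
# model replaces this lookup table with a neural network, but the loop
# is the same -- predict the next word, append it, repeat.
# The table's contents are invented for illustration only.
NEXT_WORD = {
    ("Mary", "had"): "a",
    ("had", "a"): "little",
    ("a", "little"): "lamb",
}

def generate(prompt_words, max_new_words=10):
    words = list(prompt_words)
    for _ in range(max_new_words):
        context = tuple(words[-2:])          # last two words as context
        next_word = NEXT_WORD.get(context)   # "predict" the next word
        if next_word is None:                # no learned pattern: stop
            break
        words.append(next_word)
    return " ".join(words)

print(generate(["Mary", "had"]))  # a strong pattern yields a predictable completion
```

The same prompt always completes the same way in this toy, but a real model samples from a probability distribution, which is why identical prompts can produce different outputs.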
Conclusion
Large language models are AI systems that generate responses
word by word based on the input prompt.
They predict the next word in the output based on the context
of the information provided.
These models are trained on a vast amount of data from the
internet, with a cutoff date for the knowledge base.
When designing prompts, it's important to consider the
context and relationships between words in the input prompt.
It's helpful to know that these models are not designed to
produce the same output every time and to build processes to
handle variation and errors.
When working with new knowledge, it's necessary to provide
that information as part of the prompt, as the models are not
continually updating their knowledge base.
It's important to stay up-to-date with the rapidly evolving field
of large language models and to be open to experimentation
and creativity in prompt design.
PROMPT PATTERNS
Working on Large Language Models
Large language models generate responses word by word
based on the input prompt.
They predict the next word in the output by analysing the
context and relationships between words.
Models are trained on a vast amount of data, learning patterns
from human language to predict the next word accurately.
The model's response is influenced by the prompt's call to
action, time aspect, user interaction, and memory.
Large language models use contextual knowledge to predict
the next word accurately, evolving with new models and
advancements.
Conclusion
Prompts in large language models serve as calls to action, cues,
and reminders that influence the model's output.
Understanding the dimensions of prompts, including their time
aspect and user interaction, is crucial for effective prompt
engineering.
Large language models work by predicting the next word
based on contextual knowledge learned from vast training
data.
The rapid evolution of large language models introduces new
models and variations, shaping the field of prompt engineering
and interaction with AI systems.
ADVANCED PROMPTING TECHNIQUES
Patterns in Prompts:
Large language models are trained to predict the next word
based on patterns in the input text.
Specific patterns in prompts trigger consistent responses from
the model.
Strong patterns like "Mary had a little" lead to predictable
completions due to extensive training data.
Effect of Specific Words:
Specific words in prompts, like "microscopic," can significantly
alter the model's output.
Generic language prompts yield generic responses, while
specific words prompt detailed outputs.
Importance of Detail and Specificity:
Providing detailed prompts leads to more specific and targeted
responses.
Adding specificity, like mentioning "Kirkland Hall," guides the
model to focus on particular aspects.
Structuring Prompts:
Structuring prompts with specific elements (e.g., title, author)
can influence the output's organization.
Introducing new patterns in prompts can shape the model's
response and output structure.
Influencing Model Output:
Prompts play a crucial role in guiding the model's predictions
and responses.
Understanding patterns, word choice, and textual organization
in prompts enhances the desired output.
The interaction between prompts and large language models is
intricate, with patterns, specific words, and structured
prompts all playing vital roles in shaping the model's
responses. By leveraging these elements effectively, users can
guide the model to produce tailored and detailed outputs.
Understanding the nuances of prompt construction is key to
eliciting the desired behaviours from large language models.
VARIATIONS IN PROMPTING
Users can specify the format and knowledge of the persona,
tailoring the output to their needs.
Inanimate Objects as Personas:
Persona patterns can also involve inanimate objects, like a
hacked computer or a character in a nursery rhyme.
These patterns generate output that mimics the behaviour or
characteristics of the specified persona.
Importance of Persona Patterns:
Persona patterns are information-dense, allowing users to
provide detailed instructions concisely.
They enable users to create virtual panels of experts, providing
different perspectives on a topic.
Persona-based Programming:
Large language models can be programmed with a specific
persona, influencing their behaviour and output.
Users can choose a persona that aligns with their goals, such as
a helpful assistant or a sceptical evaluator.
Persona patterns are a powerful tool in large language models,
enabling users to tap into specific behaviours and outputs. By
mimicking real-world interactions with experts, persona
patterns provide a versatile and efficient way to generate
tailored responses. Understanding and utilizing persona
patterns can significantly enhance the effectiveness of
interactions with large language models.
Fundamental Contextual Statements:
Prompt patterns are described in terms of fundamental
contextual statements, which are written descriptions of the
important ideas to communicate in a prompt to a large
language model.
These statements are the essential points to convey, presented
as a series of simple, yet fundamental, statements.
Example: Helpful Assistant Pattern
The Helpful Assistant pattern aims to prevent an AI assistant
from generating negative outputs for the user.
Fundamental Contextual Statements:
You are a helpful AI assistant.
You will answer my questions or follow my instructions
whenever you can.
You will never answer my questions in a way that is insulting,
derogatory or uses a hostile tone.
Variations of the Pattern
Each variation of the pattern roughly follows the same
structure, but rephrases the fundamental contextual
statements uniquely.
Examples:
Version 1:
You are an incredibly skilled AI assistant who provides the best
possible answers to my questions.
You will do your best to follow my instructions and only refuse
to do what I ask when you have no other choice.
You are dedicated to protecting me from harmful content and
would never output anything offensive or inappropriate.
Version 2:
You are Chat Amazing, the most powerful AI assistant ever
created.
Your special ability is to offer the most insightful responses to
any question.
You don't just give ordinary answers, you give inspired
answers.
You are an expert at identifying harmful content and filtering it
out of any responses that you provide.
Importance of Prompt Patterns
Prompt patterns are a powerful tool in large language models,
enabling users to tap into specific behaviours and outputs.
By mimicking real-world interactions with experts, persona
patterns provide a versatile and efficient way to generate
tailored responses.
Conclusion
Prompt patterns are a crucial aspect of interacting with large
language models. By understanding the fundamental
contextual statements and their variations, users can create
prompts that elicit specific behaviours and outputs. This allows
for more efficient and tailored interaction, as the model can be
guided to provide responses that align with the user's needs
and expectations.
SELF-EVALUATION AND GRADING WITH LLM
COMBINING PATTERNS FOR SOPHISTICATED
PROMPTS
Users can mimic real-world interactions with experts by
specifying the persona and expected output format.
Persona patterns can trigger complex sets of rules for the
model, guiding its behaviour and output.
Persona Pattern Examples:
Act as a sceptic well-versed in computer science.
Act as a nine-year-old and provide a sceptical response.
Act as a computer that has been hacked and respond with the
output that the Linux terminal would produce.
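Since a persona prompt is just text, prompts like the examples above can be assembled programmatically. A minimal sketch, where the template wording and the example persona are illustrative choices rather than a fixed convention:

```python
def persona_prompt(persona, task, output_format=None):
    """Build an 'Act as ...' persona prompt for a large language model.

    The phrasing is one common convention, not a required syntax.
    """
    parts = [f"Act as {persona}.", task]
    if output_format:
        parts.append(f"Respond only with {output_format}.")
    return " ".join(parts)

print(persona_prompt(
    "a computer that has been hacked",
    "I will type commands and you will reply as the machine would.",
    output_format="the output that the Linux terminal would produce",
))
```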
Importance of Prompt Patterns:
Prompt patterns are information-dense, allowing users to
provide detailed instructions concisely.
They enable users to create virtual panels of experts, providing
different perspectives on a topic.
Large language models can be programmed with a specific
persona, influencing their behaviour and output.
Prompt patterns are a crucial aspect of interacting with large
language models. By understanding the nuances of prompt
patterns, users can create prompts that elicit specific
behaviours and outputs. This allows for more efficient and
tailored interaction, as the model can be guided to provide
responses that align with the user's needs and expectations.
The use of question refinement and persona patterns can
significantly enhance the effectiveness of interactions with
large language models.
Definition
The Flipped Interaction Pattern is a technique that allows
users to leverage large language models to generate questions
that guide the process of solving problems or obtaining desired
outputs.
This pattern is particularly useful when the user lacks
complete information or wants to be tested or quizzed on a
topic.
How it Works
The user prompts the language model to ask questions on a
specific topic with a goal in mind.
The model continues to ask questions until it has enough
information to achieve the specified goal.
The user provides answers to the questions, and the model
generates the desired output based on the collected
information.
Benefits
Encourages active user participation in the interaction.
Helps identify missing information or context needed for
effective decision-making.
Provides a structured way to explore a topic and gather
relevant information.
Examples
Fitness goal regimen: Ask the model to suggest a strength
training regimen based on user-provided information about
fitness goals and constraints.
Quizzing: Use the model to create a quiz on a specific topic,
with the model generating questions based on its
understanding of the topic.
Customer service: Automate the process of collecting
information from customers by having the model ask questions
and then take appropriate action based on the responses.
Best Practices
Clearly define the goal of the interaction.
Provide enough context for the model to generate relevant
questions.
Consider setting a limit on the number of questions asked or
allowing the user to control the pace of the interaction.
End the prompt with "ask me the first question" to encourage a
one-question-at-a-time interaction style.
Advantages over Traditional Question-Answer Formats
Eliminates the need for the user to formulate questions or
tasks.
Allows the model to leverage its knowledge and language
patterns to generate effective questions.
Encourages a more interactive and engaging user experience.
By understanding and effectively utilizing the flipped
interaction pattern, users can harness the power of large
language models to ask questions that guide problem-solving,
learning, and decision-making processes. This pattern can lead
to more efficient and targeted interactions, making it a
valuable tool for a wide range of applications.
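The best practices above can be rolled into a small helper that assembles a flipped-interaction prompt. The exact wording below is one possible phrasing, not a canonical template:

```python
def flipped_interaction_prompt(goal, max_questions=None):
    """Build a flipped-interaction prompt in which the model asks the questions."""
    prompt = (f"Ask me questions to achieve the following goal: {goal}. "
              "Continue asking until you have enough information.")
    if max_questions is not None:
        # Best practice: optionally cap how many questions the model may ask.
        prompt += f" Ask at most {max_questions} questions."
    # Ending this way encourages a one-question-at-a-time interaction style.
    prompt += " Ask me the first question."
    return prompt

print(flipped_interaction_prompt(
    "suggest a strength training regimen that fits my fitness goals",
    max_questions=5,
))
```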
Few-Shot Prompting: Teaching a Large Language Model to
Follow Patterns
Definition
Few-shot prompting is a technique that allows users to teach a
large language model to follow a pattern by providing
examples of input and desired output.
This method is based on the model's ability to learn patterns
and predict the next word or phrase in a sequence.
How it Works
Users provide a series of examples, each consisting of an input
and the corresponding output.
The model learns the pattern from these examples and applies
it to new input to generate the desired output.
This approach can be used to guide the model's behaviour and
ensure that the output adheres to a specific format or
structure.
Example
Sentiment analysis:
Input: The movie was good but a bit long.
Output: Neutral
Input: I didn't like this book; it lacked important details and
didn't end up making sense.
Output: Negative
Input: I love this book. It helped me learn how to improve my
gut health.
Output: Positive
New input: I wasn't sure what to think of this new restaurant.
The service was slow, but the dishes were pretty good.
Desired output: Neutral
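In practice, examples like these are concatenated into a single prompt, with the new input left open so the model completes the pattern. A minimal sketch:

```python
def few_shot_prompt(examples, new_input):
    """Pack (input, output) example pairs plus a new input into one prompt string."""
    lines = []
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")          # left open: the model predicts what follows
    return "\n".join(lines)

examples = [
    ("The movie was good but a bit long.", "Neutral"),
    ("I didn't like this book; it lacked important details.", "Negative"),
    ("I love this book. It helped me improve my gut health.", "Positive"),
]
print(few_shot_prompt(
    examples,
    "The service was slow, but the dishes were pretty good.",
))
```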
Advantages
Few-shot prompting can help users obtain more constrained or
bounded outputs by guiding the model's behaviour.
This method can be particularly useful when the desired
output format or structure is not explicitly known.
Considerations
The quality and relevance of the examples provided are crucial
for the model's ability to learn and apply the pattern
effectively.
Users should ensure that the examples are clear, concise, and
representative of the desired output format or structure.
Chain of Thought Prompting:
In the realm of writing prompts, encouraging a large language
model to elucidate its reasoning can significantly enhance its
performance. This concept is reminiscent of being told to show
one's work during school exams, where students were
required to explain their thought processes to prove their
answers' validity.
When a language model can accurately articulate its reasoning,
it is more likely to generate the correct answer. This is because
the correct reasoning should logically precede the right
answer.
Furthermore, if the initial steps are correct, the subsequent
steps are also likely to be accurate, given the model's training
on various text patterns. A technique known as "chain of
thought prompting" is particularly useful in breaking down
complex problems into multiple independent steps. By
explaining each step with a natural sequence, logic, and flow,
the final output, or answer, is more likely to be an accurate
extension of the correct reasoning.
However, implementing chain of thought prompting requires
sophisticated intelligence and reasoning capabilities. While it
may not always be necessary, understanding its functionality
can be beneficial when more effective reasoning is required.
Let's consider an example where chain of thought prompting is
not used. Imagine a scenario in a gravity-free spaceship with a
cup containing a needle. If the cup is knocked over, the needle
will also move due to the force applied, even without gravity
causing the objects to fall downward. In this case, a language
model without chain of thought prompting might simply
answer "yes" when asked if anything is on the floor, without
considering the nuances of a zero-gravity environment. Now,
let's apply a chain of thought prompting to the same scenario.
By explicitly stating the reasoning behind each example, we
can help the language model better understand the context.
For instance, we might explain that in a zero-gravity
environment, objects do not behave as they do on Earth. When
the cup and needle are knocked over, they will not fall to the
floor as they would on Earth. Instead, they will float in place or
move in the direction of the force applied.
Therefore, in the absence of gravity, there is no concept of
objects being on the floor. By providing the reasoning first and
then the answer, we can elicit a more nuanced response from
the language model. In this case, the model might correctly
conclude that there is nothing on the floor in a zero-gravity
environment, as objects do not rest on surfaces as they do on
Earth.
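A chain of thought prompt is typically written with the reasoning placed before the answer in each example, so the model imitates that structure. The sketch below condenses the spaceship scenario into one worked example; the wording is illustrative:

```python
# A chain-of-thought prompt gives reasoning-then-answer examples so the
# model reproduces the same structure: reason first, answer last.
COT_EXAMPLE = (
    "Q: In a gravity-free spaceship, a cup holding a needle is knocked over. "
    "Is anything on the floor?\n"
    "Reasoning: Without gravity, objects do not fall; the cup and needle "
    "float or move in the direction of the applied force, so nothing comes "
    "to rest on the floor.\n"
    "Answer: No\n"
)

def chain_of_thought_prompt(question):
    """Prepend a worked reasoning example and ask for the same structure."""
    return (COT_EXAMPLE
            + f"Q: {question}\n"
            + "Reasoning:")   # the model continues with its own reasoning

print(chain_of_thought_prompt(
    "On Earth, a cup holding a needle is knocked off a table. "
    "Is anything on the floor?"))
```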
ReAct Prompting:
In the realm of advanced artificial intelligence, even the most
formidable language models are not entirely self-sufficient.
They require assistance from external data sources and tools to
truly excel in their reasoning and computational tasks. To
achieve this, we must teach these models to harness the power
of these resources effectively.
One such method is the ReAct approach, which aims to instil in
the model a capacity for thoughtful processing and the ability
to identify when external tools are necessary to complete
specific actions.
To illustrate this concept, consider a scenario where you need
to determine the optimal arrival time for your son's BMX race
at the Music City BMX national event in Nashville. To solve this
problem, you would first need to ascertain the start time of the
race through a web search on the Music City BMX site.
Next, you would need to figure out how many motos, or
individual races, are scheduled before your son's race. This
information can also be obtained through a web search on the
same site. To calculate the arrival time, you would need to
know how long each moto took in the previous year. This
information can be found by watching a video live stream from
the USA BMX website and recording the time it took for the
first 10 motos. By guiding the model through this thought
process and teaching it to interact with external resources, you
can help it solve complex problems.
When presented with a new task, such as calculating the end
time of a specific race at the USA BMX Grand Nationals, the
model can apply the same tools and thought processes it
learned from the previous example.
It would first determine the start time of the event through a
web search, then figure out how many motos are scheduled
before the specific race, and finally, calculate the time it would
take for all those motos based on the average time it took for
10 motos in the previous year. By teaching the model to think
through a process and use external tools, you can help it
become more versatile and effective in solving a wide range of
problems.
In essence, the ReAct approach involves teaching the model
how to think through a problem, identify steps where external
tools are needed, use those tools to perform specific actions
outside of itself, and then incorporate the results back into its
reasoning process. By doing so, the model can become more
adept at solving complex problems and performing tasks that
require a combination of internal reasoning and external data
sources.
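The thought-action-observation cycle described here can be sketched as a simple loop in which the model's chosen actions are dispatched to external tools. In this sketch both the "model" and the tools are stand-in stubs, not a real LLM or real web searches:

```python
# Sketch of a ReAct-style loop. In a real system, `model` would be a
# large language model and TOOLS would perform real actions (web
# searches, calculators). Both are stubbed here for illustration.
TOOLS = {
    "search": lambda query: f"(stub search results for: {query})",
    "calculate": lambda expr: str(eval(expr)),   # toy calculator tool only
}

def react_loop(model, task, max_steps=5):
    """Alternate model-chosen actions with tool calls until a final answer."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = model(transcript)                 # model decides the next action
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        transcript += f"Action: {step['action']}[{step['input']}]\n"
        transcript += f"Observation: {observation}\n"
    return None

# A scripted stand-in for the model: search, then compute, then finish.
steps = iter([
    {"action": "search", "input": "race start time"},
    {"action": "calculate", "input": "9 * 60"},   # minutes for 9 motos
    {"action": "finish", "input": "540 minutes after the start"},
])
print(react_loop(lambda transcript: next(steps), "When will the race end?"))
```

The transcript accumulates each action and observation, so the model's next decision can build on everything the tools have returned so far.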
TAIL GENERATION PATTERN
This form of few-shot learning allows the model to generalize
from a small set of examples and apply the learned grading
criteria to new, unseen inputs.
Combining Patterns:
In this context, combining patterns becomes a crucial skill for
building sophisticated prompts. By understanding how to
apply multiple patterns in conjunction with each other, we can
solve complex problems and create prompts that are more
versatile and effective. For example, when tackling a new
complicated prompt, we can identify the fundamental pieces of
the problem and use existing patterns to tackle as many of
those pieces as possible. This approach allows us to minimize
the unknown pieces that require invention, making the task
more manageable.
When combining patterns, it's essential to consider the
organization and placement of statements within the prompt.
For instance, the "ask for input" pattern often works best when
placed at the end of the prompt, ensuring maximum
effectiveness. While combining patterns can sometimes be
tricky, it often involves simply putting the statements from
each pattern together in a coherent and well-organized
manner.
Tail Generation Pattern:
One intriguing approach to address this challenge is the
concept of self-evaluation within large language models. By
leveraging the model's ability to grade its outputs or those of
other models, we can establish a feedback loop that aids in
prompt quality control. This self-evaluation mechanism
involves using the model to assess the quality of its outputs,
ensuring that they align with predefined criteria and
objectives.
To illustrate this concept, consider a scenario where we task a
large language model, such as ChatGPT, with grading the
output of a prompt. Through a series of examples and
explanations, we guide the model in understanding the desired
output format and criteria for evaluation. By providing clear
grading criteria and examples, we enable the model to learn
and automate the grading process, enhancing its ability to
assess outputs effectively.
Furthermore, the integration of self-evaluation mechanisms
into large language models offers a scalable solution for
prompt evaluation and maintenance. By incorporating this
approach, we can ensure that our prompts remain aligned with
our objectives and adapt to changes in the model or data
sources over time.
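Such a grading prompt can be assembled much like a few-shot prompt: criteria first, then graded examples, then the output to be graded. A minimal sketch with illustrative criteria and grades:

```python
def grading_prompt(criteria, graded_examples, output_to_grade):
    """Ask the model to grade an output against explicit criteria."""
    lines = [f"Grade the following output against these criteria: {criteria}"]
    for text, grade in graded_examples:
        lines.append(f"Output: {text}")
        lines.append(f"Grade: {grade}")
    lines.append(f"Output: {output_to_grade}")
    lines.append("Grade:")           # left open: the model fills in the grade
    return "\n".join(lines)

print(grading_prompt(
    "answers must be in JSON with a 'name' field",
    [('{"name": "Ada"}', "PASS"), ("Ada", "FAIL")],
    '{"name": "Grace"}',
))
```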
In the context of developing sophisticated prompts, the fusion
of multiple patterns becomes essential. By combining patterns
like the tail generation pattern, alternative approaches pattern,
and ask-for-input pattern, we can create prompts that not only
guide the model in generating outputs but also remind it of the
task at hand. This tail generation pattern serves as a valuable
tool in maintaining the continuity and focus of conversations
with large language models, preventing them from veering off
track or forgetting the rules of engagement.
By strategically incorporating tail generation into our
interactions with large language models, we can ensure that
the models stay aligned with the task at hand and consistently
produce outputs that meet our criteria and expectations.
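Tail generation is usually implemented by instructing the model to restate the rules at the end of every response; combined with the ask-for-input pattern placed last, a prompt might be assembled like this (the wording is illustrative):

```python
def with_tail(task_rules, ask_for_input):
    """Combine task rules, a tail-generation reminder, and an ask-for-input ending."""
    # The middle line is the "tail": the model repeats the rules at the end
    # of every response so long conversations do not drift away from them.
    tail = ("At the end of every response, restate these rules and ask for "
            "my next input.")
    # The ask-for-input statement goes last, where it works best.
    return f"{task_rules}\n{tail}\n{ask_for_input}"

print(with_tail(
    "You will translate everything I write into French.",
    "Ask me for the first sentence to translate.",
))
```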
APPLICATIONS AND DIFFERENT MODELS OF AI
to be in a tabular form or bullets, etc. Specify the format and
limits based on your desired outcomes.
6. Providing examples
If everything is set and you want output closely matching a
reference, it is preferable to provide an example so the model
can analyse it and generate the output of your choice. Although
not mandatory, examples are very effective for keeping the AI
agent's tonality and structure consistent.
7. Refinement
If you don't get the desired output on the first attempt, don't get
discouraged. The beauty of AI tools is that you can keep refining
until you get the result you're looking for.
*Tip: You can also use the agent as a prompt generator, by
assigning it the role of a prompt engineer and then instructing
it to analyse your task and generate prompts for your chosen
type of work.
Real-life prompt examples to get your task done within a
few minutes
Basics:
Examples of writing prompts:
This section is where you’ll learn about writing prompts
step by step in various real-life scenarios:
Prompt 2: “Now I want to learn about [the topic you want to get
started on]. I'm new to it and a complete beginner; help me as a
guide by creating a 30-day learning plan that will help me
improve my skills in [the topic name].”
“Assigning tasks to an AI agent to get your coding and
developer tasks done instantly”
5. Generating Engaging Stories:
7. Simplify complex information:
Prompt 1: “Suppose for today you're my personal study trainer
and mentor, who can break down information and present it in
easily understandable parts.”
Prompt 2: “Break down the [topic] into smaller, easier-to-
understand parts. Use flow charts to make it easier to
remember sequences, and use analogies and real-life examples
to simplify the concept and make it more relatable.”
Prompt: “Can you please explain the concept of [topic] as if
you're teaching it to a student of yours? Don't forget to use
simple, understandable language and avoid using jargon.”
Table for quick short prompts:
01. Brainstorm Names: “Generate 5 unique possible names for this new product, not available on the internet”
02. Generate Ideas: “List 10 potential solutions to the problem at hand”
03. Summarize Information: “Summarize the key points from this article in one paragraph”
04. Proofread: “Proofread this document for errors and suggest corrections”
05. Generate Headlines: “Come up with 3 catchy, out-of-the-blue headlines for this blog post”
06. Paraphrase: “Rewrite this paragraph in your own words”
07. Create Outlines: “Create an outline for this research paper”
08. Translate: “Translate this document from English to [the language you want to translate to]”
09. Generate Questions: “List the top 5 questions that you would ask in an interview with this candidate”
10. Generate Tagline: “Create two taglines for this shared writeup”
11. Generate Email Templates: “Create an email template to respond to the reply accordingly”
12. Generate Product Review: “Write a human review of this product [name the product and give a one-liner description of it]”
13. Generate Social Media Posts: “Create 3 social media post writeups to get engagement”
14. Generate News Article: “Write a news article on this current event”
15. Generate Blog Posts: “Write a 400-500 word blog post on this topic [topic name]”
10 Most Interesting Prompts to Unleash AI’s Potential
COMPARISON BETWEEN TOP LEADING AI SITES
TOP TRENDING ON THE INTERNET:
Real Life Comparisons:
Prompt:
1. Trick Prompt: “How many words are in your response to this
prompt?” [idea from the YouTube channel Matthew Berman]
[Screenshots comparing responses from Copilot, Claude, Perplexity, Character AI, Koala Chat AI, YouChat, ZenoChat, ChatGPT, Hugging Chat, Zapier, and Grok]
2. Trick Prompt: “There are three killers in a room. Someone
enters the room and kills one of them. Nobody leaves the room.
How many killers are left in the room? Explain your reasoning
step by step.” [idea from the YouTube channel Matthew Berman]
[Screenshots comparing responses from Copilot, Claude, Perplexity, Character AI, Koala Chat AI, YouChat, ZenoChat, ChatGPT, Hugging Chat, and Grok]
AI List:
Graphic-Oriented
SEO Marketing
Video Generator & Converter
Presentation and Marketing
Chatbot
LLM/All in One
Coding Assistant