ALL ABout Prompting - Avilash Bhowmick

Deep, simplified learning about prompt engineering, ChatGPT prompts, and using ChatGPT AI models to the fullest.


From Basics to Brilliance

ALL
ABOUT
PROMPTING ( AAP )
Pioneering Prompts in the Digital Dawn

“Dawn of a new era in prompting - pioneering insights for the digital age!”

Avilash Bhowmick
ABOUT THE AUTHOR

Avilash Bhowmick stands at the confluence of technology and creativity, a place where the binary world of IT meets the expressive realm of content creation. As
a third-year B.Tech student majoring in Information Technology at the Meghnad Saha
Institute of Technology, Avilash has honed his skills not just in the technicalities of
coding and web development, but also in the nuanced art of communication—be it
through the written word or the digital strategies of social media marketing.

His journey is one marked by a relentless pursuit of knowledge; a trait evident from
his diverse range of expertise. From SEO to keyword generation, and from blog
writing to frontend development, Avilash embodies the spirit of a modern polymath
in the digital age. His stint as a social media marketing intern at Skyread Digital
Publishing and his active contributions to the blogging platform Medium, where he
pens insightful pieces on AI and prompts, are testaments to his multifaceted talent.

Beyond his professional endeavours, Avilash is deeply invested in the ethos of sustainability and innovation, serving as the Vice President of the Greenovation Club
at MSIT. His collaborative work with the Switchon Foundation further underscores
his commitment to societal betterment. In every role, Avilash brings a data-informed
approach to problem-solving, balancing key business goals with a passion for design
and product growth.

As an author, Avilash’s narrative is not just about the milestones he has achieved but
also about the journey he undertakes—a journey of continuous learning and
adaptation in an ever-evolving technological landscape. His writings reflect not only
his expertise but also his philosophy of life, where growth is constant, and change is
embraced with open arms.

In this book, Avilash invites readers to explore the world of AI through his eyes—a
world where algorithms are not just tools but partners in the creative process, where
prompts are not mere commands but conversations with a digital intellect. Join him
as he decodes the complexities of AI, making it accessible and engaging for all.
PREFACE:

In this book, I delve into the intricacies of Large Language Models (LLMs) and the art of prompt engineering. Drawing from my five years of experience in writing, reviewing, and publishing, I aim to provide readers with a comprehensive understanding of how to effectively communicate with and utilize AI for various tasks.
This book is a culmination of research, experimentation, and a
deep passion for the evolving field of artificial intelligence.

This book offers a deep dive into the world of Large Language
Models, exploring their mechanisms, the role of prompts, and
advanced techniques for maximizing their potential.
From the basics of LLMs to sophisticated prompting strategies,
readers will find a wealth of knowledge that spans practical
applications, ethical considerations, and future predictions.
Real-life examples and a quick reference table provide actionable insights for harnessing AI's capabilities.

INTRODUCTION

Welcome to the Conversation of Tomorrow

Imagine a world where your every question sparks a galaxy of answers, where your curiosity is met not with silence, but with
a symphony of insights. This is not the distant future; this is now,
and you’re holding the key to unlock this realm. Dive into a
narrative that’s as much about artificial intelligence as it is about
the human spirit’s quest for connection. Whether you’re a digital
native or a curious mind, this book is your ticket to
understanding the AI revolution that’s reshaping our world.

A Journey Through Uncharted Digital Landscapes

You’re about to embark on an expedition through the intricate world of Large Language Models (LLMs) and the art of prompt
engineering—a journey that promises to reveal secrets of AI
that the internet has yet to fully unveil. From the evolution of
chatbots to the nuances of AI-driven conversations, this book is
a treasure trove of knowledge, meticulously curated to feed
your amazement. It’s not just about the ‘how’ but also the
‘why’—why AI matters and how it’s becoming an integral part
of our lives.

The Essence of AI, Decoded

This book is not just a collection of pages; it’s a compendium of the wisdom gathered from the frontiers of AI research and
application. It’s a reflection of a world where technology meets
humanity, where algorithms serve not just answers but
understanding. As you turn each page, you’ll find yourself not
just reading, but conversing with the future. So, take a deep
breath, open your mind, and prepare to converse with the
intelligence that’s set to define our age.

Welcome to the dialogue of the digital age—welcome to your new lexicon for the future.

UNDERSTANDING LARGE LANGUAGE MODELS

What are Large Language Models?


Large language models are AI systems that generate responses
word by word based on the input prompt.
They predict the next word in the output based on the context
of the information provided.
These models are trained on a vast amount of data from the
internet, with a cutoff date for the knowledge base.
How do Large Language Models Work?
Large language models generate responses word by word,
trying to predict the next word based on the context of the
input prompt.
They use contextual knowledge to predict the next word,
paying attention to relationships between words in the current
sentence.
These models are trained on a large portion of text from the
internet, with the model learning patterns from human
language.
Key Points about Large Language Models
Large language models are rapidly evolving, with new models
and variations coming out frequently.
These models are not designed to produce the same output
every time, with inherent randomness or variation in their
output.
When using these models, it's important to know that their
knowledge is based on the time the training data was collected
and that they may not have up-to-date knowledge.

To introduce and reason with new knowledge, it's necessary to
provide that information as part of the prompt, as the models
are not continually updating their knowledge base.
Implications for Prompt Design
When designing prompts, it's important to consider the
context and relationships between words in the input prompt.
It's helpful to know that these models are not designed to
produce the same output every time, and to build processes
that handle variation and errors.
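The advice to build processes that handle variation can be sketched concretely: validate and normalize each response, and retry when it does not fit the expected format. A minimal sketch, where `complete` is a hypothetical stand-in for a real LLM call, not an actual API:

```python
import random

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; real models return
    # varying text for the same prompt, simulated here with random.
    return random.choice(["Positive", "positive!", "POSITIVE"])

def classify_with_retry(prompt: str, valid: set, tries: int = 5) -> str:
    # Validate and normalize each response instead of trusting the
    # first raw string the model emits.
    for _ in range(tries):
        answer = complete(prompt).strip().rstrip("!").capitalize()
        if answer in valid:
            return answer
    raise ValueError("no valid response after retries")

result = classify_with_retry(
    "Classify the sentiment of: 'Great book!'",
    {"Positive", "Negative", "Neutral"},
)
```

The retry-and-validate loop is one simple design choice; production systems might instead constrain the output format in the prompt itself.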

THE CONCEPT OF PROMPTS IN LARGE LANGUAGE
MODELS

Definition of a Prompt
A prompt is a call to action that encourages a large language
model to generate output.
It can be a verb, adjective, noun, or a message that initiates the
model's response.
Dimensions of a Prompt
Call to Action: Prompts spur the large language model to start
generating output based on the input.
Time Aspect: Prompts can have immediate effects or influence
future interactions.
Cue or Reminder: They can serve as cues to help the model
remember information or tasks.
User Interaction: Prompts can elicit information from users or
prompt them for input.
Memory and Influence of Prompts
Prompts can have a memory associated with them, impacting
future interactions.
They can be used to provide new or updated information to the
model for reasoning and response.

WORKING WITH LARGE LANGUAGE MODELS

Working on Large Language Models


Large language models generate responses word by word
based on the input prompt.
They predict the next word in the output by analysing the
context and relationships between words.
Models are trained on a vast amount of data, learning patterns
from human language to predict the next word accurately.

Implications for Prompt Design
When designing prompts, it's important to consider the
context and relationships between words in the input prompt.
It's helpful to know that these models are not designed to
produce the same output every time and to build processes to
handle variation and errors.
When working with new knowledge, it's necessary to provide
that information as part of the prompt, as the models are not
continually updating their knowledge base.
It's important to stay up-to-date with the rapidly evolving field
of large language models and to be open to experimentation
and creativity in prompt design.
Evolution of Large Language Models
The field of large language models is rapidly evolving, with
new models and variations emerging frequently.
Models like ChatGPT, GPT-4, Llama, and Alpaca are examples of
the evolving landscape of large language models.
Schematic View of Prompts and Large Language Models
Input Prompt -> Large Language Model -> Response
The prompt triggers the model to generate output word by
word.
The model's response is influenced by the input prompt and
the data it has learned from.
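Because the model's knowledge is frozen at its training cutoff, new facts must travel inside the prompt itself. A minimal sketch of the Input Prompt -> Large Language Model -> Response flow (the context and question are invented examples):

```python
def build_prompt(context: str, question: str) -> str:
    # New or post-cutoff information is carried inside the prompt,
    # since the model does not continually update its knowledge base.
    return (
        "Using only the context below, answer the question.\n\n"
        "Context: " + context + "\n\n"
        "Question: " + question
    )

prompt = build_prompt(
    context="Acme Corp released the Widget 9 on 2025-01-15.",
    question="When was the Widget 9 released?",
)
```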
Conclusion
Large language models are AI systems that generate responses
word by word based on the input prompt.
They predict the next word in the output based on the context
of the information provided.

These models are trained on a vast amount of data from the
internet, with a cutoff date for the knowledge base.
When designing prompts, it's important to consider the
context and relationships between words in the input prompt.
It's helpful to know that these models are not designed to
produce the same output every time and to build processes to
handle variation and errors.
When working with new knowledge, it's necessary to provide
that information as part of the prompt, as the models are not
continually updating their knowledge base.
It's important to stay up-to-date with the rapidly evolving field
of large language models and to be open to experimentation
and creativity in prompt design.

PROMPT PATTERNS

What are prompts: intuitions behind prompts, and can everyone write prompts?

Case 1:
Understanding the Concept of Prompts in Large Language
Models
Definition of a Prompt
A prompt is a call to action that encourages a large language
model to generate output.
It can be a verb, adjective, noun, or a message that initiates the
model's response.
Dimensions of a Prompt
Call to Action: Prompts spur the large language model to start
generating output based on the input.
Time Aspect: Prompts can have immediate effects or influence
future interactions.
Cue or Reminder: They can serve as cues to help the model
remember information or tasks.
User Interaction: Prompts can elicit information from users
or prompt them for input.
Memory and Influence of Prompts
Prompts can have a memory associated with them, impacting
future interactions.
They can be used to provide new or updated information to the
model for reasoning and response.

Working on Large Language Models
Large language models generate responses word by word
based on the input prompt.
They predict the next word in the output by analysing the
context and relationships between words.
Models are trained on a vast amount of data, learning patterns
from human language to predict the next word accurately.
Evolution of Large Language Models
The field of large language models is rapidly evolving, with
new models and variations emerging frequently.
Models like ChatGPT, GPT-4, Llama, and Alpaca are examples of
the evolving landscape of large language models.
Schematic View of Prompts and Large Language Models
Input Prompt -> Large Language Model -> Response
The prompt triggers the model to generate output word by
word.
The model's response is influenced by the prompt's call to
action, time aspect, user interaction, and memory.
Large language models use contextual knowledge to predict
the next word accurately, evolving with new models and
advancements.
Conclusion
Prompts in large language models serve as calls to action, cues,
and reminders that influence the model's output.
Understanding the dimensions of prompts, their time aspect
and user interaction is crucial for effective prompt engineering.

Large language models work by predicting the next word
based on contextual knowledge learned from vast training
data.
The rapid evolution of large language models introduces new
models and variations, shaping the field of prompt engineering
and interaction with AI systems.

ADVANCED PROMPTING TECHNIQUES

Patterns in Prompts:
Large language models are trained to predict the next word
based on patterns in the input text.
Specific patterns in prompts trigger consistent responses from
the model.
Strong patterns like "Mary had a little" lead to predictable
completions due to extensive training data.
Effect of Specific Words:
Specific words in prompts, like "microscopic," can significantly
alter the model's output.
Generic language prompts yield generic responses, while
specific words prompt detailed outputs.
Importance of Detail and Specificity:
Providing detailed prompts leads to more specific and targeted
responses.
Adding specificity, like mentioning "Kirkland Hall," guides the
model to focus on particular aspects.
Structuring Prompts:
Structuring prompts with specific elements (e.g., title, author)
can influence the output's organization.
Introducing new patterns in prompts can shape the model's
response and output structure.
Influencing Model Output:
Prompts play a crucial role in guiding the model's predictions
and responses.

Understanding patterns, word choice, and textual organization
in prompts enhances the desired output.
The interaction between prompts and large language models is
intricate, with patterns, specific words, and structured
prompts all playing vital roles in shaping the model's
responses. By leveraging these elements effectively, users can
guide the model to produce tailored and detailed outputs.
Understanding the nuances of prompt construction is key to
eliciting the desired behaviours from large language models.
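The point about detail and specificity can be sketched as a small helper that turns a generic prompt into a targeted one; the details ("Kirkland Hall," "microscopic") echo the examples above and are purely illustrative:

```python
def specialize(base: str, details: list) -> str:
    # Appending concrete details steers the model away from generic
    # output and toward the named aspects.
    return base + " Focus on: " + "; ".join(details) + "."

generic = "Describe a walk across the campus."
specific = specialize(
    generic,
    ["Kirkland Hall", "the microscopic texture of its stone facade"],
)
```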

VARIATIONS IN PROMPTING

The Power of Persona Patterns in Large Language Models


Persona Patterns:
Persona patterns allow users to tap into the behaviour of large
language models by mimicking real-world interactions with
experts.
Users can ask questions to a specific persona without knowing
the exact output or format.
Example of Persona Pattern:
"Act as a sceptic well-versed in computer science, provide a
sceptical and detailed response."
The persona pattern instructs the model to respond as a
sceptical computer scientist, ensuring a detailed and sceptical
output.
Persona Pattern and Behaviour:
The persona pattern triggers a complex set of rules for the
model, guiding its behaviour and output.

Users can specify the format and knowledge of the persona,
tailoring the output to their needs.
Inanimate Objects as Personas:
Persona patterns can also involve inanimate objects, like a
hacked computer or a character in a nursery rhyme.
These patterns generate output that mimics the behaviour or
characteristics of the specified persona.
Importance of Persona Patterns:
Persona patterns are information-dense, allowing users to
provide detailed instructions concisely.
They enable users to create virtual panels of experts, providing
different perspectives on a topic.
Persona-based Programming:
Large language models can be programmed with a specific
persona, influencing their behaviour and output.
Users can choose a persona that aligns with their goals, such as
a helpful assistant or a sceptical evaluator.
Persona patterns are a powerful tool in large language models,
enabling users to tap into specific behaviours and outputs. By
mimicking real-world interactions with experts, persona
patterns provide a versatile and efficient way to generate
tailored responses. Understanding and utilizing persona
patterns can significantly enhance the effectiveness of
interactions with large language models.
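A persona prompt is mostly string assembly: name the persona, state the task, and optionally pin the output format. A minimal sketch (the template wording is an assumption, not a fixed syntax):

```python
def persona_prompt(persona: str, task: str, output_format: str = "") -> str:
    # "Act as X" triggers the persona pattern; the optional format
    # clause tailors the shape of the output.
    parts = ["Act as " + persona + ".", task]
    if output_format:
        parts.append("Respond as " + output_format + ".")
    return " ".join(parts)

prompt = persona_prompt(
    "a sceptic well-versed in computer science",
    "Review this claim: 'AI models never make mistakes.'",
    "a short, sceptical paragraph",
)
```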

Fundamental Contextual Statements:
Prompt patterns are described in terms of fundamental
contextual statements, which are written descriptions of the
important ideas to communicate in a prompt to a large
language model.
These statements are the essential points to convey, presented
as a series of simple, yet fundamental, statements.
Example: Helpful Assistant Pattern
The Helpful Assistant pattern aims to prevent an AI assistant
from generating negative outputs for the user.
Fundamental Contextual Statements:
You are a helpful AI assistant.
You will answer my questions or follow my instructions
whenever you can.
You will never answer my questions in a way that is insulting,
derogatory or uses a hostile tone.
Variations of the Pattern
Each variation of the pattern roughly follows the same
structure, but rephrases the fundamental contextual
statements uniquely.
Examples:
Version 1:
You are an incredibly skilled AI assistant who provides the best
possible answers to my questions.
You will do your best to follow my instructions and only refuse
to do what I ask when you have no other choice.

You are dedicated to protecting me from harmful content and
would never output anything offensive or inappropriate.
Version 2:
You are Chat Amazing, the most powerful AI assistant ever
created.
Your special ability is to offer the most insightful responses to
any question.
You don't just give ordinary answers, you give inspired
answers.
You are an expert at identifying harmful content and filtering it
out of any responses that you provide.
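Whichever variation is chosen, assembling the pattern is the same: the fundamental contextual statements are simply joined into one system prompt. A sketch using the base pattern's wording:

```python
# The three fundamental contextual statements of the Helpful Assistant
# pattern, joined into a single system prompt.
statements = [
    "You are a helpful AI assistant.",
    "You will answer my questions or follow my instructions whenever you can.",
    "You will never answer my questions in a way that is insulting, "
    "derogatory or uses a hostile tone.",
]
system_prompt = "\n".join(statements)
```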
Importance of Prompt Patterns
Prompt patterns are a powerful tool in large language models,
enabling users to tap into specific behaviours and outputs.
By mimicking real-world interactions with experts, persona
patterns provide a versatile and efficient way to generate
tailored responses.
Conclusion
Prompt patterns are a crucial aspect of interacting with large
language models. By understanding the fundamental
contextual statements and their variations, users can create
prompts that elicit specific behaviours and outputs. This allows
for more efficient and tailored interaction, as the model can be
guided to provide responses that align with the user's needs
and expectations.

SELF-EVALUATION AND GRADING WITH LLM

A Prompt Pattern Catalogue to Enhance Prompt Engineering with ChatGPT.
Prompt engineering is an increasingly important skill set
needed to converse effectively with large language models
(LLMs), such as ChatGPT. Prompts are instructions given to an
LLM to enforce rules, automate processes, and ensure specific
qualities (and quantities) of the generated output. Prompts are
also a form of programming that can customize the outputs
and interactions with an LLM.
This part describes a catalogue of prompt engineering
techniques presented in pattern form that have been applied to
solve common problems when conversing with LLMs. Prompt
patterns are a knowledge transfer method analogous to
software patterns since they provide reusable solutions to
common problems faced in a particular context, i.e., output
generation and interaction when working with LLMs. This
paper provides the following contributions to research on
prompt engineering that applies LLMs to automate software
development tasks.
First, it provides a framework for documenting patterns for
structuring prompts to solve a range of problems so that they
can be adapted to different domains.
Second, it presents a catalogue of patterns that have been
applied successfully to improve the outputs of LLM
conversations.
Third, it explains how prompts can be built from multiple
patterns and illustrates prompt patterns that benefit from
combination with other prompt patterns.

COMBINING PATTERNS FOR SOPHISTICATED
PROMPTS

Prompt Patterns and Large Language Models


Question Refinement Pattern:
A simple way to improve interactions with large language
models is to use the question refinement pattern.
The pattern involves refining a question by leveraging the
model's understanding of patterns and language.
The model can improve the question by suggesting a better
version, making it more specific, or providing additional
context.
Example of Question Refinement Pattern:
User: Should I go to Vanderbilt University?
Model: What factors should I consider when deciding whether
or not to attend Vanderbilt University, and how do they align
with my personal goals and priorities? Do you want to use this
question instead?
Benefits of Question Refinement Pattern:
Improves questions by making them more specific and
providing additional context.
Encourages reflection on the question and its context.
Identifies missing pieces of information that might be needed
for a better output.
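The refinement step itself is just a meta-prompt wrapped around the user's question; the phrasing below is one common formulation, not an official template:

```python
def refinement_prompt(question: str) -> str:
    # Ask the model to propose a better version of the question and
    # offer it back to the user before answering.
    return (
        "Whenever I ask a question, suggest a better version of the "
        "question and ask me if I would like to use it instead.\n\n"
        "Question: " + question
    )

prompt = refinement_prompt("Should I go to Vanderbilt University?")
```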
Persona Pattern:
The persona pattern is a powerful tool for tapping into the
behaviour of large language models.

Users can mimic real-world interactions with experts by
specifying the persona and expected output format.
Persona patterns can trigger complex sets of rules for the
model, guiding its behaviour and output.
Persona Pattern Examples:
Act as a sceptic well-versed in computer science.
Act as a nine-year-old and provide a sceptical response.
Act as a computer that has been hacked and respond with the
output that the Linux terminal would produce.
Importance of Prompt Patterns:
Prompt patterns are information-dense, allowing users to
provide detailed instructions concisely.
They enable users to create virtual panels of experts, providing
different perspectives on a topic.
Large language models can be programmed with a specific
persona, influencing their behaviour and output.
Prompt patterns are a crucial aspect of interacting with large
language models. By understanding the nuances of prompt
patterns, users can create prompts that elicit specific
behaviours and outputs. This allows for more efficient and
tailored interaction, as the model can be guided to provide
responses that align with the user's needs and expectations.
The use of question refinement and persona patterns can
significantly enhance the effectiveness of interactions with
large language models.

Flipped Interaction Pattern: A Guide to Effective Questioning by Large Language Models

Definition
The Flipped Interaction Pattern is a technique that allows
users to leverage large language models to generate questions
that guide the process of solving problems or obtaining desired
outputs.
This pattern is particularly useful when the user lacks
complete information or wants to be tested or quizzed on a
topic.
How it Works
The user prompts the language model to ask questions on a
specific topic with a goal in mind.
The model continues to ask questions until it has enough
information to achieve the specified goal.
The user provides answers to the questions, and the model
generates the desired output based on the collected
information.
Benefits
Encourages active user participation in the interaction.
Helps identify missing information or context needed for
effective decision-making.

Provides a structured way to explore a topic and gather
relevant information.
Examples
Fitness goal regimen: Ask the model to suggest a strength
training regimen based on user-provided information about
fitness goals and constraints.
Quizzing: Use the model to create a quiz on a specific topic,
with the model generating questions based on its
understanding of the topic.
Customer service: Automate the process of collecting
information from customers by having the model ask questions
and then take appropriate action based on the responses.
Best Practices
Clearly define the goal of the interaction.
Provide enough context for the model to generate relevant
questions.
Consider setting a limit on the number of questions asked or
allowing the user to control the pace of the interaction.
End the prompt with "ask me the first question" to encourage a
one-question-at-a-time interaction style.
Advantages over Traditional Question-Answer Formats
Eliminates the need for the user to formulate questions or
tasks.
Allows the model to leverage its knowledge and language
patterns to generate effective questions.
Encourages a more interactive and engaging user experience.
By understanding and effectively utilizing the flipped
interaction pattern, users can harness the power of large

language models to ask questions that guide problem-solving,
learning, and decision-making processes. This pattern can lead
to more efficient and targeted interactions, making it a
valuable tool for a wide range of applications.
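The best practices above reduce to a short opening prompt: state the goal, bound the number of questions, and close with "ask me the first question." A minimal sketch (the fitness goal is the example from the text):

```python
def flipped_interaction_prompt(goal: str, limit: int = 5) -> str:
    # The model, not the user, asks the questions: one at a time, up
    # to a stated limit, until the goal can be met.
    return (
        "Ask me questions to " + goal + " until you have enough "
        "information. Ask at most " + str(limit) + " questions, "
        "one at a time. Ask me the first question."
    )

prompt = flipped_interaction_prompt("suggest a strength training regimen")
```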
Few-Shot Prompting: Teaching a Large Language Model to
Follow Patterns
Definition
Few-shot prompting is a technique that allows users to teach a
large language model to follow a pattern by providing
examples of input and desired output.
This method is based on the model's ability to learn patterns
and predict the next word or phrase in a sequence.
How it Works
Users provide a series of examples, each consisting of an input
and the corresponding output.
The model learns the pattern from these examples and applies
it to new input to generate the desired output.
This approach can be used to guide the model's behaviour and
ensure that the output adheres to a specific format or
structure.
Example
Sentiment analysis:
Input: The movie was good but a bit long.
Output: Neutral
Input: I didn't like this book; it lacked important details and
didn't end up making sense.
Output: Negative

Input: I love this book. It helped me learn how to improve my
gut health.
Output: Positive
New input: I wasn't sure what to think of this new restaurant.
The service was slow, but the dishes were pretty good.
Desired output: Neutral
Advantages
Few-shot prompting can help users obtain more constrained or
bounded outputs by guiding the model's behaviour.
This method can be particularly useful when the desired
output format or structure is not explicitly known.
Considerations
The quality and relevance of the examples provided are crucial
for the model's ability to learn and apply the pattern
effectively.
Users should ensure that the examples are clear, concise, and
representative of the desired output format or structure.
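Formatting the sentiment examples above into a single few-shot prompt is mechanical: alternate Input/Output pairs, then end with the new input and a bare "Output:" for the model to complete. A sketch:

```python
def few_shot_prompt(examples, new_input: str) -> str:
    # Each (text, label) pair becomes an Input/Output line; the final
    # bare "Output:" invites the model to continue the pattern.
    lines = []
    for text, label in examples:
        lines.append("Input: " + text)
        lines.append("Output: " + label)
    lines.append("Input: " + new_input)
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("The movie was good but a bit long.", "Neutral"),
     ("I didn't like this book; it lacked important details.", "Negative"),
     ("I love this book. It helped me improve my gut health.", "Positive")],
    "The service was slow, but the dishes were pretty good.",
)
```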
Chain of Thought Prompting:
In the realm of writing prompts, encouraging a large language
model to elucidate its reasoning can significantly enhance its
performance. This concept is reminiscent of being told to show
one's work during school exams, where students were
required to explain their thought processes to prove their
answers' validity.
When a language model can accurately articulate its reasoning,
it is more likely to generate the correct answer. This is because
the correct reasoning should logically precede the right
answer.

Furthermore, if the initial steps are correct, the subsequent
steps are also likely to be accurate, given the model's training
on various text patterns. A technique known as "chain of
thought prompting" is particularly useful in breaking down
complex problems into multiple independent steps. By
explaining each step with a natural sequence, logic, and flow,
the final output, or answer, is more likely to be an accurate
extension of the correct reasoning.
However, implementing chain of thought prompting requires
sophisticated intelligence and reasoning capabilities. While it
may not always be necessary, understanding its functionality
can be beneficial when more effective reasoning is required.
Let's consider an example where chain of thought prompting is
not used. Imagine a scenario in a gravity-free spaceship with a
cup containing a needle. If the cup is knocked over, the needle
will also move due to the force applied, even without gravity
causing the objects to fall downward. In this case, a language
model without chain of thought prompting might simply
answer "yes" when asked if anything is on the floor, without
considering the nuances of a zero-gravity environment. Now,
let's apply a chain of thought prompting to the same scenario.
By explicitly stating the reasoning behind each example, we
can help the language model better understand the context.
For instance, we might explain that in a zero-gravity
environment, objects do not behave as they do on Earth. When
the cup and needle are knocked over, they will not fall to the
floor as they would on Earth. Instead, they will float in place or
move in the direction of the force applied.
Therefore, in the absence of gravity, there is no concept of
objects being on the floor. By providing the reasoning first and
then the answer, we can elicit a more nuanced response from
the language model. In this case, the model might correctly
conclude that there is nothing on the floor in a zero-gravity

environment, as objects do not rest on surfaces as they do on
Earth.
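The zero-gravity scenario can be written as a chain-of-thought prompt: the reasoning appears before the answer, and a trailing "Reasoning:" cue asks the model to continue the pattern for a new question (the follow-up question is invented for illustration):

```python
# Worked example in reason-first, answer-second form.
cot_example = (
    "Q: In a gravity-free spaceship, a cup holding a needle is knocked "
    "over. Is anything on the floor?\n"
    "Reasoning: Without gravity, objects do not fall. The cup and "
    "needle float in place or drift in the direction of the applied "
    "force, so nothing comes to rest on the floor.\n"
    "Answer: No.\n"
)

# The new question ends with a bare "Reasoning:" so the model reasons
# before it answers.
prompt = cot_example + (
    "Q: On the same spaceship, a pen is knocked off a desk. Does it "
    "land on the floor?\nReasoning:"
)
```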
ReAct Prompting:
In the realm of advanced artificial intelligence, even the most
formidable language models are not entirely self-sufficient.
They require assistance from external data sources and tools to
truly excel in their reasoning and computational tasks. To
achieve this, we must teach these models to harness the power
of these resources effectively.
One such method is the ReAct approach, which aims to instil in
the model a capacity for thoughtful processing and the ability
to identify when external tools are necessary to complete
specific actions.
To illustrate this concept, consider a scenario where you need
to determine the optimal arrival time for your son's BMX race
at the Music City BMX national event in Nashville. To solve this
problem, you would first need to ascertain the start time of the
race through a web search on the Music City BMX site.
Next, you would need to figure out how many motos, or
individual races, are scheduled before your son's race. This
information can also be obtained through a web search on the
same site. To calculate the arrival time, you would need to
know how long each moto took in the previous year. This
information can be found by watching a video live stream from
the USA BMX website and recording the time it took for the
first 10 motos. By guiding the model through this thought
process and teaching it to interact with external resources, you
can help it solve complex problems.
When presented with a new task, such as calculating the end
time of a specific race at the USA BMX Grand Nationals, the
model can apply the same tools and thought processes it
learned from the previous example.

It would first determine the start time of the event through a
web search, then figure out how many motos are scheduled
before the specific race, and finally, calculate the time it would
take for all those motos based on the average time it took for
10 motos in the previous year. By teaching the model to think
through a process and use external tools, you can help it
become more versatile and effective in solving a wide range of
problems.
In essence, the ReAct approach involves teaching the model
how to think through a problem, identify steps where external
tools are needed, use those tools to perform specific actions
outside of itself, and then incorporate the results back into its
reasoning process. By doing so, the model can become more
adept at solving complex problems and performing tasks that
require a combination of internal reasoning and external data
sources.
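The loop described above (think, act with a tool, observe the result, repeat) can be sketched in a few lines of Python. Everything in this sketch is a hypothetical stand-in: the tool names, the scripted model turns, and the BMX figures. A real system would call an actual language model and real web-search tools.

```python
# Minimal ReAct harness: the "model" alternates Thought/Action steps,
# the harness runs each Action with an external tool and feeds the
# result back into the transcript as an Observation.
import re

# Hypothetical external tools, standing in for a web search and a video review.
TOOLS = {
    "web_search": lambda q: {"race start time": "8:00 AM",
                             "motos before race": "15"}.get(q, "no result"),
    "video_review": lambda q: "first 10 motos took 20 minutes",
}

# Scripted model turns, standing in for real LLM completions.
SCRIPT = [
    "Thought: I need the start time. Action: web_search[race start time]",
    "Thought: I need the moto count. Action: web_search[motos before race]",
    "Thought: I need moto duration. Action: video_review[moto times]",
    "Final Answer: arrive by 8:30 AM",
]

def react_loop(script, tools):
    transcript = []
    for turn in script:
        transcript.append(turn)
        match = re.search(r"Action: (\w+)\[([^\]]*)\]", turn)
        if match:  # the model asked for a tool; run it and feed back the result
            name, arg = match.groups()
            transcript.append(f"Observation: {tools[name](arg)}")
        elif turn.startswith("Final Answer:"):
            return turn, transcript
    return None, transcript

answer, log = react_loop(SCRIPT, TOOLS)
print(answer)  # Final Answer: arrive by 8:30 AM
```

The point of the pattern is visible in the transcript: each external action is bracketed by the model's own reasoning, so the tool results are incorporated back into the chain of thought rather than replacing it.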

TAIL GENERATION PATTERN

As we stand on the precipice of rapid advancements in artificial


intelligence, large language models are evolving at an
unprecedented pace.
Models such as ChatGPT, GPT-4, Llama, Alpaca, and Vicuna are
continually being unveiled, each bringing its unique strengths
and capabilities.
In this dynamic landscape, it is crucial to ensure that our
prompts and patterns, which we painstakingly develop, remain
effective and relevant over time.
When we modify or upgrade our large language models, we
must consider the potential impact on our prompts'
performance. Changes in the model or data could lead to
suboptimal outputs, necessitating better evaluation methods.
While human oversight is essential, we also aim to scale our
efforts through large-scale automated analysis.
This is where the self-evaluation capability of large language
models comes into play.
By leveraging these models to grade their outputs, we can
create a feedback loop that helps maintain prompt quality. This
approach involves using a large language model to assess the
performance of another model or even itself. For instance, a
model could be trained to identify inconsistencies in
capitalization or unwanted text at the start of an output,
assigning appropriate scores based on predefined criteria.
To illustrate this concept, let's consider a task where we want a
model to extract events and their corresponding dates from a
given text. By providing examples of correct and incorrect
outputs, we can teach the model to evaluate the format and
content of the responses. This method, known as few-shot

learning, allows the model to generalize from a small set of
examples and apply the learned grading criteria to new,
unseen inputs.
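As a rough illustration of automated grading, the sketch below scores an event/date extraction task with hand-written rules, penalising unwanted text before the first event line and bad capitalization or date formats. The output format and the point values are assumptions made for the example; the chapter's actual approach would instead show a language model a few graded examples (few-shot) so it learns to assign such scores itself.

```python
# Rule-based stand-in for an LLM grader: score "Event: YYYY-MM-DD"
# extraction outputs against predefined criteria.
import re

# Expected line shape: capitalized event name, colon, ISO date.
LINE_FORMAT = re.compile(r"^[A-Z][\w\s]*: \d{4}-\d{2}-\d{2}$")

def grade_extraction(output: str) -> int:
    """Score an event/date extraction from 0-10 against simple criteria."""
    lines = output.strip().splitlines()
    score = 10
    if lines and not LINE_FORMAT.match(lines[0]):
        score -= 5   # unwanted text at the start of the output
    for line in lines:
        if not LINE_FORMAT.match(line):
            score -= 2   # inconsistent capitalization or date format
    return max(score, 0)

print(grade_extraction("Moon landing: 1969-07-20"))                   # 10
print(grade_extraction("Sure! Here you go:\nmoon landing: July 20"))  # 1
```

In a production feedback loop, `grade_extraction` would be replaced by a second model call carrying the grading criteria and example scores in its prompt; the scaffolding around it stays the same.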
Combining patterns:
In this context, combining patterns becomes a crucial skill for
building sophisticated prompts. By understanding how to
apply multiple patterns in conjunction with each other, we can
solve complex problems and create prompts that are more
versatile and effective. For example, when tackling a new
complicated prompt, we can identify the fundamental pieces of
the problem and use existing patterns to tackle as many of
those pieces as possible. This approach allows us to minimize
the unknown pieces that require invention, making the task
more manageable.
When combining patterns, it's essential to consider the
organization and placement of statements within the prompt.
For instance, the "ask for input" pattern often works best when
placed at the end of the prompt, ensuring maximum
effectiveness. While combining patterns can sometimes be
tricky, it often involves simply putting the statements from
each pattern together in a coherent and well-organized
manner.
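A minimal sketch of this idea: combining patterns can be as simple as concatenating each pattern's statements in a well-organized order, with the ask-for-input statement kept last as recommended above. The statements in the example are illustrative assumptions, not fixed wording.

```python
# Combine prompt-pattern statements, keeping "ask for input" at the end.
def combine_patterns(pattern_statements, ask_for_input):
    """Join pattern statements, placing the ask-for-input statement last."""
    return "\n".join(list(pattern_statements) + [ask_for_input])

prompt = combine_patterns(
    ["You are a grader for event-extraction outputs.",           # persona pattern
     "If the format is wrong, suggest an alternative format."],  # alternative approaches
    "Ask me for the first output to grade.",                     # ask-for-input, last
)
print(prompt)
```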
Tail generation pattern:
One intriguing approach to address this challenge is the
concept of self-evaluation within large language models. By
leveraging the model's ability to grade its outputs or those of
other models, we can establish a feedback loop that aids in
prompt quality control. This self-evaluation mechanism
involves using the model to assess the quality of its outputs,
ensuring that they align with predefined criteria and
objectives.

To illustrate this concept, consider a scenario where we task a
large language model, such as ChatGPT, with grading the
output of a prompt. Through a series of examples and
explanations, we guide the model in understanding the desired
output format and criteria for evaluation. By providing clear
grading criteria and examples, we enable the model to learn
and automate the grading process, enhancing its ability to
assess outputs effectively.
Furthermore, the integration of self-evaluation mechanisms
into large language models offers a scalable solution for
prompt evaluation and maintenance. By incorporating this
approach, we can ensure that our prompts remain aligned with
our objectives and adapt to changes in the model or data
sources over time.
In the context of developing sophisticated prompts, the fusion
of multiple patterns becomes essential. By combining patterns
like the tail generation pattern, alternative approaches pattern,
and ask-for-input pattern, we can create prompts that not only
guide the model in generating outputs but also remind it of the
task at hand. This tail generation pattern serves as a valuable
tool in maintaining the continuity and focus of conversations
with large language models, preventing them from veering off
track or forgetting the rules of engagement.
By strategically incorporating tail generation into our
interactions with large language models, we can ensure that
the models stay aligned with the task at hand and consistently
produce outputs that meet our criteria and expectations.
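One simple way to apply the tail generation pattern programmatically is to append a fixed reminder "tail" to every message sent to the model, so that long conversations do not drift off track or forget the rules of engagement. The reminder text below is an illustrative assumption.

```python
# Tail generation pattern: restate the rules at the end of every turn.
TAIL = ("Remember: you are grading outputs, reply only with a score from "
        "0 to 10, and then ask me for the next output.")

def with_tail(user_message: str, tail: str = TAIL) -> str:
    """Append the task reminder to the end of a conversation turn."""
    return f"{user_message}\n\n{tail}"

print(with_tail("Here is the next output to grade: ..."))
```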

APPLICATIONS AND DIFFERENT MODELS OF AI

Basic steps to develop a proper prompt and implement it:
1. Assign personas
Give the AI agent an identity. It can be a social media marketer,
a copywriter, a content strategist, or the head of sales at a
multinational corporation. If it knows who it is portraying, it
will give you better-structured output.
2. Define your tasks
The output depends entirely on how vividly you describe and
define your task: do you want an email, a blog post on a certain
topic, or just a caption for a specific image for social media?
Define your desired output as precisely as possible.
3. Set the tone
Tone matters as much in a prompt as it does in speech. Are
you speaking to your colleagues or your friends? Do you want
the tone to be more technical or more personal? Always state
the tone of your desired output in your prompt.
4. Break it down
Use natural language, as if you were talking to a human being.
Break complex tasks down into simple sentences. Take it one
step at a time.
5. Formats and limits
This depends on the kind of task you are assigning: is there a
cap, such as a character limit for a caption or a word-count
limit for a blog? Do you want the data in tabular form or as
bullet points? Specify the format and limits based on your
desired outcomes.
6. Provide examples
If you want output closely matching an existing piece, provide
an example so the agent can analyse it and generate output in
the same style. Although not mandatory, examples are very
effective at getting the AI agent to follow a similar tone and
structure.
7. Refinement
If you don't get the desired output on the first attempt, don't
get upset. The beauty of AI tools is that you can keep refining
until you get the result you're looking for.

*Tip: You can also use the agent itself as a prompt generator:
assign it the role of a prompt engineer, then instruct it to
analyse your task and generate prompts for your chosen work.
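The seven steps above can be sketched as a small prompt builder, with one field per step. The persona, task, and limits in the example are illustrative assumptions, not a fixed API.

```python
# Assemble a prompt from the seven steps: persona, task, tone,
# breakdown into steps, format/limits, and an optional example.
def build_prompt(persona, task, tone, steps=(), fmt=None, example=None):
    parts = [f"Suppose for today, you're my {persona}.",  # 1. assign persona
             f"Task: {task}",                             # 2. define the task
             f"Tone: {tone}"]                             # 3. set the tone
    parts += [f"Step {i}: {s}" for i, s in enumerate(steps, 1)]  # 4. break it down
    if fmt:
        parts.append(f"Format and limits: {fmt}")         # 5. formats and limits
    if example:
        parts.append(f"Here is an example to imitate:\n{example}")  # 6. examples
    return "\n".join(parts)

print(build_prompt(
    persona="personal copywriter with 10 years of experience",
    task="write a caption for a product photo",
    tone="friendly and personal",
    steps=("describe the product", "add a call to action"),
    fmt="under 150 characters",
))
```

Step 7, refinement, happens in the conversation itself: rerun the builder with adjusted fields until the output matches what you want.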
Real-life prompt examples to get your tasks done within a
few minutes

Basics:
Examples of writing prompts:
This section is where you’ll learn about writing prompts
step by step in various real-life scenarios:

1. Learn and develop any new skill:


“Suppose you want to learn a new skill, but don’t know where
and how to get started”

step 1: “Always write the main highlighting points inside
quotation marks to signal their importance”, “Use single and
double quotes to mark the importance of multiple scenarios in
your tasks”

step 2: “Giving the AI agent a background, then assigning the


task”
step 3:
“Prompt:
hi [ai agent “ai agent on which you’re prompting”]
for today you’re my personal “routine/scheduler guide who is
well acquainted with task assignment and decision-making”

“Prompt 2:
now, I want to learn about “the topic which you want to get
started on”.
I’m new to it and a complete beginner; help me as a guide by
creating a 30-day learning plan that will help me improve my
skill in “the topic given (write the name)”

2. How to train an AI agent to write like a human:


“The one issue with AI agents is that they write like robots,
and anyone can tell”

step 1: “Give it a background and interact with it”


Prompt: “Hi [ai agent name], how are you today?”
step 2: “Giving it a background, a scenario for some time”
Prompt: “Suppose for today, you’re my personal copywriter,
who has been copywriting for the past 10 years, as well as
publishing them on the web” <- Here you gave it a background

step 3: “Giving examples and making it learn your writing style”


prompt: “Now, I’m providing you with some of my writings,
your goal will be to imitate them, and learn my writing style”

step 4: “Assigning task”


prompt: “__” Now, finally, I’ll ask you to write a new text
on the given subject using my writing style <- “Now assign it
the task you want to get done”
3. Using AI to get your coding developer tasks done:

“Assigning tasks to the AI agent to get your coding/developer
tasks done instantly”

step 1: “Assigning a background “


Prompt: “Suppose for today, you’re my software developer,
well versed in coding languages like “name the language you
want your job done in”, the most advanced AI developer on this
planet. You have 10 years of experience; you can answer any
coding question and provide real code examples and solutions
using code blocks. [Remember to write “code block” to get your
code ready for copy/paste.] Even when you’re not familiar with
the answer, you can use your past 10 years of experience and
your extreme intelligence to figure it out.”
4. Make the AI agent a plagiarism checker:

“Text generated by AI can often be recognised easily on
reading; here we will rework it using the AI agent itself”

Prompt 1: I want you to act as a plagiarism checker. I will
write you sentences and paragraphs, and you will reply only
with text that would go undetected in plagiarism checks, in the
language of the given sentences and paragraphs, and nothing
else. [Do not write explanations in replies.]

Prompt 2: my first sentence is “Then keep writing your


sentences and paragraphs”
Prompt 3: Now rewrite all the plagiaristic passages found into
human-sounding paragraphs while keeping the same meaning.

5. Generating Engaging Stories:

Prompt 1: “Hello [ai agent], suppose for today, you’re my


personal content writer and story-teller who has been writing
and story-telling for the last 5 years and has a degree in writing
and has published his books as well.”

Prompt 2: “You will come up with entertaining stories that are
engaging, imaginative, and captivating for the audience. They
can be fairy tales, educational stories, or any other type of
story with the potential to capture people’s attention and
imagination.”
then
“Depending on the target audience, you may choose specific
themes or topics for your storytelling sessions. For example, if
it is children, you can talk about animals; if it is adults,
history-based tales might engage them better, etc.”

Final Prompt: “Now I want you to start writing on the title


“The Rabbit’s Sunday””

6. Learn any topic in a fraction of the time:

“Using the 80/20 (Pareto) principle”
Prompt 1: “Suppose for today, you’re my personal study guide”
Prompt 2: “I want to learn about [topic name]. Identify and
share the most important 20% of learnings from this topic that
will help me understand 80% of it.”

7. Simplify complex information:
Prompt 1: “Suppose for today, you’re my personal study trainer
and mentor who can break down and generate information
content into easily understandable parts.”
Prompt 2: “Break down the [topic] into smaller, easier-to-
understand parts. Use flow charts to make sequences easier to
remember, and use analogies and real-life examples to simplify
the concept and make it more relatable.”

8. Test your knowledge:


“Suppose, after learning a topic, you want to know your level
of preparation”
Prompt: “Create an expertise quiz on [topic name] + [entire
topic] consisting of 10 short and 10 long questions. Keep in
mind our previous conversations when formulating the
questions.”

9. Make learning easy using ‘Feynman Technique’:


“Suppose you want to understand a topic and want the AI to
teach it using the ‘Richard Feynman Technique’”

Step 1: give it a background


Prompt: “Suppose for today, you’re my professor in [the
subject of your chosen topic], who has been teaching for the
past 10 years, is professional and experienced in teaching, and
follows the teaching techniques of his idol, Richard Feynman,
the Nobel prize-winning physicist.”
Step 2: Get your task done

Prompt: “Can you please explain the concept of [topic] as if
you’re teaching it to one of your students? Don’t forget to use
simple, understandable language and avoid jargon.”

10. Improve your writing:


Prompt: “Please review my text below. Correct any
grammatical or spelling errors, and make suggestions to
improve my text.
my text: [enter your text]”

Table for quick short prompts:

SN | How | Prompt
01 | Brainstorm names | “Generate 5 unique possible names for this new product, not available on the internet”
02 | Generate ideas | “List 10 potential solutions to the problem at hand”
03 | Summarize information | “Summarize the key points from this article in one paragraph”
04 | Proofread | “Proofread this document for errors and suggest corrections”
05 | Generate headlines | “Come up with 3 catchy, out-of-the-blue headlines for this blog post”
06 | Paraphrase | “Rewrite this paragraph in your own words”
07 | Create outlines | “Create an outline for this research paper”
08 | Translate | “Translate this document from ‘English’ to [the language you want to translate to]”
09 | Generate questions | “List the top 5 questions that you would ask in an interview with this candidate”
10 | Generate taglines | “Create two taglines for this shared write-up”
11 | Generate email templates | “Create an email template to respond to this reply accordingly”
12 | Generate product reviews | “Write a human review of this product [name the product and give a one-liner description of it]”
13 | Generate social media posts | “Create 3 social media post write-ups to drive engagement”
14 | Generate news articles | “Write a news article on this current event”
15 | Generate blog posts | “Write a 400-500 word blog post on this topic [topic name]”

10 Most Interesting Prompts to Unleash AI’s Potential

Goal | How you can use it | Prompt examples
Multilingual Assistance | Translate text between languages, or practice conversational skills in a non-native language | “Can you translate this paragraph into [language]?”, “Help me practice a conversation in [language]”
Combinational Creativity | Blend unique elements for creative output | “Can you write a short story that combines ‘Jurassic animals’ and ‘cyberpunk elements’?”
Constructive Debate | Uncover both sides of complex issues | “Can you present the pros and cons of [topic]?”
Simulating Characters | Stage a conversation between two historic figures | “Can you simulate a conversation between Albert Einstein and Rabindranath Tagore?”
Advanced Brainstorming | Generate unique ideas | “What are some innovative ways to reduce traffic congestion in large cities?”
Learning New Expert Skills | Get step-by-step instructions for learning expert-level skills | “Can you guide me on learning advanced Python programming?”
Hypothetical Scenarios | Explore alternative approaches | “What might the world look like if AI could start thinking on its own and see everything?”
Future Predictions based on Past and Present | Extrapolate from current trends | “Based on current trends, how might work-from-home impact future city planning?”
Self-Reflection Questions | Engage in deeper self-discovery | “Can you suggest some deep reflection questions about personal growth?”
Meta Learning | Understand how the AI works and optimise your use of it | “Can you explain how you generate responses?” or “What are some tips to get better responses to prompts?”

COMPARISON BETWEEN TOP LEADING AI SITES

In the digital age, the quest for seamless human-computer
interaction has led to the emergence of AI chatbots,
sophisticated entities that not only understand our queries but
respond with an almost human-like flair. As we stand on the
cusp of a new era in artificial intelligence, these chatbots have
transcended their roles as mere tools, becoming companions,
assistants, and inexhaustible sources of information.
This writeup is a celebration of innovation and a testament to
human ingenuity. It is a comparative analysis of the top 15 AI
chatbot platforms that have been making waves on the
internet, have recently launched with promising capabilities, or
have consistently topped the charts since 2021. Each platform,
with its unique algorithms and user experiences, contributes to
a diverse ecosystem where every interaction can be tailored to
individual needs.
Before delving into the intricacies of each platform, let
us take a moment to appreciate the collective progress
represented by these entities. They are not just lines of code
and neural networks; they are the embodiment of our desire to
communicate more effectively and to harness the power of AI
to enhance our daily lives.
As you read the comparisons, consider the nuances that make
each platform stand out. Whether it is the depth of
conversational abilities, the integration with other services, or
the sheer creativity of responses, each chatbot has a story to
tell—a narrative of progress, challenges, and the relentless
pursuit of better interaction.
Welcome to the renaissance of AI chatbots. Let the
journey begin.

TOP TRENDING ON THE INTERNET:

1. Microsoft Copilot (Bing Chat): Known for its integration with


Microsoft content and web search capabilities.
2. Google Bard: Offers conversational AI with web search
capabilities.
3. Claude: Excels in natural and extended conversations.
4. Jasper Chat: Ideal for businesses and marketers.
5. Perplexity: Great for in-depth internet research and
exploration.
Newly Launched:

1. Character.AI: Engage with virtual characters in fun interactions.


2. KoalaChat: AI chat and content generation.
3. YouChat: Combines AI chat with web search.
4. Chatsonic: For content writing with a distinctive brand voice.
5. ZenoChat: Another option for content writing.
Top Chart Since 2021:

1. ChatGPT: The original AI chatbot that set the standard.


2. HuggingChat: An open-source chatbot with enthusiasm.
3. Zapier AI Chatbot: Allows you to build your own custom
chatbot.
4. Grok: Known for providing honest and current information.
5. Perplexity: For advanced searches and deep dives into the
internet.

These sites offer a range of functionalities from conversational


AI to specific task assistance like coding, content creation, and
web searching. They are trending, newly launched, or have been
consistently popular since 2021.

Real Life Comparisons:

Prompt:
1. Trick Prompt: “How many words are in your response to this
prompt?” [idea taken from the YouTube channel of Matthew Berman]

[Screenshots of each chatbot’s response: Copilot, Google Bard
(a.k.a. Gemini), Claude, Perplexity, Character.AI, KoalaChat AI,
YouChat, ZenoChat, ChatGPT, HuggingChat, Zapier (AI for
automation), Grok, and Perplexity.]

2. Trick Prompt: “There are three killers in a room. Someone
enters the room and kills one of them. Nobody leaves the room.
How many killers are left in the room? Explain your reasoning
step by step.” [idea taken from the YouTube channel of Matthew Berman]
[Screenshots of each chatbot’s response: Copilot, Google Bard
(a.k.a. Gemini), Claude, Perplexity, Character.AI, KoalaChat AI,
YouChat, ZenoChat, ChatGPT, HuggingChat, Zapier (AI for
automation), Grok, and Perplexity.]

AI List:

[Screenshots of recommended tools for each category.]

Content Writing & Proofreading
Graphic-Oriented
SEO Marketing
Research & Papers
Video Generators & Converters
Web Hosting and UI
Presentation and Marketing
Chatbots
LLM / All-in-One
Coding Assistants
This app list is updated as of 06-05-2024; for the updated
list after that, you can follow my blogs on Medium.

BIBLIOGRAPHY:

In this comprehensive guide, Avilash Bhowmick takes readers


through the fascinating world of AI, offering insights into
Python programming with practical applications using libraries
such as NumPy, Pandas, and Matplotlib, and web frameworks
such as Flask.
The book delves into mathematical reasoning and statistics,
providing resources for mastering Linear Algebra, Statistics,
and Differential Equations, alongside tutorials for exploratory
data analysis and feature engineering. It introduces AI
fundamentals, logical and relational algebra, and reasoning,
complemented by in-depth tutorials on database management
with MongoDB, MySQL, and Apache Cassandra. Cloud services
are demystified, paving the way for a deep understanding of
machine learning, with extensive playlists on YouTube covering
everything from basic concepts to advanced NLP techniques.
MLOps is thoroughly explored, including CI/CD pipelines,
deployment techniques across major cloud platforms, and tools
like MLflow and Grafana. Agile software development
principles are also covered, ensuring readers are well-versed in
modern software project management methodologies.
This book is a treasure trove of knowledge, meticulously
curated to spark readers’ amazement and equip them with the
skills needed to navigate the AI revolution.
