All Unit AI
Unit 1
Artificial Intelligence (AI) is an area of computer science that
emphasizes the creation of intelligent machines that work and
react like humans. AI is a broad field that draws on many
disciplines, such as computer science, data analytics, statistics,
neuroscience, and more.
AI has become an essential part of the technology industry.
Research associated with artificial intelligence is highly technical
and specialized.
# Goals of AI
1. To create expert systems: systems which exhibit intelligent
behaviour, learn, demonstrate, explain, and advise their users.
# Advantages of AI
Reduction in Human Error
Useful for risky areas
High Reliability
Digital Assistant
Faster Decisions
# Disadvantages of AI
High Cost
Can't Replace Humans
Doesn't Improve With Experience
Lack of Creativity
Risk of Unemployment
No Feelings and Emotions
# Evolution Of AI
AI has evolved over decades through significant milestones that
define its journey from theoretical ideas to practical applications.
The evolution can be summarized in six stages, each with key
breakthroughs and challenges.
4. Computer Vision
Computer Vision is the field of AI that focuses on enabling
machines to interpret and analyze visual data from the world, such
as images and videos. It aims to replicate human vision capabilities
in machines.
How it works:
It uses techniques like image processing, pattern recognition, and
machine learning to extract meaningful information from images.
For example, in facial recognition, it identifies unique facial
features to match with stored profiles.
Applications:
Computer Vision powers face recognition systems (used in
smartphones and security systems), object detection in self-driving
cars, and medical imaging for diagnosing diseases like
cancer. It is also widely used in augmented reality and video
surveillance.
5. Robotics
Robotics is the branch of AI that deals with designing and creating
robots that can perform tasks autonomously or semi-autonomously.
Robots combine AI with mechanical engineering.
How it works:
Robots perceive their environment through sensors (e.g., cameras,
infrared), process the information using AI algorithms, and take
actions using actuators like motors or arms. For instance, robotic
vacuum cleaners analyze room layouts to clean efficiently.
Applications:
Robotics is used in manufacturing (e.g., assembling cars), healthcare
(e.g., surgical robots), and exploration (e.g., NASA's Mars Rovers).
Robots are also used in homes for tasks like cleaning and in military
operations for surveillance and bomb disposal.
6. Expert Systems
Expert Systems are AI programs designed to solve specific problems
by mimicking the decision-making ability of a human expert in a
particular field.
How it works:
These systems have a knowledge base (a collection of facts and
rules) and an inference engine that applies the rules to the
knowledge base to solve problems. For instance, in medical
diagnosis, an expert system can analyze symptoms and
recommend treatments.
Applications:
Expert Systems are used in areas like medical diagnosis (e.g.,
MYCIN for identifying bacterial infections), engineering (to
troubleshoot equipment), and business (for financial
forecasting).
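The knowledge base plus inference engine described above can be sketched in a few lines. The rules and symptoms below are invented for illustration; they are not taken from MYCIN or any real system.

```python
# Minimal expert-system sketch: a knowledge base of if-then rules
# and a forward-chaining inference engine that fires rules until no
# new fact can be derived. Rules and symptoms are made up.

RULES = [
    ({"fever", "cough"}, "flu"),           # if fever AND cough -> flu
    ({"flu", "chest_pain"}, "see_doctor"), # derived facts can fire more rules
]

def infer(facts):
    """Repeatedly apply rules until the set of facts stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest_pain"}))
```

A real expert system would separate the knowledge base from the engine in the same way, just with far richer rule and fact representations.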
7. Fuzzy Logic
Fuzzy Logic is a branch of AI that deals with reasoning under
uncertainty. Unlike traditional logic that only considers "true" or
"false," fuzzy logic works with degrees of truth, similar to how
humans make decisions.
How it works:
Fuzzy logic uses "fuzzy sets" to handle uncertain or imprecise data.
For example, a washing machine with fuzzy logic can adjust water
usage and wash cycles based on load size and dirtiness, rather than
fixed rules.
Applications:
Fuzzy logic is used in home appliances (e.g., air conditioners,
washing machines), automotive systems (e.g., automatic gear
shifting), and decision-making systems where precision is not
possible.
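The idea of degrees of truth can be shown with a small sketch. The dirtiness scale and the triangular membership breakpoints below are assumptions chosen for the example, not values from any real appliance.

```python
# Fuzzy membership sketch: dirtiness on a 0-10 scale maps to degrees
# of truth for "low", "medium", and "high" instead of a hard yes/no.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def dirtiness_degrees(x):
    return {
        "low":    tri(x, -1, 0, 5),
        "medium": tri(x, 2, 5, 8),
        "high":   tri(x, 5, 10, 11),
    }

print(dirtiness_degrees(6))  # partly "medium", partly "high"
```

A fuzzy controller would then combine these degrees with rules like "if dirtiness is high, use a longer wash cycle."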
8. Evolutionary Computing
Evolutionary Computing is inspired by biological evolution. It uses
algorithms like genetic algorithms to solve optimization problems by
mimicking natural selection and survival of the fittest.
How it works:
The algorithm starts with a population of possible solutions,
evaluates them based on a fitness function, and evolves better
solutions through operations like mutation and crossover. For
example, it can optimize the design of an aircraft for better
performance.
Applications:
Evolutionary Computing is used in robotics for learning walking
patterns, optimizing supply chain logistics, and solving complex
scheduling problems like assigning resources to tasks efficiently.
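The population / fitness / crossover / mutation loop described above can be sketched on a toy problem: evolving an 8-bit string toward all ones ("OneMax"). The population size, mutation rate, and generation count are arbitrary choices for the sketch.

```python
import random

# Toy genetic algorithm: evolve 8-bit lists toward all ones.
random.seed(0)
GENES = 8

def fitness(ind):
    return sum(ind)                      # number of 1 bits

def crossover(a, b):
    cut = random.randint(1, GENES - 1)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # survival of the fittest
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```

Real applications replace the bit string with an encoding of the design (e.g., aircraft parameters) and the fitness function with a simulation of its performance.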
# Classification of AI
Based on Functionalities: Based on the ways the machines behave and
functionalities, there are four types of Artificial Intelligence
approaches:
1. Reactive Machines
Reactive Machines are the simplest type of AI systems that
only focus on the current situation.
They cannot store past experiences or use them to
influence future actions.
These machines analyze the present data to perform a
specific task and respond accordingly. They work purely on
predefined logic without adapting to changes or learning
over time.
Key Features:
o They do not have memory, so they cannot learn from
previous actions.
o They focus only on analyzing the present and taking the
best possible action at that moment.
Example:
o Games like Deep Blue, IBM's chess-playing computer, which
defeated the world chess champion in the 1990s.
o It calculates all possible moves but does not learn or adapt
after a game.
2. Limited Memory
Limited Memory AI systems can temporarily store and use
past data for a short period to improve their decision-
making process.
These systems can analyze recent events or conditions to
perform their tasks better but cannot permanently retain
this information.
They are designed to function within specific scenarios
where short-term memory can improve efficiency, but they
lack the ability to build a long-term understanding of their
environment.
Key Features:
o They use this temporary memory to make
decisions or predictions based on recent
observations.
o Once the task is completed, the stored data is
discarded.
Example:
o Self-driving cars: These use limited memory to understand
the speed and distance of nearby vehicles, the road
conditions, and traffic signals. They analyze this information
to navigate safely but do not permanently store these details.
3. Theory of Mind
Theory of Mind AI refers to systems that can understand human
emotions, thoughts, beliefs, and intentions, aiming to interact with
humans on a social and emotional level.
This type of AI focuses on replicating human-like social behavior,
allowing it to interpret and respond to complex social cues. It
seeks to enable machines to build trust and form meaningful
relationships with humans, adapting their behavior to individual
needs and contexts.
Key Features:
These systems need to interpret emotions, body language, and
social cues to interact naturally.
They would be capable of forming relationships or adapting their
behavior based on a user’s needs or feelings.
Current Status:
Research and development are actively progressing in this area,
but fully functional AI with a Theory of Mind does not exist yet.
4. Self-Awareness
Self-Awareness AI represents the most advanced and
hypothetical form of artificial intelligence. These systems are
envisioned to possess their own consciousness, allowing
them to understand their existence, emotions, and
environment. Unlike other AI types, self-aware machines
would have independent reasoning and decision-making
capabilities, making them capable of functioning
autonomously with no human guidance. This level of AI
remains theoretical and raises questions about ethics and
safety.
Key Features:
o These systems would be able to think, feel, and make
decisions on their own.
o If achieved, it could revolutionize industries, but it also
raises serious ethical and safety concerns.
# AI Agent
• An AI agent is a software program designed to interact with its
environment, perceive the data it receives, and take actions based
on that data to achieve specific goals.
• They use predetermined rules or trained models to make
decisions and might need external control or supervision.
# Intelligent Agent
An intelligent agent is an autonomous entity which acts upon an
environment using sensors and actuators for achieving goals. An
intelligent agent may learn from the environment to achieve their goals.
Intelligent agents are designed to interact with their surroundings, learn
from changes, and make appropriate decisions based on their
observations.
3. Goal-Based Agents
What are they?
Goal-Based Agents are designed to achieve specific goals. Unlike
reflex agents, they don't just react to the environment; they think
about how to reach a particular objective. These agents can plan
their actions and make decisions that move them closer to their
goal.
How do they work?
These agents use a goal-directed approach, where they evaluate
different possible actions and choose the one that will lead to their
goal. They might even have to consider multiple steps or
intermediate actions to reach the desired outcome.
Example:
o A chess-playing AI is a goal-based agent. Its goal is to
checkmate the opponent. It doesn't just react to the
opponent’s moves but plans its own moves to gradually
move towards achieving checkmate.
4. Utility-Based Agents
What are they?
Utility-Based Agents not only try to achieve a goal, but they also
seek to maximize the overall benefit or "utility" of their actions.
These agents are concerned with the quality of the results and aim
to choose actions that provide the most optimal outcome.
How do they work?
Utility-Based Agents evaluate different possible actions based on how well they meet
their goals. The goal isn’t just to reach the destination, but to do it in the best way
possible—whether it’s by minimizing time, cost, or maximizing efficiency.
Example:
o A ride-sharing app like Uber uses a utility-based approach. It
tries to pick the best route that minimizes both travel time
and cost while ensuring a good experience for the rider.
5. Learning Agents
What are they?
Learning Agents have the unique ability to learn from their
experiences. They start with basic knowledge and improve over
time as they interact
with the environment. Through learning, they can adapt to new
situations and make better decisions in the future.
How do they work?
These agents use feedback from their actions to update their
knowledge and improve their decision-making. They might use
techniques like trial and error, pattern recognition, or
reinforcement learning to understand what works best.
Example:
o Virtual assistants like Siri, Google Assistant, and Alexa are
learning agents. They improve over time as they interact with
users, understand speech better, and provide more relevant
responses based on past interactions.
2. Adaptability:
They can adapt to changes in their environment. For instance, a
self-driving car adjusts to traffic conditions.
3. Goal-Oriented:
Every intelligent agent has a specific goal or objective that it tries
to achieve.
4. Interaction:
Intelligent agents can communicate with humans, other agents, or
systems to gather information or collaborate on tasks.
1. Perception (Sensors):
o What is it?
The perception component allows the agent to sense or
observe its environment. It gathers data or input from the
environment using sensors. The sensors help the agent detect
changes and understand the current situation.
o Example:
A robot uses cameras and microphones to perceive its
surroundings.
2. Environment:
o What is it?
The environment is the world or space in which the agent
operates. The agent perceives and interacts with the
environment. The environment can be real (like a house or
factory) or virtual (like a game or simulation).
o Example:
For a self-driving car, the environment includes roads, other
cars, traffic signals, etc.
3. Actuators:
o What is it?
Actuators allow the agent to take actions in the
environment. Once the agent has
processed information, it needs to act or respond in some
way, and actuators are the tools that make that possible.
o Example:
A robot uses motors to move or a camera to capture images.
4. Reasoning (Processing or Decision-Making):
o What is it?
This component is responsible for processing the information
received from the environment and deciding what action to
take. It involves interpreting the data, considering possible
outcomes, and choosing the best course of action.
o Example:
A chess-playing AI uses reasoning to decide which move to
make next based on the current board situation.
5. Learning (Optional):
o What is it?
Some intelligent agents have the ability to learn from
experience. They use past actions and feedback to improve
future decision-making. Learning helps the agent adapt to
new situations and enhance performance over time.
o Example:
A voice assistant learns your preferences and gets better at
understanding your commands over time.
6. Performance Measure:
o What is it?
The performance measure is used to evaluate the success of
the agent's actions. It determines whether the agent is
achieving its goals and how effectively it is performing its
tasks.
o Example:
For a delivery robot, the performance measure could be
how efficiently it delivers packages.
Que. Explain the role of sensors and effectors
in the functioning of intelligent agent?
Ans. In an intelligent agent, sensors and effectors play crucial roles in
enabling the agent to interact with and respond to its environment. These
components allow
the agent to perceive the world around it and take actions based on its
perception.
1. Sensors (Perception)
Sensors are the components responsible for gathering information about
the environment. They enable the agent to perceive the world around it
by detecting changes or capturing data. Sensors are vital because an
intelligent agent can only make informed decisions if it has accurate and
up-to-date information about its surroundings.
Role of Sensors:
o Data Collection: Sensors collect data from the environment,
such as sounds, images, temperature, or any other type of
relevant information.
o Environmental Awareness: They allow the agent to
"sense" the state of the world, such as obstacles, people,
or events that may be occurring in its surroundings.
o Input for Decision-Making: The information gathered by
sensors serves as input for the agent's reasoning and
decision-making processes. The agent uses this data to
determine the most appropriate actions.
2. Effectors (Actuators)
Effectors (also called actuators) are responsible for executing actions or
performing tasks based on the decisions made by the intelligent agent.
After the agent processes the information gathered by its sensors and
reasons about what actions to take, the effectors act on the environment.
Role of Effectors:
o Executing Actions: Effectors carry out the actions that the
agent has decided are necessary to achieve its goals. These
could be physical actions, like moving, or virtual actions, like
sending commands.
o Interaction with the Environment: Effectors enable the
agent to interact with its environment by changing things,
such as moving objects, navigating through space, or
providing outputs like voice responses.
o Outcome of Decision Making: Effectors implement the results
of the agent’s reasoning process and enable the agent to
impact the environment based on the agent’s perception.
Que. What are the steps involved in problem solving agent ?
Ans. An agent solves problems by following a series of steps, each of
which builds upon the previous one:
Step 1: Goal Setting
In the first step, the agent needs to decide what it
wants to achieve. This is called goal setting. The agent looks at the
environment and considers what is possible. Based on this, it sets a goal
that it aims to achieve. For example, if a robot is in a room, its goal
might be to move to a particular spot in the room.
Step 2: Goal Formulation
Once the agent has set the goal, the next step is to formalize or define
the goal more clearly in terms of actions and expected outcomes. This is
called goal formulation. The agent now needs to figure out how to
express the goal in a way that it can work towards it.
Key Activity in Goal Formulation:
o Observe Current State: The agent looks at where it is
currently. It checks the present state of the environment to
understand what is happening and what needs to be done.
o Tabulate Performance Measures: The agent also checks
how well it is performing by measuring how far it is from
its goal. It uses these measures to track progress.
By observing its environment and measuring its performance, the agent can plan how to
achieve its goal.
Step 3: Problem Formulation
After clearly understanding the goal, the agent needs to figure out how to
reach it. This is done by problem formulation, which means determining
the steps (or actions) that need to be taken to reach the goal.
What Happens in Problem Formulation:
The agent looks at the current situation and thinks about the actions it
can take to move from the starting point to the goal. It may need to
decide which action to take at each step to reach the goal in the
most efficient way.
Step 4: Search in an Unknown Environment
In many cases, the agent does not know exactly what will happen when it
takes an action. The environment might be unknown or unpredictable. In
such cases, the agent has to search for the best possible sequence of
actions that will lead to the goal.
What Happens in Search:
o The agent tries different actions to see what works. It
explores possible actions and learns from the results.
o As it tests these actions, the agent gathers information and
builds knowledge about how to reach its goal. This process of
learning is important, especially when the environment is new
or unknown.
Once the agent has gathered enough information, it will know which set of
actions works best to reach the goal.
Step 5: Execution Phase
After finding a possible solution, the agent enters the execution phase,
where it actually performs the actions that will lead to the goal.
Key Idea:
The agent uses the knowledge it has gathered to follow a
sequence of actions. These actions are based on the plan
made during the problem formulation and search steps.
o The agent executes these actions one by one, with each
action bringing it closer to the goal.
Once the agent executes the actions, it can evaluate whether the goal has been
reached. If not, the agent may need to reformulate its goals or actions and repeat
the process.
Unit 2
# State Space Approach
The State Space Approach is a method used in Artificial Intelligence to solve problems by
representing all possible states of a system and the transitions between them. It provides a
structured way to define and explore problems systematically.
2. Informed search:
Informed search strategies use extra information (called a heuristic) to estimate which
paths are more promising. This allows the agent to focus on better options and solve
problems faster.
# Depth-First Search (DFS)
DFS is a search strategy that explores as deep as possible along one path
before backtracking and trying another path. It uses a stack to keep track of
the nodes that still need to be explored. DFS goes deep into a branch of the
tree (or graph) until it hits a dead-end, then backtracks and explores other
paths.
How DFS Works:
o DFS starts at the initial node and explores as far as possible along
one path.
o If it reaches a node with no unvisited neighbors (a dead-end), it
backtracks to the previous node and explores the next possible
path.
o This continues until the goal is found or all paths are explored.
Advantages of DFS:
Uses less memory
Good for deep problems
Disadvantages of DFS:
May not find the shortest path
Can get stuck
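The stack-based procedure above can be sketched as follows; the small directed graph is a made-up example.

```python
# DFS with an explicit stack: pop the most recently added node (LIFO),
# so the search goes deep along one path before backtracking.

def dfs(graph, start, goal):
    stack = [(start, [start])]        # (node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()      # LIFO: most recent node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            stack.append((neighbour, path + [neighbour]))
    return None                       # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"]}
print(dfs(graph, "A", "F"))  # → ['A', 'C', 'E', 'F']
```

Note that the path found goes through C even though a path through B might exist in other graphs; DFS returns the first path it finds, not necessarily the shortest, matching the disadvantage listed above.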
# Heuristic function
A heuristic function is a problem-solving technique used in AI to guide the search
for a solution in the most promising direction. It provides an estimate of the
"cost" or "distance" from the current state to the goal state. Heuristics are
used in informed search strategies (like A* search) to make the search process
more efficient by focusing on the most relevant paths.
Definition:
A heuristic is a function h(n) that gives an estimate of the remaining cost from a
given node (n) to the goal. This value helps the AI agent decide which path to
take by choosing the nodes with the lowest heuristic value.
The value of the heuristic function is always positive, and for the search
to be optimal it should satisfy
h(n) <= h*(n)
where h(n) is the heuristic (estimated) cost and h*(n) is the actual cost
to reach the goal. A heuristic satisfying this condition is called admissible.
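A concrete example of an admissible heuristic, one satisfying h(n) <= h*(n), is the Manhattan distance for 4-directional grid movement with unit step cost: it never overestimates the true number of moves remaining.

```python
# Manhattan distance: a common admissible heuristic for grid pathfinding
# where the agent can move up, down, left, or right at cost 1 per step.

def manhattan(node, goal):
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((1, 1), (4, 5)))  # estimate: 3 + 4 = 7
```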
# A* algorithm
A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) and the cost to reach node n from the start state, g(n). It combines
features of UCS and greedy best-first search, which lets it solve problems
efficiently. The A* search algorithm finds the shortest path through the search
space using the heuristic function.
This search algorithm expands fewer nodes and provides an optimal result faster.
A* is similar to UCS except that it uses g(n)+h(n) instead of
g(n) alone. In the A* search algorithm, we use the search heuristic as well as the
cost to reach the node. Hence we can combine both costs as follows; this sum is
called the fitness number.
f(n)=g(n)+h(n)
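A minimal sketch of A* using f(n) = g(n) + h(n) on a small grid; the grid, its walls, and the Manhattan-distance heuristic are assumptions for the example (0 = free cell, 1 = wall).

```python
import heapq

# A* on a grid: pop the open node with the lowest f(n) = g(n) + h(n),
# where g is the cost so far and h is the Manhattan-distance estimate.

def a_star(grid, start, goal):
    def h(p):                                    # heuristic h(n)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = node[0] + dx, node[1] + dy
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0):
                ng = g + 1                       # g(n): unit step cost
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nx, ny)), ng, (nx, ny),
                                    path + [(nx, ny)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(path, "cost:", len(path) - 1)
```

Because the Manhattan heuristic is admissible here, the first time the goal is popped the path is guaranteed optimal.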
5. Termination:
o The algorithm stops when no better solution can be found. At
this point, it has reached the best solution it can in that area
(a local optimum).
Algorithm:
1. Evaluate the initial state; if it is the goal state, return success
and stop.
2. Loop until a solution is found or there is no new operator left to
apply.
3. Select and apply an operator to the current state.
4. Check the new state:
If it is the goal state, return success and quit.
Else if it is better than the current state, make the new state the current
state.
Else, if it is not better than the current state, return to step 2.
5. Exit
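The steps above can be sketched on a one-dimensional objective; the function (a single peak at x = 3) and the step size are arbitrary choices for the example.

```python
# Simple hill climbing: keep moving to the better neighbour until
# neither neighbour improves on the current state (a local optimum).

def hill_climb(start, step=0.1):
    def value(x):
        return -(x - 3) ** 2             # single peak at x = 3
    current = start
    while True:
        neighbours = [current + step, current - step]
        best = max(neighbours, key=value)
        if value(best) <= value(current):
            return current               # no operator improves: stop
        current = best

x = hill_climb(0.0)
print(round(x, 1))  # climbs to the peak near x = 3
```

On a function with several peaks the same loop would stop at whichever peak is nearest the start, which is exactly the local-optimum limitation described in the termination step.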
# Types:
There are different variants of Hill Climbing, and each has its own approach:
Example production rules for the Water Jug Problem, where x is the water in
the 4-litre jug and y is the water in the 3-litre jug:
(x, y) -> (x+y, 0) if x+y <= 4 and y > 0 : Pour all the water from the 3-litre
jug into the 4-litre jug.
(x, y) -> (0, x+y) if x+y <= 3 and x > 0 : Pour all the water from the 4-litre
jug into the 3-litre jug.
(0, 2) -> (2, 0) : Pour the 2 litres from the 3-litre jug into the 4-litre
jug.
(2, y) -> (0, y) : Empty the 2 litres in the 4-litre jug onto the
ground.
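The production rules above can be explored mechanically. A breadth-first search sketch over the jug states, assuming the usual start of (0, 0) and goal of 2 litres in the 4-litre jug, might look like:

```python
from collections import deque

# BFS over water-jug states (x, y): x = 4-litre jug, y = 3-litre jug.
# Moves: fill a jug, empty a jug, or pour one jug into the other.

def solve(goal=2):
    start = (0, 0)
    seen = {start}
    queue = deque([(start, [start])])
    while queue:
        (x, y), path = queue.popleft()
        if x == goal:
            return path
        moves = [
            (4, y), (x, 3),                          # fill a jug
            (0, y), (x, 0),                          # empty a jug
            (x + min(y, 4 - x), y - min(y, 4 - x)),  # pour 3L -> 4L
            (x - min(x, 3 - y), y + min(x, 3 - y)),  # pour 4L -> 3L
        ]
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [state]))
    return None

print(solve())
```

Because BFS explores states level by level, the first solution found uses the fewest rule applications.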
# Constraint Satisfaction Problem
It is a mathematical problem defined by a set of variables, their possible values
(domains), and
constraints that restrict the values the variables can take. The goal is to assign
values to the variables in such a way that all the constraints are satisfied.
Constraint Satisfaction Procedure to Solve Cryptarithmetic Problems
Cryptarithmetic problems are puzzles where arithmetic equations are given in
an encrypted form, and the task is to find the unique digits for each letter so
the equation holds true.
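A brute-force sketch of this idea on a small puzzle: I + BB = ILL, whose classic solution is 1 + 99 = 100. The choice of puzzle is an assumption for the example; the document does not name one. The constraints are that every letter maps to a distinct digit and leading letters are nonzero.

```python
from itertools import permutations

# Try every assignment of distinct digits to the letters I, B, L and
# keep the one where the arithmetic I + BB = ILL holds.

def solve_i_bb_ill():
    for i, b, l in permutations(range(10), 3):   # distinct digits
        if i == 0 or b == 0:                     # no leading zeros
            continue
        if i + (11 * b) == 100 * i + 11 * l:     # I + BB == ILL
            return {"I": i, "B": b, "L": l}
    return None

print(solve_i_bb_ill())  # → {'I': 1, 'B': 9, 'L': 0}
```

Real CSP solvers prune this search with constraint propagation instead of enumerating every permutation, which matters for larger puzzles like SEND + MORE = MONEY.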
3. Mini-max algorithm is mostly used for game playing in AI, such as Chess,
Checkers, Tic-Tac-Toe, Go, and
various two-player games. This algorithm computes the minimax decision for
the current state.
4. In this algorithm, two players play the game; one is called MAX, and the
other is called MIN.
5. Both players compete so that the opponent gets the minimum benefit
while they themselves get the maximum benefit.
6. Both players of the game are opponents of each other, where MAX will
select the maximized value, and MIN will select the minimized value.
8. The minimax algorithm proceeds all the way down to the terminal nodes
of the tree, then backs the values up to the root as the recursion unwinds.
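The recursive descent to terminal nodes and backing-up of values can be sketched on a tiny hand-made game tree: internal nodes are lists of children, leaves are terminal utilities, and MAX and MIN alternate by depth.

```python
# Minimax on a nested-list game tree: MAX takes the maximum of its
# children's values, MIN takes the minimum, leaves return their utility.

def minimax(node, maximizing):
    if not isinstance(node, list):       # terminal node: its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX chooses between two MIN nodes: min(3, 5) = 3 and min(2, 9) = 2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3
```

MAX picks the left branch: its guaranteed payoff there (3) beats the guaranteed payoff on the right (2), which is the minimax decision for this tree.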
# Alpha-beta pruning:
1. Universal generalization:
o Universal generalization is a valid inference rule which states that if
premise P(c) is true for any arbitrary element c in the universe of
discourse, then we can draw the conclusion ∀ x P(x).
o It can be represented as: P(c) / ∀ x P(x)
o This rule can be used if we want to show that every element has a similar
property.
2. Universal instantiation:
o The UI rule states that we can infer any sentence P(c) by substituting a
ground term c (a constant within the domain of x) for the variable in
∀ x p(x), for any object c.
3. Existential instantiation:
o Existential instantiation is also called Existential elimination; it is
a valid inference rule in first order logic.
o This rule states that one can infer P(c) from the formula given in the form of ∃xP(x)
for a new constant symbol c.
o It can be represented as: ∃ x P(x) / P(c)
4. Existential introduction:
o This rule states that if there is some element c in the universe of
discourse which has a property P, then we can infer that there exists
something in the universe which has the property P.
o It can be represented as: P(c) / ∃ x P(x)
# Unification
o Unification is the process of finding substitutions for lifted inference
rules, which can make different logical expressions look identical.
o Unification is a procedure for determining substitutions needed to make
two first order logic expressions match.
o Unification is an important component of all first order logic inference
algorithms. The unification algorithm takes two sentences and returns a
unifier for them, if one exists.
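A simplified unification sketch: variables are written as strings starting with "?", constants are plain strings, and a compound term like Knows(John, x) is the tuple ("Knows", "John", "?x"). The occurs-check is omitted for brevity, so this is a teaching sketch rather than a complete algorithm.

```python
# Unification: find a substitution (dict) that makes two terms identical,
# or return False if no such substitution exists.

def unify(a, b, subst=None):
    if subst is None:
        subst = {}
    if subst is False:
        return False
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return unify_var(a, b, subst)
    if isinstance(b, str) and b.startswith("?"):
        return unify_var(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):           # unify argument by argument
            subst = unify(x, y, subst)
            if subst is False:
                return False
        return subst
    return False                         # mismatched constants or arities

def unify_var(var, value, subst):
    if var in subst:                     # variable already bound: recurse
        return unify(subst[var], value, subst)
    subst = dict(subst)
    subst[var] = value
    return subst

print(unify(("Knows", "John", "?x"), ("Knows", "?y", "Mary")))
# → {'?y': 'John', '?x': 'Mary'}
```

Applying the returned substitution to either expression yields the same sentence, Knows(John, Mary), which is exactly what the inference algorithms above need.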
# Forward Chaining and Backward Chaining
Backward Chaining
o Backward Chaining is a reasoning method used in Artificial Intelligence
(AI), particularly in Expert Systems.
o Backward chaining starts with a list of goals (or a hypothesis) and works
backwards to see if there is data available that will support any of these
goals.
4. Resolve the clauses: Combine the two clauses and eliminate the
complementary literals. The result is a new clause that contains all the
literals of the original clauses.
5. Repeat: Continue applying resolution until a contradiction is found
(an empty clause), which indicates that the original set of clauses is
unsatisfiable.
Example:
Let's consider a simple example:
Given Clauses:
1. P(x) ∨ Q(x) (at least one of P(x) and Q(x) is true)
2. ¬P(a) (P(a) is false)
3. Q(a) (Q(a) is true)
Goal:
Prove that Q(a) holds true using resolution.
Steps:
1. Convert the clauses into CNF:
o Clause 1: P(x) ∨ Q(x)
o Clause 2: ¬P(a)
o Clause 3: Q(a)
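A single resolution step on the ground clauses above, with x instantiated to a, can be sketched as follows: clauses are sets of literal strings, with "~" marking negation.

```python
# Resolution step: if one clause contains a literal and the other its
# complement, combine the clauses and drop the complementary pair.

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literal strings)."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            resolvents.append((c1 - {lit}) | (c2 - {complement}))
    return resolvents

clause1 = {"P(a)", "Q(a)"}        # P(x) ∨ Q(x) with x = a
clause2 = {"~P(a)"}               # ¬P(a)
print(resolve(clause1, clause2))  # → [{'Q(a)'}]
```

Resolving on the complementary pair P(a) / ¬P(a) leaves the unit clause {Q(a)}, which is the derivation the example asks for; repeating such steps until an empty clause appears is how a full refutation proof proceeds.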
2. Environment
What is it?
The environment is the space or world where the agent operates.
This could be a real-world environment or a virtual one.
The environment provides input to the agent through its sensors
and also receives output from the agent through its actuators.
The environment can change over time, requiring the agent to
constantly adapt its actions to achieve its goals.
How does it work?
The environment can be dynamic, meaning that it changes based
on the agent's actions or external factors.
The agent must continuously monitor its environment to adapt to
these changes and make informed decisions.
In a virtual environment, the changes may be programmed or
influenced by other virtual agents.
Example:
For a self-driving car, the environment includes roads, traffic
lights, pedestrians, and other cars.
3. Actuators
What is it?
Actuators are the components that allow the agent to take actions
based on the decisions made by the reasoning or processing unit.
Once the agent has analyzed its environment and determined
what action to take, the actuators help the agent carry out those
actions in the real world or within a virtual environment.
How does it work?
Actuators can take many forms, depending on the type of agent
and the tasks it needs to perform.
They convert the agent’s internal decisions (processed by the
reasoning unit) into physical or virtual actions that affect the
environment.
Example:
A robot might have motors as actuators, allowing it to move
forward, turn, or lift objects. A camera could be used as an
actuator to capture images, or a drone might have rotors that
allow it to fly.
5. Learning (Optional)
What is it?
Some intelligent agents have the ability to learn from their
experiences.
This means that over time, they can improve their performance
by analyzing the results of their actions and adapting their
decision-making process accordingly.
Learning helps the agent adapt to changes in the environment or
unexpected situations.
6. Performance Measure
What is it?
The performance measure is a metric or set of criteria used to
evaluate how well the agent is achieving its goals.
It assesses the effectiveness of the agent's actions and determines
whether it is on track to complete its tasks successfully.
Performance measures guide the agent in adjusting its behavior if
necessary.
How does it work?
The performance measure helps define success for the agent.
For example, it might evaluate how quickly or accurately a robot
performs a task or how effectively a self-driving car navigates
through traffic.
Agent Communication
Agent communication refers to the process by which intelligent agents
exchange information and coordinate actions with other agents within a
system.
In multi-agent systems (MAS), agents may need to communicate with
one another to achieve their individual or collective goals.
2. Requesting Communication
Description: One agent asks another agent for information or an
action. It is an interactive form of communication where the
requesting agent waits for a response or a result from the receiving
agent.
Example: In a delivery system, an agent might request the status of an
order from another agent to check if it’s ready for dispatch.
3. Commanding Communication
Description: A commanding agent sends an instruction or order to
another agent, directing it to perform a specific action or task. The
agent receiving the command is expected to act on it.
Example: A robot might command another robot to pick up an item or
move to a specific location in a warehouse.
4. Negotiating Communication
Description: Agents communicate to come to an agreement on a
particular issue, such as deciding on resource allocation or resolving
conflicting goals. This communication is often a back-and- forth
exchange of offers, counteroffers, or terms.
Example: Two agents in a negotiation system might exchange offers
about the price of a product, aiming to settle on an acceptable price for
both parties.
5. Cooperative Communication
Description: In this type, agents work together to achieve a shared
goal. They share information, resources, or tasks to ensure the
success of a collective objective.
Example: In a cooperative robot team, each robot might share its
observations and actions with others to optimize their collective
performance, like in a search-and-rescue mission where robots share
data about the environment.
Types of Languages in Agent Communication
1. KQML (Knowledge Query and Manipulation Language)
KQML is a communication language designed specifically for
exchanging information between intelligent agents.
o It is a language for the exchange of knowledge, and it
includes a set of performatives (or speech acts) that define
the type of communication an agent is engaged in, such as
querying, requesting, replying, or asserting.
Features:
o It uses a predefined set of communication "acts" like asking
questions (QUERY), requesting actions (REQUEST), and
stating facts (ASSERT).
o KQML is based on a message-passing model, meaning one
agent sends a message to another agent.
o It allows agents to communicate in a formal, structured
manner, ensuring clear understanding between agents.
Example:
o An agent might send a message like: (ask (agent1) :content
(isWeatherSunny?)) which means "Agent 1 is asked if the
weather is sunny."