Unit1 A
INTRODUCTION
• Intelligence: the capacity to learn and solve problems.
More formal definition: intelligence is a property of the mind that encompasses many related
mental abilities, such as the capability to
reason
plan
solve problems
think abstractly
comprehend ideas and language
learn
• Artificial Intelligence: computers with the ability to mimic or duplicate the functions of
the human brain.
Formal definition: AI is a branch of computer science concerned with the study and
creation of computer systems that exhibit some form of intelligence.
Types of AI agents
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Simple Reflex Agent
• Simple reflex agents are the simplest agents. They choose actions on the basis of the
current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• A simple reflex agent does not consider any part of the percept history during its decision
and action process.
• A simple reflex agent works on condition-action rules, which means it maps the current state
directly to an action (a small sketch follows this list).
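As a rough illustration of a condition-action rule table, the Python sketch below uses the classic two-location vacuum world; the locations, rules, and action names are illustrative assumptions, not part of these notes.

```python
# Simple reflex agent: maps the current percept directly to an action
# via condition-action rules; it keeps no percept history.

# Illustrative vacuum-world rules: (location, status) -> action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Choose an action from the current percept only."""
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("A", "Clean")))  # -> Right
```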
Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They have no knowledge of the non-perceptual parts of the current state.
• Their rule set is usually too large to generate and store.
• They are not adaptive to changes in the environment.
Goal-based agents
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
• These agents may have to consider a long sequence of possible actions before deciding whether
the goal is achieved. Considering such different scenarios is called searching and planning,
and it is what makes the agent proactive (a small search sketch follows this list).
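To illustrate the searching behaviour described above, the sketch below runs a breadth-first search over a small, made-up state graph to find an action sequence that reaches the goal; the states, actions, and graph are assumptions used purely for illustration.

```python
from collections import deque

# Made-up state graph: state -> {action: next_state}.
TRANSITIONS = {
    "start":  {"go_left": "room_a", "go_right": "room_b"},
    "room_a": {"open_door": "goal"},
    "room_b": {"go_back": "start"},
}

def goal_based_agent(initial_state, goal_state):
    """Search (BFS) for a sequence of actions that reaches the goal."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal_state:
            return plan                      # the agent commits to this plan
        for action, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                              # goal unreachable

print(goal_based_agent("start", "goal"))     # -> ['go_left', 'open_door']
```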
Utility-based agents
• These agents are similar to goal-based agents but add an extra component of utility
measurement, which provides a measure of success at a given state.
• A utility-based agent acts based not only on its goals but also on the best way to achieve them.
• Utility-based agents are useful when there are multiple possible alternatives and the agent has
to choose the best action among them.
• The utility function maps each state to a real number that describes how efficiently each action
achieves the goals (a small sketch follows this list).
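A minimal sketch of choosing among alternatives with a utility function follows; the candidate routes, state attributes, and weighting are assumed purely for illustration.

```python
# Utility-based agent: when several actions could achieve the goal, pick
# the one whose resulting state has the highest utility (a real number).

# Illustrative outcomes: action -> resulting state.
OUTCOMES = {
    "highway_route": {"time_min": 30, "fuel_l": 4.0},
    "city_route":    {"time_min": 45, "fuel_l": 2.5},
}

def utility(state):
    """Map a state to a real number; higher is better (weights assumed)."""
    return -(state["time_min"] * 1.0 + state["fuel_l"] * 5.0)

def utility_based_agent(actions):
    """Choose the action leading to the state with maximum utility."""
    return max(actions, key=lambda a: utility(OUTCOMES[a]))

print(utility_based_agent(["highway_route", "city_route"]))  # -> highway_route
```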
Learning agents
• A learning agent in AI is an agent that can learn from its past experiences; that is, it has
learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has mainly four conceptual components, which are:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and
informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to
improve it (a skeleton of the four components is sketched below).
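The skeleton below sketches one way the four components could fit together; the class, the method names, and the simple value-update rule are assumptions made for illustration, not a standard design.

```python
import random

class LearningAgent:
    """Illustrative skeleton of the four conceptual components."""

    def __init__(self):
        # Basic knowledge to start with: assumed action-value estimates.
        self.action_values = {"wait": 0.0, "move": 0.0}

    def performance_element(self):
        """Select an external action using what has been learned so far."""
        return max(self.action_values, key=self.action_values.get)

    def critic(self, reward):
        """Compare the outcome against a fixed performance standard (assumed 0.5)."""
        return reward - 0.5

    def learning_element(self, action, feedback):
        """Improve the action-value estimates from the critic's feedback."""
        self.action_values[action] += 0.1 * feedback

    def problem_generator(self):
        """Suggest an exploratory action that yields new, informative experience."""
        return random.choice(list(self.action_values))

agent = LearningAgent()
action = agent.problem_generator()      # explore
feedback = agent.critic(reward=1.0)     # environment returned reward 1.0 (assumed)
agent.learning_element(action, feedback)
print(agent.performance_element())
```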
Structure of an AI agent
• The task of AI is to design an agent program that implements the agent function. The
structure of an intelligent agent is a combination of the architecture and the agent program. It can be
viewed as:
Agent = Architecture + Agent program
• The following are the three main terms involved in the structure of an AI agent:
1. Architecture: the machinery that the AI agent executes on.
2. Agent function: a mapping from a percept to an action.
3. Agent program: an implementation of the agent function (a small sketch follows this list).
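A compact way to picture the three terms is sketched below; the percepts, actions, and function names are illustrative assumptions.

```python
# Agent function: an abstract mapping from a percept to an action.
# Agent program: a concrete implementation of that mapping.
# Architecture: the machinery (here, a plain loop) the program runs on.

def agent_program(percept):
    """Implementation of the agent function: percept -> action."""
    return "brake" if percept == "obstacle_ahead" else "drive"

def architecture(percepts):
    """The 'machinery': feed percepts to the program and execute its actions."""
    for percept in percepts:
        action = agent_program(percept)
        print(f"percept={percept!r} -> action={action!r}")

architecture(["clear_road", "obstacle_ahead", "clear_road"])
```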
PEAS representation
• PEAS is a model on which an AI agent works. When we define an AI agent, we can group its
properties under the PEAS representation model. It is made up of four words:
P PERFORMANCE
E ENVIRONMENT
A ACTUATORS
S SENSORS
Example:
• Self-driving car (a possible PEAS grouping is sketched below)
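One possible PEAS grouping for a self-driving car is sketched below; the specific entries are typical textbook values, assumed here for illustration.

```python
# PEAS description of a self-driving car (illustrative entries).
peas_self_driving_car = {
    "Performance": ["safety", "time", "lawful driving", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "road signs"],
    "Actuators":   ["steering wheel", "accelerator", "brake", "horn", "indicators"],
    "Sensors":     ["cameras", "GPS", "speedometer", "odometer", "sonar"],
}

for element, entries in peas_self_driving_car.items():
    print(f"{element}: {', '.join(entries)}")
```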