
Intelligent Systems and Machine Learning

Algorithms

Gahan A V
Assistant Professor
Department of Electronics and Communication Engineering
Bangalore Institute of Technology

Syllabus


Text Book


"Imagine you have a personal assistant that can learn from your daily habits and preferences. What tasks would you want it to
handle for you“

"Can you think of a movie or book where an intelligent system played a significant role? How did it interact with the humans in
the story?"

"Why do you think self driving cars need machine learning algorithms? What kind of decisions do these cars need to make that
require them to 'learn'?"

"How do you think social media platforms use machine learning to personalize your feed? What are some potential benefits and
drawbacks of this?"


Module 1


 We call ourselves Homo sapiens—man the wise—because our intelligence is so important to us. [Homo sapiens is the species to which all modern human beings belong.]

 For thousands of years, we have tried to understand how we think; that is, how a mere handful of matter can perceive, understand, predict, and manipulate a world far larger and more complicated than itself.

 The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities such as robots, AI systems, or software with advanced problem-solving capabilities.


 AI is one of the newest fields in science and engineering.

 Work started in earnest soon after World War II, and the name itself was coined in
1956.

 AI currently encompasses a huge variety of subfields, ranging from the general (learning and perception) to the specific, such as playing chess, proving mathematical theorems, writing poetry, driving a car on a crowded street, and diagnosing diseases.

 AI is relevant to any intellectual task; it is truly a universal field.


What is AI?
 We have claimed that AI is exciting, but we have not said what it is.


 The definitions on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior.

 The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality.

"Humanly" is more about the emotional and social aspects of human behavior, while "rationally" focuses on logical and analytical thinking.

Historically, all four approaches to AI have been followed, each by different people
with different methods.


Acting humanly: The Turing Test approach

 The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.

• A test for determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human.

• If a human evaluator cannot reliably distinguish between responses from a machine and a human, the machine is considered to have passed the test.


For now, we note that programming a computer to pass a rigorously applied test provides plenty to work on.

The computer would need to possess the following capabilities:

• Natural language processing, to communicate successfully in English;
• Knowledge representation, to store what it knows or hears;
• Automated reasoning, to use the stored information to answer questions and draw new conclusions;
• Machine learning, to adapt to new circumstances and to detect and extrapolate patterns.


Thinking humanly: The cognitive modeling approach

The cognitive modeling approach:

• involves creating models that simulate human thought processes.

• attempts to replicate how humans think, reason, and solve problems by building computational models that mimic the mental operations observed in human cognition.

• aims to understand human thought processes and implement them in machines, often using insights from psychology and neuroscience.


To understand the workings of the human mind, we can approach it in three ways:

1. Introspection: reflecting on and analyzing our own thoughts as they occur.

2. Psychological experiments: observing and studying human behavior in controlled situations.

3. Brain imaging: using technologies like MRI or EEG to visualize brain activity while the mind is at work.


1. Develop a precise theory of the mind.
2. Express the theory as a computer program.
3. Compare the program's input-output behavior with human behavior.
4. If they match, it suggests the program's mechanisms may reflect actual human cognitive processes.

A cognitive process refers to the mental activities involved in acquiring, processing, and storing information.


Thinking rationally: The “laws of thought” approach

 Thinking rationally: The "laws of thought" approach means using clear, logical rules to
guide thinking and decision making.

 It’s about following strict principles of logic, like a set of rules, to ensure that
conclusions make sense and are correct.

 The aim is to create systems or models that think in a perfectly logical way, similar to
how an ideal rational person would think.


Acting rationally: The rational agent approach

 Involves designing systems or agents that make decisions and take actions in a way that
maximizes their chances of achieving their goals.

 This approach focuses on creating agents that use available information and logical
reasoning to act in the most effective and efficient manner.

 In simple terms, a rational agent is one that does what is most likely to lead to the best
outcome, based on what it knows and its goals.

 For example, a navigation app that chooses the fastest route to your destination,
considering current traffic conditions, is using a rational agent approach.


An agent is just something that acts.


THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

• A brief history of the disciplines that contributed ideas, viewpoints, and techniques to AI.

 Philosophy


 Mathematics


 Economics


 Neuroscience


 Psychology


 Computer engineering

 Control theory and cybernetics


 Linguistics

The disciplines, their focus, and their key contributions:

• Philosophy (the nature of the mind, knowledge, and action): Aristotle's logical rules, Descartes' mind-body dualism, Locke's empiricism, Carnap's computational theory of mind.

• Mathematics (logic, computation, and probability): Boole's propositional logic, Frege's first-order logic, Turing's computability theory, Cook and Karp's NP-completeness, Bayes' rule.

• Economics (decision-making, game theory, and operations research): Adam Smith's economic theory, Walras' utility theory, von Neumann and Morgenstern's game theory, Bellman's Markov decision processes.

• Neuroscience (brain structure, function, and comparison to computers): Broca's localized brain functions, Golgi and Cajal's neuronal structures, brain imaging and single-cell recording.

• Psychology (human and animal thought and behavior): Helmholtz's experimental psychology, Wundt's introspection, Watson's behaviorism, Craik's cognitive modeling.

• Computer engineering (computer development and its applications): Turing's codebreaking, Zuse's Z-3 computer, Atanasoff and Berry's ABC, Mauchly and Eckert's ENIAC.

• Control theory and cybernetics (self-controlling machines and control systems): Ktesibios' water clock, Watt's steam-engine governor, Wiener's control theory, Ashby's homeostatic devices.

• Linguistics (language and thought): Skinner's behaviorism, Chomsky's generative grammar, computational linguistics.


THE HISTORY OF ARTIFICIAL INTELLIGENCE


The gestation of artificial intelligence (1943–1955)


The birth of artificial intelligence (1956)


Early enthusiasm, great expectations (1952–1969)


A dose of reality (1966–1973)


Knowledge-based systems: The key to power? (1969–1979)

AI becomes an industry (1980–present)


The return of neural networks (1986–present)


AI adopts the scientific method (1987–present)


The emergence of intelligent agents (1995–present)

The availability of very large data sets (2001–present)


Periods and their key achievements:

• Early Years (1943-1955): first neural network models based on brain cells; early programs that could simulate simple thinking; foundation for future AI research.

• Birth of AI (1956): an important meeting to discuss AI; a program that could solve mathematical problems; marks the official beginning of AI as a field.

• Early Enthusiasm (1952-1969): programs that could solve problems like playing games or solving math problems; creation of expert systems for specific tasks; development of Lisp, a new programming language for AI.

• Challenges (1966-1973): early AI programs had limitations; shift of focus to programs that used specific knowledge; recognition of the need for more practical approaches.

• Knowledge-Based Systems (1969-1979): programs that used specific knowledge, like medical expertise, to solve problems; development of expert systems for various fields; recognition of the importance of domain-specific knowledge.

• Commercialization (1980-1988): AI programs started to be used in businesses for tasks like configuring computers; growth of the AI industry; increased interest in practical applications of AI.

• Neural Networks (1986-present): a new type of AI that learns from data, like deep learning; breakthroughs in areas like image recognition and natural language processing; revolutionized many industries.

• Scientific Methods (1987-present): AI research becomes more rigorous and scientific; use of experiments and analysis to improve programs; increased focus on evidence-based approaches.

• Intelligent Agents (1995-present): programs that can act and think independently; applications in areas like search engines, recommendations, and automation; increased autonomy and decision-making capabilities.

• Data-Driven AI (2001-present): AI programs that learn from large amounts of data; advancements in machine learning and data analysis; increased ability to handle complex problems and tasks.


• Chapter 1 identified the concept of rational agents as central to our approach to artificial intelligence.

• In this chapter, we make this notion more concrete.

• We begin by examining agents, environments, and the coupling between them.

• How well an agent can behave depends on the nature of the environment; some environments are more
difficult than others.

• We give a crude categorization of environments and show how properties of an environment influence
the design of suitable agents for that environment

A rational agent in artificial intelligence is an entity that acts to achieve the best possible outcome or, when there is uncertainty, the best expected outcome given the information it has.


AGENTS AND ENVIRONMENTS

1. Agent: Anything that can perceive and act on its environment.


2. Environment: The external surroundings the agent interacts with.
3. Sensors: Devices that allow the agent to receive information from the environment.
Human agent: Eyes, ears, etc.
Robotic agent: Cameras, infrared sensors.
Software agent: Keystrokes, files, network packets.
4. Actuators: Tools that allow the agent to act on its environment.
Human agent: Hands, legs, vocal tract, etc.
Robotic agent: Motors.
Software agent: Displays, file writing, sending network packets.


1. Percept: the input or information an agent perceives at any given moment.

2. Percept sequence: the entire history of everything an agent has perceived up to a certain point.

3. Action choice: an agent makes decisions based on everything it has perceived so far (the percept sequence), not on things it hasn't encountered.

4. Agent function: a mathematical way to describe an agent's behavior, mapping each possible percept sequence to a corresponding action.

In short, an agent's behavior is determined by its history of perceptions, and this is defined by its agent function.


1. Agent function table: describes an agent's behavior by mapping percept sequences to actions; in practice, this table could be very large or infinite.

2. Building the table: in theory, we can construct this table by testing all possible percept sequences and recording the agent's responses.

3. External characterization: the agent function provides an external, abstract description of the agent's behavior.


The vacuum-cleaner world has two locations: squares A and B.
The vacuum agent perceives its current location and whether the square has dirt.
The agent has four possible actions: move left, move right, suck up dirt, or do nothing.
A simple agent function: if the square is dirty, suck up the dirt; if it is clean, move to the other square.
The agent function can be partially represented in a table.
An agent program can be implemented to follow this simple agent function.
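
As a concrete illustration, here is a minimal Python sketch of that agent function; the square names "A" and "B" and the action strings are the conventional ones for this example, and only the if-dirty-suck-else-move rule comes from the text.

```python
# A sketch of the simple vacuum-world agent function described above.
# Percepts are (location, status) pairs; names are illustrative.

def vacuum_agent(location, status):
    """Map the current percept to an action."""
    if status == "Dirty":
        return "Suck"       # dirty square: clean it
    elif location == "A":
        return "Right"      # clean square A: move to B
    else:
        return "Left"       # clean square B: move to A

# A few rows of the (partial) agent-function table:
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Clean")]:
    print(percept, "->", vacuum_agent(*percept))
```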


- Various vacuum-world agents can be defined by adjusting their responses in the table.

- The key challenge is determining what makes an agent good, bad, intelligent, or not.

- The agent concept is a tool for analysis, not a strict classification of systems.

AI focuses on artifacts with significant computational resources and complex decision-making environments.


GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY

- A rational agent is one that always does the right thing; conceptually, every entry in its agent-function table is filled out correctly.

- Doing the right thing means the agent’s actions lead to desirable outcomes.

- The correctness of an agent’s actions is determined by the consequences of its behavior in the
environment.

- The agent’s actions create a sequence of environment states.

- If the resulting sequence of states is desirable, the agent has performed well.

- Performance measure is used to evaluate how desirable the sequence of environment states is,
helping assess the agent’s performance.


- Success is defined in terms of environment states, not the agent's internal state or
opinion.

- If success were based on the agent’s own opinion, it could wrongly believe it
performed perfectly.

- Self-deception could lead to the agent thinking it is perfectly rational, even when it
isn’t.

- Humans often exhibit this behavior, like rationalizing failure by convincing themselves they didn't want the outcome (e.g., "sour grapes").


- Performance measures vary depending on the task and agent; they need to be
designed for the specific circumstances.

- For the vacuum-cleaner agent, measuring performance by the amount of dirt cleaned
could lead to counterproductive behavior, like repeatedly cleaning and dumping dirt.

- A better measure would focus on the cleanliness of the floor, such as awarding points
for each clean square and possibly factoring in penalties for energy use and noise.

- The performance measure should reflect what is actually desired in the environment,
not just how one thinks the agent should act.


A rational agent should choose an action that it expects will best improve
its performance measure, based on the current percept sequence and any
built-in knowledge it has.


To determine if the vacuum-cleaner agent is rational, we need to:

- Define the performance measure: One point for each clean square per time step over 1000 time steps.

- Know the environment: The layout is known, but dirt distribution and the agent's starting position are
unknown. Cleaning a square and moving left or right are the actions available.

- Confirm the agent's capabilities: It can perceive its location and whether there's dirt, and it can move or
clean accordingly.

The agent’s rationality depends on whether it effectively maximizes the performance measure under these
conditions.
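
A quick sketch of that performance measure in Python; the environment simulation itself is omitted, and only the scoring rule (one point per clean square per time step over 1000 steps) comes from the text.

```python
# Scoring-rule sketch: award one point for each clean square at each time
# step, summed over 1000 steps. clean_counts is assumed to come from some
# environment simulation not shown here.

def performance(clean_counts, steps=1000):
    """clean_counts[t] is the number of clean squares at time step t."""
    return sum(clean_counts[:steps])

# Two squares, both clean for all 1000 steps, gives the maximum score:
print(performance([2] * 1000))  # -> 2000
```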


Under the given circumstances, the agent is rational because its performance is as high as or higher
than any other agent’s. However, the same agent could be irrational in different scenarios:

- If all the dirt is cleaned, the agent will unnecessarily move back and forth, which could lead to poor
performance if there is a penalty for each move.

- An improved agent would stop moving once it ensures all squares are clean or would occasionally
check and clean if dirt can reappear.

- If the environment's layout is unknown, the agent should explore rather than limit itself to specific
squares.

These considerations highlight that rationality depends on the performance measure and the specific
conditions of the environment.


Omniscience, learning, and autonomy


•Omniscience: This means knowing everything about the environment and the outcomes of all possible actions. In
practice, no real agent has omniscience; they must make decisions based on incomplete information and their
current percepts.

•Learning: Learning allows an agent to improve its performance over time by gaining new knowledge or updating
its strategies based on experience. A learning agent can adapt to changes in the environment and refine its
actions to achieve better results.

•Autonomy: Autonomy refers to an agent’s ability to make decisions and perform actions without human
intervention. An autonomous agent operates independently, relying on its sensors and built-in knowledge to make
decisions and achieve its goals.

In summary, while omniscience is ideal but unrealistic, learning and autonomy are crucial for agents to effectively
interact with and adapt to their environments.


- Rationality vs. Omniscience: Rationality involves making the best decisions based on available
information, while omniscience means knowing all outcomes in advance.

- Example: Crossing the street to meet a friend is rational, even if unforeseen events (like a cargo
door falling) lead to bad outcomes. It doesn't mean the action was irrational.

- Rationality vs. Perfection: Rationality aims to maximize expected performance given what’s
known, whereas perfection would require knowing and achieving the best possible outcome after
the fact.

- Design Implications: Expecting agents to always perform perfectly based on outcomes is unrealistic
because it requires knowing future events, which is impossible without advanced tools like crystal
balls or time machines.


- Rationality depends on the percepts available and does not require knowing all future
outcomes.

- Looking Before Crossing: An agent must gather information (like looking both ways) to
avoid dangerous situations. Without this, the agent's decision may not be rational.

- Information Gathering: Rational agents take actions to improve future percepts, such as
checking their environment or exploring unknown areas to make better decisions.


- Learning Requirement: A rational agent must learn from its experiences and adapt its
behavior accordingly. It may start with some prior knowledge but should update this
knowledge as it gains new experiences.

- Complete Knowledge: In cases where the environment is fully known from the start,
the agent does not need to perceive or learn; it can act correctly based on the
complete information.

- Fragility of Fixed Behaviors: Agents with rigid behaviors based on fixed assumptions
can be fragile.


- Dung Beetle: The beetle keeps trying to plug its nest with dung even if the dung ball it
was carrying gets lost. It doesn’t adjust its behaviour when things change, which means it
fails to adapt.

- Sphex Wasp: The wasp has a set routine for dealing with its prey (a caterpillar) but
doesn’t change its actions if the caterpillar is moved. It keeps following its plan without
adapting, showing it doesn’t learn from changes in its environment.


- Autonomy: An agent that depends too much on its designer’s prior knowledge lacks autonomy. A
truly rational agent should be able to learn from its experiences and adapt, rather than just relying
on initial instructions.

- Example: A vacuum-cleaning agent that learns to predict where and when dirt will appear will
perform better than one that doesn’t learn.

- Initial Knowledge: It’s practical to give an agent some initial knowledge or guidance when it starts,
as it may otherwise act randomly due to lack of experience.

- Learning and Independence: Over time, as the agent gains experience, it can become more
independent and rely less on prior knowledge. This makes it capable of succeeding in a wide range of
environments.


THE NATURE OF ENVIRONMENTS

• A task environment is essentially the "problem" that a rational agent is meant to solve.

• To build a rational agent, we need to first define this environment.

• Different task environments require different agent designs, so it's important to specify the environment
correctly.

• By understanding the nature of the task environment, we can determine the most suitable approach for
designing an agent to operate effectively within it.


Specifying the task environment


• In designing a rational agent, like the vacuum-cleaner agent, we need to specify four key elements: Performance measure (what the agent aims to achieve), Environment (where it operates), Actuators (what it uses to take actions), and Sensors (how it perceives the environment).

• This is summarized by the acronym PEAS. The first step in designing an agent is fully defining its task
environment.

• While the vacuum cleaner is a simple example, a more complex case like an automated taxi driver
highlights how challenging and unpredictable real-world environments can be for agents.


• For an automated taxi driver, the performance measure would include reaching the correct destination,
minimizing fuel consumption, trip time, and costs, obeying traffic laws, maximizing safety, ensuring
passenger comfort, and possibly maximizing profits. Some of these goals might conflict, so trade-offs will be
necessary.

• The driving environment could involve various types of roads, traffic, pedestrians, animals, road obstacles,
and weather conditions. The environment might vary depending on the location, such as snowy areas
versus regions with little snow, and whether the taxi operates on the right or left side of the road.

• The actuators would include controls for the engine, steering, and braking, similar to those of a human
driver. Additionally, it would need displays or voice outputs to communicate with passengers and possibly
other vehicles.

• The sensors would include cameras to view the road, infrared or sonar sensors to detect distances to other
objects, a speedometer, an accelerometer, and sensors to monitor the car's mechanical condition. It might
also use a GPS to navigate and a keyboard or microphone for passengers to input destinations.
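
The four bullets above can be collected into a compact PEAS record; the sketch below abbreviates the prose into illustrative Python data.

```python
# PEAS description of the automated taxi, condensed from the bullets
# above (entries abbreviate the prose; not an exhaustive specification).

taxi_peas = {
    "Performance measure": ["correct destination", "minimal fuel/time/cost",
                            "obey traffic laws", "safety",
                            "passenger comfort", "profits"],
    "Environment": ["roads", "traffic", "pedestrians", "animals",
                    "obstacles", "weather"],
    "Actuators": ["engine control", "steering", "braking",
                  "display/voice output"],
    "Sensors": ["cameras", "infrared/sonar", "speedometer", "accelerometer",
                "engine sensors", "GPS", "keyboard/microphone"],
}

for element, items in taxi_peas.items():
    print(f"{element}: {', '.join(items)}")
```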


• In Figure 2.5, different agent types are shown with their basic PEAS elements. Surprisingly, some
agents operate in artificial environments, like software that responds to keyboard input and
screen output.

• The key point isn't whether the environment is "real" or "artificial," but rather how complex the
relationship is between the agent's behaviour, its perceptions, and the performance measure.

• Some "real" environments are quite simple, such as a robot inspecting parts on a conveyor belt
with fixed conditions. On the other hand, software agents, or softbots, can operate in highly
complex domains like the Internet.

• For instance, a softbot managing a website must process language, learn user preferences, and
adapt to changing circumstances, such as fluctuating connections or new sources. The Internet's
complexity, with both human and artificial agents, can rival that of the physical world.


Properties of task environments

• The variety of task environments in AI is immense, but we can categorize them based on a few key
dimensions. These dimensions help guide the design of appropriate agents and determine which
techniques are most suitable for their implementation.

• First, we list these dimensions, and then we examine different task environments to clarify the
concepts. While the definitions provided here are informal, more precise explanations and examples
will be given in later chapters.


Fully observable vs. partially observable

• In a fully observable environment, the agent's sensors provide complete information about the
environment's state at every point, making it easier for the agent to act without needing to track or guess
missing details.

• This is ideal because the agent doesn't need to maintain internal memory about the world.

• On the other hand, a partially observable environment provides incomplete or noisy data, requiring the
agent to infer or keep track of missing information.

• For instance, a vacuum cleaner agent with a limited dirt sensor can't detect dirt in other rooms, and an
automated taxi can't predict what other drivers will do.

• If the environment is completely unobservable (no sensors at all), the agent may still achieve its goals
under certain conditions, even though it seems challenging.


Single agent vs. multiagent


• In a single-agent environment, the agent operates alone, such as solving a crossword puzzle. In a
multiagent environment, other entities (agents) interact, like in chess.

• However, it's not always obvious when something should be treated as an agent. For example, should a
taxi driver view another vehicle as just an object obeying physical laws or as an agent with its own goals?

• The distinction depends on whether the other entity's behavior is best understood as trying to maximize
its own performance measure, which may depend on the first agent's actions.

• In chess, the opponent aims to minimize the first player's success, making it a competitive multiagent
environment.

• In contrast, in taxi driving, agents cooperate to avoid collisions, making it partially cooperative, though
there are competitive elements like fighting for parking spaces. In multiagent environments,
communication and even random behavior can be important strategies, especially in competitive
settings.


Deterministic vs. stochastic

• In a deterministic environment, the next state is fully determined by the current state and the agent's
action, with no uncertainty involved.

• In contrast, a stochastic environment has some level of unpredictability, meaning that the same action
might lead to different outcomes. In fully observable deterministic environments, the agent doesn't need
to worry about uncertainty.

• However, if the environment is only partially observable, it may seem stochastic because the agent lacks
full information.

• Most real-world environments, like taxi driving, are treated as stochastic because it's impossible to predict
everything, such as traffic behavior or unexpected events like tire blowouts.

• An uncertain environment is either not fully observable or not deterministic. "Stochastic" implies that
uncertainty is measured with probabilities, while "nondeterministic" means there are multiple possible
outcomes, but no probabilities are specified. In such cases, the agent must succeed across all possible
outcomes.

Episodic vs. sequential

• In an episodic environment, the agent's actions are divided into separate, independent episodes.

• In each episode, the agent receives a percept and performs an action, and the outcome of one episode
does not affect future ones.

• For example, an agent inspecting defective parts on an assembly line makes decisions based only on the
current part, without considering past actions or affecting future parts.

• In a sequential environment, actions have long-term consequences. Decisions made now can influence
future decisions, as in chess or taxi driving, where short-term moves can impact the entire game or driving
trip.

• Episodic environments are simpler because the agent doesn’t need to plan ahead, unlike in sequential
environments.


Static vs. dynamic

• In a static environment, nothing changes while the agent is making a decision, allowing it to focus
without worrying about time or external changes.

• Dynamic environments, however, change continuously, requiring the agent to make decisions quickly; if
it takes too long, that inaction is effectively a decision to do nothing.

• If the environment itself doesn’t change but the agent’s performance score does over time, it is termed semidynamic.

• For example, taxi driving is dynamic because other vehicles are constantly moving. Chess played with a timer is semidynamic, as the state of the game doesn’t change, but the clock affects decision-making.

• In contrast, crossword puzzles are static, as they remain unchanged while the agent works on them.


Discrete vs. continuous

• The discrete vs. continuous distinction relates to the state of the environment, how time is managed,
and the agent's percepts and actions.

• In a discrete environment, like chess, there are a finite number of distinct states and a limited set of
percepts and actions.

• Conversely, taxi driving is a continuous environment because both the state (e.g., speed and location)
and time are not limited to specific values; they vary smoothly over a range.

• Actions in this context, such as steering angles, are also continuous. While inputs from digital cameras
are technically discrete, they often represent continuously varying intensities and locations, so they are
treated as continuous for practical purposes.


Known vs. unknown

• The known vs. unknown distinction pertains to the agent's knowledge about the environment's "laws of
physics."

• In a known environment, the agent understands the outcomes of its actions (or their probabilities if
stochastic). In an unknown environment, the agent must learn how it operates to make informed
decisions.

• This distinction is different from the fully vs. partially observable classifications. A known environment
can still be partially observable, like in solitaire card games where the rules are clear, but not all cards
are visible.

• Conversely, an unknown environment can be fully observable; for instance, in a new video game, the
screen may display the entire game state, but the agent doesn't know how the controls work until it
experiments.


• The most challenging environment for an agent is one that is partially observable, multiagent, stochastic,
sequential, dynamic, continuous, and unknown.

• Taxi driving exemplifies many of these challenges, though generally, the driver has knowledge of the
environment. However, driving a rented car in a new country introduces unfamiliar geography and traffic
laws, making it more complex.

• Figure 2.6 categorizes various familiar environments, showing that some classifications can be nuanced.
For instance, a part-picking robot is usually considered episodic, as it assesses each part independently.
However, if it encounters a batch of defective parts, it should adapt its behavior based on this new
information.

 The known vs. unknown distinction isn't included because it relates more to the agent's knowledge than
the environment itself. Games like chess and poker can have clear rules, but it’s still interesting to think
about how an agent could learn to play these games without prior knowledge of the rules.
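
As a small worked example, the classifications discussed in the sections above can be recorded for the two running task environments; the sketch below follows the standard entries of the textbook's Figure 2.6, and the property names are illustrative.

```python
# Environment properties for two running examples, following the
# classifications discussed above (a sketch, not the full figure).

environments = {
    "crossword puzzle": {
        "observable": "fully", "agents": "single", "deterministic": True,
        "episodic": False, "static": True, "discrete": True,
    },
    "taxi driving": {
        "observable": "partially", "agents": "multi", "deterministic": False,
        "episodic": False, "static": False, "discrete": False,
    },
}

for name, props in environments.items():
    print(name, props)
```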


THE STRUCTURE OF AGENTS


• So far, we've focused on the behavior of agents—specifically, how they act based on their percepts.

• Now, we need to discuss the internal workings of agents. The goal of AI is to create an agent program that
implements the agent function, which maps percepts to actions.

• This program operates on a computing device, referred to as the architecture.

• The agent is defined as the combination of the architecture and the program: agent = architecture +
program.

• The chosen program must be suitable for the architecture; for example, if the program suggests actions
like "walk," the architecture should include legs.

• The architecture could be anything from a standard PC to a robotic car equipped with various sensors and
computers.


• In general, the architecture processes the sensor inputs, runs the agent program, and sends the generated action
choices to the actuators. Most of the book focuses on designing agent programs, while later chapters will address
the specifics of sensors and actuators.


Agent programs

• The agent programs we design in this book follow a standard structure: they take the current percept
from the sensors as input and return an action for the actuators.

• It’s important to distinguish between the agent program, which only uses the current percept, and the
agent function, which considers the entire history of percepts.

• If the agent needs to make decisions based on the full sequence of percepts, it must store that history.

• We describe these agent programs using a simple pseudocode language, with real implementations
available in an online repository.


• For instance, one example program keeps track of the percept sequence and uses it to look up actions in a table. This
table represents the agent function, mapping every possible percept sequence to the appropriate action. To create a
rational agent, designers must construct a comprehensive table that accounts for every potential percept sequence.


This pseudocode outlines a simple table-driven agent:

1. Function Definition: The function `TABLE-DRIVEN-AGENT` takes a `percept` as input and returns an action.

2. Persistent Variables:
- `percepts`: A sequence that starts empty and will store the history of percepts received.
- `table`: A predefined table of actions that is indexed by percept sequences.

3. Action Steps:
- Append the current `percept` to the end of the `percepts` sequence.
- Look up the appropriate `action` in the `table` using the current `percepts` as the key.
- Return the chosen `action`.
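
A direct Python rendering of this pseudocode might look like the sketch below; the closure holding the percept history and the tiny vacuum-world table are illustrative implementation choices.

```python
# TABLE-DRIVEN-AGENT sketch: the percept history is the persistent
# variable, and actions are looked up by the whole percept sequence.

def make_table_driven_agent(table):
    percepts = []                      # persistent: history of percepts

    def agent(percept):
        percepts.append(percept)       # record the newest percept
        return table[tuple(percepts)]  # look up action by full sequence

    return agent

# A tiny illustrative table for the two-square vacuum world; keys are
# whole percept sequences of (location, status) pairs.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right
```

Even for this toy world, a complete table must cover every possible percept sequence, which is why it could be very large or infinite; the agent programs that follow are more compact.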

In the remainder of this section, we outline four basic kinds of agent programs that embody the principles
underlying almost all intelligent systems:


Simple Reflex Agents: A Beginner's Guide

• Imagine a robot vacuum cleaner. It's a simple machine with a single task: to clean up dirt.

• How does it know where to go? It doesn't have a map or a complex brain.

• It simply follows a basic rule: if there's dirt, suck it up.

• This is a simple reflex agent in action.

• It's a type of artificial intelligence that makes decisions based on the current situation without considering past experiences.

• Think of it as a very basic "if-then" statement.

* If there's dirt, then suck it up.

Key Points:

* Relies on the present: Simple reflex agents only look at what's happening right now.
* No memory: They don't remember what they've done in the past.
* Rule-based: Their actions are guided by simple rules.

Example:

* Situation: The robot vacuum is on a dirty floor.


* Action: The robot sucks up the dirt.

Limitations:

While simple reflex agents are useful for basic tasks, they have limitations:

• Can't handle complex situations: They struggle when the environment changes or when there are multiple factors to
consider.

* No planning: They don't plan ahead or think about future consequences.

In conclusion, simple reflex agents are like a robot vacuum cleaner: they're good at their specific task but don't have the
intelligence to handle more complex situations.
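
The "if-then" behavior above can be sketched as a list of condition-action rules; the rule encoding below is one illustrative choice, not the only way to write such an agent.

```python
# A simple reflex agent as condition-action rules: the agent looks only
# at the current percept and fires the first rule that matches.

RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),   # if there's dirt, suck it up
    (lambda p: p["location"] == "A",   "Right"),  # square A is clean: move on
    (lambda p: p["location"] == "B",   "Left"),   # square B is clean: move on
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # -> Suck
```

Note that nothing is stored between calls, which is exactly the "no memory" point made above.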


Model-based reflex agents

Imagine a robot that can't see everything around it. How can it make smart decisions?

Model-Based Agents are like robots that have a mental map. This map helps them figure out what's happening even
when they can't see everything.

• Internal State: Think of this as the robot's mental map. It keeps track of what the robot knows about the world, even
if it can't see it all.

• Model of the World: This is like the robot's understanding of how the world works. It knows things like how fast
cars usually move or how traffic lights work.

* Updating the Map: The robot uses its sensors (like cameras) to see what's happening now. It then combines this
information with its mental map (internal state) and its understanding of the world (model) to update its map.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


BANGALORE INSTITUTE OF TECHNOLOGY

Example:

* Robot's Map: The robot knows it's on a busy street.


* Sensor Data: The robot sees a red light ahead.
* Model of the World: The robot knows that red lights mean stop.
* Updated Map: The robot now knows it should stop.

Why is this important?

* Making Smart Decisions: By having a mental map, the robot can make better decisions, even in situations where it
doesn't have all the information.
* Handling Uncertainty: The world can be unpredictable. Model-based agents can handle this by using their mental map
to make educated guesses.
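
The red-light example can be sketched as code; the internal state, the trivial "a light keeps its last observed color" model, and the percept format are all illustrative assumptions.

```python
# A model-based reflex agent sketch: the agent keeps an internal state
# (its "mental map") and updates it from percepts plus a world model.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"light": "unknown"}   # internal state (mental map)

    def update_state(self, percept):
        # Model: a traffic light keeps its last observed color until a
        # new observation says otherwise.
        if "light" in percept:
            self.state["light"] = percept["light"]

    def choose_action(self):
        # Red lights mean stop, per the agent's model of the world.
        return "Stop" if self.state["light"] == "red" else "Drive"

    def step(self, percept):
        self.update_state(percept)
        return self.choose_action()

agent = ModelBasedAgent()
print(agent.step({"light": "red"}))  # -> Stop
print(agent.step({}))                # light unseen now, but remembered: Stop
```

The second call shows the point of the internal state: even with an empty percept, the agent still acts sensibly on what it remembers.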

Goal-based agents

Imagine a robot that wants to get to a specific place. It's not enough for it to just know where it is now and what's around
it. It also needs to know where it wants to go.

Goal-Based Agents are like robots that have a destination in mind. They use their knowledge of the world and their goal
to figure out the best way to get there.

• Goal: This is the destination the robot wants to reach.

• Model of the World: This is the robot's understanding of how the world works, similar to what we discussed in the
previous section.

* Decision Making: The robot uses its goal and its understanding of the world to plan a series of actions that will lead it
to its destination.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


BANGALORE INSTITUTE OF TECHNOLOGY

Example:

* Goal: The robot wants to get to the kitchen.


* Model of the World: The robot knows there's a hallway that leads to the kitchen.
* Decision: The robot decides to go down the hallway.

Why is this important?

* Flexibility: Goal-based agents can adapt to changing situations. If the robot's path is blocked, it can find a new way.
* Planning: These agents can plan ahead, thinking about the consequences of their actions.
* Complex Tasks: They can handle more complex tasks that require multiple steps to achieve a goal.

In simple terms, Goal-Based Agents are like robots that have a purpose and can plan their actions to achieve that
purpose. They're more flexible and capable than simple reflex agents that just react to their surroundings.
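
One simple way to realize "plan a series of actions" is a search over the agent's model of the world; the sketch below uses breadth-first search over a hypothetical map of rooms matching the hallway example.

```python
from collections import deque

# A goal-based agent sketch: search the world model for an action
# sequence that reaches the goal. The room map is illustrative.

MOVES = {  # location -> {action: resulting location}
    "bedroom":     {"go_hallway": "hallway"},
    "living_room": {"go_hallway": "hallway"},
    "hallway":     {"go_kitchen": "kitchen", "go_bedroom": "bedroom"},
}

def plan(start, goal):
    frontier = deque([(start, [])])     # (location, actions taken so far)
    visited = {start}
    while frontier:
        loc, actions = frontier.popleft()
        if loc == goal:
            return actions              # a shortest action sequence
        for action, nxt in MOVES.get(loc, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None                         # goal unreachable from start

print(plan("bedroom", "kitchen"))  # -> ['go_hallway', 'go_kitchen']
```

If a door were removed from MOVES, the search would simply return a different route (or None), which is the flexibility described above.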

Utility-based agents

Imagine a robot that wants to get from point A to point B.

It's not enough for it to just know the shortest path.

It also needs to consider factors like safety, traffic, and fuel efficiency.

Utility-Based Agents are like robots that have a way to measure how good or bad different outcomes are.

They use this measurement (called utility) to make decisions that maximize their overall happiness or satisfaction.

• Utility: This is a measure of how good or bad a situation is for the robot. It takes into account factors like safety, speed,
cost, and other things that the robot values.

* Decision Making: The robot chooses actions that it believes will lead to the highest utility. It considers the possible
outcomes and their associated utilities.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


BANGALORE INSTITUTE OF TECHNOLOGY
Example:

* Robot's Goal: Get to point B quickly and safely.


* Utility: The robot values speed and safety.
* Decision: The robot chooses a route that is both fast and has minimal traffic.

Why is this important?

• Complex Decisions: Utility-based agents can make complex decisions that involve multiple factors.

• Trade-offs: They can weigh the pros and cons of different options and make decisions that balance competing goals.

* Uncertainty: They can handle uncertainty by considering the probabilities of different outcomes and their associated
utilities.

In simple terms, Utility-Based Agents are like robots that can make smart decisions by considering the overall value of
different outcomes. They're able to weigh factors like speed, safety, and cost to make the best choices.
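
A common (though not the only) way to encode such trade-offs is a weighted sum of the factors the agent values; the routes, weights, and scores below are made-up numbers for illustration.

```python
# A utility-based agent sketch: score each option by a weighted sum of
# valued factors and pick the option with the highest utility.

ROUTES = {
    "highway":  {"speed": 0.9, "safety": 0.6},   # fast but busier
    "backroad": {"speed": 0.5, "safety": 0.9},   # slower but calmer
}
WEIGHTS = {"speed": 0.7, "safety": 0.3}          # this robot values speed more

def utility(route):
    return sum(WEIGHTS[f] * ROUTES[route][f] for f in WEIGHTS)

best = max(ROUTES, key=utility)
print(best, round(utility(best), 2))  # -> highway 0.81
```

Changing the weights changes the decision, which is exactly the trade-off behavior described above; a goal-based agent, by contrast, only knows "reached" or "not reached."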

Learning agents

Imagine a robot that can learn from its mistakes. Unlike a robot that follows fixed rules, a learning agent can improve
over time.

Learning Agents are like robots that can get smarter. They have four main parts:

1. Performance Element: This is the part that makes decisions and takes actions.

2. Learning Element: This is the part that learns from experience.

3. Critic: This is the part that tells the learning element how well the agent is doing.

4. Problem Generator: This part suggests new actions to try.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


BANGALORE INSTITUTE OF TECHNOLOGY

How does it work?

1. The Performance Element makes a decision.


2. The Critic evaluates the decision and tells the Learning Element if it was good or bad.
3. The Learning Element uses this feedback to improve the Performance Element.
4. The Problem Generator suggests new actions to try.

Why is this important?

* Adaptability: Learning agents can adapt to new situations and improve their performance over time.
* Flexibility: They can handle tasks that are difficult to define with fixed rules.
* Real-World Applications: Learning agents are used in many areas, such as self-driving cars, medical diagnosis, and
language translation.

In simple terms, Learning Agents are robots that can learn from their experiences and become better at what they do.
They're a key part of creating intelligent systems that can adapt to the real world.
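
The four-part loop can be sketched as follows; the numeric critic and the simple averaging update are illustrative stand-ins for real learning algorithms.

```python
import random

# A learning-agent sketch: the performance element picks actions, the
# critic scores outcomes, the learning element updates the performance
# element, and the problem generator proposes new things to try.

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # performance element's data

    def act(self):                                # 1. performance element
        return max(self.values, key=self.values.get)

    def critic(self, outcome):                    # 2. critic scores outcome
        return 1.0 if outcome == "success" else -1.0

    def learn(self, action, outcome):             # 3. learning element
        feedback = self.critic(outcome)
        self.values[action] += 0.5 * (feedback - self.values[action])

    def explore(self):                            # 4. problem generator
        return random.choice(list(self.values))   #    suggests an experiment

agent = LearningAgent(["left", "right"])
agent.learn("right", "success")   # good feedback raises "right"'s value
print(agent.act())                # -> right
```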


How the components of agent programs work

Think of AI representations as different ways to describe the world. Just like you can describe a picture using words, an
AI agent can describe the world using different kinds of representations.

Atomic Representations: The Building Blocks

* Simple and Basic: These are like single words or symbols that represent a whole idea.
* Limited Detail: They don't provide much information about the details of the world.
* Example: Describing a car as just "car" without mentioning its color, make, or model.

Factored Representations: Breaking Things Down

* More Detailed: These representations break down things into smaller parts, like attributes or variables.
* Flexible: They can represent different combinations of these parts.
* Example: Describing a car as a "red" "Toyota" "Camry."

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


BANGALORE INSTITUTE OF TECHNOLOGY

Structured Representations: Relationships Matter

* Most Complex: These representations show how things are connected or related to each other.
* Real-World Complexity: They can capture the complex relationships in the real world.
* Example: Describing a car as having "four wheels," "a steering wheel," and "a driver."

Why do we need different representations?

* Different Tasks: Some tasks require simple representations, while others need more complex ones.
* Trade-offs: More complex representations can capture more details but can also be harder to work with.
* Real-World Complexity: The real world is complex, so we need representations that can capture its complexity.

In simple terms, atomic representations are like single words, factored representations are like sentences, and structured
representations are like stories. Each type of representation has its own strengths and weaknesses, and the best choice depends
on the specific task at hand.
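
The word/sentence/story analogy maps naturally onto Python values; the sketch below encodes the car example in each of the three styles (the field names are illustrative).

```python
# The same car in the three representation styles described above.

# Atomic: one indivisible symbol with no internal detail.
atomic = "car"

# Factored: a fixed set of attributes, each with a value.
factored = {"color": "red", "make": "Toyota", "model": "Camry"}

# Structured: objects plus the relationships between them.
structured = {
    "car": {
        "has_part": ["wheel1", "wheel2", "wheel3", "wheel4",
                     "steering_wheel"],
        "driven_by": "driver",
    }
}
```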
