
UNIT 1
PROBLEM SOLVING
Introduction to AI - AI Applications - Problem solving agents – search algorithms –
uninformed search strategies – Heuristic search strategies – Local search and
optimization problems – adversarial search – constraint satisfaction problems (CSP)

1.1 INTRODUCTION

Intelligence                                     Artificial Intelligence
------------                                     -----------------------
It is a natural process.                         It is programmed by humans.
It is hereditary.                                It is not hereditary.
Knowledge is required for intelligence.          A knowledge base and electricity are required
                                                 to generate output.
No human is an expert; we may get better         Expert systems are made which aggregate many
solutions from other humans.                     persons' experience and ideas.

1.2 DEFINITION

The study of how to make computers do things at which, at the moment, people are better.
"Artificial Intelligence is the ability of a computer to act like a human being."

 Systems that think like humans
 Systems that act like humans
 Systems that think rationally
 Systems that act rationally

Figure 1.1 Some definitions of artificial intelligence, organized into four categories


(a) Intelligence - The ability to apply knowledge in order to perform better in an environment.

(b) Artificial Intelligence - The study and construction of agent programs that perform well in
a given environment, for a given agent architecture.
(c) Agent - An entity that takes action in response to percepts from an environment.
(d) Rationality - The property of a system which does the "right thing" given what it knows.
(e) Logical Reasoning - A process of deriving new sentences from old, such that the new
sentences are necessarily true if the old ones are true.

Four Approaches of Artificial Intelligence:


 Acting humanly: The Turing test approach.
 Thinking humanly: The cognitive modelling approach.
 Thinking rationally: The laws of thought approach.
 Acting rationally: The rational agent approach.

In today's world, technology is growing very fast, and we are coming into contact with new
technologies day by day.

One of the booming technologies of computer science is Artificial Intelligence, which is ready to
create a new revolution in the world by making intelligent machines. Artificial Intelligence is now
all around us. It is currently at work in a variety of subfields, ranging from general to specific,
such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

AI is one of the most fascinating and universal fields of computer science, and it has great scope
in the future. AI aims to make a machine work like a human.

Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means
"man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."

So, we can define AI as:

"A branch of computer science by which we can create intelligent machines which can behave like
humans, think like humans, and are able to make decisions."

Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning,
and solving problems.

With Artificial Intelligence you do not need to preprogram a machine to do some work; instead, you
can create a machine with programmed algorithms which can work with its own intelligence, and that
is the awesomeness of AI.

It is believed that AI is not a new idea; some people say that, as per Greek myth, there were
mechanical men in early days which could work and behave like humans.

Why Artificial Intelligence?

Before learning about Artificial Intelligence, we should understand the importance of AI and why

should we learn it. Following are some main reasons to learn about AI:

o With the help of AI, you can create software or devices which can solve real-world problems
easily and accurately, in areas such as health, marketing, traffic, etc.
o With the help of AI, you can create a personal virtual assistant, such as Cortana, Google
Assistant, Siri, etc.
o With the help of AI, you can build robots which can work in environments where human survival
is at risk.
o AI opens a path to other new technologies, new devices, and new opportunities.

Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. Form an intelligent connection of perception and action
4. Build a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning a surgical operation
o Driving a car in traffic
5. Create systems which can exhibit intelligent behaviour, learn new things by themselves,
demonstrate, explain, and advise their users.

What Comprises Artificial Intelligence?

Artificial Intelligence is not just a part of computer science; it is vast and draws on many other
factors that contribute to it. To create AI we should first know how intelligence is composed:
intelligence is an intangible property of our brain which is a combination of reasoning, learning,
problem solving, perception, language understanding, etc.

To achieve these capabilities in a machine or software, Artificial Intelligence requires the
following disciplines:

o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neurons Study
o Statistics


Advantages of Artificial Intelligence

Following are some main advantages of Artificial Intelligence:

o High accuracy with fewer errors: AI machines or systems make fewer errors and achieve high
accuracy because they take decisions based on prior experience or information.
o High speed: AI systems can be very fast in decision-making; because of this, AI systems can
beat a chess champion at chess.
o High reliability: AI machines are highly reliable and can perform the same action multiple
times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or
exploring the ocean floor, where employing a human would be risky.
o Digital assistant: AI can be very useful as a digital assistant to users; for example, AI
technology is currently used by various e-commerce websites to show products matching customer
requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as self-driving
cars which can make our journeys safer and hassle-free, facial recognition for security purposes,
natural language processing to communicate with humans in human language, etc.

Disadvantages of Artificial Intelligence

Every technology has some disadvantages, and the same goes for Artificial Intelligence. However
advantageous the technology is, it still has some disadvantages which we need to keep in mind
while creating an AI system. Following are the disadvantages of AI:

o High cost: The hardware and software requirements of AI are very costly, as AI systems require
a lot of maintenance to meet current-world requirements.
o Can't think outside the box: Even though we are making smarter machines with AI, they still
cannot work outside the box: a robot will only do the work for which it is trained or programmed.
o No feelings and emotions: An AI machine can be an outstanding performer, but it does not have
feelings, so it cannot form any kind of emotional attachment with humans, and it may sometimes be
harmful to users if proper care is not taken.
o Increased dependency on machines: With advancing technology, people are becoming more dependent
on devices and hence are losing their mental capabilities.


o No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot
match this power of human intelligence and cannot be creative and imaginative.

Prerequisite

Before learning about Artificial Intelligence, you must have fundamental knowledge of the
following so that you can understand the concepts easily:

o Any computer language such as C, C++, Java, Python, etc. (knowledge of Python will be an
advantage)
o Knowledge of essential mathematics such as derivatives, probability theory, etc.

1.3 ACTING HUMANLY: THE TURING TEST APPROACH

The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence. A computer passes the test if a human
interrogator, after posing some written questions, cannot tell whether the written responses
come from a person or from a computer.

Figure 1.2 Turing Test

To pass the test, the computer would need to possess the following capabilities:

 natural language processing to enable it to communicate successfully in English;
 knowledge representation to store what it knows or hears;
 automated reasoning to use the stored information to answer questions and to draw
new conclusions;
 machine learning to adapt to new circumstances and to detect and extrapolate patterns.


Total Turing Test includes a video signal so that the interrogator can test the subject’s
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
“through the hatch.” To pass the total Turing Test, the computer will need

• computer vision to perceive objects, and robotics to manipulate objects and move
about.

Thinking humanly: The cognitive modelling approach

To analyse whether a given program thinks like a human, we must have some way of
determining how humans think. The interdisciplinary field of cognitive science brings together
computer models from AI and experimental techniques from psychology to try to construct
precise and testable theories of the workings of the human mind.

Although cognitive science is a fascinating field in itself, we are not going to be
discussing it all that much in this book. We will occasionally comment on similarities or
differences between AI techniques and human cognition. Real cognitive science, however, is
necessarily based on experimental investigation of actual humans or animals, and we assume
that the reader only has access to a computer for experimentation. We will simply note that AI
and cognitive science continue to fertilize each other, especially in the areas of vision, natural
language, and learning.

Thinking rationally: The “laws of thought” approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right
thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for
argument structures that always gave correct conclusions given correct premises.

For example, "Socrates is a man; all men are mortal; therefore Socrates is mortal."

These laws of thought were supposed to govern the operation of the mind, and initiated
the field of logic.

Acting rationally: The rational agent approach

Acting rationally means acting so as to achieve one's goals, given one's beliefs. An
agent is just something that perceives and acts.

The right thing: that which is expected to maximize goal achievement, given the
available information.

Acting rationally does not necessarily involve thinking.

For example, the blinking reflex involves no thinking, but it should be in the service of
rational action.


1.4 FUTURE OF ARTIFICIAL INTELLIGENCE

 Transportation: Although it could take a decade or more to perfect them, autonomous
cars will one day ferry us from place to place.

 Manufacturing: AI-powered robots work alongside humans to perform a limited range
of tasks like assembly and stacking, and predictive analysis sensors keep equipment
running smoothly.

 Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more
quickly and accurately diagnosed, drug discovery is sped up and streamlined, virtual
nursing assistants monitor patients, and big data analysis helps to create a more
personalized patient experience.

 Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.

 Media: Journalism is harnessing AI, too, and will continue to benefit from it.
Bloomberg uses Cyborg technology to help make quick sense of complex financial
reports. The Associated Press employs the natural language abilities of Automated
Insights to produce 3,700 earnings report stories per year, nearly four times more
than in the recent past.

 Customer Service: Last but hardly least, Google is working on an AI assistant that can
place human-like calls to make appointments at, say, your neighborhood hair salon. In
addition to words, the system understands context and nuance.

1.5 CHARACTERISTICS OF INTELLIGENT AGENTS

Situatedness

The agent receives some form of sensory input from its environment, and it performs
some action that changes its environment in some way.

Examples of environments: the physical world and the Internet.

 Autonomy

The agent can act without direct intervention by humans or other agents and that it has
control over its own actions and internal state.

 Adaptivity

The agent is capable of
(1) reacting flexibly to changes in its environment;
(2) taking goal-directed initiative (i.e., being pro-active), when appropriate; and
(3) learning from its own experience, its environment, and interactions with others.


 Sociability

The agent is capable of interacting in a peer-to-peer manner with other agents or humans.

Future of Artificial Intelligence:

1. Health Care Industries


India has 17.7% of the world's population, which makes it the second most populous country after
China. Health care facilities are not available to all individuals living in the country, because
of the lack of good doctors, poor infrastructure, etc. There are still people who cannot reach
doctors or hospitals. AI can provide the facility to detect disease based on symptoms: even if you
do not go to the doctor, AI could read the data from an individual's fitness band or medical
history, analyse the patterns, suggest proper medication, and even deliver it at one's fingertips
through a cell phone.
As mentioned earlier, Google's DeepMind has already beaten doctors at detecting fatal diseases
such as breast cancer. It is not far away that AI will detect common diseases as well and provide
proper suggestions for medication. A consequence of this could be less need for doctors in the
long term, resulting in job reduction.
2. AI in Education
The development of a country depends on the quality of education its youth receives. Right now,
there are lots of courses available on AI, but in the future AI is going to transform the
classical way of education. The world no longer needs skilled labourers for manufacturing
industries, which have mostly been taken over by robots and automation. The education system
could become quite effective and tailored to each individual's personality and ability. It would
give brighter students a chance to shine and give weaker students a better way to cope.

Right Education can enhance the power of individuals/nations; on the other hand, misuse of the same could
lead to devastating results.

3. AI in Finance
Quantification of growth for any country is directly related to its economic and financial
condition. As AI has enormous scope in almost every field, it has great potential to boost the
economic health of individuals and nations. Nowadays, AI algorithms are being used to manage
equity funds.

An AI system could take a large number of parameters into account while figuring out the best way
to manage funds, and it could perform better than a human manager. AI-driven strategies in the
field of finance are going to change the classical way of trading and investing. It could be
devastating for fund-managing firms that cannot


afford such facilities; it could affect business on a large scale, as decisions would be quick.
The competition would be tough and on edge all the time.

4. AI in Military and Cybersecurity


AI-assisted military technologies have built autonomous weapon systems which do not need humans
at all, providing a safer way to enhance the security of a nation. We could see robot soldiers in
the near future, as intelligent as a soldier or commando and able to perform many tasks.

AI-assisted strategies would enhance mission effectiveness and provide the safest way to execute
missions. The concerning part of an AI-assisted system is that how its algorithm performs is not
fully explainable. Deep neural networks learn fast and keep learning continuously, so the main
problem here is explainable AI. It could produce devastating results if it reaches the wrong
hands or makes wrong decisions on its own.

1.6 AGENTS AND ITS TYPES

Figure 1.3 Agent types

An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.

 Human Sensors:
 Eyes, ears, and other organs for sensors.
 Human Actuators:
 Hands, legs, mouth, and other body parts.
 Robotic Sensors:
 Mic, cameras and infrared range finders for sensors
 Robotic Actuators:
 Motors, displays, speakers, etc.

An agent can be:

Human agent: A human agent has eyes, ears, and other organs which work as sensors,
and hands, legs, and a vocal tract which work as actuators.

Robotic agent: A robotic agent can have cameras, infrared range finders, and NLP capabilities
as sensors, and various motors as actuators.

Software Agent: Software agent can have keystrokes, file contents as sensory input
and act on those inputs and display output on the screen.

Hence the world around us is full of agents, such as thermostats, cell phones, and cameras; even
we ourselves are agents. Before moving forward, we should first know about sensors, effectors,
and actuators.

Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.


Actuators: Actuators are the components of machines that convert energy into motion. The
actuators are responsible for moving and controlling a system. An actuator can be an electric
motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.

Figure 1.4 Effectors

1.7 PROPERTIES OF ENVIRONMENT

An environment is everything in the world which surrounds the agent, but it is not a
part of an agent itself. An environment can be described as a situation in which an agent is
present.

The environment is where the agent lives and operates; it provides the agent with something to
sense and act upon.

Fully observable vs Partially Observable:

If an agent sensor can sense or access the complete state of an environment at each
point of time then it is a fully observable environment, else it is partially observable.

A fully observable environment is convenient because there is no need to maintain internal state
to keep track of the history of the world.

If an agent has no sensors at all, the environment is called unobservable.

Example: chess – the board is fully observable, as are opponent’s moves. Driving
– what is around the next bend is not observable and hence partially observable.

1. Deterministic vs Stochastic

 If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.


 A stochastic environment is random in nature and cannot be determined completely by
an agent.

 In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.

2. Episodic vs Sequential

 In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.

 However, in a sequential environment, an agent requires memory of past actions to
determine the next best actions.

3. Single-agent vs Multi-agent

 If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.

 However, if multiple agents are operating in an environment, then such an environment
is called a multi-agent environment.

 The agent design problems in the multi-agent environment are different from single
agent environment.

4. Static vs Dynamic

 If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.

 Static environments are easy to deal with because an agent does not need to keep looking
at the world while deciding on an action.

 However, in a dynamic environment, agents need to keep looking at the world before each
action.

 Taxi driving is an example of a dynamic environment, whereas crossword puzzles are
an example of a static environment.

5. Discrete vs Continuous

 If an environment has a finite number of percepts and actions that can be
performed within it, then it is called a discrete environment; otherwise it is
called a continuous environment.

 A chess game comes under discrete environment as there is a finite number of moves
that can be performed.

 A self-driving car is an example of a continuous environment.



6. Known vs Unknown

 Known and unknown are not actually a feature of an environment, but it is an agent's
state of knowledge to perform an action.

 In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an action.

 It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.

7. Accessible vs. Inaccessible

 If an agent can obtain complete and accurate information about the state's environment,
then such an environment is called an Accessible environment else it is called
inaccessible.

 An empty room whose state can be defined by its temperature is an example of an
accessible environment.

 Information about an event on earth is an example of Inaccessible environment.

Task environments are essentially the "problems" to which rational agents are
the "solutions."

PEAS: Performance Measure, Environment, Actuators, Sensors

Performance

The output which we get from the agent. All the necessary results that an agent gives
after processing comes under its performance.

Environment

All the surrounding things and conditions of an agent fall in this section. It basically
consists of all the things under which the agents work.

Actuators

The devices, hardware or software through which the agent performs any actions or
processes any information to produce a result are the actuators of the agent.

Sensors

The devices through which the agent observes and perceives its environment are the
sensors of the agent.


Figure 1.5 Examples of agent types and their PEAS descriptions

Rational Agent - A system is rational if it does the "right thing", given what it knows.

Characteristics of a Rational Agent

 The agent's prior knowledge of the environment.


 The performance measure that defines the criterion of success.
 The actions that the agent can perform.
 The agent's percept sequence to date.

For every possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.

 An omniscient agent knows the actual outcome of its actions and can act accordingly;
but omniscience is impossible in reality.

 An ideal rational agent perceives and then acts, and has a higher performance measure.
E.g., crossing a road: first, perception occurs on both sides, and only then the action.
No perception occurs in a degenerate agent.
E.g., a clock: it does not view its surroundings; no matter what happens outside, the
clock works based on its inbuilt program.

 An ideal agent is described by ideal mappings: "specifying which action an agent ought to
take in response to any given percept sequence provides a design for an ideal agent".


 Eg. SQRT function calculation in calculator.

 Doing actions in order to modify future percepts (sometimes called information
gathering) is an important part of rationality.

 A rational agent should be autonomous: it should learn from its own prior knowledge
(experience).

The Structure of Intelligent Agents

Agent = Architecture + Agent Program


Architecture = the machinery that an agent executes on. (Hardware)
Agent Program = an implementation of an agent function. (Algorithm,
Logic – Software)

1.8 TYPES OF AGENTS

Agents can be grouped into five classes based on their degree of perceived intelligence
and capability:

 Simple Reflex Agents


 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent

The Simple reflex agents

 The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history (past State).

 These agents only succeed in the fully observable environment.

 The Simple reflex agent does not consider any part of percepts history during their
decision and action process.

 The Simple reflex agent works on Condition-action rule, which means it maps the
current state to action. Such as a Room Cleaner agent, it works only if there is dirt in
the room.

 Problems for the simple reflex agent design approach:

o They have very limited intelligence

o They do not have knowledge of non-perceptual parts of the current state


o Mostly too big to generate and to store.

o Not adaptive to changes in the environment.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.

Ex: if car-in-front-is-braking then initiate-braking.
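As an illustration, a condition-action rule like the one above can be written directly in code.
The following Python sketch is illustrative only; the percept format and the set of rules are
assumptions, not part of the textbook material.

# A minimal sketch of a simple reflex agent: the current percept is mapped
# straight to an action by condition-action rules; no percept history is kept.
def simple_reflex_driver(percept):
    if percept.get("car_in_front_is_braking"):
        return "initiate-braking"
    if percept.get("light") == "red":
        return "stop"
    return "keep-driving"

print(simple_reflex_driver({"car_in_front_is_braking": True}))  # initiate-braking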

Figure 1.6 A simple reflex agent

Model Based Reflex Agents

 The Model-based agent can work in a partially observable environment, and track the
situation.
 A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
o Internal State: It is a representation of the current state based on percept history.
 These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
 Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
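The internal-state bookkeeping described above can be sketched in a few lines of Python. This is
a minimal sketch for the room-cleaner example; the two-square world, the percept format
(location, status), and the method names are assumptions made purely for illustration.

# A model-based reflex agent: percepts are folded into an internal state, so
# the agent can act sensibly even when it cannot see the whole world at once.
class ModelBasedCleaner:
    def __init__(self):
        self.state = {"A": "unknown", "B": "unknown"}  # internal model of the two squares
        self.location = "A"

    def update_state(self, percept):
        self.location, status = percept                # fold the percept into the model
        self.state[self.location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        if self.state[self.location] == "dirty":
            return "Suck"
        return "Right" if self.location == "A" else "Left"

agent = ModelBasedCleaner()
print(agent.choose_action(("A", "dirty")))  # Suck
print(agent.choose_action(("A", "clean")))  # Right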


Figure 1.7 A model-based reflex agent

Goal Based Agents

o Knowledge of the current state of the environment is not always sufficient for an agent
to decide what to do.

o The agent needs to know its goal which describes desirable situations.

o Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.

o They choose an action, so that they can achieve the goal.

o These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, and it makes an agent proactive.

Figure 1.8 A goal-based agent


Utility Based Agents

o These agents are similar to the goal-based agent but provide an extra component of
utility measurement (“Level of Happiness”) which makes them different by providing
a measure of success at a given state.

o A utility-based agent acts based not only on goals but also on the best way to achieve them.

o The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.

o The utility function maps each state to a real number to check how efficiently each
action achieves the goals.

Figure 1.9 A utility-based agent


Learning Agents

o A learning agent in AI is the type of agent which can learn from its past experiences, or
it has learning capabilities.

o It starts acting with basic knowledge and is then able to act and adapt automatically
through learning.

o A learning agent has mainly four conceptual components, which are:

a. Learning element: It is responsible for making improvements by learning from


environment

b. Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.

c. Performance element: It is responsible for selecting external action


d. Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.

o Hence, learning agents are able to learn, analyze performance, and look for new ways
to improve the performance.

Figure 1.10 Learning Agents

1.9 PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS

Problem-solving agents

In Artificial Intelligence, Search techniques are universal problem-solving methods.


Rational agents or problem-solving agents in AI mostly use these search strategies or
algorithms to solve a specific problem and provide the best result. Problem-solving agents
are goal-based agents and use atomic representation. In this topic, we will learn various
problem-solving search algorithms.

 Some of the problems most commonly solved with the help of artificial intelligence
are:

1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.

Problem Searching

 In general, searching refers to finding the information one needs.


 Searching is the most commonly used technique of problem solving in artificial
intelligence.

 A searching algorithm helps us search for the solution to a particular problem.

Problem: Problems are the issues that come across any system. A solution is needed to
solve each particular problem.

Steps to Solve a Problem Using Artificial Intelligence

 The process of solving a problem consists of five steps. These are:

Figure 1.11 Problem Solving in Artificial Intelligence

1. Defining the problem: The definition of the problem must be stated precisely. It
should contain the possible initial as well as final situations, which should result in an
acceptable solution.

2. Analyzing the problem: The problem and its requirements must be analyzed, as a few
features can have an immense impact on the resulting solution.

3. Identification of solutions: This phase generates a reasonable number of solutions to
the given problem in a particular range.

4. Choosing a solution: From all the identified solutions, the best solution is chosen
based on the results produced by the respective solutions.

5. Implementation: After choosing the best solution, its implementation is done.

Measuring problem-solving performance

We can evaluate an algorithm’s performance in four ways:


Completeness: Is the algorithm guaranteed to find a solution when there is one?


Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the search?

Search Algorithm Terminologies

 Search: Searching is a step by step procedure to solve a search problem in a given
search space. A search problem can have three main factors:

1. Search Space: Search space represents a set of possible solutions, which a system
may have.

2. Start State: It is a state from where agent begins the search.

3. Goal test: It is a function which observes the current state and returns whether the
goal state is achieved or not.

 Search tree: A tree representation of a search problem is called a search tree. The root of
the search tree is the root node, which corresponds to the initial state.

 Actions: It gives the description of all the available actions to the agent.

 Transition model: A description of what each action does can be represented as a
transition model.

 Path Cost: It is a function which assigns a numeric cost to each path.

 Solution: It is an action sequence which leads from the start node to the goal node.

 Optimal solution: A solution that has the lowest cost among all solutions.

Example Problems

A toy problem is intended to illustrate or exercise various problem-solving methods.
A real-world problem is one whose solutions people actually care about.

Toy Problems

Vacuum World

States: The state is determined by both the agent location and the dirt locations. The
agent is in one of 2 locations, each of which might or might not contain dirt. Thus there are
2 × 2^2 = 8 possible world states.

Initial state: Any state can be designated as the initial state.


Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.

Transition model: The actions have their expected effects, except that moving Left in
the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown in Figure 1.12.

Goal test: This checks whether all the squares are clean.

Path cost: Each step costs 1, so the path cost is the number of steps in the path.
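The formulation above can be written out as code. The following Python sketch encodes a state
as the tuple (agent location, dirt at A, dirt at B), which is an assumed encoding chosen for
illustration; it yields exactly the 2 × 2^2 = 8 states counted above.

# Transition model and goal test for the two-square vacuum world.
def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)   # no effect if already in the leftmost square
    if action == "Right":
        return ("B", dirt_a, dirt_b)   # no effect if already in the rightmost square
    if action == "Suck":               # cleans only the current square
        return (loc, dirt_a and loc != "A", dirt_b and loc != "B")

def goal_test(state):
    return not state[1] and not state[2]   # goal: all squares clean

print(result(("A", True, True), "Suck"))   # ('A', False, True)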

Figure 1.12 Vacuum World State Space Graph

8-Puzzle Problem

Figure 1.13 8- Puzzle Problem

States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.


Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.

Actions: The simplest formulation defines the actions as movements of the blank space Left,
Right, Up, or Down. Different subsets of these are possible depending on where the blank is.

Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 1.13, the resulting state has the 5 and the blank
switched.

Goal test: This checks whether the state matches the goal configuration shown in the figure.

Path cost: Each step costs 1, so the path cost is the number of steps in the path.

8-Queens Problem

Figure 1.14 Queens Problem

 States: Any arrangement of 0 to 8 queens on the board is a state.


 Initial state: No queens on the board.
 Actions: Add a queen to any empty square.
 Transition model: Returns the board with a queen added to the specified square.
 Goal test: 8 queens are on the board, none attacked.

Consider the water jug problem and the operators involved in it: You are given two jugs, a
4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a pump that
can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the
4-gallon jug?

Explicit Assumptions: A jug can be filled from the pump, water can be poured out of a
jug on to the ground, water can be poured from one jug to another and that there are no other
measuring devices available.

Here the initial state is (0, 0). The goal state is (2, n) for any value of n.


State Space Representation: We will represent a state of the problem as a tuple (x, y)
where x represents the amount of water in the 4-gallon jug and y represents the amount of water
in the 3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.

To solve this we have to make some assumptions not mentioned in the problem. They
are:

 We can fill a jug from the pump.


 We can pour water out of a jug to the ground.
 We can pour water from one jug to another.
 There is no measuring device available.

Operators - we must define a set of operators that will take us from one state to another.

Table 1.1

Sr.  Current State            Next State        Description
1    (x, y) if x < 4          (4, y)            Fill the 4-gallon jug
2    (x, y) if y < 3          (x, 3)            Fill the 3-gallon jug
3    (x, y) if x > 0          (x - d, y)        Pour some water out of the 4-gallon jug
4    (x, y) if y > 0          (x, y - d)        Pour some water out of the 3-gallon jug
5    (x, y) if x > 0          (0, y)            Empty the 4-gallon jug on the ground
6    (x, y) if y > 0          (x, 0)            Empty the 3-gallon jug on the ground
7    (x, y) if x + y >= 4     (4, y - (4 - x))  Pour water from the 3-gallon jug into the
     and y > 0                                  4-gallon jug until the 4-gallon jug is full
8    (x, y) if x + y >= 3     (x - (3 - y), 3)  Pour water from the 4-gallon jug into the
     and x > 0                                  3-gallon jug until the 3-gallon jug is full
9    (x, y) if x + y <= 4     (x + y, 0)        Pour all the water from the 3-gallon jug
     and y > 0                                  into the 4-gallon jug
10   (x, y) if x + y <= 3     (0, x + y)        Pour all the water from the 4-gallon jug
     and x > 0                                  into the 3-gallon jug
11   (0, 2)                   (2, 0)            Pour the 2 gallons from the 3-gallon jug
                                                into the 4-gallon jug
12   (2, y)                   (0, y)            Empty the 2 gallons in the 4-gallon jug
                                                on the ground


Figure 1.15 Solution

Table 1.2 Solution

S.No.  Gallons in 4-gal jug (x)  Gallons in 3-gal jug (y)  Rule Applied
1      0                         0                         Initial state
2      4                         0                         1. Fill the 4-gallon jug
3      1                         3                         8. Pour 4 into 3 until 3 is full
4      1                         0                         6. Empty the 3-gallon jug
5      0                         1                         10. Pour all of 4 into 3
6      4                         1                         1. Fill the 4-gallon jug
7      2                         3                         8. Pour 4 into 3 until 3 is full

 A 4-gallon jug and a 3-gallon jug.

 No measuring marks on the jugs.

 There is a pump to fill the jugs with water.

 How can you get exactly 2 gallons of water into the 4-gallon jug?
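The production rules in Table 1.1 can be turned into a small search program. The sketch below is
a minimal breadth-first solver for the water jug problem; the function names and the use of a
parent map to recover the path are illustrative assumptions, not part of the original notes.

from collections import deque

def successors(state):
    x, y = state                               # gallons in the 4-gallon and 3-gallon jugs
    states = {(4, y), (x, 3), (0, y), (x, 0)}  # fill either jug, empty either jug
    pour = min(x, 3 - y); states.add((x - pour, y + pour))  # pour 4-gallon into 3-gallon
    pour = min(y, 4 - x); states.add((x + pour, y - pour))  # pour 3-gallon into 4-gallon
    states.discard(state)
    return states

def solve(start=(0, 0)):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == 2:                      # goal test: 2 gallons in the 4-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state           # remember how we reached this state
                frontier.append(nxt)

print(solve())  # one shortest path, e.g. (0,0) (4,0) (1,3) (1,0) (0,1) (4,1) (2,3)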


1.10 PROBLEM SOLVING BY SEARCH

An important aspect of intelligence is goal-based problem solving.

The solution of many problems can be described by finding a sequence of actions that
lead to a desirable goal. Each action changes the state and the aim is to find the sequence of
actions and states that lead from the initial (start) state to a final (goal) state.

A well-defined problem can be described by:

 Initial state

 Operator or successor function - for any state x returns s(x), the set of states
reachable from x with one action

 State space - all states reachable from initial by any sequence of actions

 Path - sequence through state space

 Path cost - function that assigns a cost to a path. Cost of a path is the sum of costs
of individual actions along the path

 Goal test - test to determine if at goal state

What is Search?

Search is the systematic examination of states to find path from the start/root state to
the goal state.

The set of possible states, together with operators defining their connectivity constitute
the search space.

The output of a search algorithm is a solution, that is, a path from the initial state to a
state that satisfies the goal test.

Problem-solving agents

A problem-solving agent is a goal-based agent. It decides what to do by finding sequences
of actions that lead to desirable states. The agent can adopt a goal and aim at satisfying it.

To illustrate the agent’s behavior, let us take an example where our agent is in the city
of Arad, which is in Romania. The agent has to adopt a goal of getting to Bucharest.


Goal formulation, based on the current situation and the agent’s performance measure,
is the first step in problem solving.

The agent’s task is to find out which sequence of actions will get to a goal state.

Problem formulation is the process of deciding what actions and states to consider
given a goal.

Example: Route finding problem


Referring to figure
On holiday in Romania : currently in Arad. Flight leaves tomorrow from Bucharest
Formulate goal: be in Bucharest
Formulate problem: states: various cities
actions: drive between cities
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

Problem formulation

A problem is defined by four items:

initial state, e.g., "at Arad"
successor function S(x) = set of action-state pairs, e.g., S(Arad) = {[Arad -> Zerind; Zerind], ...}
goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
path cost (additive), e.g., sum of distances, number of actions executed, etc.;
c(x, a, y) is the step cost, assumed to be >= 0

A solution is a sequence of actions leading from the initial state to a goal state.
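The four items above map naturally onto a small class. The following Python sketch encodes the
Romania route-finding problem; only a handful of road links are included, an assumed subset used
purely for illustration.

class RouteProblem:
    roads = {"Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
             "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
             "Fagaras": {"Bucharest": 211},
             "Rimnicu Vilcea": {"Pitesti": 97},
             "Pitesti": {"Bucharest": 101}}

    def __init__(self, initial="Arad", goal="Bucharest"):
        self.initial, self.goal = initial, goal

    def successors(self, state):
        # Successor function: action-state pairs reachable with one driving action.
        return [("go " + city, city) for city in self.roads.get(state, {})]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):
        return self.roads[state][result]   # road distance; always >= 0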


1.11 EXAMPLE PROBLEMS

The problem-solving approach has been applied to a vast array of task environments.
Some of the best-known problems are summarized below. They are distinguished as toy or
real-world problems.

A toy problem is intended to illustrate various problem solving methods. It can be
easily used by different researchers to compare the performance of algorithms.

A real world problem is one whose solutions people actually care about.


1.12 TOY PROBLEMS

Vacuum World Example

o States: The agent is in one of two locations, each of which might or might not contain
dirt. Thus there are 2 × 2^2 = 8 possible world states.

o Initial state: Any state can be designated as initial state.

o Successor function: This generates the legal states that result from trying the three
actions (Left, Right, Suck). The complete state space is shown in the figure.

o Goal Test: This tests whether all the squares are clean.

o Path cost: Each step costs one, so the path cost is the number of steps in the path.

Vacuum World State Space

Figure 2.1 The state space for the vacuum world.


Arcs denote actions: L = Left, R = Right, S = Suck

The 8-puzzle

An 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile
adjacent to the blank space can slide into the space. The object is to reach the goal state, as
shown in Figure 2.2.

Example: The 8-puzzle


Figure 2.2 A typical instance of 8-puzzle

The problem formulation is as follows:

o States : A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
o Initial state : Any state can be designated as the initial state. It can be noted that any
given goal can be reached from exactly half of the possible initial states.
o Successor function: This generates the legal states that result from trying the four
actions (blank moves Left, Right, Up, or Down).
o Goal Test : This checks whether the state matches the goal configuration shown in
Figure(Other goal configurations are possible)
o Path cost: Each step costs 1, so the path cost is the number of steps in the path.
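The formulation above is easy to implement once a state encoding is fixed. In the sketch below
the state is assumed to be a tuple of nine entries read row by row, with 0 marking the blank;
this encoding is an illustrative choice, not prescribed by the text.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def result(state, action):
    """Slide the blank in the given direction, or return None if the move is illegal."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    if action == "Up" and row == 0: return None
    if action == "Down" and row == 2: return None
    if action == "Left" and col == 0: return None
    if action == "Right" and col == 2: return None
    board = list(state)
    swap = blank + MOVES[action]
    board[blank], board[swap] = board[swap], board[blank]
    return tuple(board)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # a typical instance, blank in the centre
print(result(start, "Left"))          # the 5 and the blank are switched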

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as
test problems for new search algorithms in AI. This general class is known to be NP-complete.
The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.

The 15-puzzle (on a 4 x 4 board) has around 1.3 trillion states, and random instances can
be solved optimally in a few milliseconds by the best search algorithms.

The 24-puzzle (on a 5 x 5 board) has around 10^25 states, and random instances are still
quite difficult to solve optimally with current machines and algorithms.

8-Queens problem

The goal of the 8-queens problem is to place 8 queens on the chessboard such that no queen
attacks any other. (A queen attacks any piece in the same row, column, or diagonal.)

Figure 2.3 shows an attempted solution that fails: the queen in the right most column is
attacked by the queen at the top left.

An incremental formulation involves operators that augment the state description,
starting with an empty state. For the 8-queens problem, this means each action adds a queen to
the state. A complete-state formulation starts with all 8 queens on the board and moves them
around. In either case the path cost is of no interest because only the final state counts.


Figure 2.3 8-queens problem

The first incremental formulation one might try is the following:

o States: Any arrangement of 0 to 8 queens on board is a state.


o Initial state: No queen on the board.
o Successor function: Add a queen to any empty square.
o Goal Test: 8 queens are on the board, none attacked.

In this formulation, we have 64 x 63 x ... x 57 ≈ 3 × 10^14 possible sequences to investigate.

A better formulation would prohibit placing a queen in any square that is already
attacked.

o States: Arrangements of n queens (0 <= n <= 8), one per column in the leftmost n
columns, with no queen attacking another.

o Successor function: Add a queen to any square in the leftmost empty column such
that it is not attacked by any other queen.

This formulation reduces the 8-queens state space from 3 × 10^14 to just 2,057, and solutions
are easy to find.

For 100 queens, the initial formulation has roughly 10^400 states, whereas the improved
formulation has about 10^52 states. This is a huge reduction, but the improved state space is
still too big for the algorithms to handle.
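The improved formulation lends itself to a simple backtracking search. The sketch below adds one
queen per column, only to rows not attacked by the queens already placed; the representation (a
tuple of row indices, one per filled column) is an assumption made for illustration.

def attacked(queens, row):
    col = len(queens)                          # index of the next empty column
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(queens))

def place(queens=()):
    if len(queens) == 8:
        return queens                          # goal test: 8 queens, none attacked
    for row in range(8):
        if not attacked(queens, row):
            solution = place(queens + (row,))
            if solution:
                return solution
    return None                                # dead end: backtrack

print(place())  # one solution as rows per column, e.g. (0, 4, 7, 5, 2, 6, 1, 3)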

1.13 REAL-WORLD PROBLEMS

ROUTE-FINDING PROBLEM

Route-finding problem is defined in terms of specified locations and transitions along
links between them. Route-finding algorithms are used in a variety of applications, such as
routing in computer networks, military operations planning, and airline travel planning
systems.


1.14 AIRLINE TRAVEL PROBLEM

The airline travel problem is specified as follows:

o States: Each is represented by a location (e.g., an airport) and the current time.
o Initial state: This is specified by the problem.
o Successor function: This returns the states resulting from taking any scheduled flight
(further specified by seat class and location),leaving later than the current time plus the
within-airport transit time, from the current airport to another.

o Goal Test: Are we at the destination by some prespecified time?


o Path cost: This depends upon the monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer
mileage awards, and so on.

1.15 TOURING PROBLEMS

Touring problems are closely related to route-finding problems, but with an important
difference. Consider, for example, the problem "Visit every city at least once", as shown on
the Romania map.

As with route-finding the actions correspond to trips between adjacent cities. The state
space, however, is quite different.

The initial state would be “In Bucharest; visited{Bucharest}”.

A typical intermediate state would be “In Vaslui;visited {Bucharest, Urziceni,Vaslui}”.

The goal test would check whether the agent is in Bucharest and all 20 cities have been
visited.

1.15 THE TRAVELLING SALESPERSON PROBLEM (TSP)

The travelling salesperson problem is a touring problem in which each city must be visited
exactly once. The aim is to find
the shortest tour. The problem is known to be NP-hard. Enormous efforts have been expended
to improve the capabilities of TSP algorithms. These algorithms are also used in tasks such as
planning movements of automatic circuit-board drills and of stocking machines on shop
floors.

VLSI layout

A VLSI layout problem requires positioning millions of components and connections
on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize
manufacturing yield. The layout problem is split into two parts: cell layout and channel
routing.


ROBOT navigation

Robot navigation is a generalization of the route-finding problem. Rather than a
discrete set of routes, a robot can move in a continuous space with an infinite set of possible
actions and states. For a circular robot moving on a flat surface, the space is essentially
two-dimensional. When the robot has arms and legs or wheels that also must be controlled, the
search space becomes multi-dimensional. Advanced techniques are required to make the search
space finite.

1.16 AUTOMATIC ASSEMBLY SEQUENCING

Examples include the assembly of intricate objects such as electric motors. The aim in
assembly problems is to find an order in which to assemble the parts of some object. If the
wrong order is chosen, there will be no way to add some part later without undoing some work
already done. Another important assembly problem is protein design, in which the goal is to
find a sequence of amino acids that will fold into a three-dimensional protein with the right
properties to cure some disease.

1.17 INTERNET SEARCHING

In recent years there has been increased demand for software robots that perform
Internet searching, looking for answers to questions, for related information, or for shopping
deals. These searching techniques consider the Internet as a graph of nodes (pages) connected
by links.

Different Search Algorithms

Figure 2.4 Different Search Algorithms


1.18 UNINFORMED SEARCH STRATEGIES

Uninformed search strategies have no additional information about states beyond
that provided in the problem definition.

Strategies that know whether one non-goal state is "more promising" than another are
called informed search or heuristic search strategies.

There are five uninformed search strategies as given below.

o Breadth-first search
o Uniform-cost search
o Depth-first search
o Depth-limited search
o Iterative deepening search

Breadth-first search

o Breadth-first search is a simple strategy in which the root node is expanded first, then
all successors of the root node are expanded next, then their successors, and so on. In
general, all the nodes are expanded at a given depth in the search tree before any nodes
at the next level are expanded.

o Breadth-first search is implemented by calling TREE-SEARCH with an empty fringe that is a first-in-first-out (FIFO) queue, ensuring that the nodes that are visited first will be expanded first. In other words, calling TREE-SEARCH(problem, FIFO-QUEUE()) results in breadth-first search. The FIFO queue puts all newly generated successors at the end of the queue, which means that shallow nodes are expanded before deeper nodes.
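
A minimal Python sketch of this idea, assuming the state space is given explicitly as an adjacency dictionary (the names graph, start, and goal are illustrative, not from the text):

from collections import deque

def breadth_first_search(graph, start, goal):
    # Expand shallowest nodes first: the frontier is a FIFO queue of paths.
    if start == goal:
        return [start]
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()          # shallowest path leaves first
        for successor in graph[path[-1]]:  # graph: adjacency dictionary
            if successor not in visited:
                if successor == goal:
                    return path + [successor]
                visited.add(successor)
                frontier.append(path + [successor])
    return None                            # no solution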

Figure 2.5 Breadth-first search on a simple binary tree. At each stage, the node to be
expanded next is indicated by a marker.


Properties of breadth-first-search

Figure 2.6 Breadth-first-search properties

Time complexity for BFS

Assume every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, we would expand all but the last node at level d, generating b^(d+1) − b nodes at level d+1.

Then the total number of nodes generated is b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1)).

Every node that is generated must remain in memory, because it is either part of the fringe or is an ancestor of a fringe node. The space complexity is, therefore, the same as the time complexity.
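
As a quick check of the formula, a small sketch (the values b = 10 and d = 5 are illustrative):

def bfs_nodes_generated(b, d):
    # b + b^2 + ... + b^d, plus the b^(d+1) - b extra nodes generated at depth d+1
    return sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)

print(bfs_nodes_generated(10, 5))   # 1111100 -> dominated by the b^(d+1) term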

1.20 UNIFORM-COST SEARCH

Instead of expanding the shallowest node, uniform-cost search expands the node n
with the lowest path cost. Uniform-cost search does not care about the number of steps a path
has, but only about their total cost.
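
A minimal Python sketch of uniform-cost search, assuming graph[u] is a list of (step_cost, v) pairs (an illustrative interface, not from the text):

import heapq

def uniform_cost_search(graph, start, goal):
    # The frontier is a min-heap ordered by path cost g.
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for step_cost, successor in graph[node]:
            if successor not in explored:
                heapq.heappush(frontier, (g + step_cost, successor, path + [successor]))
    return None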


1.21 DEPTH-FIRST-SEARCH

Depth-first-search always expands the deepest node in the current fringe of the search
tree. The progress of the search is illustrated in Figure 2.7. The search proceeds immediately
to the deepest level of the search tree, where the nodes have no successors. As those nodes are
expanded, they are dropped from the fringe, so the search “backs up” to the next shallowest
node that still has unexplored successors.

Figure 2.7 Depth-first-search on a binary tree. Nodes that have been expanded and have
no descendants in the fringe can be removed from memory; these are shown in
black. Nodes at depth 3 are assumed to have no successors, and M is the only goal node.

This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, also known as a stack.
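
A minimal recursive Python sketch, in which the call stack plays the role of the LIFO queue (the adjacency-dictionary interface is an assumption carried over from the earlier sketches):

def depth_first_search(graph, start, goal, visited=None):
    # Recursive DFS: always follows the deepest unexplored branch first.
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for successor in graph[start]:
        if successor not in visited:
            result = depth_first_search(graph, successor, goal, visited)
            if result is not None:
                return [start] + result
    return None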

Depth-first-search has very modest memory requirements. It needs to store only a single
path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each
node on the path. Once the node has been expanded, it can be removed from the memory, as
soon as its descendants have been fully explored (Refer Figure 2.7).

For a state space with a branching factor b and maximum depth m, depth-first-search
requires storage of only bm + 1 nodes.

Using the same assumptions as before, and assuming that nodes at the same depth as
the goal node have no successors, we find that depth-first-search would require 118 kilobytes
instead of 10 petabytes, a factor of 10 billion times less space.


Drawback of Depth-first-search

The drawback of depth-first-search is that it can make a wrong choice and get stuck
going down a very long (or even infinite) path when a different choice would lead to a solution
near the root of the search tree. For example, depth-first-search will explore the entire left
subtree even if node C is a goal node.

1.22 BACKTRACKING SEARCH

A variant of depth-first search called backtracking search uses less memory: only
one successor is generated at a time rather than all successors, so only O(m) memory is needed
rather than O(bm).

DEPTH-LIMITED-SEARCH

Figure 2.8 Depth-limited-search

The problem of unbounded trees can be alleviated by supplying depth-first-search with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited-search. The depth limit solves the infinite-path problem.

Depth-limited search will be nonoptimal if we choose l > d. Its time complexity is O(b^l)
and its space complexity is O(bl). Depth-first-search can be viewed as a special case of depth-
limited search with l = ∞. Sometimes, depth limits can be based on knowledge of the problem.
For example, on the map of Romania there are 20 cities. Therefore, we know that if there is a
solution, it must be of length 19 at the longest, so l = 19 is a possible choice. However, it can
be shown that any city can be reached from any other city in at most 9 steps. This number,
known as the diameter of the state space, gives us a better depth limit.


Depth-limited-search can be implemented as a simple modification to the general tree-search algorithm or to the recursive depth-first-search algorithm. The pseudocode for recursive depth-limited-search is shown in Figure 2.9.

It can be noted that the above algorithm can terminate with two kinds of failure: the
standard failure value indicates no solution; the cutoff value indicates no solution within the
depth limit. In other words, depth-limited search is depth-first search with depth limit l; it
returns cutoff if any path is cut off by the depth limit.

function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
   return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
   cutoff-occurred? ← false
   if GOAL-TEST(problem, STATE[node]) then return SOLUTION(node)
   else if DEPTH[node] = limit then return cutoff
   else for each successor in EXPAND(node, problem) do
       result ← RECURSIVE-DLS(successor, problem, limit)
       if result = cutoff then cutoff-occurred? ← true
       else if result ≠ failure then return result
   if cutoff-occurred? then return cutoff else return failure
Figure 2.9 Recursive implementation of Depth-limited-search
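
A runnable Python rendering of the same idea, assuming successors are produced by an expand(node) callback and goals are detected by a goal_test(node) callback (illustrative names, not from the text):

CUTOFF = object()   # sentinel: "no solution within the depth limit"

def depth_limited_search(node, goal_test, expand, limit):
    # Returns a solution path, CUTOFF, or None (proven failure).
    if goal_test(node):
        return [node]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for successor in expand(node):
        result = depth_limited_search(successor, goal_test, expand, limit - 1)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return CUTOFF if cutoff_occurred else None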

1.23 ITERATIVE DEEPENING DEPTH-FIRST SEARCH

Iterative deepening search (or iterative-deepening depth-first search) is a general strategy, often used in combination with depth-first-search, that finds the best depth limit. It does this by gradually increasing the limit – first 0, then 1, then 2, and so on – until a goal is found. This will occur when the depth limit reaches d, the depth of the shallowest goal node. The algorithm is shown in Figure 2.10.

Iterative deepening combines the benefits of depth-first and breadth-first search. Like depth-first-search, its memory requirements are modest: O(bd) to be precise.

Like breadth-first-search, it is complete when the branching factor is finite and optimal when the path cost is a nondecreasing function of the depth of the node.
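
A minimal Python sketch, reusing the depth_limited_search function and CUTOFF sentinel from the earlier sketch (max_depth is an illustrative safety bound):

def iterative_deepening_search(root, goal_test, expand, max_depth=50):
    # Repeatedly apply depth-limited search with limits 0, 1, 2, ...
    for limit in range(max_depth + 1):
        result = depth_limited_search(root, goal_test, expand, limit)
        if result is not CUTOFF:
            return result   # a solution, or None once failure is proven
    return None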

Figure 2.11 shows the four iterations of ITERATIVE-DEEPENING-SEARCH on a binary search tree, where the solution is found on the fourth iteration.


Figure 2.10 The iterative deepening search algorithm, which repeatedly applies
depth-limited-search with increasing limits. It terminates when a solution is found or
if the depth-limited search returns failure, meaning that no solution exists.

Figure 2.11 Four iterations of iterative deepening search on a binary tree

Iterative deepening search is not as wasteful as it might seem


Figure 2.12 Iterative deepening search is not as wasteful as it might seem


Properties of iterative deepening search

Figure 2.13 Properties of iterative deepening search


Bidirectional Search

The idea behind bidirectional search is to run two simultaneous searches – one forward
from the initial state and the other backward from the goal, stopping when the two searches
meet in the middle

The motivation is that b^(d/2) + b^(d/2) is much less than b^d; or, in the figure, the area of the two small circles is less than the area of one big circle centered on the start and reaching to the goal.

Figure 2.14 A schematic view of a bidirectional search that is about to succeed, when
a Branch from the Start node meets a Branch from the goal node.

• Before moving into bidirectional search, let’s first understand a few terms.

• Forward search: searching forward from the start state toward the goal.

• Backward search: searching backward from the goal state toward the start.

• So bidirectional search, as the name suggests, is a combination of forward and backward search. Basically, if the average branching factor going out of a node (the fan-out) is smaller, prefer forward search; if the average branching factor going into a node (the fan-in) is smaller, prefer backward search.

• We must traverse the tree from the start node and the goal node, and wherever they meet, the path from the start node to the goal through the intersection is the optimal solution (a minimal sketch follows this list). The algorithm is applicable when generating predecessors is easy in both the forward and backward directions and there is only one goal state (or very few goal states).
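
A minimal Python sketch of bidirectional breadth-first search on an undirected graph, where predecessors equal successors (the adjacency-dictionary interface is an assumption, not from the text):

from collections import deque

def bidirectional_search(graph, start, goal):
    # Two BFS frontiers, expanded one layer at a time, until they meet.
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand_layer(frontier, parents, other_parents):
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in graph[node]:
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:
                        return nbr            # the frontiers meet here
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = (expand_layer(frontier_f, parents_f, parents_b)
                or expand_layer(frontier_b, parents_b, parents_f))
        if meet:
            # Stitch the two half-paths together at the meeting node.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None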


Figure 2.15 Comparing Uninformed Search Strategies

Figure 2.16 Evaluation of search strategies. b is the branching factor; d is the depth of
the shallowest solution; m is the maximum depth of the search tree; l is the depth
limit. Superscript caveats are as follows: (a) complete if b is finite; (b) complete if step
costs ≥ ε for positive ε; (c) optimal if step costs are all identical; (d) if both directions
use breadth-first search.

Best-first search
Best-first search is an instance of general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation function f(n). The
node with lowest evaluation is selected for expansion, because the evaluation measures the
distance to the goal.
This can be implemented using a priority-queue, a data structure that will maintain the
fringe in ascending order of f-values.

1.24 HEURISTIC FUNCTIONS


A heuristic function, or simply a heuristic, is a function that ranks alternatives in various search algorithms at each branching step, based on the available information, in order to decide which branch to follow during a search.

The key component of the best-first search algorithm is a heuristic function, denoted by h(n): h(n) = estimated cost of the cheapest path from node n to a goal node.

For example, in Romania, one might estimate the cost of the cheapest path from Arad to Bucharest by the straight-line distance from Arad to Bucharest (Figure 2.19).

Heuristic functions are the most common form in which additional knowledge is imparted to the search algorithm.
Greedy best-first search

Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.

It evaluates the nodes by using the heuristic function f(n) = h(n).

Taking the example of route-finding problems in Romania, the goal is to reach Bucharest starting from the city Arad. We need to know the straight-line distances to Bucharest from various cities, as shown in Figure 2.19. For example, the initial state is In(Arad), and the straight-line distance heuristic hSLD(In(Arad)) is found to be 366.

Using the straight-line distance heuristic hSLD, the goal state can be reached faster.
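
A minimal Python sketch of greedy best-first search; the few hSLD values shown follow the standard Romania table, and the adjacency-dictionary interface is an assumption of the sketch:

import heapq

# A few of the straight-line distances to Bucharest (the hSLD values).
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(graph, start, goal, h):
    # Always expand the frontier node with the smallest f(n) = h(n).
    frontier = [(h[start], start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for successor in graph[node]:
            if successor not in visited:
                visited.add(successor)
                heapq.heappush(frontier, (h[successor], successor, path + [successor]))
    return None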


Figure 2.19 Values of hSLD - straight line distances to Bucharest

Figure 2.20 Progress of greedy best-first search


Figure 2.20 shows the progress of greedy best-first search using hSLD to find a path from
Arad to Bucharest. The first node to be expanded from Arad will be Sibiu, because it is closer
to Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras,
because it is closest. Fagaras in turn generates Bucharest, which is the goal.

Properties of greedy search

o Complete: No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
Complete in finite space with repeated-state checking
o Time: O(b^m), but a good heuristic can give dramatic improvement
o Space: O(b^m) – keeps all nodes in memory
o Optimal: No

Greedy best-first search is not optimal, and it is incomplete.

The worst-case time and space complexity is O(b^m), where m is the maximum depth of the search space.

A* SEARCH

A* Search is the most widely used form of best-first search. The evaluation function f(n)
is obtained by combining

(1) g(n) = the cost to reach the node, and

(2) h(n) = the estimated cost to get from the node to the goal:

f(n) = g(n) + h(n).

A* search is both complete and optimal. A* is optimal if h(n) is an admissible heuristic – that is, provided that h(n) never overestimates the cost to reach the goal.

An obvious example of an admissible heuristic is the straight-line distance hSLD that we used in getting to Bucharest; a straight-line distance can never be an overestimate. The progress of an A* tree search for Bucharest is shown in Figure 2.22.

The values of g are computed from the step costs shown in the Romania map, and the values of hSLD are given in Figure 2.19.
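
A minimal Python sketch of A* tree search with the path-cost bookkeeping described above; graph[u] as a list of (step_cost, v) pairs and the heuristic table h are assumptions of the sketch:

import heapq

def a_star_search(graph, start, goal, h):
    # Expands the frontier node with the lowest f(n) = g(n) + h(n).
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for step_cost, successor in graph[node]:
            new_g = g + step_cost
            if new_g < best_g.get(successor, float('inf')):
                best_g[successor] = new_g
                heapq.heappush(frontier, (new_g + h[successor], new_g,
                                          successor, path + [successor]))
    return None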


Figure 2.21 A* Search

Figure 2.22 Example A* Search

1.25 LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS

o In many optimization problems, the path to the goal is irrelevant; the goal state itself is
the solution

o For example, in the 8-queens problem, what matters is the final configuration of queens,
not the order in which they are added.

o In such cases, we can use local search algorithms. They operate using a single current
state (rather than multiple paths) and generally move only to neighbors of that state.


o The important applications of this class of problems are (a) integrated-circuit design,
(b) factory-floor layout, (c) job-shop scheduling, (d) automatic programming, (e)
telecommunications network optimization, (f) vehicle routing, and (g) portfolio
management.

Key advantages of local search algorithms:

(1) They use very little memory – usually a constant amount; and

(2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.

1.26 OPTIMIZATION PROBLEMS


In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an objective
function.

State Space Landscape


To understand local search, it helps to consider the state space landscape, as shown in Figure 2.23.

A landscape has both “location” (defined by the state) and “elevation” (defined by the value of the heuristic cost function or objective function).

If elevation corresponds to cost, then the aim is to find the lowest valley – a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak – a global maximum.

Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.

Figure 2.23 A one-dimensional state space landscape in which elevation corresponds to the objective function. The aim is to find the global maximum. Hill-climbing search modifies the current state to try to improve it, as shown by the arrow. The various topographic features are defined in the text.


Hill-climbing search

The hill-climbing search algorithm, as shown in Figure 2.24, is simply a loop that continually
moves in the direction of increasing value – that is, uphill. It terminates when it reaches a
“peak” where no neighbor has a higher value.

function HILL-CLIMBING(problem) returns a state that is a local maximum
   inputs: problem, a problem
   local variables: current, a node
                    neighbor, a node

   current ← MAKE-NODE(INITIAL-STATE[problem])
   loop do
       neighbor ← a highest-valued successor of current
       if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
       current ← neighbor
Figure 2.24 The hill-climbing search algorithm (steepest ascent version), which is
the most basic local search technique. At each step the current node is replaced
by the best neighbor; the neighbor with the highest VALUE. If the heuristic cost
estimate h is used, we could find the neighbor with the lowest h.
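
A direct Python transcription of the pseudocode above; the successors(state) and value(state) callbacks are illustrative assumptions, not from the text:

def hill_climbing(initial, successors, value):
    # Steepest-ascent hill climbing: keep moving to the best neighbor
    # until no neighbor is better than the current state.
    current = initial
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        neighbor = max(neighbors, key=value)
        if value(neighbor) <= value(current):
            return current          # local maximum (or flat plateau) reached
        current = neighbor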

Hill-climbing is sometimes called greedy local search because it grabs a good neighbor
state without thinking ahead about where to go next. Greedy algorithms often perform quite
well.

Problems with hill-climbing

Hill-climbing often gets stuck for the following reasons :

 Local maxima: a local maximum is a peak that is higher than each of its neighboring
states, but lower than the global maximum. Hill-climbing algorithms that reach the
vicinity of a local maximum will be drawn upwards towards the peak, but will then be
stuck with nowhere else to go.

 Ridges: A ridge is shown in Figure 2.25. Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.

 Plateaux: A plateau is an area of the state space landscape where the evaluation
function is flat. It can be a flat local maximum, from which no uphill exit exists, or a
shoulder, from which it is possible to make progress.


Figure 2.25 Illustration of why ridges cause difficulties for hill-climbing. The grid
of states (dark circles) is superimposed on a ridge rising from left to right, creating
a sequence of local maxima that are not directly connected to each other. From
each local maximum, all the available options point downhill.

Hill-climbing variations

 Stochastic hill-climbing
o Random selection among the uphill moves.
o The selection probability can vary with the steepness of the uphill move.
 First-choice hill-climbing
o Implements stochastic hill-climbing by generating successors randomly until
one better than the current state is found.
 Random-restart hill-climbing
o Tries to avoid getting stuck in local maxima.

Simulated annealing search

A hill-climbing algorithm that never makes “downhill” moves towards states with
lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local
maximum. In contrast, a purely random walk – that is, moving to a successor chosen uniformly
at random from the set of successors – is complete, but extremely inefficient.

Simulated annealing is an algorithm that combines hill-climbing with a random walk in some way that yields both efficiency and completeness.

Figure 2.26 shows the simulated annealing algorithm. It is quite similar to hill climbing. Instead
of picking the best move, however, it picks a random move. If the move improves the
situation, it is always accepted. Otherwise, the algorithm accepts the move with some
probability less than 1. The probability decreases exponentially with the “badness” of the move
– the amount ΔE by which the evaluation is worsened.
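
A minimal Python sketch of this acceptance rule; successors, value, and schedule are illustrative callbacks, and schedule should eventually return 0 (for example, lambda t: 0 if t > 10000 else 0.95 ** t) so the loop terminates:

import math
import random

def simulated_annealing(initial, successors, value, schedule):
    # Pick a random successor; always accept uphill moves, and accept
    # downhill moves with probability exp(delta_e / T), where T = schedule(t).
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        successor = random.choice(successors(current))
        delta_e = value(successor) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = successor
        t += 1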


Simulated annealing was first used extensively to solve VLSI layout problems in the early
1980s. It has been applied widely to factory scheduling and other large-scale optimization
tasks.

Figure 2.26 The simulated annealing search algorithm, a version of stochastic hill climbing where some downhill moves are allowed.

1.27 CONSTRAINT SATISFACTION PROBLEMS (CSP)

A constraint satisfaction problem (or CSP) is defined by a set of variables, X1, X2, …, Xn, and a set of constraints C1, C2, …, Cm. Each variable Xi has a nonempty domain Di of possible values.

Each constraint Ci involves some subset of variables and specifies the allowable
combinations of values for that subset.

A state of the problem is defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, …}. An assignment that does not violate any constraints is called a consistent or legal assignment. A complete assignment is one in which every variable is mentioned, and a solution to a CSP is a complete assignment that satisfies all the constraints.

Some CSPs also require a solution that maximizes an objective function.


Example of a Constraint Satisfaction Problem

Figure 2.29 shows the map of Australia with each of its states and territories. We are
given the task of coloring each region either red, green, or blue in such a way that no
neighboring regions have the same color. To formulate this as a CSP, we define the variables to
be the regions: WA, NT, Q, NSW, V, SA, and T. The domain of each variable is the set
{red, green, blue}. The constraints require neighboring regions to have distinct colors; for
example, the allowable combinations for WA and NT are the pairs

{(red,green), (red,blue), (green,red), (green,blue), (blue,red), (blue,green)}.

The constraint can also be represented more succinctly as the inequality WA ≠ NT,
provided the constraint satisfaction algorithm has some way to evaluate such expressions.
There are many possible solutions, such as

{ WA = red, NT = green, Q = red, NSW = green, V = red,SA = blue,T = red}.

It is helpful to visualize a CSP as a constraint graph, as shown in Figure 2.29. The nodes of the graph correspond to the variables of the problem and the arcs correspond to constraints.
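
The map-coloring CSP can be written down directly; a minimal Python sketch (the variable names follow the text, while the dictionary layout is an assumption of the sketch):

# Adjacency of the Australian regions from the map-coloring example.
NEIGHBORS = {
    'WA':  ['NT', 'SA'],
    'NT':  ['WA', 'SA', 'Q'],
    'SA':  ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q':   ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'],
    'V':   ['SA', 'NSW'],
    'T':   [],                      # Tasmania touches no other region
}
DOMAIN = ['red', 'green', 'blue']

def consistent(var, color, assignment):
    # Binary constraint: a region may not share its color with any neighbor.
    return all(assignment.get(nbr) != color for nbr in NEIGHBORS[var])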

Figure 2.29 Principal states and territories of Australia. Coloring this map can be
viewed as a constraint satisfaction problem. The goal is to assign colors to each region
so that no neighboring regions have the same color.


Figure 2.30 Mapping Problem

CSP can be viewed as a standard search problem as follows:

 Initial state: the empty assignment {}, in which all variables are unassigned.
 Successor function: a value can be assigned to any unassigned variable, provided that
it does not conflict with previously assigned variables.
 Goal test: the current assignment is complete.
 Path cost: a constant cost (e.g., 1) for every step.

Every solution must be a complete assignment and therefore appears at depth n if there
are n variables.

Depth-first search algorithms are popular for CSPs.

Varieties of CSPs

(i) Discrete variables

Finite domains

The simplest kind of CSP involves variables that are discrete and have finite domains. Map-coloring problems are of this kind. The 8-queens problem can also be viewed as a finite-domain CSP, where the variables Q1, Q2, …, Q8 are the positions of the queens in columns 1, …, 8 and each variable has the domain {1,2,3,4,5,6,7,8}. If the maximum domain size of any variable in a CSP is d, then the number of possible complete assignments is O(d^n) – that is, exponential in the number of variables. Finite-domain CSPs include Boolean CSPs, whose variables can be either true or false.

Infinite domains

Discrete variables can also have infinite domains – for example, the set of integers or
the set of strings. With infinite domains, it is no longer possible to describe constraints by
enumerating all allowed combinations of values. Instead, a constraint language must be used,
such as the algebraic inequality StartJob1 + 5 ≤ StartJob3.

(ii) CSPs with continuous domains

CSPs with continuous domains are very common in the real world. For example, in the
operations research field, the scheduling of experiments on the Hubble Space Telescope requires very
precise timing of observations; the start and finish of each observation and manoeuvre are
continuous-valued variables that must obey a variety of astronomical, precedence, and power
constraints. The best-known category of continuous-domain CSPs is that of linear
programming problems, where the constraints must be linear inequalities forming a convex
region. Linear programming problems can be solved in time polynomial in the number of
variables.

Varieties of constraints

(i) Unary constraints involve a single variable.

Example: SA ≠ green

(ii) Binary constraints involve pairs of variables.

Example: SA ≠ WA

(iii) Higher-order constraints involve 3 or more variables.

Example: cryptarithmetic puzzles.

Figure 2.31 cryptarithmetic puzzles.


Figure 2.32 Cryptarithmetic puzzles-Solution

Figure 2.33 Numerical Solution

Backtracking Search for CSPs

The term backtracking search is used for a depth-first search that chooses values for
one variable at a time and backtracks when a variable has no legal values left to assign. The
algorithm is shown in Figure 2.34.
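
A minimal Python sketch of backtracking search for the map-coloring CSP, reusing the NEIGHBORS, DOMAIN, and consistent definitions from the earlier sketch:

def backtracking_search(assignment=None):
    # Assign one variable at a time; undo the assignment when no value works.
    if assignment is None:
        assignment = {}
    if len(assignment) == len(NEIGHBORS):
        return assignment                      # complete, consistent assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in DOMAIN:
        if consistent(var, color, assignment):
            assignment[var] = color
            result = backtracking_search(assignment)
            if result is not None:
                return result
            del assignment[var]                # backtrack
    return None

print(backtracking_search())
# e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red',
#       'NSW': 'green', 'V': 'red', 'T': 'red'}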


Figure 2.34 A simple backtracking algorithm for constraint satisfaction problems. The algorithm is modeled on recursive depth-first search.

Figure 2.35 Part of the search tree generated by simple backtracking for the map-coloring problem.


Forward checking

One way to make better use of constraints during search is called forward checking.
Whenever a variable X is assigned, the forward-checking process looks at each unassigned
variable Y that is connected to X by a constraint and deletes from Y’s domain any value that
is inconsistent with the value chosen for X. Figure 2.36 shows the progress of a map-coloring
search with forward checking.
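
A minimal Python sketch of the pruning step, again reusing NEIGHBORS from the earlier sketch; the domains argument maps each variable to its current list of candidate colors (an illustrative representation):

def forward_check(var, color, domains, assignment):
    # Prune `color` from the domains of unassigned neighbors of `var`.
    # Returns the pruned domains, or None if some domain is wiped out.
    pruned = {v: list(d) for v, d in domains.items()}
    pruned[var] = [color]
    for nbr in NEIGHBORS[var]:
        if nbr not in assignment and color in pruned[nbr]:
            pruned[nbr].remove(color)
            if not pruned[nbr]:
                return None          # wipeout, as for SA in the caption below
    return pruned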

Figure 2.36 The progress of a map-coloring search with forward checking. WA = red
is assigned first; then forward checking deletes red from the domains of the
neighboring variables NT and SA. After Q = green, green is deleted from the domains
of NT, SA, and NSW. After V = blue, blue is deleted from the domains of NSW and
SA, leaving SA with no legal values.

Constraint propagation

Although forward checking detects many inconsistencies, it does not detect all of them.

Constraint propagation is the general term for propagating the implications of a constraint on one variable onto other variables.

Arc Consistency

Figure 2.37 Arc Consistency


Figure 2.38 Arc Consistency –CSP

k-Consistency

Local Search for CSPs

The Structure of Problems

Independent Subproblems

Figure 2.39 Independent Subproblems


Tree-Structured CSPs

Figure 2.40 Tree-Structured CSPs

1.29 ADVERSARIAL SEARCH

Competitive environments, in which the agents’ goals are in conflict, give rise to adversarial search problems – often known as games.

Games

Mathematical game theory, a branch of economics, views any multiagent environment as a game provided that the impact of each agent on the others is “significant”, regardless of whether the agents are cooperative or competitive. In AI, “games” are deterministic, turn-taking, two-player, zero-sum games of perfect information. This means deterministic, fully observable environments in which there are two agents whose actions must alternate and in which the utility values at the end of the game are always equal and opposite. For example, if one player wins a game of chess (+1), the other player necessarily loses (−1). It is this opposition between the agents’ utility functions that makes the situation adversarial.

Formal Definition of Game

We will consider games with two players, whom we will call MAX and MIN. MAX
moves first, and then they take turns moving until the game is over. At the end of the game,
points are awarded to the winning player and penalties are given to the loser. A game can be
formally defined as a search problem with the following components:

o The initial state, which includes the board position and identifies the player to move.

o A successor function, which returns a list of (move, state) pairs, each indicating a legal
move and the resulting state.


o A terminal test, which describes when the game is over. States where the game has
ended are called terminal states.

o A utility function (also called an objective function or payoff function), which gives a
numeric value for the terminal states. In chess, the outcome is a win, loss, or draw, with
values +1, −1, or 0. The payoffs in backgammon range from +192 to −192.

Game Tree

The initial state and the legal moves for each side define the game tree for the game.
Figure 2.41 shows part of the game tree for tic-tac-toe (noughts and crosses). From the
initial state, MAX has nine possible moves. Play alternates between MAX placing an X and
MIN placing an O until we reach leaf nodes corresponding to terminal states such that one
player has three in a row or all the squares are filled. The number on each leaf node indicates
the utility value of the terminal state from the point of view of MAX; high values are assumed
to be good for MAX and bad for MIN. It is MAX’s job to use the search tree (particularly
the utility of terminal states) to determine the best move.

Figure 2.41 A partial search tree. The top node is the initial state, and MAX moves first, placing an X in an empty square.

Optimal Decisions in Games

In a normal search problem, the optimal solution would be a sequence of moves leading
to a goal state – a terminal state that is a win. In a game, on the other hand, MIN has something
to say about it. MAX therefore must find a contingent strategy, which specifies MAX’s move
in the initial state, then MAX’s moves in the states resulting from every possible response by
MIN, then MAX’s moves in the states resulting from every possible response by MIN to those
moves, and so on. An optimal strategy leads to outcomes at least as good as any other strategy
when one is playing an infallible opponent.

Figure 2.42 Optimal Decisions in Games

Figure 2.43 MAX-VALUE and MIN-VALUE


Figure 2.44 An algorithm for calculating minimax decisions. It returns the action
corresponding to the best possible move, that is, the move that leads to the outcome
with the best utility, under the assumption that the opponent plays to minimize
utility. The functions MAX-VALUE and MIN-VALUE go through the whole game
tree, all the way to the leaves, to determine the backed-up value of a state.
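
A minimal Python sketch of the minimax decision rule described in the caption; the game interface (successors, terminal, utility) is an illustrative assumption, not from the text:

def minimax_decision(state, game):
    # Returns the move for MAX that maximizes the backed-up minimax value,
    # assuming MIN plays optimally. `game` is assumed to expose
    # successors(state) -> [(move, state)], terminal(state), and utility(state).
    def max_value(s):
        if game.terminal(s):
            return game.utility(s)
        return max(min_value(s2) for _, s2 in game.successors(s))

    def min_value(s):
        if game.terminal(s):
            return game.utility(s)
        return min(max_value(s2) for _, s2 in game.successors(s))

    return max(game.successors(state), key=lambda ms: min_value(ms[1]))[0]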
