
Artificial Intelligence

Name:

Ghulam Mujtaba
Roll No: 3043   Class: BSCS 6th E

Assignment: Mid-Term

Submitted To:

Dr. Ghulam Ali

What is artificial intelligence?


The field of AI refers to developing computer systems able to perform "intelligent" tasks: visual
perception, understanding language, reasoning, and decision making.
Machine learning is one way of building such systems: provide the computer with examples of
what it should do, and let it figure out (learn) how to do it.
What is an agent?
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
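
As a rough sketch (the names here are illustrative, not from the lecture material), an agent can be modelled in Python as a mapping from percepts to actions, driven by a sense-decide-act loop:

class Agent:
    def program(self, percept):
        """Map the current percept to an action."""
        raise NotImplementedError

def run(agent, environment, steps=10):
    # Basic sense-decide-act loop; assumes the environment object
    # exposes percept() and execute() (a hypothetical interface).
    for _ in range(steps):
        percept = environment.percept()   # sensors
        action = agent.program(percept)   # agent program decides
        environment.execute(action)       # actuators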
What is Environment?
The environment in AI is everything that surrounds the agent, but not the agent itself.
The environment is what the agent perceives in order to make sense of its surroundings, and
what it ultimately acts upon in order to solve a problem effectively.
What are Actuators?
Actuators are the components of a machine that convert energy into motion.
Actuators are only responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc. Effectors: effectors are the devices that actually affect the environment.
Human agent:
Eyes, ears, and other organs for sensors;
Hands, legs, mouth, and other body parts for actuators.
Robotic agent:
Cameras and infrared range finders for sensors
Various motors for actuators.

Rationality
An agent's rationality depends on:
 The performance measure that defines success
 The agent's prior knowledge of the environment
 The actions the agent can perform
 The agent's percept sequence to date
Note that rationality is different from omniscience: a rational agent maximizes expected
performance given what it knows, not actual performance.

Rational Agent
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
Autonomy in Agents
The autonomy of an agent is the extent to which its behavior is determined by its own
experience, rather than by the knowledge built in by its designer.
Example: a baby learning to crawl

PEAS: (Performance measure, Environment, Actuators, Sensors)


The task of designing an automated taxi driver:
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.
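
A hypothetical way to record a PEAS description in code (the class and field names are our own, purely illustrative):

from dataclasses import dataclass

@dataclass
class PEAS:
    # PEAS description of a task environment (illustrative structure).
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)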

Environment types
i. Fully observable (vs. partially observable)
ii. Deterministic (vs. stochastic)
iii. Episodic (vs. sequential)
iv. Static (vs. dynamic)
v. Discrete (vs. continuous)
vi. Single agent (vs. multiagent)

I. Fully observable (vs. partially observable)


An environment is called fully observable when the information received by the agent at any
point in time is sufficient to make the optimal decision. An environment is called partially
observable when the agent needs memory in order to make the best possible decisions.
Examples: chess, a dice game

II. Deterministic (vs. stochastic)


An environment is called deterministic when the agent's actions uniquely determine the outcome.
An environment is called stochastic when the agent's actions do not uniquely determine the outcome.
Examples: chess (deterministic), a dice-based snake game (stochastic)

III. Episodic (vs. sequential)


In an episodic environment there is a series of one-shot actions, and only the current percept is
required to choose an action. In a sequential environment, however, the agent requires memory
of past actions to determine the next best action.
Examples: an object-picking robot (episodic), chess (sequential)

IV. Static (vs. dynamic)


If the environment can change while the agent is deliberating, we say the environment is
dynamic for that agent; otherwise it is static.
Examples: a crossword puzzle (static), an automated taxi (dynamic)

V. Discrete (vs. continuous)


Discrete environments are those in which a finite set of possibilities determines the outcome of
the task. Continuous environments rely on unknown and rapidly changing data sources.
Examples: chess (discrete), an automated taxi (continuous)

VI. Single agent (vs. multiagent)


If only one agent is involved in an environment, operating by itself, then it is called a
single-agent environment. If multiple agents are operating in the environment, it is called a
multi-agent environment.
Examples: a crossword puzzle (single agent), chess (multi-agent)

Agent types
1. Simple reflex agents
2. Reflex agents with state/model
3. Goal-based agents
4. Utility-based agents

1) Simple reflex agents


Works only on the current percept and ignores the history of previous states.
Works on: condition-action rules.
Limitations:
Very limited intelligence; no knowledge of the non-perceived parts of the state.
It can get stuck in infinite loops.
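
A minimal sketch of a simple reflex agent, using the textbook two-location vacuum world as an assumed example; the agent looks only at the current percept:

def reflex_vacuum_agent(percept):
    # Condition-action rules over the current percept only.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck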
2) Reflex agents with state/model
 Works by finding the rule whose condition matches the current situation.
 Can work in partially observable environments and track the situation.
 The agent keeps track of an internal state which is adjusted by each percept and depends
on the percept history.
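
A sketch of a model-based reflex agent; the rules and the update_state function are assumptions to be supplied by the designer:

class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.state = {}                  # internal model of the world
        self.rules = rules               # list of (condition, action) pairs
        self.update_state = update_state

    def program(self, percept):
        # Adjust the internal state using the new percept, so the agent
        # can cope with partial observability.
        self.state = self.update_state(self.state, percept)
        # Find the rule whose condition matches the current state.
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "NoOp"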

3) Goal-based agents
 Focuses only on reaching the goal that has been set.
 The agent takes decisions based on how far it currently is from the goal state.
 Every action is taken to minimize the distance to the goal state.
 A more flexible kind of agent.
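
A minimal sketch of goal-based action selection; result() and distance() are assumed problem-specific functions, not a fixed API:

def goal_based_action(state, actions, result, distance, goal):
    # Pick the action whose predicted result is closest to the goal.
    return min(actions, key=lambda a: distance(result(state, a), goal))

# Example on a grid, moving toward (0, 0):
moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
result = lambda s, a: (s[0] + moves[a][0], s[1] + moves[a][1])
distance = lambda s, g: abs(s[0] - g[0]) + abs(s[1] - g[1])
print(goal_based_action((2, 3), moves, result, distance, (0, 0)))  # -> down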

4) Utility-based agents
 Agents are more concerned with the utility of each state.
 Act based not only on goals but also on the best way to achieve them.
 Useful when there are multiple possible alternatives and the agent has to choose
among them in order to perform the best action.
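
A minimal sketch of utility-based action selection, assuming a designer-supplied utility function over states:

def utility_based_action(state, actions, result, utility):
    # Among the alternatives, choose the action leading to the state
    # with the highest utility (how good that state is, not merely
    # whether it is a goal).
    return max(actions, key=lambda a: utility(result(state, a)))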

Uninformed Search:
While searching you have no clue whether one non-goal state is better than any other. Your
search is blind.

Various blind strategies:


 Breadth-first search
 Uniform-cost search
 Depth-first search
 Iterative deepening search

Breadth-first search
Expand shallowest unexpanded node
Implementation:
First-In-First-Out (FIFO) queue, i.e., new successors go at the end of the queue.
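
A minimal Python sketch of breadth-first search over an adjacency-list graph (this graph format is an assumption, not from the slides):

from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])   # successors go at the end
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(bfs(graph, "S", "G"))  # -> ['S', 'A', 'G']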

Properties of breadth-first search


 Complete? Yes, it always reaches the goal (if the branching factor b is finite).
 Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
(this is the number of nodes we generate)
 Space? O(b^(d+1)) (keeps every node in memory, either in the fringe or on a path to the fringe).
 Optimal? Yes (if we can guarantee that deeper solutions are never better, e.g. step cost = 1).
 Space is the bigger problem (more than time).

Uniform-cost search
Breadth-first search is only optimal if the path cost is non-decreasing
with depth (e.g. constant step costs).
Uniform-cost search: expand the node with
the smallest path cost g(n).

Properties:
 Implementation: fringe = queue ordered by path cost. Equivalent to breadth-first if all
step costs are equal.
 Complete? Yes, if step cost ≥ ε (otherwise it can get stuck in infinite loops).
 Time? # of nodes with path cost ≤ the cost of the optimal solution.
 Space? # of nodes on paths with path cost ≤ the cost of the optimal solution.
 Optimal? Yes, for any step costs (nodes are expanded in order of increasing path cost).
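
A minimal sketch of uniform-cost search with a priority queue ordered by g(n); here graph[node] maps each successor to its step cost (an assumed format):

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]     # (g, node, path), cheapest first
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step_cost in graph.get(node, {}).items():
            heapq.heappush(frontier, (g + step_cost, succ, path + [succ]))
    return None

graph = {"S": {"A": 1, "B": 5}, "A": {"G": 4}, "B": {"G": 1}}
print(uniform_cost_search(graph, "S", "G"))  # -> (5, ['S', 'A', 'G'])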

Depth-first search
Expand deepest unexpanded node
Implementation:
Last-In-First-Out (LIFO) queue (i.e., a stack): put successors at the front.
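
A minimal sketch of depth-first search with an explicit stack, modified to avoid repeated states along the current path:

def dfs(graph, start, goal):
    frontier = [[start]]                 # LIFO stack of paths
    while frontier:
        path = frontier.pop()            # deepest node first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in path:         # avoid loops along this path
                frontier.append(path + [succ])
    return None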

Properties of depth-first search


Complete? No: it fails in infinite-depth spaces.
It can be modified to avoid repeated states along a path.
Time? O(b^m) with m = maximum depth; terrible if m is much larger than d,
but if solutions are dense, it may be much faster than breadth-first.
Space? O(bm), i.e., linear space! (We only need to remember a single path plus the expanded
but unexplored sibling nodes.)
Optimal? No (it may find a non-optimal goal first).

Iterative deepening search


To avoid the infinite-depth problem of DFS, we can decide to only search until depth L,
i.e. we don't expand nodes beyond depth L.
 Depth-Limited Search
What if the solution is deeper than L? Increase L iteratively.
 Iterative Deepening Search
As we shall see, this inherits the memory advantage of depth-first search.
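
A minimal sketch of depth-limited search and the iterative-deepening loop around it (max_depth is an arbitrary safety cap):

def depth_limited(graph, node, goal, limit, path=None):
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:                       # don't expand beyond depth L
        return None
    for succ in graph.get(node, []):
        if succ not in path:
            found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):   # increase L iteratively
        result = depth_limited(graph, start, goal, limit)
        if result:
            return result
    return None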

Bidirectional Search
Idea:
 Simultaneously search forward from S and backwards from G.
 Stop when the two searches "meet in the middle".
 Need to keep track of the intersection of the two open sets of nodes.
What does searching backwards from G mean?
 We need a way to specify the predecessors of G, which can be difficult:
e.g., what are the predecessors of checkmate in chess?
 What if there are multiple goal states?
 What if there is only a goal test and no explicit list of goal states?
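
A minimal sketch of bidirectional breadth-first search on an undirected graph, where successors and predecessors coincide (an assumption that sidesteps the predecessor problem raised above):

from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    fwd, bwd = {start: [start]}, {goal: [goal]}   # node -> path so far
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # Expand one node in each direction per round.
        for frontier, queue, other in ((fwd, qf, bwd), (bwd, qb, fwd)):
            node = queue.popleft()
            for succ in graph.get(node, []):
                if succ in other:                 # the two searches meet
                    if frontier is fwd:
                        return frontier[node] + [succ] + other[succ][-2::-1]
                    return other[succ] + [node] + frontier[node][-2::-1]
                if succ not in frontier:
                    frontier[succ] = frontier[node] + [succ]
                    queue.append(succ)
    return None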
