Environments In AI

INTRODUCTION

• An environment in artificial intelligence is the surrounding of the agent.
• It is everything in the world that surrounds the agent, but it is not part of the agent itself.
• An environment can also be described as the situation in which an agent is present.
• The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
• The agent takes input from the environment through sensors and delivers output to the environment through actuators.
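This sense-act loop can be sketched in a few lines of Python. The two-square vacuum world below is a minimal toy illustration; the class and function names are invented for this sketch and do not come from any particular library.

# A minimal sketch of the agent-environment loop (hypothetical classes,
# not from any particular library).

class VacuumEnvironment:
    """A toy two-square world the agent can sense and act upon."""

    def __init__(self):
        self.state = {"A": "dirty", "B": "dirty", "agent_at": "A"}

    def percept(self):
        # What the agent's sensors report: its location and that square's status.
        loc = self.state["agent_at"]
        return loc, self.state[loc]

    def execute(self, action):
        # Apply the agent's actuator output to the world.
        if action == "suck":
            self.state[self.state["agent_at"]] = "clean"
        elif action in ("A", "B"):
            self.state["agent_at"] = action

def reflex_agent(percept):
    """Map a percept directly to an action."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "B" if location == "A" else "A"

env = VacuumEnvironment()
for _ in range(4):
    action = reflex_agent(env.percept())  # sense via sensors
    env.execute(action)                   # act via actuators
print(env.state)  # both squares end up clean

The loop alternates sensing and acting, which is the only interface the agent has to its environment; everything outside that interface is "the environment" in the sense defined above.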
Environment Types

• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Dynamic vs Static
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
1. Fully Observable vs Partially Observable
• When an agent's sensors can access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to deal with, as the agent has no need to keep track of the history of its surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Chess – the board is fully observable, and so are the opponent's moves.
• Driving – the environment is partially observable because what's around the corner is not known.
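A minimal sketch of the distinction, assuming a toy world represented as a dictionary (the keys and function names here are hypothetical): a fully observable percept returns the whole state, while a partially observable one returns only a filtered view.

# Hypothetical illustration: the same world state, two levels of access.

world = {
    "board": "full chess position",  # everything relevant to the game
    "around_corner": "pedestrian",   # relevant, but hidden from a driver
    "visible_road": "clear",
}

def fully_observable_percept(state):
    # Chess-like: the sensor returns the complete state.
    return dict(state)

def partially_observable_percept(state):
    # Driving-like: the sensor sees only part of the state, so the
    # agent may need to maintain a history or belief of its own.
    return {"visible_road": state["visible_road"]}

print(fully_observable_percept(world))
print(partially_observable_percept(world))  # 'around_corner' is hidden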
2. Deterministic vs Stochastic
• When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
• Chess – there are only a limited number of possible moves for a piece in the current state, and each move leads to a definite next state.
• Self-driving cars – the outcomes of a car's actions are not unique; the same action can have different effects at different times.
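A small sketch of the two transition styles, using toy integer states (the functions are illustrative, not a standard API):

import random

# Hypothetical transition functions for the same action in two environments.

def deterministic_step(state, action):
    # Chess-like: the current state and action fix the next state exactly.
    return state + action

def stochastic_step(state, action):
    # Driving-like: the same action can lead to different outcomes
    # (e.g., a wheel slips), so the next state is drawn from a distribution.
    noise = random.choice([-1, 0, 1])
    return state + action + noise

print(deterministic_step(10, 2))  # always 12
print(stochastic_step(10, 2))     # 11, 12, or 13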
3. Competitive vs Collaborative

• An agent is said to be in a competitive environment when it competes against another agent to optimize the output.
• The game of chess is competitive, as the agents compete with each other to win the game, which is the output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired output.
• When multiple self-driving cars share the road, they cooperate with each other to avoid collisions and reach their destinations, which is the desired output.
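One common way to model this distinction is through the reward structure: competitive settings are often zero-sum, while collaborating agents share a common objective. The functions below are a hypothetical simplification of that idea.

# Hypothetical reward structures (a simplification for illustration).

def competitive_rewards(outcome):
    # Chess-like, zero-sum: one agent's gain is the other's loss.
    return {"white": outcome, "black": -outcome}

def collaborative_rewards(outcome):
    # Cooperating cars: all agents share the same objective value.
    return {"car_1": outcome, "car_2": outcome}

print(competitive_rewards(+1))    # {'white': 1, 'black': -1}
print(collaborative_rewards(+1))  # {'car_1': 1, 'car_2': 1}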
4. Single-Agent vs Multi-Agent
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of a single-agent environment.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent, as it involves 11 players on each team.
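A rough sketch of how the environment's update step differs (toy dynamics and invented names): a single-agent tick consults one policy, while a multi-agent tick consults one policy per agent, 22 in the football example.

# Hypothetical sketch: the environment's step logic differs mainly in
# how many decision-makers it must consult each tick.

def apply_actions(state, actions):
    # Toy dynamics: the state just accumulates everyone's moves.
    return state + sum(actions)

def maze_walker(state):
    # A lone agent's policy; always moves forward in this toy example.
    return 1

def step_single_agent(state, agent):
    return apply_actions(state, [agent(state)])

def step_multi_agent(state, agents):
    return apply_actions(state, [a(state) for a in agents])

players = [maze_walker] * 22              # two football teams of 11
print(step_single_agent(0, maze_walker))  # 1
print(step_multi_agent(0, players))       # 22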
5. Dynamic vs Static
• A dynamic environment is one that changes over time, often independently
of the agent’s actions.
• A static environment does not change over time, or it changes only in
response to the agent’s actions.
• A roller coaster ride is dynamic as it is set in motion and the environment
keeps changing every instant.
• An empty house is static as there’s no change in the surroundings when an
agent enters.
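A hedged sketch of the difference, using wall-clock time as the driver of change (both classes are invented for illustration): the dynamic environment changes while the agent is still deliberating, while the static one waits for the agent to act.

import time

# Hypothetical sketch: a dynamic environment changes on its own;
# a static one returns the same observation no matter how long we wait.

class StaticHouse:
    def __init__(self):
        self.state = "unchanged"

    def observe(self):
        return self.state  # same answer regardless of elapsed time

class DynamicRollerCoaster:
    def __init__(self):
        self.start = time.monotonic()

    def observe(self):
        # Position advances with wall-clock time, independent of the agent.
        return f"position at t={time.monotonic() - self.start:.2f}s"

house = StaticHouse()
coaster = DynamicRollerCoaster()
print(house.observe(), "|", coaster.observe())
time.sleep(0.1)
print(house.observe(), "|", coaster.observe())  # coaster moved, house did not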
6. Discrete vs Continuous

• If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete, as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.
• An environment in which the possible actions cannot be counted, i.e. is not discrete, is said to be continuous.
• Self-driving cars operate in a continuous environment: actions such as steering and accelerating take values from a continuous range and cannot be enumerated.
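A small sketch of the two kinds of action space (the move list and angle range below are made up for illustration): a discrete space can be enumerated outright, while a continuous one can only be sampled from a range.

import random

# Hypothetical action spaces: one enumerable, one only sampleable.

CHESS_OPENING_MOVES = ["e4", "d4", "Nf3", "c4"]  # finite, listable

def sample_chess_move():
    return random.choice(CHESS_OPENING_MOVES)  # pick from a finite set

def sample_steering_angle(low=-30.0, high=30.0):
    # A self-driving car's steering angle is a real number in a range;
    # there is no finite list of all possible values.
    return random.uniform(low, high)

print(sample_chess_move())
print(f"{sample_steering_angle():.3f} degrees")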
7. Episodic vs Sequential

• In an episodic task environment, the agent's experience is divided into atomic incidents or episodes. There is no dependency between the current and previous incidents: in each incident, the agent receives input from the environment and then performs the corresponding action.
• Example: consider a pick-and-place robot used to detect defective parts on a conveyor belt. Each time, the robot (agent) makes a decision about the current part only; there is no dependency between the current and previous decisions.
• In a sequential environment, previous decisions can affect all future decisions. The agent's next action depends on what actions it has taken previously and what actions it expects to take in the future.
• Checkers – the previous move can affect all the following moves.
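A minimal sketch, assuming toy representations of both tasks (all names are hypothetical): the episodic agent decides from the current percept alone, while the sequential agent carries its history forward because past decisions constrain future ones.

# Hypothetical sketch: episodic decisions ignore history;
# sequential decisions depend on it.

def episodic_inspect(part):
    # Pick-and-place style: each part is judged on its own.
    return "reject" if part["defective"] else "accept"

class SequentialCheckersAgent:
    def __init__(self):
        self.moves_so_far = []  # history matters for future moves

    def choose_move(self, legal_moves):
        # Toy policy: avoid repeating an earlier move; real play would
        # plan ahead because each move reshapes the whole game.
        for move in legal_moves:
            if move not in self.moves_so_far:
                self.moves_so_far.append(move)
                return move
        return legal_moves[0]

print(episodic_inspect({"defective": True}))  # decided in isolation
agent = SequentialCheckersAgent()
print(agent.choose_move(["a3-b4", "c3-d4"]))
print(agent.choose_move(["a3-b4", "c3-d4"]))  # history changes the choice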
8. Known vs Unknown

• In a known environment, the outcomes of all probable actions are given.
• In an unknown environment, by contrast, the agent has to gain knowledge about how the environment works before it can make good decisions.
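A hedged sketch of the difference, with an invented lookup table standing in for the "given" outcomes: a known environment can be planned over directly, while an unknown one must be learned by experimentation.

import random

# Hypothetical sketch: in a known environment the outcome model is given
# up front; in an unknown one the agent estimates it from experience.

KNOWN_OUTCOMES = {("s0", "left"): "s1", ("s0", "right"): "s2"}

def plan_with_known_model(state, action):
    return KNOWN_OUTCOMES[(state, action)]  # just look it up

def learn_unknown_model(try_action, trials=100):
    # The agent experiments and tallies what it observes.
    counts = {}
    for _ in range(trials):
        outcome = try_action()
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts

print(plan_with_known_model("s0", "left"))  # 's1'
print(learn_unknown_model(lambda: random.choice(["s1", "s2"])))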
