AI - Characteristics of Environments

ENVIRONMENT -

___

By Group 2 - Soha Patel, Khooshi Tiwari, Aayushi Panchal, Shreya Mishra, Mansi
Savaliya, Rachana Prajapati

What is environment ?

●​ An environment is the surroundings or setting where an AI system works and interacts. It includes everything the AI can sense, respond to, and learn from.
●​ Think of it as: if an AI is playing a game, the game is its environment. If an AI controls a robot in a room, the room is its environment. If an AI is helping you online, the internet and your messages are part of its environment.
●​ The environment gives information (input) to the AI, and the AI takes actions (output) based on that information.
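This sense-and-act loop can be sketched in a few lines of Python. The `LineWorld` environment and the reflex agent below are purely illustrative examples, not a standard API:

```python
# Minimal agent-environment loop: the environment supplies percepts (input),
# the agent returns actions (output). All names here are illustrative.

class LineWorld:
    """A toy environment: the agent walks along positions 0..4 toward a goal."""
    def __init__(self):
        self.position = 0
        self.goal = 4

    def percept(self):
        # What the agent can sense: its own position and the goal.
        return {"position": self.position, "goal": self.goal}

    def apply(self, action):
        # How the environment responds to the agent's action.
        if action == "right":
            self.position = min(self.position + 1, self.goal)

def reflex_agent(percept):
    # The agent maps each percept directly to an action.
    return "right" if percept["position"] < percept["goal"] else "stop"

env = LineWorld()
steps = 0
while True:
    action = reflex_agent(env.percept())  # sense
    if action == "stop":
        break
    env.apply(action)                     # act
    steps += 1

print(steps)  # the agent reached the goal in 4 steps
```

The same percept-action structure underlies every environment type discussed below; only the environment's rules change.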

What are the types of environment ?


1.​ Fully Observable

An environment is fully observable when the agent has complete information about the current
state at all times. This means there is no hidden data, making decision-making more
straightforward.

Example - A game of chess is considered a fully observable environment, as both players can
see the board, the pieces, and every move. An agent can thus evaluate all options and
consequences and make decisions properly.

2.​ Partially Observable

In a partially observable environment, the agent only has limited or incomplete information
about the state. It may need to rely on memory or inference to make decisions.

Example - A game of poker is considered a partially observable environment. A player can see
their own cards but cannot see the cards of opponents. An agent playing poker needs to make
decisions based on memory, probability, and strategy in order to win.
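One way an agent copes with partial observability is by remembering past percepts and inferring probabilities about the hidden state. The card-counting sketch below is an illustrative toy, not a full poker strategy; the class and method names are invented for this example:

```python
# In a partially observable card game, the agent cannot see the whole deck,
# but it can remember the cards revealed so far and update its beliefs.

from fractions import Fraction

class CardMemory:
    def __init__(self, deck_size=52, aces=4):
        self.unseen = deck_size
        self.unseen_aces = aces

    def observe(self, card_is_ace):
        # Each revealed card is a new percept that updates memory.
        self.unseen -= 1
        if card_is_ace:
            self.unseen_aces -= 1

    def prob_next_is_ace(self):
        # A belief about the hidden state, inferred from memory.
        return Fraction(self.unseen_aces, self.unseen)

memory = CardMemory()
for card_is_ace in [False, True, False]:  # three cards revealed, one was an ace
    memory.observe(card_is_ace)

print(memory.prob_next_is_ace())  # 3/49
```

The key point is that the agent's decisions depend on its accumulated memory, not just on the current percept.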

3.​ Competitive

A competitive environment involves multiple agents with conflicting goals. Each agent tries to
maximize its own success while minimizing the success of others.

Example - Trading stocks is considered a competitive environment. Each trader tries to buy low
and sell high alongside other traders. A trading agent competes for the same profits, and one
agent's gain is another's loss.

4.​ Collaborative

A collaborative environment requires multiple agents to work together to achieve a common
goal. Success depends on coordination and teamwork between the agents.

Example - Rescue missions during natural disasters can be considered a collaborative
environment. People deployed must communicate with each other to save survivors. An agent
working in such a mission must collaborate with other agents in order to improve efficiency and
succeed in the mission.

5.​ Deterministic

In a deterministic environment, every action has a predictable outcome with no randomness. If
the same action is taken in the same state, the result will always be identical.

Example - Solving a Sudoku puzzle can be considered deterministic. A player solves it based on
logic. A solving agent can precisely determine the outcome of each move from the current state.

6.​ Stochastic (Non-deterministic)

A stochastic environment involves randomness, meaning the same action can lead to different
outcomes. This adds uncertainty and requires probabilistic decision-making.

Example - Weather forecasting can be considered stochastic. Even when AI models are used,
small variations in inputs can change the outcome drastically due to the chaotic nature of the
atmosphere.
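Stochasticity is easy to demonstrate in code: the same action from the same state can produce different results. In the sketch below, the 80% success probability is an invented assumption for illustration:

```python
# In a stochastic environment the same action can yield different outcomes.
# Here a "move forward" action succeeds only 80% of the time; otherwise the
# agent slips and stays in place.

import random

def move_forward(position, rng):
    # With probability 0.8 the agent advances; otherwise it stays put.
    return position + 1 if rng.random() < 0.8 else position

rng = random.Random(0)  # fixed seed so the run is reproducible
outcomes = {move_forward(3, rng) for _ in range(100)}
print(sorted(outcomes))  # both results occur: [3, 4]
```

Because the outcome is uncertain, an agent in such an environment must plan over probabilities rather than assume a single guaranteed result.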

7.​ Single-Agent

A single-agent system involves only one decision-making entity interacting with the
environment. It does not need to compete or cooperate with others.

Example - A robot vacuum can be considered to operate in a single-agent environment. It
navigates and cleans a space without needing to consider other agents. Its decisions affect
only itself and its environment.

8.​ Multi-Agent

A multi-agent environment has multiple entities interacting, which can be competitive,
collaborative, or both. The behavior of one agent can affect the decisions of others.

Example - In online multiplayer games such as Warzone, multiple agents control characters
that interact with each other: they may work together (collaborative) and even compete with
other players (competitive). Each agent's decisions can affect all other agents and the
environment.

9.​ Static

A static environment remains unchanged unless acted upon by the agent. It does not evolve or
change over time on its own.

Example - A crossword puzzle is considered a static environment. The clues and the grid
remain the same until an agent solves it by filling in words. The environment remains constant
unless the puzzle is acted upon by an agent.

10. Discrete

A discrete environment has a limited number of states and actions, often in countable steps.
Decisions are made at specific points rather than continuously.

Example - A tic-tac-toe game is considered discrete. Players take turns marking their X or O,
and the set of possible moves is finite.

11.​ Dynamic

A dynamic environment changes over time, even if the agent does nothing. The agent must
adapt to these changes to succeed.

Example - Driving through traffic is considered a dynamic environment. The environment,
including traffic lights, other vehicles, and pedestrians, changes constantly, and the agent has
to adapt in real time.

12.​ Continuous

A continuous environment has infinite possible states and actions. Movements and changes
happen smoothly over time.

Example - A robot arm assembling parts works in a continuous environment. The possibilities
for adjusting its position, rotation, force, and speed are effectively infinite.

13.​ Episodic

An episodic environment consists of independent episodes where past actions do not affect
future situations. The agent’s experience resets after each episode.

Example - Image classification task by an AI is episodic. Classifying one image has no effect on
classification of any other image.

14.​ Sequential

In a sequential environment, current actions impact future states and decisions. Past choices
influence long-term success.

Example - Playing chess is considered sequential. Each move influences future moves.

15.​ Known

A known environment means the agent has full knowledge of the rules, transition models, and
consequences of its actions. It does not need to explore to understand how things work.

Example - Checkers is considered a known environment. A predefined set of rules is already
known to the agent when playing; nothing is hidden.

16.​ Unknown

In an unknown environment, the agent does not initially know the rules or effects of actions and
must learn through exploration. This requires trial and error or reinforcement learning.

Example - A player playing Minecraft for the first time must explore before figuring out the rules
and the results of specific actions such as crafting. Over time, an agent playing can figure out
the basic goals and actions through trial and error.
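Trial-and-error learning in an unknown environment can be sketched with a tiny bandit-style loop. The two-button setup, the 20% exploration rate, and the reward function are all invented for this illustration; real reinforcement learning uses the same idea at much larger scale:

```python
# In an unknown environment the agent does not know the effects of its
# actions in advance and must learn them by trying. Here the agent learns
# which of two unlabeled buttons gives a reward.

import random

def hidden_reward(action):
    # Unknown to the agent: only button 1 pays off.
    return 1.0 if action == 1 else 0.0

rng = random.Random(42)
estimates = [0.0, 0.0]   # the agent's learned value of each action
counts = [0, 0]

for _ in range(200):
    if rng.random() < 0.2:                      # explore: try a random button
        action = rng.randrange(2)
    else:                                        # exploit: use the best estimate
        action = estimates.index(max(estimates))
    reward = hidden_reward(action)
    counts[action] += 1
    # Incremental running average of the rewards observed for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates.index(max(estimates)))  # the agent discovered action 1 is best
```

Exploration is essential here: without the random trials, the agent would never discover that the second button pays off.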

Comparisons between the environments -

1)​ Fully observable vs Partially observable -

Fully observable: In a fully observable environment, the AI has access to all the information
about the environment at any given time. It can see everything needed to make a decision, and
there is no hidden information.
Example: In chess, the AI can see the entire board and all the pieces. It knows the exact state
of the game, allowing it to plan moves accurately.

Partially observable: In a partially observable environment, the AI can only see part of the
environment. Some information is hidden, so the AI does not know the full state and must
guess or estimate based on what it can see.
Example: In poker, the AI can see its own cards and the common cards on the table, but it
cannot see the cards held by other players. It has to make decisions based on limited
information and predictions.

2)​ Competitive vs Collaborative -

Competitive: In a competitive environment, agents work against each other to achieve their
own goals. One agent's success often comes at the expense of another.
Example: Chess is competitive because each player tries to checkmate their opponent, and
one player's gain means the other's loss.

Collaborative: In a collaborative environment, agents work together towards a shared goal.
Success depends on teamwork and coordination rather than individual competition.
Example: A football team plays in a collaborative environment because teammates must work
together, passing and strategizing, to win the game.

3)​ Deterministic vs Stochastic -

Deterministic: In a deterministic environment, every action has a predictable outcome, and
there is no randomness involved. When the AI takes an action, the result is always certain and
the same every time that action is performed in the same situation.
Example: Solving a math problem is deterministic because 2 + 2 will always equal 4.

Stochastic (Non-deterministic): In a stochastic environment, the outcome of an action is
uncertain and involves randomness. The same action can lead to different results each time,
depending on chance.
Example: Weather forecasting is stochastic because, even with the same data, the weather
can turn out differently due to unpredictable factors.

4)​ Single-agent vs Multi-agent -

Single-agent: In a single-agent environment, only one agent makes decisions and interacts
with the environment. Other entities, if present, do not make independent decisions.
Example: Solving a Sudoku puzzle is single-agent since only one player is making choices,
and the puzzle does not respond independently.

Multi-agent: In a multi-agent environment, multiple agents interact, influencing each other's
decisions. They can be competitive, cooperative, or a mix of both.
Example: A multiplayer online game involves multiple players whose actions affect each other.

5)​ Static vs Dynamic -

Static: In a static environment, the world does not change while the agent is making a
decision. The environment remains constant unless the agent itself takes action.
Example: A chessboard is static because the game state does not change unless a player
makes a move.

Dynamic: In a dynamic environment, the world continues to change over time, even if the
agent does nothing. The agent must account for these changes when making decisions.
Example: Driving a car is dynamic because the traffic conditions, pedestrians, and road
signals change constantly.

6)​ Discrete vs Continuous -

Discrete: In a discrete environment, the agent has a finite set of states and actions, and
changes occur in distinct steps rather than smoothly.
Example: A turn-based board game like Monopoly, where players take turns moving a specific
number of spaces.

Continuous: In a continuous environment, the agent can take actions and experience states
within a continuous range, leading to smooth and fluid transitions.
Example: A robotic arm moving in space, where its position and movement involve infinite
possible variations.

7)​ Episodic vs Sequential -

Episodic: In an episodic environment, each decision or action is independent of past and
future actions. The outcome of one action does not influence future states.
Example: Image classification is episodic because identifying one image does not affect the
next classification.

Sequential: In a sequential environment, actions influence future states, meaning past
decisions affect what happens next. The agent must consider long-term consequences.
Example: Playing chess is sequential because every move changes the game state and
affects future possibilities.

8)​ Known vs Unknown -

Known: In a known environment, the agent has full knowledge of the rules, states, and
outcomes of its actions. There is no need for exploration or learning.
Example: Solving a maze with a provided map is known because the agent already has all the
necessary information.

Unknown: In an unknown environment, the agent does not have complete knowledge and
must learn about the environment through trial and experience.
Example: A new player exploring an unfamiliar video game must learn the mechanics by
interacting with the world.

How the characteristics of the environment affect the AI agent design -

The characteristics of the environment determine how an AI agent should be designed.
●​ Fully vs. Partially Observable affects memory needs.
●​ Deterministic vs. Stochastic influences planning and probability handling.
●​ Competitive vs. Collaborative decides the strategy (adversarial or cooperative).
●​ Single-agent vs. Multi-agent affects decision complexity and communication.
●​ Static vs. Dynamic impacts the need for real-time updates.
●​ Discrete vs. Continuous influences state space representation.
●​ Episodic vs. Sequential affects the dependency on past actions.
●​ Known vs. Unknown decides whether the agent learns or follows rules.
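The bullet points above can be sketched as a simple lookup from environment characteristics to agent-design requirements. The property names, the taxi profile, and the suggested features below are illustrative choices, not a formal taxonomy:

```python
# Map environment characteristics to the agent-design features they demand.
# Each flag corresponds to one of the bullet points above.

def design_requirements(env):
    needs = []
    if not env["fully_observable"]:
        needs.append("internal memory / belief state")
    if env["stochastic"]:
        needs.append("probabilistic planning")
    if env["multi_agent"]:
        needs.append("modeling of other agents")
    if env["dynamic"]:
        needs.append("real-time decision making")
    if env["continuous"]:
        needs.append("continuous state representation")
    if env["sequential"]:
        needs.append("long-term lookahead")
    if not env["known"]:
        needs.append("learning from exploration")
    return needs

# An illustrative profile for taxi driving, matching the table below.
taxi = {"fully_observable": False, "stochastic": True, "multi_agent": True,
        "dynamic": True, "continuous": True, "sequential": True, "known": True}
print(design_requirements(taxi))
```

Running this for the taxi profile lists every requirement except "learning from exploration", since the rules of driving are known even though the environment is hard in every other dimension.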

Some examples -
1.​ A crossword puzzle is a game where you can see everything from the start
(fully observable). You play it alone (single-player), and there’s no luck
involved—every word fits in a specific way (deterministic). You solve it step by
step, adding one word at a time (sequential). The puzzle doesn’t change unless
you write in it (static), and everything is made up of clear, separate words and
spaces (discrete).
2.​ Chess with a clock is a game where both players can see the board at all times
(fully observable). It’s played by two people (multi-player), and there’s no
luck—moves follow strict rules (deterministic). Players take turns making moves
(sequential). The board stays the same except when a player moves a piece or
the clock counts down (semi-static). The game has clear, separate pieces and
squares with set moves (discrete).

Classification of task environments and their characteristics -

Task environment | Observable | Agents | Deterministic vs Stochastic | Episodic vs Sequential | Static vs Dynamic | Discrete vs Continuous
Crossword puzzle | Fully | Single | Deterministic | Sequential | Static | Discrete
Chess with clock | Fully | Multi | Deterministic | Sequential | Semi | Discrete
Poker | Partially | Multi | Stochastic | Sequential | Static | Discrete
Backgammon | Fully | Multi | Stochastic | Sequential | Static | Discrete
Taxi driving | Partially | Multi | Stochastic | Sequential | Dynamic | Continuous
Medical diagnosis | Partially | Single | Stochastic | Sequential | Dynamic | Continuous
Image analysis | Fully | Single | Deterministic | Episodic | Semi | Continuous
Part-picking robot | Partially | Single | Stochastic | Episodic | Dynamic | Continuous
Refinery controller | Partially | Single | Stochastic | Sequential | Dynamic | Continuous
Interactive English tutor | Partially | Multi | Stochastic | Sequential | Dynamic | Discrete
