AI - Characteristics of Environments (1)
___
By Group 2 - Soha Patel, Khooshi Tiwari, Aayushi Panchal, Shreya Mishra, Mansi Savaliya, Rachana Prajapati
What is an environment?
An environment is everything an agent perceives through its sensors and acts upon through its actuators. Environments can be classified along the characteristics below.
1. Fully Observable
An environment is fully observable when the agent has complete information about the current
state at all times. This means there is no hidden data, making decision-making more
straightforward.
Example - A game of chess is considered a fully observable environment, as both players can
see the board, the pieces, and every move. An agent can therefore evaluate all options and
consequences and make decisions properly.
2. Partially Observable
In a partially observable environment, the agent only has limited or incomplete information
about the state. It may need to rely on memory or inference to make decisions.
Example - A game of poker is considered a partially observable environment. A player can see
their own cards but cannot see the cards of opponents. An agent playing poker must make
decisions based on memory, probability, and strategy in order to win.
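The contrast between the two can be sketched in a few lines of code. This is an illustrative sketch, not a real game engine: the state dictionary, `observe_fully`, and `observe_partially` are all hypothetical names chosen for this example.

```python
# Hypothetical sketch: one underlying state, two levels of observability.
FULL_STATE = {
    "my_cards": ["A♠", "K♠"],
    "opponent_cards": ["7♦", "2♣"],
    "community_cards": ["K♥", "9♠", "4♦"],
}

def observe_fully(state):
    """Fully observable (chess-like): the percept is the complete state."""
    return dict(state)

def observe_partially(state):
    """Partially observable (poker-like): opponent cards are hidden."""
    percept = dict(state)
    percept["opponent_cards"] = ["??"] * len(state["opponent_cards"])
    return percept

print(observe_fully(FULL_STATE)["opponent_cards"])      # the agent sees them
print(observe_partially(FULL_STATE)["opponent_cards"])  # the agent sees ["??", "??"]
```

A partially observable agent must therefore reason about what the hidden `"??"` entries might be, which is where memory and probability enter.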
3. Competitive
A competitive environment involves multiple agents with conflicting goals. Each agent tries to
maximize its own success while minimizing the success of others.
Example - Stock trading is considered a competitive environment. A trader must try to buy low
and sell high alongside other traders. Agents compete for the same profits, and one agent's
gain is often another's loss.
4. Collaborative
A collaborative environment involves multiple agents working toward a shared goal. Agents
coordinate their actions and share information so that the group succeeds together.
Example - Warehouse robots moving packages can be considered collaborative. Each robot
plans its route so that the fleet as a whole fulfills orders without collisions.
5. Deterministic
A deterministic environment is one where the next state is completely determined by the
current state and the agent's action. There is no randomness involved.
Example - Sudoku puzzle solving can be considered deterministic. A player solves it purely by
logic, and an agent can precisely determine the outcome of each move from the current state
of the grid.
6. Stochastic
A stochastic environment involves randomness, meaning the same action can lead to different
outcomes. This adds uncertainty and requires probabilistic decision-making.
Example - Weather forecasting can be considered stochastic. Even when AI models are used,
small variations in inputs can change the outcome drastically due to the chaotic nature of the
atmosphere.
7. Single-Agent
A single-agent system involves only one decision-making entity interacting with the
environment. It does not need to compete or cooperate with others.
Example - An agent solving a maze is a single-agent system. Its success depends only on its
own actions, with no other agents involved.
8. Multi-Agent
A multi-agent system involves several decision-making entities acting in the same environment.
Each agent's decisions can affect all other agents and the environment itself.
Example - In online multiplayer games such as Warzone, multiple agents control characters
that interact with one another: they work together with teammates (collaborative) and compete
against other players (competitive).
9. Static
A static environment remains unchanged unless acted upon by the agent. It does not evolve or
change over time on its own.
Example - A crossword puzzle is considered a static environment. The clues and the grid
remain the same until an agent fills in words; the environment stays constant unless the puzzle
is acted upon by the agent.
10. Discrete
A discrete environment has a limited number of states and actions, often in countable steps.
Decisions are made at specific points rather than continuously.
Example - Tic-tac-toe is considered discrete. Players take turns marking an X or an O, and the
set of possible moves is finite.
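Because a discrete environment has countable actions, an agent can literally list them. A small sketch for tic-tac-toe, with `board` and `legal_actions` as illustrative names:

```python
from itertools import product

# 3x3 tic-tac-toe board; None marks an empty cell.
board = [
    ["X", None, "O"],
    [None, "X", None],
    [None, None, "O"],
]

def legal_actions(board):
    """Return the finite, countable set of available moves as (row, col) pairs."""
    return [(r, c) for r, c in product(range(3), range(3)) if board[r][c] is None]

print(len(legal_actions(board)))  # 5 empty cells, so exactly 5 possible moves
```

In a continuous environment no such exhaustive enumeration is possible, which is one practical consequence of the distinction.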
11. Dynamic
A dynamic environment changes over time, even if the agent does nothing. The agent must
adapt to these changes to succeed.
Example - Driving in traffic is a dynamic environment. Other vehicles, pedestrians, and traffic
signals keep changing whether or not the agent acts.
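A dynamic environment can be sketched as a world that advances on its own clock. This toy generator (`dynamic_world` is a hypothetical name) changes a traffic light every tick while the agent does nothing at all:

```python
import itertools

def dynamic_world():
    """Yield a new world snapshot every tick, independent of any agent action."""
    for t in itertools.count():
        yield {"time": t, "light": "green" if t % 4 < 2 else "red"}

world = dynamic_world()
snapshots = [next(world) for _ in range(5)]   # the agent takes no action...
print([s["light"] for s in snapshots])        # ...yet the light keeps changing
```

In a static environment, the analogous loop would return the same snapshot until the agent itself acted.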
12. Continuous
A continuous environment has infinitely many possible states and actions. Movements and
changes happen smoothly over time.
Example - A robot arm assembling parts works in a continuous environment. The possibilities
for adjusting position, rotation, force, and speed are effectively infinite.
13. Episodic
An episodic environment consists of independent episodes where past actions do not affect
future situations. The agent's experience resets after each episode.
Example - Image classification by an AI is episodic. Classifying one image has no effect on the
classification of any other image.
14. Sequential
In a sequential environment, current actions impact future states and decisions. Past choices
influence long-term success.
Example - Playing chess is considered sequential. Each move influences future moves.
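The episodic/sequential contrast can be made concrete with two toy loops. Everything here is illustrative: `classify` is a stand-in for a real image classifier, and the "chess" is reduced to a move history.

```python
def classify(image):
    """Episodic task: the output depends only on the current input."""
    return "cat" if sum(image) % 2 == 0 else "dog"

def episodic_run(images):
    # Each episode is independent; nothing carries over between images.
    return [classify(img) for img in images]

def sequential_run(moves):
    # Each action changes the state that all later actions see.
    state = []
    for move in moves:
        state.append(move)  # a chess move permanently alters the position
    return state

print(episodic_run([[1, 2], [3]]))   # reordering the images changes nothing
print(sequential_run(["e4", "e5"]))  # the order of moves IS the game
```

The key structural difference: `episodic_run` could process its inputs in any order (or in parallel), while `sequential_run` cannot.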
15. Known
A known environment means the agent has full knowledge of the rules, transition models, and
consequences of its actions. It does not need to explore to understand how things work.
Example - Checkers is considered a known environment. The rules are predefined and fully
known to the agent before play; nothing is hidden.
15. Unknown
In an unknown environment, the agent does not initially know the rules or effects of actions and
must learn through exploration. This requires trial and error or reinforcement learning.
Example - A player playing Minecraft for the first time must explore before figuring out the rules,
and result of specific actions like crafting. Over time, an agent playing can figure out the basic
goals and actions through trial and error.
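Trial-and-error learning in an unknown environment can be sketched in miniature. The hidden reward table and the `explore` routine below are invented for this example; the agent never reads the table directly, it only experiences rewards one action at a time:

```python
import random

# Unknown to the agent: the true value of each action.
HIDDEN_REWARDS = {"craft": 1.0, "dig": 0.2, "idle": 0.0}

def explore(trials=3000, seed=0):
    """Estimate action values purely from experience, then pick the best."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in HIDDEN_REWARDS}
    counts = {a: 0 for a in HIDDEN_REWARDS}
    for _ in range(trials):
        action = rng.choice(list(HIDDEN_REWARDS))   # pure trial and error
        totals[action] += HIDDEN_REWARDS[action]    # observe the reward
        counts[action] += 1
    estimates = {a: totals[a] / counts[a] for a in HIDDEN_REWARDS if counts[a]}
    return max(estimates, key=estimates.get)

print(explore())  # the agent discovers that "craft" pays best
```

In a known environment this whole loop would be unnecessary: the agent could read the rules directly and act optimally from the first step.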
Competitive vs. Collaborative
Deterministic vs. Stochastic (Non-deterministic)
Single-agent vs. Multi-agent
Static vs. Dynamic
In a static environment, the world does not change while the agent is making a decision. The
environment remains constant unless the agent itself takes action.
In a dynamic environment, the world continues to change over time, even if the agent does
nothing. The agent must account for these changes when making decisions.
Discrete vs. Continuous
In a discrete environment, the agent has a finite set of states and actions, and changes occur
in distinct steps rather than smoothly. Example: a turn-based board game like Monopoly, where
players take turns moving a specific number of spaces.
In a continuous environment, the agent can take actions and experience states within a
continuous range, leading to smooth and fluid transitions. Example: a robotic arm moving in
space, where its position and movement involve infinite possible variations.
Episodic vs. Sequential
Known vs. Unknown
In a known environment, the agent has full knowledge of the rules, states, and outcomes of its
actions. There is no need for exploration or learning.
In an unknown environment, the agent does not have complete knowledge and must learn
about the environment through trial and experience.
Some examples -
1. A crossword puzzle is a game where you can see everything from the start
(fully observable). You play it alone (single-agent), and there is no luck
involved: every word fits in a specific way (deterministic). You solve it step by
step, adding one word at a time (sequential). The puzzle doesn't change unless
you write in it (static), and everything is made up of clear, separate words and
spaces (discrete).
2. Chess with a clock is a game where both players can see the board at all times
(fully observable). It is played by two people (multi-agent), and there is no
luck: moves follow strict rules (deterministic). Players take turns making moves
(sequential). The board itself stays the same between moves, but the clock keeps
counting down and affects each player's result (semi-dynamic). The game has clear,
separate pieces and squares with set moves (discrete).
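The two worked examples can be recorded as simple profiles and compared field by field. This is a hypothetical representation invented for this sketch (the `EnvironmentProfile` class and its field names are not from any library); the values follow the classifications stated above, with chess-with-a-clock labeled "semi-dynamic" because the clock keeps running between moves:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    """One value per characteristic discussed in this document."""
    observable: str
    agents: str
    determinism: str
    ordering: str
    change: str
    states: str

crossword = EnvironmentProfile(
    observable="fully", agents="single", determinism="deterministic",
    ordering="sequential", change="static", states="discrete",
)

chess_with_clock = EnvironmentProfile(
    observable="fully", agents="multi", determinism="deterministic",
    ordering="sequential", change="semi-dynamic", states="discrete",
)

# The two examples differ only in agent count and in how time matters.
diff = {f for f in vars(crossword) if vars(crossword)[f] != vars(chess_with_clock)[f]}
print(diff)  # {'agents', 'change'}
```

Writing environments down this way makes it easy to see which characteristics actually distinguish two tasks, and hence which techniques (search, game-playing, learning) each one calls for.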