Properties of Task Environment
An agent program is an implementation of an agent function. An agent function is a map from
the percept sequence (the history of everything the agent has perceived to date) to an action.
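This mapping can be sketched in code. The following is a minimal illustration, not from the text: a table-driven agent program that implements an agent function by looking up the full percept sequence in a table. The vacuum-world percepts and actions ("Suck", "Right") are assumed example values.

```python
# A table-driven agent program: the agent function maps the percept
# sequence (history of all percepts so far) to an action.
def make_table_driven_agent(table):
    percepts = []  # the percept sequence accumulated over time

    def agent_program(percept):
        percepts.append(percept)
        # Look up the entire history; unknown sequences get "NoOp".
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Hypothetical vacuum-world table: (location, status) percepts -> action.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
```

Note that the table grows with the length of the percept sequence, which is why practical agent programs compute the action rather than tabulating it.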
1) SIMPLE REFLEX AGENT: In artificial intelligence, a simple reflex agent is a type of
intelligent agent that selects actions based solely on the current percept, an intelligent
agent generally being one that perceives its environment and then acts. The agent cannot
learn or take past percepts into account to modify its behavior.
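As a sketch of this idea (an assumed vacuum-world example, not from the text), a simple reflex agent is just a set of condition-action rules applied to the current percept; no history is stored anywhere:

```python
# A simple reflex agent: the action depends only on the current percept,
# expressed as condition-action rules. No state, no memory.
def simple_reflex_agent(percept):
    location, status = percept
    if status == "Dirty":    # rule: current square dirty -> clean it
        return "Suck"
    elif location == "A":    # rule: clean at A -> move right
        return "Right"
    else:                    # rule: clean at B -> move left
        return "Left"
```

Because the function has no memory, identical percepts always produce identical actions, which is exactly the limitation the text describes.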
2) MODEL BASED REFLEX AGENT: Model-based reflex agents are designed to deal with partial
observability; they do this by keeping track of the part of the world they cannot see now.
The agent maintains an internal state that depends on the percept history, and this state
holds information about the unobserved aspects of the current environment.
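A minimal sketch of this internal state, again using an assumed two-square vacuum world: the agent remembers the last known status of each square, so it can reason about the square it is not currently observing.

```python
# A model-based reflex agent: internal state (self.model) records what has
# been seen before, covering aspects of the world not currently observable.
class ModelBasedReflexAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # last known status per square

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status        # update state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"   # predict the effect of Suck
            return "Suck"
        other = "B" if location == "A" else "A"
        # Use the remembered, currently unobserved state: if the other
        # square is already known to be clean, there is nothing left to do.
        if self.model[other] == "Clean":
            return "NoOp"
        return "Right" if location == "A" else "Left"
```

A simple reflex agent in the same situation would keep shuttling between squares forever; the internal model is what lets this agent stop.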
3) GOAL BASED AGENT: A goal-based agent is capable of thinking beyond the
present moment to decide the best actions to take in order to achieve its goal.
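"Thinking beyond the present moment" typically means searching for a sequence of actions that leads to the goal. The following sketch (an assumed one-dimensional corridor example, not from the text) uses breadth-first search as the planning step:

```python
# A goal-based agent plans ahead: it searches for a sequence of states
# leading from the current state to the goal, rather than reacting only
# to the current percept.
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search; returns the list of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# Hypothetical world: positions 0..4 in a corridor; the agent may step
# one position left or right.
def corridor_neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n <= 4]
```

The search procedure here is interchangeable; what makes the agent goal-based is that action choice is driven by an explicit goal test rather than by the current percept alone.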
4) UTILITY BASED AGENT: A utility-based agent acts based not only on what the goal is,
but on the best way to reach that goal. In short, it is its use of a utility measure
that distinguishes it from its counterparts.
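Where several actions all reach the goal, a utility function ranks them so the agent can pick the best. A minimal sketch, with hypothetical route names and made-up time/comfort scores purely for illustration:

```python
# A utility-based agent: all routes reach the destination (the goal), but
# the agent ranks them with a utility function and takes the best one.
def choose_action(actions, utility):
    return max(actions, key=utility)

# Hypothetical routes: (travel minutes, comfort score out of 10).
routes = {"highway": (60, 2), "scenic": (90, 9), "backroad": (75, 6)}

def route_utility(route):
    minutes, comfort = routes[route]
    # Assumed trade-off for this example: weight comfort against time.
    return comfort * 10 - minutes
```

A goal-based agent would treat all three routes as equally acceptable; the utility function is what breaks the tie.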
5) LEARNING AGENT: A learning agent is an agent in AI that is capable of learning from its
experiences. Unlike intelligent agents that act only on information provided by a
programmer, learning agents are able to perform tasks, analyze their performance, and look
for new ways to improve on those tasks, all on their own.
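This feedback loop can be sketched as follows (an assumed toy example, not from the text): the performance element picks the currently best-rated action, and the learning element adjusts those ratings from observed rewards, so behavior improves with experience.

```python
# A learning agent: the learning element updates action-value estimates
# from reward feedback; the performance element then exploits them.
class LearningAgent:
    def __init__(self, actions, alpha=0.5):
        self.q = {a: 0.0 for a in actions}  # estimated value of each action
        self.alpha = alpha                  # learning rate

    def act(self):
        # Performance element: choose the action currently judged best.
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        # Learning element: move the estimate toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])
```

After a few rewards, the agent's choices shift toward the actions that worked, without any new rules being supplied by a programmer.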