
Module-1

Introduction – The Foundations of Artificial Intelligence; Intelligent Agents – Agents and Environments, Good Behavior: The Concept of Rationality, the Nature of Environments, the Structure of Agents.
Introduction to AI

• Artificial Intelligence (AI) is a branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and make decisions.
• Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "man-made thinking power."
• Artificial Intelligence exists when a machine can exhibit human-like skills such as learning, reasoning, and problem solving.
The Foundations of Artificial Intelligence:
Artificial Intelligence (AI) is a broad and complex field that has evolved over decades. Its
foundations are built on a combination of ideas from several disciplines, including
computer science, mathematics, logic, cognitive science, and engineering.
Here are some of the core foundations:
Mathematical Foundations:
Linear Algebra: Concepts like vectors, matrices, and tensors are fundamental for understanding AI algorithms and techniques.
Probability and Statistics: Used to model uncertainty, make predictions, and infer patterns from data. Bayesian inference, hypothesis testing, and probabilistic models like Hidden Markov Models (HMMs) are key concepts (a small Bayes' rule sketch follows this list).
Calculus: Important for optimization, especially in training machine learning models.
Logic: Propositional and predicate logic are crucial for understanding many AI algorithms.
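As a small illustration of the probability ideas above, the sketch below applies Bayes' rule to update a belief after seeing evidence. The prior and likelihood values are made-up numbers chosen only for the example.

```python
# Toy Bayes' rule update: P(H | E) = P(E | H) * P(H) / P(E)
# The numbers below are illustrative assumptions, not real data.

prior_h = 0.01          # P(H): prior probability the hypothesis is true
p_e_given_h = 0.95      # P(E | H): likelihood of the evidence if H is true
p_e_given_not_h = 0.05  # P(E | not H): likelihood of the evidence if H is false

# Total probability of the evidence, P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Posterior belief after seeing the evidence
posterior_h = p_e_given_h * prior_h / p_e
print(f"P(H | E) = {posterior_h:.3f}")   # roughly 0.161
```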
Computer Science and Programming:
Algorithms and Data Structures: Understanding algorithms (e.g., search algorithms like A*, sorting, graph traversal) and data structures (e.g., trees, graphs, heaps) is essential for building efficient AI systems (a short A* sketch follows this list).
Computational Complexity: Knowledge of computational limits (e.g., P vs. NP) helps in understanding what can be computed in a reasonable time.
Programming Languages: Proficiency in languages like Python, R, Java, and C++ is important, as these are commonly used in AI development.
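As a concrete sketch of the search-algorithm idea mentioned above, the snippet below runs A* over a small hand-made graph. The graph, edge costs, and heuristic values are invented for illustration; a real application would supply its own.

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """A* search over a graph given as {node: [(neighbor, cost), ...]}."""
    # Priority queue of (f = g + h, g, node, path so far)
    frontier = [(heuristic[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + heuristic[neighbor], new_g, neighbor, path + [neighbor]),
                )
    return None, float("inf")

# Made-up graph and admissible heuristic, purely for the demo
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, heuristic, "A", "D"))   # (['A', 'B', 'C', 'D'], 4)
```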
Cognitive Science and Psychology:
Human Cognition: Concepts from cognitive psychology, such as how humans remember, perceive, solve problems, think, learn, and make decisions, inform the development of AI.
Neuroscience: The study of the brain and neural mechanisms has inspired the
development of neural networks and deep learning.
Philosophy:
Epistemology: Deals with the nature and scope of knowledge, which is crucial for understanding how AI systems can learn and represent knowledge.
Ethics: Because AI systems increasingly impact society, ethical considerations arise around fairness, accountability, transparency, and the potential consequences of their use.
Philosophy of Mind: Concerns the nature of consciousness and whether
machines can possess minds or understand in a human-like way.
Engineering and Robotics:
Control Theory: Important in robotics and AI systems that interact with
the physical world, focusing on how to make dynamic systems behave in a
desired way.
Robotics: Integrates AI with mechanical systems, enabling machines to perform tasks that involve perception (gathering information, interpreting it, making decisions, and taking actions), manipulation, and locomotion in the physical world.
Signal Processing: Involves analyzing and interpreting signals (like
images, audio, or sensor data).
Linguistics:
Natural Language Processing (NLP): Understanding and generating human language. This involves syntactic analysis (applying grammar rules to give context to meaning at the word and sentence level), semantic analysis (interpreting the meaning of words and sentence structure so that machines can understand them as humans do), machine translation, sentiment analysis, and more.
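As a toy illustration of these ideas (not real NLP), the sketch below tokenizes a sentence and assigns a crude sentiment score from a hand-made word list; production systems would use dedicated NLP libraries and trained models instead.

```python
# Toy sentiment analysis: split a sentence into tokens and score it
# against a tiny hand-made lexicon. Purely illustrative.
POSITIVE = {"good", "great", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "sad"}

def sentiment(sentence: str) -> int:
    tokens = sentence.lower().replace(".", "").split()   # crude tokenization
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

print(sentiment("The movie was great but the ending was sad."))  # 0
```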
AGENTS:
Agent: An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.

Agent Architecture / Structure of an Agent:
Sensor: A sensor is a device that detects changes in the environment and sends that information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling the system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can
be legs, wheels, arms, fingers, wings, fins, and display screen.
The structure of an intelligent agent is a combination of architecture and agent
program.
Agent = Architecture + Agent program
Architecture: The architecture is the machinery that the AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action.

• Agent program: An agent program is an implementation of the agent function. The agent program executes on the physical architecture to produce the function f.

f : P* → A, where P* is the set of all percept sequences and A is the set of actions.
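A minimal sketch of the mapping f : P* → A is shown below, assuming a simple table-driven agent program for a hypothetical two-location vacuum world; the percepts, actions, and table entries are illustrative, not taken from the notes.

```python
# Table-driven agent program: map the percept sequence (P*) to an action (A).
# The vacuum-world table below is a hypothetical example.
percepts = []

TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    # ... a full table would enumerate longer percept sequences too
}

def agent_program(percept):
    """Implements f: P* -> A by looking up the whole percept sequence."""
    percepts.append(percept)
    return TABLE.get(tuple(percepts), "NoOp")   # default when the sequence is not listed

print(agent_program(("A", "Dirty")))  # Suck
```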

NOTE:
An agent can be:
•Human Agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
•Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.
•Software Agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
PEAS Representation of AI agent:
PEAS is a model used to describe an AI agent. When we define an AI agent, we can group its properties under the PEAS representation model. It is made up of four terms:
P: Performance measure
E: Environment
A: Actuators
S: Sensors
Performance measure: The performance measure is a criterion that measures the success of the agent. It is used to evaluate how well the agent is achieving its goal.
Environment: The environment represents the domain or context in which the agent operates and interacts. This can range from physical spaces like rooms to virtual environments.
Actuators: Actuators are the mechanisms through which the AI agent performs
actions or interacts with its environment to achieve its goals.
Sensors: Sensors enable the AI agent to gather information from its environment,
providing data that informs its decision-making process and actions. These sensors
can capture various environmental parameters such as temperature, sound,
movement, or visual input.
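As an illustration, the PEAS description of a hypothetical self-driving taxi agent can be written down as a simple data structure; the entries below follow the usual textbook example and are not exhaustive.

```python
# PEAS description of a (hypothetical) self-driving taxi agent
peas_taxi = {
    "Performance measure": ["safety", "legal driving", "passenger comfort", "speed", "profit"],
    "Environment": ["roads", "traffic", "pedestrians", "weather", "customers"],
    "Actuators": ["steering wheel", "accelerator", "brake", "horn", "display"],
    "Sensors": ["cameras", "GPS", "speedometer", "odometer", "sonar", "keyboard"],
}

for component, items in peas_taxi.items():
    print(f"{component}: {', '.join(items)}")
```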
Intelligent Agents:
An intelligent agent in AI is a system that can perceive its environment, reason
about it, and take actions to achieve specific goals.
These agents are designed to operate autonomously, meaning they can make
decisions and execute actions without human intervention, based on their
perceptions and knowledge.
Key Components of an Intelligent Agent (a minimal agent loop is sketched after this list):
Perception: The agent perceives its environment through sensors.
Reasoning: The agent processes and analyzes the data it receives, applying logical
reasoning or machine learning techniques to interpret the data and make
decisions.
Action: Based on its reasoning, the agent takes actions to influence the
environment. Actions are executed through effectors or actuators.
Learning: Some intelligent agents have the ability to learn from their experiences.
This aspect is particularly prominent in machine learning-based agents.
Autonomy: An intelligent agent operates without constant guidance or
supervision. It is capable of adapting to new situations and making decisions on
its own.
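The sketch below shows a minimal perceive-reason-act loop for a hypothetical thermostat agent; the sensor reading, threshold, and action names are assumptions made for the example.

```python
import random

class ThermostatAgent:
    """Minimal intelligent agent: perceive -> reason -> act."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp

    def perceive(self):
        # Stand-in for a real temperature sensor
        return random.uniform(15.0, 28.0)

    def reason(self, temperature):
        # Simple rule-based reasoning
        if temperature < self.target_temp - 0.5:
            return "heat_on"
        if temperature > self.target_temp + 0.5:
            return "heat_off"
        return "do_nothing"

    def act(self, action):
        print(f"Actuator command: {action}")

    def step(self):
        temperature = self.perceive()
        action = self.reason(temperature)
        self.act(action)

agent = ThermostatAgent()
for _ in range(3):
    agent.step()
```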

Applications of Intelligent Agents:


• Autonomous Vehicles (self-driving cars that make decisions about navigation and avoid obstacles)
• Personal Assistants (Virtual assistants like Siri, Alexa, and Google Assistant)
• Game AI
• Robotics
• Financial Systems
Rational Agent and Rationality:
• Rational agents in AI are very similar to intelligent agents.
• A rational agent is one that does the right thing. It is an autonomous entity designed to perceive its environment, process information, and act in a way that maximizes the achievement of its predefined goals or objectives.
• Rationality:
Rationality in AI refers to the principle that such agents should consistently
choose actions that are expected to lead to the best possible outcomes, given their
current knowledge and the uncertainties present in the environment.
This principle of rationality guides the behavior of intelligent agents in the following
ways:
• Perception and Information Processing: Rational agents perceive and process
information efficiently to gain the most accurate understanding of their
environment.
• Reasoning and Inference: They employ logical reasoning and probabilistic inference
to make informed decisions based on available evidence and prior knowledge.
• Decision-Making Under Uncertainty: When faced with uncertainty,
rational agents weigh the probabilities of different outcomes and
choose actions that maximize their expected utility or achieve the
best possible outcome given the available information.
• Adaptation and Learning: Rational agents adapt their behavior over
time based on feedback and experience, continuously refining their
decision-making strategies to improve performance and achieve their
goals more effectively.
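The snippet below gives a small numeric sketch of decision-making under uncertainty: each action's expected utility is the probability-weighted sum of its outcome utilities, and the rational agent picks the action with the highest value. The actions, probabilities, and utilities are invented for illustration.

```python
# Expected utility: EU(action) = sum over outcomes of P(outcome) * U(outcome)
# The numbers below are illustrative assumptions.
actions = {
    "take_highway":   [(0.7, 10), (0.3, -5)],   # (probability, utility) pairs
    "take_side_road": [(0.9, 6),  (0.1, 2)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(f"{a}: EU = {expected_utility(outcomes):.2f}")
print(f"Rational choice: {best_action}")   # take_side_road (EU 5.60 vs 5.50)
```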
Agent Environment:
• The nature of the environment in AI refers to the characteristics and properties of
the environment in which an intelligent agent operates.
• Key Characteristics of Environments in AI:
1. Fully Observable vs. Partially Observable:
Fully Observable: An environment is fully observable if the agent’s sensors can access the complete state of the environment at any time. This means the agent has all the necessary information to make an optimal decision.
Example: Chess, where the agent can see the entire board and all pieces.
Partially Observable: An environment is partially observable if the agent’s sensors
can only access part of the state of the environment. The agent must make decisions with
incomplete information.
Example: Poker, where the agent cannot see the opponents’ cards.
2. Deterministic vs. Stochastic:
Deterministic: An environment is deterministic if the next state of the environment is completely determined by the current state and the agent’s action.
Example: A puzzle game where each move has a predictable outcome.
Stochastic: An environment is stochastic if there is some level of randomness, i.e., the same action may lead to different outcomes.
Example: A dice game, where the outcome of a roll is random.
3. Episodic vs. Sequential:
Episodic: An environment is episodic if the agent’s experience can be divided into distinct, independent episodes, where each decision does not depend on previous episodes.
Example: Image classification tasks.
Sequential: An environment is sequential if the current decision affects future decisions and outcomes.
Example: Robot navigation, where the current movement affects future
positioning.
4. Static vs. Dynamic:
Static: An environment is static if it does not change while the agent is deliberating.
Example: Crossword puzzles, where the puzzle remains unchanged while the agent
is thinking.
Dynamic: An environment is dynamic if it changes while the agent is deliberating.
Example: Real-time strategy games, where the environment and opponent actions
change rapidly.
5. Discrete vs. Continuous:
Discrete: An environment is discrete if there are a finite number of distinct states,
actions, and percepts.
Example: Chess, where each move corresponds to a discrete action.
Continuous: An environment is continuous if it has an infinite number of possible
states and actions.
Example: Autonomous driving, where speed and steering require continuous
adjustment.
6. Single-Agent vs. Multi-Agent:
Single-Agent: An environment is single-agent if there is only one agent interacting with
the environment.
Example: Solitaire, where the player competes against the rules of the game itself.
Multi-Agent: An environment is multi-agent if multiple agents interact, either
cooperatively or competitively.
Example: Soccer, where each player (agent) interacts with teammates and opponents.
7. Known vs. Unknown:
Known: An environment is known if the agent understands the rules governing the
environment’s dynamics. The agent can predict the outcomes of its actions accurately.
Example: Board games with well-defined rules, like Go.
Unknown: An environment is unknown if the agent does not know the rules or the
dynamics in advance. The agent must learn from interaction or exploration.
Example: Exploration tasks, such as finding a path in an unknown maze.
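As a summary, the sketch below records how two example tasks from this section can be classified along the dimensions above as a simple data structure; the classifications follow the examples given in the text.

```python
# Classifying example task environments along the dimensions discussed above
environments = {
    "Chess": {
        "observable": "fully", "deterministic": True, "episodic": False,
        "static": True, "discrete": True, "agents": "multi",
    },
    "Autonomous driving": {
        "observable": "partially", "deterministic": False, "episodic": False,
        "static": False, "discrete": False, "agents": "multi",
    },
}

for task, props in environments.items():
    print(task)
    for dimension, value in props.items():
        print(f"  {dimension}: {value}")
```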
