UNIT - I
by
Ms.S.Sowmya
Assistant Professor
Department of Computer Science & Engineering
Course Objectives: To train the students to understand different types of AI agents, various AI search
algorithms, the fundamentals of knowledge representation, the building of simple knowledge-based systems,
and to apply knowledge representation and reasoning.
•Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think like humans and mimic their actions.
•The term may also be applied to any machine that exhibits traits associated with a human mind
such as learning and problem-solving.
•The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that
have the best chance of achieving a specific goal.
•A subset of artificial intelligence is machine learning, which refers to the concept that computer
programs can automatically learn from and adapt to new data without being assisted by humans.
•Deep learning techniques enable this automatic learning through the absorption of huge amounts of
unstructured data such as text, images, or video.
•AI is being used across different industries including finance and healthcare.
•Weak AI tends to be simple and single-task oriented, while strong AI carries on tasks that are more
complex and human-like.
•When most people hear the term artificial intelligence, the first thing they usually think of is robots.
•That's because big-budget films and novels weave stories about human-like machines that wreak havoc on
Earth. But nothing could be further from the truth.
•Artificial intelligence is based on the principle that human intelligence can be defined in a way that a machine
can easily mimic it and execute tasks, from the most simple to those that are even more complex.
•The goals of artificial intelligence include mimicking human cognitive activity. Researchers and developers in
the field are making surprisingly rapid strides in mimicking activities such as learning, reasoning, and
perception, to the extent that these can be concretely defined.
•Some believe that innovators may soon be able to develop systems that exceed the capacity of humans to learn
or reason out any subject.
• But others remain skeptical because all cognitive activity is laced with value judgements that are subject to
human experience.
Predicted that by 2000, a machine might have a 30% chance of fooling a lay
person for 5 minutes
Anticipated all major arguments against AI in the following 50 years
Suggested major components of AI: knowledge, reasoning, language
understanding, learning
Several Greek schools developed various forms of logic: notation and rules of
derivation for thoughts; may or may not have proceeded to the idea of
mechanization
Problems:
1. Not all intelligent behavior is mediated by logical deliberation
2. What is the purpose of thinking? What thoughts should I have?
The right thing: that which is expected to maximize goal achievement, given the
available information
Doesn't necessarily involve thinking – e.g., blinking reflex – but thinking should be in
the service of rational action
Weak artificial intelligence embodies a system designed to carry out one particular job.
• Weak AI systems include video games such as the chess example from above and personal
assistants such as Amazon's Alexa and Apple's Siri. You ask the assistant a question, and it answers it
for you.
Strong artificial intelligence systems are systems that carry on the tasks considered to be human-like.
These tend to be more complex and complicated systems.
• They are programmed to handle situations in which they may be required to problem solve
without having a person intervene.
• These kinds of systems can be found in applications like self-driving cars or in hospital operating
rooms.
Performance Measure of Agent − It is the criteria, which determines how successful an agent is.
Behavior of Agent − It is the action that agent performs after any given sequence of percepts.
Percept − It is agent’s perceptual inputs at a given instance.
Percept Sequence − It is the history of all that an agent has perceived till date.
Agent Function − It is a map from the percept sequence to an action.
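To make the idea concrete, here is a minimal sketch of an agent function written as a table keyed by the percept sequence. The vacuum-world percepts and actions below ('A'/'B', 'Dirty'/'Clean', 'Suck', 'Right', 'Left') are illustrative assumptions, not taken from the slides.

```python
# A minimal table-driven agent: the agent function is a lookup from the
# percept sequence observed so far to an action.
# The percepts and actions below are illustrative assumptions.

percepts = []  # the percept sequence: history of all percepts so far

# Agent function expressed as a table keyed by the percept sequence.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('B', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
}

def table_driven_agent(percept):
    """Map the whole percept sequence (not just the latest percept) to an action."""
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')

print(table_driven_agent(('A', 'Dirty')))  # -> 'Suck'
```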
An environment in artificial intelligence is the surrounding of the agent. The agent takes input from the
environment through sensors and delivers the output to the environment through actuators. There are several types
of environments:
1. Fully Observable vs Partially Observable
2. Deterministic vs Stochastic
3. Competitive vs Collaborative
4. Single-agent vs Multi-agent
5. Static vs Dynamic
6. Discrete vs Continuous
5. Dynamic vs Static
An environment that keeps changing while the agent is acting upon it is said to be dynamic.
A roller coaster ride is dynamic: it is set in motion and the environment keeps changing every instant.
An idle environment with no change in its state is called a static environment.
An empty house is static, as there is no change in the surroundings when an agent enters.
6. Discrete vs Continuous
If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is
said to be a discrete environment.
The game of chess is discrete as it has only a finite number of moves. The number of moves might vary with every game, but
still, it is finite.
An environment in which the actions performed cannot be counted, i.e., is not discrete, is said to be continuous.
Self-driving cars are an example of continuous environments, as their actions (driving, parking, etc.) cannot be
enumerated as a finite set.
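As a rough illustration, the six properties above can be recorded for two of the examples mentioned. The helper class and the particular classifications below are assumptions for illustration (e.g., chess is treated as static, which holds for chess played without a clock):

```python
from dataclasses import dataclass

# Illustrative only: a record of the six environment properties discussed above.
@dataclass
class EnvironmentProperties:
    fully_observable: bool
    deterministic: bool
    competitive: bool
    multi_agent: bool
    static: bool
    discrete: bool

# Chess: fully observable, deterministic, competitive, multi-agent, static, discrete.
chess = EnvironmentProperties(True, True, True, True, True, True)

# Self-driving car: partially observable, stochastic, not competitive,
# multi-agent, dynamic, continuous.
self_driving_car = EnvironmentProperties(False, False, False, True, False, False)
```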
The Structure of Agents
[Figure: an agent and its environment. Sensors deliver percepts from the environment to the agent, the agent function selects actions, and actuators carry out those actions in the environment.]
➢Performance measure
➢Environment
➢Actuators
➢Sensors
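As an illustration, these four components can be spelled out for one of the agents mentioned earlier, the self-driving car. The specific entries below are assumptions for illustration, not taken from the slides:

```python
# Illustrative task-environment description for a self-driving car
# (the entries are assumptions, listed only to show the four components).
self_driving_car_task = {
    "performance_measure": ["safety", "legal driving", "trip time", "passenger comfort"],
    "environment": ["roads", "traffic", "pedestrians", "weather"],
    "actuators": ["steering", "accelerator", "brake", "horn", "indicators"],
    "sensors": ["cameras", "lidar", "GPS", "speedometer", "odometer"],
}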
• The most effective way to handle partial observability is for the agent to keep track of the part of
the world it can't see now. That is, the agent combines the current percept with the old internal
state to generate an updated description of the current state.
• Combining the current percept with the old internal state yields a new, updated state description.
This updating requires two kinds of knowledge in the agent program. First, we need some
information about how the world evolves independently of the agent. Second, we need some
information about how the agent's own actions affect the world.
• This knowledge, whether implemented in simple Boolean circuits or in complete scientific theories,
is called a model of the world. An agent that uses such a model is called a model-based agent.
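A minimal sketch of a model-based agent in Python. The internal state representation, the transition rule, and the percepts below are assumptions for illustration:

```python
# A minimal model-based agent skeleton.
# It keeps an internal state and updates it using (1) how the world evolves,
# including the effect of the agent's own last action, and (2) the new percept.
# The concrete update and decision rules here are illustrative assumptions.

class ModelBasedAgent:
    def __init__(self):
        self.state = {}          # internal description of the current world state
        self.last_action = None

    def update_state(self, percept):
        """Combine the old internal state, the last action, and the new percept."""
        # (1) predict how the world evolved, using the model of our own action
        if self.last_action == "move_right":
            self.state["position"] = self.state.get("position", 0) + 1
        # (2) fold in what the current percept tells us directly
        self.state["dirty"] = percept.get("dirty", False)

    def choose_action(self):
        """Pick an action based on the updated internal state."""
        return "suck" if self.state.get("dirty") else "move_right"

    def step(self, percept):
        self.update_state(percept)
        self.last_action = self.choose_action()
        return self.last_action


agent = ModelBasedAgent()
print(agent.step({"dirty": True}))   # -> 'suck'
print(agent.step({"dirty": False}))  # -> 'move_right'
```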
3. Goal-based agents
• An agent knows the description of the current state and also needs some sort of goal
information that describes situations that are desirable. The action to take in the current state
is selected depending on the goal state.
• The goal-based agent is also flexible enough to handle more than one destination. Once one
destination is reached and a new destination is specified, the goal-based agent is activated to
come up with a new behavior. Search and Planning are the subfields of AI devoted to finding
action sequences that achieve the agent's goals.
• Although the goal-based agent appears less efficient, it is more flexible because the knowledge
that supports its decisions is represented explicitly and can be modified. The goal-based agent's
behavior can easily be changed to go to a different location.
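A minimal sketch of a goal-based agent in Python, using a hypothetical one-dimensional navigation task; the state, goal, and actions are assumptions for illustration:

```python
# A minimal goal-based agent: it selects the action that moves the current
# state toward an explicitly represented goal. Changing the goal changes the
# behavior without rewriting the agent. The 1-D navigation task is illustrative.

class GoalBasedAgent:
    def __init__(self, goal):
        self.goal = goal  # explicitly represented and modifiable at any time

    def choose_action(self, current_position):
        if current_position < self.goal:
            return "move_right"
        if current_position > self.goal:
            return "move_left"
        return "stop"


agent = GoalBasedAgent(goal=5)
print(agent.choose_action(2))   # -> 'move_right'

agent.goal = 0                  # new destination: behavior changes accordingly
print(agent.choose_action(2))   # -> 'move_left'
```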
The Depth First Search (DFS) algorithm traverses a graph in a depthward motion and uses a stack to
remember where to get the next vertex to start a search when a dead end occurs in any iteration.
As C does not have any unvisited adjacent node, we keep popping the stack
until we find a node that has an unvisited adjacent node.
In this case, there is none, and we keep popping until the stack is empty.
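A minimal iterative DFS sketch in Python using an explicit stack; the adjacency-list graph below is an assumption for illustration, not the example from the slides:

```python
# Iterative depth-first search using an explicit stack.
# When the node on top of the stack has no unvisited neighbour (a dead end),
# it is popped; traversal ends when the stack is empty.
# The adjacency list below is an illustrative example graph.

def dfs(graph, start):
    visited = [start]
    stack = [start]
    while stack:
        node = stack[-1]
        # look for an unvisited neighbour of the node on top of the stack
        unvisited = [n for n in graph[node] if n not in visited]
        if unvisited:
            nxt = unvisited[0]
            visited.append(nxt)
            stack.append(nxt)
        else:
            stack.pop()  # dead end: backtrack by popping the stack
    return visited


graph = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['D'],
    'C': [],
    'D': ['C'],
}
print(dfs(graph, 'S'))  # -> ['S', 'A', 'C', 'D', 'B']
```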