
Why study AI?

Search engines
Science

Medicine/
Diagnosis
Labor
Appliances What else?
Honda Humanoid Robot

Walk

Turn

http://world.honda.com/robot/
Stairs
Sony AIBO

http://www.aibo.com
Natural Language Question Answering

http://aimovie.warnerbros.com http://www.ai.mit.edu/projects/infolab/
Robot Teams

USC robotics Lab

What is AI?

Acting Humanly: The Turing Test

• Alan Turing's 1950 article Computing Machinery and Intelligence discussed
  conditions for considering a machine to be intelligent
• “Can machines think?” → “Can machines behave intelligently?”
• The Turing test (The Imitation Game): an operational definition of
  intelligence.
• The computer needs to possess: natural language processing, knowledge
  representation, automated reasoning, and machine learning
• Are there any problems/limitations to the Turing Test?

What tasks require AI?

• “AI is the science and engineering of making intelligent machines which can
  perform tasks that require intelligence when performed by humans …”

• What tasks require AI?

What tasks require AI?

• “AI is the science and engineering of making intelligent machines which can
  perform tasks that require intelligence when performed by humans …”

• Tasks that require AI:


• Solving a differential equation
• Brain surgery
• Inventing stuff
• Playing Jeopardy
• Playing Wheel of Fortune
• What about walking?
• What about grabbing stuff?
• What about pulling your hand away from fire?
• What about watching TV?
• What about day dreaming?

Acting Humanly: The Full Turing Test

• Alan Turing's 1950 article Computing Machinery and Intelligence discussed
  conditions for considering a machine to be intelligent
• “Can machines think?” → “Can machines behave intelligently?”
• The Turing test (The Imitation Game): an operational definition of
  intelligence.
• The computer needs to possess: natural language processing, knowledge
  representation, automated reasoning, and machine learning
• Problems: 1) The Turing test is not reproducible, constructive, or amenable
  to mathematical analysis. 2) What about physical interaction with the
  interrogator and the environment?
• Total Turing Test: requires physical interaction, and needs perception and
  actuation.

What would a computer need to pass the Turing test?

• Natural language processing: to communicate with the examiner.
• Knowledge representation: to store and retrieve
information provided before or during interrogation.
• Automated reasoning: to use the stored information to
answer questions and to draw new conclusions.
• Machine learning: to adapt to new circumstances and to
detect and extrapolate patterns.
• Vision (for Total Turing test): to recognize the examiner’s
actions and various objects presented by the examiner.
• Motor control (total test): to act upon objects as requested.
• Other senses (total test): such as audition, smell, touch,
etc.

Thinking Humanly: Cognitive Science

• 1960s “Cognitive Revolution”: information-processing psychology replaced
  behaviorism

• Cognitive science brings together theories and


experimental evidence to model internal activities of the
brain
• What level of abstraction? “Knowledge” or “Circuits”?
• How to validate models?
• Predicting and testing behavior of human subjects (top-down)
• Direct identification from neurological data (bottom-up)
• Building computer/machine simulated models and reproduce
results (simulation)

Thinking Rationally: Laws of Thought

• Aristotle (384–322 B.C.) attempted to codify “right thinking”:
  What are correct arguments/thought processes?
• E.g., “Socrates is a man, all men are mortal; therefore Socrates is mortal”
  (a short code sketch of this inference appears at the end of this slide)

• Several Greek schools developed various forms of logic:


notation plus rules of derivation for thoughts.

• Problems:
1) Uncertainty: Not all facts are certain (e.g., the flight might be
delayed).
2) Resource limitations: There is a difference between solving a
problem in principle and solving it in practice under various
resource limitations such as time, computation, accuracy etc.
(e.g., purchasing a car)
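To make the syllogism concrete, here is a minimal sketch (not part of the original slides) that derives Mortal(Socrates) from Man(Socrates) and the rule “all men are mortal”; the tuple-based fact representation is an assumption chosen for brevity.

```python
# Facts are (predicate, argument) tuples -- a toy representation.
facts = {("Man", "Socrates")}

def all_men_are_mortal(facts):
    """Rule: for every x such that Man(x), conclude Mortal(x)."""
    return {("Mortal", x) for (pred, x) in facts if pred == "Man"}

facts |= all_men_are_mortal(facts)
print(("Mortal", "Socrates") in facts)   # True: Socrates is mortal
```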

Acting Rationally: The Rational Agent

• Rational behavior: doing the right thing!

• The right thing: that which maximizes the expected return (expected utility),
  given the available information (a small sketch follows at the end of this
  slide)

• Provides the most general view of AI because it includes:


• Correct inference (“Laws of thought”)
• Uncertainty handling
• Resource limitation considerations (e.g., reflex vs.
deliberation)
• Cognitive skills (NLP, AR, knowledge representation, ML, etc.)

• Advantages:
1) More general
2) Its goal of rationality is well defined
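As a tiny illustration of rational action selection, the sketch below picks the action with the highest expected utility; the two actions (“reflex” vs. “deliberate”) and their outcome probabilities and utilities are made-up numbers, not from the lecture.

```python
# Each action maps to a list of (probability, utility) outcomes (hypothetical numbers).
actions = {
    "reflex":     [(1.0, 2.0)],               # fast, but a mediocre outcome
    "deliberate": [(0.7, 5.0), (0.3, -1.0)],  # better on average, with some risk
}

def expected_utility(outcomes):
    """Expected return of an action: sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # deliberate 3.2
```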

How to achieve AI?

• How is AI research done?

• AI research has both theoretical and experimental sides. The


experimental side has both basic and applied aspects.

• There are two main lines of research:


• One is biological, based on the idea that since humans are intelligent,
AI should study humans and imitate their psychology or physiology.
• The other is phenomenal, based on studying and formalizing common
sense facts about the world and the problems that the world presents
to the achievement of goals.

• The two approaches interact to some extent, and both should


eventually succeed. It is a race, but both racers seem to be
walking. [John McCarthy]

Branches of AI

• Logical AI
• Search
• Natural language processing
• Pattern recognition
• Knowledge representation
• Inference: from some facts, others can be inferred
• Automated reasoning
• Learning from experience
• Planning: generating a strategy for achieving some goal
• Epistemology: the study of the kinds of knowledge that are required for
  solving problems in the world
• Ontology: the study of the kinds of things that exist. In AI, the programs
  and sentences deal with various kinds of objects, and we study what these
  kinds are and what their basic properties are
• Genetic programming
• Emotions???
• …

AI Prehistory

AI History

AI State of the art

• Have the following been achieved by AI?


• World-class chess playing
• Playing table tennis
• Cross-country driving
• Solving mathematical problems
• Discovering and proving mathematical theorems
• Engaging in a meaningful conversation
• Understanding spoken language
• Observing and understanding human emotions
• Expressing emotions
• …

Course Overview

General Introduction

• 01-Introduction. [AIMA Ch 1] Course Schedule. Homeworks,


exams and grading. Course material, TAs and office hours. Why
study AI? What is AI? The Turing test. Rationality. Branches of AI.
Research disciplines connected to and at the foundation of AI. Brief
history of AI. Challenges for the future. Overview of class syllabus.

• 02-Intelligent Agents. [AIMA Ch 2] What is an intelligent agent? Examples.
  Doing the right thing (rational action). Performance measure. Autonomy.
  Environment and agent design. Structure of agents. Agent types. Reflex
  agents. Reactive agents. Reflex agents with state. Goal-based agents.
  Utility-based agents. Mobile agents. Information agents.
  (Figure: an agent interacting with its environment through sensors and
  effectors.)

Course Overview (cont.)

How can we solve complex problems?

• 03/04-Problem solving and search. [AIMA Ch 3] Example: measuring problem.
  Types of problems. More example problems. Basic idea behind search
  algorithms. Complexity. Combinatorial explosion and NP completeness.
  Polynomial hierarchy.
  (Figure: three buckets of 3 l, 5 l and 9 l; “Using these 3 buckets, measure
  7 liters of water.” A search sketch for this puzzle appears after the search
  bullets below.)

• 05-Uninformed search. [AIMA Ch 3] Depth-first. Breadth-first. Uniform-cost.
  Depth-limited. Iterative deepening. Examples. Properties.

• 06/07-Informed search. [AIMA Ch 4] Best-first. A* search. Heuristics. Hill
  climbing. Problem of local extrema. Simulated annealing.
  (Figure: traveling salesperson problem.)
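As a concrete illustration of uninformed search, here is a minimal sketch (not from the lecture) that applies breadth-first search to the measuring puzzle above; the 3/5/9-liter capacities and the 7-liter goal come from the figure, everything else is an assumption chosen for brevity.

```python
from collections import deque

CAPACITIES = (3, 5, 9)   # bucket sizes from the figure
GOAL = 7                 # measure 7 liters in some bucket

def successors(state):
    """All states reachable by filling, emptying, or pouring one bucket."""
    for i, amount in enumerate(state):
        # Fill bucket i to its capacity.
        yield state[:i] + (CAPACITIES[i],) + state[i+1:]
        # Empty bucket i.
        yield state[:i] + (0,) + state[i+1:]
        # Pour bucket i into bucket j until i is empty or j is full.
        for j in range(len(state)):
            if i != j:
                poured = min(amount, CAPACITIES[j] - state[j])
                nxt = list(state)
                nxt[i] -= poured
                nxt[j] += poured
                yield tuple(nxt)

def bfs(start=(0, 0, 0)):
    """Breadth-first search over bucket states; returns a shortest path to the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if GOAL in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

for step in bfs():
    print(step)   # prints each (3l, 5l, 9l) state until some bucket holds 7 liters
```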

Course Overview (cont.)

Practical applications of search.

• 08/09-Game playing. [AIMA Ch 5] The minimax algorithm. Resource limitations.
  Alpha-beta pruning. Elements of chance and non-deterministic games.
  (Figure: tic-tac-toe. A minimax sketch follows below.)
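Here is a minimal sketch of minimax with alpha-beta pruning over a generic game interface; the `game` object's methods (actions, result, is_terminal, utility) are a hypothetical interface for illustration, not code from the course.

```python
import math

def alphabeta(game, state, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Return (value, best_action) for `state` using minimax with alpha-beta pruning.

    `game` is assumed to provide actions(state), result(state, action),
    is_terminal(state) and utility(state) -- a hypothetical interface.
    """
    if game.is_terminal(state):
        return game.utility(state), None
    best_action = None
    if maximizing:
        value = -math.inf
        for action in game.actions(state):
            child_value, _ = alphabeta(game, game.result(state, action), alpha, beta, False)
            if child_value > value:
                value, best_action = child_value, action
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cutoff: the opponent will avoid this branch
                break
    else:
        value = math.inf
        for action in game.actions(state):
            child_value, _ = alphabeta(game, game.result(state, action), alpha, beta, True)
            if child_value < value:
                value, best_action = child_value, action
            beta = min(beta, value)
            if beta <= alpha:      # alpha cutoff
                break
    return value, best_action
```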

Course Overview (cont.)

Towards intelligent agents

• 10-Agents that reason logically 1. [AIMA Ch 6] Knowledge-based agents.
  Logic and representation. Propositional (boolean) logic.

• 11-Agents that reason logically 2. [AIMA Ch 6] Inference in propositional
  logic. Syntax. Semantics. Examples.
  (Figure: the wumpus world. A small entailment sketch follows below.)
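As an illustration of inference in propositional logic, this is a minimal truth-table (model-checking) entailment test; representing sentences as Python functions over a {symbol: bool} dictionary is an assumption made for brevity.

```python
from itertools import product

def entails(kb, query, symbols):
    """KB |= query iff every assignment of truth values to `symbols`
    that makes the KB true also makes the query true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

# Example: KB = (P => Q) and P; query = Q. Entailment holds.
kb = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
query = lambda m: m["Q"]
print(entails(kb, query, ["P", "Q"]))   # True
```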
Course Overview (cont.)

Building knowledge-based agents: First-Order Logic

• 12-First-order logic 1. [AIMA Ch 7] Syntax. Semantics.


Atomic sentences. Complex sentences. Quantifiers.
Examples. FOL knowledge base. Situation calculus.

• 13-First-order logic 2.
[AIMA Ch 7] Describing actions.
Planning. Action sequences.

Course Overview (cont.)

Representing and Organizing Knowledge

• 14/15-Building a knowledge base. [AIMA Ch 8] Knowledge bases. Vocabulary
  and rules. Ontologies. Organizing knowledge.
  (Figure: an ontology for the sports domain; Khan & McLeod, 2000.)

Course Overview (cont.)

Reasoning Logically

• 16/17/18-Inference in first-order logic. [AIMA Ch 9] Proofs. Unification.
  Generalized modus ponens. Forward and backward chaining.
  (Figure: example of backward chaining. A unification sketch follows below.)
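Here is a minimal sketch of the unification step used by generalized modus ponens; representing variables as lowercase strings and compound terms as tuples is an assumption for this toy, and the occurs check is omitted for brevity.

```python
def is_var(t):
    """Variables are lowercase strings in this toy representation."""
    return isinstance(t, str) and t[:1].islower()

def substitute(t, subst):
    """Apply a substitution to a term."""
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return tuple(substitute(arg, subst) for arg in t)
    return t

def unify(x, y, subst=None):
    """Return a most general unifier of x and y, or None if they do not unify."""
    if subst is None:
        subst = {}
    x, y = substitute(x, subst), substitute(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

# Knows(John, x) unified with Knows(y, Mother(y)):
print(unify(("Knows", "John", "x"), ("Knows", "y", ("Mother", "y"))))
# -> {'y': 'John', 'x': ('Mother', 'John')}
```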

Course Overview (cont.)

Examples of Logical Reasoning Systems

• 19-Logical reasoning systems.


[AIMA Ch 10] Indexing, retrieval
and unification. The Prolog language.
Theorem provers. Frame systems
and semantic networks.

(Figure: semantic network used in an insight generator, Duke University.)
Course Overview (cont.)

Logical Reasoning in the Presence of Uncertainty

• 20/21-Fuzzy logic. [Handout] Introduction to fuzzy logic. Linguistic hedges.
  Fuzzy inference. Examples.
  (Figure: defuzzification by center of gravity and by center of largest area.
  A defuzzification sketch follows below.)
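To illustrate center-of-gravity defuzzification, here is a minimal sketch over a sampled output membership function; the triangular membership shape and the 0–10 domain are assumptions chosen for the example.

```python
def centroid_defuzzify(xs, mu):
    """Center-of-gravity defuzzification: weighted average of the output domain
    `xs`, weighted by the aggregated membership degrees `mu`."""
    num = sum(x * m for x, m in zip(xs, mu))
    den = sum(mu)
    return num / den if den else None

# Example: a triangular fuzzy set peaking at 5 on the domain 0..10.
xs = [i * 0.1 for i in range(101)]
mu = [max(0.0, 1.0 - abs(x - 5.0) / 2.5) for x in xs]
print(round(centroid_defuzzify(xs, mu), 2))   # ~5.0 (the set is symmetric about 5)
```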

Course Overview (cont.)

Systems that can Plan Future Behavior

• 22/23-Planning. [AIMA Ch 11] Definition and goals. Basic


representations for planning. Situation space and plan
space. Examples.

Course Overview (cont.)

Expert Systems

• 24-Expert systems 1. [handout] What are expert systems? Applications.
  Pitfalls and difficulties. Rule-based systems. Comparison to traditional
  programs. Building expert systems. Production rules. Antecedent matching.
  Execution. Control mechanisms. (A small production-rule sketch in Python
  follows below.)

• 25-Expert systems 2. [handout]


Overview of modern rule-based
expert systems. Introduction to
CLIPS (C Language Integrated
Production System). Rules.
Wildcards. Pattern matching.
Pattern network. Join network.
CLIPS expert system shell
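To make the production-rule idea concrete, here is a minimal forward-chaining production system in Python: rules whose antecedents are all present in working memory fire and add their consequent, until nothing new fires. It is an illustrative toy in the spirit of CLIPS, with made-up facts; it is not CLIPS syntax or course code.

```python
# Each rule is (set of antecedent facts, consequent fact) -- hypothetical example facts.
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Fire rules whose antecedents are all in working memory until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)   # "execute" the rule
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
```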
Course Overview (cont.)

What challenges remain?

• 26/27-Towards intelligent machines. [AIMA Ch 25] The challenge of robots:
  with what we have learned, what hard problems remain to be solved? Different
  types of robots. Tasks that robots are used for. Parts of robots.
  Architectures. Configuration spaces. Navigation and motion planning. Towards
  highly capable robots.
• 28-Overview and summary. [all of the above] What have we learned? Where do
  we go from here?

robotics@USC


A driving example: Beobots

• Goal: build robots that can operate in unconstrained


environments and that can solve a wide variety of tasks.

Beowulf + robot = “Beobot”
A driving example: Beobots

• Goal: build robots that can operate in unconstrained


environments and that can solve a wide variety of tasks.

• We have:
• Lots of CPU power
• Prototype robotics platform
• Visual system to find interesting objects in the world
• Visual system to recognize/identify some of these objects
• Visual system to know the type of scenery the robot is in

• We need to:
• Build an internal representation of the world
• Understand what the user wants
• Act upon user requests / solve user problems

The basic components of vision

(Figure: basic vision pipeline: original, downscaled, and segmented input;
scene layout & gist; localized object recognition; attention.
Riesenhuber & Poggio, Nat Neurosci, 1999.)
Beowulf + Robot = “Beobot”

Main challenge: extract the “minimal subscene” (i.e., small
number of objects and actions) that is relevant to present
behavior from the noisy attentional scanpaths.

Achieve a representation for it that is robust and stable against noise,
world motion, and egomotion.
Prototype

Stripped-down version of the proposed general system, for a simplified goal:
drive around the USC Olympic track, avoiding obstacles.

Operates at 30 fps on a quad-CPU Beobot.

Layout & saliency are very robust.

Object recognition is often confused by background clutter.

Major issues

• How to represent knowledge about the world?

• How to react to new perceived events?


• How to integrate new percepts to past experience?

• How to understand the user?


• How to optimize balance between user goals & environment
constraints?
• How to use reasoning to decide on the best course of action?
• How to communicate back with the user?

• How to plan ahead?


• How to learn from experience?

General architecture

Ontology

Khan & McLeod, 2000
The task-relevance map

Scalar topographic map, with higher values at more relevant locations

More formally: how do we do it?

- Use ontology to describe categories, objects and relationships:
  Either with unary predicates, e.g., Human(John),
  Or with reified categories, e.g., John ∈ Humans,
  And with rules that express relationships or properties,
  e.g., ∀x Human(x) ⇒ SinglePiece(x) ∧ Mobile(x) ∧ Deformable(x)

- Use ontology to expand concepts to related concepts:
  E.g., parsing a question yields “LookFor(catching)”
  Assume a category HandActions and a taxonomy defined by
  catching ∈ HandActions, grasping ∈ HandActions, etc.
  We can expand “LookFor(catching)” to looking for other actions in
  the category where catching belongs through a simple expansion rule:
  ∀a,b,c  a ∈ c ∧ b ∈ c ∧ LookFor(a) ⇒ LookFor(b)
  (A small Python sketch of this expansion follows below.)
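A minimal Python sketch of the expansion rule above: given category memberships, LookFor(catching) expands to LookFor(b) for every other member b of a category that contains catching. The taxonomy below uses the slide's HandActions example, with "throwing" and "walking" added as hypothetical extra entries.

```python
# Taxonomy: action -> set of categories it belongs to.
taxonomy = {
    "catching": {"HandActions"},
    "grasping": {"HandActions"},
    "throwing": {"HandActions"},   # hypothetical extra member
    "walking":  {"LegActions"},    # hypothetical, should not be expanded to
}

def expand_lookfor(targets, taxonomy):
    """Apply: for all a, b, c:  a ∈ c  and  b ∈ c  and  LookFor(a)  =>  LookFor(b)."""
    expanded = set(targets)
    for a in targets:
        for b, categories in taxonomy.items():
            if categories & taxonomy.get(a, set()):   # a and b share a category
                expanded.add(b)
    return expanded

print(expand_lookfor({"catching"}, taxonomy))   # catching, grasping, throwing
```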

Outlook

• AI is a very exciting area right now.

• This course will teach you the foundations.

• In addition, we will use the Beobot example to reflect on


how this foundation could be put to work in a large-scale,
real system.
