

Artificial Intelligence and Robotics


Michael Brady
The Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.

Recommended by Daniel G. Bobrow

Artificial Intelligence 26 (1985) 79-121
0004-3702/85/$3.30 © 1985, Elsevier Science Publishers B.V. (North-Holland)

ABSTRACT
Since Robotics is the field concerned with the connection of perception to action, Artificial Intelligence
must have a central role in Robotics if the connection is to be intelligent. Artificial Intelligence
addresses the crucial questions of: what knowledge is required in any aspect of thinking; how should
that knowledge be represented; and how should that knowledge be used. Robotics challenges AI by
forcing it to deal with real objects in the real world. Techniques and representations developed for
purely cognitive problems, often in toy domains, do not necessarily extend to meet the challenge.
Robots combine mechanical effectors, sensors, and computers. AI has made significant contributions
to each component. We review AI contributions to perception and reasoning about physical objects.
Such reasoning concerns space, path-planning, uncertainty, and compliance. We conclude with three
examples that illustrate the kinds of reasoning or problem-solving abilities we would like to endow
robots with and that we believe are worthy goals of both Robotics and Artificial Intelligence, being
within reach of both.

1. Robotics and Artificial Intelligence


Robotics is the intelligent connection of perception to action. The key words in
that sentence are 'intelligent' and its concomitant 'perception'. Normally Robotics
is thought of as simply the connection of sensing to action using computers.
The typical sensing modalities of current robots include vision, force and tactile
sensing, as well as proprioceptive sensing of the robot's internal state. The
capacity for action is provided by arms, grippers, wheels, and, occasionally,
legs.
The software of modern, commercially available, robot systems such as the
IBM 7565 [128], the Unimation PUMA [135, 143], and the Automatix cybervision
[46, 144] includes a wide variety of functions: it performs trajectory
calculation and kinematic translation, interprets sense data, executes adaptive
control through conditional execution and real-time monitors, interfaces to
databases of geometric models, and supports program development. It does
some of these tasks quite well, particularly those that pertain to Computer
Science; it does others quite poorly, particularly perception, object modelling,
and spatial reasoning.
The intelligent connection of perception to action replaces sensing by per-
ception, and software by intelligent software. Perception differs from sensing or
classification in that it implies the construction of representations that are the
basis for recognition, reasoning and action. Intelligent software addresses issues
such as: spatial reasoning, dealing with uncertainty, geometric reasoning,
compliance, and learning. Intelligence, including the ability to reason and learn
about objects and manufacturing processes, holds the key to more versatile
robots.
Insofar as Robotics is the intelligent connection of perception to action,
Artificial Intelligence (AI) is the challenge for Robotics. Conversely, we shall
argue that Robotics severely challenges AI by forcing it to deal with real
objects in the real world.
Techniques and representations developed for purely cognitive problems often
do not extend to meet the challenge.
First, we discuss the need for intelligent robots and we show why Robotics
poses severe challenges for Artificial Intelligence. Then we consider what is
required for robots to act on their environment. This is the domain of
kinematics and dynamics, control, innovative robot arms, multi-fingered hands,
and mobile robots. In Section 5, we turn attention to intelligent software,
focussing upon spatial reasoning, dealing with uncertainty, geometric reason-
ing, and learning. In Section 6, we discuss robot perception. Finally, in Section
7, we present some examples of reasoning that connects perception to action,
example reasoning that no robot is currently capable of. We include it because
it illustrates the reasoning and problem-solving abilities we would like to
endow robots with and that we believe are worthy goals of Robotics and
Artificial Intelligence, being within reach of both.

2. The Need for Intelligent Robots


Where is the need for intelligent robots? Current (unintelligent) robots work
fine so long as they are applied to simple tasks in almost predictable situations:
parts of the correct type are presented in positions and orientations that hardly
vary, and little dexterity is required for successful completion of the task. The
huge commercial successes of robot automation have been of this sort: parts
transfer (including palletizing and packaging), spot welding, and spray painting.
Automation has been aimed largely at highly repetitive processes such as these
in major industrial plants.
But to control the robot's environment sufficiently, it is typically necessary to
erect elaborate fixtures. Often, the set-up costs associated with designing and
installing fixtures and jigs dominate the cost of a robot application. Worse,
elaborate fixturing is often not transferable to a subsequent task, reducing the
flexibility and adaptability that are supposedly the key advantages of robots.
Sensing is one way to loosen up the environmental requirements; but the
sensing systems of current industrial robots are mostly restricted to two-
dimensional binary vision. Industrial applications requiring compliance, such as
assembly, seam welding and surface finishing, have clearly revealed the in-
abilities of current robots. Research prototypes have explored the use of
three-dimensional vision, force and proximity sensors, and geometric models of
objects [30, 41, 103, 117, 118, 141]. Other applications expose the limitations of
robots even more clearly. The environment cannot be controlled for most
military applications, including smart sentries, autonomous ordnance disposal,
autonomous recovery of men and materiel, and, perhaps most difficult of all,
autonomous navigation.

3. Robotics Part of Artificial Intelligence


Artificial Intelligence (AI) is the field that aims to understand how computers
can be made to exhibit intelligence. In any aspect of thinking, whether
reasoning, perception, or action, the crucial questions are:
(1) What knowledge is needed. The knowledge needed for reasoning in
relatively formalized and circumscribed domains such as symbolic mathematics
and game playing is well known. Highly competent programs have been
developed in such domains. It has proven remarkably difficult to get experts to
precisely articulate their knowledge, and hence to develop programs with
similar expertise, in medicine, evaluating prospective mining sites, or configur-
ing computers (see [62, 99, 149] for a discussion of expert systems, and accounts
of the difficulty of teasing knowledge out of experts). Among the many severe
inadequacies of the current crop of expert systems is the fact that they usually
have limited contact with the real world. Human experts perform the necessary
perceptual preprocessing, telling MYCIN, for example, that the patient is "febrile
0.8". Moving from the restricted domain of the expert to the unrestricted
world of everyday experience, determining what knowledge is needed is a
major step toward modelling stereo vision, achieving biped walking and
dynamic balance, and reasoning about mechanisms and space. What do you
need to know in order to catch a ball?
(2) Representing knowledge. A key contribution of AI is the observation that
knowledge should be represented explicitly, not heavily encoded, for example
numerically, in ways that suppress structure and constraint. A given body of
knowledge is used in many ways in thinking. Conventional data structures are
tuned to a single set of processes for access and modification, and this renders
them too inflexible for use in thinking. AI has developed a set of techniques
such as semantic networks, frames, and production rules, that are symbolic,
highly flexible encodings of knowledge, yet which can be efficiently processed.

Robotics needs to deal with the real world, and to do this it needs detailed
geometric models. Perception systems need to produce geometric models;
reasoning systems must base their deliberations on such models; and action
systems need to interpret them. Computer-aided design (CAD) has been
concerned with highly restricted uses of geometric information, typically dis-
play and numerically controlled cutting. Representations incorporated into
current CAD systems are analogous to conventional data structures. In order
to connect perception, through reasoning, to action, richer representations of
geometry are needed. Steps toward such richer representations can be found in
configuration space [88, 89], generalized cones [9], and visual shape represen-
tations [13, 70, 74, 93].
As well as geometry, Robotics needs to represent forces, causation, and
uncertainty. We know how much force to apply to an object in an assembly to
mate parts without wedging or jamming [147]. We know that pushing too hard
on a surface can damage it; but that not pushing hard enough can be ineffective
for scribing, polishing, or fettling. In certain circumstances, we understand how
an object will move if we push it [98]. We know that the magnitude and
direction of an applied force can be changed by pulleys, gears, levers, and
cams.
We understand the way things such as zip fasteners, pencil sharpeners, and
automobile engines work. The spring in a watch stores energy, which is
released to a flywheel, causing it to rotate; this causes the hands of the watch to
rotate by a smaller amount determined by the ratios of the gear linkages.
Representing such knowledge is not simply a matter of developing the ap-
propriate mathematical laws. Differential equations, for example, are a
representation of knowledge that, while extremely useful, are still highly
limited. Forbus [45] points out that conventional mathematical representations
do not encourage qualitative reasoning; instead, they invite numerical simula-
tion. Though useful, this falls far short of the qualitative reasoning that people
are good at. Artificial Intelligence research on qualitative reasoning and naive
physics has made a promising start but has yet to make contact with the real
world, so the representations and reasoning processes it suggests have barely
been tested [11, 32, 45, 66, 150].
Robotics needs to represent uncertainty, so that reasoning can successfully
overcome it. There are bounds on the accuracy of robot joints; feeders and
sensors have errors; and though we talk about repetitive work, no two parts are
ever exactly alike. As the tolerances on robot applications become tighter, the
need to deal with uncertainty, and to exploit redundancy, becomes greater.
(3) Using knowledge. AI has also uncovered techniques for using knowledge
effectively. One problem is that the knowledge needed in any particular case
cannot be predicted in advance. Programs have to respond flexibly to a
non-deterministic world. Among the techniques offered by AI are search,
structure matching, constraint propagation, and dependency-directed reason-
ing. One approach to constraint propagation is being developed in models of
perception by Terzopoulos [140] and Zucker, Hummel and Rosenfeld [155]. Another
has been developed by Brooks [21, 22], building on earlier work in theorem
proving. The application of search to Robotics has been developed by Goto et
al. [51], Lozano-Pérez [88], Gaston and Lozano-Pérez [47], Grimson and
Lozano-Pérez [53], and Brooks [24]. Structure matching in Robotics has been
developed by Winston, Binford, Katz, and Lowry [150].
To be intelligent, Robotics programs need to be able to plan actions and
reason about those plans. Surely AI has developed the required planning
technology? Unfortunately, it seems that most, if not all, current proposals for
planning and reasoning developed in AI require significant extension before
they can begin to tackle the problems that typically arise in Robotics, some of
which are discussed in Section 5. One reason for this is that reasoning and
planning has been developed largely in conjunction with purely cognitive
representations, and these have mostly been abstract and idealized. Proposals
for knowledge representation have rarely been constrained by the need to
support actions by a notoriously inexact manipulator, or to be produced by a
perceptual system with no human preprocessing. ACRONYM [21, 159] is an
exception to this criticism. Another reason is that to be useful for Robotics, a
representation must be able to deal with the vagaries of the real world, its
geometry, inexactness, and noise. All too often, AI planning and reasoning
systems have only been exercised on a handful of toy examples.
In summary, Robotics challenges AI by forcing it to deal with real objects in
the real world. Techniques and representations developed for purely cognitive
problems, often in toy domains, do not necessarily extend to meet the chal-
lenge.

4. Action
In this section, we consider what is required for robots to act on their
environment. This is the subject of kinematics and dynamics, control, robot
arms, multi-fingered hands, and locomoting robots.

4.1. Kinematics, dynamics, and arm design


The kinematics of robot arms is one of the better understood areas of Robotics
[111, 157]. The need for kinematic transformations arises because programmers
prefer a different representation of the space of configurations of a robot than
that which is most natural and efficient for control. Robots are powered by
motors at the joints between links. Associated with a motor are quantities that
define its position, velocity, acceleration, and torque. For a rotary motor, these
are angular positions, angular velocities, etc. It is most efficient to control robots
in joint space. However, programmers prefer to think of positions using
orthogonal, cylindrical, or spherical Cartesian coordinate frames, according to
the task. Six degrees of freedom (DOF) are required to define the position and
orientation of an object in space. Correspondingly, many robots have six joint
motors to achieve these freedoms. Converting between the joint positions,
velocities, and accelerations and the Cartesian (task) counterparts is the job of
kinematics. The conversion is an identity transformation between the joint
space of 'Cartesian' arms (such as the IBM 7565) and orthogonal (x, y, z)
Cartesian space. Cartesian arms suffer the disadvantage of being less able to
reach around and into objects. Kinematic transformations are still needed for
spherical or cylindrical Cartesian coordinates.
The kinematics of a mechanical device are defined mathematically. The
requirement that the kinematics can be efficiently computed adds constraint,
that ultimately affects mechanical design. In general, the transformation from
joint coordinates to Cartesian coordinates is straightforward. Various efficient
algorithms have been developed, including recent recursive schemes whose
time complexity is linear in the number of joints. Hollerbach [67] discusses
such recursive methods for computing the kinematics for both the Lagrange
and Newton-Euler formulations of the dynamics. The inverse kinematics
computation, from Cartesian to joint coordinates, is often more complex. In
general, it does not have a closed form solution [114]. Pieper [114] (see also
[115]) showed that a 'spherical' wrist with three intersecting axes of rotation
leads to an exact analytic solution to the inverse kinematic equations. The
spherical wrist allows a decomposition of the typical six-degree-of-freedom
inverse kinematics into two three-degree-of-freedom computations, one to
compute the position of the wrist, the other to compute the orientation of the
hand. More recently, Paul [111], Paul, Stevenson and Renaud [113], Feather-
stone [44], and Hollerbach and Sahar [68], have developed efficient techniques
for computing the inverse kinematics for spherical wrists. Small changes in the
joints of a robot give rise to small changes in the Cartesian position of its end
effector. The small changes in the two coordinate systems are related by a
matrix that is called the Jacobian. Orin and Schrader [109] have investigated
algorithms for computing the Jacobian of the kinematic transformation that are
suited to VLSI implementation.
If the number of robot joints is equal to six, there are singularities in the
kinematics, that is, a small change in Cartesian configuration corresponds to a
large change in joint configuration. The singularities of six-degree-of-freedom
industrial robot arms are well cataloged. Singularities can be avoided by
increasing the number n of joints, but then there are infinitely many solutions
to the inverse kinematics computation. One approach is to use a generalized
inverse technique, weighted by a positive definite matrix, to find the solution that
minimizes some suitable quantity such as energy or time [78, 147]. Another
approach is to avoid singularities by switching between the redundant degrees
of freedom [164]. Finally, if the number of joints is less than six, there are
'holes' in the workspace, regions that the robot cannot reach. Such robots,
including the SCARA design, are nevertheless adequate for many specialized
tasks such as pick-and-place operations. One important application of kinema-
tics computations is in automatic planning of trajectories [16].
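For the redundant case (n greater than six), the generalized inverse idea can be sketched in a few lines. Here, as an assumed toy setting, a three-joint planar arm performs a two-dimensional positioning task; the Moore-Penrose pseudoinverse picks the minimum-norm joint change, one instance of the 'minimize some suitable quantity' criterion mentioned above.

```python
# Sketch: redundancy resolution with the Moore-Penrose generalized
# inverse. A three-joint planar arm performs a 2-D positioning task
# (one redundant freedom); pinv(J) @ dx is the minimum-norm joint step
# achieving the Cartesian step dx.
import numpy as np

L = [0.4, 0.3, 0.2]   # link lengths (m)

def jacobian(q):
    theta = np.cumsum(q)          # absolute link angles
    J = np.zeros((2, 3))
    for j in range(3):
        for i in range(j, 3):
            J[0, j] -= L[i] * np.sin(theta[i])
            J[1, j] += L[i] * np.cos(theta[i])
    return J

q = np.array([0.2, 0.4, 0.3])
dx = np.array([0.005, -0.002])             # commanded Cartesian step
dq = np.linalg.pinv(jacobian(q)) @ dx      # minimum-norm joint step
print(dq, jacobian(q) @ dq)                # second print recovers dx
```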
Most attention has centered on open kinematic chains such as robot arms.
An 'open' kinematic chain consists of a single sequence of links. Its analysis is
reasonably straightforward. Much less work has been done on closed kinematic
chains such as legged robots or multi-fingered hands. Although, in theory, the
kinematic chain closes when the robot grasps an object lying on a work surface,
it is usually (almost) at rest, and the kinematics of the closed linkage are
ignored. More interestingly, Hirose et al. [65] have designed a pantograph
mechanism for a quadruped robot that significantly reduces potential energy
loss in walking. Salisbury and Craig [132] (see also [129]) have used a number
of computational constraints, including mobility and optimization of finger
placement, to design a three-fingered hand. The accuracy and dexterity of a
robot varies with configuration, so attention needs to be paid to the layout of
the workspace. Salisbury and Craig [132] used the condition number of the
Jacobian matrix (using the row norm) to evaluate configurations of the hand.
This is important because the accuracy and strength of a robot varies
throughout its workspace, and the condition number provides a means of
evaluating different points. Yoshikawa [153] has introduced a measure of
manipulability for a similar purpose. Attend to some object in your field of
view, and consider the task of moving it to a different position and orientation.
The movement can be effected by translating along the line joining the
positions while rotating about some axis. The simultaneous translation and
rotation of an object is called a screw, and screw coordinates have been
developed as a tool for the analysis of such motions. Roth [127] reviews the
application to Robotics of screw coordinates to link kinematics and dynamics.
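Both configuration-quality measures mentioned above are simple functions of the Jacobian. The sketch below is ours: it uses the 2-norm (singular-value) condition number where Salisbury and Craig used the row norm, together with Yoshikawa's manipulability measure.

```python
# Sketch: two configuration-quality measures computed from the
# Jacobian J. We use the 2-norm (singular-value) condition number,
# where Salisbury and Craig used the row norm, and Yoshikawa's
# manipulability w = sqrt(det(J J^T)); both degrade near singularities.
import numpy as np

def condition_number(J):
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / s[-1]           # grows without bound near a singularity

def manipulability(J):
    return np.sqrt(np.linalg.det(J @ J.T))   # falls to zero at one

J = np.array([[-0.5, -0.2],
              [ 0.6,  0.1]])
print(condition_number(J), manipulability(J))
```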
The dynamic equations of a robot arm (see Hollerbach [67]) consist of n
coupled, second-order, differential equations in the positions, velocities, and
accelerations of the joint variables. The equations are complex because they
involve terms from two adjacent joints, corresponding to reaction and Coriolis
torques. One way to visualize the effect of such torques is by moving your arm,
for example opening a door or cutting with a knife. The motion of the forearm
relative to the wrist generates a force not only at the elbow but, by the laws of
Newtonian mechanics, at the shoulder. The faster you move your forearm, the
faster you accelerate it, or the heavier the knife that you use, the larger the
resulting torque at the shoulder. Conventional techniques have simplified
dynamics by dropping or linearizing terms, or have proposed table look-up
techniques. Recently, 'recursive' recurrence formulations of the dynamic equa-
tions have been developed that:
(a) compute the kinematics from the shoulder to the hand in time propor-
tional to n;
(b) compute the inverse dynamics, propagating the force and torque exerted
on the hand by the world back from the hand to the shoulder, again in time
proportional to n.
The importance of this result is threefold:
(1) First, it suggests that a more accurate inverse plant model can be
developed, leading to faster, more accurate arms. Friction is a major source of
the discrepancy between model and real world. Direct-drive technology [4-6]
reduces the mismatch. In a direct-drive arm, a motor is directly connected to a
joint with no intervening transmission elements, such as gears, chains, or ball
screws. The advantages are that friction and backlash are low, so the direct-
drive joint is backdrivable. This means that it can be controlled using torque
instead of position. Torque control is important for achieving compliance, and
for feedforward dynamics compensation.
(2) Second, the recurrence structure of the equations lends itself to im-
plementation using a pipelined microprocessor architecture, cutting down
substantially on the number of wires that are threaded through the innards of a
modern robot. On current robots, a separate wire connects each joint motor to
the central controller; individual control signals are sent to each motor. The
wires need to thread around the joints, and the result is like a pan of spaghetti.
(3) Third, Hollerbach and Sahar [68] have shown that their refinement of
Featherstone's technique for computing the inverse kinematics makes available
many of the terms needed for the recursive Newton-Euler dynamics.
Renaud [124] has developed a novel iterative Lagrangian scheme that
requires about 350 additions and 350 multiplies for a six-revolute joint robot
arm. The method has been applied to manipulators having a general tree
structure of revolute and prismatic joints. Huston and Kelly [72] and Kane and
Levinson [80] have recently adapted Kane's formulation of dynamics to robot
structures.

4.2. Control
Much of control theory has developed for slowly changing, nearly rigid
systems. The challenges of robot control are several:
- Complex dynamics. The dynamics of open-link kinematic chain robots consist
of n coupled second-order differential equations, where n is the number
of links. They become even more complex for a closed multi-manipulator
system such as a multi-fingered robot hand or locomoting robot.
- Articulated structure. The links of a robot arm are cascaded, and the dynamics
and inertias depend on the configuration.
- Discontinuous change. The parameters that are to be controlled change
discontinuously when, as often happens, the robot picks an object up.
- Range of motions. To a first approximation one can identify several different
kinds of robot motion: free-space or gross motions, between places where work
is to be done; approach motions (guarded moves) to a surface; and compliant
moves along a constraint surface. Each of these different kinds of motion poses
different control problems.
The majority of industrial robot controllers are open-loop. However, many
control designs have been investigated in Robotics; representative samples are
to be found in [111, 157, 158]. They include optimal controllers [36, 78]; model
reference control [37]; sliding mode control [154]; nonlinear control [160, 161];
hierarchical control [132]; distributed control [81]; hybrid force-position control
[82, 120]; and integrated system control [2]. Cannon and Schmitz [28] have
investigated the precise control of flexible manipulators.
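As one concrete instance of the model-based designs listed above, here is a sketch of the widely used computed-torque scheme: an inverse-dynamics feedforward wrapped around a joint-space PD servo. The gains are illustrative, and inverse_dynamics stands for a model such as the one sketched in Section 4.1.

```python
# Sketch: the computed-torque scheme -- inverse-dynamics feedforward
# around a joint-space PD servo. With a perfect model the closed loop
# reduces to decoupled linear error dynamics. Gains are illustrative;
# inverse_dynamics is a model like the one sketched in Section 4.1.
import numpy as np

Kp = np.diag([100.0, 100.0])   # proportional (stiffness) gains
Kd = np.diag([20.0, 20.0])     # derivative (damping) gains

def computed_torque(q, qd, q_des, qd_des, qdd_des, inverse_dynamics):
    qdd_ref = qdd_des + Kd @ (qd_des - qd) + Kp @ (q_des - q)
    return inverse_dynamics(q, qd, qdd_ref)
```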

4.3. End effectors


Industrial uses of robots typically involve a multi-purpose robot arm and an
end effector that is specialized to a particular application. End effectors
normally have a single degree of freedom [39]: parallel jaw grippers, suction
cup, spatula, 'sticky' hand, or hook. The algorithms for using such grippers are
correspondingly simple. Paul's [163] (see also [138]) centering grasp algorithm
is one of the more advanced examples. Many end effectors have no built-in
sensors. Those that do typically incorporate devices that give a single bit of
information. The most common are contact switches and infra-red beams to
determine when the end effector is spanning some object. The IBM 7565 is one
of the few commercially available robot arms that incorporates force sensing
and provides linguistic support for it.
Many tasks, particularly those related to assembly, require a variety of
capabilities, such as parts handling, insertion [147], screwing, as well as fixtures
that vary from task to task. One approach is to use one arm but multiple
single-DOF grippers, or multiple arms each with a single DOF, or some
combination. One problem with using multiple single-DOF grippers is that a
large percentage of the work cycle is spent changing grippers. This has inspired
research on the mechanical designs that support fast gripper change. Another
problem is that the approach assumes that a process can be divided into a
discrete set of single-DOF operations.
Multiple arms raise the problem of coordinating their motion while avoiding
collision and without cluttering the workspace. The coordination of two arms
was illustrated at the Stanford Artificial Intelligence Laboratory in 1972 when
two arms combined to install a hinge. One of the arms performed the
installation, the other acted as a programmable fixture, presenting the hinges
and other parts to the work arm. Freund [161] has presented a control scheme
for preventing collisions between a pair of cooperating robots. Whenever there is
a possibility of a collision, one of the arms is assigned master status and the
other one has to modify its trajectory to avoid the master.
In contrast with such end effectors, a human hand has a remarkable range of
functions. The fingers can be considered to be sensor-intensive 3- or 4-DOF
robot arms. The motions of the individual fingers are limited to curl and flex
motions in a plane that is determined by the abduction/adduction of the finger
about the joint with the palm. The motions of the fingers are coordinated by
the palm, which can assume a broad range of configurations. The dexterity of
the human hand has inspired several researchers to build multi-function robot
hands.

FIG. 1. The three-fingered robot hand developed by Salisbury and Craig [132]. Each finger has
three degrees of freedom, and is pulled by four tendons. The hierarchical controller includes three
finger controllers, each of which consists of four controllers, one per tendon. Each controller is of
the Proportional-Integral-Derivative type. (Reproduced from [129].)
Okada [107] described a hand consisting of three fingers evenly spaced about
a planar palm. The workspace of the individual fingers was an ellipsoid. The
three workspaces intersected in a point. Okada programmed the hand to
perform several tasks such as tighten bolts. Hanafusa and Asada [55, 56]
developed a hand consisting of three evenly spaced, spring-loaded fingers. The
real and apparent spring constants of the fingers were under program control.
Stable grasps were defined as the minima of a potential function. The definition
of stability in two dimensions was demonstrated by programming the hand to
pick up an arbitrarily shaped lamina viewed by a TV camera.
Salisbury [129] (see also [132]) investigated kinematic and force constraints on
the design of a tendon-driven three-fingered hand (see Fig. 1). The goal was to
design a hand that could impress an arbitrary (vector) force in an arbitrary
position of the hand's workspace. Four of the parameters defining the place-
ment of the thumb were determined by solving a series of one-dimensional
nonlinear programming problems. A hierarchical controller was designed: PID
controllers for each tendon; four such for each finger; and three such for the
hand. To date, position and force controllers have been implemented for the
individual fingers. A force sensing palm has recently been developed [131]. It
can determine certain attributes of contact geometries. The force sensing
fingertips being developed for the hand will permit accurate sensing of contact
locations and surface orientations. This information is likely to be useful in
object recognition strategies and in improving the sensing and control of
contact forces.
The Center for Biomedical Design at the University of Utah and the MIT
Artificial Intelligence Laboratory are developing a tendon-operated, multiple-
DOF robot hand with multi-channel touch sensing. The hand that is currently
being built consists of three 4-DOF fingers, a 4-DOF thumb, and a 3-DOF
wrist (total 19 DOF). Three fingers suffice for static stable grasp. The Utah/MIT
design incorporated a fourth finger to minimize reliance on friction and to
increase flexibility in grasping tasks. The hand features novel tendon material
and tendon routing geometry (Fig. 2).


FIG. 2. The prototype Utah/MIT dextrous hand developed by Steven Jacobsen, John Wood, and
John Hollerbach. The four fingers each have four degrees of freedom. (a) The geometry of tendon
routing. (b) The material composition of tendons. (Reproduced from [77, Figs. 2(b) and 7].)

4.4. Mobile robots


Certain tasks are difficult or impossible to perform in the workspace of a static
robot arm [48]. In large-scale assembly industries, such as shipyards or
automobile assembly lines, it is common to find parts being transferred along
gantries that consist of one or more degrees of linear freedom. Correspond-
ingly, there have been several instances of robot arms being mounted on rails
to extend their workspace. The rail physically constrains the motion of the
robot. More generally, the robot can be programmed to follow a path by
locally sensing it. Magnetic strips, and black strips sensed by infra-red sensing
linear arrays, have been used, for example in the Fiat plant in Turin, Italy.
More ambitious projects have used (non-local) vision and range data for
autonomous navigation. Space and military applications require considerable
autonomy for navigation, planning, and perception. Mobile robots are complex
systems that incorporate perceptual, navigation, and planning subsystems.
Shakey [104] was one of the earliest mobile robots, and certainly one of the
most ambitious system integration efforts of its time. Later work on mobile
robots includes [33, 40, 49, 50, 60, 61, 83, 84, 100-102].
All the robots referred to previously in this section are wheeled. This
restricts their movement to (nearly) even terrain. Legged vehicles can poten-
tially escape that limitation. The price to be paid for this advantage is the extra
complexity of maintaining balance and controlling motion. Following the
photographic studies of Muybridge and Marey in the late 19th century, a
theory of locomotion developed around the concept of gait, the pattern of foot
placements and foot support duty cycles. The Ohio State University hexapod,
for example, has been programmed to use either an alternating tripod of
support or 'wave' gait, in which a left-right pair of legs is lifted and advanced
(Fig. 3) [110]. Hirose's [65] quadruped robot (Fig. 4) employs a 'crab' gait, in
which the direction of motion is at an angle to the direction the quadruped is
facing.
Two generic problems in legged locomotion are moving over uneven terrain
and achieving dynamic balance without a static pyramid of support.
The simplest walking machines employ a predetermined, fixed gait. Uneven
terrain requires dynamic determination of foot placement, implying variable
gait. Hirose [65] and Ozguner, Tsai, and McGhee [110] analyze the constraint
of balance, and use sensors in their choice of foot placement.
Miura and Shimoyama [162] and Raibert et al. [121, 123] discuss the
dynamic requirements of balance. Miura and Shimoyama report a series of
biped walking machines. The first of these, BIPER-3 (Fig. 5), has stilt-like legs
with no ankle joint, and resembles a novice Nordic skier in its gait. BIPER-3
falls if both feet keep contact with the surface, so it must continue to step if it is
to maintain its balance. A more ambitious development, BIPER-4, shown in Fig. 6,
has knee and ankle joints. Stable walking of BIPER-4 has recently been demonstrated.
FIG. 3. The six-legged robot developed by McGhee, Orin, Klein, and their colleagues at the Ohio
State University. The robot uses either an alternating tripod of support or wave gait. (Reproduced
from [110, Fig. 1].)

FIG. 4. The quadruped robot built by Hirose, Umetani, and colleagues at Tokyo Institute of
Technology. The robot can walk upstairs, and can move forward in a crab gait. (Reproduced from
[65, Photo 1].)


FIG. 5. The BIPER-3 walking robot built by Miura and Shimoyama at Tokyo University. See text
for details. (Reproduced from [162, Fig. 1].)

FIG. 6. The BIPER-4 walking robot under development at Tokyo University. BIPER-4 has hip,
knee, and ankle joints on each leg. (Reproduced from [162, Fig. 14].)
FIG. 7. The hopping machine developed by Raibert and his colleagues at Carnegie Mellon
University. (Reproduced from [123, Fig. 16].)

Raibert considers balance for a three-dimensional hopping
machine (Fig. 7). He suggests that balance can be achieved by a planar
(two-dimensional) controller plus extra-planar compensation. His work sug-
gests that gait may not be as central to the theory of locomotion as has been
supposed. Instead, it may be a side-effect of achieving balance with coupled
oscillatory systems. Raibert [119] has organized a collection of papers on
legged robots that is representative of the state of the art.
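The gait concept is easy to state in code. Below is a toy scheduler (ours) for the alternating-tripod gait of a hexapod like the OSU machine: legs 0, 2, 4 and 1, 3, 5 take turns supporting, so a static pyramid of support is always present.

```python
# Toy gait scheduler: the alternating tripod of the OSU hexapod. Legs
# 0, 2, 4 and 1, 3, 5 alternate between supporting and recovering, so
# three legs always form a static pyramid of support.
def tripod_gait(step):
    tripod_a, tripod_b = (0, 2, 4), (1, 3, 5)
    if step % 2 == 0:
        return {"support": tripod_a, "swing": tripod_b}
    return {"support": tripod_b, "swing": tripod_a}

for step in range(4):
    print(step, tripod_gait(step))
```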

5. Reasoning about Objects and Space


5.1. Task-level robot programming languages
Earlier, we listed some of the software features of modern, commercially
available robot systems: they perform trajectory calculation and kinematic
translation, interpret sense data, execute adaptive control through conditional
execution and real-time monitors, interface to databases of geometric models,
and support program development. Despite these features, robot programming
is tedious, mostly because in currently available programming languages the
position and orientation of objects, and subobjects of objects, have to be
specified exactly in painful detail. 'Procedures' in current robot programming
languages can rarely even be parameterized, due to physical assumptions made
in the procedure design. Lozano-Pérez [90, 91] calls such programming
languages robot-level.
Lozano-Pérez [90, p. 839] suggests that "existing and proposed robot pro-
gramming systems fall into three broad categories: guiding systems in which
the user leads a robot through the motions to be performed, robot-level
programming systems in which the user writes a computer program specifying
motion and sensing, and task-level programming systems in which the user
specifies operations by their desired effect on objects." Languages such as VAL
II [135] and AML [138] are considered robot-level.
One of the earliest task-level robot programming language designs was
AUTOPASS [85]. The (unfinished) implementation focussed upon planning col-
lision-free paths among polyhedral objects. The emphasis of RAPT [3, 116] has
been on the specification of geometric goals and relational descriptions of
objects. The implementation of RAPT is based upon equation solving and
constraint propagation. Other approximations to task-level languages include
PADL [125], IBMsolid [146], and LAMA [87]. Lozano-Pérez [91] discusses spatial
reasoning and presents an example of the use of RAPT.
In the next section we discuss Brooks' work on uncertainty, several ap-
proaches to reasoning about space and avoiding obstacles, and synthesizing
compliant programs.

5.2. Dealing with uncertainty


Consider the problem illustrated in Fig. 8. A robot has been programmed to
put a screw in a hole. Will the program succeed? Each of the joint measure-
ments of the robot is subject to small errors, which produce errors in the
position and orientation of the finger tips according to the Jacobian of the
kinematics function. The position and orientation of the screwdriver in the
fingers is subject to slight error, as is the screw, box, and the lid on the box.
These errors, we will call them the base errors, are independent of the
particular task to be performed. They add up. Taylor [137] assumed particular
numerical bounds for the values of the base errors, and used linear program-
ming to bound the error in the placement of the screw relative to the hole.
Brooks [22] worked with explicit symbolic (trigonometric) expressions that
define the error in the placement of the screw relative to the hole. He applied
the expression-bounding program developed for the ACRONYM project [21] to
the base-error bounds used by Taylor to deduce bounds for the errors in the
placement of the screw relative to the hole. The bounds he obtained were not
as tight as those obtained by Taylor, but were nearly so.

FIG. 8. Will the screw make it into the hole? The joint measurements are all subject to small
errors, as are the placement of the box, the lid on the box, the orientation of the screwdriver in the
hand, and the orientation of the screw on the end of the screwdriver. When all these errors are
combined, it can be quite difficult to guarantee that a task will always be carried out successfully.

Brooks' approach had a substantial advantage over Taylor's, however, and it is
paradigmatic of the AI approach. The expression-bounding program can be
applied with equal facility to the symbolic expression for the error and the
desired size of the screw hole (the specifications of the insertion task). The
result is a bound on the only free variable of the problem, the length of the
screwdriver. The lesson is that it is possible to apply AI techniques to reason in
the face of uncertainty. In further work, Brooks [22] has shown how sensing
might be modeled using uncertainties to automatically determine when to
splice a sensing step into a plan to cause it to succeed.
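The flavor of the numerical bounding can be conveyed with one-dimensional worst-case intervals; this toy (ours) adds independent base errors and compares the total against a task tolerance. Taylor and Brooks worked with full spatial transforms, and Brooks with symbolic rather than numeric bounds.

```python
# Toy worst-case error propagation: independent 1-D base errors (mm)
# add, and the total is compared with the task tolerance. The real
# analyses bound full position-and-orientation errors, and Brooks
# bounded symbolic expressions rather than numbers.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __repr__(self):
        return f"[{self.lo:+.2f}, {self.hi:+.2f}]"

arm   = Interval(-0.5, 0.5)   # joint-derived hand placement error
grip  = Interval(-0.3, 0.3)   # screwdriver pose in the fingers
screw = Interval(-0.2, 0.2)   # screw pose on the blade
lid   = Interval(-0.4, 0.4)   # lid placement on the box

total = arm + grip + screw + lid          # the base errors add up
tolerance = 1.0                           # hole clearance, mm
ok = max(-total.lo, total.hi) <= tolerance
print(total, "guaranteed" if ok else "not guaranteed")
```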

5.3. Reasoning about space and avoiding objects


Robot-level programming languages require the programmer to state, for
example, that the robot is to move the block B from its configuration (position
and orientation) R_S to the configuration R_G. To ensure the robot does not
crash into obstacles, the usual practice in robot-level languages is to specify a
sufficient number of via points (Fig. 9) (see [16]). In a task-oriented language,
one merely says something like "put B in the vise". It follows that a crucial
component of implementing a task-oriented programming language is
automatically determining safe paths between configurations in the presence of
obstacles. This turns out to be an extremely hard problem.
FIG. 9. (a) Points P1, P2, and P3 are specified as via points to coax the robot through the narrow gap
separating the two obstacles shown. (b) Via points can also be used to simplify trajectory planning.
The two via points above the start and goal of the move assure that the robot hand will not attempt
to penetrate the table as it picks up and sets down the object. (Reproduced from [16, Fig. 2(a)].)
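Between via points, joint trajectories are commonly interpolated with cubic polynomials matched in position and velocity at segment boundaries; the sketch below (a standard construction, not taken from [16]) fits one such segment.

```python
# Sketch: a cubic joint-trajectory segment q(t) = a0 + a1 t + a2 t^2 + a3 t^3
# fitted to boundary positions and velocities over duration T. Chaining
# segments with matched via-point velocities gives smooth motion through
# via points. (Standard construction; values are illustrative.)
def cubic_segment(q0, q1, v0, v1, T):
    a0, a1 = q0, v0
    a2 = 3.0 * (q1 - q0) / T**2 - (2.0 * v0 + v1) / T
    a3 = -2.0 * (q1 - q0) / T**3 + (v0 + v1) / T**2
    return lambda t: a0 + a1 * t + a2 * t**2 + a3 * t**3

# One joint moving start -> via -> goal; the nonzero via velocity keeps
# the arm from halting at the via point.
seg1 = cubic_segment(0.0, 0.5, 0.0, 0.8, 1.0)   # start to via
seg2 = cubic_segment(0.5, 1.2, 0.8, 0.0, 1.0)   # via to goal
print(seg1(1.0), seg2(0.0))                     # both 0.5: continuous join
```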

Lozano-Pérez [89] introduced a representation called C-space that consists
of the safe configurations of a moving object. For a single object moving with
6 degrees of freedom (e.g., 3 translational and 3 rotational degrees of freedom),
the dimensionality of the C-space is six. If there are m such objects, each of
which can move, the dimensionality of C-space is 6m. For example, for the
coordinated motion of two 3D objects, C-space is twelve-dimensional. In
practice one can deal with 'slices', projections onto lower-dimensional sub-
spaces.
Donald [35] notes that there are two components of spatial planning systems.
First, it is necessary to represent the problem, in particular the obstacles.
Second, it is necessary to devise an algorithm that can search for paths over the
representation. Most work on spatial reasoning has used representations that
approximate the exact polyhedral obstacles. Such representations may (1)
restrict the degrees of freedom in a problem, (2) bound objects in real space by
simple objects such as spheres, or prisms with parallel axes, while considering
some subset of the available degrees of freedom, (3) quantize configuration
space at certain orientations, or (4) approximate swept volumes for objects
over a range of orientations. Systems that use such representations may not be
capable of finding solutions in some cases, even if they use a complete search
procedure. An approximation of the obstacle environment, robot model, or
C-space obstacles can result in a transformed find-path problem which has no
solution.
Lozano-Pérez [88, 89] implemented an approximate algorithm for Cartesian
manipulators (for which free space and C-space are the same) that tesselated
free space into rectangloids, subdividing it as far as necessary to solve a given
problem. The search algorithm is complete for translations, and illustrates the
feasibility of the C-space approach. It works by alternately keeping the heading
of an object fixed and rotating in place to alter the heading. Recently, Brooks
and Lozano-Pérez [25] reported an algorithm capable of moving a reorientable
polygon through two-dimensional space littered with polygons. This algorithm
can find any path of interest for the two-dimensional problem. Fig. 10 shows an
example path found by Brooks and Lozano-Pérez's program. Their attempts to
extend the method to three dimensions "were frustrated by the increased
complexity for three dimensional rotations relative to that of rotations in two
dimensions" [24, p. 7].
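Donald's two components, a representation of the obstacles and a search over it, can be shown in their simplest grid form. The sketch below (ours) grows a rectangular obstacle by the shape of a small square robot, so the robot becomes a point in C-space, then breadth-first searches the free cells; real systems, as the text makes clear, need far richer machinery.

```python
# Sketch: obstacle representation plus search, in minimal grid form.
# Growing the obstacle by the robot's footprint (a Minkowski-sum
# dilation) turns the robot into a point in C-space; breadth-first
# search then finds a safe path over free cells. (The obstacle is kept
# away from the borders, so the wraparound in np.roll is harmless.)
from collections import deque
import numpy as np

N = 20
grid = np.zeros((N, N), dtype=bool)
grid[8:12, 5:15] = True                  # a rectangular obstacle

cspace = grid.copy()                     # dilate by a 3x3 square robot
for dx in (-1, 0, 1):
    for dy in (-1, 0, 1):
        cspace |= np.roll(np.roll(grid, dx, 0), dy, 1)

def find_path(start, goal):
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= nxt[0] < N and 0 <= nxt[1] < N
                    and not cspace[nxt] and nxt not in prev):
                prev[nxt] = cell
                frontier.append(nxt)
    return None                          # no safe path at this resolution

print(find_path((2, 2), (18, 18)))
```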
Brooks [23] suggested that free space be represented by overlapping
generalized cones that correspond to freeways or channels. Fig. 11 shows some
of the generalized cones generated by two obstacles and the boundary of the
workspace. The key point about the representation was that the left and right
radius functions defining a freeway could be inverted easily. Given a freeway,
and the radius function of a moving convex object, he was able to determine
the legal range of orientations that ensure no collisions as the object is swept
down the freeway. Brooks' algorithm is highly efficient, and works well in
relatively uncluttered space, but it occasionally fails to find a safe path when it
is necessary to maneuver in tight spaces. Recently, Donald [34] has proposed a
novel channel-based technique.
Finally, Brooks [24] has developed an algorithm that combines the C-space
and freeway approaches to find paths for pick-and-place and insertion tasks for
a PUMA. (The PUMA is a robot developed and marketed by Unimation
Corporation from an original design by V. Scheinman of the Stanford
Artificial Intelligence Laboratory. It is popularly found in AI Robotics
Laboratories.) Pick-and-place tasks are defined as four-degree-of-freedom
tasks in which the only reorientations permitted are about the vertical, and in
which the found path is composed of horizontal and vertical straight lines. Fig.
12 shows an example path found by Brooks' algorithm.

FIG. 10. A path found by the Brooks and Lozano-Pérez program. (Reproduced from [25, Fig.
11(a)].)

Brooks freezes joint 4 of the PUMA. The algorithm subdivides free space to
find (i) freeways for the hand and payload assembly and (ii) freeways for the
upper arm subassembly (joints 1 and 2 of the PUMA); it then (iii) searches the
payload and upper arm freeways concurrently under the projection of constraints
determined by the forearm. The subdivision of free space in this way is the
most notable feature of Brooks' approach. It stands in elegant relation to the
algorithms for computing inverse kinematics referred to earlier. It is assumed
that the payload is convex, and that the obstacles are convex stalagmites and
stalactites. It is further assumed that stalactites are in the workspace of the
upper arm of the PUMA, not of the payload.

FIG. 11. A few of the generalized cones generated by two obstacles and the boundary of the
workspace. (Reproduced from [23, Fig. 1].)

FIG. 12. An example of path finding for a PUMA by Brooks' [24] algorithm.


By restricting attention to a limited class of tasks, Brooks has designed an
efficient algorithm that will not work in all cases. The advantage is that he does
not have to contend with worst-case situations that lead to horrendous
polynomial complexity estimates. For example, Schwartz and Sharir [134]
suggest a method whose complexity for r degrees of freedom is n^(2^r); for
example, for two moving objects, the complexity is n^4096. Their algorithm is
not implemented.
Donald [35] presents the first implemented, representation-complete, search-
complete algorithm (at a given resolution) for the classical Movers' problem for
Cartesian manipulators.
There has been a great deal of theoretical interest in the findpath problem by
researchers in computational complexity and computational geometry.
Schwartz and Sharir [134], and Hopcroft, Schwartz, and Sharir [69] are
representative.

5.4. Synthesizing compliant programs


Compliance is the opposite of stiffness. Any device, such as an automobile
shock absorber, that responds flexibly to external force, is called compliant.
Increasingly, compliance refers to operations that require simultaneous force
and position control [97]. Consider programming a robot to write on a
blackboard or on a soft-drink can. One difficult problem is that the position of
the end effector has to be controlled in arm coordinates which bear no relation
to those of the task. But there is a greater problem to be solved. If one presses
too hard, the surface and/or writing implement can be damaged. If not hard
enough, the writing implement may leave the surface. The problem is that ideal
planes and cylindrical surfaces only occur in mathematical texts. It is necessary
to control the force exerted by the writing implement on the surface. Mathe-
matically, the problem is overdetermined! Easing up or pushing down on the
pen may cause the tip position to vary too much, detracting from the quality of
the writing.
An analysis of the forces that arise in inserting a peg into a hole led to the
development of a spring-loaded device that can perform the insertion for tight
tolerances at blinding speed. The device is called the Remote Center Com-
pliance (RCC) [147]. The RCC was an important early example of the use of
force trajectories to achieve compliant assembly. Another example is scribing a
straight line on an undulating surface. In that case, it is necessary to control
position in the tangent plane of the surface, and maintain contact with the
surface by applying an appropriate scribing force normal to the surface. Other
examples of compliance include cutting, screw insertion, and bayonet-style
fixtures, such as camera mountings. Clocksin et al. [30] describe a seam-
welding robot that uses the difference-of-Gaussians edge operator proposed by
Marr and Hildreth [94] to determine the welding trajectory. Ohwovoriole and
Roth [106] show how the motions possible at an assembly step can be
partitioned into three classes: those that tend to separate the bodies to be mated;
those that tend to make one body penetrate the other; and those that move the
body and maintain the original constraints. Theoretically, this provides a basis
for choosing the next step in an assembly sequence.
Trevelyan, Kovesi, and Ong [141] describe a sheep-shearing robot. Fig. 13
shows the geometry of the robot and the 'workpiece'.

FIG. 13. The sheep shearing robot developed by Trevelyan and his colleagues at the University of
Western Australia. A sheep naturally lies quite still while it is being sheared; indeed it often falls
asleep. (Reproduced from [141, Fig. 1].)

Trevelyan, Kovesi, and Ong [141] note that "over two hundred sheep have been shorn by the machine
(though not completely) yet only a few cuts have occurred. This extremely low
injury rate¹ results from the use of sensors mounted on the shearing cutter
which allow the computer controlling the robot to keep the cutter moving just
above the sheep skin". Trajectories are planned from a geometric model of a
sheep using Bernstein-Bézier parametric curves. The trajectory is modified to
comply with sense data. Two capacitance sensors are mounted under the cutter
just behind the comb. These sensors can detect the distance between the cutter
and the sheep's skin to a range of approximately 30 mm. Compliance is needed
to take account of inaccuracies in the geometric model of the sheep and the
change in shape of the sheep as it breathes.

¹ Trevelyan informs the author that a human sheep shearer typically cuts a sheep over 30 times,
and that serious cuts occur regularly.
Robots evolved for positional accuracy, and are designed to be mechanically
stiff. High-tolerance assembly tasks typically involve clearances of the order of
a thousandth of an inch. In view of inaccurate modeling of the world and
limitations on joint accuracy, low stiffnesses are required to effect assemblies.
Devices such as the Remote Center Compliance (RCC) [147] and the Hi-T
hand [51] exploit the inherent mechanical compliance of springs to accomplish
tasks. Such passive-compliant devices are fast, but the specific application is
built into the mechanical design. The challenge is that different tasks impose
different stiffness requirements. In active-compliance, a computer program
modifies the trajectory of the arm on the basis of sensed forces (and other
modalities) [112]. Active compliance is a general-purpose technique, but it is
typically slow compared to passive compliance.
Mason [96] suggested that the (fixed number of) available degrees of
freedom of a task could be divided into two subsets, spanning orthogonal
subspaces. The subspaces correspond one-one with the natural constraints
determined by the physics of the task, and the artificial constraints determined
by the particular task. See [97] for details and examples. For example, in
screwdriving, a screwdriver cannot penetrate the screw, giving a natural con-
straint; successful screwdriving requires that the screwdriver blade be kept in
the screw slot, an artificial constraint. Raibert and Craig [120] refined and
implemented Mason's model as a hybrid force-position controller. Raibert and
Craig's work embodies the extremes of stiffness control in that the programmer
chooses which axes should be controlled with infinite stiffness (using position
control with an integral term) and which should be controlled with zero
stiffness (to which a bias force is added). Salisbury [130] suggests an inter-
mediate ground that he calls "active stiffness control".
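A minimal sketch of the hybrid idea, in the spirit of Raibert and Craig's controller [120] rather than a transcription of it: a diagonal selection matrix assigns each task-frame axis to force control or position control, and the Jacobian transpose maps the combined command to joint torques. All gains and dimensions are illustrative.

```python
# Sketch (in the spirit of, not transcribed from, [120]): hybrid
# force-position control with a diagonal selection matrix S. Axes with
# S_ii = 1 are force-controlled; axes with S_ii = 0 are position-
# controlled. J is the manipulator Jacobian in the task frame.
import numpy as np

S  = np.diag([1.0, 0.0, 0.0])   # force-control x; position-control y, z
Kp = 200.0                      # position stiffness gain (illustrative)
Kf = 0.05                       # force-error gain (illustrative)

def hybrid_control(J, x, x_des, f, f_des):
    f_cmd = S @ (f_des + Kf * (f_des - f))           # force subspace
    p_cmd = (np.eye(3) - S) @ (Kp * (x_des - x))     # position subspace
    return J.T @ (f_cmd + p_cmd)                     # joint torques
```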
Programmers find it relatively easy to specify motions in position space; but
find it hard to specify the force-based trajectories needed for compliance. This
has motivated the investigation of automatic generation of compliant fine-
motion programs [38, 92]. In Dufay and Latombe's approach, the geometry of
the task is defined by a semantic network, the initial and goal configurations of
parts are defined by symbolic expressions, and the knowledge of the program is
expressed as production rules. Productions encode the 'lore' of assembly: how
to overcome problems such as moving off the chamfer during an insertion task.
Dufay and Latombe's program inductively generates assembly programs from
successful execution sequences. The method requires that the relationships
between surfaces of parts in contact be known fairly precisely. In general this is
difficult to achieve because of errors in sensors.
Lozano-Pérez, Mason, and Taylor [92] have proposed a scheme for automa-
tically synthesizing compliant motions from geometric descriptions of a task.
The approach combines Mason's ideas about compliance, Lozano-Pérez's
C-space, and Taylor's [137] proposal for programming robots by fleshing out
skeletons forming a library of operations. The approach, currently being
implemented, deals head-on with errors in assumed position and heading.
Lozano-Pérez, Mason, and Taylor use a generalized damper model to deter-
mine all the possible configurations that can result from a motion. It is
necessary to avoid being jammed in the friction cone of any of the surfaces en
route to the goal surface. This sets up a constraint for each surface. Intersecting
the constraints leaves a range of possible (sequences of) compliant moves that
are guaranteed to achieve the goal, notwithstanding errors.
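The friction-cone test at the center of that analysis can be stated compactly. The following two-dimensional toy (ours, not the synthesis procedure itself) asks whether a commanded motion pressing into a surface would stick, i.e. whether its tangential component is small enough relative to its normal component to fall inside the cone of friction.

```python
# 2-D toy of the friction-cone reasoning. A commanded velocity v_cmd
# pressing into a surface with unit normal `normal` sticks (jams) if its
# tangential component is within the friction cone, |v_t| <= mu * v_n.
# This illustrates the geometry only, not the generalized-damper
# synthesis procedure.
import numpy as np

def sticks(v_cmd, normal, mu):
    v_n = -float(np.dot(v_cmd, normal))              # speed into surface
    v_t = v_cmd + np.dot(v_cmd, normal) * normal     # tangential part
    if v_n <= 0.0:
        return False             # moving away: no contact force arises
    return np.linalg.norm(v_t) <= mu * v_n

print(sticks(np.array([0.2, -1.0]), np.array([0.0, 1.0]), mu=0.3))  # True
```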
Programming by fleshing out skeletons is reminiscent of the programmer's
apprentice [126]. The similarities are that the computer adopts the role of
junior partner or critic, programming is based on cliches, and design decisions
and logical dependencies are explicitly represented so that the effects of
modifications to a program can be automatically propagated through the
program. The difference is that a robot programmer's apprentice works with
rich geometric models. Lozano-Pérez has suggested that guiding can be exten-
ded to teach a robot plans that involve sensing, a large number of similar
movements (for example unloading a pallet), and asynchronous control of
multiple manipulators. The requirement that a system deal with rich geometric
models also distinguishes the robot programmer's apprentice from earlier work
in AI planning [128].

6. Perception
6.1. Introduction
The perceptual abilities of commercially available robots are severely limited,
especially when compared with laboratory systems. It is convenient to dis-
tinguish contact and non-contact sensing. Contact, or local, sensing includes
tactile, proximity, and force sensing. Non-contact sensing includes passive
sensing in both visual and non-visual spectral bands, and active sensing using
infra-red, sonar, ultrasound, and millimeter radar.
Robot perception is only a special case of computer perception in the sense
that there are occasional opportunities for engineering solutions to what are, in
general, difficult problems. Examples include: arranging the lighting, con-
trolling positional uncertainty, finessing some of the issues in depth com-
putation, and limiting the visual context of an object. Appropriate lighting can
avoid shadows, light striping and laser range finding can produce partial depth
maps, and techniques such as photometric stereo [152] can exploit control over
lighting. On the other hand, edge finding is no less difficult in industrial images,
texture is just as hard, and the bin of parts is a tough nut for stereo. Motion
tracking on a dirty conveyor belt is as hard as any other tracking problem.
Representing the shape of complex geometric parts is as difficult as any
representational problem in computer vision [18, 42, 43, 156]. Existing com-
mercial robot-vision systems carry out simple inspection and parts acquisition.
There are, however, many inspection, acquisition, and handling tasks, routinely
performed by humans, that exceed the abilities of current computer vision and
tactile sensing research.
The quality of sensors is increasing rapidly, especially as designs incorporate
VLSI. The interpretation of sensory data, especially vision, has significantly
improved over the past decade. Sensory data interpretation is computer
intensive, requiring billions of cycles. However, much of the computer in-
tensive early processing naturally calls for local parallel processing, and is well
suited to implementation on special purpose VLSI hardware [14, 121].

6.2. Contact sensing


Contact sensing is preferred when a robot is about to be, or is, in contact with
some object or surface. In such cases, objects are often occluded, even when a
non-contact sensor is mounted on a hand. An exception to this is seam welding
[30]. The main motivation for force sensing is not, however, to overcome
occlusion, but to achieve compliant assembly. Force sensors have improved
considerably over the past two or three years. Typical sensitivities range from a
half ounce to ten pounds. Most work on force trajectories has been application
specific (e.g. peg-in-hole insertion). Current research, aimed at developing
general techniques for interpreting force data and synthesizing compliant
programs, was discussed in the previous section. Kanade and Sommer [79]
and Okada [108] describe proximity sensors.
Touch sensing is currently the subject of intensive research. Manufacturing
engineers consider tactile sensing to be of vital importance in automating
assembly [58, 59]. Unfortunately, current tactile sensors leave much to be
desired. They are prone to wear and tear, suffer from hysteresis, and have low
dynamic range. Industrially available tactile sensors typically have a spatial
resolution of only about 8 points per inch. Tactile sensors are as poor today as
TV cameras were in the 1960s and, like those early cameras, they are seriously
hampering the development of tactile interpretation algorithms.
Several laboratory demonstrations point the way to future sensors. Hillis [64]
devised a tactile sensor consisting of an anisotropically conducting silicone
material whose lines of conduction were orthogonal to the wires of a printed
circuit board and separated from them by a thin spacer. Fig. 14 shows some example
touch images generated by Hillis' tactile sensor for four small fasteners. The
sensor had a spatial resolution of 256 points per square centimeter. Raibert and
Tanner [121] developed a VLSI tactile sensor that incorporated edge detection
processing on the chip (Fig. 15). This (potentially) significantly reduces the
bandwidth of communication between the sensor and the host computer.
Recently, Hackwood and Beni [54] have developed a tactile sensor using
magnetoresistive materials that appears to be able to compute shear forces.
Little progress has been made in the development of tactile object

FIG. 14. Sample tactile images from the Hillis sensor for four small fasteners: a ring, a cotter
pin, a spade lug, and the slotted top of a screw. (Reproduced from [64, Fig. 6].)
FIG. 15. Architecture of the VLSI tactile sensor developed by Raibert and Tanner. A layer of
pressure-sensitive rubber is placed in contact with a VLSI wafer. Metalization on the surface of the
wafer forms large sensing electrodes that make contact with the pressure-sensitive rubber through
holes in a protective layer of SiO2, the overglass. (Reproduced from [121, Fig. 1].)

recognition algorithms. Hillis built a simple pattern-recognition program that
could recognize a variety of fasteners. Gaston and Lozano-Pérez [47] have built a
program that constructs an interpretation tree for a class of two-dimensional
objects. The program assumes that there are n discrete sensors, at each of
which the position and an approximate measure of the surface orientation are
known. They show how two constraints, a distance constraint and a constraint
on the normal directions at successive touch points, can substantially cut down
the number of possible grasped object configurations. Grimson and Lozano-
Pérez [53] have extended the analysis to three dimensions. Faugeras and
Hébert [42] have developed a similar three-dimensional recognition and posi-
tioning algorithm using geometric matching between primitive surfaces. Bajcsy
[7] has investigated the use of two tactile sensors to determine the hardness and
texture of surfaces.
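The flavor of the Gaston and Lozano-Pérez computation can be conveyed by a small sketch. Here a two-dimensional model is summarized by a normal direction per edge and precomputed bounds on the distance between points on each pair of edges; the representation, the tolerance, and the brute-force enumeration are our illustrative simplifications (the actual program prunes partial interpretations as the tree is built, which is where the savings come from).

```python
import math
from itertools import product

def feasible(assignment, points, normals, edge_normal, dist_bounds, tol=0.3):
    """One leaf of the interpretation tree: every sensed normal must roughly
    match its assigned edge, and every pair of sensed points must be
    separated by a distance achievable on that pair of edges."""
    for i, e in enumerate(assignment):
        if abs(normals[i] - edge_normal[e]) > tol:       # normal constraint
            return False
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            lo, hi = dist_bounds[assignment[i]][assignment[j]]
            if not lo <= d <= hi:                        # distance constraint
                return False
    return True

def interpretations(points, normals, edge_normal, dist_bounds):
    """Enumerate assignments of touch points to model edges, keeping those
    that survive both constraints."""
    edges = range(len(edge_normal))
    return [a for a in product(edges, repeat=len(points))
            if feasible(a, points, normals, edge_normal, dist_bounds)]
```

Even this crude filter shows why the two constraints bite: with a handful of touch points, most of the exponentially many assignments violate some pairwise distance bound.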

6.3. Non-contact sensing
Non-contact sensing is important for a variety of applications in manufacturing.
These include:
(1) Inspection. Most current industrial inspection uses binary two-dimen-
sional images. Only recently have grey level systems become commercially
available. No commercial system currently offers a modern edge detection
system. Two-dimensional inspection is appropriate for stamped or rotationally
symmetric parts. Some experimental prototypes [42, 117] inspect surfaces such
as engine mountings and airfoil blades.
(2) Parts acquisition. Parts may be acquired from conveyor belts, from
pallets, or from bins. Non-contact sensing means that the position of parts
may not be accurately specified. Parts may have to be sorted if there is a
possibility of more than one type being present.
(3) Determining grasp points. Geometric analysis of shape allows grasp points
to be determined [12, 13].
Active sensing has been developed mainly for military applications. Image
understanding is difficult and requires a great deal of computer power. For-
ward-looking infra-red (FLIR), synthetic aperture radar (SAR), and millimeter
radar imagery offer limited, computationally expedient, solutions to difficult
vision problems. For example, FLIR shows hot objects (for example a tank) as
bright, simplifying the difficult segmentation of a camouflaged tank against
trees. The algorithms that have been developed for isolating and identifying
targets in natural scenes are restricted in scope. They do not generalize easily
to manufacturing settings, where, for example, most objects are 'hot'.
Vision has the most highly developed theory, and the best sensors: high-
quality solid-state cameras are now readily available, and their rapid improve-
ment has been accompanied by the equally rapid development of image-
understanding techniques.
Early vision processes include edge and region finding, texture analysis, and
motion computation. All these operations are well suited to local parallel
processing. Developments in edge finding include the work of Marr and
Hildreth [94], Haralick [57], and Canny [29]. Developments in grouping include
the work of Lowe and Binford [86]. Hildreth [63] has developed a system for
computing directional selectivity of motion using the Marr-Hildreth edge
finder. Horn and Schunck [71] and Schunck [133] have shown how to compute
the optic flow field from brightness patterns. (Bruss and Horn [26] have
developed an analysis of how the flow field can be used in passive navigation.)
Brady [15, 17, 18] has developed a new technique for representing two-dimen-
sional shape, and has applied it to inspection.
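To make one of these early vision operations concrete, the Marr-Hildreth scheme [94] can be caricatured as: smooth the image with a Gaussian, take the Laplacian, and mark zero crossings. The filter scale and the crude crossing test below are our illustrative choices, not those of the original theory.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def edges(image, sigma=2.0):
    """Mark zero crossings of the Laplacian-of-Gaussian of the image."""
    log = gaussian_laplace(image.astype(float), sigma)
    zc = np.zeros(log.shape, dtype=bool)
    # a pixel is marked if the LoG changes sign with its right or lower
    # neighbour (a crude zero-crossing test)
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    return zc
```

Convolutions of this kind are exactly the local, parallel computations that the special-purpose VLSI hardware mentioned earlier is suited to.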
The major breakthrough in vision over the past decade has been the
development of three-dimensional vision systems. These are usually referred to
as 'shape from' processes. Examples include: shape from stereo [8, 10, 52, 105],
shape from shading [75], shape from contour [19, 151], and shape from struc-
tured light [30, 42, 117, 142, 156].
Most of these 'shape from' processes produce partial depth maps. Recently,
fast techniques for interpolating full depth maps have been developed
[139, 140]. A current area of intense investigation is the representation of
surfaces [20, 43, 75, 136]. Finally, recent work by Brooks [21] discusses object
representation and the interaction between knowledge-guided and data-driven
processing.
7. Reasoning that Connects Perception to Action
This final section is speculative. It presents three examples of reasoning and
problem solving that we are striving to make robots capable of. The aim is to
illustrate the kinds of things we would like a robot to know, and the way in
which that knowledge might be used. The knowledge that is involved concerns
geometry, forces, process, space, and shape. The examples involve tools. They
concern the interplay between the use or recognition of a tool and constraints
on the use of tools. Reasoning about the relation between structure and function is particularly
direct in the case of tools. Shape variations, though large (there are tens of
kinds of hammer), are lessened by the fact that tools are rarely fussily adorned,
since such adornments would get in the way of using the tool.

7.1. What is that tool for?


What is the tool illustrated in Fig. 16, and how is it to be used? We
(reasonably) suppose that a vision program [17, 18] computes a description of
the object that, based on the smoothed local symmetry axis, partially matches a
crank. The model for a crank indicates that it is used by fixing the end P onto
some object O, and rotating the object O about the symmetry axis at P by
grasping the crank at the other end and rotating in a circle whose radius is
the length of the horizontal arm of the crank. Further investigation of the crank
model tells us that it is used for increasing the moment arm and hence the
torque applied to the object O. We surmise that the tool is to be used for
increasing torque on an object O. We have now decided (almost) how the tool
is to be used, and we have a hypothesis about its purpose. The hypothesis is
wrong.
The one thing that we do not yet know about how to use the tool is how to
fix it at P to the object at O. There are many possibilities, the default being
perhaps a socket connector for a nut (as for example on a tire lever). Closer
inspection of the description computed by our vision program shows that the
ends of the crank are screwdriver blades, set orthogonal to each other. Only
screwdrivers (in our experience) have such blades. Apart from the blade, the

FIG. 16. What is this tool, and what is it for?


tool bears some resemblance to a standard screwdriver, which also has a handle
and a shaft. In the standard screwdriver, however, the axes of the shaft and
handle are collinear. Evidently, the tool is a special-purpose screwdriver, since
only screwdrivers have such blades.
Tools have the shape that they do in order to solve some problem that is
difficult or impossible to solve with more generally useful forms. So why the
crank shape? What problem is being solved that could not be solved with a
more conventional screwdriver? Here are some screwdriver-specific instances
of general problems that arise using tools:
- Parts interface bug. A part does not match the part to which it is being applied
or fastened. For example, a wrench might be too small to span a nut; a
sledgehammer is inappropriate for driving a tack. The screwdriver head may
not match the screw (one might be of Phillips type). There is no evidence for this
bug in Fig. 16 because the fastener is not shown.
-Restricted rotary motion bug. A tool that is operated by turning it about some
axis has encountered an obstruction that prevents it turning further. This bug
occurs frequently in using wrenches. A socket wrench solves it by engaging a
gear to turn the wrench in one direction, disengaging the gear to rotate in the
other. How is it solved more generally? There is an analogous restricted
linear-motion bug, in which a required linear motion cannot be performed
because of an obstruction. Think of an everyday example involving tools (one
is given at the end of the section).
-Restricted-access bug. As anyone owning a particular (expensive) kind of
British car knows only too well, often the hardest part of using a tool is mating
it to the corresponding part. Many tools have an axis along, or about, which
they are applied. The most common version of the restricted-access bug is
when the axis is too long to fit into the available space. In the case of
screwdriving, this occurs when the screwdriver is restricted vertically above the
screw. A short, stubby screwdriver is the usual solution to this problem.
Can the crank screwdriver also overcome restricted-access bugs? Of course.
The geometric form of the crank screwdriver is necessary to solve this restric-
ted-workspace problem, rather than being a torque magnifier as initially
hypothesized. In fact, the tool is called an offset screwdriver. Fig. 17 illustrates
its use.
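The geometric test behind the restricted-access bug is simple once each tool carries a model of the clear space it needs along its application axis; choosing a tool is then a matter of comparing that requirement against the measured clearance. The tool names and lengths below are invented for illustration.

```python
# clear length (mm) each tool needs along its application axis; invented numbers
tools = {"standard screwdriver": 250,
         "stubby screwdriver":    90,
         "offset screwdriver":    15}   # its axis lies across, not above

def usable(clearance_mm):
    """Tools whose approach axis fits in the space above the fastener."""
    return [t for t, need in tools.items() if need <= clearance_mm]

usable(40)   # -> ['offset screwdriver']
```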
Since I first presented this example, another solution to the restricted access
bug has been brought to my attention. Fig. 18 shows a screwdriver whose shaft
can bend about any axis orthogonal to it.
Why are the blades of an offset screwdriver set orthogonal to one another?
Put differently, what bug do they help overcome? What would you need to
know in order to figure it out?
No program is currently capable of the reasoning sketched above. Pieces of
the required technology are available, admittedly in preliminary form, and
there is cause for optimism that they could be made to work together
FIG. 17. An offset screwdriver overcomes the restricted access bug.

appropriately. First, vision programs exist that can almost generate the neces-
sary shape descriptions and model matching [17, 18]. There is considerable
interplay between form and function in the reasoning, and this has been
initially explored by Winston, Binford, and their colleagues, combining the
ACRONYM system of shape description and Winston's analogy program [150]. To
figure out that the crucial thing about the form is its ability to overcome a
restriction in the workspace, it is necessary to be able to reason about space
and the swept volumes of objects. This is the contribution of Lozano-Pérez
[88-91], Brooks [24], and Lozano-Pérez, Mason, and Taylor [92]. Forbus [45] is
developing a theory of processes, a system that can reason about physical
FIG. 18. A flexible screwdriver for solving restricted access bugs.

processes like water flow, heat, and springs. This builds upon earlier work by
De Kleer [32] and Bundy [27]. Notice that a system that is capable of the
reasoning sketched above might be able to invent new tools, such as the
flexible screwdriver. What is the difficult problem of realising the flexible
screwdriver design?
Answer to the problem: An example of a restricted linear-motion bug: Trying
to strike a nail with a hammer when there is insufficient space to swing the
hammer.

7.2. Why are wrenches asymmetric?


Fig. 19(a) shows a standard (open-jawed) wrench. Why is it asymmetric? To
understand this question, it is necessary to understand how it would most likely
be judged asymmetric. This involves finding the head and handle [18], assigning
a 'natural' coordinate frame to each [13, 17], and realizing that they do not
line up. Since the handle is significantly longer than the head, it establishes a
frame for the whole shape, so it is the head that is judged asymmetric about the
handle frame.
Now that we at least understand the question, can we answer it? We are
encouraged to relate a question of form to one of function. What is a wrench
for, and how is it used? It is used as shown in Fig. 19(b): the head is placed
FIG. 19. (a) An asymmetric wrench. (b) How to use a wrench.

against a nut; the handle is grasped and moved normal to its length; if the
diameter of the nut and the opening of the jaws of the wrench match, the nut
(assumed fixed) will cause the handle to rotate about the nut. Nowhere is
mention made of asymmetry. Surely, a symmetric wrench would be easier to
manufacture. Surely, a symmetric wrench would be equally good at turning
nuts. Or would it?
Recall that questions of form often relate not just to function, but to solving
some problem that a 'standard', here symmetric, wrench could not solve. The
main problem that arises using a wrench is the restricted rotary-motion bug. In
many tasks there is an interval [φ1, φ2] (measured from the local coordinate
frame corresponding to the axis of the handle) through which the wrench can
be rotated. The crucial observation is that a wrench is (effectively) a lamina. As
such, it has a degree of freedom corresponding to flipping it over. Exploiting
this degree of freedom makes no difference to the effective workspace of a
symmetric wrench. It doubles the workspace of an asymmetric wrench, giving
[-φ2, -φ1] ∪ [φ1, φ2]. In this way, an asymmetric wrench partially solves the
restricted rotary-motion bug. Perhaps this suggests how to design the head of a
wrench, say by minimizing [-φ1, φ1] subject to keeping the jaws parallel to
each other. Perhaps it also suggests (for example to Winston's [148] analogy
program) that other turning tools should be asymmetric, for analogous reasons.
There are many examples, of course, including the offset screwdriver discussed
in the previous section.
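The flip argument amounts to a little interval arithmetic, sketched below with invented numbers.

```python
def reachable_arcs(phi1, phi2, symmetric_head):
    """Handle orientations in which the jaws engage the nut.  Flipping the
    lamina over mirrors the working interval to [-phi2, -phi1]; for a
    symmetric head the mirrored engagement is indistinguishable from the
    original, so only the asymmetric wrench gains the second arc."""
    arcs = [(phi1, phi2)]
    if not symmetric_head:
        arcs.append((-phi2, -phi1))
    return arcs

reachable_arcs(0.2, 0.9, symmetric_head=False)
# -> [(0.2, 0.9), (-0.9, -0.2)]: twice the working arc of the symmetric case
```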

7.3. How to disconnect a battery


The final example in this section concerns a familiar AI technique: debugging
'almost-right' plans. A battery is to be disconnected. The geometry of the
terminal is shown in Fig. 20(a). A plan, devised previously, uses two socket
wrenches, one acting as a means of fixing the (variable) position of the nut, the
other to turn the bolt. The socket wrenches are applied along their axes, which
coincide with the axes of the nut and bolt.
A new model of the battery-laden device is delivered. The plan will no
longer work because of two problems. First, there is an obstacle to the left of
the head of the nut, restricting travel of the socket wrench along its axis. Using
FIG. 20. (a) The geometry of a battery terminal. (b) Side view of the late model bolt.

something akin to dependency-directed reasoning, truth maintenance, or a
similar technique for recording the causal connections in a chain of reasoning,
we seek a method for adapting our almost-right plan to the new circumstance.
There are a variety of techniques, one of which was illustrated in the offset
screwdriver. That suggests bending a tool so that torque can be applied by
pushing on a lever. In fact, socket wrenches have this feature built-in, so the
first problem was easy to fix.
Unfortunately, the second problem is more difficult. The new model has a
bolt whose head does not leave sufficient clearance for a socket wrench to fit
around it (Fig. 20(b)). We further reconsider the plan, adding
the new constraints, removing those parts that were dependent upon being able
to use the socket wrench (none, in this case). The next idea is to use a different
kind of wrench, but this will not work either, again because of the insufficient
clearance. The essence of the plan is to grasp either the nut or the bolt
securely, and to turn the other. For example, we might secure the bolt and turn
the nut. Several ways are available. The most familiar is to secure the bolt by
grasping it with needlenose pliers. Different tools are required, but the
functionality is equivalent at the level at which the plan is specified. Brady
[165, 166] reports progress on a program to handle the problems described in
this section.
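A toy rendering of this plan-patching loop follows: each step records the assumptions it depends on, and a violated assumption triggers substitution of a tool that is functionally equivalent at the level at which the plan is specified. The step names, assumptions, and substitution table are invented for illustration.

```python
# each step: (goal, tool, assumptions the choice of tool depends on)
plan = [("secure the bolt", "socket wrench A", {"axial clearance at bolt"}),
        ("turn the nut",    "socket wrench B", {"radial clearance at nut"})]

substitutes = {"socket wrench A": "needlenose pliers"}  # same role: secure

def patch(plan, violated):
    """Keep steps whose recorded assumptions still hold; re-derive the rest."""
    patched = []
    for goal, tool, assumes in plan:
        if assumes & violated:                 # a recorded dependency failed
            tool = substitutes.get(tool)
            if tool is None:
                raise ValueError("no functionally equivalent tool for " + goal)
        patched.append((goal, tool))
    return patched

patch(plan, violated={"axial clearance at bolt"})
# -> [('secure the bolt', 'needlenose pliers'), ('turn the nut', 'socket wrench B')]
```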

8. Conclusion
Since Robotics is the connection of perception to action, Artificial Intelligence
must have a central role in Robotics if the connection is to be intelligent. We
have illustrated current interactions between AI and Robotics and presented
examples of the kinds of things we believe it is important for robots to know.
We have discussed what robots should know, how that knowledge should be
represented, and how it should be used. Robotics challenges AI by forcing it to
deal with real objects in the real world. An important part of the challenge is
dealing with rich geometric models.
ACKNOWLEDGMENT
This paper is based on invited talks presented at the Annual Conference of the American
Association for Artificial Intelligence, Washington, DC, August 1983, and the Quatrième Congrès
de Reconnaissance des Formes et d'Intelligence Artificielle, INRIA, Paris, January 1984. Many
colleagues commented on earlier drafts or presentations of the material in this paper. I thank
especially Phil Agre, Dave Braunegg, Bruce Donald, Olivier Faugeras, Georges Giralt, John
Hollerbach, Dan Huttenlocher, Tomás Lozano-Pérez, Tommy Poggio, Marc Raibert, Ken Salis-
bury, Jon Taft, Dan Weld, and Patrick Winston.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts
Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in
part by the Advanced Research Projects Agency of the Department of Defense under Office of
Naval Research contract N00014-75-C-0643, the Office of Naval Research under contract number
N00014-80-C-0505, and the System Development Foundation.

REFERENCES
1. Agin, G.J., Computer vision systems for industrial inspection and assembly, Computer 13
(1980) 11-20.
2. Albus, J.S., Integrated system control, in: L. Gerhardt and M. Brady (Eds.), Robotics and
Artificial Intelligence (Springer, Berlin, 1984) 65-93.
3. Ambler, A.P. and Popplestone, R.J., Inferring the positions of bodies from specified spatial
relationships, Artificial Intelligence 6 (2) (1975) 157-174.
4. Asada, H., A characteristics analysis of manipulator dynamics using principal transformations,
in: Proceedings American Control Conference, Washington, DC, 1982.
5. Asada, H. and Youcef-Toumi, K., Development of a direct-drive arm using high torque
brushless motors, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
6. Asada, H. and Kanade, T., Design concept of direct-drive manipulators using rare-earth DC
torque motors, in: Proceedings Seventh International Joint Conference on Artificial Intelligence,
Vancouver, BC (1981) 775-778.
7. Bajcsy, R., What can we learn from one finger experiments?, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
8. Baker, H.H. and Binford, T.O., Depth from edge and intensity based stereo, in: Proceedings
Seventh International Joint Conference on Artificial Intelligence, Vancouver, BC, 1981.
9. Binford, T.O., Inferring surfaces from images, Artificial Intelligence 17 (1981) 205-245.
10. Binford, T.O., Stereo vision: complexity and constraints, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
11. Bobrow, D.G. (Ed.), Qualitative Reasoning about Physical Systems (North-Holland, Am-
sterdam, 1984); also: Artificial Intelligence 24 (1984) special volume.
12. Boissonat, J.-D., Stable matching between a hand structure and an object silhouette, IEEE
Trans. Pattern Anal. Mach. Intell. 4 (1982) 603-611.
13. Brady, M., Parts description and acquisition using vision, in: A. Rosenfeld (Ed.), Robot vision,
Proceedings SPIE, Washington, DC (1982) 1-7.
14. Brady, M., Parallelism in vision (Correspondent's Report), Artificial Intelligence 21 (1983)
271-284.
15. Brady, M., Criteria for shape representations, in: J. Beck and A. Rosenfeld (Eds.), Human and
Machine Vision (Academic Press, New York, 1983).
16. Brady, M., Trajectory planning, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T. Lozano-P6rez
and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press, Cambridge,
MA, 1983).
17. Brady, M., Representing shape, in: Proceedings IEEE Conference on Robotics, Atlanta, GA,
1984.
18. Brady, M. and Asada, H., Smoothed local symmetries and their implementation, in: Proceed-
ings First International Symposium on Robotics Research, 1983.
19. Brady, M. and Yuille, A., An extremum principle for shape from contour, MIT AI Memo
711, Cambridge, MA, 1983.
20. Brady, M. and Yuille, A., Representing three-dimensional shape, in: Proceedings Romansy
Conference, Udine, Italy, 1984.
21. Brooks, R.A., Symbolic reasoning among 3-D models and 2-D images, Artificial Intelligence
17 (1981) 285-348.
22. Brooks, R.A., Symbolic error analysis and robot planning, Internat. J. Robotics Res. 1 (4)
(1982) 29--68.
23. Brooks, R.A., Solving the findpath problem by good representation of free space, IEEE
Trans. Systems Man Cybernet. 13 (1983) 190-197.
24. Brooks, R.A., Planning collision free motions for pick and place operations, Internat. J.
Robotics Res. 2 (4) (1983).
25. Brooks, R.A. and Lozano-P6rez, T., A subdivision algorithm in configuration space for
findpath with rotation, in: Proceedings Eighth International Joint Conference on Artificial
Intelligence, Karlsruhe, W. Germany, 1983.
26. Bruss, A. and Horn, B.K.P., Passive Navigation, MIT AI Memo 662, Cambridge, MA, 1981.
27. Bundy, A., Byrd, L., Luger, G., Mellish, C. and Palmer, M., Solving mechanics problems using
meta-level inference, in: D. Michie (Ed.), Expert Systems in the Microelectronic Age (Edinburgh
University Press, Edinburgh, 1979).
28. Cannon, R.H., Jr. and Schmitz, E., Precise control of flexible manipulators, in: M. Brady and
R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
29. Canny, J.F., Finding lines and edges in images, in: Proceedings Third National Conference on
Artificial Intelligence, Washington, DC, 1983.
30. Clocksin, W.S., Davey, P.G., Morgan, C.G. and Vidler, A.R., Progress in visual feedback for
arc-welding of thin sheet steel, in: A. Pugh (Ed.), Robot Vision (Springer, Berlin, 1982) 187-198.
31. Davis, L.S. and Rosenfeld, A., Cooperating processes for low-level vision: a survey, Artificial
Intelligence 17 (1981) 245-265.
32. De Kleer, J., Qualitative and quantitative knowledge in classical mechanics, MIT Artificial
Intelligence Laboratory, AI-TR-352, Cambridge, MA, 1975.
33. Dobrotin, B. and Lewis, R., A practical manipulator system, in: Proceedings Fifth Inter-
national Joint Conference on Artificial Intelligence, Tbilisi, USSR (1977) 749-757.
34. Donald, B.R., Hypothesising channels through free space in solving the findpath problem,
MIT AI Memo 736, Cambridge, MA, 1983.
35. Donald, B.R., Local and global techniques for motion planning, MIT Artificial Intelligence
Laboratory, Cambridge, MA, 1984.
36. Dubowsky, S., Model-reference adaptive control, in: M. Brady and L. Gerhardt (Eds.), NATO
Advanced Study Institute on Robotics and Artificial Intelligence (Springer, Berlin, 1984).
37. Dubowsky, S. and DesForges, D.T., The application of model-referenced adaptive control to
robotic manipulators, J. Dynamic Systems Meas. Control 101 (1979) 193-200.
38. Dufay, B. and Latombe, J.-C., An approach to automatic robot programming based on
inductive learning, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research I (MIT Press, Cambridge, MA, 1984).
39. Engelberger, J.F., Robotics in Practice (Kogan Page, London, 1980).
40. Everett, H.R., A computer controlled sentry robot: a homebuilt project report, Robotics Age,
1982.
41. Faugeras, O., Hébert, M., Ponce, J. and Boissonat, J., Towards a flexible vision system, in: A.
Pugh (Ed.), Robot Vision (IFS, 1982).
42. Faugeras, O. and Hébert, M., A 3-D recognition and positioning algorithm using geometric
matching between primitive surfaces, in: Proceedings Eighth International Joint Conference on
Artificial Intelligence, Karlsruhe, W. Germany (1983) 996-1002.
43. Faugeras, O., Hébert, M. and Ponce, J., Object representation, identification, and positioning
from range data, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
44. Featherstone, R., Position and velocity transformations between robot end effector coor-
dinates and joint angles, Internat. J. Robotics Res. 2 (2) (1983).
45. Forbus, K.D., Qualitative process theory, Artificial Intelligence 24 (1984) 85-168.
46. Franklin, J.W. and VanderBrug, G.J., Programming vision and robotics system with RAIL,
in: Proceedings Robots VI Conference, Detroit, MI, 1982.
47. Gaston, P.C. and Lozano-Pérez, T., Tactile recognition and localization using object models:
the case of polyhedra on a plane, MIT AI Memo 705, Cambridge, MA, 1983.
48. Giralt, G., Mobile robots, in: M. Brady and L. Gerhardt (Eds.), NATO Advanced Study
Institute on Robotics and Artificial Intelligence (Springer, Berlin, 1984).
49. Giralt, G., Sobek, R. and Chatila, R., A multi-level planning and navigation system for a
mobile robot: a first approach to HILARE, in: Proceedings Fifth International Joint Con-
ference on Artificial Intelligence, Tbilisi, USSR, 1977.
50. Giralt, G., Chatila, R. and Vaisset, M., An integrated navigation and motion control system
for autonomous multisensory mobile robots, in: M. Brady and R. Paul (Eds.), Proceedings
International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
51. Goto, T., Takeyasu, K. and Inoyama, T., Control algorithm for precision insert operation
robots, IEEE Trans. Systems Man Cybernet. 10 (1) (1980) 19-25.
52. Grimson, W.E.L., From Images to Surfaces: A Computational Study of the Human Early
Visual System (MIT Press, Cambridge, MA, 1981).
53. Grimson, W.E.L. and Lozano-Pérez, T., Local constraints in tactile recognition, Internat. J.
Robotics Res. 3 (3) (1984).
54. Hackwood, S. and Beni, G., Torque sensitive tactile array for robotics, Internat. J. Robotics Res. 2
(2) (1983).
55. Hanafusa, H. and Asada, H., Stable prehension by a robot hand with elastic fingers, in:
Proceedings Seventh Symposium on Industrial Robotics (1977) 361-368.
56. Hanafusa, H. and Asada, H., A robot hand with elastic fingers and its application to assembly
process, in: Proceedings I F A C Symposium on Information Control Problems in Manufacturing
Techniques (1979) 127-138.
57. Haralick, R.M., Watson, L.T. and Laffey, T.J., The topographic primal sketch, Internat. J.
Robotics Res. 2 (1) (1983) 50-72.
58. Harmon, L., Automated tactile sensing, Internat. J. Robotics Res. 1 (2) (1982) 3-33.
59. Harmon, L., Robotic taction for industrial assembly, Internat. J. Robotics Res. 3 (1) (1984).
60. Harmon, S.Y., Coordination between control and knowledge based systems for autonomous
vehicle guidance, in: Proceedings IEEE Trends and Applications Conference, Gaithersburg,
MD, 1983.
61. Harmon, S.Y., Information processing system architecture for an autonomous robot system,
in: Proceedings Conference on Artificial Intelligence, Oakland University, Rochester, MI, 1983.
62. Hayes-Roth, P., Waterman, D. and Lenat, D. (Eds.), Building Expert Systems (Addison
Wesley, Reading, MA, 1983).
63. Hildreth, E., The Measurement of Visual Motion (MIT Press, Cambridge, MA, 1983).
64. Hillis, W.D., A high-resolution image touch sensor, Internat. J. Robotics Res. 1 (2) (1982)
33-44.
65. Hirose, S., Nose, M., Kikuchi, H. and Umetani, Y., Adaptive gait control of a quadruped
walking vehicle, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
66. Hobbs, J. and Moore, R., Formal Theories of the Common Sense World (Ablex, Norwood, NJ,
1984).
67. Hollerbach, J.M., Dynamics, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T. Lozano-
Pérez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press, Cambridge,
MA, 1983).
68. Hollerbach, J.M. and Sahar, G., Wrist partitioned inverse kinematic accelerations and
manipulator dynamics, MIT AI Memo 717, Cambridge, MA, 1983.
69. Hopcroft, J.E., Schwartz, J.T. and Sharir, M., Efficient detection of intersections
among spheres, Internat. J. Robotics Res. 2 (4) (1983).
70. Horn, B.K.P., Sequins and quills - representations for surface topography, in: R. Bajcsy (Ed.),
Representation of 3-Dimensional Objects (Springer, Berlin, 1982).
71. Horn, B.K.P. and Schunck, B.G., Determining optical flow, Artificial Intelligence 17 (1982)
185-203.
72. Huston, R.L. and Kelly, F.A., The development of equations of motion of single-arm robots,
IEEE Trans. Systems Man Cybernet. 12 (1982) 259-266.
73. Ikeuchi, K., Determination of surface orientations of specular surfaces by using the
photometric stereo method, IEEE Trans. Pattern Anal. Mach. Intell. 3 (6) (1981)
661-669.
74. Ikeuchi, K., Recognition of 3D objects using the extended Gaussian image, in: Proceedings Seventh
International Joint Conference on Artificial Intelligence, Vancouver, BC, 1981.
75. Ikeuchi, K. and Horn, B.K.P., Numerical shape from shading and occluding boundaries,
Artificial Intelligence 17 (1981) 141-185.
76. Ikeuchi, K., Horn, B.K.P., Nagata, S. and Callahan, T., Picking up an object from a pile of
objects, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics
Research 1 (MIT Press, Cambridge, MA, 1984).
77. Jacobsen, S.C., Wood, J.E., Knutti, D.F. and Biggers, K.B., The Utah/MIT dextrous hand - work
in progress, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics
Research I (MIT Press, Cambridge, MA, 1984).
78. Kahn, M.E. and Roth, B., The near minimum-time control of open-loop articulated kinematic
chains, J. Dynamic Systems Meas. Control 93 (1971) 164-172.
79. Kanade, T. and Sommer, T., An optical proximity sensor for measuring surface position and
orientation for robot manipulation, in: M. Brady and R. Paul (Eds.), Proceedings of the
International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
80. Kane, T.R. and Levinson, D.A., The use of Kane's dynamical equations in robotics, Internat.
J. Robotics Res. 2 (3) (1983) 3-22.
81. Klein, C.A. and Wahawisan, J.J., Use of a multiprocessor for control of a robotic system,
Internat. J. Robotics Res. 1 (2) (1982) 45-59.
82. Klein, C.A., Olson, K.W. and Pugh, D.R., Use of force and attitude sensors for locomotion of
a legged vehicle over irregular terrain, Internat. J. Robotics Res. 2 (2) (1983) 3-17.
83. Lewis, R.A. and Bejczy, A.K., Planning considerations for a roving robot with arm, in:
Proceedings Third International Joint Conference on Artificial Intelligence, Stanford, CA (1973)
308-315.
84. Lewis, R.A. and Johnson, A.R., A scanning laser rangefinder for a roving robot with arm, in:
Proceedings Fifth International Joint Conference on Artificial Intelligence, Tbilisi, USSR (1977)
762-768.
85. Lieberman, L.I. and Wesley, M.A., AUTOPASS: an automatic programming system for
computer controlled mechanical assembly, IBM J. Res. Develop. 21 (4) (1977) 321-333.
86. Lowe, D.G. and Binford, T.O., Segmentation and aggregation: an approach to figure-ground
phenomena, in: L.S. Baumann (Ed.), Proceedings Image Understanding Workshop (Science
Applications, Tysons Corner, VA, 1982) 168-178.
87. Lozano-Pérez, T., The design of a mechanical assembly system, MIT Artificial Intelligence
Laboratory, AI TR 397, Cambridge, MA, 1976.
88. Lozano-Pérez, T., Automatic planning of manipulator transfer movements, IEEE Trans.
Systems Man Cybernet. 11 (1981) 681-698.
89. Lozano-Pérez, T., Spatial planning: a configuration space approach, IEEE Trans. Comput. 32
(1983) 108-120.
90. Lozano-Pérez, T., Robot programming, Proc. IEEE 71 (1983) 821-841.
91. Lozano-Pérez, T., Spatial reasoning, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T.
Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press,
Cambridge, MA, 1983).
92. Lozano-Pérez, T., Mason, M.T. and Taylor, R.H., Automatic synthesis of fine-motion strategies
for robots, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics
Research 1 (MIT Press, Cambridge, MA, 1984).
93. Marr, D., Vision (Freeman, San Francisco, CA, 1982).
94. Marr, D. and Hildreth, E.C., Theory of edge detection, Proc. Roy. Soc. Lond. B 207 (1980)
187-217.
95. Marr, D. and Poggio, T., A theory of human stereo vision, Proc. Roy. Soc. Lond. B 204 (1979)
301-328.
96. Mason, M.T., Compliance and force control for computer controlled manipulators, IEEE Trans.
Systems Man Cybernet. 11 (1981) 418-432; reprinted in: M. Brady, J.M. Hollerbach, T.J.
Johnson, T. Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT
Press, Cambridge, MA, 1983).
97. Mason, M.T., Compliance, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T. Lozano-Pérez and
M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press, Cambridge, MA, 1983).
98. Mason, M.T., Manipulator grasping and pushing operations, Ph.D. Thesis, MIT, Cambridge,
MA, 1982.
99. Michie, D., Expert Systems in the Microelectronic Age (Ellis Horwood, Chichester, 1979).
100. Moravec, H.P., Robot Rover Visual Navigation (UMI Research Press, Ann Arbor, MI,
1981).
101. Moravec, H.P., The Stanford cart and the CMU rover, IEEE Trans. Indust. Electronics
(1983).
102. Moravec, H.P., Locomotion, vision, and intelligence, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
103. Nakagawa, Y. and Ninomiya, T., Structured light method for inspection of solder joints and
assembly robot vision system, in: M. Brady and R. Paul (Eds.), Proceedings International
Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
104. Nilsson, N.J., A mobile automaton: an application of Artificial Intelligence techniques, in:
Proceedings First International Joint Conference on Artificial Intelligence, Washington, DC,
1969.
105. Nishihara, H.K. and Poggio, T., Stereo vision for robotics, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
106. Ohwovoriole, M.S. and Roth, B., An extension of screw theory, Trans. ASME J. Mech.
Design 103 (1981) 725-735.
107. Okada, T., Computer control of multi-jointed finger system, in: Proceedings Sixth Inter-
national Joint Conference on Artificial Intelligence, Tokyo, Japan, 1979.
108. Okada, T., Development of an optical distance sensor for robots, Internat. J. Robotics Res. 1
(4) (1982) 3-14.
109. Orin, D.E. and Schrader, W.W., Efficient Jacobian determination for robot manipulators, in:
M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
110. Ozguner, F., Tsai, S.J. and McGhee, R.B., Rough terrain locomotion by a hexapod robot
using a binocular ranging system, in: M. Brady and R. Paul (Eds.), Proceedings International
Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
111. Paul, R.P., Robot Manipulators: Mathematics, Programming, and Control (MIT Press, Cam-
bridge, MA, 1981).
112. Paul, R.C. and Shimano, B.E., Compliance and control, in: Proceedings Joint Automatic
Control Conference (1976) 694-699.
113. Paul, R.C., Stevenson, C.N. and Renaud, M., A systematic approach for obtaining the
kinematics of recursive manipulators based on homogeneous transformations, in: M. Brady
and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
114. Pieper, D.L., The kinematics of manipulators under computer control, Ph.D. Thesis,
Department of Computer Science, Stanford University, Stanford, CA, 1968.
115. Pieper, D.L. and Roth, B., The kinematics of manipulators under computer control, in:
Proceedings Second International Conference on Theory of Machines and Mechanisms, War-
saw, Poland, 1969.
116. Popplestone, R.J., Ambler, A.P. and Bellos, I.M., An interpreter for a language for
describing assemblies, Artificial Intelligence 14 (1980) 79-107.
117. Porter, G. and Mundy, J., A non-contact profile sensor system for visual inspections, in:
Proceedings IEEE Workshop on Industrial Applications of Machine Vision, 1982.
118. Porter, G. and Mundy, J., A model-driven visual inspection module, in: M. Brady and R. Paul
(Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge,
MA, 1984).
119. Raibert, M.H., Special Issue on walking machines, Internat. J. Robotics Res. 3 (2) (1984).
120. Raibert, M.H. and Craig, J.J., A hybrid force and position controller, in: M. Brady, J.M.
Hollerbach, T.J. Johnson, T. Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning
and Control (MIT Press, Cambridge, MA, 1983).
121. Raibert, M.H. and Tanner, J.E., Design and implementation of a VLSI tactile sensing
computer, Internat. J. Robotics Res. 1 (3) (1982) 3-18.
122. Raibert, M.H. and Sutherland, I.E., Machines that walk, Scientific American 248 (1983)
44-53.
123. Raibert, M.H., Brown, H.B., Jr. and Murthy, S.S., 3D balance using 2D algorithms?, in: M.
Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT
Press, Cambridge, MA, 1984).
124. Renaud, M., An efficient iterative analytical procedure for obtaining a robot manipulator
dynamic model, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
125. Requicha, A.A.G., Representation of rigid solids: theory, methods and systems, Comput.
Surveys 12 (4) (1980) 437-464.
126. Rich, C.R. and Waters, R., Abstraction, inspection, and debugging in programming, MIT AI
Memo 634, Cambridge, MA, 1981.
127. Roth, B., Screws, motors, and wrenches that cannot be bought in a hardware store, in: M.
Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT
Press, Cambridge, MA, 1984).
128. Sacerdoti, E., A structure for plans and behavior, SRI Artificial Intelligence Center TR-109,
Menlo Park, CA, 1975.
129. Salisbury, J.K., Kinematic and force analysis of articulated hands, Ph.D. Thesis, Department
of Mechanical Engineering, Stanford University, Stanford, CA, 1982.
130. Salisbury, J.K., Active stiffness control of a manipulator in Cartesian coordinates, in:
Proceedings IEEE Conference on Decision and Control, Albuquerque, NM, 1980.
131. Salisbury, J.K., Interpretation of contact geometries from force measurements, in: M. Brady
and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
132. Salisbury, J.K. and Craig, J.J., Articulated hands: force control and kinematic issues, Internat.
J. Robotics Res. 1 (1) (1982) 4-17.
133. Schunck, B.G., Motion segmentation and estimation, MIT Artificial Intelligence Laboratory,
1983.
134. Schwartz, J.T. and Sharir, M., The piano movers problem III, Internat. J. Robotics Res. 2 (3)
(1983).
135. Shimano, B.E., Geschke, C.C. and Spaulding, C.H., VAL II: a robot programming language
and control system, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
136. Shirai, Y., Koshikawa, K., Oshima, M. and Ikeuchi, K., An approach to object recognition
using 3-D solid model, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium
on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
137. Taylor, R.H., The synthesis of manipulator control programs from task-level specifications,
AI Memo 282, Stanford University, Stanford, CA, 1976.
138. Taylor, R.H., Summers, P.D. and Meyer, J.M., AML: a manufacturing language, Internat. J.
Robotics Res. 1 (3) (1982) 19-41.
139. Terzopoulos, D., Multilevel reconstruction of visual surfaces: Variational principles and finite
element representations, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and
Analysis (Springer, New York, 1984).
140. Terzopoulos, D., Multilevel computational processes for visual surface reconstruction, Com-
put. Vision Graphics Image Process. 24 (1983) 52-96.
141. Trevelyan, J.P., Kovesi, P.D. and Ong, M.C.H., Motion control for a sheep shearing robot, in:
M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
142. Tsuji, S. and Asada, M., Understanding of three-dimensional motion in time-varying imagery,
in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
143. VAL, User's guide: a robot programming and control system, CONDEC Unimation Robo-
tics, 1980.
144. Villers, P., Present industrial use of vision sensors for robot guidance, in: A. Pugh (Ed.), Robot
Vision (IFS, 1982).
145. Vilnrotter, F., Nevatia, R. and Price, K.E., Structural analysis of natural textures, in: L.S.
Baumann (Ed.), Proceedings Image Understanding Workshop (Science Applications, Tysons
Corner, VA, 1981) 61-68.
146. Wesley, M.A., et al., A geometric modeling system for automated mechanical assembly, IBM
J. Res. Develop. 24 (1) (1980) 64-74.
147. Whitney, D.E., The mathematics of compliance, in: M. Brady, J.M. Hollerbach, T.J. Johnson,
T. Lozano-P6rez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press,
Cambridge, MA, 1983).
148. Winston, P.H., Learning and reasoning by analogy, Comm. ACM 23 (1980).
149. Winston, P.H., Artificial Intelligence (Addison-Wesley, Reading, MA, 2nd ed., 1984).
150. Winston, P.H., Binford, T.O., Katz, B. and Lowry, M., Learning physical descriptions from
functional descriptions, examples, and precedents, in: M. Brady and R. Paul (Eds.), Proceed-
ings of the International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
151. Witkin, A.P., Recovering surface shape and orientation from texture, Artificial Intelligence 17
(1981) 17-47.
152. Woodham, R.J., Analysing images of curved surfaces, Artificial Intelligence 17 (1981) 117-140.
153. Yoshikawa, T., Analysis and control of robot manipulators with redundancy, in: M. Brady
and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
154. Young, K.K.D., Controller design for a manipulator using theory of variable structure
systems, IEEE Trans. Systems Man Cybernet. 8 (1978) 101-109; reprinted in: [157].
155. Zucker, S.W., Hummel, R.A. and Rosenfeld, A., An application of relaxation labelling to line
and curve enhancement, IEEE Trans. Comput. 26 (1977) 394-403, 922-929.

ADDITIONAL REFERENCES

156. Bolles, R.C., Horaud, P. and Hannah, M.J., 3DPO: a three-dimensional parts orientation system,
in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984) 413-424.
157. Brady, M., Hollerbach, J.M., Johnson, T.J., Lozano-Pérez, T. and Mason, M.T. (Eds.), Robot
Motion: Planning and Control (MIT Press, Cambridge, MA, 1983).
158. Brady, M. and Paul, R. (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
159. Brooks, R.A. and Binford, T.O., Representing and reasoning about partially specified scenes, in:
L.S. Baumann (Ed.), Proceedings Image Understanding Workshop (Science Applications, Tysons
Corner, VA, 1980) 95-103.
160. Freund, E., Fast non-linear control with arbitrary pole placement for industrial robots and
manipulators, Internat. J. Robotics Res. 1 (1) (1982) 65-78.
161. Freund, E., Hierarchical non-linear control for robots, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984)
817-840.
162. Miura, H. and Shimoyama, I., Dynamical walk of biped locomotion, in: M. Brady and R. Paul
(Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984) 303-325.
163. Paul, R., Modelling, trajectory calculation, and servoing a computer controlled arm, AIM 177,
Stanford AI Laboratory, Stanford, CA, 1972.
164. Fisher, W.D., A kinematic control of redundant manipulation, Ph.D. Thesis, Department of
Electrical Engineering, Purdue University, Lafayette, IN, 1984.
165. Brady, M., Agre, P., Braunegg, D.J. and Connell, J.H., The mechanic's mate, in: T. O'Shea (Ed.),
ECAI 84: Advances in Artificial Intelligence (Elsevier Science Publishers B.V. (North-Holland),
Amsterdam, 1984) 681-696.
166. Connell, J.H. and Brady, M., Learning shape descriptions, MIT AI Memo 824, Artificial
Intelligence Laboratory, Cambridge, MA, 1985.

Received February 1984; revised version received April 1984
