Artificial Intelligence and Robotics 1985
ABSTRACT
Since Robotics is the field concerned with the connection of perception to action, Artificial Intelligence
must have a central role in Robotics if the connection is to be intelligent. Artificial Intelligence
addresses the crucial questions of: what knowledge is required in any aspect of thinking; how should
that knowledge be represented; and how should that knowledge be used. Robotics challenges AI by
forcing it to deal with real objects in the real world. Techniques and representations developed for
purely cognitive problems, often in toy domains, do not necessarily extend to meet the challenge.
Robots combine mechanical effectors, sensors, and computers. AI has made significant contributions
to each component. We review AI contributions to perception and reasoning about physical objects.
Such reasoning concerns space, path-planning, uncertainty, and compliance. We conclude with three
examples that illustrate the kinds of reasoning or problem-solving abilities we would like to endow
robots with and that we believe are worthy goals of both Robotics and Artificial Intelligence, being
within reach of both.
some of these tasks quite well, particularly those that pertain to Computer
Science; it does others quite poorly, particularly perception, object modelling,
and spatial reasoning.
The intelligent connection of perception to action replaces sensing by per-
ception, and software by intelligent software. Perception differs from sensing or
classification in that it implies the construction of representations that are the
basis for recognition, reasoning and action. Intelligent software addresses issues
such as: spatial reasoning, dealing with uncertainty, geometric reasoning,
compliance, and learning. Intelligence, including the ability to reason and learn
about objects and manufacturing processes, holds the key to more versatile
robots.
Insofar as Robotics is the intelligent connection of perception to action,
Artificial Intelligence (AI) is the challenge for Robotics. On the other hand,
however, we shall argue that Robotics severely challenges Artificial In-
telligence (AI) by forcing it to deal with real objects in the real world.
Techniques and representations developed for purely cognitive problems often
do not extend to meet the challenge.
First, we discuss the need for intelligent robots and we show why Robotics
poses severe challenges for Artificial Intelligence. Then we consider what is
required for robots to act on their environment. This is the domain of
kinematics and dynamics, control, innovative robot arms, multi-fingered hands,
and mobile robots. In Section 5, we turn attention to intelligent software,
focussing upon spatial reasoning, dealing with uncertainty, geometric reason-
ing, and learning. In Section 6, we discuss robot perception. Finally, in Section
7, we present some examples of reasoning that connects perception to action,
example reasoning that no robot is currently capable of. We include it because
it illustrates the reasoning and problem-solving abilities we would like to
endow robots with and that we believe are worthy goals of Robotics and
Artificial Intelligence, being within reach of both.
Robotics needs to deal with the real world, and to do this it needs detailed
geometric models. Perception systems need to produce geometric models;
reasoning systems must base their deliberations on such models; and action
systems need to interpret them. Computer-aided design (CAD) has been
concerned with highly restricted uses of geometric information, typically dis-
play and numerically controlled cutting. Representations incorporated into
current CAD systems are analogous to conventional data structures. In order
to connect perception, through reasoning, to action, richer representations of
geometry are needed. Steps toward such richer representations can be found in
configuration space [88, 89], generalized cones [9], and visual shape represen-
tations [13, 70, 74, 93].
As well as geometry, Robotics needs to represent forces, causation, and
uncertainty. We know how much force to apply to an object in an assembly to
mate parts without wedging or jamming [147]. We know that pushing too hard
on a surface can damage it; but that not pushing hard enough can be ineffective
for scribing, polishing, or fettling. In certain circumstances, we understand how
an object will move if we push it [98]. We know that the magnitude and
direction of an applied force can be changed by pulleys, gears, levers, and
cams.
We understand the way things such as zip fasteners, pencil sharpeners, and
automobile engines work. The spring in a watch stores energy, which is
released to a flywheel, causing it to rotate; this causes the hands of the watch to
rotate by a smaller amount determined by the ratios of the gear linkages.
Representing such knowledge is not simply a matter of developing the ap-
propriate mathematical laws. Differential equations, for example, are a
representation of knowledge that, while extremely useful, are still highly
limited. Forbus [45] points out that conventional mathematical representations
do not encourage qualitative reasoning; instead, they invite numerical
simulation. Though useful, this falls far short of the qualitative reasoning that people
are good at. Artificial Intelligence research on qualitative reasoning and naive
physics has made a promising start but has yet to make contact with the real
world, so the representations and reasoning processes it suggests have barely
been tested [11, 32, 45, 66, 150].
Robotics needs to represent uncertainty, so that reasoning can successfully
overcome it. There are bounds on the accuracy of robot joints; feeders and
sensors have errors; and though we talk about repetitive work, no two parts are
ever exactly alike. As the tolerances on robot applications become tighter, the
need to deal with uncertainty, and to exploit redundancy, becomes greater.
(3) Using knowledge. AI has also uncovered techniques for using knowledge
effectively. One problem is that the knowledge needed in any particular case
cannot be predicted in advance. Programs have to respond flexibly to a
non-deterministic world. Among the techniques offered by AI are search,
structure matching, constraint propagation, and dependency-directed reasoning.
4. Action
In this section, we consider what is required for robots to act on their
environment. This is the subject of kinematics and dynamics, control, robot
arms, multi-fingered hands, and locomoting robots.
the task. Six degrees of freedom (DOF) are required to define the position and
orientation of an object in space. Correspondingly, many robots have six joint
motors to achieve these freedoms. Converting between the joint positions,
velocities, and accelerations and the Cartesian (task) counterparts is the job of
kinematics. The conversion is an identity transformation between the joint
space of 'Cartesian' arms (such as the IBM 7565) and orthogonal (x, y, z)
Cartesian space. Cartesian arms suffer the disadvantage of being less able to
reach around and into objects. Kinematic transformations are still needed for
arms whose joint spaces are based on spherical or cylindrical coordinates.
The kinematics of a mechanical device are defined mathematically. The
requirement that the kinematics can be efficiently computed adds a constraint
that ultimately affects mechanical design. In general, the transformation from
joint coordinates to Cartesian coordinates is straightforward. Various efficient
algorithms have been developed, including recent recursive schemes whose
time complexity is linear in the number of joints. Hollerbach [67] discusses
such recursive methods for computing the kinematics for both the Lagrange
and Newton-Euler formulations of the dynamics. The inverse kinematics
computation, from Cartesian to joint coordinates, is often more complex. In
general, it does not have a closed form solution [114]. Pieper [114] (see also
[115]) showed that a 'spherical' wrist with three intersecting axes of rotation
leads to an exact analytic solution to the inverse kinematic equations. The
spherical wrist allows a decomposition of the typical six-degree-of-freedom
inverse kinematics into two three-degree-of-freedom computations, one to
compute the position of the wrist, the other to compute the orientation of the
hand. More recently, Paul [111], Paul, Stevenson and Renaud [113], Feather-
stone [44], and Hollerbach and Sahar [68], have developed efficient techniques
for computing the inverse kinematics for spherical wrists. Small changes in the
joints of a robot give rise to small changes in the Cartesian position of its end
effector. The small changes in the two coordinate systems are related by a
matrix that is called the Jacobian. Orin and Schrader [109] have investigated
algorithms for computing the Jacobian of the kinematic transformation that are
suited to VLSI implementation.
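The joint-to-Cartesian conversion and its closed-form inverse are easy to see in the simplest non-trivial case, a planar two-link arm. The sketch below is purely illustrative (the link lengths are invented, and this is not the kinematics of any robot discussed above); it uses the textbook law-of-cosines solution for the elbow angle.

```python
import math

# Hypothetical planar two-link arm; link lengths are illustrative choices.
L1, L2 = 0.4, 0.3  # metres

def forward(q1, q2):
    """Joint angles (rad) -> Cartesian (x, y) of the end effector."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def inverse(x, y, elbow_up=True):
    """Closed-form inverse kinematics; raises ValueError outside the workspace."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        raise ValueError("target outside workspace")
    s2 = math.sqrt(1.0 - c2 * c2) * (1 if elbow_up else -1)
    q2 = math.atan2(s2, c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * s2, L1 + L2 * c2)
    return q1, q2
```

Even here the inverse is not unique: the `elbow_up` flag selects between the two mirror-image solutions, a faint echo of the multiplicity that makes the general inverse problem hard.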
If the number of robot joints is equal to six, there are singularities in the
kinematics, that is, a small change in Cartesian configuration corresponds to a
large change in joint configuration. The singularities of six-degree-of-freedom
industrial robot arms are well cataloged. Singularities can be avoided by
increasing the number n of joints, but then there are infinitely many solutions
to the inverse kinematics computation. One approach is to use a generalized
inverse technique using a positive definite 6 × n matrix to find the solution that
minimizes some suitable quantity such as energy or time [78, 147]. Another
approach is to avoid singularities by switching between the redundant degrees
of freedom [164]. Finally, if the number of joints is less than six, there are
'holes' in the workspace, regions that the robot cannot reach. Such robots,
including the SCARA design, are nevertheless adequate for many specialized
tasks such as pick-and-place operations. One important application of kinema-
tics computations is in automatic planning of trajectories [16].
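The generalized-inverse resolution of redundancy can be sketched for a hypothetical redundant planar three-link arm (lengths invented for the example): the Moore-Penrose pseudoinverse picks, among the infinitely many joint velocities that produce a desired Cartesian velocity, the one of minimum norm.

```python
import numpy as np

lengths = np.array([0.4, 0.3, 0.2])   # illustrative 3-link planar arm

def jacobian(q):
    """2x3 Jacobian of the planar end-effector position w.r.t. the joints."""
    phi = np.cumsum(q)                # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        # joint i sweeps every link from i outwards
        J[0, i] = -np.sum(lengths[i:] * np.sin(phi[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(phi[i:]))
    return J

q = np.array([0.3, 0.5, -0.4])
x_dot = np.array([0.10, -0.05])               # desired Cartesian velocity
q_dot = np.linalg.pinv(jacobian(q)) @ x_dot   # minimum-norm joint velocities
```

Any other solution differs from `q_dot` by a null-space motion of the Jacobian and is strictly longer, which is the sense in which the pseudoinverse "minimizes some suitable quantity".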
Most attention has centered on open kinematic chains such as robot arms.
An 'open' kinematic chain consists of a single sequence of links. Its analysis is
reasonably straightforward. Much less work has been done on closed kinematic
chains such as legged robots or multi-fingered hands. Although, in theory, the
kinematic chain closes when the robot grasps an object lying on a work surface,
it is usually (almost) at rest, and the kinematics of the closed linkage are
ignored. More interestingly, Hirose et al. [65] have designed a pantograph
mechanism for a quadruped robot that significantly reduces potential energy
loss in walking. Salisbury and Craig [132] (see also [129]) have used a number
of computational constraints, including mobility and optimization of finger
placement, to design a three-fingered hand. The accuracy and dexterity of a
robot varies with configuration, so attention needs to be paid to the layout of
the workspace. Salisbury and Craig [132] used the condition number of the
Jacobian matrix (using the row norm) to evaluate configurations of the hand.
This is important because the accuracy and strength of a robot varies
throughout its workspace, and the condition number provides a means of
evaluating different points. Yoshikawa [153] has introduced a measure of
manipulability for a similar purpose. Attend to some object in your field of
view, and consider the task of moving it to a different position and orientation.
The movement can be effected by translating along the line joining the
positions while rotating about some axis. The simultaneous translation and
rotation of an object is called a screw, and screw coordinates have been
developed as a tool for the analysis of such motions. Roth [127] reviews the
application to Robotics of screw coordinates to link kinematics and dynamics.
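The condition number and Yoshikawa's manipulability measure can both be computed directly from the Jacobian. The sketch below does so for the illustrative planar two-link arm used earlier (lengths invented); for this arm the manipulability reduces analytically to L1 L2 |sin q2|, which vanishes at the outstretched singularity.

```python
import numpy as np

L1, L2 = 0.4, 0.3   # illustrative planar two-link arm

def jacobian(q1, q2):
    """2x2 Jacobian of end-effector position w.r.t. the two joints."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def condition_number(q1, q2):
    """Salisbury and Craig's configuration-quality measure (2-norm here)."""
    return np.linalg.cond(jacobian(q1, q2))

def manipulability(q1, q2):
    """Yoshikawa's measure: sqrt(det(J J^T))."""
    J = jacobian(q1, q2)
    return np.sqrt(np.linalg.det(J @ J.T))
```

Evaluating either measure over candidate workspace layouts is exactly the kind of comparison described above: well-conditioned configurations are accurate and strong in all directions, ill-conditioned ones are not.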
The dynamic equations of a robot arm (see Hollerbach [67]) consist of n
coupled, second-order, differential equations in the positions, velocities, and
accelerations of the joint variables. The equations are complex because they
involve terms from two adjacent joints, corresponding to reaction and Coriolis
torques. One way to visualize the effect of such torques is by moving your arm,
for example opening a door or cutting with a knife. The motion of the forearm
relative to the wrist generates a force not only at the elbow but, by the laws of
Newtonian mechanics, at the shoulder. The faster you move your forearm, the
faster you accelerate it, or the heavier the knife that you use, the larger the
resulting torque at the shoulder. Conventional techniques have simplified
dynamics by dropping or linearizing terms, or have proposed table look-up
techniques. Recently, recursive formulations of the dynamic equations
have been developed that:
(a) compute the kinematics from the shoulder to the hand in time propor-
tional to n;
(b) compute the inverse dynamics from the force and torque exerted on the
86 M. BRADY
hand by the world from the hand to the shoulder, again in time proportional to
n.
The importance of this result is threefold:
(1) First, it suggests that a more accurate inverse plant model can be
developed, leading to faster, more accurate arms. Friction is a major source of
the discrepancy between model and real world. Direct-drive technology [4-6]
reduces the mismatch. In a direct-drive arm, a motor is directly connected to a
joint with no intervening transmission elements, such as gears, chains, or ball
screws. The advantages are that friction and backlash are low, so the direct-
drive joint is backdrivable. This means that it can be controlled using torque
instead of position. Torque control is important for achieving compliance, and
for feedforward dynamics compensation.
(2) Second, the recurrence structure of the equations lends itself to im-
plementation using a pipelined microprocessor architecture, cutting down
substantially on the number of wires that are threaded through the innards of a
modern robot. On current robots, a separate wire connects each joint motor to
the central controller; individual control signals are sent to each motor. The
wires need to thread around the joints, and the result is like a pan of spaghetti.
(3) Third, Hollerbach and Sahar [68] have shown that their refinement of
Featherstone's technique for computing the inverse kinematics makes available
many of the terms needed for the recursive Newton-Euler dynamics.
Renaud [124] has developed a novel iterative Lagrangian scheme that
requires about 350 additions and 350 multiplies for a six-revolute joint robot
arm. The method has been applied to manipulators having a general tree
structure of revolute and prismatic joints. Huston and Kelly [72] and Kane and
Levinson [80] have recently adapted Kane's formulation of dynamics to robot
structures.
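The linear-time structure of the two recursions can be made concrete in a drastically simplified setting: a planar arm whose links are modelled as point masses at their distal ends. This is nothing like the full spatial Newton-Euler formulation surveyed by Hollerbach [67]; it is a sketch, with all parameter values invented, whose only virtue is that the outward and inward passes visibly each run in time proportional to n.

```python
import numpy as np

def inverse_dynamics(q, qd, qdd, lengths, masses, g=9.81):
    """O(n) inverse dynamics for a planar arm, each link modelled as a
    point mass at its distal end (a deliberate simplification)."""
    n = len(q)
    phi = np.cumsum(q)          # absolute link angles
    w = np.cumsum(qd)           # absolute angular velocities
    a = np.cumsum(qdd)          # absolute angular accelerations
    # outward recursion: acceleration of each link's point mass;
    # gravity enters as a fictitious upward base acceleration
    r = np.stack([lengths * np.cos(phi), lengths * np.sin(phi)], axis=1)
    acc = np.zeros((n, 2))
    prev = np.array([0.0, g])
    for i in range(n):
        prev = prev + a[i] * np.array([-r[i, 1], r[i, 0]]) - w[i] ** 2 * r[i]
        acc[i] = prev
    # inward recursion: accumulate forces, then joint torques (moments)
    tau = np.zeros(n)
    f = np.zeros(2)
    moment = 0.0
    for i in reversed(range(n)):
        f = f + masses[i] * acc[i]
        moment = moment + r[i, 0] * f[1] - r[i, 1] * f[0]
        tau[i] = moment
    return tau
```

For n = 1 the recursion collapses to the familiar pendulum torque m l² q̈ + m g l cos q, which provides a useful sanity check on any implementation.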
4.2. Control
Much of control theory has developed for slowly changing, nearly rigid
systems. The challenges of robot control are several:
- Complex dynamics. The dynamics of open-link kinematic chain robots consist
of n coupled second-order differential equations, where n is the number
of links. They become even more complex for a closed multi-manipulator
system such as a multi-fingered robot hand or locomoting robot.
-Articulated structure. The links of a robot arm are cascaded and the dynamics
and inertias depend on the configuration.
-Discontinuous change. The parameters that are to be controlled change
discontinuously when, as often happens, the robot picks an object up.
- Range of motions. To a first approximation one can identify several different
kinds of robot motion: free space or gross motions, between places where work
is to be done; approach motions (guarded moves) to a surface; and compliant
moves along a constraint surface. Each of these different kinds of motion poses
different control problems.
The majority of industrial robot controllers are open-loop. However, many
control designs have been investigated in Robotics; representative samples are
to be found in [111,157, 158]. They include optimal controllers [36, 78]; model
reference control [37]; sliding mode control [154]; nonlinear control [160, 161];
hierarchical control [132]; distributed control [81]; hybrid force-position control
[82, 120]; and integrated system control [2]. Cannon and Schmitz [28] have
investigated the precise control of flexible manipulators.
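One ingredient of several of these schemes, feedforward compensation of the gravity load on top of a simple feedback law, can be illustrated for a single revolute joint modelled as a point mass. The gains, masses, and integration scheme below are all invented for the sketch; with the gravity torque cancelled exactly, what remains is a linear, well-damped servo.

```python
import math

# PD control with gravity compensation for a single revolute joint,
# modelled as a point mass m at distance l (illustrative values).
m, l, g = 1.0, 0.5, 9.81
I = m * l * l                     # link inertia about the joint

def simulate(q_target, steps=20000, dt=0.001, kp=50.0, kd=10.0):
    """Semi-implicit Euler simulation of the closed loop."""
    q, qd = 0.0, 0.0
    for _ in range(steps):
        # feedback plus feedforward cancellation of the gravity torque
        tau = kp * (q_target - q) - kd * qd + m * g * l * math.cos(q)
        qdd = (tau - m * g * l * math.cos(q)) / I
        qd += qdd * dt
        q += qd * dt
    return q
```

The same idea scales up: the recursive inverse dynamics of Section 4.1 supplies the feedforward term for a whole arm, which is why backdrivable, torque-controlled joints matter.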
FIG. 1. The three-fingered robot hand developed by Salisbury and Craig [132]. Each finger has
three degrees of freedom, and is pulled by four tendons. The hierarchical controller includes three
finger controllers, each of which consists of four controllers, one per tendon. Each controller is of
the Proportional-Integral-Derivative type. (Reproduced from [129].)
robot arms. The motions of the individual fingers are limited to curl and flex
motions in a plane that is determined by the abduction/adduction of the finger
about the joint with the palm. The motions of the fingers are coordinated by
the palm, which can assume a broad range of configurations. The dexterity of
the human hand has inspired several researchers to build multi-function robot
hands.
Okada [107] described a hand consisting of three fingers evenly spaced about
a planar palm. The workspace of the individual fingers was an ellipsoid. The
three workspaces intersected in a point. Okada programmed the hand to
perform several tasks such as tighten bolts. Hanafusa and Asada [55, 56]
developed a hand consisting of three evenly spaced, spring-loaded fingers. The
real and apparent spring constants of the fingers were under program control.
Stable grasps were defined as the minima of a potential function. The definition
of stability in two dimensions was demonstrated by programming the hand to
pick up an arbitrarily shaped lamina viewed by a TV camera.
Salisbury [129] (see also [132]) investigated kinematic and force constraints on
the design of a tendon-driven three-fingered hand (see Fig. 1). The goal was to
FIG. 2. The prototype Utah/MIT dextrous hand developed by Steven Jacobsen, John Wood, and
John Hollerbach. The four fingers each have four degrees of freedom. (a) The geometry of tendon
routing. (b) The material composition of tendons. (Reproduced from [77, Figs. 2(b) and 7].)
FIG. 4. The quadruped robot built by Hirose, Umetani, and colleagues at Tokyo Institute of
Technology. The robot can walk upstairs, and can move forward in a crab gait. (Reproduced from
[65, Photo 1].)
FIG. 5. The BIPER-3 walking robot built by Miura and Shimoyama at Tokyo University. See text
for details. (Reproduced from [162, Fig. 1].)
FIG. 6. The BIPER-4 walking robot under development at Tokyo University. BIPER-4 has hip,
knee, and ankle joints on each leg. (Reproduced from [162, Fig. 14].)
FIG. 7. The hopping machine developed by Raibert and his colleagues at Carnegie Mellon
University. (Reproduced from [123, Fig. 16].)
FIG. 8. Will the screw make it into the hole? The joint measurements are all subject to small
errors, as are the placement of the box, the lid on the box, the orientation of the screwdriver in the
hand, and the orientation of the screw on the end of the screwdriver. When all these errors are
combined, it can be quite difficult to guarantee that a task will always be carried out successfully.
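The question posed by Fig. 8 can be given numerical form. The sketch below treats each error source as an independent, zero-mean Gaussian lateral displacement (the source names and magnitudes are invented for illustration) and estimates by Monte Carlo simulation the chance that the stacked-up error exceeds the hole clearance.

```python
import random

random.seed(0)

# Illustrative 1-D tolerance stack-up for the screw-into-hole task of Fig. 8.
# Each source contributes an independent lateral error (std. dev. in mm).
sources = {"joints": 0.4, "box placement": 0.3, "lid": 0.2,
           "screwdriver grip": 0.3, "screw seating": 0.2}

def miss_probability(clearance_mm, trials=100000):
    """Estimate the probability that the combined error exceeds clearance."""
    misses = 0
    for _ in range(trials):
        err = sum(random.gauss(0.0, s) for s in sources.values())
        if abs(err) > clearance_mm:
            misses += 1
    return misses / trials
```

Tightening the clearance drives the miss probability up sharply, which is exactly why tighter tolerances force robot systems to reason about, rather than merely suffer, uncertainty.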
FIG. 9. (a) Points P1, P2, and P3 are specified as via points to coax the robot through the narrow gap
separating the two obstacles shown. (b) Via points can also be used to simplify trajectory planning.
The two via points above the start and goal of the move assure that the robot hand will not attempt
to penetrate the table as it picks up and sets down the object. (Reproduced from [16, Fig. 2(a)].)
FIG. 10. A path found by the Brooks and Lozano-Pérez program. (Reproduced from [25, Fig.
11(a)].)
Brooks freezes joint 4 of the PUMA. The algorithm (i) subdivides free space to
find freeways for the hand and payload assembly; (ii) finds freeways for the
upper arm subassembly (joints 1 and 2 of the PUMA); and (iii) searches for the
payload and upper arm freeways concurrently under the projection of constraints
determined by the forearm. The subdivision of free space in this way is the
most notable feature of Brooks' approach. It stands in elegant relation to the
algorithms for computing inverse kinematics referred to earlier. It is assumed
that the payload is convex, and that the obstacles are convex stalagmites and
stalactites. It is further assumed that stalactites are in the workspace of the
upper arm of the PUMA, not of the payload.
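Brooks' freeway construction is far richer than anything that can be sketched here, but the bare idea underlying all such planners, searching a subdivided free space for a collision-free path, can be illustrated by breadth-first search on a coarse occupancy grid. The grid is invented; real configuration spaces are continuous, high-dimensional, and subdivided far more cleverly.

```python
from collections import deque

# A drastically simplified stand-in for configuration-space path planning:
# breadth-first search over a coarse occupancy grid (1 = obstacle).
grid = [
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan(start, goal):
    """Return a shortest path of free cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The interesting problems, as the text makes clear, lie in constructing the free space to be searched, not in the search itself.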
FIG. 11. A few of the generalized cones generated by two obstacles and the boundary of the
workspace. (Reproduced from [23, Fig. 1].)
FIG. 12. An example of path finding for a PUMA by Brooks' [24] algorithm.
partitioned into three classes: those that tend to separate the bodies to be mated;
those that tend to make one body penetrate the other; and those that move the
body and maintain the original constraints. Theoretically, this provides a basis
for choosing the next step in an assembly sequence.
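The three-way partition can be written down directly once the outward contact normal is known; a schematic two-dimensional version follows (the classification names are taken from the text, everything else is an invented sketch).

```python
import numpy as np

def classify_force(f, normal, tol=1e-9):
    """Classify an applied force relative to an outward contact normal:
    separating, penetrating, or constraint-maintaining (tangential)."""
    c = float(np.dot(f, normal))
    if c > tol:
        return "separating"
    if c < -tol:
        return "penetrating"
    return "maintaining"
```

Choosing the next assembly step then amounts to selecting motions whose associated forces stay in the third class until the goal contact is reached.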
Trevelyan, Kovesi, and Ong [141] describe a sheep-shearing robot. Fig. 13
shows the geometry of the robot and the 'workpiece'. Trevelyan, Kovesi, and
Ong [141] note that "over two hundred sheep have been shorn by the machine
(though not completely) yet only a few cuts have occurred. This extremely low
FIG. 13. The sheep shearing robot developed by Trevelyan and his colleagues at the University of
Western Australia. A sheep naturally lies quite still while it is being sheared; indeed it often falls
asleep. (Reproduced from [141, Fig. 1].)
102 M. BRADY
injury rate results from the use of sensors mounted on the shearing cutter
which allow the computer controlling the robot to keep the cutter moving just
above the sheep skin". Trajectories are planned from a geometric model of a
sheep using Bernstein-Bézier parametric curves. The trajectory is modified to
comply with sense data. Two capacitance sensors are mounted under the cutter
just behind the comb. These sensors can detect the distance between the cutter
and the sheep's skin to a range of approximately 30 mm. Compliance is needed
to take account of inaccuracies in the geometric model of the sheep and the
change in shape of the sheep as it breathes.
Robots evolved for positional accuracy, and are designed to be mechanically
stiff. High-tolerance assembly tasks typically involve clearances of the order of
a thousandth of an inch. In view of inaccurate modeling of the world and
limitations on joint accuracy, low stiffnesses are required to effect assemblies.
Devices such as the Remote Center Compliance (RCC) [147] and the Hi-T
hand [51] exploit the inherent mechanical compliance of springs to accomplish
tasks. Such passive-compliant devices are fast, but the specific application is
built into the mechanical design. The challenge is that different tasks impose
different stiffness requirements. In active-compliance, a computer program
modifies the trajectory of the arm on the basis of sensed forces (and other
modalities) [112]. Active compliance is a general-purpose technique; but is
typically slow compared to passive compliance.
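A minimal sketch of active compliance is an admittance law: the commanded position is backed off from the nominal trajectory in proportion to the sensed force. All stiffnesses, the wall position, and the servo gain below are invented, and a real implementation would run such a law inside the arm's position servo rather than in an open Python loop.

```python
K_SOFT = 200.0        # programmed compliance along the constrained axis (N/m)
K_ENV = 5000.0        # stiffness of the environment (N/m)
WALL = 0.05           # surface position (m)
X_DES = 0.10          # nominal target, 5 cm inside the surface

def sensed_force(x):
    """Contact force once the end effector touches the surface."""
    return K_ENV * max(0.0, x - WALL)

x = 0.0
for _ in range(2000):
    x_cmd = X_DES - sensed_force(x) / K_SOFT   # admittance law
    x += 0.01 * (x_cmd - x)                    # low-gain servo toward x_cmd
```

The arm settles just past the surface, exerting a few newtons instead of the hundreds a stiff position servo would apply while trying to reach the unreachable nominal target.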
Mason [96] suggested that the (fixed number of) available degrees of
freedom of a task could be divided into two subsets, spanning orthogonal
subspaces. The subspaces correspond one-one with the natural constraints
determined by the physics of the task, and the artificial constraints determined
by the particular task. See [97] for details and examples. For example, in
screwdriving, a screwdriver cannot penetrate the screw, giving a natural con-
straint; successful screwdriving requires that the screwdriver blade be kept in
the screw slot, an artificial constraint. Raibert and Craig [120] refined and
implemented Mason's model as a hybrid force-position controller. Raibert and
Craig's work embodies the extremes of stiffness control in that the programmer
chooses which axes should be controlled with infinite stiffness (using position
control with an integral term) and which should be controlled with zero
stiffness (to which a bias force is added). Salisbury [130] suggests an inter-
mediate ground that he calls "active stiffness control".
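The Raibert-Craig extreme can be caricatured in a few lines: a binary selection matrix S routes each task axis either to a position-error law or to a force-error law. The gains and the particular laws below are invented placeholders, not the controllers of [120], which include integral terms and bias forces.

```python
import numpy as np

# Schematic hybrid force-position control: x, y position-controlled
# (infinite stiffness), z force-controlled (zero stiffness).
S = np.diag([1, 1, 0])
KP, KF = 100.0, 0.5          # illustrative gains

def control(x, x_des, f, f_des):
    """Combine position and force feedback via the selection matrix."""
    pos_cmd = KP * (np.asarray(x_des) - np.asarray(x))
    force_cmd = KF * (np.asarray(f_des) - np.asarray(f))
    return S @ pos_cmd + (np.eye(3) - S) @ force_cmd
```

Note that the z position error is simply discarded: along a force-controlled axis the programmer has declared position irrelevant, which is exactly Mason's partition of the task's degrees of freedom.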
Programmers find it relatively easy to specify motions in position space; but
find it hard to specify the force-based trajectories needed for compliance. This
has motivated the investigation of automatic generation of compliant fine-
motion programs [38, 92]. In Dufay and Latombe's approach, the geometry of
the task is defined by a semantic network, the initial and goal configurations of
parts are defined by symbolic expressions, and the knowledge of the program is
expressed as production rules. Productions encode the 'lore' of assembly: how
to overcome problems such as moving off the chamfer during an insertion task.
Dufay and Latombe's program inductively generates assembly programs from
successful execution sequences. The method requires that the relationships
between surfaces of parts in contact be known fairly precisely. In general this is
difficult to achieve because of errors in sensors.
Lozano-Pérez, Mason, and Taylor [92] have proposed a scheme for
automatically synthesizing compliant motions from geometric descriptions of a task.
The approach combines Mason's ideas about compliance, Lozano-Pérez's
C-space, and Taylor's [137] proposal for programming robots by fleshing out
skeletons forming a library of operations. The approach, currently being
implemented, deals head-on with errors in assumed position and heading.
Lozano-Pérez, Mason, and Taylor use a generalized damper model to
determine all the possible configurations that can result from a motion. It is
necessary to avoid being jammed in the friction cone of any of the surfaces en
route to the goal surface. This sets up a constraint for each surface. Intersecting
the constraints leaves a range of possible (sequences of) compliant moves that
are guaranteed to achieve the goal, notwithstanding errors.
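The friction-cone test at the heart of this analysis is simple to state: a force pressing on a surface sticks (and a motion may jam) when its tangential component does not exceed the coefficient of friction times its normal component. The sketch below checks this for a horizontal surface contacted from above; the geometry and the notion of "jamming" are, of course, far richer in [92].

```python
def inside_friction_cone(fx, fy, mu):
    """Is a force pressing down on a horizontal surface (outward normal
    +y) inside the friction cone, i.e. liable to stick or jam?"""
    f_normal = -fy              # component pressing into the surface
    if f_normal <= 0:
        return False            # not pressing on the surface at all
    f_tangential = abs(fx)
    return f_tangential <= mu * f_normal
```

Intersecting such constraints over every surface the motion may touch is what yields the guaranteed compliant moves described above.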
Programming by fleshing out skeletons is reminiscent of the programmer's
apprentice [126]. The similarities are that the computer adopts the role of
junior partner or critic, programming is based on cliches, and design decisions
and logical dependencies are explicitly represented so that the effects of
modifications to a program can be automatically propagated through the
program. The difference is that a robot programmer's apprentice works with
rich geometric models. Lozano-Pérez has suggested that guiding can be
extended to teach a robot plans that involve sensing, a large number of similar
movements (for example unloading a palette), and asynchronous control of
multiple manipulators. The requirement that a system deal with rich geometric
models also distinguishes the robot programmer's apprentice from earlier work
in AI planning [128].
6. Perception
6.1. Introduction
The perceptual abilities of commercially available robots are severely limited,
especially when compared with laboratory systems. It is convenient to dis-
tinguish contact and non-contact sensing. Contact, or local, sensing includes
tactile, proximity, and force sensing. Non-contact sensing includes passive
sensing in both visual and non-visual spectral bands, and active sensing using
infra-red, sonar, ultrasound, and millimeter radar.
Robot perception is only a special case of computer perception in the sense
that there are occasional opportunities for engineering solutions to what are, in
FIG. 14. Sample tactile images from the Hillis sensor. (Reproduced from [64, Fig. 6].)
FIG. 15. Architecture of the VLSI tactile sensor developed by Raibert and Tanner. A layer of
pressure-sensitive rubber is placed in contact with a VLSI wafer. Metalization on the surface of the
wafer forms large sensing electrodes that make contact with the pressure-sensitive rubber through
holes in a protective layer of SiO2, the overglass. (Reproduced from [121, Fig. 1].)
6.3. Non-contact sensing
Non-contact sensing is important for a variety of applications in manufacturing.
These include:
(1) Inspection. Most current industrial inspection uses binary two-dimen-
sional images. Only recently have grey level systems become commercially
available. No commercial system currently offers a modern edge detection
system. Two-dimensional inspection is appropriate for stamped or rotationally
symmetric parts. Some experimental prototypes [42, 117] inspect surfaces such
as engine mountings and airfoil blades.
(2) Parts acquisition. Parts may be acquired from conveyor belts, from
palettes, or from bins. Non-contact sensing means that the position of parts
which that knowledge might be used. The knowledge that is involved concerns
geometry, forces, process, space, and shape. The examples involve tools. They
concern the interplay between the use or recognition of a tool and constraints
on the use of tools. Reasoning between structure and function is particularly
direct in the case of tools. Shape variations, though large (there are tens of
kinds of hammer), are lessened by the fact that tools are rarely fussily adorned,
since such adornments would get in the way of using the tool.
tool bears some resemblance to a standard screwdriver, which also has a handle
and a shaft. In the standard screwdriver, however, the axes of the shaft and
handle are collinear. Evidently, the tool is a special-purpose screwdriver, since
only screwdrivers have such blades.
Tools have the shape that they do in order to solve some problem that is
difficult or impossible to solve with more generally useful forms. So why the
crank shape? What problem is being solved that could not be solved with a
more conventional screwdriver? H e r e are some screwdriver-specific instances
of general problems that arise using tools:
- Parts interface bug. A part does not match the part to which it is being applied
or fastened. For example, a wrench might be too small to span a nut; a
sledgehammer is inappropriate for driving a tack. The screwdriver head may
not match the screw (one might be of Phillips type). There is no evidence for this
bug in Fig. 16 because the fastener is not shown.
-Restricted rotary motion bug. A tool that is operated by turning it about some
axis has encountered an obstruction that prevents it turning further. This bug
occurs frequently in using wrenches. A socket wrench solves it by engaging a
gear to turn the wrench in one direction, disengaging the gear to rotate in the
other. How is it solved more generally? There is an analogous restricted
linear-motion bug, in which a required linear motion cannot be performed
because of an obstruction. Think of an everyday example involving tools (one
is given at the end of the section).
-Restricted-access bug. As anyone owning a particular (expensive) kind of
British car knows only too well, often the hardest part of using a tool is mating
it to the corresponding part. Many tools have an axis along, or about, which
they are applied. The most common version of the restricted-access bug is
when the axis is too long to fit into the available space. In the case of
screwdriving, this occurs when the screwdriver is restricted vertically above the
screw. A short, stubby screwdriver is the usual solution to this problem.
Can the crank screwdriver also overcome restricted-access bugs? Of course.
The geometric form of the crank screwdriver is necessary to solve this restric-
ted-workspace problem, rather than being a torque magnifier as initially
hypothesized. In fact, the tool is called an offset screwdriver. Fig. 17 illustrates
its use.
Since I first presented this example, another solution to the restricted access
bug has been brought to my attention. Fig. 18 shows a screwdriver whose shaft
can bend about any axis orthogonal to it.
Why are the blades of an offset screwdriver set orthogonal to one another?
Put differently, what bug do they help overcome? What would you need to
know in order to figure it out?
No program is currently capable of the reasoning sketched above. Pieces of
the required technology are available, admittedly in preliminary form, and
there is cause for optimism that they could be made to work together
110 M. BRADY
appropriately. First, vision programs exist that can almost generate the neces-
sary shape descriptions and model matching [17, 18]. There is considerable
interplay between form and function in the reasoning, and this has been
initially explored by Winston, Binford, and their colleagues, combining the
ACRONYM system of shape description and Winston's analogy program [150]. To
figure out that the crucial thing about the form is its ability to overcome a
restriction in the workspace, it is necessary to be able to reason about space
and the swept volumes of objects. This is the contribution of Lozano-Pérez
[88-91], Brooks [24], and Lozano-Pérez, Mason, and Taylor [92]. Forbus [45] is
developing a theory of processes, a system that can reason about physical
processes like water flow, heat, and springs. This builds upon earlier work by
De Kleer [32] and Bundy [27]. Notice that a system that is capable of the
reasoning sketched above might be able to invent new tools, such as the
flexible screwdriver. What is the difficult problem of realising the flexible
screwdriver design?
Answer to the problem: An example of a restricted linear-motion bug: trying
to strike a nail with a hammer when there is insufficient space to swing the
hammer.
FIG. 19. (a) An asymmetric wrench. (b) How to use a wrench.
The jaws of the wrench are placed against a nut; the handle is grasped and moved normal to its length; if the
diameter of the nut and the opening of the jaws of the wrench match, the nut
(assumed fixed) will cause the handle to rotate about the nut. Nowhere is
mention made of asymmetry. Surely, a symmetric wrench would be easier to
manufacture. Surely, a symmetric wrench would be equally good at turning
nuts. Or would it?
Recall that questions of form often relate not just to function, but to solving
some problem that a 'standard', here symmetric, wrench could not solve. The
main problem that arises using a wrench is the restricted rotary-motion bug. In
many tasks there is an interval [φ1, φ2] (measured from the local coordinate
frame corresponding to the axis of the handle) through which the wrench can
be rotated. The crucial observation is that a wrench is (effectively) a lamina. As
such, it has a degree of freedom corresponding to flipping it over. Exploiting
this degree of freedom makes no difference to the effective workspace of a
symmetric wrench. It doubles the workspace of an asymmetric wrench, giving
[-φ2, -φ1] ∪ [φ1, φ2]. In this way, an asymmetric wrench partially solves the
restricted rotary-motion bug. Perhaps this suggests how to design the head of a
wrench, say by minimizing [-φ1, φ1] subject to keeping the jaws parallel to
each other. Perhaps it also suggests (for example to Winston's [148] analogy
program) that other turning tools should be asymmetric, for analogous reasons.
There are many examples, of course, including the offset screwdriver discussed
in the previous section.
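The flip argument can be made concrete. In the sketch below (my own construction; only the φ interval notation comes from the text) the effective workspace of a lamina wrench is the union of the task interval with its mirror image, and the two are distinct only when the wrench is asymmetric:

```python
def effective_workspace(phi1, phi2):
    """Usable rotation intervals for an asymmetric lamina wrench: the task
    allows [phi1, phi2]; flipping the wrench over also makes the mirror
    interval [-phi2, -phi1] usable. Returns disjoint intervals."""
    mirror, task = (-phi2, -phi1), (phi1, phi2)
    if mirror[1] >= task[0]:          # intervals touch or overlap
        return [(mirror[0], task[1])]
    return [mirror, task]

def total_extent(intervals):
    return sum(hi - lo for lo, hi in intervals)

# Symmetric wrench: flipping reproduces the same engagement, so only
# [phi1, phi2] is available.
sym = total_extent([(0.25, 0.75)])
# Asymmetric wrench: flipping adds [-0.75, -0.25], doubling the workspace.
asym = total_extent(effective_workspace(0.25, 0.75))
print(sym, asym)  # prints 0.5 1.0
```

The doubling is exactly the text's [-φ2, -φ1] ∪ [φ1, φ2]; a design procedure could then shrink the unusable gap [-φ1, φ1] as suggested above.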
FIG. 20. (a) The geometry of a battery terminal. (b) Side view of the late model bolt.
8. Conclusion
Since Robotics is the connection of perception to action, Artificial Intelligence
must have a central role in Robotics if the connection is to be intelligent. We
have illustrated current interactions between AI and Robotics and presented
examples of the kinds of things we believe it is important for robots to know.
We have discussed what robots should know, how that knowledge should be
represented, and how it should be used. Robotics challenges AI by forcing it to
deal with real objects in the real world. An important part of the challenge is
dealing with rich geometric models.
ACKNOWLEDGMENT
This paper is based on invited talks presented at the Annual Conference of the American
Association for Artificial Intelligence, Washington, DC, August 1983, and the Quatrième Congrès
de Reconnaissance des Formes et d'Intelligence Artificielle, INRIA, Paris, January 1984. Many
colleagues commented on earlier drafts or presentations of the material in this paper. I thank
especially Phil Agre, Dave Braunegg, Bruce Donald, Olivier Faugeras, Georges Giralt, John
Hollerbach, Dan Huttenlocher, Tomás Lozano-Pérez, Tommy Poggio, Marc Raibert, Ken Salis-
bury, Jon Taft, Dan Weld, and Patrick Winston.
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts
Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in
part by the Advanced Research Projects Agency of the Department of Defense under Office of
Naval Research contract N00014-75-C-0643, the Office of Naval Research under contract number
N0014-80-C-0505, and the System Development Foundation.
REFERENCES
1. Agin, G.J., Computer vision systems for industrial inspection and assembly, Computer 13
(1980) 11-20.
2. Albus, J.S., Integrated system control, in: L. Gerhardt and M. Brady (Eds.), Robotics and
Artificial Intelligence (Springer, Berlin, 1984) 65-93.
3. Ambler, A.P. and Popplestone, R.J., Inferring the positions of bodies from specified spatial
relationships, Artificial Intelligence 6 (2) (1975) 157-174.
4. Asada, H., A characteristics analysis of manipulator dynamics using principal transformations,
in: Proceedings American Control Conference, Washington, DC, 1982.
5. Asada, H. and Youcef-Toumi, K., Development of a direct-drive arm using high torque
brushless motors, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
6. Asada, H. and Kanade, T., Design concept of direct-drive manipulators using rare-earth DC
torque motors, in: Proceedings Seventh International Joint Conference on Artificial Intelligence,
Vancouver, BC (1981) 775-778.
7. Bajcsy, R., What can we learn from one finger experiments?, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
8. Baker, H.H. and Binford, T.O., Depth from edge and intensity based stereo, in: Proceedings
Seventh International Joint Conference on Artificial Intelligence, Vancouver, BC, 1981.
9. Binford, T.O., Inferring surfaces from images, Artificial Intelligence 17 (1981) 205-245.
10. Binford, T.O., Stereo vision: complexity and constraints, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
11. Bobrow, D.G. (Ed.), Qualitative Reasoning about Physical Systems (North-Holland, Am-
sterdam, 1984); also: Artificial Intelligence 24 (1984) special volume.
12. Boissonat, J.-D., Stable matching between a hand structure and an object silhouette, IEEE
Trans. Pattern Anal. Mach. Intell. 4 (1982) 603-611.
13. Brady, M., Parts description and acquisition using vision, in: A. Rosenfeld (Ed.), Robot vision,
Proceedings SPIE, Washington, DC (1982) 1-7.
14. Brady, M., Parallelism in vision (Correspondent's Report), Artificial Intelligence 21 (1983)
271-284.
15. Brady, M., Criteria for shape representations, in: J. Beck and A. Rosenfeld (Eds.), Human and
Machine Vision (Academic Press, New York, 1983).
16. Brady, M., Trajectory planning, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T. Lozano-P6rez
and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press, Cambridge,
MA, 1983).
ARTIFICIAL INTELLIGENCE AND ROBOTICS 115
17. Brady, M., Representing shape, in: Proceedings IEEE Conference on Robotics, Atlanta, GA,
1984.
18. Brady, M. and Asada, H., Smoothed local symmetries and their implementation, in: Proceed-
ings First International Symposium on Robotics Research, 1983.
19. Brady, M. and Yuille, A., An extremum principle for shape from contour, MIT AI Memo
711, Cambridge, MA, 1983.
20. Brady, M. and Yuille, A., Representing three-dimensional shape, in: Proceedings Romansy
Conference, Udine, Italy, 1984.
21. Brooks, R.A., Symbolic reasoning among 3-D models and 2-D images, Artificial Intelligence
17 (1981) 285-348.
22. Brooks, R.A., Symbolic error analysis and robot planning, Internat. J. Robotics Res. 1 (4)
(1982) 29-68.
23. Brooks, R.A., Solving the findpath problem by good representation of free space, IEEE
Trans. Systems Man Cybernet. 13 (1983) 190-197.
24. Brooks, R.A., Planning collision free motions for pick and place operations, Internat. J.
Robotics Res. 2 (4) (1983).
25. Brooks, R.A. and Lozano-Pérez, T., A subdivision algorithm in configuration space for
findpath with rotation, in: Proceedings Eighth International Joint Conference on Artificial
Intelligence, Karlsruhe, W. Germany, 1983.
26. Bruss, A. and Horn, B.K.P., Passive navigation, MIT AI Memo 662, Cambridge, MA, 1981.
27. Bundy, A., Byrd, L., Luger, G., Mellish, C. and Palmer, M., Solving mechanics problems using
meta-level inference, in: D. Michie (Ed.), Expert Systems in the Microelectronic Age (Edinburgh
University Press, Edinburgh, 1979).
28. Cannon, R.H., Jr. and Schmitz, E., Precise control of flexible manipulators, in: M. Brady and
R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
29. Canny, J.F., Finding lines and edges in images, in: Proceedings Third National Conference on
Artificial Intelligence, Washington, DC, 1983.
30. Clocksin, W.S., Davey, P.G., Morgan, C.G. and Vidler, A.R., Progress in visual feedback for
arc-welding of thin sheet steel, in: A. Pugh (Ed.), Robot Vision (Springer, Berlin, 1982) 187-198.
31. Davis, L.S. and Rosenfeld, A., Cooperating processes for low-level vision: a survey, Artificial
Intelligence 17 (1981) 245-265.
32. De Kleer, J., Qualitative and quantitative knowledge in classical mechanics, MIT Artificial
Intelligence Laboratory, AI-TR-352, Cambridge, MA, 1975.
33. Dobrotin, B. and Lewis, R., A practical manipulator system, in: Proceedings Fifth Inter-
national Joint Conference on Artificial Intelligence, Tbilisi, USSR (1977) 749-757.
34. Donald, B.R., Hypothesising channels through free space in solving the findpath problem,
MIT AI Memo 736, Cambridge, MA, 1983.
35. Donald, B.R., Local and global techniques for motion planning, MIT Artificial Intelligence
Laboratory, Cambridge, MA, 1984.
36. Dubowsky, S., Model-reference adaptive control, in: M. Brady and L. Gerhardt (Eds.), NATO
Advanced Study Institute on Robotics and Artificial Intelligence (Springer, Berlin, 1984).
37. Dubowsky, S. and DesForges, D.T., The application of model-referenced adaptive control to
robotic manipulators, J. Dynamic Systems Meas. Control 101 (1979) 193-200.
38. Dufay, B. and Latombe, J.-C., An approach to automatic robot programming based on
inductive learning, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research I (MIT Press, Cambridge, MA, 1984).
39. Engelberger, J.F., Robotics in Practice (Kogan Page, London, 1980).
40. Everett, H.R., A computer controlled sentry robot: a homebuilt project report, Robotics Age,
1982.
41. Faugeras, O., Hébert, M., Ponce, J. and Boissonat, J., Towards a flexible vision system, in: A.
Pugh (Ed.), Robot Vision (IFS, 1982).
42. Faugeras, O. and Hébert, M., A 3-D recognition and positioning algorithm using geometric
matching between primitive surfaces, in: Proceedings Eighth International Joint Conference on
Artificial Intelligence, Karlsruhe, W. Germany (1983) 996-1002.
43. Faugeras, O., Hébert, M. and Ponce, J., Object representation, identification, and positioning
from range data, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
44. Featherstone, R., Position and velocity transformations between robot end effector coor-
dinates and joint angles, Internat. J. Robotics Res. 2 (2) (1983).
45. Forbus, K.D., Qualitative process theory, Artificial Intelligence 24 (1984) 85-168.
46. Franklin, J.W. and VanderBrug, G.J., Programming vision and robotics system with RAIL,
in: Proceedings Robots VI Conference, Detroit, MI, 1982.
47. Gaston, P.C. and Lozano-Pérez, T., Tactile recognition and localization using object models:
the case of polyhedra on a plane, MIT AI Memo 705, Cambridge, MA, 1983.
48. Giralt, G., Mobile robots, in: M. Brady and L. Gerhardt (Eds.), NATO Advanced Study
Institute on Robotics and Artificial Intelligence (Springer, Berlin, 1984).
49. Giralt, G., Sobek, R. and Chatila, R., A multi-level planning and navigation system for a
mobile robot: a first approach to HILARE, in: Proceedings Fifth International Joint Con-
ference on Artificial Intelligence, Tbilisi, USSR, 1977.
50. Giralt, G., Chatila, R. and Vaisset, M., An integrated navigation and motion control system
for autonomous multisensory mobile robots, in: M. Brady and R. Paul (Eds.), Proceedings
International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
51. Goto, T., Takeyasu, K. and Inoyama, T., Control algorithm for precision insert operation
robots, IEEE Trans. Systems Man Cybernet. 10 (1) (1980) 19-25.
52. Grimson, W.E.L., From Images to Surfaces: A Computational Study of the Human Early
Visual System (MIT Press, Cambridge, MA, 1981).
53. Grimson, W.E.L. and Lozano-Pérez, T., Local constraints in tactile recognition, Internat. J.
Robotics Res. 3 (3) (1984).
54. Hackwood, S. and Beni, Torque sensitive tactile array for robotics, Internat. J. Robotics Res. 2
(2) (1983).
55. Hanafusa, H. and Asada, H., Stable prehension by a robot hand with elastic fingers, in:
Proceedings Seventh Symposium on Industrial Robotics (1977) 361-368.
56. Hanafusa, H. and Asada, H., A robot hand with elastic fingers and its application to assembly
process, in: Proceedings IFAC Symposium on Information Control Problems in Manufacturing
Techniques (1979) 127-138.
57. Haralick, R.M., Watson, L.T. and Laffey, T.J., The topographic primal sketch, Internat. J.
Robotics Res. 2 (1) (1983) 50-72.
58. Harmon, L., Automated tactile sensing, Internat. J. Robotics Res. 1 (2) (1982) 3-33.
59. Harmon, L., Robotic traction for industrial assembly, Internat. J. Robotics Res. 3 (1) (1984).
60. Harmon, S.Y., Coordination between control and knowledge based systems for autonomous
vehicle guidance, in: Proceedings IEEE Trends and Applications Conference, Gaithersburg,
MD, 1983.
61. Harmon, S.Y., Information processing system architecture for an autonomous robot system,
in: Proceedings Conference on Artificial Intelligence, Oakland University, Rochester, MI, 1983.
62. Hayes-Roth, F., Waterman, D. and Lenat, D. (Eds.), Building Expert Systems (Addison-
Wesley, Reading, MA, 1983).
63. Hildreth, E., The Measurement of Visual Motion (MIT Press, Cambridge, MA, 1983).
64. Hillis, W.D., A high-resolution image touch sensor, Internat. J. Robotics Res. 1 (2) (1982)
33-44.
65. Hirose, S., Nose, M., Kikuchi, H. and Umetani, Y., Adaptive gait control of a quadruped
walking vehicle, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
66. Hobbs, J. and Moore, R., Formal Theories of the Common Sense World (Ablex, Norwood, NJ,
1984).
67. Hollerbach, J.M., Dynamics, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T. Lozano-
Pérez and M. Mason (Eds.), Robot Motion: Planning and Control (MIT Press, Cambridge,
MA, 1983).
68. Hollerbach, J.M. and Sahar, G., Wrist partitioned inverse kinematic accelerations and
manipulator dynamics, MIT AI Memo 717, Cambridge, MA, 1983.
69. Hopcroft, J.E., Schwartz, J.T. and Sharir, M., Efficient detection of intersections
among spheres, Internat. J. Robotics Res. 2 (4) (1983).
70. Horn, B.K.P., Sequins and quills-representations for surface topography, in: R. Bajcsy (Ed.),
Representation of 3-Dimensional Objects (Springer, Berlin, 1982).
71. Horn, B.K.P. and Schunck, B.G., Determining optical flow, Artificial Intelligence 17 (1982)
185-203.
72. Huston, R.L. and Kelly, F.A., The development of equations of motion of single-arm robots,
IEEE Trans. Systems Man Cybernet. 12 (1982) 259-266.
73. Ikeuchi, K., Determination of surface orientations of specular surfaces by using the
photometric stereo method, IEEE Trans. Pattern Anal. Mach. Intell. 3 (6) (1981)
661-669.
74. Ikeuchi, K., Recognition of 3D object using extended Gaussian image, in: Proceedings Seventh
International Joint Conference on Artificial Intelligence, Vancouver, BC, 1981.
75. Ikeuchi, K. and Horn, B.K.P., Numerical shape from shading and occluding boundaries,
Artificial Intelligence 17 (1981) 141-185.
76. Ikeuchi, K., Horn, B.K.P., Nagata, S. and Callahan, T., Picking up an object from a pile of
objects, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics
Research 1 (MIT Press, Cambridge, MA, 1984).
77. Jacobsen, S.C., Wood, J.E., Knutti, D.F. and Biggers, K.B., The Utah/MIT dextrous hand - work
in progress, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics
Research I (MIT Press, Cambridge, MA, 1984).
78. Kahn, M.E. and Roth, B., The near minimum-time control of open-loop articulated kinematic
chains, J. Dynamic Systems Meas. Control 93 (1971) 164-172.
79. Kanade, T. and Sommer, T., An optical proximity sensor for measuring surface position and
orientation for robot manipulation, in: M. Brady and R. Paul (Eds.), Proceedings of the
International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
80. Kane, T.R. and Levinson, D.A., The use of Kane's dynamical equations in robotics, Internat.
J. Robotics Res. 2 (3) (1983) 3-22.
81. Klein, C.A. and Wahawisan, J.J., Use of a multiprocessor for control of a robotic system,
Internat. J. Robotics Res. 1 (2) (1982) 45-59.
82. Klein, C.A., Olson, K.W. and Pugh, D.R., Use of force and attitude sensors for locomotion of
a legged vehicle over irregular terrain, Internat. J. Robotics Res. 2 (2) (1983) 3-17.
83. Lewis, R.A. and Bejczy, A.K., Planning considerations for a roving robot with arm, in:
Proceedings Third International Joint Conference on Artificial Intelligence, Stanford, CA (1973)
308-315.
84. Lewis, R.A. and Johnson, A.R., A scanning laser rangefinder for a roving robot with arm, in:
Proceedings Fifth International Joint Conference on Artificial Intelligence, Tbilisi, USSR (1977)
762-768.
85. Lieberman, L.I. and Wesley, M.A., AUTOPASS: an automatic programming system for
computer controlled mechanical assembly, IBM J. Res. Develop. 21 (4) (1977) 321-333.
86. Lowe, D.G. and Binford, T.O., Segmentation and aggregation: an approach to figure-ground
phenomena, in: L.S. Baumann (Ed.), Proceedings Image Understanding Workshop (Science
Applications, Tysons Corner, VA, 1982) 168-178.
87. Lozano-Pérez, T., The design of a mechanical assembly system, MIT, Artificial Intelligence
Laboratory, AI TR 397, Cambridge, MA, 1976.
88. Lozano-Pérez, T., Automatic planning of manipulator transfer movements, IEEE Trans.
Systems Man Cybernet. 11 (1981) 681-698.
89. Lozano-Pérez, T., Spatial planning: a configuration space approach, IEEE Trans. Comput. 32
(1983) 108-120.
90. Lozano-Pérez, T., Robot programming, Proc. IEEE 71 (1983) 821-841.
91. Lozano-Pérez, T., Spatial reasoning, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T.
Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press,
Cambridge, MA, 1983).
92. Lozano-Pérez, T., Mason, T. and Taylor, R.H., Automatic synthesis of fine-motion strategies
for robots, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics
Research 1 (MIT Press, Cambridge, MA, 1984).
93. Marr, D., Vision (Freeman, San Francisco, CA, 1982).
94. Marr, D. and Hildreth, E.C., Theory of edge detection, Proc. Roy. Soc. Lond. B 207 (1980)
187-217.
95. Marr, D. and Poggio, T., A theory of human stereo vision, Proc. Roy. Soc. Lond. B 204 (1979)
301-328.
96. Mason, T.M., Compliance and force control for computer controlled manipulators, IEEE Trans.
Systems Man Cybernet. 11 (1981) 418-432; reprinted in: M. Brady, J.M. Hollerbach, T.J.
Johnson, T. Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT
Press, Cambridge, MA, 1983).
97. Mason, T.M., Compliance, in: M. Brady, J.M. Hollerbach, T.J. Johnson, T. Lozano-Pérez and
M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press, Cambridge, MA, 1983).
98. Mason, T.M., Manipulator grasping and pushing operations, MIT, 1982.
99. Michie, D., Expert Systems in the Microelectronic Age (Ellis Horwood, Chichester, 1979).
100. Moravec, H.P., Robot Rover Visual Navigation (UMI Research Press, Ann Arbor, MI,
1981).
101. Moravec, H.P., The Stanford cart and the CMU rover, IEEE Trans. Indust. Electronics
(1983).
102. Moravec, H.P., Locomotion, vision, and intelligence, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
103. Nakagawa, Y. and Ninomiya, T., Structured light method for inspection of solder joints and
assembly robot vision system, in: M. Brady and R. Paul (Eds.), Proceedings International
Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
104. Nilsson, N.J., A mobile automaton: an application of Artificial Intelligence techniques, in:
Proceedings First International Joint Conference on Artificial Intelligence, Washington, DC,
1969.
105. Nishihara, H.K., and Poggio, T., Stereo vision for robotics, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
106. Ohwovoriole, M.S. and Roth, B., An extension of screw theory, Trans. ASME J. Mech.
Design 103 (1981) 725-735.
107. Okada, T., Computer control of multi-jointed finger system, in: Proceedings Sixth Inter-
national Joint Conference on Artificial Intelligence, Tokyo, Japan, 1979.
108. Okada, T., Development of an optical distance sensor for robots, Internat. J. Robotics Res. 1
(4) (1982) 3-14.
109. Orin, D.E. and Schrader, W.W., Efficient Jacobian determination for robot manipulators, in:
M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
110. Ozguner, F., Tsai, S.J. and McGhee, R.B., Rough terrain locomotion by a hexapod robot
using a binocular ranging system, in: M. Brady and R. Paul (Eds.), Proceedings International
Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
111. Paul, R.P., Robot Manipulators: Mathematics, Programming, and Control (MIT Press, Cam-
bridge, MA, 1981).
112. Paul, R.C. and Shimano, B.E., Compliance and control, in: Proceedings Joint Automatic
Control Conference (1976) 694-699.
113. Paul, R.C., Stevenson, C.N. and Renaud, M., A systematic approach for obtaining the
kinematics of recursive manipulators based on homogeneous transformations, in: M. Brady
and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
114. Pieper, D.L., The kinematics of manipulators under computer control, Ph.D. Thesis,
Department of Computer Science, Stanford University, Stanford, CA, 1968.
115. Pieper, D.L. and Roth, B., The kinematics of manipulators under computer control, in:
Proceedings Second International Conference on Theory of Machines and Mechanisms, War-
saw, Poland, 1969.
116. Popplestone, R.J., Ambler, A.P. and Bellos, I.M., An interpreter for a language for
describing assemblies, Artificial Intelligence 14 (1980) 79-107.
117. Porter, G. and Mundy, J., A non-contact profile sensor system for visual inspections, in:
Proceedings IEEE Workshop on Industrial Applications of Machine Vision, 1982.
118. Porter, G. and Mundy, J., A model-driven visual inspection module, in: M. Brady and R. Paul
(Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge,
MA, 1984).
119. Raibert, M.H., Special issue on walking machines, Internat. J. Robotics Res. 3 (2) (1984).
120. Raibert, M.H. and Craig, J.J., A hybrid force and position controller, in: M. Brady, J.M.
Hollerbach, T.J. Johnson, T. Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning
and Control (MIT Press, Cambridge, MA, 1983).
121. Raibert, M.H. and Tanner, J.E., Design and implementation of a VLSI tactile sensing
computer, Internat. J. Robotics Res. 1 (3) (1982) 3-18.
122. Raibert, M.H. and Sutherland, I.E., Machines that walk, Scientific American 248 (1983)
44-53.
123. Raibert, M.H., Brown, H.B., Jr. and Murthy, S.S., 3D balance using 2D algorithms?, in: M.
Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT
Press, Cambridge, MA, 1984).
124. Renaud, M., An efficient iterative analytical procedure for obtaining a robot manipulator
dynamic model, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
125. Requicha, A.A.G., Representation of rigid solids: theory, methods and systems, Comput.
Surveys 12 (4) (1980) 437-464.
126. Rich, C.R. and Waters, R., Abstraction, inspection, and debugging in programming, MIT AI
Memo 634, Cambridge, MA, 1981.
127. Roth, B., Screws, motors, and wrenches that cannot be bought in a hardware store, in: M.
Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT
Press, Cambridge, MA, 1984).
128. Sacerdoti, E., A structure for plans and behavior, SRI Artificial Intelligence Center TR-109,
Menlo Park, CA, 1975.
129. Salisbury, J.K., Kinematic and force analysis of articulated hands, Ph.D. Thesis, Department
of Mechanical Engineering, Stanford University, Stanford, CA, 1982.
130. Salisbury, J.K., Active stiffness control of a manipulator in Cartesian coordinates, in:
Proceedings IEEE Conference on Decision and Control, Albuquerque, NM, 1980.
131. Salisbury, J.K., Interpretation of contact geometries from force measurements, in: M. Brady
and R. Paul (Eds.), Proceedings International Symposium on Robotics Research I (MIT Press,
Cambridge, MA, 1984).
132. Salisbury, J.K. and Craig, J.J., Articulated hands: force control and kinematic issues, Internat.
J. Robotics Res. 1 (1) (1982) 4-17.
133. Schunck, B.G., Motion segmentation and estimation, MIT Artificial Intelligence Laboratory,
1983.
134. Schwartz, J.T. and Sharir, M., The piano movers problem III, Internat. J. Robotics Res. 2 (3)
(1983).
135. Shimano, B.E., Geschke, C.C. and Spaulding, C.H., VAL II: a robot programming language
and control system, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on
Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
136. Shirai, Y., Koshikawa, K., Oshima, M. and Ikeuchi, K., An approach to object recognition
using 3-D solid model, in: M. Brady and R. Paul (Eds.), Proceedings International Symposium
on Robotics Research 1 (MIT Press, Cambridge, MA, 1984).
137. Taylor, R.H., The synthesis of manipulator control programs from task-level specifications,
AI Memo 282, Stanford University, Stanford, CA, 1976.
138. Taylor, R.H., Summers, P.D. and Meyer, J.M., AML: a manufacturing language, Internat. J.
Robotics Res. 1 (3) (1982) 19-41.
139. Terzopoulos, D., Multilevel reconstruction of visual surfaces: Variational principles and finite
element representations, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and
Analysis (Springer, New York, 1984).
140. Terzopoulos, D., Multilevel computational processes for visual surface reconstruction, Com-
put. Vision Graphics Image Process. 24 (1983) 52-96.
141. Trevelyan, J.P., Kovesi, P.D. and Ong, M.C.H., Motion control for a sheep shearing robot, in:
M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
142. Tsuji, S. and Asada, M., Understanding of three-dimensional motion in time-varying imagery,
in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
143. VAL, User's guide: a robot programming and control system, CONDEC Unimation Robo-
tics, 1980.
144. Villers, P., Present industrial use of vision sensors for robot guidance, in: A. Pugh (Ed.) Robot
Vision (IFS, 1982).
145. Vilnrotter, F., Nevatia, R. and Price, K.E., Structural analysis of natural textures, in: L.S.
Baumann (Ed.), Proceedings Image Understanding Workshop (Science Applications, Tysons
Corner, VA, 1981) 61-68.
146. Wesley, M.A., et al., A geometric modeling system for automated mechanical assembly, IBM
J. Res. Develop. 24 (1) (1980) 64-74.
147. Whitney, D.E., The mathematics of compliance, in: M. Brady, J.M. Hollerbach, T.J. Johnson,
T. Lozano-Pérez and M.T. Mason (Eds.), Robot Motion: Planning and Control (MIT Press,
Cambridge, MA, 1983).
148. Winston, P.H., Learning and reasoning by analogy, Comm. ACM 23 (1980).
149. Winston, P.H., Artificial Intelligence (Addison-Wesley, Reading, MA, 2nd ed., 1984).
150. Winston, P.H., Binford, T.O., Katz, B. and Lowry, M., Learning physical descriptions from
functional descriptions, examples, and precedents, in: M. Brady and R. Paul (Eds.), Proceed-
ings of the International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984).
151. Witkin, A.P., Recovering surface shape and orientation from texture, Artificial Intelligence 17
(1981) 17-47.
152. Woodham, R.J., Analysing images of curved surfaces, Artificial Intelligence 17 (1981) 117-140.
153. Yoshikawa, T., Analysis and control of robot manipulators with redundancy, in: M. Brady
and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press,
Cambridge, MA, 1984).
154. Young, K.K.D., Controller design for a manipulator using theory of variable structure
systems, IEEE Trans. Systems Man Cybernet. 8 (1978) 101-109; reprinted in: [157].
155. Zucker, S.W., Hummel, R.A. and Rosenfeld, A., An application of relaxation labelling to line
and curve enhancement, IEEE Trans. Comput. 26 (1977) 394-403, 922-929.
ADDITIONAL REFERENCES
156. Bolles, R.C., Horaud, P. and Hannah, M.J., 3DPO: a three-dimensional parts orientation system,
in: M. Brady and R. Paul (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984) 413-424.
157. Brady, M., Hollerbach, J.M., Johnson, T.J., Lozano-Pérez, T. and Mason, M.T. (Eds.), Robot
Motion: Planning and Control (MIT Press, Cambridge, MA, 1983).
158. Brady, M. and Paul, R. (Eds.), Proceedings International Symposium on Robotics Research 1
(MIT Press, Cambridge, MA, 1984).
159. Brooks, R.A. and Binford, T.O., Representing and reasoning about partially specified scenes, in:
L.S. Baumann (Ed.), Proceedings Image Understanding Workshop (Science Applications, Tysons
Corner, VA, 1980) 95-103.
160. Freund, E., Fast non-linear control with arbitrary pole placement for industrial robots and
manipulators, Internat. J. Robotics Res. 1 (1) (1982) 65-78.
161. Freund, E., Hierarchical non-linear control for robots, in: M. Brady and R. Paul (Eds.),
Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA, 1984)
817-840.
162. Miura, H. and Shimoyama, I., Dynamical walk of biped locomotion, in: M. Brady and R. Paul
(Eds.), Proceedings International Symposium on Robotics Research 1 (MIT Press, Cambridge, MA,
1984) 303-325.
163. Paul, R., Modelling, trajectory calculation, and servoing a computer controlled arm, AIM 177,
Stanford AI Laboratory, Stanford, CA, 1972.
164. Fisher, W.D., A kinematic control of redundant manipulation, Ph.D. Thesis, Department of
Electrical Engineering, Purdue University, Lafayette, IN, 1984.
165. Brady, M., Agre, P., Braunegg, D.J. and Connell, J.H., The mechanic's mate, in: T. O'Shea (Ed.),
ECAI 84: Advances in Artificial Intelligence (Elsevier Science Publishers B.V. (North-Holland),
Amsterdam, 1984) 681-696.
166. Connell, J.H. and Brady, M., Learning shape descriptions, MIT AI Memo 824, Artificial
Intelligence Laboratory, Cambridge, MA, 1985.