Expanding Frontiers of Humanoid Robotics
Mark L. Swinson, DARPA
David J. Bruemmer, Strategic Analysis
Mobile robots pose a unique set of challenges to artificial intelligence researchers. Such
challenges include autonomy, uncertainty (in both sensing and control), and reliability,
all constrained by the discipline that the real world imposes. Planning, sensing, and
acting must occur in concert and in context. That is, information processing must satisfy not only
the constraints of logical correctness but also some assortment of crosscutting, physical
constraints. Particularly interesting among these robots are humanoids, which assume an
anthropomorphic (human-like) form.
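To make the point concrete, consider a minimal sketch of this kind of interleaving. Everything here is illustrative rather than drawn from any particular system: the names (Robot1D, drive_to) and the noise model are our own assumptions. Because both sensing and actuation are uncertain, the robot cannot execute a precomputed script; it must replan from a fresh percept on every cycle.

import random

class Robot1D:
    """A hypothetical one-dimensional robot with noisy sensing and control."""

    def __init__(self, position=0.0):
        self.position = position  # true state, hidden from the controller

    def sense(self):
        # Sensing is uncertain: each reading carries additive noise.
        return self.position + random.gauss(0.0, 0.1)

    def act(self, velocity):
        # Control is uncertain too: actuation adds its own noise.
        self.position += velocity + random.gauss(0.0, 0.02)

def drive_to(robot, target, tolerance=0.2, max_steps=200):
    # Planning, sensing, and acting occur in concert: every cycle chooses
    # the next action from the latest percept rather than from a fixed plan.
    for _ in range(max_steps):
        estimate = robot.sense()
        if abs(target - estimate) < tolerance:
            return True
        velocity = max(-0.5, min(0.5, 0.3 * (target - estimate)))
        robot.act(velocity)
    return False

if __name__ == "__main__":
    print("reached target:", drive_to(Robot1D(), target=5.0))

Dropping the per-cycle sensing from this loop, and replaying a precomputed velocity schedule instead, reliably misses the target once the noise terms are nonzero; that is the "physical constraint" the logical program alone cannot satisfy.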
A growing number of roboticists believe that the human form provides an excellent platform on
which to enable interactive, real-world machine learning. Robots that can learn from natural,
multimodal interactions with the environment might be able to accomplish tasks by means their
designers did not explicitly implement and to adapt to unanticipated circumstances in an
unstructured environment. Ultimately, humanoids might prove to be the ideal robot design to
interact with people. After all, humans tend to naturally interact with other human-like entities.
Eventually, humans and humanoids might be able to cooperate in ways now imaginable only in
science fiction. Humanoids might also provide a revolutionary way of studying cognitive
science. As we review successes and failures in the field, we provide a contextual backdrop for
understanding where humanoid research began, the dilemmas with which it currently struggles,
and where it might take us in the future. We also discuss how these technological developments
have affected, and will continue to affect, the ways in which we understand ourselves.
Problems with hard-coded, top-down control
In their zeal to make robots think like humans,
many researchers focused on high-level cognition and provided no mechanism for building
control from the bottom up. Although intended to model humans, most systems did not, like
humans, acquire their knowledge through interaction with the real world. When in the real world,
these robots possessed little mastery over it. Even in the fortunate event that sensors could
accurately connect internal archetypes to real-world objects, robots could only extend the
knowledge thrust on them in rudimentary, systematic ways. Such robots carried out preconceived
actions with no ability to react to unforeseen features of the environment or task.
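The contrast can be sketched in a few lines of hypothetical code; the names and behaviors below are illustrative assumptions, not drawn from any system discussed here. A top-down controller runs a preconceived script, blind to the world while it executes; a bottom-up, behavior-based controller, in the spirit of later reactive architectures, arbitrates among simple rules anew on every cycle, responding to whatever the sensors actually report.

def top_down_controller(script):
    # The entire action sequence is fixed before execution begins, so an
    # unforeseen feature of the environment cannot change what happens next.
    for action in script:
        yield action

def bottom_up_controller(sense):
    # Simple behaviors are arbitrated each cycle, highest priority first,
    # so the robot reacts to the world as it currently finds it.
    while True:
        percept = sense()
        if percept.get("obstacle"):
            yield "turn_away"       # survival layer: avoid collisions
        elif percept.get("goal_visible"):
            yield "approach_goal"   # opportunistic layer: exploit openings
        else:
            yield "wander"          # default layer: explore

In the first case, knowledge is thrust on the robot in advance; in the second, competence emerges from the interaction of simple rules with a changing world.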
Once a cause of great optimism, attempts to create human-like intelligence became a favored
target for philosophical criticism. In 1979, Hubert Dreyfus argued that computer simulation
assumes incorrectly that explicit rules can govern intellectual processes.2 An ability to break
rules, Dreyfus thought, better characterizes human intelligence. Rules allow only elementary
capabilities and are routinely broken once we achieve true competence. He viewed this
competence not merely as a new, more sophisticated set of rules but as the ability to serve
principles that have not yet become explicit and might never do so. Another argument was that
computer programs are inherently goal seeking and thus require the designer to know
beforehand exactly what behavior is desired (as in a chess match as opposed to a work of art).3 In
contrast, humans are value seeking; that is, we do not always begin with an end goal in mind but
seek to bring implicit values to fruition, on the fly, through engagement in a creative or analytical
process.
Although some of their ultimate conclusions were premature, these arguments aptly called
attention to the fact that robots driven by static programs, explicit rules, and knowledge bases
remained estranged from the real world. Such robots were information-processing machines,
applicable only to highly structured domains such as assembly lines. At best, those who claimed
to be creating human-like intelligence were labeled positivists. At worst, they were considered
delusional. Most roboticists forsook the goal of human-like cognition entirely and focused on
creating functional, high-utility agents, using the lower animal world as a model (if they even
needed models).
References
1. D. Michie, "Machine Learning in the Next Five Years," Proc. Third European Working Session on Learning, Pitman, London, 1988, pp. 107-122.
2. H.L. Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence, Harper Colophon Books, New York, 1979.
3. G.B. Kleindorfer and J.E. Martin, "The Iron Cage, Single Vision, and Newton's Sleep," Research in Philosophy and Technology, Vol. 3, 1983, pp. 127-142.
4. D.L. Schacter, C.Y.P. Chiu, and K.N. Ochsner, "Implicit Memory: A Selective Review," Ann. Rev. of Neuroscience, Vol. 16, 1993, pp. 159-182.
5. P.E. Agre and D. Chapman, "What Are Plans For?" Robotics and Autonomous Systems, Vol. 6, 1990, pp. 17-34.
6. S.F. Giszter, F.A. Mussa-Ivaldi, and E. Bizzi, "Convergent Force Fields Organized in the Frog's Spinal Cord," J. Neuroscience, Vol. 13, No. 2, 1993, pp. 467-491.