
Experiences with Minirobot Platforms

in Robotics and AI Laboratory


Enric Cervera, Pedro J. Sanz

Robotics and AI Lab,


Jaume-I University, Campus Riu Sec, 12071 Castelló (SPAIN)
E-mail: {ecervera,sanzp}@icc.uji.es

Abstract. This paper presents some laboratory experiences carried out in our Robotics and AI courses. Using a configurable autonomous mobile robot platform, a wide variety of problems can be studied: from robot kinematics to learning architectures. A brief description of the robot platform is provided, together with some theoretical and practical aspects of each laboratory activity.

1 INTRODUCTION

During the last two years, we have carried out several laboratory experiences involving autonomous minirobots in our Robotics and AI courses, ranging from kinematic studies to learning strategies, using a single robot platform. This paper presents the off-the-shelf components of the robot, and briefly describes the laboratory experiences.

The rest of the paper is organized as follows: Section 2 describes the robotic platform (mechanics, hardware and software) used in the experiments. In Section 3, different learning experiments are presented. Finally, Section 4 presents some conclusions and future plans.

2 EXPERIMENTAL PLATFORM

The autonomous mobile minirobot platform is presented in this section. The overall robot is rather inexpensive, highly configurable, and it has proven to be very robust. Moreover, it provides enough computing power for some rather complex AI algorithms.

Figure 1 depicts the platform as used in our fire fighting robot contest. The robot is controlled by a 68HC11-based board, which also reads the signals from its IR sensors. The fire extinguisher is merely a standard CPU cooler.

Figure 1: Minirobot platform.

2.1 Electromechanics

The robot structure is made completely of Lego parts. It has diametrically opposed drive wheels and two free-wheeling castors. Each wheel is driven by its own motor in a differential drive configuration [3], thus allowing the robot to go forward, backward, or turn in place.

When possible, Lego gear motors (Fig. 2) are used in our designs. These are 9 V DC motors with internal gear reduction, turning at 350 rpm with no load. They are more efficient than standard DC motors with external reduction.

Figure 2: Lego gear motor.

Additionally, standard servos (Fig. 3) are used in special mechanisms like rotating bases for sensors, in order to scan a broad area looking for obstacles or a flame.


Besides the Lego parts, only miscellaneous electronic components were used, e.g. bumpers, infrared proximity sensors, infrared rangers, photocells, etc. While bumpers and photocells are common and inexpensive sensors, infrared proximity and range sensors deserve a little more comment.

Figure 3: Servo kit.

Infrared reflective sensors (Fig. 4) consist of an infrared LED and a phototransistor in a compact package. Both elements are tuned to the same frequency, thus filtering out ambient light. The output of the sensor is related to the amount of radiation reflected by a surface, which depends on the proximity of the surface and on its reflectiveness.

Figure 4: Infrared reflectance sensors: on the left QRB1114, on the right QRD1114; at bottom, sensor schematics.

We have used these sensors both for detecting obstacles without contact, and for detecting white lines on a black ground. Depending on the environment, there are sensors focused for sensing specular reflection (QRB1114) or unfocused for sensing diffuse surfaces (QRD1114).

Infrared range sensors (Fig. 5) work in a different manner: they use triangulation and a small linear CCD array to compute the distance and/or presence of objects in the field of view. The basic idea is this: a pulse of IR light is emitted by the emitter. This light travels out into the field of view and either hits an object or just keeps on going. In the case of no object, the light is never reflected and the reading shows no object. If the light reflects off an object, it returns to the detector and creates a triangle (Fig. 6) between the point of reflection, the emitter, and the detector.

Figure 5: Sharp GP2D02 infrared range sensor.

Figure 6: Different angles with different distances.

The angles in this triangle vary with the distance to the object. The receiver portion of these detectors is actually a precision lens that transmits the reflected light onto different portions of the enclosed linear CCD array, depending on the angle of the triangle described above. The CCD array can then determine the angle at which the reflected light came back, and therefore calculate the distance to the object. This method of ranging is almost immune to interference from ambient light, and is remarkably indifferent to the color of the object being detected: detecting a black wall in full sunlight is possible. We have used these sensors to detect the opponent robot in a sumo combat, and to detect walls and doorways in the fire fighting environment.

Finally, the Lego rotation sensor (Fig. 7) has been used in experiences where a velocity measurement was needed. The rotation sensor tracks how much an axle inserted in the sensor turns. It measures in increments of 16 counts per rotation, meaning that as the axle completes one rotation the sensor counts to 16; if the axle turns 90 degrees, the sensor counts to 4. The sensor can track counter-clockwise rotation by counting backwards (down to -32,768).


Figure 7: Lego rotation sensor.

2.2 Hardware

Autonomous robot control is achieved by a Handy Board [4]: a 68HC11-based controller board designed for experimental mobile robotics work. MIT has licensed the Handy Board at no charge for educational, research, and industrial use (a typical kit is depicted in Fig. 8).

Besides the Motorola MC68HC11 processor, the Handy Board includes 32 K of battery-backed static RAM, four outputs for DC motors, a connector system that allows active sensors to be individually plugged into the board, an LCD screen, and an integrated, rechargeable battery pack.

Figure 8 depicts a complete Handy Board kit, consisting of the main board, the serial interface / charger board, the AC adapter, a serial cable to connect to the PC, and a telephone cable to connect the board to the interface / charger.

Figure 8: Handy Board kit.

The board can be upgraded with an Expansion Board (Fig. 9) which plugs on top of it, providing additional features: additional analog sensor inputs, active Lego sensor inputs, digital outputs, servo motor control signals, and a connector mount for the Polaroid 6500 ultrasonic ranging system.

Figure 9: Handy Board expansion kit.

2.3 Software

A wide range of options is available for developing software on the Handy Board, including free assembly language tools provided by Motorola, and commercial C compilers.

Additionally, the Handy Board is compatible with Interactive C, the programming environment created for the MIT Lego Robot Design project. Interactive C (IC) is a multi-tasking, C-language-based compiler that includes a user command line for dynamic expression compilation and evaluation.

Though not yet used in our experiences, the Handy Board can also be programmed in Java using simpleRTJ, a clean-room implementation of the Java language. It differs from other Java VM implementations not only in its small memory footprint (18-23 KB on most microcontrollers) but also in the way class loading is performed. On most embedded systems, Java applications are not frequently updated or reloaded from the host computer, but may require frequent restarts (power cycles). To minimize application start-up times and to speed up bytecode execution, simpleRTJ executes pre-linked Java applications: the application class files are linked on the host computer, and the generated application file is then uploaded to the target device for direct execution by the simpleRTJ.

3 LABORATORY EXPERIENCES

The presented experiences have been carried out in the MSc Program in Computer Science at Jaume-I University. First, two experiences in the Robotics course are presented: kinematics and sensor-based control. The next three experiences belong to the AI course: reinforcement learning, subsumption, and agents.
3.1 Mobile Robot Kinematics

A rotation sensor was attached to each wheel. The goal was to construct the direct and inverse kinematics of a differential-drive mobile robot (see Fig. 10). The model itself is rather simple [5], but the low precision of the sensors together with the poor control of the motors makes the problem rather difficult.

Figure 10: Mobile robot kinematics.

The proposed direct kinematic model is:

$$
\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix}
=
\begin{pmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \pi r / n & \pi r / n \\ -2\pi r / (nd) & 2\pi r / (nd) \end{pmatrix}
\begin{pmatrix} \dot{q}_1 \\ \dot{q}_2 \end{pmatrix}
\qquad (1)
$$

where
- q1 and q2 are the angles of each wheel,
- r is the radius of the wheels,
- n is the sensor count for a full turn,
- and d is the distance between both wheels.

Nonetheless, several interesting results were achieved; e.g., trajectory control made it possible to program the robot to go straight. As depicted in Fig. 11, a simple proportional-integral control loop can be implemented in software, using the rotation sensor attached to each wheel, to control the speed of the robot and to synchronize its two wheels so that the robot will travel in a straight line [3].

Figure 11: Control loop.

3.2 Sensor-driven Control

The goal of this experience was to program the robot for a sumo tournament (Fig. 12) [1]. The robot had to be controlled mainly from its sensor inputs, so a finite state automaton was designed, where transitions were triggered by sensor conditions. Robots had line sensors for detecting the ring boundary, bumpers, and an IR range finder for detection of the opponent.

Figure 12: Minirobot sumo competition.

3.3 Reinforcement Learning

Reinforcement learning [7] is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with the environment.

Robots had to learn a line-following behavior. A rather simple task was chosen in order to see the learning progress in a small amount of time. Instead of hard-coding line-following algorithms, students programmed a reinforcement learning algorithm which actually learnt to control the robot from its own experience, following different lines marked on the floor.

3.4 Subsumption Architecture

Brooks' subsumption architecture [2] consists of behaviors, i.e. layers of control systems that all run in parallel whenever appropriate sensors fire. Parallel behaviors can be easily implemented in IC on the Handy Board, using its parallel processing primitives.

Different sensors (bumpers, proximity, light) were mounted on the robot, and students had to design and implement a layer of control systems. The resulting robot had an emergent behavior: it wandered around the laboratory, avoiding obstacles and either following or escaping from the light.

3.5 Agent-based Architecture

According to the intelligent agent view [6], the problem of AI is to describe and build agents that receive percepts from the environment and perform actions. Robotics is not defined independently: this approach emphasizes the task environment characteristics in determining the appropriate agent design.

The goal task was inspired by the Trinity College Fire Fighting Contest: robots had to find and extinguish a fire in an office-like environment (Figs. 13 and 14) [1]. Robot perception consisted of two IR reflective sensors for detecting white lines at doorways, three IR distance sensors for detecting walls (front, left and right), and a light sensor mounted on a servo for detecting the flame.

Driving the robot was a hard challenge in itself, since the differential-drive mechanism was not accurate enough for driving straight, nor did it have any dead-reckoning mechanism. Instead, the robot tried to keep a constant distance to the side walls in order to travel along the corridors.

Though this was our first edition, several robots succeeded in finding and extinguishing the flame, and one team even programmed a more difficult task: the robot started at an unknown position in the environment, instead of the fixed one. It was able to recognise its location, explore the environment, extinguish the flame, and return to its original location.

4 SUMMARY

This paper has presented the autonomous mobile robot platform used by our students in robotics and AI, and some laboratory experiences ranging from robot kinematics to learning and agent architectures.
The most important issue is the high degree of motivation
of the students in all of the experiences. We believe that,
besides the challenging goals, an important factor was the
easy-to-use hardware and software platform: assuming
basic skills in C, in less than an hour students were able
to write and execute their first robot programs.
In the future, we plan to carry out new experiences allowing robot communication, in order to develop multi-robot systems and collective behavior.

Figure 13: Fire fighting robot in action.
Acknowledgements.
Support from Jaume-I University (Educative Support Department) and BP Oil Refineria de Castelló SA is gratefully acknowledged.

References
[1] Minirobot Competitions at UJI. http://robot.act.uji.es/compet/.
[2] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2:14–23, 1986.
[3] Joseph L. Jones, Anita M. Flynn, and Bruce A. Seiger. Mobile Robots: Inspiration to Implementation. A K Peters, Ltd., 2nd edition, 1999.
[4] Fred G. Martin. Robotic Explorations: A Hands-on Introduction to Engineering. Prentice Hall, 2001.
[5] Philip J. McKerrow. Introduction to Robotics. Addison-Wesley, 1993.
[6] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1994.
[7] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.

Figure 14: Fire fighting competition.
