
Autonomous Mobile Robot Navigation

Using a Dual Artificial Neural Network

Wahidin Wahab
Department of Electrical Engineering
Faculty of Technology, University of Indonesia
Depok, Indonesia
wahidin.wahab@ui.ac.id

Abstract—This paper deals with the intelligent control of an autonomous mobile robot that must move safely through an environment to find a target. We describe our approach to solving the motion-planning problem in mobile robot control using an artificial neural network technique. The algorithm constructs a collision-free path for the moving robot among obstacles based on two neural networks. The first neural network determines the free space needed to avoid obstacles; the second navigates the robot to the target. Simulation examples are presented at the end of the paper.

Keywords—mobile robot; autonomous; navigation; dual ANN; simulation; algorithm; obstacle avoidance

I. INTRODUCTION
Autonomous mobile robots have been used in many useful applications that help with human tasks, such as bomb detection, nuclear waste carrying and disposal, and building patrol. A robot is designed and preprogrammed to do a specific task automatically, and so it needs an intelligent algorithm to control its movement. An artificial neural network, which has learning capability, is required for dealing with different and/or dynamic environments.

In this paper, the design of a neural network controller is discussed. The neural network is used to determine the robot heading angle based on a set of sensor measurements. The controllers have to move the robot through an unknown environment to find a target without hitting any obstacle.

Figure 1. Obstacle detection sensors

II. THE MOBILE ROBOT STRUCTURE

To aid navigation, the robot is equipped with two sets of sensors; these may be of the ultrasonic ranging type and/or the infra-red type, depending on sensor availability. The first set consists of nine ultrasonic sensors, set to a three-meter maximum detection range and used to measure the distance from the robot to any obstacles around it. The second set consists of only two infra-red sensors, used to detect the target position based on the differential measurement of the two target sensors (r1-r2). These sensors can identify the target within a range of 6 meters from the robot, and each covers a 120º angle in front of the robot. The attachment of the two sets of sensors is illustrated in Fig. 1 and Fig. 2.

Figure 2. Target detection sensors

III. ROBOT CONTROLLER

The assumptions used in the controller design are as follows. The robot has nine obstacle range sensors with a three-meter range, as described, and two target detection sensors with a six-meter range and a coverage angle of 120º. The robot cannot move backward and has a maximum velocity of 0.1 m/s. Anisotropic variables such as gravitation, wind friction, slip, etc. are ignored. The robot heading angle is limited to the range -45º to 45º. Off-line training is used for the ANN controller. Fig. 3 shows the robot control diagram.
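The switching between the two controllers is specified in the paper only by the flow chart of Fig. 4; the sketch below is an assumed reading of that rule, and the 1.0 m threshold and function names are illustrative, not taken from the paper.

```python
# Illustrative sketch of the dual-controller selection rule. The threshold
# and names are assumptions; the paper defines the rule only in Fig. 4.
OBSTACLE_RANGE_M = 3.0   # maximum range of the nine obstacle sensors

def select_controller(obstacle_sensors, obstacle_threshold_m=1.0):
    """Pick which ANN drives the heading angle at this time step.

    obstacle_sensors: nine range readings S1..S9 in meters (3.0 = nothing seen).
    obstacle_threshold_m: assumed distance below which avoidance takes over.
    """
    if min(obstacle_sensors) < obstacle_threshold_m:
        return "obstacle_avoidance"   # first ANN: steer into free space
    return "target_direction"         # second ANN: steer toward the target

# Example: an obstacle 0.8 m away on the centre sensor triggers avoidance.
readings = [3.0, 3.0, 3.0, 3.0, 0.8, 3.0, 3.0, 3.0, 3.0]
print(select_controller(readings))
```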

978-1-4244-4547-9/09/$26.00 ©2009 IEEE TENCON 2009


Robot heading angle control consists of two neural networks. The first controller is used to avoid obstacles; the second is used to steer the robot in the direction of the target. In each controller, the robot velocity is controlled in proportion to the sensor measurements.

During navigation the two controllers are used independently, depending on the condition detected; the algorithm that determines which controller is active at a particular time is shown in Fig. 4.

Figure 3. A dual ANN Control Scheme

Figure 4. Controller flow chart

The robot velocity in the obstacle avoidance controller is determined from sensors S3 to S7. If an obstacle is detected very close to the robot, the robot moves slower; if the obstacle is detected far away, it moves faster. When avoiding obstacles, the robot velocity is given by

v1 = (vmaks / (10 × Smaks)) × (S3 + S4 + 6 S5 + S6 + S7)    (1)

where v is the robot velocity (m/s), vmaks is the maximum robot velocity, Smaks is the sensor maximum range, and Si, i = 3 to 7, are the sensor measurements.

The robot velocity in the target direction controller depends on the target position sensor measurements. It is proportional to the target identification sensor measurements and is expressed as

v2 = vmaks × (r1 + r2) / (2 × rmaks)    (2)

where rmaks is the sensor maximum range and r1, r2 are the target position sensor measurements.

IV. NEURAL NETWORK CONTROLLER DESIGN

The robot uses two neural network controllers. The first controller is used for obstacle avoidance, while the second is used to navigate the robot directly to the target position.

A. Obstacle avoidance controller

The basic philosophy of the algorithm is adopted from the natural motion of a human being in an environment: when moving between obstacles, one decides, based on the view of one's eyes, to make the next step to the goal in the free space [4]. Analogously, the robot moves safely through the environment based on the data from its range sensors.

The neural network used in this work is based on that principle. It is trained using sensor data as input and heading angle as output. During training, sets of known input-output data are used to find weights that capture the correlation between those inputs and outputs.

The obstacle avoidance controller is a multilayer feed-forward neural network consisting of one input layer, one hidden layer, and one output layer. The network has nine neurons in the input layer, twenty neurons in the hidden layer, and one neuron in the output layer, as shown in Fig. 5. The activation function in the hidden layer is a hyperbolic tangent sigmoid, while the output layer uses a symmetric saturating linear function.

Figure 5. Structure of ANN of the Obstacle avoidance controller
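As a numeric check on Eqs. (1) and (2), the two velocity rules can be sketched as follows; vmaks = 0.1 m/s, Smaks = 3 m, and rmaks = 6 m follow the assumptions of Section III, and the function names are illustrative.

```python
V_MAKS = 0.1   # maximum robot velocity (m/s), from Section III
S_MAKS = 3.0   # obstacle sensor maximum range (m)
R_MAKS = 6.0   # target sensor maximum range (m)

def v_obstacle(s3, s4, s5, s6, s7):
    """Eq. (1): weighted front-sensor sum; the centre sensor S5 counts 6x."""
    return V_MAKS / (10.0 * S_MAKS) * (s3 + s4 + 6.0 * s5 + s6 + s7)

def v_target(r1, r2):
    """Eq. (2): proportional to the mean of the two target sensor readings."""
    return V_MAKS * (r1 + r2) / (2.0 * R_MAKS)

# With all five front sensors clear (3 m each), Eq. (1) gives full speed:
print(v_obstacle(3, 3, 3, 3, 3))   # 0.1 * (3+3+18+3+3)/30 = 0.1
# With the target at maximum range on both sensors, Eq. (2) gives full speed:
print(v_target(6, 6))              # 0.1 * 12/12 = 0.1
```

The 6x weight on S5 makes the forward-looking sensor dominate the speed decision, so the robot slows down mainly for obstacles directly ahead.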

The back-propagation algorithm is used for neural network training. It starts with the initialization of the weights and the choice of the maximum number of epochs, the target error, and the learning rate (α). Then initialize Epoch = 0 and MSE = 1, and do the following steps while (Epoch < Maximum Epoch) and (MSE > Target Error):

1) Increase the epoch: Epoch = Epoch + 1.
2) Do these steps for each training input-output pair:

a) Each input unit (si, i = 1, 2, 3, ..., n) receives signal xi and forwards it to all units in the upper (hidden) layer.

b) Each hidden unit (zj, j = 1, 2, 3, ..., p) sums its weighted input signals:

z_inj = b1j + Σi=1..n si vij    (3)

applies the activation function to calculate its output signal:

zj = f(z_inj)    (4)

and forwards it to the upper (output) layer.

c) Each output unit (ζk, k = 1, 2, 3, ..., m) sums its weighted input signals:

ζ_ink = b2k + Σj=1..p zj wjk    (5)

then applies the activation function to calculate its output signal:

ζk = f(ζ_ink)    (6)

d) Each output unit (ζk, k = 1, 2, 3, ..., m) receives the target associated with the input pattern, then calculates its error information:

δ2k = (tk − ζk) f′(ζ_ink)    (7)
ϕ2jk = δ2k zj
β2k = δ2k

Then calculate the weight correction used to correct wjk:
for epoch = 1, the correction value is

Δwjk = α ϕ2jk    (8)

while for epoch > 1, the weight correction value is

Δwjk = mc × Δwjk(old) + α ϕ2jk    (9)

Then calculate the bias correction used to correct b2k:
for epoch = 1, the bias correction is

Δb2k = α β2k    (10)

and for epoch > 1 it is

Δb2k = mc × Δb2k(old) + α β2k    (11)

e) Each hidden unit (zj, j = 1, 2, 3, ..., p) sums its delta inputs from the upper-layer units:

δ_inj = Σk=1..m δ2k wjk    (12)

then multiplies this value by the derivative of its activation function to calculate its error information:

δ1j = δ_inj f′(z_inj)    (13)
ϕ1ij = δ1j si
β1j = δ1j

Then calculate the weight correction used to correct vij:
for epoch = 1,

Δvij = α ϕ1ij    (14)

and for epoch > 1,

Δvij = mc × Δvij(old) + α ϕ1ij    (15)

Then calculate the bias correction used to correct b1j:
for epoch = 1,

Δb1j = α β1j    (16)

and for epoch > 1,

Δb1j = mc × Δb1j(old) + α β1j    (17)

f) Each output unit (ζk, k = 1, 2, 3, ..., m) corrects its bias and weights (j = 0, 1, 2, ..., p):

wjk(new) = wjk(old) + Δwjk    (18)
b2k(new) = b2k(old) + Δb2k    (19)

Each hidden unit (zj, j = 1, 2, 3, ..., p) corrects its bias and weights (i = 0, 1, 2, ..., n):

vij(new) = vij(old) + Δvij    (20)
b1j(new) = b1j(old) + Δb1j    (21)

3) Calculate the MSE.

Input-output data for training are collected by placing the mobile robot in a room and moving it manually, then reading and saving the sensor and heading-angle data at every sample time.
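As a concrete sketch, the per-pattern update rules of Eqs. (3)-(21) can be written in Python. The toy data set, learning parameters, and helper names below are illustrative assumptions (the paper trains on recorded sensor/heading data), with NumPy's tanh and a clipped linear function standing in for the hyperbolic tangent sigmoid and symmetric saturating linear activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def tansig(x):            # hyperbolic tangent sigmoid (hidden layer)
    return np.tanh(x)

def tansig_deriv(x):
    return 1.0 - np.tanh(x) ** 2

def satlins(x):           # symmetric saturating linear (output layer)
    return np.clip(x, -1.0, 1.0)

def satlins_deriv(x):     # 1 inside the linear region, 0 where saturated
    return ((x > -1.0) & (x < 1.0)).astype(float)

def train(X, T, hidden=20, alpha=0.01, mc=0.9, max_epoch=300, target_mse=1e-3):
    """Per-pattern back-propagation with momentum, following Eqs. (3)-(21)."""
    n, m = X.shape[1], T.shape[1]
    V = rng.uniform(-0.5, 0.5, (n, hidden)); b1 = np.zeros(hidden)
    W = rng.uniform(-0.5, 0.5, (hidden, m)); b2 = np.zeros(m)
    dV = np.zeros_like(V); db1 = np.zeros_like(b1)
    dW = np.zeros_like(W); db2 = np.zeros_like(b2)
    epoch, mse = 0, 1.0
    while epoch < max_epoch and mse > target_mse:
        epoch += 1
        errs = []
        for s, t in zip(X, T):
            z_in = b1 + s @ V; z = tansig(z_in)        # Eqs. (3)-(4)
            y_in = b2 + z @ W; y = satlins(y_in)       # Eqs. (5)-(6)
            d2 = (t - y) * satlins_deriv(y_in)         # Eq. (7)
            d1 = (W @ d2) * tansig_deriv(z_in)         # Eqs. (12)-(13)
            # corrections with momentum, Eqs. (8)-(11) and (14)-(17)
            dW = mc * dW + alpha * np.outer(z, d2); db2 = mc * db2 + alpha * d2
            dV = mc * dV + alpha * np.outer(s, d1); db1 = mc * db1 + alpha * d1
            W += dW; b2 += db2; V += dV; b1 += db1     # Eqs. (18)-(21)
            errs.append(float((t - y) @ (t - y)))
        mse = float(np.mean(errs))                     # step 3)
    return V, b1, W, b2, mse

# Toy run on synthetic data standing in for recorded sensor/heading pairs;
# nine inputs and twenty hidden units match the network of Fig. 5.
X = rng.uniform(-1.0, 1.0, (40, 9))
T = 0.1 * X.sum(axis=1, keepdims=True)
V, b1, W, b2, mse = train(X, T)
```

The momentum factor mc reuses the previous correction, as in Eqs. (9), (11), (15), and (17), which smooths the per-pattern updates.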

Fig. 6 shows the robot movement in five different training room models.

Figure 6. Pattern for obstacle avoidance training

After training is finished, the resulting weights give an output MSE (mean squared error) of 5.5×10⁻³. Fig. 7 shows the heading-angle reference together with the heading angle of the trained network.

Figure 7. Obstacle avoidance controller training results.

B. Target direction controller

This controller compares the measured values from the two target sensors. The neural network used is a single-layer saturated linear network; the layer uses a symmetric saturating linear activation function. Fig. 8 shows the scheme of the saturated linear network, and Fig. 9 shows the reference signal and the trained signal.

Figure 8. Neural network saturated linear scheme

Figure 9. Validation of target direction controller

V. SIMULATION

Fig. 10 shows the simulation model of the mobile robot: a four-wheeled robot with two front wheels that determine the robot direction and two rear wheels that determine the robot velocity.

Figure 10. Mobile robot model

The environment model is a room of size m×n consisting of walls and other obstacles. The room is modelled as a two-dimensional environment described by an m×n matrix A = [aij], with i = 1, 2, 3, ..., m and j = 1, 2, 3, ..., n. Each matrix element aij represents a square area inside the environment; combined, the matrix elements form the environment shown in Fig. 11.

In the simulations, noise is added to the sensors as normally distributed random values with zero mean. For the obstacle detection sensors, the noise signal has standard deviation 0.1 m (variance 0.01); for the target position sensors, it has standard deviation 0.02 m (variance 0.0004).

Fig. 12 shows the simulated robot movement in a simple room similar to the training environment. The robot moves along the same path in both conditions, with and without noise.
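A minimal sketch of the environment matrix and the sensor noise model described above; the particular 12×12 room layout and function names are illustrative assumptions (the paper's rooms are shown in Figs. 6 and 11), while the noise parameters are those stated in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Room of m x n square cells: 1 = wall/obstacle, 0 = free space.
m, n = 12, 12
A = np.zeros((m, n), dtype=int)
A[0, :] = A[-1, :] = A[:, 0] = A[:, -1] = 1   # surrounding walls
A[5, 3:9] = 1                                  # an interior obstacle

def noisy_obstacle_reading(true_range_m):
    """Obstacle sensor: zero-mean Gaussian noise, sigma = 0.1 m (var 0.01)."""
    return true_range_m + rng.normal(0.0, 0.1)

def noisy_target_reading(true_range_m):
    """Target sensor: zero-mean Gaussian noise, sigma = 0.02 m (var 0.0004)."""
    return true_range_m + rng.normal(0.0, 0.02)
```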

Figure 11. Room Environment model

Figure 12. Robot path from (1,1) to the target at (11,1), with and without noise

The next simulation shows the robot behaviour in a wide environment, as shown in Fig. 13. This simulation reveals two unique behaviours of the robot that were never presented during the training session. The first appears when the robot detects a nearby obstacle: the robot moves away to avoid it and then follows the obstacle wall closely; this is called the obstacle following behaviour, shown in Fig. 13a.

The second is called the turning-left preference behaviour: as the robot moves, it prefers to turn left when it does not detect any obstacle nearby, as shown in Fig. 13b.

Figure 13. Wall following and turning-left preference behaviours

Fig. 14 shows the robot movement in an unknown room environment. In this case, the robot successfully reaches the target.

Figure 14. Robot path in unknown room environment

Fig. 15 shows a condition in which the robot cannot find the target, which is placed behind an opening too small for the robot to pass through. This condition occurs when the target position is unreachable by the sensors, as shown in Fig. 16, and the path to the target is too narrow to let the robot pass through.

Figure 15. Robot path when it cannot reach the target

VI. CONCLUSION

Robot navigation is successfully managed by two ANN controllers, called the obstacle avoidance controller and the target direction controller. The neural network controllers exhibit some interesting behaviours in environments never presented during their training sessions; these behaviours appear uniquely when the robot is placed in an environment very different from the training conditions. The robot movement paths with and without noisy sensor signals are the same, from which we conclude that the algorithm is relatively capable of handling noisy sensor conditions. The robot will always be able to reach the target provided that the target position is detectable by the sensors and the opening to the target is appropriately larger than the robot.

ACKNOWLEDGMENT
The author wishes to thank his student, Mr. Falah Arbio Yasin, a final-year student who helped run the simulations and prepare the graphs.

Figure 16. Target cannot be detected by the target sensor

REFERENCES
[1] M. I. Gufron, "Perancangan dan simulasi sistem kemudi otomatis robot bergerak otonom dengan menggunakan Fuzzy Logic Controller", Final Project, Faculty of Engineering, University of Indonesia, 2001 (in Indonesian).
[2] A. Mufti, "Penerapan algoritma genetika pada penjejakan lintasan serta pengendalian gerak robot otonom", Final Project, Faculty of Engineering, University of Indonesia, 2003 (in Indonesian).
[3] S. Kusumadewi, "Membangun Jaringan Syaraf Tiruan Menggunakan MATLAB & EXCEL LINK", 1st ed., Graha Ilmu, Yogyakarta, 2004 (in Indonesian).
[4] D. Janglová, "Neural Networks in Mobile Robot Motion", International Journal of Advanced Robotic Systems, Vol. 1, No. 1, pp. 15-22.
[5] J. Holland, "Designing Autonomous Mobile Robots", Elsevier Inc., Tokyo, 2004.
[6] P. Stone, "Intelligent Autonomous Robotics", 1st ed., Morgan and Claypool Publishers, 2007.
