Article
Obstacle Avoidance of Multi-Sensor Intelligent Robot Based on
Road Sign Detection
Jianwei Zhao 1,†, Jianhua Fang 1,*,†, Shouzhong Wang 2,†, Kun Wang 1, Chengxiang Liu 1 and Tao Han 1
1 School of Mechatronic Engineering, China University of Mining and Technology, Beijing 100089, China;
[email protected] (J.Z.); [email protected] (K.W.);
[email protected] (C.L.); [email protected] (T.H.)
2 Beijing Special Engineering and Design Institute, Beijing 100028, China; [email protected]
* Correspondence: [email protected]
† These authors contributed equally to this work.
Abstract: The existing ultrasonic obstacle avoidance robot uses only an ultrasonic sensor in the process of obstacle avoidance, so obstacles can only be avoided along a fixed route and the avoidance cannot follow additional information. At the same time, existing robots rarely involve an obstacle avoidance strategy for avoiding pits. In this study, on the basis of ultrasonic sensor obstacle avoidance, visual information is added so that the robot can refer to the direction indicated by road signs while avoiding obstacles; an infrared ranging sensor is also added so that the robot can avoid potholes. Aiming at this situation, this paper proposes an intelligent obstacle avoidance design for an autonomous mobile robot based on multiple sensors in a multi-obstruction environment. A CascadeClassifier is trained on positive and negative samples of road signs with similar color and shape. Multi-sensor information fusion is used for path planning, and the obstacle avoidance logic of the intelligent robot is designed to realize autonomous obstacle avoidance. The infrared sensor obtains environmental information about ground depressions on the wheel path, the ultrasonic sensors obtain the distance information of the surrounding obstacles and road signs, and the road sign information obtained by the camera is processed by the computer and transmitted to the main controller. The environment information obtained is processed by the microprocessor and the control command is output to the execution unit. The feasibility of the design is verified by analyzing the distances acquired by the ultrasonic sensor and infrared distance measuring sensors and the model obtained by training the road sign samples, as well as by experiments in a manually constructed complex environment.
infrared cameras and active sensing using lidar and sonar sensors to detect dynamic or stationary obstacles in real time [11]. Laser ranging has been used to analyze the wheel skid of four-wheel skid-steering mobile robots. Other studies have proposed target tracking of wheeled mobile robots based on visual methods [12,13].
For an unknown environment, sensors are usually used for intelligent obstacle avoidance and path planning. An early method of obstacle avoidance and path planning was to detect stickers on the ground by infrared ray for navigation; this method can only be used in a known environment [14]. Jiang et al. [15] utilized six ultrasonic sensors to capture relative information about the surroundings of wheeled robots and to identify a parking space for automatic parking. In 1995, Yuta and Ando [16] installed ultrasonic sensors on the front of a robot and in various locations on the left and right sides. In Refs. [17,18] multiple ultrasonic data were used to create a map of the surrounding environment or establish the surface shape of obstacles.
At present, the research on obstacle avoidance robots is mostly about the motor driving principle, the motor speed regulation scheme and the ranging principle, and existing obstacle avoidance studies mainly address raised obstacles. Few studies consider mobile robots that encounter pits during autonomous travel. In this paper, ultrasonic sensor information, infrared distance measuring sensor information and camera information are fused. After solving the above problems, the function of road sign recognition is also introduced, which allows the mobile robot to move accurately based on the traffic sign information.
In the process of moving, the relative sliding between the mobile robot and the ground
is inevitable. We used the slip rate to describe the wheel slip and its calculation formula is:
$$s_i = \frac{w_i r - v_{ix}}{w_i r} \times 100\% \tag{2}$$
When $s_i < 0$, that is, the wheel linear velocity is less than the wheel center velocity, the frictional force between the wheels and the ground is the braking force and slippage occurs.

When $s_i = 0$, that is, the linear speed of the wheel is equal to the speed of the wheel center, the robot is in a complete rolling state.

When the robot rotates, the longitudinal velocity at the centers of the same-side wheels is equal:

$$w_1 r(1 - s_1) = w_2 r(1 - s_2) \tag{3}$$

We defined the side near the center of rotation as the inside side and the other side as the outside side. When the robot turns, both the inner wheel and the outer wheel slip. The instantaneous center of the contact point between the wheels on each side and the ground has the same y coordinate as the rotating center.

Since there is sliding, the linear velocity is represented by the longitudinal velocity of the wheel center, while the lateral velocity can be represented by the sideslip angle, so formula (7) can be obtained:

$$\begin{bmatrix} v_{Gx} \\ v_{Gy} \\ \dot{\varphi} \end{bmatrix} = \begin{bmatrix} \frac{1-s_l}{2} & \frac{1-s_r}{2} \\[2pt] \frac{\tan\alpha(1-s_l)}{2} & \frac{\tan\alpha(1-s_r)}{2} \\[2pt] -\frac{2(1-s_l)}{b} & \frac{2(1-s_r)}{b} \end{bmatrix} r \begin{bmatrix} w_l \\ w_r \end{bmatrix} \tag{7}$$

Through coordinate system transformation, the kinematics equation of the mobile robot in the ground coordinate system XOY can be expressed by formula (8):

$$\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_{Gx} \\ v_{Gy} \\ \dot{\varphi} \end{bmatrix} \tag{8}$$
Parameter | Meaning
$r$ | The wheel radius
$b$ | Distance between the left and right wheel centroids
$w_l$ | Angular speed of the left wheel
$w_r$ | Angular speed of the right wheel
$s_l$ | Slip rate of the left wheel
$s_r$ | Slip rate of the right wheel
$\alpha$ | Sideslip angle
$\dot{\varphi}$ | The yaw velocity of rotation about the z axis in the XOY plane
$v_{Gx}$ | The longitudinal velocity of the center of mass
$v_{Gy}$ | The lateral velocity of the center of mass
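For reference, the following is a minimal numerical sketch of Equations (2), (7) and (8), using the parameters in the table above; the function and variable names are ours, not from the paper.

```python
import numpy as np

def slip_rate(w, r, vx):
    """Slip rate of one wheel, Equation (2): s = (w*r - vx) / (w*r)."""
    return (w * r - vx) / (w * r)

def body_velocity(wl, wr, sl, sr, r, b, alpha):
    """Body-frame velocities [v_Gx, v_Gy, phi_dot] from wheel speeds, Equation (7)."""
    J = np.array([
        [(1 - sl) / 2,                 (1 - sr) / 2],
        [np.tan(alpha) * (1 - sl) / 2, np.tan(alpha) * (1 - sr) / 2],
        [-2 * (1 - sl) / b,            2 * (1 - sr) / b],
    ])
    return J @ (r * np.array([wl, wr]))

def ground_velocity(v_body, theta):
    """Velocities [X_dot, Y_dot, theta_dot] in the ground frame XOY, Equation (8)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
    return R @ v_body
```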
The environment information is fed back to the main controller, and the main controller processes the information. The driving system is driven according to the environment information to control the movement speed and attitude of the robot, so as to avoid obstacles in the range of activity. Figure 2 shows the system composition of the mobile robot.
Figure 2. Composition of mobile robot system.
4. Detection System

In the research of obstacle avoidance of a mobile robot, the processing of the surrounding environment information is especially important. The environment is dynamic and unknown in real life. At the same time, in some environments there are signs that require the robot to move in a specified direction. It is important that the robot moves safely to its destination in a complex location. By selecting appropriate sensors to collect and analyze environmental information, the robot can realize the above functions. In this design, the HC-SR04 ultrasonic sensor, the GP2YA02 infrared distance measuring sensor and a USB driver-free camera are selected as the components of the detection system.
4.1. Sensor Layout

In order to make the robot work normally in both static and dynamic environments, nine ultrasonic sensors, two infrared distance measuring sensors and a USB driver-free camera were installed on the robot body. The ultrasonic sensors are used to detect obstacle information of the surrounding bulges; an infrared distance measuring sensor is positioned between the two wheels in front of the bottom wheel for detecting ground pits; the camera is used to detect road sign information. The information detected by the sensors is transmitted to the main controller for processing, and a command is sent to the motor driver to control the robot for the corresponding movement. The sensor layout of the mobile robot is shown in Figure 3.
Figure 3. Sensor layout.

4.2. Target Detection Based on Adaboost Algorithm
4.2.1. Sample Pretreatment
The training sample set is divided into positive and negative samples: a positive sample is a picture of a road sign and a negative sample is any other picture. In this paper, 1000 positive samples and 2000 negative samples were selected, and the samples were grayed and normalized to a 128 × 72 gray scale to form the training sample set, so that different pictures do not yield different numbers of features. Figure 4 shows the samples of the three road signs to be trained on separately; Figure 5 shows negative sample pictures.
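As an illustration of this preprocessing step, the short OpenCV sketch below grays and resizes a folder of sample pictures to the 128 × 72 format described above; the folder layout is a hypothetical example.

```python
import glob
import os

import cv2

os.makedirs("train_samples", exist_ok=True)   # output folder (hypothetical layout)
for path in glob.glob("raw_samples/*.jpg"):   # raw sample pictures (hypothetical)
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # gray the sample
    norm = cv2.resize(gray, (128, 72))             # normalize to 128 x 72
    cv2.imwrite(os.path.join("train_samples", os.path.basename(path)), norm)
```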
Figure 4. Positive sample picture.

Figure 5. Negative sample picture.
4.2.2. CascadeClassifier Training Based on Adaboost

The Adaboost algorithm is an adaptive boosting algorithm. The basic idea of Adaboost is to use weak classifiers and sample spaces with different weight distributions to build a strong classifier [19–22]. The Adaboost algorithm synthesizes a strong CascadeClassifier with a strong classification ability by superposing a large number of simple CascadeClassifiers with general classification ability. A strong CascadeClassifier is formed by selecting the weak CascadeClassifiers with the best resolution performance and the least error. The principle is to carry out T cycle iterations, select an optimal weak CascadeClassifier each time, then update the sample weights, reducing the weight of correctly resolved samples and increasing the weight of incorrectly resolved samples. The specific algorithm is as follows [23–25]:

Step 1: given a set of data sets for training:

$$\{\{x_1, y_1\}, \{x_2, y_2\}, \ldots, \{x_n, y_n\}\}, \tag{9}$$

where $x_i$ is the input training sample image, $y_i$ is the result of classification and $y_i \in \{0, 1\}$; 1 means a positive sample, 0 means a negative sample;

Step 2: specify the number of loop iterations;

Step 3: initialize the weights of the samples:

$$w_1 = \{w_{1,1}, \ldots, w_{1,N}\}, \quad w_{1,j} = d(i), \tag{10}$$

where $d(i)$ is the distribution probability used for initialization;

Step 4: for $t = 1, 2, \ldots, T$ (T is the number of training rounds, which determines the number of final weak CascadeClassifiers):

(1) Initialize the weight:

$$p^t = \{p^t_1, \ldots, p^t_N\}, \tag{11}$$

(2) The samples are trained by a learning algorithm to obtain a weak CascadeClassifier.

(3) The error rate of each classifier under the current weights is found:

$$\varepsilon_t = \sum_{i=1}^{N} p_{t,i} \, |h_t(x_i) - y_i|, \tag{13}$$

The CascadeClassifier with the smallest error rate among the obtained weak CascadeClassifiers is selected and added to the strong CascadeClassifier.

(4) Update the weights:

$$\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1 - |h_t(x_i) - y_i|}, \tag{14}$$

If sample $i$ is classified correctly:

$$|h_t(x_i) - y_i| = 0, \tag{15}$$

otherwise:

$$|h_t(x_i) - y_i| = 1, \tag{16}$$

where

$$\beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t}, \tag{17}$$

(5) After the T rounds, the strong CascadeClassifier obtained is:

$$H(x) = \begin{cases} 1 & \sum_{t=1}^{T} \alpha_t h_t(x) \geq \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0 & \text{otherwise} \end{cases} \tag{18}$$

where

$$\alpha_t = \log \frac{1}{\beta_t}, \tag{19}$$
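To make the loop above concrete, the following sketch implements Steps 1–4 and Equations (10)–(19) for generic weak classifiers returning 0/1 arrays. It is a simplified stand-in for the cascade training actually used, and all names are ours.

```python
import numpy as np

def adaboost_train(X, y, candidates, T):
    """Sketch of the training loop in Equations (10)-(19). `candidates` is a
    list of weak classifiers; each maps the sample array X to a 0/1 array."""
    N = len(y)
    w = np.full(N, 1.0 / N)                    # Equation (10): initial weights
    picked, alphas = [], []
    for _ in range(T):
        p = w / w.sum()                        # Equation (11): normalized weights
        errors = [np.sum(p * np.abs(h(X) - y)) for h in candidates]  # Equation (13)
        best = int(np.argmin(errors))          # weak classifier with least error
        h, eps = candidates[best], errors[best]
        beta = eps / (1.0 - eps)               # Equation (17)
        w = w * beta ** (1.0 - np.abs(h(X) - y))   # Equation (14): reweight samples
        picked.append(h)
        alphas.append(np.log(1.0 / beta))      # Equation (19)

    def strong(X):                             # Equation (18): strong classifier
        votes = sum(a * h(X) for a, h in zip(alphas, picked))
        return (votes >= 0.5 * sum(alphas)).astype(int)

    return strong
```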
Figure 6. Identification flow chart.
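At detection time, a trained cascade can be applied to camera frames roughly as in the sketch below, which mirrors the identification flow of Figure 6; the cascade file name and camera index are assumptions, and the detectMultiScale parameters would need tuning.

```python
import cv2

# Load the trained cascade (file name is hypothetical) and scan camera frames.
cascade = cv2.CascadeClassifier("road_sign_cascade.xml")
cap = cv2.VideoCapture(0)                      # USB driver-free camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale scan of the frame for road sign candidates
    signs = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in signs:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("road sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```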
5. Obstacle Avoidance Strategy for Mobile Robots

In this study, the obstacle avoidance is mainly realized in the following two situations, namely, obstacle avoidance for a ground bulging obstacle and obstacle avoidance for a ground sag obstacle. Figure 7 shows the schematic diagram of obstacles and obstacle avoidance routes in this study. We made the following assumptions about obstacles:
1. Ground raised obstacles and ground pits only meet the conditions shown in Figure 8;
2. Ground raised obstacles and ground pits do not appear simultaneously;
3. The road sign is present on a raised obstacle;
4. The ground pit width is less than the wheel spacing of the robot;
5. There is a stop road sign at the destination that prompts a stop.
Figure 7. Schematic diagram of obstacles: (a) is the raised obstacle on the ground; (b) is the sunken obstacle on the ground.
Figure 8 shows a block diagram of the motion of the mobile robot. During the operation of the intelligent vehicle, first initialize the data and give the vehicle an initial forward speed; then start the front ultrasonic sensor and the infrared distance measuring sensors. If the ultrasonic sensor detects an obstacle, stop and start the camera to judge whether there is a road sign indication; if there is a road sign indication, avoid the obstacle according to the road sign indication; if there is no road sign, avoid the obstacle autonomously according to the built-in program; if the infrared distance measuring sensors detect a ground pit, use the built-in program to perform the corresponding movement to avoid the ground pit.
Figure 8. Block diagram of obstacle avoidance program.
When the robot avoids obstacles, it needs to leave a certain space so that the robot can
turn safely. Since the length and width of the robot are both 70 cm, the distance between
its rotating center and the furthest point is about 50 cm. The safe distance is set as 60 cm
because the robot has deviation when rotating. When the distance measured by the sensor
in the front is equal to 60 cm, it indicates that there is an obstacle in front. At this time, open
the camera to detect whether there is a road sign. If the road sign is detected, the obstacle
should be avoided in the direction indicated by the type of road sign. If no road sign is
detected, turn off the camera and perform obstacle avoidance. The reason cameras are used
only when obstacles are detected is that road signs are fixed to the surface of obstacles on
the ground, and because there is no other way of ranging other than by ultrasonic sensors,
the cameras cannot determine the distance of road signs once they are detected. Therefore,
the ultrasonic sensor detects the obstacle and determines the distance of the obstacle, and
then turns on the camera to determine whether there is a road sign on the obstacle and the
distance of the road sign.
The obstacle avoidance movement for raised obstacles on the ground is as follows: when the distance of the obstacle detected by the ultrasonic sensor on the front side is 60 cm, the left and right ultrasonic sensors start to detect obstacles. If the distance measured by the ultrasonic sensor on the left is greater than that measured by the ultrasonic sensor on the right, the robot turns to the left; at this time, the speeds of the left and right wheels are opposite and the left wheel reverses. If the distance measured by the ultrasonic sensor on the right is greater than that measured by the ultrasonic sensor on the left, the robot turns to the right; at this time, the speed of the left wheel is opposite to that of the right wheel and the left wheel turns positively. After successfully turning, the robot drives forward until it is out of the range of the obstacle. At this point, the robot turns in the opposite direction to the previous turning direction, then continues driving and finally leaves the obstacle. When the robot leaves the obstacle range, the detection value of the ultrasonic sensor on the side of the robot is greater than 60 cm.
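The turn decision described above can be summarized by the following sketch; the `robot` driver object and its motion commands are hypothetical placeholders for the motor control layer.

```python
SAFE_DISTANCE_CM = 60          # turning clearance chosen above

def avoid_raised_obstacle(front_cm, left_cm, right_cm, robot):
    """Turn decision for a raised obstacle without a road sign; `robot` is a
    hypothetical driver object exposing simple motion commands."""
    if front_cm > SAFE_DISTANCE_CM:
        robot.forward()        # no obstacle within the safe distance
        return
    robot.stop()
    if left_cm > right_cm:
        robot.turn_left()      # more room on the left: left wheel reverses
    else:
        robot.turn_right()     # more room on the right: right wheel reverses
```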
The obstacle avoidance movement for ground pits is as follows: the distance between the infrared sensor and the ground is 6 cm, and the distance between the chassis and the ground is 3 cm, so the safe distance is less than 9 cm, and it is set to 8 cm in this experiment. Thus, when the distance detected by the infrared ranging sensor is greater than 8 cm, a pit is present and must be avoided. When the detection distance of the left infrared sensor is greater than or equal to 8 cm, the left wheel slows down and the right wheel accelerates, turning to the left to avoid the pit. When the detection distance of the right infrared sensor is greater than or equal to 8 cm, the right wheel slows down and the left wheel accelerates, turning to the right to avoid the pit. When pits are detected on both sides, the vehicle stops and waits for manual movement.
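A corresponding sketch of the pit avoidance thresholds is given below, using the same hypothetical driver object as in the previous sketch.

```python
PIT_THRESHOLD_CM = 8   # 6 cm sensor height + 3 cm chassis clearance, minus a margin

def avoid_pit(left_ir_cm, right_ir_cm, robot):
    """Pit avoidance thresholds as described above; a reading at or beyond the
    threshold means a pit lies in that wheel's path."""
    left_pit = left_ir_cm >= PIT_THRESHOLD_CM
    right_pit = right_ir_cm >= PIT_THRESHOLD_CM
    if left_pit and right_pit:
        robot.stop()           # pits on both sides: wait for manual recovery
    elif left_pit:
        robot.veer_left()      # left wheel slows, right wheel accelerates
    elif right_pit:
        robot.veer_right()     # right wheel slows, left wheel accelerates
    else:
        robot.forward()        # no pit within the wheel path
```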
Figure 9. Physical prototype.
Table 2. Notes to physical prototype.

Label | 11 | 111 | 12 | 2 | 22 | 31 | 32 | 33
Name | First mount | Mounting hole | 2nd mounting block | Rear wheel | Front wheel | IR sensor | Ultrasonic sensor | Camera
6.1. Sensor Ranging Experiment

Figure 10 shows the ultrasonic sensor. The HC-SR04 ultrasonic ranging module can provide a 2 cm–450 cm non-contact ranging function with ranging accuracy up to 3 mm; the module includes an ultrasonic transmitter, a receiver and a control circuit.

Figure 10. Ultrasonic transducer.
The ultrasonic module has four pins: VCC, Trig (control end), Echo (receiving end) and GND. VCC and GND are connected to the 5 V power supply, Trig (control end) controls the ultrasonic signal sent, and Echo (receiving end) receives the reflected ultrasonic signal.

Figure 11 shows the principle of ultrasonic sensor ranging, which is based on the reflection characteristics of ultrasonic waves. The transmitter end of the ultrasonic sensor emits a beam of ultrasonic waves and starts timing at the same moment, while the ultrasonic wave propagates through the medium. Because sound waves have reflective properties, they bounce back when they encounter obstacles. When the receiving end of the ultrasonic sensor receives the reflected ultrasonic wave, it stops the timing. The propagation medium in this study is air, and the propagation speed of sound in air is 340 m/s. According to the recorded time t, the distance S between the launching position and the obstacle can be calculated according to the formula S = 340t/2.
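As an illustration of this trigger/echo timing, the sketch below takes one HC-SR04 measurement. It assumes a Raspberry Pi style GPIO interface and hypothetical pin numbers, since the controller wiring is not detailed here.

```python
import time

import RPi.GPIO as GPIO    # assumes a Raspberry Pi host (the paper's controller is not stated here)

TRIG, ECHO = 23, 24        # hypothetical BCM pin numbers for Trig and Echo

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_cm():
    """One HC-SR04 reading: send a 10 us trigger pulse, time the echo,
    then apply S = 340 * t / 2 and convert to centimetres."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                  # 10 us trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:       # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:       # wait for the echo pulse to end
        stop = time.time()
    t = stop - start                   # round-trip travel time in seconds
    return 340.0 * t / 2.0 * 100.0     # distance in centimetres
```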
Figure 12. Sequence diagram of ultrasonic sensor.
An ultrasonic sensor was used to measure the distance from the object. The test distance and actual distance of the ultrasonic sensor obtained through multiple experiments are shown in Table 3.
Figure 13. Curve after fitting.
pure color, which can meet the requirements of the robot. In Figure 16, from left to right and from top to bottom, the test distances are 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm, and 100 cm. In this study, experiments were conducted on road signs at different distances. The test results show that in the detection environments where the camera is 20 cm or 25 cm away from the road sign, the road sign cannot be entirely in the field of view due to the influence of the camera parameters; it is only partially in the field of view. Tables 5 and 6 give the experimental data of road sign recognition. From these two tables, it can be concluded that the success rate of recognition is lower when the distance between the camera and the road sign is less than 30 cm. The success rate of recognition is higher when the distance between the camera and the road sign is greater than or equal to 30 cm, reaching an average of 99.625%, which can meet the requirements of accurate obstacle avoidance.
Figure 14. Identification effect of road signs.
Figure 15. Road sign detection environment.
Table 5. Experimental data of short distance road sign recognition.

Detection Distance/cm | Number of Inspections | Number of Successful Tests | Number of Errors Detected | Detection Success Rate (Successes/Total Tests)
20 | 100 | 63 | 37 | 63%
25 | 100 | 80 | 20 | 80%

Table 6. Experimental data of long distance road sign recognition.

Detection Distance/cm | Number of Inspections | Number of Successful Tests | Number of Errors Detected | Detection Success Rate (Successes/Total Tests)
30 | 100 | 100 | 0 | 100%
40 | 100 | 99 | 1 | 99%
50 | 100 | 100 | 0 | 100%
60 | 100 | 99 | 1 | 99%
70 | 100 | 100 | 0 | 100%
80 | 100 | 99 | 1 | 99%
90 | 100 | 100 | 0 | 100%
100 | 100 | 100 | 0 | 100%

Figure 16. Motor driver.

… command to control the positive and negative movement of the dual motor. Figure 17 shows the motor speed curve after PID adjustment. It can be observed from the figure that when the speed of the motor increases suddenly from 0 to the maximum speed, the overshoot is very small and the curve reaches dynamic equilibrium in a very short time.
Figure 17. Motor speed curve.
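As a rough illustration of the speed regulation mentioned above, the following is a generic PID controller sketch; the gains and sampling period are placeholders, not the tuned values behind Figure 17.

```python
class PID:
    """Positional PID speed controller sketch; the gains are illustrative,
    not the values tuned for the prototype (those are not given in the text)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt            # accumulated error term
        deriv = (err - self.prev_err) / self.dt   # rate of change of the error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.01)
#      pwm_duty += pid.update(target_speed, measured_speed)
```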
Figure 18 shows the artificially constructed experimental environment, including the environment of obstacles and road signs. The obstacle avoidance experiment of the physical prototype was conducted in the constructed experimental environment, and the experimental results are shown in Figure 19. The experimental results show that this method can successfully identify road signs and realize autonomous obstacle avoidance in complex environments. Figure 20 shows the real-time picture of road sign detection (not the experimental environment shown in Figure 19).
Figure 18. Experimental environment.

Figure 19. Physical prototype experiment.
7. Conclusions

In the physical prototype experiment, the mobile robot can pass through the narrow gap between obstacles stably and safely, run correctly according to the direction indicated by the road signs, and finally reach the given destination position. The experimental results verify the feasibility of the design and the accuracy of the road sign detection and obstacle avoidance. The method of multi-sensor information fusion can not only make up for the error generated by a single sensor, but can also sense multi-directional and multi-type obstacle information at each moment and realize the obstacle avoidance function. Therefore, it can be widely used in mobile robot systems.
Author Contributions: Conceptualization, J.Z., J.F. and S.W.; methodology, J.F. and S.W.; software,
J.F.; validation, J.F., C.L. and K.W.; formal analysis, J.F.; investigation, K.W., S.W.; resources, T.H.;
data curation, J.F. and C.L.; writing—original draft preparation, T.H. and S.W.; writing—review and
editing, J.Z. and T.H.; visualization, J.F.; supervision, J.Z. and T.H.; project administration, J.Z., S.W.
and K.W.; funding acquisition, K.W. All authors have read and agreed to the published version of
the manuscript.
Funding: This research was supported in part by the National Social Science Foundation of China
under Grant BIA200191.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data are available upon request.
Acknowledgments: This study is supported by the National Social Science Foundation of China
(Grant/Award Numbers: BIA200191). We also thank the editors and reviewers.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Liu, C.; Tomizuka, M. Real time trajectory optimization for nonlinear robotic systems: Relaxation and convexification. Syst.
Control. Lett. 2017, 108, 56–63. [CrossRef]
2. Das, P.; Behera, H.; Panigrahi, B. A hybridization of an improved particle swarm optimization and gravitational search algorithm
for multi-robot path planning. Swarm Evol. Comput. 2016, 28, 14–28. [CrossRef]
3. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A search-and-rescue robot system for remotely sensing the underground coal mine environment. Sensors 2017, 17, 2426. [CrossRef]
4. Milioto, A.; Lottes, P.; Stachniss, C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2229–2235.
5. Goodin, C.; Carrillo, J.T.; Mcinnis, D.P.; Cummins, C.L.; Durst, P.J.; Gates, B.Q.; Newell, B.S. Unmanned ground vehicle simulation with the virtual autonomous navigation environment. In Proceedings of the 2017 International Conference on Military Technologies (ICMT), Brno, Czech Republic, 31 May–2 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 160–165.
6. Peterson, J.; Li, W.; Cesar-Tondreau, B.; Bird, J.; Kochersberger, K.; Czaja, W.; McLean, M. Experiments in unmanned aerial
vehicle/unmanned ground vehicle radiation search. J. Field Robot. 2019, 36, 818–845. [CrossRef]
7. Rivera, Z.B.; de Simone, M.C.; Guida, D. Unmanned ground vehicle modelling in gazebo/ROS-Based environments. Machines
2019, 7, 42. [CrossRef]
8. Galati, R.; Reina, G. Terrain Awareness Using a Tracked Skid-Steering Vehicle With Passive Independent Suspensions. Front.
Robot. AI 2019, 6, 46. [CrossRef]
9. Dogru, S.; Marques, L. Power Characterization of a Skid-Steered Mobile Field Robot with an Application to Headland Turn
Optimization. J. Intell. Robot. Syst. 2018, 93, 601–615. [CrossRef]
10. Figueras, A.; Esteva, S.; Cufí, X.; De La Rosa, J. Applying AI to the motion control in robots. A sliding situation. IFAC-PapersOnLine
2019, 52, 393–396. [CrossRef]
11. Almeida, J.; Santos, V.M. Real time egomotion of a nonholonomic vehicle using LIDAR measurements. J. Field Robot. 2012, 30,
129–141. [CrossRef]
12. Kim, C.; Ashfaq, A.M.; Kim, S.; Back, S.; Kim, Y.; Hwang, S.; Jang, J.; Han, C. Motion control of a 6WD/6WS wheeled platform with in-wheel motors to improve its maneuverability. Int. J. Control. Autom. Syst. 2015, 13, 434–442. [CrossRef]
13. Reinstein, M.; Kubelka, V.; Zimmermann, K. Terrain adaptive odometry for mobile skid-steer robots. In Proceedings of the 2013
IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; IEEE: Piscataway, NJ,
USA, 2013; pp. 4706–4711.
14. Petriu, E. Automated guided vehicle with absolute encoded guide-path. IEEE Trans. Robot. Autom. 1991, 7, 562–565. [CrossRef]
15. Jiang, K.; Seneviratne, L.D. A sensor guided autonomous parking system for nonholonomic mobile robots. In Proceedings of the
1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), Detroit, MI, USA, 10–15 May 1999.
[CrossRef]
16. Ando, Y.; Yuta, S. Following a wall by an autonomous mobile robot with a sonar-ring. In Proceedings of the 1995 IEEE
International Conference on Robotics and Automation, Nagoya, Japan, 21–27 May 1995. [CrossRef]
17. Han, Y.; Hahn, H. Localization and classification of target surfaces using two pairs of ultrasonic sensors. Robot. Auton. Syst. 2003,
33, 31–41. [CrossRef]
18. Silver, D.; Morales, D.; Rekleitis, I.; Lisien, B.; Choset, H. Arc carving: Obtaining accurate, low latency maps from ultrasonic range
sensors. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1
May 2004; Volume 2, pp. 1554–1561. [CrossRef]
19. Mu, Y.; Yan, S.; Liu, Y.; Huang, T.; Zhou, B. Discriminative local binary patterns for human detection in personal album. In
Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008;
IEEE: Piscataway, NJ, USA, 2008; pp. 1–8.
20. Zhou, S.; Liu, Q.; Guo, J.; Jiang, Y. ROI-HOG and LBP Based Human Detection via Shape Part-Templates Matching. Lect. Notes Comput. Sci. 2012, 7667, 109–115.
21. Bar-Hillel, A.; Levi, D.; Krupka, E.; Goldberg, C. Part-Based Feature Synthesis for Human Detection. In European Conference on
Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6314, pp. 127–142. [CrossRef]
22. Walk, S.; Schindler, K.; Schiele, B. Disparity Statistics for Pedestrian Detection: Combining Appearance, Motion and Stereo; Springer:
Berlin/Heidelberg, Germany, 2010; pp. 182–195. [CrossRef]
23. Liu, Y.; Shan, S.; Chen, X.; Heikkilä, J.; Gao, W.; Pietikäinen, M. Spatial-Temporal Granularity-Tunable Gradients Partition (STGGP)
Descriptors for Human Detection; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6311, pp. 327–340. [CrossRef]
24. Geronimo, D.; Sappa, A.D.; Ponsa, D. Computer vision and Image Understanding (Special Issue on Intelligent Vision Systems).
Comput. Vis. Image Underst. 2010, 114, 583–595.
25. Cho, H.; Rybski, P.E.; Bar-Hillel, A.; Zhang, W. Real-Time Pedestrian Detection with Deformable Part Models; Springer:
Berlin/Heidelberg, Germany, 2012; pp. 1035–1042. [CrossRef]