Design of Indoor Security Robot Based On Robot Operating System
https://www.scirp.org/journal/jcc
ISSN Online: 2327-5227
ISSN Print: 2327-5219
School of Computer Science and Technology, Shandong University of Technology, Zibo, China
Keywords
Indoor Security Robot, Indoor Navigation, SLAM, Object Detection, YOLO
1. Introduction
Using indoor robots to solve real-life problems not only saves labor but also keeps pace with current technological development. Indoor robots are now widely used in patrol security, monitoring, industrial production, smart homes and other fields, and researchers' exploration and design of them continues to accelerate [1]. Current research on indoor robots mainly focuses on localization and autonomous navigation with SLAM algorithms at its core, while some researchers independently explore object detection in computer vision; however, little work combines the two functions. The purpose of this paper is therefore to combine robot autonomous navigation with object detection. Specifically, this paper proposes, designs and implements an indoor security robot architecture that integrates an autonomous navigation module and an object detection module. In this architecture, the autonomous navigation module is the central module of the robot: it links perception and motion control, makes decisions based on tasks and perceived environmental information, and plans the robot's motion trajectory. The object detection module, in short, is an extension module of the robot: through its fine-grained perception of the external environment, it can realize many functions, such as intrusion detection and employee identification. Finally, the effectiveness of the two modules is verified through systematic testing; both perform well in an indoor environment.
In recent years, indoor robots have been a research hotspot in the field of robotics. In terms of robot autonomous navigation, Zhang et al. collected video images of the indoor environment using cameras placed at fixed indoor locations and designed real-time image analysis algorithms to achieve obstacle avoidance, robot position tracking and other functions [2]. Chen et al. studied the theory and implementation of a self-positioning algorithm for a single mobile robot based on an odometer and a laser sensor in a general indoor environment [3]. Yang et al. realized wireless communication among the upper computer, the indoor positioning system and the mobile robot, as well as real-time positioning of the mobile robot [4]. Zhong et al. proposed using image recognition technology to obtain the navigation deflection angle of the robot [5]. Guo et al. took a two-wheeled differential mobile robot running ROS as the platform and used lidar as the main sensor to solve the localization, raster map creation and navigation problems of the mobile robot [6]. Zeng et al. constructed a visual SLAM system based on point-line feature fusion by adding line features to point features [7]. Zheng et al. used the TEB (Timed Elastic Band) algorithm for local path planning to propose and implement an indoor autonomous navigation system for mobile robots based on obstacle detection with a depth camera [8].
In terms of robot indoor environment perception, Sun et al. proposed a target autonomous positioning and segmentation extraction algorithm, which independently obtains the ROI (Region Of Interest) and extracts and segments the target point cloud using a voxel-based method [7]. Li et al. combined deep learning methods to study an object detection algorithm based on a 3D laser ranging sensor, and proved the effectiveness of the proposed algorithm through experiments [8]. To address the imbalance and instability of dynamic tracking by preschool children's companion robots in indoor environments, Jin et al. proposed a dynamic human-target tracking method for such robots; experimental results show that the method has low error and high accuracy in dynamic human-body tracking, giving it practical application value [9].
As a development direction of robotics, the indoor security robot is in essence an intelligent system. It not only has good environmental perception ability and can make independent decisions, but is also equipped with a robot vision module to carry out object detection work [10]. This paper takes a tracked robot running ROS as the research object; development of the system is completed through the design and integration of indoor navigation and object detection functions. In particular, how to use the robot's sensors to construct an accurate two-dimensional raster map at low cost, and how to realize object detection with a fast, high-precision neural network, are the focus of this paper. The functional design of the two modules is analyzed in detail, so that the robot system ultimately meets basic usage requirements.
The design and development of the indoor security robot is completed on the ROS platform. ROS, first released in 2007, is an open-source robot operating system that has gained popularity among robot developers. ROS adopts a distributed architecture as its main design idea [11]. The software and functional modules of a robot are treated as individual nodes, and communication between nodes is realized through topics. In this way, nodes can be deployed on different systems or machines, reflecting the advantages of the distributed architecture. A common tool used in ROS, 'rviz', is shown in Figure 1.
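The node-and-topic model described above can be illustrated with a short sketch. The following is plain Python, not the actual ROS API: a minimal in-process message bus stands in for ROS topic routing, purely to show the decoupling that topics provide between nodes.

```python
from collections import defaultdict

class TopicBus:
    """Toy stand-in for ROS topic routing: publishers and subscribers
    only know the topic name, never each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []

# A "camera node" publishes on /usb_cam/image_raw; a "detector node"
# subscribes. Neither holds a reference to the other, so they could
# just as well run in different processes or on different machines.
bus.subscribe("/usb_cam/image_raw", lambda msg: received.append(msg))
bus.publish("/usb_cam/image_raw", "frame-0001")

print(received)  # ['frame-0001']
```

Because nodes interact only through named topics, a module such as object detection can be added or removed without modifying the navigation code, which is exactly the property the architecture in this paper relies on.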
During the development of the object detection function, the main development tools used are the Darknet deep learning framework and OpenCV (Open Source Computer Vision Library). Compared with earlier versions, YOLOv3's detection of small objects has been greatly improved [19]. The YOLOv3 loss function is as follows,
$$
\begin{aligned}
L ={} & \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{i,j}^{obj} \left[ \left(b_x - \hat{b}_x\right)^2 + \left(b_y - \hat{b}_y\right)^2 + \left(b_w - \hat{b}_w\right)^2 + \left(b_h - \hat{b}_h\right)^2 \right] \\
& + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{i,j}^{obj} \left[ -\log\left(p_c\right) + \sum_{i=1}^{n} \mathrm{BCE}\left(\hat{c}_i, c_i\right) \right] \\
& + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{i,j}^{noobj} \left[ -\log\left(1 - p_c\right) \right],
\end{aligned} \tag{1}
$$

where $S$ is the grid size, that is, $S^2$ is 13 × 13, 26 × 26 or 52 × 52; $B$ is the number of boxes; $\mathbb{1}_{i,j}^{obj}$ equals 1 if the box contains a target object and 0 otherwise; $\mathbb{1}_{i,j}^{noobj}$ equals 1 if the box contains no object and 0 otherwise; and BCE (Binary Cross Entropy) is calculated as follows,

$$
\mathrm{BCE}\left(\hat{c}_i, c_i\right) = -\hat{c}_i \log\left(c_i\right) - \left(1 - \hat{c}_i\right) \log\left(1 - c_i\right). \tag{2}
$$
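As a quick numerical check of Equation (2), the BCE term can be sketched in a few lines of Python. This is an illustrative snippet, not part of the Darknet implementation; the clamping epsilon is an added numerical-stability assumption.

```python
import math

def bce(c_hat, c, eps=1e-7):
    """Binary cross entropy of Equation (2):
    -c_hat*log(c) - (1 - c_hat)*log(1 - c).
    The prediction c is clamped to (eps, 1 - eps) to avoid log(0)."""
    c = min(max(c, eps), 1.0 - eps)
    return -c_hat * math.log(c) - (1.0 - c_hat) * math.log(1.0 - c)

# A confident correct prediction costs little; a confident wrong one costs a lot.
print(round(bce(1.0, 0.99), 4))  # → 0.0101
print(round(bce(1.0, 0.01), 4))  # → 4.6052
```

This asymmetry is what pushes the class-probability outputs of the network toward the labels during training.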
For the camera, a function package that can publish image topics is installed: the usb_cam package publishes the images read by the camera as image topics. After the camera driver package is downloaded, the image topic can be published by running its launch file, and the camera display can be observed on the PC virtual machine. Finally, the topic subscribed to by darknet_ros is matched to the topic published by usb_cam by modifying the package's configuration. At this point, the launch file of the package can be executed to perform object detection.
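The topic matching described above typically amounts to a one-line edit in the darknet_ros configuration. The fragment below is a sketch assuming the commonly distributed config/ros.yaml layout of darknet_ros; key names may differ between versions.

```yaml
# config/ros.yaml of darknet_ros: point the image subscriber at the
# topic published by usb_cam instead of the default camera topic.
subscribers:
  camera_reading:
    topic: /usb_cam/image_raw
    queue_size: 1
```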
In the figure, the navigation routes and the robot have been marked with red circles and blue arrows respectively, which are consistent with the actual position and movement of the robot in the test scene.
After the map is built and saved, the 2D raster map can be used for navigation. First launch bringup.launch; once sensor data can be obtained, launch navigate.launch, which runs the path-planning package move_base. Then start the rviz tool and open navigate.rviz, in which the previously saved 2D raster map appears. Navigation goals can now be set through the upper toolbar.
As can be seen from Figure 12, the trained YOLOv3 model has high accuracy and meets the basic usage requirements of indoor security robots.
5. Conclusion
In this paper, a ROS-based indoor security robot system is designed and implemented, and the functionality and reliability of the system are ensured through careful environment configuration, programming and model training. The system makes full use of the robot's laser radar, IMU and other sensors to make its positioning, mapping and navigation more accurate and stable. At the same time, thanks to the well-trained YOLOv3 model, the robot's USB camera can be used to complete relatively accurate target recognition.
Acknowledgements
During my research, many teachers and partners gave me help and support, especially my supervisor, Mr. Zhang Liye. I would like to thank them all here.
Fund
This work was supported by the National Natural Science Foundation of China ("Research on WiFi Indoor Localization System Based on Image Assistance") under Grant 62001272 and the Natural Science Foundation of Shandong Province under Grant ZR2019BF022.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
References
[1] Ge, J.Q. (2020) Implementation of YOLO-v2 Vision Neural Network under ROS Framework of Mobile Robot Platform. China Water Transport (Second Half), 20, 95-97.
[2] Yang, J.L. and Shi, E.X. (2013) Research on Indoor Mobile Robot Localization Method Based on Wireless Network. Mechanical Science and Technology, 32, 457-461+468.
[3] Zhong, Y.C., Liu, A. and Xie, R.L. (2017) Determination of Deflection Angle of Indoor Mobile Robot Navigation Based on Machine Vision. Modular Machine Tools and Automatic Processing Technology, No. 4, 1-4+9.
[4] Guo, C. (2020) Research on Indoor Mobile Robot Navigation Based on Lidar. Master's Thesis, Zhejiang University of Technology, Hangzhou.
[5] Zeng, N.W. (2021) Research and Implementation of Visual SLAM for Indoor Mobile Robots Based on Point-and-Line Feature Fusion. Master's Thesis, Chongqing University of Posts and Telecommunications, Chongqing.
[6] Zheng, W.Q. and Wang, D. (2022) Obstacle Avoidance Research of ROS Indoor Mobile Robot. Metrology and Measurement Technology, 49, 44-47.
[7] Sun, Z.Y. (2018) Research on Three-Dimensional Modeling and Object Recognition Technology of Mobile Robot Indoor Environment. Master's Thesis, Harbin Institute of Technology, Harbin.
[8] Li, X.P., Geng, D., Gong, Y.S., et al. (2018) Indoor Environment Object Detection for Mobile Robot Based on 3D Laser Sensor. Proceedings of the 2018 37th Chinese Control Conference, Wuhan, 25-27 July 2018, 5.
[9] Jin, F. and Qi, C. (2021) Human Target Dynamic Tracking Method of Preschool Children's Companion Robot in Indoor Environment. Automation & Instrumentation, No. 11, 156-159.
[10] Sun, T.T. (2021) Research on Multi-Sensor Indoor Mobile Robot Autonomous Localization Based on ROS System. Internet of Things Technology, 11, 33-35.
[11] Nagla, S. (2020) 2D Hector Slam of Indoor Mobile Robot Using 2D Lidar. Proceedings of 2020 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), Chennai, 10-11 December 2020, 1-4. https://doi.org/10.1109/ICPECTS49113.2020.9336995
[12] Zhan, R.Z. and Jiang, F. (2018) Object Recognition System of Mobile Robot Based on ROS and Deep Learning. Electronic Test, No. 15, 70-71+64.
[13] Mahendru, M. and Dubey, S.K. (2021) Real Time Object Detection with Audio Feedback Using Yolo vs. Yolo_v3. Proceedings of 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, 28-29 January 2021, 734-740. https://doi.org/10.1109/Confluence51648.2021.9377064
[14] Mur-Artal, R., Montiel, J.M.M. and Tardos, J.D. (2015) ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, 31, 1147-1163. https://doi.org/10.1109/TRO.2015.2463671
[15] Sadruddin, H., Mahmoud, A. and Atia, M.M. (2020) Enhancing Body-Mounted LiDAR SLAM Using an IMU-Based Pedestrian Dead Reckoning (PDR) Model. Proceedings of 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, 9-12 August 2020, 901-904. https://doi.org/10.1109/MWSCAS48704.2020.9184561
[16] Fu, G.P., Zhu, L.X. and Zhang, S. (2021) Training Robot Based on ROS and Lidar SLAM. Information Technology and Informatization, No. 11, 32-35+42.
[17] Liu, Y.Q. (2021) Improved Target Detection Algorithm Based on YOLO Series. Master's Thesis, Jilin University, Jilin.
[18] Li, B. (2020) Localization and Perception of Inspection Robot Based on Vision. Master's Thesis, Shanghai Institute of Electric Engineering, Shanghai.
[19] Won, J.H., Lee, D.H., Lee, K.M., et al. (2019) An Improved YOLOv3-Based Neural Network for De-Identification Technology. Proceedings of 2019 34th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), JeJu, 23-26 June 2019, 1-2. https://doi.org/10.1109/ITC-CSCC.2019.8793382