Autonomous Service Robot for Efficient Tidying of Children s Toys Using ROS2 Framework - Anyeh
Autonomous Service Robot for Efficient Tidying of Children s Toys Using ROS2 Framework - Anyeh
Abstract— This paper presents the design and evaluation of • Implementation of a webserver node to visualize real-time
an autonomous service robot aimed at tidying up children’s results through a responsive, user-friendly dashboard for
toys. The system leverages the ROS2 framework and state-of- enhanced monitoring and control.
the-art algorithms for object detection, mapping, and task
automation. The primary objectives are to report the count and • Integration of waypoint navigation using the ROS2 Nav2
location of objects in a defined environment and develop a map- stack for optimized path planning and obstacle
based tidying approach. Through simulation-based avoidance in cluttered environments
implementation and qualitative and quantitative evaluations,
the system demonstrates significant potential in automating II. RELATED WORK
household tidying tasks. The results highlight key
advancements, limitations, and future directions in service A. Introduction to Service Robot
robotics. Service robotics is a rapidly evolving field that focuses on
automating tasks in domestic and industrial environments.
Keywords— Service Robotics, ROS2 Framework, Object The significance of this research lies in its potential to enhance
Detection, Autonomous Navigation, Household Automation efficiency and reduce human effort in repetitive tasks. Among
the critical themes in this domain are object detection,
I. INTRODUCTION navigation, and task execution.
The integration of service robots into household
environments is a growing trend aimed at improving quality
of life by automating repetitive tasks. Among these, the task
of tidying children’s toys presents unique challenges,
including diverse object types, unpredictable layouts, and B. Foundational Frameworks and Tools
varying levels of clutter. Quigley et al. (2009) introduced the ROS framework,
which has since become a cornerstone for robotic system
development. ROS2, the updated version, enhances
This work aims to design an autonomous service robot modularity and scalability, facilitating complex tasks like
capable of efficiently identifying, categorizing, and localizing navigation and manipulation. The Navigation2 (Nav2) stack,
toys to tidy up a designated space. The specific objectives for instance, supports robust path planning and obstacle
include: avoidance, which are crucial for autonomous robots.
a) Reporting the count and location of objects in the
environment.
C. Object Detection and Classification
b) Developing a map-based tidying strategy adaptable Advanced vision systems leveraging OpenCV and Image
to various layout complexities. Geometry enable precise object detection and classification.
The motivation stems from the increasing demand for Redmon et al. (2016) introduced YOLO (You Only Look
intelligent robotic systems that enhance domestic efficiency. Once), a real-time object detection algorithm that balances
By employing advanced technologies in object recognition, accuracy and speed, making it suitable for real-world
mapping, and automation, the proposed system addresses a applications. These methods have been instrumental in
practical household need while advancing the field of service enabling robots to identify and manipulate diverse objects
robotics. Existing literature provides foundational insights efficiently.
into these areas, which are expanded upon in this work.
D. Mapping and Localization
This paper makes the following key contributions: SLAM (Simultaneous Localization and Mapping) has
been widely adopted for creating maps of unknown
• Development of a simulation-based framework for a environments while tracking the robot’s location. Durrant-
service robot capable of tidying children’s toys using the Whyte et al. (2006) provided a comprehensive survey of
ROS2 framework. SLAM methodologies, highlighting their application in
• Extension of object detection capabilities to recognize four dynamic settings. Despite their effectiveness, challenges
distinct colors (red, green, blue, yellow) and multiple remain in adapting these techniques to cluttered and dynamic
shapes (cylindrical, spherical, cubical, triangular) with environments.
scalability for additional attributes.
E. Challenges in Household Environments object localization. OpenCV-based algorithms for
Despite these advancements, significant challenges persist color and shape classification. The robot is
in cluttered and dynamic environments. Hidden or stacked programmed to identify spherical, cylindrical, and
objects complicate detection and retrieval processes. Studies cubical objects in red, green, blue, and yellow color
on map-based navigation have employed waypoints to objects, representing children’s toys.
optimize path planning, but these often assume relatively • Planning: The planning module employs the Nav2
static environments, limiting their applicability in dynamic stack for path planning and navigation. Waypoint
household scenarios. navigation in the ROS2 Navigation stack involves
guiding a robot through a series of predefined points
(waypoints) in a specific order. The robot navigates
F. Positioning This Work from one waypoint to the next, performing tasks or
This paper builds upon these advancements by integrating collecting data along the way. This allows the robot to
ROS2 components and addressing specific gaps, such as navigate the environment efficiently while avoiding
handling occluded objects and adapting to diverse layouts. obstacles.
However, unlike prior works that utilized real-world testing,
• Control: The control module uses inverse kinematics
this study was limited to simulation due to hardware to control the robot’s arm for precise manipulation
constraints. Additionally, robotic arm manipulation was not
tasks.
fully implemented, marking areas for improvement.
In summary, while significant progress has been made in C. Simulation and Visualization
autonomous robotics, there remain gaps in handling dynamic Simulation and visualization tools are used to create a
environments, diverse object recognition, and versatile realistic testing environment and to monitor the robot’s
manipulation. Our research addresses these gaps by performance.
integrating advanced perception, planning, and manipulation • Gazebo: The Gazebo simulator is used to create a
techniques within the ROS2 framework, thereby enhancing realistic playground environment for testing the
the robot’s efficiency in tidying children’s toys. The following robot’s capabilities. The LIMO robot is simulated in
section details the methodology used to achieve these this environment to evaluate its performance.
objectives.
• RViz: RViz is utilized for visualizing the robot’s
.
perception and navigation in real-time.
III. METHODOLOGY D. Data Management
The robot’s architecture is based on the ROS2 framework, Efficient data management is crucial for tracking and
which provides a robust and flexible platform for developing organizing the toys during the tidying process.
robotic applications. The methodology section details the
hardware components, including sensors and actuators, and • SQLite: An SQLite database is used to store the
the software modules for perception, planning, and control. location and color of the objects as the robot scans
The perception module utilizes a combination of RGB-D across the children’s playground. This database helps
cameras and machine learning algorithms to identify and in tracking and organizing the toys efficiently
locate toys. The planning module employs path planning
algorithms to navigate the environment, while the
manipulation module uses inverse kinematics to control the
robot’s arm for picking up and placing toys.
A. Hardware Components
The simulated hardware setup includes the LIMO robot,
equipped with various sensors and actuators to perform the toy
tidying task.
• Robot: The LIMO robot is used for the experiments,
both in simulation and real-world scenarios.
• Sensors: RGB-D cameras for object detection and
localization.
• Actuators: The researcher could not get a robotic arm
with a versatile gripper for picking up and placing toys.
B. Software Modules
The software architecture is designed to handle object
detection & classification, planning, and control tasks,
leveraging the capabilities of ROS2, Gazebo and RViz.
• Object Detection and Classification: The perception
module utilizes OpenCV for image processing and
object recognition. Utilizing 3D vision systems for
Identify applicable funding agency here. If none, delete this text box.
E. Experimental Setup • The script checks if an object with the same color and
The experimental setup involves testing the robot in both shape already exists near the estimated position in the
simulated and real-world environments to validate its database.
performance. • If an existing object is found, its record is updated
• The robot is tested in simulated only. The with the current timestamp and incremented count.
playground is set up with various toys scattered • If no existing object is found, a new record is
around. inserted into the database with the detected
• Environment layout takes two scenarios: basic object's information.
and advanced. In the basic scenarios, few objects This methodology allows for real-time detection and tracking
placed in fixed patterns while in the advanced, of objects based on color and shape, with the results stored in
randomly placed, hidden, or stacked objects a database for further analysis.
requiring adaptive strategies.
• The robot scans the playground, identifies the
toys, and stores their information in the SQLite G. Second Implementation
database. In this script, you implemented a ROS2 node named
• Using the Nav2 stack, the robot plans its path to Color3DDetection that detects and localizes colored objects
each toy, picks it up using the robotic arm, and in 3D space using color and depth images from a camera.
places it in a designated area.. Here is a concise explanation of the methodology:
i. Initialization:
F. First Implementation
The ROS2 node is initialized and named Color3DDetection.
I implemented a ROS2 node named CameraClassifier that
Parameters such as real_robot, min_area_size, global_frame,
processes images from a camera, detects objects based on their
color and shape, and stores the detected objects' information and visualisation are declared and initialized.
in a SQLite database. Here is a brief explanation of the
methodology: ii. Camera Info Callbacks:
• Shape Detection: Contours are detected in the image, • The process_contours method processes detected
and the number of sides of the largest contour is used contours to estimate the 3D position of objects.
to classify the shape (triangle, box, cylinder). • Image coordinates are transformed to camera
coordinates using depth data.
• Camera coordinates are transformed to global
ii. Position Estimation: coordinates using TF2.
The position of the detected object is estimated (hard- • The global coordinates of detected objects are
coded values for demonstration). published as PoseStamped messages.
• If visualization is enabled, detected objects are
marked on the color image and displayed.
iii. Database Operations:
Both detection and classification methods used Waypoint
follower strategy. In my project, I implemented waypoint
navigation using the ROS2 Nav2 stack. I defined waypoints A. Reflection and Proposed Improvements
as coordinates and orientations for the robot to visit, used • Limitations: Handling occluded objects, complex stacking
global and local planners for path computation, and employed scenarios, lack of real-world testing, and incomplete
controllers to generate velocity commands for stable robotic arm functionality.
navigation. The system provided real-time feedback on
progress and invoked recovery behaviors when necessary. • Enhancements: Incorporating advanced SLAM
This setup enabled efficient and reliable navigation through techniques, adaptive learning algorithms, and full
predefined paths, as demonstrated in both simulated and real- robotic arm integration in future iterations
world environment. The setup methodology allows for real-
time detection and localization of colored objects in 3D
space, with results published for further processing or From the result, the robot can clearly detect and classify
the toys, however, when the toys become more clustered, the
visualization.
count of the toys do not corresponded to the right amount E.g.
a scenario where a large encompassing box is place behind a
In conclusion, the methodology outlined above provides a small box. This require further research. Potential solutions
comprehensive approach to developing and testing an and future research directions are proposed to enhance the
autonomous service robot for tidying children’s toys. The robot’s performance..
following section presents the results of the experimental
evaluations conducted to assess the robot’s performance.
V. CONCLUSION
IV. EVALUATION, RESULTS AND DISCUSSION This paper presents a comprehensive approach to
designing and evaluating a service robot for tidying children’s
toys. The system effectively integrates ROS2 components to
Experimental evaluations were conducted in a simulated address the challenges of object detection, classification, and
environment and a real-world household setting. The results navigation in diverse environments. The findings contribute to
section presents quantitative metrics such as success rate, time the field by demonstrating the potential of modular and
efficiency, and error rates. The robot successfully tidied toys scalable robotic solutions for domestic tasks. However, the
with a high success rate, demonstrating its capability to handle absence of real-robot testing and full robotic arm
various toy shapes and sizes. Comparative analysis with implementation highlights areas for future work. Future
manual tidying highlights the robot’s efficiency and research will focus on enhancing adaptability and efficiency
consistency. through machine learning and real-world testing.
ACKNOWLEDGMENT
Figures 1 and 2 illustrate detection accuracy and I would like to express my deepest gratitude to my Robot
navigation efficiency across basic and advanced layouts. Programming instructors, Riccardo Polvara and Grzegor
Cielniak, for their invaluable guidance, support, and
encouragement throughout this project. Their expertise and
insights have been instrumental in shaping my understanding
and approach to robotics.
REFERENCES