Real-Time Color-Based Sorting
I. INTRODUCTION
In recent years, robotic automation has become extremely important in industrial environments: it improves quality while reducing the time needed to accomplish a given task, all with minimal human intervention. People have long sought to develop intelligent machines for a variety of useful applications, and tremendous advances have already been made in the field of robotics [1]. Workers usually operate machines to perform certain tasks, and the operating process requires robust, functional algorithms to deliver a desirable outcome efficiently. The main objective of the sorting machine is to control the robotic arm to perform precise movements, picking up objects of arbitrary color from the panel and placing them into their assigned sorting boxes.

Figure 2. Phantom X Reactor robotic arm (left) and Dynamixel AX-12A (right)
In this paper, we present a new method to sort objects automatically using a single-camera image processing technique and an Inverse Kinematics algorithm [2][3]. Color-based detection, segmentation, and localization of the objects captured by the camera in the 2D plane are used to generate commands, sent through a serial link to a robotic arm controller, for 3D picking and repositioning of the objects.

II. HARDWARE SYSTEM ARCHITECTURE

The hardware system layout, including the robotic arm, is shown in Figure 1.

Figure 3. Base rotation (left) and vertical reach (right)

The robot controller is an ArbotiX-M microcontroller, as shown in Figure 4. The controller has two serial ports and 28 digital I/O pins, and it is programmable through the Arduino IDE.
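Section IV describes sending each detected object's centroid, orientation, and type to this controller over the serial link. As a purely illustrative sketch, such messages could be framed as newline-terminated ASCII records; the field layout and the type codes below are our assumptions, not the paper's actual protocol:

```python
# Hypothetical serial message format: "cx,cy,angle,type_code\n".
# Field order and the type-code table are illustrative assumptions only.

def encode_object(cx: int, cy: int, angle_deg: float, obj_type: str) -> bytes:
    """Frame one detected object's data as a newline-terminated ASCII record."""
    type_codes = {"silver_nut": 0, "silver_bolt": 1,
                  "yellow_nut": 2, "yellow_bolt": 3}  # hypothetical codes
    return f"{cx},{cy},{angle_deg:.1f},{type_codes[obj_type]}\n".encode("ascii")

def decode_object(record: bytes):
    """Parse a record back into its fields (mirrors what the receiver would do)."""
    cx, cy, angle, code = record.decode("ascii").strip().split(",")
    return int(cx), int(cy), float(angle), int(code)

msg = encode_object(320, 240, 45.0, "silver_nut")
assert decode_object(msg) == (320, 240, 45.0, 0)
```

A plain-text framing like this keeps the Arduino-side parser trivial, at the cost of a few extra bytes per message compared with a binary format.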
Figure 7. Erosion on a binary image

Figure 10. Image after color detection: (a) silver objects; (b) yellow objects
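The erosion shown in Figure 7 and its dual, dilation, can be sketched on a small binary grid without any image library. This minimal sketch assumes a 3x3 square structuring element (the actual kernel used in the system is not specified):

```python
# Binary erosion and dilation on a plain list-of-lists "image"
# (1 = foreground/white, 0 = background), 3x3 square structuring element.

def _neighborhood(img, r, c):
    """All pixel values in the (clipped) 3x3 window centered on (r, c)."""
    h, w = len(img), len(img[0])
    return [img[i][j]
            for i in range(max(r - 1, 0), min(r + 2, h))
            for j in range(max(c - 1, 0), min(c + 2, w))]

def erode(img):
    """A pixel survives only if its whole 3x3 neighborhood is foreground."""
    return [[1 if all(_neighborhood(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def dilate(img):
    """A pixel becomes foreground if any pixel in its neighborhood is."""
    return [[1 if any(_neighborhood(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

noisy = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 1]]  # lone corner pixel = speckle noise

opened = dilate(erode(noisy))  # erosion-then-dilation ("opening")
```

Eroding first deletes the isolated speckle, and dilating afterwards restores the main blob to roughly its original size, which is exactly the noise-removal behavior described in the text.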
Dilation gradually enlarges the boundaries of regions of foreground pixels (typically white pixels), making the foreground regions grow while holes within them become smaller. Erosion, conversely, erodes away the boundaries of the foreground (white) pixels, shrinking objects and enlarging holes within those areas. The combination of these two operations allows us to remove most unwanted noise from the background, as well as to isolate objects from each other. The result of these morphological operations can be seen in Figure 8.

Figure 8. Morphological operation results: (a) original image; (b) after morphological operations

The next step is to assess the circularity of each object in order to differentiate between the object types, in this case bolts and nuts, and then to distinguish the silver objects from the yellow objects. To perform color detection, the image is converted into the HSV color space. HSV stands for hue, saturation, and value. Hue represents the shade of the color; saturation describes the intensity of the color; and value (or brightness) describes how light or dark the color is, often referred to as luminance. A representation of these parameters can be seen in Figure 9. HSV separates the image intensity from the color information, so a color-based method operating in HSV can detect objects with less error. The result of this color detection method can be seen in Figure 10 [4].

IV. CONTROL OF ROBOTIC ARM

All information obtained through image processing is sent to the Arduino via a serial port. More specifically, the information about each object (centroid coordinates, orientation, and object type) is sent through the serial port, received by the Arduino, and used to control the robot.

The angle of each joint is estimated by Inverse Kinematics: given an object's centroid, its position is converted into robot joint angles. With the joint angles provided, the robotic arm can reach and pick up the target object. In our system, the Arduino receives the centroids and handles all the kinematics operations. After an object is picked up, the coordinates of the proper box are sent to the robotic arm, which drops the object into the box and then returns to its initial posture. The detection and sorting processes are repeated until no objects remain within the reach of the robotic arm [6].

The robot arm takes its control inputs from the Arduino, which is responsible both for the commands that directly drive the motors and for the Inverse Kinematics algorithms [7]. The Inverse Kinematics algorithm determines the angles through which the joints of the robotic arm must turn for the arm to reach a particular 3D position. Using the Denavit-Hartenberg [8] matrix to solve this task is an ideal approach when working with the kinematics of a robotic arm (see Figure 11).
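The essence of such an inverse-kinematics computation can be illustrated on a simplified planar two-link arm solved with the law of cosines. The real arm has more joints and uses Denavit-Hartenberg parameters; the link lengths below are arbitrary assumptions for the sketch:

```python
# Inverse kinematics sketch for a planar two-link arm (law of cosines).
# L1, L2 are assumed link lengths, not the actual arm's dimensions.
import math

L1, L2 = 10.0, 8.0

def ik_2link(x: float, y: float):
    """Return (shoulder, elbow) angles in radians reaching point (x, y)."""
    d2 = x * x + y * y
    if d2 > (L1 + L2) ** 2:
        raise ValueError("target out of reach")
    cos_elbow = (d2 - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))  # one of two solutions
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

def fk_2link(shoulder: float, elbow: float):
    """Forward kinematics, used here to verify the IK solution."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return x, y

s, e = ik_2link(12.0, 5.0)
# fk_2link(s, e) should recover (approximately) the target (12.0, 5.0)
```

Running the joint angles back through the forward kinematics is a cheap sanity check that also generalizes to the full multi-joint arm.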
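The HSV-based color classification from Section III can likewise be sketched with only the standard library. The threshold ranges below are illustrative guesses, not the tuned values used in the actual system:

```python
# HSV color classification sketch. colorsys returns H, S, V in [0, 1];
# all threshold ranges below are illustrative assumptions.
import colorsys

def classify_pixel(r: int, g: int, b: int) -> str:
    """Label an RGB pixel (0-255 per channel) as yellow, silver, or background."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if 0.10 <= h <= 0.20 and s > 0.4 and v > 0.4:
        return "yellow"      # hue near yellow, reasonably saturated and bright
    if s < 0.15 and v > 0.55:
        return "silver"      # low saturation but bright: metallic grey
    return "background"

assert classify_pixel(220, 190, 40) == "yellow"   # saturated yellowish pixel
assert classify_pixel(180, 180, 185) == "silver"  # bright low-saturation grey
```

Because silver is distinguished mainly by low saturation rather than by hue, thresholding in HSV separates it from yellow far more reliably than thresholding raw RGB channels would.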
Figure 14. Picking up a silver nut and delivering it to its correct box

The performance of the system can be affected by several factors, such as the illumination of the environment and reflections on the surfaces of the objects. With the system's parameters tuned, it can adapt to different environments. Through an iterative process, all objects can be delivered properly, regardless of their positions, colors, and orientations.

VII. CONCLUSION

By combining image processing algorithms with an Inverse Kinematics algorithm, we developed a robotic arm that can sort objects according to their shape and color. In future work, the image processing will be implemented on an ARM-based Raspberry Pi, which will reduce the size of the system while improving its power efficiency. Installing light sensors can reduce interference from the environment, and using a more sophisticated neural network can help the system differentiate a larger variety of objects.

REFERENCES
[1] N. Rai, B. Rai and P. Rai, "Computer vision approach for controlling educational robotic arm based on object properties," in Emerging Technology Trends in Electronics, Communication and Networking (ET2ECN), 2014 2nd International Conference on, 2014.
[2] R. Mussabayev, "Colour-based object detection, inverse kinematics algorithms and pinhole camera model for controlling robotic arm movement system," in Electronics Computer and Computation (ICECCO), 2015 Twelfth International Conference on, 2015.
[3] P. S. Lengare and M. E. Rane, "Human hand tracking using MATLAB to control Arduino based robotic arm," in Pervasive Computing (ICPC), 2015 International Conference on, 2015.
[4] S. D. Gajbhiye and P. P. Gundewar, "A real-time color-based object tracking and occlusion handling using ARM cortex-A7," in India Conference (INDICON), 2015 Annual IEEE, 2015.
[5] Q. Ji and W. Qi, "A color management system for textile based on HSV model and bayesian classifier," in Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on, 2008.
[6] R. Szabó and A. Gontean, "Controlling a robotic arm in the 3D space with stereo vision," in Telecommunications Forum (TELFOR), 2013 21st, 2013.
[7] C.-L. Hwang and J.-Y. Huang, "Neural-network-based 3-D localization and inverse kinematics for target grasping of a humanoid robot by an active stereo vision system," in Neural Networks (IJCNN), The 2012 International Joint Conference on, 2012.
[8] R. Alqasemi and R. Dubey, "Kinematics, control and redundancy resolution of a 9-DoF wheelchair-mounted robotic arm system for ADL tasks," in Mechatronics and its Applications, 2009. ISMA'09. 6th International Symposium on, 2009.
[9] R. Szabó and G. Gontean, "Robotic arm control with stereo vision made in LabWindows/CVI," in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, 2015.
[10] R. Szabó, A. Gontean and A. Sfiraţ, "Robotic arm control in space with color recognition using a Raspberry PI," in Telecommunications and Signal Processing (TSP), 2016 39th International Conference on, 2016.