

Real-Time Color-Based Sorting Robotic Arm System

Yonghui Jia, Guojun Yang and Jafar Saniie
Department of Electrical and Computer Engineering
Illinois Institute of Technology, Chicago, Illinois, USA

Abstract— Sorting is a labor-intensive process. With machines that can recognize objects, it is possible to automate the sorting process. In this paper, we present a robotic sorting arm based on a color recognition technique. In this system, when a new frame is captured by the camera, objects are detected using a color-based image processing technique. The position of an object in the real world is calculated from its mass center in the image. Using Inverse Kinematics algorithms, the control input for the robotic arm is calculated and then sent to an Arduino microcontroller. The microcontroller then drives the motors on the robotic arm to sort and position the objects according to their color. Using the proposed technique, the sorting robotic arm system can distinguish and sort different objects successfully with properly tuned parameters for both the machine vision and the 3D mobility of the robotic arm.

Figure 1. Sorting robotic arm system layout: 1. Camera; 2. Robotic Arm; 3. Rusty Bolt Basket; 4. Rusty Nut Basket; 5. Silver Bolt Basket; 6. Silver Nut Basket; 7. Objects

I. INTRODUCTION

In recent years, robotic automation has become extremely important for industrial environments, since it improves quality while reducing the time spent to accomplish a given task, all with minimal human intervention. People have long been seeking means of developing intelligent machines for a variety of useful applications, and tremendous advances have already been achieved in the field of robotics [1]. Workers usually operate machines so that they can perform a certain task, and the operating process requires robust, functional algorithms to deliver a desirable outcome efficiently. The main objective of the sorting machine is to control the robotic arm to perform exact movements for picking up objects of arbitrary color from the panel and putting them into their assigned sorting boxes.

In this paper, we present a new method to sort objects automatically using a single-camera image processing technique and an Inverse Kinematics algorithm [2] [3]. Color-based detection, segmentation, and locating of objects captured by the camera in the 2D plane are used to generate commands, sent over a serial connection to a robotic arm controller, for 3D picking and repositioning of the objects.

II. HARDWARE SYSTEM ARCHITECTURE

The hardware system layout, including the robotic arm, is shown in Figure 1. The arm is a Phantom X Reactor driven by Dynamixel AX-12A servos, shown in Figure 2; its base rotation and vertical reach are illustrated in Figure 3.

Figure 2. Phantom X Reactor robotic arm (left) and Dynamixel AX-12A (right)

Figure 3. Base rotation (left) and vertical reach (right)

The robot controller is an ArbotiX-M microcontroller, as shown in Figure 4. The controller has two serial ports and 28 digital I/O pins, and it is programmable with the Arduino IDE software.




Figure 4. ArbotiX-M microcontroller: 1. Digital I/O; 2. Power Socket; 3. Pins; 4. Serial Ports

To build the system, the following steps were taken:

• Detecting and recognizing objects using input from the camera.
• Estimating the locations of objects in 3D coordinates from their locations in the images.
• Solving the inverse kinematics problem using the Denavit-Hartenberg matrix.
• Developing the software for the Arduino microcontroller.

The work flow of the entire system is shown in Figure 5.

Figure 5. Sorting Robotic Arm Work Flow

III. IMAGE PROCESSING AND OBJECT DETECTION

Machine vision is achieved using images captured by an external webcam mounted on the frame holding the manipulator. These images are sent to the computer for image processing. We are using only one RGB camera. This camera is calibrated for the distortion introduced by the lens using a camera calibration function implemented in OpenCV (the Open Source Computer Vision library). The position of objects in the image plane can then be converted into a position in 3D coordinates more accurately. This process is very important since it allows the system to calculate the position of objects using the image obtained by the camera.

The process of camera calibration requires capturing several pictures of a printed black-and-white chessboard pattern at random positions while recording the size of the squares. The program converts the pixel coordinates of the camera to millimeters by comparing the real size of the pattern with its pixel size. The algorithm then outputs a calibration matrix and a matrix with the camera's intrinsic and extrinsic parameters. These data are used to convert pixel coordinates to 3D coordinates using equation (1):

z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}    (1)

In this equation, the first vector contains the coordinates in pixel coordinates, where z represents the z axis in homogeneous coordinates. The 3-by-3 matrix holds the intrinsic camera parameters, where f_x and f_y are the camera focal lengths and c_x and c_y denote the optical center in pixel coordinates. The second vector denotes the corresponding position in real-world coordinates. Using images from the calibrated camera, the objects can be captured without distortion.
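As a concrete illustration of the calibration step described above, the C++ sketch below uses OpenCV's chessboard calibration API. The pattern size, square size, and image file names are assumptions for illustration; the paper does not specify them.

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    const cv::Size patternSize(9, 6);   // inner corners of the printed chessboard (assumed)
    const float squareSizeMm = 25.0f;   // measured square size in millimeters (assumed)

    // Real-world corner positions on the board plane (Z = 0), in millimeters.
    std::vector<cv::Point3f> boardCorners;
    for (int r = 0; r < patternSize.height; ++r)
        for (int c = 0; c < patternSize.width; ++c)
            boardCorners.emplace_back(c * squareSizeMm, r * squareSizeMm, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // Several views of the board at random positions, e.g. board0.jpg ... board9.jpg.
    for (int i = 0; i < 10; ++i) {
        cv::Mat img = cv::imread("board" + std::to_string(i) + ".jpg", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, patternSize, corners)) {
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(boardCorners);
        }
    }

    // calibrateCamera outputs the intrinsic matrix of equation (1), the lens
    // distortion coefficients, and the extrinsics (rvecs, tvecs) per view.
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);

    // Later frames can then be undistorted before object detection:
    // cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);
    return 0;
}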
All images captured with the RGB camera are converted to grayscale, and these grayscale images are further converted to binary images. Once converted, the binary images are processed with the morphological operations of dilation and erosion, as shown in Figure 6 and Figure 7.

Figure 6. Dilation on a binary image



Figure 7. Erosion on a binary image

Dilation gradually enlarges the boundaries of regions of foreground pixels (typically white pixels), making the foreground regions grow while holes within those regions become smaller. Erosion erodes away the boundaries of the foreground (white) pixels, shrinking objects and enlarging holes within those areas. The combination of these operations allows us to remove most unwanted noise in the background, as well as to isolate objects from each other. The result of these morphological operations can be seen in Figure 8.

Figure 8. Morphological Operation Results: (a) original image; (b) after morphological operations
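The grayscale-to-binary conversion and the clean-up chain described above can be written with OpenCV as in the short C++ sketch below; the threshold value and kernel size are illustrative assumptions, not values from the paper.

#include <opencv2/opencv.hpp>

// Minimal sketch of the grayscale -> binary -> erosion/dilation chain.
// The threshold (128) and the 5x5 kernel are illustrative placeholders.
cv::Mat cleanBinaryMask(const cv::Mat& bgrFrame) {
    cv::Mat gray, binary;
    cv::cvtColor(bgrFrame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    // Erosion removes small background noise; dilation restores the
    // object boundaries and closes small holes inside them.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::erode(binary, binary, kernel);
    cv::dilate(binary, binary, kernel);
    return binary;
}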
Arduino is responsible for the commands that directly control
The next step is assessing the circularity of each object to differentiate between the objects, which in this case are bolts and nuts, and then to differentiate the silver objects from the yellow objects. To perform color detection, the image is converted into the HSV color space. HSV stands for hue, saturation, and value. Hue represents the shade of the color; saturation describes the intensity of the color; and value (or brightness) describes how dark the color is, often referred to as luminance. A representation of these parameters can be seen in Figure 9. HSV separates the image intensity from the color information. When using a color-based method, HSV allows objects to be detected with less error. The result of this color detection method can be seen in Figure 10 [4].

Figure 9. HSV color space representation

After combining these image processing operations, two sets of images are obtained: one carrying the information of whether each object is a bolt, a nut, or something else; the other containing the objects with their color data. All this information is combined to successfully separate all the objects [5]. The results are shown in Figure 10.

Figure 10. Image after color detection: (a) silver objects; (b) yellow objects
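The color segmentation and the circularity test might be sketched with OpenCV as below. The HSV bounds are assumed values that must be tuned to the actual lighting, and the circularity measure 4πA/P² is a standard shape descriptor rather than the paper's stated criterion.

#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative HSV segmentation: one mask per target color.
void detectByColor(const cv::Mat& bgrFrame, cv::Mat& yellowMask, cv::Mat& silverMask) {
    cv::Mat hsv;
    cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);
    // Yellow: mid hue with strong saturation (bounds are assumed values).
    cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), yellowMask);
    // Silver: any hue, low saturation, high value (bounds are assumed values).
    cv::inRange(hsv, cv::Scalar(0, 0, 160), cv::Scalar(179, 40, 255), silverMask);
}

// Circularity 4*pi*area/perimeter^2 equals 1 for a perfect circle, so a
// nut's outline scores much higher than an elongated bolt's.
double circularity(const std::vector<cv::Point>& contour) {
    double area = cv::contourArea(contour);
    double perimeter = cv::arcLength(contour, true);
    return perimeter > 0.0 ? 4.0 * CV_PI * area / (perimeter * perimeter) : 0.0;
}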

IV. CONTROL OF ROBOTIC ARM

All information obtained through the image processing is sent to the Arduino via a serial port. To be more specific, the information about each object (centroid coordinates, object orientation, and object type) is sent through the serial port, received by the Arduino, and used to control the robot.

The angle of each joint is estimated by Inverse Kinematics: the given information is the object's centroid, which is converted to robot joint angles. With the joint angles provided, the robotic arm can reach and pick the target object successfully. In our system, the Arduino receives the centroids and handles all the kinematics operations. After picking up an object, the coordinates of the proper box are sent to the robotic arm, which drops the object into the box and then returns to its initial gesture. The detecting and sorting processes are performed repeatedly until there is no object left within the reach of the robotic arm [6].

The robot arm takes its control inputs from an Arduino, which is responsible for the commands that directly control the motors and for the Inverse Kinematics algorithms [7]. The Inverse Kinematics algorithm determines the angles the joints of the robotic arm need to turn to so that the arm reaches a particular 3D position. The Denavit-Hartenberg [8] matrix made it possible to solve this task, and it is an ideal approach when working with the kinematics of a robotic arm (see Figure 11).

Figure 11. Robotic arm simplified [6] [9] [10]

The Denavit-Hartenberg parameters are obtained by evaluating the following transformation matrices using the parameters shown in Table I [8]:

TABLE I. DENAVIT-HARTENBERG PARAMETERS

i    α (deg)    a     d     θ
1    90         0     L1    θ1
2    0          Lh    0     θ2
3    0          L3    0     θ3
4    0          L4    0     θ4
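Each row (α_i, a_i, d_i, θ_i) of Table I is substituted into the standard Denavit-Hartenberg link transformation. This general form is a textbook result rather than one stated in the paper, but it is what generates equations (2) through (5) below:

T_i = \begin{bmatrix}
\cos\theta_i & -\sin\theta_i \cos\alpha_i & \sin\theta_i \sin\alpha_i & a_i \cos\theta_i \\
\sin\theta_i & \cos\theta_i \cos\alpha_i & -\cos\theta_i \sin\alpha_i & a_i \sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}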

T_1 = \begin{bmatrix} \cos\theta_1 & 0 & \sin\theta_1 & 0 \\ \sin\theta_1 & 0 & -\cos\theta_1 & 0 \\ 0 & 1 & 0 & L_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (2)

T_2 = \begin{bmatrix} \cos\theta_2 & -\sin\theta_2 & 0 & L_h \cos\theta_2 \\ \sin\theta_2 & \cos\theta_2 & 0 & L_h \sin\theta_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (3)

T_3 = \begin{bmatrix} \cos\theta_3 & -\sin\theta_3 & 0 & L_3 \cos\theta_3 \\ \sin\theta_3 & \cos\theta_3 & 0 & L_3 \sin\theta_3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (4)

T_4 = \begin{bmatrix} \cos\theta_4 & -\sin\theta_4 & 0 & L_4 \cos\theta_4 \\ \sin\theta_4 & \cos\theta_4 & 0 & L_4 \sin\theta_4 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (5)

Using T_1 through T_4, it is possible to derive the following three equations, which give the physical Cartesian coordinates (x, y, z) of the position the robotic arm will reach based on the joint angles given as inputs:

x = \cos(\theta_1) (L_3 \cos(\theta_2 + \theta_3) + L_h \cos(\theta_2) + L_4 \cos(\theta_2 + \theta_3 + \theta_4))    (6)

y = \sin(\theta_1) (L_3 \cos(\theta_2 + \theta_3) + L_h \cos(\theta_2) + L_4 \cos(\theta_2 + \theta_3 + \theta_4))    (7)

z = L_1 + L_3 \sin(\theta_2 + \theta_3) + L_h \sin(\theta_2) + L_4 \sin(\theta_2 + \theta_3 + \theta_4)    (8)

To grip objects in any given orientation (see Figure 12), the wrist rotation angle is computed by the software from the wrist output values. Based on the geometrical analysis, the following equation was developed:
\theta_{wrist} = \theta_{object} + \theta_1 - 90^\circ    (9)
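Equations (6) through (8) are straightforward to evaluate; the C++ sketch below computes the gripper position from the four joint angles. The link lengths are placeholder values, since the paper does not list numeric dimensions.

#include <cmath>
#include <cstdio>

// Link lengths in millimeters (placeholder values, not from the paper).
const double L1 = 140.0, Lh = 150.0, L3 = 150.0, L4 = 90.0;
const double PI = 3.14159265358979;

// Forward kinematics per equations (6)-(8): joint angles in radians in,
// Cartesian gripper position (x, y, z) out.
void forwardKinematics(double t1, double t2, double t3, double t4,
                       double &x, double &y, double &z) {
    // Horizontal reach shared by equations (6) and (7).
    double r = L3 * std::cos(t2 + t3) + Lh * std::cos(t2)
             + L4 * std::cos(t2 + t3 + t4);
    x = std::cos(t1) * r;                               // equation (6)
    y = std::sin(t1) * r;                               // equation (7)
    z = L1 + L3 * std::sin(t2 + t3) + Lh * std::sin(t2)
           + L4 * std::sin(t2 + t3 + t4);               // equation (8)
}

int main() {
    double x, y, z;
    forwardKinematics(0.0, PI / 4, -PI / 6, -PI / 12, x, y, z);
    std::printf("gripper position: (%.1f, %.1f, %.1f) mm\n", x, y, z);
    return 0;
}

The inverse kinematics routine on the Arduino solves these same equations in the opposite direction, from a target (x, y, z) back to joint angles.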
Figure 12. Angles for wrist calculation

Serial communication is set up between the computer, which runs the image processing algorithms, and the Arduino, which controls the robotic arm. The image processing part sends the coordinates, orientation, and type of each object to the robot controller, which transforms this information into joint movements for picking up the desired object and placing it into the appropriate basket.
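The paper does not specify the serial message format, so the Arduino-side sketch below assumes a hypothetical comma-separated frame, "x,y,theta,type\n", purely to illustrate how such a link can be parsed on the controller.

// Hypothetical frame: "x,y,theta,type\n", e.g. "182.5,-40.2,31.0,2".
// Parses one object description per line from the host computer.
void setup() {
    Serial.begin(9600);          // must match the host computer's baud rate
}

void loop() {
    if (Serial.available() > 0) {
        String line = Serial.readStringUntil('\n');
        float x = line.substring(0, line.indexOf(',')).toFloat();
        line = line.substring(line.indexOf(',') + 1);
        float y = line.substring(0, line.indexOf(',')).toFloat();
        line = line.substring(line.indexOf(',') + 1);
        float theta = line.substring(0, line.indexOf(',')).toFloat();
        int type = line.substring(line.indexOf(',') + 1).toInt();
        // ...solve inverse kinematics for (x, y), set the wrist angle from
        // theta, and choose the destination basket from the object type.
    }
}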
V. SOFTWARE IMPLEMENTATION

To conduct the image processing in real time, the algorithms need to be implemented on the host computer. OpenCV is a library that includes a great number of computer vision, image processing, and general-purpose numerical algorithms for a large range of computer vision problems. The majority of the algorithms implemented in OpenCV are more efficient than their Matlab counterparts. OpenCV also has C/C++, Python, Java and other interfaces for the most popular programming languages. In this system, C++ is used for the image processing and for the serial communication with the Arduino, since C++ is a general-purpose, high-level programming language widely used for computer vision problems.

The control input of the arm is generated by the Inverse Kinematics algorithm implemented on the Arduino controller. Arduino is an open-source single-board microcontroller platform whose software comprises a standard programming-language compiler and an IDE; a bootloader loads the compiled code into the chip over the serial port through USB. With a PCA9685 board, the Arduino controller can serve as an instrument for controlling all the servo motors on the arm simultaneously.

After receiving the control inputs from the Arduino controller, PWM (pulse-width modulation) signals are generated by the ArbotiX-M microcontroller. These signals are used to control each motor. A servo motor provides precise positioning of the joint to which it is attached at a specific angle. Servo motors do not rotate continually; their rotation is restricted to fixed angular ranges, usually 0 to 180 degrees, though some can turn up to 360 degrees. A servo motor usually has three inputs: +5 V power, ground, and PWM. By changing the PWM signal, the servo motor and, consequently, the arm can perform the different movements needed to sort the objects.

VI. RESULT ANALYSIS

Some of the results of the robot's pickup and delivery process can be seen in Figures 13 and 14. Every object was picked up and delivered properly. The system can distinguish undesired objects and remove them from the table, then separate the bolts and nuts correctly and place them in the corresponding bins. Finally, the system can also distinguish overlapped bolts of different colors and separate them by shifting or pushing them, so they can be picked up in later iterations. In the future, we will transfer the whole system from the PC to a Raspberry Pi.

Figure 13. Picking up a silver bolt and delivering it to its correct box

Figure 14. Picking up a silver nut and delivering it to its correct box

The performance of the system can be affected by a number of factors, such as the illumination of the environment and reflections on the surfaces of objects. With the system's parameters tuned, it can adapt to different environments. Through an iterative process, all objects can be delivered properly, regardless of their positions, colors, and orientations.

VII. CONCLUSION

By combining image processing algorithms with an Inverse Kinematics algorithm, we developed a robotic arm that can sort objects according to their shape and color. For future work, the image processing will be implemented on an ARM-based Raspberry Pi, which will reduce the size of the system while increasing its power efficiency. Installing light sensors can reduce interference from the environment, and using a more sophisticated neural network can help the system differentiate a larger variety of objects.

REFERENCES

[1] N. Rai, B. Rai and P. Rai, "Computer vision approach for controlling educational robotic arm based on object properties," in Emerging Technology Trends in Electronics, Communication and Networking (ET2ECN), 2014 2nd International Conference on, 2014.
[2] R. Mussabayev, "Colour-based object detection, inverse kinematics algorithms and pinhole camera model for controlling robotic arm movement system," in Electronics Computer and Computation (ICECCO), 2015 Twelfth International Conference on, 2015.
[3] P. S. Lengare and M. E. Rane, "Human hand tracking using MATLAB to control Arduino based robotic arm," in Pervasive Computing (ICPC), 2015 International Conference on, 2015.
[4] S. D. Gajbhiye and P. P. Gundewar, "A real-time color-based object tracking and occlusion handling using ARM Cortex-A7," in India Conference (INDICON), 2015 Annual IEEE, 2015.
[5] Q. Ji and W. Qi, "A color management system for textile based on HSV model and Bayesian classifier," in Control, Automation, Robotics and Vision (ICARCV), 2008 10th International Conference on, 2008.
[6] R. Szabó and A. Gontean, "Controlling a robotic arm in the 3D space with stereo vision," in Telecommunications Forum (TELFOR), 2013 21st, 2013.
[7] C.-L. Hwang and J.-Y. Huang, "Neural-network-based 3-D localization and inverse kinematics for target grasping of a humanoid robot by an active stereo vision system," in Neural Networks (IJCNN), The 2012 International Joint Conference on, 2012.
[8] R. Alqasemi and R. Dubey, "Kinematics, control and redundancy resolution of a 9-DoF wheelchair-mounted robotic arm system for ADL tasks," in Mechatronics and its Applications (ISMA'09), 6th International Symposium on, 2009.
[9] R. Szabó and G. Gontean, "Robotic arm control with stereo vision made in LabWindows/CVI," in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, 2015.
[10] R. Szabó, A. Gontean and A. Sfiraţ, "Robotic arm control in space with color recognition using a Raspberry Pi," in Telecommunications and Signal Processing (TSP), 2016 39th International Conference on, 2016.
