Fruit Packaging

Abdul Md Mazid
Pavel Dzitac
School of Engineering and Built Environment
Central Queensland University
Rockhampton, Australia
E-mail: [email protected]
private Context context;
private ScriptNode scriptNode;
private DepthGenerator depth;

A Context is defined as a workspace where the application builds its OpenNI production graph and holds the information regarding the state of the application. The production graph is a map of all the nodes used in the application.

To use OpenNI we constructed and initialized a Context. The ScriptNode allows OpenNI to create nodes and manages all nodes that it has created by using scripts. The DepthGenerator is a node that generates the depth map from the raw depth point cloud.

The following C# code initializes the Context workspace using an XML file and the ScriptNode. It also initializes the DepthGenerator node.

context = Context.CreateFromXmlFile(SAMPLE_XML_FILE, out scriptNode);
depth = context.FindExistingNode(NodeType.Depth) as DepthGenerator;
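The listing above does not include error checking or the start of data generation. A minimal follow-on sketch, assuming the same context and depth fields and the StartGeneratingAll method of the OpenNI .NET wrapper, could look like this:

//Verify that the XML file actually defined a depth node
if (depth == null)
{
    throw new Exception("No depth node found in the OpenNI production graph");
}
//Start all generators so that WaitOneUpdateAll can deliver depth frames
context.StartGeneratingAll();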
The following C# code shows the required declarations for AForge.

private BlobCounter blobCounter = new BlobCounter();
private Blob[] blobs;

The BlobCounter is an AForge class that processes a 2D image, typically formatted as a 24bpp RGB image, finds “blobs” in the image and computes their location, area and other parameters. A blob is an island of pixels in the image. AForge thresholds the image to generate a black and white image.

For the purpose of this project the depth map provided by the depth sensor is converted to a 2D image after the vertical Z-distance to the object is obtained. The Z-distance of interest in this project is the Z-distance from the sensor to the top of the object. This distance is found by applying a distance threshold to the depth map data and then using the AForge image processing library functions to find the distance at which the objects are detected. The distance threshold is sequentially increased from the minimum distance of 800mm to a desired maximum distance, which is sufficient to detect all objects of interest.

When the first object is detected its Z-distance is recorded. From this first Z-distance the threshold is increased by a further distance, say 30mm. The Z-distance of each of the objects found in the 30mm scanned region is recorded into an array. The AForge library is used at the same time to find the XY location of each of the detected objects and to provide other information such as the object’s projected area and orientation. Once this information is available the robot control application computes the control logic and determines where to direct the end-effectors of the pick-and-place robot.
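For illustration, one way to turn a detected object's pixel position and Z-distance into metric co-ordinates for the robot is sketched below using the OpenNI .NET wrapper; the ConvertProjectiveToRealWorld call is the wrapper's projective-to-world conversion, and objX, objY and objZ are assumed variables holding the object's pixel position and its Z-distance in millimetres.

//Convert a detected object's pixel position and Z-distance (in mm)
//into real-world coordinates relative to the sensor
Point3D[] projective = new Point3D[] { new Point3D(objX, objY, objZ) };
Point3D[] realWorld = depth.ConvertProjectiveToRealWorld(projective);
//realWorld[0].X, realWorld[0].Y and realWorld[0].Z (in mm) can then be
//transformed into the robot base frame and sent to the robot controller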
The following C# code shows the depth data reading sequence. The first statement creates an instance of a DepthMetaData. The depth metadata contains the actual distance data (depth map) after correction factors were applied to the raw depth data. Next OpenNI is instructed to wait for an update from the depth node. When the update is received OpenNI is instructed to get the metadata and store it in the depthMD variable of type DepthMetaData.

//Create a DepthMetaData instance
DepthMetaData depthMD = new DepthMetaData();
//Read next available raw depth data
context.WaitOneUpdateAll(depth);
//Get depth metadata from raw depth data
depthMD = depth.GetMetaData();

Once the metadata (depth map) is available it can be extracted from the depthMD structure and processed as desired. We use the AForge library to find the objects (blobs) and the Z-distance to each object.
The first step is to convert the depth information to a bitmap image so it can be processed by AForge. This is done by applying a threshold to the distance data and converting it to a binary image. The converted bitmap image is then passed to AForge for processing and blob finding as shown in the C# code below.
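For illustration, a minimal sketch of this thresholding step is given below; the depthMD indexer (assumed to return the depth in millimetres), the 800mm lower limit and the per-pixel SetPixel loop are illustrative assumptions rather than the application's actual code, and a production version would write the pixel buffer directly for speed.

//Build a black and white bitmap: pixels nearer than the current search
//distance become white, everything else stays black
System.Drawing.Bitmap image = new System.Drawing.Bitmap(depthMD.XRes, depthMD.YRes,
    System.Drawing.Imaging.PixelFormat.Format24bppRgb);
for (int y = 0; y < depthMD.YRes; y++)
{
    for (int x = 0; x < depthMD.XRes; x++)
    {
        ushort d = depthMD[x, y];
        //keep only points between the minimum distance and the current threshold
        if (d > 800 && d < z_distance)
        {
            image.SetPixel(x, y, System.Drawing.Color.White);
        }
    }
}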
//process the 2D image
blobCounter.ProcessImage(AForge.Imaging.Image.Clone(image, System.Drawing.Imaging.PixelFormat.Format24bppRgb));
The blob information is then retrieved from AForge as an array of blobs, as shown in the C# code below. In this case the maximum number of detected blobs is determined by the robot control application, which can be designed to find objects within a specified depth search range (but within the sensor working range).

//Get info for each blob found
blobs = blobCounter.GetObjectsInformation();

Each element of the retrieved array of blobs holds the information about each blob in a structure and can be queried to retrieve the required information as shown in the code below.
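As an illustration, a query of this kind could look like the following sketch, using the CenterOfGravity, Area and Rectangle properties of the AForge Blob class (the listing is a sketch, not the application's actual code).

//Query the first detected blob for its position and size
if (blobs.Length > 0)
{
    var cog = blobs[0].CenterOfGravity;  //centre of gravity (XY position) in pixels
    int area = blobs[0].Area;            //projected area in pixels
    var box = blobs[0].Rectangle;        //bounding box of the blob
}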
The following C# code searches for the first blob (top-most object) by incrementing the blob search distance (z_distance) until the blob is found or the maximum search distance is reached.

//Search for blob
if(blobCnt < 1 && z_distance < 830)
{
    //increment z distance until next blob found
    z_distance += 2;
}
else if(blobCnt > 0)
{
    //when blob found record its Z distance
    blobZ[0] = z_distance;
    z_distance = 0;
}
else
{
    //no blob found if search range exceeded
    no_blobs = true;
    z_distance = 0;
}

The above code can be modified to find all objects that are in the selected search range. This can be done by searching for objects until the specified maximum search distance is reached, and not limiting the search by the number of objects found. When the objects are found, their XYZ location information is sent to the robot controller. The robot is then commanded to direct the gripper to the given location, pick up the detected object and put it in a new desired location, such as in a packaging box or a quality inspection machine.
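For illustration, a minimal sketch of such a modified scan is shown below; maxSearchDistance and the BuildThresholdImage helper (standing for the depth-to-bitmap conversion described earlier) are assumptions, not part of the original application.

//Scan the whole search range and record the Z-distance of every object found
System.Collections.Generic.List<int> objectZ = new System.Collections.Generic.List<int>();
int previousCount = 0;
for (int z = 802; z <= maxSearchDistance; z += 2)
{
    //rebuild the thresholded bitmap at the current search distance
    System.Drawing.Bitmap image = BuildThresholdImage(depthMD, z);
    blobCounter.ProcessImage(AForge.Imaging.Image.Clone(image,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb));
    blobs = blobCounter.GetObjectsInformation();
    //each time the blob count grows, a new object surface has been reached at distance z
    for (int i = previousCount; i < blobs.Length; i++)
    {
        objectZ.Add(z);
    }
    previousCount = blobs.Length;
}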
IV. EXPERIMENT FOR PERFORMANCE TESTING

The blob detection application was tested using a single layer of several oranges as objects on a horizontal plane, as shown in Figure 5.

The Xtion sensor was located about 860mm above the horizontal plane. The aim of the test was to determine whether the search algorithm would detect the objects and their XYZ location reliably.

Figure 5. Experimental setup for object detection

The robot control application detected the objects as shown in Figure 6 (the front row of oranges in Figure 5 is the top row of blobs in Figure 6). Out of fifteen oranges thirteen were detected as blobs, one was ignored as too small by the filter setting in the application and one was not detected at all because it was too low (outside the preset detection range).

The results on the right are for the first blob in the array of detected blobs. The Centre of Gravity is the XY position of the detected object. The Z position (not shown in the picture) was calculated separately during the vertical search for each blob. The two small size blobs on the two edges of the image are the two legs of the sensor support. When detecting objects in a container the edges of the depth map can be cropped to ignore the sides of the container during object detection. Filters would also reject most of the artifacts in the image.

An explanation of how the sensors measure depth is given by Nate Lowry [9].

V. CONCLUSION

A flexible and inexpensive object detection and localization method for pick-and-place robots that can be developed with little effort has been presented.
Although the Xtion and Kinect depth sensors were intended for gaming applications, they can be used for indoor robotics to provide useful depth information in many applications. Depth sensors provide robots with a flexible and powerful means of locating objects, such as boxes, without the need to hardcode the exact co-ordinates of the box in the robot program. This technology can be used for sorting and packaging of various fruit and vegetables such as oranges, apples, pineapples and grapefruit.