
HALCON Application Note

How to Use Shape-Based Matching to


Find and Localize Objects

Provided Functionality

⊲ Finding objects based on a single model image


⊲ Localizing objects with subpixel accuracy

Typical Applications

⊲ Object recognition and localization


⊲ Intermediate machine vision steps, e.g., alignment of ROIs
⊲ Completeness check
⊲ Parts inspection

Involved Operators

create_shape_model, create_scaled_shape_model
inspect_shape_model, get_shape_model_params
set_shape_model_origin, get_shape_model_origin
find_shape_model, find_shape_models
find_scaled_shape_model, find_scaled_shape_models
write_shape_model, read_shape_model
clear_shape_model, clear_all_shape_models

Copyright © 2002-2005 by MVTec Software GmbH, München, Germany
Overview

HALCON’s operators for shape-based matching enable you to find and localize objects based
on a single model image, i.e., from a model. This method is robust to noise, clutter, occlusion,
and arbitrary non-linear illumination changes. Objects are localized with subpixel accuracy in
2D and are found even if they are rotated or scaled.
The process of shape-based matching (see section 1 for a quick overview) is divided into two
distinct phases: In a first phase, you specify and create the model. This model can be stored in
a file to be reused in different applications. Detailed information about this phase can be found
in section 2. In the second phase, the model is used to find and localize an object. Section 3
describes how to optimize the outcome of this phase by restricting the search space.
Shape-based matching is a powerful tool for various machine vision tasks, ranging from inter-
mediate image processing, e.g., to place ROIs automatically or to align them to a moving part,
to complex tasks, e.g., recognize and localize a part in a robot vision application. Examples can
be found in section 4.
Unless specified otherwise, the example programs can be found in the subdirectory
shape matching of the directory %HALCONROOT%\examples\application guide.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise,
without prior written permission of the publisher.

Edition 1 June 2002 (HALCON 6.1)


Edition 1a May 2003 (HALCON 6.1.2)

Microsoft, Windows, Windows NT, Windows 2000, and Windows XP are either trademarks or registered
trademarks of Microsoft Corporation.
All other nationally and internationally recognized trademarks and tradenames are hereby recognized.

More information about HALCON can be found at:

http://www.mvtec.com/halcon/

Contents

1 A First Example
2 Creating a Suitable Model
  2.1 A Closer Look at the Region of Interest
  2.2 Which Information is Stored in the Model?
  2.3 Synthetic Model Images
3 Optimizing the Search Process
  3.1 Restricting the Search Space
  3.2 Searching for Multiple Instances of the Object
  3.3 Searching for Multiple Models Simultaneously
  3.4 A Closer Look at the Accuracy
  3.5 How to Optimize the Matching Speed
4 Using the Results of Matching
  4.1 Introducing Affine Transformations
  4.2 Creating and Applying Affine Transformations With HALCON
  4.3 Using the Estimated Position and Orientation
  4.4 Using the Estimated Scale
5 Miscellaneous
  5.1 Adapting to a Changed Camera Orientation
  5.2 Reusing Models


1 A First Example

In this section we give a quick overview of the matching process. To follow the example ac-
tively, start the HDevelop program hdevelop\first_example_shape_matching.dev, which
locates the print on an IC; the steps described below start after the initialization of the applica-
tion (press F5 once to reach this point).
Step 1: Select the object in the model image
Row1 := 188
Column1 := 182
Row2 := 298
Column2 := 412
gen_rectangle1 (ROI, Row1, Column1, Row2, Column2)
reduce_domain (ModelImage, ROI, ImageROI)

After grabbing the so-called model image, i.e., a representative image of the object to find,
the first task is to create a region containing the object. In the example program, a rectangular
region is created using the operator gen_rectangle1; alternatively, you can draw the region
interactively using, e.g., draw_rectangle1 or use a region that results from a previous
segmentation process. Then, an image containing just the selected region is created using the
operator reduce_domain. The result is shown in figure 1.
Step 2: Create the model

inspect_shape_model (ImageROI, ShapeModelImages, ShapeModelRegions, 8, 30)


create_shape_model (ImageROI, NumLevels, 0, rad(360), 0, ’none’,
’use_polarity’, 30, 10, ModelID)

With the operator create_shape_model, the so-called model is created, i.e., the internal data
structure describing the searched object. Before this, we recommend applying the operator
inspect_shape_model, which helps you to find suitable parameters for the model creation.
inspect_shape_model shows the effect of two parameters: the number of pyramid levels on
which the model is created, and the minimum contrast that object points must have to be
included in the model. As a result, the operator inspect_shape_model returns the model points
on the selected pyramid levels as shown in figure 1; thus, you can check whether the model
contains the relevant information to describe the object of interest.

Figure 1: (1) specifying the object; (2) the internal model (4 pyramid levels).

Figure 2: Finding the object in other images.
When actually creating the model with the operator create_shape_model, you can specify
additional parameters besides NumLevels and Contrast: First of all, you can restrict the range
of angles the object can assume (parameters AngleStart and AngleExtent) and the angle
steps at which the model is created (AngleStep). With the help of the parameter Optimization
you can reduce the number of model points; this is useful in the case of very large models. The
parameter Metric lets you specify whether the polarity of the model points must be observed.
Finally, you can specify the minimum contrast object points must have in the search images to
be compared with the model (MinContrast). The creation of the model is described in detail
in section 2.
As a result, the operator create_shape_model returns a handle for the newly created model
(ModelID), which can then be used to specify the model, e.g., in calls to the operator
find_shape_model. Note that if you use HALCON’s COM or C++ interface and call the
operator via the classes HShapeModelX or HShapeModel, no handle is returned because the
instance of the class itself acts as your handle.
If not only the orientation but also the scale of the searched object is allowed to vary, you must
use the operator create_scaled_shape_model to create the model; then, you can describe the
allowed range of scaling with three parameters similar to the range of angles.
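For illustration, here is a minimal sketch of such a call; the scale range 0.9-1.1 and the other
parameter values are assumptions chosen for illustration, not taken from the example program:

* sketch: allow scales between 0.9 and 1.1 (assumed values)
create_scaled_shape_model (ImageROI, 0, 0, rad(360), 0, 0.9, 1.1, 0, 'none',
                           'use_polarity', 30, 10, ModelID)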
Step 3: Find the object again
for i := 1 to 20 by 1
grab_image (SearchImage, FGHandle)
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.8, 1, 0.5,
’interpolation’, 0, 0.9, RowCheck, ColumnCheck,
AngleCheck, Score)
endfor

To find the object again in a search image, all you need to do is call the operator
find_shape_model; figure 2 shows the result for one of the example images. Besides the


already mentioned ModelID, find_shape_model provides further parameters to optimize the
search process: The parameters AngleStart, AngleExtent, and NumLevels, which you al-
ready specified when creating the model, allow you to use more restrictive values in the search
process; by using the value 0 for NumLevels, the value specified when creating the model is
used. With the parameter MinScore you can specify how many of the model points must be
found; a value of 0.5 means that half of the model must be found. Furthermore, you can specify
how many instances of the object are expected in the image (NumMatches) and how much two
instances of the object may overlap in the image (MaxOverlap). To compute the position of the
found object with subpixel accuracy the parameter SubPixel should be set to a value different
from ’none’. Finally, the parameter Greediness describes the used search heuristics, rang-
ing from “safe but slow” (value 0) to “fast but unsafe” (value 1). How to optimize the search
process is described in detail in section 3.
The operator find_shape_model returns the position and orientation of the found object
instances in the parameters Row, Column, and Angle, and their corresponding Score, i.e., how
much of the model was found.
If you use the operator find_scaled_shape_model (after creating the model using
create_scaled_shape_model), the scale of the found object is returned in the parameter
Scale.
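As an illustration, a minimal sketch of such a search; the scale range and the remaining
parameter values are assumptions chosen analogously to the example above:

* sketch: search with varying scale (assumed parameter values)
find_scaled_shape_model (SearchImage, ModelID, 0, rad(360), 0.9, 1.1, 0.8, 1,
                         0.5, 'interpolation', 0, 0.9, Row, Column, Angle,
                         Scale, Score)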

2 Creating a Suitable Model

A prerequisite for a successful matching process is, of course, a suitable model for the object
you want to find. A model is suitable if it describes the significant parts of the object, i.e.,
those parts that characterize it and allow it to be discriminated clearly from other objects or from the
background. On the other hand, the model should not contain clutter, i.e., points not belonging
to the object (see, e.g., figure 4).

2.1 A Closer Look at the Region of Interest

When creating the model, the first step is to select a region of interest (ROI), i.e., the part of the
image which serves as the model. In HALCON, a region defines an area in an image or, more
generally, a set of points. A region can have an arbitrary shape; its points do not even need to
be connected. Thus, the region of the model can have an arbitrary shape as well.
The sections below describe how to create simple and more complex regions. The following
code fragment shows the typical next steps after creating an ROI:

reduce_domain (ModelImage, ROI, ImageROI)


create_shape_model (ImageROI, 0, 0, rad(360), 0, ’none’, ’use_polarity’,
30, 10, ModelID)

Note that the region of interest used when creating a shape model influences the matching
results: Its center of gravity is used as the reference point of the model (see section 4 for more
information).


Figure 3: Creating an ROI from two regions.

2.1.1 How to Create a Region

HALCON offers multiple operators to create regions, ranging from standard shapes
like rectangles (gen_rectangle2) or ellipses (gen_ellipse) to free-form shapes (e.g.,
gen_region_polygon_filled). These operators can be found in the HDevelop menu
Operators ⊲ Regions ⊲ Creation.
However, to use these operators you need the “parameters” of the shape you want to create,
e.g., the position, size, and orientation of a rectangle or the position and radius of a circle.
Therefore, they are typically combined with the operators in the HDevelop menu Operators ⊲
Graphics ⊲ Drawing, which let you draw a shape on the displayed image and then return the
shape parameters:

draw_rectangle1 (WindowHandle, ROIRow1, ROIColumn1, ROIRow2, ROIColumn2)


gen_rectangle1 (ROI, ROIRow1, ROIColumn1, ROIRow2, ROIColumn2)

2.1.2 How to Combine and Mask Regions

You can create more complex regions by adding or subtracting standard regions using the op-
erators union2 and difference. For example, to create an ROI containing the square and the
cross in figure 3, the following code fragment was used:

draw_rectangle1 (WindowHandle, ROI1Row1, ROI1Column1, ROI1Row2,
                 ROI1Column2)
gen_rectangle1 (ROI1, ROI1Row1, ROI1Column1, ROI1Row2, ROI1Column2)
draw_rectangle1 (WindowHandle, ROI2Row1, ROI2Column1, ROI2Row2,
                 ROI2Column2)
gen_rectangle1 (ROI2, ROI2Row1, ROI2Column1, ROI2Row2, ROI2Column2)
union2 (ROI1, ROI2, ROI)

Similarly, you can subtract regions using the operator difference. This method is useful to
“mask” those parts of a region containing clutter, i.e., high-contrast points that are not part of
the object. In figure 4, e.g., the task is to find the three capacitors. When using a single circular
ROI, the created model contains many clutter points, which are caused by reflections on the


Figure 4: Masking the part of a region containing clutter (model for full-circle ROI vs. model
for ring-shaped ROI).

metallic surface. Thus, the other two capacitors are not found. The solution to this problem is
to use a ring-shaped ROI, which can be created by the following lines of code:

draw_circle (WindowHandle, ROI1Row, ROI1Column, ROI1Radius)


gen_circle (ROI1, ROI1Row, ROI1Column, ROI1Radius)
gen_circle (ROI2, ROI1Row, ROI1Column, ROI1Radius-8)
difference (ROI1, ROI2, ROI)

Note that the ROI should not be too “thin”, otherwise it vanishes at higher pyramid levels! As a
rule of thumb, an ROI should be 2^(NumLevels−1) pixels wide; in the example, the width of 8
pixels therefore allows the use of 4 pyramid levels.
For this task even better results can be obtained by using a synthetic model image. This is
described in section 2.3.



Figure 5: Using image processing to create an ROI: a) extract bright regions; b) select the card;
c) the logo forms the ROI; d) result of the matching.

2.1.3 Using Image Processing to Create and Modify Regions

In the previous sections, regions were created explicitly by specifying their shape parameters.
Especially for complex ROIs this method can be inconvenient and time-consuming. In the
following, we therefore show you how to extract and modify regions using image processing
operators.

Example 1: Determining the ROI Using Blob Analysis


To follow the example actively, start the HDevelop program
hdevelop\create_roi_via_vision.dev, which locates the MVTec logo on a pendu-
lum (see figure 5); we start after the initialization of the application (press F5 once). The
main idea is to “zoom in” on the desired region in multiple steps: First, find the bright region
corresponding to the card, then extract the dark characters on it.
Step 1: Extract the bright regions

threshold (ModelImage, BrightRegions, 200, 255)


connection (BrightRegions, ConnectedRegions)
fill_up (ConnectedRegions, FilledRegions)

First, all bright regions are extracted using a simple thresholding operation (threshold); the
operator connection forms connected components. The extracted regions are then filled up


via fill_up; thus, the region corresponding to the card also encompasses the dark characters
(see figure 5a).
Step 2: Select the region of the card

select_shape (FilledRegions, Card, ’area’, ’and’, 1800, 1900)

The region corresponding to the card can be selected from the list of regions with the operator
select_shape. In HDevelop, you can determine suitable features and values using the dialog
Visualization ⊲ Region Info; just click into a region, and the dialog immediately displays
its feature values. Figure 5b shows the result of the operator.
Step 3: Use the card as an ROI for the next steps

reduce_domain (ModelImage, Card, ImageCard)

Now, we can restrict the next image processing steps to the region of the card using the operator
reduce_domain. This iterative focusing has an important advantage: In the restricted region of
the card, the logo characters are much easier to extract than in the full image.
Step 4: Extract the logo

threshold (ImageCard, DarkRegions, 0, 230)


connection (DarkRegions, ConnectedRegions)
select_shape (ConnectedRegions, Characters, ’area’, ’and’, 150, 450)
union1 (Characters, CharacterRegion)

The logo characters are extracted similarly to the card itself; as a last step, the separate character
regions are combined using the operator union1.
Step 5: Enlarge the region using morphology

dilation_circle (CharacterRegion, ROI, 1.5)


reduce_domain (ModelImage, ROI, ImageROI)
create_shape_model (ImageROI, 0, 0, rad(360), 0, ’none’, ’use_polarity’,
30, 10, ModelID)

Finally, the region corresponding to the logo is enlarged slightly using the operator
dilation_circle. Figure 5c shows the resulting ROI, which is then used to create the shape
model.

Example 2: Further Processing the Result of inspect_shape_model


You can also combine the interactive ROI specification with image processing. A useful
method in the presence of clutter in the model image is to create a first model region inter-
actively and then process this region to obtain an improved ROI. Figure 6 shows an exam-
ple; the task is to locate the arrows. To follow the example actively, start the HDevelop pro-
gram hdevelop\process_shape_model.dev; we start after the initialization of the application
(press F5 once).
Step 1: Select the arrow

gen_rectangle1 (ROI, 361, 131, 406, 171)

First, an initial ROI is created around the arrow, without trying to exclude clutter (see figure 6a).


Figure 6: Processing the result of inspect_shape_model: a) interactive ROI; b) models for
different values of Contrast (30, 90, 134); c) processed model region and corresponding ROI
and model; d) result of the search.

Step 2: Create a first model region

reduce_domain (ModelImage, ROI, ImageROI)


inspect_shape_model (ImageROI, ShapeModelImage, ShapeModelRegion, 1, 30)

Figure 6b shows the shape model regions that would be created for different values of the pa-
rameter Contrast. As you can see, you cannot remove the clutter without losing characteristic
points of the arrow itself.
Step 3: Process the model region
fill_up (ShapeModelRegion, FilledModelRegion)
opening_circle (FilledModelRegion, ROI, 3.5)

You can solve this problem by exploiting the fact that the operator inspect_shape_model
returns the shape model region; thus, you can process it like any other region. The main idea to


get rid of the clutter is to use the morphological operator opening_circle, which eliminates
small regions. Before this, the operator fill_up must be called to fill the inner part of the arrow,
because only the boundary points are part of the (original) model region. Figure 6c shows the
resulting region.
Step 4: Create the final model

reduce_domain (ModelImage, ROI, ImageROI)


create_shape_model (ImageROI, 3, 0, rad(360), 0, ’none’, ’use_polarity’,
30, 10, ModelID)

The processed region is then used to create the model; figure 6c shows the corresponding ROI
and the final model region. Now, all arrows are located successfully.

2.1.4 How the ROI Influences the Search

Note that the ROI used when creating the model also influences the results of the subsequent
matching: The center point of the ROI acts as the so-called point of reference of the model for
the estimated position, rotation, and scale. You can query the reference point using the operator
get_shape_model_origin and modify it using set_shape_model_origin; please refer to
sections 3.4 and 4.3 for additional information.
The point of reference also influences the search itself: An object is only found if the point
of reference lies within the image, or more exactly, within the domain of the image (see
also section 3.1.1). Please note that this test is always performed for the original point of
reference, i.e., the center point of the ROI, even if you modified the reference point using
set_shape_model_origin.
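As an illustration, a minimal sketch of querying and modifying the reference point; the offset
of -20 rows is a hypothetical value, and the origin is specified relative to the original reference
point:

* query the current origin (relative to the center of the model ROI)
get_shape_model_origin (ModelID, OriginRow, OriginColumn)
* move the origin, e.g., 20 rows upwards (hypothetical value)
set_shape_model_origin (ModelID, -20, 0)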

2.2 Which Information is Stored in the Model?

As the name shape-based pattern matching suggests, objects are represented and recognized by
their shape. There exist multiple ways to determine or describe the shape of an object. Here,
the shape is extracted by selecting all those points whose contrast exceeds a certain threshold;
typically, the points correspond to the contours of the object (see, e.g., figure 1). Section 2.2.1
takes a closer look at the corresponding parameters.
To speed up the matching process, a so-called image pyramid is created, consisting of the origi-
nal, full-sized image and a set of downsampled images. The model is then created and searched
on the different pyramid levels (see section 2.2.2 for details).
If the object is allowed to appear rotated or scaled, the corresponding information is used al-
ready when creating the model. This also speeds up the matching process, at the cost of higher
memory requirements for the created model. Section 2.2.3 and section 2.2.4 describe the corre-
sponding parameters.
In the following, all parameters belong to the operator create_shape_model if not stated
otherwise.



Figure 7: Selecting significant pixels via Contrast: a) complete object but with clutter; b) no
clutter but incomplete object; c) hysteresis threshold; d) minimum contour size.

2.2.1 Which Pixels are Part of the Model?

For the model those pixels are selected whose contrast, i.e., gray value difference to
neighboring pixels, exceeds a threshold specified by the parameter Contrast when calling
create_shape_model. In order to obtain a suitable model, the contrast should be chosen in
such a way that the significant pixels of the object are included, i.e., those pixels that
characterize it and allow it to be discriminated clearly from other objects or from the
background. Obviously,
the model should not contain clutter, i.e., pixels that do not belong to the object.
In some cases it is impossible to find a single value for Contrast that removes the clutter but
not also parts of the object. Figure 7 shows an example; the task is to create a model for the outer
rim of a drill-hole: If the complete rim is selected, the model also contains clutter (figure 7a); if
the clutter is removed, parts of the rim are missing (figure 7b).
To solve such problems, the parameter Contrast provides two additional methods: hysteresis
thresholding and selection of contour parts based on their size. Both methods are used by
specifying a tuple of values for Contrast instead of a single value.
Hysteresis thresholding (see also the operator hysteresis_threshold) uses two thresholds, a
lower and an upper threshold. For the model, first pixels that have a contrast higher than the
upper threshold are selected; then, pixels that have a contrast higher than the lower threshold
and that are connected to a high-contrast pixel, either directly or via another pixel with contrast
above the lower threshold, are added. This method enables you to select contour parts whose
contrast varies from pixel to pixel. Returning to the example of the drill-hole: As you can see
in figure 7c, with a hysteresis threshold you can create a model for the complete rim without
clutter. The following line of code shows how to specify the two thresholds in a tuple:
inspect_shape_model (ImageROI, ModelImages, ModelRegions, 1, [26,52])


The second method to remove clutter is to specify a minimum size, i.e., number of pixels, for the
contour components. Figure 7d shows the result for the example task. The minimum size must
be specified in the third element of the tuple; if you don’t want to use a hysteresis threshold, set
the first two elements to the same value:
inspect_shape_model (ImageROI, ModelImages, ModelRegions, 1, [26,26,12])

Alternative methods to remove clutter are to modify the ROI as described in section 2.1 or create
a synthetic model (see section 2.3).

2.2.2 How Subsampling is Used to Speed Up the Search

To speed up the matching process, a so-called image pyramid is created, both for the model
image and for the search images. The pyramid consists of the original, full-sized image and a
set of downsampled images. For example, if the original image (first pyramid level) is of
the size 600x400, the second level image is of the size 300x200, the third level 150x100, and
so on. The object is then searched first on the highest pyramid level, i.e., in the smallest image.
The results of this fast search are then used to limit the search in the next pyramid image, whose
results are used on the next lower level until the lowest level is reached. Using this iterative
method, the search is both fast and accurate. Figure 8 depicts 4 levels of an example image
pyramid together with the corresponding model regions.
You can specify how many pyramid levels are used via the parameter NumLevels. We
recommend choosing the highest pyramid level at which the model contains at least 10-15
pixels and in which the shape of the model still resembles the shape of the object. You can
inspect the model image pyramid using the operator inspect_shape_model, e.g., as shown in
the HDevelop program hdevelop\first_example_shape_matching.dev:

inspect_shape_model (ImageROI, ShapeModelImages, ShapeModelRegions, 8, 30)


area_center (ShapeModelRegions, AreaModelRegions, RowModelRegions,
ColumnModelRegions)
HeightPyramid := |ShapeModelRegions|
for i := 1 to HeightPyramid by 1
if (AreaModelRegions[i-1] >= 15)
NumLevels := i
endif
endfor
create_shape_model (ImageROI, NumLevels, 0, rad(360), 0, ’none’,
’use_polarity’, 30, 10, ModelID)

After the call to the operator, the model regions on the selected pyramid levels are displayed in
HDevelop’s Graphics Window; you can have a closer look at them using the online zooming
(menu entry Visualization ⊲ Online Zooming). The code lines following the operator call
loop through the pyramid and determine the highest level on which the model contains at least
15 points. This value is then used in the call to the operator create_shape_model.
A much easier method is to let HALCON select a suitable value itself by specifying the value 0
for NumLevels. You can then query the used value via the operator get_shape_model_params.
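A minimal sketch of this method; the parameter values besides NumLevels = 0 are taken from
the first example, and the output parameters of get_shape_model_params are assumed to
follow the HALCON 6.1 signature:

* let HALCON choose the number of pyramid levels (NumLevels = 0) ...
create_shape_model (ImageROI, 0, 0, rad(360), 0, 'none', 'use_polarity',
                    30, 10, ModelID)
* ... and query the value that was actually used (assumed signature)
get_shape_model_params (ModelID, NumLevels, AngleStart, AngleExtent,
                        AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric,
                        MinContrast)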
The operator inspect_shape_model returns the pyramid images in form of an image tuple
(array); the individual images can be accessed like the model regions with the operator
select_obj. Please note that object tuples start with the index 1, whereas control parameter
tuples start with the index 0!

Figure 8: The image and the model region at four pyramid levels (original size and zoomed to
equal size).


You can enforce a further reduction of model points via the parameter Optimization. This


may be useful to speed up the matching in the case of particularly large models. Please note
that regardless of your selection all points passing the contrast criterion are displayed, i.e., you
cannot check which points are part of the model.

2.2.3 Allowing a Range of Orientation

If the object’s rotation may vary in the search images you can specify the allowed range in
the parameter AngleExtent and the starting angle of this range in the parameter AngleStart
(unit: rad). Note that the range of rotation is defined relative to the model image, i.e., a starting
angle of 0 corresponds to the orientation the object has in the model image. Therefore, to allow
rotations up to ±5°, e.g., you should set the starting angle to -rad(5) and the angle extent to
rad(10).
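A minimal sketch of such a call; all values except the angle range are taken from the first
example:

* allow rotations of +/- 5 degrees around the model orientation
create_shape_model (ImageROI, 0, -rad(5), rad(10), 0, 'none', 'use_polarity',
                    30, 10, ModelID)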
We recommend limiting the allowed range of rotation as much as possible in order to speed up
the search process and to minimize the required memory. Note that you can further limit the
allowed range when calling the operator find_shape_model (see section 3.1.2). If you want to
reuse a model for different tasks requiring a different range of angles and if memory is not an
issue, you can therefore use a large range when creating the model and a smaller range for the
search.
If the object is (almost) symmetric you should limit the allowed range. Otherwise, the search
process will find multiple, almost equally good matches on the same object at different angles;
which match (at which angle) is returned as the best can therefore “jump” from image to image.
The suitable range of rotation depends on the symmetry: For a cross-shaped or square object the
allowed extent must be less than 90°, for a rectangular object less than 180°, and for a circular
object 0°.
To speed up the matching process, the model is precomputed for different angles within the
allowed range, at steps specified with the parameter AngleStep. If you select the value 0,
HALCON automatically chooses an optimal step size φ_opt to obtain the highest possible
accuracy by determining the smallest rotation that is still discernible in the image. The
underlying algorithm is explained in figure 9: The rotated version of the cross-shaped object is
clearly discernible from the original if the point that lies farthest from the center of the object is
moved by at least 2 pixels. Therefore, the corresponding angle φ_opt is calculated as follows:
d² = l² + l² − 2 · l · l · cos(φ_opt)  ⇒  φ_opt = arccos(1 − d²/(2 · l²)) = arccos(1 − 2/l²)

with l being the maximum distance between the center and the object boundary and d = 2
pixels.


Figure 9: Determining the minimum angle step size from the extent of the model.


The automatically determined angle step size φ_opt is suitable for most applications; therefore,
we recommend selecting the value 0. You can query the used value after the creation via the
operator get_shape_model_params. By selecting a higher value you can speed up the search
process, however, at the cost of a decreased accuracy of the estimated orientation. Note that for
very high values the matching may fail altogether!
The value chosen for AngleStep should not deviate too much from the optimal value
(1/3 · φ_opt ≤ φ ≤ 3 · φ_opt). Note that choosing a very small step size does not result in an
increased angle accuracy!

2.2.4 Allowing a Range of Scale

Similarly to the range of orientation, you can specify an allowed range of scale with the
parameters ScaleMin, ScaleMax, and ScaleStep of the operator create_scaled_shape_model.
Again, we recommend limiting the allowed range of scale as much as possible in order to speed
up the search process and to minimize the required memory. Note that you can further limit the
allowed range when calling the operator find_scaled_shape_model (see section 3.1.2).
Note that if you are searching for the object on a large range of scales you should create the
model based on a large scale because HALCON cannot “guess” model points when precom-
puting model instances at scales larger than the original one. On the other hand, NumLevels
should be chosen such that the highest level contains enough model points also for the smallest
scale.
If you select the value 0 for the parameter ScaleStep, HALCON automatically chooses a
suitable step size to obtain the highest possible accuracy by determining the smallest scale
change that is still discernible in the image. Similarly to the angle step size (see figure 9), a
scaled object is clearly discernible from the original if the point that lies farthest from the center
of the object is moved by at least 2 pixels. Therefore, the corresponding scale change ∆s_opt is
calculated as follows:
∆s = d/l  ⇒  ∆s_opt = 2/l
with l being the maximum distance between the center and the object boundary and d = 2
pixels.
The automatically determined scale step size is suitable for most applications; therefore, we
recommend selecting the value 0. You can query the used value after the creation via the
operator get_shape_model_params. By selecting a higher value you can speed up the search
process, however, at the cost of a decreased accuracy of the estimated scale. Note that for very
high values the matching may fail altogether!
The value chosen for ScaleStep should not deviate too much from the optimal value
(1/3 · ∆s_opt ≤ ∆s ≤ 3 · ∆s_opt). Note that choosing a very small step size does not result in an
increased scale accuracy!

2.2.5 Which Pixels are Compared with the Model?

For efficiency reasons the model contains information that influences the search process: With
the parameter MinContrast you can specify which contrast a point in a search image must at


least have in order to be compared with the model. The main use of this parameter is to exclude
noise, i.e., gray value fluctuations, from the matching process. You can determine the noise by
examining the gray values with the HDevelop dialog Visualization ⊲ Pixel Info; then, set
the minimum contrast to a value larger than the noise.
The parameter Metric lets you specify whether the polarity, i.e., the direction of the contrast,
must be observed. If you choose the value ’use_polarity’ the polarity is observed, i.e., the
points in the search image must show the same direction of the contrast as the corresponding
points in the model. If, for example, the model is a bright object on a dark background, the
object is found in the search images only if it is also brighter than the background.
You can choose to ignore the polarity globally by selecting the value
’ignore_global_polarity’. In this mode, an object is recognized also if the direction
of its contrast reverses, e.g., if your object can appear both as a dark shape on a light
background and vice versa. This flexibility, however, is obtained at the cost of a slightly lower
recognition speed.
If you select the value ’ignore_local_polarity’, the object is found even if the contrast
changes locally. This mode can be useful, e.g., if the object consists of a part with a medium
gray value, within which either darker or brighter sub-objects lie. Please note, however, that
the recognition speed may decrease dramatically in this mode, especially if you allowed a large
range of rotation (see section 2.2.3).
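As an illustration, a model that tolerates a global polarity reversal could be created as follows;
this is a sketch, with all other parameter values taken from the first example:

* sketch: tolerate a global reversal of the contrast polarity
create_shape_model (ImageROI, 0, 0, rad(360), 0, 'none',
                    'ignore_global_polarity', 30, 10, ModelID)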

2.3 Synthetic Model Images


Depending on the application it may be difficult to create a suitable model because there is no
“good” model image containing a perfect, easy to extract instance of the object. An example of
such a case was already shown in section 2.1.2: The task of locating the capacitors seems to be
simple at first, as they are prominent bright circles on a dark background. But because of the
clutter inside and outside the circle even the model resulting from the ring-shaped ROI is faulty:
Besides containing clutter points, parts of the circle are also missing.
In such cases, it may be better to use a synthetic model image. How to create such an image
to locate the capacitors is explained below. To follow the example actively, start the HDevelop
program hdevelop\synthetic_circle.dev; we start after the initialization of the application
(press F5 once).
Step 1: Create an XLD contour
RadiusCircle := 43
SizeSynthImage := 2*RadiusCircle + 10
gen_ellipse_contour_xld (Circle, SizeSynthImage / 2, SizeSynthImage / 2, 0,
RadiusCircle, RadiusCircle, 0, 6.28318,
’positive’, 1.5)

First, we create a circular region using the operator gen_ellipse_contour_xld (see
figure 10a). You can determine a suitable radius by inspecting the image with the HDevelop
dialog Visualization ⊲ Online Zooming. Note that the synthetic image should be larger
than the region because pixels around the region are used when creating the image pyramid.
Step 2: Create an image and insert the XLD contour
gen_image_const (EmptyImage, ’byte’, SizeSynthImage, SizeSynthImage)
paint_xld (Circle, EmptyImage, SyntheticModelImage, 128)



Figure 10: Locating the capacitors using a synthetic model: a) paint region into synthetic image;
b) corresponding model; c) result of the search.

Then, we create an empty image using the operator gen_image_const and insert the XLD
contour with the operator paint_xld. In figure 10a the resulting image is depicted.
Step 3: Create the model

create_scaled_shape_model (SyntheticModelImage, 0, 0, 0, 0.01, 0.8, 1.2, 0,
                           'none', 'use_polarity', 30, 10, ModelID)

Now, the model is created from the synthetic image. Figure 10b shows the corresponding
model region, figure 10c the search results.
Note how the image itself, i.e., its domain, acts as the ROI in this example.


3 Optimizing the Search Process

The actual matching is performed by the operators find_shape_model,
find_scaled_shape_model, find_shape_models, or find_scaled_shape_models. In
the following, we show how to select suitable parameters for these operators to adapt and
optimize the search for your matching task.

3.1 Restricting the Search Space

An important concept in the context of finding objects is that of the so-called search space.
Quite literally, this term specifies where to search for the object. However, this space encom-
passes not only the 2 dimensions of the image, but also other parameters like the possible range
of scales and orientations or the question of how much of the object must be visible. The more
you can restrict the search space, the faster the search will be.

3.1.1 Searching in a Region of Interest

The obvious way to restrict the search space is to apply the operator find_shape_model to a
region of interest only instead of the whole image, as shown in figure 11. This can be realized
in a few lines of code:
Step 1: Create a region of interest
Row1 := 141
Column1 := 163
Row2 := 360
Column2 := 477
gen_rectangle1 (SearchROI, Row1, Column1, Row2, Column2)

Figure 11: Searching in a region of interest.


First, you create a region, e.g., with the operator gen_rectangle1 (see section 2.1.1 for more
ways to create regions).
Step 2: Restrict the search to the region of interest
for i := 1 to 20 by 1
grab_image (SearchImage, FGHandle)
reduce_domain (SearchImage, SearchROI, SearchImageROI)
find_shape_model (SearchImageROI, ModelID, 0, rad(360), 0.8, 1, 0.5,
’interpolation’, 0, 0.9, RowCheck, ColumnCheck,
AngleCheck, Score)
endfor
The region of interest is then applied to each search image using the operator reduce_domain.
In this example, the searching speed is almost doubled using this method.
Note that by restricting the search to a region of interest you actually restrict the position of the
point of reference of the model, i.e., the center of gravity of the model ROI (see section 2.1.4).
This means that the size of the search ROI corresponds to the extent of the allowed movement;
for example, if your object can move ± 10 pixels vertically and ± 15 pixels horizontally you can
restrict the search to an ROI of the size 20×30. In order to assure a correct boundary treatment
on higher pyramid levels, we recommend enlarging the ROI by 2^(NumLevels−1) pixels; to continue
the example, if you specified NumLevels = 4, you can restrict the search to an ROI of the size
36×46.
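A minimal sketch of this calculation; RefRow and RefColumn, the expected position of the
reference point, are hypothetical values:

* movement: +/- 10 pixels vertically, +/- 15 pixels horizontally; NumLevels = 4
* => enlarge the ROI by 2^(NumLevels-1) = 8 pixels on each side
Border := 8
gen_rectangle1 (SearchROI, RefRow - 10 - Border, RefColumn - 15 - Border,
                RefRow + 10 + Border, RefColumn + 15 + Border)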
Please note that even if you modify the point of reference using set_shape_model_origin, the
original one, i.e., the center point of the model ROI, is used during the search. Thus, you must
always specify the search ROI relative to the original reference point.

3.1.2 Restricting the Range of Orientation and Scale

When creating the model with the operator create_shape_model (or
create_scaled_shape_model), you already specified the allowed range of orientation
and scale (see section 2.2.3 and section 2.2.4). When calling the operator find_shape_model
(or find_scaled_shape_model) you can further limit these ranges with the parameters
AngleStart, AngleExtent, ScaleMin, and ScaleMax. This is useful if you can restrict
these ranges by other information, which can, e.g., be obtained by suitable image processing
operations.
Another reason for using a larger range when creating the model may be that you want to reuse
the model for other matching tasks.

3.1.3 Visibility

With the parameter MinScore you can specify how much of the object — more precisely: of
the model — must be visible. A typical use of this mechanism is to allow a certain degree of
occlusion as demonstrated in figure 12: The security ring is found if MinScore is set to 0.7.
Let’s take a closer look at the term “visibility”: When comparing a part of a search image with
the model, the matching process calculates the so-called score, which is a measure of how many
model points could be matched to points in the search image (ranging from 0 to 1). A model
point may be “invisible” and thus not matched because of multiple reasons:



Figure 12: Searching for partly occluded objects: a) model of the security ring; b) search result
for MinScore = 0.8; c) search result for MinScore = 0.7.

• Parts of the object’s contour are occluded, e.g., as in figure 12.


Please note that an object must not be clipped at the image border; this case is not
treated as an occlusion! More precisely, the smallest rectangle surrounding the model
must not be clipped.
• Parts of the contour have a contrast lower than specified in the parameter MinContrast
when creating the model (see section 2.2.5).
• The polarity of the contrast changes globally or locally (see section 2.2.5).
• If the object is deformed, parts of the contour may be visible but appear at an incorrect
position and therefore do not fit the model anymore. Note that this effect also occurs if the
camera observes the scene under an oblique angle; section 5.1 shows how to handle this
case.
Besides these obvious reasons, which have their root in the search image, there are some not so
obvious reasons caused by the matching process itself:
• As described in section 2.2.3, HALCON precomputes the model for intermediate angles
within the allowed range of orientation. During the search, a candidate match is then
compared to all precomputed model instances. If you select a value for the parameter
AngleStep that is significantly larger than the automatically selected minimum value, the
effect depicted in figure 13 can occur: If the object lies between two precomputed angles,
points lying far from the center are not matched to a model point, and therefore the score
decreases.
Of course, the same line of reasoning applies to the parameter ScaleStep (see
section 2.2.4).
• Another stumbling block lies in the use of an image pyramid which was introduced in
section 2.2.2: When comparing a candidate match with the model, the specified minimum
score must be reached on each pyramid level. However, on different levels the score may
vary, with only the score on the lowest level being returned in the parameter Score; this
sometimes leads to the apparently paradoxical situation that MinScore must be set
significantly lower than the resulting Score.
Recommendation: The higher MinScore, the faster the search!


Figure 13: The effect of a large AngleStep on the matching (AngleStep = 20 vs.
AngleStep = 30).

3.1.4 Thoroughness vs. Speed

With the parameter Greediness you can influence the search algorithm itself and thereby trade
thoroughness against speed. If you select the value 0, the search is thorough, i.e., if the object is
present (and within the allowed search space and reaching the minimum score), it will be found.
In this mode, however, even very unlikely match candidates are also examined thoroughly,
thereby slowing down the matching process considerably.
The main idea behind the “greedy” search algorithm is to break off the comparison of a candi-
date with the model when it seems unlikely that the minimum score will be reached. In other
words, the goal is not to waste time on hopeless candidates. This greediness, however, can have
unwelcome consequences: In some cases a perfectly visible object is not found because the
comparison “starts out on a wrong foot” and is therefore classified as a hopeless candidate and
broken off.
You can adjust the Greediness of the search, i.e., how early the comparison is broken off, by
selecting values between 0 (no break off: thorough but slow) and 1 (earliest break off: fast but
unsafe). Note that the parameters Greediness and MinScore interact, i.e., you may have to
specify a lower minimum score in order to use a greedier search. Generally, you can reach a
higher speed with a high greediness and a sufficiently lowered minimum score.

3.2 Searching for Multiple Instances of the Object

All you have to do to search for more than one instance of the object is to set the parameter
NumMatches accordingly. The operator find_shape_model (or find_scaled_shape_model)
then returns the matching results as tuples in the parameters Row, Column, Angle, Scale, and
Score. If you select the value 0, all matches are returned.
Note that a search for multiple objects is only slightly slower than a search for a single object.
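As an illustration, the following sketch returns all instances and loops over the results; the
parameter values are assumptions based on the first example:

* sketch: NumMatches = 0 returns all instances above MinScore
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.7, 0, 0.5,
                  'interpolation', 0, 0.9, Row, Column, Angle, Score)
for i := 0 to |Score| - 1 by 1
    * process match i, e.g., display it at (Row[i], Column[i])
endfor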
A second parameter, MaxOverlap, lets you specify how much two matches may overlap (as a
fraction). In figure 14b, e.g., the two security rings overlap by a factor of approximately 0.2.
In order to speed up the matching as far as possible, however, the overlap is calculated not for
the models themselves but for their smallest surrounding rectangle. This must be kept in mind
when specifying the maximum overlap; in most cases, therefore, a larger value is needed (e.g.,
compare figure 14b and figure 14d).



Figure 14: A closer look at overlapping matches: a) model of the security ring; b) model overlap;
c) smallest rectangle surrounding the model; d) rectangle overlap; e) pathological
case.

Figure 14e shows a “pathological” case: Even though the rings themselves do not overlap, their
surrounding rectangles do to a large degree. Unfortunately, this effect cannot be prevented.

3.3 Searching for Multiple Models Simultaneously

If you are searching for instances of multiple models in a single image, you can of course
call the operator find_shape_model (or find_scaled_shape_model) multiple times. A much
faster alternative is to use the operators find_shape_models or find_scaled_shape_models
instead. These operators expect similar parameters, with the following differences:
• With the parameter ModelIDs you can specify a tuple of model IDs instead of a single one.
As when searching for multiple instances (see section 3.2), the matching result parameters
Row etc. return tuples of values.
• The output parameter Model shows to which model each found instance belongs. Note
that the parameter does not return the model IDs themselves but the index of the model ID
in the tuple ModelIDs (starting with 0).
• The search is always performed in a single image. However, you can restrict the search to
a certain region for each model individually by passing an image tuple (see below for an
example).
• You can either use the same search parameters for each model by specifying single values
for AngleStart etc., or pass a tuple containing individual values for each model.
• You can also search for multiple instances of multiple models. If you search for a certain
number of objects independent of their type (model ID), specify this (single) value in the
parameter NumMatches. By passing a tuple of values, you can specify for each model
individually how many instances are to be found. In this tuple, you can mix concrete
values with the value 0; the tuple [3,0], e.g., specifies to return the best 3 instances of the
first model and all instances of the second model.
Similarly, if you specify a single value for MaxOverlap, the operators check whether a
found instance is overlapped by any of the other instances independent of their type. By
specifying a tuple of values, each instance is only checked against all other instances of
the same type.

Figure 15: Searching for multiple models: a) models of ring and nut; b) search ROIs for the two
models.
The example HDevelop program hdevelop\multiple_models.dev uses the operator
find_scaled_shape_models to search simultaneously for the rings and nuts depicted in
figure 15.
Step 1: Create the models

create_scaled_shape_model (ImageROIRing, 0, -rad(22.5), rad(45), 0, 0.8,
                           1.2, 0, 'none', 'use_polarity', 60, 10,
                           ModelIDRing)
create_scaled_shape_model (ImageROINut, 0, -rad(30), rad(60), 0, 0.6, 1.4,
                           0, 'none', 'use_polarity', 60, 10, ModelIDNut)
ModelIDs := [ModelIDRing, ModelIDNut]

First, two models are created, one for the rings and one for the nuts. The two model IDs are
then concatenated into a tuple using the operator assign.
Step 2: Specify individual search ROIs

gen_rectangle1 (SearchROIRing, 110, 10, 130, Width - 10)


gen_rectangle1 (SearchROINut, 315, 10, 335, Width - 10)
SearchROIs := [SearchROIRing,SearchROINut]
add_channels (SearchROIs, SearchImage, SearchImageReduced)


In the example, the rings and nuts appear in non-overlapping parts of the search image; there-
fore, it is possible to restrict the search space for each model individually. As explained in
section 3.1.1, a search ROI corresponds to the extent of the allowed movement; thus, narrow
horizontal ROIs can be used in the example (see figure 15b).
The two ROIs are concatenated into a region array (tuple) using the operator concat_obj and
then “added” to the search image using the operator add_channels. The result of this operator
is an array of two images, both having the same image matrix; the domain of the first image is
restricted to the first ROI, the domain of the second image to the second ROI.
Step 3: Find all instances of the two models

find_scaled_shape_models (SearchImageReduced, ModelIDs, [-rad(22.5),
                          -rad(30)], [rad(45), rad(60)], [0.8, 0.6], [1.2,
                          1.4], 0.8, 0, 0, 'interpolation', 0, 0.9,
                          RowCheck, ColumnCheck, AngleCheck, ScaleCheck,
                          Score, ModelIndex)

Now, the operator find_scaled_shape_models is applied to the created image array. Because
the two models allow different ranges of rotation and scaling, tuples are specified for
the corresponding parameters. In contrast, the other parameters are valid for both models.
Section 4.3.3 shows how to access the matching results; a first sketch is given below.
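A minimal sketch of such an access; the loop body is hypothetical, and the variable names are
those of the example program:

* distinguish the found instances by their model index (0 = ring, 1 = nut)
for i := 0 to |Score| - 1 by 1
    if (ModelIndex[i] = 0)
        * instance i is a ring, located at (RowCheck[i], ColumnCheck[i])
    else
        * instance i is a nut
    endif
endfor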

3.4 A Closer Look at the Accuracy

During the matching process, candidate matches are compared with instances of the model
at different positions, angles, and scales; for each instance, the resulting matching score is
calculated. If you set the parameter SubPixel to ’none’, the result parameters Row, Column,
Angle, and Scale contain the corresponding values of the best match. In this case, the accuracy
of the position is therefore 1 pixel, while the accuracy of the orientation and scale is equal to
the values selected for the parameters AngleStep and ScaleStep, respectively, when creating
the model (see section 2.2.3 and section 2.2.4).
If you set the parameter SubPixel to ’interpolation’, HALCON examines the matching
scores at the neighboring positions, angles, and scales around the best match and determines the
maximum by interpolation. Using this method, the position is therefore estimated with subpixel
accuracy (≈ 1/20 pixel in typical applications). The accuracy of the estimated orientation and
scale depends on the size of the object, like the optimal values for the parameters AngleStep
and ScaleStep (see section 2.2.3 and section 2.2.4): The larger the size, the more accurately
the orientation and scale can be determined. For example, if the maximum distance between the
center and the boundary is 100 pixels, the orientation is typically determined with an accuracy
of ≈ 1/10°.
Recommendation: Because the interpolation is very fast, you can set SubPixel to
’interpolation’ in most applications.
When you choose the values ’least_squares’, ’least_squares_high’, or
’least_squares_very_high’, a least-squares adjustment is used instead of an interpolation,
resulting in a higher accuracy. However, this method requires additional computation time.
Please note that the accuracy of the estimated position may decrease if you modify the
point of reference using set_shape_model_origin!

Figure 16: Effect of inaccuracy of the estimated orientation on a moved point of reference.

This effect is visualized in figure 16: As
you can see in the right-most column, an inaccuracy in the estimated orientation “moves” the
modified point of reference, while the original point of reference is not affected. The resulting
positional error depends on multiple factors, e.g., the offset of the reference point and the ori-
entation of the found object. The main point to keep in mind is that the error increases linearly
with the distance of the modified point of reference from the original one (compare the two
rows in figure 16).
An inaccuracy in the estimated scale also results in an error in the estimated position, which
again increases linearly with the distance between the modified and the original reference point.
For maximum accuracy in case the reference point is moved, the position should be determined
using the least-squares adjustment. Note that the accuracy of the estimated orientation and scale
is not influenced by modifying the reference point.
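As an illustration, a search with the least-squares adjustment differs from the first example only
in the value of SubPixel; the remaining parameter values are assumptions taken from that
example:

* sketch: least-squares adjustment for maximum positional accuracy
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.8, 1, 0.5,
                  'least_squares', 0, 0.9, Row, Column, Angle, Score)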


3.5 How to Optimize the Matching Speed

In the following, we show how to optimize the matching process in two steps. Please note
that in order to optimize the matching it is very important to have a set of representative test
images from your application in which the object appears in all allowed variations regarding
its position, orientation, occlusion, and illumination.
Step 1: Assure that all objects are found
Before tuning the parameters for speed, we recommend finding settings such that the matching
succeeds in all test images, i.e., that all object instances are found. If this is not the case when
using the default values, check whether one of the following situations applies:
? Is the object clipped at the image border?
Unfortunately, this failure cannot be prevented, i.e., you must assure that the object is not
clipped (see section 3.1.3).
? Is the search algorithm “too greedy”?
As described in section 3.1.4, in some cases a perfectly visible object is not found if the
Greediness is too high. Select the value 0 to force a thorough search.
? Is the object partly occluded?
If the object should be recognized in this state nevertheless, reduce the parameter
MinScore.
? Does the matching fail on the highest pyramid level?
As described in section 3.1.3, in some cases the minimum score is not reached on the
highest pyramid level even though the score on the lowest level is much higher. Test
this by reducing NumLevels in the call to find_shape_model. Alternatively, reduce the
MinScore.
? Does the object have a low contrast?
If the object should be recognized in this state nevertheless, reduce the parameter
MinContrast (operator create_shape_model!).
? Is the polarity of the contrast inverted globally or locally?
If the object should be recognized in this state nevertheless, use the appropriate value for
the parameter Metric when creating the model (see section 2.2.5). If only a small part of
the object is affected, it may be better to reduce the MinScore instead.
? Does the object overlap another instance of the object?
If the object should be recognized in this state nevertheless, increase the parameter
MaxOverlap (see section 3.2).
? Are multiple matches found on the same object?
If the object is almost symmetric, restrict the allowed range of rotation as described in
section 2.2.3 or decrease the parameter MaxOverlap (see section 3.2).
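To check quickly whether the matching fails only on the highest pyramid level, you can, e.g.,
temporarily search on a single level with a thorough (non-greedy) search; this sketch reuses the
variable names of the earlier examples, and the parameter values are debugging placeholders
only:

* debug call: single pyramid level (NumLevels = 1), Greediness = 0,
* and a reduced MinScore to see whether lost matches reappear
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.5, 0, 0.5,
                  'interpolation', 1, 0, RowCheck, ColumnCheck,
                  AngleCheck, Score)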
Step 2: Tune the parameters regarding speed
The speed of the matching process depends both on the model and on the search parameters.
To make matters more difficult, the search parameters depend on the chosen model parameters.
We recommend the following procedure:


• Increase the MinScore as far as possible, i.e., as long as the matching succeeds.
• Now, increase the Greediness until the matching fails. Try reducing the MinScore; if
this does not help, restore the previous values.
• If possible, use a larger value for NumLevels when creating the model.
• Restrict the allowed range of rotation and scale as far as possible as described in
section 2.2.3 and section 2.2.4. Alternatively, adjust the corresponding parameters when
calling find_shape_model or find_scaled_shape_model.
• Restrict the search to a region of interest as described in section 3.1.1.
The following methods are more “risky”, i.e., the matching may fail if you choose unsuitable
parameter values.
• Increase the MinContrast as long as the matching succeeds.
• If you are searching for a particularly large object, it sometimes helps to select a higher
point reduction with the parameter Optimization (see section 2.2.2).
• Increase the AngleStep (and the ScaleStep) as long as the matching succeeds.
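As an illustration of a restricted rotation range combined with a coarser AngleStep, the
following sketch creates and searches a model for a part that is assumed to rotate by at most
±5°; all concrete values are assumptions for this scenario, not general recommendations:

* model with a restricted rotation range and a coarser AngleStep;
* NumLevels = 0 selects the number of pyramid levels automatically
create_shape_model (ImageROI, 0, -rad(5), rad(10), rad(1), 'none',
                    'use_polarity', 30, 10, ModelID)
* search only within the allowed rotation range
find_shape_model (SearchImage, ModelID, -rad(5), rad(10), 0.8, 1, 0.5,
                  'interpolation', 0, 0.9, RowCheck, ColumnCheck,
                  AngleCheck, Score)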


4 Using the Results of Matching


As results, the operators find_shape_model, find_scaled_shape_model etc. return
• the position of the match in the parameters Row and Column,
• its orientation in the parameter Angle,
• the scaling factor in the parameter Scale, and
• the matching score in the parameter Score.
The matching score, which is a measure of the similarity between the model and the matched
object, can be used “as it is”, since it is an absolute value.
In contrast, the results regarding the position, orientation, and scale are worth a closer look
as they are determined relative to the created model. Before this, we introduce HALCON’s
powerful operators for the so-called affine transformations, which, when used together with the
shape-based matching, enable you to easily realize applications like image rectification or the
alignment of ROIs with a few lines of code.

4.1 Introducing Affine Transformations

“Affine transformation” is a technical term in mathematics describing a certain group of
transformations. Figure 17 shows the types that occur in the context of the shape-based
matching: An object can be translated (moved) along the two axes, rotated, and scaled. In
figure 17d, all three transformations were applied in a sequence.
Note that for the rotation and the scaling there exists a special point, called fixed point or point
of reference. The transformation is performed around this point. In figure 17b, e.g., the IC is
rotated around its center, in figure 17e around its upper right corner. The point is called fixed
point because it remains unchanged by the transformation.
The transformation can be thought of as a mathematical instruction that defines how to calculate
the coordinates of object points after the transformation. Fortunately, you need not worry about
the mathematical part; HALCON provides a set of operators that let you specify and apply
transformations in a simple way.

4.2 Creating and Applying Affine Transformations With HALCON

HALCON allows you to transform not only regions, but also images and XLD contours
by providing the operators affine_trans_region, affine_trans_image, and
affine_trans_contour_xld. The transformation in figure 17d corresponds to the line

affine_trans_region (IC, TransformedIC, ScalingRotationTranslation, 'false')

The parameter ScalingRotationTranslation is a so-called homogeneous transformation
matrix that describes the desired transformation. You can create this matrix by adding simple
transformations step by step. First, an identity matrix is created:
hom_mat2d_identity (EmptyTransformation)


Figure 17: Typical affine transformations (the coordinate axes are labeled row/x and
column/y): a) translation along the two axes; b) rotation around the IC center;
c) scaling around the IC center; d) combining a, b, and c; e) rotation around
the upper right corner; f) scaling around the upper right corner.

Then, the scaling around the center of the IC is added:

hom_mat2d_scale (EmptyTransformation, 0.5, 0.5, RowCenterIC,
                 ColumnCenterIC, Scaling)

Similarly, the rotation and the translation are added:

hom_mat2d_rotate (Scaling, rad(90), RowCenterIC, ColumnCenterIC,
                  ScalingRotation)
hom_mat2d_translate (ScalingRotation, 100, 200, ScalingRotationTranslation)

Please note that in these operators the coordinate axes are labeled with x and y instead of Row
and Column! Figure 17a clarifies the relation.
Transformation matrices can also be constructed by a sort of “reverse engineering”. In other
words, if the result of the transformation is known for some points of the object, you can
determine the corresponding transformation matrix. If, e.g., the position of the IC center and
its orientation after the transformation are known, you can get the corresponding matrix via the
operator vector_angle_to_rigid.


Figure 18: The position and orientation of a match: a) The center of the ROI acts as the default
point of reference; b) In the model image, the orientation is always 0.

vector_angle_to_rigid (RowCenterIC, ColumnCenterIC, 0,
                       TransformedRowCenterIC, TransformedColumnCenterIC,
                       rad(90), RotationTranslation)

and then use this matrix to compute the transformed region:

affine_trans_region (IC, TransformedIC, RotationTranslation, ’false’)

4.3 Using the Estimated Position and Orientation

! There are two things to keep in mind about the position and orientation returned in the
parameters Row, Column, and Angle: First, by default the center of the ROI acts as the point of
reference for both transformations, i.e., the rotation is performed around this point, and the
returned position denotes the position of the ROI center in the search image. This is depicted
in figure 18a with the example of an ROI whose center does not coincide with the center of the
IC.
Secondly, in the model image the object is taken as not rotated, i.e., its angle is 0, even if it
seems to be rotated, e.g., as in figure 18b.
After creating a model, you can change its point of reference with the operator
set_shape_model_origin. Note that this operator expects not the absolute position of the
new reference point as parameters, but its distance to the default reference point! An example
can be found in section 4.3.4; please note that by modifying the point of reference, the accuracy
of the estimated position may decrease (see section 3.4).
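A minimal sketch of these relative semantics (the offsets are made-up values): to place the
reference point 50 rows below and 100 columns to the right of the default reference point, you
pass the offsets themselves, not absolute image coordinates:

* offsets are relative to the default reference point (the ROI center)
set_shape_model_origin (ModelID, 50, 100)
* read back the currently set offsets
get_shape_model_origin (ModelID, OriginRow, OriginColumn)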


4.3.1 Displaying the Matches

Especially during the development of a matching application it is useful to display the matching
results overlaid on the search image. This can be realized in a few steps (see, e.g., the HDevelop
program hdevelop\first_example_shape_matching.dev):
Step 1: Determine the point of reference

gen_rectangle1 (ROI, Row1, Column1, Row2, Column2)
area_center (ROI, Area, CenterROIRow, CenterROIColumn)

You can determine the center of the ROI, i.e., the point of reference, with the operator
area_center.
Step 2: Create an XLD contour containing the model

inspect_shape_model (ImageROI, ShapeModelImages, ShapeModelRegions, 8, 30)
ShapeModelRegion := ShapeModelRegions[1]
gen_contours_skeleton_xld (ShapeModelRegion, ShapeModel, 1, 'filter')

Below, we want to display the model at the extracted position and orientation. The
corresponding region can be accessed via the operator inspect_shape_model. However, if you
call the operator with NumLevels > 1 as in the example, an array (tuple) of regions is returned,
with the desired region at the first position; you can select the region from the array via the
operator select_obj. We recommend transforming this region into an XLD contour using the
operator gen_contours_skeleton_xld because XLD contours can be transformed more
precisely and more quickly.
Step 3: Determine the affine transformation

find_shape_model (SearchImage, ModelID, 0, rad(360), 0.8, 1, 0.5,
                  'interpolation', 0, 0.9, RowCheck, ColumnCheck,
                  AngleCheck, Score)
if (|Score| = 1)
    vector_angle_to_rigid (CenterROIRow, CenterROIColumn, 0, RowCheck,
                           ColumnCheck, AngleCheck, MovementOfObject)

After the call of the operator find_shape_model, the results are checked; if the matching failed,
empty tuples are returned in the parameters Score etc. For a successful match, the
corresponding affine transformation can be constructed with the operator
vector_angle_to_rigid from the movement of the center of the ROI (see section 4.2).
Step 4: Transform the XLD

affine_trans_contour_xld (ShapeModel, ModelAtNewPosition, MovementOfObject)
dev_display (ModelAtNewPosition)

Now, you can apply the transformation to the XLD version of the model using the operator
affine_trans_contour_xld and display it; figure 2 shows the result.


Figure 19: Displaying multiple matches; the used model is depicted in figure 12a.

4.3.2 Dealing with Multiple Matches

If multiple instances of the object are searched and found, the parameters Row, Column, Angle,
and Score contain tuples. The HDevelop program hdevelop\multiple_objects.dev shows
how to access these results in a loop:
Step 1: Determine the affine transformation
find_shape_model (SearchImage, ModelID, 0, rad(360), 0.75, 0, 0.55,
’interpolation’, 0, 0.8, RowCheck, ColumnCheck,
AngleCheck, Score)
for j := 0 to |Score| - 1 by 1
vector_angle_to_rigid (CenterROIRow, CenterROIColumn, 0,
RowCheck[j], ColumnCheck[j], AngleCheck[j],
MovementOfObject)

The transformation corresponding to the movement of the match is determined as in the
previous section; the only difference is that the position and orientation of the match are
extracted from the result tuples via the loop variable.
Step 2: Use the transformation
affine_trans_point_2d (MovementOfObject, CenterROIRow - 120 + 0.5,
CenterROIColumn + 0.5, RowArrowHead,
ColumnArrowHead)
disp_arrow (WindowHandle, RowCheck[j], ColumnCheck[j],
RowArrowHead - 0.5, ColumnArrowHead - 0.5, 2)

In this example, the transformation is also used to display an arrow that visualizes the orientation
(see figure 19).
! Note that the operator affine_trans_point_2d and the HALCON regions (and XLDs) use
different definitions of the position of a pixel: For a region, a pixel is positioned at its middle,
for affine_trans_point_2d at its upper left corner. Therefore, 0.5 must be added to the pixel
coordinates before transforming them and subtracted again before creating the regions.


4.3.3 Dealing with Multiple Models

When searching for multiple models simultaneously as described in section 3.3, it is useful
to store the information about the models, i.e., the reference point and the model region or
XLD contour, in tuples. The following example code stems from the already partly described
HDevelop program hdevelop\multiple_models.dev, which uses the operator
find_scaled_shape_models to search simultaneously for the rings and nuts depicted in
figure 15.
Step 1: Inspect the models

inspect_shape_model (ImageROIRing, PyramidImage, ModelRegionRing, 1, 30)
gen_contours_skeleton_xld (ModelRegionRing, ShapeModelRing, 1, 'filter')
area_center (ModelROIRing, Area, CenterROIRowRing, CenterROIColumnRing)
inspect_shape_model (ImageROINut, PyramidImage, ModelRegionNut, 1, 30)
gen_contours_skeleton_xld (ModelRegionNut, ShapeModelNut, 1, 'filter')
area_center (ModelROINut, Area, CenterROIRowNut, CenterROIColumnNut)

As in the previous sections, the XLD contours corresponding to the two models are created with
the operators inspect_shape_model and gen_contours_skeleton_xld; the reference points
are determined using area_center.
Step 2: Save the information about the models in tuples

NumContoursRing := |ShapeModelRing|
NumContoursNut := |ShapeModelNut|
ShapeModels := [ShapeModelRing,ShapeModelNut]
StartContoursInTuple := [1, NumContoursRing+1]
NumContoursInTuple := [NumContoursRing, NumContoursNut]
CenterROIRows := [CenterROIRowRing, CenterROIRowNut]
CenterROIColumns := [CenterROIColumnRing, CenterROIColumnNut]

To facilitate access to the shape models later, the XLD contours and the reference points
are saved in tuples in analogy to the model IDs (see section 3.3). However, when concatenating
XLD contours with the operator concat_obj, one must keep in mind that XLD objects are
already tuples, as they may consist of multiple contours! To access the contours belonging to a
certain model, you therefore need the number of contours of a model and the starting index in
the concatenated tuple. The former is determined using the operator count_obj; the contours
of the ring start with the index 1, the contours of the nut with the index 1 plus the number of
contours of the ring.
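In the excerpt above, the number of contours is taken from the object tuples directly; an
equivalent sketch using the operators named in the text would look like this (same variable
names as above):

* count the contours of each model XLD (equivalent to |ShapeModelRing|)
count_obj (ShapeModelRing, NumContoursRing)
count_obj (ShapeModelNut, NumContoursNut)
* concatenate the two XLD objects into one tuple of contours
concat_obj (ShapeModelRing, ShapeModelNut, ShapeModels)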


Step 3: Access the found instances

find_scaled_shape_models (SearchImageReduced, ModelIDs,
                          [-rad(22.5), -rad(30)], [rad(45), rad(60)],
                          [0.8, 0.6], [1.2, 1.4], 0.8, 0, 0,
                          'interpolation', 0, 0.9, RowCheck, ColumnCheck,
                          AngleCheck, ScaleCheck, Score, ModelIndex)
for i := 0 to |Score| - 1 by 1
Model := ModelIndex[i]
vector_angle_to_rigid (CenterROIRows[Model], CenterROIColumns[Model],
0, RowCheck[i], ColumnCheck[i], AngleCheck[i],
MovementOfObject)
hom_mat2d_scale (MovementOfObject, ScaleCheck[i], ScaleCheck[i],
RowCheck[i], ColumnCheck[i], MoveAndScalingOfObject)
copy_obj (ShapeModels, ShapeModel, StartContoursInTuple[Model],
NumContoursInTuple[Model])
affine_trans_contour_xld (ShapeModel, ModelAtNewPosition,
MoveAndScalingOfObject)
dev_display (ModelAtNewPosition)
endfor

As already described in section 4.3.2, in case of multiple matches the output parameters Row
etc. contain tuples of values, which are typically accessed in a loop, using the loop variable
as the index into the tuples. When searching for multiple models, a second index is involved:
The output parameter Model (stored in the variable ModelIndex above) indicates to which
model a match belongs by storing the index of the corresponding model ID in the tuple of IDs
specified in the parameter ModelIDs. This may sound confusing, but can be realized in an
elegant way in the code: For each found instance, the model ID index is used to select the
corresponding information from the tuples created above.
As already noted, the XLD representing the model can consist of multiple contours; therefore,
you cannot access them directly using the operator select_obj. Instead, the contours
belonging to the model are selected via the operator copy_obj, specifying the start index of the
model in the concatenated tuple and the number of contours as parameters. Note that copy_obj
does not copy the contours, but only the corresponding HALCON objects, which can be thought
of as references to the contours.

4.3.4 Aligning Other ROIs

The results of the matching can be used to align ROIs for other image processing steps, i.e., to
position them relative to the image part acting as the model. This method is very useful, e.g.,
if the object to be inspected is allowed to move or if multiple instances of the object are to be
inspected at once as in the example application described below.
In the example application hdevelop\align_measurements.dev the task is to inspect razor
blades by measuring the width and the distance of their “teeth”. Figure 20a shows the model
ROI, figure 20b the corresponding model region.
The inspection task is realized with the following steps:


Figure 20: Aligning ROIs for inspecting parts of a razor: a) ROIs for the model; b) the model; c)
measuring ROIs; d) inspection results with zoomed faults.

Step 1: Position the measurement ROIs for the model blade


Rect1Row := 244
Rect1Col := 73
DistColRect1Rect2 := 17
Rect2Row := Rect1Row
Rect2Col := Rect1Col + DistColRect1Rect2
RectPhi := rad(90)
RectLength1 := 122
RectLength2 := 2

First, two rectangular measurement ROIs are placed over the teeth of the razor blade acting as
the model as shown in figure 20c.


Step 2: Move the reference point to the center of the first measure ROI
DistRect1CenterRow := Rect1Row - CenterROIRow
DistRect1CenterCol := Rect1Col - CenterROIColumn
set_shape_model_origin (ModelID, DistRect1CenterRow, DistRect1CenterCol)

Now, the reference point of the model is moved to the center of the first measure ROI using
the operator set_shape_model_origin. As already mentioned, the operator expects not the
absolute position of the new reference point, but its distance to the default reference point. Note
that this step is only included to show how to use set_shape_model_origin; as described in
section 3.4, the accuracy of the estimated position may decrease when using a modified point
of reference.
Step 3: Find all razor blades

find_shape_model (SearchImage, ModelID, 0, 0, 0.8, 0, 0.5, 'interpolation',
                  0, 0.7, RowCheck, ColumnCheck, AngleCheck, Score)

Then, all instances of the model object are searched for in the image.
Step 4: Determine the affine transformation

for i := 0 to |Score|-1 by 1
vector_angle_to_rigid (Rect1Row, Rect1Col, 0, RowCheck[i],
ColumnCheck[i], AngleCheck[i],
MovementOfObject)

For each razor blade, the transformation representing its position and orientation is calculated.
Because the reference point was moved to the center of the first measure ROI, these coordinates
are now used in the call to vector_angle_to_rigid.
Step 5: Create measurement objects at the corresponding positions

RectPhiCheck := RectPhi + AngleCheck[i]
gen_measure_rectangle2 (RowCheck[i], ColumnCheck[i],
RectPhiCheck, RectLength1, RectLength2,
Width, Height, ’bilinear’,
MeasureHandle1)
affine_trans_point_2d (MovementOfObject, Rect2Row+0.5,
Rect2Col+0.5, Rect2RowTmp, Rect2ColTmp)
Rect2RowCheck := Rect2RowTmp-0.5
Rect2ColCheck := Rect2ColTmp-0.5
gen_measure_rectangle2 (Rect2RowCheck, Rect2ColCheck,
RectPhiCheck, RectLength1, RectLength2,
Width, Height, ’bilinear’,
MeasureHandle2)

Because the center of the first measure ROI serves as the reference point of the model, the
returned position of the match can be used directly in the call to gen_measure_rectangle2.
Unfortunately, there is only one point of reference. Therefore, the new position of the second
measure ROI must be calculated explicitly using the operator affine_trans_point_2d.
! As remarked in section 4.3.2, the code adding and subtracting 0.5 to and from the point
coordinates is necessary because the operator affine_trans_point_2d and the HALCON
regions (and XLDs) use different definitions of the position of a pixel.


In the example application, the individual razor blades are only translated but not rotated
relative to the model position. Instead of applying the full affine transformation to the measure
ROIs and then creating new measure objects, one can therefore use the operator
translate_measure to translate the measure objects themselves. The example program
contains the corresponding code; you can switch between the two methods by modifying a
variable at the top of the program.
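A sketch of this translation-only variant (assuming, as in the program, that the rotation of the
blades is negligible; the variable names follow the excerpts above):

* move the measure object of the model blade to the matched position
* instead of generating a new measure object
translate_measure (MeasureHandle1, RowCheck[i], ColumnCheck[i],
                   MeasureHandleMoved1)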
Step 6: Measure the width and the distance of the “teeth”

measure_pairs (SearchImage, MeasureHandle1, 2, 25, 'negative', 'all',
               RowEdge11, ColEdge11, Amp11, RowEdge21, ColEdge21, Amp21,
               Width1, Distance1)
measure_pairs (SearchImage, MeasureHandle2, 2, 25, 'negative', 'all',
               RowEdge12, ColEdge12, Amp12, RowEdge22, ColEdge22, Amp22,
               Width2, Distance2)

Now, the actual measurements are performed using the operator measure_pairs.
Step 7: Inspect the measurements

NumberTeeth1 := |Width1|
if (NumberTeeth1 < 37)
for j := 0 to NumberTeeth1 - 2 by 1
if (Distance1[j] > 4.0)
RowFault := round(0.5*(RowEdge11[j+1] + RowEdge21[j]))
ColFault := round(0.5*(ColEdge11[j+1] + ColEdge21[j]))
disp_rectangle2 (WindowHandle, RowFault, ColFault, 0,
4, 4)

Finally, the measurements are inspected. If a “tooth” is too short or missing completely, no
edges are extracted at this point, resulting in an incorrect number of extracted edge pairs. In this
case, the faulty position can be determined by checking the distance of the teeth. Figure 20d
shows the inspection results for the example.
Please note that the example program is not able to display the fault if it occurs at the first or
the last tooth.

4.3.5 Rectifying the Search Results

In the previous section, the matching results were used to determine the so-called forward
transformation, i.e., how objects are transformed from the model into the search image. Using
this transformation, ROIs specified in the model image can be positioned correctly in the search
image.
You can also determine the inverse transformation which transforms objects from the search
image back into the model image. With this transformation, you can rectify the search image
(or parts of it), i.e., transform it such that the matched object is positioned as it was in the model
image. This method is useful if the following image processing step is not invariant against
rotation, e.g., OCR or the variation model. Note that image rectification can also be useful
before applying shape-based matching, e.g., if the camera observes the scene under an oblique
angle; see section 5.1 for more information.


Figure 21: Rectifying the search results: a) ROIs for the model and for the number extraction;
b) the model; c) number ROI at matched position; d) rectified search image (only
relevant part shown); e) extracted numbers.

The inverse transformation can be determined and applied in a few steps, which are
described below; in the corresponding example application of the HDevelop program
hdevelop\rectify_results.dev the task is to extract the serial number on CD covers (see
figure 21).
Step 1: Calculate the inverse transformation

hom_mat2d_invert (MovementOfObject, InverseMovementOfObject)

You can invert a transformation easily using the operator hom_mat2d_invert.


Step 2: Rectify the search image

affine_trans_image (SearchImage, RectifiedSearchImage,
                    InverseMovementOfObject, 'constant', 'false')

Now, you can apply the inverse transformation to the search image using the operator
affine_trans_image. Figure 21d shows the resulting rectified image of a different CD;
undefined pixels are marked in grey.
Step 3: Extract the numbers

reduce_domain (RectifiedSearchImage, NumberROI, RectifiedNumberROIImage)
threshold (RectifiedNumberROIImage, Numbers, 0, 128)
connection (Numbers, IndividualNumbers)

Now, the serial number is positioned correctly within the original ROI and can be extracted
without problems. Figure 21e shows the result, which could then, e.g., be used as the input for
OCR.
Unfortunately, the operator affine_trans_image transforms the full image even if you restrict
its domain with the operator reduce_domain. In a time-critical application it may therefore
be necessary to crop the search image before transforming it. The corresponding steps are
visualized in figure 22.
Step 1: Crop the search image

smallest_rectangle1 (NumberROIAtNewPosition, Row1, Column1, Row2, Column2)
crop_rectangle1 (SearchImage, CroppedNumberROIImage, Row1, Column1,
                 Row2, Column2)

First, the smallest axis-parallel rectangle surrounding the transformed number ROI is computed
using the operator smallest_rectangle1, and the search image is cropped to this part.
Figure 22b shows the resulting image overlaid on a grey rectangle to facilitate the comparison
with the subsequent images.
Step 2: Create an extended affine transformation

hom_mat2d_translate (MovementOfObject, -Row1, -Column1, MoveAndCrop)
hom_mat2d_invert (MoveAndCrop, InverseMoveAndCrop)

In fact, the cropping can be interpreted as an additional affine transformation: a translation by
the negated coordinates of the upper left corner of the cropping rectangle (see figure 22a). We
therefore “add” this transformation to the transformation describing the movement of the object
using the operator hom_mat2d_translate, and then invert this extended transformation with
the operator hom_mat2d_invert.
Step 3: Transform the cropped image

affine_trans_image (CroppedNumberROIImage, RectifiedROIImage,
                    InverseMoveAndCrop, 'constant', 'true')
reduce_domain (RectifiedROIImage, NumberROI, RectifiedNumberROIImage)


Figure 22: Rectifying only part of the search image: a) smallest image part containing the ROI
(the cropping corresponds to a translation by (-Row1, -Column1)); b) cropped
search image; c) result of the rectification; d) rectified image reduced to the
original number ROI.

Using the inverted extended transformation, the cropped image can easily be rectified with the
operator affine_trans_image (figure 22c) and then be reduced to the original number ROI
(figure 22d) in order to extract the numbers.

4.4 Using the Estimated Scale

Similarly to the rotation (compare section 4.3), the scaling is performed around the center of
the ROI – if you didn’t use set_shape_model_origin, that is. This is depicted in figure 23a
with the example of an ROI whose center does not coincide with the center of the IC.
The estimated scale, which is returned in the parameter Scale, can be used similarly to the
position and orientation. However, there is no convenience operator like
vector_angle_to_rigid that creates an affine transformation including the scale; therefore,
the scaling must be added separately. How to achieve this is explained below; in the
corresponding example HDevelop program hdevelop\multiple_scales.dev, the task is to
find nuts of varying sizes and to determine suitable points for grasping them (see figure 24).


Figure 23: The center of the ROI acts as the point of reference for the scaling (here, Scale = 1
in the model image and Scale = 0.5 in the search image).

Figure 24: Determining grasping points on nuts of varying sizes: a) ring-shaped ROI; b) model;
c) grasping points defined on the model nut; d) results of the matching.

Step 1: Specify grasping points


RowUpperPoint := 284
ColUpperPoint := 278
RowLowerPoint := 362
ColLowerPoint := 278

In the example program, the grasping points are specified directly in the model image; they are
marked with arrows in figure 24c.
Step 2: Determine the complete transformation

find_scaled_shape_model (SearchImage, ModelID, -rad(30), rad(60), 0.6, 1.4,
                         0.9, 0, 0, 'interpolation', 0, 0.8, RowCheck,
                         ColumnCheck, AngleCheck, ScaleCheck, Score)
for i := 0 to |Score| - 1 by 1
vector_angle_to_rigid (CenterROIRow, CenterROIColumn, 0, RowCheck[i],
ColumnCheck[i], AngleCheck[i], MovementOfObject)
hom_mat2d_scale (MovementOfObject, ScaleCheck[i], ScaleCheck[i],
RowCheck[i], ColumnCheck[i], MoveAndScalingOfObject)
affine_trans_contour_xld (ShapeModel, ModelAtNewPosition,
MoveAndScalingOfObject)


After the matching, first the translational and rotational part of the transformation is determined
with the operator vector_angle_to_rigid as in the previous sections. Then, the scaling is
added using the operator hom_mat2d_scale. Note that the position of the match, i.e., the
transformed center of the ROI, is used as the point of reference; this becomes necessary because
the scaling is performed “after” the translation and rotation. The resulting, complete
transformation can be used as before to display the model at the position of the matches.
Step 3: Calculate the transformed grasping points

affine_trans_point_2d (MoveAndScalingOfObject, RowUpperPoint+0.5,
                       ColUpperPoint+0.5, TmpRowUpperPoint,
                       TmpColUpperPoint)
affine_trans_point_2d (MoveAndScalingOfObject, RowLowerPoint+0.5,
ColLowerPoint+0.5, TmpRowLowerPoint,
TmpColLowerPoint)
RowUpperPointCheck := TmpRowUpperPoint-0.5
ColUpperPointCheck := TmpColUpperPoint-0.5
RowLowerPointCheck := TmpRowLowerPoint-0.5
ColLowerPointCheck := TmpColLowerPoint-0.5

Of course, the affine transformation can also be applied to other points in the model image with
the operator affine_trans_point_2d. In the example, this is used to calculate the position of
the grasping points for all nuts; they are marked with arrows in figure 24d.
! As noted in section 4.3.2, the code adding and subtracting 0.5 to and from the point
coordinates is necessary because the operator affine_trans_point_2d and the HALCON
regions (and XLDs) use different definitions of the position of a pixel.

5 Miscellaneous

5.1 Adapting to a Changed Camera Orientation

As shown in the sections above, HALCON’s shape-based matching allows you to localize
objects even if their position and orientation in the image or their scale changes. However, the
shape-based matching fails if the camera observes the scene under an oblique angle, i.e., if it is
not pointed perpendicularly at the plane in which the objects move, because an object then
appears distorted due to perspective projection; even worse, the distortion changes with the
position and orientation of the object.
In such a case we recommend rectifying the images before applying the matching. This is a
three-step process: First, you must calibrate the camera, i.e., determine its position and
orientation and other parameters, using the operator camera_calibration. Secondly, the
calibration data is used to create a mapping function via the operator
gen_image_to_world_plane_map, which is then applied to the images with the operator
map_image. For more information please refer to the HDevelop example program
pm_world_plane.dev, which can be found in the subdirectory hdevelop\Applications\FA
of the directory %HALCONROOT%\examples.
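The second and third step might look like the following sketch; it assumes that CameraParam
and WorldPose have already been obtained from the calibration, and the image sizes and the
scale (world units per pixel) are placeholder values only:

* create the mapping into the world plane once ...
gen_image_to_world_plane_map (Map, CameraParam, WorldPose, 768, 576,
                              768, 576, 0.0005)
* ... then apply it to every image before the matching
map_image (Image, Map, RectifiedImage)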


5.2 Reusing Models


If you want to reuse created models in other HALCON applications, all you need to do is to
store the relevant information in files and then read it again. The following example code stems
from the HDevelop program hdevelop\reuse_model.dev. First, a model is created and the
corresponding XLD contour and the reference point are determined:
create_scaled_shape_model (ImageROI, 0, -rad(30), rad(60), 0, 0.6, 1.4, 0,
’none’, ’use_polarity’, 60, 10, ModelID)
inspect_shape_model (ImageROI, ShapeModelImage, ShapeModelRegion, 1, 30)
gen_contours_skeleton_xld (ShapeModelRegion, ShapeModel, 1, ’filter’)
area_center (ModelROI, Area, CenterROIRow, CenterROIColumn)
Then, this information is stored in files using the operators write_shape_model (for the
model), write_contour_xld_arc_info (for the XLD contour), and write_tuple (for the
reference point, whose coordinates have been concatenated into a tuple first):
write_shape_model (ModelID, ModelFile)
write_contour_xld_arc_info (ShapeModel, XLDFile)
ReferencePoint := [CenterROIRow, CenterROIColumn]
write_tuple (ReferencePoint, RefPointFile)

In the example program, all shape models are cleared to represent the start of another
application.
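This clearing step might, e.g., be realized as follows (a minimal sketch):

* free the memory of all shape models created so far
clear_all_shape_models ()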
The model, the XLD contour, and the reference point are now read from the files using
the operators read_shape_model, read_contour_xld_arc_info, and read_tuple,
respectively. Furthermore, the parameters used to create the model are accessed with the
operator get_shape_model_params:
read_shape_model (ModelFile, ReusedModelID)
read_contour_xld_arc_info (ReusedShapeModel, XLDFile)
read_tuple (RefPointFile, ReusedReferencePoint)
ReusedCenterROIRow := ReusedReferencePoint[0]
ReusedCenterROICol := ReusedReferencePoint[1]
get_shape_model_params (ReusedModelID, NumLevels, AngleStart, AngleExtent,
AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric,
MinContrast)
Now, the model can be used as if it had been created in the application itself:
find_scaled_shape_model (SearchImage, ReusedModelID, AngleStart,
AngleExtent, ScaleMin, ScaleMax, 0.9, 0, 0,
’interpolation’, 0, 0.8, RowCheck, ColumnCheck,
AngleCheck, ScaleCheck, Score)
for i := 0 to |Score| - 1 by 1
vector_angle_to_rigid (ReusedCenterROIRow, ReusedCenterROICol, 0,
RowCheck[i], ColumnCheck[i], AngleCheck[i],
MovementOfObject)
hom_mat2d_scale (MovementOfObject, ScaleCheck[i], ScaleCheck[i],
RowCheck[i], ColumnCheck[i], MoveAndScalingOfObject)
affine_trans_contour_xld (ReusedShapeModel, ModelAtNewPosition,
MoveAndScalingOfObject)
dev_display (ModelAtNewPosition)
endfor
