Halrefmanhdev - en US
HALCON/HDevelop
Reference Manual
This manual describes the operators of HALCON, version 8.0.2, in HDevelop syntax. It was generated on May
13, 2008.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written
permission of the publisher.
Copyright © 1997-2008 by MVTec Software GmbH, München, Germany
1 Classification 1
1.1 Gaussian-Mixture-Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
add_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
classify_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
clear_all_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
clear_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
clear_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
create_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
evaluate_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
get_params_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
get_prep_info_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
get_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
get_sample_num_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
read_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
read_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
train_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
write_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
write_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
clear_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
close_all_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
close_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
create_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
descript_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
enquire_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
enquire_reject_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
get_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
learn_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
learn_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
read_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
read_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
test_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
write_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
add_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
classify_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
clear_all_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
clear_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
clear_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
create_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
evaluate_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
get_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
get_prep_info_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
get_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
get_sample_num_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
read_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
read_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
train_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
write_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
write_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
add_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
classify_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
clear_all_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
clear_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
clear_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
create_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
get_params_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
get_prep_info_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
get_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
get_sample_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
get_support_vector_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
get_support_vector_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
read_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
read_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
reduce_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
train_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
write_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
write_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2 Control 59
assign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
break . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
comment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
continue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
elseif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
endfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
endif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
endwhile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
exit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
ifelse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
stop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
until . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
while . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3 Develop 71
dev_clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
dev_clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
dev_close_inspect_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
dev_close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
dev_display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
dev_error_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
dev_get_preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
dev_inspect_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
dev_map_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
dev_map_prog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
dev_map_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
dev_open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
dev_set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
dev_set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
dev_set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
dev_set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
dev_set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
dev_set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
dev_set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
dev_set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
dev_set_preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
dev_set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
dev_set_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
dev_set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
dev_unmap_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
dev_unmap_prog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
dev_unmap_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
dev_update_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
dev_update_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
dev_update_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
dev_update_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4 File 93
4.1 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
read_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
read_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
write_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.2 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
delete_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
file_exists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
list_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
read_world_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.3 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
read_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
write_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.4 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
close_all_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
close_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
fnew_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
fread_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
fread_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
fread_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
fwrite_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.5 Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
read_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
write_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.6 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
read_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
read_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
read_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
read_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
write_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
write_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
write_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
write_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5 Filter 117
5.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
abs_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
add_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
div_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
invert_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
max_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
min_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
mult_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
scale_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
sqrt_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
sub_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.2 Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
bit_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
bit_lshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
bit_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
bit_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
bit_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
bit_rshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
bit_slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
bit_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3 Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
cfa_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
gen_principal_comp_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
linear_trans_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
principal_comp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
rgb1_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
rgb3_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
trans_from_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
trans_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.4 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
close_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
close_edges_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
derivate_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
diff_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
edges_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
edges_color_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
edges_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
edges_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
frei_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
frei_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
highpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
info_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
kirsch_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
kirsch_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
laplace_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
prewitt_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
prewitt_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
roberts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
robinson_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
robinson_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
sobel_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
sobel_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.5 Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
adjust_mosaic_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
coherence_enhancing_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
emphasize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
equ_histo_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
illuminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
mean_curvature_flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
scale_image_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
shock_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.6 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
convol_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
convol_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
energy_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
fft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
fft_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
fft_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
gen_bandfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
gen_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
gen_derivative_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
gen_filter_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
gen_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
gen_gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
gen_highpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
gen_lowpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
gen_sin_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
gen_std_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
optimize_fft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
optimize_rft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
phase_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
phase_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
power_byte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
power_ln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
power_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
read_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
rft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
write_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.7 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
affine_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
affine_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
gen_bundle_adjusted_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
gen_cube_map_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
gen_projective_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
gen_spherical_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
map_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
mirror_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
polar_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
polar_trans_image_ext . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
polar_trans_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
projective_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
projective_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
rotate_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
zoom_image_factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
zoom_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.8 Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
harmonic_interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
inpainting_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
inpainting_ced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
inpainting_ct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
inpainting_mcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
inpainting_texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.9 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
bandpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
lines_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
lines_facet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
lines_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.10 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
exhaustive_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
exhaustive_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
gen_gauss_pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
monotony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.11 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
convol_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
expand_domain_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
gray_inside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
gray_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
lut_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
topographic_sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.12 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
add_noise_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
add_noise_white . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
gauss_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
noise_distribution_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
sp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.13 Optical-Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
optical_flow_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
unwarp_image_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
vector_field_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
5.14 Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
corner_response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
dots_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
points_foerstner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
points_harris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
points_sojka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5.15 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
anisotrope_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
anisotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
binomial_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
eliminate_min_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
eliminate_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
fill_interlace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
gauss_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
info_smooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
isotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
mean_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
mean_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
mean_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
median_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
median_separate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
median_weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
midrange_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
rank_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
sigma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
smooth_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
trimmed_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
5.16 Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
deviation_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
entropy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
texture_laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
5.17 Wiener-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
gen_psf_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
gen_psf_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
simulate_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
simulate_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
wiener_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
wiener_filter_ni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6 Graphics 323
6.1 Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
drag_region1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
drag_region2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
drag_region3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
draw_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
draw_circle_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
draw_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
draw_ellipse_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
draw_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
draw_line_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
draw_nurbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
draw_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
draw_nurbs_interp_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
draw_nurbs_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
draw_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
draw_point_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
draw_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
draw_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
draw_rectangle1_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
draw_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
draw_rectangle2_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
draw_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
draw_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
draw_xld_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
6.2 Gnuplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
gnuplot_close . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
gnuplot_open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
gnuplot_open_pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
gnuplot_plot_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
gnuplot_plot_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
gnuplot_plot_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
6.3 LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
disp_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
draw_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
get_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
get_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
get_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
query_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
set_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
set_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
write_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
6.4 Mouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
get_mbutton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
get_mposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
get_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
query_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
set_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
6.5 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
disp_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
disp_arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
disp_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
disp_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
disp_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
disp_cross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
disp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
disp_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
disp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
disp_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
disp_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
disp_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
disp_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
disp_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
disp_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
disp_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
6.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
get_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
get_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
get_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
get_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
get_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
get_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
get_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
get_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
get_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
get_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
get_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
get_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
get_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
get_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
get_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
query_all_colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
query_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
query_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
query_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
query_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
query_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
query_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
query_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
set_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
set_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
set_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
set_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
set_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
set_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
set_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
set_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
set_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
set_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
set_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
6.7 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
get_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
get_string_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
get_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
get_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
new_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
query_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
query_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
read_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
read_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
set_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
set_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
set_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
write_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
6.8 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
clear_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
copy_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
dump_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
dump_window_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
get_os_window_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
get_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
get_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
get_window_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
get_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
move_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
new_extern_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
open_textwindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
query_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
set_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
set_window_dc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
set_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
slide_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
7 Image 447
7.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
get_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
get_image_pointer1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
get_image_pointer1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
get_image_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
get_image_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.2 Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
close_all_framegrabbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
close_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
get_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
get_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
grab_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
grab_data_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
grab_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
grab_image_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
grab_image_start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
info_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
open_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
set_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
set_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
7.3 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
access_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
append_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
channels_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
compose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
compose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
compose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
compose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
compose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
compose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
count_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
decompose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
decompose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
decompose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
decompose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
decompose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
decompose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
image_to_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
7.4 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
copy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
gen_image1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
gen_image1_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
gen_image1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
gen_image3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
gen_image_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
gen_image_gray_ramp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
gen_image_interleaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
gen_image_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
gen_image_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
gen_image_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
region_to_bin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
region_to_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
region_to_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
7.5 Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
add_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
change_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
full_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
get_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
rectangle1_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
reduce_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
7.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
area_center_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
cooc_feature_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
cooc_feature_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
elliptic_axis_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
entropy_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
estimate_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
fit_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
fit_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
fuzzy_entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
fuzzy_perimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
gen_cooc_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
gray_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
gray_histo_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
gray_projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
histo_2dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
min_max_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
moments_gray_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
plane_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
select_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
shape_histo_all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
shape_histo_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
7.7 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
change_format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
crop_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
crop_domain_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
crop_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
crop_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
tile_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
tile_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
tile_images_offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
7.8 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
overpaint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
overpaint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
paint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
paint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
paint_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
set_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
7.9 Type-Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
complex_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
convert_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
real_to_complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
real_to_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
vector_field_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
8 Lines 537
8.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
approx_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
approx_chain_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
8.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
line_position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
partition_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
select_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
select_lines_longest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
9 Matching 549
9.1 Component-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
clear_all_component_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
clear_all_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
clear_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
clear_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
cluster_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
create_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
create_trained_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
find_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
gen_initial_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
get_component_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
get_component_model_tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
get_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
get_found_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
get_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
inspect_clustered_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
modify_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
read_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
read_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
train_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
write_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
write_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
9.2 Correlation-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
clear_all_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
clear_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
create_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
find_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
get_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
get_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
read_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
set_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
write_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
9.3 Gray-Value-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
adapt_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
best_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
best_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
best_match_pre_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
best_match_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
best_match_rot_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
clear_all_templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
clear_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
create_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
create_template_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
fast_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
fast_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
read_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
set_offset_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
set_reference_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
write_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
9.4 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
clear_all_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
clear_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
create_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
create_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
create_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
determine_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
find_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
find_aniso_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
find_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
find_scaled_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
find_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
find_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
get_shape_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
get_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
get_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
inspect_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
read_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
set_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
write_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
10 Matching-3D 649
affine_trans_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
clear_all_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
clear_all_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
clear_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
clear_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
convert_point_3d_cart_to_spher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
convert_point_3d_spher_to_cart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
create_cam_pose_look_at_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
create_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
find_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
get_object_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
get_shape_model_3d_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
get_shape_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
project_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
project_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
read_object_model_3d_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
read_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
trans_pose_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
write_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
11 Morphology 675
11.1 Gray-Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
dual_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
gen_disc_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
gray_bothat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
gray_closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
gray_closing_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
gray_closing_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
gray_dilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
gray_dilation_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
gray_dilation_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
gray_erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
gray_erosion_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
gray_erosion_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
gray_opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
gray_opening_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
gray_opening_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
gray_range_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
gray_tophat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
read_gray_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
11.2 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
bottom_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
closing_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
closing_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
closing_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
dilation1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
dilation2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
dilation_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
dilation_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
dilation_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
dilation_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
erosion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
erosion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
erosion_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
erosion_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
erosion_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
erosion_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
gen_struct_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
golay_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
hit_or_miss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
hit_or_miss_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
hit_or_miss_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
minkowski_add1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
minkowski_add2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
minkowski_sub1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
minkowski_sub2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
morph_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
morph_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
morph_skiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
opening_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
opening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
opening_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
opening_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
thickening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
thickening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
thickening_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
thinning_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
thinning_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
top_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
12 OCR 743
12.1 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
close_all_ocrs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
close_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
create_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
do_ocr_multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
do_ocr_single . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
info_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
ocr_change_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
ocr_get_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
read_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
testd_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
traind_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
trainf_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
write_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
12.2 Lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
clear_all_lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
clear_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
create_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
import_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
inspect_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
lookup_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
suggest_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
12.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
clear_all_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
clear_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
create_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
do_ocr_multi_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
do_ocr_single_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
do_ocr_word_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
get_features_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
get_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
get_prep_info_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
read_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
trainf_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
write_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
12.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
clear_all_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
clear_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
create_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
do_ocr_multi_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
do_ocr_single_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
do_ocr_word_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
get_features_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
get_params_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
get_prep_info_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
get_support_vector_num_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
get_support_vector_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
read_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
reduce_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
trainf_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
write_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
12.5 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
segment_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
select_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
text_line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788
text_line_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
12.6 Training-Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
append_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
concat_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
read_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
read_ocr_trainf_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
read_ocr_trainf_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
write_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
write_ocr_trainf_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
13 Object 795
13.1 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
count_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
get_channel_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
get_obj_class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
test_equal_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
test_obj_def . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
13.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
concat_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
copy_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
gen_empty_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
integer_to_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
obj_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
select_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
14 Regions 805
14.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
get_region_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
get_region_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
get_region_convex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
get_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
get_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
get_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
14.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
gen_checker_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
gen_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
gen_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
gen_empty_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
gen_grid_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
gen_random_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
gen_random_regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
gen_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
gen_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
gen_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
gen_region_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
gen_region_hline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
gen_region_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 823
gen_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
gen_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
gen_region_polygon_filled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
gen_region_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
gen_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
label_to_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
14.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
area_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
connect_and_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
contlength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
diameter_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
eccentricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
elliptic_axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
euler_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
find_neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
get_region_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
get_region_thickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
hamming_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
hamming_distance_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
inner_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
inner_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
moments_region_2nd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
moments_region_2nd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
moments_region_2nd_rel_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
moments_region_3rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
moments_region_3rd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
moments_region_central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
moments_region_central_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
orientation_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
rectangularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
runlength_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
runlength_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
select_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
select_region_spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
select_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
select_shape_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
select_shape_std . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
smallest_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
smallest_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
smallest_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
spatial_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
14.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
affine_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
mirror_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
move_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
polar_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
polar_trans_region_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
projective_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
transpose_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
zoom_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
14.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
symm_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
union1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
union2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
14.6 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
test_equal_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
test_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 882
test_subset_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
14.7 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
background_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
clip_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
clip_region_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
distance_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
eliminate_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
expand_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
fill_up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
fill_up_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
hamming_change_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
interjacent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
junctions_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
merge_regions_line_scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
partition_dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
partition_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
rank_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
remove_noise_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
shape_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
sort_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
split_skeleton_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
split_skeleton_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
15 Segmentation 905
15.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
add_samples_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
add_samples_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
add_samples_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
class_2dim_sup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
class_2dim_unsup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
class_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
class_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
classify_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
classify_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
classify_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
learn_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
learn_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
15.2 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
detect_edge_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
hysteresis_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
nonmax_suppression_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
nonmax_suppression_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
15.3 Regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
expand_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
expand_gray_ref . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926
expand_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
regiongrowing_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
regiongrowing_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
15.4 Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
auto_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
bin_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
char_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
check_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938
dual_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
dyn_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
fast_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
histo_to_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
threshold_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
var_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
zero_crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
zero_crossing_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
15.5 Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
critical_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
local_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
local_max_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
local_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
local_min_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
lowlands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
lowlands_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
plateaus_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
pouring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
saddle_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
watersheds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
watersheds_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
16 System 965
16.1 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
count_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
get_modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
reset_obj_db . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
16.2 Error-Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
get_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
get_error_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
get_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
query_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
set_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
16.3 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
get_chapter_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
get_keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 974
get_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
get_operator_name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
get_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
get_param_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_param_num . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
get_param_types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
query_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
query_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
search_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
16.4 Operating-System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
count_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
system_call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
wait_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
16.5 Parallelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
check_par_hw_potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
load_par_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
store_par_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
16.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
16.7 Serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
clear_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
close_all_serials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
close_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
get_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
open_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
read_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998
set_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
write_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
16.8 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
close_socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
get_next_socket_data_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
get_socket_descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
get_socket_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
open_socket_accept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
open_socket_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
receive_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
receive_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
receive_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
receive_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
send_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
send_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
send_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
send_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
set_socket_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008
socket_accept_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
17 Tools 1011
17.1 2D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
affine_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
affine_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
bundle_adjust_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
hom_mat2d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
hom_mat2d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
hom_mat2d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
hom_mat2d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
hom_mat2d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
hom_mat2d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
hom_mat2d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
hom_mat2d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021
hom_mat2d_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
hom_mat2d_slant_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1024
hom_mat2d_to_affine_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
hom_mat2d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026
hom_mat2d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
hom_mat2d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
hom_mat3d_project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
hom_vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
proj_match_points_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
projective_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1034
projective_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
vector_angle_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036
vector_field_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
vector_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038
vector_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040
vector_to_similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
17.2 3D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
affine_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
convert_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
create_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
get_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
hom_mat3d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
hom_mat3d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
hom_mat3d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
hom_mat3d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
hom_mat3d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
hom_mat3d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1054
hom_mat3d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
hom_mat3d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
hom_mat3d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
hom_mat3d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
read_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
set_origin_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
write_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
17.3 Background-Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
close_all_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
close_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
create_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
get_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067
give_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
run_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1070
set_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
update_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
17.4 Barcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
clear_all_bar_code_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
clear_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
create_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
find_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
get_bar_code_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
get_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
get_bar_code_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
set_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
17.5 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
caltab_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
cam_mat_to_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
cam_par_to_cam_mat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
camera_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
change_radial_distortion_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
change_radial_distortion_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
change_radial_distortion_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
contour_to_world_plane_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
create_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
disp_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
find_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1100
find_marks_and_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
gen_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103
gen_image_to_world_plane_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
gen_radial_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1108
get_circle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
get_line_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
get_rectangle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
hand_eye_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
image_points_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
image_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1124
project_3d_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
radiometric_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1127
read_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130
sim_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1132
stationary_camera_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1134
write_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139
17.6 Datacode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
clear_all_data_code_2d_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
clear_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
create_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
find_data_code_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
get_data_code_2d_objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149
get_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1151
get_data_code_2d_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
query_data_code_2d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
read_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
set_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1162
write_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
17.7 Fourier-Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
abs_invar_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
fourier_1dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
fourier_1dim_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1169
invar_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1170
match_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1171
move_contour_orig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172
prep_contour_fourier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172
17.8 Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1173
abs_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1173
compose_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174
create_funct_1d_array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174
create_funct_1d_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175
derivate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175
distance_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
funct_1d_to_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
get_pair_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
get_y_value_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
integrate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1178
invert_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1178
local_min_max_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
match_funct_1d_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
negate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
num_points_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
read_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
sample_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
scale_y_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
smooth_funct_1d_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
smooth_funct_1d_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
transform_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
write_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
x_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
y_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
zero_crossings_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
17.9 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
angle_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
angle_lx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
distance_cc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1188
distance_cc_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
distance_lc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
distance_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
distance_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
distance_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
distance_pp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
distance_pr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194
distance_ps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
distance_rr_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
distance_rr_min_dil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
distance_sc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
distance_sl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
distance_sr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
distance_ss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
get_points_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
intersection_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203
projection_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
17.10 Grid-Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
connect_grid_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
create_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
find_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
gen_arbitrary_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
gen_grid_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
17.11 Hough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
hough_circle_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
hough_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
hough_line_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
hough_line_trans_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
hough_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
hough_lines_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
select_matching_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
17.12 Image-Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
clear_all_variation_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
clear_train_data_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
clear_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
compare_ext_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
compare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
create_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
get_thresh_images_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
get_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
prepare_direct_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
prepare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
read_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
train_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
write_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
17.13 Kalman-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
filter_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
read_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
sensor_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
update_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
17.14 Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
close_all_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
close_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
fuzzy_measure_pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
fuzzy_measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
fuzzy_measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
gen_measure_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
gen_measure_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250
measure_projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
measure_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
reset_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1254
set_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1254
set_fuzzy_measure_norm_pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
translate_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1258
17.15 OCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
close_all_ocvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
close_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
create_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1260
do_ocv_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261
read_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
traind_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
write_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1263
17.16 Shape-from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
depth_from_focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
estimate_al_am . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
estimate_sl_al_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
estimate_sl_al_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
estimate_tilt_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
estimate_tilt_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1267
phot_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1267
select_grayvalues_from_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268
sfs_mod_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1269
sfs_orig_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1270
sfs_pentland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1271
shade_height_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
17.17 Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
binocular_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
binocular_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
binocular_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
disparity_to_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
disparity_to_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
distance_to_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
essential_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
gen_binocular_proj_rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
gen_binocular_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
intersect_lines_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
match_essential_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
match_fundamental_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
match_rel_pose_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1298
reconst3d_from_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1301
rel_pose_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
vector_to_essential_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
vector_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1306
vector_to_rel_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1308
17.18 Tools-Legacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1310
decode_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1310
decode_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1311
discrete_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1312
find_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
find_1d_bar_code_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
find_1d_bar_code_scanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
find_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
gen_1d_bar_code_descr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1324
gen_1d_bar_code_descr_gen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1326
gen_2d_bar_code_descr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
get_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
get_1d_bar_code_scanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
get_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1332
get_2d_bar_code_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1336
18 Tuple 1339
18.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
tuple_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
tuple_acos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
tuple_add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
tuple_asin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
tuple_atan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
tuple_atan2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
tuple_ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
tuple_cos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
tuple_cosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
tuple_cumul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343
tuple_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343
tuple_div . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343
tuple_exp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1344
tuple_fabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1344
tuple_floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1345
tuple_fmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1345
tuple_ldexp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1345
tuple_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1346
tuple_log10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1346
tuple_max2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
tuple_min2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
tuple_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1348
tuple_mult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1348
tuple_neg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1348
tuple_pow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
tuple_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
tuple_sgn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
tuple_sin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
tuple_sinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
tuple_sqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1351
tuple_sub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1351
tuple_tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
tuple_tanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
18.2 Bit-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
tuple_band . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
tuple_bnot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1353
tuple_bor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1353
tuple_bxor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1354
tuple_lsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1354
tuple_rsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1355
18.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1355
tuple_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1355
tuple_greater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
tuple_greater_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
tuple_less . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
tuple_less_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
tuple_not_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358
18.4 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358
tuple_chr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358
tuple_chrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_is_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_ord . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_ords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
tuple_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
tuple_round . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
18.5 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
tuple_concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
tuple_gen_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
tuple_rand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
18.6 Element-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_sort_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
18.7 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
tuple_sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
18.8 Logical-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
tuple_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
18.9 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
tuple_remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
tuple_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
tuple_select_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
tuple_select_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_str_bit_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_uniq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
18.10 String-Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_regexp_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_regexp_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_regexp_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
tuple_regexp_test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
tuple_split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
tuple_str_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_str_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_strchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_strlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_strrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
tuple_strrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
tuple_strstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
19 XLD 1385
19.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
get_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
get_lines_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
get_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
get_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
19.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
gen_contour_nurbs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
gen_contour_polygon_rounded_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
gen_contour_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
gen_contour_region_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
gen_contours_skeleton_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
gen_cross_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
gen_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
gen_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
gen_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
gen_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
mod_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
19.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
area_center_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
area_center_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1399
circularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1400
compactness_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1400
contour_point_num_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
convexity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
diameter_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
dist_ellipse_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
dist_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
dist_rectangle2_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
eccentricity_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1407
eccentricity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1408
elliptic_axis_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1408
elliptic_axis_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1410
fit_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
fit_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1413
fit_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
fit_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
get_contour_angle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419
get_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
get_contour_global_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
get_regress_params_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
info_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1422
length_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
local_max_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
max_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
moments_any_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
moments_any_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
moments_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428
moments_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
orientation_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1430
orientation_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1430
query_contour_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
query_contour_global_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
select_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
select_shape_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1433
select_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
smallest_circle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436
smallest_rectangle1_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1437
smallest_rectangle2_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1438
test_self_intersection_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
test_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
19.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1440
affine_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1440
affine_trans_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
gen_parallel_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
polar_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443
polar_trans_contour_xld_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
projective_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
19.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
intersection_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
intersection_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
symm_difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
symm_difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
union2_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
union2_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
19.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1454
add_noise_white_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1454
clip_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
close_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
combine_roads_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
crop_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
merge_cont_line_scan_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
regress_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
segment_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1460
shape_trans_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1462
smooth_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
sort_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
split_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1464
union_adjacent_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1465
union_cocircular_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1466
union_collinear_contours_ext_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
union_collinear_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1469
union_straight_contours_histo_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1471
union_straight_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
Index 1475
Chapter 1
Classification
1.1 Gaussian-Mixture-Models
add_sample_class_gmm ( : : GMMHandle, Features, ClassID,
Randomize : )
1
2 CHAPTER 1. CLASSIFICATION
clear_all_class_gmm ( : : : )
clear_class_gmm ( : : GMMHandle : )
HALCON 8.0.2
Parameter
clear_samples_class_gmm ( : : GMMHandle : )
exactly one parameter: The parameter determines the exact number of centers to be used for all classes.
exactly two parameters: The first parameter determines the minimum number of centers, the second the maximum number of centers for all classes.
exactly 2 · NumClasses parameters: The parameters are interpreted as alternating pairs: each first parameter determines the minimum and each second parameter the maximum number of centers for the respective class.
When upper and lower bounds are specified, the optimum number of centers is determined with the help of the Minimum Message Length (MML) criterion. In general, we recommend starting the training with a (too) large number of centers as the maximum and the expected number of centers as the minimum.
Each center is described by the parameters center m_j, covariance matrix C_j, and mixing coefficient P_j. These parameters are calculated from the training data by means of the Expectation Maximization (EM) algorithm. A GMM can approximate an arbitrary probability density, provided that enough centers are used. The covariance matrices C_j have the dimensions NumDim × NumDim (NumComponents × NumComponents if preprocessing is used) and are symmetric. Further constraints can be given by CovarType:
For CovarType = ’spherical’, C_j is a scalar multiple of the identity matrix, C_j = s_j^2 I. The center density function p(x|j) is

p(x|j) = \frac{1}{(2\pi s_j^2)^{d/2}} \exp\left(-\frac{\lVert x - m_j \rVert^2}{2 s_j^2}\right)

For CovarType = ’diag’, C_j is a diagonal matrix, C_j = \mathrm{diag}(s_{j,1}^2, \dots, s_{j,d}^2). The center density function p(x|j) is

p(x|j) = \frac{1}{\prod_{i=1}^{d} (2\pi s_{j,i}^2)^{1/2}} \exp\left(-\sum_{i=1}^{d} \frac{(x_i - m_{j,i})^2}{2 s_{j,i}^2}\right)

For CovarType = ’full’, C_j is a positive definite matrix. The center density function p(x|j) is

p(x|j) = \frac{1}{(2\pi)^{d/2} \lvert C_j \rvert^{1/2}} \exp\left(-\frac{1}{2}(x - m_j)^T C_j^{-1} (x - m_j)\right)
The complexity of the calculations increases from CovarType = ’spherical’ through CovarType = ’diag’ to CovarType = ’full’. At the same time, the flexibility of the centers increases. In general, ’spherical’ therefore needs higher values of NumCenters than ’full’.
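The center density functions for the three covariance types can be illustrated numerically. The following pure-Python sketch (our variable names, not HALCON code) implements the ’spherical’ and ’diag’ cases; the ’full’ case would additionally require |C_j| and C_j^{-1}:

```python
import math

def density_spherical(x, m, s2):
    # p(x|j) for C_j = s_j^2 * I (all coordinates share one variance s2)
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, m))
    return math.exp(-sq / (2.0 * s2)) / (2.0 * math.pi * s2) ** (d / 2.0)

def density_diag(x, m, s2):
    # p(x|j) for a diagonal C_j; s2 holds one variance per coordinate
    norm = 1.0
    expo = 0.0
    for xi, mi, vi in zip(x, m, s2):
        norm *= math.sqrt(2.0 * math.pi * vi)
        expo += (xi - mi) ** 2 / (2.0 * vi)
    return math.exp(-expo) / norm

# At the center, the spherical density in d = 2 with s^2 = 1 is 1/(2*pi):
print(density_spherical([0.0, 0.0], [0.0, 0.0], 1.0))   # ~0.15915
```

With equal variances in every coordinate, ’diag’ reduces to ’spherical’, which illustrates why ’spherical’ is the least flexible (and cheapest) of the three types.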
The procedure to use GMM is as follows: First, a GMM is created by create_class_gmm. Then,
training vectors are added by add_sample_class_gmm, afterwards they can be written to disk with
write_samples_class_gmm. With train_class_gmm the classifier center parameters (defined above)
are determined. Furthermore, they can be saved with write_class_gmm for later classifications.
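The calling sequence above can be mirrored in a small sketch. The Python class below uses stand-ins for the HALCON operators (names and internals are ours; training is a trivial per-class mean/variance estimate rather than the full EM algorithm):

```python
import math

class SimpleGMM:
    # stand-in for create_class_gmm: one spherical center per class
    def __init__(self, num_classes):
        self.samples = {c: [] for c in range(num_classes)}
        self.centers = {}

    def add_sample(self, features, class_id):   # cf. add_sample_class_gmm
        self.samples[class_id].append(features)

    def train(self):                            # cf. train_class_gmm (no EM here)
        for c, xs in self.samples.items():
            n = len(xs)
            mean = sum(xs) / n
            var = sum((x - mean) ** 2 for x in xs) / n or 1.0
            self.centers[c] = (mean, var)

    def classify(self, x):                      # cf. classify_class_gmm
        def logp(c):
            m, v = self.centers[c]
            return -((x - m) ** 2) / (2.0 * v) - 0.5 * math.log(v)
        return max(self.centers, key=logp)

gmm = SimpleGMM(2)
for x in (0.9, 1.1, 1.0):
    gmm.add_sample(x, 0)
for x in (4.8, 5.2, 5.0):
    gmm.add_sample(x, 1)
gmm.train()
print(gmm.classify(1.3))   # -> 0
```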
From the mixing probabilities P_j and the center density functions p(x|j), the probability density function p(x) can be calculated by:

p(x) = \sum_{j=1}^{n_{comp}} P_j \, p(x|j)
The probability density function p(x) can be evaluated with evaluate_class_gmm for a feature vector x. classify_class_gmm sorts the posterior probabilities of the classes and thus determines the most probable class of the feature vector. The parameters Preprocessing and NumComponents can be used to preprocess the training data and reduce its dimensionality. These parameters are explained in the description of the operator create_class_mlp.
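The decision rule can be written out as follows (pure Python, 1-D Gaussians for brevity; all names are ours, not HALCON's):

```python
import math

def gauss(x, m, s2):
    # 1-D center density p(x|j)
    return math.exp(-(x - m) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

def classify(x, centers, priors):
    # evaluate P_j * p(x|j) per class and pick the most probable one
    scores = [p * gauss(x, m, s2) for (m, s2), p in zip(centers, priors)]
    density = sum(scores)                  # p(x), cf. evaluate_class_gmm
    probs = [s / density for s in scores]  # posterior probabilities
    return max(range(len(probs)), key=probs.__getitem__), density

centers = [(0.0, 1.0), (4.0, 1.0)]   # (mean, variance) of two classes
best, density = classify(1.0, centers, [0.5, 0.5])
print(best)   # -> 0 (x = 1.0 is closer to the center at 0.0)
```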
create_class_gmm initializes the coordinates of the centers with random numbers. To ensure that the results of
training the classifier with train_class_gmm are reproducible, the seed value of the random number generator
is passed in RandSeed.
Parameter
. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of dimensions of the feature space.
Default Value : 3
Suggested values : NumDim ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumDim ≥ 1
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the GMM.
Default Value : 5
Suggested values : NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : NumClasses ≥ 1
. NumCenters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Number of centers per class.
Default Value : 1
Suggested values : NumCenters ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30}
Restriction : NumCenters ≥ 1
. CovarType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the covariance matrices.
Default Value : ’spherical’
List of values : CovarType ∈ {’spherical’, ’diag’, ’full’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default Value : ’normalization’
List of values : Preprocessing ∈ {’none’, ’normalization’, ’principal_components’,
’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed value of the random number generator that is used to initialize the GMM with random values.
Default Value : 42
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; integer
GMM handle.
Example
Result
If the parameters are valid, the operator create_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
create_class_gmm is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_gmm, add_samples_image_class_gmm
Alternatives
create_class_mlp, create_class_svm, create_class_box
See also
clear_class_gmm, train_class_gmm, classify_class_gmm, evaluate_class_gmm,
classify_image_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
and returned for each class in ClassProb. The formulas for the calculation of the center density function p(x|j)
are described with create_class_gmm.
The probability density of the feature vector is computed as the sum of the class-conditional densities weighted with the prior class probabilities

p(x) = \sum_{i=1}^{n_{classes}} \Pr(i) \, p(x|i)

and is returned in Density. Here, \Pr(i) are the prior probabilities of the classes as computed by train_class_gmm. Density can be used for novelty detection, i.e., to reject feature vectors that do not belong to any of the trained classes. However, since Density depends on the scaling of the feature vectors, and since Density is a probability density and consequently need not lie between 0 and 1, novelty detection can typically be performed more easily with KSigmaProb (see below).
A k-sigma error ellipsoid is defined as the locus of points for which

(x - \mu)^T C^{-1} (x - \mu) = k^2

In the one-dimensional case this is the interval [\mu - k\sigma, \mu + k\sigma]. For any 1D Gaussian distribution, approximately 68% of the occurrences of the random variable lie within this range for k = 1, approximately 95% for k = 2, and approximately 99.7% for k = 3. Hence, the probability that a Gaussian distribution generates a random variable outside this range is approximately 32%, 5%, and 0.3%, respectively. This probability is called the k-sigma probability and is denoted by P[k]. P[k] can be computed numerically for univariate as well as for multivariate Gaussian distributions, where it should be noted that for the same values of k, P^{(N)}[k] > P^{(N+1)}[k] (here N and N+1 denote dimensions). For Gaussian mixture models, the k-sigma probability is computed as:
P_{GMM}[x] = \sum_{j=1}^{n_{comp}} P(j) \, P_j[k_j], \quad \text{where } k_j^2 = (x - \mu_j)^T C_j^{-1} (x - \mu_j)
These values are then weighted with the class priors, normalized, and returned for each class in KSigmaProb, such that

KSigmaProb[i] = \frac{\Pr(i)}{\Pr_{max}} \, P_{GMM}[x]
KSigmaProb can be used for novelty detection. Typically, feature vectors having values below 0.0001 should
be rejected. The parameter RejectionThreshold in classify_image_class_gmm is based on the
KSigmaProb values of the features.
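Under the tail-probability interpretation of P[k] described above (the probability of a Mahalanobis distance larger than k), P[k] has a closed form for Gaussians of even dimension. The following pure-Python sketch (our code, not the HALCON implementation) shows why far-away feature vectors receive tiny values and are therefore easy to reject:

```python
import math

def k_sigma_tail(k, d):
    # P(chi^2_d > k^2): closed form for even d,
    # exp(-k^2/2) * sum_{m=0}^{d/2-1} (k^2/2)^m / m!
    assert d % 2 == 0, "closed form sketched for even dimensions only"
    h = k * k / 2.0
    term, total = 1.0, 0.0
    for m in range(d // 2):
        total += term
        term *= h / (m + 1)
    return math.exp(-h) * total

print(k_sigma_tail(2.0, 2))   # exp(-2) ~ 0.1353
print(k_sigma_tail(3.0, 2))   # ~ 0.0111: far samples get tiny values
```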
Before calling evaluate_class_gmm, the GMM must be trained with train_class_gmm.
The position of the maximum value of ClassProb is usually interpreted as the class of the feature vector and the corresponding value as the probability of the class. In this case, classify_class_gmm should be used instead of evaluate_class_gmm, because classify_class_gmm directly returns the class and the corresponding probability.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; integer
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; real
Probability density of the feature vector.
. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Normalized k-sigma-probability for the feature vector.
Result
If the parameters are valid, the operator evaluate_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
evaluate_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
Alternatives
classify_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; integer
GMM handle.
. NumDim (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of dimensions of the feature space.
. NumClasses (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the GMM.
. MinCenters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Minimum number of centers per GMM class.
. MaxCenters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Maximum number of centers per GMM class.
. CovarType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the covariance matrices.
Result
If the parameters are valid, the operator get_params_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
get_params_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
create_class_gmm, read_class_gmm
Possible Successors
add_sample_class_gmm, train_class_gmm
See also
evaluate_class_gmm, classify_class_gmm
Module
Foundation
get_prep_info_class_gmm ( : : GMMHandle,
Preprocessing : InformationCont, CumInformationCont )
Result
If the parameters are valid, the operator get_prep_info_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
get_prep_info_class_gmm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
clear_class_gmm, create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Return a training sample from the training data of a Gaussian Mixture Model (GMM).
get_sample_class_gmm reads out a training sample from the Gaussian Mixture Model (GMM) given by
GMMHandle that was stored with add_sample_class_gmm or add_samples_image_class_gmm.
The index of the sample is specified with NumSample. The index is counted from 0, i.e., NumSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_gmm. The training sample is returned in Features and ClassID. Features
is a feature vector of length NumDim, while ClassID is its class (see add_sample_class_gmm and
create_class_gmm).
get_sample_class_gmm can, for example, be used to reclassify the training data with
classify_class_gmm in order to determine which training samples, if any, are classified incorrectly.
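The reclassification idea amounts to a plain loop over the stored samples. In the sketch below (our Python, with a trivial nearest-center stand-in in place of classify_class_gmm), misclassified training samples are collected by index:

```python
def nearest_center(x, centers):
    # stand-in classifier: index of the closest class center
    return min(range(len(centers)), key=lambda j: abs(x - centers[j]))

# stored training samples as (feature, class_id) pairs
samples = [(0.5, 0), (9.0, 1), (6.0, 0)]
centers = [0.0, 10.0]                    # one center per class

misclassified = [i for i, (x, cid) in enumerate(samples)
                 if nearest_center(x, centers) != cid]
print(misclassified)   # -> [2]: sample 6.0 lands in class 1, not its stored class 0
```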
Parameter
Result
If the parameters are valid, the operator get_sample_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
get_sample_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm,
get_sample_num_class_gmm
Possible Successors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm
Module
Foundation
Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
get_sample_num_class_gmm returns in NumSamples the number of training samples that are stored in the
Gaussian Mixture Model (GMM) given by GMMHandle. get_sample_num_class_gmm should be called
before the individual training samples are read out with get_sample_class_gmm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_gmm).
Parameter
Parallelization Information
read_class_gmm is processed completely exclusively without parallelization.
Possible Successors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm, write_class_gmm
Module
Foundation
Example
Result
If the parameters are valid, the operator train_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
train_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
evaluate_class_gmm, classify_class_gmm, write_class_gmm
Alternatives
read_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
Possible Successors
clear_class_gmm
See also
create_class_gmm, read_class_gmm, write_samples_class_gmm
Module
Foundation
1.2 Hyperboxes
clear_sampset ( : : SampKey : )
Parameter
. SampKey (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; integer
Number of the data set.
Result
clear_sampset returns 2 (H_MSG_TRUE). An exception is raised if the key SampKey does not exist.
Parallelization Information
clear_sampset is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box, learn_class_box, write_class_box
See also
test_sampset_box, learn_sampset_box, read_sampset
Module
Foundation
close_all_class_box ( : : : )
close_class_box ( : : ClassifHandle : )
Module
Foundation
create_class_box ( : : : ClassifHandle )
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; integer
Classifier handle.
. FeatureList (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real / integer / string
Array of attributes (feature vector) to be classified.
Default Value : 1.0
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Number of the class to which the array of attributes has been assigned, or -1 for the rejection class.
Result
enquire_reject_class_box returns 2 (H_MSG_TRUE).
Parallelization Information
enquire_reject_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, set_class_box_param
Possible Successors
learn_class_box, write_class_box, close_class_box
Alternatives
enquire_class_box
See also
test_sampset_box, learn_class_box, learn_sampset_box
Module
Foundation
See also
create_class_box, set_class_box_param
Module
Foundation
learn_sampset_box trains the classifier with the data stored under the key SampKey (see read_sampset). The training sequence is terminated after at most NSamples examples. If NSamples is larger than the number of examples in SampKey, the sequence cyclically restarts at the beginning. If the error falls below the value StopError, the training sequence is terminated prematurely. The error is calculated as N / ErrorN, where N is the number of examples that were classified incorrectly during the last ErrorN training examples.
Typically, ErrorN is the number of examples in SampKey and NSamples is a multiple of it. For example, if a data set with 100 examples should run at most 5 times and terminate with an error below 5%, the corresponding values are NSamples = 500, ErrorN = 100, and StopError = 0.05. A protocol of the training activity is written to the file Outfile.
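The stop criterion can be written out explicitly (a sketch in Python; the names are ours, not HALCON's):

```python
def should_stop(recent_errors, error_n, stop_error):
    # StopError test: N / ErrorN, where N counts the misclassified
    # examples among the last ErrorN training examples
    n_wrong = sum(recent_errors[-error_n:])
    return n_wrong / error_n < stop_error

# 100 examples per pass, at most 5 passes, stop below 5% error:
# NSamples = 500, ErrorN = 100, StopError = 0.05
last_pass = [1] * 4 + [0] * 96       # 4 errors in the last 100 examples
print(should_stop(last_pass, 100, 0.05))   # -> True (4% < 5%)
```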
Parameter
set_class_box_param modifies parameters that control the training sequence during calls to learn_class_box. Only the parameters of the given classifier are modified; all other classifiers remain unchanged. ’min_samples_for_split’ is the minimum number of examples that must have been trained in a cuboid of this classifier before the cuboid is allowed to split. ’split_error’ is the critical error: if it is exceeded and more than ’min_samples_for_split’ examples have been trained, the cuboid splits. ’prop_constant’ controls the extension of the cuboids, which is proportional to the average distance of the training examples in a cuboid from its center. More precisely:

extension × prop_constant = average distance from the expectation value.

This relation holds in every dimension. Hence, inside a cuboid the dimensions of the feature space are assumed to be independent.
The parameters are set to problem-independent default values, which should not be modified without reason. The parameters are only relevant during a learning sequence; they do not influence the behavior of enquire_class_box.
Default setting:
’min_samples_for_split’ = 80,
’split_error’ = 0.1,
’prop_constant’ = 0.25
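The interaction of the first two parameters can be sketched as a simple predicate (our Python, not the internal implementation):

```python
def may_split(n_trained, error_rate,
              min_samples_for_split=80, split_error=0.1):
    # a cuboid may divide itself only after enough training examples
    # and only if its error exceeds the critical value
    return n_trained > min_samples_for_split and error_rate > split_error

print(may_split(100, 0.15))   # -> True
print(may_split(50, 0.15))    # -> False (too few samples in the cuboid)
```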
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; integer
Classifier handle.
. Flag (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the parameter to be set.
Default Value : ’split_error’
Suggested values : Flag ∈ {’min_samples_for_split’, ’split_error’, ’prop_constant’}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Value of the parameter.
Default Value : 0.1
Result
set_class_box_param returns 2 (H_MSG_TRUE).
Parallelization Information
set_class_box_param is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box
Possible Successors
learn_class_box, test_sampset_box, write_class_box, close_class_box,
clear_sampset
See also
enquire_class_box, get_class_box_param, learn_class_box
Module
Foundation
1.3 Neural-Nets
add_sample_class_mlp ( : : MLPHandle, Features, Target : )
add_sample_class_mlp adds a training sample to the multilayer perceptron (MLP) given by MLPHandle.
The training sample is given by Features and Target. Features is the feature vector of the sample, and
consequently must be a real vector of length NumInput, as specified in create_class_mlp. Target is
the target vector of the sample, which must have the length NumOutput (see create_class_mlp) for all
three types of activation functions of the MLP (exception: see below). If the MLP is used for regression (function
approximation), i.e., if OutputFunction = ’linear’, Target is the value of the function at the coordinate
Features. In this case, Target can contain arbitrary real numbers. For OutputFunction = ’logistic’,
Target can only contain the values 0.0 and 1.0. A value of 1.0 specifies that the attribute in question is present,
while a value of 0.0 specifies that the attribute is absent. Because in this case the attributes are independent,
arbitrary combinations of 0.0 and 1.0 can be passed. For OutputFunction = ’softmax’, Target also can only
contain the values 0.0 and 1.0. In contrast to OutputFunction = ’logistic’, the value 1.0 must be present for
exactly one element of the tuple Target. The location in the tuple designates the class of the sample. For ease of
use, a single integer value may be passed if OutputFunction = ’softmax’. This value directly designates the
class of the sample, which is counted from 0, i.e., the class must be an integer between 0 and NumOutput − 1.
The class is converted to a target vector of length NumOutput internally.
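The internal conversion of a class number into a target vector can be sketched as (our Python, not HALCON code):

```python
def class_to_target(class_id, num_output):
    # one-hot target vector of length NumOutput; exactly one entry is 1.0
    assert 0 <= class_id < num_output
    target = [0.0] * num_output
    target[class_id] = 1.0
    return target

print(class_to_target(2, 4))   # -> [0.0, 0.0, 1.0, 0.0]
```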
Before the MLP can be trained with train_class_mlp, all training samples must be added to the MLP with
add_sample_class_mlp.
The number of currently stored training samples can be queried with get_sample_num_class_mlp. Stored
training samples can be read out again with get_sample_class_mlp.
Normally, it is useful to save the training samples in a file with write_samples_class_mlp. This facilitates reusing the samples, adding new training samples to the data set if necessary, and hence training a newly created MLP anew with the extended data set.
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; integer
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample to be stored.
. Target (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / real
Class or target vector of the training sample to be stored.
Result
If the parameters are valid, the operator add_sample_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
add_sample_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
clear_samples_class_mlp, get_sample_num_class_mlp, get_sample_class_mlp
Module
Foundation
classify_class_mlp can only be called if the MLP is used as a classifier with OutputFunction = ’soft-
max’ (see create_class_mlp). Otherwise, an error message is returned. classify_class_mlp cor-
responds to a call to evaluate_class_mlp and an additional step that extracts the best Num classes. As
described with evaluate_class_mlp, the output values of the MLP can be interpreted as probabilities of the
occurrence of the respective classes. However, here the posterior probability ClassProb is further normalized as
ClassProb = p(i|x)/p(x), where p(i|x) and p(x) are defined as in evaluate_class_gmm. In most cases
it should be sufficient to use Num = 1 in order to decide whether the probability of the best class is high enough.
In some applications it may be interesting to also take the second best class into account (Num = 2), particularly if
it can be expected that the classes show a significant degree of overlap.
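Extracting the best Num classes from the output probabilities amounts to the following (a sketch in Python; names are ours, not HALCON's):

```python
def best_classes(class_prob, num):
    # indices of the num largest posterior probabilities, best first
    order = sorted(range(len(class_prob)), key=class_prob.__getitem__,
                   reverse=True)
    return order[:num]

probs = [0.10, 0.62, 0.28]
print(best_classes(probs, 1))   # -> [1]
print(best_classes(probs, 2))   # -> [1, 2]
```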
Parameter
clear_all_class_mlp ( : : : )
Possible Predecessors
classify_class_mlp, evaluate_class_mlp
Alternatives
clear_class_mlp
See also
create_class_mlp, read_class_mlp, write_class_mlp, train_class_mlp
Module
Foundation
clear_class_mlp ( : : MLPHandle : )
clear_samples_class_mlp ( : : MLPHandle : )
Possible Predecessors
train_class_mlp, write_samples_class_mlp
See also
create_class_mlp, clear_class_mlp, add_sample_class_mlp,
read_samples_class_mlp
Module
Foundation
a_j^{(1)} = \sum_{i=1}^{n_i} w_{ji}^{(1)} x_i + b_j^{(1)}, \quad j = 1, \dots, n_h

z_j = \tanh\left(a_j^{(1)}\right), \quad j = 1, \dots, n_h

Here, the matrix w_{ji}^{(1)} and the vector b_j^{(1)} are the weights of the input layer (first layer) of the MLP. In the hidden layer (second layer), the activations z_j are transformed in a first step by using linear combinations of the variables in an analogous manner as above:
a_k^{(2)} = \sum_{j=1}^{n_h} w_{kj}^{(2)} z_j + b_k^{(2)}, \quad k = 1, \dots, n_o

Here, the matrix w_{kj}^{(2)} and the vector b_k^{(2)} are the weights of the second layer of the MLP.
The activation function used in the output layer can be determined by setting OutputFunction. For
OutputFunction = ’linear’, the data are simply copied:
y_k = a_k^{(2)}, \quad k = 1, \dots, n_o
This type of activation function should be used for regression problems (function approximation). This activation
function is not suited for classification problems.
For OutputFunction = ’logistic’, the activations are computed as follows:
y_k = \frac{1}{1 + \exp\left(-a_k^{(2)}\right)}, \quad k = 1, \dots, n_o
This type of activation function should be used for classification problems with multiple (NumOutput) indepen-
dent logical attributes as output. This kind of classification problem is relatively rare in practice.
For OutputFunction = ’softmax’, the activations are computed as follows:
y_k = \frac{\exp\left(a_k^{(2)}\right)}{\sum_{l=1}^{n_o} \exp\left(a_l^{(2)}\right)}, \quad k = 1, \dots, n_o
This type of activation function should be used for common classification problems with multiple (NumOutput)
mutually exclusive classes as output. In particular, OutputFunction = ’softmax’ must be used for the classifi-
cation of pixel data with classify_image_class_mlp.
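The three activation functions can be combined into one forward-pass sketch (pure Python; variable names and the weight values are ours, chosen only for illustration):

```python
import math

def mlp_forward(x, w1, b1, w2, b2, output_function="softmax"):
    # hidden layer: z_j = tanh(sum_i w1[j][i] * x_i + b1[j])
    z = [math.tanh(sum(wji * xi for wji, xi in zip(row, x)) + bj)
         for row, bj in zip(w1, b1)]
    # output layer activations a_k
    a = [sum(wkj * zj for wkj, zj in zip(row, z)) + bk
         for row, bk in zip(w2, b2)]
    if output_function == "linear":
        return a
    if output_function == "logistic":
        return [1.0 / (1.0 + math.exp(-ak)) for ak in a]
    # softmax: mutually exclusive classes, outputs sum to 1 (up to rounding)
    e = [math.exp(ak) for ak in a]
    s = sum(e)
    return [ek / s for ek in e]

w1, b1 = [[0.5, -0.3], [0.2, 0.8]], [0.0, 0.1]   # 2 inputs -> 2 hidden
w2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]  # 2 hidden -> 2 outputs
y = mlp_forward([1.0, 2.0], w1, b1, w2, b2)
print(sum(y))   # softmax outputs sum to 1 (up to rounding)
```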
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the MLP. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification or evaluation.
For Preprocessing = ’normalization’, the feature vectors are normalized by subtracting the mean of the
training vectors and dividing the result by the standard deviation of the individual components of the training
vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization
does not change the length of the feature vector. NumComponents is ignored in this case. This transformation
can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively,
or for data in which the components of the feature vectors are measured in different units (e.g., if some of the data
are gray value features and some are region features, or if region features are mixed, e.g., ’circularity’ (unit: scalar)
and ’area’ (unit: pixel squared)). In these cases, the training of the net will typically require fewer iterations than
without normalization.
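Normalization preprocessing can be sketched as follows (our Python, not the HALCON implementation):

```python
def normalize(vectors):
    # subtract the per-component mean of the training vectors and divide
    # by the per-component standard deviation
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[i] for v in vectors) / n for i in range(d)]
    std = [(sum((v[i] - mean[i]) ** 2 for v in vectors) / n) ** 0.5
           for i in range(d)]
    return [[(v[i] - mean[i]) / std[i] for i in range(d)] for v in vectors]

data = [[1.0, 100.0], [2.0, 300.0], [3.0, 200.0]]   # components in mixed units
normed = normalize(data)
print([round(sum(v[i] for v in normed), 10) for i in range(2)])  # -> [0.0, 0.0]
```

After the transformation, each component has mean 0 and standard deviation 1, so components measured in different units become comparable.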
For Preprocessing = ’principal_components’, a principal component analysis is performed. First, the feature
vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that
decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and
the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that the transformed features with the largest variation are contained in the first components of the transformed feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector, which are typically dominated by noise, without losing a large amount of information. The parameter NumComponents can be used to determine how many of the transformed feature vector components should be used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated.
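The principal component transformation can be sketched with NumPy (our code, not the HALCON implementation; it assumes NumPy is available and omits the preceding normalization for brevity):

```python
import numpy as np

def principal_components(x, num_components):
    # center the training vectors, diagonalize their covariance matrix,
    # and keep the num_components directions with the largest variance
    xc = x - x.mean(axis=0)
    cov = xc.T @ xc / (len(x) - 1)
    eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:num_components]
    return xc @ eigvec[:, order]

rng = np.random.default_rng(42)
x = rng.normal(size=(200, 3))
x[:, 2] = x[:, 0] + 0.01 * x[:, 2]              # third feature nearly redundant
z = principal_components(x, 2)
print(np.allclose(np.cov(z.T)[0, 1], 0.0, atol=1e-10))   # decorrelated -> True
```

Dropping the last component here loses almost no information, which is exactly the situation in which a small NumComponents pays off.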
In contrast to the above three transformations, which can be used for all MLP types, the transformation spec-
ified by Preprocessing = ’canonical_variates’ can only be used if the MLP is used as a classifier with
OutputFunction = ’softmax’. The computation of the canonical variates is also called linear discriminant analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the
training vectors on average over all classes is computed. At the same time, the transformation maximally sepa-
rates the mean values of the individual classes. As for Preprocessing = ’principal_components’, the trans-
formed components are sorted by information content, and hence transformed components with little informa-
tion content can be omitted. For canonical variates, up to min(NumOutput − 1, NumInput) components can
be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the actual number of
input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality
of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transfor-
mations, the number of input variables, and thus usually also the number of hidden units can be reduced. With this,
the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.
Usually, NumHidden should be selected in the order of magnitude of NumInput and NumOutput. In many
cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is
chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e.,
the MLP learns the training data very well, but does not return very good results on unknown data.
create_class_mlp initializes the above described weights with random numbers. To ensure that the results of
training the classifier with train_class_mlp are reproducible, the seed value of the random number generator
is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve
a smaller error by selecting a different value for RandSeed and retraining an MLP.
After the MLP has been created, typically training samples are added to the MLP by repeatedly calling
add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained us-
ing train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the
MLP can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is
used as a classifier (i.e., for OutputFunction = ’softmax’), to classify data using classify_class_mlp.
A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter
HALCON 8.0.2
32 CHAPTER 1. CLASSIFICATION
Result
If the parameters are valid, the operator create_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
create_class_mlp is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_mlp
Alternatives
create_class_svm, create_class_gmm, create_class_box
See also
clear_class_mlp, train_class_mlp, classify_class_mlp, evaluate_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
evaluate_class_mlp ( : : MLPHandle, Features : Result )
Compute the result of evaluating a multilayer perceptron.
If the MLP is used for regression (function approximation), i.e., if OutputFunction = ’linear’, Result
is the value of the function at the coordinate Features. For OutputFunction = ’logistic’ and ’softmax’,
the values in Result can be interpreted as probabilities. Hence, for OutputFunction = ’logistic’ the ele-
ments of Result represent the probabilities of the presence of the respective independent attributes. Typically,
a threshold of 0.5 is used to decide whether the attribute is present or not. Depending on the application, other
thresholds may be used as well. For OutputFunction = ’softmax’ usually the position of the maximum value
of Result is interpreted as the class of the feature vector, and the corresponding value as the probability of the
class. In this case, classify_class_mlp should be used instead of evaluate_class_mlp because
classify_class_mlp directly returns the class and corresponding probability.
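The interpretation of Result described above can be sketched as follows in Python (a conceptual illustration, not HDevelop code; the Result values are hypothetical):

```python
# Hypothetical Result vector as returned by evaluate_class_mlp with
# OutputFunction = 'softmax' for a three-class problem
result = [0.05, 0.85, 0.10]

# The position of the maximum value is interpreted as the class and the
# corresponding value as its probability (classify_class_mlp returns
# these directly)
class_id = max(range(len(result)), key=lambda i: result[i])
probability = result[class_id]

# For OutputFunction = 'logistic', each element would instead be
# thresholded, typically at 0.5, to decide whether an attribute is present
attributes_present = [p > 0.5 for p in result]
```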
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; integer
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Result of evaluating the feature vector with the MLP.
Result
If the parameters are valid, the operator evaluate_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
evaluate_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
classify_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
get_prep_info_class_mlp ( : : MLPHandle,
Preprocessing : InformationCont, CumInformationCont )
Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
get_prep_info_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e.,
it is computed solely based on the training data, independent of any error rate on the training data. The informa-
tion content is computed for all relevant components of the transformed feature vectors (NumInput for ’princi-
pal_components’ and min(NumOutput−1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n compo-
nents is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains
the sums of the first n elements of InformationCont. To use get_prep_info_class_mlp, a suffi-
cient number of samples must be added to the multilayer perceptron (MLP) given by MLPHandle by using
add_sample_class_mlp or read_samples_class_mlp.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_mlp. The call to get_prep_info_class_mlp al-
ready requires the creation of an MLP, and hence the setting of NumComponents in create_class_mlp to
an initial value. However, if get_prep_info_class_mlp is called it is typically not known how many com-
ponents are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step ap-
proach should typically be used to select NumComponents: In a first step, an MLP with the maximum number for
NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are added to the MLP and are saved in a file using
write_samples_class_mlp. Subsequently, get_prep_info_class_mlp is used to determine the
information content of the components, and with this NumComponents. After this, a new MLP with the de-
sired number of components is created, and the training samples are read with read_samples_class_mlp.
Finally, the MLP is trained with train_class_mlp.
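The selection of NumComponents from the information content can be sketched as follows (Python, not HDevelop; the InformationCont values are hypothetical):

```python
# Hypothetical InformationCont values as returned by
# get_prep_info_class_mlp (fractions between 0 and 1 that sum to 1)
information_cont = [0.55, 0.25, 0.12, 0.05, 0.03]

# CumInformationCont contains the sums of the first n elements
cum_information_cont = []
total = 0.0
for c in information_cont:
    total += c
    cum_information_cont.append(total)

# Smallest NumComponents whose cumulative information content
# represents at least 90% of the data
num_components = next(
    n + 1 for n, v in enumerate(cum_information_cont) if v >= 0.9)
```

With the values above, the first three components already carry 92% of the information content, so NumComponents = 3 would be used in the new call to create_class_mlp.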
Parameter
Result
If the parameters are valid, the operator get_prep_info_class_mlp returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
get_prep_info_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
clear_class_mlp, create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
* Train an MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’canonical_variates’,
NComp, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Reclassify the training samples
get_sample_num_class_mlp (MLPHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_mlp (MLPHandle, I, Data, Target)
classify_class_mlp (MLPHandle, Data, 1, Class, Confidence)
Result := gen_tuple_const(NOut,0)
Result[Class] := 1
Diffs := Target-Result
if (sum(fabs(Diffs)) > 0)
* Sample has been classified incorrectly
endif
endfor
clear_class_mlp (MLPHandle)
Result
If the parameters are valid, the operator get_sample_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
get_sample_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp, get_sample_num_class_mlp
Possible Successors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp
Module
Foundation
Return the number of training samples stored in the training data of a multilayer perceptron.
get_sample_num_class_mlp returns in NumSamples the number of training samples that are stored in
the multilayer perceptron (MLP) given by MLPHandle. get_sample_num_class_mlp should be called
before the individual training samples are accessed with get_sample_class_mlp, e.g., for the purpose of
reclassifying the training data (see get_sample_class_mlp).
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; integer
MLP handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If MLPHandle is valid, the operator get_sample_num_class_mlp returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Parallelization Information
get_sample_num_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
get_sample_class_mlp
See also
create_class_mlp
Module
Foundation
values determined with RandSeed in create_class_mlp result in a relatively large optimum error, i.e., that
the optimization gets stuck in a local minimum. If it can be conjectured that this has happened the MLP should be
created anew with a different value for RandSeed in order to check whether a significantly smaller error can be
achieved.
The parameters MaxIterations, WeightTolerance, and ErrorTolerance control the nonlinear opti-
mization algorithm. MaxIterations specifies the maximum number of iterations of the optimization algorithm.
In practice, values between 100 and 200 should be sufficient for most problems. WeightTolerance specifies
a threshold for the change of the weights per iteration. Here, the absolute value of the change of the weights
between two iterations is summed. Hence, this value depends on the number of weights as well as the size of
the weights, which in turn depend on the scaling of the training data. Typically, values between 0.00001 and 1
should be used. ErrorTolerance specifies a threshold for the change of the error value per iteration. This
value depends on the number of training samples as well as the number of output variables of the MLP. Also here,
values between 0.00001 and 1 should typically be used. The optimization is terminated if the weight change is
smaller than WeightTolerance and the change of the error value is smaller than ErrorTolerance. In any
case, the optimization is terminated after at most MaxIterations iterations. It should be noted that, depending
on the size of the MLP and the number of training samples, the training can take from a few seconds to several
hours.
On output, train_class_mlp returns the error of the MLP with the optimal weights on the training samples
in Error. Furthermore, ErrorLog contains the error value as a function of the number of iterations. With
this, it is possible to decide whether a second training of the MLP with the same training data without creating
the MLP anew makes sense. If ErrorLog is regarded as a function, it should drop off steeply initially, while
leveling out very flatly at the end. If ErrorLog is still relatively steep at the end, it usually makes sense to call
train_class_mlp again. It should be noted, however, that this mechanism should not be used to train the
MLP successively with MaxIterations = 1 (or other small values for MaxIterations) because this will
substantially increase the number of iterations required to train the MLP.
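The termination criterion described above can be sketched as follows (Python, conceptual only; this mirrors the documented behavior, not the actual implementation of train_class_mlp):

```python
def should_stop(weight_change, error_change, iteration,
                weight_tolerance, error_tolerance, max_iterations):
    """Termination test of the nonlinear optimization (sketch).

    weight_change: summed absolute change of all weights in this iteration
    error_change:  change of the error value in this iteration
    """
    # The optimization is terminated after at most MaxIterations iterations
    if iteration >= max_iterations:
        return True
    # Otherwise it terminates only when BOTH changes fall below
    # their respective thresholds
    return (weight_change < weight_tolerance and
            error_change < error_tolerance)
```

For example, with WeightTolerance = 1 and ErrorTolerance = 0.01, a weight change of 0.5 together with an error change of 0.05 does not yet stop the optimization, because only one of the two thresholds is met.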
Parameter
* Train an MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’normalization’, 1,
42, MLPHandle)
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
Result
If the parameters are valid, the operator train_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
train_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing = ’canon-
ical_variates’ is used. This typically indicates that not enough training samples have been stored for each class.
Parallelization Information
train_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
evaluate_class_mlp, classify_class_mlp, write_class_mlp
Alternatives
read_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
1.4 Support-Vector-Machines
Parameter
Possible Predecessors
train_class_svm, read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation
clear_all_class_svm ( : : : )
clear_class_svm ( : : SVMHandle : )
See also
create_class_svm, read_class_svm, write_class_svm, train_class_svm
Module
Foundation
clear_samples_class_svm ( : : SVMHandle : )
f(z) = sign( Σ_{i=1}^{n_sv} α_i · y_i · ⟨x_i, z⟩ + b )
Here, x_i are the support vectors, y_i encodes their class membership (±1), and α_i are the weight coefficients. The
distance of the hyperplane to the origin is b. The α_i and b are determined during training with train_class_svm.
Note that only a subset of the original training set (n_sv: number of support vectors) is necessary for the definition
of the decision boundary, and therefore data vectors that are not support vectors are discarded. The classification
speed depends on the evaluation of the dot product between the support vectors and the feature vector to be classified,
and hence depends on the length of the feature vector and the number n_sv of support vectors.
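The decision function can be sketched in Python (a conceptual illustration, not HDevelop code; the support vectors, weights, and bias below are made-up values):

```python
def svm_decide(support_vectors, alphas, labels, b, z):
    """Evaluate f(z) = sign(sum_i alpha_i * y_i * <x_i, z> + b)."""
    s = b
    for x, alpha, y in zip(support_vectors, alphas, labels):
        dot = sum(xi * zi for xi, zi in zip(x, z))  # <x_i, z>
        s += alpha * y * dot
    return 1 if s >= 0 else -1

# Two hypothetical support vectors separating points left and right
# of the vertical axis
support_vectors = [(1.0, 0.0), (-1.0, 0.0)]
alphas = [0.5, 0.5]
labels = [1, -1]
b = 0.0
```

Evaluating the classifier is a loop over the n_sv support vectors, which is why the classification time grows with both the feature vector length and the number of support vectors.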
For classification problems in which the classes are not linearly separable the algorithm is extended in two ways.
First, during training a certain amount of errors (overlaps) is compensated with the use of slack variables. This
means that the α are upper bounded by a regularization constant. To enable an intuitive control of the amount of
training errors, the Nu-SVM version of the training algorithm is used. Here, the regularization parameter Nu is an
asymptotic upper bound on the number of training errors and an asymptotic lower bound on the number of support
vectors. As a rule of thumb, the parameter Nu should be set to the prior expectation of the application’s specific
error ratio, e.g., 0.01 (corresponding to a maximum training error of 1%). Please note that too large a value for Nu
might lead to an infeasible training problem, i.e., the SVM cannot be trained correctly (see train_class_svm
for more details). Since this can only be determined during training, an exception can only be raised there. In this
case, a new SVM with a smaller Nu must be created.
Second, because the above SVM exclusively calculates dot products between the feature vectors, it is possible to
incorporate a kernel function into the training and testing algorithm. This means that the dot products are substi-
tuted by a kernel function, which implicitly performs the dot product in a higher dimensional feature space. Given
the appropriate kernel transformation, an originally not linearly separable classification task becomes linearly sep-
arable in the higher dimensional feature space.
Different kernel functions can be selected with the parameter KernelType. For KernelType = ’linear’ the
dot product, as specified in the above formula, is calculated. This kernel should only be used for linearly or nearly
linearly separable classification tasks. The parameter KernelParam is ignored here.
The radial basis function (RBF) KernelType = ’rbf’ is the best choice for a kernel function because it achieves
good results for many classification tasks. It is defined as:
K(x, z) = exp(−γ · ‖x − z‖²)
Here, the parameter KernelParam is used to select γ. The intuitive meaning of γ is the amount of influence of
a support vector upon its surroundings. A big value of γ (small influence on the surroundings) means that each
training vector becomes a support vector. The training algorithm learns the training data “by heart”, but lacks any
generalization ability (over-fitting). Additionally, the training and classification times grow significantly. Too small
a value for γ (big influence on the surroundings) leads to few support vectors defining the separating hyperplane
(under-fitting). One typical strategy is to select a small γ-Nu pair and consecutively increase the values as long as
the recognition rate increases.
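The RBF kernel and the effect of γ can be illustrated in Python (a conceptual sketch, not HDevelop code):

```python
import math

def rbf_kernel(x, z, gamma):
    """K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

# A larger gamma shrinks the influence of a support vector on its
# surroundings: the kernel value drops off faster with distance
small_gamma = rbf_kernel((0.0, 0.0), (1.0, 0.0), 0.02)
large_gamma = rbf_kernel((0.0, 0.0), (1.0, 0.0), 10.0)
```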
With KernelType = ’polynomial_homogeneous’ or ’polynomial_inhomogeneous’, polynomial kernels can be
selected. They are defined in the following way:
K(x, z) = ⟨x, z⟩^d (homogeneous) and K(x, z) = (⟨x, z⟩ + 1)^d (inhomogeneous)
The degree d of the polynomial kernel must be set with KernelParam. Please note that a polynomial of too high a
degree (d > 10) might result in numerical problems.
As a rule of thumb, the RBF kernel provides a good choice for most of the classification problems and should
therefore be used in almost all cases. Nevertheless, the linear and polynomial kernels might be better suited
for certain applications and can be tested for comparison. Please note that the novelty-detection Mode and the
reduce_class_svm operator are provided only for the RBF kernel.
Mode specifies the general classification task, which is either how to break down a multi-class decision problem to
binary sub-cases or whether to use a special classifier mode called ’novelty-detection’. Mode = ’one-versus-all’
creates a classifier where each class is compared to the rest of the training data. During testing the class with the
largest output (see the classification formula without sign) is chosen. Mode = ’one-versus-one’ creates a binary
classifier between each single class. During testing a vote is cast and the class with the majority of the votes
is selected. The optimal Mode for multi-class classification depends on the number of classes. Given n classes
’one-versus-all’ creates n classifiers, whereas ’one-versus-one’ creates n(n − 1)/2. Note that for a binary decision
task ’one-versus-one’ would create exactly one, whereas ’one-versus-all’ unnecessarily creates two symmetric
classifiers. For few classes (3-10) ’one-versus-one’ is faster for training and testing, because each sub-classifier
is trained on less data, which results in fewer support vectors overall. In case of many classes ’one-versus-all’
is preferable, because ’one-versus-one’ generates a prohibitively large amount of sub-classifiers, as their number
grows quadratically with the number of classes.
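The number of binary sub-classifiers created for each Mode can be computed as follows (Python sketch, not HDevelop):

```python
def num_sub_classifiers(num_classes, mode):
    """Number of binary sub-classifiers created for the given Mode."""
    if mode == 'one-versus-all':
        return num_classes                            # one per class
    if mode == 'one-versus-one':
        return num_classes * (num_classes - 1) // 2   # one per class pair
    raise ValueError('unknown mode: ' + mode)
```

For n = 10 classes, ’one-versus-all’ creates 10 sub-classifiers, while ’one-versus-one’ already creates 45; the quadratic growth is why ’one-versus-all’ is preferable for many classes.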
A special case of classification is Mode = ’novelty-detection’, where the test data is classified with regard to
membership to the training data. The separating hyperplane lies around the training data and thereby implicitly
divides the training data from the rejection class. The advantage is that the rejection class is not defined explicitly,
which is difficult to do in certain applications like texture classification. The resulting support vectors are all lying
at the border. With the parameter Nu, the ratio of outliers in the training data set is specified.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the SVM. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification.
For Preprocessing = ’normalization’, the feature vectors are normalized. In case of a polynomial kernel, the
minimum and maximum value of the training data set is transformed to -1 and +1. In case of the RBF kernel, the
data is normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation
of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and
a standard deviation of 1. The normalization does not change the length of the feature vector. NumComponents
is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors
differ substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are
measured in different units (e.g., if some of the data are gray value features and some are region features, or if
region features are mixed, e.g., ’circularity’ (unit: scalar) and ’area’ (unit: pixel squared)). The normalization
transformation should be performed in general, because it increases the numerical stability during training/testing.
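The normalization for the RBF-kernel case (zero mean and unit standard deviation per component) can be sketched in plain Python (a conceptual illustration, not HDevelop code; the training data is made up):

```python
def fit_normalization(train):
    """Per-component mean and standard deviation of the training set."""
    n = len(train)
    dims = len(train[0])
    means = [sum(row[d] for row in train) / n for d in range(dims)]
    stds = [(sum((row[d] - means[d]) ** 2 for row in train) / n) ** 0.5
            for d in range(dims)]
    return means, stds

def normalize(x, means, stds):
    # The dimensionality of the vector is unchanged; each component is
    # shifted to mean 0 and scaled to standard deviation 1
    return [(xi - m) / s for xi, m, s in zip(x, means, stds)]

# Hypothetical features measured in very different units
train = [[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]]
means, stds = fit_normalization(train)
normalized = [normalize(row, means, stds) for row in train]
```

The same mean and standard deviation computed on the training set are applied to every feature vector later during classification.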
For Preprocessing = ’principal_components’, a principal component analysis (PCA) is performed. First, the
feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space)
that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is
0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that
the transformed features with the most variation are contained in the first components of the transformed
feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumFeatures components can be selected. The operator get_prep_info_class_svm can
be used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated. Please note that the RBF kernel is very robust against the dimensionality reduction
performed by PCA and should therefore be the first choice when speeding up the classification time.
The transformation specified by Preprocessing = ’canonical_variates’ first normalizes the training vectors
and then decorrelates the training vectors on average over all classes. At the same time, the transformation maxi-
mally separates the mean values of the individual classes. As for Preprocessing = ’principal_components’,
the transformed components are sorted by information content, and hence transformed components with little infor-
mation content can be omitted. For canonical variates, up to min(NumClasses−1, NumFeatures) components
can be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_svm. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction. The computation of the canonical variates is also called linear discriminant
analysis.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the length of input
data of the SVM is determined by NumComponents, whereas NumFeatures determines the dimensionality of
the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transforma-
tions, the size of the SVM with respect to data length is reduced, leading to shorter training/classification times by
the SVM.
After the SVM has been created with create_class_svm, typically training samples are added to the SVM
by repeatedly calling add_sample_class_svm or read_samples_class_svm. After this, the SVM is
typically trained using train_class_svm. Hereafter, the SVM can be saved using write_class_svm.
Alternatively, the SVM can be used immediately after training to classify data using classify_class_svm.
A comparison of the SVM and the multi-layer perceptron (MLP) (see create_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter
. NumFeatures (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the SVM.
Default Value : 10
Suggested values : NumFeatures ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumFeatures ≥ 1
. KernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The kernel type.
Default Value : ’rbf’
List of values : KernelType ∈ {’linear’, ’rbf’, ’polynomial_inhomogeneous’, ’polynomial_homogeneous’}
. KernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Additional parameter for the kernel function. In the case of the RBF kernel, the value of γ; for polynomial kernels,
the degree.
Default Value : 0.02
Suggested values : KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. Nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularisation constant of the SVM.
Default Value : 0.05
Suggested values : Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction : (Nu > 0.0) ∧ (Nu < 1.0)
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes.
Default Value : 5
Suggested values : NumClasses ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : NumClasses ≥ 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The mode of the SVM.
Default Value : ’one-versus-one’
List of values : Mode ∈ {’novelty-detection’, ’one-versus-all’, ’one-versus-one’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default Value : ’normalization’
List of values : Preprocessing ∈ {’none’, ’normalization’, ’principal_components’,
’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
SVM handle.
Example
* Data = [...]
* Class = ...
add_sample_class_svm (SVMHandle, Data, Class)
endfor
* Train the SVM
train_class_svm (SVMHandle, 0.001, ’default’)
* Use the SVM to classify unknown data
for J := 0 to N-1 by 1
* Extract features
* Features = [...]
classify_class_svm (SVMHandle, Features, 1, Class)
endfor
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator create_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
create_class_svm is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_svm
Alternatives
create_class_mlp, create_class_gmm, create_class_box
See also
clear_class_svm, train_class_svm, classify_class_svm
References
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Module
Foundation
get_prep_info_class_svm ( : : SVMHandle,
Preprocessing : InformationCont, CumInformationCont )
Compute the information content of the preprocessed feature vectors of a support vector machine.
get_prep_info_class_svm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vec-
tor, i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vec-
tors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canoni-
cal_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_class_svm, a sufficient number of samples must be added to the support vector machine
(SVM) given by SVMHandle by using add_sample_class_svm or read_samples_class_svm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_svm. The call to get_prep_info_class_svm al-
ready requires the creation of an SVM, and hence the setting of NumComponents in create_class_svm
to an initial value. However, when get_prep_info_class_svm is called, it is typically not known how
many components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-
step approach should typically be used to select NumComponents: In a first step, an SVM with the maximum
number for NumComponents is created (NumFeatures for ’principal_components’ and min(NumClasses−
1, NumFeatures) for ’canonical_variates’). Then, the training samples are added to the SVM and are saved in
a file using write_samples_class_svm. Subsequently, get_prep_info_class_svm is used to deter-
mine the information content of the components, and with this NumComponents. After this, a new SVM with the
desired number of components is created, and the training samples are read with read_samples_class_svm.
Finally, the SVM is trained with train_class_svm.
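The two-step approach described above can be sketched as follows (a sketch, assuming ’principal_components’ preprocessing, a 90% threshold, and the placeholder file name ’samples.mtf’):

* Step 1: create an SVM with the maximum number of components
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
                  ’one-versus-all’, ’principal_components’, NumFeatures,
                  SVMHandle)
* Add the training samples with add_sample_class_svm, then save them
write_samples_class_svm (SVMHandle, ’samples.mtf’)
get_prep_info_class_svm (SVMHandle, ’principal_components’,
                         InformationCont, CumInformationCont)
* Select the first component count whose cumulative content exceeds 90%
NumComponents := |CumInformationCont|
for J := 0 to |CumInformationCont|-1 by 1
  if (CumInformationCont[J] >= 0.9)
    NumComponents := J+1
    break
  endif
endfor
clear_class_svm (SVMHandle)
* Step 2: create the final SVM and train it
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
                  ’one-versus-all’, ’principal_components’, NumComponents,
                  SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)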
Parameter
Result
If the parameters are valid, the operator get_prep_info_class_svm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
get_prep_info_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
clear_class_svm, create_class_svm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
get_sample_class_svm ( : : SVMHandle, IndexSample : Features, Target )
Return a training sample from the training data of a support vector machine.
get_sample_class_svm reads out a training sample from the support vector machine (SVM) given by
SVMHandle that was added with add_sample_class_svm or read_samples_class_svm. The
index of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_svm. The training sample is returned in Features and Target. Features
is a feature vector of length NumFeatures (see create_class_svm), while Target is the index of the
class, ranging between 0 and NumClasses-1 (see add_sample_class_svm).
get_sample_class_svm can, for example, be used to reclassify the training data with
classify_class_svm in order to determine which training samples, if any, are classified incorrectly.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
SVM handle.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. Target (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Target vector of the training sample.
Example
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Reclassify the training samples
get_sample_num_class_svm (SVMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_svm (SVMHandle, I, Data, Target)
classify_class_svm (SVMHandle, Data, 1, Class)
if (Class # Target)
* Sample has been classified incorrectly
endif
endfor
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator get_sample_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
get_sample_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm, get_sample_num_class_svm,
get_support_vector_class_svm
Possible Successors
classify_class_svm
See also
create_class_svm
Module
Foundation
get_sample_num_class_svm ( : : SVMHandle : NumSamples )
Return the number of training samples stored in the training data of a support vector machine.
get_sample_num_class_svm returns in NumSamples the number of training samples that are stored in
the support vector machine (SVM) given by SVMHandle. get_sample_num_class_svm should be called
before the individual training samples are accessed with get_sample_class_svm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_svm).
Parameter
get_support_vector_class_svm ( : : SVMHandle,
IndexSupportVector : Index )
Return the index of a support vector from a trained support vector machine.
The operator get_support_vector_class_svm maps support vectors of a trained SVM (given
in SVMHandle) to the original training data set. The index of the SV is specified with
IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a number
between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with
get_support_vector_num_class_svm. The index of this SV in the training data is returned in Index.
This Index can be used for a query with get_sample_class_svm to obtain the feature vectors that become
support vectors. get_sample_class_svm can, for example, be used to visualize the support vectors.
Note that when using train_class_svm with a mode different from ’default’ or reducing the SVM with
reduce_class_svm, the returned Index will always be -1, i.e., it will be invalid. The reason for this is that a
consistent mapping between SV and training data becomes impossible.
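For example, the feature vectors that became support vectors can be retrieved as follows (a sketch, assuming an SVM trained with TrainMode = ’default’):

get_support_vector_num_class_svm (SVMHandle, NumSupportVectors, NumSVPerSVM)
for J := 0 to NumSupportVectors-1 by 1
  get_support_vector_class_svm (SVMHandle, J, Index)
  get_sample_class_svm (SVMHandle, Index, Features, Target)
  * Features can now be visualized or inspected
endfor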
Parameter
get_support_vector_num_class_svm
( : : SVMHandle : NumSupportVectors, NumSVPerSVM )
read_class_svm reads a support vector machine (SVM) that has been stored with write_class_svm.
Since the training of an SVM can consume a relatively long time, the SVM is typically trained in an offline process
and written to a file with write_class_svm. In the online process the SVM is read with read_class_svm
and subsequently used for classification with classify_class_svm.
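A minimal sketch of this offline/online division of work (the file name ’classifier.svm’ is a placeholder):

* Offline: train the SVM and write it to a file
train_class_svm (SVMHandle, 0.001, ’default’)
write_class_svm (SVMHandle, ’classifier.svm’)
clear_class_svm (SVMHandle)
* Online: read the SVM and classify unknown feature vectors
read_class_svm (’classifier.svm’, SVMHandle)
classify_class_svm (SVMHandle, Features, 1, Class)
clear_class_svm (SVMHandle)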
Parameter
See also
write_samples_class_svm, clear_samples_class_svm
Module
Foundation
reduce_class_svm ( : : SVMHandle, Method, MinRemainingSV, MaxError : SVMHandleReduced )
Approximate a trained support vector machine by a reduced support vector machine for faster classification.
As described in create_class_svm, the classification time of an SVM depends on the number of kernel
evaluations between the support vectors and the feature vectors. While the length of the data vectors can be
reduced in a preprocessing step like ’principal_components’ or ’canonical_variates’ (see create_class_svm
for details), the number of resulting SVs depends on the complexity of the classification problem. The number
of SVs is determined during training. To further reduce classification time, the number of SVs can be reduced
by approximating the original separating hyperplane with fewer SVs than originally required. For this purpose, a
copy of the original SVM provided by SVMHandle is created and returned in SVMHandleReduced. This new
SVM has the same parametrization as the original SVM, but a different SV expansion. The training samples that
are included in SVMHandle are not copied. The original SVM is not modified by reduce_class_svm.
The reduction method is selected with Method. Currently, only a bottom-up approach is supported, which itera-
tively merges SVs. The algorithm stops if either the minimum number of SVs is reached (MinRemainingSV)
or if the accumulated maximum error exceeds the threshold MaxError. Note that the approximation reduces the
complexity of the hyperplane and thereby leads to a deteriorated classification rate. A common approach is
therefore to start from a small MaxError, e.g., 0.001, and to increase its value step by step. To control the reduction ratio,
at each step the number of remaining SVs is determined with get_support_vector_num_class_svm and
the classification rate is checked on a separate test data set with classify_class_svm.
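The step-wise increase of MaxError described above can be sketched as follows (the error values and the evaluation of the classification rate are placeholders):

MaxErrors := [0.001, 0.002, 0.005, 0.01]
for J := 0 to |MaxErrors|-1 by 1
  reduce_class_svm (SVMHandle, ’bottom_up’, 2, MaxErrors[J], SVMHandleReduced)
  get_support_vector_num_class_svm (SVMHandleReduced, NumSV, NumSVPerSVM)
  * Check the classification rate on a separate test set with
  * classify_class_svm; keep the reduced SVM only while it is acceptable
  clear_class_svm (SVMHandleReduced)
endfor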
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
Original SVM handle.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of postprocessing to reduce number of SV.
Default Value : ’bottom_up’
List of values : Method ∈ {’bottom_up’}
. MinRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum number of remaining SVs.
Default Value : 2
Suggested values : MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction : MinRemainingSV ≥ 2
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum allowed error of reduction.
Default Value : 0.001
Suggested values : MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction : MaxError > 0.0
. SVMHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
SVMHandle of reduced SVM.
Example
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Create a reduced SVM
reduce_class_svm (SVMHandle, ’bottom_up’, 2, 0.01, SVMHandleReduced)
write_class_svm (SVMHandleReduced, ’classifier.svm’)
clear_class_svm (SVMHandleReduced)
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator reduce_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
reduce_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
train_class_svm, get_support_vector_num_class_svm
Possible Successors
classify_class_svm, write_class_svm, get_support_vector_num_class_svm
See also
train_class_svm
Module
Foundation
chunks. The first chunk is trained normally using TrainMode = ’default’. Afterwards, the previous training set is
removed with clear_samples_class_svm, the next chunk is added with add_sample_class_svm and
trained with TrainMode = ’add_sv_to_train_set’. This is repeated until all chunks are trained. This approach has
the advantage that even huge training data sets can be trained efficiently with respect to memory consumption. A
second application area for this mode is that a general purpose classifier can be specialized by adding characteristic
training samples and then retraining it. Please note that the preprocessing (as described in create_class_svm)
is not changed when training with TrainMode = ’add_sv_to_train_set’.
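The chunk-wise training described above can be sketched as follows (NumChunks and the per-chunk sample data are placeholders):

for Chunk := 0 to NumChunks-1 by 1
  * Add the samples of the current chunk, e.g., with
  * add_sample_class_svm (SVMHandle, Data, Class)
  if (Chunk = 0)
    train_class_svm (SVMHandle, 0.001, ’default’)
  else
    train_class_svm (SVMHandle, 0.001, ’add_sv_to_train_set’)
  endif
  * Remove the training set before adding the next chunk
  clear_samples_class_svm (SVMHandle)
endfor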
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
SVM handle.
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Stop parameter for training.
Default Value : 0.001
Suggested values : Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. TrainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer
Mode of training. For normal operation: ’default’. If SVs already included in the SVM should be used for
training: ’add_sv_to_train_set’. For alpha seeding: the respective SVM handle.
Default Value : ’default’
List of values : TrainMode ∈ {’default’, ’add_sv_to_train_set’}
Example
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
write_class_svm (SVMHandle, ’classifier.svm’)
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator train_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
train_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
classify_class_svm, write_class_svm
Alternatives
read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation
write_class_svm ( : : SVMHandle, FileName : )
Write a support vector machine to a file.
write_class_svm writes the support vector machine (SVM) SVMHandle to the file given by FileName.
write_class_svm is typically called after the SVM has been trained with train_class_svm. The SVM
can be read with read_class_svm. write_class_svm does not write any training samples that possibly
have been stored in the SVM. For this purpose, write_samples_class_svm should be used.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
Result
If the parameters are valid, the operator write_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
write_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm
Possible Successors
clear_class_svm
See also
create_class_svm, read_class_svm, write_samples_class_svm
Module
Foundation
Control
assign ( : : Input : Result )
Assign a new value to a variable. A statement such as
u = sin(x) + cos(y);
is represented internally as the operator call
assign(sin(x) + cos(y), u)
and is displayed in HDevelop as
u := sin(x) + cos(y)
Parameter
. Input (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
New value.
Default Value : 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Variable that has to be changed.
Example
Tuple1 := [1,0,3,4,5,6,7,8,9]
Val := sin(1.2) + cos(1.2)
Tuple1[1] := 2
Tuple2 := []
for i := 0 to 10 by 1
Tuple2[i] := i
endfor
Result
assign returns 2 (H_MSG_TRUE) if the evaluation of the expression yields no error.
Parallelization Information
assign is reentrant, local, and processed without parallelization.
Alternatives
insert
Module
Foundation
break ( : : : )
Terminate loop execution.
Result
break always returns 2 (H_MSG_TRUE).
Parallelization Information
break is reentrant, local, and processed without parallelization.
Alternatives
continue
See also
for, while, repeat, until
Module
Foundation
comment ( : : Comment : )
Add a comment of one line to the program.
Result
comment always returns 2 (H_MSG_TRUE).
Parallelization Information
comment is reentrant, local, and processed without parallelization.
Module
Foundation
continue ( : : : )
else ( : : : )
elseif ( : : Condition : )
elseif is a conditional statement with an alternative. If the condition is true (i.e., not 0), all expressions and calls
between the head and the next elseif or else are performed. If the condition is false (i.e., 0), execution
continues at the next elseif, else, or endif.
Parameter
. Condition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Condition for the if statement.
Default Value : 1
Result
elseif returns 2 (H_MSG_TRUE) if the evaluation of the expression yields no error. else and endif (as
operators) always return 2 (H_MSG_TRUE).
Parallelization Information
elseif is reentrant, local, and processed without parallelization.
Alternatives
if
See also
else, elseif, for, while, until
Module
Foundation
endfor ( : : : )
endif ( : : : )
End of if command.
endif is the last line of an if, elseif, or else block.
Result
endif always returns 2 (H_MSG_TRUE).
Parallelization Information
endif is reentrant, local, and processed without parallelization.
See also
if
Module
Foundation
endwhile ( : : : )
exit ( : : : )
Terminate HDevelop.
exit terminates HDevelop. The operator is equivalent to the menu item File > Quit. Internally, and for exported
C++ code, the C function call exit(0) is used.
Result
exit returns 0 (o.k.) to the calling environment of HDevelop (i.e., the operating system).
Parallelization Information
exit is reentrant, local, and processed without parallelization.
See also
stop
Module
Foundation
If the for loop is left prematurely (e.g., if you press Stop and set the PC) and the loop is entered again, the
expressions will be evaluated as if the loop were entered for the first time.
Parameter
Example
dev_update_window (’off’)
dev_close_window ()
dev_open_window (0, 0, 728, 512, ’black’, WindowID)
read_image (Bond, ’die3’)
dev_display (Bond)
stop ()
threshold (Bond, Bright, 100, 255)
shape_trans (Bright, Die, ’rectangle2’)
dev_set_color (’green’)
dev_set_line_width (3)
dev_set_draw (’margin’)
dev_display (Die)
stop ()
reduce_domain (Bond, Die, DieGrey)
threshold (DieGrey, Wires, 0, 50)
fill_up_shape (Wires, WiresFilled, ’area’, 1, 100)
dev_display (Bond)
dev_set_draw (’fill’)
dev_set_color (’red’)
dev_display (WiresFilled)
stop ()
opening_circle (WiresFilled, Balls, 15.5)
dev_set_color (’green’)
dev_display (Balls)
stop ()
connection (Balls, SingleBalls)
select_shape (SingleBalls, IntermediateBalls, ’circularity’, ’and’, 0.85, 1.0)
sort_region (IntermediateBalls, FinalBalls, ’first_point’, ’true’, ’column’)
dev_display (Bond)
dev_set_colored (12)
dev_display (FinalBalls)
stop ()
smallest_circle (FinalBalls, Row, Column, Radius)
NumBalls := |Radius|
Diameter := 2*Radius
MeanDiameter := sum(Diameter)/NumBalls
MinDiameter := min(Diameter)
dev_display (Bond)
disp_circle (WindowID, Row, Column, Radius)
dev_set_color (’white’)
set_font (WindowID, ’system26’)
for i := 1 to NumBalls by 1
if (fmod(i,2)=1)
set_tposition (WindowID, Row[i-1]-1.5*Radius[i-1], Column[i-1]-60)
else
set_tposition (WindowID, Row[i-1]+2.5*Radius[i-1], Column[i-1]-60)
endif
write_string (WindowID, ’Diam: ’+Diameter[i-1])
endfor
dev_set_color (’green’)
dev_update_window (’on’)
Result
for returns 2 (H_MSG_TRUE) if the evaluation of the expression yields no error. endfor (as operator) always
returns 2 (H_MSG_TRUE).
Parallelization Information
for is reentrant, local, and processed without parallelization.
Alternatives
while, until
See also
repeat, break, continue, if, elseif, else
Module
Foundation
if ( : : Condition : )
Conditional statement.
if is a conditional statement. The condition contains a boolean expression. If the condition is true, the body is
executed. Otherwise the execution is continued at the first expression or operator call that follows the corresponding
elseif, else or endif.
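A simple sketch of a conditional statement (Value is an arbitrary control variable):

if (Value > 10)
  Size := ’large’
elseif (Value > 0)
  Size := ’small’
else
  Size := ’non-positive’
endif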
Parameter
ifelse ( : : Condition : )
ifelse is a conditional statement with an alternative. If the condition is true (i.e., not 0), all expressions and calls
between the head and operator else are performed. If the condition is false (i.e., 0) the part between else and
endif is executed. Note that the operator is called ifelse and it is displayed as if in the program text area.
Parameter
is not presented in the program text as an operator call, but in the more intuitive form
Areas[Radius-1] := Area
Parameter
Alternatives
assign
Module
Foundation
repeat ( : : : )
return ( : : : )
stop ( : : : )
for i := 1 to Number by 1
RegionSelected := Regions[i]
dev_clear_window ()
dev_display (RegionSelected)
stop ()
endfor
Result
stop always returns 2 (H_MSG_TRUE).
Parallelization Information
stop is reentrant, local, and processed without parallelization.
See also
exit
Module
Foundation
until ( : : Condition : )
while ( : : Condition : )
dev_update_window (’off’)
dev_close_window ()
dev_open_window (0, 0, 512, 512, ’black’, WindowID)
read_image (Image, ’particle’)
dev_display (Image)
stop ()
threshold (Image, Large, 110, 255)
dilation_circle (Large, LargeDilation, 7.5)
dev_display (Image)
dev_set_draw (’margin’)
dev_set_line_width (3)
dev_set_color (’green’)
dev_display (LargeDilation)
dev_set_draw (’fill’)
stop ()
complement (LargeDilation, NotLarge)
reduce_domain (Image, NotLarge, ParticlesRed)
mean_image (ParticlesRed, Mean, 31, 31)
dyn_threshold (ParticlesRed, Mean, SmallRaw, 3, ’light’)
opening_circle (SmallRaw, Small, 2.5)
connection (Small, SmallConnection)
dev_display (Image)
dev_set_colored (12)
dev_display (SmallConnection)
stop ()
dev_set_color (’green’)
dev_display (Image)
dev_display (SmallConnection)
Button := 1
while (Button = 1)
dev_set_color (’green’)
get_mbutton (WindowID, Row, Column, Button)
dev_display (Image)
dev_display (SmallConnection)
dev_set_color (’red’)
select_region_point (SmallConnection, SmallSingle, Row, Column)
dev_display (SmallSingle)
NumSingle := |SmallSingle|
if (NumSingle=1)
intensity (SmallSingle, Image, MeanGray, DeviationGray)
area_center (SmallSingle, Area, Row, Column)
dev_set_color (’yellow’)
set_tposition (WindowID, Row, Column)
write_string (WindowID, ’Area=’+Area+’, Int=’+MeanGray)
endif
endwhile
dev_set_line_width (1)
dev_update_window (’on’)
Result
while returns 2 (H_MSG_TRUE) if the evaluation of the expression yields no error. endwhile (as operator)
always returns 2 (H_MSG_TRUE).
Parallelization Information
while is reentrant, local, and processed without parallelization.
Alternatives
for, until
See also
repeat, break, continue, if, elseif, else
Module
Foundation
Develop
dev_clear_obj ( Objects : : : )
dev_clear_window ( : : : )
RegionSelected := Regions[i]
dev_clear_window ()
dev_display (RegionSelected)
* stop ()
endfor
Result
dev_clear_window always returns 2 (H_MSG_TRUE).
Parallelization Information
dev_clear_window is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_set_window, dev_open_window, dev_display
Possible Successors
dev_display
See also
clear_window
Module
Foundation
dev_close_inspect_ctrl ( : : Variable : )
Var := 1
dev_inspect_ctrl (Var)
Var := [1,2,3,9,5,6,7,8]
Var[3] := 4
stop ()
dev_close_inspect_ctrl (Var)
Result
If an inspect window associated with Variable is open dev_close_inspect_ctrl returns 2
(H_MSG_TRUE).
Parallelization Information
dev_close_inspect_ctrl is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_inspect_ctrl
Module
Foundation
dev_close_window ( : : : )
dev_close_window closes the active graphics window which has been opened by dev_open_window or
by HDevelop (default window). The operator is equivalent to pressing the Close button of the active window. A
graphics window can be activated by calling dev_set_window.
Attention
If dev_close_window is to be used in exported code (C++), please note the description of
close_window due to the different semantics in C++.
Example
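A minimal sketch (closing the default window, opening a new one, and closing it again):

dev_close_window ()
dev_open_window (0, 0, 512, 512, ’black’, WindowHandle)
read_image (Image, ’particle’)
dev_display (Image)
dev_close_window ()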
Result
dev_close_window always returns 2 (H_MSG_TRUE).
Parallelization Information
dev_close_window is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_set_window, dev_open_window
Possible Successors
dev_open_window
See also
close_window
Module
Foundation
dev_display ( Object : : : )
Result
dev_display always returns 2 (H_MSG_TRUE).
Parallelization Information
dev_display is local and processed completely exclusively without parallelization.
Alternatives
disp_obj, disp_image, disp_region, disp_xld
See also
dev_set_color, dev_set_colored, dev_set_draw, dev_set_line_width
Module
Foundation
dev_close_window ()
dev_open_window (0, 0, 512, 512, ’black’, WindowHandle)
dev_error_var (Error, 1)
dev_set_check (’~give_error’)
FileName := ’wrong_name’
read_image (Image, FileName)
ReadError := Error
if (ReadError # H_MSG_TRUE)
write_string (WindowHandle, ’wrong file name: ’+FileName)
endif
Result
dev_error_var always returns 2 (H_MSG_TRUE).
Parallelization Information
dev_error_var is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_set_check
Possible Successors
dev_set_check, if, elseif, else, assign
See also
dev_set_check, set_check
Module
Foundation
’graphics_window_context_menu’: Returns whether a right click in the graphics window opens a context menu.
By default the context menu is enabled.
Attention
This operator is not supported for exported code.
Parameter
. PreferenceNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Selection of the preferences.
Default Value : ’graphics_window_context_menu’
List of values : PreferenceNames ∈ {’graphics_window_context_menu’}
. PreferenceValues (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string
Values of the selected preferences.
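A minimal sketch of a query (the exact encoding of the returned value is an assumption):

dev_get_preferences (’graphics_window_context_menu’, PreferenceValue)
* PreferenceValue now holds the current setting of the context menu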
Parallelization Information
dev_get_preferences is local and processed completely exclusively without parallelization.
See also
dev_set_preferences
Module
Foundation
dev_inspect_ctrl ( : : Variable : )
Result
dev_inspect_ctrl always returns 2 (H_MSG_TRUE).
Parallelization Information
dev_inspect_ctrl is local and processed completely exclusively without parallelization.
See also
dev_update_var
Module
Foundation
dev_map_par ( : : : )
Result
dev_map_par always returns 2 (H_MSG_TRUE).
Parallelization Information
dev_map_par is local and processed completely exclusively without parallelization.
Possible Successors
dev_unmap_par
Module
Foundation
dev_map_prog ( : : : )
dev_map_var ( : : : )
• objects and
• display parameters
which have been displayed or changed since the most recent clear action or display of a full image. This history
is used for redrawing the contents of the window. Other output, such as text, general graphics (e.g., disp_line
or disp_circle), or iconic data displayed with HALCON operators like disp_image or disp_region, is
not part of the history and is not redrawn. Only the object classes image, region, and XLD that are displayed
with the HDevelop operator dev_display or by double clicking on an icon are part of the history.
You may change the size of the graphics window interactively by “gripping” the window border with the mouse
and dragging it. After this size modification the window content is redisplayed: you see the same part of the
image with a changed zoom.
If the mouse cursor is inside the window, its look-up table is reactivated. This is necessary if other programs use
their own look-up tables. Thus, if a graphics window is displayed with a “strange” look-up table, you may load
the proper one by placing the mouse inside the window.
Opening a window causes the assignment of a default font, which is used in connection with operators like
write_string. You may override it by calling set_font after dev_open_window. Alternatively, you can
specify a default font by calling set_system(’default_font’,<Fontname>) before opening the window
(and all following windows; see also query_font).
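For example (the font names are platform-dependent and only assumptions):

* Set a default font for all windows opened afterwards
set_system (’default_font’, ’fixed’)
dev_open_window (0, 0, 512, 512, ’black’, WindowHandle)
* Alternatively, override the font of this window only
set_font (WindowHandle, ’system26’)
write_string (WindowHandle, ’Hello’)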
If you want to specify display parameters for a window, you may select the menu item Visualization in the
menu bar. There you can set the appropriate parameters by clicking the desired items. Parameters set this way
are used for all windows (in contrast to standard windows opened with open_window). The effects of the new
parameters are applied directly to the last object of the window history and alter its parameters only.
Attention
Never use close_window to close an HDevelop graphics window. The operator dev_close_window has
to be used instead.
If dev_open_window is to be used in exported code (C++), please note the description of open_window
due to the different semantics in C++.
Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; integer
Row index of upper left corner.
Default Value : 0
Typical range of values : 0 ≤ Row
Minimum Increment : 1
Recommended Increment : 1
Restriction : Row ≥ 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.origin.x ; integer
Column index of upper left corner.
Default Value : 0
Typical range of values : 0 ≤ Column
Minimum Increment : 1
Recommended Increment : 1
Restriction : Column ≥ 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; integer
Width of the window.
Default Value : 256
Typical range of values : 0 ≤ Width
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Width > 0) ∨ (Width = -1)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; integer
Height of the window.
Default Value : 256
Typical range of values : 0 ≤ Height
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Height > 0) ∨ (Height = -1)
. Background (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer / string
Color of the background of the new window.
Default Value : ’black’
. WindowHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
Example
dev_close_window ()
read_image (For5, ’for5’)
get_image_pointer1 (For5, Pointer, Type, Width, Height)
dev_open_window (0, 0, Width, Height, ’black’, WindowHandle)
dev_display (For5)
Result
If the values of the specified parameters are correct, dev_open_window returns 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
dev_open_window is local and processed completely exclusively without parallelization.
Possible Successors
dev_display, dev_set_lut, dev_set_color, dev_set_draw, dev_set_part
Alternatives
open_window
See also
query_color
Module
Foundation
dev_set_check ( : : Mode : )
dev_close_window ()
dev_open_window (0, 0, 512, 512, ’black’, WindowHandle)
dev_error_var (Error, 1)
dev_set_check (’~give_error’)
FileName := ’wrong_name’
read_image (Image, FileName)
ReadError := Error
if (ReadError # H_MSG_TRUE)
write_string (WindowHandle, ’wrong file name: ’+FileName)
endif
* Now the program will stop with an exception
HALCON 8.0.2
80 CHAPTER 3. DEVELOP
dev_set_check (’give_error’)
read_image (Image, FileName)
Result
dev_set_check always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_check is local and processed completely exclusively without parallelization.
Possible Successors
dev_error_var
See also
set_check
Module
Foundation
dev_set_color ( : : ColorName : )
read_image(Image,’mreut’)
dev_set_draw(’fill’)
dev_set_color(’red’)
threshold(Image,Region,180,255)
dev_set_color(’green’)
threshold(Image,Region,0,179)
Result
dev_set_color always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_color is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_open_window, query_color, query_all_colors
Possible Successors
dev_display
Alternatives
dev_set_colored
See also
dev_set_draw, dev_set_line_width, set_color
Module
Foundation
dev_set_colored ( : : NumColors : )
read_image(Image,’monkey’)
threshold(Image,Region,128,255)
dev_set_colored(6)
connection(Region,Regions)
Result
dev_set_colored always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_colored is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_open_window
Possible Successors
dev_display
Alternatives
dev_set_color
See also
dev_set_draw, dev_set_line_width, set_colored
Module
Foundation
dev_set_draw ( : : DrawMode : )
Parameter
. DrawMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Fill mode for region output.
Default Value : ’fill’
List of values : DrawMode ∈ {’fill’, ’margin’}
Example
read_image(Image,’monkey’)
threshold(Image,Region,128,255)
dev_clear_window
dev_set_color(’red’)
dev_set_draw(’fill’)
dev_display(Region)
dev_set_color(’white’)
dev_set_draw(’margin’)
dev_display(Region)
Result
dev_set_draw always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_draw is local and processed completely exclusively without parallelization.
Possible Successors
dev_set_line_width, dev_display
See also
set_draw
Module
Foundation
dev_set_line_width ( : : LineWidth : )
read_image(Image,’monkey’)
threshold(Image,Region,128,255)
dev_set_draw(’margin’)
dev_set_line_width(5)
dev_clear_window
dev_display(Region)
Result
dev_set_line_width always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_line_width is local and processed completely exclusively without parallelization.
Possible Successors
dev_display
See also
set_line_width, query_line_width
Module
Foundation
dev_set_lut ( : : LutName : )
read_image(Image,’mreut’)
dev_set_lut(’inverse’)
* For true color only:
dev_display(Image)
Result
dev_set_lut always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_lut is local and processed completely exclusively without parallelization.
Possible Successors
dev_display
See also
set_lut
Module
Foundation
dev_set_paint ( : : Mode : )
• Only the name of the mode is passed: the defaults or the last used values, respectively, are applied. Example: dev_set_paint(’contourline’)
• All values are passed: all output characteristics can be set. Example: dev_set_paint([’contourline’,10,1])
• Only the first n values are passed: only the passed values are changed. Example: dev_set_paint([’contourline’,10])
Attention
If dev_set_paint is used in exported code (C++), please note the description of set_paint because of
the different semantics in C++.
Parameter
read_image(Image,’fabrik’)
dev_set_paint(’3D-plot’)
dev_display(Image)
Parallelization Information
dev_set_paint is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_open_window
Possible Successors
dev_set_color, dev_display
See also
set_paint
Module
Foundation
Result
dev_set_part always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_set_part is local and processed completely exclusively without parallelization.
Possible Successors
dev_display
See also
set_part
Module
Foundation
dev_set_preferences allows you to set selected preferences of the HDevelop application programmatically.
Currently, the following preferences are supported:
’graphics_window_context_menu’: Controls whether a right click into the graphics window opens a context menu
or not. By default the context menu is enabled. Disabling the context menu may be sensible if the right mouse
button is used for controlling some kind of navigation in the graphics window, e.g., for moving or zooming
3D-objects.
Possible values: ’false’, ’true’.
Default value: ’false’.
Attention
This operator is not supported for exported code.
Parameter
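Example

The following sketch disables the context menu of the graphics window; the exact parameter-name/value calling convention is an assumption based on the description above:

```
* Disable the context menu, e.g., when the right mouse button
* is used for 3D navigation in the graphics window
dev_set_preferences ('graphics_window_context_menu', 'false')
```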
dev_set_shape ( : : Shape : )
’original’: The shape is displayed unchanged. Nevertheless, modifications via parameters like
dev_set_line_width still take effect. This is also true for all other modes.
’outer_circle’: Each region is displayed by the smallest surrounding circle. (See smallest_circle.)
’inner_circle’: Each region is displayed by the largest included circle. (See inner_circle.)
’ellipse’: Each region is displayed by an ellipse with the same moments and orientation. (See elliptic_axis.)
’rectangle1’: Each region is displayed by the smallest surrounding rectangle parallel to the coordinate axes. (See
smallest_rectangle1.)
’rectangle2’: Each region is displayed by the smallest surrounding rectangle. (See smallest_rectangle2.)
’convex’: Each region is displayed by its convex hull. (See shape_trans.)
’icon’: Each region is displayed by the icon set with set_icon, placed at its center of gravity.
Attention
If dev_set_shape is used in exported code (C++), please note the description of set_shape because of
the different semantics in C++.
Parameter
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Region output mode.
Default Value : ’original’
List of values : Shape ∈ {’original’, ’convex’, ’outer_circle’, ’inner_circle’, ’rectangle1’, ’rectangle2’,
’ellipse’, ’icon’}
Example
read_image(Image,’monkey’)
threshold(Image,Region,128,255)
connection(Region,Regions)
dev_set_shape(’rectangle1’)
dev_set_draw(’margin’)
dev_display(Regions)
Parallelization Information
dev_set_shape is local and processed completely exclusively without parallelization.
Possible Successors
dev_display, dev_set_color
See also
set_shape, dev_set_line_width
Module
Foundation
dev_set_window ( : : WindowID : )
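Example

A minimal sketch: dev_set_window activates one of the open graphics windows so that subsequent output is directed to it (the window layout values are arbitrary):

```
read_image (Image, 'monkey')
dev_open_window (0, 0, 256, 256, 'black', WindowHandle1)
dev_open_window (0, 300, 256, 256, 'black', WindowHandle2)
* Direct the following output to the first window again
dev_set_window (WindowHandle1)
dev_display (Image)
```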
Parallelization Information
dev_set_window is local and processed completely exclusively without parallelization.
Possible Predecessors
dev_open_window
Possible Successors
dev_display
Module
Foundation
dev_close_window ()
read_image (For5, ’for5’)
get_image_pointer1 (For5, Pointer, Type, Width, Height)
dev_open_window (0, 0, Width, Height, ’black’, WindowHandle)
dev_display (For5)
stop ()
dev_set_window_extents (-1,-1,Width/2,Height/2)
dev_display (For5)
stop ()
dev_set_window_extents (200,200,-1,-1)
Result
If the values of the specified parameters are correct, dev_set_window_extents returns 2 (H_MSG_TRUE).
Otherwise an exception handling is raised.
Parallelization Information
dev_set_window_extents is local and processed completely exclusively without parallelization.
Possible Successors
dev_display, dev_set_lut, dev_set_color, dev_set_draw, dev_set_part
See also
set_window_extents
Module
Foundation
dev_unmap_par ( : : : )
dev_unmap_prog ( : : : )
dev_unmap_var ( : : : )
dev_unmap_var hides the variable window so that it is no longer visible. It can be mapped again using the
operator dev_map_var.
Attention
This operator is not supported for exported C++ code.
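Example

A minimal sketch: hide the variable window during an output-intensive part of the program and show it again afterwards:

```
dev_unmap_var ()
* ... output-intensive processing ...
dev_map_var ()
```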
Result
dev_unmap_var always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_unmap_var is reentrant, local, and processed without parallelization.
Possible Successors
dev_map_var
See also
dev_map_par, dev_map_prog
Module
Foundation
dev_update_pc ( : : DisplayMode : )
dev_update_time ( : : DisplayMode : )
Parameter
. DisplayMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode for graphic output.
Default Value : ’off’
List of values : DisplayMode ∈ {’on’, ’off’}
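Example

A minimal sketch: switch off the display of the processing time during program execution:

```
dev_update_time ('off')
```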
Result
dev_update_time always returns 2 (H_MSG_TRUE)
Parallelization Information
dev_update_time is reentrant, local, and processed without parallelization.
See also
dev_update_pc, dev_update_window, dev_update_var
Module
Foundation
dev_update_var ( : : DisplayMode : )
dev_update_window ( : : DisplayMode : )
Attention
This operator is not supported for exported C++ code.
Parameter
4 File
4.1 Images
HALCON also searches images in the subdirectory "images" (images for the program examples). The environment
variable HALCONROOT is used for the HALCON directory.
Attention
If CMYK or YCCK JPEG files are read, HALCON assumes that these files follow the Adobe Photoshop convention
that the CMYK channels are stored inverted, i.e., 0 represents 100% ink coverage, rather than 0% ink as one would
expect. The images are converted to RGB images using this convention. If the JPEG file does not follow this
convention, but stores the CMYK channels in the usual fashion, invert_image must be called after reading
the image.
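If a CMYK JPEG file stores its channels in the usual (non-inverted) fashion, the correction described above can be sketched as follows; the file name is hypothetical:

```
* Read a CMYK JPEG that does not follow the Adobe convention
read_image (Image, 'cmyk_example')
* Undo the inversion introduced by the conversion to RGB
invert_image (Image, ImageCorrected)
```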
If PNG images that contain an alpha channel are read, the alpha channel is returned as the second or fourth channel
of the output image. If, however, the alpha channel contains exactly two different gray values, a one- or
three-channel image with a reduced domain is returned, in which the points in the domain correspond to the points
with the higher gray value in the alpha channel.
Parameter
. Image (output_object) . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Read image.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; string
Name of the image to be read.
Default Value : ’fabrik’
Suggested values : FileName ∈ {’monkey’, ’fabrik’, ’mreut’}
Example
/* Reading an image: */
read_image(Image,’monkey’).
Result
If the parameters are correct the operator read_image returns the value 2 (H_MSG_TRUE). Otherwise an
exception handling is raised.
Parallelization Information
read_image is reentrant and processed without parallelization.
Possible Successors
disp_image, threshold, regiongrowing, count_channels, decompose3,
class_ndim_norm, gauss_image, fill_interlace, zoom_image_size,
zoom_image_factor, crop_part, write_image, rgb1_to_gray
Alternatives
read_sequence
See also
set_system, write_image
Module
Foundation
Read images.
The operator read_sequence reads unformatted image data from a file and returns a “suitable” image. The
image data must be stored consecutively, pixel by pixel and line by line.
Any file header (of HeaderSize bytes) is skipped. The parameters SourceWidth and
SourceHeight indicate the size of the stored image. DestWidth and DestHeight indicate the size of the
output image. In the simplest case these parameters are identical. However, it is also possible to read only a part of
the image. The upper left corner of the desired image area can be specified via StartRow and StartColumn.
The pixel types ’bit’, ’byte’, ’short’ (16 bits, unsigned), ’signed_short’ (16 bits, signed), ’long’ (32 bits, signed),
’swapped_long’ (32 bits, with swapped segments), and ’real’ (32-bit floating point numbers) are supported.
Furthermore, the operator read_sequence enables the extraction of the components of an RGB image if the pixels
are stored as triples of three bytes (in the order “red”, “green”, “blue”) in the image file. For the red component the
pixel type ’r_byte’ must be chosen; correspondingly, ’g_byte’ or ’b_byte’ for the green and blue components, respectively.
’MSBFirst’ (most significant bit first) or the inversion thereof (’LSBFirst’) can be chosen for the bit order
(BitOrder). The byte orders (ByteOrder) ’MSBFirst’ (most significant byte first) or ’LSBFirst’, respectively,
are processed analogously. Finally an alignment (Pad) can be set at the end of the line: ’byte’, ’short’ or ’long’. If
a whole image sequence is stored in the file a single image (beginning at Index 1) can be chosen via the parameter
Index.
Image files are searched for in the current directory (determined by the environment variable) and in the image
directory of HALCON. The image directory of HALCON is preset to ’.’ and ’/usr/local/halcon/images’ in a UNIX
environment and can be set via the operator set_system. More than one image directory can be indicated by
separating the individual directories with a colon.
Furthermore, the search path can be set via the environment variable HALCONIMAGES (same structure as
’image_dir’). Example:
HALCON also searches images in the subdirectory "images" (images for the program examples). The environment
variable HALCONROOT is used for the HALCON directory.
Attention
If files of pixel type ’real’ are read and the byte order is chosen incorrectly (i.e., differently from the byte order in
which the data is stored in the file), program errors and even crashes due to floating point exceptions may result.
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2 / int4
Image read.
. HeaderSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of bytes for file header.
Default Value : 0
Typical range of values : 0 ≤ HeaderSize
. SourceWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.x ; integer
Number of image columns of the stored image.
Default Value : 512
Typical range of values : 1 ≤ SourceWidth
. SourceHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Number of image lines of the stored image.
Default Value : 512
Typical range of values : 1 ≤ SourceHeight
. StartRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Starting point of image area (line).
Default Value : 0
Typical range of values : 0 ≤ StartRow
Restriction : StartRow < SourceHeight
. StartColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Starting point of image area (column).
Default Value : 0
Typical range of values : 0 ≤ StartColumn
Restriction : StartColumn < SourceWidth
. DestWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Number of image columns of output image.
Default Value : 512
Typical range of values : 1 ≤ DestWidth
Restriction : DestWidth ≤ SourceWidth
. DestHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Number of image lines of output image.
Default Value : 512
Typical range of values : 1 ≤ DestHeight
Restriction : DestHeight ≤ SourceHeight
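Example

A minimal sketch of reading a raw 512x512 byte image whose file starts with a 1024-byte header; the file name and geometry are hypothetical, and the parameter order follows the parameter list of this section:

```
read_sequence (Image, 1024, 512, 512, 0, 0, 512, 512, 'byte', 'MSBFirst', 'MSBFirst', 'byte', 1, 'image.raw')
```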
’tiff’ TIFF format; 3-channel images (RGB): 3 samples per pixel; other (gray value) images: 1 sample per
pixel, 8 bits per sample, uncompressed, 72 dpi; file extension: *.tif
’bmp’ Windows BMP format; 3-channel images (RGB): 3 bytes per pixel; other (gray value) images: 1
byte per pixel; file extension: *.bmp
’jpeg’ JPEG format (lossy compression); together with the format string, the quality value determining the
compression rate can be provided, e.g., ’jpeg 30’. Attention: images that are stored for later processing should
not be compressed with the JPEG format because of the loss of information.
’jp2’ JPEG-2000 format (lossless and lossy compression); together with the format string, the quality value
determining the compression rate can be provided (e.g., ’jp2 40’). This value corresponds to the ratio of the
size of the compressed image to the size of the uncompressed image (in percent). Since even lossless JPEG-
2000 compression reduces the file size significantly, only smaller values (typically below 50)
influence the file size. If no value is provided for the compression (and only then), the image is compressed
losslessly. The image can contain an arbitrary number of channels. Possible types are byte, cyclic, direction,
int1, uint2, int2, and int4. In the case of int4, only images with at most 24 bits of precision can be stored
(otherwise an exception handling is raised). If an image with a reduced domain is written, the
region is stored as a 1-bit alpha channel.
’png’ PNG format (lossless compression); together with the format string, a compression level between 0 and 9 can
be specified, where 0 corresponds to no compression and 9 to the best possible compression. Alternatively,
the compression can be selected with the following strings: ’best’, ’fastest’, and ’none’. Hence, examples for
correct parameters are ’png’, ’png 7’, and ’png none’. Images of type byte and uint2 can be stored in PNG
files. If an image with a reduced domain is written, the region is stored as the alpha channel, where the points
within the domain are stored as the maximum gray value of the image type and the points outside the domain
are stored as the gray value 0. If an image with a full domain is written, no alpha channel is stored.
’ima’ The data is written in binary form line by line (without header or carriage returns). The size of the image and the
pixel type are stored in the description file "FileName.exp". All HALCON pixel types except complex
and vector_field can be written. Only the first channel of the image is stored in the file. The file extension
is ’.ima’.
Parameter
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Output image(s).
. Format (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Graphic format.
Default Value : ’tiff’
List of values : Format ∈ {’tiff’, ’bmp’, ’jpeg’, ’ima’, ’jpeg 100’, ’jpeg 80’, ’jpeg 60’, ’jpeg 40’, ’jpeg 20’,
’jp2’, ’jp2 50’, ’jp2 40’, ’jp2 30’, ’jp2 20’, ’png’, ’png best’, ’png fastest’, ’png none’}
. FillColor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Fill gray value for pixels not belonging to the image region.
Default Value : 0
Suggested values : FillColor ∈ {-1, 0, 255, ’0xff0000’, ’0xff00’}
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write(-array) ; string
Name of graphic file.
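Example

A minimal sketch; the output file name is hypothetical:

```
read_image (Image, 'monkey')
* Store the image as PNG with the best possible compression;
* FillColor is irrelevant here because the domain is full
write_image (Image, 'png best', 0, '/tmp/monkey_out')
```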
Result
If the parameter values are correct the operator write_image returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
write_image is reentrant and processed without parallelization.
Possible Predecessors
open_window, read_image
Module
Foundation
4.2 Misc
delete_file ( : : FileName : )
Delete a file.
delete_file deletes the file given by FileName.
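Example

A minimal sketch; the file name is hypothetical:

```
* Create a scratch file and remove it again
open_file ('/tmp/scratch.txt', 'output', FileHandle)
close_file (FileHandle)
delete_file ('/tmp/scratch.txt')
```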
Parameter
structure. Because of this, at most 1000000 files (and directories) are returned in Files. By specifying a different
number with ’max_files <d>’, this value can be reduced.
Parameter
4.3 Region
Result
If the parameter values are correct the operator read_region returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
read_region is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
reduce_domain, disp_region
See also
write_region, read_image
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region of the images which are returned.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of region file.
Default Value : ’region.reg’
Example
regiongrowing(Img,Segmente,3,3,5,10)
write_region(Segmente,’result1’).
Result
If the parameter values are correct the operator write_region returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
write_region is reentrant and processed without parallelization.
Possible Predecessors
open_window, read_image, read_region, threshold, regiongrowing
See also
read_region
Module
Foundation
4.4 Text
close_all_files ( : : : )
close_file ( : : FileHandle : )
Example
open_file(’/tmp/data.txt’,’input’,FileHandle)
// ....
close_file(FileHandle).
Result
If the file handle is correct close_file returns the value 2 (H_MSG_TRUE). Otherwise an exception handling
is raised.
Parallelization Information
close_file is processed completely exclusively without parallelization.
Possible Predecessors
open_file
See also
open_file
Module
Foundation
fnew_line ( : : FileHandle : )
fwrite_string(FileHandle,’Good Morning’)
fnew_line(FileHandle)
Result
If an output file is open and it can be written to the file the operator fnew_line returns the value 2
(H_MSG_TRUE). Otherwise an exception handling is raised.
Parallelization Information
fnew_line is reentrant and processed without parallelization.
Possible Predecessors
fwrite_string
See also
fwrite_string
Module
Foundation
Parameter
. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; integer
File handle.
. Char (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Read character or control string (’nl’,’eof’).
Example
* Copy the characters of an input file to an output file
* (FileHandleOut denotes a second, already opened output file)
repeat
    fread_char (FileHandle, Char)
    if (Char = ’nl’)
        fnew_line (FileHandleOut)
    elseif (Char # ’eof’)
        fwrite_string (FileHandleOut, Char)
    endif
until (Char = ’eof’)
Result
If an input file is open the operator fread_char returns 2 (H_MSG_TRUE). Otherwise an exception handling is
raised.
Parallelization Information
fread_char is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_string, read_string, fread_line
See also
open_file, close_file, fread_string, fread_line
Module
Foundation
* Read all lines of the file
repeat
    fread_line (FileHandle, Line, IsEOF)
until (IsEOF = 1)
Result
If the file is open and a suitable line is read fread_line returns the value 2 (H_MSG_TRUE). Otherwise an
exception handling is raised.
Parallelization Information
fread_line is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_char, fread_string
See also
open_file, close_file, fread_char, fread_string
Module
Foundation
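Example

A minimal sketch of fread_string: read one string from an input file. The file name is hypothetical, and the output parameters (OutString, IsEOF) are assumed from the operator signature:

```
open_file ('/tmp/data.txt', 'input', FileHandle)
fread_string (FileHandle, OutString, IsEOF)
close_file (FileHandle)
```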
Result
If a file is open and a suitable string is read fread_string returns the value 2 (H_MSG_TRUE). Otherwise an
exception handling is raised.
Parallelization Information
fread_string is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_char, read_string, fread_line
See also
open_file, close_file, fread_char, fread_line
Module
Foundation
Result
If the writing procedure was carried out successfully the operator fwrite_string returns the value 2
(H_MSG_TRUE). Otherwise an exception handling is raised.
Parallelization Information
fwrite_string is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
write_string
See also
open_file, close_file, set_system
Module
Foundation
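Example

A minimal sketch of writing to a text file with open_file; the file name is hypothetical:

```
* Open an output file, write one line, and close it again
open_file ('/tmp/log.txt', 'output', FileHandle)
fwrite_string (FileHandle, 'processing started')
fnew_line (FileHandle)
close_file (FileHandle)
```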
Result
If the parameters are correct the operator open_file returns the value 2 (H_MSG_TRUE). Otherwise an ex-
ception handling is raised.
Parallelization Information
open_file is processed completely exclusively without parallelization.
Possible Successors
fwrite_string, fread_char, fread_string, fread_line, close_file
See also
close_file, fwrite_string, fread_char, fread_string, fread_line
Module
Foundation
4.5 Tuple
4.6 XLD
read_contour_xld_arc_info ( : Contours : FileName : )
Result
If the parameters are correct and the file could be read, the operator read_contour_xld_arc_info returns
the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
read_contour_xld_arc_info is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_contour_xld
See also
read_world_file, write_contour_xld_arc_info, read_polygon_xld_arc_info
Module
Foundation
• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT
The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
contours Contours.
If the file has been created with the operator write_contour_xld_dxf, all attributes and global attributes that
were originally defined for the XLD contours are read. This means that read_contour_xld_dxf supports all
the extended data written by the operator write_contour_xld_dxf. The reading of these attributes can be
switched off by setting the generic parameter ’read_attributes’ to ’false’. Generic parameters are set by specifying
the parameter name(s) in GenParamNames and the corresponding value(s) in GenParamValues.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD contours. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). The parameter ’min_num_points’ defines the mini-
mum number of sampling points that are used for the approximation. Note that the parameter ’min_num_points’
always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if ’min_num_points’ is
set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-circle is approximated
by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum deviation of the XLD
contour from the ideal circle or ellipse, respectively (unit: pixel). For the determination of the accuracy of the
approximation both criteria are evaluated. Then, the criterion that leads to the more accurate approximation is
used.
Internally, the following default values are used for the generic parameters:
’read_attributes’ = ’true’
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
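The approximation can, for example, be tightened as in the following sketch; the file name is hypothetical:

```
* Read a DXF file as XLD contours with a finer approximation
read_contour_xld_dxf (Contours, 'shapes.dxf', ['min_num_points','max_approx_error'], [50,0.1], DxfStatus)
```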
Parameter
Parameter
. Polygons (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject
Read XLD polygons.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the ARC/INFO file.
Example
Result
If the parameters are correct and the file could be read, the operator read_polygon_xld_arc_info returns
the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
read_polygon_xld_arc_info is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_polygon_xld
See also
read_world_file, write_polygon_xld_arc_info, read_contour_xld_arc_info
Module
Foundation
• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT
The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
polygons Polygons.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD polygons. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). Generic parameters are set by specifying the pa-
rameter name(s) in GenParamNames and the corresponding value(s) in GenParamValues. The parameter
’min_num_points’ defines the minimum number of sampling points that are used for the approximation. Note that
the parameter ’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical
arcs, i.e., if ’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle,
this semi-circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the
maximum deviation of the XLD polygon from the ideal circle or ellipse, respectively (unit: pixel). For the deter-
mination of the accuracy of the approximation both criteria are evaluated. Then, the criterion that leads to the more
accurate approximation is used.
Internally, the following default values are used for the generic parameters:
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
Note that reading a DXF file with read_polygon_xld_dxf results in exactly the same geometric information
as reading the file with read_contour_xld_dxf. However, the resulting data structure is different.
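Analogously to read_contour_xld_dxf, the approximation can be controlled via the generic parameters; the file name is hypothetical:

```
* Read a DXF file as XLD polygons with a finer approximation
read_polygon_xld_dxf (Polygons, 'shapes.dxf', ['min_num_points','max_approx_error'], [50,0.1], DxfStatus)
```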
Parameter
. Polygons (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject
Read XLD polygons.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the DXF file.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {’min_num_points’, ’max_approx_error’}
. GenParamValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10, 20}
. DxfStatus (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Status information.
Result
If the parameters are correct and the file could be read, the operator read_polygon_xld_dxf returns the value
2 (H_MSG_TRUE). Otherwise, an exception is raised.
Parallelization Information
read_polygon_xld_dxf is reentrant and processed without parallelization.
Possible Predecessors
write_polygon_xld_dxf
See also
write_polygon_xld_dxf, read_contour_xld_dxf
Module
Foundation
HALCON 8.0.2
112 CHAPTER 4. FILE
Result
If the parameters are correct and the file could be written, the operator write_contour_xld_arc_info
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
write_contour_xld_arc_info is reentrant and processed without parallelization.
Possible Predecessors
affine_trans_contour_xld
See also
read_world_file, read_contour_xld_arc_info, write_polygon_xld_arc_info
Module
Foundation
The attributes are written in the following format as extended data of each VERTEX:
DXF Explanation
1000 Meaning
contour attributes
1002 Beginning of the value list
{
1070 Number of attributes (here: 3)
3
1040 Value of the first attribute
5.00434303
1040 Value of the second attribute
126.8638916
1040 Value of the third attribute
4.99164152
1002 End of the value list
}
The global attributes are written in the following format as extended data of each POLYLINE:
DXF Explanation
1000 Meaning
global contour attributes
1002 Beginning of the value list
{
1070 Number of global attributes (here: 5)
5
1040 Value of the first global attribute
0.77951831
1040 Value of the second global attribute
0.62637949
1040 Value of the third global attribute
103.94314575
1040 Value of the fourth global attribute
0.21434096
1040 Value of the fifth global attribute
0.21921949
1002 End of the value list
}
The names of the attributes are written in the following format as extended data of each POLYLINE:
DXF Explanation
1000 Meaning
names of contour attributes
1002 Beginning of the value list
{
1070 Number of attribute names (here: 3)
3
1000 Name of the first attribute
angle
1000 Name of the second attribute
response
1000 Name of the third attribute
edge_direction
1002 End of the value list
}
The names of the global attributes are written in the following format as extended data of each POLYLINE:
DXF Explanation
1000 Meaning
names of global contour attributes
1002 Beginning of the value list
{
1070 Number of global attribute names (here: 5)
5
1000 Name of the first global attribute
regr_norm_row
1000 Name of the second global attribute
regr_norm_col
1000 Name of the third global attribute
regr_dist
1000 Name of the fourth global attribute
regr_mean_dist
1000 Name of the fifth global attribute
regr_dev_dist
1002 End of the value list
}
Parameter
Result
If the parameters are correct and the file could be written, the operator write_polygon_xld_arc_info
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
write_polygon_xld_arc_info is reentrant and processed without parallelization.
Possible Predecessors
affine_trans_polygon_xld
See also
read_world_file, read_polygon_xld_arc_info, write_contour_xld_arc_info
Module
Foundation
Possible Predecessors
gen_polygons_xld
See also
read_polygon_xld_dxf, write_contour_xld_dxf
Module
Foundation
Filter
5.1 Arithmetic
abs_image ( Image : ImageAbs : : )
Result
The operator abs_image returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input
images available) is set via the operator set_system(::’no_object_result’,<Result>:).
Parallelization Information
abs_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
convert_image_type, power_byte
Module
Foundation
If an overflow or an underflow occurs, the values are clipped. The only exception are int2 images with Mult equal
to 1 and Add equal to 0: in this case, the underflow and overflow check is skipped to reduce the runtime. The
resulting image is stored in ImageResult.
It is possible to add byte images to int2, uint2, or int4 images and to add int4 images to int2 or uint2 images. In
this case, the result is of type int2 or int4, respectively.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
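add_image combines two images according to the per-pixel rule g' := (g1 + g2) ∗ Mult + Add, including the clipping described above. This can be sketched in Python (illustrative only; truncation toward zero is an assumption, the exact rounding of the real operator is not modeled):

```python
def add_image_pixels(img1, img2, mult=0.5, add=0.0, lo=0, hi=255):
    """Sketch of add_image for byte pixels: g' = (g1 + g2) * Mult + Add,
    clipped to the output gray value range [lo, hi]."""
    out = []
    for g1, g2 in zip(img1, img2):
        g = int((g1 + g2) * mult + add)
        out.append(min(max(g, lo), hi))
    return out
```

With the default Mult = 0.5 and Add = 0, the result is the average of the two input images, which always stays within the byte range.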
Please note that the runtime of the operator varies with different control parameters. For frequently used combina-
tions special optimizations are used. Additionally, for byte, int2, uint2, and int4 images special optimizations are
implemented that use SIMD technology. The actual application of these special optimizations is controlled by the
system parameter ’mmx_enable’ (see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction
set is available), the internal calculations are performed using SIMD technology.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of add_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system(::
’mmx_enable’,’false’:).
Parameter
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 /
int4 / real / direction / cyclic / com-
plex
Result image(s) by the addition.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Factor for gray value adaption.
Default Value : 0.5
Suggested values : Mult ∈ {0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 5.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Value for gray value range adaption.
Default Value : 0
Suggested values : Add ∈ {0, 64, 128, 255, 512}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
add_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator add_image returns the value 2 (H_MSG_TRUE) if the parameters are correct. The behavior
in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
add_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
sub_image, mult_image
See also
sub_image, mult_image
Module
Foundation
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 /
int4 / real / complex
Result image(s) by the division.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Factor for gray range adaption.
Default Value : 255
Suggested values : Mult ∈ {0.1, 0.2, 0.5, 1.0, 2.0, 3.0, 10, 100, 500, 1000}
Typical range of values : -1000 ≤ Mult ≤ 1000
Minimum Increment : 0.001
Recommended Increment : 1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Value for gray range adaption.
Default Value : 0
Suggested values : Add ∈ {0.0, 128.0, 256.0, 1025}
Typical range of values : -1000 ≤ Add ≤ 1000
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
div_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator div_image returns the value 2 (H_MSG_TRUE) if the parameters are correct. The behavior
in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
div_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_image, sub_image, mult_image
See also
add_image, sub_image, mult_image
Module
Foundation
Invert an image.
The operator invert_image inverts the gray values of an image. For images of the ’byte’ and ’cyclic’ type the
result is calculated as:
g' = 255 − g
In the case of signed types the values are negated. The resulting image has the same pixel type as the input image.
Several images can be processed in one call. An output image is generated for every input image.
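The two cases can be sketched as follows (illustrative, not HALCON code):

```python
def invert_pixels(img, image_type="byte"):
    """Sketch of invert_image: byte/cyclic pixels map to 255 - g,
    pixels of signed types are negated."""
    if image_type in ("byte", "cyclic"):
        return [255 - g for g in img]
    return [-g for g in img]
```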
Parameter
read_image(Orig,"fabrik")
invert_image(Orig,Invert)
disp_image(Invert,WindowHandle)
Parallelization Information
invert_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
watersheds
Alternatives
scale_image
See also
scale_image, add_image, sub_image
Module
Foundation
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 2.
. ImageMax (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 /
real / direction / cyclic
Result image(s) by the maximization.
Example
read_image(Bild1,"affe")
read_image(Bild2,"fabrik")
max_image(Bild1,Bild2,Max)
disp_image(Max,WindowHandle)
Result
If the parameter values are correct, the operator max_image returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
max_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_dilation
See also
min_image
Module
Foundation
Parameter
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 2.
. ImageMin (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 /
real / direction / cyclic
Result image(s) by the minimization.
Result
If the parameter values are correct, the operator min_image returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
min_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_erosion
See also
max_image
Module
Foundation
g' := g1 ∗ g2 ∗ Mult + Add
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 /
int4 / real / direction / cyclic / com-
plex
Result image(s) by the product.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Factor for gray range adaption.
Default Value : 0.005
Suggested values : Mult ∈ {0.001, 0.01, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
mult_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator mult_image returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
mult_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_image, sub_image, div_image
See also
add_image, sub_image, div_image
Module
Foundation
g' := g ∗ Mult + Add
To map the gray value range [GMin, GMax] to the full byte range, the parameters can be chosen as
Mult = 255 / (GMax − GMin)
Add = −Mult ∗ GMin
The values for GMin and GMax can be determined, e.g., with the operator min_max_gray.
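These parameters can be computed as follows (a sketch; the helper names are invented for illustration):

```python
def full_range_params(g_min, g_max, out_max=255.0):
    """Mult and Add that map [GMin, GMax] linearly onto [0, out_max]."""
    mult = out_max / (g_max - g_min)
    add = -mult * g_min
    return mult, add

def scale_pixel(g, mult, add):
    # The per-pixel rule of scale_image: g' = g * Mult + Add.
    return g * mult + add
```

For example, for an image with gray values between 10 and 138, these parameters stretch 10 to 0 and 138 to 255.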
Please note that the runtime of the operator varies with different control parameters. For frequently used combi-
nations special optimizations are used. Additionally, special optimizations are implemented that use fixed point
arithmetic (for int2 and uint2 images), and further optimizations that use SIMD technology (for byte, int2, and uint2
images). The actual application of these special optimizations is controlled by the system parameters ’int_zooming’
and ’mmx_enable’ (see set_system). If ’int_zooming’ is set to ’true’, the internal calculation is performed us-
ing fixed point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed
gray values is slightly lower in this mode. The difference to the more accurate calculation (using ’int_zooming’
= ’false’) is typically less than two gray levels. If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is
available), the internal calculations are performed using fixed point arithmetic and SIMD technology. In this case
the setting of ’int_zooming’ is ignored.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of scale_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system(::
’mmx_enable’,’false’:).
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real /
direction / cyclic / complex
Image(s) whose gray values are to be scaled.
. ImageScaled (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 /
int4 / real / direction / cyclic / com-
plex
Result image(s) by the scale.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Scale factor.
Default Value : 0.01
Suggested values : Mult ∈ {0.001, 0.003, 0.005, 0.008, 0.01, 0.02, 0.03, 0.05, 0.08, 0.1, 0.5, 1.0}
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Offset.
Default Value : 0
Suggested values : Add ∈ {0, 10, 50, 100, 200, 500}
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
Result
The operator scale_image returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). Otherwise, an exception is raised.
Parallelization Information
scale_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
min_max_gray
Alternatives
mult_image, add_image, sub_image
See also
min_max_gray
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image
. SqrtImage (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4
/ real
Output image
Parallelization Information
sqrt_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Module
Foundation
read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
sub_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator sub_image returns the value 2 (H_MSG_TRUE) if the parameters are correct. The behavior
in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
sub_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
dual_threshold
Alternatives
mult_image, add_image
See also
add_image, mult_image, dyn_threshold, check_difference
Module
Foundation
5.2 Bit
bit_and ( Image1, Image2 : ImageAnd : : )
Example
read_image(Image0,’affe’)
disp_image(Image0,WindowHandle)
read_image(Image1,’fabrik’)
disp_image(Image1,WindowHandle)
bit_and(Image0,Image1,ImageBitA)
disp_image(ImageBitA,WindowHandle)
Result
If the images are correct (type and number), the operator bit_and returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_and is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_mask, add_image, max_image
See also
bit_mask, add_image, max_image
Module
Foundation
read_image(&ByteImage,"fabrik");
convert_image_type(ByteImage,&Int2Image,"int2");
bit_lshift(Int2Image,&FullInt2Image,8);
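The effect of the shift on int2 pixels can be sketched as follows (illustrative; two's-complement wraparound of bits shifted out of the 16-bit range is an assumption of this sketch, not a statement about the operator):

```python
def bit_lshift_int2(img, shift):
    """Sketch: left-shift each pixel, keeping 16 bits and reinterpreting
    the result as a signed int2 value (two's complement)."""
    out = []
    for g in img:
        v = (g << shift) & 0xFFFF          # keep the low 16 bits
        if v >= 0x8000:                    # reinterpret as signed int2
            v -= 0x10000
        out.append(v)
    return out
```

Shifting a byte value by 8, as in the example above, moves it into the high byte of the int2 range.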
Result
If the images are correct (type) and Shift has a valid value, the operator bit_lshift returns the value
2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) is set via the operator
set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_lshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
scale_image
See also
bit_rshift
Module
Foundation
short, unsigned short, int/long). Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageNot (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 /
int2 / uint2 / int4
Result image(s) by complement operation.
Example
read_image(Image0,’affe’)
disp_image(Image0,WindowHandle)
bit_not(Image0,ImageBitN)
disp_image(ImageBitN,WindowHandle)
Result
If the images are correct (type), the operator bit_not returns the value 2 (H_MSG_TRUE). The behavior
in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_not is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_or, bit_and, add_image
See also
bit_slice, bit_mask
Module
Foundation
read_image(Image0,’affe’)
disp_image(Image0,WindowHandle)
read_image(Image1,’fabrik’)
disp_image(Image1,WindowHandle)
bit_or(Image0,Image1,ImageBitO)
disp_image(ImageBitO,WindowHandle)
Result
If the images are correct (type and number), the operator bit_or returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_or is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_and, add_image
See also
bit_xor, bit_and
Module
Foundation
bit_rshift(Int2Image,&ReducedInt2Image,8);
convert_image_type(ReducedInt2Image,&ByteImage,"byte");
Result
If the images are correct (type) and Shift has a valid value, the operator bit_rshift returns the value
2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) is set via the operator
set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_rshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
scale_image
See also
bit_lshift
Module
Foundation
read_image(&ByteImage,"fabrik");
for (bit=1; bit<=8; bit++)
{
  bit_slice(ByteImage,&Slice,bit);
  threshold(Slice,&Region,0,255);
  disp_region(Region,WindowHandle);
  clear_obj(Slice); clear_obj(Region);
}
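The bit selection itself can be sketched as follows (illustrative; the assumption here is that the selected bit keeps its position in the result, so any non-zero pixel of the slice belongs to the extracted bit plane):

```python
def bit_slice_pixels(img, bit):
    """Sketch: select bit plane 'bit' (1 = least significant bit)
    of each pixel; the bit keeps its position in the result."""
    mask = 1 << (bit - 1)
    return [g & mask for g in img]
```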
Result
If the images are correct (type) and Bit has a valid value, the operator bit_slice returns the value 2
(H_MSG_TRUE). The behavior in case of empty input (no input images available) is set via the operator
set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_slice is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, bit_or
Alternatives
bit_mask
See also
bit_and, bit_lshift
Module
Foundation
read_image(Image0,’affe’)
disp_image(Image0,WindowHandle)
read_image(Image1,’fabrik’)
disp_image(Image1,WindowHandle)
bit_xor(Image0,Image1,ImageBitX)
disp_image(ImageBitX,WindowHandle)
Result
If the parameter values are correct, the operator bit_xor returns the value 2 (H_MSG_TRUE). The behavior
in case of empty input (no input images available) can be determined by the operator set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bit_xor is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_or, bit_and, add_image
See also
bit_or, bit_and
Module
Foundation
5.3 Color
cfa_to_rgb ( CFAImage : RGBImage : CFAType, Interpolation : )
grabbed using function calls from the frame grabber SDK, and are passed to HALCON using gen_image1 or
gen_image1_extern.
In single-chip CCD cameras, a color filter array in front of the sensor provides (subsampled) color information.
The most frequently used filter is the so called Bayer filter. The color filter array has the following layout in this
case:
G B G B G B ···
R G R G R G ···
G B G B G B ···
R G R G R G ···
⋮
Each gray value of the input image CFAImage corresponds to the brightness of the pixel behind the corresponding
color filter. Hence, in the above layout, the pixel (0,0) corresponds to a green color value, while the pixel (0,1)
corresponds to a blue color value. The layout of the Bayer filter is completely determined by the first two elements
of the first row of the image, and can be chosen with the parameter CFAType. In particular, this enables the correct
conversion of color filter array images that have been cropped out of a larger image (e.g., using crop_part or
crop_rectangle1). The algorithm that is used to interpolate the RGB values is determined by the parameter
Interpolation. Currently, the only possible choice is ’bilinear’.
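The mapping from CFAType to the color seen by an individual pixel can be sketched as follows (not HALCON code; the helper is illustrative):

```python
def bayer_color(row, col, cfa_type="bayer_gb"):
    """Sketch: color filter in front of pixel (row, col) of a Bayer mosaic.
    The CFAType names the first two filters of the first image row."""
    first_row = {"bayer_gb": "GB", "bayer_gr": "GR",
                 "bayer_bg": "BG", "bayer_rg": "RG"}[cfa_type]
    # In the second row of the 2x2 Bayer cell, G stays G, while B and R
    # are swapped relative to the first row (and the columns alternate).
    partner = {"G": "G", "B": "R", "R": "B"}
    second_row = partner[first_row[1]] + partner[first_row[0]]
    cell = (first_row, second_row)
    return cell[row % 2][col % 2]
```

For the ’bayer_gb’ layout shown above, pixel (0,0) sees green and pixel (0,1) sees blue.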
Parameter
. CFAImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input image.
. RGBImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Output image.
. CFAType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Color filter array type.
Default Value : ’bayer_gb’
List of values : CFAType ∈ {’bayer_gb’, ’bayer_gr’, ’bayer_bg’, ’bayer_rg’}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Interpolation type.
Default Value : ’bilinear’
List of values : Interpolation ∈ {’bilinear’}
Result
cfa_to_rgb returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
cfa_to_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_image1_extern, gen_image1, grab_image
Possible Successors
decompose3
See also
trans_from_rgb
Module
Foundation
Compute the transformation matrix of the principal component analysis of multichannel images.
( Y )   ( 0.299  0.587  0.144 )   ( R )   (   0 )
( I ) = ( 0.595 −0.276 −0.333 ) ∗ ( G ) + ( 128 )
( Q )   ( 0.209 −0.522  0.287 )   ( B )   ( 128 )
i.e., TransMat =
[0.299, 0.587, 0.144, 0.0, 0.595, −0.276, −0.333, 128.0, 0.209, −0.522, 0.287, 128.0]
Here, it should be noted that the above transformation is unnormalized, i.e., the resulting color values can lie
outside the range [0, 255]. The transformation ’yiq’ in trans_from_rgb additionally scales the rows of the
matrix (except for the constant offset) appropriately.
To avoid a loss of information, linear_trans_color returns an image of type real. If a different image type
is desired, the image can be transformed with convert_image_type.
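The row-wise application of such a transformation matrix to a single RGB pixel can be sketched as follows (illustrative; the real operator works on whole images and returns a real image):

```python
def linear_trans_pixel(pixel, trans_mat):
    """Sketch: apply an n x (m+1) transformation matrix, given as a flat
    tuple whose last column is a constant offset, to one m-channel pixel."""
    m = len(pixel)
    out = []
    for row in range(len(trans_mat) // (m + 1)):
        coeffs = trans_mat[row * (m + 1): (row + 1) * (m + 1)]
        out.append(sum(c * p for c, p in zip(coeffs, pixel)) + coeffs[m])
    return out

# The unnormalized YIQ matrix from above.
yiq = [0.299, 0.587, 0.144, 0.0,
       0.595, -0.276, -0.333, 128.0,
       0.209, -0.522, 0.287, 128.0]
```

Applying the matrix to a white byte pixel (255, 255, 255) yields a Y value above 255, illustrating why the result image is of type real.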
Parameter
. Image (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Multichannel input image.
. ImageTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .multichannel-image(-array) ; Hobject : real
Multichannel output image.
. TransMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Transformation matrix for the color values.
Result
The operator linear_trans_color returns the value 2 (H_MSG_TRUE) if the parameters are correct. Oth-
erwise an exception is raised.
Parallelization Information
linear_trans_color is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_principal_comp_trans
Possible Successors
convert_image_type
Alternatives
principal_comp, trans_from_rgb, trans_to_rgb
Module
Foundation
Parameter
Parallelization Information
rgb1_to_gray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
compose3
Alternatives
trans_from_rgb, rgb3_to_gray
Module
Foundation
Parameter
Parallelization Information
rgb3_to_gray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Alternatives
rgb1_to_gray, trans_from_rgb
Module
Foundation
Transform an image from the RGB color space to an arbitrary color space.
trans_from_rgb transforms an image from the RGB color space to an arbitrary color space (ColorSpace).
The three channels of the image are passed as three separate images on input and output.
The operator trans_from_rgb supports the image types byte, uint2, int4, and real. In the case of int4 images,
the images should not contain negative values. In the case of real images, all values should lie between 0 and 1.
Otherwise, the results of the transformation may not be reasonable.
The following scalings are performed according to the image type:
• For byte and uint2 images, the range of color space values is generally mapped to the full range [0..255]
or [0..65535], respectively. Because of this, the origin of signed values (e.g., CIELab or YIQ) may not be at
the center of the range.
• Hue values are represented by angles of [0..2π] and are coded differently for the particular image types:
– byte-images map the angle domain to [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π].
• Saturation values are represented by percentages of [0..100] and are coded differently for the particular
image types:
– byte-images map the saturation values to [0..255].
Range of values:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]
Range of values:
Y ∈ [0; 1], U ∈ [−0.436; 0.436], V ∈ [−0.615; 0.496]
’argyb’
( A  )   ( 0.30  0.59  0.11 ) ( R )
( Rg ) = ( 0.50 −0.50  0.00 ) ( G )
( Yb )   ( 0.25  0.25 −0.50 ) ( B )
Range of values:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Yb ∈ [−0.5; 0.5]
’ciexyz’
( X )   ( 0.412453  0.357580  0.180423 ) ( R )
( Y ) = ( 0.212671  0.715160  0.072169 ) ( G )
( Z )   ( 0.019334  0.119193  0.950227 ) ( B )
The primary colors used correspond to sRGB and ITU-R BT.709, respectively. D65 is used as the white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000), blue := (0.1500, 0.0600), white65 := (0.3127, 0.3290)
Range of values:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
’hls’
min = min(R,G,B)
max = max(R,G,B)
L = (min + max) / 2
if (max == min)
    H = 0
    S = 0
else
    if (L > 0.5)
        S = (max - min) / (2 - max - min)
    else
        S = (max - min) / (max + min)
    fi
    if (R == max)
’hsi’
( M1 )   ( 2/√6   −1/√6   −1/√6 ) ( R )
( M2 ) = (  0      1/√2   −1/√2 ) ( G )
( I1 )   ( 1/√3    1/√3    1/√3 ) ( B )
H = arctan(M2 / M1)
S = √(M1² + M2²)
I = I1 / √3
Range of values:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]
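A sketch of this transform for a single real-valued pixel (using atan2 to resolve the quadrant of arctan(M2/M1), which is an assumption of this sketch):

```python
import math

def rgb_to_hsi(r, g, b):
    """Sketch of the 'hsi' transform for real pixels in [0, 1]."""
    m1 = (2 * r - g - b) / math.sqrt(6)
    m2 = (g - b) / math.sqrt(2)
    i1 = (r + g + b) / math.sqrt(3)
    h = math.atan2(m2, m1) % (2 * math.pi)  # hue in [0, 2*pi)
    s = math.hypot(m1, m2)                  # saturation, max sqrt(2/3)
    i = i1 / math.sqrt(3)                   # intensity in [0, 1]
    return h, s, i
```

A pure gray pixel has zero saturation; a pure red pixel reaches the maximum saturation √(2/3).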
else
H = acos(A / C)
fi
if (B > G)
H = 2 * pi - H
fi
fi
fi
Range of values:
I ∈ [0; 1], H ∈ [0; 2π], S ∈ [0; 1]
’cielab’
( X )   ( 0.412453  0.357580  0.180423 ) ( R )
( Y ) = ( 0.212671  0.715160  0.072169 ) ( G )
( Z )   ( 0.019334  0.119193  0.950227 ) ( B )
L = 116 ∗ f(Y/Yw) − 16
a = 500 ∗ (f(X/Xw) − f(Y/Yw))
b = 200 ∗ (f(Y/Yw) − f(Z/Zw))
where
f(t) = t^(1/3)                  if t > (24/116)^3
f(t) = (841/108) ∗ t + 16/116   otherwise
Black point B:
(Rb , Gb , Bb ) = (0, 0, 0)
White point W = (Rw , Gw , Bw ), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (2^16 − 1, 2^16 − 1, 2^16 − 1),
Wint4 = (2^31 − 1, 2^31 − 1, 2^31 − 1), Wreal = (1.0, 1.0, 1.0)
Range of values:
L ∈ [0; 100], a ∈ [−86.1813; 98.2352], b ∈ [−107.8617; 94.4758]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
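The piecewise function f and the lightness L can be sketched as follows (for real images, where Yw = 1.0):

```python
def f_lab(t):
    """The piecewise cube-root function of the CIELab transform."""
    if t > (24 / 116) ** 3:
        return t ** (1 / 3)
    return (841 / 108) * t + 16 / 116

def lab_lightness(y, y_white=1.0):
    # L = 116 * f(Y / Yw) - 16
    return 116 * f_lab(y / y_white) - 16
```

The linear branch replaces the cube root near zero; both branches agree at the threshold t = (24/116)^3, so L is continuous, with L = 0 for black and L = 100 for the white point.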
’i1i2i3’
( I1 )   ( 0.333  0.333  0.333 ) ( R )
( I2 ) = ( 1.0    0.0   −1.0  ) ( G )
( I3 )   ( −0.5   1.0   −0.5  ) ( B )
Range of values:
I1 ∈ [0; 1], I2 ∈ [−1; 1], I3 ∈ [−1; 1]
’ciexyz2’
( X )   ( 0.620  0.170  0.180 ) ( R )
( Y ) = ( 0.310  0.590  0.110 ) ( G )
( Z )   ( 0.000  0.066  1.020 ) ( B )
Range of values:
X ∈ [0; 0.970], Y ∈ [0; 1.010], Z ∈ [0; 1.086]
’ciexyz3’
( X )   ( 0.618  0.177  0.205 ) ( R )
( Y ) = ( 0.299  0.587  0.114 ) ( G )
( Z )   ( 0.000  0.056  0.944 ) ( B )
Range of values:
X ∈ [0; 1], Y ∈ [0; 1], Z ∈ [0; 1]
’ciexyz4’
X 0.476 0.299 0.175 R
Y = 0.262 0.656 0.082 G
Z 0.020 0.161 0.909 B
Primary colors (chromaticity coordinates x, y, z):
red := (0.628, 0.346, 0.026), green := (0.268, 0.588, 0.144), blue := (0.150, 0.070, 0.780), white (D65) := (0.313, 0.329, 0.358)
Range of values:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
Parameter
. ImageRed (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (red channel).
. ImageGreen (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (green channel).
. ImageBlue (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (blue channel).
. ImageResult1 (output_object) . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Color-transformed output image (channel 1).
. ImageResult2 (output_object) . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Color-transformed output image (channel 2).
. ImageResult3 (output_object) . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Color-transformed output image (channel 3).
. ColorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Color space of the output image.
Default Value : ’hsv’
List of values : ColorSpace ∈ {’cielab’, ’hsv’, ’hsi’, ’yiq’, ’yuv’, ’argyb’, ’ciexyz’, ’ciexyz2’, ’ciexyz3’,
’ciexyz4’, ’hls’, ’ihs’, ’i1i2i3’}
Example
Result
trans_from_rgb returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour
can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
trans_from_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Possible Successors
compose3
Alternatives
rgb1_to_gray, rgb3_to_gray
See also
trans_to_rgb
Module
Foundation
Transform an image from an arbitrary color space to the RGB color space.
trans_to_rgb transforms an image from an arbitrary color space (ColorSpace) to the RGB color space.
The three channels of the image are passed as three separate images on input and output.
The operator trans_to_rgb supports the image types byte, uint2, int4, and real. The domain of the input
images must match the domain provided by a corresponding transformation with trans_from_rgb. If not, the
results of the transformation may not be reasonable.
This includes some scalings in the case of certain image types and transformations:
• For byte and uint2 images, the domain of color space values is expected to be spread to the full
domain of [0..255] or [0..65535], respectively. This includes a shift in the case of signed values (e.g.,
CIELab or YIQ), such that the origin of signed values may not be at the center of the domain.
• Hue values are represented by angles in [0..2π] and are coded differently for the particular image types:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages in [0..100] and are coded differently for the particular
image types:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].
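These codings can be sketched as follows (an illustrative Python sketch, not HALCON code; it assumes 2π resp. 100 % maps to the top of each range):

```python
import math

def code_hue(h_rad, image_type):
    # map an angle in [0, 2*pi] onto the type-specific hue coding
    frac = h_rad / (2.0 * math.pi)
    if image_type == 'byte':
        return int(round(frac * 255))        # [0..255]
    if image_type in ('uint2', 'int4'):
        return int(round(frac * 21600))      # angle minutes [0..21600]
    return h_rad                             # 'real': radians

def code_saturation(s_percent, image_type):
    frac = s_percent / 100.0
    if image_type == 'byte':
        return int(round(frac * 255))        # [0..255]
    if image_type in ('uint2', 'int4'):
        return int(round(frac * 10000))      # [0..10000]
    return frac                              # 'real': [0..1]

print(code_hue(math.pi, 'uint2'), code_saturation(50.0, 'int4'))   # -> 10800 5000
```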
The following transformations are supported:
(All domains are based on RGB values scaled to [0;1]. To obtain the domains for a certain image type, they must
be multiplied by the maximum gray value of the image type, e.g., 255 in the case of a byte image.)
’yiq’
R 0.999 0.962 0.615 Y
G = 0.949 −0.220 −0.732 I
B 0.999 −1.101 1.706 Q
Domain:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]
’argyb’
R 1.00 1.29 0.22 A
G = 1.00 −0.71 0.22 Rg
B 1.00 0.29 −1.78 Yb
Domain:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Y b ∈ [−0.5; 0.5]
’ciexyz’
R 3.240479 −1.53715 −0.498535 X
G = −0.969256 1.875991 0.041556 Y
B 0.055648 −0.204043 1.057311 Z
The primary colors used correspond to sRGB respectively CIE Rec. 709. D65 is used as white point.
Used primary
colors (x, y, z):
0.6400 0.3000 0.1500 0.3127
red:= , green:= , blue:= , white65 :=
0.3300 0.6000 0.0600 0.3290
Domain:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
’cielab’
fy = (L + 16) / 116
fx = a/500 + fy
fz = fy - b/200
X = Xw * fx^3,                     fx > 24/116
X = (fx - 16/116) * Xw * 108/841,  otherwise
Y = Yw * fy^3,                     fy > 24/116
Y = (fy - 16/116) * Yw * 108/841,  otherwise
Z = Zw * fz^3,                     fz > 24/116
Z = (fz - 16/116) * Zw * 108/841,  otherwise
R 3.240479 −1.53715 −0.498535 X
G = −0.969256 1.875991 0.041556 Y
B 0.055648 −0.204043 1.057311 Z
Black point B:
(Rb , Gb , Bb ) = (0, 0, 0)
White point W = (Rw , Gw , Bw ), according to image type:
W_byte = (255, 255, 255), W_uint2 = (2^16 - 1, 2^16 - 1, 2^16 - 1),
W_int4 = (2^31 - 1, 2^31 - 1, 2^31 - 1), W_real = (1.0, 1.0, 1.0)
Domain:
L ∈ [0; 100], a ∈ [−94.3383; 90.4746], b ∈ [−101.3636; 84.4473]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
’hls’
Hi = integer(H * 6)
Hf = fraction(H * 6)
if (L <= 0.5)
max = L * (S + 1)
else
max = L + S - (L * S)
fi
min = 2 * L - max
if (S == 0)
R = L
G = L
B = L
else
if (Hi == 0)
R = max
G = min + Hf * (max - min)
B = min
elif (Hi == 1)
R = min + (1 - Hf) * (max - min)
G = max
B = min
elif (Hi == 2)
R = min
G = max
B = min + Hf * (max - min)
elif (Hi == 3)
R = min
G = min + (1 - Hf) * (max - min)
B = max
elif (Hi == 4)
R = min + Hf * (max - min)
G = min
B = max
elif (Hi == 5)
R = max
G = min
B = min + (1 - Hf) * (max - min)
fi
fi
Domain:
H ∈ [0; 2π], L ∈ [0; 1], S ∈ [0; 1]
’hsi’
    M1 = S * sin H
    M2 = S * cos H
    I1 = I * √3
    ( R )   (  2/√6    0      1/√3 ) ( M1 )
    ( G ) = ( -1/√6    1/√2   1/√3 ) ( M2 )
    ( B )   ( -1/√6   -1/√2   1/√3 ) ( I1 )
Domain:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]
’hsv’
if (S == 0)
R = V
G = V
B = V
else
Hi = integer(H)
Hf = fraction(H)
if (Hi == 0)
R = V
G = V * (1 - (S * (1 - Hf)))
B = V * (1 - S)
elif (Hi == 1)
R = V * (1 - (S * Hf))
G = V
B = V * (1 - S)
elif (Hi == 2)
R = V * (1 - S)
G = V
B = V * (1 - (S * (1 - Hf)))
elif (Hi == 3)
R = V * (1 - S)
G = V * (1 - (S * Hf))
B = V
elif (Hi == 4)
R = V * (1 - (S * (1 - Hf)))
G = V * (1 - S)
B = V
elif (Hi == 5)
R = V
G = V * (1 - S)
B = V * (1 - (S * Hf))
fi
fi
Domain:
H ∈ [0; 2π], S ∈ [0; 1], V ∈ [0; 1]
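The sextant logic of the ’hsv’ pseudocode can be sketched as follows (an illustrative Python sketch, not HALCON code; as in the pseudocode, H is taken on the sextant scale [0,6), with S and V in [0,1]):

```python
def hsv_to_rgb(h, s, v):
    # mirror of the 'hsv' inverse-transform pseudocode
    if s == 0.0:
        return v, v, v
    hi, hf = int(h), h - int(h)
    if hi == 0:
        return v, v * (1 - s * (1 - hf)), v * (1 - s)
    if hi == 1:
        return v * (1 - s * hf), v, v * (1 - s)
    if hi == 2:
        return v * (1 - s), v, v * (1 - s * (1 - hf))
    if hi == 3:
        return v * (1 - s), v * (1 - s * hf), v
    if hi == 4:
        return v * (1 - s * (1 - hf)), v * (1 - s), v
    return v, v * (1 - s), v * (1 - s * hf)

print(hsv_to_rgb(0.0, 1.0, 1.0))   # -> (1.0, 0.0, 0.0), i.e. pure red
```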
’ciexyz4’
R 2.750 −1.149 −0.426 X
G = −1.118 2.026 0.033 Y
B 0.138 −0.333 1.104 Z
Domain:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
Parameter
. ImageInput1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 1).
. ImageInput2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 2).
. ImageInput3 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 3).
. ImageRed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Red channel.
. ImageGreen (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Green channel.
. ImageBlue (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Blue channel.
. ColorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Color space of the input image.
Default Value : ’hsv’
List of values : ColorSpace ∈ {’hsi’, ’yiq’, ’yuv’, ’argyb’, ’ciexyz’, ’ciexyz4’, ’cielab’, ’hls’, ’hsv’}
Example
Result
trans_to_rgb returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
trans_to_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Possible Successors
compose3, disp_color
See also
decompose3
Module
Foundation
5.4 Edges
sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges(ThinEdge,EdgeAmp,&CloseEdges,15);
skeleton(CloseEdges,&ThinCloseEdges);
Result
close_edges returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
close_edges is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
edges_image, sobel_amp, threshold, skeleton
Possible Successors
skeleton
Alternatives
close_edges_length, dilation1, closing
See also
gray_skeleton
Module
Foundation
sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges_length(ThinEdge,EdgeAmp,&CloseEdges,15,3);
Result
close_edges_length returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the
behaviour can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
φ = atan2(∂g(x,y)/∂y, ∂g(x,y)/∂x)
TR = ∂²g(x,y)/∂x² + ∂²g(x,y)/∂y²
A = E·G - F²
E = 1 + (∂g(x,y)/∂x)²
F = (∂g(x,y)/∂x) · (∂g(x,y)/∂y)
G = 1 + (∂g(x,y)/∂y)²
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. DerivGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : real
Filtered result image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Sigma of the Gaussian.
Default Value : 1.0
Suggested values : Sigma ∈ {0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.2 ≤ Sigma ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma > 0.0
. Component (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Derivative or feature to be calculated.
Default Value : ’x’
List of values : Component ∈ {’none’, ’x’, ’y’, ’gradient’, ’xx’, ’yy’, ’xy’, ’xxx’, ’yyy’, ’xxy’, ’xyy’, ’det’,
’mean_curvature’, ’gauss_curvature’, ’eigenvalue1’, ’eigenvalue2’, ’main1_curvature’, ’main2_curvature’,
’kitchen_rosenfeld’, ’zuniga_haralick’, ’2nd_ddg’, ’de_saint_venant’, ’area’, ’laplace’, ’gradient_dir’,
’eigenvec_dir’}
Example (Syntax: C)
read_image(&Image,"mreut");
derivate_gauss(Image,&Gauss,3.0,"x");
zero_crossing(Gauss,&ZeroCrossings);
Parallelization Information
derivate_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, laplace_of_gauss, binomial_filter, gauss_image, smooth_image,
isotropic_diffusion
See also
zero_crossing, dual_threshold
Module
Foundation
sigma1 = Sigma / sqrt( -2 * log(1/SigFactor) / (SigFactor^2 - 1) )
sigma2 = sigma1 / SigFactor
DiffOfGauss = (Image ∗ gauss(sigma1)) − (Image ∗ gauss(sigma2))
For SigFactor = 1.6, an approximation of the Mexican hat operator according to Marr results. The resulting
image is stored in DiffOfGauss.
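The σ relation can be sketched as follows (an illustrative Python sketch, not HALCON code; it assumes the radical reads −2·log(1/SigFactor)/(SigFactor²−1), as reconstructed from the typeset formula):

```python
import math

def dog_sigmas(sigma, sig_factor):
    # standard deviations of the two Gaussians whose difference
    # approximates a Laplacian of Gaussian of scale sigma
    sigma1 = sigma / math.sqrt(-2.0 * math.log(1.0 / sig_factor) / (sig_factor ** 2 - 1.0))
    sigma2 = sigma1 / sig_factor
    return sigma1, sigma2

s1, s2 = dog_sigmas(3.0, 1.6)
print(s1, s2)   # the two sigmas bracket the requested Sigma = 3.0
```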
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Input image
. DiffOfGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : int2
Difference-of-Gaussians image (approximation of the LoG).
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Smoothing parameter of the Laplace operator to approximate.
Default Value : 3.0
Suggested values : Sigma ∈ {2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.2 ≤ Sigma ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma > 0.0
. SigFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Ratio of the standard deviations used (Marr recommends 1.6).
Default Value : 1.6
Typical range of values : 0.1 ≤ SigFactor ≤ 10.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : SigFactor > 0.0
Example
read_image(Image,’fabrik’)
diff_of_gauss(Image,Laplace,2.0,1.6)
zero_crossing(Laplace,ZeroCrossings).
Complexity
The execution time depends linearly on the number of pixels and the size of sigma.
Result
diff_of_gauss returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
diff_of_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, derivate_gauss
References
D. Marr: “Vision (A computational investigation into human representation and processing of visual information)”;
New York, W.H. Freeman and Company; 1982.
Module
Foundation
The partial derivatives of the images, which are necessary to calculate the metric tensor, are calculated with the
corresponding edge filters, analogously to edges_image. For Filter = ’canny’, the partial derivatives of
the Gaussian smoothing masks are used (see derivate_gauss), for Filter = ’deriche1’ and ’deriche2’ the
corresponding Deriche filters, for Filter = ’shen’ the corresponding Shen filters, and for Filter = ’sobel_fast’
the Sobel filter. Analogously to single-channel images, the gradient direction is defined by the vector v in which the
rate of change f is maximum. The vector v is given by the eigenvector corresponding to the largest eigenvalue of
G. The square root of the eigenvalue is the equivalent of the gradient magnitude (the amplitude) for single-channel
images, and is returned in ImaAmp. For single-channel images, both definitions are equivalent. Since the gradient
magnitude may be larger than what can be represented in the input image data type (byte or uint2), it is stored in
the next larger data type (uint2 or int4) in ImaAmp. The eigenvector also is used to define the edge direction. In
contrast to single-channel images, the edge direction can only be defined modulo 180 degrees. Like in the output
of edges_image, the edge directions are stored in 2-degree steps, and are returned in ImaDir. Points with
edge amplitude 0 are assigned the edge direction 255 (undefined direction). For speed reasons, the edge directions
are not computed explicitly for Filter = ’sobel_fast’. Therefore, ImaDir is an empty object in this case.
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and Alpha is ignored), and can be estimated by calling info_edges for concrete values
of the parameter Alpha. It decreases for increasing Alpha for the Deriche and Shen filters and increases for
the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide”
filters exhibit a larger invariance to noise, but also a decreased ability to detect small details. Non-recursive filters,
such as the Canny filter, are realized using filter masks, and thus the execution time increases for increasing filter
width. In contrast, the execution time for recursive filters does not depend on the filter width. Thus, arbitrary
filter widths are possible using the Deriche and Shen filters without increasing the run time of the operator. The
resulting advantage in speed compared to the Canny operator naturally increases for larger filter widths. As border
treatment, the recursive operators assume that the images are zero outside of the image, while the Canny operator
mirrors the gray value at the image border. Comparable filter widths can be obtained by the following choices of
Alpha:
nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,1000,...)
For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImaAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : uint2 / int4
Edge amplitude (gradient magnitude) image.
. ImaDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : direction
Edge direction image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Edge operator to be applied.
Default Value : ’canny’
List of values : Filter ∈ {’canny’, ’deriche1’, ’deriche2’, ’shen’, ’sobel_fast’}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 1.0
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0, 2.5, 3.0}
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. NMS (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Non-maximum suppression (’none’, if not desired).
Default Value : ’nms’
List of values : NMS ∈ {’nms’, ’inms’, ’hvnms’, ’none’}
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Lower threshold for the hysteresis threshold operation (negative if no thresholding is desired).
Default Value : 20
Suggested values : Low ∈ {5, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Low
Minimum Increment : 1
Recommended Increment : 5
Restriction : (Low ≥ 1) ∨ (Low < 0)
Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
edges_color_sub_pix extracts subpixel precise color edges from the input image Image. The definition
of color edges is given in the description of edges_color. The same edge filters as in edges_color
can be selected: ’canny’, ’deriche1’, ’deriche2’, and ’shen’. In addition, a fast Sobel filter can be selected with
’sobel_fast’. The filters are specified by the parameter Filter.
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily. For a detailed description of this
parameter see edges_color. This parameter is ignored for Filter = ’sobel_fast’.
The extracted edges are returned as subpixel precise XLD contours in Edges. For all edge operators except for
’sobel_fast’, the following attributes are defined for each edge point (see get_contour_attrib_xld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
edges_color_sub_pix links the edge points into edges by using an algorithm similar to a hysteresis thresh-
old operation, which is also used in edges_sub_pix and lines_gauss. Points with an amplitude larger
than High are immediately accepted as belonging to an edge, while points with an amplitude smaller than Low
are rejected. All other points are accepted as edges if they are connected to accepted edge points (see also
lines_gauss and hysteresis_threshold).
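The linking rule can be sketched on a 1-D amplitude profile (an illustrative Python sketch, not HALCON code, which of course links 2-D contour points):

```python
def hysteresis(amps, low, high):
    # amplitudes > high are accepted outright, < low are rejected;
    # the rest are accepted only if connected to an accepted point
    ok = [a > high for a in amps]
    changed = True
    while changed:
        changed = False
        for i, a in enumerate(amps):
            if not ok[i] and a >= low:
                if (i > 0 and ok[i - 1]) or (i + 1 < len(amps) and ok[i + 1]):
                    ok[i] = changed = True
    return ok

print(hysteresis([50, 30, 10, 30], low=20, high=40))   # -> [True, True, False, False]
```

The second point (30) is pulled in by its strong neighbor (50), while the last point (30) stays rejected because it is separated by a sub-threshold gap.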
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of Filter that are described
above. This mode is analogous to the mode for completing junctions that is available in edges_sub_pix and
lines_gauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, howver, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Parameter
edges_color_sub_pix returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during
execution. If the input is empty, the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
edges_color_sub_pix is reentrant and processed without parallelization.
Alternatives
edges_color
See also
edges_image, edges_sub_pix, info_edges, hysteresis_threshold, lines_gauss,
lines_facet
References
C. Steger: “Subpixel-Precise Extraction of Lines and Edges”; International Archives of Photogrammetry and
Remote Sensing, vol. XXXIII, part B3; pp. 141-156; 2000.
C. Steger: “Unbiased Extraction of Curvilinear Structures from 2D and 3D Images”; Herbert Utz Verlag, München;
1998.
S. Di Zenzo: “A Note on the Gradient of a Multi-Image”; Computer Vision, Graphics, and Image Processing, vol.
33; pp. 116-125; 1986.
Aldo Cumani: “Edge Detection in Multispectral Images”; Computer Vision, Graphics, and Image Processing:
Graphical Models and Image Processing, vol. 53, no. 1; pp. 40-51; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
J. Shen, S. Castan: “An Optimal Linear Operator for Step Edge Detection”; Computer Vision, Graphics, and Image
Processing: Graphical Models and Image Processing, vol. 54, no. 2; pp. 112-133; 1992.
Module
2D Metrology
absolute value. This behavior can be obtained for byte-images as well by selecting ’deriche1_int4’ or
’deriche2_int4’ as filter. This can be used to calculate the second derivative of an image by applying edges_image
(with parameter ’lanser2’) to the signed first derivative. Edge directions are stored in 2-degree steps, i.e., an edge
direction of x degrees with respect to the horizontal axis is stored as x/2 in the edge direction image. Furthermore,
the direction of the change of intensity is taken into account. Let [Ex , Ey ] denote the image gradient. Then the
following edge directions are returned as r/2:
edge direction                      [Ex, Ey]    r
from bottom to top                  0/+         0
from lower right to upper left      +/-         ]0, 90[
from right to left                  +/0         90
from upper right to lower left      +/+         ]90, 180[
from top to bottom                  0/-         180
from upper left to lower right      -/+         ]180, 270[
from left to right                  -/0         270
from lower left to upper right      -/-         ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
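The 2-degree coding can be sketched as follows (an illustrative Python sketch, not HALCON code):

```python
def code_edge_direction(angle_deg, amplitude):
    # an edge direction of x degrees is stored as x/2;
    # amplitude 0 is mapped to 255 (undefined direction)
    if amplitude == 0:
        return 255
    return int(angle_deg % 360) // 2

print(code_edge_direction(90, 13), code_edge_direction(0, 0))   # -> 45 255
```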
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and Alpha is ignored), and can be estimated by calling info_edges for concrete
values of the parameter Alpha. It decreases for increasing Alpha for the Deriche, Lanser and Shen filters and
increases for the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator
is based. “Wide” filters exhibit a larger invariance to noise, but also a decreased ability to detect small details.
Non-recursive filters, such as the Canny filter, are realized using filter masks, and thus the execution time increases
for increasing filter width. In contrast, the execution time for recursive filters does not depend on the filter width.
Thus, arbitrary filter widths are possible using the Deriche, Lanser and Shen filters without increasing the run time
of the operator. The resulting advantage in speed compared to the Canny operator naturally increases for larger
filter widths. As border treatment, the recursive operators assume that the images to be zero outside of the image,
while the Canny operator repeats the gray value at the image’s border. Comparable filter widths can be obtained
by the following choices of Alpha:
The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified versions of the operators (’lanser1’, ’lanser2’,
and ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (up to 11 × 11, i.e., Alpha = 0.5 for ’lanser2’), all filters yield similar results.
Only for “wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators — closely followed by the Deriche operators.
edges_image optionally offers to apply a non-maximum-suppression (NMS = ’nms’/’inms’/’hvnms’; ’none’ if
not desired) and hysteresis threshold operation (Low,High; at least one negative if not desired) to the resulting
edge image. Conceptually, this corresponds to the following calls:
nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,999,...)
For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / int4
Input image.
. ImaAmp (output_object) . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / int4 / real
Edge amplitude (gradient magnitude) image.
. ImaDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : direction
Edge direction image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Edge operator to be applied.
Default Value : ’lanser2’
List of values : Filter ∈ {’deriche1’, ’deriche1_int4’, ’deriche2’, ’deriche2_int4’, ’lanser1’, ’lanser2’,
’shen’, ’mshen’, ’canny’, ’sobel_fast’}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.1}
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. NMS (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Non-maximum suppression (’none’, if not desired).
Default Value : ’nms’
List of values : NMS ∈ {’nms’, ’inms’, ’hvnms’, ’none’}
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Lower threshold for the hysteresis threshold operation (negative, if no thresholding is desired).
Default Value : 20
Suggested values : Low ∈ {5, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Low ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Restriction : (Low > 1) ∨ (Low < 0)
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Upper threshold for the hysteresis threshold operation (negative, if no thresholding is desired).
Default Value : 40
Suggested values : High ∈ {10, 15, 20, 25, 30, 40, 50, 60, 70}
Typical range of values : 1 ≤ High ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Restriction : ((High > 1) ∨ (High < 0)) ∧ (High ≥ Low)
Example
read_image(Image,’fabrik’)
edges_image(Image,Amp,Dir,’lanser2’,0.5,’none’,-1,-1)
hysteresis_threshold(Amp,Margin,20,30,30).
Result
edges_image returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution.
If the input is empty, the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
edges_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
info_edges
Possible Successors
threshold, hysteresis_threshold, close_edges_length
Alternatives
sobel_dir, frei_dir, kirsch_dir, prewitt_dir, robinson_dir
See also
info_edges, nonmax_suppression_amp, hysteresis_threshold, bandpass_image
References
S. Lanser, W. Eckstein: “Eine Modifikation des Deriche-Verfahrens zur Kantendetektion” (A modification of the
Deriche method for edge detection); 13. DAGM-Symposium, München; Informatik Fachberichte 290; pp. 151-158;
Springer-Verlag; 1991.
S. Lanser: “Detektion von Stufenkanten mittels rekursiver Filter nach Deriche” (Detection of step edges using
recursive Deriche filters); Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof.
Radig; 1991.
J. Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J. Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.
R. Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R. Deriche: “Optimal Edge Detection Using Recursive Filtering”; Proc. of the First International Conference on
Computer Vision, London; pp. 501-505; 1987.
R. Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
S. Castan, J. Zhao, and J. Shen: “Optimal Filter for Edge Detection Methods and Results”; Proc. of the First
European Conference on Computer Vision, Antibes; Lecture Notes in Computer Science; no. 427; pp. 12-17;
Springer-Verlag; 1990.
Module
Foundation
Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
edges_sub_pix detects step edges using recursively implemented filters (according to Deriche, Lanser and
Shen) or the conventionally implemented “derivative of Gaussian” filter (using filter masks) proposed by Canny.
Thus, the following edge operators are available:
’deriche1’, ’lanser1’, ’deriche2’, ’lanser2’, ’shen’, ’mshen’, ’canny’, ’sobel’, and ’sobel_fast’
(parameter Filter).
The extracted edges are returned as sub-pixel precise XLD contours in Edges. For all edge operators except
’sobel_fast’, the following attributes are defined for each edge point (see get_contour_attrib_xld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all edge operators except ’sobel’
and ’sobel_fast’, and can be estimated by calling info_edges for concrete values of the parameter Alpha. It
decreases for increasing Alpha for the Deriche, Lanser and Shen filters and increases for the Canny filter, where
it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide” filters exhibit a larger
invariance to noise, but also a decreased ability to detect small details. Non-recursive filters, such as the Canny
filter, are realized using filter masks, and thus the execution time increases for increasing filter width. In contrast,
the execution time for recursive filters does not depend on the filter width. Thus, arbitrary filter widths are possible
using the Deriche, Lanser and Shen filters without increasing the run time of the operator. The resulting advantage
in speed compared to the Canny operator naturally increases for larger filter widths. As border treatment, the
recursive operators assume the image to be zero outside of the image domain, while the Canny operator repeats the
gray value at the image’s border. Comparable filter widths can be obtained by the following choices of Alpha:
HALCON 8.0.2
160 CHAPTER 5. FILTER
The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified version of the operators (’lanser1’, ’lanser2’
and ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (about 11 × 11, i.e., Alpha = 0.5 for ’lanser2’), all filters yield similar results. Only
for “wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators that support arbitrary mask sizes, closely followed by the Deriche
operators. The two Sobel filters, which use a fixed mask size of 3 × 3, are faster than the other filters. Of these
two, the filter ’sobel_fast’ is significantly faster than ’sobel’.
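As noted above, the run time of the recursive filters does not depend on the filter width. This property can be illustrated with a first-order forward/backward recursive smoothing filter in the spirit of Deriche's approach; the following Python sketch uses illustrative coefficients, not HALCON's actual filter:

```python
import math

def recursive_smooth(x, alpha):
    """Forward/backward first-order recursive smoothing.

    The cost is a fixed number of operations per sample, independent
    of the effective filter width (which grows as alpha decreases):
    this is why recursive filters allow arbitrary widths at constant
    run time.  The coefficients are illustrative, not HALCON's.
    """
    a = math.exp(-alpha)                  # feedback coefficient
    y = list(x)
    for i in range(1, len(y)):            # causal pass
        y[i] = (1 - a) * y[i] + a * y[i - 1]
    for i in range(len(y) - 2, -1, -1):   # anti-causal pass
        y[i] = (1 - a) * y[i] + a * y[i + 1]
    return y
```

A constant signal passes through unchanged, while an impulse is spread over a neighborhood whose extent depends only on alpha, not on the amount of work per pixel.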
edges_sub_pix links the edge points into edges by using an algorithm similar to a hysteresis threshold op-
eration, which is also used in lines_gauss. Points with an amplitude larger than High are immediately
accepted as belonging to an edge, while points with an amplitude smaller than Low are rejected. All other
points are accepted as edges if they are connected to accepted edge points (see also lines_gauss and
hysteresis_threshold).
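The linking rule described above can be sketched as follows; the 8-connectivity and the exact threshold comparisons in this Python sketch are assumptions for illustration, not taken from the HALCON implementation:

```python
from collections import deque

def hysteresis_link(amp, low, high):
    """Accept pixels with amplitude > high as edge seeds, then grow
    edges over 8-connected neighbors with amplitude >= low; pixels
    with amplitude < low are rejected."""
    rows, cols = len(amp), len(amp[0])
    accepted = [[False] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if amp[r][c] > high:
                accepted[r][c] = True
                queue.append((r, c))
    while queue:                          # flood fill from the seeds
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and not accepted[rr][cc] and amp[rr][cc] >= low):
                    accepted[rr][cc] = True
                    queue.append((rr, cc))
    return accepted
```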
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of Filter that are described
above. This mode is analogous to the mode for completing junctions that is available in lines_gauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Example
read_image(Image,’fabrik’)
edges_sub_pix(Image,Edges,’lanser2’,0.5,20,40).
Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ Sigma) for the
Canny filter and O(A) for the recursive Lanser, Deriche, and Shen filters.
Let S = Width ∗ Height be the number of pixels of Image. Then edges_sub_pix requires at least 60 ∗ S bytes
of temporary memory during execution for all edge operators except ’sobel_fast’. For ’sobel_fast’, at least 9 ∗ S
bytes of temporary memory are required.
Result
edges_sub_pix returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution.
If the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
edges_sub_pix is reentrant and automatically parallelized (on tuple level).
Alternatives
sobel_dir, frei_dir, kirsch_dir, prewitt_dir, robinson_dir, edges_image
See also
info_edges, hysteresis_threshold, bandpass_image, lines_gauss, lines_facet
References
S.Lanser, W.Eckstein: “Eine Modifikation des Deriche-Verfahrens zur Kantendetektion”; 13. DAGM-Symposium,
München; Informatik Fachberichte 290; pp. 151-158; Springer-Verlag; 1991.
S.Lanser: “Detektion von Stufenkanten mittels rekursiver Filter nach Deriche”; Diplomarbeit; Technische Univer-
sität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R.Deriche: “Optimal Edge Detection Using Recursive Filtering”; Proc. of the First International Conference on
Computer Vision, London; pp. 501-505; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
S.Castan, J.Zhao and J.Shen: “Optimal Filter for Edge Detection Methods and Results”; Proc. of the First Euro-
pean Conference on Computer Vision, Antibes; Lecture Notes in Computer Science; no. 427; pp. 12-17; Springer-
Verlag; 1990.
Module
2D Metrology
frei_amp calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B.
Example
read_image(Image,’fabrik’)
frei_amp(Image,Frei_amp)
threshold(Frei_amp,Edges,128,255).
Result
frei_amp always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
frei_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, kirsch_amp, prewitt_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 0 −1
B = √2 0 −√2
1 0 −1
The result image contains the maximum response of the masks A and B. The edge directions are returned in
ImageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:
edge direction r
from bottom to top 0/+ 0
from lower right to upper left +/− ]0, 90[
from right to left +/0 90
from upper right to lower left +/+ ]90, 180[
from top to bottom 0/− 180
from upper left to lower right −/+ ]180, 270[
from left to right −/0 270
from lower left to upper right −/− ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Example
read_image(Image,’fabrik’)
frei_dir(Image,Frei_dirA,Frei_dirD)
threshold(Frei_dirA,Res,128,255).
Result
frei_dir always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
frei_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, prewitt_dir, kirsch_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 −34 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1
This corresponds to applying a mean operator (mean_image), and then subtracting the original gray value. A
value of 128 is added to the result, i.e., zero crossings occur for 128.
This filter emphasizes high frequency components (edges and corners). The cutoff frequency is determined by the
size (Height × Width) of the filter matrix: the larger the matrix, the smaller the cutoff frequency is.
At the image borders the pixels’ gray values are mirrored. In case of over- or underflow the gray values are clipped
(255 and 0, resp.).
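The computation described above (local mean minus original gray value, offset 128, mirrored borders, clipping) can be sketched in Python; the exact mirroring and rounding conventions are assumptions:

```python
def highpass(img, mask_h, mask_w):
    """High-pass filter: local mean minus original gray value,
    plus 128, clipped to [0, 255].  Border pixels are mirrored
    (reflection about the border pixel is assumed here)."""
    h, w = len(img), len(img[0])
    rh, rw = mask_h // 2, mask_w // 2

    def at(r, c):  # mirrored border access (assumed convention)
        r = -r if r < 0 else (2 * (h - 1) - r if r >= h else r)
        c = -c if c < 0 else (2 * (w - 1) - c if c >= w else c)
        return img[r][c]

    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            mean = sum(at(r + dr, c + dc)
                       for dr in range(-rh, rh + 1)
                       for dc in range(-rw, rw + 1)) / (mask_h * mask_w)
            out[r][c] = min(255, max(0, int(round(mean - img[r][c] + 128))))
    return out
```

On a flat image the result is 128 everywhere, i.e., a gray value of 128 marks a zero crossing of the high-pass response.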
Attention
If even values are passed for Height or Width, the operator uses the next larger odd value instead. Thus, the
center of the filter mask is always uniquely determined.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Input image.
. Highpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
High-pass-filtered result image.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the filter mask.
Default Value : 9
Suggested values : Width ∈ {3, 5, 7, 9, 11, 13, 17, 21, 29, 41, 51, 73, 101}
Typical range of values : 3 ≤ Width ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Width ≥ 3) ∧ odd(Width)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the filter mask.
Default Value : 9
Suggested values : Height ∈ {3, 5, 7, 9, 11, 13, 17, 21, 29, 41, 51, 73, 101}
Typical range of values : 3 ≤ Height ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Height ≥ 3) ∧ odd(Height)
Example (Syntax: C)
highpass_image(Image,&Highpass,7,5);
threshold(Highpass,&Region,60.0,255.0);
skeleton(Region,&Skeleton);
Result
highpass_image returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
highpass_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, skeleton
Alternatives
mean_image, sub_image, convol_image, bandpass_image
See also
dyn_threshold
Module
Foundation
read_image(Image,’fabrik’)
info_edges(’lanser2’,’edge’,0.5,Size,Coeffs)
edges_image(Image,Amp,Dir,’lanser2’,0.5,’none’,-1,-1)
hysteresis_threshold(Amp,Margin,20,30,30).
Result
info_edges returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
info_edges is reentrant and processed without parallelization.
Possible Successors
edges_image, threshold, skeleton
See also
edges_image
Module
Foundation
−3 −3 5
−3 0 5
−3 −3 5
−3 5 5
−3 0 5
−3 −3 −3
5 5 5
−3 0 −3
−3 −3 −3
5 5 −3
5 0 −3
−3 −3 −3
5 −3 −3
5 0 −3
5 −3 −3
−3 −3 −3
5 0 −3
5 5 −3
−3 −3 −3
−3 0 −3
5 5 5
−3 −3 −3
−3 0 5
−3 5 5
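The eight masks above are rotations of one another, and the amplitude image contains the maximum response over all eight. A Python sketch (border pixels are skipped here; the border treatment of the HALCON implementation is not shown in this excerpt):

```python
def kirsch_masks():
    """Generate the eight 3x3 Kirsch masks by rotating the
    coefficient ring of the first mask shown above."""
    ring_pos = [(0, 0), (0, 1), (0, 2), (1, 2),
                (2, 2), (2, 1), (2, 0), (1, 0)]   # clockwise positions
    ring = [-3, -3, 5, 5, 5, -3, -3, -3]          # first mask's ring
    masks = []
    for shift in range(8):
        m = [[0] * 3 for _ in range(3)]           # center stays 0
        for k, (r, c) in enumerate(ring_pos):
            m[r][c] = ring[(k - shift) % 8]
        masks.append(m)
    return masks

def kirsch_amp(img):
    """Maximum response of all eight masks at each inner pixel."""
    masks = kirsch_masks()
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = max(sum(m[i][j] * img[r - 1 + i][c - 1 + j]
                                for i in range(3) for j in range(3))
                            for m in masks)
    return out
```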
read_image(Image,’fabrik’)
kirsch_amp(Image,Kirsch_amp)
threshold(Kirsch_amp,Edges,128,255).
Result
kirsch_amp always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
kirsch_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, frei_amp, prewitt_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
−3 −3 5
−3 0 5
−3 −3 5
−3 5 5
−3 0 5
−3 −3 −3
5 5 5
−3 0 −3
−3 −3 −3
5 5 −3
5 0 −3
−3 −3 −3
5 −3 −3
5 0 −3
5 −3 −3
−3 −3 −3
5 0 −3
5 5 −3
−3 −3 −3
−3 0 −3
5 5 5
−3 −3 −3
−3 0 5
−3 5 5
The result image contains the maximum response of all masks. The edge directions are returned in
ImageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : direction
Edge direction image.
Example
read_image(Image,’fabrik’)
kirsch_dir(Image,Kirsch_dirA,Kirsch_dirD)
threshold(Kirsch_dirA,Res,128,255).
Result
kirsch_dir always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
kirsch_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, prewitt_dir, frei_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
’n_8’
1 1 1
1 −8 1
1 1 1
’n_8_isotropic’
10 22 10
22 −128 22
10 22 10
For the three filter masks, the following normalizations of the resulting gray values are applied (i.e., the result is
divided by the given divisor): ’n_4’ normalization by 1, ’n_8’ normalization by 2, and ’n_8_isotropic’
normalization by 32.
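For example, the ’n_8’ mask with its divisor of 2 can be applied at a single pixel like this (Python sketch; the rounding behavior for integer images is not modeled):

```python
def laplace_n8(img, r, c):
    """Apply the 'n_8' Laplace mask at inner pixel (r, c) and
    normalize by the divisor 2 given above."""
    mask = [[1,  1, 1],
            [1, -8, 1],
            [1,  1, 1]]
    s = sum(mask[i][j] * img[r - 1 + i][c - 1 + j]
            for i in range(3) for j in range(3))
    return s / 2   # divisor for 'n_8'; rounding not modeled here
```

The response is zero on flat image regions and becomes strongly negative at isolated bright pixels, which is what the subsequent zero-crossing extraction relies on.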
For a Laplace operator with size 3 × 3, the corresponding filter is applied directly, while for larger filter
sizes the input image is first smoothed using a Gaussian filter (see gauss_image) or a binomial
filter (see binomial_filter) of size MaskSize-2. The Gaussian filter is selected for the above values of
ResultType. Here, MaskSize = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending
’_binomial’ to the above values of ResultType. Here, MaskSize can be selected between 5 and 39. Fur-
thermore, it is possible to select different amounts of smoothing for the column and row direction by passing two
values in MaskSize. Here, the first value of MaskSize corresponds to the mask width (smoothing in the column
direction), while the second value corresponds to the mask height (smoothing in the row direction) of the binomial
filter. Therefore,
laplace(O:R:’absolute’,MaskSize,N:)
is equivalent to
gauss_image(O:G:MaskSize-2:)
laplace(G:R:’absolute’,3,N:)
and
laplace(O:R:’absolute_binomial’,MaskSize,N:)
is equivalent to
binomial_filter(O:B:MaskSize-2,MaskSize-2:)
laplace(B:R:’absolute’,3,N:)
laplace either returns the absolute value of the Laplace filtered image (ResultType = ’absolute’) in a byte
or uint2 image or the signed result (ResultType = ’signed’ or ’signed_clipped’). Here, the output image type
has the same number of bytes per pixel as the input image (i.e., int1 or int2) for ’signed_clipped’, while the output
image has the next larger number of bytes per pixel (i.e., int2 or int4) for ’signed’.
Example (Syntax: C)
read_image(&Image,"mreut");
laplace(Image,&Laplace,"signed",3,"n_8_isotropic");
zero_crossing(Laplace,&ZeroCrossings);
Result
laplace returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
laplace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold, threshold
Alternatives
diff_of_gauss, laplace_of_gauss, derivate_gauss
See also
highpass_image, edges_image
Module
Foundation
∆g(x, y) = ∂²g(x, y)/∂x² + ∂²g(x, y)/∂y²
The derivatives in laplace_of_gauss are calculated by appropriate derivatives of the Gaussian, resulting in
the following formula for the convolution mask:
∆Gσ (x, y) = 1/(2πσ⁴) · ((x² + y²)/(2σ²) − 1) · exp(−(x² + y²)/(2σ²))
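Sampling this formula on an integer grid yields a discrete convolution mask; the following Python sketch illustrates that (HALCON's actual mask size and normalization are not shown here):

```python
import math

def log_kernel(sigma, size):
    """Sample the Laplacian-of-Gaussian formula above on a
    size x size grid centered at the origin."""
    h = size // 2
    kernel = []
    for y in range(-h, h + 1):
        row = []
        for x in range(-h, h + 1):
            r2 = x * x + y * y
            v = (1.0 / (2.0 * math.pi * sigma ** 4)
                 * (r2 / (2.0 * sigma ** 2) - 1.0)
                 * math.exp(-r2 / (2.0 * sigma ** 2)))
            row.append(v)
        kernel.append(row)
    return kernel
```

The center value is negative and the response changes sign away from the center; the zero crossings of the filtered image are the edge candidates extracted by zero_crossing.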
Parameter
. Image (input_object) . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. ImageLaplace (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : int2
Laplace filtered image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Smoothing parameter of the Gaussian.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0}
Typical range of values : 0.7 ≤ Sigma ≤ 5.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (Sigma > 0.7) ∧ (Sigma ≤ 25.0)
Example (Syntax: C)
read_image(&Image,"mreut");
laplace_of_gauss(Image,&Laplace,2.0);
zero_crossing(Laplace,&ZeroCrossings);
Parallelization Information
laplace_of_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, diff_of_gauss, derivate_gauss
See also
derivate_gauss
Module
Foundation
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
Example
read_image(Image,’fabrik’)
prewitt_amp(Image,Prewitt)
threshold(Prewitt,Edges,128,255).
Result
prewitt_amp always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
prewitt_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
threshold, gray_skeleton, nonmax_suppression_amp, close_edges,
close_edges_length
Alternatives
sobel_amp, kirsch_amp, frei_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B. The edge directions are returned in
ImageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:
edge direction r
from bottom to top 0/+ 0
from lower right to upper left +/− ]0, 90[
from right to left +/0 90
from upper right to lower left +/+ ]90, 180[
from top to bottom 0/− 180
from upper left to lower right −/+ ]180, 270[
from left to right −/0 270
from lower left to upper right −/− ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : direction
Edge direction image.
Example
read_image(Image,’fabrik’)
prewitt_dir(Image,PrewittA,PrewittD)
threshold(PrewittA,Edges,128,255).
Result
prewitt_dir always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
prewitt_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, frei_dir, kirsch_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
A B
C D
If an overflow occurs the result is clipped. The result of the operator is stored at the pixel with the coordinates of
“D”.
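The filter formulas themselves are not reproduced in this excerpt. As an illustration, a ’gradient_sum’-style response could follow the textbook Roberts cross; the formula |A − D| + |B − C| in this Python sketch is the classic definition and is an assumption, not quoted from this manual:

```python
def roberts_gradient_sum(img, r, c):
    """Classic Roberts cross over the 2x2 neighborhood
        A B
        C D
    anchored at (r, c); assumed formula |A - D| + |B - C|.
    Per the text above, the result belongs at pixel D."""
    a, b = img[r][c], img[r][c + 1]
    c_, d = img[r + 1][c], img[r + 1][c + 1]
    return abs(a - d) + abs(b - c_)
```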
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageRoberts (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Roberts-filtered result images.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Filter type.
Default Value : ’gradient_sum’
List of values : FilterType ∈ {’roberts_max’, ’gradient_max’, ’gradient_sum’}
Example
read_image(Image,’fabrik’)
roberts(Image,Roberts,’roberts_max’)
threshold(Roberts,Margin,128,255).
Result
roberts returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
roberts is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image
Possible Successors
threshold, skeleton
Alternatives
edges_image, sobel_amp, frei_amp, kirsch_amp, prewitt_amp
See also
laplace, highpass_image, bandpass_image
Module
Foundation
−1 0 1
−2 0 2
−1 0 1
2 1 0
1 0 −1
0 −1 −2
0 1 2
−1 0 1
−2 −1 0
1 2 1
0 0 0
−1 −2 −1
read_image(Image,’fabrik’)
robinson_amp(Image,Robinson_amp)
threshold(Robinson_amp,Edges,128,255).
Result
robinson_amp always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
robinson_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, frei_amp, kirsch_amp, prewitt_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
−1 0 1
−2 0 2
−1 0 1
2 1 0
1 0 −1
0 −1 −2
0 1 2
−1 0 1
−2 −1 0
1 2 1
0 0 0
−1 −2 −1
The result image contains the maximum response of all masks. The edge directions are returned in
ImageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Example
read_image(Image,’fabrik’)
robinson_dir(Image,Robinson_dirA,Robinson_dirD)
threshold(Robinson_dirA,Res,128,255).
Result
robinson_dir always returns 2 (H_MSG_TRUE). If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
robinson_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, kirsch_dir, prewitt_dir, frei_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B for one particular pixel.)
’sum_sqrt’ √(a² + b²)/4
’sum_abs’ (|a| + |b|)/4
’thin_sum_abs’ (thin(|a|) + thin(|b|))/4
’thin_max_abs’ max(thin(|a|), thin(|b|))/4
’x’ b/4
’y’ a/4
Here, thin(x) is equal to x for a vertical maximum (mask A) and a horizontal maximum (mask B), respectively,
and 0 otherwise. Thus, for ’thin_sum_abs’ and ’thin_max_abs’ the gradient image is thinned. For the filter types ’x’
and ’y’, the output image is of type int1 if the input image is of type byte, and of type int2 otherwise. For a Sobel operator
with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter sizes the input image
is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter (see binomial_filter) of
size Size-2. The Gaussian filter is selected for the above values of FilterType. Here, Size = 5, 7, 9, 11, or
13 must be used. The binomial filter is selected by appending ’_binomial’ to the above values of FilterType.
Here, Size can be selected between 5 and 39. Furthermore, it is possible to select different amounts of smoothing
in the column and row direction by passing two values in Size. Here, the first value of Size corresponds
to the mask width (smoothing in the column direction), while the second value corresponds to the mask height
(smoothing in the row direction) of the binomial filter. The binomial filter can only be used for images of type
byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge amplitudes are multiplied by a
factor of 2 to prevent information loss. Therefore,
sobel_amp(I,E,FilterType,S)
is equivalent to
scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_amp(G,E,FilterType,3)
or to
scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_amp(G,E,FilterType,3).
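For the default 3 × 3 case, the ’sum_abs’ response can be sketched directly from the masks A and B given above (Python; the integer division by 4 is an assumption about the rounding):

```python
def sobel_sum_abs(img, r, c):
    """3x3 Sobel 'sum_abs' response (|a| + |b|)/4 at inner
    pixel (r, c), with the masks A and B given above."""
    A = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # horizontal edges
    B = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]   # vertical edges
    a = sum(A[i][j] * img[r - 1 + i][c - 1 + j]
            for i in range(3) for j in range(3))
    b = sum(B[i][j] * img[r - 1 + i][c - 1 + j]
            for i in range(3) for j in range(3))
    return (abs(a) + abs(b)) // 4              # rounding mode assumed
```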
For sobel_amp, special optimizations are implemented for FilterType = ’sum_abs’ that use SIMD
technology. The actual application of these special optimizations is controlled by the system parameter ’mmx_enable’
(see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal
calculations are performed using SIMD technology. Note that SIMD technology performs best on large, compact
input regions. Depending on the input region and the capabilities of the hardware the execution of sobel_amp
might even take significantly more time with SIMD technology than without.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. EdgeAmplitude (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : int1 / int2 / uint2
Edge amplitude (gradient magnitude) image.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Filter type.
Default Value : ’sum_abs’
List of values : FilterType ∈ {’sum_abs’, ’thin_sum_abs’, ’thin_max_abs’, ’sum_sqrt’, ’x’, ’y’,
’sum_abs_binomial’, ’thin_sum_abs_binomial’, ’thin_max_abs_binomial’, ’sum_sqrt_binomial’,
’x_binomial’, ’y_binomial’}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example
read_image(Image,’fabrik’)
sobel_amp(Image,Amp,’sum_abs’,3)
threshold(Amp,Edg,128,255).
Result
sobel_amp returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
sobel_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, mean_image, anisotropic_diffusion, sigma_image
Possible Successors
threshold, nonmax_suppression_amp, gray_skeleton
Alternatives
frei_amp, roberts, kirsch_amp, prewitt_amp, robinson_amp
See also
laplace, highpass_image, bandpass_image
Module
Foundation
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B for one particular pixel.)
’sum_sqrt’ √(a² + b²)/4
’sum_abs’ (|a| + |b|)/4
For a Sobel operator with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter
sizes the input image is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter (see
binomial_filter) of size Size-2. The Gaussian filter is selected for the above values of FilterType.
Here, Size = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending ’_binomial’ to the
above values of FilterType. Here, Size can be selected between 5 and 39. Furthermore, it is possible to
select different amounts of smoothing in the column and row direction by passing two values in Size. Here, the
first value of Size corresponds to the mask width (smoothing in the column direction), while the second value
corresponds to the mask height (smoothing in the row direction) of the binomial filter. The binomial filter can only
be used for images of type byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge
amplitudes are multiplied by a factor of 2 to prevent information loss. Therefore,
sobel_dir(I:Amp,Dir:FilterType,S:)
is equivalent to
scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_dir(G,Amp,Dir,FilterType,3:)
or to
scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_dir(G,Amp,Dir,FilterType,3:).
The edge directions are returned in EdgeDirection, and are stored in 2-degree steps, i.e., an edge direction of x
degrees with respect to the horizontal axis is stored as x/2 in the edge direction image. Furthermore, the direction
of the change of intensity is taken into account. Let [Ex , Ey ] denote the image gradient. Then the following edge
directions are returned as r/2:
edge direction r
from bottom to top 0/+ 0
from lower right to upper left +/− ]0, 90[
from right to left +/0 90
from upper right to lower left +/+ ]90, 180[
from top to bottom 0/− 180
from upper left to lower right −/+ ]180, 270[
from left to right −/0 270
from lower left to upper right −/− ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. EdgeAmplitude (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. EdgeDirection (output_object) . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : direction
Edge direction image.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Filter type.
Default Value : ’sum_abs’
List of values : FilterType ∈ {’sum_abs’, ’sum_sqrt’, ’sum_abs_binomial’, ’sum_sqrt_binomial’}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example
read_image(Image,’fabrik’)
sobel_dir(Image,Amp,Dir,’sum_abs’,3)
threshold(Amp,Edg,128,255).
Result
sobel_dir returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
sobel_dir is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
binomial_filter, gauss_image, mean_image, anisotropic_diffusion, sigma_image
Possible Successors
nonmax_suppression_dir, hysteresis_threshold, threshold
Alternatives
edges_image, frei_dir, kirsch_dir, prewitt_dir, robinson_dir
See also
roberts, laplace, highpass_image, bandpass_image
Module
Foundation
5.5 Enhancement
adjust_mosaic_images ( Images : CorrectedImages : From, To,
ReferenceImage, HomMatrices2D, EstimationMethod, EstimateParameters,
OECFModel : )
many parameters that need to be determined. Instead, only simpler types of response functions can be estimated.
Currently, only so-called Laguerre-functions are available.
The response of a Laguerre-type OECF is determined by only one parameter called Phi. In a first step, the whole
gray value spectrum (in the case of 8-bit images the values 0 to 255) is converted to floating point numbers in the
interval [0, 1]. Then, the OECF backprojection is calculated based on this and the resulting gray values are once
again converted to the original interval.
The inverse transform of the gray values back to linear values based on a Laguerre-type OECF is described by the
following equation:
I_l = I_nl + (2/π) · arctan( (Phi · sin(π · I_nl)) / (1 − Phi · cos(π · I_nl)) )
with I_l the linear gray value and I_nl the (nonlinear) gray value.
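The inverse transform above can be sketched directly in Python (illustrative only; the helper is not a HALCON operator):

```python
import math

def laguerre_inverse_oecf(gray, phi, max_val=255.0):
    """Map a nonlinear gray value back to a linear one using the
    Laguerre-type inverse OECF described above: the value is scaled
    to [0, 1], transformed, and scaled back to the original range."""
    i_nl = gray / max_val
    i_l = i_nl + (2.0 / math.pi) * math.atan(
        phi * math.sin(math.pi * i_nl) / (1.0 - phi * math.cos(math.pi * i_nl)))
    return i_l * max_val

# Phi = 0 gives the identity response: gray values are unchanged
print(laguerre_inverse_oecf(128, 0.0))  # -> 128.0
```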
The parameter OECFModel is only used if the calibrated model has been chosen. Otherwise, any input for
OECFModel will be ignored.
The parameter EstimateParameters can also be used to influence the performance and memory consumption
of the operator. With ’no_cache’ the internal caching mechanism can be disabled. This switch only has an
influence if EstimationMethod is set to ’gold_standard’. Otherwise this switch will be ignored. When disabling
the internal caching, the operator uses far less memory, but has to recalculate the corresponding gray value pairs
in each iteration of the minimization algorithm. Therefore, disabling caching is only advisable if all physical
memory is used up at some point of the calculation and the operating system starts using swap space.
A second option to influence the performance is using subsampling. When setting EstimateParameters to
’subsampling_2’, images are internally zoomed down by a factor of 2. Despite the suggested value list, not only
factors of 2 and 4 are available: any integer factor may be specified by appending it to ’subsampling_’ in
EstimateParameters. With this, the amount of image data is tremendously reduced, which leads to a much
faster computation of the internal minimization. In fact, using moderate subsampling might even lead to better
results since it also decreases the influence of slightly misaligned pixels. Although subsampling also influences
the minimization if EstimationMethod is set to ’standard’, it is mostly useful for ’gold_standard’.
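The naming convention for the subsampling switch can be sketched as follows (the parsing helper is hypothetical, not a HALCON function):

```python
def parse_subsampling(parameter):
    """Extract the integer subsampling factor from an EstimateParameters
    entry of the form 'subsampling_<n>'; returns 1 (no subsampling)
    for other entries."""
    prefix = 'subsampling_'
    if parameter.startswith(prefix):
        return int(parameter[len(prefix):])
    return 1

print(parse_subsampling('subsampling_2'))  # -> 2
print(parse_subsampling('subsampling_7'))  # any integer factor is allowed -> 7
print(parse_subsampling('no_cache'))       # -> 1
```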
Some more general remarks on using adjust_mosaic_images in applications:
• Estimation of vignetting will only work well if significant vignetting is visible in the images. Otherwise, the
operator may lead to erratic results.
• Estimation of the response is rather slow because the problem is quite complex. Therefore, it is advisable not
to determine the response in time critical applications. Apart from this, the response can only be determined
correctly if there are relatively large brightness differences between the images.
• It is not possible to correct saturation. If there are saturated areas in an image, they will remain saturated.
• adjust_mosaic_images can only be used to correct different brightness in images, which is caused by different
exposure (shutter time, aperture) or different light intensity. It cannot be used to correct brightness differences
based on inhomogeneous illumination within each image.
Parameter
Result
If the parameters are valid, the operator adjust_mosaic_images returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
Parallelization Information
adjust_mosaic_images is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Possible Successors
gen_spherical_mosaic
References
David Hasler, Sabine Süsstrunk: Mapping colour in image stitching applications. Journal of Visual Communication and Image Representation, 15(1):65-90, 2004.
Module
Foundation
u_t = div(G(u) ∇u)
formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in Image, this is an
enhancement of the mean curvature flow or intrinsic heat equation
u_t = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|
on the gray value function u defined by the input image Image at a time t0 = 0. The smoothing operator
mean_curvature_flow is a direct application of the mean curvature flow equation. The discrete diffusion
equation is solved in Iterations time steps of length Theta, so that the output image ImageCED contains
the gray value function at the time Iterations · Theta.
To detect the edge direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
While the matrix G is given by
G_MCF(u) = I − (1/|∇u|^2) ∇u(∇u)^T
in the case of the operator mean_curvature_flow, where I denotes the unit matrix, G_MCF is again smoothed
componentwise by a Gaussian filter of standard deviation Rho for coherence_enhancing_diff. Then, the
final coefficient matrix
is constructed from the eigenvalues λ1, λ2 and eigenvectors w1, w2 of the resulting intermediate matrix, where the
functions
g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)
Factor serves as measurement of the increase in contrast. The division frequency is determined via the size of
the filter matrix: the larger the matrix, the lower the division frequency.
As an edge treatment the gray values are mirrored at the edges of the image. Overflow and/or underflow of gray
values is clipped.
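The contrast emphasis with clipping can be sketched per pixel (Python, illustrative only; the update rule res := round((orig − mean) · Factor) + orig is the usual form of this kind of high-pass emphasis and is assumed here, since the formula itself is not part of this excerpt):

```python
def emphasize_pixel(orig, mean, factor, max_val=255):
    """Contrast emphasis of a single pixel: the difference between the
    original gray value and the local mean (a low-pass result) is
    amplified by 'factor' and added back; overflow/underflow is clipped,
    matching the gray value clipping described above."""
    res = round((orig - mean) * factor) + orig
    return max(0, min(max_val, res))

# a pixel brighter than its neighborhood becomes brighter still
print(emphasize_pixel(140, 120, 2.0))  # -> 180
# clipping prevents overflow
print(emphasize_pixel(250, 120, 2.0))  # -> 255
```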
Parameter
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
draw_region(Region,WindowHandle)
reduce_domain(Image,Region,Mask)
emphasize(Mask,Sharp,7,7,2.0)
disp_image(Sharp,WindowHandle).
Result
If the parameter values are correct the operator emphasize returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary an exception handling is raised.
Parallelization Information
emphasize is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_image, sub_image, laplace, add_image
See also
mean_image, highpass_image
Module
Foundation
h(x) describes the relative frequency of the occurrence of the gray value x. For uint2 images, the only difference
is that the value 255 is replaced with a different maximum value. The maximum value is computed from the
number of significant bits stored with the input image, provided that this value is set. If not, the value of the system
parameter ’int2_bits’ is used (see set_system), if this value is set (i.e., different from -1). If none of the two
values is set, the number of significant bits is set to 16.
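The fallback chain for the maximum gray value of uint2 images can be sketched as (Python, illustrative only):

```python
def uint2_max_gray(significant_bits=None, int2_bits=-1):
    """Choose the maximum gray value for a uint2 image following the
    fallback chain described above: the significant bits stored with
    the image, then the 'int2_bits' system parameter (if set, i.e.
    different from -1), and finally 16 bits."""
    if significant_bits is not None:
        bits = significant_bits
    elif int2_bits != -1:
        bits = int2_bits
    else:
        bits = 16
    return (1 << bits) - 1

print(uint2_max_gray())          # no value set anywhere -> 65535
print(uint2_max_gray(12))        # e.g. a 12-bit camera image -> 4095
print(uint2_max_gray(None, 10))  # fallback to 'int2_bits' -> 1023
```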
This transformation linearizes the cumulative histogram. Maxima in the original histogram are “spread out” and
thus the contrast in image regions with these frequently occurring gray values is increased. Supposedly homogeneous
regions receive more easily visible structures. On the other hand, of course, the noise in the image increases
correspondingly. Minima in the original histogram are dually “compressed”. The transformed histogram contains
gaps, but the remaining gray values used occur at approximately the same frequency (“histogram equalization”).
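A minimal sketch of histogram equalization via the cumulative histogram (Python, illustrative only; HALCON's exact rounding behavior may differ):

```python
def equalize_histogram(pixels, max_val=255):
    """Histogram equalization on a list of byte gray values: build the
    cumulative histogram and use it as the gray value transform, so the
    cumulative histogram of the result is approximately linear."""
    n = len(pixels)
    hist = [0] * (max_val + 1)
    for p in pixels:
        hist[p] += 1
    cum, cdf = 0, [0] * (max_val + 1)
    for g in range(max_val + 1):
        cum += hist[g]
        cdf[g] = cum
    return [round(cdf[p] * max_val / n) for p in pixels]

# the frequent value 10 is pushed up, the sparse bright values spread out
result = equalize_histogram([10, 10, 10, 10, 200, 220, 240, 255])
# -> [128, 128, 128, 128, 159, 191, 223, 255]
```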
Attention
The operator equ_histo_image primarily serves for optical processing of images for a human viewer. For
example, the (local) contrast spreading can lead to a detection of fictitious edges.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Image to be enhanced.
. ImageEquHisto (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Image with linearized gray values.
Parallelization Information
equ_histo_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
disp_image
Alternatives
scale_image, scale_image_max, illuminate
See also
scale_image
References
R.C. Gonzales, P. Wintz: “Digital Image Processing”; Second edition; Addison Wesley; 1987.
Module
Foundation
Illuminate image.
The operator illuminate enhances contrast. Very dark parts of the image are “illuminated” more strongly,
very light ones are “darkened”. Let orig be the original gray value and mean the corresponding gray value of the
low pass filtered image determined via the operator mean_image with filter size MaskHeight × MaskWidth.
For byte images val equals 127, for int2 and uint2 images val equals the median value. The resulting gray
value is new:
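The formula itself falls on a page break in this excerpt; the sketch below assumes the usual rule new := round((val − mean) · Factor) + orig with clipping (Python, illustrative only, not the HALCON implementation):

```python
def illuminate_pixel(orig, mean, factor, val=127, max_val=255):
    """Illuminate a single byte pixel: where the local mean is below
    'val' the pixel is brightened, where it is above 'val' the pixel
    is darkened. Assumed update rule (see the hedge above):
    new := round((val - mean) * factor) + orig, clipped to [0, max_val]."""
    new = round((val - mean) * factor) + orig
    return max(0, min(max_val, new))

# a pixel in a dark neighborhood is brightened
print(illuminate_pixel(60, 40, 0.55))    # -> 108
# a pixel in a bright neighborhood is darkened
print(illuminate_pixel(200, 220, 0.55))  # -> 149
```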
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
illuminate(Image,Better,40,40,0.55)
disp_image(Better,WindowHandle).
Result
If the parameter values are correct the operator illuminate returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary an exception handling is raised.
Parallelization Information
illuminate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
scale_image_max, equ_histo_image, mean_image, sub_image
See also
emphasize, gray_histo
Module
Foundation
u_t = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|
to the gray value function u defined by the input image Image at a time t0 = 0. The discretized equation is solved
in Iterations time steps of length Theta, so that the output image contains the gray value function at the time
Iterations · Theta.
The mean curvature flow causes a smoothing of Image in the direction of the edges in the image, i.e. along the
contour lines of u, while perpendicular to the edge direction no smoothing is performed and hence the boundaries
of image objects are not smoothed. To detect the edge direction more robustly, in particular on noisy input data,
an additional isotropic smoothing step can precede the computation of the gray value gradients. The parameter
Sigma determines the magnitude of the smoothing by means of the standard deviation of a corresponding Gaussian
convolution kernel, as used in the operator isotropic_diffusion for isotropic image smoothing.
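The iterative time stepping can be illustrated in 1-D with an explicit scheme for the isotropic heat equation (a stand-in for the 2-D curvature term, illustrative only; the stability of such explicit schemes is what motivates restrictions like Theta ≤ 0.5):

```python
def explicit_diffusion_1d(u, theta, iterations):
    """1-D analogue of the iterative scheme: the discretized diffusion
    equation is advanced in 'iterations' explicit time steps of length
    'theta', i.e. up to time iterations * theta. The isotropic heat
    equation u_t = u_xx is used here as a simple stand-in; the endpoints
    are kept fixed."""
    u = list(u)
    for _ in range(iterations):
        u = [u[0]] + [
            u[i] + theta * (u[i - 1] - 2.0 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

# a spike is progressively flattened as the total diffusion time grows
smoothed = explicit_diffusion_1d([0, 0, 10, 0, 0], theta=0.25, iterations=4)
```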
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. ImageMCF (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Smoothing parameter for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : (0 < Theta) ≤ 0.5
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1
Parallelization Information
mean_curvature_flow is reentrant and automatically parallelized (on tuple level).
References
M. G. Crandall, P. Lions; “Convergent Difference Schemes for Nonlinear Parabolic Equations and Mean Curvature
Motion”; Numer. Math. 75 pp. 17-41; 1996.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
Module
Foundation
u_t = s |∇u|
on the function u defined by the gray values in Image at a time t0 = 0. The discretized equation is solved in
Iterations time steps of length Theta, so that the output image SharpenedImage contains the gray value
function at the time Iterations · Theta.
The decision between dilation and erosion is made using the sign function s ∈ {−1, 0, +1} on a conventional edge
detector. The detector of Canny
s = −sgn( D²u(∇u/|∇u|, ∇u/|∇u|) )
is available with Mode = ’canny’, and the detector of Marr/Hildreth (the Laplace operator)
s = −sgn(∆u)
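In 1-D, the effect of the sign switch can be sketched with a morphological discretization (illustrative only, not HALCON's implementation): where the Laplacian is negative the gray value is replaced by a local maximum (dilation), where it is positive by a local minimum (erosion), which sharpens blurred edges toward a step:

```python
def shock_step_1d(u):
    """One 1-D shock filter step: s = -sgn(laplacian) selects between
    dilation (local max, applied where the Laplacian is negative) and
    erosion (local min, applied where it is positive)."""
    out = list(u)
    for i in range(1, len(u) - 1):
        lap = u[i - 1] - 2.0 * u[i] + u[i + 1]
        window = (u[i - 1], u[i], u[i + 1])
        if lap < 0:
            out[i] = max(window)   # dilation on the bright side of the edge
        elif lap > 0:
            out[i] = min(window)   # erosion on the dark side of the edge
    return out

# a blurred edge is sharpened towards a step
print(shock_step_1d([0.0, 0.1, 0.5, 0.9, 1.0]))
# -> [0.0, 0.0, 0.5, 1.0, 1.0]
```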
Parallelization Information
shock_filter is reentrant and automatically parallelized (on tuple level).
References
F. Guichard, J. Morel; “A Note on Two Classical Shock Filters and Their Asymptotics”; Michael Kerckhove (Ed.):
Scale-Space and Morphology in Computer Vision, LNCS 2106, pp. 75-84; Springer, New York; 2001.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
5.6 FFT
convol_fft ( ImageFFT, ImageFilter : ImageConvol : : )
gen_highpass(Highpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Highpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)
Result
convol_fft returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
convol_fft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_image, fft_generic, rft_generic, gen_highpass, gen_lowpass, gen_bandpass,
gen_bandfilter
Possible Successors
power_byte, power_real, power_ln, fft_image_inv, fft_generic, rft_generic
Alternatives
convol_gabor
See also
gen_gabor, gen_highpass, gen_lowpass, gen_bandpass, convol_gabor, fft_image_inv
Module
Foundation
gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)
Result
convol_gabor returns 2 (H_MSG_TRUE) if all images are of correct type. If the input is empty the behavior
can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling
is raised.
Parallelization Information
convol_gabor is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_image, fft_generic, gen_gabor
Possible Successors
power_byte, power_real, power_ln, fft_image_inv, fft_generic
Alternatives
convol_fft
See also
convol_image
Module
Foundation
must contain only one single image. In this case, the correlation is performed for each image of ImageFFT1 with
ImageFFT2.
Attention
The filtering is always performed on the entire image, i.e., the domain of the image is ignored.
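Correlation in the frequency domain is conventionally computed by multiplying one spectrum pointwise with the complex conjugate of the other; a 1-D sketch with a naive DFT (illustrative only; which argument correlation_fft conjugates is not stated in this excerpt):

```python
import cmath

def dft(x, sign):
    """Naive 1-D DFT with selectable exponent sign."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * m * k / n)
                for k in range(n)) for m in range(n)]

def correlation_via_fft(a, b):
    """Circular correlation computed in the frequency domain: multiply
    the spectrum of 'a' pointwise with the complex conjugate of the
    spectrum of 'b', then transform back (normalizing by n)."""
    n = len(a)
    fa, fb = dft(a, -1), dft(b, -1)
    prod = [x * y.conjugate() for x, y in zip(fa, fb)]
    return [v.real / n for v in dft(prod, +1)]

# correlating a signal with a shifted copy of itself peaks at the shift
signal = [0.0, 1.0, 0.0, 0.0]
shifted = [0.0, 0.0, 1.0, 0.0]
corr = correlation_via_fft(shifted, signal)  # peak at index 1
```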
Parameter
. ImageFFT1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Fourier-transformed input image 1.
. ImageFFT2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Fourier-transformed input image 2.
Number of elements : (ImageFFT2 = ImageFFT1) ∨ (ImageFFT2 = 1)
. ImageCorrelation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Correlation of the input images in the frequency domain.
Example
Result
correlation_fft returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
correlation_fft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_generic, fft_image, rft_generic
Possible Successors
fft_generic, fft_image_inv, rft_generic
Module
Foundation
Often the calculation of the energy is preceded by the convolution of an image with a Gabor filter and the Hilbert
transform of the Gabor filter (see convol_gabor). In this case, the first channel of the image passed to
energy_gabor is the Gabor-filtered image, transformed back into the spatial domain (see fft_image_inv),
and the second channel the result of the convolution with the Hilbert transform, also transformed back into the
spatial domain. The local energy is a measure for the local contrast of structures (e.g., edges and lines) in the
image.
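The excerpt does not give the exact energy formula; a common choice is the pointwise sum of squares of the quadrature pair, which is phase-invariant and therefore measures local contrast independently of whether the structure is an edge or a line (Python, illustrative only):

```python
import math

def local_energy(gabor, hilbert):
    """Pointwise local energy of a quadrature filter pair, computed
    here as the sum of squares of the two responses (the exact HALCON
    formula is not given in this excerpt)."""
    return [g * g + h * h for g, h in zip(gabor, hilbert)]

# for an ideal quadrature pair (cosine/sine responses of amplitude 2),
# the energy depends only on the amplitude, not on the local phase
phase = [i * 0.1 for i in range(100)]
gabor = [2.0 * math.cos(p) for p in phase]    # even (Gabor) response
hilbert = [2.0 * math.sin(p) for p in phase]  # odd (Hilbert) response
energy = local_energy(gabor, hilbert)         # constant 4.0 everywhere
```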
Parameter
. ImageGabor (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
1st channel of input image (usually: Gabor image).
. ImageHilbert (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
2nd channel of input image (usually: Hilbert image).
. Energy (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real
Image containing the local energy.
Example (Syntax: C)
fft_image(Image,&FFT);
gen_gabor(&Filter,1.4,0.4,1.0,1.5,512);
convol_gabor(FFT,Filter,&Gabor,&Hilbert);
fft_image_inv(Gabor,&GaborInv);
fft_image_inv(Hilbert,&HilbertInv);
energy_gabor(GaborInv,HilbertInv,&Energy);
Result
energy_gabor returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
energy_gabor is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_gabor, convol_gabor, fft_image_inv
Module
Foundation
F(m, n) = (1/c) · Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} e^{s·2πi(km/M + ln/N)} · f(k, l)
Opinions vary on whether the sign s in the exponent should be set to 1 or -1 for the forward transform, i.e., the
transform for going to the frequency domain. There is also disagreement on the magnitude of the normalizing
factor c. This is sometimes set to 1 for the forward transform, sometimes to MN, and sometimes (in the case of
the unitary FFT) to √(MN). Especially in image processing applications the DC term is shifted to the center of the
image.
fft_generic allows these choices to be selected individually. The parameter Direction selects the
logical direction of the FFT. (This parameter is not redundant: it is needed to determine how to shift the image if
the DC term should rest in the center of the image.) Possible values are ’to_freq’ and ’from_freq’. The parameter
Exponent is used to determine the sign of the exponent. It can be set to 1 or -1. The normalizing factor can be
set with Norm, and can take on the values ’none’, ’sqrt’ and ’n’. The parameter Mode determines the location of
the DC term of the FFT. It can be set to ’dc_center’ or ’dc_edge’.
In any case, the user must ensure the consistent use of the parameters. This means that the normalizing factors
used for the forward and backward transform must yield M N when multiplied, the exponents must be of opposite
sign, and Mode must be equal for both transforms.
A consistent combination is, for example (’to_freq’,-1,’n’,’dc_edge’) for the forward transform and
(’from_freq’,1,’none’,’dc_edge’) for the reverse transform. In this case, the FFT can be interpreted as interpo-
lation with trigonometric basis functions. Another possible combination is (’to_freq’,-1,’sqrt’,’dc_center’) and
(’from_freq’,1,’sqrt’,’dc_center’).
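The consistency requirement can be checked with a 1-D sketch of the generic DFT (illustrative only): with exponents of opposite sign and normalizing factors whose product is n (the 1-D analogue of MN), the round trip reproduces the input:

```python
import cmath

def dft_generic(x, exponent, norm):
    """1-D analogue of fft_generic: the sign of the exponent and the
    normalizing factor c ('none' -> 1, 'sqrt' -> sqrt(n), 'n' -> n)
    are selectable, as described above for the 2-D case."""
    n = len(x)
    c = {'none': 1.0, 'sqrt': n ** 0.5, 'n': float(n)}[norm]
    return [sum(x[k] * cmath.exp(exponent * 2j * cmath.pi * m * k / n)
                for k in range(n)) / c for m in range(n)]

# consistent combination: exponents of opposite sign, norms multiply to n
signal = [1.0, 2.0, 3.0, 4.0]
freq = dft_generic(signal, -1, 'n')
back = dft_generic(freq, +1, 'none')
# 'back' recovers 'signal' up to rounding error
```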
The parameter ResultType can be used to specify the result image type of the reverse transform (Direction
= ’from_freq’). In the forward transform (Direction = ’to_freq’), ResultType must be set to ’complex’.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Input image.
. ImageFFT (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex
Fourier-transformed image.
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Calculate forward or reverse transform.
Default Value : ’to_freq’
List of values : Direction ∈ {’to_freq’, ’from_freq’}
. Exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Sign of the exponent.
Default Value : -1
List of values : Exponent ∈ {-1, 1}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Normalizing factor of the transform.
Default Value : ’sqrt’
List of values : Norm ∈ {’none’, ’sqrt’, ’n’}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Location of the DC term in the frequency domain.
Default Value : ’dc_center’
List of values : Mode ∈ {’dc_center’, ’dc_edge’}
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Image type of the output image.
Default Value : ’complex’
List of values : ResultType ∈ {’complex’, ’byte’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’direction’, ’cyclic’}
Example (Syntax: C)
/* simulation of fft_image */
my_fft(Hobject In, Hobject *Out)
{
fft_generic(In,Out,"to_freq",-1,"sqrt","dc_center","complex");
}
/* simulation of fft_image_inv */
my_fft_image_inv(Hobject In, Hobject *Out)
{
fft_generic(In,Out,"from_freq",1,"sqrt","dc_center","byte");
}
Result
fft_generic returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
fft_generic is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
optimize_fft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convol_gabor, convert_image_type, power_byte, power_real, power_ln,
phase_deg, phase_rad, energy_gabor
Alternatives
fft_image, fft_image_inv, rft_generic
Module
Foundation
fft_generic(Image,ImageFFT,’to_freq’,-1,’sqrt’,’dc_center’,’complex’).
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
fft_generic(Image,ImageFFT,’from_freq’,1,’sqrt’,’dc_center’,’byte’).
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
Result
fft_image_inv returns 2 (H_MSG_TRUE) if the input image is of correct type. If the input is empty the
behavior can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception
handling is raised.
Parallelization Information
fft_image_inv is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
convol_fft, convol_gabor, fft_image, optimize_fft_speed,
read_fft_optimization_data
Possible Successors
convert_image_type, energy_gabor
Alternatives
fft_generic, rft_generic
See also
fft_image, fft_generic, energy_gabor
Module
Foundation
Result
gen_bandfilter returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
gen_bandfilter is reentrant and processed without parallelization.
Possible Successors
convol_fft
Alternatives
gen_circle, paint_region
See also
gen_highpass, gen_lowpass, gen_bandpass, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
Result
gen_bandpass returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_bandpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
gen_highpass, gen_lowpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
’dc_edge’ can be used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm =
’none’ and Mode = ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used.
Parameter
. ImageDerivative (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : complex
Derivative filter as image in the frequency domain.
. Derivative (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Derivative to be computed.
Default Value : ’x’
Suggested values : Derivative ∈ {’x’, ’y’, ’xx’, ’xy’, ’yy’, ’xxx’, ’xxy’, ’xyy’, ’yyy’}
. Exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Exponent used in the reverse transform.
Default Value : 1
Suggested values : Exponent ∈ {-1, 1}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Normalizing factor of the filter.
Default Value : ’none’
List of values : Norm ∈ {’none’, ’n’}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Location of the DC term in the frequency domain.
Default Value : ’dc_center’
List of values : Mode ∈ {’dc_center’, ’dc_edge’, ’rft’}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example
Result
gen_derivative_filter returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
handling is raised.
Parallelization Information
gen_derivative_filter is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
See also
fft_image_inv, gen_gauss_filter, gen_lowpass, gen_bandpass, gen_bandfilter,
gen_highpass
Module
Foundation
Parallelization Information
gen_filter_mask is reentrant and processed without parallelization.
Possible Successors
fft_image, fft_generic
See also
convol_image
Module
Foundation
gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)
Result
gen_gabor returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is raised.
Parallelization Information
gen_gabor is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic
Possible Successors
convol_gabor
Alternatives
gen_bandpass, gen_bandfilter, gen_highpass, gen_lowpass
See also
fft_image_inv, energy_gabor
Module
Foundation
Parameter
Result
gen_gauss_filter returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
gen_gauss_filter is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
See also
fft_image_inv, gen_gauss_filter, gen_lowpass, gen_bandpass, gen_bandfilter,
gen_highpass
Module
Foundation
Result
gen_highpass returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_highpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
convol_fft, gen_lowpass, gen_bandpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
Result
gen_lowpass returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_lowpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
gen_highpass, gen_bandpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
Parallelization Information
gen_sin_bandpass is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
Alternatives
gen_std_bandpass
See also
fft_image_inv, gen_gauss_filter, gen_derivative_filter, gen_bandpass,
gen_bandfilter, gen_highpass, gen_lowpass
Module
Foundation
Possible Successors
rft_generic, write_fft_optimization_data
Alternatives
read_fft_optimization_data
See also
optimize_fft_speed
Module
Foundation
phase = (90/π) · atan2(imaginary part, real part)
Hence, ImagePhase contains half the phase angle. For negative phase angles, 180 is added.
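The per-pixel computation can be sketched as (Python, illustrative only; the helper is not a HALCON operator):

```python
import math

def phase_deg_value(re, im):
    """Phase of one complex pixel as stored by phase_deg: the phase
    angle in degrees is halved (direction image encoding), and 180 is
    added for negative phase angles."""
    half = 90.0 / math.pi * math.atan2(im, re)
    return half + 180.0 if half < 0 else half

print(phase_deg_value(0.0, 1.0))   # +90 degrees, stored as half: 45
print(phase_deg_value(0.0, -1.0))  # -90 degrees: -45 + 180 = 135
```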
Parameter
. ImageComplex (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImagePhase (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : direction
Phase of the image in degrees.
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_deg(FFT,&Phase);
disp_image(Phase,WindowHandle);
Result
phase_deg returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
phase_deg is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
disp_image
Alternatives
phase_rad
See also
fft_image_inv
Module
Foundation
phase_rad computes the phase of a complex image in radians. The following formula is used:
phase = atan2(imaginary part, real part)
Parameter
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_rad(FFT,&Phase);
disp_image(Phase,WindowHandle);
Result
phase_rad returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
phase_rad is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
disp_image
Alternatives
phase_deg
See also
fft_image_inv, fft_generic, rft_generic
Module
Foundation
Parameter
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_byte(FFT,&Power);
disp_image(Power,WindowHandle);
Result
power_byte returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
power_byte is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image
Alternatives
abs_image, convert_image_type, power_real, power_ln
See also
fft_image, fft_generic, rft_generic
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real
Power spectrum of the input image.
Example (Syntax: C)
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_ln(FFT,&Power);
disp_image(Power,WindowHandle);
Result
power_ln returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can be set
via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is raised.
Parallelization Information
power_ln is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image, convert_image_type, scale_image
Alternatives
abs_image, convert_image_type, power_real, power_byte
See also
fft_image, fft_generic, rft_generic
Module
Foundation
Parameter
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_real(FFT,&Power);
disp_image(Power,WindowHandle);
Result
power_real returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
power_real is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image, convert_image_type, scale_image
Alternatives
abs_image, convert_image_type, power_byte, power_ln
See also
fft_image, fft_generic, rft_generic
Module
Foundation
read_fft_optimization_data ( : : FileName : )
The normalizing factor can be set with Norm, and can take on the values ’none’, ’sqrt’, and ’n’. The user must
ensure the consistent use of the parameters, i.e., the normalizing factors used for the forward and backward
transform must yield w · h when multiplied.
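The consistency condition can be illustrated with a small sketch (Python for illustration; the helper names are hypothetical). Each Norm value is interpreted as the divisor the transform applies; a forward/backward pair is consistent if the two divisors multiply to w · h:

```python
import math

def norm_divisor(norm, w, h):
    # divisor applied for each value of Norm (illustrative interpretation)
    n = w * h
    return {'none': 1.0, 'sqrt': math.sqrt(n), 'n': float(n)}[norm]

def consistent(norm_forward, norm_backward, w, h):
    # the two normalizing factors must yield w*h when multiplied
    product = norm_divisor(norm_forward, w, h) * norm_divisor(norm_backward, w, h)
    return math.isclose(product, w * h)
```

For example, ’sqrt’ for both directions is consistent, as is ’none’ for the forward and ’n’ for the backward transform, whereas ’none’ for both directions is not.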
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Input image.
. ImageFFT (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex
Fourier-transformed image.
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Calculate forward or reverse transform.
Default Value : ’to_freq’
List of values : Direction ∈ {’to_freq’, ’from_freq’}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Normalizing factor of the transform.
Default Value : ’sqrt’
List of values : Norm ∈ {’none’, ’sqrt’, ’n’}
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Image type of the output image.
Default Value : ’complex’
List of values : ResultType ∈ {’complex’, ’byte’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’direction’, ’cyclic’}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048}
Result
rft_generic returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is
raised.
Parallelization Information
rft_generic is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
optimize_rft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convert_image_type, power_byte, power_real, power_ln, phase_deg,
phase_rad
Alternatives
fft_generic, fft_image, fft_image_inv
Module
Foundation
write_fft_optimization_data ( : : FileName : )
Result
write_fft_optimization_data returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception handling is raised.
Parallelization Information
write_fft_optimization_data is reentrant and processed without parallelization.
Possible Predecessors
optimize_fft_speed, optimize_rft_speed
See also
fft_generic, fft_image, fft_image_inv, wiener_filter, wiener_filter_ni,
phot_stereo, sfs_pentland, sfs_mod_lr, sfs_orig_lr, read_fft_optimization_data
Module
Foundation
5.7 Geometric-Transformations
affine_trans_image ( Image : ImageAffinTrans : HomMat2D,
Interpolation, AdaptImageSize : )
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
The size of the target image can be controlled by the parameter AdaptImageSize: With value ’true’ the size
will be adapted so that no clipping occurs at the right or lower edge. With value ’false’ the target image has the
same size as the input image. Note that, independent of AdaptImageSize, the image is always clipped at the
left and upper edge, i.e., all image parts that have negative coordinates after the transformation are clipped.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_image corresponds to the following
chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output pixels as
homogeneous vectors):
( RowTrans_i )   ( 1 0 -0.5 )              ( 1 0 +0.5 )   ( Row_i )
( ColTrans_i ) = ( 0 1 -0.5 ) · HomMat2D · ( 0 1 +0.5 ) · ( Col_i )
(     1      )   ( 0 0  1   )              ( 0 0  1   )   (   1   )
As an effect, you might get unexpected results when creating affine transformations based on coordinates that are
derived from the image, e.g., by operators like area_center_gray. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric image and then rotate the image around this point using
hom_mat2d_rotate, the resulting image will not lie on the original one. In such a case, you can compensate
this effect by applying the following translations to HomMat2D before using it in affine_trans_image:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image(Image, ImageAffinTrans, HomMat2DAdapted, ’constant’,
’false’)
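The effect of this compensation can be checked numerically. The following Python sketch (hypothetical helper names; the manual's own examples use C/HDevelop syntax) builds the chain T(-0.5) · HomMat2D · T(+0.5) that affine_trans_image applies internally, and shows that with the adapted matrix a rotation about (100, 100) maps that point onto itself, while the unadapted matrix shifts it:

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(dr, dc):
    return [[1.0, 0.0, dr], [0.0, 1.0, dc], [0.0, 0.0, 1.0]]

def rotate_about(phi, r, c):
    # rotation by phi about (r, c) in (row, column) coordinates
    ct, st = math.cos(phi), math.sin(phi)
    rot = [[ct, -st, 0.0], [st, ct, 0.0], [0.0, 0.0, 1.0]]
    return matmul(matmul(translate(r, c), rot), translate(-r, -c))

def adapt(hom):
    # hom_mat2d_translate(..., 0.5, 0.5, ...) followed by
    # hom_mat2d_translate_local(..., -0.5, -0.5, ...)
    return matmul(matmul(translate(0.5, 0.5), hom), translate(-0.5, -0.5))

def applied_internally(hom):
    # the chain affine_trans_image actually applies to pixel coordinates
    return matmul(matmul(translate(-0.5, -0.5), hom), translate(0.5, 0.5))

def apply_point(m, r, c):
    return (m[0][0] * r + m[0][1] * c + m[0][2],
            m[1][0] * r + m[1][1] * c + m[1][2])
```

With the adapted matrix, the internal chain reduces exactly to HomMat2D, so the rotation center stays fixed.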
Parameter
hom_mat2d_identity(Matrix1)
hom_mat2d_scale(Matrix1,0.5,0.5,256.0,256.0,Matrix2)
hom_mat2d_rotate(Matrix2,3.14,256.0,256.0,Matrix3)
hom_mat2d_translate(Matrix3,-128.0,-128.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,1)
draw_rectangle2(WindowHandle,L,C,Phi,L1,L2)
hom_mat2d_identity(Matrix1)
get_system(width,Width)
get_system(height,Height)
hom_mat2d_translate(Matrix1,Height/2.0-L,Width/2.0-C,Matrix2)
hom_mat2d_rotate(Matrix2,3.14-Phi,Height/2.0,Width/2.0,Matrix3)
hom_mat2d_scale(Matrix3,Height/(2.0*L2),Width/(2.0*L1),
Height/2.0,Width/2.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,1)
Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_image returns 2 (H_MSG_TRUE). If the input is empty the behavior can be set via
set_system(::’no_object_result’,<Result>:). If necessary, an exception handling is raised.
Parallelization Information
affine_trans_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_rotate, hom_mat2d_scale
Alternatives
affine_trans_image_size, zoom_image_size, zoom_image_factor, mirror_image,
rotate_image, affine_trans_region
See also
set_part_style
Module
Foundation
Apply an arbitrary affine 2D transformation to an image and specify the output image size.
affine_trans_image_size applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation,
and slant (skewing), to the images given in Image and returns the transformed images in ImageAffinTrans.
The affine transformation is described by the homogeneous transformation matrix given in HomMat2D, which
can be created using the operators hom_mat2d_identity, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_translate, etc., or be the result of operators like vector_angle_to_rigid.
The components of the homogeneous transformation matrix are interpreted as follows: The row coordinate of the
image corresponds to x and the col coordinate corresponds to y of the coordinate system in which the transforma-
tion matrix was defined. This is necessary to obtain a right-handed coordinate system for the image. In particular,
this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite
naturally corresponds to the usual (row,column) order for coordinates in the image.
The region of the input image is ignored, i.e., assumed to be the full rectangle of the image. The region of the
resulting image is set to the transformed rectangle of the input image. If necessary, the resulting image is filled
with zero (black) outside of the region of the original image.
Generally, transformed points will lie between pixel coordinates. Therefore, an appropriate interpolation scheme
has to be used. The interpolation can also be used to avoid aliasing effects for scaled images. The quality and
speed of the interpolation can be set by the parameter Interpolation:
none Nearest-neighbor interpolation: The gray value is determined from the nearest pixel’s gray value (pos-
sibly low quality, very fast).
constant Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of mean
filter is used to prevent aliasing effects (medium quality and run time).
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
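The core of the ’constant’ and ’weighted’ modes is bilinear interpolation between the four nearest pixels (shown here without the anti-aliasing filters). A minimal Python sketch, assuming a row-major list of lists:

```python
def bilinear(img, r, c):
    # gray value at the fractional position (r, c), interpolated from
    # the four surrounding pixels
    r0, c0 = int(r), int(c)
    fr, fc = r - r0, c - c0
    return ((1.0 - fr) * (1.0 - fc) * img[r0][c0] +
            (1.0 - fr) * fc * img[r0][c0 + 1] +
            fr * (1.0 - fc) * img[r0 + 1][c0] +
            fr * fc * img[r0 + 1][c0 + 1])
```

The four weights always sum to 1, so the interpolation preserves the overall brightness.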
In addition, the system parameter ’int_zooming’ (see set_system) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is smaller in this case. For byte images, the differences to the more accurate calculation (using ’int_zooming’ =
’false’) is typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
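For example, the estimate Bmax = 0.5 · S · I / 2^15 can be evaluated as follows (Python, purely a worked example of the formula above):

```python
def max_undefined_border(scale, out_size):
    # Bmax = 0.5 * S * I / 2**15: maximum width of the border of
    # undefined gray values with fixed point arithmetic
    return 0.5 * scale * out_size / 2 ** 15

# scaling a 1024 pixel wide image by a factor of 8 yields an 8192 pixel
# wide output, so up to one border pixel may be undefined
```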
The size of the target image is specified by the parameters Width and Height. Note that the image is always
clipped at the left and upper edge, i.e., all image parts that have negative coordinates after the transformation are
clipped. If the affine transformation (in particular, the translation) is chosen appropriately, a part of the image
can be transformed as well as cropped in one call. This is useful, for example, when using the variation model
(see compare_variation_model), because with this mechanism only the parts of the image that should be
examined, are transformed.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_image_size corresponds to the
following chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output
pixels as homogeneous vectors):
( RowTrans_i )   ( 1 0 -0.5 )              ( 1 0 +0.5 )   ( Row_i )
( ColTrans_i ) = ( 0 1 -0.5 ) · HomMat2D · ( 0 1 +0.5 ) · ( Col_i )
(     1      )   ( 0 0  1   )              ( 0 0  1   )   (   1   )
As an effect, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the image, e.g., by operators like area_center_gray. For example, if you use this op-
erator to calculate the center of gravity of a rotationally symmetric image and then rotate the image around
this point using hom_mat2d_rotate, the resulting image will not lie on the original one. In such a
case, you can compensate this effect by applying the following translations to HomMat2D before using it in
affine_trans_image_size:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image_size(Image, ImageAffinTrans, HomMat2DAdapted,
’constant’, Width, Height)
Parameter
The parameter Interpolation can be used to select the desired interpolation mode for creating the cube maps.
Bilinear and bicubic interpolation are available.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte / uint2 / real
Input images.
. Front (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Front cube map.
. Rear (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Rear cube map.
. Left (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Left cube map.
. Right (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Right cube map.
. Top (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Top cube map.
. Bottom (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Bottom cube map.
. CameraMatrices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
(Array of) 3 × 3 projective camera matrices that determine the interior camera parameters.
. RotationMatrices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .hom_mat2d-array ; real
Array of 3 × 3 transformation matrices that determine rotation of the camera in the respective image.
. CubeMapDimension (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Width and height of the resulting cube maps.
Default Value : 1000
Restriction : CubeMapDimension ≥ 0
. StackingOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer
Mode of adding the images to the mosaic image.
Default Value : ’voronoi’
Suggested values : StackingOrder ∈ {’blend’, ’voronoi’, ’default’}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of image interpolation.
Default Value : ’bilinear’
Suggested values : Interpolation ∈ {’bilinear’, ’bicubic’}
Example
RotationMatrices, X, Y, Z, Error)
cam_mat_to_cam_par (CameraMatrix, Kappa, 640, 480, CamParam)
change_radial_distortion_cam_par (’fixed’, CamParam, 0, CamParOut)
gen_radial_distortion_map (Map, CamParam, CamParOut, ’bilinear’)
map_image (Images, Map, ImagesRect)
gen_cube_map_mosaic (Images, Front, Left, Rear, Right, Top, Bottom,
CameraMatrix, RotationMatrices, 1000, ’default’,
’bicubic’)
Result
If the parameters are valid, the operator gen_cube_map_mosaic returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
Parallelization Information
gen_cube_map_mosaic is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Alternatives
gen_spherical_mosaic, gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
On output, the parameter MosaicMatrices2D contains a set of 3 × 3 projective transformation matrices that
describe for each image in Images the mapping of the image to its position in the mosaic.
Parameter
gen_empty_obj (Images)
for J := 1 to 6 by 1
read_image (Image, ’mosaic/pcb_’+J$’02’)
concat_obj (Images, Image, Images)
endfor
From := [1,2,3,4,5]
To := [2,3,4,5,6]
Num := |From|
ProjMatrices := []
for J := 0 to Num-1 by 1
F := From[J]
T := To[J]
select_obj (Images, F, ImageF)
select_obj (Images, T, ImageT)
points_foerstner (ImageF, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsF, ColJunctionsF, CoRRJunctionsF,
CoRCJunctionsF, CoCCJunctionsF, RowAreaF,
ColAreaF, CoRRAreaF, CoRCAreaF, CoCCAreaF)
points_foerstner (ImageT, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsT, ColJunctionsT, CoRRJunctionsT,
CoRCJunctionsT, CoCCJunctionsT, RowAreaT,
ColAreaT, CoRRAreaT, CoRCAreaT, CoCCAreaT)
proj_match_points_ransac (ImageF, ImageT, RowJunctionsF,
ColJunctionsF, RowJunctionsT,
ColJunctionsT, ’ncc’, 21, 0, 0, 480, 640,
0, 0.5, ’gold_standard’, 1, 4364537,
ProjMatrix, Points1, Points2)
ProjMatrices := [ProjMatrices,ProjMatrix]
endfor
Parallelization Information
gen_projective_mosaic is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, vector_to_proj_hom_mat2d,
hom_vector_to_proj_hom_mat2d
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
the images are added to the mosaic image is important. Therefore, an array of integer values can be passed in
StackingOrder. The first index in this array will end up at the bottom of the image stack while the last one
will be on top. If ’default’ is given instead of an array of integers, the canonical order (images in the order used
in Images) will be used. Hence, if neither ’voronoi’ nor ’default’ are used, StackingOrder must contain a
permutation of the numbers 1,...,n, where n is the number of images passed in Images. It should be noted that
the mode ’voronoi’ cannot always be used. For example, at least two images must be passed to use this mode.
Furthermore, for very special configurations of the positions of the image centers on the sphere, the Voronoi cells
cannot be determined uniquely. With StackingOrder = ’blend’, an additional mode is available, which blends
the images of the mosaic smoothly. This way seams between the images become less apparent. The seam lines
between the images are the same as in ’voronoi’. This mode leads to visually more appealing images, but requires
significantly more resources. If the mode ’voronoi’ or ’blend’ cannot be used for whatever reason the mode is
switched internally to ’default’ automatically.
The parameter Interpolation can be used to select the desired interpolation mode for creating the mosaic.
Bilinear and bicubic interpolation are available.
Parameter
Example
Result
If the parameters are valid, the operator gen_spherical_mosaic returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
Parallelization Information
gen_spherical_mosaic is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Alternatives
gen_cube_map_mosaic, gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
The height and the width of Map define the size of the output image ImageMapped. The number of channels
in Map defines whether no interpolation or bilinear interpolation should be used. If Map only consists of one
channel, no interpolation is applied during the transformation. This channel contains ’int4’ values that describe
the geometric transformation: For each pixel in the output image ImageMapped the linearized coordinate of the
pixel in the input image Image from which the gray value should be taken is stored.
If bilinear interpolation between the pixels in the input image should be applied, Map must consist of 5 channels.
The first channel again consists of an ’int4’ image and describes the geometric transformation. The channels 2-5
consist of an ’uint2’ image each and contain the weights [0...1] of the four neighboring pixels that are used during
bilinear interpolation. If the overall brightness of the output image ImageMapped should not differ from the
overall brightness of the input image Image, the sum of the four unscaled weights must be 1 for each pixel. The
weights [0...1] are scaled to the range of values of the ’uint2’ image and therefore hold integer values from 0 to
65535.
Furthermore, the weights must be chosen in a way that the range of values of the output image ImageMapped is
not exceeded. The geometric relation between the four channels 2-5 is illustrated in the following sketch:
2 3
4 5
The reference point of the four pixels is the upper left pixel. The linearized coordinate of the reference point is
stored in the first channel.
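The per-pixel lookup described above can be sketched as follows (Python for illustration; the function name is hypothetical). The input image is a flat, row-major list, coord is the linearized coordinate of the upper left reference pixel from channel 1, and w1..w4 are the channel 2-5 weights scaled to 0..65535:

```python
def map_pixel_bilinear(image, width, coord, w1, w2, w3, w4):
    # weighted sum of the reference pixel and its right, lower, and
    # lower right neighbors; the four weights should sum to 65535
    g = (w1 * image[coord] +
         w2 * image[coord + 1] +
         w3 * image[coord + width] +
         w4 * image[coord + width + 1])
    return g / 65535.0
```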
Attention
The weights must be chosen in a way that the range of values of the output image ImageMapped is not exceeded.
For runtime reasons during the mapping process, it is not checked whether the linearized coordinates which are
stored in the first channel of Map, lie inside the input image. Thus, it must be ensured by the user that this constraint
is fulfilled. Otherwise, the program may crash!
Parameter
Mirror an image.
mirror_image reflects an image Image about one of three possible axes. If Mode is set to ’row’, it is reflected
about the horizontal axis, if Mode is set to ’column’, about the vertical axis, and if Mode is set to ’main’, about
the main diagonal x = y.
Parameter
. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Input image.
. ImageMirror (output_object) . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Reflected image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Axis of reflection.
Default Value : ’row’
List of values : Mode ∈ {’row’, ’column’, ’main’}
Example
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
mirror_image(Image,MirImage,’row’)
disp_image(MirImage,WindowHandle)
Parallelization Information
mirror_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
hom_mat2d_rotate, affine_trans_image, rotate_image
See also
rotate_image, hom_mat2d_rotate
Module
Foundation
Parameter
. ImageXY (input_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image in cartesian coordinates.
. ImagePolar (output_object) . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Result image in polar coordinates.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Row coordinate of the center of the coordinate system.
Default Value : 100
Suggested values : Row ∈ {0, 10, 100, 200}
Typical range of values : 0 ≤ Row ≤ 512
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column coordinate of the center of the coordinate system.
Default Value : 100
Suggested values : Column ∈ {0, 10, 100, 200}
Typical range of values : 0 ≤ Column ≤ 512
Minimum Increment : 1
Recommended Increment : 1
Example
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
polar_trans_image(Image,PolarImage,100,100,314,200)
disp_image(PolarImage,WindowHandle)
Parallelization Information
polar_trans_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
polar_trans_image_ext
See also
polar_trans_image_inv, polar_trans_region, polar_trans_region_inv,
polar_trans_contour_xld, polar_trans_contour_xld_inv, affine_trans_image
Module
Foundation
The radii and angles are inclusive, which means that the first row of the target image contains the circle with radius
RadiusStart and the last row contains the circle with radius RadiusEnd. For complete circles, where the
difference between AngleStart and AngleEnd equals 2π (360 degrees), this also means that the first column
of the target image will be the same as the last.
To avoid this, do not make this difference 2π, but 2π(1 − 1/Width) instead.
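A small Python helper (hypothetical name) for computing such an angle range:

```python
import math

def full_circle_angle_range(width):
    # difference between AngleStart and AngleEnd for a full circle whose
    # first and last polar image column do not coincide
    return 2.0 * math.pi * (1.0 - 1.0 / width)
```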
The call:
polar_trans_image(Image, PolarTransImage, Row, Column, Width, Height)
produces the same result as the call:
polar_trans_image_ext(Image, PolarTransImage, Row-0.5, Column-0.5,
6.2831853, 6.2831853/Width, 0, Height-1, Width, Height, ’nearest_neighbor’)
The offset of 0.5 is necessary since polar_trans_image does not do exact nearest neighbor interpolation.
The radii and angles follow from the information in the above paragraph and from the fact that
polar_trans_image does not handle its arguments inclusively. The start angle is bigger than the end angle to
make polar_trans_image_ext go clockwise, just like polar_trans_image does.
Attention
For speed reasons, the domain of the input image is ignored. The output image always has a complete rectangle as
its domain.
Parameter
The parameter Interpolation determines, which interpolation method is used to determine the gray values
of the output image. For Interpolation = ’nearest_neighbor’, the gray value is determined from the nearest
pixel in the input image. This mode is very fast, but also leads to the typical “jagged” appearance for large
enlargements of the image. For Interpolation = ’bilinear’, the gray values are interpolated bilinearly, leading
to longer runtimes, but also to significantly improved results.
The parameter TransformRegion can be used to determine whether the domain of Image is also transformed.
Since the transformation of the domain costs runtime, this parameter should be used to specify whether this is
desired or not. If TransformRegion is set to ’false’ the domain of the input image is ignored and the complete
image is transformed.
The projective transformation matrix could for example be created using the operator
vector_to_proj_hom_mat2d.
In a homography the points to be projected are represented by homogeneous vectors of the form (x, y, w). A
Euclidean point can be derived as (x’, y’) = (x/w, y/w).
Just like in affine_trans_image, x represents the row coordinate while y represents the column coordinate
in projective_trans_image. With this convention, affine transformations are a special case of projective
transformations in which the last row of HomMat2D is of the form (0, 0, c).
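Applying a homography to a single point can be sketched as follows (Python, illustrative; compare the HALCON operator projective_trans_point_2d):

```python
def projective_trans_point(hom, row, col):
    # x corresponds to the row and y to the column coordinate
    x = hom[0][0] * row + hom[0][1] * col + hom[0][2]
    y = hom[1][0] * row + hom[1][1] * col + hom[1][2]
    w = hom[2][0] * row + hom[2][1] * col + hom[2][2]
    # Euclidean point: (x', y') = (x / w, y / w)
    return x / w, y / w
```

For an affine matrix, whose last row is of the form (0, 0, c), w is constant and the division reduces to a uniform scaling.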
For images of type byte or uint2 the system parameter ’int_zooming’ selects between fast calculation in fixed point
arithmetics (’int_zooming’ = ’true’) and highly accurate calculation in floating point arithmetics (’int_zooming’ =
’false’). Especially for Interpolation = ’bilinear’, however, fixed point calculation can lead to minor gray
value deviations since the faster algorithm achieves an accuracy of no more than 1/16 pixel. Therefore, when
applying large scales ’int_zooming’ = ’false’ is recommended.
Parameter
Apply a projective transformation to an image and specify the output image size.
projective_trans_image_size applies the projective transformation (homography) determined by the
homogeneous transformation matrix HomMat2D on the input image Image and stores the result into the output
image TransImage.
TransImage will be clipped at the output dimensions Height×Width. Apart from this,
projective_trans_image_size is identical to its alternative version projective_trans_image.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / real
Input image.
. TransImage (output_object) . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / real
Output image.
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Homogeneous projective transformation matrix.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Interpolation method for the transformation.
Default Value : ’bilinear’
List of values : Interpolation ∈ {’nearest_neighbor’, ’bilinear’}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Output image width.
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Output image height.
. TransformRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should the domain of the input image also be transformed?
Default Value : ’false’
List of values : TransformRegion ∈ {’true’, ’false’}
Parallelization Information
projective_trans_image_size is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_contour_xld, projective_trans_region,
projective_trans_point_2d, projective_trans_pixel
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageRotate (output_object) . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Rotated image.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Rotation angle.
Default Value : 90
Suggested values : Phi ∈ {90, 180, 270}
Typical range of values : 0 ≤ Phi ≤ 360
Minimum Increment : 0.001
Recommended Increment : 0.2
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation.
Default Value : ’constant’
List of values : Interpolation ∈ {’none’, ’constant’, ’weighted’}
Example
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
rotate_image(Image,RotImage,270,’constant’)
disp_image(RotImage,WindowHandle)
Parallelization Information
rotate_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
hom_mat2d_rotate, affine_trans_image
See also
mirror_image
Module
Foundation
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
zoom_image_factor(Image,ZooImage,0.5,0.5,’constant’)
disp_image(ZooImage,WindowHandle)
Parallelization Information
zoom_image_factor is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
zoom_image_size, affine_trans_image, hom_mat2d_scale
See also
hom_mat2d_scale, affine_trans_image
Module
Foundation
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
zoom_image_size(Image,ZooImage,200,200,’constant’)
disp_image(ZooImage,WindowHandle)
Parallelization Information
zoom_image_size is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
zoom_image_factor, affine_trans_image, hom_mat2d_scale
See also
hom_mat2d_scale, affine_trans_image
Module
Foundation
5.8 Inpainting
harmonic_interpolation ( Image, Region : InpaintedImage : Precision : )
a smaller fraction than Precision of the norm of the input data or a maximum of 1000 iterations is reached.
Precision = 0.01 thus means a relative computational accuracy of 1%.
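Harmonic interpolation amounts to solving the discrete Laplace equation inside Region with boundary values taken from the surrounding image. The following Jacobi-iteration sketch is a minimal illustration of that idea together with a Precision-style stopping rule; HALCON's solver is considerably faster:

```python
import numpy as np

def harmonic_inpaint(image, mask, precision=0.001, max_iter=1000):
    """Fill mask==True pixels so that each becomes the average of its
    four neighbours (discrete Laplace equation), keeping all other
    pixels as boundary values.  Plain Jacobi iteration, only meant to
    illustrate the idea behind harmonic_interpolation."""
    u = image.astype(float).copy()
    u[mask] = image[~mask].mean()          # crude initial guess
    norm0 = np.linalg.norm(image[~mask])
    for _ in range(max_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        residual = np.linalg.norm((avg - u)[mask])
        u[mask] = avg[mask]                # update only the hole
        if residual < precision * norm0:   # Precision-style stop rule
            break
    return u

img = np.zeros((8, 8))
img[:, 4:] = 100.0                 # step edge
hole = np.zeros((8, 8), bool)
hole[3:5, 3:5] = True              # region to inpaint (off the border)
out = harmonic_inpaint(img, hole, precision=1e-6)
```

The filled pixels end up strictly between the minimum and maximum of the surrounding gray values, which is the characteristic maximum principle of harmonic functions.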
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. Precision (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Computational accuracy.
Default Value : 0.001
Suggested values : Precision ∈ {0.0, 0.0001, 0.001, 0.01}
Restriction : Precision ≥ 0.0
Parallelization Information
harmonic_interpolation is reentrant and automatically parallelized (on tuple level).
Alternatives
inpainting_ct, inpainting_aniso, inpainting_mcf, inpainting_texture,
inpainting_ced
References
L.C. Evans; “Partial Differential Equations”; AMS, Providence; 1998.
W. Hackbusch; “Iterative Lösung großer schwachbesetzter Gleichungssysteme”; Teubner, Stuttgart; 1991.
Module
Foundation
ut = div(g(|∇u|², c)·∇u)
with the initial value u = u0 defined by Image at a time t0 = 0. The equation is iterated Iterations times in
time steps of length Theta, so that the output image InpaintedImage contains the gray value function at the
time Iterations · Theta.
The primary goal of the anisotropic diffusion, which is also referred to as nonlinear isotropic diffusion, is the
elimination of image noise in constant image patches while preserving the edges in the image. The distinction
between edges and constant patches is achieved using the threshold Contrast on the magnitude of the gray
value differences between adjacent pixels. Contrast is referred to as the contrast parameter and is abbreviated
with the letter c. If the edge information is distributed in an environment of the already existing edges by smoothing
the edge amplitude matrix, it is furthermore possible to continue edges into the computation area Region. The
standard deviation of this smoothing process is determined by the parameter Rho.
The algorithm used is basically the same as in the anisotropic diffusion filter anisotropic_diffusion,
except that here, border treatment is not done by mirroring the gray values at the border of Region. Instead, this
procedure is only applicable on regions that keep a distance of at least 3 pixels to the border of the image matrix
of Image, since the gray values on this band around Region are used to define the boundary conditions for the
respective differential equation and thus assure consistency with the neighborhood of Region. Please note that
the inpainting progress is restricted to those pixels that are included in the ROI of the input image Image. If the
ROI does not include the entire region Region, a band around the intersection of Region and the ROI is used to
define the boundary values.
The result of the diffusion process depends on the gray values in the computation area of the input image Image.
It must be pointed out that already existing image edges are preserved within Region. In particular, this holds
for gray value jumps at the border of Region, which can result for example from a previous inpainting with
constant gray value. If the procedure is to be used for inpainting, it is recommended to apply the operator
harmonic_interpolation first to remove all unwanted edges inside the computation area and to minimize
the gray value difference between adjacent pixels, unless the input image already contains information inside
Region that should be preserved.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter Mode,
the following functions can be selected:
g1(x, c) = 1 / √(1 + 2x/c²)
Choosing the function g1 by setting Mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size Theta. In this case, however, there remains a slight diffusion even across edges of an amplitude larger than c.
g2(x, c) = 1 / (1 + x/c²)
The choice of ’perona-malik’ for Mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1, but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.
g3(x, c) = 1 − exp(−C·c⁸/x⁴)
The function g3 with the constant C = 3.31488, proposed by Weickert and selectable by setting Mode to ’weickert’,
is an improvement of g2 with respect to edge sharpening. The transition between smoothing and sharpening
happens very abruptly at x = c².
Furthermore, the value ’shock’ can be chosen for Mode to select a contrast invariant modification of the
anisotropic diffusion. In this variant, the generation of edges is not achieved by varying the diffusion coefficient
g; instead, the constant coefficient g = 1 and thus isotropic diffusion is used. Additionally, a shock filter of type
ut = −sgn(∇|∇u|)|∇u|
is applied, which, just like a negative diffusion coefficient, causes a sharpening of the edges, but works independently
of the absolute value of |∇u|. In this mode, Contrast does not have the meaning of a contrast parameter,
but specifies the ratio between the diffusion and the shock filter part applied at each iteration step. Hence, the
value 0 would correspond to pure isotropic diffusion, as used in the operator isotropic_diffusion. The
parameter is scaled in such a way that diffusion and sharpening cancel each other out for Contrast = 1. A
value Contrast > 1 should not be used, since it would make the algorithm unstable.
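The three diffusivity functions can be written down directly from the formulas above, with x standing for the squared gradient magnitude |∇u|² as in the diffusion equation (a sketch of the formulas, not HALCON code):

```python
import math

C = 3.31488  # Weickert's constant, from the text

def g_parabolic(x, c):
    """Mode = 'parabolic'; x stands for |grad u|^2 as in the text."""
    return 1.0 / math.sqrt(1.0 + 2.0 * x / c**2)

def g_perona_malik(x, c):
    """Mode = 'perona-malik'."""
    return 1.0 / (1.0 + x / c**2)

def g_weickert(x, c):
    """Mode = 'weickert'; near-total diffusion for x << c^2, almost
    none beyond the abrupt transition at x = c^2."""
    if x == 0.0:
        return 1.0
    return 1.0 - math.exp(-C * c**8 / x**4)

# Mode = 'shock' uses no variable diffusivity at all: g = 1.

# All three decrease from 1 (flat areas: full diffusion) towards 0
# (strong edges: diffusion stops):
for g in (g_parabolic, g_perona_malik, g_weickert):
    assert g(0.001, 2.0) > g(100.0, 2.0)
```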
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of edge sharpening algorithm.
Default Value : ’weickert’
List of values : Mode ∈ {’weickert’, ’perona-malik’, ’parabolic’, ’shock’}
Parallelization Information
inpainting_aniso is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_mcf, inpainting_texture,
inpainting_ced
References
J. Weickert; “Anisotropic Diffusion in Image Processing”; PhD Thesis; Fachbereich Mathematik, Universität
Kaiserslautern; 1996.
P. Perona, J. Malik; “Scale-space and edge detection using anisotropic diffusion”; Transactions on Pattern Analysis
and Machine Intelligence 12(7), pp. 629-639; IEEE; 1990.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
ut = div(G(u)∇u)
formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in Image, this is an
enhancement of the mean curvature flow or intrinsic heat equation
ut = div(∇u/|∇u|)·|∇u| = curv(u)·|∇u|
on the gray value function u defined by the input image Image at a time t0 = 0. The smoothing operator
mean_curvature_flow is a direct application of the mean curvature flow equation. With the operator
inpainting_mcf, it can also be used for image inpainting. The discrete diffusion equation is solved in
Iterations time steps of length Theta, so that the output image InpaintedImage contains the gray value
function at the time Iterations · Theta.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
Similar to the operator inpainting_mcf, the structure of the image data in Region is simplified by smoothing
the level lines of Image. By this, image errors and unwanted objects can be removed from the image, while the
edges in the neighborhood are extended continuously. This procedure is called image inpainting. The objective is
to introduce a minimum amount of artefacts or smoothing effects, so that the image manipulation is least visible to
a human beholder.
While the matrix G is given by
G_MCF(u) = I − ∇u(∇u)ᵀ / |∇u|²
in the case of the operator inpainting_mcf, where I denotes the unit matrix, G_MCF is again smoothed
componentwise by a Gaussian filter of standard deviation Rho for coherence_enhancing_diff. Then, the
final coefficient matrix is constructed from the eigenvalues λ1, λ2 and eigenvectors w1, w2 of the resulting
intermediate matrix, using the functions
g1(p) = 0.001
g2(p) = 0.001 + 0.999·exp(−1/p)
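The eigen-decomposition step can be sketched as follows. Note that the pairing G = g1(λ1)·w1w1ᵀ + g2(λ2)·w2w2ᵀ used below is one plausible reading of the construction, not a quote of the manual's formula (Weickert's original scheme applies the second diffusivity to (λ1 − λ2)² instead), so treat the pairing as an assumption:

```python
import numpy as np

def g1(p):
    return 0.001

def g2(p):
    # Safeguard against division by zero for p -> 0.
    return 0.001 if p <= 0 else 0.001 + 0.999 * np.exp(-1.0 / p)

def coherence_tensor(smoothed_gmcf):
    """Assemble a diffusion tensor from the eigenvalues/eigenvectors
    of the smoothed intermediate matrix.  The pairing of g1/g2 with
    the eigenvalues is an assumption, not taken from the manual."""
    lam, w = np.linalg.eigh(smoothed_gmcf)   # ascending eigenvalues
    w1, w2 = w[:, 0], w[:, 1]
    return (g1(lam[0]) * np.outer(w1, w1)
            + g2(lam[1]) * np.outer(w2, w2))

M = np.array([[0.9, 0.1],
              [0.1, 0.2]])
G = coherence_tensor(M)
# G is symmetric, with eigenvalues g1(lam_min) and g2(lam_max):
# almost no diffusion across the dominant direction, more along it.
```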
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Smoothing for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0
. Rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Smoothing for diffusion coefficients.
Default Value : 3.0
Suggested values : Rho ∈ {0.0, 1.0, 3.0, 5.0, 10.0, 30.0}
Restriction : Rho ≥ 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : (0 < Theta) ≤ 0.5
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1
Example
Parallelization Information
inpainting_ced is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_aniso, inpainting_mcf,
inpainting_texture
References
J. Weickert, V. Hlavac, R. Sara; “Multiscale texture enhancement”; Computer analysis of images and patterns,
Lecture Notes in Computer Science, Vol. 970, pp. 230-237; Springer, Berlin; 1995.
J. Weickert, B. ter Haar Romeny, L. Florack, J. Koenderink, M. Viergever; “A review of nonlinear diffusion
filtering”; Scale-Space Theory in Computer Vision, Lecture Notes in Comp. Science, Vol. 1252, pp. 3-28;
Springer, Berlin; 1997.
Module
Foundation
The operator inpainting_ct inpaints a missing region Region of an image Image by transporting image
information from the region’s boundary along the coherence direction into this region.
Since this operator’s basic concept is inpainting by continuing broken contour lines, the image content and
inpainting region must be such that this idea makes sense. That is, if a contour line hits the region to inpaint at a
pixel p, there should be some opposite point q where this contour line continues, so that the continuation of contour
lines from two opposite sides can succeed. In cases where there is less geometry in the image, a diffusion-based
inpainter, e.g., harmonic_interpolation, may yield better results. Alternatively, Kappa can be set to 0.
An extreme situation with little global geometry are pure textures. Then the idea behind this operator will fail to
produce good results (think of a checkerboard with a big region to inpaint relative to the checker fields). For these
kinds of images, a texture-based inpainting, e.g., inpainting_texture, can be used instead.
The operator uses a so-called upwind scheme to assign gray values to the missing pixels:
• The order of the pixels to process is given by their Euclidean distance to the boundary of the region to inpaint.
• A new value ui is computed as a weighted average of already known values uj within a disc of radius
Epsilon around the current pixel. The disc is restricted to already known pixels.
• The size of this scheme’s mask depends on Epsilon.
The initially used image data comes from a stripe of thickness Epsilon around the region to inpaint. Thus,
Epsilon must be at least 1 for the scheme to work, but should be greater. The maximum useful value for
Epsilon depends on the gray values that should be transported into the region. Epsilon = 5 is a suitable
choice in many cases.
Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the
weight. This estimated direction is called the coherence direction, and is computed by means of the structure tensor
S.
S = Gρ ∗ (Dv)(Dv)ᵀ   and   v = Gσ ∗ u
where ∗ denotes the convolution, u denotes the gray value image, D the derivative and G Gaussian kernels with
standard deviation σ and ρ. These standard deviations are defined by the operator’s parameters Sigma and Rho.
Sigma should have the size of the noise or unimportant little objects, which are then not considered in the
estimation step due to the pre-smoothing. Rho gives the size of the window around a pixel that will be used for
direction estimation. The coherence direction c then is given by the eigendirection of S with respect to the
minimal eigenvalue λ, i.e.
Sc = λc, |c| = 1
For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be
the same for all channels to propagate information in the same direction. Since the weight depends on the coherence
direction, the common direction is given by the eigendirection of a composite structure tensor. If u1 , ..., un denote
the n channels of the image, the channel structure tensors S1 , ..., Sn are computed and then combined to the
composite structure tensor S.
S = Σᵢ₌₁ⁿ aᵢ·Sᵢ
The coefficients ai are passed in ChannelCoefficients, which is a tuple of length n or length 1. If the tuple’s
length is 1, the arithmetic mean is used, i.e., ai = 1/n. If the length of ChannelCoefficients matches the
number of channels, the ai are set to
aᵢ = ChannelCoefficientsᵢ / Σⱼ₌₁ⁿ ChannelCoefficientsⱼ
in order to get a well-defined convex combination. Hence, the ChannelCoefficients must be greater than or
equal to zero and their sum must be greater than zero. If the tuple’s length is neither 1 nor the number of channels
or the requirement above is not satisfied, the operator returns an error message.
The purpose of using ChannelCoefficients other than the arithmetic mean is to adapt to different color
codes. The coherence direction is geometrical information about the composite image, which is given by high
contrasts such as edges. Thus the more contrast a channel has, the more geometrical information it contains, and
consequently the greater its coefficient should be chosen (relative to the others). For RGB images, [0.299, 0.587,
0.114] is a good choice.
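The normalization of the ChannelCoefficients and the assembly of the composite structure tensor can be sketched directly from the formulas above (an illustration, not HALCON code):

```python
import numpy as np

def composite_structure_tensor(channel_tensors, channel_coefficients=None):
    """Combine per-channel structure tensors S_1..S_n into the
    composite tensor S = sum a_i * S_i, with the normalization of
    ChannelCoefficients described in the text."""
    n = len(channel_tensors)
    if channel_coefficients is None or len(channel_coefficients) == 1:
        a = np.full(n, 1.0 / n)               # arithmetic mean
    elif len(channel_coefficients) == n:
        c = np.asarray(channel_coefficients, float)
        if np.any(c < 0) or c.sum() <= 0:
            raise ValueError("coefficients must be >= 0 with positive sum")
        a = c / c.sum()                       # convex combination
    else:
        raise ValueError("length must be 1 or the number of channels")
    return sum(ai * Si for ai, Si in zip(a, channel_tensors))

S1 = np.array([[1.0, 0.0], [0.0, 0.0]])
S2 = np.array([[0.0, 0.0], [0.0, 1.0]])
S3 = np.zeros((2, 2))
# RGB weighting suggested in the text:
S = composite_structure_tensor([S1, S2, S3], [0.299, 0.587, 0.114])
```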
The weight in the scheme is the product of a directional component and a distance component. If p is the 2D
coordinate vector of the current pixel to be inpainted and q the 2D coordinate of a pixel in the neighborhood (the
disc restricted to already known pixels), the directional component measures the deviation of the vector p − q
from the coherence direction. If this deviation, exponentially scaled by β, is large, a low directional component is
assigned; if it is small, a large directional component is assigned. β is controlled by Kappa (in percent):
β = 20 ∗ Epsilon ∗ Kappa/100
Kappa defines how important it is to propagate information along the coherence direction, so a large Kappa
yields sharp edges, while a low Kappa allows for more diffusion.
A special case is when Kappa is zero: in this case the directional component of the weight is constant (one).
The direction estimation step is then skipped to save computational costs, and the parameters Sigma, Rho,
and ChannelCoefficients become meaningless, i.e., the propagation of information is not based on the
structures visible in the image.
The distance component is 1/|p − q|. Consequently, if q is far away from p, a low distance component is assigned,
whereas if it is near to p, a high distance component is assigned.
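A sketch of the weight computation: the distance component 1/|p − q| and the scaling β = 20·Epsilon·Kappa/100 are taken from the text, but the exact form of the directional falloff is not documented here, so the exp(−β·deviation²) below is only a plausible stand-in:

```python
import math

def transport_weight(p, q, coherence_dir, epsilon, kappa):
    """Weight of known pixel q when inpainting pixel p: the product of
    a directional and a distance component (see text).  The precise
    directional falloff used by HALCON is not documented here, so an
    exponential in the squared deviation is assumed for illustration."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    dist = math.hypot(dx, dy)
    distance_comp = 1.0 / dist                # from the text
    if kappa == 0:
        return distance_comp                  # directional part constant
    beta = 20.0 * epsilon * kappa / 100.0     # scaling from the text
    # Deviation of (p - q) from the coherence direction c, |c| = 1.
    cross = (dx * coherence_dir[1] - dy * coherence_dir[0]) / dist
    return distance_comp * math.exp(-beta * cross ** 2)

# A pixel q exactly along the coherence direction gets full weight;
# one perpendicular to it is suppressed almost completely.
w_along = transport_weight((0, 0), (2, 0), (1.0, 0.0), 5, 25)
w_across = transport_weight((0, 0), (0, 2), (1.0, 0.0), 5, 25)
```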
Parameter
Parallelization Information
inpainting_ct is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_aniso, inpainting_mcf, inpainting_ced,
inpainting_texture
References
Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathematical
Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.
Module
Foundation
ut = div(∇u/|∇u|)·|∇u| = curv(u)·|∇u|
on the gray value function u defined in the region Region by the input image Image at a time t0 = 0.
The discretized equation is solved in Iterations time steps of length Theta, so that the output image
InpaintedImage contains the gray value function at the time Iterations · Theta.
A stationary state of the mean curvature flow equation, which is also the basis of the operator
mean_curvature_flow, has the special property that the level lines of u all have the curvature 0. This means
that after sufficiently many iterations there are only straight edges left inside the computation area of the output
image InpaintedImage. By this, the structure of objects inside of Region can be simplified, while the
remaining edges are continuously connected to those of the surrounding image matrix. This allows for a removal
of image errors and unwanted objects in the input image, a so-called image inpainting, which is only weakly
visible to a human beholder since there remain no obvious artefacts or smudges.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
Parameter
is set to ’min_grad’ the sum of the squares of the gray value gradients is minimized on the comparison blocks.
With the value ’min_range_extension’, the growth of the gray value interval of the comparison blocks with respect
to the reference block around the point x is minimized. If PostIteration has the value ’none’ no post-
iteration is performed. The choice of feasible blocks for this minimization process is determined by the parameter
Smoothness, which is an upper limit to the permitted increase of the mean absolute gray value difference
between the comparison blocks and the reference block with respect to the block that was selected by the original
algorithm. With increasing value of Smoothness, the inpainting result becomes smoother and loses structure.
The matching accuracy of the selected comparison blocks decreases. If Smoothness is set to 0, the post-iteration
only considers comparison blocks with an equally high correlation to the reference block.
If the inpainting process cannot be completed because there are points x, for which no complete block of intact gray
value information is contained in the search area of size SearchSize, the remaining pixels keep their initial gray
value and the ROI of the output image InpaintedImage is reduced by the region that could not be processed.
If the structure size of the ROI of Image or of the computation area Region is smaller than MaskSize, the
execution time of the algorithm can increase dramatically. Hence, it is recommended to use only clearly structured
input regions.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of the inpainting blocks.
Default Value : 9
Suggested values : MaskSize ∈ {7, 9, 11, 15, 21}
Restriction : (MaskSize ≥ 3) ∧ odd(MaskSize)
. SearchSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of the search window.
Default Value : 30
Suggested values : SearchSize ∈ {15, 30, 50, 100, 1000}
Restriction : (2 · SearchSize) > MaskSize
. Anisotropy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Influence of the edge amplitude on the inpainting order.
Default Value : 1.0
Suggested values : Anisotropy ∈ {0.0, 0.01, 0.1, 0.5, 1.0, 10.0}
Restriction : Anisotropy ≥ 0
. PostIteration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Post-iteration for artifact reduction.
Default Value : ’none’
List of values : PostIteration ∈ {’none’, ’min_grad’, ’min_range_extension’}
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Gray value tolerance for post-iteration.
Default Value : 1.0
Suggested values : Smoothness ∈ {0.0, 0.1, 0.2, 0.5, 1.0}
Restriction : Smoothness ≥ 0
Parallelization Information
inpainting_texture is reentrant and processed without parallelization.
Module
Foundation
5.9 Lines
bandpass_image ( Image : ImageBandpass : FilterType : )
bandpass_image serves as an edge filter. It applies a linear filter with the following convolution mask to
Image:
FilterType: ’lines’
In contrast to the edge operator sobel_amp this filter detects lines instead of edges, i.e., two closely adjacent
edges.
0 −2 −2 −2 0
−2 0 3 0 −2
−2 3 12 3 −2
−2 0 3 0 −2
0 −2 −2 −2 0
At the border of the image the gray values are mirrored. Over- and underflows of gray values are clipped. The
resulting images are returned in ImageBandpass.
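The filter can be reproduced on a small NumPy array, using the mask above with mirrored borders and clipped over-/underflows as described (an illustration, not HALCON's implementation):

```python
import numpy as np

# The 'lines' convolution mask from the text; its coefficients sum
# to zero, which makes the filter a zero-mean bandpass.
LINES_MASK = np.array([[ 0, -2, -2, -2,  0],
                       [-2,  0,  3,  0, -2],
                       [-2,  3, 12,  3, -2],
                       [-2,  0,  3,  0, -2],
                       [ 0, -2, -2, -2,  0]])

def bandpass_lines(image):
    """Convolve a byte image with the 'lines' mask, mirroring the gray
    values at the border and clipping over-/underflows."""
    padded = np.pad(image.astype(int), 2, mode="reflect")
    out = np.zeros(image.shape, dtype=int)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(padded[r:r + 5, c:c + 5] * LINES_MASK)
    return np.clip(out, 0, 255).astype(np.uint8)

# A one-pixel-wide bright line (two closely adjacent edges) responds
# strongly, while flat areas give no response.
img = np.zeros((9, 9), np.uint8)
img[4, :] = 100
response = bandpass_lines(img)
```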
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input images.
. ImageBandpass (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Bandpass-filtered images.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Filter type: currently only ’lines’ is supported.
Default Value : ’lines’
List of values : FilterType ∈ {’lines’}
Example (Syntax: C)
bandpass_image(Image,&LineImage,"lines");
threshold(LineImage,&Lines,60.0,255.0);
skeleton(Lines,&ThinLines);
Result
bandpass_image returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
bandpass_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, skeleton
Alternatives
convol_image, topographic_sketch, texture_laws
See also
highpass_image, gray_skeleton
Module
Foundation
Because color lines are defined as dark lines in the amplitude image, in contrast to lines_gauss no distinction
is made for single-channel images as to whether the lines are darker or brighter than their surroundings. Furthermore,
lines_color also returns staircase lines, i.e., lines for which the gray value of the line lies between the gray
values in the surrounding areas to the left and right of the line. In multi-channel images, the above definition
allows each channel to have a different line type. For example, in a three-channel image the first channel may have
a dark line, the second channel a bright line, and the third channel a staircase line at the same position.
If ExtractWidth is set to ’true’, the line width is extracted for each line point. Because the line extractor is
unable to extract certain junctions for differential geometric reasons, it tries to extract these by different
means if CompleteJunctions is set to ’true’.
lines_color links the line points into lines by using an algorithm similar to a hysteresis threshold op-
eration, which is also used in lines_gauss and edges_color_sub_pix. Points with an amplitude
larger than High are immediately accepted as belonging to a line, while points with an amplitude smaller
than Low are rejected. All other points are accepted as lines if they are connected to accepted line points (see
also lines_gauss). Here, amplitude means the line amplitude of the dark line (see lines_gauss and
lines_facet). This value corresponds to the third directional derivative of the smoothed input image in the
direction perpendicular to the line.
For the choice of the thresholds High and Low one has to keep in mind that the third directional derivative depends
on the amplitude and width of the line as well as on the choice of Sigma. The value of the third derivative depends
linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the line there
is an inverse dependence: the wider the line is, the smaller the response gets. This holds analogously for the
dependence on Sigma: the larger Sigma is chosen, the smaller the third derivative will be. This means that
for larger smoothing correspondingly smaller values for High and Low should be chosen.
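The hysteresis linking rule described above can be sketched on a toy graph of line points (a generic illustration of the rule, not HALCON's implementation; the point names and neighbourhood structure are made up):

```python
def hysteresis_select(points, amplitude, neighbours, low, high):
    """Select line points by hysteresis: amplitude > high is accepted
    outright, amplitude < low is rejected, and in-between points are
    kept only if connected to an accepted point."""
    accepted = {p for p in points if amplitude[p] > high}
    stack = list(accepted)
    while stack:
        p = stack.pop()
        for q in neighbours[p]:
            if q not in accepted and amplitude[q] >= low:
                accepted.add(q)
                stack.append(q)
    return accepted

# Chain a-b-c: b is weak but bridges two strong points; d is an
# isolated weak point and therefore dropped.
amp = {"a": 9.0, "b": 4.0, "c": 8.0, "d": 4.0}
nbr = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
kept = hysteresis_select("abcd", amp, nbr, low=3.0, high=6.0)
```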
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_color defines the following attributes for each line point if ExtractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line (oriented such that the normal vectors point to
the right side of the line as the line is traversed from start to end point; the angles are given with
respect to the row axis of the image.)
’response’ The magnitude of the second derivative
If ExtractWidth was set to ’true’, additionally the following attributes are defined:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
All these attributes can be queried via the operator get_contour_attrib_xld.
Attention
In general, but in particular if the line width is to be extracted, Sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. As the lowest allowable value, Sigma ≥ w/2.5 must be
selected. If, for example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, Sigma ≥ 2.3
should be selected. If it is expected that staircase lines are present in at least one channel, and if such lines should
be extracted, in addition to the above restriction, Sigma ≤ w should be selected. This is necessary because
staircase lines turn into normal step edges for large amounts of smoothing, and therefore no longer appear as dark
lines in the amplitude image of the color edge filter.
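The Sigma restrictions above can be captured in a small helper. This is a sketch; the function name and return convention are ours, not part of HALCON:

```python
import math

def sigma_bounds(line_width, staircase=False):
    """For a line of width w (half the diameter), return the lowest
    allowable Sigma (w/2.5), the recommended Sigma (w/sqrt(3)), and,
    for staircase lines, the upper bound Sigma <= w (else None)."""
    minimum = line_width / 2.5
    ideal = line_width / math.sqrt(3.0)
    upper = line_width if staircase else None
    return minimum, ideal, upper
```

For w = 4 the recommended value is 4/√3 ≈ 2.31, matching the “Sigma ≥ 2.3” recommendation in the text.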
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject
Extracted lines.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of Gaussian smoothing to be applied.
Default Value : 1.5
Suggested values : Sigma ∈ {1, 1.2, 1.5, 1.8, 2, 2.5, 3, 4, 5}
Typical range of values : 0.7 ≤ Sigma ≤ 20
Recommended Increment : 0.1
HALCON 8.0.2
250 CHAPTER 5. FILTER
lead to worse localization of the line. The parameters of the polynomial are used to calculate the line direction
for each pixel. Pixels which exhibit a local maximum in the second directional derivative perpendicular to the
line direction are marked as line points. The line points found in this manner are then linked to contours. This
is done by immediately accepting line points that have a second derivative larger than High. Points that have
a second derivative smaller than Low are rejected. All other line points are accepted if they are connected to
accepted points by a connected path. This is similar to a hysteresis threshold operation with infinite path length
(see hysteresis_threshold). However, this function is not used internally since it does not allow the
extraction of sub-pixel precise contours.
The gist of how to select the thresholds in the description of lines_gauss also holds for this operator. A value
of Sigma = 1.5 there roughly corresponds to a MaskSize of 5 here.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_facet defines the following attributes for each line point:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
These attributes can be queried via the operator get_contour_attrib_xld.
Attention
The smaller the filter size MaskSize is chosen, the more short, fragmented lines will be extracted. This can lead
to considerably longer execution times.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject
Extracted lines.
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of the facet model mask.
Default Value : 5
List of values : MaskSize ∈ {3, 5, 7, 9, 11}
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Extract bright or dark lines.
Default Value : ’light’
List of values : LightDark ∈ {’dark’, ’light’}
Example
Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ MaskSize).
Let S = Width ∗ Height be the number of pixels of Image. Then lines_facet requires at least 55 ∗ S bytes
of temporary memory during execution.
Result
lines_facet returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(::’no_object_result’,<Result>:). If
necessary, an exception handling is raised.
Parallelization Information
lines_facet is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_gauss
See also
bandpass_image, dyn_threshold, topographic_sketch
References
A. Busch: “Fast Recognition of Lines in Digital Images Without User-Supplied Parameters”. In H. Ebner, C.
Heipke, K. Eder, eds., “Spatial Information from Digital Photogrammetry and Computer Vision”, International
Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3/1, pp. 91-97, 1994.
Module
2D Metrology
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_gauss defines the following attributes for each line point if ExtractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
If ExtractWidth was set to ’true’ and CorrectPositions to ’false’, the following attributes are defined in
addition to the above ones:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
Finally, if CorrectPositions was set to ’true’, additionally the following attributes are defined:
’asymmetry’ The asymmetry of the line point
’contrast’ The contrast of the line point
Here, the asymmetry is positive if the asymmetric part, i.e., the part with the weaker gradient, is on the right side of
the line, while it is negative if the asymmetric part is on the left side of the line. All these attributes can be queried
via the operator get_contour_attrib_xld.
Attention
In general, but in particular if the line width is to be extracted, Sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. As the lowest allowable value, Sigma ≥ w/2.5 must be
selected. If, for example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, Sigma ≥ 2.3
should be selected.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject
Extracted lines.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of Gaussian smoothing to be applied.
Default Value : 1.5
Suggested values : Sigma ∈ {1, 1.2, 1.5, 1.8, 2, 2.5, 3, 4, 5}
Typical range of values : 0.7 ≤ Sigma ≤ 20
Recommended Increment : 0.1
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Extract bright or dark lines.
Default Value : ’light’
List of values : LightDark ∈ {’dark’, ’light’}
. ExtractWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should the line width be extracted?
Default Value : ’true’
List of values : ExtractWidth ∈ {’true’, ’false’}
Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ Sigma).
Let S = Width ∗ Height be the number of pixels of Image. Then lines_gauss requires at least 55 ∗ S bytes
of temporary memory during execution.
Result
lines_gauss returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(::’no_object_result’,<Result>:). If
necessary, an exception handling is raised.
Parallelization Information
lines_gauss is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_facet
See also
bandpass_image, dyn_threshold, topographic_sketch
References
C. Steger: “Extracting Curvilinear Structures: A Differential Geometric Approach”. In B. Buxton, R. Cipolla, eds.,
“Fourth European Conference on Computer Vision”, Lecture Notes in Computer Science, Volume 1064, Springer
Verlag, pp. 630-641, 1996.
C. Steger: “Extraction of Curved Lines from Images”. In “13th International Conference on Pattern Recognition”,
Volume II, pp. 251-255, 1996.
C. Steger: “An Unbiased Detector of Curvilinear Structures”. Technical Report FGBV-96-03, Forschungsgruppe
Bildverstehen (FG BV), Informatik IX, Technische Universität München, July 1996.
Module
2D Metrology
5.10 Match
exhaustive_match ( Image, RegionOfInterest,
ImageTemplate : ImageMatch : Mode : )
’norm_correlation’ Calculating the normalized correlation:

                             Σu,v (Image[i − u][j − v] · ImageTemplate[l − u][c − v])
   ImageMatch[i][j] = 255 · ───────────────────────────────────────────────────────────────
                            √( Σu,v Image[i − u][j − v]² · Σu,v ImageTemplate[l − u][c − v]² )
where X[i][j] denotes the gray value in the ith column and jth row of the image X. (l, c) is the center of
the region of ImageTemplate. u and v are chosen so that all points of the template are reached; i, j
run across the RegionOfInterest. At the image border only those parts of ImageTemplate are
considered which lie inside the image (i.e., u and v are restricted correspondingly). Range of values: 0 -
255 (best fit).
’dfd’ Calculating the average “displaced frame difference”:

                        Σu,v |Image[i − u][j − v] − ImageTemplate[l − u][c − v]|
   ImageMatch[i][j] = ───────────────────────────────────────────────────────────
                                       AREA(ImageTemplate)

   The terms are the same as in ’norm_correlation’. AREA(X) denotes the area of the region X. Range of
   values: 0 (best fit) - 255.
Calculating the normalized correlation as well as the “displaced frame difference” is very time consuming
(with regard to the area of ImageTemplate). Therefore, it is important to restrict the input region
(RegionOfInterest) if possible, i.e., to apply the filter only in a very confined “region of interest”.
As far as quality is concerned, both modes return comparable results; the mode ’dfd’, however, is faster by a
factor of about 3.5.
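Both matching criteria can be illustrated for a single search position. The following Python sketch follows the formulas above (our own simplified reimplementation; the border handling at the image frame is omitted):

```python
import numpy as np

def match_score(image, template, i, j, mode):
    """Evaluate one matching position, with (i, j) the position of the
    template center (l, c) in the image.  Returns 255 for a perfect
    fit in 'norm_correlation' mode and 0 for a perfect fit in 'dfd'
    mode, as in the formulas above."""
    h, w = template.shape
    l, c = h // 2, w // 2                  # template center
    patch = image[i - l:i - l + h, j - c:j - c + w].astype(float)
    t = template.astype(float)
    if mode == 'norm_correlation':
        num = (patch * t).sum()
        den = np.sqrt((patch ** 2).sum() * (t ** 2).sum())
        return 255.0 * num / den           # 255 = best fit
    elif mode == 'dfd':
        return np.abs(patch - t).mean()    # 0 = best fit
```

At a position where the image patch equals the template, ’norm_correlation’ yields 255 and ’dfd’ yields 0.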
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. RegionOfInterest (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Area to be searched in the input image.
. ImageTemplate (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
This area will be “matched” by Image within the RegionOfInterest.
. ImageMatch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Result image: values of the matching criterion.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Desired matching criterion.
Default Value : ’dfd’
List of values : Mode ∈ {’norm_correlation’, ’dfd’}
Example
read_image(Image,’monkey’)
disp_image(Image,WindowHandle)
draw_rectangle2(WindowHandle,Row,Column,Phi,Length1,Length2)
gen_rectangle2(Rectangle,Row,Column,Phi,Length1,Length2)
reduce_domain(Image,Rectangle,Template)
exhaustive_match(Image,Image,Template,ImageMatch,’dfd’)
invert_image(ImageMatch,ImageInvert)
local_max(Image,Maxima)
union1(Maxima,AllMaxima)
add_channels(AllMaxima,ImageInvert,FitMaxima)
threshold(FitMaxima,BestFit,230.0,255.0)
disp_region(BestFit,WindowHandle).
Result
If the parameter values are correct, the operator exhaustive_match returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
exhaustive_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, draw_rectangle1
Possible Successors
local_max, threshold
Alternatives
exhaustive_match_mg
Module
Foundation
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
draw_rectangle2(WindowHandle,&Row,&Column,&Phi,&Length1,&Length2);
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2);
reduce_domain(Image,Rectangle,&Template);
exhaustive_match_mg(Image,Template,&ImageMatch,"dfd",1,30);
invert_image(ImageMatch,&ImageInvert);
local_max(ImageInvert,&BestFit);
disp_region(BestFit,WindowHandle);
Result
If the parameter values are correct, the operator exhaustive_match_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
exhaustive_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, draw_rectangle1
Possible Successors
threshold, local_max
Alternatives
exhaustive_match
See also
gen_gauss_pyramid
Module
Foundation
Example (Syntax: C)
gen_gauss_pyramid(Image,&Pyramid,"weighted",0.5);
count_obj(Pyramid,&num);
for (i=1; i<=num; i++)
{
select_obj(Pyramid,&Single,i);
disp_image(Single,WindowHandle);
clear(Single);
}
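The pyramid construction used in the example can be imitated in a few lines. The following Python sketch assumes that mode ’weighted’ means smoothing with the separable binomial mask [1, 2, 1]/4 and that a scale of 0.5 halves the resolution per level; both are our assumptions for illustration, not a statement about the HALCON internals:

```python
import numpy as np

def gauss_pyramid(image, scale=0.5, min_size=2):
    """Build a Gaussian pyramid: smooth with the separable weighted
    mask [1, 2, 1]/4, then subsample, until the image becomes too
    small.  Assumes scale = 0.5 (halving per level)."""
    levels = [image.astype(float)]
    step = int(round(1.0 / scale))         # 2 for scale = 0.5
    k = np.array([1., 2., 1.]) / 4.
    while min(levels[-1].shape) // step >= min_size:
        img = levels[-1]
        # separable smoothing: rows, then columns
        img = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, img)
        img = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, img)
        levels.append(img[::step, ::step])
    return levels
```

The loop in the C example above then corresponds to iterating over the returned list of levels.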
Parallelization Information
gen_gauss_pyramid is reentrant and automatically parallelized (on channel level).
Possible Successors
image_to_channels, count_obj, select_obj, copy_obj
Alternatives
zoom_image_size, zoom_image_factor
See also
affine_trans_image
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageMonotony (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Result of the monotony operator.
Number of elements : ImageMonotony = Image
Example (Syntax: C)
Parallelization Information
monotony is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, median_image, mean_image, smooth_image,
invert_image
Possible Successors
threshold, exhaustive_match, disp_image
Alternatives
local_max, topographic_sketch, corner_response
Module
Foundation
5.11 Misc
convol_image ( Image : ImageResult : FilterMask, Margin : )
All image points are convolved with the filter mask. If an overflow or underflow occurs, the resulting gray value
is clipped. Hence, if filters that result in negative output values are used (e.g., derivative filters) the input image
should be of type int2. If a filename is given in FilterMask the filter mask is read from a text file with the
following structure:
<Mask size>
<Inverse weight of the mask>
<Matrix>
The first line contains the size of the filter mask, given as two numbers separated by white space (e.g., 3 3 for
3 × 3). Here, the first number defines the height of the filter mask, while the second number defines its width. The
next line contains the inverse weight of the mask, i.e., the number by which the convolution of a particular image
point is divided. The remaining lines contain the filter mask as integer numbers (separated by white space), one
line of the mask per line in the file. The file must have the extension “.fil”. This extension must not be passed to
the operator. If the filter mask is to be computed from a tuple, the tuple given in FilterMask must also satisfy
the structure described above. However, in this case the line feed is omitted.
For example, let’s assume we want to use the following filter mask:

          1 2 1
   1/16 · 2 4 2
          1 2 1
If the filter mask should be generated from a file, then the file should look like this:
3 3
16
1 2 1
2 4 2
1 2 1
In contrast, if the filter mask should be generated from a tuple, then the following tuple must be passed in
FilterMask:
[3,3,16,1,2,1,2,4,2,1,2,1]
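The tuple format described above is straightforward to parse and apply. A Python sketch (our own simplified reimplementation, without the border handling and gray value clipping performed by convol_image):

```python
import numpy as np

def parse_filter_mask(tup):
    """Parse the tuple form described above:
    [height, width, inverse weight, mask coefficients...]."""
    h, w, weight = tup[0], tup[1], tup[2]
    mask = np.array(tup[3:], dtype=float).reshape(h, w)
    return mask, float(weight)

def convol(image, tup):
    """Apply the mask to every image point and divide by the inverse
    weight.  The image is zero-padded at the border; for symmetric
    masks (as here) convolution and correlation coincide."""
    mask, weight = parse_filter_mask(tup)
    h, w = mask.shape
    pad = np.pad(image.astype(float), ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = (pad[r:r + h, c:c + w] * mask).sum() / weight
    return out
```

With the tuple [3,3,16,1,2,1,2,4,2,1,2,1], an interior pixel of a constant image is reproduced unchanged, since the mask coefficients sum to the inverse weight 16.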
Parameter
Expand the domain of an image and set the gray values in the expanded domain.
expand_domain_gray expands the border gray values of the domain outwards. The width of the expansion
is set by the parameter ExpansionRange. All filters in HALCON use gray values of the pixels outside the
domain depending on the filter width. This may lead to undesirable side effects especially in the border region
of the domain. For example, if the foreground (domain) and the background of the image differ strongly in
brightness, the result of a filter operation may lead to undesired darkening or brightening at the border of the
domain. In order to avoid this drawback, the domain is expanded by expand_domain_gray in a preliminary
stage, copying the gray values of the border pixels to the outside of the domain. In addition, the domain itself is
also expanded to reflect the newly set pixels. Therefore, in many cases it is reasonable to reduce the domain again
( reduce_domain or change_domain) after using expand_domain_gray and call the filter operation
afterwards. ExpansionRange should be set to half of the filter width.
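The expansion can be sketched as a ring-by-ring copy of the border gray values. This Python fragment illustrates the idea only; it is not the HALCON implementation:

```python
import numpy as np

def expand_domain_gray(image, domain, expansion_range):
    """Copy the border gray values of the domain outwards, one pixel
    ring per iteration (8-neighborhood), and grow the domain by the
    newly set pixels, as described above."""
    img = image.astype(float).copy()
    dom = domain.copy()
    for _ in range(expansion_range):
        new_dom = dom.copy()
        rows, cols = np.nonzero(~dom)
        for r, c in zip(rows, cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < dom.shape[0] and 0 <= cc < dom.shape[1]
                            and dom[rr, cc]):
                        img[r, c] = img[rr, cc]   # copy a border value
                        new_dom[r, c] = True
                        break
                else:
                    continue
                break
        dom = new_dom
    return img, dom
```

Note that each iteration only reads pixels of the previous domain, so the gray values propagate exactly one pixel per ring.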
Parameter
. InputImage (input_object) . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image with domain to be expanded.
. ExpandedImage (output_object) . . . . . . . . . . image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Output image with new gray values in the expanded domain.
read_image(Fabrik, ’fabrik.tif’);
gen_rectangle2(Rectangle_Label,243,320,-1.55,62,28);
reduce_domain(Fabrik, Rectangle_Label, Fabrik_Label);
/* Character extraction without gray value expansion: */
mean_image(Fabrik_Label,Label_Mean_normal,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_normal,Characters_normal,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_normal);
/* The characters in the border region are not extracted ! */
stop();
/* Character extraction with gray value expansion: */
expand_domain_gray(Fabrik_Label, Label_expanded,15);
reduce_domain(Label_expanded,Rectangle_Label, Label_expanded_reduced);
mean_image(Label_expanded_reduced,Label_Mean_expanded,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_expanded,Characters_expanded,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_expanded);
/* Now, even in the border region the characters are recognized */
Complexity
Let L be the perimeter of the domain. Then the runtime complexity is approximately O(L ∗ ExpansionRange).
Result
expand_domain_gray returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
handling is raised.
Parallelization Information
expand_domain_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
reduce_domain
Possible Successors
reduce_domain, mean_image, dyn_threshold
See also
reduce_domain, mean_image
Module
Foundation
Calculate the lowest possible gray value on an arbitrary path to the image border for each point in the image.
gray_inside determines the “cheapest” path to the image border for each point in the image, i.e., the path on
which the lowest gray values have to be overcome. The resulting image contains the difference of the gray value
of the particular point and the maximum gray value on the path. Bright areas in the result image therefore signify
that these areas (which are typically dark in the original image) are surrounded by bright areas. Dark areas in the
result image signify that there are only small gray value differences between them and the image border (which
doesn’t mean that they are surrounded by dark areas; a small “gap” of dark values suffices). The value 0 (black) in
the result image signifies that only darker or equally bright pixels exist on the path to the image border.
The operator is implemented by first segmenting the image into basins and watersheds using the watersheds
operator. If the image is regarded as a gray value mountain range, basins are the places where water accumulates
and the mountain ridges are the watersheds. Then, the watersheds are distributed to adjacent basins, thus leaving
only basins. The border of the domain (region) of the original image is now searched for the lowest gray value,
and the region in which it resides is given its result values. If the lowest gray value resides on the image border,
all result values can be calculated immediately using the gray value differences to the darkest point. If the smallest
found gray value lies in the interior of a basin, the lowest possible gray value has to be determined from the already
processed adjacent basins in order to compute the new values. An 8-neighborhood is used to determine adjacency.
The found region is subtracted from the regions yet to process, and the whole process is repeated. Thus, the image
is “stripped” from the outside.
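Instead of the watershed-based implementation described above, the defining property — the lowest possible maximum gray value on any path to the border — can also be computed directly with a priority-queue flood from the image border. A Python sketch for illustration (our own reimplementation of the definition, not of the HALCON algorithm):

```python
import heapq
import numpy as np

def gray_inside(image):
    """For each pixel, compute the minimal possible maximum gray value
    on any 8-connected path to the image border (a minimax path) and
    return that value minus the pixel's own gray value.  Bright result
    values thus mark dark areas enclosed by bright ones; 0 means the
    border is reachable over darker or equally bright pixels only."""
    img = image.astype(float)
    h, w = img.shape
    best = np.full((h, w), np.inf)
    heap = []
    for r in range(h):
        for c in range(w):
            if r in (0, h - 1) or c in (0, w - 1):
                best[r, c] = img[r, c]
                heapq.heappush(heap, (img[r, c], r, c))
    while heap:
        v, r, c = heapq.heappop(heap)
        if v > best[r, c]:
            continue                       # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    cand = max(v, img[rr, cc])
                    if cand < best[rr, cc]:
                        best[rr, cc] = cand
                        heapq.heappush(heap, (cand, rr, cc))
    return best - img
```

A dark pixel completely surrounded by value 5 receives result 5, while a dark pixel connected to the border by a “gap” of equally dark pixels receives result 0, as described in the text.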
Analogously to watersheds, it is advisable to apply a smoothing operation before calling watersheds, e.g.,
binomial_filter or gauss_image, in order to reduce the amount of regions that result from the watershed
algorithm, and thus to speed up the processing time.
Parameter
read_image(Bild,’coin’)
gauss_image (Bild,G_Bild,11)
open_window (0,0,512,512,0,’visible’,’’,WindowHandle)
gray_inside(G_Bild,Ausgabebild)
disp_image (Ausgabebild,WindowHandle).
Result
gray_inside always returns 2 (H_MSG_TRUE).
Parallelization Information
gray_inside is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, mean_image, median_image
Possible Successors
select_shape, area_center, count_obj
See also
watersheds
Module
Foundation
Example
Result
gray_skeleton returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_skeleton is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
mean_image
Alternatives
nonmax_suppression_amp, nonmax_suppression_dir, local_max
See also
skeleton, gray_dilation_rect
Module
Foundation
def_tab(Tab,I) :- I=255
Tab = 0
def_tab([Tk|Ts],I) :-
Tk is 255 - I
Iw is I -1
def_tab(Ts,Iw)
Result
The operator lut_trans returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
lut_trans is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Module
Foundation
   sym := 255 − (255 / MaskSize) · Σ_{i=1}^{MaskSize} ( |g(i) − g(−i)| / 255 )^Exponent
read_image(Image,’monkey’)
symmetry(Image,ImageSymmetry,70,0.0,0.5)
threshold(ImageSymmetry,SymmPoints,170,255)
Result
If the parameter values are correct, the operator symmetry returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:). If necessary an exception handling is raised.
Parallelization Information
symmetry is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold
Module
Foundation
Complexity
Let n be the number of pixels in the image. Then O(n) operations are performed.
Result
topographic_sketch returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the
behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
topographic_sketch is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
threshold
References
R. Haralick, L. Shapiro: “Computer and Robot Vision, Volume I”; Reading, Massachusetts, Addison-Wesley;
1992; Chapter 8.13.
Module
Foundation
5.12 Noise
add_noise_distribution ( Image : ImageNoise : Distribution : )
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
sp_distribution(30,30,Dist)
add_noise_distribution(Image,ImageNoise,Dist)
disp_image(ImageNoise,WindowHandle).
Result
add_noise_distribution returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the
behaviour can be set via set_system(’no_object_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
add_noise_distribution is reentrant and automatically parallelized (on tuple level, channel level, domain
level).
Possible Predecessors
gauss_distribution, sp_distribution, noise_distribution_mean
Alternatives
add_noise_white
See also
sp_distribution, gauss_distribution, noise_distribution_mean, add_noise_white
Module
Foundation
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
add_noise_white(Image,ImageNoise,90)
disp_image(ImageNoise,WindowHandle).
Result
add_noise_white returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
add_noise_white is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_noise_distribution
See also
add_noise_distribution, noise_distribution_mean, gauss_distribution,
sp_distribution
Module
Foundation
Parameter
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Standard deviation of the Gaussian noise distribution.
Default Value : 2.0
Suggested values : Sigma ∈ {1.5, 2.0, 3.0, 5.0, 10.0}
Typical range of values : 0.0 ≤ Sigma ≤ 100.0
Minimum Increment : 0.1
Recommended Increment : 1.0
. Distribution (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . distribution.values-array ; real
Resulting Gaussian noise distribution.
Number of elements : 513
Example
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
gauss_distribution(30,Dist)
add_noise_distribution(Image,ImageNoise,Dist)
disp_image(ImageNoise,WindowHandle).
Parallelization Information
gauss_distribution is reentrant and processed without parallelization.
Possible Successors
add_noise_distribution
Alternatives
sp_distribution, noise_distribution_mean
See also
sp_distribution, add_noise_white, noise_distribution_mean
Module
Foundation
noise_distribution_mean ( ConstRegion,
Image : : FilterSize : Distribution )
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
sp_distribution(30,30,Dist)
add_noise_distribution(Image,ImageNoise,Dist)
disp_image(ImageNoise,WindowHandle)
Parallelization Information
sp_distribution is reentrant and processed without parallelization.
Possible Successors
add_noise_distribution
Alternatives
gauss_distribution, noise_distribution_mean
See also
gauss_distribution, noise_distribution_mean, add_noise_white
Module
Foundation
5.13 Optical-Flow
where w = (u, v, 1) is the optical flow vector field to be determined (with a time step of 1 in the third coordinate).
The image sequence is regarded as a continuous function f (x), where x = (r, c, t) and (r, c) denotes the position
and t the time. Furthermore, ED (w) denotes the data term, while ES (w) denotes the smoothness term, and α is a
regularization parameter that determines the smoothness of the solution. The regularization parameter α is passed
in FlowSmoothness. While the data term encodes assumptions about the constancy of the object features in
consecutive images, e.g., the constancy of the gray values or the constancy of the first spatial derivative of the
gray values, the smoothness term encodes assumptions about the (piecewise) smoothness of the solution, i.e., the
smoothness of the vector field to be determined.
The FDRIG algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (r + u, c + v, t + 1) = f (r, c, t). This can be written more compactly as
f (x + w) = f (x) using vector notation.
Constancy of the spatial gray value derivatives: It is assumed that corresponding pixels in consecutive images of an
image sequence additionally have the same spatial gray value derivatives, i.e., that ∇2 f (x + u, y + v, t + 1) =
∇2 f (x, y, t) also holds, where ∇2 f = (∂x f, ∂y f ). This can be written more compactly as ∇2 f (x+w) = ∇2 f (x).
In contrast to the gray value constancy, the gradient constancy has the advantage that it is invariant to additive global
illumination changes.
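The invariance of the gradient constancy assumption to additive global illumination changes is easy to verify numerically; a small Python check (illustrative only):

```python
import numpy as np

def spatial_gradient(f):
    """Central-difference spatial gradient (row and column derivative)
    of an image, standing in for the nabla_2 operator in the text."""
    return np.gradient(f, axis=0), np.gradient(f, axis=1)

# An additive global illumination change f -> f + offset shifts all
# gray values but leaves the spatial derivatives, and hence the
# gradient constancy term, completely unchanged.
f = np.arange(16.0).reshape(4, 4)
g = f + 50.0
for a, b in zip(spatial_gradient(f), spatial_gradient(g)):
    assert np.allclose(a, b)
```

The gray value constancy term, by contrast, is violated by the offset, which is exactly why the gradient term is added with weight γ.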
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic
penalization ΨD (s²) = s² is replaced by a linear penalization via ΨD (s²) = √(s² + ε²), where ε = 0.001 is a fixed
regularization constant.
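The effect of the robust penalization can be seen on a toy example: a single outlier dominates the quadratic energy but contributes only linearly under ΨD (s²) = √(s² + ε²). A Python sketch (the residual values are invented for illustration):

```python
import math

EPS = 0.001  # fixed regularization constant (epsilon in the text)

def psi_quadratic(s2):
    # customary non-robust quadratic penalization
    return s2

def psi_robust(s2):
    # statistically robust linear penalization sqrt(s^2 + eps^2)
    return math.sqrt(s2 + EPS ** 2)

# 100 inliers with residual 0.1 and one outlier with residual 10:
inliers = [0.1] * 100
outlier = 10.0
quad = sum(psi_quadratic(s * s) for s in inliers + [outlier])
robust = sum(psi_robust(s * s) for s in inliers + [outlier])
assert quad > 100.0      # energy dominated by the single outlier
assert robust < 21.0     # outlier contributes only about 10
```

Under the quadratic penalty the outlier contributes 100 of roughly 101 energy units; under the robust penalty it contributes about 10 of roughly 20, so it can no longer dictate the minimizer.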
Preservation of discontinuities in the flow field I: The solution is assumed to be piecewise smooth. While the actual
smoothness is achieved by penalizing the first derivatives of the flow, |∇2 u|² + |∇2 v|², the use of a statistically
robust (linear) penalty function ΨS (s²) = √(s² + ε²) with ε = 0.001 provides the desired preservation of edges in
the movement in the flow field to be determined. This type of smoothness term is called flow-driven and isotropic.
Taking into account all of the above assumptions, the energy functional of the FDRIG algorithm can be written as

E_FDRIG(w) = ∫ Ψ_D( |f(x + w) − f(x)|² + γ |∇2f(x + w) − ∇2f(x)|² ) dr dc
           + α ∫ Ψ_S( |∇2u(x)|² + |∇2v(x)|² ) dr dc ,

where the two terms inside Ψ_D represent the gray value constancy and the gradient constancy, respectively, and the
second integral represents the smoothness assumption.
Here, α is the regularization parameter passed in FlowSmoothness, while γ is the gradient constancy weight
passed in GradientConstancy. These two parameters, which constitute the model parameters of the FDRIG
algorithm, are described in more detail below.
The DDRAW algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., of points that violate the constancy
assumptions, they are penalized in a statistically robust manner: the customary non-robust quadratic penalization
Ψ_D(s²) = s² is replaced by the (asymptotically) linear penalization Ψ_D(s²) = √(s² + ε²), where ε = 0.001 is a fixed
regularization constant.
Preservation of discontinuities in the flow field II: The solution is assumed to be piecewise smooth. In contrast to
the FDRIG algorithm, which allows discontinuities everywhere, the DDRAW algorithm only allows discontinuities
at the edges in the original image. Here, the local smoothness is controlled in such a way that the flow field is sharp
across image edges, while it is smooth along the image edges. This type of smoothness term is called data-driven
and anisotropic.
All assumptions of the DDRAW algorithm can be combined into the following energy functional:

E_DDRAW(w) = ∫ Ψ_D( |f(x + w) − f(x)|² ) dr dc
           + α ∫ ( ∇2u(x)ᵀ P_NE(∇2f(x)) ∇2u(x) + ∇2v(x)ᵀ P_NE(∇2f(x)) ∇2v(x) ) dr dc ,

where the first term represents the gray value constancy and the second term the smoothness assumption,
where P_NE(∇2f(x)) is a normalized projection matrix orthogonal to ∇2f(x), for which

P_NE(∇2f) = ( ∇2f^⊥ (∇2f^⊥)ᵀ + ε_S² I ) / ( |∇2f|² + 2 ε_S² )

holds, where ∇2f^⊥ denotes ∇2f rotated by 90 degrees and I the 2 × 2 identity matrix. This matrix ensures that
the smoothness of the flow field is only assumed along the image edges. In contrast, no assumption is made with
respect to the smoothness across the image edges, resulting in the fact that discontinuities in the solution may
occur across the image edges. Here, ε_S = 0.001 serves as a regularization parameter that prevents the projection
matrix P_NE(∇2f(x)) from becoming singular. In contrast to the FDRIG algorithm, there is only one model
parameter for the DDRAW algorithm: the regularization parameter α. As mentioned above, α is described in more
detail below.
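The anisotropic smoothness term can be illustrated with a small Python sketch that builds a regularized projection matrix orthogonal to the image gradient in the spirit of Nagel and Enkelmann (the exact normalization used by the operator may differ; all names are illustrative):

```python
def projection_matrix_ne(gx, gy, eps=0.001):
    # Regularized projection matrix orthogonal to the gradient g = (gx, gy):
    #   P(g) = (g_perp g_perp^T + eps^2 I) / (|g|^2 + 2 eps^2),
    # where g_perp = (-gy, gx). eps keeps P well defined in flat regions.
    norm = gx * gx + gy * gy + 2.0 * eps * eps
    return [[(gy * gy + eps * eps) / norm, -gx * gy / norm],
            [-gx * gy / norm, (gx * gx + eps * eps) / norm]]

def smoothness_weight(P, dx, dy):
    # Quadratic form d^T P d for a flow derivative direction d = (dx, dy).
    return P[0][0] * dx * dx + 2.0 * P[0][1] * dx * dy + P[1][1] * dy * dy

# Vertical image edge (gradient points in the x direction): flow variation
# across the edge is barely penalized (discontinuities may survive there),
# while variation along the edge is smoothed.
P = projection_matrix_ne(1.0, 0.0)
print(smoothness_weight(P, 1.0, 0.0), smoothness_weight(P, 0.0, 1.0))
```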
As for the two approaches described above, the CLG algorithm uses certain assumptions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Small displacements: In contrast to the two approaches above, it is assumed that only small displacements occur,
i.e., displacements on the order of a few pixels. This facilitates a linearization of the constancy assumption in
the model and leads to the approximation f(x) + ∇3f(x)ᵀ w(x) = f(x), i.e., ∇3f(x)ᵀ w(x) = 0 should
hold. Here, ∇3f(x) denotes the gradient in the spatial as well as the temporal domain.
Local constancy of the solution: Furthermore, it is assumed that the flow field to be computed is locally constant.
This facilitates the integration of the image data in the data term over the respective neighborhood of each pixel.
This, in turn, increases the robustness of the algorithm against noise. Mathematically, this can be achieved by
reformulating the quadratic data term as (∇3f(x)ᵀ w(x))² = w(x)ᵀ ∇3f(x) ∇3f(x)ᵀ w(x). By performing a
local Gaussian-weighted integration over a neighborhood specified by the integration scale ρ (passed in
IntegrationSigma), the following data term is obtained: w(x)ᵀ (G_ρ ∗ (∇3f(x) ∇3f(x)ᵀ)) w(x). Here, G_ρ ∗ … denotes
a convolution of the 3 × 3 matrix ∇3f(x) ∇3f(x)ᵀ with a Gaussian filter with a standard deviation of ρ (see derivate_gauss).
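The locally integrated data term can be sketched in Python (illustrative helper names; the operator performs this integration internally):

```python
def structure_tensor_clg(gradients, weights):
    # Local weighted integration of the 3x3 matrices (grad3 f)(grad3 f)^T:
    # an explicit version of G_rho * (grad3 f grad3 f^T). 'gradients' holds
    # spatio-temporal gradients (fr, fc, ft) from a neighborhood, 'weights'
    # the corresponding normalized Gaussian weights.
    J = [[0.0] * 3 for _ in range(3)]
    for (fr, fc, ft), w in zip(gradients, weights):
        g = (fr, fc, ft)
        for i in range(3):
            for j in range(3):
                J[i][j] += w * g[i] * g[j]
    return J

def clg_data_term(J, u, v):
    # Quadratic data term w^T J w with w = (u, v, 1).
    w = (u, v, 1.0)
    return sum(w[i] * J[i][j] * w[j] for i in range(3) for j in range(3))

# Single gradient (1, 0, -2): the linearized constancy assumption
# grad3 f^T w = 0 is satisfied by u = 2, which zeroes the data term.
J = structure_tensor_clg([(1.0, 0.0, -2.0)], [1.0])
print(clg_data_term(J, 2.0, 0.0))  # → 0.0
```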
General smoothness of the flow field: Finally, the solution is assumed to be smooth everywhere in the image. This
particular type of smoothness term is called homogeneous.
All of the above assumptions can be combined into the following energy functional:

E_CLG(w) = ∫ w(x)ᵀ (G_ρ ∗ (∇3f(x) ∇3f(x)ᵀ)) w(x) dr dc + α ∫ ( |∇2u(x)|² + |∇2v(x)|² ) dr dc ,

where the first term represents the gray value constancy and the second term the smoothness assumption.
The corresponding model parameters are the regularization parameter α as well as the integration scale ρ (passed
in IntegrationSigma), which determines the size of the neighborhood over which to integrate the data term.
These two parameters are described in more detail below.
To compute the optical flow vector field for two consecutive images of an image sequence with the FDRIG,
DDRAW, or CLG algorithm, the solution that best fulfills the assumptions of the respective algorithm must be
determined. From a mathematical point of view, this means that a minimization of the above energy functionals
should be performed. For the FDRIG and DDRAW algorithms, so-called coarse-to-fine warping strategies play an
important role in this minimization, because they enable the calculation of large displacements. Thus, they are a
suitable means to handle the omission of the linearization of the constancy assumptions numerically in these two
approaches.
To calculate large displacements, coarse-to-fine warping strategies use two concepts that are closely interlocked:
The successive refinement of the problem (coarse-to-fine) and the successive compensation of the current image
pair by already computed displacements (warping). Algorithmically, such coarse-to-fine warping strategies can be
described as follows:
1. First, both images of the current image pair are zoomed down to a very coarse resolution level.
2. Then, the optical flow vector field is computed on this coarse resolution.
3. The vector field is transferred to the next finer resolution level: it is applied there to the second image of the
image sequence, i.e., the problem on the finer resolution level is compensated by the already computed optical
flow field. This step is also known as warping.
4. The modified problem (difference problem) is now solved on the finer resolution level, i.e., the optical flow
vector field is computed there.
5. The steps 3-4 are repeated until the finest resolution level is reached.
6. The final result is computed by adding up the vector fields from all resolution levels.
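The steps above can be sketched for a one-dimensional toy problem (pure Python; a brute-force shift search stands in for the actual variational solver, and all names are illustrative):

```python
import math

def downsample(s):
    # Zoom the signal down to half resolution (average pairs of samples).
    return [(s[2 * i] + s[2 * i + 1]) / 2.0 for i in range(len(s) // 2)]

def ssd(f1, f2, shift):
    # Mean squared gray value difference between f1 and f2 displaced by 'shift'.
    total, n = 0.0, 0
    for i in range(len(f1)):
        j = i + shift
        if 0 <= j < len(f2):
            total += (f1[i] - f2[j]) ** 2
            n += 1
    return total / n

def coarse_to_fine_shift(f1, f2, levels):
    # Steps 1-2: zoom both signals down, solve on the coarsest level.
    pyr1, pyr2 = [f1], [f2]
    for _ in range(levels - 1):
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    d = 0
    for lvl in range(levels - 1, -1, -1):
        if lvl < levels - 1:
            d *= 2  # step 3: transfer the displacement to the finer level
        # Steps 3-4: the current estimate compensates f2 (warping); only a
        # small increment (the difference problem) is searched per level.
        rng = range(-4, 5) if lvl == levels - 1 else range(-2, 3)
        g1, g2 = pyr1[lvl], pyr2[lvl]
        d += min(rng, key=lambda inc: ssd(g1, g2, d + inc))
    # Step 6: d is the accumulated sum of all per-level increments.
    return d

# f2 is f1 displaced by 6 samples: a displacement much larger than one
# sample is recovered via small per-level increments.
f1 = [math.exp(-((i - 20) / 5.0) ** 2) for i in range(64)]
f2 = [math.exp(-((i - 26) / 5.0) ** 2) for i in range(64)]
print(coarse_to_fine_shift(f1, f2, 4))
```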
This incremental computation of the optical flow vector field has the following advantage: While the coarse-to-fine
strategy ensures that the displacements on the finest resolution level are very small, the warping strategy ensures
that the displacements remain small for the incremental displacements (optical flow vector fields of the difference
problems). Since small displacements can be computed much more accurately than larger displacements, the
accuracy of the results typically increases significantly by using such a coarse-to-fine warping strategy. However,
instead of having to solve a single correspondence problem, an entire hierarchy of these problems must now be
solved. For the CLG algorithm, such a coarse-to-fine warping strategy is unnecessary since the model already
assumes small displacements.
The maximum number of resolution levels (warping levels), the resolution ratio between two consecutive resolution
levels, as well as the finest resolution level can be specified for the FDRIG as well as the DDRAW algorithm.
Details can be found below.
The minimization of functionals is mathematically very closely related to the minimization of functions: Just as
a zero crossing of the first derivative is a necessary condition for the minimum of a function, the fulfillment of
the so-called Euler-Lagrange equations is a necessary condition for the minimizing function of a functional (here,
the minimizing function corresponds to the desired optical flow vector field). The Euler-Lagrange equations are
partial differential equations. By discretizing these Euler-Lagrange equations using finite
differences, large sparse nonlinear equation systems result for the FDRIG and DDRAW algorithms. Because
coarse-to-fine warping strategies are used, such an equation system must be solved for each resolution level, i.e.,
for each warping level. For the CLG algorithm, a single sparse linear equation system must be solved.
To ensure that the above nonlinear equation systems can be solved efficiently, the FDRIG and DDRAW algorithms
use bidirectional multigrid methods. From a numerical point of view, these strategies are among the fastest methods for
solving large linear and nonlinear equation systems. In contrast to conventional nonhierarchical iterative methods,
e.g., the different linear and nonlinear Gauss-Seidel variants, the multigrid methods have the advantage that correc-
tions to the solution can be determined efficiently on coarser resolution levels. This, in turn, leads to a significantly
faster convergence. The basic idea of multigrid methods additionally consists of hierarchically computing these
correction steps, i.e., the computation of the error on a coarser resolution level itself uses the same strategy and
efficiently computes its error (i.e., the error of the error) by correction steps on an even coarser resolution level.
Depending on whether one or two error correction steps are performed per cycle, a so-called V or W cycle is
obtained. The corresponding strategies for stepping through the resolution hierarchy are as follows for two to four
resolution levels:

[Figure: V-cycles and W-cycles through the resolution hierarchy, from fine (top) to coarse (bottom), for two, three,
and four resolution levels.]
Here, iterations on the original problem are denoted by large markers, while small markers denote iterations on
error correction problems.
Algorithmically, a correction cycle can be described as follows:
1. In the first step, several (few) iterations using an iterative linear or nonlinear basic solver are performed (e.g.,
a variant of the Gauss-Seidel solver). This step is called the pre-relaxation step.
2. In the second step, the current error is computed to correct the current solution (the solution after step 1).
For efficiency reasons, the error is calculated on a coarser resolution level. This step, which can be performed
iteratively several times, is called the coarse grid correction step.
3. In a final step, again several (few) iterations using the iterative linear or nonlinear basic solver of step 1 are
performed. This step is called the post-relaxation step.
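The distinction between V and W cycles can be sketched by tracing which resolution level each relaxation step visits (illustrative Python; one or two coarse grid correction steps per level):

```python
def cycle(level, coarsest, corrections, trace):
    # One correction cycle starting on 'level': pre-relaxation, then
    # 'corrections' coarse grid correction steps (1 -> V cycle, 2 -> W
    # cycle), each followed by correction and post-relaxation on this level.
    trace.append(level)
    if level < coarsest:
        for _ in range(corrections):
            cycle(level + 1, coarsest, corrections, trace)
            trace.append(level)
    return trace

print(cycle(1, 3, 1, []))  # → [1, 2, 3, 2, 1]  (V cycle over three levels)
print(cycle(1, 3, 2, []))  # W cycle: the coarsest level is visited more often
```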
In addition, the solution can be initialized in a hierarchical manner. Starting from a very coarse variant of the
original (non)linear equation system, the solution is successively refined. To do so, interpolated solutions of
coarser variants of the equation system are used as the initialization of the next finer variant. On each resolution
level itself, the V or W cycles described above are used to efficiently solve the (non)linear equation system on that
resolution level. The corresponding multigrid methods are called full multigrid methods in the literature. The full
multigrid algorithm can be visualized as follows:
[Figure: traversal of the resolution hierarchy, from fine (top) to coarse (bottom), for a full multigrid algorithm.]
This example represents a full multigrid algorithm that uses two W correction cycles per resolution level of the
hierarchical initialization. The interpolation steps of the solution from one resolution level to the next are denoted
by i and the two W correction cycles by w1 and w2. Iterations on the original problem are denoted by large markers,
while small markers denote iterations on error correction problems.
In the multigrid implementation of the FDRIG, DDRAW, and CLG algorithm, the following parameters can be
set: whether a hierarchical initialization is performed; the number of coarse grid correction steps; the maximum
number of correction levels (resolution levels); the number of pre-relaxation steps; the number of post-relaxation
steps. These parameters are described in more detail below.
The basic solver for the FDRIG algorithm is a point-coupled fixed-point variant of the linear Gauss-Seidel algo-
rithm. The basic solver for the DDRAW algorithm is an alternating line-coupled fixed-point variant of the same
type. The number of fixed-point steps can be specified for both algorithms with a further parameter. The basic
solver for the CLG algorithm is a point-coupled linear Gauss-Seidel algorithm. The transfer of the data between
the different resolution levels is performed by area-based interpolation and area-based averaging, respectively.
After the algorithms have been described, the effects of the individual parameters are discussed in the following.
The input images, along with their domains (regions of interest) are passed in Image1 and Image2. The com-
putation of the optical flow vector field VectorField is performed on the smallest surrounding rectangle of the
intersection of the domains of Image1 and Image2. The domain of VectorField is the intersection of the
two domains. Hence, by specifying reduced domains for Image1 and Image2, the processing can be focused
and runtime can potentially be saved. It should be noted, however, that all methods compute a global solution of
the optical flow. In particular, it follows that the solution on a reduced domain need not (and cannot) be identical
to the solution on the full domain restricted to the reduced domain.
SmoothingSigma specifies the standard deviation of the Gaussian kernel that is used to smooth both input
images. The larger the value of SmoothingSigma, the larger the low-pass effect of the Gaussian kernel, i.e., the
smoother the preprocessed image. Usually, SmoothingSigma = 0.8 is a suitable choice. However, other values
in the interval [0, 2] are also possible. Larger standard deviations should only be considered if the input images are
very noisy. It should be noted that larger values of SmoothingSigma lead to slightly longer execution times.
IntegrationSigma specifies the standard deviation ρ of the Gaussian kernel Gρ that is used for the local
integration of the neighborhood information of the data term. This parameter is used only in the CLG algorithm and
has no effect on the other two algorithms. Usually, IntegrationSigma = 1.0 is a suitable choice. However,
other values in the interval [0, 3] are also possible. Larger standard deviations should only be considered if the
input images are very noisy. It should be noted that larger values of IntegrationSigma lead to slightly longer
execution times.
FlowSmoothness specifies the weight α of the smoothness term with respect to the data term. The larger the
value of FlowSmoothness, the smoother the computed optical flow field. It should be noted that choosing
FlowSmoothness too small can lead to unusable results, even though statistically robust penalty functions are
used, in particular if the warping strategy needs to predict too much information outside of the image. For byte
images with a gray value range of [0, 255], values of FlowSmoothness around 20 for the flow-driven FDRIG
algorithm and around 1000 for the data-driven DDRAW algorithm and the homogeneous CLG algorithm typically
yield good results.
GradientConstancy specifies the weight γ of the gradient constancy with respect to the gray value constancy.
This parameter is used only in the FDRIG algorithm. For the other two algorithms, it does not influence the results.
For byte images with a gray value range of [0, 255], a value of GradientConstancy = 5 is typically a good
choice, since then both constancy assumptions are used to the same extent. For large changes in illumination, how-
ever, significantly larger values of GradientConstancy may be necessary to achieve good results. It should be
noted that for large values of the gradient constancy weight the smoothness parameter FlowSmoothness must
also be chosen larger.
The parameters of the multigrid solver and for the coarse-to-fine warping strategy can be specified with the
generic parameters MGParamName and MGParamValue. Usually, it suffices to use one of the four default
parameter sets via MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. The default parameter sets are described below. If the parameters should be speci-
fied individually, MGParamName and MGParamValue must be set to tuples of the same length. The values
corresponding to the parameters specified in MGParamName must be specified at the corresponding position in
MGParamValue.
MGParamName = ’warp_zoom_factor’ can be used to specify the resolution ratio between two consecutive warp-
ing levels in the coarse-to-fine warping hierarchy. ’warp_zoom_factor’ must be selected from the open interval
(0, 1). For performance reasons, ’warp_zoom_factor’ is typically set to 0.5, i.e., the number of pixels is halved in
each direction for each coarser warping level. This leads to an increase of 33% in the calculations that need to be
performed with respect to an algorithm that does not use warping. Values for ’warp_zoom_factor’ close to 1 can
lead to slightly better results. However, they require a disproportionately larger computation time, e.g., 426% for
’warp_zoom_factor’ = 0.9.
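The quoted runtime overheads follow from a geometric series over the warping levels; a short Python check (assuming the work per level is proportional to the number of pixels):

```python
def relative_total_work(warp_zoom_factor):
    # Work per warping level is proportional to the number of pixels, which
    # shrinks by warp_zoom_factor^2 per level. Over all levels this is the
    # geometric series 1 + z^2 + z^4 + ... = 1 / (1 - z^2), relative to
    # processing only the finest level.
    z2 = warp_zoom_factor ** 2
    return 1.0 / (1.0 - z2)

print(round((relative_total_work(0.5) - 1.0) * 100))  # → 33  (percent extra work)
print(round((relative_total_work(0.9) - 1.0) * 100))  # → 426
```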
MGParamName = ’warp_levels’ can be used to restrict the warping hierarchy to a maximum number of levels.
For ’warp_levels’ = 0, the largest possible number of levels is used. If the image size does not allow the specified
number of levels to be used (taking the resolution ratio ’warp_zoom_factor’ into account), the largest possible
number of levels is used. Usually, ’warp_levels’ should be set to 0.
MGParamName = ’warp_last_level’ can be used to specify the number of warping levels for which the flow
increment should no longer be computed. Usually, ’warp_last_level’ is set to 1 or 2, i.e., a flow increment is
computed for each warping level, or the finest warping level is skipped in the computation. Since in the latter case
the computation is performed on an image of half the resolution of the original image, the gained computation
time can be used to compute a more accurate solution, e.g., by using a full multigrid algorithm with additional
iterations. The more accurate solution is then interpolated to the full resolution.
The three parameters that specify the coarse-to-fine warping strategy are only used in the FDRIG and DDRAW
algorithms. They are ignored for the CLG algorithm.
MGParamName = ’mg_solver’ can be used to specify the general multigrid strategy for solving the (non)linear
equation system (in each warping level). For ’mg_solver’ = ’multigrid’, a normal multigrid algorithm (without
coarse-to-fine initialization) is used, while for ’mg_solver’ = ’full_multigrid’ a full multigrid algorithm (with
coarse-to-fine initialization) is used. Since a resolution reduction of 0.5 is used between two consecutive levels of
the coarse-to-fine initialization (in contrast to the resolution reduction in the warping strategy, this value is hard-
coded into the algorithm), the use of a full multigrid algorithm results in an increase of the computation time by
approximately 33% with respect to the normal multigrid algorithm. Setting ’mg_solver’ to ’full_multigrid’ typically
yields numerically more accurate results than ’mg_solver’ = ’multigrid’.
MGParamName = ’mg_cycle_type’ can be used to specify whether a V or W correction cycle is used per multigrid
level. Since a resolution reduction of 0.5 is used between two consecutive levels of the respective correction cycle,
using a W cycle instead of a V cycle increases the computation time by approximately 50%. Using ’mg_cycle_type’
= ’w’ typically yields numerically more accurate results than ’mg_cycle_type’ = ’v’.
MGParamName = ’mg_levels’ can be used to restrict the multigrid hierarchy for the coarse-to-fine initialization
as well as for the actual V or W correction cycles. For ’mg_levels’ = 0, the largest possible number of levels is
used. If the image size does not allow the specified number of levels to be used, the largest possible number of
levels is used. Usually, ’mg_levels’ should be set to 0.
MGParamName = ’mg_cycles’ can be used to specify the total number of V or W correction cycles that are being
performed. If a full multigrid algorithm is used, ’mg_cycles’ refers to each level of the coarse-to-fine initialization.
Usually, one or two cycles are sufficient to yield a sufficiently accurate solution of the equation system. Typically,
the larger ’mg_cycles’, the more accurate the numerical results. This parameter enters almost linearly into the
computation time, i.e., doubling the number of cycles leads approximately to twice the computation time.
MGParamName = ’mg_pre_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver before the actual error correction is performed.
Usually, one or two pre-relaxation steps are sufficient. Typically, the larger ’mg_pre_relax’, the more accurate the
numerical results.
MGParamName = ’mg_post_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver after the actual error correction is performed.
Usually, one or two post-relaxation steps are sufficient. Typically, the larger ’mg_post_relax’, the more accurate
the numerical results.
As when increasing the number of correction cycles, increasing the number of pre- and post-relaxation steps
increases the computation time asymptotically linearly. However, no additional restriction and prolongation
operations (zooming down and up of the error correction images) are performed. Consequently, a moderate increase in
the number of relaxation steps only leads to a slight increase in the computation times.
MGParamName = ’mg_inner_iter’ can be used to specify the number of iterations to solve the linear equation
systems in each fixed-point iteration of the nonlinear basic solver. Usually, one iteration is sufficient to achieve a
sufficient convergence speed of the multigrid algorithm. The increase in computation time is slightly smaller than
for the increase in the relaxation steps. This parameter only influences the FDRIG and DDRAW algorithms since
for the CLG algorithm no nonlinear equation system needs to be solved.
As described above, usually it is sufficient to use one of the default parameter sets for the parameters described
above by using MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. If necessary, individual parameters can be modified after the default parameter set has
been chosen by specifying a subset of the above parameters and corresponding values after ’default_parameters’ in
MGParamName and MGParamValue (e.g., MGParamName = [’default_parameters’,’warp_zoom_factor’] and
MGParamValue = [’accurate’,0.6]).
The default parameter sets use the following values for the above parameters:
’default_parameters’ = ’very_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1,
’mg_post_relax’ = 1, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2, ’mg_solver’
= ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1, ’mg_post_relax’ =
1, ’mg_inner_iter’ = 1.
It should be noted that for the CLG algorithm the two modes ’fast_accurate’ and ’fast’ are identical to the modes
’very_accurate’ and ’accurate’ since the CLG algorithm does not use a coarse-to-fine warping strategy.
Parameter
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / uint2 / real
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / uint2 / real
Input image 2.
. VectorField (output_object) . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : vector_field
Optical flow.
. Algorithm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for computing the optical flow.
Default Value : ’fdrig’
List of values : Algorithm ∈ {’fdrig’, ’ddraw’, ’clg’}
. SmoothingSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Standard deviation for initial Gaussian smoothing.
Default Value : 0.8
Suggested values : SmoothingSigma ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}
Restriction : SmoothingSigma ≥ 0.0
. IntegrationSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Standard deviation of the integration filter.
Default Value : 1.0
Suggested values : IntegrationSigma ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6,
2.8, 3.0}
Restriction : IntegrationSigma ≥ 0.0
Result
If the parameter values are correct, the operator optical_flow_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
optical_flow_mg is reentrant and automatically parallelized (on tuple level).
Possible Successors
threshold, vector_field_length
See also
unwarp_image_vector_field
References
T. Brox, A. Bruhn, N. Papenberg, and J. Weickert: High accuracy optic flow estimation based on a theory for
warping. In T. Pajdla and J. Matas, editors, Computer Vision - ECCV 2004, volume 3024 of Lecture Notes in
Computer Science, pages 25–36. Springer, Berlin, 2004.
A. Bruhn, J. Weickert, C. Feddern, T. Kohlberger, and C. Schnörr: Variational optical flow computation in real-
time. IEEE Transactions on Image Processing, 14(5):608-615, May 2005.
H.-H. Nagel and W. Enkelmann: An investigation of smoothness constraints for the estimation of displacement
vector fields from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5):565-
593, September 1986.
Ulrich Trottenberg, Cornelis Oosterlee, Anton Schüller: Multigrid. Academic Press, Inc., San Diego, 2000.
Module
Foundation
Result
If the parameter values are correct, the operator unwarp_image_vector_field returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
unwarp_image_vector_field is reentrant and automatically parallelized (on domain level, tuple level).
Possible Predecessors
optical_flow_mg
Module
Foundation
Result
If the parameter values are correct, the operator vector_field_length returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
vector_field_length is reentrant and automatically parallelized (on domain level, tuple level).
Possible Predecessors
optical_flow_mg
Possible Successors
threshold
Module
Foundation
5.14 Points
corner_response ( Image : ImageCorner : Size, Weight : )
R(x, y) = A(x, y) · B(x, y) − C²(x, y) − Weight · (A(x, y) + B(x, y))²

with

A(x, y) = W(u, v) ∗ (∇x I(x, y))²
B(x, y) = W(u, v) ∗ (∇y I(x, y))²
C(x, y) = W(u, v) ∗ (∇x I(x, y) ∇y I(x, y)) ,
where I is the input image and R the output image of the filter. The operator gauss_image is used for smoothing
(W ), the operator sobel_amp is used for calculating the derivative (∇).
The corner response function is invariant with regard to rotation. In order to achieve a suitable dependency of the
function R(x, y) on the local gradient, the parameter Weight must be set to 0.04. With this, only gray value
corners will return positive values for R(x, y), while straight edges will receive negative values. The output image
type is identical to the input image type. Therefore, the negative output values are set to 0 if byte images are
used as input images. If this is not desired, the input image should be converted into a real or int2 image with
convert_image_type.
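The formulas can be illustrated with a small self-contained Python sketch that substitutes central differences for sobel_amp and a 3 × 3 box filter for gauss_image (so the numbers differ from the operator’s output; all names are illustrative):

```python
def box3(src, y, x):
    # 3x3 box smoothing as a simple stand-in for the Gaussian W(u, v).
    return sum(src[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def corner_response_sketch(image, weight=0.04):
    # R = A*B - C^2 - Weight*(A + B)^2 with A, B, C the smoothed products
    # of the first derivatives (here: central differences).
    h, w = len(image), len(image[0])
    gx2 = [[0.0] * w for _ in range(h)]
    gy2 = [[0.0] * w for _ in range(h)]
    gxy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = (image[y][x + 1] - image[y][x - 1]) / 2.0
            dy = (image[y + 1][x] - image[y - 1][x]) / 2.0
            gx2[y][x], gy2[y][x], gxy[y][x] = dx * dx, dy * dy, dx * dy
    response = [[0.0] * w for _ in range(h)]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            a, b, c = box3(gx2, y, x), box3(gy2, y, x), box3(gxy, y, x)
            response[y][x] = a * b - c * c - weight * (a + b) ** 2
    return response

# Bright square whose upper left gray value corner lies at (4, 4):
img = [[100.0 if x >= 4 and y >= 4 else 0.0 for x in range(9)] for y in range(9)]
R = corner_response_sketch(img)
# Positive response at the corner, negative response on the straight edge:
print(R[4][4] > 0, R[6][4] < 0)
```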
Parameter
Example (Syntax: C)
read_image(&Fabrik,"fabrik");
corner_response(Fabrik,&CornerResponse,3,0.04);
local_max(CornerResponse,&LocalMax);
disp_image(Fabrik,WindowHandle);
set_color(WindowHandle,"red");
disp_region(LocalMax,WindowHandle);
Parallelization Information
corner_response is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
local_max, threshold
See also
gauss_image, sobel_amp, convert_image_type
References
C.G. Harris, M.J. Stephens: “A combined corner and edge detector”; Proc. of the 4th Alvey Vision Conference;
August 1988; pp. 147-152.
H. Breit: “Bestimmung der Kameraeigenbewegung und Gewinnung von Tiefendaten aus monokularen Bildfol-
gen”; Diplomarbeit am Lehrstuhl für Nachrichtentechnik der TU München; 30. September 1990.
Module
Foundation
The parameter FilterType selects whether dark, light, or all dots in the image should be enhanced. The
PixelShift can be used either to increase the contrast of the output image (PixelShift > 0) or to dampen
the values in extremely bright areas that would otherwise be cut off (PixelShift = −1).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. DotImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Output image.
. Diameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Diameter of the dots to be enhanced.
Default Value : 5
List of values : Diameter ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23}
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enhance dark, light, or all dots.
Default Value : ’light’
List of values : FilterType ∈ {’dark’, ’light’, ’all’}
is calculated, where Ix,c and Iy,c are the first derivatives of each image channel and S stands for a smoothing.
If Smoothing is ’gauss’, the derivatives are computed with Gaussian derivatives of size SigmaGrad and the
smoothing is performed by a Gaussian of size SigmaInt. If Smoothing is ’mean’, the derivatives are computed
with a 3 × 3 Sobel filter (and hence SigmaGrad is ignored) and the smoothing is performed by a SigmaInt ×
SigmaInt mean filter. Then

inhomogeneity = Trace M

is a measure of the inhomogeneity of the texture in the image, while

isotropy = 4 · Det M / (Trace M)²

is the degree of the isotropy of the texture in the image. Image points that have an inhomogeneity greater than or
equal to ThreshInhom and at the same time an isotropy greater than or equal to ThreshShape are subsequently
examined further.
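The first-step selection above can be sketched in a few lines (a pure-Python illustration of the formulas, not HALCON's implementation; the helper names are hypothetical, and M = [[a, b], [b, c]] is the smoothed structure tensor at one pixel):

```python
def foerstner_measures(a, b, c):
    """Return (inhomogeneity, isotropy) for the tensor M = [[a, b], [b, c]]."""
    trace = a + c
    det = a * c - b * b
    inhomogeneity = trace                                   # Trace M
    isotropy = 4.0 * det / (trace * trace) if trace > 0 else 0.0
    return inhomogeneity, isotropy

def is_candidate(a, b, c, thresh_inhom, thresh_shape):
    inhom, iso = foerstner_measures(a, b, c)
    return inhom >= thresh_inhom and iso >= thresh_shape

# A corner-like, isotropic tensor (a = c, b = 0) has isotropy 1:
print(foerstner_measures(200.0, 0.0, 200.0))   # (400.0, 1.0)
# A straight edge yields a rank-1 tensor, so its isotropy is 0:
print(foerstner_measures(400.0, 0.0, 0.0))     # (400.0, 0.0)
```

This shows why ThreshShape separates junction/area candidates (textured in both directions) from plain edge points.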
In the second step, two optimization functions are calculated for the resulting points. Essentially, these optimiza-
tion functions average for each point the distances to the edge directions (for junction points) and the gradient
directions (for area points) within an observation window around the point. If Smoothing is ’gauss’, the aver-
aging is performed by a Gaussian of size SigmaPoints, if Smoothing is ’mean’, the averaging is performed
by a SigmaPoints × SigmaPoints mean filter. The local minima of the optimization functions determine
HALCON 8.0.2
282 CHAPTER 5. FILTER
the extracted points. Their subpixel precise position is returned in (RowJunctions, ColJunctions) and
(RowArea, ColArea).
In addition to their position, for each extracted point the elements CoRRJunctions, CoRCJunctions, and
CoCCJunctions (and CoRRArea, CoRCArea, and CoCCArea, respectively) of the corresponding covariance
matrix are returned. This matrix facilitates conclusions about the precision of the calculated point position. To
obtain the actual values, it is necessary to estimate the amount of noise in the input image and to multiply all
components of the covariance matrix with the variance of the noise. (To estimate the amount of noise, apply
intensity to homogeneous image regions or plane_deviation to image regions where the gray values
form a plane. In both cases the amount of noise is returned in the parameter Deviation.) This is illustrated by the
example program
%HALCONROOT%\examples\hdevelop\Filter\Points\points_foerstner_ellipses.dev .
It lies in the nature of this operator that corners often result in two distinct points: One junction point, where the
edges of the corner actually meet, and one area point inside the corner. Such doublets are eliminated automatically
if EliminateDoublets is ’true’. To do so, each pair of one junction point and one area point is examined.
If the points lie within each other’s observation window of the optimization function, the precision of the point
position is calculated for both points and the point with the lower precision is rejected. If EliminateDoublets is
’false’, every detected point is returned.
Attention
Note that only odd values for SigmaInt and SigmaPoints are allowed if Smoothing is ’mean’. Even
values will automatically be replaced by the next larger odd value.
Parameter
C. Fuchs: “Extraktion polymorpher Bildstrukturen und ihre topologische und geometrische Gruppierung”. Volume
502, Series C, Deutsche Geodätische Kommission, München, 1998.
Module
Foundation
where Gσ stands for a Gaussian smoothing of size SigmaSmooth and Ix,c and Iy,c are the first derivatives of
each image channel, computed with Gaussian derivatives of size SigmaGrad. The resulting points are the positive
local extrema of
If necessary, they can be restricted to points with a minimum filter response of Threshold. The coordinates of
the points are calculated with subpixel accuracy.
Parameter
5.15 Smoothing
For the iterative calculation of the gray value of a pixel, the gray value differences with respect to its four or eight
neighbors, respectively, are used. These gray value differences, however, are evaluated differently, i.e., a nonlinear
diffusion process is carried out.
The evaluation is carried out by using a diffusion function (two different functions are implemented, namely
Mode = 1 and 2), which, depending on the gradient, ensures that the smoothing within homogeneous regions
is stronger than across region boundaries, so that the edges remain sharp. The diffusion function is adjusted to
the noise ratio of the image by a histogram analysis of the gradient image (according to Canny). A high value for
Percent increases the smoothing effect but blurs the edges a little more (values from 80 - 90 percent are typical).
The parameter Iteration determines the number of iterations (typically 3–7).
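One iteration of this scheme can be sketched as follows (pure Python, illustrative only). The manual does not spell out the two diffusion functions behind Mode, so a Perona-Malik-style g(d) = 1/(1 + (d/k)²) is used here as a stand-in; the constant k plays the role that the Percent histogram analysis fills in the real operator:

```python
def diffuse_once(img, k=10.0, lam=0.2):
    """One 4-neighborhood nonlinear-diffusion step on a 2-D gray value list."""
    h, w = len(img), len(img[0])
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)   # stand-in diffusion function
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d = img[ny][nx] - img[y][x]
                    s += g(abs(d)) * d   # small differences diffuse, edges barely
            out[y][x] = img[y][x] + lam * s
    return out

noisy = [[100, 100, 200],
         [100, 110, 200],   # mild outlier next to a strong edge
         [100, 100, 200]]
print(diffuse_once(noisy)[1][1])   # the 110 is pulled toward its 100 neighbors
```

The large 100/200 edge contributes almost nothing to the sum, which is exactly the behavior described above: homogeneous regions are smoothed, edges stay sharp.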
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Image to be smoothed.
. ImageAniso (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Smoothed image.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
For histogram analysis; higher values increase the smoothing effect, typically: 80-90.
Default Value : 80
Suggested values : Percent ∈ {65, 70, 75, 80, 85, 90}
Typical range of values : 50 ≤ Percent ≤ 100
Minimum Increment : 1
Recommended Increment : 5
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Selection of diffusion function.
Default Value : 1
List of values : Mode ∈ {1, 2}
. Iteration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations, typical values: 3-7.
Default Value : 5
Suggested values : Iteration ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Typical range of values : 1 ≤ Iteration ≤ 30
Minimum Increment : 1
Recommended Increment : 1
. neighborhoodType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Required neighborhood type.
Default Value : 8
List of values : neighborhoodType ∈ {4, 8}
Example
read_image(Image,’fabrik’)
anisotrope_diff(Image,Aniso,80,1,5,8)
sub_image(Image,Aniso,Sub,2.0,127)
disp_image(Sub,WindowHandle).
Complexity
For each pixel: O(Iterations ∗ 18).
Result
If the parameter values are correct the operator anisotrope_diff returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
anisotrope_diff is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
sigma_image, rank_image
See also
smooth_image, binomial_filter, gauss_image, sigma_image, rank_image,
eliminate_min_max
References
P. Perona, J. Malik: “Scale-space and edge detection using anisotropic diffusion”; IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 12, No. 7, July 1990.
Module
Foundation
ut = div(g(|∇u|², c) ∇u)
with the initial value u = u0 defined by Image at a time t0 . The equation is iterated Iterations times in
time steps of length Theta, so that the output image ImageAniso contains the gray value function at the time
t0 + Iterations · Theta.
The goal of the anisotropic diffusion is the elimination of image noise in constant image patches while preserv-
ing the edges in the image. The distinction between edges and constant patches is achieved using the threshold
Contrast on the size of the gray value differences between adjacent pixels. Contrast is referred to as the
contrast parameter and abbreviated with the letter c.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter Mode,
the following functions can be selected:
g1(x, c) = 1 / √(1 + 2x/c²)
Choosing the function g1 by setting Mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size Theta. In this case however, there remains a slight diffusion even across edges of a height larger than c.
g2(x, c) = 1 / (1 + x/c²)
The choice of ’perona-malik’ for Mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.
g3(x, c) = 1 − exp(−C · c⁸/x⁴)

The function g3 with the constant C = 3.31488, proposed by Weickert, and selectable by setting Mode to
’weickert’, is an improvement of g2 with respect to edge sharpening. The transition between smoothing and
sharpening happens very abruptly at x = c².
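The three diffusion coefficients can be written down directly from the formulas above (pure-Python sketch; x is the squared gradient magnitude |∇u|² and c is the Contrast parameter):

```python
import math

def g_parabolic(x, c):        # Mode = 'parabolic'
    return 1.0 / math.sqrt(1.0 + 2.0 * x / c ** 2)

def g_perona_malik(x, c):     # Mode = 'perona-malik'
    return 1.0 / (1.0 + x / c ** 2)

def g_weickert(x, c):         # Mode = 'weickert', C = 3.31488
    if x <= 0.0:
        return 1.0
    return 1.0 - math.exp(-3.31488 * c ** 8 / x ** 4)

c = 5.0
for g in (g_parabolic, g_perona_malik, g_weickert):
    # well below the contrast threshold x = c^2 the coefficient is close to 1
    # (strong smoothing); well above it the coefficient is small (edge kept):
    print(round(g(0.01 * c ** 2, c), 3), round(g(100.0 * c ** 2, c), 6))
```

This also makes the "very abrupt" transition of g3 visible: unlike g1 and g2, it stays almost exactly 1 until x approaches c².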
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. ImageAniso (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Diffusion coefficient as a function of the edge amplitude.
Default Value : ’weickert’
List of values : Mode ∈ {’weickert’, ’perona-malik’, ’parabolic’}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Contrast parameter.
Default Value : 5.0
Suggested values : Contrast ∈ {2.0, 5.0, 10.0, 20.0, 50.0, 100.0}
Restriction : Contrast > 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Time step.
Default Value : 1.0
Suggested values : Theta ∈ {0.5, 1.0, 3.0}
Restriction : Theta > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 3, 10, 100, 500}
Restriction : Iterations ≥ 1
Parallelization Information
anisotropic_diffusion is reentrant and automatically parallelized (on tuple level).
References
J. Weickert; “Anisotropic Diffusion in Image Processing”; PhD Thesis; Fachbereich Mathematik, Universität
Kaiserslautern; 1996.
P. Perona, J. Malik; “Scale-space and edge detection using anisotropic diffusion”; Transactions on Pattern Analysis
and Machine Intelligence 12(7), pp. 629-639; IEEE; 1990.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
as follows:

bij = 1 / 2^(m+n−2) · C(m−1, i) · C(n−1, j)

Here, C(n, k) denotes the binomial coefficient, i = 0, . . . , m − 1, and j = 0, . . . , n − 1. The binomial filter
performs approximately the same smoothing as a Gaussian filter with σ = √(n − 1)/2, where for simplicity it is
assumed that m = n. In detail, the relationship between n and σ is:
n σ
3 0.7523
5 1.0317
7 1.2505
9 1.4365
11 1.6010
13 1.7502
15 1.8876
17 2.0157
19 2.1361
21 2.2501
23 2.3586
25 2.4623
27 2.5618
29 2.6576
31 2.7500
33 2.8395
35 2.9262
37 3.0104
If different values are chosen for MaskHeight and MaskWidth, the above relation between n and σ still holds
and refers to the amount of smoothing in the row and column directions.
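The coefficient formula and the σ relation can be verified with a short sketch (pure Python; the helper names are hypothetical, and the 2-D mask is the outer product of two such rows):

```python
import math

def binomial_row(n):
    """One row of n binomial coefficients, normalized by 2^(n-1) to sum 1."""
    row = [1]
    for _ in range(n - 1):
        row = [a + b for a, b in zip([0] + row, row + [0])]  # Pascal's triangle
    s = float(sum(row))                                      # s == 2^(n-1)
    return [v / s for v in row]

def approx_sigma(n):
    """Sigma of the Gaussian that smooths about as much as an n-tap row."""
    return math.sqrt(n - 1) / 2.0

print(binomial_row(5))    # [1, 4, 6, 4, 1] scaled by 1/16
print(approx_sigma(9))    # ~1.414, close to the tabulated value 1.4365
```

The table above lists the exact values; the √(n − 1)/2 rule is only the approximation stated in the text.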
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageBinomial (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Filter width.
Default Value : 5
List of values : MaskWidth ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Filter height.
Default Value : 5
List of values : MaskHeight ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
Result
If the parameter values are correct the operator binomial_filter returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
binomial_filter is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
gauss_image, smooth_image, derivate_gauss, isotropic_diffusion
See also
mean_image, anisotropic_diffusion, sigma_image, gen_lowpass
Module
Foundation
eliminate_min_max smooths an image by replacing gray values with neighboring mean values or with local
minima/maxima. In order to prevent edges and lines from being smoothed, only those gray values that represent
local minima or maxima are replaced (if there is a line or edge within an image, there will be at least one neighboring
pixel with a comparable gray value). Gap controls the strictness of the replacement: Only gray values that exceed
all other values within their local neighborhood by more than Gap, and all values that fall below all their neighbors
by more than Gap, are replaced. E(x, y) represents an N × M sized rectangular neighborhood of a pixel at position
(x, y), containing all pixels within the neighborhood except the pixel itself.
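The replacement rule can be sketched for a 3 × 3 neighborhood as follows (pure Python, illustrative only; a pixel is touched only if it lies more than Gap above all of its neighbors or more than Gap below all of them, and the outlier is replaced here by the neighborhood mean, one of the replacement values the description mentions):

```python
def eliminate_min_max_3x3(img, gap):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nbrs = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)]
            v = img[y][x]
            if all(v > n + gap for n in nbrs) or all(v < n - gap for n in nbrs):
                out[y][x] = sum(nbrs) / len(nbrs)  # isolated peak/hole: smooth
    return out

img = [[100, 100, 100],
       [100, 180, 100],   # isolated bright peak
       [100, 100, 100]]
print(eliminate_min_max_3x3(img, 20)[1][1])   # 100.0, the peak is removed
```

An edge pixel always has at least one neighbor of comparable gray value, so the `all(...)` conditions fail there and edges survive, exactly as described above.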
Result
eliminate_min_max returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
eliminate_min_max returns with an error message.
Parallelization Information
eliminate_min_max is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
wiener_filter, wiener_filter_ni
See also
mean_sp, mean_image, median_image, median_weighted, binomial_filter,
gauss_image, smooth_image
References
M. Imme: “A Noise Peak Elimination Filter”; pp. 204-211 in CVGIP: Graphical Models and Image Processing,
Vol. 53, No. 2, March 1991.
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diploma thesis;
Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995.
Module
Foundation
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
eliminate_sp(Image,ImageMeansp,3,3,101,201)
disp_image(ImageMeansp,WindowHandle).
Parallelization Information
eliminate_sp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_sp, mean_image, median_image, eliminate_min_max
See also
binomial_filter, gauss_image, smooth_image, anisotropic_diffusion, sigma_image,
eliminate_min_max
Module
Foundation
read_image(Image,’video_bild’)
fill_interlace(Image,New,’odd’)
sobel_amp(New,Sobel,’sum_abs’,3).
Complexity
For each pixel: O(2).
Result
If the parameter values are correct the operator fill_interlace returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
fill_interlace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
sobel_amp, edges_image, regiongrowing, diff_of_gauss, threshold, dyn_threshold,
auto_threshold, mean_image, binomial_filter, gauss_image,
anisotropic_diffusion, sigma_image, median_image
See also
median_image, binomial_filter, gauss_image, crop_part
Module
Foundation
Size σ
3 0.65
5 0.87
7 1.43
9 1.88
11 2.31
For border treatment the gray values of the images are reflected at the image borders.
The operator binomial_filter can be used as an alternative to gauss_image. binomial_filter
is significantly faster than gauss_image. It should be noted that the mask size in binomial_filter does
not lead to the same amount of smoothing as the mask size in gauss_image. Corresponding mask sizes can be
determined based on the respective values of the Gaussian smoothing parameter sigma.
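As an illustration of that correspondence, one can look up the nearest binomial mask size for a given gauss_image Size via the sigma values listed in the two operators' descriptions (pure-Python sketch; the tables are copied from this manual, the helper name is hypothetical):

```python
# sigma per gauss_image Size (from the table above) and per binomial
# mask size n (from the binomial_filter description).
GAUSS_SIGMA = {3: 0.65, 5: 0.87, 7: 1.43, 9: 1.88, 11: 2.31}
BINOMIAL_SIGMA = {3: 0.7523, 5: 1.0317, 7: 1.2505, 9: 1.4365,
                  11: 1.6010, 13: 1.7502, 15: 1.8876, 17: 2.0157,
                  19: 2.1361, 21: 2.2501, 23: 2.3586}

def matching_binomial_size(gauss_size):
    """Binomial mask size whose sigma is closest to gauss_image's sigma."""
    target = GAUSS_SIGMA[gauss_size]
    return min(BINOMIAL_SIGMA, key=lambda n: abs(BINOMIAL_SIGMA[n] - target))

print(matching_binomial_size(7))   # 9: binomial 9x9 smooths like gauss_image 7
```

This makes concrete why the two Size parameters must not be used interchangeably.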
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4
Image to be smoothed.
. ImageGauss (output_object) . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4
Filtered image.
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Required filter size.
Default Value : 5
List of values : Size ∈ {3, 5, 7, 9, 11}
Example
gauss_image(Input,Gauss,7)
regiongrowing(Gauss,Segments,7,7,5,100).
Complexity
For each pixel: O(Size ∗ 2).
Result
If the parameter values are correct the operator gauss_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
gauss_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
binomial_filter, smooth_image, derivate_gauss, isotropic_diffusion
See also
mean_image, anisotropic_diffusion, sigma_image, gen_lowpass
Module
Foundation
The ’gauss’ filter is conventionally implemented with filter masks (the other three are recursive filters). In the case
of the ’gauss’ filter, the filter coefficients (of the one-dimensional impulse response f(n) with n ≥ 0) are returned in
Coeffs in addition to the filter size.
Parameter
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of required filter.
Default Value : ’deriche2’
List of values : Filter ∈ {’deriche1’, ’deriche2’, ’shen’, ’gauss’}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Filter parameter: small values result in strong smoothing (reversed in case of ’gauss’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.01 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. Size (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
The filter has a size of approx. Size × Size pixels.
. Coeffs (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
In case of the gauss filter: coefficients of the “positive” half of the 1D impulse response.
Example
info_smooth(’deriche2’,0.5,Size,Coeffs)
smooth_image(Input,Smooth,’deriche2’,7).
Result
If the parameter values are correct the operator info_smooth returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
info_smooth is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
smooth_image
See also
smooth_image
Module
Foundation
ut = ∆u
on the gray value function u with the initial value u = u0 defined by the gray values of Image at a time t0 . This
equation is then solved up to a time t0 + Sigma²/2, which is equivalent to the above convolution, using an iterative
procedure for parabolic partial differential equations. The computational complexity is proportional to the value
of Iterations and independent of Sigma in this case. For small values of Iterations, the computational
accuracy is very low, however. For this reason, choosing Iterations < 3 is not recommended.
For smaller values of Sigma, the convolution implementation is typically the faster method. Since the runtime of
the partial differential equation solver only depends on the number of iterations and not on the value of Sigma, it
is typically faster for large values of Sigma if few iterations are chosen (e.g., Iterations = 3).
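A minimal explicit solver for the heat equation ut = ∆u, run up to the time t0 + Sigma²/2 mentioned above, can be sketched on a 1-D signal (pure Python, illustrative only; the operator works on 2-D images and HALCON's internal scheme may differ):

```python
def heat_diffuse(u, sigma, iterations):
    t_end = 0.5 * sigma * sigma      # diffusion time equivalent to a Gaussian
    dt = t_end / iterations          # note: explicit scheme needs dt <= 1/2,
    u = u[:]                         # i.e. enough iterations for large sigma
    for _ in range(iterations):
        lap = [0.0] * len(u)
        for i in range(1, len(u) - 1):
            lap[i] = u[i - 1] - 2.0 * u[i] + u[i + 1]   # discrete Laplacian
        u = [ui + dt * li for ui, li in zip(u, lap)]
    return u

impulse = [0.0] * 10 + [1.0] + [0.0] * 10
smoothed = heat_diffuse(impulse, 1.0, 100)
# The impulse spreads symmetrically and its total mass is (almost) preserved,
# approximating a Gaussian smoothing with the given sigma.
print(round(smoothed[10], 3), round(sum(smoothed), 3))
```

It also shows why accuracy degrades for very few iterations: the time step dt grows with Sigma²/Iterations.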
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. SmoothedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Standard deviation of the Gauss distribution.
Default Value : 1.0
Suggested values : Sigma ∈ {0.1, 0.5, 1.0, 3.0, 10.0, 20.0, 50.0}
Restriction : Sigma > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {0, 3, 10, 100, 500}
Restriction : Iterations ≥ 0
Parallelization Information
isotropic_diffusion is reentrant and automatically parallelized (on tuple level).
Module
Foundation
Smooth by averaging.
The operator mean_image carries out a linear smoothing with the gray values of all input images (Image). The
filter matrix consists of ones (evaluated equally) and has the size MaskHeight × MaskWidth. The result of the
convolution is divided by MaskHeight × MaskWidth. For border treatment the gray values are reflected at the
image edges.
For mean_image special optimizations are implemented that use SIMD technology. The actual application
of these special optimizations is controlled by the system parameter ’mmx_enable’ (see set_system). If
’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal calculations are performed
using SIMD technology. Note that SIMD technology performs best on large, compact input regions. Depending on
the input region and the capabilities of the hardware the execution of mean_image might even take significantly
more time with SIMD technology than without.
At any rate, it is advantageous for the performance of mean_image to choose the input region of Image such
that any border treatment is avoided.
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
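The documented behavior (division by the mask area, mirrored borders, even sizes rounded up to the next odd value) can be sketched as follows (pure Python for one channel; the function name merely echoes the operator, and the exact border reflection convention is an assumption):

```python
def reflect(i, n):                  # symmetric border handling (one convention)
    if i < 0:
        return -i - 1
    if i >= n:
        return 2 * n - i - 1
    return i

def mean_image(img, mask_w, mask_h):
    mask_w += 1 - mask_w % 2        # even sizes -> next larger odd value
    mask_h += 1 - mask_h % 2
    h, w = len(img), len(img[0])
    rw, rh = mask_w // 2, mask_h // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(-rh, rh + 1):
                for dx in range(-rw, rw + 1):
                    s += img[reflect(y + dy, h)][reflect(x + dx, w)]
            out[y][x] = s / (mask_w * mask_h)   # divide by mask area
    return out

flat = [[7] * 4 for _ in range(4)]
print(mean_image(flat, 3, 3)[0][0])   # 7.0: a constant image stays constant
```

Because of the mirrored border, a constant image is reproduced exactly even at its corners.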
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real / vector_field
Image to be smoothed.
Image to be smoothed.
. ImageMean (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real / vector_field
Smoothed image.
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of filter mask.
Default Value : 9
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskWidth ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of filter mask.
Default Value : 9
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskHeight ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
Example
read_image(Image,’fabrik’)
mean_image(Image,Mean,3,3)
disp_image(Mean,WindowHandle).
Complexity
For each pixel: O(15).
Result
If the parameter values are correct the operator mean_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
mean_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
reduce_domain, rectangle1_domain
Possible Successors
dyn_threshold, regiongrowing
Alternatives
binomial_filter, gauss_image, smooth_image
See also
anisotropic_diffusion, sigma_image, convol_image, gen_lowpass
Module
Foundation
compose3(Channel1,Channel2,Channel3,&MultiChannel);
mean_n(MultiChannel,&Mean);
Parallelization Information
mean_n is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
compose2, compose3, compose4, add_channels
Possible Successors
disp_image
See also
count_channels
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageSPMean (output_object) . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskWidth ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskHeight ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum gray value.
Default Value : 1
Suggested values : MinThresh ∈ {1, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
. MaxThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum gray value.
Default Value : 254
Suggested values : MaxThresh ∈ {5, 7, 9, 11, 15, 23, 31, 43, 61, 101, 200, 230, 250, 254}
Restriction : MinThresh ≤ MaxThresh
Example
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
mean_sp(Image,ImageMeansp,3,3,101,201)
disp_image(ImageMeansp,WindowHandle).
Parallelization Information
mean_sp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_image, median_image, median_separate, eliminate_min_max
See also
anisotropic_diffusion, sigma_image, binomial_filter, gauss_image, smooth_image,
eliminate_min_max
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels of the objects once. For each of these pixels all neighboring pixels covered by the
mask are sorted in an ascending sequence according to their gray values. Thus, each of these sorted gray value
sequences contains exactly as many gray values as the mask has pixels. From each sequence the median is
selected and entered as the resulting gray value at the corresponding position in the output image.
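The selection step can be sketched for a circular mask at a single interior pixel (pure Python, illustrative only; border treatment via Margin is omitted here):

```python
def circle_offsets(radius):
    """Offsets of a discrete circular mask of the given radius."""
    return [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)
                     if dy * dy + dx * dx <= radius * radius]

def median_at(img, y, x, radius):
    """Median gray value under the circular mask centered at (y, x)."""
    vals = sorted(img[y + dy][x + dx] for dy, dx in circle_offsets(radius))
    return vals[len(vals) // 2]     # middle element of the sorted sequence

img = [[10, 10, 10],
       [10, 255, 10],   # single salt-noise pixel
       [10, 10, 10]]
print(median_at(img, 1, 1, 1))     # 10: the outlier does not survive the median
```

The radius-1 circle covers five pixels, so the sorted sequence has five entries and the outlier can never reach the middle position.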
Parameter
. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. ImageMedian (output_object) . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Median filtered image.
. MaskType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of median mask.
Default Value : ’circle’
List of values : MaskType ∈ {’circle’, ’rectangle’}
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Radius of median mask.
Default Value : 1
Suggested values : Radius ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 15, 19, 25, 31, 39, 47, 59}
Typical range of values : 1 ≤ Radius ≤ 101
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string / integer / real
Border treatment.
Default Value : ’mirrored’
Suggested values : Margin ∈ {’mirrored’, ’cyclic’, ’continued’, 0, 30, 60, 90, 120, 150, 180, 210, 240, 255}
Example
read_image(Image,’fabrik’)
median_image(Image,Median,’circle’,3,’continued’)
disp_image(Median,WindowHandle).
Complexity
For each pixel: O(√F ∗ 5) with F = area of MaskType.
Result
If the parameter values are correct the operator median_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
median_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
rank_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, Seite 318-319
Module
Foundation
read_image(Image,’fabrik’)
median_separate(Image,MedianSeparate,5,5,3)
disp_image(MedianSeparate,WindowHandle).
Complexity
For each pixel: O(40).
Parallelization Information
median_separate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
texture_laws, sobel_amp, deviation_image
Possible Successors
learn_ndim_norm, learn_ndim_box, median_separate, regiongrowing, auto_threshold
Alternatives
median_image
See also
rank_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, Seite 319
Module
Foundation
’gauss’ (MaskSize = 3)
1 2 1
2 4 2
1 2 1
’inner’ (MaskSize = 3)
1 1 1
1 3 1
1 1 1
In contrast to median_image, the operator median_weighted has the advantage that gray value corners are retained.
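A weighted median with the ’gauss’ 3 × 3 mask shown above can be sketched as follows (pure Python for one pixel, using the common definition in which each neighbor's gray value enters the sorted sequence as often as its weight says; HALCON's exact tie handling may differ):

```python
GAUSS_3x3 = [[1, 2, 1],
             [2, 4, 2],
             [1, 2, 1]]

def weighted_median_at(img, y, x, weights):
    vals = []
    r = len(weights) // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            # replicate the gray value according to its weight
            vals += [img[y + dy][x + dx]] * weights[dy + r][dx + r]
    vals.sort()
    return vals[len(vals) // 2]

img = [[10, 10, 10],
       [10, 90, 10],
       [10, 10, 10]]
# total weight 16; the center is counted 4 times but is still outvoted:
print(weighted_median_at(img, 1, 1, GAUSS_3x3))   # 10
```

The larger center weight is what lets thin structures and gray value corners survive better than with the unweighted median.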
Parameter
Example
read_image(Image,’fabrik’)
median_weighted(Image,MedianWeighted,’gauss’,3)
disp_image(MedianWeighted,WindowHandle).
Complexity
For each pixel: O(F ∗ log F ) with F = area of MaskType.
Parallelization Information
median_weighted is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
median_image, trimmed_mean, sigma_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, Seite 319
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once.
Parameter
read_image(Image,’fabrik’)
draw_region(Region,WindowHandle)
midrange_image(Image,Region,Midrange,’mirrored’)
disp_image(Midrange,WindowHandle).
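The midrange value is, by the usual definition, the mean of the smallest and the largest gray value under the mask; this section does not spell the formula out, so the following Python sketch assumes that standard definition:

```python
def midrange(values):
    # Midrange of the gray values under the filter mask:
    # the mean of the minimum and the maximum.
    return (min(values) + max(values)) / 2

print(midrange([10, 12, 14, 200]))  # → 105.0
```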
Complexity
For each pixel: O(√F ∗ 5) with F = area of Mask.
Result
If the parameter values are correct, the operator midrange_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
midrange_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect,
gray_range_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, p. 319
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. Mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject : byte
Region serving as filter mask.
. ImageRank (output_object) . . . . . . . . multichannel-image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Filtered image.
. Rank (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Rank of the output gray value in the sorted sequence of input gray values inside the filter mask. Typical value
(median): area(mask) / 2.
Default Value : 5
Suggested values : Rank ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31}
Typical range of values : 1 ≤ Rank ≤ 512
Minimum Increment : 1
Recommended Increment : 2
read_image(Image,’fabrik’)
draw_region(Region,WindowHandle)
rank_image(Image,Region,ImageRank,5,’mirrored’)
disp_image(ImageRank,WindowHandle).
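The rank selection described by the Rank parameter can be sketched in Python (1-based rank into the sorted gray values under the mask; not HALCON code):

```python
def rank_filter(values, rank):
    # Gray value with the given (1-based) rank in the sorted
    # sequence of gray values under the filter mask.
    return sorted(values)[rank - 1]

# 3x3 neighborhood; rank 5 ~ area(mask) / 2, i.e. the median:
print(rank_filter([7, 3, 9, 1, 5, 8, 2, 6, 4], 5))  # → 5
```

Rank 1 yields the minimum (gray value erosion), the maximum rank yields the maximum (gray value dilation), which is why gray_erosion_rect and gray_dilation_rect appear under “See also”.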
Complexity
For each pixel: O(√F ∗ 5) with F = area of Mask.
Result
If the parameter values are correct, the operator rank_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
rank_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 318-320
Module
Foundation
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / cyclic / int1 / int2 / uint2 / int4
/ real
Image to be smoothed.
. ImageSigma (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / cyclic / int1 / int2 /
uint2 / int4 / real
Smoothed image.
read_image(Image,’fabrik’)
sigma_image(Image,ImageSigma,5,5,3)
disp_image(ImageSigma,WindowHandle).
Complexity
For each pixel: O(MaskHeight × MaskWidth).
Result
If the parameter values are correct, the operator sigma_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
sigma_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
anisotropic_diffusion, rank_image
See also
smooth_image, binomial_filter, gauss_image, mean_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, p. 325
Module
Foundation
smooth_image smooths gray images using the recursive filters originally developed by Deriche and Shen, or using
the non-recursive Gaussian filter. The desired filter is chosen via the parameter Filter:
The “filter width” (i.e., the range of the filter and thereby the result of the filter) can be of any size. For the
Deriche and Shen filters it decreases with increasing filter parameter Alpha, while for the Gauss filter it increases
(here Alpha corresponds to the standard deviation of the Gaussian function). An approximation of the
appropriate filter width for a given Alpha can be obtained with the operator info_smooth.
Non-recursive filters like the Gaussian filter are often implemented using filter masks, in which case the runtime
of the operator increases with the size of the filter mask. The runtime of the recursive filters remains
constant, except that the border treatment becomes somewhat more time consuming. The Gaussian filter is thus slow
compared to the recursive ones, but in contrast to them it is isotropic (the filter ’deriche2’ is only weakly direction
sensitive). Comparable smoothing results are achieved by choosing the following values for the parameter:
Alpha(’deriche2’) = Alpha(’deriche1’) / 2
Alpha(’shen’) = Alpha(’deriche1’) / 2
Alpha(’gauss’) = 1.77 / Alpha(’deriche1’)
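The conversion between comparable filter parameters follows directly from the relations above (the constant 1.77 is taken from the manual):

```python
def equivalent_alphas(alpha_deriche1):
    # Alpha values giving comparable smoothing for the four filters,
    # per the relations stated above.
    return {
        "deriche1": alpha_deriche1,
        "deriche2": alpha_deriche1 / 2.0,
        "shen": alpha_deriche1 / 2.0,
        "gauss": 1.77 / alpha_deriche1,
    }

print(equivalent_alphas(0.5))
```

Note the inverse role of Alpha for ’gauss’: a small Deriche Alpha (strong smoothing) corresponds to a large Gaussian standard deviation.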
Parameter
info_smooth(’deriche2’,0.5,Size,Coeffs)
smooth_image(Input,Smooth,’deriche2’,7)
Result
If the parameter values are correct, the operator smooth_image returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
smooth_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
binomial_filter, gauss_image, mean_image, derivate_gauss, isotropic_diffusion
See also
info_smooth, median_image, sigma_image, anisotropic_diffusion
References
R. Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelligence;
PAMI-12, no. 1; pp. 78-87; 1990.
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once. For each of these pixels all neighboring pixels covered by the mask are sorted
in an ascending sequence according to their gray values. Thus, each of these sorted gray value sequences contains
exactly as many gray values as the mask has pixels. If F is the area of the mask, the average of these sequences is
calculated as follows: the first (F - Number)/2 gray values are ignored, the following Number gray values
are summed up and divided by Number, and the remaining (F - Number)/2 gray values are again ignored.
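The averaging rule above can be sketched directly (Python sketch, not HALCON code):

```python
def trimmed_mean(values, number):
    # Sort the F gray values under the mask, drop (F - number) / 2
    # at each end, and average the middle `number` values.
    f = len(values)
    drop = (f - number) // 2
    middle = sorted(values)[drop : drop + number]
    return sum(middle) / number

print(trimmed_mean([1, 2, 3, 4, 100], 3))  # → 3.0 (the outlier 100 is trimmed)
```

With Number = 1 this degenerates to the median, with Number = F to the ordinary mean.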
Parameter
read_image(Image,’fabrik’)
draw_region(Region,WindowHandle)
trimmed_mean(Image,Region,TrimmedMean,5,’mirrored’)
disp_image(TrimmedMean,WindowHandle).
Result
If the parameter values are correct, the operator trimmed_mean returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
trimmed_mean is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image, median_weighted, median_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, p. 320
Module
Foundation
5.16 Texture
deviation_image ( Image : ImageDeviation : Width, Height : )
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
deviation_image(Image,Deviation,9,9)
disp_image(Deviation,WindowHandle).
Result
deviation_image returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
entropy_image(Image,Entropy1,9,9)
disp_image(Entropy1,WindowHandle).
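entropy_image evaluates the gray values within a Width × Height window (9 × 9 in the example above); the formula is not restated in this section, so the sketch below assumes the usual Shannon entropy of the gray value distribution in the window:

```python
import math

def gray_entropy(values):
    # Shannon entropy (in bits) of the gray values in a window.
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(gray_entropy([0, 0, 255, 255]))  # → 1.0 (two equally likely gray values)
```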
Result
entropy_image returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
entropy_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
entropy_gray
See also
energy_gabor, entropy_gray
Module
Foundation
l = [ 1 2 1]
e = [−1 0 1]
s = [−1 2 −1]
l = [ 1 4 6 4 1]
e = [−1 −2 0 2 1]
s = [−1 0 2 0 −1]
r = [ 1 −4 6 −4 1]
w = [−1 2 0 −2 1]
l = [ 1 6 15 20 15 6 1]
e = [−1 −4 −5 0 5 4 1]
s = [−1 −2 1 4 1 −2 −1]
r = [−1 −2 −1 4 −1 −2 −1]
w = [−1 0 3 0 −3 0 1]
o = [−1 6 −15 20 −15 6 −1]
For most of the filters the resulting gray values must be modified by a Shift. This makes the different textures in
the output image more comparable to each other, provided suitable filters are used.
The name of the filter is composed of the letters of the two vectors used, where the first letter denotes convolution
in the column direction while the second letter denotes convolution in the row direction.
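The naming rule can be sketched as an outer product: the mask for, e.g., ’el’ applies e in the column direction and l in the row direction (a Python sketch for the 3-element vectors; the operator additionally applies the Shift to the result):

```python
# The 3-element Laws vectors listed above
VECTORS_3 = {
    "l": [1, 2, 1],
    "e": [-1, 0, 1],
    "s": [-1, 2, -1],
}

def laws_mask(name):
    # First letter: convolution in the column direction,
    # second letter: convolution in the row direction,
    # i.e. the separable 2D mask is the outer product of the two vectors.
    col = VECTORS_3[name[0]]
    row = VECTORS_3[name[1]]
    return [[c * r for r in row] for c in col]

for line in laws_mask("el"):
    print(line)
```

The ’el’ mask produced this way is the familiar vertical-edge/level combination; the 5- and 7-element vectors extend the table analogously.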
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Images to which the texture transformation is to be applied.
. ImageTexture (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Texture images.
. FilterTypes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Desired filters (name or number).
Default Value : ’el’
Suggested values : FilterTypes ∈ {’ll’, ’le’, ’ls’, ’lr’, ’lw’, ’lo’, ’el’, ’ee’, ’es’, ’er’, ’ew’, ’eo’, ’sl’, ’se’,
’ss’, ’sr’, ’sw’, ’so’, ’rl’, ’re’, ’rs’, ’rr’, ’rw’, ’ro’, ’wl’, ’we’, ’ws’, ’wr’, ’ww’, ’wo’, ’ol’, ’oe’, ’os’, ’or’, ’ow’,
’oo’}
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Shift to reduce the gray value dynamics.
Default Value : 2
List of values : Shift ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Result
texture_laws returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
texture_laws is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
mean_image, binomial_filter, gauss_image, median_image, histo_2dim,
learn_ndim_norm, learn_ndim_box, threshold
Alternatives
convol_image
See also
class_2dim_sup, class_ndim_norm
References
Laws, K.I. “Textured image segmentation”; Ph.D. dissertation, Dept. of Engineering, Univ. Southern California,
1980
Module
Foundation
5.17 Wiener-Filter
gen_psf_defocus ( : Psf : PSFwidth, PSFheight, Blurring : )
the ’blur radius’ (out-of-focus blurring maps each image pixel onto a small circle with a radius of Blurring,
specified in number of pixels). If a value less than zero is specified, the absolute value of Blurring is used. The
result image of gen_psf_defocus encloses a spatial-domain impulse response of the specified blurring. Its
representation presumes the origin in the upper left corner. This results in the following disposition of an N × M
sized image:
This representation conforms to that of the impulse-response parameter of the HALCON operator
wiener_filter. So one can use gen_psf_defocus to generate an impulse response for Wiener filtering.
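The corner-origin disposition can be illustrated by wrapping negative coordinates modulo the image size, as in the usual FFT convention (a sketch assuming a uniform disc PSF; HALCON’s exact weighting at the disc border is not described here):

```python
def defocus_psf(width, height, radius):
    # Uniform disc PSF with its origin in the upper left corner:
    # negative coordinates wrap around to the opposite image border.
    radius = abs(radius)  # a negative Blurring is used by absolute value
    hits = []
    for y in range(-(height // 2), height - height // 2):
        for x in range(-(width // 2), width - width // 2):
            if x * x + y * y <= radius * radius:
                hits.append((y % height, x % width))
    psf = [[0.0] * width for _ in range(height)]
    for (y, x) in hits:
        psf[y][x] = 1.0 / len(hits)  # normalize the response to sum 1
    return psf

psf = defocus_psf(8, 8, 1)
# Disc mass sits at the corner and wraps to the opposite borders:
print(psf[0][0], psf[0][7], psf[7][0])
```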
Parameter
Module
Foundation
Parallelization Information
simulate_defocus is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_defocus, simulate_motion, gen_psf_motion
Possible Successors
wiener_filter, wiener_filter_ni
See also
gen_psf_defocus, simulate_motion, gen_psf_motion
References
Reginald L. Lagendijk, Jan Biemond: Iterative Identification and Restoration of Images; Kluwer Academic Publishers,
Boston/Dordrecht/London, 1991
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation
Result
simulate_motion returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
simulate_motion returns with an error message.
Parallelization Information
simulate_motion is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion
Possible Successors
simulate_defocus, wiener_filter, wiener_filter_ni
See also
gen_psf_motion, simulate_defocus, gen_psf_defocus
References
Anil K. Jain: Fundamentals of Digital Image Processing; Prentice-Hall International, Englewood Cliffs,
New Jersey, 1989
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Kha-Chye Tan, Hock Lim, B. T. G. Tan: “Restoration of Real-World Motion-Blurred Images”; pp. 291-299 in:
CVGIP: Graphical Models and Image Processing, Vol. 53, No. 3, May 1991
Module
Foundation
wiener_filter needs a smoothed version of the input image to estimate the power spectral densities of the
noise and of the original image. One of the smoothing HALCON filters (e.g., eliminate_min_max) can be used to
obtain this version. wiener_filter further needs the impulse response that describes the specific degradation.
This impulse response (represented in the spatial domain) must fit into an image of HALCON image type ’real’.
Two HALCON operators exist for generating an impulse response for motion blur and out-of-focus blur (see
gen_psf_motion, gen_psf_defocus). The representation of the impulse response presumes the origin in
the upper left corner. This results in the following disposition of an N × M sized image:
• estimation of the power spectral density of the original image by using the smoothed version of the corrupted
image,
• estimation of the noise power spectral density by subtracting the smoothed version from the unsmoothed
version,
• building the Wiener filter kernel from the quotient of the power spectral densities of noise and original image
and from the impulse response,
• processing the convolution of the image and the Wiener filter frequency response.
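The kernel built in the third step is the standard Wiener formula; per frequency it can be sketched as follows (the exact estimators wiener_filter uses internally are not spelled out here):

```python
def wiener_coefficient(h, snn_over_sff):
    # Standard Wiener deconvolution coefficient at one frequency:
    # conj(H) / (|H|^2 + Snn/Sff), where H is the transfer value of the
    # impulse response and Snn/Sff the noise-to-signal PSD quotient.
    return h.conjugate() / (abs(h) ** 2 + snn_over_sff)

# With no noise (quotient 0) this reduces to plain inverse filtering 1/H:
print(wiener_coefficient(0.5 + 0j, 0.0))
```

A nonzero noise quotient damps frequencies where the transfer value H is small, which is what keeps the restoration from amplifying noise.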
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Corrupted image.
. Psf (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real
Impulse response (PSF) of the degradation (in the spatial domain).
. FilteredImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real
Smoothed version of corrupted image.
. RestoredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real
Restored image.
Result
wiener_filter returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
wiener_filter returns with an error message.
Parallelization Information
wiener_filter is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_motion, simulate_defocus, gen_psf_defocus
Alternatives
wiener_filter_ni
See also
simulate_motion, gen_psf_motion, simulate_defocus, gen_psf_defocus
References
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing; Computer Science and Applied Mathematics;
Academic Press, New York/San Francisco/London, 1982
Module
Foundation
wiener_filter_ni estimates the noise term as follows: the user defines a region within the image that is suitable
for noise estimation (as homogeneous as possible, since edges and textures aggravate the noise estimation). After
smoothing within this region by an (unweighted) median filter and subtracting the smoothed version from the
unsmoothed one, the average noise amplitude of the region is computed within wiener_filter_ni. This amplitude,
together with the average gray value within the region, allows estimating the quotient of the power spectral densities
of noise and original image (in contrast to wiener_filter, wiener_filter_ni assumes a rather constant
quotient within the whole image). The user can define the width and height of the rectangular (median) filter mask to
influence the noise estimation (MaskWidth, MaskHeight). wiener_filter_ni further needs the impulse
response that describes the specific degradation. This impulse response (represented in the spatial domain) must fit
into an image of HALCON image type ’real’. Two HALCON operators exist for generating an impulse
response for motion blur and out-of-focus blur (see gen_psf_motion, gen_psf_defocus). The representation
of the impulse response presumes the origin in the upper left corner. This results in the following disposition of an
N × M sized image:
• first rectangle (’upper left’): image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1
- conforms to the fourth quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = 0..N/2 and y = 0.. − M/2
• second rectangle (’upper right’): image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1
- conforms to the third quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = −N/2.. − 1 and y = −1.. − M/2
• third rectangle (’lower left’): image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1
- conforms to the first quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = 1..N/2 and y = M/2..0
• fourth rectangle (’lower right’): image coordinates xb = N/2..N − 1, yb = M/2..M − 1
- conforms to the second quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = −N/2.. − 1 and y = M/2..1
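The noise estimation scheme (median smoothing, subtraction, averaging the amplitude) can be sketched in one dimension (a Python sketch; wiener_filter_ni of course operates on the 2D region):

```python
import statistics

def estimate_noise_amplitude(values, window=3):
    # Smooth with an unweighted median filter, subtract the smoothed
    # from the unsmoothed signal, and average the absolute differences.
    half = window // 2
    diffs = []
    for i in range(half, len(values) - half):
        med = statistics.median(values[i - half : i + half + 1])
        diffs.append(abs(values[i] - med))
    return sum(diffs) / len(diffs)

print(estimate_noise_amplitude([10, 12, 10, 11, 10, 13, 10]))  # → 1.6
```

This is why the region should be as homogeneous as possible: on an edge, the median residual measures the edge itself rather than the noise.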
• estimating the quotient of the power spectral densities of noise and original image,
• building the Wiener filter kernel from this quotient and from the impulse response,
• processing the convolution of the image and the Wiener filter frequency response.
Result
wiener_filter_ni returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
wiener_filter_ni returns with an error message.
Parallelization Information
wiener_filter_ni is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_motion, simulate_defocus, gen_psf_defocus
Alternatives
wiener_filter
See also
simulate_motion, gen_psf_motion, simulate_defocus, gen_psf_defocus
References
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing; Computer Science and Applied Mathematics;
Academic Press, New York/San Francisco/London, 1982
Module
Foundation
Graphics
6.1 Drawing
draw_region(Obj,WindowHandle)
drag_region1(Obj,New,WindowHandle)
disp_region(New,WindowHandle)
position(Obj,_,Row1,Column1,_,_,_,_)
position(New,_,Row2,Column2,_,_,_,_)
disp_arrow(WindowHandle,Row1,Column1,Row2,Column2,1.0)
fwrite_string([’Transformation: (’,Row2-Row1,’,’,Column2-Column1,’)’])
fnew_line().
Result
drag_region1 returns 2 (H_MSG_TRUE) if a region is entered, the window is valid, and the needed drawing
mode (see set_insert) is available. If necessary, an exception is raised. You may determine the
behavior after an empty input with set_system(’no_object_result’,<Result>).
Parallelization Information
drag_region1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
get_mposition, move_region
See also
set_insert, set_draw, affine_trans_image
Module
Foundation
drag_region3 ( SourceRegion,
MaskRegion : DestinationRegion : WindowHandle, Row, Column : )
read_image(Image,’affe’)
draw_circle(WindowHandle,Row,Column,Radius)
gen_circle(Circle,Row,Column,Radius)
reduce_domain(Image,Circle,GrayCircle)
disp_image(GrayCircle,WindowHandle).
Result
draw_circle returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception is raised.
Parallelization Information
draw_circle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_circle_mod, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y ; real
Row index of the center.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x ; real
Column index of the center.
. RadiusIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius1 ; real
Radius of the circle.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y ; real
Barycenter’s row index.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x ; real
Barycenter’s column index.
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius ; real
Circle’s radius.
Example
read_image(Image,’affe’)
draw_circle_mod(WindowHandle,20,20,15,Row,Column,Radius)
gen_circle(Circle,Row,Column,Radius)
reduce_domain(Image,Circle,GrayCircle)
disp_image(GrayCircle,WindowHandle).
Result
draw_circle_mod returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception is raised.
Parallelization Information
draw_circle_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_circle, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
Pressing the right mouse button terminates the procedure. After terminating the procedure the ellipse is not visible
in the window any longer.
Parameter
read_image(Image,’affe’)
draw_ellipse(WindowHandle,Row,Column,Phi,Radius1,Radius2)
gen_ellipse(Ellipse,Row,Column,Phi,Radius1,Radius2)
reduce_domain(Image,Ellipse,GrayEllipse)
sobel_amp(GrayEllipse,Sobel,’sum_abs’,3)
disp_image(Sobel,WindowHandle).
Result
draw_ellipse returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception is raised.
Parallelization Information
draw_ellipse is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_ellipse_mod, draw_circle, draw_region
See also
gen_ellipse, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
“grips” it to modify the length of the appropriate half axis. You may modify the orientation only if a vertex of the
first half axis is gripped.
Pressing the right mouse button terminates the procedure. After terminating the procedure the ellipse is not visible
in the window any longer.
Parameter
read_image(Image,’affe’)
draw_ellipse_mod(WindowHandle,RowIn,ColumnIn,PhiIn,Radius1In,Radius2In,Row,Column,Phi,Radius1,Radius2)
gen_ellipse(Ellipse,Row,Column,Phi,Radius1,Radius2)
reduce_domain(Image,Ellipse,GrayEllipse)
sobel_amp(GrayEllipse,Sobel,’sum_abs’,3)
disp_image(Sobel,WindowHandle).
Result
draw_ellipse_mod returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception is raised.
Parallelization Information
draw_ellipse_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_ellipse, draw_circle, draw_region
See also
gen_ellipse, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
Draw a line.
draw_line returns the parameters of a line which has been created interactively by the user in the window.
To create a line you have to press the left mouse button, thereby determining the start point of the line. While
keeping the button pressed you may “drag” the line in any direction. After another mouse click in the middle of the
created line you can move it. If you click on one end point of the created line, you may move this point. Pressing
the right mouse button terminates the procedure.
After terminating the procedure the line is not visible in the window any longer.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; real
Row index of the first point of the line.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x ; real
Column index of the first point of the line.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y ; real
Row index of the second point of the line.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x ; real
Column index of the second point of the line.
Example
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_line(WindowHandle,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fwrite_string([’,(’,Row2,’,’,Column2,’)’])
fnew_line().
Result
draw_line returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception is raised.
Parallelization Information
draw_line is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_line_mod, gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation
Draw a line.
draw_line_mod returns the parameters of a line which has been created interactively by the user in the window.
To create a line, the coordinates of the start point (Row1In, Column1In) and of the end point
(Row2In, Column2In) are expected. If you click on one end point of the created line, you may move this point.
After another mouse click in the middle of the created line you can move it.
Pressing the right mouse button terminates the procedure.
After terminating the procedure the line is not visible in the window any longer.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Row1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; real
Row index of the first point of the line.
. Column1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x ; real
Column index of the first point of the line.
. Row2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y ; real
Row index of the second point of the line.
. Column2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x ; real
Column index of the second point of the line.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; real
Row index of the first point of the line.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x ; real
Column index of the first point of the line.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y ; real
Row index of the second point of the line.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x ; real
Column index of the second point of the line.
Example
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_line_mod(WindowHandle,10,20,55,124,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fwrite_string([’,(’,Row2,’,’,Column2,’)’])
fnew_line().
Result
draw_line_mod returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode is available.
If necessary, an exception is raised.
Parallelization Information
draw_line_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_line, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2
Module
Foundation
HALCON 8.0.2
332 CHAPTER 6. GRAPHICS
By pressing the Shift key again you can switch back to the edit mode. Pressing the right mouse button terminates
the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The control polygon and all
handles are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1
and their line style is fixed to a solid line.
Parameter
When there are three points or more, the first and the last point will be marked with an additional square. By
clicking on them you can close the curve or open it again. You delete the point appended last by pressing the Ctrl
key.
The tangents (i.e. the first derivative of the curve) of the first and the last point are displayed as lines. They can be
modified by dragging their ends using the mouse.
Existing points can be moved by dragging them with the mouse. Further new points on the curve can be inserted
by a left click on the desired position on the curve.
By pressing the Shift key, you can switch into the transformation mode. In this mode you can rotate, move, and
scale the curve as a whole, but only if you set the parameters Rotate, Move, and Scale, respectively, to true.
Instead of the pick points and the two tangents, 3 symbols are displayed with the curve: a cross in the middle and
an arrow to the right if Rotate is set to true, and a double-headed arrow to the upper right if Scale is set to true.
You can
• move the curve by clicking the left mouse button on the cross in the center and then dragging it to the new
position,
• rotate it by clicking with the left mouse button on the arrow and then dragging it until the curve has the right
direction, and
• scale it by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be set to true.
By pressing the Shift key again you can switch back to the edit mode. Pressing the right mouse button terminates
the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The tangents and all handles
are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1 and their
line style is fixed to a solid line.
Attention
In contrast to draw_nurbs, each point specified by the user influences the whole curve. Thus, if one point is
moved, the whole curve can and will change. To minimize these effects, it is recommended to use a small degree
(3-5) and to place the points such that they are approximately equally spaced. In general, odd degrees will
perform slightly better than even degrees.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Contour of the curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable rotation?
Default Value : ’true’
List of values : Rotate ∈ {’true’, ’false’}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable moving?
Default Value : ’true’
List of values : Move ∈ {’true’, ’false’}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable scaling?
Default Value : ’true’
List of values : Scale ∈ {’true’, ’false’}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Keep ratio while scaling?
Default Value : ’true’
List of values : KeepRatio ∈ {’true’, ’false’}
. Degree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
The degree p of the NURBS curve. Reasonable values are 3 to 5.
Default Value : 3
Suggested values : Degree ∈ {2, 3, 4, 5}
Restriction : (Degree ≥ 2) ∧ (Degree ≤ 9)
arrow to the upper right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it
again, you can switch back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it until the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its interpolation points and the start and end tangent. Start and
end point are marked by an additional square. You can perform the following modifications:
• To append new points, click with the left mouse button in the window and a new point is added at this position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the curve, click on the desired position on the curve.
• To close or reopen the curve, click on the first or on the last point.
control polygon if Edit is set to true. Similarly, you can only rotate, move or scale it if Rotate, Move, and
Scale, respectively, are set to true.
draw_nurbs_mod starts in the transformation mode. In this mode, the curve is displayed together with 3 sym-
bols: a cross in the middle and an arrow to the right if Rotate is set to true, and a double-headed arrow to the
upper right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it again, you can
switch back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it until the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its control polygon. Start and end point are marked by an
additional square and the point which was handled last is surrounded by a circle representing its weight. You can
perform the following modifications:
• To append control points, click with the left mouse button in the window and a new point is added at this
position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the control polygon, click on the desired position on the polygon.
• To close or reopen the curve, click on the first or on the last control point.
• You can modify the weight of a control point by first clicking on the point itself (if it is not already the point
which was modified or created last) and then dragging the circle around the point.
Draw a point.
draw_point returns the parameter for a point, which has been created interactively by the user in the window.
To create a point you have to press the left mouse button. While keeping the button pressed you may “drag” the
point in any direction. Pressing the right mouse button terminates the procedure.
After terminating the procedure the point is not visible in the window any longer.
Parameter
Example
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_point(WindowHandle,Row1,Column1)
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1)
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fnew_line().
Result
draw_point returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_point is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_point_mod, draw_circle, draw_ellipse, set_insert
Module
Foundation
Draw a point.
draw_point_mod returns the parameter for a point, which has been created interactively by the user in the
window.
The initial position of the point is given by the coordinates RowIn and ColumnIn. While keeping the left mouse
button pressed you may “drag” the point in any direction. Pressing the right mouse button terminates the procedure.
After terminating the procedure the point is not visible in the window any longer.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real
Row index of the point.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real
Column index of the point.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real
Row index of the point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real
Column index of the point.
Example
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_point_mod(WindowHandle,100,100,Row1,Column1)
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1)
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fnew_line().
Result
draw_point_mod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode is available.
If necessary, an exception handling is raised.
Parallelization Information
draw_point_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_point, draw_circle, draw_ellipse, set_insert
Module
Foundation
draw_polygon(Polygon,WindowHandle)
shape_trans(Polygon,Filled,’convex’)
disp_region(Filled,WindowHandle).
Result
If the window is valid, draw_polygon returns 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
draw_polygon is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw
Alternatives
draw_region, draw_circle, draw_rectangle1, draw_rectangle2, boundary
See also
reduce_domain, fill_up, set_color
Module
Foundation
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fwrite_string([’,(’,Row2,’,’,Column2,’)’])
fnew_line().
Result
draw_rectangle1 returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle1_mod, draw_rectangle2, draw_region
See also
gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_rectangle1_mod(WindowHandle,Row1In,Column1In,Row2In,Column2In,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fwrite_string([’,(’,Row2,’,’,Column2,’)’])
fnew_line().
Result
draw_rectangle1_mod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle1_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle1, draw_rectangle2, draw_region
See also
gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2_mod, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
draw_region(Region,WindowHandle)
reduce_domain(Image,Region,New)
regiongrowing(New,Segmente,5,5,6,50)
set_colored(WindowHandle,12)
disp_region(Segmente,WindowHandle).
Result
If the window is valid, draw_region returns 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
draw_region is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw
Alternatives
draw_circle, draw_ellipse, draw_rectangle1, draw_rectangle2
See also
draw_polygon, reduce_domain, fill_up, set_color
Module
Foundation
Parallelization Information
draw_xld is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation
Parameter
. ContIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Input contour.
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Modified contour.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable rotation?
Default Value : ’true’
List of values : Rotate ∈ {’true’, ’false’}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable moving?
Default Value : ’true’
List of values : Move ∈ {’true’, ’false’}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable scaling?
Default Value : ’true’
List of values : Scale ∈ {’true’, ’false’}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Keep ratio while scaling?
Default Value : ’true’
List of values : KeepRatio ∈ {’true’, ’false’}
. Edit (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable editing?
Default Value : ’true’
List of values : Edit ∈ {’true’, ’false’}
Result
draw_xld_mod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_xld_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation
6.2 Gnuplot
gnuplot_close ( : : GnuplotFileID : )
Parameter
gnuplot_open_pipe ( : : : GnuplotFileID )
Open a pipe to a gnuplot process for visualization of images and control values.
gnuplot_open_pipe opens a pipe to a gnuplot sub-process with which subsequently images can be
visualized as 3D-plots ( gnuplot_plot_image) or control values can be visualized as 2D-plots (
gnuplot_plot_ctrl). The sub-process must be terminated after displaying the last plot by calling
gnuplot_close. The corresponding identifier for the gnuplot output stream is returned in GnuplotFileID.
Attention
gnuplot_open_pipe is only implemented for Unix because gnuplot for Windows (wgnuplot) cannot be
controlled by an external process.
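A minimal usage sketch of the pipe life cycle described above; the parameter order of gnuplot_plot_ctrl (pipe identifier followed by the tuple of control values) is an assumption, not taken from this page:

```
* Open a pipe to a gnuplot sub-process (Unix only), plot a tuple of
* control values as a 2D-plot, and terminate the sub-process again.
* The parameter order of gnuplot_plot_ctrl is assumed here.
gnuplot_open_pipe (GnuplotFileID)
gnuplot_plot_ctrl (GnuplotFileID, [0,1,4,9,16,25])
gnuplot_close (GnuplotFileID)
```

Every pipe opened with gnuplot_open_pipe must eventually be closed with gnuplot_close; otherwise the gnuplot sub-process keeps running.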
Parameter
6.3 LUT
disp_lut ( : : WindowHandle, Row, Column, Scale : )
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Row of centre of the graphic.
Default Value : 128
Typical range of values : 0 ≤ Row ≤ 511
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column of centre of the graphic.
Default Value : 128
Typical range of values : 0 ≤ Column ≤ 511
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Scaling of the graphic.
Default Value : 1
List of values : Scale ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Typical range of values : 0 ≤ Scale ≤ 20
Example
set_lut(WindowHandle,’color’)
disp_lut(WindowHandle,256,256,1)
get_mbutton(WindowHandle,_,_,_)
set_lut(WindowHandle,’sqrt’)
disp_lut(WindowHandle,128,128,2).
Result
disp_lut returns 2 (H_MSG_TRUE) if the hardware supports a look-up-table, the window is valid and the
parameters are correct. Otherwise an exception handling is raised.
Parallelization Information
disp_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
set_lut
See also
open_window, open_textwindow, draw_lut, set_lut, set_fix, set_pixel, write_lut,
get_lut, set_color
Module
Foundation
draw_lut ( : : WindowHandle : )
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
Example
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
draw_lut(WindowHandle)
write_lut(WindowHandle,’my_lut’).
...
read_image(Image,’fabrik’)
set_lut(WindowHandle,’my_lut’).
Result
draw_lut returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
draw_lut is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut, write_lut, disp_lut
Alternatives
set_fix, set_rgb
See also
write_lut, set_lut, get_lut, disp_lut
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. LookUpTable (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string / integer
Name of look-up-table or tuple of RGB-values.
Result
get_lut returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_lut is reentrant, local, and processed without parallelization.
Possible Successors
draw_lut, set_lut
Alternatives
set_fix, get_pixel
See also
set_lut, draw_lut
Module
Foundation
Hue: 0.0
Saturation: 1.0
Intensity: 1.0
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Hue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Modification of color value.
. Saturation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Modification of saturation.
. Intensity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Modification of intensity.
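Following the parameter list above, a minimal sketch queries the current style settings so that they can be restored after a temporary modification (the later call of set_lut_style takes the same values as input):

```
* Save the current LUT style of the window ...
get_lut_style (WindowHandle, Hue, Saturation, Intensity)
* ... temporarily modify the style, display something, and
* finally restore the saved settings.
set_lut_style (WindowHandle, Hue, Saturation, Intensity)
```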
Result
get_lut_style returns 2 (H_MSG_TRUE) if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
get_lut_style is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut
See also
set_lut_style
Module
Foundation
query_lut returns the names of all look-up-tables available on the currently used device. These tables can be set
with set_lut. A table named ’default’ is always available.
Parameter
B: image data.
Colors in S stem from applications that were active before HALCON was started and should not get lost. Graphics
colors in G are used by operators such as disp_region, disp_circle, etc., and are set uniquely within all
look-up-tables. An output in a graphics color therefore always has the same look, even if different look-up-tables
are used. set_color and set_rgb set graphics colors. Gray values and colors in B are used by disp_image
to display an image. They can change according to the current look-up-table. There are two exceptions to this
concept:
• set_gray allows setting colors of the area B for operators such as disp_region,
• set_fix allows modification of graphics colors.
On common monitors only one look-up-table can be loaded per screen, whereas set_lut can be activated
separately for each window. This conflict is resolved as follows: the look-up-table assigned to the "active window"
is always activated (a window is set into the state "active" by the window manager).
Look-up-tables can also be used with truecolor displays. In this case the look-up-table is simulated in software,
which means that it is applied each time an image is displayed.
Windows NT specific: if the graphics card is used in a mode other than truecolor, you must display the image after
setting the look-up-table.
query_lut lists the names of all look-up-tables. They differ from each other in the area used for gray values.
Within this area the following behavior is defined:
gray value tables (1-7 image levels)
’default’: Only the two basic colors (generally black and white) are used.
color tables (Real color, static gray value steps)
’default’: Table proposed by the hardware.
gray value tables (256 colors)
’default’: As ’linear’.
’linear’: Linear increasing of gray values from 0 (black) to 255 (white).
’inverse’: Inverse function of ’linear’.
’sqr’: Gray values increase according to square function.
’inv_sqr’: Inverse function of ’sqr’.
’cube’: Gray values increase according to cubic function.
’inv_cube’: Inverse function of ’cube’.
’sqrt’: Gray values increase according to square-root function.
’inv_sqrt’: Inverse Function of ’sqrt’.
’cubic_root’: Gray values increase according to cubic-root function.
’inv_cubic_root’: Inverse Function of ’cubic_root’.
color tables (256 colors)
’color1’: Linear transition from red via green to blue.
’color2’: Smooth transition from yellow via red, blue to green.
’color3’: Smooth transition from yellow via red, blue, green, red to blue.
’color4’: Smooth transition from yellow via red to blue.
’three’: Displaying the three colors red, green and blue.
’six’: Displaying the six basic colors yellow, red, magenta, blue, cyan and green.
’twelve’: Displaying 12 colors.
’twenty_four’: Displaying 24 colors.
’rainbow’: Displaying the spectral colors from red via green to blue.
A look-up-table can be read from a file. Every line of such a file must contain three numbers in the range 0 to
255, with the first number describing the amount of red, the second the amount of green, and the third the amount
of blue of the represented display color. The number of lines can vary. The first line contains the information for
the first gray value and the last line for the last value. If there are fewer lines than gray values, the available
values are distributed over the whole interval. If there are more lines than gray values, a number of (uniformly
distributed) lines is ignored. The file name must conform to "<LookUpTable>.lut"; within the parameter the
name is specified without the file extension. HALCON searches for the file in the current directory and after that
in a specified directory (see set_system(’lut_dir’,<Path>)). It is also possible to call set_lut
with a tuple of RGB values, which are set directly. The number of parameter values must conform to the
number of entries currently used within the look-up-table.
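For illustration, a hypothetical file "ramp.lut" (the name is an example) defining a short red-to-blue ramp could look like this; the three columns are the red, green, and blue amounts (0-255), and the three rows are distributed over the whole gray value interval:

```
255   0   0
128   0 128
  0   0 255
```

Such a table would be loaded via set_lut(WindowHandle,’ramp’), i.e., without the ".lut" extension.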
Attention
set_lut can only be used with monitors supporting 256 gray levels/colors.
Parameter
read_image(Image,’affe’)
query_lut(WindowHandle,LUTs)
for(1,|LUTs|,i)
set_lut(WindowHandle,LUTs[i])
fwrite_string([’current table ’,LUTs[i]])
fnew_line()
get_mbutton(WindowHandle,_,_,_)
loop().
Result
set_lut returns 2 (H_MSG_TRUE) if the hardware supports a look-up-table and the parameter is correct. Oth-
erwise an exception handling is raised.
Parallelization Information
set_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
query_lut, draw_lut, get_lut
Possible Successors
write_lut
Alternatives
draw_lut, set_fix, set_pixel
See also
get_lut, query_lut, draw_lut, set_fix, set_color, set_rgb, set_hsi, write_lut
Module
Foundation
Hue: rotation of the color space; Hue = 1.0 corresponds to one full rotation of the color space. No change: Hue
= 0.0. Complementary colors: Hue = 0.5.
Saturation: change of the saturation. No change: Saturation = 1.0. Gray value image: Saturation = 0.0.
Intensity: change of the intensity. No change: Intensity = 1.0. Black image: Intensity = 0.0.
The changes affect only the part of the look-up-table that is used for displaying images. The modification
parameters remain valid until the next call of set_lut_style. Calling set_lut has no effect on these parameters.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Hue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Modification of color value.
Default Value : 0.0
Typical range of values : 0.0 ≤ Hue ≤ 1.0
Restriction : (0.0 ≤ Hue) ∧ (Hue ≤ 1.0)
. Saturation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Modification of saturation.
Default Value : 1.5
Typical range of values : 0.0 ≤ Saturation
Restriction : 0.0 ≤ Saturation
. Intensity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Modification of intensity.
Default Value : 1.5
Typical range of values : 0.0 ≤ Intensity
Restriction : 0.0 ≤ Intensity
Example
read_image(Image,’affe’)
set_lut(WindowHandle,’color’)
repeat()
get_mbutton(WindowHandle,Row,Column,Button)
eval(Row/300.0,Saturation)
eval(Column/512.0,Hue)
set_lut_style(WindowHandle,Hue,Saturation,1.0)
until(Button = 1).
Result
set_lut_style returns 2 (H_MSG_TRUE) if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
set_lut_style is reentrant, local, and processed without parallelization.
Possible Predecessors
get_lut_style
Possible Successors
set_lut
Alternatives
set_lut, scale_image
See also
get_lut_style
Module
Foundation
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_lut(WindowHandle)
write_lut(WindowHandle,’test_lut’).
Result
write_lut returns 2 (H_MSG_TRUE) if the window with the required properties (256 colors) is valid and the
parameter (file name) is correct. Otherwise an exception handling is raised.
Parallelization Information
write_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
draw_lut, set_lut
See also
set_lut, draw_lut, set_pixel, get_pixel
Module
Foundation
6.4 Mouse
get_mbutton ( : : WindowHandle : Row, Column, Button )
1: Left button,
2: Middle button,
4: Right button.
The operator waits until a button is pressed in the output window. If more than one button is pressed, the sum of
the individual buttons’ values is returned (e.g., 5 if the left and the right button are pressed together). The origin of the coordinate system is located in the left upper corner
of the window. The row coordinates increase towards the bottom, while the column coordinates increase towards
the right. For graphics windows, the coordinates of the lower right corner are (image height-1,image width-1) (see
open_window, reset_obj_db), while for text windows they are (window height-1,window width-1) (see
open_textwindow).
Attention
get_mbutton only returns if a mouse button is pressed in the window.
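A minimal sketch, following the signature shown above: wait for a click and decode the bit-coded button value.

```
* Wait until a mouse button is pressed in the window.
get_mbutton (WindowHandle, Row, Column, Button)
* Button is bit-coded: 1 = left, 2 = middle, 4 = right;
* e.g. Button = 5 means left and right pressed together.
if (Button = 4)
    * right button: e.g., leave an interaction loop
endif
```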
Parameter
0: No button,
1: Left button,
2: Middle button,
4: Right button.
The origin of the coordinate system is located in the left upper corner of the window. The row coordinates increase
towards the bottom, while the column coordinates increase towards the right. For graphics windows, the coor-
dinates of the lower right corner are (image height-1,image width-1) (see open_window, reset_obj_db),
while for text windows they are (window height-1,window width-1) (see open_textwindow).
Attention
get_mposition fails (returns FAIL) if the mouse pointer is not located within the window. In this case, no
values are returned.
Parameter
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. ShapeNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Available mouse pointer names.
Result
query_mshape returns the value 2 (H_MSG_TRUE).
Parallelization Information
query_mshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, get_mshape
Possible Successors
set_mshape
See also
set_mshape, get_mshape
Module
Foundation
6.5 Output
the arc (BeginRow,BeginCol). The arc is displayed in clockwise direction. The parameters for output can be
determined - as with the output of regions - with the procedures set_color, set_gray, set_draw, etc. It
is possible to draw several arcs with one call by using tuple parameters. For the use of colors with several arcs, see
set_color.
Attention
The center point has to be within the window. The radius of the arc has to be at least 2 pixels.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.y ; real / integer
Row coordinate of center point.
Default Value : 64
Suggested values : CenterRow ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.x ; real / integer
Column coordinate of center point.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.angle.rad ; real / integer
Angle between start and end of the arc (in radians).
Default Value : 3.1415926
Suggested values : Angle ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Angle ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Angle > 0.0
. BeginRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.y(-array) ; integer / real
Row coordinate of the start of the arc.
Default Value : 32
Suggested values : BeginRow ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ BeginRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. BeginCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.x(-array) ; integer / real
Column coordinate of the start of the arc.
Default Value : 32
Suggested values : BeginCol ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ BeginCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Example
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_draw(WindowHandle,’fill’)
set_color(WindowHandle,’white’)
set_insert(WindowHandle,’not’)
Row = 100
Column = 100
disp_arc(WindowHandle,Row,Column,3.14,Row+10,Column+10)
close_window(WindowHandle).
Result
disp_arc returns 2 (H_MSG_TRUE).
Parallelization Information
disp_arc is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_circle, disp_ellipse, disp_region, gen_circle, gen_ellipse
See also
open_window, open_textwindow, set_color, set_draw, set_rgb, set_hsi
Module
Foundation
set_color(WindowHandle,[’red’,’green’])
disp_arrow(WindowHandle,[10,10],[10,10],[118,110],[118,118],1.0).
Result
disp_arrow returns 2 (H_MSG_TRUE).
Parallelization Information
disp_arrow is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_line, gen_region_polygon, disp_region
See also
open_window, open_textwindow, set_color, set_draw, set_line_width
Module
Foundation
Result
If the used images contain valid values and a correct output mode is set, disp_channel returns 2
(H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
disp_channel is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi
Alternatives
disp_image, disp_color
See also
open_window, open_textwindow, reset_obj_db, set_lut, draw_lut, dump_window
Module
Foundation
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_draw(WindowHandle,’fill’)
set_color(WindowHandle,’white’)
set_insert(WindowHandle,’not’)
repeat()
get_mbutton(WindowHandle,Row,Column,Button)
disp_circle(WindowHandle,Row,Column,(Row + Column) mod 50)
until(Button = 1)
close_window(WindowHandle).
Result
disp_circle returns 2 (H_MSG_TRUE).
Parallelization Information
disp_circle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_ellipse, disp_region, gen_circle, gen_ellipse
See also
open_window, open_textwindow, set_color, set_draw, set_rgb, set_hsi
Module
Foundation
Result
If the used image contains valid values and a correct output mode is set, disp_color returns 2
(H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
disp_color is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi
Alternatives
disp_channel, disp_obj
See also
disp_image, open_window, open_textwindow, reset_obj_db, set_lut, draw_lut,
dump_window
Module
Foundation
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_draw(WindowHandle,’fill’)
set_color(WindowHandle,’white’)
set_insert(WindowHandle,’not’)
read_image(Image,’affe’)
draw_region(Region,WindowHandle)
noise_distribution_mean(Region,Image,21,Distribution)
disp_distribution (WindowHandle,Distribution,100,100,3).
Parallelization Information
disp_distribution is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
noise_distribution_mean, gauss_distribution
See also
gen_region_histo, set_paint, gauss_distribution, noise_distribution_mean
Module
Foundation
Displays ellipses.
disp_ellipse displays one or several ellipses in the output window. An ellipse is described by the center
(CenterRow, CenterCol), the orientation Phi (in radians) and the radii of the major and the minor axis
(Radius1 and Radius2).
The procedures used to control the display of regions (e.g. set_color, set_gray, set_draw) can also be
used with ellipses. Several ellipses can be displayed with one call by using tuple parameters. For the use of colors
with several ellipses, see set_color.
Attention
The center of the ellipse must be within the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y(-array) ; integer
Row index of center.
Default Value : 64
Suggested values : CenterRow ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x(-array) ; integer
Column index of center.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad(-array) ; real / integer
Orientation of the ellipse in radians
Default Value : 0.0
Suggested values : Phi ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Phi ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. Radius1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1(-array) ; real / integer
Radius of major axis.
Default Value : 24.0
Suggested values : Radius1 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Radius1 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Radius2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2(-array) ; real / integer
Radius of minor axis.
Default Value : 14.0
Suggested values : Radius2 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Radius2 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Example
set_color(WindowHandle,’red’)
draw_region(MyRegion,WindowHandle)
elliptic_axis(MyRegion,Ra,Rb,Phi)
area_center(MyRegion,_,Row,Column)
disp_ellipse(WindowHandle,Row,Column,Phi,Ra,Rb).
Result
disp_ellipse returns 2 (H_MSG_TRUE), if the parameters are correct. Otherwise an exception is raised.
Parallelization Information
disp_ellipse is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
elliptic_axis, area_center
Alternatives
disp_circle, disp_region, gen_ellipse, gen_circle
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_draw,
set_line_width
Module
Foundation
Result
If the used image contains valid values and a correct output mode is set, disp_image returns 2
(H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
disp_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, scale_image, convert_image_type,
min_max_gray
Alternatives
disp_obj, disp_color
See also
open_window, open_textwindow, reset_obj_db, set_comprise, set_paint, set_lut,
draw_lut, paint_gray, scale_image, convert_image_type, dump_window
Module
Foundation
Example
disp_rectangle1_margin(WindowHandle,Row1,Column1,Row2,Column2):
disp_line(WindowHandle,Row1,Column1,Row1,Column2)
disp_line(WindowHandle,Row1,Column2,Row2,Column2)
disp_line(WindowHandle,Row2,Column2,Row2,Column1)
disp_line(WindowHandle,Row2,Column1,Row1,Column1).
Result
disp_line returns 2 (H_MSG_TRUE).
Parallelization Information
disp_line is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_arrow, disp_rectangle1, disp_rectangle2, disp_region, gen_region_polygon,
gen_region_points
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation
Result
If the used object is valid and a correct output mode is set, disp_obj returns 2 (H_MSG_TRUE). Otherwise an
exception is raised.
Parallelization Information
disp_obj is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, scale_image, convert_image_type,
min_max_gray
Alternatives
disp_color, disp_image, disp_xld, disp_region
See also
open_window, open_textwindow, reset_obj_db, set_comprise, set_paint, set_lut,
draw_lut, paint_gray, scale_image, convert_image_type, dump_window
Module
Foundation
Displays a polyline.
disp_polygon displays a polyline with the row coordinates Row and the column coordinates Column in the
output window. The parameters Row and Column have to be provided as tuples. Straight lines are drawn between
the given points. The start and the end of the polyline are not connected.
The procedures used to control the display of regions (e.g. set_color, set_gray, set_draw,
set_line_width) can also be used with polylines.
Attention
The given coordinates must lie within the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.y-array ; integer / real
Row index
Default Value : [16,80,80]
Suggested values : Row ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.x-array ; integer / real
Column index
Default Value : [48,16,80]
Suggested values : Column ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Example (Syntax: C)
/* display a rectangle */
void disp_rectangle1_margin1 (long WindowHandle,
                              long Row1, long Column1,
                              long Row2, long Column2)
{
  Htuple Row, Col;

  /* five points: the four corners plus the first corner again */
  create_tuple(&Row,5);
  create_tuple(&Col,5);
  set_i(Row,Row1,0);
  set_i(Col,Column1,0);
  set_i(Row,Row1,1);
  set_i(Col,Column2,1);
  set_i(Row,Row2,2);
  set_i(Col,Column2,2);
  set_i(Row,Row2,3);
  set_i(Col,Column1,3);
  set_i(Row,Row1,4);
  set_i(Col,Column1,4);
  T_disp_polygon(WindowHandle,Row,Col);
  destroy_tuple(Row);
  destroy_tuple(Col);
}
Result
disp_polygon returns 2 (H_MSG_TRUE).
Parallelization Information
disp_polygon is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_line, gen_region_polygon, disp_region
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation
set_color(WindowHandle,’green’)
draw_region(MyRegion,WindowHandle)
smallest_rectangle1(MyRegion,R1,C1,R2,C2)
disp_rectangle1(WindowHandle,R1,C1,R2,C2).
Result
disp_rectangle1 returns 2 (H_MSG_TRUE).
Parallelization Information
disp_rectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_rectangle2, gen_rectangle1, disp_region, disp_line, set_shape
See also
open_window, open_textwindow, set_color, set_draw, set_line_width
Module
Foundation
set_color(WindowHandle,’green’)
draw_region(MyRegion,WindowHandle)
elliptic_axis(MyRegion,Ra,Rb,Phi)
area_center(MyRegion,_,Row,Column)
disp_rectangle2(WindowHandle,Row,Column,Phi,Ra,Rb).
Result
disp_rectangle2 returns 2 (H_MSG_TRUE), if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
disp_rectangle2 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_region, gen_rectangle2, disp_rectangle1, set_shape
See also
open_window, open_textwindow, disp_region, set_color, set_draw, set_line_width
Module
Foundation
/* Symbolic representation: */
set_draw(WindowHandle,’margin’)
set_color(WindowHandle,’red’)
set_shape(WindowHandle,’ellipse’)
disp_region(SomeSegments,WindowHandle).
Result
disp_region returns 2 (H_MSG_TRUE).
Parallelization Information
disp_region is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_shape, set_line_style, set_insert,
set_fix, set_draw, set_color, set_colored, set_line_width
Alternatives
disp_obj, disp_arrow, disp_line, disp_circle, disp_rectangle1, disp_rectangle2,
disp_ellipse
See also
open_window, open_textwindow, set_color, set_colored, set_draw, set_shape,
set_paint, set_gray, set_rgb, set_hsi, set_pixel, set_line_width, set_line_style,
set_insert, set_fix, paint_region, dump_window
Module
Foundation
6.6 Parameters
get_comprise ( : : WindowHandle : Mode )
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window_id.
. Hue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Hue (color value) of the current color.
. Saturation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Saturation of the current color.
. Intensity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Intensity of the current color.
Result
get_hsi returns 2 (H_MSG_TRUE), if the window is valid. Otherwise an exception is raised.
Parallelization Information
get_hsi is reentrant and processed without parallelization.
Possible Successors
set_hsi, set_rgb, disp_image
See also
set_hsi, set_color, set_rgb, trans_to_rgb, trans_from_rgb
Module
Foundation
Result
get_icon always returns 2 (H_MSG_TRUE).
Parallelization Information
get_icon is reentrant and processed without parallelization.
Possible Predecessors
set_icon
Possible Successors
disp_region
Module
Foundation
is queried, then changed (with procedure set_paint) and finally the old value is written back. The available
modes can be viewed with the procedure query_paint. Mode is the name of the display mode. If a mode
can be customized with parameters, the parameter values are passed in a tuple after the mode name. The order of
values is the same as in set_paint.
Parameter
See also
set_pixel, set_fix
Module
Foundation
Possible Successors
set_shape, disp_region
See also
set_shape, query_shape, disp_region
Module
Foundation
query_all_colors(WindowHandle,Colors)
<interactive selection from Colors provides ActColors>
set_system(’graphic_colors’,ActColors)
open_window(0,0,1,1,’root’,’invisible’,’’,WindowHandle)
query_color(WindowHandle,F)
close_window(WindowHandle)
fwrite_string([’Setting Colors: ’,F]).
Result
query_all_colors always returns 2 (H_MSG_TRUE).
Parallelization Information
query_all_colors is reentrant, local, and processed without parallelization.
Possible Successors
set_system, set_color, disp_region
See also
query_color, set_system, set_color, disp_region, open_window, open_textwindow
Module
Foundation
) returns a list of all available colors for the set_system(::’graphic_colors’,...:) call. For screens
with truecolor output the same list is returned by query_color. The list of available colors (to HALCON)
must not be confused with the list of displayable colors. For screens with truecolor output the available colors are
only a small subset of the displayable colors. Colors that are not directly available to HALCON can be chosen
manually with set_rgb or set_hsi. If colors are chosen that are known to HALCON but cannot be displayed,
HALCON can choose a similar color. To use this feature, set_check(::’~color’:) must be set.
Parameter
open_window(0,0,-1,-1,’root’,’invisible’,’’,WindowHandle)
query_color(WindowHandle,Colors)
close_window(WindowHandle)
fwrite_string([’Displayable colors: ’,Colors]).
Result
query_color returns 2 (H_MSG_TRUE), if the window is valid. Otherwise an exception is raised.
Parallelization Information
query_color is reentrant, local, and processed without parallelization.
Possible Successors
set_color, disp_region
See also
query_all_colors, set_color, disp_region, open_window, open_textwindow
Module
Foundation
query_colored ( : : : PossibleNumberOfColors )
regiongrowing(Image,Seg,5,5,6,100)
query_colored(Colors)
set_colored(WindowHandle,Colors[1])
disp_region(Seg,WindowHandle).
Result
query_colored always returns 2 (H_MSG_TRUE).
Parallelization Information
query_colored is reentrant and processed without parallelization.
Possible Successors
set_colored, set_color, disp_region
Alternatives
query_color
See also
set_colored, set_color
Module
Foundation
Parallelization Information
query_insert is reentrant, local, and processed without parallelization.
Possible Successors
set_insert, disp_region
See also
set_insert, get_insert
Module
Foundation
Possible Successors
get_paint, set_paint, disp_image
See also
set_paint, get_paint, disp_image
Module
Foundation
query_shape ( : : : DisplayShape )
set_color(WindowHandle,[’red’,’green’])
disp_circle(WindowHandle,[100,200,300],[200,300,100],[100,100,100]).
Result
set_color returns 2 (H_MSG_TRUE) if the window is valid and the passed colors are displayable on the screen.
Otherwise an exception is raised.
Parallelization Information
set_color is reentrant, local, and processed without parallelization.
Possible Predecessors
query_color
Possible Successors
disp_region
Alternatives
set_rgb, set_hsi
See also
get_rgb, disp_region, set_fix, set_paint
Module
Foundation
Module
Foundation
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
read_image(Image,’fabrik’)
threshold(Image,Seg,100,255)
set_system(’init_new_image’,’false’)
sobel_amp(Image,Sob,’sum_abs’,3)
disp_image(Sob,WindowHandle)
get_comprise(Mode)
fwrite_string([’Current mode for gray values: ’,Mode])
fnew_line()
set_comprise(WindowHandle,’image’)
get_mbutton(WindowHandle,_,_,_)
disp_image(Sob,WindowHandle)
fwrite_string([’Current mode for gray values: image’])
fnew_line().
Result
set_comprise returns 2 (H_MSG_TRUE) if Mode is correct and the window is valid. Otherwise an exception
is raised.
Parallelization Information
set_comprise is reentrant and processed without parallelization.
Possible Predecessors
get_comprise
Possible Successors
disp_image
See also
get_comprise, disp_image, disp_color
Module
Foundation
set_draw defines the region fill mode. If Mode is set to ’fill’, output regions are filled, if set to ’margin’, only
contours are displayed. Setting Mode only affects the valid window. It is used by procedures with region output like
disp_region, disp_circle, disp_rectangle1, disp_rectangle2, disp_arrow etc. It is also
used by procedures with grayvalue output for some grayvalue output modes (e.g. ’histogram’, see set_paint).
If the mode is ’margin’, the contour can be affected with set_line_width, set_line_approx and
set_line_style.
Attention
If the output mode is ’margin’ and the line width is more than one, objects may not be displayed.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Fill mode for region output.
Default Value : ’fill’
List of values : Mode ∈ {’fill’, ’margin’}
Result
set_draw returns 2 (H_MSG_TRUE) if Mode is correct and the window is valid. Otherwise an exception
is raised.
Parallelization Information
set_draw is reentrant, local, and processed without parallelization.
Possible Predecessors
get_draw
Possible Successors
disp_region
See also
get_draw, disp_region, set_paint, disp_image, set_line_width, set_line_style
Module
Foundation
Parallelization Information
set_fix is reentrant, local, and processed without parallelization.
Possible Predecessors
get_fix
Possible Successors
set_pixel, set_rgb
See also
get_fix, set_pixel, set_rgb, set_color, set_hsi, set_gray
Module
Foundation
set_gray(WindowHandle,[100,200])
disp_circle(WindowHandle,[100,200,300],[200,300,100],[100,100,100]).
Result
set_gray returns 2 (H_MSG_TRUE) if GrayValues is displayable and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_gray is reentrant, local, and processed without parallelization.
Possible Successors
disp_region
See also
get_pixel, set_color
Module
Foundation
H = (2π · Hue)/255
I = (√6 · Intensity)/255
M1 = (sin(H) · Saturation)/(255 · √6)
M2 = (cos(H) · Saturation)/(255 · √2)
R = (2 · M1 + I)/(4 · √6)
G = (−M1 + M2 + I)/(4 · √6)
B = (−M1 − M2 + I)/(4 · √6)
Red = R · 255
Green = G · 255
Blue = B · 255
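The conversion above can be transcribed directly into code. The following Python sketch follows the formulas as typeset here; since the √ signs in the printed formulas are partly displaced, their placement is a reconstruction and should be checked against the original manual:

```python
import math

def hsi_to_rgb(hue, saturation, intensity):
    """Transcription of the printed HSI-to-RGB conversion (inputs 0..255)."""
    h = 2.0 * math.pi * hue / 255.0
    i = math.sqrt(6.0) * intensity / 255.0
    m1 = math.sin(h) * saturation / (255.0 * math.sqrt(6.0))
    m2 = math.cos(h) * saturation / (255.0 * math.sqrt(2.0))
    r = (2.0 * m1 + i) / (4.0 * math.sqrt(6.0))
    g = (-m1 + m2 + i) / (4.0 * math.sqrt(6.0))
    b = (-m1 - m2 + i) / (4.0 * math.sqrt(6.0))
    return 255.0 * r, 255.0 * g, 255.0 * b
```

Note that for Saturation = 0 the two modulation terms vanish, so Red, Green and Blue coincide, i.e. the result lies on the gray axis, as expected.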
If only one combination is passed, all output will take place in that color. If a tuple of colors is passed, the output
color of regions and geometric objects is chosen modulo the number of colors. HALCON always begins output with
the first color passed. Note that the number of output colors depends on the number of objects that are displayed
in one procedure call. If only single objects are displayed, they always appear in the first color, even if they consist
of more than one connected component.
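The modulo color selection can be sketched as follows; the helper is purely illustrative and not a HALCON call:

```python
def output_color(colors, object_index):
    """Color of the i-th displayed object when a tuple of colors was set.

    HALCON starts with the first color and cycles through the tuple.
    """
    return colors[object_index % len(colors)]
```

With set_color(WindowHandle,[’red’,’green’]), for instance, the third object in one call is displayed in ’red’ again.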
Selected colors are used until the next call of set_color, set_pixel, set_rgb or set_gray. Colors
are window-specific, i.e. only the colors of the valid window can be set. Region output colors are used by
operators like disp_region, disp_line, disp_rectangle1, disp_rectangle2, disp_arrow,
etc. They are also used by procedures with grayvalue output in certain output modes (e.g. ’3D-plot’, ’histogram’,
’contourline’, etc. See set_paint).
Attention
The selected intensities may not be available for the selected hues. In that case, the intensities will be lowered
automatically.
Parameter
Result
set_icon returns 2 (H_MSG_TRUE) if exactly one region is passed. Otherwise an exception is raised.
Parallelization Information
set_icon is reentrant and processed without parallelization.
Possible Predecessors
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region
Possible Successors
set_shape, disp_region
Module
Foundation
set_insert defines the function with which pixels are displayed in the output window. For example, a new
pixel can overwrite the old value. In most cases there is a functional relationship between old and new values.
The setting is only valid for the valid window. Output procedures that honor Mode are e.g.
disp_region, disp_polygon, disp_circle.
Possible display functions are:
Not all functions may be available, depending on the physical display. However, ’copy’ is always available.
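On 8-bit pixel values, the listed display functions behave roughly as in this sketch. The exact semantics are device-dependent, so this is an illustration only, not a specification of HALCON's behavior:

```python
def insert_pixel(old, new, mode):
    """Combine an existing pixel value with a new one (values 0..255)."""
    if mode == "copy":          # the new value overwrites the old one
        return new
    if mode == "xor":           # bitwise exclusive or of old and new value
        return old ^ new
    if mode == "complement":    # bitwise complement of the old value
        return ~old & 0xFF
    raise ValueError("unknown insert mode: " + mode)
```

A useful property of ’xor’ is that drawing the same object twice restores the original pixels, which is why it is often used for rubber-band graphics.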
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the display function.
Default Value : ’copy’
List of values : Mode ∈ {’copy’, ’xor’, ’complement’}
Result
set_insert returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_insert is reentrant, local, and processed without parallelization.
Possible Predecessors
query_insert, get_insert
Possible Successors
disp_region
See also
get_insert, query_insert
Module
Foundation
/* Calling */
set_line_approx(WindowHandle,Approximation)
set_draw(WindowHandle,’margin’)
disp_region(Obj,WindowHandle).
/* correspond with */
get_region_polygon(Obj,Approximation,Row,Col)
disp_polygon(WindowHandle,Row,Col).
Result
set_line_approx returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise
an exception is raised.
Parallelization Information
set_line_approx is reentrant and processed without parallelization.
Possible Predecessors
get_line_approx
Possible Successors
disp_region
Alternatives
get_region_polygon, disp_polygon
See also
get_line_approx, set_line_style, set_draw, disp_region
Module
Foundation
Result
set_line_style returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_line_style is reentrant, local, and processed without parallelization.
Possible Predecessors
get_line_style
Possible Successors
disp_region
See also
get_line_style, set_line_approx, disp_region
Module
Foundation
values. For binary displays, HALCON includes algorithms using a dithering matrix (fast, but low resolution),
minimal error (good, but slow) and thresholding. Using the thresholding algorithm, the threshold can be passed as
a second parameter (a tuple with the string ’threshold’ and the actual threshold, e.g. [’threshold’, 100]).
Displays with eight bit planes use approximately 200 gray values for output. Of course it is still possible to use a
binary display on those displays.
A different way to display gray values is the histogram (mode: ’histogram’). This mode has three additional
parameter values: the row (second value) and column (third value) of the histogram center for positioning on the
screen, and a scale factor (fourth value) that determines the histogram size: a scale factor
of 1 distinguishes 256 gray values, 2 distinguishes 128 gray values, 3 distinguishes 64 gray values, and so on. The
four values are passed as a tuple, e.g. [’histogram’,256,256,1]. If only the first value is passed (’histogram’), the
other values are set to defaults or the last values, respectively. For histogram computation see gray_histo.
Histogram output honors the same parameters as procedures like disp_region etc. (e.g. set_color,
set_draw, etc.)
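The relation between the scale factor and the number of distinguishable gray values described above (each step halves the count) can be expressed compactly; the function name is illustrative only:

```python
def histogram_gray_levels(scale_factor):
    """Distinguishable gray values for a 'histogram' scale factor.

    1 -> 256, 2 -> 128, 3 -> 64, and so on (each step halves the count).
    """
    return 256 // (2 ** (scale_factor - 1))
```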
Yet another mode is the display of relative frequencies of the number of connected components
(’component_histogram’). For information on computing the component histogram see shape_histo_all.
Positioning and resolution are exactly as in the mode ’histogram’.
In mode ’mean’, all object regions are displayed in their mean gray value.
The modes ’row’ and ’column’ allow the display of lines or columns, respectively. The position (row or column
index) is passed with the second parameter value. The third parameter value is the scale factor in percent (100
means one pixel per gray value, 50 means one pixel per two gray values).
Gray images can also be interpreted as 3D data, with the gray value as height. To view these 3D plots, select the
modes ’contourline’, ’3D-plot’ or ’3D-plot_hidden’.
Three-channel images are interpreted as RGB images. They can be displayed in three different modes. Two of
them can be optimized by Floyd-Steinberg dithering.
Vector field images can be viewed as ’vector_field’.
All available painting modes can be queried with query_paint.
Paramters for modes that need more than one parameter can be passed the following ways:
• Only the name of the mode is passed: the defaults or the most recently used values are used, respectively.
Example: set_paint(WindowHandle,’contourline’)
• All values are passed: all output characteristics can be set. Example: set_paint
(WindowHandle,[’contourline’,10,1])
• Only the first n values are passed: only the passed values are changed. Example: set_paint
(WindowHandle,[’contourline’,10])
• Some of the values are replaced by an asterisk (’*’): The values of the replaced parameters are not changed.
Example: set_paint(WindowHandle,[’contourline’,’*’,1])
If the current mode is ’default’, HALCON chooses a suitable algorithm for the output of 2- and 3-channel images.
No set_paint call is necessary in this case.
Apart from set_paint there are other operators that affect the output of grayvalues. The most important of
them are set_part, set_part_style, set_lut and set_lut_style. Some output modes display
grayvalues using region output (e.g. ’histogram’,’contourline’,’3D-plot’, etc.). In these modes, parameters set with
set_color, set_rgb, set_hsi, set_pixel, set_shape, set_line_width and set_insert
influence grayvalue output. This can lead to unexpected results when using set_shape(’convex’) and
set_paint(WindowHandle,’histogram’). Here the convex hull of the histogram is displayed.
Modes:
• one-channel images:
’default’ optimal display on given hardware
’gray’ grayvalue output
’mean’ mean grayvalue
’dither4_1’ binary image, dithering matrix 4x4
’dither4_2’ binary image, dithering matrix 4x4
’dither4_3’ binary image, dithering matrix 4x4
HALCON 8.0.2
404 CHAPTER 6. GRAPHICS
Attention
• Display of color images (’television’, ’grid_scan’, etc.) changes the color lookup tables.
• If a wrong color mode is set, the error message may not appear until the disp_image call.
• Grayvalue output may be influenced by region output parameters. This can yield unexpected results.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string / integer
Output mode. Additional parameters possible.
Default Value : ’default’
List of values : Mode ∈ {’default’, ’histogram’, ’row’, ’column’, ’contourline’, ’3D-plot’, ’3D-plot_hidden’,
’3D-plot_point’, ’vector_field’}
Example
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
query_paint(WindowHandle,Modi)
fwrite_string([’available gray value modes: ’,Modi])
fnew_line()
disp_image(Image,WindowHandle)
get_mbutton(WindowHandle,_,_,_)
set_color(WindowHandle,’red’)
set_draw(WindowHandle,’margin’)
set_paint(WindowHandle,’histogram’)
disp_image(Image,WindowHandle)
set_color(WindowHandle,’blue’)
set_paint(WindowHandle,[’histogram’,100,100,3])
disp_image(Image,WindowHandle)
set_color(WindowHandle,’yellow’)
set_paint(WindowHandle,[’row’,100])
disp_image(Image,WindowHandle)
get_mbutton(WindowHandle,_,_,_)
clear_window(WindowHandle)
set_paint(WindowHandle,[’contourline’,10,1])
disp_image(Image,WindowHandle)
set_lut(WindowHandle,’color’)
get_mbutton(WindowHandle,_,_,_)
clear_window(WindowHandle)
set_part(WindowHandle,100,100,300,300)
set_paint(WindowHandle,’3D-plot’)
disp_image(Image,WindowHandle).
Result
set_paint returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_paint is reentrant, local, and processed without parallelization.
Possible Predecessors
query_paint, get_paint
Possible Successors
disp_image
See also
get_paint, query_paint, disp_image, set_shape, set_rgb, set_color, set_gray
Module
Foundation
Row1 = Column1 = Row2 = Column2 = -1: The window size is chosen as the image part, i.e. no zooming of
the image will be performed.
Row1, Column1 > -1 and Row2 = Column2 = -1: The size of the last displayed image (in this window) is
chosen as the image part, i.e. the image can be displayed completely in the window. For this the image
will be zoomed if necessary.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window_id.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; integer
Row of the upper left corner of the chosen image part.
Default Value : 0
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; integer
Column of the upper left corner of the chosen image part.
Default Value : 0
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Height-1,Width-1)
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle).
Result
set_part returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
set_part is reentrant and processed without parallelization.
Possible Predecessors
get_part
Possible Successors
set_part_style, disp_image, disp_region
Alternatives
affine_trans_image
See also
get_part, set_part_style, disp_region, disp_image, disp_color
Module
Foundation
Result
set_part_style returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
set_part_style is reentrant and processed without parallelization.
Possible Predecessors
get_part_style
Possible Successors
set_part, disp_image, disp_region
Alternatives
affine_trans_image
See also
get_part_style, set_part, disp_image, disp_color
Module
Foundation
set_rgb sets the output color(s) or the grayvalues, respectively, for region output for the window. The colors are
defined with the red, green and blue components. If only one combination is passed, all output takes place in that
color. If a tuple is passed, region output and output of geometric objects takes place modulo the passed colors.
For every call of an output procedure, output is started with the first color. If only one object is displayed per call,
it will always be displayed in the first color. This is even true for objects with multiple connected components.
If multiple objects are displayed per procedure call, multiple colors are used. The defined colors are used until
set_color, set_pixel, set_rgb or set_gray is called again. The values are used by procedures like
disp_region, disp_line, disp_rectangle1, disp_rectangle2, disp_arrow, etc.
Attention
If a passed color is not available, an exception handling is raised. If set_check(::’~color’:) was called before,
HALCON uses a similar color and suppresses the error.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window_id.
. Red (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Red component of the color.
Default Value : 255
Typical range of values : 0 ≤ Red ≤ 255
Restriction : (0 ≤ Red) ∧ (Red ≤ 255)
. Green (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Green component of the color.
Default Value : 0
Typical range of values : 0 ≤ Green ≤ 255
Restriction : (0 ≤ Green) ∧ (Green ≤ 255)
. Blue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Blue component of the color.
Default Value : 0
Typical range of values : 0 ≤ Blue ≤ 255
Restriction : (0 ≤ Blue) ∧ (Blue ≤ 255)
Result
set_rgb returns 2 (H_MSG_TRUE) if the window is valid and all passed colors are available and displayable.
Otherwise an exception handling is raised.
Parallelization Information
set_rgb is reentrant, local, and processed without parallelization.
Possible Successors
disp_image, disp_region
Alternatives
set_hsi, set_color, set_gray
See also
set_fix, disp_region
Module
Foundation
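A minimal usage sketch of passing a color tuple to set_rgb (the image and segmentation are borrowed from the set_shape example; WindowHandle is assumed to refer to an open window):

```
read_image(Image,’fabrik’)
regiongrowing(Image,Seg,5,5,6,100)
* cycle through pure red, green and blue for region output
set_rgb(WindowHandle,[255,0,0],[0,255,0],[0,0,255])
disp_region(Seg,WindowHandle)
```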
’original’: The shape is displayed unchanged. Nevertheless modifications via parameters like set_line_width or
set_line_approx can take place. This is also true for all other modes.
’outer_circle’: Each region is displayed by the smallest surrounding circle. (See smallest_circle.)
’inner_circle’: Each region is displayed by the largest included circle. (See inner_circle.)
’ellipse’: Each region is displayed by an ellipse with the same moments and orientation (See elliptic_axis.)
’rectangle1’: Each region is displayed by the smallest surrounding rectangle parallel to the coordinate axes. (See
smallest_rectangle1.)
’rectangle2’: Each region is displayed by the smallest surrounding rectangle. (See smallest_rectangle2.)
’convex’: Each region is displayed by its convex hull. (See convexity.)
’icon’: Each region is displayed at its center of gravity by the icon set with set_icon.
Attention
Caution is advised for grayvalue output procedures with output parameter settings that use region output,
e.g. disp_image with set_paint(::WindowHandle,’histogram’:) and set_shape(::
WindowHandle,’convex’:). In that case the convex hull of the grayvalue histogram is displayed.
Parameter
read_image(Image,’fabrik’)
regiongrowing(Image,Seg,5,5,6,100)
set_colored(WindowHandle,12)
set_shape(WindowHandle,’rectangle2’)
disp_region(Seg,WindowHandle).
Result
set_shape returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_shape is reentrant and processed without parallelization.
Possible Predecessors
set_icon, query_shape, get_shape
Possible Successors
disp_region
See also
get_shape, query_shape, disp_region
Module
Foundation
6.7 Text
get_font ( : : WindowHandle : Font )
set_system(’default_font’,Fontname) prior to opening the window. A list of all available fonts can
be obtained using query_font.
Parameter
get_font(WindowHandle,CurrentFont)
set_font(WindowHandle,MyFont)
write_string(WindowHandle,[’The name of my font is: ’,MyFont])
new_line(WindowHandle)
set_font(WindowHandle,CurrentFont)
Result
get_font returns 2 (H_MSG_TRUE).
Parallelization Information
get_font is reentrant and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, query_font
Possible Successors
set_font
See also
set_font, query_font, open_window, open_textwindow, set_system
Module
Foundation
Result
get_string_extents returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is
raised.
Parallelization Information
get_string_extents is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font
Possible Successors
set_tposition, write_string, read_string, read_char
See also
set_tposition, set_font
Module
Foundation
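A minimal sketch of positioning text with the queried string extents (the output order Ascent, Descent, Width, Height is an assumption based on the usual HALCON parameter convention; WindowHandle is assumed to refer to an open window):

```
* query the extents of the string before writing it
get_string_extents(WindowHandle,’Hello’,Ascent,Descent,Width,Height)
* position the baseline so that the string ends at column 300
set_tposition(WindowHandle,100,300-Width)
write_string(WindowHandle,’Hello’)
```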
A text cursor marks the current position for text output (it can also be invisible). It is different from the mouse
cursor (although both are called ’cursor’ when the context rules out confusion). The available shapes for the text
cursor can be queried with query_tshape.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. TextCursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the current text cursor.
Result
get_tshape returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_tshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font
Possible Successors
set_tshape, set_tposition, write_string, read_string, read_char
See also
set_tshape, query_tshape, write_string, read_string
Module
Foundation
new_line ( : : WindowHandle : )
Set the position of the text cursor to the beginning of the next line.
new_line sets the position of the text cursor to the beginning of the next line. The new position depends on the
current font. The left end of the baseline for writing the following text string (not considering descenders) is placed
on this position.
If the next line does not fit into the window, the content of the window is scrolled upward by the height of one
line. In order to reach the correct new cursor position, the font used in the next line must be set before
new_line is called. The position is changed by the output or input of text ( write_string, read_string)
or by an explicit change of position ( set_tposition).
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
Result
new_line returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
new_line is reentrant and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font, write_string
Alternatives
get_tposition, get_string_extents, set_tposition, move_rectangle
See also
write_string, set_font
Module
Foundation
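A minimal usage sketch (WindowHandle is assumed to come from open_window or open_textwindow):

```
set_tposition(WindowHandle,24,10)
write_string(WindowHandle,’first line’)
new_line(WindowHandle)
write_string(WindowHandle,’second line’)
```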
query_font queries the fonts available for text output in the output window. They can be set with the operator
set_font. Fonts are used by the operators write_string, read_char, read_string and new_line.
Attention
For different machines the available fonts may differ a lot. Therefore query_font will return different fonts on
different machines.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Font (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Tuple with available font names.
Example
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_check(’~text’)
query_font(WindowHandle,Fontlist)
set_color(WindowHandle,’white’)
for i=0 to |Fontlist|-1 by 1
set_font(WindowHandle,Fontlist[i])
write_string(WindowHandle,Fontlist[i])
new_line(WindowHandle)
endfor
Result
query_font returns 2 (H_MSG_TRUE).
Parallelization Information
query_font is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
set_font, write_string, read_string, read_char
See also
set_font, write_string, read_string, read_char, new_line
Module
Foundation
Possible Successors
set_tshape, write_string, read_string
See also
set_tshape, get_shape, set_tposition, write_string, read_string
Module
Foundation
Attention
The window has to be a text window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Char (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Input character (if it is not a control character).
. Code (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Code for input character.
Result
read_char returns 2 (H_MSG_TRUE) if the text window is valid. Otherwise an exception handling is raised.
Parallelization Information
read_char is reentrant, local, and processed without parallelization.
Possible Predecessors
open_textwindow, set_font
Alternatives
read_string, fread_char, fread_string
See also
write_string, set_font
Module
Foundation
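A minimal sketch (the window is assumed to be a text window created with open_textwindow):

```
write_string(WindowHandle,’press a key: ’)
read_char(WindowHandle,Char,Code)
new_line(WindowHandle)
write_string(WindowHandle,[’character: ’,Char])
```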
The maximum size has to be small enough to keep the string within the right window boundary. A default string
which can be edited or simply accepted by the user may be provided. After text input the text cursor is positioned
at the end of the edited string. Commands for editing:
Attention
The window has to be a text window.
Parameter
-FontName-Height-Width-Italic-Underlined-Strikeout-Bold-CharSet-
where “Italic”, “Underlined”, “Strikeout” and “Bold” can take the values 1 and 0 to activate or deactivate the
corresponding feature. “CharSet” can be used to select the character set, if it differs
from the default one. You can use the names of the defines (ANSI_CHARSET, BALTIC_CHARSET,
CHINESEBIG5_CHARSET, DEFAULT_CHARSET, EASTEUROPE_CHARSET, GB2312_CHARSET,
GREEK_CHARSET, HANGUL_CHARSET, MAC_CHARSET, OEM_CHARSET, RUSSIAN_CHARSET,
• -Arial-10-*-1-*-*-1-ANSI_CHARSET-
• -Arial-10-*-1-*-*-1-
• -Arial-10-
Please refer to the Windows documentation (Fonts and Text in the MSDN) for a detailed discussion.
On UNIX environments the Font is specified by a string with the following components:
-FOUNDRY-FAMILY_NAME-WEIGHT_NAME-SLANT-SETWIDTH_NAME-ADD_STYLE_NAME-PIXEL_SIZE
-POINT_SIZE-RESOLUTION_X-RESOLUTION_Y-SPACING-AVERAGE_WIDTH-CHARSET_REGISTRY
-CHARSET_ENCODING,
where FOUNDRY identifies the organisation that supplied the font. The actual name of the font is given in
FAMILY_NAME (e.g. ’courier’). WEIGHT_NAME describes the typographic weight of the font in human-readable
form (e.g. ’medium’, ’semibold’, ’demibold’, or ’bold’). SLANT is one of the following codes:
• r for Roman
• i for Italic
• o for Oblique
• ri for Reverse Italic
• ro for Reverse Oblique
• ot for Other
SETWIDTH_NAME describes the proportionate width of the font (e.g. ’normal’). ADD_STYLE_NAME identifies
additional typographic style information (e.g. ’serif’ or ’sans serif’) and is empty in most cases.
The PIXEL_SIZE is the height of the font on the screen in pixels, while POINT_SIZE is the print size the font
was designed for. RESOLUTION_Y and RESOLUTION_X contain the vertical and horizontal resolution of the
font. SPACING may be one of the following three codes:
• p for Proportional,
• m for Monospaced, or
• c for CharCell.
The AVERAGE_WIDTH is the mean width of the characters in the font. The character set encoded in the font
is described in CHARSET_REGISTRY and CHARSET_ENCODING (e.g. ISO8859-1).
An example of a valid string for Font would be
’-adobe-courier-medium-r-normal--12-120-75-75-m-70-iso8859-1’,
which is a 12 px medium-weight courier font. As on Windows systems, not all fields need to be specified; a *
can be used instead:
’-adobe-courier-medium-r-*--12-*-*-*-*-*-*-*’.
Please refer to "X Logical Font Description Conventions" for detailed information on individual parameters.
Attention
For different machines the available fonts may differ a lot. Therefore it is suggested to use wildcards, tables of
fonts and/or the operator query_font.
Parameter
Example
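A minimal sketch (font availability differs between machines, so a name returned by query_font is used instead of a hard-coded font string):

```
query_font(WindowHandle,Fontlist)
set_font(WindowHandle,Fontlist[0])
write_string(WindowHandle,’Hello’)
```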
Result
set_font returns 2 (H_MSG_TRUE) if the font name is correct. Otherwise an exception handling is raised.
Parallelization Information
set_font is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
query_font
See also
get_font, query_font, open_textwindow, open_window
Module
Foundation
See also
read_string, set_tshape, write_string
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. String (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer / real
Tuple of output values (all types).
Default Value : ’hello’
Result
write_string returns 2 (H_MSG_TRUE) if the window is valid and the output text fits within the current line
(see set_check). Otherwise an exception handling is raised.
Parallelization Information
write_string is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font, get_string_extents
Alternatives
fwrite_string
See also
set_tposition, get_string_extents, open_textwindow, set_font, set_system,
set_check
Module
Foundation
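A minimal sketch showing mixed-type output in one tuple (WindowHandle is assumed to refer to an open window):

```
set_tposition(WindowHandle,24,10)
write_string(WindowHandle,[’width: ’,512,’ height: ’,512])
new_line(WindowHandle)
```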
6.8 Window
clear_rectangle ( : : WindowHandle, Row1, Column1, Row2, Column2 : )
Result
If an output window exists and the specified parameters are correct clear_rectangle returns 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
clear_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
draw_rectangle1
Alternatives
clear_window, disp_rectangle1
See also
open_window, open_textwindow
Module
Foundation
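A minimal sketch: the user draws a rectangle, and exactly that part of the window is cleared (WindowHandle is assumed to refer to an open window with an image displayed):

```
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
clear_rectangle(WindowHandle,Row1,Column1,Row2,Column2)
```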
clear_window ( : : WindowHandle : )
clear_window(WindowHandle).
Result
If the output window is valid clear_window returns 2 (H_MSG_TRUE). If necessary an exception handling is
raised.
Parallelization Information
clear_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
clear_rectangle, disp_rectangle1
See also
open_window, open_textwindow
Module
Foundation
close_window ( : : WindowHandle : )
read_image(Image,’affe’)
open_window(0,0,-1,-1,’root’,’buffer’,’’,WindowHandle)
disp_image(Image,WindowHandle)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandleDestination)
repeat()
get_mbutton(WindowHandleDestination,Row,Column,Button)
copy_rectangle(WindowHandle,WindowHandleDestination,20,90,120,390,Row,Column)
until(Button = 1)
close_window(WindowHandleDestination)
close_window(WindowHandle)
clear(Image).
Result
If the output window is valid and if the specified parameters are correct close_window returns 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
copy_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
close_window
Alternatives
move_rectangle, slide_image
See also
open_window, open_textwindow
Module
Foundation
Attention
Under UNIX, the graphics window must be completely visible on the root window, because otherwise the contents
of the window cannot be read due to limitations in X Windows. If larger graphical displays are to be written to a
file, the window type ’pixmap’ can be used.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
. Device (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer
Name of the target device or of the graphic format.
Default Value : ’postscript’
List of values : Device ∈ {’postscript’, ’tiff’, ’bmp’, ’jpeg’, ’jp2’, ’png’, ’jpeg 100’, ’jpeg 80’, ’jpeg 60’,
’jpeg 40’, ’jpeg 20’, ’jp2 50’, ’jp2 40’, ’jp2 30’, ’jp2 20’, ’png best’, ’png fastest’, ’png none’}
Result
If the appropriate window is valid and the specified parameters are correct dump_window returns 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
dump_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow,
disp_region
Possible Successors
system_call
See also
open_window, open_textwindow, set_system, dump_window_image
Module
Foundation
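A minimal sketch (the target file name is an assumption; the extension is chosen by HALCON according to the graphic format):

```
disp_image(Image,WindowHandle)
* write the window content in PNG format
dump_window(WindowHandle,’png’,’/tmp/window_dump’)
```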
/* Draw a line into a HALCON window under UNIX using X11 calls. */
#include "HalconC.h"
#include <X11/X.h>
#include <X11/Xlib.h>
/* Draw a line into a HALCON window under Windows using GDI calls. */
#include "HalconC.h"
#include "windows.h"
Result
If the window is valid get_os_window_handle returns 2 (H_MSG_TRUE). Otherwise, an exception handling
is raised.
Parallelization Information
get_os_window_handle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Module
Foundation
Parameter
Parallelization Information
get_window_attr is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
open_window, set_window_attr
Module
Foundation
open_window(100,100,200,200,’root’,’visible’,’’,WindowHandle)
fwrite_string(’Move the window with the mouse!’)
fnew_line()
repeat()
get_mbutton(WindowHandle,_,_,Button)
get_window_extents(WindowHandle,Row,Column,Width,Height)
fwrite_string([’(’,Row,’,’,Column,’)’])
fnew_line()
until(Button = 4).
Result
If the window is valid get_window_extents returns 2 (H_MSG_TRUE). If necessary an exception handling
is raised.
Parallelization Information
get_window_extents is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
set_window_extents, open_window, open_textwindow
Module
Foundation
Parameter
open_window(100,100,200,200,’root’,’visible’,’’,WindowHandle)
get_window_type(WindowHandle,WindowType)
fwrite_string([’Window type: ’,WindowType])
fnew_line().
Result
If the window is valid get_window_type returns 2 (H_MSG_TRUE). If necessary an exception handling is
raised.
Parallelization Information
get_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
query_window_type, set_window_type, get_window_pointer3, open_window,
open_textwindow
Module
Foundation
Result
If the window is valid and the specified parameters are correct move_rectangle returns 2 (H_MSG_TRUE).
If necessary an exception handling is raised.
Parallelization Information
move_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
copy_rectangle
See also
open_window, open_textwindow
Module
Foundation
HALCON:
set_color(WindowHandle,"green");
disp_region(WindowHandle,region);
Windows NT:
HPEN* penold;
HPEN penGreen = CreatePen(PS_SOLID,1,RGB(0,255,0));
penold = (HPEN*)SelectObject(WINHDC,penGreen);
disp_region(WindowHandle,region);
Interactive operators, for example draw_region, draw_circle or get_mbutton cannot be used in this
window. The following operators can be used:
• Output of gray values: set_paint, set_comprise, ( set_lut and set_lut_style after output)
• Regions: set_color, set_rgb, set_hsi, set_gray, set_pixel, set_shape,
set_line_width, set_insert, set_line_style, set_draw
• Image part: set_part
• Text: set_font
You may query the currently set values by calling procedures like get_shape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling operators like
query_color.
The parameter WINHWnd is used to pass the window handle of the Windows NT window, in which output should
be done. The parameter WINHDC is used to pass the device context of the window WINHWnd. This device context
is used in the output routines of HALCON.
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximum: Height-1), the column index grows to the right (maximum: Width-1).
You may use the value -1 for the parameters Width and Height. This means that the corresponding value is chosen
automatically. In particular, this is important if the aspect ratio of the pixels is not 1.0 (see set_system). If
one of the two parameters is set to -1, it will be computed from the other value and the aspect ratio of the
pixels. If both parameters are set to -1, they will be set to the current image format.
The position and size of a window may change during the runtime of a program. This may be achieved by calling
set_window_extents, but also through external influences (window manager). For the latter case the
procedure get_window_extents is provided.
Opening a window causes the assignment of a default font. It is used in connection with procedures
like write_string and you may change it by performing set_font after calling open_window.
On the other hand, you have the possibility to specify a default font by calling set_system(::
’default_font’,<Fontname>:) before opening a window (and all following windows; see also
query_font).
You may set the color of graphics and font, which is used for output procedures like disp_region or
disp_circle, by calling set_rgb, set_hsi, set_gray or set_pixel. Calling set_insert
specifies how graphics are combined with the current content of the window. For example, with
set_insert(::’not’:) previously written text is erased by writing it a second time at the same position.
The content of the window is not saved if other windows overlap it. This must be done in the program
code that handles the Windows NT window in the calling program.
For graphical output ( disp_image, disp_region, etc.) you may adjust the window by calling procedure
set_part in order to represent a logical clipping of the image format. In particular this implies that only this
part (appropriately scaled) of images and regions is displayed. Before you close the operating system window,
you have to close the HALCON window.
Steps to use new_extern_window:
Attention
Note that parameters such as Row, Column, Width and Height are constrained by the output device, i.e., the
size of the Windows NT desktop.
Parameter
. WINHWnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Windows windowhandle of a previously created window.
Restriction : WINHWnd ≠ 0
. WINHDC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Device context of WINHWnd.
Restriction : WINHDC ≠ 0
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; integer
Row coordinate of upper left corner.
Default Value : 0
Restriction : Row ≥ 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; integer
Column coordinate of upper left corner.
Default Value : 0
Restriction : Column ≥ 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; integer
Width of the window.
Default Value : 512
Restriction : (Width > 0) ∨ (Width = -1)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; integer
Height of the window.
Default Value : 512
Restriction : (Height > 0) ∨ (Height = -1)
. WindowHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; integer
Window identifier.
Example (Syntax: C++)
HTuple m_tHalconWindow ;
Hobject m_objImage ;
WM_CREATE:
/* here you should create your extern halcon window*/
HTuple tWnd, tDC ;
::set_check("~father") ;
tWnd = (INT)((INT*)&m_hWnd) ;
tDC = (INT)(INT*)GetWindowDC() ;
::new_extern_window(tWnd, tDC, 0, 0, sizeTotal.cx, sizeTotal.cy, &m_tHalconWindow) ;
::set_check("father") ;
WM_PAINT:
/* here you can draw halcon objects */
long l = 0 ;
if (m_tHalconWindow != -1) {
/* don't forget to set the dc !! */
HTuple tDC((INT)(INT*)&pDC->m_hDC) ;
HTuple tDCNull((INT)(INT*)&l) ;
::set_window_dc(m_tHalconWindow,tDC) ;
::disp_obj(pDoc->m_objImage, m_tHalconWindow) ;
/* release the graphic objects */
::set_window_dc(m_tHalconWindow, tDCNull) ;
}
HALCON 8.0.2
434 CHAPTER 6. GRAPHICS
WM_CLOSE:
/* close the halcon window */
if (m_tHalconWindow != -1) {
::close_window(m_tHalconWindow) ;
}
Result
If the values of the specified parameters are correct new_extern_window returns 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
new_extern_window is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_window, open_textwindow
See also
open_window, disp_region, disp_image, disp_color, set_lut, query_color,
set_color, set_rgb, set_hsi, set_pixel, set_gray, set_part, set_part_style,
query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_window_extents, get_window_extents, set_window_attr,
set_check, set_system
Module
Foundation
<Host>:0.0
.
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter FatherWindow can be used to determine the
father window of the window to be opened. If the control ’father’ is set via set_check, FatherWindow refers
to the ID of a HALCON window; otherwise (set_check(’~father’)) it refers to the ID of an operating system
window. If FatherWindow is passed the value 0 or ’root’, the desktop (under Windows) or the root window
(under Unix) becomes the father window. In this case, the value of the control ’father’ (set via set_check) is
irrelevant.
Position and size of a window may change during the runtime of a program. This can be caused by calling
set_window_extents, but also by external influences (window manager). In the latter case the operator
get_window_extents can be used to query the current values.
Opening a window causes the assignment of a so-called default font. It is used in connection with operators like
write_string, and you may override it by calling set_font after open_textwindow. Alternatively,
you can specify the default font for a window (and all windows opened afterwards) by calling set_system(::
’default_font’,<Fontname>:) before opening it (see also query_font).
You may set the color of the font ( write_string, read_string) by calling set_color, set_rgb,
set_hsi, set_gray, or set_pixel. Calling set_insert specifies how text and graphics are combined
with the current window content. For example, after calling set_insert(::’not’:), writing the same text
twice at the same position erases it again.
Normally every output (e.g., write_string, disp_region, disp_circle, etc.) in a window is terminated
by a "flush": the data is fully visible on the display as soon as the output operator returns. This is not necessary in
all cases, in particular with continuous output tasks or an active mouse procedure; it is then more efficient (i.e.,
faster) to buffer the data until enough has accumulated. You can deactivate the flushing by calling set_system(::
’flush_graphic’,’false’:).
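The buffering described above can be sketched as follows (hedged: WindowHandle is assumed to be an open window):

```hdevelop
* suppress per-operator flushing during a longer output sequence
set_system ('flush_graphic', 'false')
for I := 1 to 100 by 1
    disp_circle (WindowHandle, 256, 256, I)
endfor
* re-enable flushing; the next output makes everything visible at once
set_system ('flush_graphic', 'true')
disp_circle (WindowHandle, 256, 256, 101)
```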
The content of a window is saved (if the driver software supports it); i.e., it is preserved even while the window is
hidden by other windows. This is not always necessary: if you use a textual window, e.g., only as a parent window
for other windows, you may suppress this backing-store mechanism for it and thereby save the required memory.
To do so, call set_system(::’backing_store’,’false’:) before opening the window.
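A sketch of this setting (hedged: the window geometry and colors are placeholder values):

```hdevelop
* disable the backing store before opening a pure parent window
set_system ('backing_store', 'false')
open_textwindow (0, 0, 900, 600, 1, 'black', 'slate blue', 'root', 'visible', '', Father)
* re-enable it for windows whose content must survive being hidden
set_system ('backing_store', 'true')
open_window (10, 10, 570, 580, Father, 'visible', '', WindowHandle)
```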
Difference: graphical window - textual window
• In contrast to graphical windows ( open_window), more parameters (color, border) can be specified for a
textual window when opening it.
• Only textual windows can be used for the input of user data ( read_string).
• In textual windows, the output of images, regions, and graphics is "clipped" at the window border, whereas
in graphical windows it is "zoomed" to fit.
• The coordinate system (e.g., with get_mbutton or get_mposition) consists of display coordinates
independent of the image size; the maximum coordinates equal the window size minus 1. In contrast,
graphical windows ( open_window) always use a coordinate system that corresponds to the image format.
The parameter Mode specifies the mode of the window. It can have the following values:
’visible’: Normal mode for textual windows: the window is created according to the parameters, and all input
and output are possible.
’invisible’: Invisible windows are not shown on the display. Parameters like Row, Column, BorderWidth,
BorderColor, BackgroundColor, and FatherWindow have no meaning. Output to these windows
has no effect, and input ( read_string, mouse, etc.) is not possible. You may use such windows to query
representation parameters of an output device without opening a (visible) window. Typical queries are, e.g.,
query_color and get_string_extents.
’transparent’: These windows are transparent: the window itself (border and background) is not visible, but
all other operations are possible and all output is displayed. Parameters like BorderColor and
BackgroundColor have no meaning. A common use of this mode is the creation of mouse-sensitive
regions.
’buffer’: These windows are also invisible. The output of images, regions, and graphics is not shown on
the display but stored in memory. Parameters like Row, Column, BorderWidth, BorderColor,
BackgroundColor, and FatherWindow have no meaning. Buffer windows are useful if you prepare
output in the background and finally copy it into a visible window with copy_rectangle; another use is
the rapid processing of image regions during interactive manipulations. Textual input and mouse interaction
are not possible in this mode.
Attention
Keep in mind that parameters like Row, Column, Width, and Height are restricted by the output device. If a
father window (FatherWindow <> ’root’) is specified, the coordinates are relative to this window.
Parameter
open_textwindow(0,0,900,600,1,'black','slate blue','root','visible',
'',Father)
open_textwindow(10,10,300,580,3,'red','blue',Father,'visible',
'',WindowHandle)
open_window(10,320,570,580,Father,'visible','',WindowHandle)
set_color(WindowHandle,'red')
read_image(Image,'affe')
disp_image(Image,WindowHandle)
repeat()
get_mposition(WindowHandle,Row,Column,Button)
get_grayval(Image,Row,Column,Gray)
write_string(WindowHandle,[' Position (',Row,',',Column,') '])
write_string(WindowHandle,['Gray value (',Gray,') '])
new_line(WindowHandle)
until(Button = 4)
close_window(WindowHandle)
clear_obj(Image)
Result
If the values of the specified parameters are correct, open_textwindow returns 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
open_textwindow is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_window
See also
write_string, read_string, new_line, get_string_extents, get_tposition,
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Module
Foundation
open_window opens a new window, which can be used for the output of gray value data, regions, and graphics
as well as for textual output. All output ( disp_region, disp_image, etc.) is redirected to this window if the
same logical window number WindowHandle is used.
The background of the created window is initially black, and it has a white border that is 2 pixels wide (see also
set_window_attr(::’border_width’,<Width>:)).
Certain parameters used for the editing of output data are assigned to a window. These parameters are taken into
account during the output itself (e.g., with disp_image or disp_region). They are not specified by an output
operator, but by "configuration operators". If you want, e.g., the color red for the output of regions, you have
to call set_color(::WindowHandle,’red’:) before calling disp_region. These parameters are
always set for the window with the logical window number WindowHandle and remain assigned to it until they
are overwritten. The following configuration operators are available:
• Output of gray values: set_paint, set_comprise ( set_lut and set_lut_style after output)
• Regions: set_color, set_rgb, set_hsi, set_gray, set_pixel, set_shape,
set_line_width, set_insert, set_line_style, set_draw
• Image clipping: set_part
• Text: set_font
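The interplay of configuration and output operators can be sketched as follows (hedged: WindowHandle and Region are assumed to come from an earlier open_window call and segmentation step):

```hdevelop
* settings stick to the window and affect all subsequent output
set_color (WindowHandle, 'red')
set_draw (WindowHandle, 'margin')
set_line_width (WindowHandle, 3)
disp_region (Region, WindowHandle)
* later output reuses the same settings until they are overwritten
set_color (WindowHandle, 'green')
disp_region (Region, WindowHandle)
```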
You may query the currently set values by calling operators like get_shape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling query_color.
The origin of the coordinate system of the window is in the upper left corner (coordinates: (0,0)). The row index
grows downward (maximum: Height-1), the column index grows to the right (maximum: Width-1). Keep in
mind that the range of the coordinate system is independent of the window size; it is determined solely by the
image format (see reset_obj_db).
The parameter Machine indicates the name of the computer that has to open the window. In case of an X window,
under TCP/IP only the name is given; under DECnet a colon is additionally appended to the name. The "server"
and the "screen" are not specified. If the empty string is passed, the environment variable DISPLAY is used; it
indicates the target computer, with the name given in the common syntax
<Host>:0.0
.
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter FatherWindow can be used to determine the
father window of the window to be opened. If the control ’father’ is set via set_check, FatherWindow refers
to the ID of a HALCON window; otherwise (set_check(’~father’)) it refers to the ID of an operating system
window. If FatherWindow is passed the value 0 or ’root’, the desktop (under Windows) or the root window
(under Unix) becomes the father window. In this case, the value of the control ’father’ (set via set_check) is
irrelevant.
You may pass the value -1 for the parameters Width and Height, which means that the corresponding value
is determined automatically. This is important in particular if the pixel aspect ratio is not 1.0 (see
set_system): if one of the two parameters is set to -1, it is determined from the other via the pixel aspect
ratio. If both parameters are set to -1, they are set to the maximum image format currently in use (further
information about the currently used maximum image format can be found in the description of get_system
under "width" and "height").
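The automatic sizing can be sketched like this (hedged: the positions and width are placeholder values):

```hdevelop
* Height = -1: derived from Width and the pixel aspect ratio
open_window (0, 0, 512, -1, 'root', 'visible', '', WindowHandle1)
* both -1: window sized to the currently used maximum image format
open_window (0, 0, -1, -1, 'root', 'visible', '', WindowHandle2)
```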
Position and size of a window may change during the runtime of a program. This can be caused by calling
set_window_extents, but also by external influences (window manager). In the latter case the operator
get_window_extents can be used to query the current values.
Opening a window causes the assignment of a so-called default font. It is used in connection with operators
like write_string, and you may override it by calling set_font after open_window. Alternatively,
you can specify the default font for a window (and all windows opened afterwards) by calling set_system(::
’default_font’,<Fontname>:) before opening it (see also query_font).
You may set the color of graphics and text used by output operators like disp_region or disp_circle
by calling set_rgb, set_hsi, set_gray, or set_pixel. Calling set_insert specifies how graphics
are combined with the current window content. For example, after calling set_insert(::’not’:), writing
the same text twice at the same position erases it again.
Normally every output (e.g., disp_image, disp_region, disp_circle, etc.) in a window is terminated
by a so-called "flush": the data is fully visible on the display as soon as the output operator returns. This is not
necessary in all cases, in particular with continuous output tasks or an active mouse procedure; it is then more
efficient (i.e., faster) to buffer the data until enough has accumulated. You can deactivate the flushing by calling
set_system(::’flush_graphic’,’false’:).
The content of a window is saved (if the driver software supports it); i.e., it is preserved even while the window is
hidden by other windows. This is not always necessary: if the content of a window is permanently rebuilt
( copy_rectangle), you may suppress this backing-store mechanism and thereby save the required memory.
This is done by calling set_system(::’backing_store’,’false’:) before opening the window. In
doing so you save not only memory but also computation time, which is significant for the output of video clips
(see copy_rectangle).
For graphical output ( disp_image, disp_region, etc.) you may adjust the window by calling set_part
in order to display a logical clipping of the image format. As a consequence, only this clipping (with appropriate
enlargement) of images and regions is displayed.
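A sketch of such a clipping (hedged: Image is assumed to be an already loaded 512x512 image displayed in WindowHandle):

```hdevelop
* show only the upper left quarter of the image, zoomed to the window
set_part (WindowHandle, 0, 0, 255, 255)
disp_image (Image, WindowHandle)
* reset the part to the full image format
set_part (WindowHandle, 0, 0, 511, 511)
disp_image (Image, WindowHandle)
```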
Difference: graphical window - textual window
• The layout of graphical windows is not as flexible as that of textual windows.
• Only textual windows can be used for the input of user data ( read_string).
• During the output of images, regions, and graphics, graphical windows perform a "zooming": independent
of the size and aspect ratio of the window, images are transformed such that they fill the window completely.
In contrast, output in textual windows does not take the size of the window into account (except for clipping
where necessary).
• In graphical windows the coordinate system of the window corresponds to the coordinate system of the
image format. In textual windows the coordinate system always equals the display coordinates, independent
of the image size.
The parameter Mode determines the mode of the window. It may have the following values:
’visible’: Normal mode for graphical windows: the window is created according to the parameters, and all input
and output are possible.
’invisible’: Invisible windows are not shown on the display. Parameters like Row, Column, and
FatherWindow have no meaning. Output to these windows has no effect, and input ( read_string,
mouse, etc.) is not possible. You may use such windows to query representation parameters of an
output device without opening a (visible) window. Typical queries are, e.g., query_color and
get_string_extents.
’transparent’: These windows are transparent: the window itself (border and background) is not visible, but all
other operations are possible and all output is displayed. A common use of this mode is the creation of
mouse-sensitive regions.
’buffer’: These windows are also invisible. The output of images, regions, and graphics is not shown on the
display but stored in memory. Parameters like Row, Column, and FatherWindow have no meaning.
Buffer windows are useful if you prepare output in the background and finally copy it into a visible window
with copy_rectangle; another use is the rapid processing of image regions during interactive
manipulations. Textual input and mouse interaction are not possible in this mode.
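A buffer-window round trip can be sketched as follows (hedged: window sizes are placeholder values; ’fabrik’ is the example image used elsewhere in this chapter):

```hdevelop
* prepare output invisibly, then blit it into the visible window
open_window (0, 0, 512, 512, 'root', 'buffer', '', BufferWindow)
open_window (0, 0, 512, 512, 'root', 'visible', '', VisibleWindow)
read_image (Image, 'fabrik')
disp_image (Image, BufferWindow)
copy_rectangle (BufferWindow, VisibleWindow, 0, 0, 511, 511, 0, 0)
close_window (BufferWindow)
```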
Attention
Keep in mind that parameters like Row, Column, Width, and Height are constrained by the output device. If a
father window (FatherWindow <> ’root’) is specified, the coordinates are relative to this window.
Parameter
open_window(0,0,400,-1,’root’,’visible’,’’,WindowHandle)
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
write_string(WindowHandle,’File, fabrik.ima’)
new_line(WindowHandle)
get_mbutton(WindowHandle,_,_,_)
set_lut(WindowHandle,’temperature’)
set_color(WindowHandle,’blue’)
write_string(WindowHandle,’temperature’)
new_line(WindowHandle)
write_string(WindowHandle,’Draw Rectangle’)
new_line(WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
new_line(WindowHandle)
Result
If the values of the specified parameters are correct, open_window returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
open_window is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_textwindow
See also
disp_region, disp_image, disp_color, set_lut, query_color, set_color, set_rgb,
set_hsi, set_pixel, set_gray, set_part, set_part_style, query_window_type,
get_window_type, set_window_type, get_mposition, set_tposition,
set_window_extents, get_window_extents, set_window_attr, set_check, set_system
Module
Foundation
query_window_type ( : : : WindowTypes )
Parameter
’border_width’ Width of the window border in pixels. Not implemented under Windows.
’border_color’ Color of the window border. Not implemented under Windows.
’background_color’ Background color of the window.
’window_title’ Name of the window in the titlebar.
Attention
You have to call set_window_attr before calling open_window.
Parameter
. AttributeName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the attribute that should be modified.
List of values : AttributeName ∈ {’border_width’, ’border_color’, ’background_color’, ’window_title’}
. AttributeValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .string ; string / integer
Value of the attribute that should be set.
List of values : AttributeValue ∈ {0, 1, 2, ’white’, ’black’, ’MyName’, ’default’}
Result
If the parameters are correct, set_window_attr returns 2 (H_MSG_TRUE). If necessary, an exception is
raised.
Parallelization Information
set_window_attr is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
open_window, get_window_attr
Module
Foundation
hWnd = CreateWindow(...)
hDC = GetDC(hWnd)
new_extern_window(hWnd, hDC, 0, 0, 400, -1, WindowHandle)
set_window_dc(WindowHandle, hDC)
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
write_string(WindowHandle,’File, fabrik.ima’)
new_line(WindowHandle)
get_mbutton(WindowHandle,_,_,_)
set_lut(WindowHandle,’temperature’)
set_color(WindowHandle,’blue’)
write_string(WindowHandle,’temperature’)
new_line(WindowHandle)
write_string(WindowHandle,’Draw Rectangle’)
new_line(WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
new_line(WindowHandle)
Result
If the values of the specified parameters are correct, set_window_dc returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
set_window_dc is reentrant, local, and processed without parallelization.
Possible Predecessors
new_extern_window
Possible Successors
disp_image, disp_region
See also
new_extern_window, disp_region, disp_image, disp_color, set_lut, query_color,
set_color, set_rgb, set_hsi, set_pixel, set_gray, set_part, set_part_style,
query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_window_extents, get_window_extents, set_window_attr,
set_check, set_system
Module
Foundation
set_window_type ( : : WindowType : )
read_image(Image,'fabrik')
sobel_amp(Image,Amp,'sum_abs',3)
open_window(0,0,-1,-1,'root','buffer','',Buffer1)
disp_image(Amp,Buffer1)
sobel_dir(Image,Dir,'sum_abs',3)
open_window(0,0,-1,-1,'root','buffer','',Buffer2)
disp_image(Dir,Buffer2)
open_window(0,0,-1,-1,'root','visible','',WindowHandle)
slide_image(Buffer1,Buffer2,WindowHandle)
Result
If both windows exist and are valid, slide_image returns 2 (H_MSG_TRUE). If necessary, an exception is
raised.
Parallelization Information
slide_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
copy_rectangle, get_mposition
See also
open_window, open_textwindow, move_rectangle
Module
Foundation
Image
7.1 Access
get_grayval ( Image : : Row, Column : Grayval )
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image whose gray value is to be accessed.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; integer
Line numbers of pixels to be viewed.
Default Value : 0
Suggested values : Row ∈ {0, 64, 128, 256, 512, 1024}
Typical range of values : 0 ≤ Row ≤ 32768 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : (0 ≤ Row) ∧ (Row < height(Image))
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; integer
Column numbers of pixels to be viewed.
Default Value : 0
Suggested values : Column ∈ {0, 64, 128, 256, 512, 1024}
Typical range of values : 0 ≤ Column ≤ 32768 (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Column = Row
Restriction : (0 ≤ Column) ∧ (Column < width(Image))
. Grayval (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . grayval(-array) ; real / integer
Gray values of indicated pixels.
Number of elements : Grayval = Row
Result
If the state of the parameters is correct, the operator get_grayval returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
get_grayval is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
get_image_pointer1
See also
set_grayval
Module
Foundation
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. Pointer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; integer
Pointer to the image data in the HALCON database.
. Type (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of image.
List of values : Type ∈ {’int1’, ’int2’, ’uint2’, ’int4’, ’byte’, ’real’, ’direction’, ’cyclic’, ’complex’,
’vector_field’}
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of image.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of image.
Example (Syntax: C)
Hobject Bild;
char typ[128];
long width,height;
unsigned char *ptr;
read_image(&Bild,"fabrik");
get_image_pointer1(Bild,(long*)&ptr,typ,&width,&height);
Result
The operator get_image_pointer1 returns the value 2 (H_MSG_TRUE) if exactly one image was passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
get_image_pointer1 is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
set_grayval, get_grayval, get_image_pointer3
See also
paint_region, paint_gray
Module
Foundation
Access the image data pointer and the image data inside the smallest rectangle of the domain of the input image.
The operator get_image_pointer1_rect returns the pointer PixelPointer, which points to the
beginning of the image data inside the smallest rectangle of the domain of Image. VerticalPitch
corresponds to the width of the input image Image multiplied by the number of bytes per pixel
(HorizontalBitPitch / 8). Width and Height correspond to the size of the smallest rectangle of the
input region. HorizontalBitPitch is the horizontal distance (in bits) between two neighboring pixels.
BitsPerPixel is the number of bits used per pixel. get_image_pointer1_rect is symmetrical to
gen_image1_rect.
Attention
The operator get_image_pointer1_rect should only be used for entry into newly created images, since
otherwise the gray values of other images might be overwritten (see relational structure).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / int4
Input image (Himage).
. PixelPointer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; integer
Pointer to the image data.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the output image.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the output image.
. VerticalPitch (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width(input image)*(HorizontalBitPitch/8).
. HorizontalBitPitch (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Distance between two neighboring pixels in bits.
Default Value : 8
List of values : HorizontalBitPitch ∈ {8, 16, 32}
. BitsPerPixel (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of used bits per pixel.
Default Value : 8
List of values : BitsPerPixel ∈ {8, 16, 32}
Example (Syntax: C)
Hobject image,reg,imagereduced;
long width,height,vert_pitch,hori_bit_pitch,bits_per_pix,winID;
unsigned char *ptr;
open_window(0,0,512,512,0,"visible","",&winID);
read_image(&image,"monkey");
draw_region(&reg,winID);
reduce_domain(image,reg,&imagereduced);
get_image_pointer1_rect(imagereduced,(long*)&ptr,&width,&height,
&vert_pitch,&hori_bit_pitch,&bits_per_pix);
Result
The operator get_image_pointer1_rect returns the value 2 (H_MSG_TRUE) if exactly one image was
passed. The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
get_image_pointer1_rect is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image1_rect
Alternatives
set_grayval, get_grayval, get_image_pointer3, get_image_pointer1
See also
paint_region, paint_gray, gen_image1_rect
Module
Foundation
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. MSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Milliseconds (0..999).
. Second (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seconds (0..59).
. Minute (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minutes (0..59).
. Hour (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Hours (0..23).
. Day (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Day of the month (1..31).
. YDay (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Day of the year (1..365).
. Month (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Month (1..12).
. Year (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Year (xxxx).
Result
The operator get_image_time returns the value 2 (H_MSG_TRUE) if exactly one image was passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
get_image_time is reentrant and processed without parallelization.
Possible Predecessors
read_image, grab_image
See also
count_seconds
Module
Foundation
7.2 Acquisition
close_all_framegrabbers ( : : : )
close_framegrabber ( : : AcqHandle : )
Grab images and preprocessed image data from the specified image acquisition device.
The operator grab_data grabs images and preprocessed image data via the image acquisition device specified
by AcqHandle. The desired operational mode of the image acquisition device as well as a suitable image part
can be adjusted via the operator open_framegrabber. Additional interface-specific settings can be specified
via set_framegrabber_param. Depending on the current configuration of the image acquisition device,
the preprocessed image data can be returned in terms of images (Image), regions (Region), XLD contours
(Contours), and control data (Data).
Parameter
Example
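A minimal usage sketch (hedged: the acquisition interface name ’MyAcqInterface’ and its parameters are placeholders that depend on the installed interface; only interfaces with preprocessing support return data via grab_data):

```hdevelop
* open an acquisition device whose interface supports preprocessing
open_framegrabber ('MyAcqInterface', 1, 1, 0, 0, 0, 0, 'default', -1, 'default', -1.0, 'false', 'default', 'default', -1, -1, AcqHandle)
* one synchronous grab of images, regions, contours, and control data
grab_data (Image, Region, Contours, AcqHandle, Data)
close_framegrabber (AcqHandle)
```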
Result
If the image acquisition device is open and supports image acquisition via grab_data, the operator
grab_data returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
grab_data is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, grab_image_start, set_framegrabber_param
Possible Successors
grab_data, grab_data_async, grab_image_start, grab_image, grab_image_async,
set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Grab images and preprocessed image data from the specified image acquisition device and start the next
asynchronous grab.
The operator grab_data_async grabs images and preprocessed image data via the image acquisition device
specified by AcqHandle and starts the next asynchronous grab. The desired operational mode of the image
acquisition device as well as a suitable image part can be adjusted via the operator open_framegrabber.
Additional interface-specific settings can be specified via set_framegrabber_param. Depending on the
current configuration of the image acquisition device, the preprocessed image data can be returned in terms of
images (Image), regions (Region), XLD contours (Contours), and control data (Data).
The grab of the next image is finished by calling grab_data_async or grab_image_async. If more
than MaxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is
considered too old and a new image is grabbed. If a negative value is assigned to MaxDelay, this control
mechanism is deactivated.
Please note that if you call the operators grab_image or grab_data after grab_data_async, the asyn-
chronous grab started by grab_data_async is aborted and a new image is grabbed (and waited for).
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Grabbed image data.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Pre-processed image regions.
. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
Pre-processed XLD contours.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; integer
Handle of the acquisition device to be used.
Result
If the image acquisition device is open and supports the image acquisition via grab_data_async, the operator
grab_data_async returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
grab_data_async is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, grab_image_start, set_framegrabber_param
Possible Successors
grab_data_async, grab_image_async, set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Result
If the image could be acquired successfully, the operator grab_image returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
grab_image is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image, grab_image_start, grab_image_async, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Grab an image from the specified image acquisition device and start the next asynchronous grab.
The operator grab_image_async grabs an image via the image acquisition device specified by AcqHandle and starts
the asynchronous grab of the next image. The desired operational mode of the image acquisition device as well
as a suitable image part can be adjusted via the operator open_framegrabber. Additional interface-specific
settings can be specified via set_framegrabber_param.
The grab of the next image is finished by calling grab_image_async or grab_data_async. If more
than MaxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is
considered too old and a new image is grabbed. If a negative value is assigned to MaxDelay, this control
mechanism is deactivated.
Please note that if you call the operators grab_image or grab_data after grab_image_async, the
asynchronous grab started by grab_image_async is aborted and a new image is grabbed (and waited for).
Parameter
Result
If the image acquisition device is open and supports asynchronous grabbing, the operator grab_image_async
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
grab_image_async is reentrant and processed without parallelization.
Possible Predecessors
grab_image_start, open_framegrabber, set_framegrabber_param
Possible Successors
grab_image_async, grab_data_async, set_framegrabber_param, close_framegrabber
See also
grab_image_start, open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Result
If the image acquisition device is open and supports asynchronous grabbing, the operator grab_image_start
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
grab_image_start is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image_async, grab_data_async, set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
’bits_per_channel’: List of all supported values for the parameter ’BitsPerChannel’, see
open_framegrabber.
’camera_type’: Description and list of all supported values for the parameter ’CameraType’, see
open_framegrabber.
’color_space’: List of all supported values for the parameter ’ColorSpace’, see open_framegrabber.
’defaults’: Interface-specific default values in ValueList, see open_framegrabber.
’device’: List of all supported values for the parameter ’Device’, see open_framegrabber.
’external_trigger’: List of all supported values for the parameter ’ExternalTrigger’, see
open_framegrabber.
’field’: List of all supported values for the parameter ’Field’, see open_framegrabber.
’general’: General information (in Information).
’horizontal_resolution’: List of all supported values for the parameter ’HorizontalResolution’, see
open_framegrabber.
’image_height’: List of all supported values for the parameter ’ImageHeight’, see open_framegrabber.
’image_width’: List of all supported values for the parameter ’ImageWidth’, see open_framegrabber.
’info_boards’: Information about actually installed boards or cameras. This data is especially useful for the
auto-detect mechanism of ActivVisionTools and for the Image Acquisition Assistant in HDevelop.
’line_in’: List of all supported values for the parameter ’LineIn’, see open_framegrabber.
’parameters’: List of all interface-specific parameters which are accessible via set_framegrabber_param
or get_framegrabber_param.
’parameters_readonly’: List of all interface-specific parameters which are only accessible via
get_framegrabber_param.
’parameters_writeonly’: List of all interface-specific parameters which are only accessible via
set_framegrabber_param.
’port’: List of all supported values for the parameter ’Port’, see open_framegrabber.
’revision’: Version number of the image acquisition interface.
’start_column’: List of all supported values for the parameter ’StartColumn’, see open_framegrabber.
’start_row’: List of all supported values for the parameter ’StartRow’, see open_framegrabber.
’vertical_resolution’: List of all supported values for the parameter ’VerticalResolution’, see
open_framegrabber.
Please also check the directory doc/html/manuals for documentation about specific image grabber interfaces.
Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library
(Linux/UNIX).
Default Value : ’File’
Suggested values : Name ∈ {’1394IIDC’, ’ABS’, ’BaumerFCAM’, ’BitFlow’, ’DahengCAM’, ’DahengFG’,
’DFG-LC’, ’DirectFile’, ’DirectShow’, ’dPict’, ’DT315x’, ’DT3162’, ’eneo’, ’eXcite’, ’FALCON’, ’File’,
’FlashBusMV’, ’FlashBusMX’, ’GigEVision’, ’Ginga++’, ’GingaDG’, ’INSPECTA’, ’INSPECTA5’,
’iPORT’, ’Leutron’, ’LinX’, ’LuCam’, ’MatrixVisionAcquire’, ’MILLite’, ’mEnableIII’, ’mEnableIV’,
’mEnableVisualApplets’, ’MultiCam’, ’Opteon’, ’p3i2’, ’p3i4’, ’PX’, ’PXC’, ’PXD’, ’PXR’, ’pylon’,
’RangerC’, ’RangerE’, ’SaperaLT’, ’SonyXCI’, ’TAG’, ’TWAIN’, ’uEye’, ’VRmUsbCam’}
. Query (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the chosen query.
Default Value : ’info_boards’
List of values : Query ∈ {’defaults’, ’general’, ’info_boards’, ’parameters’, ’parameters_readonly’,
’parameters_writeonly’, ’revision’, ’bits_per_channel’, ’camera_type’, ’color_space’, ’device’,
’external_trigger’, ’field’, ’generic’, ’horizontal_resolution’, ’image_height’, ’image_width’, ’port’,
’start_column’, ’start_row’, ’vertical_resolution’}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Textual information (according to Query).
. ValueList (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string / integer / real
List of values (according to Query).
Example
Result
If the parameter values are correct and the specified image acquisition interface is available,
info_framegrabber returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
info_framegrabber is processed completely exclusively without parallelization.
Possible Predecessors
open_framegrabber
Possible Successors
open_framegrabber
See also
open_framegrabber
Module
Foundation
processes, and, if necessary, memory is reserved for the data buffers. The actual image grabbing is done via the
operators grab_image, grab_data, grab_image_async, or grab_data_async. If the image acqui-
sition device is not needed anymore, it should be closed via the operator close_framegrabber, releasing it
for other processes. Some image acquisition devices allow several instances of the same image acquisition device
class to be opened.
For all parameters, image acquisition device-specific default values can be chosen explicitly (see the pa-
rameter description below). Additional information for a specific image acquisition device is available via
info_framegrabber. A comprehensive documentation of all image acquisition device-specific parameters
can be found in the corresponding description file in the directory doc/html/manuals.
The meaning of the particular parameters is as follows:
The operator open_framegrabber returns a handle (AcqHandle) to the opened image acquisition device.
Attention
Due to the multitude of supported image acquisition devices, open_framegrabber contains a large number
of parameters. However, not all parameters are needed for a specific image acquisition device.
Parameter
info_framegrabber(AcqName,’port’,Information,Values)
// Choose the port P and the input line L your camera is connected to
open_framegrabber(AcqName,1,1,0,0,0,0,’default’,-1,’default’,-1.0,
’default’,’default’,’default’,P,L,AcqHandle)
grab_image(Image,AcqHandle)
close_framegrabber(AcqHandle)
Result
If the parameter values are correct and the desired image acquisition device could be opened,
open_framegrabber returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
open_framegrabber is processed completely exclusively without parallelization.
Possible Predecessors
info_framegrabber
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async,
set_framegrabber_param
See also
info_framegrabber, close_framegrabber, grab_image
Module
Foundation
7.3 Channel
access_channel ( MultiChannelImage : Image : Channel : )
Example (Syntax: C)
Parallelization Information
access_channel is reentrant and processed without parallelization.
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
decompose2, decompose3, decompose4, decompose5
See also
count_channels
Module
Foundation
. MultiChannelImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2
/ int4 / real / complex / vector_field
Multichannel image.
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Image to be appended.
. ImageExtended (output_object) . . . . . . multichannel-image ; Hobject : byte / direction / cyclic / int1 /
int2 / uint2 / int4 / real / complex / vec-
tor_field
Image appended by Image.
Parallelization Information
append_channel is reentrant and processed without parallelization.
Possible Successors
disp_image
Alternatives
compose2, compose3, compose4, compose5
Module
Foundation
The operator channels_to_image converts several one-channel images into a multichannel image. The new
definition domain is the average of the definition domains of the input images.
Parameter
. Images (input_object) . . . . . . singlechannel-image-array ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
One-channel images to be combined into a multichannel image.
. MultiChannelImage (output_object) . . . . . . multichannel-image ; Hobject : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real / com-
plex / vector_field
Multichannel image.
Parallelization Information
channels_to_image is reentrant and processed without parallelization.
Possible Successors
count_channels, disp_image
Module
Foundation
Parameter
. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 3.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 /
int4 / real / complex / vec-
tor_field
Multichannel image.
Parallelization Information
compose3 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose3
Module
Foundation
Alternatives
append_channel
See also
decompose4
Module
Foundation
Parameter
. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 3.
. Image4 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 4.
. Image5 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 5.
. Image6 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 6.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 /
int4 / real / complex / vec-
tor_field
Multichannel image.
Parallelization Information
compose6 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose6
Module
Foundation
read_image(&Color,"patras");
count_channels(Color,&num_channels);
for (i=1; i<=num_channels; i++)
{
access_channel(Color,&Channel,i);
disp_image(Channel,WindowHandle);
clear_obj(Channel);
}
Parallelization Information
count_channels is reentrant and processed without parallelization.
Possible Successors
access_channel, append_channel, disp_image
See also
append_channel, access_channel
Module
Foundation
7.4 Creation
copy_image ( Image : DupImage : : )
copy_image copies the input image into a new image with the same domain as the input image. In contrast
to HALCON operators such as copy_obj, physical copies of all channels are created. This can be used, for
example, to modify the gray values of the new image (see get_image_pointer1).
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Image to be copied.
. DupImage (output_object) . . . . . . (multichannel-)image ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Copied image.
Parallelization Information
copy_image is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const
Possible Successors
set_grayval, get_image_pointer1
Alternatives
set_grayval, paint_gray, gen_image_const, gen_image_proto
See also
get_image_pointer1
Module
Foundation
Result
If the parameter values are correct, the operator gen_image1 returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
gen_image1 is reentrant and processed without parallelization.
Possible Predecessors
gen_image_const, get_image_pointer1
Alternatives
gen_image3, gen_image_const, get_image_pointer1
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation
. Image (output_object) . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created HALCON image.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Pixel type.
Default Value : ’byte’
List of values : Type ∈ {’int1’, ’int2’, ’uint2’, ’int4’, ’byte’, ’real’, ’direction’, ’cyclic’}
Result
The operator gen_image1_extern returns the value 2 (H_MSG_TRUE) if the parameter values are correct.
Otherwise an exception is raised.
Parallelization Information
gen_image1_extern is reentrant and processed without parallelization.
Alternatives
gen_image1, gen_image_const, get_image_pointer1
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation
Create an image with a rectangular domain from a pointer on the pixels (with storage management).
The operator gen_image1_rect creates an image of size (VerticalPitch/(HorizontalBitPitch /
8)) * Height. The pixels pointed to by PixelPointer are stored line by line. Since the type of the parameter
PixelPointer is generic (long), a cast must be used for the call. VerticalPitch determines the distance
(in bytes) between pixel m in row n and pixel m in row n+1 in memory. All rows of the ’input image’ have
the same vertical pitch. The width of the output image equals VerticalPitch / (HorizontalBitPitch /
8). The heights of the input and output images are equal. The domain of the output image Image is a rectangle of the
size Width * Height. The parameter HorizontalBitPitch is the horizontal distance (in bits) between two
neighbouring pixels. BitsPerPixel is the number of used bits per pixel.
If DoCopy is set to ’true’, the image data pointed to by PixelPointer is copied and memory for the new image is
newly allocated by HALCON. Otherwise, the image data is not duplicated, and the memory that PixelPointer
points to must be released when the object Image is deleted. This is done by the procedure ClearProc provided
by the caller. This procedure must have the following signature:
void ClearProc(void* ptr);
It is called using the __cdecl calling convention when Image is deleted. If the memory should not be released
(e.g., in the case of frame grabbers or static memory), a procedure with an empty body or the NULL pointer can be
passed. Analogously to the parameter PixelPointer, the pointer has to be passed to the procedure by casting it to
long. If DoCopy is ’true’, ClearProc is irrelevant. The operator gen_image1_rect is symmetric to
get_image_pointer1_rect.
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / int4
Created HALCON image.
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; integer
Pointer to the first pixel.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. VerticalPitch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Distance (in bytes) between pixel m in row n and pixel m in row n+1 of the ’input image’.
Restriction : VerticalPitch ≥ (Width · (HorizontalBitPitch/8))
. HorizontalBitPitch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Distance (in bits) between two neighbouring pixels.
Default Value : 8
List of values : HorizontalBitPitch ∈ {8, 16, 32}
. BitsPerPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of used bits per pixel.
Default Value : 8
List of values : BitsPerPixel ∈ {8, 9, 10, 11, 12, 13, 14, 15, 16, 32}
Restriction : BitsPerPixel ≤ HorizontalBitPitch
. DoCopy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Copy image data.
Default Value : ’false’
Suggested values : DoCopy ∈ {’true’, ’false’}
. ClearProc (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; integer
Pointer to the procedure releasing the memory of the image when deleting the object.
Default Value : 0
Example (Syntax: C)
{
Hobject new;
unsigned char *image;
int r,c;
image = malloc(640*480);
for (r=0; r<480; r++)
for (c=0; c<640; c++)
image[r*640+c] = c % 255;
gen_image1_rect(&new,(long)image,400,480,640,8,8,"false",(long)free);
}
Result
The operator gen_image1_rect returns the value 2 (H_MSG_TRUE) if the parameter values are correct.
Otherwise an exception is raised.
Parallelization Information
gen_image1_rect is reentrant and processed without parallelization.
Possible Successors
get_image_pointer1_rect
Alternatives
gen_image1, gen_image1_extern
See also
get_image_pointer1_rect
Module
Foundation
main()
{
Hobject rgb;
open_window(0,0,768,525,0,"","",&WindowHandle);
NewRGBImage(&rgb);
disp_color(rgb,WindowHandle);
clear_obj(rgb);
}
Result
If the parameter values are correct, the operator gen_image3 returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
gen_image3 is reentrant and processed without parallelization.
Possible Predecessors
gen_image_const, get_image_pointer1
Possible Successors
disp_color
Alternatives
gen_image1, compose3, gen_image_const
See also
reduce_domain, paint_gray, paint_region, set_grayval, get_image_pointer1,
decompose3
Module
Foundation
gen_image_const(&New,"byte",width,height);
get_image_pointer1(New,(long*)&pointer,type,&width,&height);
for (row=0; row<height; row++)
for (col=0; col<width; col++)
pointer[row*width+col] = (row + col) % 256;
Result
If the parameter values are correct, the operator gen_image_const returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
gen_image_const is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain, get_image_pointer1, copy_obj
Alternatives
gen_image1, gen_image3
See also
reduce_domain, paint_gray, paint_region, set_grayval, get_image_pointer1
Module
Foundation
The size of the image is determined by Width and Height. The gray values are of the type byte. Gray values
outside the valid area are clipped.
Parameter
. ImageGrayRamp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Created image with new image matrix.
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Gradient in line direction.
Default Value : 1.0
Suggested values : Alpha ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Gradient in column direction.
Default Value : 1.0
Suggested values : Beta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Mean (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Mean gray value.
Default Value : 128
Suggested values : Mean ∈ {0, 20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 255}
Minimum Increment : 1
Recommended Increment : 10
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Line index of reference point.
Default Value : 256
Suggested values : Row ∈ {128, 256, 512, 1024}
Minimum Increment : 1
Recommended Increment : 10
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column index of reference point.
Default Value : 256
Suggested values : Column ∈ {128, 256, 512, 1024}
Minimum Increment : 1
Recommended Increment : 10
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
Parameter
The size of the image is determined by Width and Height. The gray values are of the type Type. Gray values
outside the valid area are clipped.
Parameter
ImageSurface(r, c) = Alpha · (r − Row)² + Beta · (c − Column)² + Gamma · (r − Row) · (c − Column) + Delta · (r − Row) + Epsilon · (c − Column) + Mean
The size of the image is determined by Width and Height. The gray values are of the type Type. Gray values
outside the valid area are clipped.
Parameter
Possible Predecessors
fit_surface_second_order
Possible Successors
sub_image
See also
gen_image_gray_ramp, gen_image_surface_first_order
Module
Foundation
Result
region_to_label always returns 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
region_to_label is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection, expand_region
Possible Successors
get_grayval, get_image_pointer1
Alternatives
region_to_bin, paint_region
See also
label_to_region
Module
Foundation
read_image(Image,’fabrik’)
regiongrowing(Image,Regions,3,3,6,100)
region_to_mean(Regions,Image,Disp)
disp_image(Disp,WindowHandle)
set_draw(WindowHandle,’margin’)
set_color(WindowHandle,’black’)
disp_region(Regions,WindowHandle)
Result
region_to_mean returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
region_to_mean is reentrant and processed without parallelization.
Possible Predecessors
regiongrowing, connection
Possible Successors
disp_image
Alternatives
paint_region, intensity
Module
Foundation
7.5 Domain
add_channels ( Regions, Image : GrayRegions : : )
Parallelization Information
change_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
reduce_domain
See also
full_domain, get_domain, intersection
Module
Foundation
See also
get_domain, change_domain, reduce_domain, full_domain
Module
Foundation
The operator reduce_domain reduces the definition domain of the given image to the indicated region. The
new definition domain is calculated as the intersection of the old definition domain with the region. Thus, the new
definition domain can be a subset of the region. The size of the matrix is not changed.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
New definition domain.
. ImageReduced (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Image with reduced definition domain.
Parallelization Information
reduce_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
change_domain, rectangle1_domain, add_channels
See also
full_domain, get_domain, intersection
Module
Foundation
7.6 Features
area_center_gray ( Regions, Image : : : Area, Row, Column )
Compute the area and center of gravity of a region in a gray value image.
area_center_gray computes the area and center of gravity of the regions Regions that have gray values
which are defined by the image Image. This operator is similar to area_center, but in contrast to that
operator, the gray values of the image are taken into account while computing the area and center of gravity.
The area A of a region R in the image with the gray values g(r, c) is defined as

    A = Σ_{(r,c) ∈ R} g(r, c) .

This means that the area is defined by the volume of the gray value function g(r, c). The center of gravity is defined
by the first two normalized moments of the gray values g(r, c), i.e., by (m_{1,0}, m_{0,1}), where

    m_{p,q} = (1/A) Σ_{(r,c) ∈ R} r^p c^q g(r, c) .
Parameter
Parallelization Information
cooc_feature_image is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
gen_cooc_matrix
Alternatives
cooc_feature_matrix
See also
intensity, min_max_gray, entropy_gray, select_gray
Module
Foundation
Contrast:

    Contrast = Σ_{i,j=0}^{width} (i − j)² c_{ij}
Attention
The region of the input image is disregarded.
Parameter
. CoocMatrix (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real
Co-occurrence matrix.
. Energy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Homogeneity of the gray values.
. Correlation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Correlation of gray values.
. Homogeneity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Local homogeneity of gray values.
. Contrast (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Gray value contrast.
Result
The operator cooc_feature_matrix returns the value 2 (H_MSG_TRUE) if an image with defined gray
values is passed and the parameters are correct. The behavior in case of empty input (no input images available)
is set via the operator set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
cooc_feature_matrix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
gen_cooc_matrix
Alternatives
cooc_feature_image
See also
intensity, min_max_gray, entropy_gray, select_gray
Module
Foundation
elliptic_axis_gray ( Regions, Image : : : Ra, Rb, Phi )
Compute the orientation and major axes of a region in a gray value image.
The operator elliptic_axis_gray calculates the length of the axes and the orientation of the ellipse having
the same orientation and aspect ratio as the input region. Several input regions can be passed in Regions
as tuples. The length of the major axis Ra and the minor axis Rb as well as the orientation of the major axis with
regard to the x-axis (Phi) are determined. The angle is returned in radians. The calculation is done analogously
to elliptic_axis. The only difference is that in elliptic_axis_gray the gray value moments are
used instead of the region moments. The gray value moments are derived from the input image Image. For the
definition of the gray value moments, see area_center_gray.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Image (input_object) . . . . . . singlechannel-image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real
Gray value image.
. Ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Major axis of the region.
. Rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Minor axis of the region.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Angle enclosed by the major axis and the x-axis.
Result
elliptic_axis_gray returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs
during execution. If the input is empty the behavior can be set via set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
elliptic_axis_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
gen_ellipse
Alternatives
elliptic_axis
See also
area_center_gray
Module
Foundation
Anisotropy coefficient:

    Anisotropy = \frac{\sum_{i=0}^{k} rel[i] \cdot \log_2(rel[i])}{Entropy}

where

    rel[i] = histogram of relative gray value frequencies
    i = gray value of the input image (0 ... 255)
    k = smallest possible gray value with \sum_{i=0}^{k} rel[i] \geq 0.5
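The definitions above can be checked with a small NumPy sketch (an illustration of the documented formulas, not the HALCON operator; the sign convention is chosen so that numerator and denominator use the same log terms):

```python
import numpy as np

def entropy_and_anisotropy(image):
    """Entropy and anisotropy coefficient of a byte image from the
    relative gray value histogram (documented formulas)."""
    rel = np.bincount(image.ravel(), minlength=256) / image.size
    terms = np.zeros(256)
    nz = rel > 0
    terms[nz] = rel[nz] * np.log2(rel[nz])       # rel[i] * log2(rel[i])
    entropy = -terms.sum()
    # k: smallest gray value with cumulative relative frequency >= 0.5
    k = int(np.argmax(np.cumsum(rel) >= 0.5))
    anisotropy = terms[:k + 1].sum() / terms.sum()
    return entropy, anisotropy

# Two equally frequent gray values: the entropy is 1 bit, and the histogram
# splits evenly at the median, so the anisotropy coefficient is 0.5.
img = np.repeat(np.uint8([10, 200]), 8).reshape(4, 4)
entropy, anisotropy = entropy_and_anisotropy(img)
```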
Parameter
Alternatives
select_gray
See also
entropy_image, gray_histo, gray_histo_abs, fuzzy_entropy, fuzzy_perimeter
Module
Foundation
To estimate the noise, one of the following four methods can be selected in Method:
• ’foerstner’: If Method is set to ’foerstner’, first for each pixel a homogeneity measure is computed based
on the first derivatives of the gray values of Image. By thresholding the homogeneity measure one obtains
the homogeneous regions in the image. The threshold is computed based on a starting value for the image
noise. The starting value is obtained by applying the method ’immerkaer’ (see below) in the first step. It
is assumed that the gray value fluctuations within the homogeneous regions are solely caused by the image
noise. Furthermore it is assumed that the image noise is Gaussian distributed. The average homogeneity
measure within the homogeneous regions is then used to calculate a refined estimate for the image noise.
The refined estimate leads to a new threshold for the homogeneity. The described process is iterated until the
estimated image noise remains constant between two successive iterations. Finally, the standard deviation of
the estimated image noise is returned in Sigma.
Note that in some cases the iteration falsely converges to the value 0. This happens, for example, if the gray
value histogram of the input image contains gaps that are caused either by an automatic radiometric scaling
of the camera or frame grabber, respectively, or by a manual spreading of the gray values using a scaling
factor > 1.
Also note that the result obtained by this method is independent of the value passed in Percent.
• ’immerkaer’: If Method is set to ’immerkaer’, first the following filter mask is applied to the input image:
    M = \begin{pmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{pmatrix}

The advantage of this method is that M is almost insensitive to image structure but only depends on the noise
in the image. Assuming a Gaussian distributed noise, its standard deviation is finally obtained as

    Sigma = \sqrt{\frac{\pi}{2}} \cdot \frac{1}{6N} \sum_{Image} |Image * M| ,
where N is the number of image pixels to which M is applied. Note that the result obtained by this method
is independent of the value passed in Percent.
• ’least_squares’: If Method is set to ’least_squares’, the fluctuations of the gray values with respect to a
locally fitted gray value plane are used to estimate the image noise. First, a homogeneity measure is computed
based on the first derivatives of the gray values of Image. Homogeneous image regions are determined by
selecting the Percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with
small magnitudes of the first derivatives. For each homogeneous pixel a gray value plane is fitted to its 3 × 3
HALCON 8.0.2
502 CHAPTER 7. IMAGE
neighborhood. The differences between the gray values within the 3 × 3 neighborhood and the locally fitted
plane are used to estimate the standard deviation of the noise. Finally, the average standard deviation over all
homogeneous pixels is returned in Sigma.
• ’mean’: If Method is set to ’mean’, the noise estimation is based on the difference between the input
image and a noiseless version of the input image. First, a homogeneity measure is computed based on the
first derivatives of the gray values of Image. Homogeneous image regions are determined by selecting
the Percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with small
magnitudes of the first derivatives. A mean filter is applied to the homogeneous image regions in order to
eliminate the noise. It is assumed that the difference between the input image and the thus obtained noiseless
version of the image represents the image noise. Finally, the standard deviation of the differences is returned
in Sigma. It should be noted that this method requires large connected homogenous image regions to be
able to reliably estimate the noise.
Note that the methods ’foerstner’ and ’immerkaer’ assume a Gaussian distribution of the image noise, whereas
the methods ’least_squares’ and ’mean’ can be applied to images with arbitrarily distributed noise. In general, the
method ’foerstner’ returns the most accurate results while the method ’immerkaer’ shows the fastest computation.
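The ’immerkaer’ formula above can be sketched directly in NumPy; this is an illustration of the documented estimator, not the HALCON operator, and the convolution is written out with array slicing so no extra dependency is needed:

```python
import numpy as np

def immerkaer_sigma(image):
    """Noise standard deviation after Immerkaer:
    Sigma = sqrt(pi/2) * 1/(6N) * sum |Image * M|."""
    img = image.astype(np.float64)
    # Response of M = [[1,-2,1],[-2,4,-2],[1,-2,1]] at all interior pixels.
    resp = (img[:-2, :-2] - 2 * img[:-2, 1:-1] + img[:-2, 2:]
            - 2 * img[1:-1, :-2] + 4 * img[1:-1, 1:-1] - 2 * img[1:-1, 2:]
            + img[2:, :-2] - 2 * img[2:, 1:-1] + img[2:, 2:])
    n = resp.size  # N: number of pixels to which M is applied
    return np.sqrt(np.pi / 2.0) / (6.0 * n) * np.abs(resp).sum()

# M annihilates linear gray value ramps, so a noise-free ramp yields 0;
# for pure Gaussian noise the estimate approaches the true sigma.
rr, cc = np.mgrid[0:64, 0:64]
sigma_clean = immerkaer_sigma(3.0 * rr + 2.0 * cc)
noisy = np.random.default_rng(0).normal(0.0, 5.0, (64, 64))
sigma_noisy = immerkaer_sigma(noisy)
```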
If the image noise could not be estimated reliably, the error 3175 is raised. This may happen if the image does not
contain enough homogeneous regions, if the image was artificially created, or if the noise is not of Gaussian type.
In order to avoid this error, it might be useful in some cases to try one of the following modifications, depending
on the estimation method that is passed in Method:
• Increase the size of the input image domain (useful for all methods).
• Increase the value of the parameter Percent (useful for methods ’least_squares’ and ’mean’).
• Use the method ’immerkaer’, instead of the methods ’foerstner’, ’least_squares’, or ’mean’. The method
’immerkaer’ does not rely on the existence of homogeneous image regions, and hence is almost always
applicable.
Parameter
Result
If the parameters are valid, the operator estimate_noise returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised. If the image noise could not be estimated reliably, the error 3175 is raised.
Parallelization Information
estimate_noise is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
grab_image, grab_image_async, read_image, reduce_domain
Possible Successors
binomial_filter, gauss_image, mean_image, smooth_image
Alternatives
noise_distribution_mean, intensity, min_max_gray
See also
gauss_distribution, add_noise_distribution
References
W. Förstner: "Image Preprocessing for Feature Extraction in Digital Intensity, Color and Range Images", Springer
Lecture Notes on Earth Sciences, Summer School on Data Analysis and the Statistical Foundations of Geomatics,
1999
J. Immerkaer: "Fast Noise Variance Estimation", Computer Vision and Image Understanding, Vol. 64, No. 2, pp.
300-302, 1996
Module
Foundation
Calculate gray value moments and approximation by a first order surface (plane).
The operator fit_surface_first_order calculates the gray value moments and the parameters of the
approximation of the gray values by a first order surface. The calculation is done by minimizing the distance
between the gray values and the surface. A first order surface is described by the following formula:

    Image(r, c) = Alpha \cdot (r - r\_center) + Beta \cdot (c - c\_center) + Gamma

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
By the minimization process the parameters Alpha to Gamma are calculated.
The algorithm used for the fitting can be selected via Algorithm:
’regression’ Standard ’least squares’ line fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter ClippingFactor (a scaling factor for the standard deviation) controls the amount of damping
applied to outliers: the smaller the value chosen for ClippingFactor, the more outliers are detected. The
detection of outliers is repeated. The parameter Iterations specifies the number of iterations. In the mode
’regression’ this value is ignored.
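The ’regression’ case can be sketched as an ordinary least-squares plane fit in NumPy; this illustrates the documented minimization (the robust ’huber’/’tukey’ reweighting is omitted, and the function name is hypothetical):

```python
import numpy as np

def fit_plane_regression(image, mask):
    """'regression' case: least-squares fit of
    Image(r, c) ~ Alpha*(r - r_center) + Beta*(c - c_center) + Gamma."""
    r, c = np.nonzero(mask)
    r_center, c_center = r.mean(), c.mean()
    A = np.column_stack([r - r_center, c - c_center, np.ones(r.size)])
    coeffs, *_ = np.linalg.lstsq(A, image[r, c].astype(float), rcond=None)
    return tuple(coeffs)  # Alpha, Beta, Gamma

# A noise-free ramp is recovered exactly; Gamma is the gray value
# at the region center.
rr, cc = np.mgrid[0:8, 0:8]
ramp = 2.0 * rr + 3.0 * cc + 1.0
alpha, beta, gamma = fit_plane_regression(ramp, np.ones((8, 8), bool))
```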
Parameter
HALCON 8.0.2
504 CHAPTER 7. IMAGE
    Image(r, c) = Alpha \cdot (r - r\_center)^2 + Beta \cdot (c - c\_center)^2 + Gamma \cdot (r - r\_center)(c - c\_center) + Delta \cdot (r - r\_center) + Epsilon \cdot (c - c\_center) + Zeta

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
By the minimization process the parameters Alpha to Zeta are calculated.
The algorithm used for the fitting can be selected via Algorithm:
’regression’ Standard ’least squares’ fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter ClippingFactor (a scaling factor for the standard deviation) controls the amount of damping
applied to outliers: the smaller the value chosen for ClippingFactor, the more outliers are detected. The
detection of outliers is repeated. The parameter Iterations specifies the number of iterations. In the mode
’regression’ this value is ignored.
Parameter
    H(X) = \frac{1}{M N \ln 2} \sum_{l} T_e(l) \, h(l)

where M × N is the size of the image and h(l) is the histogram of the image. Furthermore, T_e(l) is derived from
u(x(m, n)), a fuzzy membership function defining the fuzzy set (see fuzzy_perimeter). The same restrictions
hold as in fuzzy_perimeter.
Parameter
HALCON 8.0.2
506 CHAPTER 7. IMAGE
Result
The operator fuzzy_entropy returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
fuzzy_entropy is reentrant and automatically parallelized (on tuple level).
See also
fuzzy_perimeter
References
M.K. Kundu, S.K. Pal: "Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures"; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation
    p(X) = \sum_{m=1}^{M-1} \sum_{n=1}^{N-1} |\mu_X(x_{m,n}) - \mu_X(x_{m,n+1})| + \sum_{m=1}^{M-1} \sum_{n=1}^{N-1} |\mu_X(x_{m,n}) - \mu_X(x_{m+1,n})|

where M × N is the size of the image, and u(x(m, n)) is the fuzzy membership function (i.e., the input image).
This implementation uses Zadeh’s Standard-S function, which is defined as follows:

    \mu_X(x) = \begin{cases}
        0, & x \leq a \\
        2 \left( \frac{x-a}{c-a} \right)^2, & a < x \leq b \\
        1 - 2 \left( \frac{x-c}{c-a} \right)^2, & b < x \leq c \\
        1, & c \leq x
    \end{cases}
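Both definitions can be sketched in NumPy for illustration (a reimplementation of the documented formulas, not the HALCON operator; b = (a + c) / 2 is assumed for the S-function):

```python
import numpy as np

def zadeh_s(x, a, c):
    """Zadeh's standard S-function with b = (a + c) / 2."""
    b = (a + c) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= b, 2.0 * ((x - a) / (c - a)) ** 2,
           np.where(x <= c, 1.0 - 2.0 * ((x - c) / (c - a)) ** 2, 1.0)))

def fuzzy_perimeter(mu):
    """Sum of absolute membership differences between 4-neighbors."""
    return float(np.abs(np.diff(mu, axis=1)).sum()
                 + np.abs(np.diff(mu, axis=0)).sum())

# A crisp 2x2 square inside a 6x6 image: every 0 -> 1 boundary crossing
# contributes 1, giving a fuzzy perimeter of 8.
mu = np.zeros((6, 6))
mu[2:4, 2:4] = 1.0
```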
Parameter
Result
The operator fuzzy_perimeter returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise
an exception is raised.
Parallelization Information
fuzzy_perimeter is reentrant and automatically parallelized (on tuple level).
See also
fuzzy_entropy
References
M.K. Kundu, S.K. Pal: "Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures"; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation
HALCON 8.0.2
508 CHAPTER 7. IMAGE
Parameter
Both histograms are tuples of 256 values which, beginning at 0, contain the frequencies of the individual gray
values of the image.
AbsoluteHisto indicates the absolute frequencies of the gray values as integers, and RelativeHisto indi-
cates the relative frequencies, i.e., the absolute frequencies divided by the area of the image, as floating point numbers.
real, int2, uint2, and int4 images are transformed into byte images (first the largest and smallest gray values in the
image are determined, and then the original gray values are mapped linearly to the range 0..255) and then processed
as mentioned above. The histogram can also be returned directly as a graphic via the operators set_paint(::
WindowHandle,’histogram’:) and disp_image.
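For a byte image, the histogram computation described above can be sketched in NumPy (an illustration of the definition, not the HALCON operator):

```python
import numpy as np

def gray_histo(image, mask):
    """Absolute and relative gray value histogram of a byte image region."""
    vals = image[mask]
    absolute = np.bincount(vals, minlength=256)   # 256 integer counts
    relative = absolute / vals.size               # normalized to the area
    return absolute, relative

img = np.array([[0, 0, 1], [1, 2, 255]], dtype=np.uint8)
absolute, relative = gray_histo(img, np.ones_like(img, dtype=bool))
```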
Attention
Real, int2, uint2, and int4 images are reduced to 256 gray values.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region in which the histogram is to be calculated.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 / int4 / real
Image the gray value distribution of which is to be calculated.
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; integer
Absolute frequencies of the gray values.
. RelativeHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; real
Frequencies, normalized to the area of the region.
Complexity
If F is the area of the region the runtime complexity is O(F + 255).
Result
The operator gray_histo returns the value 2 (H_MSG_TRUE) if the image has defined gray values and the
parameters are correct. The behavior in case of empty input (no input images available) is set via the operator
set_system(::’no_object_result’,<Result>:), the behavior in case of empty region is set via
set_system(::’empty_region_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
gray_histo is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, gen_region_histo
Alternatives
min_max_gray, intensity, gray_histo_abs
See also
set_paint, disp_image, histo_2dim, scale_image_max, entropy_gray
Module
Foundation
where MIN denotes the minimal gray value, e.g., -128 for the int1 image type. The size of the tuple therefore
results from the ratio of the full gray value range to the quantization, e.g., \lceil 65536 / 3.0 \rceil = 21846 for
int2 images with a quantization of 3.0. The origin gray value of the signed image types int1 and int2 is mapped to
the index 128 and 32768, respectively; negative gray values obtain smaller and positive gray values greater indices.
The histogram can also be returned directly as a graphic via the operators set_paint(::
WindowHandle,’histogram’:) and disp_image.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region in which the histogram is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2
Image the gray value distribution of which is to be calculated.
. Quantization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Quantization of the gray values.
Default Value : 1.0
List of values : Quantization ∈ {1.0, 2.0, 3.0, 5.0, 10.0}
Restriction : Quantization ≥ 1.0
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; integer
Absolute frequencies of the gray values.
Result
The operator gray_histo_abs returns the value 2 (H_MSG_TRUE) if the image has defined gray values
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator set_system(::’no_object_result’,<Result>:), the behavior in case of empty region is
set via set_system(::’empty_region_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
gray_histo_abs is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, gen_region_histo
Alternatives
min_max_gray, intensity, gray_histo
See also
set_paint, disp_image, histo_2dim, scale_image_max, entropy_gray
Module
Foundation
    HorProjection(r) = \frac{1}{n(r + r_0)} \sum_{(r+r_0, c+c_0) \in Region} Image(r + r_0, c + c_0)

    VertProjection(c) = \frac{1}{n(c + c_0)} \sum_{(r+r_0, c+c_0) \in Region} Image(r + r_0, c + c_0)
Here, (r0 , c0 ) denotes the upper left corner of the smallest enclosing axis-parallel rectangle of the input region (see
smallest_rectangle1), and n(x) denotes the number of region points in the corresponding row r + r0 or
column c + c0 . Hence, the horizontal projection returns a one-dimensional function that reflects the vertical gray
value changes. Likewise, the vertical projection returns a function that reflects the horizontal gray value changes.
If Mode = ’rectangle’ is selected, the projection is performed in the direction of the major axes of the smallest
enclosing rectangle of arbitrary orientation of the input region (see smallest_rectangle2). Here, the hor-
izontal projection direction corresponds to the larger axis, while the vertical direction corresponds to the smaller
axis. In this mode, all gray values within the smallest enclosing rectangle of arbitrary orientation of the input
region are used to compute the projections.
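The ’simple’ mode can be sketched in NumPy as row and column means over the region inside its enclosing axis-parallel rectangle (an illustration of the documented formulas, not the HALCON operator):

```python
import numpy as np

def gray_projections_simple(image, mask):
    """Mode 'simple': row and column means of the gray values inside the
    region, relative to its smallest enclosing axis-parallel rectangle."""
    r, c = np.nonzero(mask)
    r0, c0 = r.min(), c.min()
    h, w = r.max() - r0 + 1, c.max() - c0 + 1
    hor = np.array([image[r0 + i, c0:c0 + w][mask[r0 + i, c0:c0 + w]].mean()
                    for i in range(h)])
    vert = np.array([image[r0:r0 + h, c0 + j][mask[r0:r0 + h, c0 + j]].mean()
                     for j in range(w)])
    return hor, vert

# For a full-domain region, the projections are plain row/column means.
img = np.arange(16, dtype=float).reshape(4, 4)
hor, vert = gray_projections_simple(img, np.ones((4, 4), dtype=bool))
```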
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region to be processed.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2
Gray values for the projections.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to compute the projections.
Default Value : ’simple’
List of values : Mode ∈ {’simple’, ’rectangle’}
. HorProjection (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Horizontal projection.
. VertProjection (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Vertical projection.
Parallelization Information
gray_projections is reentrant and processed without parallelization.
Module
1D Metrology
read_image(Image,’affe’)
texture_laws(Image,Texture,’el’,1,5)
draw_region(Region,WindowHandle)
histo_2dim(Region,Texture,Image,Histo2Dim)
disp_image(Histo2Dim,WindowHandle)
Complexity
If F is the area of the region, the runtime complexity is O(F + 256^2).
Result
The operator histo_2dim returns the value 2 (H_MSG_TRUE) if both images have defined gray values.
The behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:).
Attention
The calculation of Deviation does not follow the usual definition if the region of the image contains only one
pixel. In this case 0.0 is returned.
Parameter
Alternatives
select_gray, min_max_gray
See also
mean_image, gray_histo, gray_histo_abs
Module
Foundation
Result
The operator min_max_gray returns the value 2 (H_MSG_TRUE) if the input image has defined gray
values and the parameters are correct. The behavior in case of empty input (no input images available) is set via
the operator set_system(::’no_object_result’,<Result>:). The behavior in case of an empty
region is set via set_system(::’empty_region_result’,<Result>:).
    MRow = \frac{1}{F^2} \sum_{(r,c) \in Regions} (r - \bar{r}) (Image(r, c) - Mean)

    MCol = \frac{1}{F^2} \sum_{(r,c) \in Regions} (c - \bar{c}) (Image(r, c) - Mean)

where F is the area, (\bar{r}, \bar{c}) the center, and m_{1,1}, m_{2,0}, and m_{0,2} the scaled moments of Regions.
The parameters Alpha, Beta and Mean describe a plane above the region:

    Image(r, c) = Alpha \cdot (r - \bar{r}) + Beta \cdot (c - \bar{c}) + Mean

Thus Alpha indicates the gradient in the direction of the line axis (“down”), Beta the gradient in the direction of
the column axis (to the “right”).
Parameter
Result
The operator moments_gray_plane returns the value 2 (H_MSG_TRUE) if an image with the defined gray
values (byte) is entered and the parameters are correct. The behavior in case of empty input (no input images
available) is set via the operator set_system(::’no_object_result’,<Result>:), the behavior in
case of empty region is set via set_system(::’empty_region_result’,<Result>:). If necessary,
an exception is raised.
Parallelization Information
moments_gray_plane is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, threshold,
regiongrowing
See also
intensity, moments_region_2nd
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp 75-76
Module
Foundation
plane_deviation ( Regions, Image : : : Deviation )
Calculate the deviation of the gray values from the approximating image plane.
The operator plane_deviation calculates the deviation of the gray values in Image from the approximation
of the gray values through a plane. In contrast to the standard deviation computed by intensity, slanted gray
value planes also receive the value zero. The gray value plane is calculated according to gen_image_gray_ramp.
If F is the area, α, β, µ the parameters of the image plane, and (r_0, c_0) the center, Deviation is defined by:

    Deviation = \sqrt{ \frac{ \sum_{(r,c) \in Regions} \left( (\alpha (r - r_0) + \beta (c - c_0) + \mu) - Image(r, c) \right)^2 }{F} }
Attention
It should be noted that the calculation of Deviation does not follow the usual definition. It is defined to return
the value 0.0 for an image with only one pixel.
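The definition can be sketched in NumPy by fitting the plane with least squares and measuring the residuals (an illustration of the documented formula, not the HALCON operator):

```python
import numpy as np

def plane_deviation(image, mask):
    """Deviation of the gray values from the least-squares plane
    alpha*(r - r0) + beta*(c - c0) + mu fitted over the region."""
    r, c = np.nonzero(mask)
    g = image[r, c].astype(float)
    A = np.column_stack([r - r.mean(), c - c.mean(), np.ones(g.size)])
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    residual = A @ coeffs - g
    return float(np.sqrt((residual ** 2).sum() / g.size))

# intensity would report a large standard deviation for this ramp,
# but its deviation from the fitted plane is zero.
rr, cc = np.mgrid[0:8, 0:8]
dev = plane_deviation(2.0 * rr + 3.0 * cc + 1.0, np.ones((8, 8), bool))
```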
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions, of which the plane deviation is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic
Gray value image.
. Deviation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Deviation of the gray values within a region.
Complexity
If F is the area of the region the runtime complexity amounts to O(F ).
Result
The operator plane_deviation returns the value 2 (H_MSG_TRUE) if Image is of the type byte.
The behavior in case of empty input (no input images available) is set via the operator set_system(::
’no_object_result’,<Result>:), the behavior in case of empty region is set via set_system(::
’empty_region_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
plane_deviation is reentrant and automatically parallelized (on tuple level).
Alternatives
intensity, gen_image_gray_ramp, sub_image
See also
moments_gray_plane
Module
Foundation
Complexity
If F is the area of the input region and N the mean number of connected components, the runtime complexity is
O(255 (F + \sqrt{F} \sqrt{N})).
Result
The operator shape_histo_all returns the value 2 (H_MSG_TRUE) if an image with the defined gray values
is entered. The behavior in case of empty input (no input images) is set via the operator set_system(::
’no_object_result’,<Result>:), the behavior in case of empty region is set via set_system(::
’empty_region_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
shape_histo_all is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, threshold, gen_region_histo
Alternatives
shape_histo_point
See also
connection, convexity, compactness, connect_and_holes, entropy_gray, gray_histo,
set_paint, count_obj
Module
Foundation
Parameter
7.7 Format
change_format ( Image : ImagePart : Width, Height : )
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. ImagePart (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Image with new format.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of new image.
Default Value : 512
Suggested values : Width ∈ {32, 64, 128, 256, 512, 768, 1024}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of new image.
Default Value : 512
Suggested values : Height ∈ {32, 64, 128, 256, 512, 525, 1024}
Parallelization Information
change_format is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
crop_part
See also
zoom_image_size, zoom_image_factor
Module
Foundation
at the top (Top), at the left (Left), at the bottom (Bottom), and at the right (Right). Positive values result in
a smaller, negative values in a larger size. If all parameters are set to zero, the region remains unchanged.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. ImagePart (output_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real
Image area.
. Top (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of rows clipped at the top.
Default Value : -1
Suggested values : Top ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
. Left (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of columns clipped at the left.
Default Value : -1
Suggested values : Left ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
. Bottom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of rows clipped at the bottom.
Default Value : -1
Suggested values : Bottom ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
. Right (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of columns clipped at the right.
Default Value : -1
Suggested values : Right ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
Result
crop_domain_rel returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception is raised.
Parallelization Information
crop_domain_rel is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
reduce_domain, threshold, connection, regiongrowing, pouring
Alternatives
crop_domain, crop_rectangle1
See also
smallest_rectangle1, intersection, gen_rectangle1, clip_region
Module
Foundation
Result
tile_channels returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via set_system(::’no_object_result’,<Result>:).
If necessary, an exception is raised.
Parallelization Information
tile_channels is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
append_channel
Alternatives
tile_images, tile_images_offset
See also
change_format, crop_part, crop_rectangle1
Module
Foundation
Result
tile_images returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If
the input is empty the behavior can be set via set_system(::’no_object_result’,<Result>:). If
necessary, an exception is raised.
Parallelization Information
tile_images is reentrant and automatically parallelized (on channel level).
Possible Predecessors
append_channel
Alternatives
tile_channels, tile_images_offset
See also
change_format, crop_part, crop_rectangle1
Module
Foundation
tile_images_offset ( Images : TiledImage : OffsetRow, OffsetCol, Row1, Col1, Row2, Col2, Width, Height : )
Tile multiple image objects into a large image with explicit positioning information.
tile_images_offset tiles multiple input image objects, which must contain the same number of channels,
into a large image. The input image object Images contains Num images, which may be of different size. The
output image TiledImage contains as many channels as the input images. The size of the output image is
determined by the parameters Width and Height. The position of the upper left corner of the input images in
the output images is determined by the parameters OffsetRow and OffsetCol. Both parameters must contain
exactly Num values. Optionally, each input image can be cropped to an arbitrary rectangle that is smaller than the
input image. To do so, the parameters Row1, Col1, Row2, and Col2 must be set accordingly. If any of these four
parameters is set to -1, the corresponding input image is not cropped. In any case, all four parameters must contain
Num values. If the input images are cropped the position parameters OffsetRow and OffsetCol refer to the
upper left corner of the cropped image. If the input images overlap each other in the output image (while taking
into account their respective domains), the image with the higher index in Images overwrites the image data of
the image with the lower index. The domain of TiledImage is obtained by copying the domains of Images to
the corresponding locations in the output image.
Attention
If the input images all have the same size and tile the output image exactly, the operator tile_images usually
will be slightly faster.
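The compositing rule described above (each input image placed at its offset, with images later in Images overwriting earlier ones where they overlap) can be illustrated by the following Python sketch. This is not HALCON code; cropping and domain handling are omitted, and the function name merely mirrors the operator:

```python
def tile_images_offset(images, offsets, width, height, background=0):
    """Compose images (2-D lists of gray values) into one width x height
    canvas. Images later in the list overwrite earlier ones at
    overlapping pixels, mirroring the operator's compositing rule."""
    tiled = [[background] * width for _ in range(height)]
    for img, (off_row, off_col) in zip(images, offsets):
        for r, row in enumerate(img):
            for c, v in enumerate(row):
                tr, tc = off_row + r, off_col + c
                if 0 <= tr < height and 0 <= tc < width:
                    tiled[tr][tc] = v
    return tiled
```

Pixels of the output image that are covered by no input image keep the background value.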
Parameter
/* Example 1 */
/* Grab 2 (multi-channel) NTSC images, crop the bottom 5 lines off */
/* of each image, the right 5 columns off of the first image, and */
/* the left 5 columns off of the second image, and put the cropped */
/* images side-by-side. */
gen_empty_obj (Images)
for I := 1 to 2 by 1
grab_image_async (ImageGrabbed, FGHandle, -1)
concat_obj (Images, ImageGrabbed, Images)
endfor
tile_images_offset (Images, TiledImage, [0,0], [0,635], [0,0], [0,5],
                    [474,474], [634,639], 1270, 475)
/* Example 2 */
/* Enlarge image by 15 rows and columns on all sides */
EnlargeColsBy := 15
EnlargeRowsBy := 15
get_image_pointer1 (Image, Pointer, Type, WidthImage, HeightImage)
tile_images_offset (Image, EnlargedImage, EnlargeRowsBy, EnlargeColsBy,
-1, -1, -1, -1, WidthImage + EnlargeColsBy*2,
HeightImage + EnlargeRowsBy*2)
Result
tile_images_offset returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs
during execution. If the input is empty the behavior can be set via set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
tile_images_offset is reentrant and automatically parallelized (on channel level).
Possible Predecessors
append_channel
Alternatives
tile_channels, tile_images
See also
change_format, crop_part, crop_rectangle1
Module
Foundation
7.8 Manipulation
. ImageDestination (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Input image to be painted over.
. ImageSource (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image containing the desired gray values.
Example
/* Copy a circular part of the image ’monkey’ into a new image (New1): */
read_image(Image,’monkey’)
gen_circle(Circle,200,200,150)
reduce_domain(Image,Circle,Mask)
/* New image with black (0) background */
gen_image_proto(Image,New1,0.0)
/* Copy a part of the image ’monkey’ into New1 */
overpaint_gray(New1,Mask)
Result
overpaint_gray returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
overpaint_gray is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto
Alternatives
get_image_pointer1, paint_gray, set_grayval, copy_image
See also
paint_region, overpaint_region
Module
Foundation
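The in-place copy semantics of overpaint_gray (the gray values of ImageSource are copied into ImageDestination within the domain of ImageSource) can be sketched in Python. This is an illustration, not HALCON code; the domain is modeled as a plain set of pixel coordinates:

```python
def overpaint_gray(destination, source, domain):
    """Copy the gray values of `source` into `destination` at every
    pixel (r, c) of `domain`, modifying `destination` in place."""
    for (r, c) in domain:
        destination[r][c] = source[r][c]
```

Because the destination is modified in place, the function returns nothing.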
The parameter Type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
If you do not want to modify Image itself, you can use the operator paint_region, which returns the result
in a newly created image.
Attention
overpaint_region modifies the content of an already existing image (Image). Note that other image
objects may be affected as well: For example, if you created Image via copy_obj from another image object
(or vice versa), overpaint_region will also modify the image matrix of this other image object. Therefore,
overpaint_region should only be used to overpaint newly created image objects. Typical operators for this
task are, e.g., gen_image_const (creates a new image with a specified size), gen_image_proto (creates
an image with the size of a specified prototype image), or copy_image (creates an image as the copy of a
specified image).
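The image-matrix sharing described in the Attention section behaves like reference aliasing in general-purpose languages. A Python analogy (not HALCON code; the nested list stands in for the image matrix):

```python
import copy

image = [[10, 10], [10, 10]]        # the underlying "image matrix"
alias = image                       # like copy_obj: new object, same matrix
independent = copy.deepcopy(image)  # like copy_image: the matrix is copied

image[0][0] = 255                   # "overpaint" through one reference
```

After the overpaint, the change is visible through alias but not through independent.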
Parameter
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the regions are to be painted.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be painted into the input image.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Paint regions filled or as boundaries.
Default Value : ’fill’
List of values : Type ∈ {’fill’, ’margin’}
Example
gen_rectangle1(Rectangle,100.0,100.0,300.0,300.0)
overpaint_region(Image,Rectangle,255.0,’fill’)
Result
overpaint_region returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
overpaint_region is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, reduce_domain
Alternatives
set_grayval, paint_region, paint_xld
See also
reduce_domain, set_draw, paint_gray, overpaint_gray, gen_image_const
Module
Foundation
. ImageSource (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image containing the desired gray values.
. ImageDestination (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Input image to be painted over.
. MixedImage (output_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Result image.
Example
/* Copy a circular part of the image ’monkey’ into the image ’fabrik’: */
read_image(Image,’monkey’)
gen_circle(Circle,200,200,150)
reduce_domain(Image,Circle,Mask)
read_image(Image2,’fabrik’)
/* Copy a part of the image ’monkey’ into ’fabrik’ */
paint_gray(Mask,Image2,MixedImage)
Result
paint_gray returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
paint_gray is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto
Alternatives
get_image_pointer1, set_grayval, copy_image, overpaint_gray
See also
paint_region, overpaint_region
Module
Foundation
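In contrast to overpaint_gray, paint_gray leaves the destination image unchanged and returns the result in a new image. A Python sketch of this difference (an illustration only, not HALCON code, with the domain modeled as a set of pixel coordinates):

```python
import copy

def paint_gray(source, destination, domain):
    """Return a new image in which `source`'s gray values are painted
    into a copy of `destination` at every pixel of `domain`;
    `destination` itself stays unchanged."""
    mixed = copy.deepcopy(destination)
    for (r, c) in domain:
        mixed[r][c] = source[r][c]
    return mixed
```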
The parameter Type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
As an alternative to paint_region, you can use the operator overpaint_region, which directly paints
the regions into Image.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be painted into the input image.
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the regions are to be painted.
. ImageResult (output_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Image containing the result.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Paint regions filled or as boundaries.
Default Value : ’fill’
List of values : Type ∈ {’fill’, ’margin’}
Example
read_image(Image,’monkey’)
gen_rectangle1(Rectangle,100.0,100.0,300.0,300.0)
/* paint a white rectangle */
paint_region(Rectangle,Image,ImageResult,255.0,’fill’)
Result
paint_region returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
paint_region is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, reduce_domain
Alternatives
set_grayval, overpaint_region, paint_xld
See also
reduce_domain, paint_gray, overpaint_gray, set_draw, gen_image_const
Module
Foundation
Parameter
. XLD (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject
XLD objects to be painted into the input image.
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the xld objects are to be painted.
. ImageResult (output_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Image containing the result.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Desired gray value of the xld object.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
Example
concat_obj(circle,arrows,green_dot)
/* paint a green circle and white arrows (to paint all
* objects e.g. blue, pass the tuple [0,0,255] for Grayval) */
paint_xld(green_dot,Image,ImageResult,[0,255,0,255,255,255])
Result
paint_xld returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be set
via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
paint_xld is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, gen_contour_polygon_xld,
threshold_sub_pix
Alternatives
set_grayval, paint_gray, paint_region
See also
gen_image_const
Module
Foundation
Alternatives
get_image_pointer1, paint_gray, paint_region
See also
get_grayval, gen_image_const, gen_image1, gen_image_proto
Module
Foundation
7.9 Type-Conversion
Result
convert_image_type returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the
behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
convert_image_type is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
scale_image
See also
scale_image, abs_image
Module
Foundation
Lines
8.1 Access
approx_chain ( : : Row, Column, MinWidthCoord, MaxWidthCoord,
ThreshStart, ThreshEnd, ThreshStep, MinWidthSmooth, MaxWidthSmooth,
MinWidthCurve, MaxWidthCurve, Weight1, Weight2,
Weight3 : ArcCenterRow, ArcCenterCol, ArcAngle, ArcBeginRow,
ArcBeginCol, LineBeginRow, LineBeginCol, LineEndRow, LineEndCol,
Order )
set_d(t3,0.3,0);
set_d(t4,0.9,0);
set_d(t5,0.2,0);
set_d(t6,0.4,0);
set_d(t7,2.4,0);
set_i(t8,2,0);
set_i(t9,12,0);
set_d(t10,1.0,0);
set_d(t11,1.0,0);
set_d(t12,1.0,0);
T_approx_chain(Rows,Columns,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,
&Bzl,&Bzc,&Br,&Bwl,&Bwc,&Ll0,&Lc0,&Ll1,&Lc1,&order);
nob = length_tuple(Bzl);
nol = length_tuple(Ll0);
/* draw lines and arcs */
set_i(WindowHandleTuple,WindowHandle,0);
set_line_width(WindowHandle,4);
if (nob>0) T_disp_arc(Bzl,Bzc,Br,Bwl,Bwc);
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);
Result
The operator approx_chain returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
approx_chain is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, get_region_contour, threshold, hysteresis_threshold
Possible Successors
set_line_width, disp_arc, disp_line
Alternatives
get_region_polygon, approx_chain_simple
See also
get_region_chain, smallest_circle, disp_circle, disp_line
Module
Foundation
Parameter
Result
The operator approx_chain_simple returns the value 2 (H_MSG_TRUE) if the parameters are correct.
Otherwise an exception is raised.
Parallelization Information
approx_chain_simple is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, get_region_contour, threshold, hysteresis_threshold
Possible Successors
set_line_width, disp_arc, disp_line
Alternatives
get_region_polygon, approx_chain
See also
get_region_chain, smallest_circle, disp_circle, disp_line
Module
Foundation
8.2 Features
line_orientation ( : : RowBegin, ColBegin, RowEnd, ColEnd : Phi )
The operator line_position returns the center (RowCenter, ColCenter), the (Euclidean) length
(Length), and the orientation (−π/2 < Phi ≤ π/2) of the given lines. If more than one line is to be
processed, the coordinates can be passed as tuples; in this case the output parameters are also tuples.
The routine is applied, for example, to model lines in order to determine search regions for edge detection
(detect_edge_segments).
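The returned values can be reproduced with elementary geometry. The following Python sketch (an illustration, not HALCON code) computes center, length, and an orientation normalized to (−π/2, π/2]; the sign of the row difference is negated because image row coordinates grow downwards:

```python
import math

def line_position(row_begin, col_begin, row_end, col_end):
    """Center, Euclidean length, and orientation of a line segment,
    with the orientation folded into the interval (-pi/2, pi/2]."""
    row_center = (row_begin + row_end) / 2.0
    col_center = (col_begin + col_end) / 2.0
    length = math.hypot(row_end - row_begin, col_end - col_begin)
    phi = math.atan2(-(row_end - row_begin), col_end - col_begin)
    if phi <= -math.pi / 2:      # lines are undirected, so fold the angle
        phi += math.pi
    elif phi > math.pi / 2:
        phi -= math.pi
    return row_center, col_center, length, phi
```

For example, a vertical segment yields Phi = π/2, a horizontal one Phi = 0.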
Parameter
Attention
If only one feature is used, the value of Operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter
Alternatives
line_orientation, line_position, select_lines, select_lines_longest
See also
select_lines, select_lines_longest, detect_edge_segments, select_shape
Module
Foundation
Attention
If only one feature is used, the value of Operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter
. RowBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; integer
Row coordinates of the starting points of the input lines.
. ColBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; integer
Column coordinates of the starting points of the input lines.
. RowEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; integer
Row coordinates of the ending points of the input lines.
. ColEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; integer
Column coordinates of the ending points of the input lines.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Features to be used for selection.
Default Value : ’length’
List of values : Feature ∈ {’length’, ’row’, ’column’, ’phi’}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Desired combination of the features.
Default Value : ’and’
List of values : Operation ∈ {’and’, ’or’}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer / real
Lower limits of the features or ’min’.
Default Value : ’min’
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer / real
Upper limits of the features or ’max’.
Default Value : ’max’
. RowBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; integer
Row coordinates of the starting points of the output lines.
. ColBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; integer
Column coordinates of the starting points of the output lines.
Possible Predecessors
sobel_amp, edges_image, threshold, hysteresis_threshold, split_skeleton_region,
split_skeleton_lines
Possible Successors
set_line_width, disp_line
Alternatives
line_orientation, line_position, select_lines, partition_lines
See also
select_lines, partition_lines, detect_edge_segments, select_shape
Module
Foundation
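The selection logic described above (test each chosen feature against its [Min, Max] interval and combine the tests with ’and’ or ’or’) can be sketched in Python. This is an illustration, not HALCON code; the function name is hypothetical, the ’phi’ feature and the ’min’/’max’ string sentinels are omitted, and interpreting ’row’ and ’column’ as center coordinates is a simplifying assumption:

```python
import math

def line_feature(line, feature):
    row_begin, col_begin, row_end, col_end = line
    if feature == 'length':
        return math.hypot(row_end - row_begin, col_end - col_begin)
    if feature == 'row':        # assumed: row coordinate of the center
        return (row_begin + row_end) / 2.0
    if feature == 'column':     # assumed: column coordinate of the center
        return (col_begin + col_end) / 2.0
    raise ValueError(feature)

def select_lines_by_features(lines, features, operation, mins, maxs):
    """Keep a line if its feature values lie inside [min, max] for all
    ('and') or for at least one ('or') of the given features."""
    selected = []
    for line in lines:
        in_range = [mn <= line_feature(line, f) <= mx
                    for f, mn, mx in zip(features, mins, maxs)]
        keep = all(in_range) if operation == 'and' else any(in_range)
        if keep:
            selected.append(line)
    return selected
```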
Matching
9.1 Component-Based
clear_all_component_models ( : : : )
clear_all_training_components ( : : : )
Possible Predecessors
train_model_components, write_training_components
See also
clear_training_components
Module
Matching
clear_component_model ( : : ComponentModelID : )
clear_training_components ( : : ComponentTrainingID : )
cluster_model_components (
TrainingImages : ModelComponents : ComponentTrainingID,
AmbiguityCriterion, MaxContourOverlap, ClusterThreshold : )
Adopt new parameters that are used to create the model components into the training result.
With cluster_model_components you can modify certain parameters after a first training has been
performed using train_model_components. cluster_model_components sets the criterion
AmbiguityCriterion that is used to solve the ambiguities, the maximum contour overlap
MaxContourOverlap, and the cluster threshold ClusterThreshold of the training result
ComponentTrainingID to the specified values. A detailed description of these parameters can be found in
the documentation of train_model_components. Modifying these parameters changes the way in which
the initial components are merged into rigid model components. For example, the greater the chosen cluster
threshold, the fewer initial components are merged.
The rigid model components are returned in ModelComponents. In order to obtain reasonable results, it is
essential that the same training images that were used to perform the training with
train_model_components are passed in TrainingImages. The pose of the newly clustered components
within the training images is determined using shape-based matching. As in train_model_components,
one can decide whether the shape models should be pregenerated by using
set_system(’pregenerate_shape_models’,...). Furthermore,
set_system(’border_shape_models’,...) can be used to determine whether the models must lie
completely within the training images or whether they may extend partially beyond the image border.
Thus, you can select suitable parameter values interactively by repeatedly calling
inspect_clustered_components with different parameter values and then setting the chosen values by
using cluster_model_components.
Parameter
. TrainingImages (input_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Training images that were used for training the model components.
. ModelComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Contour regions of rigid model components.
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; integer
Handle of the training result.
. AmbiguityCriterion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Criterion for solving the ambiguities.
Default Value : ’rigidity’
List of values : AmbiguityCriterion ∈ {’distance’, ’orientation’, ’distance_orientation’, ’rigidity’}
. MaxContourOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum contour overlap of the found initial components.
Default Value : 0.2
Suggested values : MaxContourOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MaxContourOverlap) ∧ (MaxContourOverlap ≤ 1)
. ClusterThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for clustering the initial components.
Default Value : 0.5
Suggested values : ClusterThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (0 ≤ ClusterThreshold) ∧ (ClusterThreshold ≤ 1)
Example
InitialComponentRegions := [InitialComponentRegions,Rectangle3]
InitialComponentRegions := [InitialComponentRegions,Rectangle4]
* Get the training images
TrainingImages := []
for i := 1 to 4 by 1
read_image (TrainingImage, ’training_image-’+i$’02’+’.tif’)
TrainingImages := [TrainingImages,TrainingImage]
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, TrainingImages,
ModelComponents, 22, 60, 30, 0.65, 0, 0, rad(60),
’speed’, ’rigidity’, 0.2, 0.5, ComponentTrainingID)
* Find the best value for the parameter ClusterThreshold.
inspect_clustered_components (ModelComponents, ComponentTrainingID,
’rigidity’, 0.2, 0.4)
* Adopt the ClusterThreshold into the training result.
cluster_model_components (TrainingImages, ModelComponents,
ComponentTrainingID, ’rigidity’, 0.2, 0.4)
* Create the component model based on the training result.
create_trained_component_model (ComponentTrainingID, -rad(30), rad(60), 10,
0.5, ’auto’, ’auto’, ’none’, ’use_polarity’,
’false’, ComponentModelID, RootRanking)
Result
If the parameter values are correct, the operator cluster_model_components returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
cluster_model_components is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, inspect_clustered_components
Possible Successors
get_training_components, create_trained_component_model,
modify_component_relations, write_training_components,
get_component_relations, clear_training_components,
clear_all_training_components
Module
Matching
create_component_model ( ModelImage,
ComponentRegions : : VariationRow, VariationColumn, VariationAngle,
AngleStart, AngleExtent, ContrastLowComp, ContrastHighComp,
MinSizeComp, MinContrastComp, MinScoreComp, NumLevelsComp,
AngleStepComp, OptimizationComp, MetricComp,
PregenerationComp : ComponentModelID, RootRanking )
Prepare a component model for matching based on explicitly specified components and relations.
create_component_model prepares patterns, which are passed in the form of a model image
ModelImage and several regions ComponentRegions, as a component model for matching. The out-
put parameter ComponentModelID is a handle for this model, which is used in subsequent calls to
find_component_model. In contrast to create_trained_component_model, no preceding training
with train_model_components needs to be performed before calling create_component_model.
Each of the regions passed in ComponentRegions describes a separate model component. Later, the index of
a component region in ComponentRegions is used to denote the model component. The reference point of a
component is the center of gravity of its associated region, which is passed in ComponentRegions. It can be
calculated by calling area_center.
The relative movements (relations) of the model components can be set by passing VariationRow,
VariationColumn, and VariationAngle. Because directly passing the relations is complicated, instead of
the relations the variations of the model components are passed. The variations describe the movements of the com-
ponents independently from each other relative to their poses in the model image ModelImage. The parameters
VariationRow and VariationColumn describe the movement of the components in row and column
direction by ±1/2 VariationRow and ±1/2 VariationColumn, respectively. The parameter
VariationAngle describes the angle variation of the component by ±1/2 VariationAngle. Based on
these values, the relations
are automatically computed. The three parameters must either contain one element, in which case the parameter is
used for all model components, or must contain the same number of elements as ComponentRegions, in which
case each parameter element refers to the corresponding model component in ComponentRegions.
The parameters AngleStart and AngleExtent determine the range of possible rotations of the component
model in an image.
Internally, a separate shape model is built for each model component (see create_shape_model). There-
fore, the parameters ContrastLowComp, ContrastHighComp, MinSizeComp, MinContrastComp,
MinScoreComp, NumLevelsComp, AngleStepComp, OptimizationComp, MetricComp, and
PregenerationComp correspond to the parameters of create_shape_model, with the following differ-
ences: First, in the parameter Contrast of create_shape_model the upper as well as the lower threshold
for the hysteresis threshold method can be passed. Additionally, a third value, which suppresses small connected
contour regions, can be passed. In contrast, the operator create_component_model offers three separate
parameters ContrastHighComp, ContrastLowComp, and MinSizeComp in order to set these three
values. Consequently, the automatic computation of the contrast threshold(s) also differs. If both hysteresis
thresholds should be determined automatically, both ContrastLowComp and ContrastHighComp must
be set to ’auto’. In contrast, if only one threshold value should be determined, ContrastLowComp must be
set to ’auto’ while ContrastHighComp must be set to an arbitrary value different from ’auto’. Secondly,
the parameter Optimization of create_shape_model provides the possibility to reduce the number
of model points as well as the possibility to completely pregenerate the shape model. In contrast, the operator
create_component_model uses a separate parameter PregenerationComp in order to decide
whether the shape models should be completely pregenerated or not. A third difference concerning
the parameter MinScoreComp should be noted. When using shape-based matching, this parameter need
not be passed when preparing the shape model using create_shape_model, but only during the search
using find_shape_model. In contrast, when preparing the component model it is favorable to analyze ro-
tational symmetries of the model components and similarities between the model components. However, this
analysis only leads to meaningful results if the value for MinScoreComp that is used during the search (see
find_component_model) is already approximately known.
In addition to the parameters ContrastLowComp, ContrastHighComp, and MinSizeComp also the pa-
rameters MinContrastComp, NumLevelsComp, AngleStepComp, and OptimizationComp can be au-
tomatically determined by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number of
elements as the number of regions in ComponentRegions, in which case each parameter element refers to the
corresponding element in ComponentRegions.
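The broadcasting rule for component-specific parameters (a single element applies to every component; otherwise exactly one element per component region is required) can be sketched as follows; the function name is hypothetical:

```python
def expand_component_param(values, num_components):
    """Expand a component-specific parameter tuple: one element is
    replicated for all components, otherwise the tuple must contain
    exactly one element per component region."""
    if len(values) == 1:
        return values * num_components
    if len(values) == num_components:
        return list(values)
    raise ValueError("expected 1 or %d values" % num_components)
```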
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using find_component_model in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
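The search strategy can be sketched abstractly in Python: the root component is located in the whole image, and every other component is then located relative to the pose of its predecessor in the search tree. The function names and the pose representation are hypothetical; this is an illustration, not the actual implementation:

```python
def find_components(search_tree, root, find_absolute, find_relative):
    """Locate the root component in the whole image, then locate every
    remaining component relative to the pose of its predecessor.
    `search_tree` maps a component to the list of its successors."""
    poses = {root: find_absolute(root)}
    pending = [root]
    while pending:
        pred = pending.pop()
        for comp in search_tree.get(pred, []):
            # the relations bound the region searched for `comp`,
            # which keeps the relative search fast
            poses[comp] = find_relative(comp, poses[pred])
            pending.append(comp)
    return poses
```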
The root component can be passed as an input parameter of find_component_model during the search. To
what extent a model component is suited to act as the root component depends on several factors. In principle, a
model component that can be found in the image with a high probability should be chosen. Therefore, a component
that is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as the root
component. Additionally, the computation time that is associated with the root component during the search
can serve as a criterion. A ranking of the model components that is based on the latter criterion is returned in
RootRanking. In this parameter the indices of the model components are sorted in descending order according
to their associated search time, i.e., RootRanking[0] contains the index of the model component that, chosen
as root component, allows the fastest search. Note that the ranking returned in RootRanking represents only a
coarse estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value
of the system parameter ’border_shape_models’ are identical when calling create_component_model and
find_component_model.
Parameter
Result
If the parameters are valid, the operator create_component_model returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Parallelization Information
create_component_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, concat_obj
Possible Successors
find_component_model
Alternatives
create_trained_component_model
See also
create_shape_model, find_shape_model
Module
Matching
create_trained_component_model ( : : ComponentTrainingID,
AngleStart, AngleExtent, MinContrastComp, MinScoreComp, NumLevelsComp,
AngleStepComp, OptimizationComp, MetricComp,
PregenerationComp : ComponentModelID, RootRanking )
Additionally, the computation time that is associated with the root component during the search can serve as a
criterion. A ranking of the model components that is based on the latter criterion is returned in RootRanking.
In this parameter the indices of the model components are sorted in descending order according to their associ-
ated computation time, i.e., RootRanking[0] contains the index of the model component that, chosen as root
component, allows the fastest search. Note that the ranking returned in RootRanking represents only a coarse
estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value of the
system parameter ’border_shape_models’ are identical when calling create_trained_component_model
and find_component_model.
Parameter
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; integer
Handle of the training result.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Smallest rotation of the component model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Extent of the rotation of the component model.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. MinContrastComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Minimum contrast of the components in the search images.
Default Value : ’auto’
Suggested values : MinContrastComp ∈ {’auto’, 10, 20, 30, 40}
Restriction : MinContrastComp ≥ 0
. MinScoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Minimum score of the instances of the components to be found.
Default Value : 0.5
Suggested values : MinScoreComp ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScoreComp) ∧ (MinScoreComp ≤ 1)
. NumLevelsComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Maximum number of pyramid levels for the components.
Default Value : ’auto’
List of values : NumLevelsComp ∈ {’auto’, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStepComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / string
Step length of the angles (resolution) for the components.
Default Value : ’auto’
Suggested values : AngleStepComp ∈ {’auto’, 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : AngleStepComp ≥ 0
. OptimizationComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Kind of optimization for the components.
Default Value : ’auto’
List of values : OptimizationComp ∈ {’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’,
’point_reduction_high’}
. MetricComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Match metric used for the components.
Default Value : ’use_polarity’
List of values : MetricComp ∈ {’use_polarity’, ’ignore_global_polarity’, ’ignore_local_polarity’,
’ignore_color_polarity’}
. PregenerationComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Complete pregeneration of the shape models for the components if equal to ’true’.
Default Value : ’false’
List of values : PregenerationComp ∈ {’true’, ’false’}
. ComponentModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; integer
Handle of the component model.
HALCON 8.0.2
558 CHAPTER 9. MATCHING
Result
If the parameters are valid, the operator create_trained_component_model returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
create_trained_component_model is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, read_training_components
Possible Successors
find_component_model
Alternatives
create_component_model
See also
create_shape_model, find_shape_model
Module
Matching
Internally, the shape-based matching is used for the component-based matching in order to search the individ-
ual components (see find_shape_model). Therefore, the parameters MinScoreComp, SubPixelComp,
NumLevelsComp, and GreedinessComp have the same meaning as the corresponding parameters in
find_shape_model. These parameters must either contain one element, in which case the parameter is used
for all components, or must contain the same number of elements as model components in ComponentModelID,
in which case each parameter element refers to the corresponding component in ComponentModelID.
NumLevelsComp may also contain two elements or twice the number of elements as model components. The
first value determines the number of pyramid levels to use. The second value determines the lowest pyramid level
to which the found matches are tracked. If different values should be used for different components, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevelsComp. If, for ex-
ample, two components are contained in ComponentModelID, and the number of pyramid levels is 5 for the
first component and 4 for the second component, and the lowest pyramid level is 2 for the first component and 1
for the second component, NumLevelsComp = [5,2,4,1] must be selected. Further details can be found in the
documentation of find_shape_models.
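The interleaved encoding can be sketched in plain Python (the helper decode_num_levels is hypothetical and only illustrates the layout of the tuple):

```python
def decode_num_levels(num_levels_comp, num_components):
    """Decode the interleaved form of NumLevelsComp into per-component
    (number of pyramid levels, lowest pyramid level) pairs. Illustrative
    helper only; the name does not exist in HALCON."""
    assert len(num_levels_comp) == 2 * num_components
    return [(num_levels_comp[2 * i], num_levels_comp[2 * i + 1])
            for i in range(num_components)]

# The example from the text: component 0 uses 5 levels down to level 2,
# component 1 uses 4 levels down to level 1.
print(decode_num_levels([5, 2, 4, 1], 2))  # [(5, 2), (4, 1)]
```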
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the component model should be found.
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; integer
Handle of the component model.
. RootComponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Index of the root component.
Suggested values : RootComponent ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}
. AngleStartRoot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Smallest rotation of the root component
Default Value : -0.39
Suggested values : AngleStartRoot ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtentRoot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Extent of the rotation of the root component.
Default Value : 0.78
Suggested values : AngleExtentRoot ∈ {6.28, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtentRoot ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Minimum score of the instances of the component model to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScore) ∧ (MinScore ≤ 1)
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of instances of the component model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum overlap of the instances of the component models to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MaxOverlap) ∧ (MaxOverlap ≤ 1)
. IfRootNotFound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Behavior if the root component is missing.
Default Value : ’stop_search’
List of values : IfRootNotFound ∈ {’stop_search’, ’select_new_root’}
. IfComponentNotFound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Behavior if a component is missing.
Default Value : ’prune_branch’
List of values : IfComponentNotFound ∈ {’prune_branch’, ’search_from_upper’, ’search_from_best’}
See also
find_shape_model, find_shape_models, get_shape_model_params,
get_component_model_params, train_model_components, set_shape_model_origin,
smallest_rectangle2
Module
Matching
gen_initial_components (
ModelImage : InitialComponents : ContrastLow, ContrastHigh, MinSize,
Mode, GenericName, GenericValue : )
When using the second possibility, i.e., the components of the component model are approximately known,
the training by using train_model_components can be performed without previously executing
gen_initial_components. If this is desired, the initial components can be specified by the user
and directly passed to train_model_components. Furthermore, if the components as well as the
relative movements (relations) of the components are known, gen_initial_components as well as
train_model_components need not be executed. In fact, by immediately passing the components as well
as the relations to create_component_model, the component model can be created without any training.
In both cases, however, gen_initial_components can be used to evaluate the effect of the feature ex-
traction parameters ContrastLow, ContrastHigh, and MinSize of train_model_components and
create_component_model, and hence to find suitable parameter values for a certain application.
For this, the image regions for the (initial) components must be explicitly given, i.e., for each (initial) component
a separate image from which the (initial) component should be created is passed. In this case, ModelImage
contains multiple image objects. The domain of each image object is used as the region of interest for calculating
the corresponding (initial) component. The image matrix of all image objects in the tuple must be identical, i.e.,
ModelImage cannot be constructed in an arbitrary manner using concat_obj, but must be created from the
same image using add_channels or equivalent calls. If this is not the case, an error message is returned. If
the parameters ContrastLow, ContrastHigh, or MinSize only contain one element, this value is applied
to the creation of all (initial) components. In contrast, if different values for different (initial) components should
be used, tuples of values can be passed for these three parameters. In this case, the tuples must have a length
that corresponds to the number of (initial) components, i.e., the number of image objects in ModelImage. The
contour regions of the (initial) components are returned in InitialComponents.
Thus, the second possibility is equivalent to the function of inspect_shape_model within the shape-based
matching. However, in contrast to inspect_shape_model, gen_initial_components does not return
the contour regions on multiple image pyramid levels. Therefore, if the number of pyramid levels to be used
should be chosen manually, preferably inspect_shape_model should be called individually for each (initial)
component.
For both described possibilities the parameters ContrastLow, ContrastHigh, and MinSize can be automatically determined. If both hysteresis thresholds should be determined automatically, both ContrastLow
and ContrastHigh must be set to ’auto’. In contrast, if only one threshold value should be determined,
ContrastLow must be set to ’auto’ while ContrastHigh must be set to an arbitrary value different from
’auto’.
If the input image ModelImage has one channel the representation of the model is created with the method
that is used in create_component_model or create_trained_component_model for the metrics
’use_polarity’, ’ignore_global_polarity’, and ’ignore_local_polarity’. If the input image has more than one chan-
nel the representation is created with the method that is used for the metric ’ignore_color_polarity’.
Parameter
Result
If the parameter values are correct, the operator gen_initial_components returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gen_initial_components is reentrant and processed without parallelization.
Possible Predecessors
draw_region, add_channels, reduce_domain
Possible Successors
train_model_components
Alternatives
inspect_shape_model
Module
Matching
Result
If the handle of the component model is valid, the operator get_component_model_params returns the
value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
get_component_model_params is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model
See also
get_shape_model_params
Module
Matching
Result
If the parameters are valid, the operator get_component_model_tree returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Parallelization Information
get_component_model_tree is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model
See also
train_model_components
Module
Matching
Return the relations between the model components that are contained in a training result.
get_component_relations returns the relations between model components after training them with
train_model_components. With the parameter ReferenceComponent, you can select a reference com-
ponent. get_component_relations then returns the relations between the reference component and
all other components in the model image (if Image = ’model_image’ or Image = 0) or in a training image
(if Image ≥ 1). In order to obtain the relations in the ith training image, Image must be set to i. The re-
sult of the training returned by train_model_components must be passed in ComponentTrainingID.
ReferenceComponent describes the index of the reference component and must be within the range of 0 and
n-1, if n is the number of model components (see train_model_components).
The relations are returned in the form of regions in Relations as well as in the form of numerical values in Row,
Column, Phi, Length1, Length2, AngleStart, and AngleExtent.
The region object tuple Relations is designed as follows. For each component a separate region is returned.
Consequently, Relations contains n regions, where the order of the regions within the tuple is determined by the
index of the corresponding components. The positions of all components in the image are represented by circles
with a radius of 3 pixels. For each component other than the reference component ReferenceComponent, ad-
ditionally the position relation and the orientation relation relative to the reference component are represented.
The position relation is represented by a rectangle and the orientation relation is represented by a circle sector with a radius of 30 pixels. The center of the circle is placed at the mean relative position of the component. The rectangle describes the movement of the reference point of the respective component relative to the
pose of the reference component, while the circle sector describes the variation of the relative orientation (see
train_model_components). A relative orientation of 0 corresponds to the relative orientation of both com-
ponents in the model image. If both components appear in the same relative orientation in all images, the circle
sector consequently degenerates to a straight line.
In addition to the region object tuple Relations, the relations are also returned in the form of numerical values in
Row, Column, Phi, Length1, Length2, AngleStart, and AngleExtent. These parameters are tuples
of length n and contain the relations of all components relative to the reference component, where the order of
the values within the tuples is determined by the index of the corresponding component. The position relation is
described by the parameters of the corresponding rectangle Row, Column, Phi, Length1, and Length2 (see
gen_rectangle2). The orientation relation is described by the starting angle AngleStart and the angle
extent AngleExtent. For the reference component only the position within the image is returned in Row and
Column. All other values are set to 0.
If the reference component has not been found in the current image, an array of empty regions is returned and the
corresponding parameter values are set to 0.
The operator get_component_relations is particularly useful in order to visualize the result of the train-
ing that was performed with train_model_components. With this, it is possible to evaluate the varia-
tions that are contained in the training images. Sometimes it might be reasonable to restart the training with
train_model_components while using a different set of training images.
Parameter
ScoreCompInst. The four tuples are always of length n, where n is the number of components in the com-
ponent model ComponentModelID. If a component could not be found during the search, an empty region
is passed in the corresponding element of FoundComponents and the value of the corresponding element in
RowCompInst, ColumnCompInst, AngleCompInst, and ScoreCompInst is set to 0.
Parameter
. FoundComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Found components of the selected component model instance.
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; integer
Handle of the component model.
. ModelStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Start index of each found instance of the component model in the tuples describing the component matches.
. ModelEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
End index of each found instance of the component model in the tuples describing the component matches.
. RowComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real
Row coordinate of the found component matches.
. ColumnComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real
Column coordinate of the found component matches.
. AngleComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Rotation angle of the found component matches.
. ScoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Score of the found component matches.
. ModelComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Index of the found components.
. ModelMatch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the found instance of the component model to be returned.
. MarkOrientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mark the orientation of the components.
Default Value : ’false’
List of values : MarkOrientation ∈ {’true’, ’false’}
. RowCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real
Row coordinate of all components of the selected model instance.
. ColumnCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real
Column coordinate of all components of the selected model instance.
. AngleCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Rotation angle of all components of the selected model instance.
. ScoreCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Score of all components of the selected model instance.
Example
Result
If the parameters are valid, the operator get_found_component_model returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
get_found_component_model is reentrant and processed without parallelization.
Possible Predecessors
find_component_model
See also
train_model_components, create_component_model
Module
Matching
get_training_components
( : TrainingComponents : ComponentTrainingID, Components, Image,
MarkOrientation : Row, Column, Angle, Score )
Result
If the handle of the training result is valid, the operator get_training_components returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
get_training_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
train_model_components
See also
find_shape_model
Module
Matching
inspect_clustered_components
( : ModelComponents : ComponentTrainingID, AmbiguityCriterion,
MaxContourOverlap, ClusterThreshold : )
Example
Result
If the handle of the training result is valid, the operator inspect_clustered_components returns the value
2 (H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
inspect_clustered_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
cluster_model_components
Module
Matching
modify_component_relations ( : : ComponentTrainingID,
ReferenceComponent, ToleranceComponent, PositionTolerance,
AngleTolerance : )
The size of the change is specified as follows: By specifying a position tolerance PositionTolerance, the
semi-axes of the rectangle that describes the reference point’s movement (see train_model_components)
are enlarged by PositionTolerance pixels. Accordingly, by specifying an orientation toler-
ance AngleTolerance, the angle range that describes the variation of the relative orientation (see
train_model_components) is enlarged by AngleTolerance to both sides. Consequently, negative tol-
erance values lead to a decreased size of the relations. The operator modify_component_relations is
particularly useful when the training images that were used during the training do not cover the entire spectrum of
the relative movements.
In order to select the relations that should be modified, values for ReferenceComponent and
ToleranceComponent can be passed in one of the following ways: For each of both parameters either one
value, several values, or the string ’all’ can be passed. The following table summarizes the different possibilities
by giving the affected relations for different combinations of parameter values. Here, four model components are
assumed (0, 1, 2, and 3). If, for example, ReferenceComponent is set to 0 and ToleranceComponent
is set to 1, then the relation (0,1), which corresponds to the relative movement of component 1 with respect to
component 0, will be modified.
ReferenceComponent ToleranceComponent Affected Relation(s)
’all’ ’all’ (0,1) (0,2) (0,3)
(1,0) (1,2) (1,3)
(2,0) (2,1) (2,3)
(3,0) (3,1) (3,2)
’all’ [1,2] (0,1) (0,2)
(1,2)
(2,1)
(3,1) (3,2)
[0,1] ’all’ (0,1) (0,2) (0,3)
(1,0) (1,2) (1,3)
0 1 (0,1)
0 [1,2] (0,1) (0,2)
[0,1] 2 (0,2) (1,2)
[0,1,2] [1,2,3] (0,1) (1,2) (2,3)
The number of tolerance values passed in PositionTolerance and AngleTolerance must either be 1 or
equal to the number of affected relations. In the former case all affected relations are modified by the same
value, whereas in the latter case each relation can be modified individually by passing different values within a
tuple.
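One possible reading of the table can be reconstructed in plain Python (both helpers are hypothetical; they merely reproduce the rows above, under the assumption that two explicit tuples of equal length are paired element-wise):

```python
def _as_indices(value, num_components):
    # Expand 'all', a single index, or a tuple of indices to a list of indices.
    if value == 'all':
        return list(range(num_components))
    if isinstance(value, (list, tuple)):
        return list(value)
    return [value]

def affected_relations(reference, tolerance, num_components):
    """Relations (reference, tolerance) affected by modify_component_relations.
    Illustrative reconstruction of the table above, not HALCON code."""
    ref = _as_indices(reference, num_components)
    tol = _as_indices(tolerance, num_components)
    if reference != 'all' and tolerance != 'all' and len(ref) > 1 and len(tol) > 1:
        # Two explicit tuples of equal length are paired element-wise.
        return sorted(zip(ref, tol))
    # Otherwise every combination is affected, excluding self-relations.
    return sorted((r, t) for r in ref for t in tol if r != t)

print(affected_relations('all', [1, 2], 4))
print(affected_relations([0, 1, 2], [1, 2, 3], 4))
```

For example, ReferenceComponent = 0 and ToleranceComponent = 1 yields the single relation (0,1), as in the table.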
Parameter
Possible Predecessors
train_model_components
Possible Successors
create_trained_component_model
Module
Matching
To influence the search for the initial components, the parameters MinScore, SearchRowTol,
SearchColumnTol, SearchAngleTol, and TrainingEmphasis can be set. The parameter MinScore
determines what score a potential match must at least have to be regarded as an instance of the initial component
in the training image. The larger MinScore is chosen, the faster the training is. If the initial components can
be expected never to be occluded in the training images, MinScore may be set as high as 0.8 or even 0.9 (see
find_shape_model).
By default, the components are searched only at points in which the component lies completely within the respec-
tive training image. This means that a component will not be found if it extends beyond the borders of the image,
even if it would achieve a score greater than MinScore. This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause components that extend beyond the image border
to be found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as
being occluded, i.e., they lower the score. It should be noted that the runtime of the training will increase in this
mode.
When dealing with a high number of initial components and many training images, the training may take a long
time (up to several minutes). In order to speed up the training it is possible to restrict the search space for the single
initial components in the training images. For this, the poses of the initial components in the model image are used
as reference pose. The parameters SearchRowTol and SearchColumnTol specify the position tolerance
region relative to the reference position in which the search is performed. Assume, for example, that the position of
an initial component in the model image is (100,200) and SearchRowTol is set to 20 and SearchColumnTol
is set to 10. Then, this initial component is searched in the training images only within the axis-aligned rectangle
that is determined by the upper left corner (80,190) and the lower right corner (120,210). The same holds for
the orientation angle range, which can be restricted by specifying the angle tolerance SearchAngleTol to
the angle range of [-SearchAngleTol,+SearchAngleTol]. Thus, it is possible to considerably reduce the
computational effort during the training by an adequate acquisition of the training images. If one of the three
parameters is set to -1, no restriction of the search space is applied in the corresponding dimension.
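The restriction of the search space can be sketched in plain Python, reproducing the worked example from the text (the helper search_rectangle is hypothetical):

```python
def search_rectangle(row_ref, col_ref, search_row_tol, search_col_tol):
    """Axis-aligned search rectangle around the reference position of an
    initial component, returned as (row1, col1, row2, col2). A tolerance
    of -1 would mean no restriction in that dimension (not handled here)."""
    return (row_ref - search_row_tol, col_ref - search_col_tol,
            row_ref + search_row_tol, col_ref + search_col_tol)

# Example from the text: position (100,200), SearchRowTol = 20, SearchColumnTol = 10.
print(search_rectangle(100, 200, 20, 10))  # upper left (80,190), lower right (120,210)
```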
The input parameters ContrastLow, ContrastHigh, MinSize, MinScore, SearchRowTol,
SearchColumnTol, and SearchAngleTol must either contain one element, in which case the parameter is
used for all initial components, or must contain the same number of elements as the initial components contained
in InitialComponents, in which case each parameter element refers to the corresponding initial component
in InitialComponents.
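The tuple-length rule above can be sketched in plain Python (the helper broadcast_param is hypothetical):

```python
def broadcast_param(values, num_components):
    """Expand a parameter tuple to one value per initial component:
    a single element applies to all components; otherwise the tuple
    length must match the number of initial components."""
    if len(values) == 1:
        return list(values) * num_components
    if len(values) == num_components:
        return list(values)
    raise ValueError("parameter tuple has invalid length")

print(broadcast_param(['auto'], 3))
print(broadcast_param([10, 20, 30], 3))
```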
The parameter TrainingEmphasis offers another possibility to influence the computation time of the training
and to simultaneously affect its robustness. If TrainingEmphasis is set to ’speed’, on the one hand the training
is comparatively fast, on the other hand it may happen in some cases that some initial components are not found in
the training images or are found at a wrong pose. Consequently, this would lead to an incorrect computation of the
rigid model components and their relations. The poses of the found initial components in the individual training
images can be examined by using get_training_components. If erroneous matches occur the training
should be restarted with TrainingEmphasis set to ’reliability’. This results in a higher robustness at the cost
of a longer computation time.
Furthermore, during the pose determination of the initial components ambiguities may occur if the initial com-
ponents are rotationally symmetric or if several initial components are identical or at least similar to each other.
To solve the ambiguities, the most probable pose is calculated for each initial component in each training im-
age. For this, the individual ambiguous poses are evaluated. The pose of an initial component receives a good
evaluation if the relative pose of the initial component with respect to the other initial components is similar to
the corresponding relative pose in the model image. The method to evaluate this similarity can be chosen with
AmbiguityCriterion. In almost all cases the best results are obtained with ’rigidity’, which assumes the
rigidity of the compound object. The more the rigidity of the compound object is violated by the pose of the initial
component, the worse its evaluation is. In the case of ’distance’, only the distance between the initial components
is considered during the evaluation. Hence, the pose of the initial component receives a good evaluation if its distances to the other initial components are similar to the corresponding distances in the model image. Accordingly,
when choosing ’orientation’, only the relative orientation is considered during the evaluation. Finally, the simulta-
neous consideration of distance and orientation can be achieved by choosing ’distance_orientation’. In contrast to
’rigidity’, the relative pose of the initial components is not considered when using ’distance_orientation’.
The process of solving the ambiguities can be further influenced by the parameter MaxContourOverlap. This
parameter describes the extent by which the contours of two initial component matches may overlap each other.
Let the letters ’I’ and ’T’, for example, be two initial components that should be searched in a training image
that shows the string ’IT’. Then, the initial component ’T’ should be found at its correct pose. In contrast, the
initial component ’I’ will be found at its correct pose (’I’) but also at the pose of the ’T’ because of the similarity of the two components. To discard the wrong match of the initial component ’I’, an appropriate value for
MaxContourOverlap can be chosen: If overlapping matches should be tolerated, MaxContourOverlap
should be set to 1. If overlapping matches should be completely avoided, MaxContourOverlap should be set
to 0. By choosing a value between 0 and 1, the maximum percentage of overlapping contour pixels can be adjusted.
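The overlap test can be modeled in plain Python as a fraction of shared contour pixels (an illustrative simplification; HALCON's actual overlap computation is not specified here, and the data below is invented):

```python
def overlap_fraction(contour_a, contour_b):
    """Fraction of contour pixels of match A that coincide with pixels of
    match B. Contours are modeled as sets of (row, col) pixels."""
    return len(contour_a & contour_b) / len(contour_a)

# Two hypothetical matches whose contours share half of A's pixels.
a = {(0, 0), (0, 1), (0, 2), (0, 3)}
b = {(0, 2), (0, 3), (1, 0), (1, 1)}
max_contour_overlap = 0.25
discard = overlap_fraction(a, b) > max_contour_overlap

print(overlap_fraction(a, b), discard)
```

With MaxContourOverlap = 0.25, the overlapping match in this sketch would be discarded; with MaxContourOverlap = 1 it would be tolerated.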
The decision which initial components can be clustered to rigid model components is made based on the poses
of the initial components in the model image and in the training images. Two initial components are merged
if they do not show any relative movement over all images. If, for example, the training images showed the above-mentioned switch in the same state as the model image, the algorithm would merge the respective initial components because it assumes that the entire switch is one rigid model component. The extent
by which initial components are merged can be influenced with the parameter ClusterThreshold. This cluster
threshold is based on the probability that two initial components belong to the same rigid model component. Thus,
ClusterThreshold describes the minimum probability which two initial components must have in order to be
merged. Since the threshold is based on a probability value, it must lie in the interval between 0 and 1. The greater
the threshold is chosen, the smaller the number of initial components that are merged. If a threshold of 0 is chosen,
all initial components are merged into one rigid component, while for a threshold of 1 no merging is performed
and each initial component is adopted as one rigid model component.
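The effect of ClusterThreshold can be sketched in plain Python with a simple union-find clustering over hypothetical pairwise merge probabilities (illustrative only; HALCON's probability model is not specified here):

```python
def cluster_components(merge_probability, threshold):
    """Cluster initial components: two components end up in the same rigid
    model component if a chain of pairwise merge probabilities >= threshold
    connects them. Illustrative sketch, not HALCON's implementation."""
    n = len(merge_probability)
    parent = list(range(n))

    def find(i):
        # Follow parent links to the cluster representative (with compression).
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if merge_probability[i][j] >= threshold:
                parent[find(i)] = find(j)

    # Group component indices by their cluster representative.
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())

probs = [[1.0, 0.9, 0.2],
         [0.9, 1.0, 0.3],
         [0.2, 0.3, 1.0]]
print(cluster_components(probs, 0.0))  # threshold 0: all components merged
print(cluster_components(probs, 0.5))  # only components 0 and 1 merge
```

Raising the threshold toward 1 yields more, smaller rigid components, matching the behavior described above.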
The final rigid model components are returned in ModelComponents. Later, the index of a component region
in ModelComponents is used to denote the model component. The poses of the components in the training
images can be examined by using get_training_components.
After the determination of the model components their relative movements are analyzed by determining the move-
ment of one component with respect to a second component for each pair of components. For this, the components
are referred to their reference points. The reference point of a component is the center of gravity of its contour
region, which is returned in ModelComponents. It can be calculated by calling area_center. Finally, the
relative movement is represented by the smallest enclosing rectangle of arbitrary orientation of the reference point
movement and by the smallest enclosing angle interval of the relative orientation of the second component over all
images. The determined relations can be inspected by using get_component_relations.
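The reference point computed by area_center is simply the mean of the region's pixel coordinates. A minimal NumPy sketch (illustrative, not HALCON code):

```python
import numpy as np

def area_center(region_rows, region_cols):
    """Area and center of gravity of a region given by its pixel coordinates,
    analogous to HALCON's area_center: the reference point of a component is
    the mean row/column of its contour region."""
    rows = np.asarray(region_rows, dtype=float)
    cols = np.asarray(region_cols, dtype=float)
    return rows.size, rows.mean(), cols.mean()

# a 2x3 block of pixels with upper-left corner at (10, 20)
rr, cc = np.meshgrid([10, 11], [20, 21, 22], indexing="ij")
area, row, col = area_center(rr.ravel(), cc.ravel())
print(area, row, col)  # 6 10.5 21.0
```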
Parameter
. ModelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image from which the shape models of the initial components should be created.
. InitialComponents (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Contour regions or enclosing regions of the initial components.
. TrainingImages (input_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Training images that are used for training the model components.
. ModelComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Contour regions of rigid model components.
. ContrastLow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Lower hysteresis threshold for the contrast of the initial components in the image.
Default Value : ’auto’
Suggested values : ContrastLow ∈ {’auto’, 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : ContrastLow > 0
. ContrastHigh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Upper hysteresis threshold for the contrast of the initial components in the image.
Default Value : ’auto’
Suggested values : ContrastHigh ∈ {’auto’, 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : (ContrastHigh > 0) ∧ (ContrastHigh ≥ ContrastLow)
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Minimum size of connected contour regions.
Default Value : ’auto’
Suggested values : MinSize ∈ {’auto’, 0, 5, 10, 20, 30, 40}
Restriction : MinSize ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Minimum score of the instances of the initial components to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScore) ∧ (MinScore ≤ 1)
HALCON 8.0.2
582 CHAPTER 9. MATCHING
Result
If the parameter values are correct, the operator train_model_components returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
train_model_components is processed completely exclusively without parallelization.
Possible Predecessors
gen_initial_components
Possible Successors
inspect_clustered_components, cluster_model_components,
modify_component_relations, write_training_components,
get_training_components, get_component_relations,
create_trained_component_model, clear_training_components,
clear_all_training_components
See also
create_shape_model, find_shape_model
Module
Matching
9.2 Correlation-Based
clear_all_ncc_models ( : : : )
clear_ncc_model ( : : ModelID : )
Parallelization Information
clear_ncc_model is processed completely exclusively without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, write_ncc_model
See also
clear_all_ncc_models
Module
Matching
If Metric = ’ignore_global_polarity’, the object is found in the image even if the contrast reverses globally. In the
above example, the object is hence also found if it is darker than the background. The runtime of find_ncc_model
will increase slightly in this case.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_ncc_model_origin.
The operator find_ncc_model finds the best NumMatches instances of the NCC model ModelID in
the input image Image. The model must have been created previously by calling create_ncc_model or
read_ncc_model.
The position and rotation of the found instances of the model are returned in Row, Column, and Angle. The
coordinates Row and Column are the coordinates of the origin of the NCC model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the NCC model with
create_ncc_model. A different origin can be set with set_ncc_model_origin. Additionally, the score
of each found instance is returned in Score. The score is the normalized cross correlation of the template t(r, c)
and the image i(r, c):
\[
ncc(r,c) = \frac{1}{n} \sum_{(u,v) \in R} \frac{(i(r+u,c+v) - m_i(r,c)) \, (t(u,v) - m_t)}{\sqrt{s_t^2 \cdot s_i^2(r,c)}}
\]
Here, $n$ denotes the number of points in the template, $R$ denotes the domain (ROI) of the template, $m_t$ is the mean
gray value of the template
\[
m_t = \frac{1}{n} \sum_{(u,v) \in R} t(u,v) ,
\]
$s_t^2$ is the variance of the gray values of the template
\[
s_t^2 = \frac{1}{n} \sum_{(u,v) \in R} (t(u,v) - m_t)^2 ,
\]
$m_i(r,c)$ is the mean gray value of the image at position $(r,c)$ over all points of the template (i.e., the template
points are shifted by $(r,c)$)
\[
m_i(r,c) = \frac{1}{n} \sum_{(u,v) \in R} i(r+u, c+v) ,
\]
and $s_i^2(r,c)$ is the variance of the gray values of the image at position $(r,c)$ over all points of the template
\[
s_i^2(r,c) = \frac{1}{n} \sum_{(u,v) \in R} (i(r+u, c+v) - m_i(r,c))^2 .
\]
The NCC measures how well the template and image correspond at a particular point (r, c). It assumes values
between −1 and 1. The larger the absolute value of the correlation, the larger the degree of correspondence
between the template and image. A value of 1 means that the gray values in the image are a linear transformation
of the gray values in the template:
i(r + u, c + v) = at(u, v) + b
where a > 0. Similarly, a value of −1 means that the gray values in the image are a linear transformation of the
gray values in the template with a < 0. Hence, in this case the template occurs with a reversed polarity in the
image. Because of the above property, the NCC is invariant to linear illumination changes.
The NCC as defined above is used if the NCC model has been created with Metric = ’use_polarity’. If the model
has been created with Metric = ’ignore_global_polarity’, the absolute value of ncc(r, c) is used as the score.
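For illustration, the score at a single position can be computed directly from the definition above. A minimal NumPy sketch (not the HALCON implementation, which is heavily optimized):

```python
import numpy as np

def ncc_score(image, template, r, c):
    """Normalized cross correlation of template and image at position (r, c),
    following the NCC definition above. (r, c) is the upper-left anchor of
    the template; the template domain R is its full rectangle here."""
    h, w = template.shape
    t = template.astype(float)
    i = image[r:r + h, c:c + w].astype(float)
    mt, mi = t.mean(), i.mean()
    st2, si2 = ((t - mt) ** 2).mean(), ((i - mi) ** 2).mean()
    if st2 == 0 or si2 == 0:             # constant patch: NCC undefined
        return 0.0
    return float(((i - mi) * (t - mt)).mean() / np.sqrt(st2 * si2))

img = np.arange(36, dtype=float).reshape(6, 6)
tpl = 2.0 * img[1:4, 2:5] + 7.0          # linear transform of an image patch
print(round(ncc_score(img, tpl, 1, 2), 6))   # 1.0 (a > 0 linear transform)
print(round(ncc_score(img, -tpl, 1, 2), 6))  # -1.0 (reversed polarity)
```

For Metric = ’ignore_global_polarity’, the absolute value of this score would be used, so both cases above yield a score of 1.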
It should be noted that the NCC is very sensitive to occlusion and clutter as well as to nonlinear illumination
changes in the image. If a model should be found in the presence of occlusion, clutter, or nonlinear illumination
changes the search should be performed using the shape-based matching (see, e.g., create_shape_model).
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the NCC model with
create_ncc_model. A different origin set with set_ncc_model_origin is not taken into account here.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below).
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_ncc_model. In particular, this means that the angle ranges of the model and the search must truly
overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all angles in the re-
mainder of the paragraph are given in degrees, whereas they have to be specified in radians in find_ncc_model.
Hence, if the model, for example, was created with AngleStart = −20◦ and AngleExtent = 40◦ and the
angle search space in find_ncc_model is, for example, set to AngleStart = 350◦ and AngleExtent =
20◦ , the model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ .
To find the model, in this example it would be necessary to select AngleStart = −10◦ .
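The angle example can be checked numerically. A small sketch (plain Python; angles in radians as find_ncc_model expects, and the interval intersection is deliberately not computed modulo 2π, mirroring the behavior described above):

```python
import math

def clipped_angle_range(model_start, model_extent, search_start, search_extent):
    """Clip the search angle range to the model's angle range without any
    modulo-2*pi adaptation. Returns (start, extent) of the usable range,
    or None if the ranges do not overlap."""
    lo = max(model_start, search_start)
    hi = min(model_start + model_extent, search_start + search_extent)
    if hi <= lo:
        return None
    return lo, hi - lo

deg = math.radians
# model: [-20 deg, +20 deg]; search: [350 deg, 370 deg] -> no overlap
print(clipped_angle_range(deg(-20), deg(40), deg(350), deg(20)))  # None
# search: [-10 deg, +10 deg] -> fully inside the model range
start, extent = clipped_angle_range(deg(-20), deg(40), deg(-10), deg(20))
print(round(math.degrees(start)), round(math.degrees(extent)))    # -10 20
```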
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rotations
are found in the image. If the model has repeating structures it may happen that multiple instances with identical
rotations are found at similar positions in the image. The parameter MaxOverlap determines by what fraction
(i.e., a number between 0 and 1) two instances may at most overlap in order to consider them as different instances,
and hence to be returned separately. If two instances overlap each other by more than MaxOverlap only the
best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary
orientation (see smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances
may not overlap at all, while for MaxOverlap = 1 all instances are returned.
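The MaxOverlap filtering can be sketched as follows. For simplicity, the sketch uses axis-aligned boxes rather than the arbitrarily oriented enclosing rectangles of smallest_rectangle2 that HALCON actually uses:

```python
def overlap_fraction(box_a, box_b):
    """Overlap of two axis-aligned boxes (r1, c1, r2, c2) as a fraction of
    the smaller box's area. Simplification for illustration only;
    find_ncc_model uses arbitrarily oriented enclosing rectangles."""
    r1 = max(box_a[0], box_b[0]); c1 = max(box_a[1], box_b[1])
    r2 = min(box_a[2], box_b[2]); c2 = min(box_a[3], box_b[3])
    inter = max(0, r2 - r1) * max(0, c2 - c1)
    area = min((box_a[2] - box_a[0]) * (box_a[3] - box_a[1]),
               (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]))
    return inter / area

def filter_instances(instances, max_overlap):
    """Keep only the best of any two instances overlapping by more than
    max_overlap; instances are (score, box) pairs."""
    kept = []
    for score, box in sorted(instances, reverse=True):   # best scores first
        if all(overlap_fraction(box, kb) <= max_overlap for _, kb in kept):
            kept.append((score, box))
    return kept

cands = [(0.9, (0, 0, 10, 10)), (0.8, (5, 0, 15, 10)), (0.7, (20, 20, 30, 30))]
print(filter_instances(cands, 0.3))  # the 0.8 instance overlaps 50% -> dropped
```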
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’false’, the model’s pose is only determined with pixel accuracy and the angle resolution
that was specified with create_ncc_model. If SubPixel is set to ’true’, the position as well as the rotation
are determined with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This
mode costs almost no computation time and achieves a high accuracy. Hence, SubPixel should usually be set to
’true’.
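The manual only states that the pose is interpolated from the score function. One common way to do such an interpolation (an assumption here, not necessarily HALCON's exact method) is a parabola fit through the score at the pixel-accurate peak and its two neighbors:

```python
def subpixel_peak(s_prev, s_peak, s_next):
    """Subpixel offset of a 1-D score maximum obtained by fitting a parabola
    through three neighboring scores. Returns an offset in (-0.5, 0.5)
    relative to the pixel-accurate peak. Note: a generic interpolation
    scheme; the manual does not specify HALCON's exact formula."""
    denom = s_prev - 2.0 * s_peak + s_next
    if denom >= 0:          # not a strict maximum
        return 0.0
    return 0.5 * (s_prev - s_next) / denom

# scores sampled at integer positions around a true maximum at x = 10.3
f = lambda x: 1.0 - 0.1 * (x - 10.3) ** 2
print(round(10 + subpixel_peak(f(9), f(10), f(11)), 6))  # 10.3
```

Applied independently to the row, column, and angle dimensions of the score function, this kind of interpolation costs almost no computation time, which matches the statement above.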
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the num-
ber of levels is clipped to the range given when the NCC model was created with create_ncc_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_ncc_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. If the lowest pyramid level to use is chosen too large, it may happen that
the desired accuracy cannot be achieved, or that wrong instances of the model are found because the model is not
specific enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model.
In this case, the lowest pyramid level to use must be set to a smaller value.
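The coarse-to-fine tracking described above can be sketched schematically (plain Python with a simple sum-of-squared-differences score instead of the NCC, for brevity; not HALCON code):

```python
import numpy as np

def downsample(img):
    """One pyramid level up: halve the resolution by 2x2 mean pooling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    im = img[:h, :w]
    return (im[0::2, 0::2] + im[0::2, 1::2] + im[1::2, 0::2] + im[1::2, 1::2]) / 4.0

def ssd(img, tpl, r, c):
    """Sum of squared differences of the template placed at (r, c)."""
    h, w = tpl.shape
    patch = img[r:r + h, c:c + w]
    if patch.shape != tpl.shape:
        return np.inf
    return float(((patch - tpl) ** 2).sum())

def best_in(img, tpl, positions):
    return min(positions, key=lambda rc: ssd(img, tpl, *rc))

def coarse_to_fine(image, template, num_levels, lowest_level=1):
    """Exhaustive search on the coarsest pyramid level only, then track the
    match down to `lowest_level` (1 = full resolution), refining in a small
    neighborhood on each finer level. Stopping above level 1 saves time at
    the cost of accuracy, as described above."""
    imgs, tpls = [image], [template]
    for _ in range(num_levels - 1):
        imgs.append(downsample(imgs[-1]))
        tpls.append(downsample(tpls[-1]))
    top, ttop = imgs[-1], tpls[-1]
    h, w = ttop.shape
    r, c = best_in(top, ttop, [(i, j)
                               for i in range(top.shape[0] - h + 1)
                               for j in range(top.shape[1] - w + 1)])
    for lvl in range(num_levels - 2, lowest_level - 2, -1):
        r, c = 2 * r, 2 * c                         # project to the finer level
        cand = [(r + dr, c + dc) for dr in range(-2, 3) for dc in range(-2, 3)
                if r + dr >= 0 and c + dc >= 0]
        r, c = best_in(imgs[lvl], tpls[lvl], cand)
    return r, c

img = np.zeros((32, 32))
patch = np.arange(64, dtype=float).reshape(8, 8) + 10.0
img[12:20, 8:16] = patch
print(coarse_to_fine(img, patch.copy(), num_levels=2))  # (12, 8)
```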
Result
If the parameter values are correct, the operator find_ncc_model returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_ncc_model is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, set_ncc_model_origin
Possible Successors
clear_ncc_model
Alternatives
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models,
best_match_rot_mg
Module
Matching
The operator set_ncc_model_origin sets the origin (reference point) of the NCC model ModelID to a new
value. The origin is specified relative to the center of gravity of the domain (region) of the image that was used to
create the NCC model with create_ncc_model. Hence, an origin of (0,0) means that the center of gravity of
the domain of the model image is used as the origin. An origin of (-20,-40) means that the origin lies to the upper
left of the center of gravity.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; integer
Handle of the model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real
Row coordinate of the origin of the NCC model.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real
Column coordinate of the origin of the NCC model.
Result
If the handle of the model is valid, the operator set_ncc_model_origin returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
set_ncc_model_origin is processed completely exclusively without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model
Possible Successors
find_ncc_model, get_ncc_model_origin
See also
area_center
Module
Matching
9.3 Gray-Value-Based
The operator adapt_template serves to adapt a template that has been created by create_template
to the size of an image. It can be called before the template is used with images of a different size, or if the
image used to create the template had a different size. If it is not called explicitly, it is called internally each
time a different image size is used. The content of the image is irrelevant here; only the
width of Image is considered.
The runtime of the operator depends on the size of the domain of Image. Therefore, it is important to restrict the
domain as far as possible, i.e., to apply the operator only in a confined region of interest. The parameter
MaxError determines the maximum error that the searched position is allowed to have. The lower this
value, the faster the operator runs.
Row and Column return the position of the best match, and Error indicates the average difference of the
gray values. If no position with an error below MaxError is found, the position (0, 0) and an Error of
255 are returned. In this case, MaxError has to be increased.
The maximum error of the position (without noise) is 0.1 pixel. The average error is 0.03 pixel.
The position of the found match is returned in Row and Column. The corresponding error is given
in Error. If no point with an error below MaxError is found, a value of 255 for Error and 0 for Row and Column is
returned. If the desired object is missed (no object found or a wrong position), MaxError has to be set higher or
WhichLevels lower. Also check whether the illumination has changed (see set_offset_template).
The maximum error of the position (without noise) is 0.1 pixel. The average error is 0.03 pixel.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximal average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Exactness in subpixels in case of ’true’.
Default Value : ’false’
List of values : SubPixel ∈ {’true’, ’false’}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of the used resolution levels.
Default Value : 4
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}
. WhichLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer / string
Resolution level up to which the method “best match” is used.
Default Value : 2
Suggested values : WhichLevels ∈ {’all’, ’original’, 0, 1, 2, 3, 4, 5, 6}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Average divergence of the grayvalues in the best match.
Result
If the parameter values are correct, the operator best_match_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
best_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain, set_reference_template, set_offset_template
Alternatives
fast_match, fast_match_mg, best_match, best_match_pre_mg, best_match_rot,
best_match_rot_mg, exhaustive_match, exhaustive_match_mg
Module
Matching
best_match_pre_mg applies gray value matching using an image pyramid. It works
analogously to best_match_mg, but it makes use of a precalculated pyramid, which has to be generated before-
hand using gen_gauss_pyramid. This reduces the runtime if more than one match has to be performed or if the
pyramid is also needed for other purposes. The pyramid has to be generated with the zooming factor 0.5 and the mode ’constant’.
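The required pyramid can be sketched as follows (a NumPy approximation of what gen_gauss_pyramid with zooming factor 0.5 provides as input for best_match_pre_mg; HALCON's actual filter kernel may differ):

```python
import numpy as np

def gauss_pyramid(img, num_levels):
    """Approximate a Gaussian pyramid with zooming factor 0.5: smooth with a
    small binomial filter, then take every second pixel. Only a sketch of
    gen_gauss_pyramid(..., 0.5) output; the exact kernel is an assumption."""
    levels = [img.astype(float)]
    for _ in range(num_levels - 1):
        a = levels[-1]
        # separable 1-2-1 binomial smoothing (edges replicated)
        p = np.pad(a, 1, mode="edge")
        sm = (p[:-2, 1:-1] + 2 * a + p[2:, 1:-1]) / 4.0
        p = np.pad(sm, 1, mode="edge")
        sm = (p[1:-1, :-2] + 2 * sm + p[1:-1, 2:]) / 4.0
        levels.append(sm[0::2, 0::2])   # zooming factor 0.5
    return levels

pyr = gauss_pyramid(np.ones((16, 16)), 3)
print([lvl.shape for lvl in pyr])  # [(16, 16), (8, 8), (4, 4)]
```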
Parameter
. ImagePyramid (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image-array ; Hobject : byte
Image pyramid inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximal average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Exactness in subpixels in case of ’true’.
Default Value : ’false’
List of values : SubPixel ∈ {’true’, ’false’}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of the used resolution levels.
Default Value : 3
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}
. WhichLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer / string
Resolution level up to which the method “best match” is used.
Default Value : ’original’
Suggested values : WhichLevels ∈ {’all’, ’original’, 0, 1, 2, 3, 4, 5, 6}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Average divergence of the grayvalues in the best match.
Result
If the parameter values are correct, the operator best_match_pre_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
best_match_pre_mg is reentrant and processed without parallelization.
Possible Predecessors
gen_gauss_pyramid, create_template, read_template, adapt_template, draw_region,
draw_rectangle1, reduce_domain, set_reference_template
Alternatives
fast_match, fast_match_mg, exhaustive_match, exhaustive_match_mg
Module
Matching
The parameters AngleStart and AngleExtend define the maximum rotation of the pattern: AngleStart specifies the maximum counter-
clockwise rotation and AngleExtend the maximum clockwise rotation relative to this angle. Both values have
to be smaller than or equal to the values used for the creation of the pattern (see create_template_rot). In addition
to the results of best_match, best_match_rot returns the rotation angle of the pattern in Angle (in radians). The accuracy
of this angle depends on the parameter AngleStep of create_template_rot. In the case of SubPixel =
’true’, the position and the angle are calculated with subpixel accuracy.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Smallest Rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtend (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Maximum positive Extension of AngleStart.
Default Value : 0.79
Suggested values : AngleExtend ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtend > 0
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Subpixel accuracy in case of ’true’.
Default Value : ’false’
List of values : SubPixel ∈ {’true’, ’false’}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real
Column position of the best match.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Rotation angle of pattern.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Average divergence of the grayvalues of the best match.
Result
If the parameter values are correct, the operator best_match_rot returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
best_match_rot is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template_rot, read_template, set_offset_template,
set_reference_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain
Alternatives
best_match_rot_mg
See also
best_match, best_match_mg
Module
Matching
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
best_match_rot_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template_rot, set_reference_template, set_offset_template,
adapt_template, draw_region, draw_rectangle1, reduce_domain
Alternatives
best_match_rot, best_match_mg
See also
fast_match
Module
Matching
clear_all_templates ( : : : )
clear_template ( : : TemplateID : )
Possible Predecessors
create_template, create_template_rot, read_template, write_template
See also
clear_all_templates
Module
Matching
Parameter
. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Input image whose domain will be processed for the pattern matching.
. FirstError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Not yet in use.
Default Value : 255
List of values : FirstError ∈ {255}
. NumLevel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximal number of pyramid levels.
Default Value : 4
List of values : NumLevel ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Optimize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Kind of optimizing.
Default Value : ’sort’
List of values : Optimize ∈ {’none’, ’sort’}
. GrayValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Kind of grayvalues.
Default Value : ’original’
List of values : GrayValues ∈ {’original’, ’normalized’, ’gradient’, ’sobel’}
. TemplateID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
Result
If the parameters are valid, the operator create_template returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
create_template is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
adapt_template, set_reference_template, clear_template, write_template,
set_offset_template, best_match, best_match_mg, fast_match, fast_match_mg
Alternatives
create_template_rot, read_template
Module
Matching
\[
M = \frac{A \cdot 12 \cdot \text{AngleExtend}}{\text{AngleStep}}
\]
After the transformation, a number (TemplateID) is assigned to the template for being used in the further
process.
A description of the other parameters can be found at the operator create_template.
Attention
Be aware that, depending on the resolution, a large number of precalculated patterns has to be created, which
might require a large amount of memory.
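As the formula above indicates, the number of precalculated patterns grows linearly with the angle range and inversely with the angle step. A simplified count, for illustration (one template per AngleStep across the range; the memory formula additionally involves the template area A):

```python
import math

def num_rotated_templates(angle_extend, angle_step):
    """Simplified count of precomputed rotated patterns: one template per
    AngleStep across the angle range (illustration only; memory use scales
    with this count times the template area)."""
    return int(round(angle_extend / angle_step)) + 1

# a 45 degree range sampled every degree needs 46 templates
print(num_rotated_templates(math.radians(45), math.radians(1)))  # 46
```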
The difference between fast_match and exhaustive_match is that the matching for one position is
stopped as soon as the error becomes too high. This reduces the runtime, but correct matches might be missed. The runtime of
the operator depends mainly on the size of the domain of Image. Therefore, it is important to restrict the domain
as far as possible, i.e., to apply the operator only in a confined region of interest. The parameter MaxError
determines the maximum error that the searched position is allowed to show. The lower this value, the faster
the operator runs.
All points whose matching error is smaller than MaxError are returned in the output region Matches.
This region can be used for further processing, for example by applying connection and best_match to find all
matching objects. If no point has a matching error below MaxError, the empty region (i.e., no points) is returned.
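The early-termination idea behind fast_match can be sketched as follows (plain NumPy, not HALCON code; the abort criterion shown here is an illustrative choice):

```python
import numpy as np

def match_with_early_abort(image, template, r, c, max_error):
    """Test the template at (r, c): accumulate absolute gray-value
    differences row by row and abort as soon as the total exceeds the
    budget max_error * n, i.e., as soon as the average error can no longer
    stay below max_error. Illustrative sketch of fast_match's
    early-termination strategy."""
    h, w = template.shape
    patch = image[r:r + h, c:c + w]
    if patch.shape != template.shape:
        return False
    budget = max_error * h * w      # total error allowed for the whole template
    total = 0.0
    for row_p, row_t in zip(patch, template):
        total += float(np.abs(row_p.astype(float) - row_t.astype(float)).sum())
        if total > budget:          # error already too high: abort this position
            return False
    return True

img = np.zeros((20, 20))
img[5:9, 5:9] = 100.0
tpl = np.full((4, 4), 100.0)
print(match_with_early_abort(img, tpl, 5, 5, 20))   # True  (exact match)
print(match_with_early_abort(img, tpl, 0, 0, 20))   # False (aborts early)
```

The lower MaxError, the earlier the abort triggers at wrong positions, which is why a lower value makes the operator faster.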
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. Matches (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
All points whose error lies below a certain threshold.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximal average difference of the grayvalues.
Default Value : 20
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 1
Result
If the parameter values are correct, the operator fast_match returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
fast_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain
Possible Successors
connection, best_match
Alternatives
best_match, best_match_mg, fast_match_mg, exhaustive_match, exhaustive_match_mg
Module
Matching
Like the operator fast_match, the operator fast_match_mg performs a matching of the template of
TemplateID and Image. In contrast to fast_match, however, the search for good matches starts in scaled-
down images (a pyramid). The number of levels of the pyramid is determined by NumLevel. A value of 1
indicates that no pyramid is used; in this case the operator fast_match_mg is equivalent to the operator
fast_match. A value of 2 triggers the search in an image with half the frame size. The search for all points
in the scaled-down image showing a small enough error (smaller than MaxError) is then refined at the
corresponding positions in the original image (Image).
The runtime of the matching depends on the parameter MaxError: the larger the value, the longer the processing
time, because more points of the pattern have to be tested. If MaxError is too low, the pattern will not be found.
The value therefore has to be optimized for every application.
NumLevel indicates the number of levels of the pyramid, including the original image. Optionally, a second value
can be given. This value specifies the number (0..n) of the lowest level that is used for the matching. The region
found on this level is then zoomed to the size of the original level. This can be used to reduce the runtime in
cases where the accuracy does not have to be as high.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. Matches (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
All points which have an error below a certain threshold.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximal average difference of the gray values.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. NumLevel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Number of levels in the pyramid.
Default Value : 3
List of values : NumLevel ∈ {1, 2, 3, 4, 5, 6, 7, 8}
Result
If the parameter values are correct, the operator fast_match_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
fast_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain
Alternatives
best_match, best_match_mg, fast_match, exhaustive_match, exhaustive_match_mg
Module
Matching
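The effect of NumLevel, including the optional second value, might be used as in this hypothetical sketch (a template handle TemplateID from create_template and a search image are assumed to exist):

```
* Four pyramid levels; the optional second value 2 stops the
* refinement at level 2, trading accuracy for speed.
fast_match_mg (SearchImage, Matches, TemplateID, 30, [4,2])
* With NumLevel = 1 no pyramid is used; this call behaves
* like fast_match.
fast_match_mg (SearchImage, Matches2, TemplateID, 30, 1)
```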
HALCON 8.0.2
604 CHAPTER 9. MATCHING
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
. TemplateID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; integer
Template number.
Result
If the file name is valid, the operator read_template returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
read_template is processed completely exclusively without parallelization.
Possible Successors
adapt_template, set_reference_template, set_offset_template, best_match,
fast_match, best_match_rot
Module
Matching
set_reference_template allows a new reference position to be defined for a template. By default, after call-
ing create_template or create_template_rot, the center of gravity of the template is used. Using
set_reference_template, the reference position can be redefined. If the center of gravity is used as the
reference, the vector (0, 0) is returned after matching for a null translation of the pattern relative to the image.
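As a hypothetical sketch, the reference position might be moved from the center of gravity to the upper left corner of the template region (Region and TemplateID are assumed to exist from the template creation):

```
* Redefine the reference position as the upper left corner
* of the enclosing rectangle of the template region.
smallest_rectangle1 (Region, Row1, Column1, Row2, Column2)
set_reference_template (TemplateID, Row1, Column1)
```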
Parameter
9.4 Shape-Based
clear_all_shape_models ( : : : )
The operator clear_all_shape_models frees the memory of all shape models that were created by
create_shape_model, create_scaled_shape_model, or create_aniso_shape_model. Af-
ter calling clear_all_shape_models, no model can be used any longer.
Attention
clear_all_shape_models exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. clear_all_shape_models must not be used in any application.
Result
clear_all_shape_models always returns 2 (H_MSG_TRUE).
Parallelization Information
clear_all_shape_models is processed completely exclusively without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model, write_shape_model
Alternatives
clear_shape_model
Module
Matching
clear_shape_model ( : : ModelID : )
The model is generated using multiple image pyramid levels and is stored in memory. If a complete pregeneration
of the model is selected (see below), the model is generated at multiple rotations and anisotropic scales (i.e.,
independent scales in the row and column direction) on each level. The output parameter ModelID is a handle
for this model, which is used in subsequent calls to find_aniso_shape_model.
The number of pyramid levels is determined with the parameter NumLevels. It should be chosen as large as
possible because this significantly reduces the time necessary to find the object. On the other hand,
NumLevels must be chosen such that the model is still recognizable and contains a sufficient
number of points (at least four) on the highest pyramid level. This can be checked using the output of
inspect_shape_model. If not enough model points are generated, the number of pyramid levels is reduced
internally until enough model points are found on the highest pyramid level. If this procedure would lead to a
model with no pyramid levels, i.e., if the number of model points is already too small on the lowest pyramid level,
create_aniso_shape_model returns with an error message. If NumLevels is set to ’auto’ (or 0 for back-
wards compatibility), create_aniso_shape_model determines the number of pyramid levels automatically.
The automatically computed number of pyramid levels can be queried using get_shape_model_params. In
rare cases, it might happen that create_aniso_shape_model determines a value for the number of pyra-
mid levels that is too large or too small. If the number of pyramid levels is chosen too large, the model may not
be recognized in the image or it may be necessary to select very low parameters for MinScore or Greediness in
find_aniso_shape_model in order to find the model. If the number of pyramid levels is chosen too small,
the time required to find the model in find_aniso_shape_model may increase. In these cases, the number
of pyramid levels should be selected using the output of inspect_shape_model.
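The selection of NumLevels can be sketched as follows; the level count and contrast value are illustrative, and ModelImage is assumed to be the reduced model image:

```
* Generate the pyramid representation for 6 levels with contrast 30
* and inspect the model points that would be used on each level.
inspect_shape_model (ModelImage, ModelImages, ModelRegions, 6, 30)
* One region per usable pyramid level; the highest level should
* still contain at least four model points.
count_obj (ModelRegions, NumUsedLevels)
```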
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which
the model can occur in the image. Note that the model can only be found in this range of angles by
find_aniso_shape_model. The parameter AngleStep determines the step length within the selected
range of angles. Hence, if subpixel accuracy is not specified in find_aniso_shape_model, this param-
eter specifies the accuracy that is achievable for the angles in find_aniso_shape_model. AngleStep
should be chosen based on the size of the object. Smaller models do not have many different discrete rotations
in the image, and hence AngleStep should be chosen larger for smaller models. If AngleExtent is not an
integer multiple of AngleStep, AngleStep is modified accordingly.
The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of possible
anisotropic scales of the model in the row and column direction. A scale of 1 in both scale factors corresponds to
the original size of the model. The parameters ScaleRStep and ScaleCStep determine the step length within
the selected range of scales. Hence, if subpixel accuracy is not specified in find_aniso_shape_model,
these parameters specify the accuracy that is achievable for the scales in find_aniso_shape_model. Like
AngleStep, ScaleRStep and ScaleCStep should be chosen based on the size of the object. If the respective
range of scales is not an integer multiple of ScaleRStep and ScaleCStep, ScaleRStep and ScaleCStep
are modified accordingly.
Note that the transformations are treated internally such that the scalings are applied first, followed by the rotation.
Therefore, the model should usually be aligned such that it appears horizontally or vertically in the model image.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected
angle and scale range and stored in memory. The memory required to store the model is proportional to
the number of angle steps, the number of scale steps, and the number of points in the model. Hence, if
AngleStep, ScaleRStep, or ScaleCStep are too small or AngleExtent or the range of scales are
too big, it may happen that the model no longer fits into the (virtual) memory. In this case, AngleStep,
ScaleRStep, or ScaleCStep must be enlarged or AngleExtent or the range of scales must be re-
duced. In any case, it is desirable that the model completely fits into the main memory, because this avoids
paging by the operating system, and hence the time to find the object will be much smaller. Since angles
can be determined with subpixel resolution by find_aniso_shape_model, AngleStep ≥ 1° and
ScaleRStep, ScaleCStep ≥ 0.02 can be selected for models of a diameter smaller than about 200 pixels.
If AngleStep = ’auto’ or ScaleRStep, ScaleCStep = ’auto’ (or 0 for backwards compatibility in both
cases) is selected, create_aniso_shape_model automatically determines a suitable angle or scale step
length, respectively, based on the size of the model. The automatically computed angle and scale step lengths can
be queried using get_shape_model_params.
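A sketch of a fully automatic parameterization; the angle and scale ranges are hypothetical, and Template is assumed to be a reduced model image:

```
* Let create_aniso_shape_model choose the step lengths ('auto')
* for a rotation range of -10..+10 degrees and scales 0.9..1.1.
create_aniso_shape_model (Template, 'auto', rad(-10), rad(20), 'auto', 0.9, 1.1, 'auto', 0.9, 1.1, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
* Query the automatically determined values.
get_shape_model_params (ModelID, NumLevels, AngleStart, AngleExtent, AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric, MinContrast)
```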
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_aniso_shape_model. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of Optimization. If the number of points is reduced,
it may be necessary in find_aniso_shape_model to set the parameter Greediness to a smaller value,
e.g., 0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of
the search because in this case usually significantly more potential instances of the model must be examined. If
Optimization is set to ’auto’, create_aniso_shape_model automatically determines the reduction of
the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_aniso_shape_model
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
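The two ways of requesting a complete pregeneration can be sketched as follows (parameter values are illustrative):

```
* Per model: pass 'pregeneration' as the second value of Optimization.
create_aniso_shape_model (Template, 'auto', 0, rad(360), 'auto', 0.9, 1.1, 'auto', 0.9, 1.1, 'auto', ['auto','pregeneration'], 'use_polarity', 'auto', 'auto', ModelID)
* Globally: change the default for all subsequently created models.
set_system ('pregenerate_shape_models', 'true')
```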
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
three values must nevertheless be specified in Contrast. In this case, the first two values can simply be set
to identical values. The effect of this parameter can be checked in advance with inspect_shape_model.
If Contrast is set to ’auto’, create_aniso_shape_model determines the three above described values
automatically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’),
or the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If,
for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfactory. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_aniso_shape_model.
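The different forms of the Contrast tuple described above might be used as follows (threshold and size values are illustrative):

```
* Hysteresis thresholds 20/30 plus a minimum component size of 10 points.
create_aniso_shape_model (Template, 'auto', 0, rad(360), 'auto', 1.0, 1.0, 'auto', 1.0, 1.0, 'auto', 'auto', 'use_polarity', [20,30,10], 'auto', ModelID)
* Minimum size determined automatically, thresholds fixed manually.
create_aniso_shape_model (Template, 'auto', 0, rad(360), 'auto', 1.0, 1.0, 'auto', 1.0, 1.0, 'auto', 'auto', 'use_polarity', ['auto_min_size',20,30], 'auto', ModelID2)
```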
With MinContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by find_aniso_shape_model. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter Metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine MinContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image MinContrast should be set to 17. Obviously,
MinContrast must be smaller than Contrast. If the model should be recognized in very low contrast im-
ages, MinContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, MinContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
find_aniso_shape_model. If MinContrast is set to ’auto’, the minimum contrast is determined auto-
matically based on the noise in the model image. Consequently, an automatic determination only makes sense if
the image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If Metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime
of find_aniso_shape_model will increase slightly in this case. If Metric = ’ignore_local_polarity’, the
model is found even if the contrast changes locally. This mode can, for example, be useful if the object consists
of a part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the
runtime of find_aniso_shape_model increases significantly, it is usually better to create several models
that reflect the possible contrast variations of the object with create_aniso_shape_model, and to match
them simultaneously with find_aniso_shape_models. The above three metrics can only be applied to
single-channel images. If a multichannel image is used as the model image or as the search image only the first
channel will be used (and no error message will be returned). If Metric = ’ignore_color_polarity’, the model
is found even if the color contrast changes locally. This is, for example, the case if parts of the object can change
their color, e.g., from red to green. In particular, this mode is useful if it is not known in advance in which channels
the object is visible. In this mode, the runtime of find_aniso_shape_model can also increase significantly.
The metric ’ignore_color_polarity’ can be used for images with an arbitrary number of channels. If it is used for
single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that for Metric =
’ignore_color_polarity’ the number of channels in the model creation with create_aniso_shape_model
and in the search with find_aniso_shape_model can be different. This can, for example, be used to create
a model from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do
not need to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also
contain images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
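As a hypothetical sketch, the origin might be moved from the centroid to the upper left corner of the model region; set_shape_model_origin expects the new origin relative to the default origin (the center of gravity):

```
* Region is the domain (region) of the model image Template.
area_center (Region, Area, CenterRow, CenterColumn)
smallest_rectangle1 (Region, Row1, Column1, Row2, Column2)
* Pass the offset of the new origin relative to the centroid.
set_shape_model_origin (ModelID, Row1 - CenterRow, Column1 - CenterColumn)
```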
Parameter
Possible Successors
find_aniso_shape_model, find_aniso_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_shape_model, create_scaled_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching
no longer fits into the (virtual) memory. In this case, either AngleStep or ScaleStep must be enlarged or
AngleExtent or the range of scales must be reduced. In any case, it is desirable that the model completely fits
into the main memory, because this avoids paging by the operating system, and hence the time to find the object will
be much smaller. Since angles can be determined with subpixel resolution by find_scaled_shape_model,
AngleStep ≥ 1° and ScaleStep ≥ 0.02 can be selected for models of a diameter smaller than about 200
pixels. If AngleStep = ’auto’ or ScaleStep = ’auto’ (or 0 for backwards compatibility in both cases)
is selected, create_scaled_shape_model automatically determines a suitable angle or scale step length,
respectively, based on the size of the model. The automatically computed angle and scale step lengths can be
queried using get_shape_model_params.
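An end-to-end sketch for the scaled shape model; the ranges and search parameters (MinScore 0.7, Greediness 0.9, etc.) are illustrative only, and Template and SearchImage are assumed to exist:

```
* Create a model for rotations of -45..+45 degrees and scales 0.8..1.2.
create_scaled_shape_model (Template, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.2, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
* Search for the best match with least-squares pose refinement.
find_scaled_shape_model (SearchImage, ModelID, rad(-45), rad(90), 0.8, 1.2, 0.7, 1, 0.5, 'least_squares', 0, 0.9, Row, Column, Angle, Scale, Score)
```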
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_scaled_shape_model. Because of this, the recognition of the model might require slightly more
time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of Optimization. If the number of points is reduced,
it may be necessary in find_scaled_shape_model to set the parameter Greediness to a smaller value,
e.g., 0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of
the search because in this case usually significantly more potential instances of the model must be examined. If
Optimization is set to ’auto’, create_scaled_shape_model automatically determines the reduction
of the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_scaled_shape_model
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
three values must nevertheless be specified in Contrast. In this case, the first two values can simply be set to
identical values. The effect of this parameter can be checked in advance with inspect_shape_model. If
Contrast is set to ’auto’, create_scaled_shape_model determines the three above described values
automatically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’),
or the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If,
for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfactory. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_scaled_shape_model.
Possible Successors
find_scaled_shape_model, find_scaled_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_shape_model, create_aniso_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching
find_shape_model. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases, the
number of points is reduced according to the value of Optimization. If the number of points is reduced, it may
be necessary in find_shape_model to set the parameter Greediness to a smaller value, e.g., 0.7 or 0.8.
For small models, the reduction of the number of model points does not result in a speed-up of the search because
in this case usually significantly more potential instances of the model must be examined. If Optimization is
set to ’auto’, create_shape_model automatically determines the reduction of the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_shape_model typically
returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a completely
pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two modes. If
maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
three values must nevertheless be specified in Contrast. In this case, the first two values can simply be set to
identical values. The effect of this parameter can be checked in advance with inspect_shape_model. If
Contrast is set to ’auto’, create_shape_model determines the three above described values automati-
cally. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’), or the
minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not determined
automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If, for
example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfactory. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_shape_model.
The parameter MinContrast determines the minimum contrast the model must have in the recognition performed
by find_shape_model. In other words, this parameter separates the model from the noise in the image.
Therefore, a good choice is the range of gray value changes caused by the noise in the image. If, for example, the
gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If multichannel images
are used for the model and the search images, and if the parameter Metric is set to ’ignore_color_polarity’ (see
below), the noise in one channel must be multiplied by the square root of the number of channels to determine
MinContrast. If, for example, the gray values fluctuate within a range of 10 gray levels in a single channel
and the image is a three-channel image, MinContrast should be set to 17 (10 · √3 ≈ 17.3). Obviously, MinContrast must
be smaller than Contrast. If the model should be recognized in very low contrast images, MinContrast
must be set to a correspondingly small value. If the model should be recognized even if it is severely occluded,
MinContrast should be slightly larger than the range of gray value fluctuations created by noise in order to en-
sure that the position and rotation of the model are extracted robustly and accurately by find_shape_model. If
MinContrast is set to ’auto’, the minimum contrast is determined automatically based on the noise in the model
image. Consequently, an automatic determination only makes sense if the image noise during the recognition is
similar to the noise in the model image. Furthermore, in some cases it is advisable to increase the automatically
determined value in order to increase the robustness against occlusions (see above). The automatically computed
minimum contrast can be queried using get_shape_model_params.
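As a sketch (Template is assumed to be a reduced model image; all other values are placeholders), the automatically determined minimum contrast can be queried right after model creation:

    * Create the model with MinContrast = 'auto' and query the value that
    * was actually computed from the noise in the model image.
    create_shape_model (Template, 'auto', 0, rad(360), 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
    get_shape_model_params (ModelID, NumLevels, AngleStart, AngleExtent, AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric, MinContrast)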
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If Metric
= ’ignore_global_polarity’, the object is also found in the image if the contrast reverses globally. In the above
example, the object is hence also found if it is darker than the background. The runtime of find_shape_model
will increase slightly in this case. If Metric = ’ignore_local_polarity’, the model is found even if the contrast
changes locally. This mode can, for example, be useful if the object consists of a part with medium gray value,
within which either darker or brighter sub-objects lie. Since in this case the runtime of find_shape_model
increases significantly, it is usually better to create several models that reflect the possible contrast variations of
the object with create_shape_model, and to match them simultaneously with find_shape_models.
The above three metrics can only be applied to single-channel images. If a multichannel image is used as the
model image or as the search image, only the first channel will be used (and no error message will be returned).
If Metric = ’ignore_color_polarity’, the model is found even if the color contrast changes locally. This is,
for example, the case if parts of the object can change their color, e.g., from red to green. In particular, this
mode is useful if it is not known in advance in which channels the object is visible. In this mode, the runtime
of find_shape_model can also increase significantly. The metric ’ignore_color_polarity’ can be used for
images with an arbitrary number of channels. If it is used for single-channel images it has the same effect as
’ignore_local_polarity’. It should be noted that for Metric = ’ignore_color_polarity’ the number of channels
in the model creation with create_shape_model and in the search with find_shape_model can be
different. This can, for example, be used to create a model from a synthetically generated single-channel image.
Furthermore, it should be noted that the channels do not need to contain a spectral subdivision of the light (like
in an RGB image). The channels can, for example, also contain images of the same object that were obtained by
illuminating the object from different directions.
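For example, a model might be created from a synthetically generated single-channel image and then searched in a color image. A sketch, where the image variables and all parameter values are placeholders:

    * Model from a single-channel synthetic image; with this metric the
    * search image may have a different number of channels.
    create_shape_model (SyntheticImage, 'auto', 0, rad(360), 'auto', 'auto', 'ignore_color_polarity', 'auto', 'auto', ModelID)
    find_shape_model (ColorImage, ModelID, 0, rad(360), 0.7, 1, 0.5, 'interpolation', 0, 0.9, Row, Column, Angle, Score)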
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
Parameter
HALCON 8.0.2
618 CHAPTER 9. MATCHING
determine_shape_model_params is mainly useful to determine the above parameters before creating the
model, e.g., in an interactive system, which makes suggestions for these parameters to the user, but enables the
user to modify the suggested values.
The automatically determined parameters are returned as a name-value pair in ParameterName and
ParameterValue. The parameter names in ParameterName are identical to the names in Parameters,
where, of course, the value ’all’ is replaced by the actual parameter names. An exception is the parameter ’con-
trast_hyst’, for which the two values ’contrast_low’ and ’contrast_high’ are returned.
The remaining parameters (NumLevels, AngleStart, AngleExtent, ScaleMin, ScaleMax,
Optimization, Metric, Contrast, and MinContrast) have the same meaning as in
create_shape_model, create_scaled_shape_model, and create_aniso_shape_model.
The description of these parameters can be looked up with these operators. These parameters are used by
determine_shape_model_params to calculate the parameters to be determined in the same manner as
in create_shape_model, create_scaled_shape_model, and create_aniso_shape_model.
It should be noted that if the parameters of a shape model with isotropic scaling are to be determined, i.e.,
if Parameters contains ’scale_step’ either explicitly or implicitly via ’all’, the parameters ScaleMin and
ScaleMax must contain one value each. If the parameters of a shape model with anisotropic scaling are to
be determined, i.e., if Parameters contains ’scale_r_step’ or ’scale_c_step’ either explicitly or implicitly, the
parameters ScaleMin and ScaleMax must contain two values each. In this case, the first value of the respective
parameter refers to the scaling in the row direction, while the second value refers to the scaling in the column
direction.
Note that in determine_shape_model_params some parameters appear that can also be determined au-
tomatically (NumLevels, Optimization, Contrast, MinContrast). If these parameters should not be
determined automatically, i.e., their name is not passed in ParameterName, the corresponding parameters must
contain valid values and must not be set to ’auto’. In contrast, if these parameters are to be determined au-
tomatically, their values are treated in the following way: If the optimization or the (hysteresis) contrast is to be
determined automatically, i.e., ParameterName contains the value ’optimization’ or ’contrast’ (’contrast_hyst’),
the values passed in Optimization and Contrast are ignored. In contrast, if the maximum number of pyra-
mid levels or the minimum contrast is to be determined automatically, i.e., ParameterName contains the value
’num_levels’ or ’min_contrast’, you can let HALCON determine suitable values and at the same time specify an
upper or lower boundary, respectively:
If the maximum number of pyramid levels is to be specified in advance, NumLevels can be set to the particular
value. If in this case Parameters contains the value ’num_levels’, the computed number of pyramid levels is
at most NumLevels. If NumLevels is set to ’auto’ (or 0 for backwards compatibility), the number of pyramid
levels is determined without restrictions, i.e., as large as possible.
If the minimum contrast is to be specified in advance, MinContrast can be set to the particular value. If in this
case Parameters contains the value ’min_contrast’, the computed minimum contrast is at least MinContrast.
If MinContrast is set to ’auto’, the minimum contrast is determined without restrictions.
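The following sketch determines the number of pyramid levels (limited to at most 4) and the hysteresis contrast thresholds for a prospective model; Template and all remaining values are application-specific placeholders:

    * NumLevels = 4 acts as an upper bound; the value passed in Contrast is
    * ignored because 'contrast_hyst' is to be determined automatically.
    determine_shape_model_params (Template, 4, 0, rad(360), 0.9, 1.1, 'none', 'use_polarity', 30, 10, ['num_levels','contrast_hyst'], ParameterName, ParameterValue)
    * ParameterName contains 'num_levels', 'contrast_low', and 'contrast_high'.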
Parameter
find_aniso_shape_model ( Image : : ModelID, AngleStart, AngleExtent, ScaleRMin, ScaleRMax, ScaleCMin, ScaleCMax, MinScore, NumMatches, MaxOverlap, SubPixel, NumLevels, Greediness : Row, Column, Angle, ScaleR, ScaleC, Score )
Find the best matches of an anisotropic scale invariant shape model in an image.
The operator find_aniso_shape_model finds the best NumMatches instances of the anisotropic scale
invariant shape model ModelID in the input image Image. The model must have been created previously by
calling create_aniso_shape_model or read_shape_model.
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in Row, Column, Angle, ScaleR, and ScaleC. The coordinates Row and Column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with create_aniso_shape_model. A different origin
can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
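A sketch of this procedure, assuming a model and a search image are already available (all parameter values are placeholders): the model contours are given in the model coordinate system with the reference point as origin, so they must be scaled, rotated, and translated to each found pose:

    * Get the XLD contours of the model on the lowest pyramid level.
    get_shape_model_contours (ModelContours, ModelID, 1)
    find_aniso_shape_model (SearchImage, ModelID, -0.39, 0.78, 0.9, 1.1, 0.9, 1.1, 0.7, 0, 0.5, 'least_squares', 0, 0.9, Row, Column, Angle, ScaleR, ScaleC, Score)
    * Build the transformation matrix for each instance and display the model.
    for I := 0 to |Score| - 1 by 1
        hom_mat2d_identity (HomMat2D)
        hom_mat2d_scale (HomMat2D, ScaleR[I], ScaleC[I], 0, 0, HomMat2D)
        hom_mat2d_rotate (HomMat2D, Angle[I], 0, 0, HomMat2D)
        hom_mat2d_translate (HomMat2D, Row[I], Column[I], HomMat2D)
        affine_trans_contour_xld (ModelContours, ContoursTrans, HomMat2D)
        dev_display (ContoursTrans)
    endfor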
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_aniso_shape_model. A different origin set with set_shape_model_origin is not taken into
account. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even if
it would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
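The corresponding call is a global setting (note that it also increases the runtime of the search):

    * Allow the model to extend beyond the image border; points outside the
    * image are treated as occluded and lower the score.
    set_system ('border_shape_models', 'true')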
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped
to the range given when the model was created with create_aniso_shape_model. In particular, this
means that the angle ranges of the model and the search must truly overlap. The angle range in the search is
not adapted modulo 2π. To simplify the presentation, all angles in the remainder of the paragraph are given in
degrees, whereas they have to be specified in radians in find_aniso_shape_model. Hence, if the model,
for example, was created with AngleStart = −20° and AngleExtent = 40° and the angle search space in
find_aniso_shape_model is, for example, set to AngleStart = 350° and AngleExtent = 20°, the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360°. To find
the model, in this example it would be necessary to select AngleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_aniso_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_aniso_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_aniso_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.
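For example, a call that starts the matching on pyramid level 4 but tracks the matches only down to level 2 might be sketched as follows (placeholder values; ’least_squares’ compensates for the reduced accuracy):

    find_aniso_shape_model (Image, ModelID, -0.39, 0.78, 0.9, 1.1, 0.9, 1.1, 0.7, 1, 0.5, 'least_squares', [4,2], 0.9, Row, Column, Angle, ScaleR, ScaleC, Score)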
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, however, the shape model will also be found for
Greediness = 0.9.
Parameter
. Image (input_object) ................. (multichannel-)image ; Hobject : byte / uint2
  Input image in which the model should be found.
. ModelID (input_control) ................. shape_model ; integer
  Handle of the model.
. AngleStart (input_control) ................. angle.rad ; real
  Smallest rotation of the model.
  Default Value : -0.39
  Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) ................. angle.rad ; real
  Extent of the rotation angles.
  Default Value : 0.78
  Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
  Restriction : AngleExtent ≥ 0
. ScaleRMin (input_control) ................. number ; real
  Minimum scale of the model in the row direction.
  Default Value : 0.9
  Suggested values : ScaleRMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
  Restriction : ScaleRMin > 0
Result
If the parameter values are correct, the operator find_aniso_shape_model returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_aniso_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_aniso_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_model, find_scaled_shape_model, find_shape_models,
find_scaled_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching
find_aniso_shape_models ( Image : : ModelIDs, AngleStart, AngleExtent, ScaleRMin, ScaleRMax, ScaleCMin, ScaleCMax, MinScore, NumMatches, MaxOverlap, SubPixel, NumLevels, Greediness : Row, Column, Angle, ScaleR, ScaleC, Score, Model )
Find the best matches of multiple anisotropic scale invariant shape models.
The operator find_aniso_shape_models finds the best NumMatches instances of the anisotropic scale
invariant shape models that are passed in ModelIDs in the input image Image. The models must have been
created previously by calling create_aniso_shape_model or read_shape_model.
Hence, in contrast to find_aniso_shape_model, multiple models can be searched in the same image in
one call. This changes the semantics of all input parameters to some extent. All input parameters must either
contain one element, in which case the parameter is used for all models, or must contain the same number of ele-
ments as ModelIDs, in which case each parameter element refers to the corresponding element in ModelIDs.
(NumLevels may also contain either two or twice the number of elements as ModelIDs; see below.) As usual,
the domain of the input image Image is used to restrict the search space for the reference point of the models
ModelIDs. Consistent with the above semantics, the input image Image can therefore contain a single image
object or an image object tuple containing multiple image objects. If Image contains a single image object, its
domain is used as the region of interest for all models in ModelIDs. If Image contains multiple image objects,
each domain is used as the region of interest for the corresponding model in ModelIDs. In this case, the im-
age matrix of all image objects in the tuple must be identical, i.e., Image cannot be constructed in an arbitrary
manner using concat_obj, but must be created from the same image using add_channels or equivalent
calls. If this is not the case, an error message is returned. The above semantics also hold for the input con-
trol parameters. Hence, for example, MinScore can contain a single value or the same number of values as
ModelIDs. In the first case, the value of MinScore is used for all models in ModelIDs, while in the second
case the respective value of the elements in MinScore is used for the corresponding model in ModelIDs. An
extension to these semantics holds for NumMatches and MaxOverlap. If NumMatches contains one ele-
ment, find_aniso_shape_models returns the best NumMatches instances of the model irrespective of the
type of the model. If, for example, two models are passed in ModelIDs and NumMatches = 2 is selected, it
can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, NumMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in NumMatches. If,
for example, NumMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of NumMatches, see below. A similar extension
of the semantics holds for MaxOverlap. If a single value is passed for MaxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
MaxOverlap, the overlap is only computed for found instances of the model that have the same model type, i.e.,
only instances of the same model that overlap too much are eliminated. In this mode, models of different types
may overlap completely. For a detailed description of the semantics of MaxOverlap, see below. Hence, a call to
find_aniso_shape_models with multiple values for ModelIDs, NumMatches and MaxOverlap has
the same effect as multiple independent calls to find_aniso_shape_model with the respective parameters.
However, a single call to find_aniso_shape_models is considerably more efficient.
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
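A sketch of a search for two models in one call (the handles and all parameter values are placeholders; per-model values are passed for MinScore and NumMatches):

    * One instance of each model is returned; MinScore is 0.7 for the first
    * model and 0.8 for the second.
    find_aniso_shape_models (Image, [ModelID1,ModelID2], -0.39, 0.78, 0.9, 1.1, 0.9, 1.1, [0.7,0.8], [1,1], 0.5, 'least_squares', 0, 0.9, Row, Column, Angle, ScaleR, ScaleC, Score, Model)
    * Model[J] = 0 denotes an instance of ModelID1, Model[J] = 1 one of ModelID2.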
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in Row, Column, Angle, ScaleR, and ScaleC. The coordinates Row and Column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with create_aniso_shape_model. A different origin
can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_aniso_shape_model shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_aniso_shape_model. A different origin set with set_shape_model_origin is not taken into
account. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even if
it would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped
to the range given when the model was created with create_aniso_shape_model. In particular, this
means that the angle ranges of the model and the search must truly overlap. The angle range in the search is
not adapted modulo 2π. To simplify the presentation, all angles in the remainder of the paragraph are given in
degrees, whereas they have to be specified in radians in find_aniso_shape_models. Hence, if the model,
for example, was created with AngleStart = −20° and AngleExtent = 40° and the angle search space in
find_aniso_shape_models is, for example, set to AngleStart = 350° and AngleExtent = 20°, the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360°. To find
the model, in this example it would be necessary to select AngleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_aniso_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_aniso_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_aniso_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5 , 2 , 4 , 1 ]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
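For the example given above (5 levels and lowest level 2 for the first model, 4 levels and lowest level 1 for the second), the call might be sketched as follows (handles and the remaining values are placeholders):

    find_aniso_shape_models (Image, [ModelID1,ModelID2], -0.39, 0.78, 0.9, 1.1, 0.9, 1.1, 0.7, [1,1], 0.5, 'least_squares', [5,2,4,1], 0.9, Row, Column, Angle, ScaleR, ScaleC, Score, Model)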
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, however, the shape models will also be found for
Greediness = 0.9.
Parameter
HALCON 8.0.2
628 CHAPTER 9. MATCHING
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
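The elimination rule can be sketched as a greedy suppression by score. The Python sketch below is illustrative only: it uses a toy overlap measure on 1-D intervals in place of the real computation on smallest enclosing rectangles of arbitrary orientation, and the intersection-over-smaller-interval fraction is an assumption of the sketch, not the exact HALCON formula.

```python
# Sketch (Python, not HALCON internals): keep the best-scoring instance
# whenever two instances overlap by more than MaxOverlap.

def suppress_overlaps(instances, max_overlap, overlap):
    """instances: list of (score, data); better scores win on conflicts."""
    kept = []
    for score, data in sorted(instances, key=lambda x: -x[0]):
        if all(overlap(data, kept_data) <= max_overlap
               for _, kept_data in kept):
            kept.append((score, data))
    return kept

# Toy overlap fraction on 1-D intervals (assumption of this sketch only):
def interval_overlap(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / min(a[1] - a[0], b[1] - b[0])

matches = [(0.9, (0.0, 2.0)), (0.8, (1.0, 3.0)), (0.7, (5.0, 7.0))]
# The 0.8 instance overlaps the 0.9 instance by 0.5 > 0.3 and is eliminated:
print(suppress_overlaps(matches, 0.3, interval_overlap))
```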
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
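The principle behind the ’interpolation’ mode can be illustrated in one dimension: fit a parabola through the score at the best discrete pose and its two neighbors, and take the parabola’s peak as the subpixel pose. The Python sketch below shows this principle only; it is not the actual HALCON implementation.

```python
# Sketch (Python): subpixel peak localization by parabolic interpolation
# of the score function, the idea behind SubPixel = 'interpolation'.

def parabolic_peak(s_left, s_center, s_right):
    """Return the subpixel offset in (-0.5, 0.5) of the score maximum."""
    denom = s_left - 2.0 * s_center + s_right
    if denom >= 0.0:          # no strict maximum at the center sample
        return 0.0
    return 0.5 * (s_left - s_right) / denom

# Scores sampled at discrete poses -1, 0, +1; the true peak lies slightly
# to the right of the center sample:
print(round(parabolic_peak(0.70, 0.90, 0.80), 3))  # -> 0.167
```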
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_scaled_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is
used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter
Result
If the parameter values are correct, the operator find_scaled_shape_model returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_scaled_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_scaled_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_model, find_aniso_shape_model, find_shape_models,
find_scaled_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching
An extension to these semantics holds for NumMatches and MaxOverlap. If NumMatches contains one
element, find_scaled_shape_models returns the best NumMatches instances of the model irrespective of
the type of the model. If, for example, two models are passed in ModelIDs and NumMatches = 2 is selected,
it can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, NumMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in NumMatches. If,
for example, NumMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of NumMatches, see below. A similar extension
of the semantics holds for MaxOverlap. If a single value is passed for MaxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
MaxOverlap, the overlap is only computed for found instances of the model that have the same model type, i.e.,
only instances of the same model that overlap too much are eliminated. In this mode, models of different types
may overlap completely. For a detailed description of the semantics of MaxOverlap, see below. Hence, a call to
find_scaled_shape_models with multiple values for ModelIDs, NumMatches and MaxOverlap has
the same effect as multiple independent calls to find_scaled_shape_model with the respective parameters.
However, a single call to find_scaled_shape_models is considerably more efficient.
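The two semantics of NumMatches can be summarized in a short Python sketch (illustrative only; select_matches and its inputs are not part of the HALCON API):

```python
# Sketch (Python): NumMatches with one element selects the best N instances
# over all models; with one element per model it enforces a per-model quota.

def select_matches(instances, num_matches, num_models):
    """instances: list of (score, model_index), higher score is better."""
    ranked = sorted(instances, key=lambda x: -x[0])
    if len(num_matches) == 1:                  # best N overall, any model
        return ranked[:num_matches[0]]
    selected, counts = [], [0] * num_models    # per-model quota
    for score, model in ranked:
        if counts[model] < num_matches[model]:
            selected.append((score, model))
            counts[model] += 1
    return selected

found = [(0.95, 0), (0.90, 0), (0.85, 1)]
print(select_matches(found, [2], 2))      # -> two instances of model 0
print(select_matches(found, [1, 1], 2))   # -> one instance of each model
```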
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
The position, rotation, and scale of the found instances of the model are returned in Row, Column, Angle,
and Scale. The coordinates Row and Column are the coordinates of the origin of the shape model in the
search image. By default, the origin is the center of gravity of the domain (region) of the image that was
used to create the shape model with create_scaled_shape_model. A different origin can be set with
set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_scaled_shape_model shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
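In HDevelop, the transformation matrix is typically created with operators such as vector_angle_to_rigid and applied with affine_trans_pixel. The plain-Python sketch below only illustrates the underlying similarity transform implied by one match; the sign conventions are an assumption of the sketch, not a statement about HALCON’s coordinate handling.

```python
# Sketch (Python): the 2-D similarity transform implied by one match
# (Row, Column, Angle, Scale), mapping model coordinates (relative to the
# model origin) to image coordinates of the found instance.
import math

def match_transform(row, col, angle, scale):
    c, s = math.cos(angle), math.sin(angle)
    def apply(r_model, c_model):
        return (row + scale * (c * r_model - s * c_model),
                col + scale * (s * r_model + c * c_model))
    return apply

# The model origin itself must land on (Row, Column):
t = match_transform(120.0, 200.0, math.pi / 2, 1.5)
print(t(0.0, 0.0))  # -> (120.0, 200.0)
```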
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_scaled_shape_model. A different origin set with set_shape_model_origin is not taken
into account. The model is searched within those points of the domain of the image, in which the model lies
completely within the image. This means that the model will not be found if it extends beyond the borders of the
image, even if it would achieve a score greater than MinScore (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause models that extend beyond the im-
age border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are
regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase
in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleMin and ScaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
create_scaled_shape_model. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians in
find_scaled_shape_models. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_scaled_shape_models is, for example, set
to AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
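The example above can be checked with a plain interval intersection, which is exactly what "not adapted modulo 2π" means. The following Python sketch is illustrative only:

```python
# Sketch (Python): the search angle range is intersected with the model's
# range as plain intervals, without reduction modulo 2*pi.
import math

def ranges_overlap(start_a, extent_a, start_b, extent_b):
    """True if the two angle intervals truly intersect (no modulo wrap)."""
    return max(start_a, start_b) < min(start_a + extent_a,
                                       start_b + extent_b)

deg = math.pi / 180.0
model = (-20 * deg, 40 * deg)                        # [-20 deg, +20 deg]
print(ranges_overlap(*model, 350 * deg, 20 * deg))   # -> False: not found
print(ranges_overlap(*model, -10 * deg, 20 * deg))   # -> True: can be found
```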
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_scaled_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice as many elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5 , 2 , 4 , 1 ]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin set with set_shape_model_origin is not taken into account.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_shape_model. In particular, this means that the angle ranges of the model and the search must
truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians
in find_shape_model. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_shape_model is, for example, set to
AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with create_shape_model. If SubPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, SubPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with create_shape_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_shape_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter
Result
If the parameter values are correct, the operator find_shape_model returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_scaled_shape_model, find_aniso_shape_model, find_scaled_shape_models,
find_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching
The operator find_shape_models finds the best NumMatches instances of the shape models that are passed
in the tuple ModelIDs in the input image Image. The models must have been created previously by calling
create_shape_model or read_shape_model.
Hence, in contrast to find_shape_model, multiple models can be searched in the same image in one call. This
changes the semantics of all input parameters to some extent. All input parameters must either contain one element,
in which case the parameter is used for all models, or must contain the same number of elements as ModelIDs,
in which case each parameter element refers to the corresponding element in ModelIDs. (NumLevels may also
contain either two or twice the number of elements as ModelIDs; see below.) As usual, the domain of the input
image Image is used to restrict the search space for the reference point of the models ModelIDs. Consistent
with the above semantics, the input image Image can therefore contain a single image object or an image object
tuple containing multiple image objects. If Image contains a single image object, its domain is used as the region
of interest for all models in ModelIDs. If Image contains multiple image objects, each domain is used as the
region of interest for the corresponding model in ModelIDs. In this case, the image matrix of all image objects
in the tuple must be identical, i.e., Image cannot be constructed in an arbitrary manner using concat_obj,
but must be created from the same image using add_channels or equivalent calls. If this is not the case, an
error message is returned. The above semantics also hold for the input control parameters. Hence, for example,
MinScore can contain a single value or the same number of values as ModelIDs. In the first case, the value
of MinScore is used for all models in ModelIDs, while in the second case the respective value of the elements
in MinScore is used for the corresponding model in ModelIDs. An extension to these semantics holds for
NumMatches and MaxOverlap. If NumMatches contains one element, find_shape_models returns the
best NumMatches instances of the model irrespective of the type of the model. If, for example, two models are
passed in ModelIDs and NumMatches = 2 is selected, it can happen that two instances of the first model and no
instances of the second model, one instance of the first model and one instance of the second model, or no instances
of the first model and two instances of the second model are returned. If, on the other hand, NumMatches contains
multiple values, the number of instances returned of the different models corresponds to the number specified in
the respective entry in NumMatches. If, for example, NumMatches = [1, 1] is selected, one instance of the
first model and one instance of the second model is returned. For a detailed description of the semantics of
NumMatches, see below. A similar extension of the semantics holds for MaxOverlap. If a single value is
passed for MaxOverlap, the overlap is computed for all found instances of the different models, irrespective of
the model type, i.e., instances of the same or of different models that overlap too much are eliminated. If, on the
other hand, multiple values are passed in MaxOverlap, the overlap is only computed for found instances of the
model that have the same model type, i.e., only instances of the same model that overlap too much are eliminated.
In this mode, models of different types may overlap completely. For a detailed description of the semantics
of MaxOverlap, see below. Hence, a call to find_shape_models with multiple values for ModelIDs,
NumMatches and MaxOverlap has the same effect as multiple independent calls to find_shape_model
with the respective parameters. However, a single call to find_shape_models is considerably more efficient.
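The "one element or one per model" rule for the control parameters can be written as a small broadcasting helper. The Python sketch below mirrors the semantics described above; it is not HALCON code.

```python
# Sketch (Python): each control parameter of find_shape_models must contain
# either one element (used for all models) or one element per model.

def per_model(value, num_models, name):
    """Expand a parameter tuple to one entry per model."""
    if len(value) == 1:
        return value * num_models
    if len(value) == num_models:
        return list(value)
    raise ValueError(f"{name} must have 1 or {num_models} elements")

model_ids = ["model_a", "model_b", "model_c"]   # placeholder model handles
print(per_model([0.7], len(model_ids), "MinScore"))        # broadcast
print(per_model([0.7, 0.8, 0.9], len(model_ids), "MinScore"))
```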
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
The position and rotation of the found instances of the model is returned in Row, Column, and Angle. The
coordinates Row and Column are the coordinates of the origin of the shape model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_shape_model shows how to create this matrix and use it to display the model at
the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin set with set_shape_model_origin is not taken into account.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_shape_model. In particular, this means that the angle ranges of the model and the search must
truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians
in find_shape_models. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_shape_models is, for example, set to
AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
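The effect of MaxOverlap can be pictured as a greedy selection over score-sorted matches. The sketch below is plain Python and, for brevity, measures the overlap on axis-aligned boxes, whereas find_shape_models uses the smallest enclosing rectangle of arbitrary orientation:

```python
def overlap_fraction(a, b):
    """Overlap of two boxes (x0, y0, x1, y1) as a fraction of the smaller
    box's area; HALCON itself uses the smallest enclosing rectangle of
    arbitrary orientation instead of axis-aligned boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    if w <= 0 or h <= 0:
        return 0.0
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return (w * h) / min(area(a), area(b))

def filter_instances(instances, max_overlap):
    """Keep score-sorted instances whose overlap with every already
    kept instance does not exceed max_overlap."""
    kept = []
    for score, box in sorted(instances, key=lambda i: -i[0]):
        if all(overlap_fraction(box, kb) <= max_overlap for _, kb in kept):
            kept.append((score, box))
    return kept

matches = [(0.9, (0, 0, 10, 10)), (0.8, (5, 5, 15, 15))]  # overlap 0.25
print(len(filter_instances(matches, 0.5)))  # -> 2: both returned
print(len(filter_instances(matches, 0.2)))  # -> 1: only the best one
```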
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with create_shape_model. If SubPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, SubPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with create_shape_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_shape_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
HALCON 8.0.2
642 CHAPTER 9. MATCHING
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice as many elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5 , 2 , 4 , 1 ]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
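A plain-Python sketch of how the interleaved NumLevels values can be read, including the special case for exactly two models (illustration only, not HALCON code):

```python
def parse_num_levels(num_levels, num_models):
    """Split NumLevels into (levels, lowest_level) pairs, one per model.
    With exactly two models, two values are always read as one number of
    levels per model, tracking to the lowest level (1)."""
    if len(num_levels) == num_models:
        return [(n, 1) for n in num_levels]
    if len(num_levels) == 2 * num_models:
        return [(num_levels[i], num_levels[i + 1])
                for i in range(0, len(num_levels), 2)]
    raise ValueError("NumLevels must contain as many or twice as many "
                     "values as there are models")

print(parse_num_levels([5, 2, 4, 1], 2))  # -> [(5, 2), (4, 1)]
print(parse_num_levels([4, 2], 2))        # -> [(4, 1), (2, 1)]: special case
```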
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness =
0.9.
Parameter
Module
Matching
Matching-3D
affine_trans_object_model_3d ( : : ObjectModel3DID,
HomMat3D : ObjectModel3DIDAffineTrans )
clear_all_object_model_3d ( : : : )
The operator clear_all_object_model_3d frees the memory of all 3D object models that were created by
read_object_model_3d_dxf. After calling clear_all_object_model_3d, no model can be used
any longer.
Attention
clear_all_object_model_3d exists solely for the purpose of implementing the “reset program” function-
ality in HDevelop. clear_all_object_model_3d must not be used in any application.
Result
clear_all_object_model_3d always returns 2 (H_MSG_TRUE).
Parallelization Information
clear_all_object_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf
Alternatives
clear_object_model_3d
Module
3D Metrology
clear_all_shape_model_3d ( : : : )
clear_object_model_3d ( : : ObjectModel3DID : )
Parallelization Information
clear_object_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf
See also
clear_all_object_model_3d
Module
3D Metrology
clear_shape_model_3d ( : : ShapeModel3DID : )
convert_point_3d_cart_to_spher ( : : X, Y, Z, EquatPlaneNormal,
ZeroMeridian : Longitude, Latitude, Radius )
’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.
The position of the zero meridian can be specified with the parameter ZeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
ZeroMeridian are valid:
’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.
Only reasonable combinations of EquatPlaneNormal and ZeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
EquatPlaneNormal=’y’ and ZeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from spherical to Cartesian coordinates by using
convert_point_3d_spher_to_cart, the same values must be passed for EquatPlaneNormal and
ZeroMeridian as were passed to convert_point_3d_cart_to_spher.
The operator convert_point_3d_cart_to_spher can be used, for example, to convert a given camera
position into spherical coordinates. If multiple camera positions are converted in this way, one obtains a pose range
(in spherical coordinates), which can be passed to create_shape_model_3d in order to create a 3D shape
model.
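The geometry of this conversion can be sketched in plain Python. The sketch below is an illustration, not HALCON code; in particular, the sign convention chosen here for positive longitude is an assumption, not taken from the operator's definition:

```python
import math

AXES = {'x': (1.0, 0.0, 0.0), '-x': (-1.0, 0.0, 0.0),
        'y': (0.0, 1.0, 0.0), '-y': (0.0, -1.0, 0.0),
        'z': (0.0, 0.0, 1.0), '-z': (0.0, 0.0, -1.0)}

def cart_to_spher(x, y, z, equat_plane_normal='-y', zero_meridian='-z'):
    """Sketch of the Cartesian-to-spherical geometry described above."""
    n = AXES[equat_plane_normal]   # direction of the north pole
    m = AXES[zero_meridian]        # direction of the zero meridian
    # east axis completing the frame; the sign convention chosen here
    # for positive longitude is an assumption
    e = (n[1] * m[2] - n[2] * m[1],
         n[2] * m[0] - n[0] * m[2],
         n[0] * m[1] - n[1] * m[0])
    p = (x, y, z)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    radius = math.sqrt(x * x + y * y + z * z)
    latitude = math.asin(dot(p, n) / radius)      # angle above the equator
    longitude = math.atan2(dot(p, e), dot(p, m))  # angle from the meridian
    return longitude, latitude, radius

print(cart_to_spher(0.0, 0.0, -1.0))  # zero-meridian point -> (0.0, 0.0, 1.0)
```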
Parameter
. X (input_control): real(-array) ; real
X coordinate of the 3D point.
. Y (input_control): real(-array) ; real
Y coordinate of the 3D point.
. Z (input_control): real(-array) ; real
Z coordinate of the 3D point.
. EquatPlaneNormal (input_control): string ; string
Normal vector of the equatorial plane (points to the north pole).
Default Value: ’-y’
List of values: EquatPlaneNormal ∈ {’x’, ’y’, ’z’, ’-x’, ’-y’, ’-z’}
. ZeroMeridian (input_control): string ; string
Coordinate axis in the equatorial plane that points to the zero meridian.
Default Value: ’-z’
List of values: ZeroMeridian ∈ {’x’, ’y’, ’z’, ’-x’, ’-y’, ’-z’}
. Longitude (output_control): angle.rad(-array) ; real
Longitude of the 3D point.
. Latitude (output_control): angle.rad(-array) ; real
Latitude of the 3D point.
. Radius (output_control): real(-array) ; real
Radius of the 3D point.
Result
If the parameters are valid, the operator convert_point_3d_cart_to_spher returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
convert_point_3d_cart_to_spher is reentrant and processed without parallelization.
Possible Successors
create_shape_model_3d, find_shape_model_3d
See also
convert_point_3d_spher_to_cart
Module
3D Metrology
’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.
The position of the zero meridian can be specified with the parameter ZeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
ZeroMeridian are valid:
’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.
Only reasonable combinations of EquatPlaneNormal and ZeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
EquatPlaneNormal=’y’ and ZeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from Cartesian to spherical coordinates by using
convert_point_3d_cart_to_spher, the same values must be passed for EquatPlaneNormal and
ZeroMeridian as were passed to convert_point_3d_spher_to_cart.
The operator convert_point_3d_spher_to_cart can be used, for example, to convert a camera position
that is given in spherical coordinates into Cartesian coordinates. The result can then be utilized to create a complete
camera pose by passing the Cartesian coordinates to create_cam_pose_look_at_point.
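The inverse mapping can be sketched similarly in plain Python for the default frame (EquatPlaneNormal = ’-y’, ZeroMeridian = ’-z’). As with the forward conversion, this is an illustration and the longitude sign convention is an assumption:

```python
import math

def spher_to_cart(longitude, latitude, radius):
    """Sketch of the spherical-to-Cartesian geometry for the default
    frame EquatPlaneNormal = '-y', ZeroMeridian = '-z'."""
    n = (0.0, -1.0, 0.0)  # north pole
    m = (0.0, 0.0, -1.0)  # zero meridian
    e = (1.0, 0.0, 0.0)   # east axis n x m; longitude sign is an assumption
    s, c = math.sin(latitude), math.cos(latitude)
    cl, sl = math.cos(longitude), math.sin(longitude)
    return tuple(radius * (s * ni + c * (cl * mi + sl * ei))
                 for ni, mi, ei in zip(n, m, e))

print(spher_to_cart(0.0, 0.0, 1.0))  # zero-meridian point -> (0.0, 0.0, -1.0)
```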
Parameter
’x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world coordi-
nate system points upwards in the image plane.
’-x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world
coordinate system points downwards in the image plane.
’y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world coordi-
nate system points upwards in the image plane.
’-y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world
coordinate system points downwards in the image plane.
’z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world coordi-
nate system points upwards in the image plane.
’-z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world
coordinate system points downwards in the image plane.
As an alternative to the above values, an arbitrary normal vector can be specified in RefPlaneNormal, which is not
restricted to the coordinate axes. For this, a tuple of three values representing the three components of the normal
vector must be passed.
Note that the position of the optical center and the point at which the camera looks must differ from each other.
Furthermore, the normal vector of the reference plane and the z axis of the camera must not be parallel. Otherwise,
the camera pose is not well-defined.
create_cam_pose_look_at_point is particularly useful if a 3D object model or a 3D shape
model should be visualized from a certain camera position. In this case, the pose that is cre-
ated by create_cam_pose_look_at_point can be passed to project_object_model_3d or
project_shape_model_3d, respectively.
It is also possible to pass tuples of different length for different input parameters. In this case, internally the
maximum number of parameter values over all input control parameters is computed. This number is taken as
the number of output camera poses. Then, all input parameters can contain a single value or the same number of
values as output camera poses. In the first case, the single value is used for the computation of all camera poses,
while in the second case the respective value of the element in the parameter is used for the computation of the
corresponding camera pose.
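This broadcasting rule can be sketched in plain Python (illustration only; the internal parameter handling of create_cam_pose_look_at_point is not exposed):

```python
def broadcast_params(*params):
    """Sketch of the tuple broadcasting rule: every input parameter must
    hold either one value or as many values as the longest parameter;
    single values are reused for every output camera pose."""
    n = max(len(p) for p in params)
    columns = []
    for p in params:
        if len(p) == 1:
            columns.append(list(p) * n)
        elif len(p) == n:
            columns.append(list(p))
        else:
            raise ValueError("parameter must contain 1 or %d values" % n)
    return list(zip(*columns))  # one value tuple per output camera pose

# hypothetical inputs: one x coordinate, three y coordinates, one z coordinate
print(broadcast_params([0.0], [0.1, 0.2, 0.3], [1.5]))
# -> [(0.0, 0.1, 1.5), (0.0, 0.2, 1.5), (0.0, 0.3, 1.5)]
```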
Parameter
3D object model into a reference orientation such that the view that corresponds to longitude=0 and latitude=0 is
approximately at the center of the pose range. This can be achieved by passing appropriate values for the reference
orientation in RefRotX, RefRotY, RefRotZ, and OrderOfRotation. The rotation is performed around the
axes of the 3D object model, whose origin was set to the reference point. The longitude and latitude range can then
be interpreted as a variation of the 3D object model pose around the reference orientation. There are two possible
ways to specify the reference orientation. The first possibility is to specify three rotation angles in RefRotX,
RefRotY, and RefRotZ and the order in which the three rotations are to be applied in OrderOfRotation,
which can either be ’gba’ or ’abg’. The second possibility is to specify the three components of the Rodriguez
rotation vector in RefRotX, RefRotY, and RefRotZ. In this case, OrderOfRotation must be set to
’rodriguez’ (see create_pose for detailed information about the order of the rotations and the definition of the
Rodriguez vector).
Thus, two transformations are applied to the 3D object model before computing the model views within the pose
range. The first transformation is the translation of the origin of the coordinate systems to the reference point. The
second transformation is the rotation of the 3D object model to the desired reference orientation around the axes
of the reference coordinate system. By combining both transformations one obtains the reference pose of the 3D
shape model. The reference pose of the 3D shape model thus describes the pose of the reference coordinate system
with respect to the coordinate system of the 3D object model defined by the DXF file. Let t = (x, y, z)^T be the
coordinates of the reference point of the 3D object model and R be the rotation matrix containing the reference
orientation. Then, a point p_m given in the 3D object model coordinate system can be transformed to a point p_r in
the reference coordinate system of the 3D shape model by applying the following formula:
p_r = R · (p_m − t)
This transformation can be expressed by a homogeneous 3D transformation matrix or alternatively in terms of a 3D
pose. The latter can be queried by passing ’reference_pose’ for the parameter GenParamNames of the operator
get_shape_model_3d_params. The above formula can be best imagined as a pose of pose type 8, 10, or 12,
depending on the value that was chosen for OrderOfRotation (see create_pose for detailed information
about the different pose types). Note, however, that get_shape_model_3d_params always returns the pose
using the pose type 0. Finally, poses that are given in one of the two coordinate systems can be transformed to the
other coordinate system by using trans_pose_shape_model_3d.
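The transformation into the reference coordinate system can be sketched in plain Python; the reference point and rotation below are hypothetical example values, not values produced by HALCON:

```python
import math

def rotation_z(angle):
    """Rotation matrix about the z axis (example choice for R)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def to_reference_frame(p_m, t, R):
    """p_r = R * (p_m - t): translate by the reference point t,
    then rotate with the reference orientation R."""
    d = [pm - ti for pm, ti in zip(p_m, t)]
    return [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]

# hypothetical reference point and a 90 degree rotation about z
p_r = to_reference_frame((2.0, 1.0, 0.0), (1.0, 1.0, 0.0), rotation_z(math.pi / 2))
print(p_r)  # -> approximately [0.0, 1.0, 0.0]
```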
With MinContrast, it can be determined which edge contrast the model must at least have in the recognition
performed by find_shape_model_3d. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the search images, the noise in one channel must be multiplied by the square root
of the number of channels to determine MinContrast. If, for example, the gray values fluctuate within a range
of 10 gray levels in a single channel and the image is a three-channel image, MinContrast should be set to 17.
If the model should be recognized in very low contrast images, MinContrast must be set to a correspondingly
small value. If the model should be recognized even if it is severely occluded, MinContrast should be slightly
larger than the range of gray value fluctuations created by noise in order to ensure that the pose of the model is
extracted robustly and accurately by find_shape_model_3d.
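The multichannel rule from the paragraph above can be computed as a one-liner (plain Python sketch):

```python
import math

def min_contrast_for_channels(noise_range, num_channels):
    """Noise range of one channel times the square root of the number
    of channels, rounded to the nearest gray level."""
    return round(noise_range * math.sqrt(num_channels))

print(min_contrast_for_channels(10, 1))  # -> 10 (single-channel image)
print(min_contrast_for_channels(10, 3))  # -> 17 (three-channel image)
```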
The parameters described above are application-dependent and must always be specified when creating a 3D
shape model. In addition, there are some generic parameters that can optionally be used to influence the model
creation. For most applications these parameters need not be specified but can be left at their default values.
If desired, these parameters and their corresponding values can be specified by using GenParamNames and
GenParamValues, respectively. The following values for GenParamNames are possible:
’num_levels’: For efficiency reasons the model views are generated on multiple pyramid levels. On higher levels
fewer views are generated than on lower levels. With the parameter ’num_levels’ the number of pyramid
levels on which model views are generated can be specified. It should be chosen as large as possible because
by this the time necessary to find the model is significantly reduced. On the other hand, the number of
levels must be chosen such that the shape representations of the views on the highest pyramid level are
still recognizable and contain a sufficient number of points (at least four). If not enough model points are
generated for a certain view, the view is deleted from the model and replaced by a view on a lower pyramid
level. If not enough model points are generated for any view on a pyramid level, the number of levels is
reduced internally until enough model points are found for at least one view on the highest pyramid level.
If this procedure would lead to a model with no pyramid levels, i.e., if the number of model points is too
small for all views even on the lowest pyramid level, create_shape_model_3d returns an error
message. If ’num_levels’ is set to ’auto’ (default value), create_shape_model_3d determines the
number of pyramid levels automatically. In this case all model views on all pyramid levels are automatically
checked whether their shape representations are still recognizable. If the shape representation of a certain
view is found to be not recognizable, the view is deleted from the model and replaced by a view on a lower
pyramid level. Note that if ’num_levels’ is set to ’auto’, the number of pyramid levels can be different for
different views. In rare cases, it might happen that create_shape_model_3d determines a value for
the number of pyramid levels that is too large or too small. If the number of pyramid levels is chosen too
large, the model may not be recognized in the image or it may be necessary to select very low values
for MinScore or Greediness in find_shape_model_3d in order to find the model. If the number
of pyramid levels is chosen too small, the time required to find the model in find_shape_model_3d
may increase. In these cases, the views on the pyramid levels should be checked by using the output of
get_shape_model_3d_contours.
Suggested values: ’auto’, 3, 4, 5, 6
Default value: ’auto’
’optimization’: For models with particularly large model views, it may be useful to reduce the number of model
points by setting ’optimization’ to a value different from ’none’. If ’optimization’ = ’none’, all model points
are stored. In all other cases, the number of points is reduced according to the value of ’optimization’. If
the number of points is reduced, it may be necessary in find_shape_model_3d to set the parame-
ter Greediness to a smaller value, e.g., 0.7 or 0.8. For models with small model views, the reduction
of the number of model points does not result in a speed-up of the search because in this case usually
significantly more potential instances of the model must be examined. If ’optimization’ is set to ’auto’,
create_shape_model_3d automatically determines the reduction of the number of model points for
each model view.
List of values: ’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’, ’point_reduction_high’
Default value: ’auto’
’metric’: This parameter determines the conditions under which the model is recognized in the image. Cur-
rently, only the metric ’ignore_segment_polarity’ is supported, which recognizes an object even if the con-
trast changes locally.
List of values: ’ignore_segment_polarity’
’min_face_angle’: 3D edges are only included in the shape representations of the views if the angle between
the two 3D faces that are incident with the 3D object model edge is at least ’min_face_angle’. If
’min_face_angle’ is set to 0.0, all edges are included. If ’min_face_angle’ is set to π (equivalent to 180
degrees), only the silhouette of the 3D object model is included. This parameter can be used to suppress
edges within curved surfaces, e.g., the surface of a cylinder or cone. Curved surfaces are approximated by
multiple planar faces. The edges between such neighboring planar faces should not be included in the shape
representation because they also do not appear in real images of the model. Thus, ’min_face_angle’ should
be set sufficiently high to suppress these edges. The effect of different values for ’min_face_angle’ can be
inspected by using project_object_model_3d before calling create_shape_model_3d. Note
that if edges that are not visible in the search image are included in the shape representation, the performance
(robustness and speed) of the matching may decrease considerably.
Suggested values: rad(10), rad(20), rad(30), rad(45)
Default value: rad(15)
’min_size’: This value determines a threshold for the selection of significant model components based on the size
of the components, i.e., connected components that have fewer points than the specified minimum size are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level.
Suggested values: ’auto’, 0, 3, 5, 10, 20
Default value: ’auto’
’model_tolerance’: The parameter specifies the tolerance of the projected 3D object model edges in the image,
given in pixels. The higher the value is chosen, the fewer views need to be generated. Consequently, a higher
value results in models that are less memory consuming and faster to find with find_shape_model_3d.
On the other hand, if the value is chosen too high, the robustness of the matching will decrease. Therefore,
this parameter should only be modified with care. For most applications, a good compromise between speed
and robustness is obtained when setting ’model_tolerance’ to 1.
Suggested values: 0, 1, 2
Default value: 1
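The halving of the ’min_size’ threshold over the pyramid (described under ’min_size’ above) can be sketched as follows; the assumption that level 1 denotes the lowest pyramid level follows the NumLevels convention used in this chapter:

```python
def min_size_on_level(min_size, level):
    """'min_size' threshold halved for each successive pyramid level;
    level 1 is taken to be the lowest level (an assumption)."""
    return min_size / 2 ** (level - 1)

print(min_size_on_level(20, 1))  # -> 20.0
print(min_size_on_level(20, 3))  # -> 5.0
```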
Parameter
. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; integer
Handle of the 3D object model.
The domain of the image Image determines the search space for the reference point of the 3D object model.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. Note that in images with a
high degree of clutter or strong background texture, MinScore should be set to a value not much lower than 0.7
since otherwise false matches could be found.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search
will be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which
may cause the model not to be found in rare cases, even though it is visible in the image. For Greediness =
1, the maximum search speed is achieved. In almost all cases, the 3D shape model will always be found for
Greediness = 0.9.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the 3D shape model was created with create_shape_model_3d.
If NumLevels is set to 0, the number of pyramid levels specified in create_shape_model_3d is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. If the lowest pyramid level to use is
chosen too large, it may happen that the desired accuracy cannot be achieved, or that wrong instances of the model
are found because the model is not specific enough on the higher pyramid levels to facilitate a reliable selection of
the correct instance of the model. In this case, the lowest pyramid level to use must be set to a smaller value.
In addition to the parameters described above, there are some generic parameters that can optionally be used to
influence the matching. For most applications these parameters need not be specified but can be left at their default
values. If desired, these parameters and their corresponding values can be specified by using GenParamNames
and GenParamValues, respectively. The following values for GenParamNames are possible:
• If the pose range in which the model is to be searched is smaller than the pose range that was specified during
the model creation with create_shape_model_3d, the pose range can be restricted appropriately with
the following parameters. If the values lie outside the pose range of the model, the values are automatically
clipped to the pose range of the model.
’longitude_min’: Sets the minimum longitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’longitude_max’: Sets the maximum longitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’latitude_min’: Sets the minimum latitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-90)
’latitude_max’: Sets the maximum latitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(90)
’cam_roll_min’: Sets the minimum camera roll angle of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’cam_roll_max’: Sets the maximum camera roll angle of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’dist_min’: Sets the minimum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: 0
’dist_max’: Sets the maximum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: (∞)
• Further generic parameters that do not concern the pose range can be specified:
’num_matches’: With this parameter the maximum number of instances to be found can be determined.
If more than the specified number of instances with a score greater than MinScore are found in the
image, only the best ’num_matches’ instances are returned. If fewer than ’num_matches’ are found,
only that number is returned, i.e., the parameter MinScore takes precedence over ’num_matches’. If
’num_matches’ is set to 0, all matches that satisfy the score criterion are returned. Note that the more
matches are to be found, the slower the matching will be.
Suggested values: 0, 1, 2, 3
Default value: 1
’max_overlap’: It may happen that multiple instances with similar positions but different orientations are
found in the image. The parameter ’max_overlap’ determines the fraction (i.e., a number between 0
and 1) by which two instances may at most overlap in order to be considered as different instances, and
hence to be returned separately. If two instances overlap by more than the specified value, only the best
instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle of
arbitrary orientation (see smallest_rectangle2) of the found instances. If ’max_overlap’ = 0,
the found instances may not overlap at all, while for ’max_overlap’ = 1 all instances are returned.
Suggested values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
Default value: 0.5
’pose_refinement’: This parameter determines whether the poses of the instances should be refined after
the matching. If ’pose_refinement’ is set to ’none’ the model’s pose is only determined with a limited
accuracy. In this case, the accuracy depends on several sampling steps that are used inside the matching
process and, therefore, cannot be predicted very well. Hence, ’pose_refinement’ should only be
set to ’none’ when the computation time is of primary concern and an approximate pose is sufficient.
In all other cases the pose should be determined through a least-squares adjustment, i.e., by minimiz-
ing the distances of the model points to their corresponding image points. In order to achieve a high
accuracy, this refinement is directly performed in 3D. Therefore, the refinement requires additional com-
putation time. The different modes for least-squares adjustment (’least_squares’, ’least_squares_high’,
and ’least_squares_very_high’) can be used to determine the accuracy with which the minimum distance
is searched for. The higher the accuracy is chosen, the longer the pose refinement will take, however.
For most applications ’least_squares_high’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
List of values: ’none’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’
Default value: ’least_squares_high’
’outlier_suppression’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. Then, in some cases it might be useful
to apply a robust outlier suppression during the least-squares adjustment. This might be necessary, for
example, if a high degree of clutter is present in the image, which prevents the least-squares adjustment
from finding the optimum pose. In this case, ’outlier_suppression’ should be set to either ’medium’
(eliminates a medium proportion of outliers) or ’high’ (eliminates a high proportion of outliers). However,
in most applications no robust outlier suppression is necessary, and hence, ’outlier_suppression’ can
be left at ’none’. It should be noted that activating the outlier suppression significantly increases the
computation time.
List of values: ’none’, ’medium’, ’high’
Default value: ’none’
’cov_pose_mode’: This parameter only takes effect if ’pose_refinement’ is set to a value other than ’none’,
and hence, a least-squares adjustment is performed. ’cov_pose_mode’ determines the mode in which
the accuracies that are computed during the least-squares adjustment are returned in CovPose. If
’cov_pose_mode’ is set to ’standard_deviations’, the 6 standard deviations of the 6 pose parameters
are returned for each match. In contrast, if ’cov_pose_mode’ is set to ’covariances’, CovPose contains
the 36 values of the complete 6 × 6 covariance matrix of the 6 pose parameters.
List of values: ’standard_deviations’, ’covariances’
Default value: ’standard_deviations’
’border_model’: The model is searched within those points of the domain of the image in which the model
lies completely within the image. This means that the model will not be found if it extends beyond
the borders of the image, even if it would achieve a score greater than MinScore. This behavior can
be changed by setting ’border_model’ to ’true’, which will cause models that extend beyond the image
border to be found if they achieve a score greater than MinScore. Here, points lying outside the image
are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the
search will increase in this mode.
List of values: ’false’, ’true’
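For illustration, the generic parameters described above are passed to find_shape_model_3d as two tuples of equal length. The following sketch assumes the operator signature with Image, ShapeModel3DID, MinScore, Greediness, and NumLevels preceding the generic parameters; the values of MinScore and Greediness are merely illustrative:

```
* Find at most two instances that may overlap by at most 20%,
* refining each pose with 'least_squares_high'.
GenParamNames := ['num_matches','max_overlap','pose_refinement']
GenParamValues := [2,0.2,'least_squares_high']
find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, 0, GenParamNames, GenParamValues, Pose, CovPose, Score)
```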
Parameter
Result
If the parameter values are correct, the operator find_shape_model_3d returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
project_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
get_object_model_3d_params ( : : ObjectModel3DID,
GenParamNames : GenParamValues )
’reference_point’: 3D coordinates of the reference point of the model. The reference point is the center of the
smallest enclosing axis-parallel cuboid (see parameter ’bounding_box1’).
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z).
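As a minimal sketch, the two parameters listed above can be queried as follows (since both return more than one value, they are queried one at a time):

```
* Query the reference point and the smallest enclosing cuboid.
get_object_model_3d_params (ObjectModel3DID, 'reference_point', ReferencePoint)
get_object_model_3d_params (ObjectModel3DID, 'bounding_box1', BoundingBox1)
```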
Parameter
get_shape_model_3d_params ( : : ShapeModel3DID,
GenParamNames : GenParamValues )
’cam_param’: Interior parameters of the camera that is used for the matching.
’ref_rot_x’: Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or
without unit).
’ref_rot_y’: Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or
without unit).
’ref_rot_z’: Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or
without unit).
’order_of_rotation’: Meaning of the rotation values of the reference orientation.
’longitude_min’: Minimum longitude of the model views.
’longitude_max’: Maximum longitude of the model views.
’latitude_min’: Minimum latitude of the model views.
’latitude_max’: Maximum latitude of the model views.
’cam_roll_min’: Minimum camera roll angle of the model views.
’cam_roll_max’: Maximum camera roll angle of the model views.
’dist_min’: Minimum camera-object-distance of the model views.
’dist_max’: Maximum camera-object-distance of the model views.
’min_contrast’: Minimum contrast of the objects in the search images.
’num_levels’: User-specified number of pyramid levels.
’num_levels_max’: Maximum number of used pyramid levels over all model views.
’optimization’: Kind of optimization by reducing the number of model points.
’metric’: Match metric.
’min_face_angle’: Minimum 3D face angle for which 3D object model edges are included in the 3D shape model.
’min_size’: Minimum size of the projected 3D object model edge (in number of pixels) to include the projected
edge in the 3D shape model.
’model_tolerance’: Maximum acceptable tolerance of the projected 3D object model edges (in pixels).
’num_views_per_level’: Number of model views per pyramid level. For each pyramid level, the number of views
that are stored in the 3D shape model is returned. Thus, the number of returned elements corresponds to the
number of used pyramid levels, which can be queried with ’num_levels_max’.
’reference_pose’: Reference position and orientation of the 3D shape model. The returned pose describes the pose
of the internally used reference coordinate system of the 3D shape model with respect to the coordinate
system that is used in the underlying 3D object model.
’reference_point’: 3D coordinates of the reference point of the underlying 3D object model.
’bounding_box1’: Smallest enclosing axis-parallel cuboid of the underlying 3D object model in the following
order: [min_x, min_y, min_z, max_x, max_y, max_z].
A detailed description of the parameters can be looked up with the operator create_shape_model_3d.
It is possible to query the values of several parameters with a single operator call by passing a tuple containing the
names of all desired parameters to GenParamNames. As a result, a tuple of the same length with the corresponding
values is returned in GenParamValues. Note that this is only possible for parameters that return a
single value.
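A minimal sketch of both query modes (the variable names are illustrative):

```
* Several single-valued parameters can be queried with one call.
get_shape_model_3d_params (ShapeModel3DID, ['num_levels_max','dist_min','dist_max'], GenParamValues)
* Multi-valued parameters such as 'reference_pose' are queried individually.
get_shape_model_3d_params (ShapeModel3DID, 'reference_pose', ReferencePose)
```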
Parameter
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; integer
Handle of the 3D shape model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Names of the generic parameters that are to be queried for the 3D shape model.
Default Value : ’num_levels_max’
List of values : GenParamNames ∈ {’cam_param’, ’ref_rot_x’, ’ref_rot_y’, ’ref_rot_z’, ’order_of_rotation’,
’longitude_min’, ’longitude_max’, ’latitude_min’, ’latitude_max’, ’cam_roll_min’, ’cam_roll_max’,
’dist_min’, ’dist_max’, ’min_contrast’, ’num_levels’, ’num_levels_max’, ’optimization’, ’metric’,
’min_face_angle’, ’min_size’, ’model_tolerance’, ’num_views_per_level’, ’reference_pose’,
’reference_point’, ’bounding_box1’}
. GenParamValues (output_control) . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string / integer / real
Values of the generic parameters.
Result
If the parameters are valid, the operator get_shape_model_3d_params returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
get_shape_model_3d_params is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
find_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
Parameter
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject
Contour representation of the model view.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; integer
Handle of the 3D shape model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
Number of elements : CamParam = 8
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .pose-array ; real / integer
3D pose of the 3D shape model in the world coordinate system.
. HiddenSurfaceRemoval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Remove hidden surfaces?
Default Value : ’true’
List of values : HiddenSurfaceRemoval ∈ {’true’, ’false’}
. MinFaceAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Smallest face angle for which the edge is displayed
Default Value : 0.261799
Suggested values : MinFaceAngle ∈ {0.17, 0.26, 0.35}
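A minimal sketch using the parameter order of the table above; CamParam and Pose are assumed to be available, e.g., from a previous call to find_shape_model_3d:

```
* Project the model edges for the given pose; hidden surfaces are
* removed and only edges with a face angle of at least rad(15) are shown.
project_shape_model_3d (ModelContours, ShapeModel3DID, CamParam, Pose, 'true', rad(15))
dev_display (ModelContours)
```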
Result
If the parameters are valid, the operator project_shape_model_3d returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Parallelization Information
project_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, get_shape_model_3d_params,
find_shape_model_3d
Alternatives
project_object_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
• POLYLINE
– Polyface meshes
• 3DFACE
• LINE
• CIRCLE
• ARC
• ELLIPSE
• SOLID
• BLOCK
• INSERT
Two-dimensional linear elements like the DXF elements CIRCLE or ELLIPSE are interpreted as faces even if they
are not extruded. If necessary, they are closed. Two-dimensional linear elements that consist of just two points are
not used because they do not define a face. Thus, elements of the type LINE are only used if they are extruded.
The curved surface of extruded DXF entities of the type CIRCLE, ARC, and ELLIPSE is approximated by planar
faces. The accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’
and ’max_approx_error’. The parameter ’min_num_points’ defines the minimum number of sampling points
that are used for the approximation of the DXF element CIRCLE, ARC, or ELLIPSE. Note that the parameter
’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if
’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-
circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum
deviation of the XLD contour from the ideal circle or ellipse, respectively. The determination of this deviation
is carried out in the units used in the DXF file. For the determination of the accuracy of the approximation both
criteria are evaluated. Then, the criterion that leads to the more accurate approximation is used.
Internally, the following default values are used for the generic parameters:
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
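A hedged sketch of such a call (the file name is illustrative; the two output parameters are assumed to be the handle of the resulting 3D object model and a status value):

```
* Read a DXF file given in millimeters and tighten the approximation
* of curved surfaces compared to the default values.
read_object_model_3d_dxf ('model.dxf', 'mm', ['min_num_points','max_approx_error'], [50,0.1], ObjectModel3DID, DxfStatus)
```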
One possible way to create a suitable DXF file is to create a 3D model of the object with the CAD program
AutoCAD. Ensure that the surface of the object is modelled, not only its edges. Lines that, e.g., define object
edges, will not be used by HALCON, because they do not define the surface of the object. Once the modelling is
completed, you can store the model in DWG format. To convert the DWG file into a DXF file that is suitable for
HALCON’s 3D matching, carry out the following steps:
• Export the 3D CAD model to a 3DS file using the 3dsout command of AutoCAD. This will triangulate the
object’s surface, i.e., the model will only consist of planes. (Users of AutoCAD 2007 or newer versions can
download this command utility from Autodesk’s web site.)
• Open a new empty sheet in AutoCAD.
• Import the 3DS file into this empty sheet with the 3dsin command of AutoCAD.
• Save the object into a DXF R12 file.
Users of other CAD programs should ensure that the surface of the 3D model is triangulated before it is exported
into the DXF file. If the CAD program is not able to carry out the triangulation, it is often possible to save the 3D
model in the proprietary format of the CAD program and to convert it into a suitable DXF file by using a CAD file
format converter that is able to perform the triangulation.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the DXF file
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or unit.
Default Value : ’m’
Suggested values : Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’µm’, 1.0, 0.01, 0.001, ’1.0e-6’, 0.0254, 0.3048,
0.9144}
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {’min_num_points’, ’max_approx_error’}
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference
coordinate system of a 3D shape model and vice versa.
The operator trans_pose_shape_model_3d transforms the pose PoseIn into the pose PoseOut by using
the transformation direction specified in Transformation. In the majority of cases, the operator will be used
to transform a camera pose that is given with respect to the source coordinate system to a camera pose that refers
to the target coordinate system.
The pose can be transformed between two coordinate systems. The first coordinate system is the reference coordi-
nate system of the 3D shape model that is passed in ShapeModel3DID. The origin of the reference coordinate
system lies at the reference point of the underlying 3D object model. The orientation of the reference coordi-
nate system is determined by the reference orientation that was specified when creating the 3D shape model with
create_shape_model_3d.
The second coordinate system is the world coordinate system, i.e., the coordinate system of the 3D object model
that underlies the 3D shape model. This coordinate system is implicitly determined by the coordinates that are
stored in the DXF file that was read by using read_object_model_3d_dxf.
If Transformation is set to ’ref_to_model’, it is assumed that PoseIn refers to the reference coordinate
system of the 3D shape model. The resulting output pose PoseOut in this case refers to the coordinate system of
the 3D object model.
If Transformation is set to ’model_to_ref’, it is assumed that PoseIn refers to the coordinate system of the
3D object model. The resulting output pose PoseOut in this case refers to the reference coordinate system of the
3D shape model.
The relative pose of the two coordinate systems can be queried by passing ’reference_pose’ for GenParamNames
in the operator get_shape_model_3d_params.
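A minimal sketch of both transformation directions (PoseCam is illustrative, e.g., a camera pose obtained with create_cam_pose_look_at_point):

```
* Transform a camera pose that refers to the reference coordinate system
* of the 3D shape model into the coordinate system of the 3D object model.
trans_pose_shape_model_3d (ShapeModel3DID, PoseCam, 'ref_to_model', PoseModel)
* ... and back again.
trans_pose_shape_model_3d (ShapeModel3DID, PoseModel, 'model_to_ref', PoseRef)
```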
Parameter
Parallelization Information
write_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d
Module
3D Metrology
Morphology
11.1 Gray-Values
A rank filtering is calculated according to the following scheme: The indicated mask is put over the image to be
filtered in such a way that the center of the mask touches every pixel once. For each of these pixels, all neighboring
pixels covered by the mask are sorted in ascending order of their gray values. Each sorted sequence of gray values
contains as many entries as the mask has points. From this sequence, the element of the rank specified by
ModePercent (a rank value between 0 and 100, given in percent) is selected and set as the result gray value in
the corresponding result image.
If ModePercent is 0, the operator is equivalent to the gray value opening ( gray_opening). If ModePercent
is 50, the operator results in the median filter applied twice ( median_image). With ModePercent set to
100, dual_rank calculates the gray value closing ( gray_closing). Choosing parameter values inside this
range results in a smooth transition between these operators.
Parameter
read_image(Image,’fabrik’)
dual_rank(Image,ImageOpening,’circle’,10,10,’mirrored’)
disp_image(ImageOpening,WindowHandle)
Complexity
For each pixel: O(√F ∗ 10) with F = area of the structuring element.
Result
If the parameter values are correct the operator dual_rank returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
dual_rank is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, sub_image, regiongrowing
Alternatives
rank_image, gray_closing, gray_opening, median_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect, sigma_image
References
W. Eckstein, O. Munkelt: “Extracting Objects from Digital Terrain Model”, Remote Sensing and Reconstruction for
Three-dimensional Objects and Scenes, SPIE Symposium on Optical Science, Engineering, and Instrumentation,
July 1995, San Diego.
Module
Foundation
the maximum gray value of the structuring element. For the generation of arbitrary structuring elements, see
read_gray_se.
Parameter
bothat(i, s) = (i • s) − i,
i.e., the difference of the closing of the image with s and the image (see gray_closing). For the generation of
structuring elements, see read_gray_se.
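A minimal sketch (the size of the structuring element and the threshold are illustrative; the structuring element is generated with gen_disc_se, a possible predecessor listed below):

```
* Extract small dark structures: gray_bothat with a flat disc-shaped
* structuring element, followed by a threshold on the difference image.
gen_disc_se (SE, 'byte', 5, 5, 0)
gray_bothat (Image, SE, ImageBotHat)
threshold (ImageBotHat, Region, 30, 255)
```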
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageBotHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Bottom hat image.
Result
gray_bothat returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_bothat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se, gen_disc_se
Possible Successors
threshold
Alternatives
gray_closing
See also
gray_tophat, top_hat, gray_erosion_rect, sub_image
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation and gray_erosion).
For the generation of structuring elements, see read_gray_se.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Gray-closed image.
Result
gray_closing returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_closing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Alternatives
dual_rank
See also
closing, gray_dilation, gray_erosion
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation_rect and
gray_erosion_rect).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Input image.
. ImageClosing (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Gray-closed image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_closing_rect returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the be-
havior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
gray_closing_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_closing, gray_closing_shape
See also
closing_rectangle1, gray_dilation_rect, gray_erosion_rect
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation_shape and
gray_erosion_shape).
Attention
Note that gray_closing_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
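A minimal sketch:

```
* Close small dark gaps with an octagonal 11x11 mask; integer mask
* sizes are processed considerably faster than float sizes.
gray_closing_shape (Image, ImageClosing, 11, 11, 'octagon')
```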
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. ImageClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Gray-closed image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; real / integer
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight ≤ 511.0
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; real / integer
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth ≤ 511.0
. MaskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Shape of the mask.
Default Value : ’octagon’
List of values : MaskShape ∈ {’rectangle’, ’rhombus’, ’octagon’}
Result
gray_closing_shape returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
gray_closing_shape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_closing
See also
gray_dilation_shape, gray_erosion_shape, closing
Module
Foundation
Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see read_gray_se).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Gray-dilated image.
Result
gray_dilation returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_dilation is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Possible Successors
sub_image, gray_erosion
Alternatives
gray_dilation_rect
See also
gray_opening, gray_closing, dilation1, gray_skeleton
Module
Foundation
Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see read_gray_se).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Gray-eroded image.
Result
gray_erosion returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_erosion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Possible Successors
gray_dilation, sub_image
Alternatives
gray_erosion_rect
See also
gray_opening, gray_closing, erosion1, gray_skeleton
Module
Foundation
gray_erosion_rect calculates the minimum gray value of the input image Image within a rectangular mask
of size (MaskHeight, MaskWidth) for each image point. The resulting image is returned in ImageMin. If the
parameters MaskHeight or MaskWidth are even, they are changed to the next larger odd value. At the border
of the image the gray values are mirrored.
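A minimal sketch:

```
* Local minimum within a mask of height 5 and width 11; even values
* for MaskHeight or MaskWidth would be changed to the next odd value.
gray_erosion_rect (Image, ImageMin, 5, 11)
```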
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the minimum gray values are to be calculated.
. ImageMin (output_object) . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image containing the minimum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_erosion_rect returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the be-
havior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
gray_erosion_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
gray_dilation_rect
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Image for which the minimum gray values are to be calculated.
. ImageMin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Image containing the minimum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; real / integer
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; real / integer
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth
. MaskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Shape of the mask.
Default Value : ’octagon’
List of values : MaskShape ∈ {’rectangle’, ’rhombus’, ’octagon’}
Result
gray_erosion_shape returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
gray_erosion_shape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_erosion, gray_erosion_rect
See also
gray_opening_shape, gray_closing_shape, gray_skeleton
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion and gray_dilation).
For the generation of structuring elements, see read_gray_se.
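A minimal sketch (the flat disc-shaped structuring element generated with gen_disc_se is illustrative):

```
* Suppress small bright structures with a flat disc-shaped
* structuring element.
gen_disc_se (SE, 'byte', 5, 5, 0)
gray_opening (Image, SE, ImageOpening)
```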
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Gray-opened image.
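A minimal HDevelop call might look as follows; the file name passed to read_gray_se is a placeholder for a structuring element file created beforehand:

read_gray_se (SE, 'my_se_file')
gray_opening (Image, SE, ImageOpening)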
Result
gray_opening returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_opening is reentrant and automatically parallelized (on tuple level).
HALCON 8.0.2
686 CHAPTER 11. MORPHOLOGY
Possible Predecessors
read_gray_se
Alternatives
dual_rank
See also
opening, gray_dilation, gray_erosion
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion_rect and
gray_dilation_rect).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Input image.
. ImageOpening (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Gray-opened image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
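For example, an opening with the default 11 × 11 rectangular mask reads:

gray_opening_rect (Image, ImageOpening, 11, 11)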
Result
gray_opening_rect returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the be-
havior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
gray_opening_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_opening, gray_opening_shape
See also
opening_rectangle1, gray_dilation_rect, gray_erosion_rect
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion_shape and
gray_dilation_shape).
Attention
Note that gray_opening_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter
See also
gray_dilation_shape, gray_erosion_shape, opening
Module
Foundation
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the gray value range is to be calculated.
. ImageResult (output_object) . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image containing the gray value range.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
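A typical HDevelop sequence; the threshold value in the second line is an arbitrary illustrative choice:

gray_range_rect (Image, ImageResult, 11, 11)
threshold (ImageResult, RegionEdges, 30, 255)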
Result
gray_range_rect returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_range_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_dilation_rect, gray_erosion_rect, sub_image
Module
Foundation
tophat(i, s) = i − (i ◦ s),
i.e., the difference of the image and its opening with s (see gray_opening). For the generation of structuring
elements, see read_gray_se.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageTopHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Top hat image.
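In line with the predecessor and successor operators listed for this entry, a typical sequence might be (the structuring element file name and the threshold value are placeholders):

read_gray_se (SE, 'my_se_file')
gray_tophat (Image, SE, ImageTopHat)
threshold (ImageTopHat, RegionBright, 30, 255)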
Result
gray_tophat returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_tophat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se, gen_disc_se
Possible Successors
threshold
Alternatives
gray_opening
See also
gray_bothat, top_hat, gray_erosion_rect, sub_image
Module
Foundation
read_gray_se ( : SE : FileName : )
Alternatives
gen_disc_se
See also
read_image, paint_region, paint_gray, crop_part
Module
Foundation
11.2 Region
read_image (Image,’/bilder/name.ext’)
threshold (Image,Regions,128,255)
gen_circle (Circle,0,0,16)
bottom_hat (Regions,Circle,RegionBottomHat)
Result
bottom_hat returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
See also
top_hat, morph_hat, gray_bothat, opening
Module
Foundation
#include "HalconCpp.h"

int main()
{
  HWindow w;
  HRegion circ1 = HRegion::GenCircle (20, 10, 10.5);
  circ1.Display (w);
  w.Click ();
  return(0);
}
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is
O(3 · √F) .
Result
boundary returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
Close a region.
A closing operation is defined as a dilation followed by a Minkowski subtraction. By applying closing
to a region, larger structures remain mostly intact, while small gaps between adjacent regions and holes smaller
than StructElement are closed, and the regions’ boundaries are smoothed. All closing variants share the
property that separate regions are not merged, but remain separate objects. The position of StructElement is
meaningless, since a closing operation is invariant with respect to the choice of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
closing is applied to each input region separately. If gaps between different regions are to be closed, union1
or union2 has to be called first.
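For example, to close small gaps between neighboring components, the regions are first merged with union1 and then closed with a circular structuring element (the threshold limits and the radius 3.5 are arbitrary illustrative values):

threshold (Image, Region, 128, 255)
union1 (Region, RegionUnion)
gen_circle (StructElement, 0, 0, 3.5)
closing (RegionUnion, StructElement, RegionClosing)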
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  cout << "Reproduction of ’closing ()’ using " << endl;
  cout << "’dilation()’ and ’minkowski_sub1()’" << endl;
  HByteImage img("monkey");
  HWindow w;
  return(0);
}
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(2 · √F1 · √F2) .
Result
closing returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
closing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
closing_circle, closing_golay
See also
dilation1, erosion1, opening, minkowski_sub1
Module
Foundation
Parameter
Complexity
Let F 1 be the area of the input region. Then the runtime complexity for one region is:
O(4 · √F1 · Radius) .
Result
closing_circle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
closing_golay is defined as a Minkowski addition followed by a Minkowski subtraction. First the Minkowski
addition of the input region (Region) with the structuring element from the Golay alphabet defined by
GolayElement and Rotation is computed. Then the Minkowski subtraction of the result and the structuring
element rotated by 180◦ is performed.
The following structuring elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used, and whether the fore-
ground (even) or background version (odd) of the selected element should be used. The Golay elements, together
with all possible rotations, are described with the operator golay_elements.
closing_golay serves to close holes smaller than the structuring element, and to smooth regions’ boundaries.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.
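A minimal HDevelop call; the parameter order follows the description above (input region, output region, GolayElement, Rotation), and the values 'h' and 0 are chosen for illustration:

closing_golay (Region, RegionClosing, 'h', 0)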
Parameter
Result
closing_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
closing_rectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Dilate a region.
dilation1 dilates the input regions with a structuring element. By applying dilation1 to a region, its
boundary gets smoothed. In the process, the area of the region is enlarged. Furthermore, disconnected regions
may be merged. Such regions, however, remain logically distinct regions. The dilation is a set-theoretic region
operation. It uses the union operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the point m. Let t_{v_m}(R) denote the translation of a
region R by a vector v. Then

dilation1(R, M) := ∪_{m ∈ M} t_{-v_m}(R)
For each point m in M a translation of the region R is performed. The union of all these translations is the dilation
of R with M . dilation1 is similar to the operator minkowski_add1, the difference is that in dilation1
the structuring element is mirrored at the origin. The position of StructElement is meaningless, since the
displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A dilation always results in enlarged regions. Closely spaced regions which may touch or overlap as a result of
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator union1 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Dilated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
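Based on the parameter list above, a typical call dilates each region once with a small circular structuring element (the radius 3.5 is an illustrative value):

gen_circle (StructElement, 0, 0, 3.5)
dilation1 (Region, StructElement, RegionDilation, 1)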
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .
Result
dilation1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .
Result
dilation2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  cout << "Reproduction of ’dilation_circle ()’" << endl;
  cout << "First = original image " << endl;
  cout << "Blue = after dilation " << endl;
  cout << "Red = before dilation " << endl;
  HByteImage img("monkey");
  HWindow w;
  return(0);
}
Complexity
Let F 1 be the area of an input region. Then the runtime complexity for one region is:
O(2 · Radius · √F1) .
Result
dilation_circle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
dilation_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
dilation1, dilation2, dilation_seq
See also
erosion_golay, opening_golay, closing_golay, hit_or_miss_golay, thinning_golay,
thickening_golay, golay_elements
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  HByteImage img("monkey");
  HWindow w;
  return(0);
}
Complexity
Let F 1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one
region is:
O(√F1 · ld(H)) .
Result
dilation_rectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of
empty or no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
In order to compute the skeleton of a region, usually the elements ’l’ and ’m’ are used. Only the “foreground
elements” (even rotation numbers) are used. The elements ’i’ and ’e’ result in unchanged output regions. The
elements ’l’, ’m’ and ’f2’ are identical for the foreground. The Golay elements, together with all possible rotations,
are described with the operator golay_elements.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Dilated regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Structuring element from the Golay alphabet.
Default Value : ’h’
List of values : GolayElement ∈ {’l’, ’d’, ’c’, ’f’, ’h’, ’k’}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(Iterations · 20 · √F) .
Result
dilation_seq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Erode a region.
erosion1 erodes the input regions with a structuring element. By applying erosion1 to a region, its boundary
gets smoothed. In the process, the area of the region is reduced. Furthermore, connected regions may be split.
Such regions, however, remain logically one region. The erosion is a set-theoretic region operation. It uses the
intersection operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the point m. Let t_{v_m}(R) denote the translation of a
region R by a vector v. Then

erosion1(R, M) := ∩_{m ∈ M} t_{-v_m}(R).
For each point m in M a translation of the region R is performed. The intersection of all these translations is
the erosion of R with M . erosion1 is similar to the operator minkowski_sub1, the difference is that in
erosion1 the structuring element is mirrored at the origin. The position of StructElement is meaningless,
since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Eroded regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
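A typical call based on the parameter list above, using a small rectangular structuring element (the rectangle coordinates are illustrative):

gen_rectangle1 (StructElement, 0, 0, 10, 10)
erosion1 (Region, StructElement, RegionErosion, 1)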
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .
Result
erosion1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
See also
transpose_region
Module
Foundation
Result
erosion2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  cout << "Simulation of ’erosion_circle ()’" << endl;
  cout << "First = original image " << endl;
  cout << "Red = after segmentation " << endl;
  HByteImage img("monkey");
  HWindow w;
  return(0);
}
Complexity
Let F 1 be the area of an input region. Then the runtime complexity for one region is:
O(2 · Radius · √F1) .
Result
erosion_circle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
output region. This means that the intersection of all translations of the structuring element within the region is
computed.
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.
Parameter
Result
erosion_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
erosion_rectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Alternatives
erosion1, minkowski_sub1
See also
gen_rectangle1
Module
Foundation
Result
erosion_seq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Alternatives
erosion_golay, erosion1, erosion2
See also
dilation_seq, hit_or_miss_seq, thinning_seq
Module
Foundation
P = ∪_{i=1}^{n} (R ◦ M_i)

Q = ∩_{i=1}^{n} (P • M_i)
Regions larger than the structuring elements are preserved, while small gaps are closed.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElements (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Structuring elements.
. RegionFitted (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Fitted regions.
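Following the predecessor operators listed for this entry, a typical call might look as follows; the assumption that gen_struct_elements takes the element type and a reference point should be checked against that operator's own entry:

gen_struct_elements (StructElements, 'noise', 16, 16)
fitting (Region, StructElements, RegionFitted)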
Result
fitting returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
fitting is reentrant and processed without parallelization.
Possible Predecessors
gen_struct_elements, gen_region_points
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
opening, closing, connection, select_shape
Module
Foundation
gen_struct_elements serves to generate eight structuring elements normally used in the operator
fitting. The default value ’noise’ of the parameter Type generates elements especially suited for the elim-
ination of noise.
[The eight 3 × 3 structuring element masks M1–M8 for Type = ’noise’ are shown as pixel diagrams in the original
manual; the diagrams cannot be reproduced faithfully here.]
Parameter
returned, while in StructElement2 the structuring element for the background is returned. Row and Column
determine the reference point of the structuring element.
The rotations are numbered from 0 to 15. This does not mean, however, that there are 16 different rotations: Even
values denote rotations of the foreground elements, while odd values denote rotations of the background elements.
For golay_elements, only even values are accepted; they determine the structuring element returned in
StructElement1. The next larger odd value is used for StructElement2. There are no rotations for
the Golay elements ’h’ and ’i’. Therefore, only the values 0 and 1 are possible as “rotations” (and hence only 0 for
golay_elements). The element ’e’ has only four possible rotations, and hence the rotation must be between 0
and 7 (for golay_elements the values 0, 2, 4, or 6 must be used).
The tables below show the elements of the Golay alphabet with all possible rotations. The characters used have
the following meaning:
• Foreground pixel
◦ Background pixel
· Don’t care pixel
The names of the elements and their rotation numbers are displayed below the respective element. The elements
with even numbers contain the foreground pixels, while the elements with odd numbers contain the background
pixels.
[The original manual prints pixel diagrams of every Golay element in all of its rotations: h(0,1), i(0,1),
e(0,1) to e(6,7), l(0,1) to l(14,15), m(0,1) to m(14,15), d(0,1) to d(14,15), f(0,1) to f(14,15),
f2(0,1) to f2(14,15), k(0,1) to k(14,15), and c(0,1) to c(14,15). The diagrams cannot be reproduced
faithfully here.]
Parameter
hit_or_miss performs the hit-or-miss-transformation. First, an erosion with the structuring element
StructElement1 is done on the input region Region. Then an erosion with the structuring element
StructElement2 is performed on the complement of the input region. The intersection of the two resulting
regions is the result RegionHitMiss of hit_or_miss.
The hit-or-miss-transformation selects precisely the points for which the conditions given by the structuring ele-
ments StructElement1 and StructElement2 are fulfilled. StructElement1 determines the condition
for the foreground pixels, while StructElement2 determines the condition for the background pixels. In order
to obtain sensible results, StructElement1 and StructElement2 must fit like key and lock. In any case,
StructElement1 and StructElement2 must be disjoint. Row and Column determine the reference point
of the structuring elements.
Structuring elements (StructElement1, StructElement2) can be generated by calling operators like
gen_struct_elements, gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Erosion mask for the input regions.
. StructElement2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Erosion mask for the complements of the input regions.
. RegionHitMiss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Result of the hit-or-miss operation.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
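For example, with hand-made structuring elements around the reference point (16, 16) (the point coordinates are illustrative, and StructElement1 and StructElement2 must not overlap):

gen_region_points (StructElement1, [16,16,16], [15,16,17])
gen_region_points (StructElement2, [15,17], [16,16])
hit_or_miss (Region, StructElement1, StructElement2, RegionHitMiss, 16, 16)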
Complexity
Let F be the area of an input region, F 1 the area of the structuring element 1, and F 2 the area of the structuring
element 2. Then the runtime complexity for one object is:
O(√F · (√F1 + √F2)) .
Result
hit_or_miss returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Alternatives
hit_or_miss_golay, hit_or_miss_seq, erosion2, dilation2
See also
thinning, thickening, gen_region_points, gen_region_polygon_filled
Module
Foundation
Result
hit_or_miss_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
hit_or_miss_seq, hit_or_miss
See also
erosion_golay, dilation_golay, opening_golay, closing_golay, thinning_golay,
thickening_golay, golay_elements
Module
Foundation
Result
hit_or_miss_seq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
minkowski_add1 ( Region, StructElement : RegionMinkAdd : Iterations : )
For each point m in M a translation of the region R is performed. The union of all these translations is the
Minkowski addition of R with M. minkowski_add1 is similar to the operator dilation1; the difference
is that in dilation1 the structuring element is mirrored at the origin. The position of StructElement is
meaningless, since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
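The union-of-translations definition can be sketched on plain point sets. The following Python model is purely illustrative (it is not HALCON's implementation); regions and structuring elements are represented as sets of (row, column) tuples:

```python
def minkowski_add(region, struct_elem):
    """Union of all translations of `region` by the displacement vectors
    from the center of gravity of `struct_elem` to its points."""
    if not struct_elem:
        return set()  # empty structuring element -> empty region
    n = len(struct_elem)
    # Rounded center of gravity of the structuring element.
    gr = round(sum(r for r, c in struct_elem) / n)
    gc = round(sum(c for r, c in struct_elem) / n)
    result = set()
    for mr, mc in struct_elem:
        dr, dc = mr - gr, mc - gc  # displacement vector for this point
        result |= {(r + dr, c + dc) for r, c in region}
    return result
```

For example, a single pixel dilated with a horizontal three-pixel element grows into a horizontal three-pixel line.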
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A Minkowski addition always results in enlarged regions. Closely spaced regions which may touch or overlap as
a result of the dilation are still treated as two separate regions. If the desired behavior is to merge them into one
region, the operator union1 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkAdd (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Dilated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · Iterations) .
Result
minkowski_add1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
minkowski_add2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
minkowski_sub1 ( Region, StructElement : RegionMinkSub : Iterations : )
Erode a region.
minkowski_sub1 computes the Minkowski subtraction of the input regions with a structuring element. By
applying minkowski_sub1 to a region, its boundary gets smoothed. In the process, the area of the region is
reduced. Furthermore, connected regions may be split. Such regions, however, remain logically one region. The
Minkowski subtraction is a set-theoretic region operation. It uses the intersection operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_{v_m}(R) denote the translation of a
region R by a vector v. Then

minkowski_sub1(R, M) := ⋂_{m ∈ M} t_{v_m}(R)
For each point m in M a translation of the region R is performed. The intersection of all these translations is the
Minkowski subtraction of R with M. minkowski_sub1 is similar to the operator erosion1; the difference
is that in erosion1 the structuring element is mirrored at the origin. The position of StructElement is
meaningless, since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
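The intersection-of-translations definition above can be sketched analogously (an illustrative Python model on point sets, not HALCON's implementation; v_m is the center of gravity of M minus m):

```python
def minkowski_sub(region, struct_elem):
    """Intersection of all translations of `region` by the displacement
    vectors v_m = (center of gravity of M) - m, for each m in M."""
    if not struct_elem:
        # The definition yields the maximal region here; a finite point
        # set cannot represent it, so this sketch rejects the case.
        raise ValueError("empty structuring element")
    n = len(struct_elem)
    gr = round(sum(r for r, c in struct_elem) / n)
    gc = round(sum(c for r, c in struct_elem) / n)
    result = None
    for mr, mc in struct_elem:
        translated = {(r + gr - mr, c + gc - mc) for r, c in region}
        result = translated if result is None else result & translated
    return result
```

Eroding a 3 × 3 block with a horizontal three-pixel element leaves only the middle column, illustrating how the boundary is smoothed and the area reduced.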
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkSub (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Eroded regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · Iterations) .
Result
minkowski_sub1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
minkowski_sub2 computes the Minkowski subtraction of the input regions with a structuring element
(StructElement) having the reference point (Row,Column). minkowski_sub2 has a similar effect to
minkowski_sub1; the difference is that the reference point of the structuring element can be chosen arbitrarily.
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
A maximum region is generated in case of an empty structuring element.
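Conceptually, the only change with respect to minkowski_sub1 is that the displacement vectors are taken relative to the chosen reference point (Row, Column) instead of the center of gravity (illustrative point-set sketch, not HALCON's implementation):

```python
def minkowski_sub2(region, struct_elem, row, col):
    """Minkowski subtraction with an arbitrary reference point
    (row, col): intersect the translations of `region` by the vectors
    (row, col) - m for each point m of the structuring element."""
    result = None
    for mr, mc in struct_elem:
        translated = {(r + row - mr, c + col - mc) for r, c in region}
        result = translated if result is None else result & translated
    return result
```

Moving the reference point by one column shifts the result by one column, which is exactly the effect of choosing a non-centered reference point.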
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
Result
minkowski_sub2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  cout << "Reproduction of ’dilation_circle()’" << endl;
  cout << "First = original image" << endl;
  cout << "Red   = after segmentation" << endl;
  cout << "Blue  = after erosion" << endl;
  HByteImage img("monkey");
  HWindow w;
  /* ... */
  return 0;
}
Result
morph_hat returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
skeleton, reduce_domain, select_shape, area_center, connection
Alternatives
skeleton, thinning
See also
thinning_seq, morph_skiz
Module
Foundation
Thinning of a region.
morph_skiz first performs a sequential thinning ( thinning_seq) of the input region with the element ’l’ of
the Golay alphabet. The number of iterations is determined by the parameter Iterations1. Then a sequential
thinning of the resulting region with the element ’e’ of the Golay alphabet is carried out. The number of iterations
for this step is determined by the parameter Iterations2. The skiz operation serves to compute a kind of
skeleton of the input regions, and to prune the branches of the resulting skeleton. If the skiz operation is applied to
the complement of the region, the region and the resulting skeleton are separated.
If very large values or ’maximal’ are passed for Iterations1 or Iterations2, the processing stops if no
more changes occur.
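The ’maximal’ stopping rule (iterate until nothing changes) can be sketched with a generic fix-point loop. Here `step` is a hypothetical placeholder for one thinning pass, not a HALCON operator:

```python
def apply_maximal(step, region, max_iterations=None):
    """Apply `step` repeatedly until the region no longer changes
    (the 'maximal' behavior) or until max_iterations is reached."""
    i = 0
    while max_iterations is None or i < max_iterations:
        new_region = step(region)
        if new_region == region:
            break  # fix point reached: no more changes occur
        region = new_region
        i += 1
    return region
```

Passing a finite iteration count reproduces the behavior of Iterations1/Iterations2 with a numeric value; omitting it reproduces ’maximal’.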
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be thinned.
. RegionSkiz (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Result of the skiz operator.
. Iterations1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer / string
Number of iterations for the sequential thinning with the element ’l’ of the Golay alphabet.
Default Value : 100
Suggested values : Iterations1 ∈ {’maximal’, 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 40, 50, 70, 100, 150, 200,
300, 400}
Typical range of values : 0 ≤ Iterations1 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer / string
Number of iterations for the sequential thinning with the element ’e’ of the Golay alphabet.
Default Value : 1
Suggested values : Iterations2 ∈ {’maximal’, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 0 ≤ Iterations2 (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is:

O((Iterations1 + Iterations2) · 3 · √F) .
Result
morph_skiz returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Open a region.
An opening operation is defined as an erosion followed by a Minkowski addition. By applying opening to a
region, larger structures remain mostly intact, while small structures like lines or points are eliminated. In contrast,
a closing operation results in small gaps being retained or filled up (see closing).
opening serves to eliminate small regions (smaller than StructElement) and to smooth the boundaries of a
region. The position of StructElement is meaningless, since an opening operation is invariant with respect to
the choice of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
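On the point-set model used above, opening is simply the Minkowski subtraction (the erosion step) followed by the Minkowski addition with the same element (illustrative sketch, not HALCON's implementation):

```python
def opening(region, elem):
    """Opening sketch: intersection of translations (erosion-like step),
    then union of translations with the same, mirrored displacements."""
    if not elem:
        return set()
    n = len(elem)
    gr = round(sum(r for r, c in elem) / n)
    gc = round(sum(c for r, c in elem) / n)
    # Minkowski subtraction: intersection of translated regions.
    eroded = None
    for mr, mc in elem:
        t = {(r + gr - mr, c + gc - mc) for r, c in region}
        eroded = t if eroded is None else eroded & t
    # Minkowski addition: union of translated (eroded) regions.
    opened = set()
    for mr, mc in elem:
        opened |= {(r + mr - gr, c + mc - gc) for r, c in eroded}
    return opened
```

A 3 × 3 block survives the opening unchanged, while an isolated pixel smaller than the structuring element is eliminated, matching the behavior described above.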
Parameter
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(2 · √F1 · √F2) .
Result
opening returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Complexity
Let F1 be the area of the input region. Then the runtime complexity for one region is:

O(4 · √F1 · Radius) .
Result
opening_circle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be opened.
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Opened regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Structuring element from the Golay alphabet.
Default Value : ’h’
List of values : GolayElement ∈ {’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(6 · √F) .
Result
opening_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
opening_rectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
/* Simulation of opening_seg */
erosion1 (Region, StructElement, H1, 1)
connection (H1, H2)
dilation1 (H2, StructElement, RegionOpening, 1)
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · √F1) .
Result
opening_seg returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
Result
pruning returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element for the foreground.
. StructElement2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element for the background.
. RegionThick (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Result of the thickening operator.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 2, 4, 8, 16, 32, 128}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 2, 4, 8, 16, 32, 128}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50, 70, 100, 200, 400}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of an input region, F1 the area of structuring element 1, and F2 the area of structuring
element 2. Then the runtime complexity for one object is:

O(Iterations · √F · (√F1 + √F2)) .
Result
thickening returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Module
Foundation
Add the result of a hit-or-miss operation to a region (using a Golay structuring element).
thickening_golay performs a thickening of the input regions using morphological operations and structur-
ing elements from the Golay alphabet. The operator first applies a hit-or-miss-transformation to Region (cf.
hit_or_miss_golay), and then adds the detected points to the input region. The following structuring ele-
ments are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used. The Golay elements,
together with all possible rotations, are described with the operator golay_elements.
Attention
Not all values of Rotation are valid for any Golay element.
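The ’hit-or-miss, then add the detected points’ scheme can be sketched on point sets. Here `fg` and `bg` stand for the foreground and background structuring elements, given relative to the reference point, and `candidates` bounds the search; all of these names are illustrative, not the HALCON API:

```python
def hit_or_miss_points(region, fg, bg, candidates):
    """A candidate reference position p is a hit iff fg translated to p
    lies inside the region and bg translated to p lies entirely in the
    complement of the region."""
    hits = set()
    for pr, pc in candidates:
        if all((pr + r, pc + c) in region for r, c in fg) and \
           all((pr + r, pc + c) not in region for r, c in bg):
            hits.add((pr, pc))
    return hits

def thickening(region, fg, bg, candidates):
    """Thickening sketch: add the hit-or-miss result to the region."""
    return region | hit_or_miss_points(region, fg, bg, candidates)
```

With fg = {(0, -1)} and bg = {(0, 0)}, the hit-or-miss detects background pixels whose left neighbor is foreground, so the thickening extends a horizontal line by one pixel to the right; thinning correspondingly removes the detected points from the region instead of adding them.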
Parameter
Result
thickening_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
thickening_seq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
thinning returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thinning_golay, thinning_seq
See also
hit_or_miss
Module
Foundation
Remove the result of a hit-or-miss operation from a region (using a Golay structuring element).
thinning_golay performs a thinning of the input regions using morphological operations and structuring
elements from the Golay alphabet. The operator first applies a hit-or-miss-transformation to Region (cf.
hit_or_miss_golay), and then removes the detected points from the input region. The following structuring
elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used. The Golay elements,
together with all possible rotations, are described with the operator golay_elements.
Attention
Not all values of Rotation are valid for any Golay element.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Result of the thinning operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Structuring element from the Golay alphabet.
Default Value : ’h’
List of values : GolayElement ∈ {’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(6 · √F) .
Result
thinning_golay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thinning_seq, thinning
See also
erosion_golay, hit_or_miss_golay
Module
Foundation
’l’ Skeleton, similar to skeleton. This structuring element is also used in morph_skiz.
’m’ A skeleton with many “hairs” and multiple (parallel) branches.
’d’ A skeleton without multiple branches, but with many gaps, similar to morph_skeleton.
’c’ Uniform erosion of the region.
’e’ One pixel wide lines are shortened. This structuring element is also used in morph_skiz.
’i’ Isolated points are removed. (Only Iterations = 1 is useful.)
’f’ Y-junctions are eliminated. (Only Iterations = 1 is useful.)
’f2’ One pixel long branches and corners are removed. (Only Iterations = 1 is useful.)
’h’ Generates a kind of inner boundary that is thicker than the result of boundary. (Only
Iterations = 1 is useful.)
’k’ Junction points are eliminated, but also new ones are generated.
The Golay elements, together with all possible rotations, are described with the operator golay_elements.
Parameter
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(Iterations · 6 · √F) .
Result
thinning_seq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
OCR
12.1 Hyperboxes
close_all_ocrs ( : : : )
close_ocr ( : : OcrHandle : )
Parallelization Information
close_ocr is reentrant and processed without parallelization.
Possible Predecessors
write_ocr_trainf
Possible Successors
read_ocr
Module
OCR/OCV
’moments_region_2nd_rel_invar’ Normed 2nd relative geometric moments of the region. See also
moments_region_2nd_rel_invar.
’moments_region_3rd_invar’ Normed 3rd geometric moments of the region. See also
moments_region_3rd_invar.
’moments_central’ Normed central geometric moments of the region. See also moments_region_central.
’phi’ Sine and cosine of the orientation (angle) of the character.
’num_connect’ Number of connected components.
’num_holes’ Number of holes.
’projection_horizontal’ Horizontal projection of the gray values.
’projection_horizontal_invar’ Horizontal projection of the gray values, which are automatically scaled to maximum
range.
’projection_vertical’ Vertical projection of the gray values.
’projection_vertical_invar’ Vertical projection of the gray values, which are automatically scaled to maximum range.
’cooc’ Values of the binary cooccurrence matrix.
’moments_gray_plane’ Normed gray value moments and the angles of the gray value plane.
’num_runs’ Number of chords in the region normed to the area.
’chord_histo’ Frequency of the chords per row.
’pixel’ Gray values of the character.
’pixel_invar’ Gray values of the character with automatic maximal scaling of the gray values.
’pixel_binary’ Region of the character as a binary image zoomed to a size of WidthPattern ×
HeightPattern.
’gradient_8dir’ Gradients are computed on the character image. The gradient directions are discretized into 8
directions. The amplitude image is decomposed into 8 channels according to these discretized directions. 25
samples on a 5 × 5 grid are extracted from each channel. These samples are used as features.
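The direction decomposition behind ’gradient_8dir’ can be sketched as follows. This simplified Python model only accumulates gradient amplitude per discretized direction; the actual HALCON feature additionally samples each of the 8 channels on a 5 × 5 grid:

```python
import math

def gradient_8dir_features(img):
    """Central-difference gradients, direction discretized into 8 bins
    of 45 degrees each, amplitude accumulated per bin.
    `img` is a list of rows of gray values."""
    h, w = len(img), len(img[0])
    bins = [0.0] * 8
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gr = img[r + 1][c] - img[r - 1][c]  # row gradient
            gc = img[r][c + 1] - img[r][c - 1]  # column gradient
            amp = math.hypot(gr, gc)
            if amp == 0:
                continue
            angle = math.atan2(gr, gc) % (2 * math.pi)
            bins[int(angle / (math.pi / 4)) % 8] += amp
    return bins
```

A purely vertical edge, for example, contributes all of its amplitude to a single direction bin.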
Parameter
Classify characters.
The operator do_ocr_multi assigns a class to each character in Character. For gray value features, the
gray values from the surrounding rectangles of the regions are used. The gray values are taken from the
parameter Image. For each character the corresponding class will be returned in Class and a confidence value
will be returned in Confidence. The confidence value indicates the similarity between the input pattern and the
assigned character.
Parameter
See also
write_ocr
Module
OCR/OCV
Parameter
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; integer
ID of the OCR classifier.
. WidthPattern (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the scaled characters.
. HeightPattern (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the scaled characters.
. Interpolation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Interpolation mode for scaling the characters.
. WidthMaxChar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the largest trained character.
. HeightMaxChar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the largest trained character.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Used features.
. Characters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
All characters of the set.
Result
The operator info_ocr_class_box always returns 2 (H_MSG_TRUE).
Parallelization Information
info_ocr_class_box is reentrant and processed without parallelization.
Possible Predecessors
read_ocr, create_ocr_class_box
Possible Successors
write_ocr
Module
OCR/OCV
Module
OCR/OCV
Possible Successors
do_ocr_multi, do_ocr_single, traind_ocr_class_box, trainf_ocr_class_box
See also
write_ocr, do_ocr_multi, traind_ocr_class_box, trainf_ocr_class_box
Module
OCR/OCV
12.2 Lexica
clear_all_lexica ( : : : )
clear_lexicon ( : : LexiconHandle : )
Clear a lexicon.
Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Unique name for the new lexicon.
Default Value : ’lex1’
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of a text file containing words for the new lexicon.
Default Value : ’words.txt’
. LexiconHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; integer
Handle of the lexicon.
Parallelization Information
import_lexicon is processed completely exclusively without parallelization.
Possible Successors
do_ocr_word_mlp, do_ocr_word_svm
Alternatives
create_lexicon
See also
lookup_lexicon, suggest_lexicon
Module
OCR/OCV
12.3 Neural-Nets
clear_all_ocr_class_mlp ( : : : )
Attention
clear_all_ocr_class_mlp exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. clear_all_ocr_class_mlp must not be used in any application.
Result
clear_all_ocr_class_mlp always returns 2 (H_MSG_TRUE).
Parallelization Information
clear_all_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
do_ocr_single_class_mlp, evaluate_class_mlp
Alternatives
clear_ocr_class_mlp
See also
create_ocr_class_mlp, read_ocr_class_mlp, write_ocr_class_mlp,
trainf_ocr_class_mlp
Module
OCR/OCV
clear_ocr_class_mlp ( : : OCRHandle : )
NumHidden. The number of output variables of the MLP (NumOutput in create_class_mlp) is deter-
mined from the names of the characters to be used in the OCR, which are passed in Characters. As described
with create_class_mlp, the parameters Preprocessing and NumComponents can be used to specify
a preprocessing of the data (i.e., the feature vectors). The OCR already approximately normalizes the features.
Hence, Preprocessing can typically be set to ’none’. The parameter RandSeed has the same meaning as in
create_class_mlp.
The features to be used for the classification are determined by Features. Features can contain a tuple
of several feature names. Each of these feature names results in one or more features to be calculated for the
classifier. Some of the feature names compute gray value features (e.g., ’pixel_invar’). Because a classifier requires
a constant number of features (input variables), a character to be classified is transformed to a standard size,
which is determined by WidthCharacter and HeightCharacter. The interpolation to be used for the
transformation is determined by Interpolation. It has the same meaning as in affine_trans_image.
The interpolation should be chosen such that no aliasing effects occur in the transformation. For most applications,
Interpolation = ’constant’ should be used. Note that the size of the transformed character should not be chosen
too large, because the generalization properties of the classifier may deteriorate for large sizes. In particular, with
gray value features, large sizes cause small segmentation errors to have a large influence on the computed features:
segmentation errors change the smallest enclosing rectangle of the regions, so the character is zoomed differently
than the characters in the training set. In most applications, sizes between 6 × 8 and 10 × 14 should be used.
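The size normalization described above, which gives the classifier a constant number of input features, can be sketched as a nearest-neighbor zoom (illustrative Python; HALCON supports several interpolation modes, see Interpolation):

```python
def zoom_binary_char(char_img, width, height):
    """Nearest-neighbor zoom of a character image (list of rows) to a
    fixed width x height, so that characters of varying size yield a
    constant number of features."""
    src_h, src_w = len(char_img), len(char_img[0])
    return [[char_img[r * src_h // height][c * src_w // width]
             for c in range(width)]
            for r in range(height)]
```

Every input character, whatever its original size, is mapped to the same width × height grid before feature extraction.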
The parameter Features can contain the following feature names for the classification of the characters. By
specifying ’default’, the features ’ratio’ and ’pixel_invar’ are selected.
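For example, a classifier for digits could be created as follows (the concrete parameter values are illustrative only, not prescriptive):

create_ocr_class_mlp (8, 10, 'constant', 'default', ['0','1','2','3','4','5','6','7','8','9'], 80, 'none', 10, 42, OCRHandle)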
HALCON 8.0.2
758 CHAPTER 12. OCR
After the classifier has been created, it is trained using trainf_ocr_class_mlp. After this, the classifier can
be saved using write_ocr_class_mlp. Alternatively, the classifier can be used immediately after training to
classify characters using do_ocr_single_class_mlp or do_ocr_multi_class_mlp.
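A minimal sketch of this workflow (the file names are assumptions):

trainf_ocr_class_mlp (OCRHandle, 'characters.trf', 200, 1, 0.01, Error, ErrorLog)
write_ocr_class_mlp (OCRHandle, 'my_font.omc')
do_ocr_multi_class_mlp (Characters, Image, OCRHandle, Class, Confidence)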
HALCON provides a number of pretrained OCR classifiers (see Solution Guide I, chapter ’OCR’, section ’Pretrained OCR Fonts’). These pretrained OCR classifiers can be read directly with read_ocr_class_mlp and
make it possible to read a wide variety of different fonts without the need to train an OCR classifier. It is therefore recommended to first check whether one of the pretrained OCR classifiers can be used successfully. If so, it is not necessary to create and train a new OCR classifier.
A comparison of the MLP and the support vector machine (SVM) (see create_ocr_class_svm) typically
shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better
recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Please note that this guideline assumes optimal tuning of the parameters.
Parameter
Result
If the parameters are valid, the operator create_ocr_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
create_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Successors
trainf_ocr_class_mlp
Alternatives
create_ocr_class_svm, create_ocr_class_box
See also
do_ocr_single_class_mlp, do_ocr_multi_class_mlp, clear_ocr_class_mlp,
create_class_mlp, train_class_mlp, classify_class_mlp
Module
OCR/OCV
The Expression may restrict the word to belong to a predefined lexicon created using create_lexicon
or import_lexicon, by specifying the name of the lexicon in angular brackets as in ’<mylexicon>’. If the
Expression is of any other form, it is interpreted as a regular expression with the same syntax as specified for
tuple_regexp_match. Note that you will usually want to use an expression of the form ’^...$’ when using
variable quantifiers like ’*’, to ensure that the entire word is used in the expression. Also note that in contrast to
tuple_regexp_match, do_ocr_word_mlp does not support passing extra options in an expression tuple.
If the word derived from the best class for each character does not match the Expression,
do_ocr_word_mlp attempts to correct it by considering the NumAlternatives best classes for each character. The alternatives used are identical to those returned by do_ocr_single_class_mlp for a single
character. It does so by testing all possible corrections for which the classification result is changed for at most
NumCorrections character regions.
In case the Expression is a lexicon and the above procedure did not yield a result, the most similar word in
the lexicon is returned, as long as it requires fewer than NumCorrections edit operations for the correction (see
suggest_lexicon).
The resulting word is graded by a Score between 0.0 (no correction found) and 1.0 (original word correct), which
is dominated by the number of corrected characters but also adds a minor penalty for ignoring the second best
class or even all best classes (in case of lexica). Note that this is a combinatorial score which does not reflect the
original Confidence of the best Class.
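The following sketch shows both forms of Expression; the lexicon name, the word file, and the parameter values are assumptions:

import_lexicon ('mylexicon', 'words.txt', LexiconHandle)
do_ocr_word_mlp (Characters, Image, OCRHandle, '<mylexicon>', 3, 2, Class, Confidence, Word, Score)
do_ocr_word_mlp (Characters, Image, OCRHandle, '^[0-9]{4}$', 3, 2, Class, Confidence, Word, Score)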
Parameter
Parallelization Information
do_ocr_word_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp, read_ocr_class_mlp
Alternatives
do_ocr_multi_class_mlp
See also
create_ocr_class_mlp, classify_class_mlp
Module
OCR/OCV
get_params_ocr_class_mlp returns the parameters of an OCR classifier that were specified when the
classifier was created with create_ocr_class_mlp. This is particularly useful if the classifier was read
with read_ocr_class_mlp. The output of get_params_ocr_class_mlp can, for example, be used
to check whether a character to be read is contained in the classifier. For a description of the parameters, see
create_ocr_class_mlp.
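For example, the following sketch checks whether the character ’A’ is contained in a classifier read from file; tuple_find returns -1 if the character is not found (the file name ’my_font.omc’ is a placeholder):

read_ocr_class_mlp ('my_font.omc', OCRHandle)
get_params_ocr_class_mlp (OCRHandle, WidthCharacter, HeightCharacter, Interpolation, Features, Characters, NumHidden, Preprocessing, NumComponents)
tuple_find (Characters, 'A', Indices)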
Parameter
Compute the information content of the preprocessed feature vectors of an OCR classifier.
get_prep_info_ocr_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’principal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created with
create_ocr_class_mlp. The preprocessing methods are described with create_class_mlp. The information content is derived from the variations of the transformed components of the feature vector, i.e., it is computed solely based on the training data, independent of any error rate on the training data. The information content is computed for all relevant components of the transformed feature vectors (NumInput for ’principal_components’ and min(NumOutput−1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n components is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains the sums of the first n elements of InformationCont.
Result
If the parameters are valid, the operator get_prep_info_ocr_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
get_prep_info_ocr_class_mlp may return the error 9211 (Matrix is not positive definite) if
Preprocessing = ’canonical_variates’ is used. This typically indicates that not enough training samples
have been stored for each class.
Parallelization Information
get_prep_info_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_mlp, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
clear_ocr_class_mlp, create_ocr_class_mlp
Module
OCR/OCV
write_ocr_trainf, before calling trainf_ocr_class_mlp. The remaining parameters have the same
meaning as in train_class_mlp and are described in detail with train_class_mlp.
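A sketch of this sequence (the training file name and the training parameters are illustrative):

write_ocr_trainf (Characters, Image, ClassNames, 'characters.trf')
trainf_ocr_class_mlp (OCRHandle, 'characters.trf', 200, 1, 0.01, Error, ErrorLog)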
Parameter
Result
If the parameters are valid, the operator trainf_ocr_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
trainf_ocr_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
trainf_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_mlp, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp, write_ocr_class_mlp
Alternatives
read_ocr_class_mlp
See also
train_class_mlp
Module
OCR/OCV
12.4 Support-Vector-Machines
clear_all_ocr_class_svm ( : : : )
See also
create_ocr_class_svm, read_ocr_class_svm, write_ocr_class_svm,
trainf_ocr_class_svm
Module
OCR/OCV
clear_ocr_class_svm ( : : OCRHandle : )
chosen such that no aliasing effects occur in the transformation. For most applications, Interpolation = ’constant’ should be used. Care should be taken that the size of the transformed character is not chosen too large, because the generalization properties of the classifier may deteriorate for large sizes. In particular, for large sizes small segmentation errors will have a large influence on the computed features if gray value features are used. This happens because segmentation errors will change the smallest enclosing rectangle of the regions, so that the character is zoomed differently than the characters in the training set. In most applications, sizes between 6 × 8 and 10 × 14 should be used.
The parameter Features can contain the following feature names for the classification of the characters. By
specifying ’default’, the features ’ratio’ and ’pixel_invar’ are selected.
Result
If the parameters are valid, the operator create_ocr_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
create_ocr_class_svm is processed completely exclusively without parallelization.
Possible Successors
trainf_ocr_class_svm
Alternatives
create_ocr_class_mlp, create_ocr_class_box
See also
do_ocr_single_class_svm, do_ocr_multi_class_svm, clear_ocr_class_svm,
create_class_svm, train_class_svm, classify_class_svm
Module
OCR/OCV
Parameter
Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
get_prep_info_ocr_class_svm computes the information content of the training vectors that have
been transformed with the preprocessing given by Preprocessing. Preprocessing can be set to
’principal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created
with create_ocr_class_svm. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vector, i.e., it is computed solely based on the training data, independent of any error rate on the training data. The information content is computed for all relevant components of the transformed feature vectors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canonical_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_ocr_class_svm, a sufficient number of samples must be stored in the training files given
by TrainingFile (see write_ocr_trainf).
InformationCont and CumInformationCont can be used to decide how many components of
the transformed feature vectors contain relevant information. An often used criterion is to require that
the transformed data must represent x% (e.g., 90%) of the total data. This can be decided easily from the first value of CumInformationCont that lies above x%. The number thus obtained
can be used as the value for NumComponents in a new call to create_ocr_class_svm. The
call to get_prep_info_ocr_class_svm already requires the creation of a classifier, and hence
the setting of NumComponents in create_ocr_class_svm to an initial value. However, when get_prep_info_ocr_class_svm is called, it is typically not known how many components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step approach should typically be
used to select NumComponents: In a first step, a classifier with the maximum number for NumComponents is
created (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canonical_variates’). Then, the training samples are saved in a training file using write_ocr_trainf. Subsequently,
get_prep_info_ocr_class_svm is used to determine the information content of the components, and with
this NumComponents. After this, a new classifier with the desired number of components is created, and the
classifier is trained with trainf_ocr_class_svm.
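This two-step approach could be sketched as follows; the kernel parameters, the training file name, and the assumption that NumFeatures is known from the selected features are illustrative only:

* Step 1: create a classifier with the maximum number of components.
create_ocr_class_svm (8, 10, 'constant', 'default', Characters, 'rbf', 0.02, 0.05, 'one-versus-one', 'principal_components', NumFeatures, OCRHandle)
get_prep_info_ocr_class_svm (OCRHandle, 'characters.trf', 'principal_components', InformationCont, CumInformationCont)
* Select the smallest number of components that represents 90% of the data.
NumComponents := 0
for I := |CumInformationCont| - 1 to 0 by -1
    if (CumInformationCont[I] >= 0.9)
        NumComponents := I + 1
    endif
endfor
* Step 2: create and train a classifier with the selected number of components.
clear_ocr_class_svm (OCRHandle)
create_ocr_class_svm (8, 10, 'constant', 'default', Characters, 'rbf', 0.02, 0.05, 'one-versus-one', 'principal_components', NumComponents, OCRHandle)
trainf_ocr_class_svm (OCRHandle, 'characters.trf', 0.001, 'default')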
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; integer
Handle of the OCR classifier.
. TrainingFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; string
Name(s) of the training file(s).
Default Value : ’ocr.trf’
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default Value : ’principal_components’
List of values : Preprocessing ∈ {’principal_components’, ’canonical_variates’}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Cumulative information content of the transformed feature vectors.
Example
Result
If the parameters are valid, the operator get_prep_info_ocr_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
get_prep_info_ocr_class_svm may return the error 9211 (Matrix is not positive definite) if
Preprocessing = ’canonical_variates’ is used. This typically indicates that not enough training samples
have been stored for each class.
Parallelization Information
get_prep_info_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_svm, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
clear_ocr_class_svm, create_ocr_class_svm
Module
OCR/OCV
get_support_vector_num_ocr_class_svm
( : : OCRHandle : NumSupportVectors, NumSVPerSVM )
get_support_vector_ocr_class_svm ( : : OCRHandle,
IndexSupportVector : Index )
Return the index of a support vector from a trained OCR classifier that is based on support vector machines.
The operator get_support_vector_ocr_class_svm maps support vectors of a trained SVM-based
OCR classifier (given in OCRHandle) to the original training data set. The index of the SV is specified with IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a number between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with get_support_vector_num_ocr_class_svm. The index of this SV in the training data
is returned in Index. get_support_vector_ocr_class_svm can, for example, be used to visualize the support vectors. To do so, the train file that has been used to train the SVM must be read with
read_ocr_trainf. The value returned in Index must be incremented by 1 and can then be used to select
the support vectors with select_obj from the training characters. If more than one train file has been used in trainf_ocr_class_svm, Index behaves as if all train files had been merged into one train file with concat_ocr_trainf.
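A sketch of this visualization (the train file name is an assumption):

get_support_vector_num_ocr_class_svm (OCRHandle, NumSupportVectors, NumSVPerSVM)
read_ocr_trainf (Characters, 'characters.trf', CharacterNames)
for I := 0 to NumSupportVectors - 1 by 1
    get_support_vector_ocr_class_svm (OCRHandle, I, Index)
    select_obj (Characters, SupportVector, Index + 1)
    dev_display (SupportVector)
endfor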
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; integer
OCR handle.
. IndexSupportVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Index of the support vector to be returned.
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Index of the support vector in the training set.
Result
If the parameters are valid, the operator get_support_vector_ocr_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
get_support_vector_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm, get_support_vector_num_ocr_class_svm
See also
create_ocr_class_svm, read_ocr_trainf, append_ocr_trainf, concat_ocr_trainf
Module
OCR/OCV
Result
If the parameters are valid, the operator trainf_ocr_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
trainf_ocr_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
trainf_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_svm, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
do_ocr_single_class_svm, do_ocr_multi_class_svm, write_ocr_class_svm
Alternatives
read_ocr_class_svm
See also
train_class_svm
Module
OCR/OCV
12.5 Tools
segment_characters ( Region, Image : ImageForeground,
RegionForeground : Method, EliminateLines, DotPrint, StrokeWidth,
CharWidth, CharHeight, ThresholdOffset, Contrast : UsedThreshold )
’local_contrast_best’ This method extracts text that differs locally from the background. It is therefore suited for images with inhomogeneous illumination. The enhancement of the text borders leads to a more accurate determination of the outline of the text, which is especially useful if the background is highly textured. The parameter Contrast defines the minimum contrast, i.e., the minimum gray value difference between symbols and background.
’local_auto_shape’ The minimum contrast is estimated automatically such that the number of very small regions
is reduced. This method is especially suitable for noisy images. The parameter ThresholdOffset can
be used to adjust the threshold. Let g(x, y) be the gray value at position (x, y) in the input Image. The
threshold condition is determined by:
g(x, y) ≤ UsedThreshold + ThresholdOffset.
Set EliminateLines to ’true’ if the extraction of characters is disturbed by lines that are horizontal or vertical with respect to the lines of text. The elimination is influenced by the maximum of CharWidth and the maximum of CharHeight. For further information, see the description of these parameters.
DotPrint: Should be set to ’true’ if dot prints should be read, otherwise to ’false’.
StrokeWidth: Specifies the stroke width of the text. It is used to calculate internally used mask sizes to determine the characters. These mask sizes are also influenced by the parameter DotPrint, the average CharWidth, and the average CharHeight.
CharWidth: This can be a tuple with up to three values. The first value is the average width of a character, the second is the minimum width, and the third is the maximum width. If the minimum is not set or equals -1, the operator automatically sets this value depending on the average CharWidth. The same is the case if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
CharHeight: This can be a tuple with up to three values. The first value is the average height of a character, the second is the minimum height, and the third is the maximum height. If the minimum is not set or equals -1, the operator automatically sets this value depending on the average CharHeight. The same is the case if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
ThresholdOffset: This parameter can be used to adjust the threshold, which is used when the segmentation
method ’local_auto_shape’ is chosen.
Contrast: Defines the minimum contrast between the text and the background. This parameter is used if the
segmentation method ’local_contrast_best’ is selected.
UsedThreshold: After the execution, this parameter returns the threshold used to segment the characters.
ImageForeground returns the image that was internally used for the segmentation.
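A sketch with the default values; the input region and image are assumed to come from a preceding step such as text_line_orientation:

segment_characters (TextRegion, Image, ImageForeground, RegionForeground, 'local_auto_shape', 'false', 'false', 'medium', 25, 25, 0, 10, UsedThreshold)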
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Area in the image where the text lines are located.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. ImageForeground (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject
Image used for the segmentation.
. RegionForeground (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region of characters.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to segment the characters.
Default Value : ’local_auto_shape’
List of values : Method ∈ {’local_contrast_best’, ’local_auto_shape’}
. EliminateLines (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Eliminate horizontal and vertical lines?
Default Value : ’false’
List of values : EliminateLines ∈ {’true’, ’false’}
. DotPrint (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should dot print characters be detected?
Default Value : ’false’
List of values : DotPrint ∈ {’true’, ’false’}
. StrokeWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Stroke width of a character.
Default Value : ’medium’
List of values : StrokeWidth ∈ {’ultra_light’, ’light’, ’medium’, ’bold’}
. CharWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Width of a character.
Default Value : 25
Typical range of values : 1 ≤ CharWidth
Restriction : CharWidth ≥ 1
. CharHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Height of a character.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. ThresholdOffset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Value to adjust the segmentation.
Default Value : 0
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum gray value difference between text and background.
Default Value : 10
Typical range of values : 1 ≤ Contrast
Restriction : Contrast ≥ 1
. UsedThreshold (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Threshold used to segment the characters.
Example
Result
If the input parameters are set correctly, the operator segment_characters returns the value 2
(H_MSG_TRUE). Otherwise an exception will be raised.
Parallelization Information
segment_characters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
text_line_orientation
Possible Successors
select_characters, connection
Alternatives
threshold
Module
Foundation
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
CharHeight: This can be a tuple with up to three values. The first value is the average height of a character, the second is the minimum height, and the third is the maximum height. If the minimum is not set or equals -1, the operator automatically sets this value depending on the average CharHeight. The same is the case if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
Punctuation: Set this parameter to ’true’ if the operator also has to detect punctuation marks (e.g. .,:’‘"),
otherwise they will be suppressed.
DiacriticMarks: Set this parameter to ’true’ if the text in your application contains diacritic marks (e.g. â,é,ö),
or to ’false’ to suppress them.
PartitionMethod: If neighboring characters are printed close to each other, they may be partly merged. With
this parameter you can specify the method to partition such characters. The possible values are ’none’, which means that no partitioning is performed. ’fixed_width’ means that the partitioning assumes a constant character width. If the width of the extracted region is well above the average CharWidth, the region is split into parts that have the given average CharWidth. The partitioning starts at the left border of the region. ’variable_width’ means
that the characters are partitioned at the position where they have the thinnest connection. This method can be
selected for characters that are printed with a variable-width font or if many consecutive characters are extracted as
one symbol. It could be helpful to call text_line_slant and/or use text_line_orientation before
calling select_characters.
PartitionLines: If some text lines or some characters of different text lines are connected, set this parameter
to ’true’.
FragmentDistance: This parameter influences the connection of character fragments. If too much is connected, set the parameter to ’narrow’ or ’medium’. If more fragments should be connected, set the parameter to ’medium’ or ’wide’. The connection is also influenced by the maximum of CharWidth and CharHeight. See also ConnectFragments.
ConnectFragments: Set this parameter to ’true’ if the extracted symbols are fragmented, i.e., if a symbol is
not extracted as one region but broken up into several parts. See also FragmentDistance and StopAfter in
the step ’step3_connect_fragments’.
ClutterSizeMax: If the extracted characters contain clutter, i.e., small regions near the actual symbols, increase
this value. If parts of the symbols are missing, decrease this value.
StopAfter: Use this parameter in case the operator does not produce the desired results. By modifying this value, the operator stops after the execution of the selected step and provides the corresponding results. To run to completion, set StopAfter to ’completion’.
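A sketch with typical values; the parameter order follows the descriptions above and the concrete values are illustrative only:

select_characters (RegionForeground, RegionCharacters, 'false', 'medium', 25, 25, 'true', 'false', 'none', 'false', 'medium', 'true', 20, 'completion')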
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region of text lines in which to select the characters.
. RegionCharacters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Selected characters.
. DotPrint (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should dot print characters be detected?
Default Value : ’false’
List of values : DotPrint ∈ {’true’, ’false’}
. StrokeWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Stroke width of a character.
Default Value : ’medium’
List of values : StrokeWidth ∈ {’ultra_light’, ’light’, ’medium’, ’bold’}
. CharWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Width of a character.
Default Value : 25
Typical range of values : 1 ≤ CharWidth
Restriction : CharWidth ≥ 1
Result
If the input parameters are set correctly, the operator select_characters returns the value 2
(H_MSG_TRUE). Otherwise an exception will be raised.
Parallelization Information
select_characters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
segment_characters, text_line_slant
Possible Successors
do_ocr_single, do_ocr_multi
Alternatives
connection
Module
Foundation
read_image(Image,’letters’)
text_line_orientation(Image,Image,50,rad(-80),rad(80),OrientationAngle)
rotate_image(Image,ImageRotate,-OrientationAngle/rad(180)*180,’constant’)
Result
If the input parameters are set correctly, the operator text_line_orientation returns the value 2
(H_MSG_TRUE). Otherwise an exception will be raised.
Parallelization Information
text_line_orientation is reentrant and automatically parallelized (on tuple level).
Possible Successors
rotate_image, affine_trans_image, affine_trans_image_size
Module
Foundation
hom_mat2d_identity(HomMat2DIdentity)
read_image(Image,’dot_print_slanted’)
/* correct slant */
text_line_slant(Image,Image,50,rad(-45),rad(45),SlantAngle)
hom_mat2d_slant(HomMat2DIdentity,-SlantAngle,’x’,0,0,HomMat2DSlant)
affine_trans_image(Image,Image,HomMat2DSlant,’constant’,’true’)
Result
If the input parameters are set correctly, the operator text_line_slant returns the value 2 (H_MSG_TRUE).
Otherwise an exception will be raised.
Parallelization Information
text_line_slant is reentrant and automatically parallelized (on tuple level).
Possible Successors
hom_mat2d_slant, affine_trans_image, affine_trans_image_size
Module
Foundation
12.6 Training-Files
char name[128];
char class[128];
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
strcpy(name,"characters.trf");  /* name of the training file (example) */
for (i=1; i<=num; i++) {        /* HALCON object indices start at 1 */
  select_obj(Character,&SingleCharacter,i);
  clear_window(WindowHandle);
  disp_region(SingleCharacter,WindowHandle);
  printf("class of character %d ?\n",i);
  scanf("%127s",class);
  append_ocr_trainf(SingleCharacter,Image,name,class);
}
Result
If the parameters are correct, the operator append_ocr_trainf returns the value 2 (H_MSG_TRUE). Other-
wise an exception will be raised.
Parallelization Information
append_ocr_trainf is processed completely exclusively without parallelization.
Possible Predecessors
threshold, connection, create_ocr_class_box, read_ocr
Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Alternatives
write_ocr_trainf, write_ocr_trainf_image
Module
OCR/OCV
Parameter
Object
13.1 Information
count_obj ( Objects : : : Number )
’creator’ Output of the names of the procedures which initially created the image components (not the object).
’type’ Output of the type of image component (’byte’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’direction’, ’cyclic’, ’complex’, ’vector_field’). The component 0 is of type ’region’ or ’xld’.
The tuple Channel specifies the numbers of the components about which information is required. After get_channel_info has been carried out, Information contains a tuple of strings (one string per entry in Channel) with the required information.
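For example, querying the type of the first component of an image:

get_channel_info (Image, 'type', 1, Information)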
Parameter
’image’ Object with region (definition domain) and at least one channel.
’region’ Object with a region without gray values.
’xld_cont’ XLD object as contour
’xld_poly’ XLD object as polygon
’xld_parallel’ XLD object with parallel polygons
Parameter
Parallelization Information
get_obj_class is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image, disp_region, disp_xld
See also
get_channel_info, count_relation
Module
Foundation
Attention
The parameter IsDefined can be TRUE even if the object was already deleted because the surrogates of deleted
objects are re-used for new objects. In this context see the example.
Parameter
. Object (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object ; Hobject
Object to be checked.
. IsDefined (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Boolean result value.
Example (Syntax: C)
circle(&Circle,100.0,100.0,100.0);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE): %d\n",IsDefined);
clear_obj(Circle);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_FALSE): %d\n",IsDefined);
gen_rectangle1(&Rectangle,200.0,200.0,300.0,300.0);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE!!!): %d\n",IsDefined);
Complexity
The runtime complexity is O(1).
Result
The operator test_obj_def returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input objects available) is set via the operator set_system(::
’no_object_result’,<Result>:).
Parallelization Information
test_obj_def is reentrant and processed without parallelization.
Possible Predecessors
clear_obj, gen_circle, gen_rectangle1
See also
set_check, clear_obj, reset_obj_db
Module
Foundation
13.2 Manipulation
clear_obj ( Objects : : : )
Parameter
. Objects (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Objects to be deleted.
Result
clear_obj returns 2 (H_MSG_TRUE) if all objects are contained in the HALCON database. If not all objects
are valid (e.g., already cleared), an exception is raised, which also clears all valid objects. The operator
set_check(::’~clear’:) can be used to suppress the raising of this exception. If the input is empty the
behavior can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception
is raised.
Parallelization Information
clear_obj is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
reset_obj_db
See also
test_obj_def, set_check
Module
Foundation
gen_circle(&Circle,200.0,400.0,23.0);
gen_rectangle1(&Rectangle,23.0,44.0,203.0,201.0);
concat_obj(Circle,Rectangle,&CircleAndRectangle);
clear_obj(Circle); clear_obj(Rectangle);
disp_region(CircleAndRectangle,WindowHandle);
Complexity
Runtime complexity: O(|Objects1| + |Objects2|);
Memory complexity of the result objects: O(|Objects1| + |Objects2|)
Result
concat_obj returns 2 (H_MSG_TRUE) if all objects are contained in the HALCON database. If the input is
empty the behavior can be set via set_system(::’no_object_result’,<Result>:). If necessary, an
exception is raised.
Parallelization Information
concat_obj is reentrant and processed without parallelization.
See also
count_obj, copy_obj, select_obj, disp_obj
Module
Foundation
count_obj(Regions,Num)
for(1,Num,i)
copy_obj(Regions,Single,i,1)
get_region_polygon(Single,5.0,Line,Column)
disp_polygon(WindowHandle,Line,Column)
clear_obj(Single)
loop().
Complexity
Runtime complexity: O(|Objects| + NumObj);
gen_empty_obj ( : EmptyObject : : )
Parallelization Information
integer_to_obj is reentrant and processed without parallelization.
See also
obj_to_integer
Module
Foundation
Complexity
Runtime complexity: O(|Objects| + Number)
Result
obj_to_integer returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via set_system(::’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
obj_to_integer is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
copy_obj, select_obj, copy_image, gen_image_proto
See also
integer_to_obj, count_obj
Module
Foundation
count_obj(Regions,&Num);
for (i=1; i<=Num; i++)
{
select_obj(Regions,&Single,i);
T_get_region_polygon(Single,5.0,&Row,&Column);
T_disp_polygon(WindowHandleTuple,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
clear_obj(Single);
}
Complexity
Runtime complexity: O(|Objects|)
Result
select_obj returns 2 (H_MSG_TRUE) if all objects are contained in the HALCON database and
all parameters are correct. If the input is empty the behavior can be set via set_system(::
’no_object_result’,<Result>:). If necessary, an exception is raised.
Parallelization Information
select_obj is reentrant and processed without parallelization.
Possible Predecessors
count_obj
Alternatives
copy_obj
See also
count_obj, concat_obj, obj_to_integer
Module
Foundation
Regions
14.1 Access
get_region_chain ( Region : : : Row, Column, Chain )
3 2 1
4 ∗ 0
5 6 7
The operator get_region_chain returns the code in the form of a tuple. In case of an empty region the
parameters Row and Column are zero and Chain is the empty tuple.
Attention
Holes of the region are ignored. Only one region may be passed, and it must have exactly one connection compo-
nent.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region to be transformed.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chain.begin.y ; integer
Line of starting point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chain.begin.x ; integer
Column of starting point.
. Chain (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chain.code-array ; integer
Direction code of the contour (from starting point).
Typical range of values : 0 ≤ Chain ≤ 7
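The direction coding can be illustrated with a small sketch in plain Python (not HALCON code; the function name is ours). It reconstructs the contour pixels from the starting point and the chain code, using the direction layout shown above with line numbers increasing downwards:

```python
# Freeman 8-direction chain code, numbered as in the layout above
# (0 = right, counterclockwise), with rows growing downwards:
#   3 2 1
#   4 * 0
#   5 6 7
OFFSETS = {
    0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1),
    4: (0, -1), 5: (1, -1), 6: (1, 0), 7: (1, 1),
}

def decode_chain(row, column, chain):
    """Return the contour pixels traced by a chain code."""
    points = [(row, column)]
    for code in chain:
        dr, dc = OFFSETS[code]
        row, column = row + dr, column + dc
        points.append((row, column))
    return points
```

For example, the chain [0, 6, 4, 2] traces a closed unit square and ends at the starting point again.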
Result
The operator get_region_chain normally returns the value 2 (H_MSG_TRUE). If more than one con-
nection component is passed an exception handling is caused. The behavior in case of empty input (no in-
put regions available) is set via the operator set_system(’no_object_result’,<Result>). The
behavior in case of empty region (the region is the empty set) is set via the operator set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_region_chain is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, skeleton, edges_image, gen_rectangle1, gen_circle
Possible Successors
approx_chain, approx_chain_simple
See also
copy_obj, get_region_contour, get_region_polygon
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Input region.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; integer
Line numbers of contour pixels.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; integer
Column numbers of the contour pixels.
Number of elements : Columns = Rows
Result
The operator get_region_convex returns the value 2 (H_MSG_TRUE).
Parallelization Information
get_region_convex is reentrant and processed without parallelization.
Possible Predecessors
threshold, skeleton, dyn_threshold
Possible Successors
disp_polygon
Alternatives
shape_trans
See also
select_obj, get_region_contour
Module
Foundation
get_region_points returns the coordinates in the form of tuples. An empty region is returned as an empty tuple.
Attention
Only one region may be passed.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
This region is accessed.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; integer
Line numbers of the pixels in the region.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; integer
Column numbers of the pixels in the region.
Number of elements : Columns = Rows
Result
The operator get_region_points normally returns the value 2 (H_MSG_TRUE). If more than one connec-
tion component is passed an exception handling is caused. The behavior in case of empty input (no input regions
available) is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
get_region_points is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, connection
Alternatives
get_region_runs
See also
copy_obj, gen_region_points
Module
Foundation
The operator get_region_runs returns the region data in the form of chord tuples. The chord representation
is obtained by examining a region line by line with ascending line number (= from “top” to “bottom”). Every line
is traversed from left to right (ascending column number), and all starting and ending points of region segments (=
chords) are stored. Thus a region can be described by a sequence of chords, a chord being defined by its line number
and the column numbers of its starting and ending points. The operator get_region_runs returns the three
components of the chords in the form of tuples. In case of an empty region three empty tuples are returned.
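The chord encoding can be sketched in plain Python (not HALCON code; names are ours). Each maximal horizontal run of set pixels in a line yields one chord (line number, starting column, ending column):

```python
# Illustrative sketch (plain Python, not HALCON): encode a binary image
# as chords. The image is scanned line by line, each line from left to
# right; every maximal run of set pixels yields one chord.
def region_runs(binary):
    rows, col_begin, col_end = [], [], []
    for r, line in enumerate(binary):
        c = 0
        while c < len(line):
            if line[c]:
                start = c
                while c < len(line) and line[c]:
                    c += 1
                rows.append(r)
                col_begin.append(start)
                col_end.append(c - 1)
            else:
                c += 1
    return rows, col_begin, col_end
```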
Attention
Only one region may be passed.
14.2 Creation
gen_checker_region ( : RegionChecker : WidthRegion, HeightRegion,
WidthPattern, HeightPattern : )
gen_checker_region(Checker,512,512,32,64)
set_draw(WindowHandle,’fill’)
set_part(WindowHandle,0,0,511,511)
disp_region(Checker,WindowHandle)
Complexity
The required storage (in bytes) for the region is:
O((WidthRegion ∗ HeightRegion)/WidthPattern)
Result
The operator gen_checker_region returns the value 2 (H_MSG_TRUE) if the parameter values are correct.
Otherwise an exception handling is raised. The clipping according to the current image format is set via the
operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_checker_region is reentrant and processed without parallelization.
Possible Successors
paint_region
Alternatives
gen_grid_region, gen_region_polygon_filled, gen_region_points,
gen_region_runs, gen_rectangle1, concat_obj, gen_random_region,
gen_random_regions
See also
hamming_change_region, reduce_domain
Module
Foundation
Create a circle.
The operator gen_circle generates one or more circles described by the center and Radius. If several circles
shall be generated the coordinates must be passed in the form of tuples.
gen_circle only creates symmetric circles. To achieve this, the radius is rounded internally to a multiple of 0.5.
If an integer number is specified for the radius (i.e., 1, 2, 3, ...) an even diameter is obtained, and hence the circle
can only be symmetric with respect to a center with coordinates that have a fractional part of 0.5. Consequently,
internally the coordinates of the center are adapted to the closest coordinates that have a fractional part of 0.5. Here,
integer coordinates are rounded down to the next smaller values with a fractional part of 0.5. For odd diameters
(i.e., radius = 1.5, 2.5, 3.5, ...), the circle can only be symmetric with respect to a center with integer coordinates.
Hence, internally the coordinates of the center are rounded to the nearest integer coordinates. It should be noted
that the above algorithm may lead to the fact that circles with an even diameter are not contained in circles with
the next larger odd diameter, even if the coordinates specified in Row and Column are identical.
If the circle extends beyond the image edge it is clipped to the current image format if the value of the system flag
’clip_region’ is set to ’true’ ( set_system).
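Our reading of this rounding rule can be sketched in plain Python (not HALCON code; this illustrates the rule as described in the text, not the actual HALCON implementation, which may differ in details):

```python
import math

# Illustrative sketch (plain Python, not HALCON) of the rounding rule
# described above: the radius snaps to a multiple of 0.5; an integer
# radius (even diameter) snaps the center to coordinates with fractional
# part 0.5 (integers rounded DOWN to the next smaller x.5 value); a
# half-integer radius (odd diameter) snaps the center to integers.
def snap_circle(row, column, radius):
    radius = round(radius * 2) / 2.0            # multiple of 0.5
    if radius == math.floor(radius):            # integer radius -> even diameter
        def snap(v):                            # nearest x.5; integers round down
            f = math.floor(v)
            return f - 0.5 if v == f else f + 0.5
    else:                                       # radius x.5 -> odd diameter
        def snap(v):                            # nearest integer
            return float(math.floor(v + 0.5))
    return snap(row), snap(column), radius
```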
Parameter
. Circle (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Generated circle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; real / integer
Line index of center.
Default Value : 200.0
Suggested values : Row ∈ {0.0, 10.0, 50.0, 100.0, 200.0, 300.0}
Typical range of values : 1.0 ≤ Row ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; real / integer
Column index of center.
Default Value : 200.0
Suggested values : Column ∈ {0.0, 10.0, 50.0, 100.0, 200.0, 300.0}
Typical range of values : 1.0 ≤ Column ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; real / integer
Radius of circle.
Default Value : 100.5
Suggested values : Radius ∈ {1.0, 1.5, 2.0, 2.5, 3, 3.5, 4, 4.5, 5.5, 6.5, 7.5, 9.5, 11.5, 15.5, 20.5, 25.5, 31.5,
50.5}
Typical range of values : 1.0 ≤ Radius ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Radius > 0.0
Example
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
read_image(Image,’meer’)
gen_circle(Circle,300.0,200.0,150.5)
reduce_domain(Image,Circle,Mask)
disp_color(Mask,WindowHandle).
Complexity
Runtime complexity: O(Radius ∗ 2)
Create an ellipse.
The operator gen_ellipse generates one or more ellipses with the center (Row, Column), the orientation
Phi and the half-axes Radius1 and Radius2. The angle Phi is given in radians and is measured from the x axis
in the mathematically positive (counterclockwise) direction. More than one region can be created by passing tuples
of parameter values.
The center must be located within the image coordinates. The coordinate system runs from (0,0) (upper left corner)
to (Width-1,Height-1). See get_system and reset_obj_db in this context. If the ellipse reaches beyond the
edge of the image it is clipped to the current image format according to the value of the system flag ’clip_region’ (
set_system).
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_insert(WindowHandle,’xor’)
repeat()
get_mbutton(WindowHandle,Row,Column,Button)
gen_ellipse(Ellipse,Row,Column,Column / 300.0,
(Row mod 100)+1,(Column mod 50) + 1)
disp_region(Ellipse,WindowHandle)
clear_obj(Ellipse)
until(Button = 1).
Complexity
Runtime complexity: O(Radius1 ∗ 2)
Storage complexity (byte): O(Radius1 ∗ 8)
Result
If the parameter values are correct, the operator gen_ellipse returns the value 2 (H_MSG_TRUE). Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_ellipse is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_circle, gen_region_polygon_filled, draw_ellipse
See also
disp_ellipse, set_shape, smallest_circle, reduce_domain
Module
Foundation
gen_empty_region ( : EmptyRegion : : )
read_image(Image,’fabrik’)
gen_grid_region(Raster,10,10,’lines’,512,512)
reduce_domain(Image,Raster,Mask)
sobel_amp(Mask,GridSobel,’sum_abs’,3)
disp_image(GridSobel,WindowHandle).
Complexity
The necessary storage (in bytes) for the region is:
O((ImageWidth/ColumnSteps) ∗ (ImageHeight/RowSteps))
Result
If the parameter values are correct the operator gen_grid_region returns the value 2 (H_MSG_TRUE).
Otherwise an exception handling is raised. The clipping according to the current image format is set via the
operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_grid_region is reentrant and processed without parallelization.
Possible Successors
reduce_domain, paint_region
Alternatives
gen_region_line, gen_region_polygon, gen_region_points, gen_region_runs
See also
gen_checker_region, reduce_domain
Module
Foundation
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
gen_rectangle1(Rectangle,Row1,Column1,Row2,Column2)
reduce_domain(Image,Rectangle,Mask)
emphasize(Mask,Emphasize,9,9,1.0)
disp_image(Emphasize,WindowHandle).
Result
If the parameter values are correct, the operator gen_rectangle1 returns the value 2 (H_MSG_TRUE). Oth-
erwise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_rectangle1 is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_rectangle2, gen_region_polygon, fill_up, gen_region_runs,
gen_region_points, gen_region_line
See also
draw_rectangle1, reduce_domain, smallest_rectangle1
Module
Foundation
gen_region_contour_xld creates a region Region from a subpixel XLD contour Contour. The contour
is sampled according to the Bresenham algorithm and is influenced by the ’neighborhood’ parameter of the operator
set_system. Open contours are closed before converting them to regions. Finally, the parameter Mode defines
whether the region is filled up (’filled’) or returned by its contour (’margin’).
Please note that the coordinates of the contour points are rounded to their nearest integer pixel coordinates
during the conversion. This may lead to unexpected results when passing the contour obtained by
the operator gen_contour_region_xld to gen_region_contour_xld: When setting Mode of
gen_contour_region_xld to ’border’, the input region of gen_contour_region_xld and the output
region of gen_region_contour_xld differ. For example, let us assume that the input region of
gen_contour_region_xld consists of the single pixel (1,1). Then, the resulting contour that is obtained
when calling gen_contour_region_xld with Mode set to ’border’ consists of the five points
(0.5,0.5), (0.5,1.5), (1.5,1.5), (1.5,0.5), and (0.5,0.5). Consequently, when passing this contour again to
gen_region_contour_xld, the resulting region consists of the points (1,1), (1,2), (2,2), and (2,1).
Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Input contour.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Created region.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Fill mode of the region.
Default Value : ’filled’
Suggested values : Mode ∈ {’filled’, ’margin’}
Parallelization Information
gen_region_contour_xld is reentrant and processed without parallelization.
Possible Predecessors
gen_contour_polygon_xld, gen_contour_polygon_rounded_xld
Alternatives
gen_region_polygon, gen_region_polygon_xld
See also
set_system
Module
Foundation
See also
hough_lines
Module
Foundation
The indicated coordinates stand for two consecutive pixels in the tuple.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Created region.
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y(-array) ; integer
Lines of the pixels in the region.
Default Value : 100
Suggested values : Rows ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Rows ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x(-array) ; integer
Columns of the pixels in the region.
Default Value : 100
Suggested values : Columns ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Columns ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Columns = Rows
Complexity
F shall be the number of pixels. If the pixels are sorted in ascending order the runtime complexity is: O(F ),
otherwise O(log(F ) ∗ F ).
Result
The operator gen_region_points returns the value 2 (H_MSG_TRUE) if the pixels are located within the im-
age format. Otherwise an exception handling is raised. The clipping according to the current image format is set via
the operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the
clipping or by an empty input) the operator set_system(’store_empty_region’,<true/false>)
determines whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_points is reentrant and processed without parallelization.
Possible Predecessors
get_region_points
Possible Successors
paint_region, reduce_domain
Alternatives
gen_region_polygon, gen_region_runs, gen_region_line
See also
reduce_domain
Module
Foundation
The operator gen_region_polygon creates a region from a polygon described by a series of line and
column coordinates. The created region consists of the pixels along the path defined by these base points, where
the contour is linearly interpolated between consecutive base points.
Attention
The region is not automatically closed and not filled. The gray values of the output regions are undefined.
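The interpolation between base points can be sketched in plain Python (not HALCON code; names are ours, and we assume the classic Bresenham line algorithm as a plausible sampling scheme, since the text does not name one):

```python
# Illustrative sketch (plain Python, not HALCON): pixels along the line
# segments between consecutive base points, via Bresenham's algorithm.
def bresenham(r0, c0, r1, c1):
    """All pixels on the segment from (r0,c0) to (r1,c1), inclusive."""
    pixels = []
    dr, dc = abs(r1 - r0), abs(c1 - c0)
    sr = 1 if r1 >= r0 else -1
    sc = 1 if c1 >= c0 else -1
    err = dc - dr
    r, c = r0, c0
    while True:
        pixels.append((r, c))
        if (r, c) == (r1, c1):
            break
        e2 = 2 * err
        if e2 > -dr:
            err -= dr
            c += sc
        if e2 < dc:
            err += dc
            r += sr
    return pixels

def region_polygon(rows, columns):
    """Concatenate the segments between consecutive base points."""
    pts = []
    for (r0, c0), (r1, c1) in zip(zip(rows, columns),
                                  zip(rows[1:], columns[1:])):
        seg = bresenham(r0, c0, r1, c1)
        pts.extend(seg if not pts else seg[1:])  # avoid duplicate joints
    return pts
```

Note that, as stated above, the polygon is not closed automatically: no segment is generated between the last and the first base point.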
/* Polygon-approximation*/
get_region_polygon(Region,7,Row,Column)
/* store it as a region */
gen_region_polygon(Pol,Row,Column)
/* fill up the hole */
fill_up(Pol,Filled).
Result
If the base points are correct the operator gen_region_polygon returns the value 2 (H_MSG_TRUE). Oth-
erwise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or
by an empty input) the operator set_system(’store_empty_region’,<true/false>) determines
whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_polygon is reentrant and processed without parallelization.
Possible Predecessors
get_region_polygon, draw_polygon
Alternatives
gen_region_polygon_filled, gen_region_points, gen_region_runs
See also
fill_up, reduce_domain, get_region_polygon, draw_polygon
Module
Foundation
The operator gen_region_polygon_filled creates a region from a polygon containing the cor-
ner points of the region (line and column coordinates) either clockwise or anti-clockwise. Contrary to
gen_region_polygon a “filled” region is returned here.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Created region.
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.y-array ; integer
Line indices of the base points of the region contour.
Default Value : 100
Suggested values : Rows ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Rows ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.x-array ; integer
Column indices of the base points of the region contour.
Default Value : 100
Suggested values : Columns ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Columns ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Columns = Rows
Example (Syntax: C)
/* Polygon approximation */
T_get_region_polygon(Region,7,&Row,&Column);
T_gen_region_polygon_filled(&Pol,Row,Column);
/* fill up with original gray value */
reduce_domain(Image,Pol,&New);
Result
If the base points are correct the operator gen_region_polygon_filled returns the value 2
(H_MSG_TRUE). Otherwise an exception handling is raised. The clipping according to the current
image format is set via the operator set_system(’clip_region’,<’true’/’false’>). If
an empty region is created (by the clipping or by an empty input) the operator set_system
(’store_empty_region’,<true/false>) determines whether the region is returned or an empty ob-
ject tuple.
Parallelization Information
gen_region_polygon_filled is reentrant and processed without parallelization.
Possible Predecessors
get_region_polygon, draw_polygon
Alternatives
gen_region_polygon, gen_region_points, draw_polygon
See also
gen_region_polygon, reduce_domain, get_region_polygon, gen_region_runs
Module
Foundation
Attention
label_to_region is not implemented for images of type ’real’. The input images must not contain negative
gray values.
Parameter
. LabelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / int4
Label image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions having a constant gray value.
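The behavior can be sketched in plain Python (not HALCON code; names are ours): every gray value occurring in the label image yields one region containing all pixels carrying that value:

```python
from collections import defaultdict

# Illustrative sketch (plain Python, not HALCON): one region per gray
# value of the label image, each containing all pixels with that value.
def label_to_region(label_image):
    regions = defaultdict(list)
    for r, line in enumerate(label_image):
        for c, value in enumerate(line):
            regions[value].append((r, c))
    return dict(regions)
```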
Complexity
Let x1 be the minimum x-coordinate, x2 the maximum x-coordinate, y1 be the minimum y-coordinate, and y2 the
maximum y-coordinate of a particular gray value. Furthermore, let N be the number of different gray values in the
image. Then the runtime complexity is O(N ∗ (x2 − x1 + 1) ∗ (y2 − y1 + 1))
Result
label_to_region returns 2 (H_MSG_TRUE) if the gray values lie within a correct range. The behav-
ior with respect to the input images and output regions can be determined by setting the values of the flags
’no_object_result’, ’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an ex-
ception is raised.
Parallelization Information
label_to_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
min_max_gray, sobel_amp, binomial_filter, gauss_image, reduce_domain,
diff_of_gauss
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
See also
threshold, concat_obj, regiongrowing, region_to_label
Module
Foundation
14.3 Features
area_center ( Regions : : : Area, Row, Column )
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main()
{
Tuple area, row, column;
img.Display (w);
w.Click ();
reg.Display (w);
w.Click ();
cout << "Total number of regions: " << reg.Num () << endl;
return(0);
}
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator area_center returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
area_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
See also
select_shape
Module
Foundation
Calculation: If F is the area of the region and max is the maximum distance from the center to all contour pixels,
the shape factor C is defined as:
C = F / (max² · π)
The shape factor C of a circle is 1. If the region is long or has holes C is smaller than 1. The operator
circularity especially responds to large bulges, holes and unconnected regions.
In case of an empty region the operator circularity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the shape factor are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
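The formula can be checked with a small sketch in plain Python (not HALCON code; names are ours). Note that for small discrete regions this naive version can exceed 1, since HALCON's exact contour convention may differ:

```python
import math

# Illustrative sketch (plain Python, not HALCON): circularity
# C = F / (max^2 * pi), with F the number of region pixels and max the
# largest distance from the center of gravity to any contour pixel.
def circularity(pixels, contour):
    f = float(len(pixels))
    center_r = sum(r for r, _ in pixels) / f
    center_c = sum(c for _, c in pixels) / f
    dist_max = max(math.hypot(r - center_r, c - center_c) for r, c in contour)
    return f / (dist_max ** 2 * math.pi)
```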
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Circularity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Roundness of the input region(s).
Assertion : (0 ≤ Circularity) ∧ (Circularity ≤ 1.0)
Result
The operator circularity returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
circularity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
roundness, compactness, convexity, eccentricity
See also
area_center, select_shape
Module
Foundation
Calculation: If L is the length of the contour (see contlength) and F the area of the region the shape factor
C is defined as:
C = L² / (4 · F · π)
The shape factor C of a circle is 1. If the region is long or has holes C is larger than 1. The operator
compactness responds to the course of the contour (roughness) and to holes. In case of an empty region
the operator compactness returns the value 0 if no other behavior was set (see set_system). If more than
one region is passed the numerical values of the shape factor are stored in a tuple, the position of a value in the
tuple corresponding to the position of the region in the input tuple.
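The formula can be checked in plain Python (not HALCON code; names are ours) for ideal continuous shapes: a circle of any radius yields exactly 1, while shapes with longer contours relative to their area score higher:

```python
import math

# Illustrative sketch (plain Python, not HALCON): compactness
# C = L^2 / (4 * F * pi) for ideal continuous shapes.
def compactness(contour_length, area):
    return contour_length ** 2 / (4.0 * area * math.pi)

def rect_compactness(width, height):
    """Compactness of an ideal width x height rectangle."""
    return compactness(2.0 * (width + height), width * height)
```

For a circle (L = 2πr, F = πr²) this gives exactly 1; for a square it gives 4/π ≈ 1.27.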
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Compactness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Compactness of the input region(s).
Assertion : (Compactness ≥ 1.0) ∨ (Compactness = 0)
Result
The operator compactness returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
compactness is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
circularity, convexity, eccentricity
See also
contlength, area_center, select_shape
Module
Foundation
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HWindow w;
HRegionArray reg;
cout << "Draw " << NumOfElements << " regions " << endl;
w.Click ();
return(0);
}
Result
The operator contlength returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
contlength is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
get_region_contour
Alternatives
compactness
See also
area_center, get_region_contour
Module
Foundation
Calculation: If Fc is the area of the convex hull and Fo the original area of the region the shape factor C is defined
as:
C = Fo / Fc
The shape factor C is 1 if the region is convex (e.g., rectangle, circle etc.). If there are indentations or holes C is
smaller than 1.
In case of an empty region the operator convexity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the shape factor are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
Parameter
Possible Predecessors
threshold, regiongrowing, connection
See also
select_shape, area_center, shape_trans
Module
Foundation
The operator eccentricity calculates three shape features derived from the geometric moments.
Definition: If the parameters Ra, Rb and the area A of the region are given (see elliptic_axis), the following
applies:
Anisometry = Ra / Rb

Bulkiness = π · Ra · Rb / A

StructureFactor = Anisometry · Bulkiness − 1
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as math-
ematical, infinitely small points that are represented by the center of the pixels (see the documentation of
elliptic_axis). This can lead to non-empty regions that have Rb = 0. In these cases, the output features
that require a division by Rb are set to 0. In particular, regions that contain a single point or regions whose points
lie exactly on a straight line (e.g., one pixel high horizontal regions or one pixel wide vertical regions) have an
anisometry of 0.
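The three features can be reproduced directly from Ra, Rb, and the area. The following is an illustrative sketch (plain Python, not HALCON code) that also mirrors the Rb = 0 special case discussed above:

```python
import math

def eccentricity_features(ra, rb, area):
    """Anisometry, Bulkiness, StructureFactor as defined above.
    Features are set to 0 for an empty region or when rb == 0
    (features requiring a division by Rb are undefined then)."""
    if rb == 0.0 or area == 0.0:
        return 0.0, 0.0, 0.0
    anisometry = ra / rb
    bulkiness = math.pi * ra * rb / area
    structure_factor = anisometry * bulkiness - 1.0
    return anisometry, bulkiness, structure_factor

# For an ideal ellipse the area is pi*ra*rb, so Bulkiness == 1;
# for a circle (ra == rb) Anisometry == 1 and StructureFactor == 0.
aniso, bulk, sf = eccentricity_features(3.0, 3.0, math.pi * 9.0)
```

For real, discrete regions Ra, Rb, and A come from elliptic_axis and area_center, so the ideal values are only approximated.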
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Anisometry (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Shape feature (in case of a circle = 1.0).
Assertion : Anisometry ≥ 1.0
. Bulkiness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Calculated shape feature.
. StructureFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Calculated shape feature.
Complexity
If F is the area of the region, the mean runtime complexity is O(√F).
Result
The operator eccentricity returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
eccentricity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
See also
elliptic_axis, moments_region_2nd, select_shape, area_center
Module
Foundation
Calculation:
If the moments M20, M02, and M11 are normalized with respect to the area (see moments_region_2nd), the
radii Ra and Rb are calculated as:

Ra = sqrt(8 · (M20 + M02 + sqrt((M20 − M02)² + 4 · M11²))) / 2

Rb = sqrt(8 · (M20 + M02 − sqrt((M20 − M02)² + 4 · M11²))) / 2
The orientation Phi is defined by:
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system
(’no_object_result’,<Result>)).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as mathemat-
ical, infinitely small points that are represented by the center of the pixels. This means that Ra and Rb can assume
the value 0. In particular, for an empty region and a region containing a single point Ra = Rb = 0 is returned.
Furthermore, for regions whose points lie exactly on a straight line (e.g., one pixel high horizontal regions or one
pixel wide vertical regions), Rb = 0 is returned.
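The radii formulas can be checked on exact moment values. The following is an illustrative sketch (plain Python, not HALCON code), where m20, m02, and m11 are the area-normalized central moments:

```python
import math

def elliptic_radii(m20, m02, m11):
    """Ra, Rb of the ellipse with the given area-normalized second moments."""
    common = m20 + m02
    diff = math.sqrt((m20 - m02) ** 2 + 4.0 * m11 ** 2)
    ra = math.sqrt(8.0 * (common + diff)) / 2.0
    rb_sq = 8.0 * (common - diff)
    rb = math.sqrt(max(rb_sq, 0.0)) / 2.0  # guard against rounding below 0
    return ra, rb

# For a filled disk of radius r the normalized moments are
# m20 = m02 = r^2 / 4 and m11 = 0, which yields Ra = Rb = r.
ra, rb = elliptic_radii(1.0, 1.0, 0.0)  # r = 2 -> m20 = m02 = 1
```

The degenerate cases from the note above fall out naturally: a region whose points lie on a line has one vanishing moment and therefore Rb = 0.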
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
elliptic_axis(Seg,Ra,Rb,Phi)
area_center(Seg,_,Row,Column)
gen_ellipse(Ellipses,Row,Column,Phi,Ra,Rb)
set_draw(WindowHandle,’margin’)
disp_region(Ellipses,WindowHandle)
Complexity
If F is the area of a region, the mean runtime complexity is O(√F).
Result
The operator elliptic_axis returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
elliptic_axis is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
gen_ellipse
Alternatives
smallest_rectangle2, orientation_region
See also
moments_region_2nd, select_shape, set_shape
References
R. Haralick, L. Shapiro “Computer and Robot Vision” Addison-Wesley, 1992, pp. 73-75
Module
Foundation
• Regions1 is empty:
In this case all regions in Regions2 are checked against each other for neighborhood.
The operator find_neighbors uses the chessboard distance between neighboring regions; the maximum
distance is specified by the parameter MaxDistance. Neighboring regions are located at the n-th position in
RegionIndex1 and RegionIndex2, i.e., the region with index RegionIndex1[n] from Regions1 is
the neighbor of the region with index RegionIndex2[n] from Regions2.
Attention
Covered regions are not found!
Parameter
The returned indices can be used, e.g., in select_obj to select the regions containing the test pixel.
Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Line index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Index of the regions containing the test pixel.
Complexity
If F is the area of the region and N is the number of regions, the mean runtime complexity is O(ln(√F) · N).
Result
The operator get_region_index returns the value 2 (H_MSG_TRUE) if the parameters are correct.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_region_index is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
select_region_point
See also
get_mbutton, get_mposition, test_region_point
Module
Foundation
Result
The operator get_region_thickness returns the value 2 (H_MSG_TRUE) if exactly one region is
passed. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>).
Parallelization Information
get_region_thickness is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, connection, select_shape, select_obj
See also
copy_obj, elliptic_axis
Module
Foundation
The parameter Similarity describes the similarity between the two regions based on the hamming distance
Distance:

Similarity = 1 − Distance / (|Regions1| + |Regions2|)
If both regions are empty Similarity is set to 0. The regions with the same index from both input parameters
are always compared.
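Treating each region as a set of pixels, the distance and similarity can be sketched as follows (plain Python, not HALCON code); the hamming distance is the number of pixels contained in exactly one of the two regions:

```python
def region_hamming(region1, region2):
    """region1/region2: sets of (row, col) pixels.
    Returns (distance, similarity) as defined above."""
    distance = len(region1 ^ region2)  # pixels in exactly one region
    total = len(region1) + len(region2)
    if total == 0:  # both regions empty: similarity is defined as 0
        return 0, 0.0
    return distance, 1.0 - distance / total

a = {(0, 0), (0, 1), (1, 0)}
d_same, s_same = region_hamming(a, set(a))     # identical -> similarity 1
d_disj, s_disj = region_hamming(a, {(5, 5)})   # disjoint  -> similarity 0
```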
Attention
In both input parameters the same number of regions must be passed.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Hamming distance of two regions.
Assertion : Distance ≥ 0
. Similarity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Similarity of two regions.
Assertion : (0 ≤ Similarity) ∧ (Similarity ≤ 1)
Complexity√
If F is the area of a region the mean runtime complexity is O( F ).
Result
hamming_distance returns the value 2 (H_MSG_TRUE) if the number of objects in both parameters is the same
and is not 0. The behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is
set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hamming_distance is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
intersection, complement, area_center
See also
hamming_change_region
Module
Foundation
The parameter Similarity describes the similarity between the two regions based on the hamming distance
Distance:

Similarity = 1 − Distance / (|Norm(Regions1)| + |Regions2|)

’center’: The region is moved so that both regions have the same center of gravity.
If both regions are empty Similarity is set to 0. The regions with the same index from both input parameters
are always compared.
Attention
In both input parameters the same number of regions must be passed.
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
select_shape(Seg,H,’area’,’and’,100,2000)
inner_circle(H,Row,Column,Radius)
gen_circle(Circles,Row,Column,Radius)
set_draw(WindowHandle,’margin’)
disp_region(Circles,WindowHandle)
Complexity
If F is the area of the region and R is the radius of the inner circle, the runtime complexity is O(√F · R).
Result
The operator inner_circle returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
inner_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
gen_circle, disp_circle
Alternatives
erosion_circle, inner_rectangle1
See also
set_shape, select_shape, smallest_circle
Module
Foundation
Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F. Then the moments Mij
are defined by:

Mij = Σ_{(Z,S) ∈ R} (Z0 − Z)^i · (S0 − S)^j

The main axes of inertia follow from the second moments; with h = (M20 + M02) / 2:

Ia = h + sqrt(h² − M20 · M02 + M11²)
Ib = h − sqrt(h² − M20 · M02 + M11²)
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
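The main axes of inertia Ia and Ib are the eigenvalues of the matrix of second-order central moments. The following is a small sketch (plain Python, not HALCON code) of that computation:

```python
import math

def second_moments(pixels):
    """Central second moments M20, M02, M11 of a set of (row, col) pixels."""
    n = len(pixels)
    z0 = sum(p[0] for p in pixels) / n
    s0 = sum(p[1] for p in pixels) / n
    m20 = sum((z0 - z) ** 2 for z, s in pixels)
    m02 = sum((s0 - s) ** 2 for z, s in pixels)
    m11 = sum((z0 - z) * (s0 - s) for z, s in pixels)
    return m20, m02, m11

def principal_moments(m20, m02, m11):
    """Ia, Ib: eigenvalues of the matrix [[M20, M11], [M11, M02]]."""
    h = (m20 + m02) / 2.0
    d = math.sqrt(max(h * h - m20 * m02 + m11 * m11, 0.0))
    return h + d, h - d

# Two horizontally adjacent pixels: all variation lies along the column axis.
m20, m02, m11 = second_moments([(0, 0), (0, 1)])
ia, ib = principal_moments(m20, m02, m11)
```

The argument of the square root equals ((M20 − M02)/2)² + M11² and is therefore never negative; the max() guard only absorbs rounding error.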
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M11 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Product of inertia of the axes through the center parallel to the coordinate axes.
. M20 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order (line-dependent).
. M02 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order (column-dependent).
. Ia (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
The one main axis of inertia.
. Ib (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
The other main axis of inertia.
Complexity
If F is the area of the region, the mean runtime complexity is O(√F).
Result
The operator moments_region_2nd returns the value 2 (H_MSG_TRUE) if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (region is the empty set) is set
via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
moments_region_2nd is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd_invar
See also
elliptic_axis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M11 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Product of inertia of the axes through the center parallel to the coordinate axes.
. M20 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order (line-dependent).
. M02 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order (column-dependent).
Complexity
If F is the area of the region, the mean runtime complexity is O(√F).
Result
The operator moments_region_2nd_invar returns the value 2 (H_MSG_TRUE) if the input is not
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_2nd_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. PHI1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order.
. PHI2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order.
Result
The operator moments_region_2nd_rel_invar returns the value 2 (H_MSG_TRUE) if the input is not
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_2nd_rel_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
Calculation: x and y are the coordinates of the center of a region R with the area Z. Then the moments Mpq are
defined by:

Mpq = Σ_{i=1..Z} (xi − x)^p · (yi − y)^q

where x = m10 / m00 and y = m01 / m00.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M21 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 3rd order (line-dependent).
. M12 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 3rd order (column-dependent).
. M03 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 3rd order (column-dependent).
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_3rd_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. I1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order.
. I2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order.
. I3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 2nd order.
. I4 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Moment of 3rd order.
Complexity
If Z is the area of the region, the mean runtime complexity is O(√Z).
Result
The operator moments_region_central returns the value 2 (H_MSG_TRUE) if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_central is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
Orientation of a region.
The operator orientation_region calculates the orientation of the region. The operator is based on
elliptic_axis. In addition the point on the contour with maximal distance to the center of gravity is cal-
culated. If the column coordinate of this point is less than the column coordinate of the center of gravity the value
of π is added to the angle.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system
(’no_object_result’,<Result>)).
Parameter
To determine the rectangularity, first a rectangle is computed that has the same first and second order moments
as the input region. The computation of the rectangularity measure is finally based on the area of the difference
between the computed rectangle and the input region normalized with respect to the area of the rectangle.
For rectangles rectangularity returns the value 1. The more the input region deviates from a perfect
rectangle, the smaller the returned value of Rectangularity.
In case of an empty region the operator rectangularity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the rectangularity are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
Attention
For input regions whose orientation cannot be computed by using second order moments (as is the case for
square regions, for example), the returned Rectangularity is underestimated by up to 10% depending on the
orientation of the input region.
Parameter
Sigma² = (1 / F) · Σ (||p − p_i|| − Distance)²

Roundness = 1 − Sigma / Distance

Sides = 1.4111 · (Distance / Sigma)^0.4724
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
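Given the distances of the contour points from the center, the features can be sketched as follows (plain Python, not HALCON code; Distance denotes the mean contour distance):

```python
import math

def roundness_features(contour_distances):
    """contour_distances: distances ||p - p_i|| of contour points from the center.
    Returns (distance, sigma, roundness, sides)."""
    f = len(contour_distances)
    distance = sum(contour_distances) / f
    sigma = math.sqrt(sum((d - distance) ** 2 for d in contour_distances) / f)
    roundness = 1.0 - sigma / distance
    # Sides estimates the number of polygon sides; it is undefined for a
    # perfect circle (sigma == 0), handled here as infinity.
    sides = 1.4111 * (distance / sigma) ** 0.4724 if sigma > 0 else float('inf')
    return distance, sigma, roundness, sides

# Contour points alternating between radius 9 and 11 around the center:
dist, sig, rnd, sides = roundness_features([9.0, 11.0, 9.0, 11.0])
```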
Parameter
always 0 (no runs of the length 0). If there are no blanks the empty tuple is passed at Background. Analogously
the empty tuple is passed in case of an empty region at Foreground.
Parameter
KFactor = NumRuns / √Area
wherein Area indicates the area of the region. It should be noted that the K-factor can be smaller than 1.0 (in case
of long horizontal regions).
The L-factor (LFactor) indicates the mean number of runs for each line index occurring in the region.
MeanLength indicates the mean length of the runs. The parameter Bytes indicates how many bytes are neces-
sary for coding the region with runlengths.
Attention
None of the features calculated by the operator runlength_features is rotation invariant, because the
runlength coding depends on the direction. The operator runlength_features does not serve for calculating
shape features but for controlling and analysing the efficiency of the runlength coding.
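On a binary mask the features can be sketched as follows (plain Python, not HALCON code); a run is a maximal horizontal sequence of foreground pixels:

```python
import math

def runlength_stats(mask):
    """mask: list of rows of 0/1 values.
    Returns (num_runs, k_factor, l_factor, mean_length)."""
    runs = []                  # lengths of all foreground runs
    lines_with_runs = set()    # row indices that contain at least one run
    for r, row in enumerate(mask):
        length = 0
        for v in row + [0]:    # sentinel 0 closes a trailing run
            if v:
                length += 1
            elif length:
                runs.append(length)
                lines_with_runs.add(r)
                length = 0
    if not runs:               # empty region
        return 0, 0.0, 0.0, 0.0
    num_runs = len(runs)
    area = sum(runs)
    k_factor = num_runs / math.sqrt(area)
    l_factor = num_runs / len(lines_with_runs)
    mean_length = area / num_runs
    return num_runs, k_factor, l_factor, mean_length

n, k, l, m = runlength_stats([[1, 1, 0, 1],
                              [1, 1, 1, 1]])
```

The Bytes value depends on HALCON's internal encoding and is not reproduced here.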
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. NumRuns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Number of runs.
Assertion : 0 ≤ NumRuns
. KFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Storing factor in relation to a square.
Assertion : 0 ≤ KFactor
. LFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Mean number of runs per line.
Assertion : 0 ≤ LFactor
. MeanLength (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Mean length of runs.
Assertion : 0 ≤ MeanLength
. Bytes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Number of bytes necessary for coding the region.
Assertion : 0 ≤ Bytes
Complexity
The mean runtime complexity is O(1).
Result
The operator runlength_features returns the value 2 (H_MSG_TRUE) if the input is not empty. If neces-
sary an exception handling is raised.
Parallelization Information
runlength_features is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
See also
runlength_features, runlength_distribution
Module
Foundation
Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. DestRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
All regions containing the test pixel.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; integer
Line index of the test pixel.
Default Value : 100
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; integer
Column index of the test pixel.
Default Value : 100
Example
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
disp_image(Image)
regiongrowing(Image,Seg,3,3,5,0)
set_color(WindowHandle,’red’)
set_draw(WindowHandle,’margin’)
Button := 1
while (Button = 1)
fwrite_string(FileId,’Select the region with the mouse (End right button)’)
fnew_line(FileId)
get_mbutton(WindowHandle,Row,Column,Button)
select_region_point(Seg,Single,Row,Column)
disp_region(Single,WindowHandle)
endwhile
Complexity
If F is the area of the region and N is the number of regions, the mean runtime complexity is O(ln(√F) · N).
Result
The operator select_region_point returns the value 2 (H_MSG_TRUE) if the parameters are cor-
rect. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
select_region_point is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
test_region_point
See also
get_mbutton, get_mposition
Module
Foundation
select_region_spatial ( Regions1,
Regions2 : : Direction : RegionIndex1, RegionIndex2 )
• Regions1 is empty:
In this case all regions in Regions2 are checked against each other.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
The regions at the n-th position in Regions1 and Regions2 are each checked for a neighboring relation.
The operator select_region_spatial calculates the centers of the regions to be compared and decides
according to the angle between the line connecting the centers and the x axis whether the direction relation is
fulfilled. The relation is fulfilled within the range of -45 degrees to +45 degrees around the coordinate axes. Thus, the direction
relation can be understood in such a way that the center of the second region must be located left (or right, above,
below) of the center of the first region. The indices of the regions fulfilling the direction relation are located at the
n-th position in RegionIndex1 and RegionIndex2, i.e., the region with the index RegionIndex2[n] has
the indicated relation with the region with the index RegionIndex1[n]. Access to regions via the index can be
obtained via the operator copy_obj.
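The ±45-degree rule amounts to a simple classification of the center offset, sketched here in plain Python (not HALCON code; image coordinates with row increasing downwards):

```python
def direction_relation(center1, center2):
    """Classify where center2 lies relative to center1 under the
    +/-45 degree rule. Centers are (row, col) tuples.
    The boundary at exactly 45 degrees is assigned arbitrarily here."""
    dr = center2[0] - center1[0]
    dc = center2[1] - center1[1]
    if abs(dc) >= abs(dr):     # within 45 degrees of the horizontal axis
        return 'right' if dc > 0 else 'left'
    return 'below' if dr > 0 else 'above'

rel = direction_relation((10, 10), (12, 40))  # offset mostly along columns
```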
Parameter
If only one feature (Features) is used the value of Operation is meaningless. Several features are processed
in the sequence in which they are entered.
Parameter
Example
Result
The operator select_shape returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input objects available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of an empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
select_shape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
select_shape, select_gray, shape_trans, reduce_domain, count_obj
Alternatives
select_shape_std
See also
area_center, circularity, compactness, contlength, convexity, rectangularity,
elliptic_axis, eccentricity, inner_circle, smallest_circle,
smallest_rectangle1, smallest_rectangle2, inner_rectangle1, roundness,
connect_and_holes, diameter_region, orientation_region, moments_region_2nd,
moments_region_2nd_invar, moments_region_2nd_rel_invar, moments_region_3rd,
moments_region_3rd_invar, moments_region_central,
moments_region_central_invar, select_obj
Module
Foundation
’distance_dilate’ The minimum distance in the maximum norm from the edge of Pattern to the edge of every
region from Regions is determined (see distance_rr_min_dil).
’distance_contour’ The minimum Euclidean distance from the edge of Pattern to the edge of every region from Regions is determined (see distance_rr_min).
’distance_center’ The Euclidean distance from the center of Pattern to the center of every region from
Regions is determined.
’covers’ It is examined how well the region Pattern fits into the regions from Regions. If there is no shift such that Pattern is a subset of Regions, the overlap is 0. If Pattern corresponds to the region after a suitable shift, the overlap is 100. Otherwise, the area of the opening of Regions with Pattern is set in relation to the area of Regions (in percent).
’fits’ It is examined whether Pattern can be shifted in such a way that it fits in Regions. If this is possible the
corresponding region is copied from Regions. The parameters Min and Max are ignored.
’overlaps_abs’ The area of the intersection of Pattern and every region in Regions is computed.
’overlaps_rel’ The area of the intersection of Pattern and every region in Regions is computed. The relative overlap is the ratio of the area of the intersection and the area of the respective region in Regions (in percent).
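The ’covers’ variant can be sketched in HDevelop as follows (the pattern geometry and threshold values are illustrative assumptions):

```hdevelop
* Select all connected components that a circular pattern
* covers by at least 50 percent.
read_image (Image, 'fabrik')
threshold (Image, Light, 128, 255)
connection (Light, Regions)
gen_circle (Pattern, 100, 100, 20)
select_shape_proto (Regions, Pattern, 'covers', 50, 100)
```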
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region compared to Regions.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions fulfilling the condition.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Shape features to be checked.
Default Value : ’covers’
List of values : Feature ∈ {’distance_center’, ’distance_dilate’, ’distance_contour’, ’covers’, ’fits’,
’overlaps_abs’, ’overlaps_rel’}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Lower border of feature.
Default Value : 50.0
Suggested values : Min ∈ {0.0, 1.0, 5.0, 10.0, 20.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 99.0, 100.0,
200.0, 400.0}
Typical range of values : 0.0 ≤ Min
Minimum Increment : 0.001
Recommended Increment : 5.0
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Upper border of the feature.
Default Value : 100.0
Suggested values : Max ∈ {0.0, 10.0, 20.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 99.0, 100.0, 200.0, 300.0,
400.0}
Typical range of values : 0.0 ≤ Max
Minimum Increment : 0.001
Recommended Increment : 5.0
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
img.Display (w);
w.SetColor ("red");
seg.Display (w);
w.Click ();
return(0);
}
Result
The operator select_shape_proto returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of an empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
select_shape_proto is reentrant and processed without parallelization.
Possible Predecessors
connection, draw_region, gen_circle, gen_rectangle1, gen_rectangle2,
gen_ellipse
Possible Successors
select_gray, shape_trans, reduce_domain, count_obj
Alternatives
select_shape
See also
opening, erosion1, distance_rr_min_dil, distance_rr_min
Module
Foundation
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
select_shape(Seg,H,’area’,’and’,100,2000)
smallest_circle(H,Row,Column,Radius)
gen_circle(Circles,Row,Column,Radius)
set_draw(WindowHandle,’margin’)
disp_region(Circles,WindowHandle)
Complexity
If F is the area of the region, then the mean runtime complexity is O(√F).
Result
The operator smallest_circle returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of an empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
smallest_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
gen_circle, disp_circle
Alternatives
elliptic_axis, smallest_rectangle1, smallest_rectangle2
See also
set_shape, select_shape, inner_circle
Module
Foundation
Result
The operator smallest_rectangle1 returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of an empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
smallest_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_rectangle1, gen_rectangle1
Alternatives
smallest_rectangle2, area_center
See also
select_shape
Module
Foundation
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
smallest_rectangle2(Seg,Row,Column,Phi,Length1,Length2)
gen_rectangle2(Rectangle,Row,Column,Phi,Length1,Length2)
set_draw(WindowHandle,’margin’)
disp_region(Rectangle,WindowHandle)
Complexity
If F is the area of the region and N is the number of supporting points of the convex hull, the runtime complexity is O(√F + N²).
Result
The operator smallest_rectangle2 returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of an empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
smallest_rectangle2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_rectangle2, gen_rectangle2
Alternatives
elliptic_axis, smallest_rectangle1
See also
smallest_circle, set_shape
Module
Foundation
• Regions1 is empty:
In this case all regions in Regions2 are checked pairwise against each other for the neighborhood relation.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
Regions1 and Regions2 are checked for a neighboring relation.
The percentage Percent is interpreted such that at least Percent percent of the area of the second region must actually lie to the left/right of, or above/below, the margins of the first region. The indices of the regions that fulfill at least one of these conditions are then located at the n-th position in the output parameters RegionIndex1 and RegionIndex2. Additionally, the output parameters Relation1 and Relation2 contain at the n-th position the type of relation of the region pair (RegionIndex1[n], RegionIndex2[n]), i.e., the region with index RegionIndex2[n] has the relations Relation1[n] and Relation2[n] with the region with index RegionIndex1[n].
Possible values for Relation1 and Relation2 are:
In RegionIndex1 and RegionIndex2, the indices of the regions within the tuples of the input regions (Regions1 and Regions2, respectively) are entered as image identifiers. Selected regions can be accessed via their index using the operator copy_obj.
Parameter
14.4 Geometric-Transformations
affine_trans_region ( Region : RegionAffineTrans : HomMat2D,
Interpolate : )
As an effect, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the region, e.g., by operators like area_center. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric region and then rotate the region around this point using
hom_mat2d_rotate, the resulting region will not lie on the original one. In such a case, you can compensate
this effect by applying the following translations to HomMat2D before using it in affine_trans_region:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_region(Region, RegionAffineTrans, HomMat2DAdapted, ’false’)
Parameter
Possible Successors
select_shape
Alternatives
move_region, mirror_region, zoom_region
See also
affine_trans_image
Module
Foundation
read_image(&Image,"monkey");
threshold(Image,&Seg,128.0,255.0);
mirror_region(Seg,&Mirror,"row",512);
disp_region(Mirror,WindowHandle);
Parallelization Information
mirror_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
affine_trans_region
See also
zoom_region
Module
Foundation
Translate a region.
move_region translates the input regions by the vector given by (Row, Column). If necessary, the resulting
regions are clipped with the current image format.
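A minimal HDevelop sketch (the segmentation call and the offsets are illustrative assumptions):

```hdevelop
read_image (Image, 'fabrik')
threshold (Image, Region, 128, 255)
* Shift the region 100 rows down and 50 columns to the right.
move_region (Region, RegionMoved, 100, 50)
```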
Parameter
The polar transformation is a change of the coordinate system. Instead of a row and a column coordinate, each
point’s position is expressed by its radius r (i.e. the distance to the center point Row, Column) and the angle φ
between the column axis (through the center point) and the line from the center point towards the point. Note that
this transformation is not affine.
The coordinate (0, 0) in the output region always corresponds to the point in the input region that is specified by
RadiusStart and AngleStart. Analogously, the coordinate (Height − 1, Width − 1) corresponds to the
point in the input region that is specified by RadiusEnd and AngleEnd. In the usual mode (AngleStart
< AngleEnd and RadiusStart < RadiusEnd), the polar transformation is performed in the mathemati-
cally positive orientation (counterclockwise). Furthermore, points with smaller radii lie in the upper part of the
output region. By suitably exchanging the values of these parameters (e.g., AngleStart > AngleEnd or
RadiusStart > RadiusEnd), any desired orientation of the output region can be achieved.
The angles can be chosen from all real numbers. Center point and radii can be real as well. However, if they are
both integers and the difference of RadiusEnd and RadiusStart equals Height−1, calculation will be sped
up through an optimized routine.
The radii and angles are inclusive, which means that the first row of the virtual target image contains the circle
with radius RadiusStart and the last row contains the circle with radius RadiusEnd. For complete circles,
where the difference between AngleStart and AngleEnd equals 2π (360 degrees), this also means that the
first column of the target image will be the same as the last.
To avoid this, do not make this difference exactly 2π, but 2π(1 − 1/Width) instead.
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting Interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
If more than one region is passed in Region, their polar transformations are computed individually and stored
as a tuple in PolarTransRegion. Please note that the indices of an input region and its transformation only
correspond if the system variable ’store_empty_regions’ is set to ’true’ (see set_system). Otherwise empty
output regions are discarded and the length of the input tuple Region is most likely not equal to the length of the
output tuple PolarTransRegion.
Attention
If Width or Height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see set_system). Otherwise, an output region that does not lie within the
dimensions of the current image can produce an error message.
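A minimal HDevelop sketch (the center, radius, and output size values are illustrative assumptions):

```hdevelop
* Unwrap a circular region around its center into a
* 360 x 100 rectangular strip.
gen_circle (Circle, 256, 256, 100)
polar_trans_region (Circle, PolarRegion, 256, 256, 0, rad(360), 0, 100, 360, 100, 'nearest_neighbor')
```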
Parameter
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting Interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
polar_trans_region_inv is the inverse function of polar_trans_region.
The call sequence:
polar_trans_region(Region, PolarRegion, Row, Column, rad(360), 0, 0,
Radius, Width, Height, ’nearest_neighbor’)
polar_trans_region_inv(PolarRegion, XYTransRegion, Row, Column, rad(360),
0, 0, Radius, Width, Height, Width, Height, ’nearest_neighbor’)
returns the region Region, restricted to the circle around (Row, Column) with radius Radius, as its output
region XYTransRegion.
If more than one region is passed in PolarRegion, their Cartesian transformations are computed individually and stored as a tuple in XYTransRegion. Please note that the indices of an input region and its transformation only correspond if the system variable ’store_empty_regions’ is set to ’true’ (see set_system). Otherwise, empty output regions are discarded and the length of the input tuple PolarRegion is most likely not equal to the length of the output tuple XYTransRegion.
Attention
If Width or Height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see set_system). Otherwise, an output region that does not lie within the
dimensions of the current image can produce an error message.
Parameter
Column = (x + x0) / 2
Row = (y + y0) / 2 .
If Row and Column are set to the origin, the result is the transposition that is commonly used in morphology. Hence, transpose_region is often used to reflect (transpose) a structuring element.
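For instance, reflecting a structuring element about the origin can be sketched as follows (the element geometry is an illustrative assumption):

```hdevelop
* Transpose (point-reflect) a rectangular structuring element
* about the origin.
gen_rectangle2 (StructElement, 0, 0, 0.3, 8, 2)
transpose_region (StructElement, StructElementTrans, 0, 0)
```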
Parameter
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is O(√F).
Result
transpose_region returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Zoom a region.
zoom_region enlarges or reduces the regions given in Region in the x- and y-direction by the given scale
factors ScaleWidth and ScaleHeight.
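A minimal HDevelop sketch (the segmentation call and the scale factors are illustrative assumptions):

```hdevelop
read_image (Image, 'fabrik')
threshold (Image, Region, 128, 255)
* Double the width, keep the height unchanged.
zoom_region (Region, RegionZoom, 2.0, 1.0)
```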
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be zoomed.
. RegionZoom (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Zoomed region(s).
Number of elements : RegionZoom = Region
. ScaleWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; real
Scale factor in x-direction.
Default Value : 2.0
Suggested values : ScaleWidth ∈ {0.25, 0.5, 1.0, 2.0, 3.0}
Typical range of values : 0.0 ≤ ScaleWidth ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.5
. ScaleHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; real
Scale factor in y-direction.
Default Value : 2.0
Suggested values : ScaleHeight ∈ {0.25, 0.5, 1.0, 2.0, 3.0}
Typical range of values : 0.0 ≤ ScaleHeight ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.5
Parallelization Information
zoom_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
zoom_image_size, zoom_image_factor
Module
Foundation
14.5 Sets
complement ( Region : RegionComplement : : )
The resulting region is defined as the input region (Region) with all points from Sub removed.
Attention
Empty regions are valid for both parameters. On output, empty regions may result. The value of the system flag
’store_empty_region’ determines the behavior in this case.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. Sub (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
The union of these regions is subtracted from Region.
. RegionDifference (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Resulting region.
Example
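The example body is not present here; a minimal HDevelop sketch (the region geometries are illustrative assumptions) could be:

```hdevelop
* Subtract a circular hole from a rectangular region.
gen_rectangle1 (Rectangle, 100, 100, 300, 300)
gen_circle (Circle, 200, 200, 50)
difference (Rectangle, Circle, RegionDifference)
```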
Complexity
Let N be the number of regions, F1 be their average area, and F2 be the total area of all regions in Sub. Then the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
difference always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
difference is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape, disp_region
See also
intersection, union1, union2, complement, symm_difference
Module
Foundation
Complexity
Let N be the number of regions in Region1, F1 be their average area, and F2 be the total area of all regions in Region2. Then the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
intersection always returns 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given) can
be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
intersection is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
union1, union2, complement
Module
Foundation
Result
symm_difference always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
symm_difference is reentrant and processed without parallelization.
Possible Successors
select_shape, disp_region
See also
intersection, union1, union2, complement, difference
Module
Foundation
Complexity
Let F be the sum of all areas of the input regions. Then the runtime complexity is O(log(√F) ∗ √F).
Result
union1 always returns 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given) can be set via
set_system(’no_object_result’,<Result>) and the behavior in case of an empty input region via
set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
union1 is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
union2
See also
intersection, complement
Module
Foundation
14.6 Tests
test_equal_region ( Regions1, Regions2 : : : IsEqual )
Result
The operator test_equal_region returns the value 2 (H_MSG_TRUE) if the parameters are correct. The behavior in case of empty input (no input objects available) is set via the operator set_system(’no_object_result’,<Result>). If the numbers of objects differ, an exception is raised; otherwise, test_equal_region returns the value 2 (H_MSG_TRUE).
Parallelization Information
test_equal_region is reentrant and processed without parallelization.
Alternatives
intersection, complement, area_center
See also
test_equal_obj
Module
Foundation
Alternatives
union1, intersection, area_center
See also
select_region_point
Module
Foundation
14.7 Transformation
background_seg ( Foreground : BackgroundRegions : : )
/* Simulation of background_seg: */
background_seg(Foreground,BackgroundRegions):
complement(Foreground,Background)
get_system(’neighborhood’,Save)
set_system(’neighborhood’,4)
connection(Background,BackgroundRegions)
clear_obj(Background)
set_system(’neighborhood’,Save)
Complexity
Let F be the area of the background, H and W be the height and width of the image, and N be the number of resulting regions. Then the runtime complexity is O(H + √F ∗ √N).
Result
background_seg always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
background_seg is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
Alternatives
complement, connection
See also
threshold, hysteresis_threshold, skeleton, expand_region, set_system, sobel_amp,
edges_image, roberts, bandpass_image
Module
Foundation
Parameter
read_image(Image,’affe’)
set_colored(WindowHandle,12)
threshold(Image,Light,150.0,255.0)
count_obj(Light,Number1)
fwrite_string(’Number of regions after threshold = ’+Number1)
fnew_line()
disp_region(Light,WindowHandle)
connection(Light,Many)
count_obj(Many,Number2)
fwrite_string(’Number of regions after connection = ’+Number2)
fnew_line()
disp_region(Many,WindowHandle)
Complexity
Let F be the area of the input region and N be the number of generated connected components. Then the runtime complexity is O(√F ∗ √N).
Result
connection always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
connection is reentrant and processed without parallelization.
Possible Predecessors
auto_threshold, threshold, dyn_threshold, erosion1
Possible Successors
select_shape, select_gray, shape_trans, set_colored, dilation1, count_obj,
reduce_domain, add_channels
Alternatives
background_seg
See also
set_system, union1
Module
Foundation
output image should be large enough to contain the region. The extent of the input region can be obtained with
smallest_rectangle1.
The parameter Metric determines which metric is used for the calculation of the distances. If Metric = ’city-block’, the distance is calculated from the shortest path from the point to the border of the region, where only horizontal and vertical “movements” are allowed. They are weighted with a distance of 1. If Metric = ’chessboard’, the distance is calculated from the shortest path to the border, where horizontal, vertical, and diagonal “movements” are allowed. They are weighted with a distance of 1. If Metric = ’octagonal’, a combination of these approaches is used, which leads to diagonal paths getting a higher weight. If Metric = ’chamfer-3-4’, horizontal and vertical movements are weighted with a weight of 3, while diagonal movements are weighted with a weight of 4. To normalize the distances, the resulting distance image is divided by 3. Since this normalization step takes some time, and one usually is interested in the relative distances of the points, the normalization can be suppressed with Metric = ’chamfer-3-4-unnormalized’. Finally, if Metric = ’euclidean’, the computed distance is approximately Euclidean.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region for which the distance to the border is computed.
. DistanceImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : int4
Image containing the distance information.
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of metric to be used for the distance transformation.
Default Value : ’city-block’
List of values : Metric ∈ {’city-block’, ’chessboard’, ’octagonal’, ’chamfer-3-4’,
’chamfer-3-4-unnormalized’, ’euclidean’}
. Foreground (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Compute the distance for pixels inside (true) or outside (false) the input region.
Default Value : ’true’
List of values : Foreground ∈ {’true’, ’false’}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the output image.
Default Value : 640
Suggested values : Width ∈ {160, 192, 320, 384, 640, 768}
Typical range of values : 1 ≤ Width
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the output image.
Default Value : 480
Suggested values : Height ∈ {120, 144, 240, 288, 480, 576}
Typical range of values : 1 ≤ Height
Example
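The example body is not present here; a minimal HDevelop sketch (the threshold values and the follow-up threshold on the distance image are illustrative assumptions) could be:

```hdevelop
read_image (Image, 'fabrik')
threshold (Image, Region, 100, 255)
* Chamfer-3-4 distance inside the region, written to a 640 x 480 image.
distance_transform (Region, DistanceImage, 'chamfer-3-4', 'true', 640, 480)
* Keep only points at least 10 (normalized) pixels away from the border.
threshold (DistanceImage, Inner, 10, 100000)
```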
Complexity
The runtime complexity is O(Width ∗ Height).
Result
distance_transform returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
distance_transform is reentrant and processed without parallelization.
Possible Predecessors
threshold, dyn_threshold, regiongrowing
Possible Successors
threshold
See also
skeleton
References
P. Soille: “Morphological Image Analysis, Principles and Applications”; Springer Verlag Berlin Heidelberg New
York, 1999.
G. Borgefors: “Distance Transformations in Arbitrary Dimensions”; Computer Vision, Graphics, and Image Pro-
cessing, Vol. 27, pages 321–345, 1984.
P.E. Danielsson: “Euclidean Distance Mapping”; Computer Graphics and Image Processing, Vol. 14, pages 227–
248, 1980.
Module
Foundation
’image’ The input regions are expanded iteratively until they touch another region or the image border. In this
case, the image border is defined to be the rectangle ranging from (0,0) to (row_max,col_max). Here,
(row_max,col_max) corresponds to the lower right corner of the smallest surrounding rectangle of all input re-
gions (i.e., of all regions that are passed in Regions and ForbiddenArea). Because expand_region
processes all regions simultaneously, gaps between regions are distributed evenly to all regions. Overlapping
regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing the area of overlap evenly to the respective regions. Because the intersection with the original region is computed after the shrinking operation, gaps in the output regions may result, i.e., the segmentation is not complete. This can be prevented by calling expand_region a second time with the complement of the original regions as the “forbidden area.”
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the gaps are to be closed, or which are to be separated.
. ForbiddenArea (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Regions in which no expansion takes place.
. RegionExpanded (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Expanded or separated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer / string
Number of iterations.
Default Value : ’maximal’
Suggested values : Iterations ∈ {’maximal’, 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 50, 70, 100, 200}
Typical range of values : 0 ≤ Iterations ≤ 1000 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Expansion mode.
Default Value : ’image’
List of values : Mode ∈ {’image’, ’region’}
Example
read_image(Image,’fabrik’)
threshold(Image,Light,100,255)
disp_region(Light,WindowHandle)
connection(Light,Seg)
expand_region(Seg,[],Exp1,’maximal’,’image’)
set_colored(WindowHandle,12)
set_draw(WindowHandle,’margin’)
disp_region(Exp1,WindowHandle)
Result
expand_region always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input region(s).
. RegionFillUp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Output region(s) with filled holes.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Shape feature used.
Default Value : ’area’
List of values : Feature ∈ {’area’, ’compactness’, ’convexity’, ’anisometry’, ’phi’, ’ra’, ’rb’, ’inner_circle’,
’outer_circle’}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum value for Feature.
Default Value : 1.0
Suggested values : Min ∈ {0.0, 1.0, 10.0, 50.0, 100.0, 500.0, 1000.0, 10000.0}
Typical range of values : 0.0 ≤ Min
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximum value for Feature.
Default Value : 100.0
Suggested values : Max ∈ {10.0, 50.0, 100.0, 500.0, 1000.0, 10000.0, 100000.0}
Typical range of values : 0.0 ≤ Max
Example (Syntax: C)
read_image(&Image,"affe");
threshold(Image,&Seg,120.0,255.0);
fill_up_shape(Seg,&Filled,"area",0.0,200.0);
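The same example reads as follows in HDevelop syntax (a direct transcription of the C calls above):

```
read_image (Image, ’affe’)
threshold (Image, Seg, 120, 255)
fill_up_shape (Seg, Filled, ’area’, 0.0, 200.0)
```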
Result
fill_up_shape returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
fill_up_shape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
fill_up
See also
select_shape, connection, area_center
Module
Foundation
Parameter
. InputRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be modified.
. OutputRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions having the required Hamming distance.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the region to be changed.
Default Value : 100
Suggested values : Width ∈ {64, 128, 256, 512}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width > 0
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the region to be changed.
Default Value : 100
Suggested values : Height ∈ {64, 128, 256, 512}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height > 0
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Hamming distance between the old and new regions.
Default Value : 1000
Suggested values : Distance ∈ {100, 500, 1000, 5000, 10000}
Typical range of values : 0 ≤ Distance ≤ 10000 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (Distance ≥ 0) ∧ (Distance < (Width · Height))
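A minimal usage sketch in HDevelop syntax (the rectangle coordinates and parameter values are chosen for illustration only):

```
gen_rectangle1 (Rectangle, 100, 100, 199, 199)
* change 1000 of the 100 x 100 = 10000 pixels (Distance < Width * Height)
hamming_change_region (Rectangle, Changed, 100, 100, 1000)
```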
Complexity
Memory requirement of the generated region (worst case): O(2 ∗ Width ∗ Height).
Result
hamming_change_region returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case
of empty input (no regions given) can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
hamming_change_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
hamming_distance
Module
Foundation
complement(’full’,Region,Tmp)
skeleton(Tmp,Result)
’border’ If the input regions do not touch or overlap this mode is equivalent to boundary(Region,Result),
i.e., it replaces each region by its boundary. If regions are touching they are aggregated into one region. The
corresponding output region then contains the boundary of the aggregated region, as well as the one pixel
wide separating line between the original regions. This corresponds to the following calls:
boundary(Region,Tmp1,’inner’)
union1(Tmp1,Tmp2)
skeleton(Tmp2,Result)
’mixed’ In this mode the operator behaves like the mode ’medial_axis’ for non-overlapping regions. If regions
touch or overlap, again separating lines between the input regions are generated on output, but this time
including the “touching line” between regions, i.e., touching regions are separated by a line in the output
region. This corresponds to the following calls:
erosion1(Region,Mask,Tmp1,1)
union1(Tmp1,Tmp2)
complement(’full’,Tmp2,Tmp3)
skeleton(Tmp3,Result)
where Mask denotes the following “cross mask”:
  ×
× × ×
  ×
Example
read_image(Image,’wald1_rot’)
mean(Image,Mean,31,31)
dyn_threshold(Mean,Seg,20)
interjacent(Seg,Graph,’medial_axis’)
disp_region(Graph,WindowHandle)
Result
interjacent always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty
input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
interjacent is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
expand_region, junctions_skeleton, boundary
Module
Foundation
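A typical call sequence for junctions_skeleton in HDevelop syntax (a sketch; the image name and threshold values are illustrative):

```
read_image (Image, ’fabrik’)
threshold (Image, Region, 100, 255)
skeleton (Region, Skel)
junctions_skeleton (Skel, EndPoints, JuncPoints)
* removing the junction points decomposes the skeleton into single branches
difference (Skel, JuncPoints, SkelWithoutJunc)
connection (SkelWithoutJunc, SingleBranches)
```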
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
junctions_skeleton always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception is raised.
Parallelization Information
junctions_skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
skeleton
Possible Successors
area_center, connection, get_region_points, difference
See also
pruning, split_skeleton_region
Module
Foundation
merge_regions_line_scan ( CurrRegions, PrevRegions : CurrMergedRegions,
PrevMergedRegions : ImageHeight, MergeBorder, MaxImagesRegion : )
The operator merge_regions_line_scan connects adjacent regions that were segmented from adjacent images of height ImageHeight. This operator was especially designed to process regions that were
extracted from images grabbed by a line scan camera. CurrRegions contains the regions from the current image
and PrevRegions the regions from the previous one.
With the help of the parameter MergeBorder two cases can be distinguished: If the top (first) line of the current
image touches the bottom (last) line of the previous image, MergeBorder must be set to ’top’, otherwise set
MergeBorder to ’bottom’.
If the operator merge_regions_line_scan is used recursively, the parameter MaxImagesRegion determines the maximum number of images which are covered by a merged region. All older region parts are removed.
The operator merge_regions_line_scan returns two region arrays. PrevMergedRegions contains
all those regions from the previous input regions PrevRegions, which could not be merged with a current
region. CurrMergedRegions collects all current regions together with the merged parts from the previ-
ous images. Merged regions will exceed the original image, because the previous regions are moved upward
(MergeBorder=’top’) or downward (MergeBorder=’bottom’) according to the image height. For this, the system parameter ’clip_region’ (see also set_system) is internally set to ’false’.
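The typical use of merge_regions_line_scan inside a line scan acquisition loop can be sketched as follows in HDevelop syntax (the acquisition handle, the threshold values, the loop count, and the image height of 512 are assumptions for illustration):

```
gen_empty_obj (PrevRegions)
for I := 1 to 100 by 1
    grab_image_async (Image, AcqHandle, -1)
    threshold (Image, Raw, 128, 255)
    connection (Raw, CurrRegions)
    merge_regions_line_scan (CurrRegions, PrevRegions, CurrMergedRegions, FinishedRegions, 512, ’top’, 3)
    * FinishedRegions holds previous regions that could not be merged with
    * the current image and are therefore complete
    copy_obj (CurrMergedRegions, PrevRegions, 1, -1)
endfor
```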
Parameter
. CurrRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Current input regions.
. PrevRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Merged regions from the previous iteration.
. CurrMergedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Current regions, merged with old ones where applicable.
. PrevMergedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions from the previous iteration which could not be merged with the current ones.
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the line scan images.
Default Value : 512
List of values : ImageHeight ∈ {240, 480, 512}
. MergeBorder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Image line of the current image, which touches the previous image.
Default Value : ’top’
List of values : MergeBorder ∈ {’top’, ’bottom’}
. MaxImagesRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum number of images for a single region.
Default Value : 3
Suggested values : MaxImagesRegion ∈ {1, 2, 3, 4, 5}
Result
The operator merge_regions_line_scan returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
merge_regions_line_scan is reentrant and processed without parallelization.
Module
Foundation
The distance between the initial split positions is calculated by dividing the width of the input region by n. Note that the
distance between these initial split positions is typically not identical to Distance. Then, the final split positions
are determined in the neighborhood of the initial split positions such that the input region is split at positions where
it has the least vertical extent within this neighborhood. The maximum deviation of the final split position from
the initial split position is Distance*Percent*0.01.
The resulting regions are returned in Partitioned. Note that the input region is only partitioned if its width is
larger than 1.5 times Distance.
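A small usage sketch in HDevelop syntax (the segmentation step is illustrative): a region is split into parts of roughly 25 pixels width, with the final split positions allowed to deviate by up to 20 percent of that distance:

```
threshold (Image, Region, 0, 128)
partition_dynamic (Region, Partitioned, 25, 20)
```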
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be partitioned.
. Partitioned (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Partitioned region.
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Approximate width of the resulting region parts.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum shift of the split position in percent.
Default Value : 20
Suggested values : Percent ∈ {0, 10, 20, 30, 40, 50, 70, 90, 100}
Typical range of values : 0 ≤ Percent ≤ 100
Result
partition_dynamic returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
input (no regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>), and the behav-
ior in case of an empty result region via set_system(’store_empty_region’,<true/false>). If
necessary, an exception is raised.
Parallelization Information
partition_dynamic is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection
Alternatives
partition_rectangle
See also
intersection, smallest_rectangle1, shape_trans, clip_region
Module
Foundation
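partition_rectangle tessellates the input region into rectangles of a given size; a minimal sketch in HDevelop syntax (the segmentation step and the tile size are illustrative):

```
threshold (Image, Region, 128, 255)
* split the region into tiles of approximately 100 x 100 pixels
partition_rectangle (Region, Partitioned, 100, 100)
```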
Result
partition_rectangle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input (no regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
partition_rectangle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection
Alternatives
partition_dynamic
See also
intersection, smallest_rectangle1, shape_trans, clip_region
Module
Foundation
Number = (Height ∗ Width) / 2
read_image(Image,’affe’)
mean_image(Image,Mean,5,5)
dyn_threshold(Mean,Points,25)
rank_region(Points,Textur,15,15,30)
gen_circle(Mask,10,10,3)
opening1(Textur,Mask,Seg)
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ∗ 8).
Result
rank_region returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception is raised.
Parallelization Information
rank_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape, disp_region
Alternatives
closing_rectangle1, expand_region
See also
rank_image, mean_image
Module
Foundation
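A minimal usage sketch for remove_noise_region in HDevelop syntax, reusing the segmentation from the rank_region example (the mode ’n_4’ is an assumption based on the usual HALCON naming; check the parameter list of your version):

```
read_image (Image, ’affe’)
mean_image (Image, Mean, 5, 5)
dyn_threshold (Mean, Points, 25)
remove_noise_region (Points, InnerPoints, ’n_4’)
```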
Complexity
Let F be the area of the input region. Then the runtime complexity is O(√F ∗ 4).
Result
remove_noise_region returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of
empty input (no regions given) can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
remove_noise_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
dilation1, intersection, gen_region_points
Module
Foundation
Attention
If Type = ’outer_circle’ is selected, it might happen that the resulting circular region does not completely cover the input region. This is because internally the operators smallest_circle and gen_circle are used to compute the outer circle. As described in the documentation of smallest_circle, the calculated radius can be too small by up to 1/√2 − 0.5 pixels. Additionally, the circle that is generated by gen_circle is translated by up to 0.5 pixels in both directions, i.e., by up to 1/√2 pixels. Consequently, when adding up both effects, the original region might protrude beyond the returned circular region by at most 1 pixel.
Parameter
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
shape_trans returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
shape_trans is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, regiongrowing
Possible Successors
disp_region, regiongrowing_mean, area_center
See also
convexity, elliptic_axis, area_center, smallest_rectangle1,
smallest_rectangle2, inner_rectangle1, set_shape, select_shape, inner_circle
Module
Foundation
’character’ The regions will be treated like characters in a row and will be sorted according to their order in the
line: If two regions overlap horizontally, they will be sorted with respect to their column values, otherwise
they will be sorted with regard to their row values. To be able to sort a line correctly, all regions in the line
must overlap each other vertically. Furthermore, the regions in adjacent rows must not overlap.
’first_point’ The point with the lowest column value in the first row of the region.
’last_point’ The point with the highest column value in the last row of the region.
’upper_left’ Upper left corner of the surrounding rectangle.
’upper_right’ Upper right corner of the surrounding rectangle.
’lower_left’ Lower left corner of the surrounding rectangle.
’lower_right’ Lower right corner of the surrounding rectangle.
The parameter Order determines whether the sorting order is increasing or decreasing: using ’true’ the order will
be increasing, using ’false’ the order will be decreasing.
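For reading text, the ’character’ mode is typically combined with a segmentation of the individual characters; a sketch in HDevelop syntax (the threshold values are illustrative):

```
threshold (Image, Dark, 0, 100)
connection (Dark, Characters)
* sort the characters line by line, from left to right
sort_region (Characters, SortedCharacters, ’character’, ’true’, ’row’)
```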
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be sorted.
. SortedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Sorted regions.
. SortMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Kind of sorting.
Default Value : ’first_point’
List of values : SortMode ∈ {’character’, ’first_point’, ’last_point’, ’upper_left’, ’lower_left’, ’upper_right’,
’lower_right’}
. Order (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Increasing or decreasing sorting order.
Default Value : ’true’
List of values : Order ∈ {’true’, ’false’}
. RowOrCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Sorting first with respect to row, then to column.
Default Value : ’row’
List of values : RowOrCol ∈ {’row’, ’column’}
Result
If the parameters are correct, the operator sort_region returns the value 2 (H_MSG_TRUE). Otherwise an
exception will be raised.
Parallelization Information
sort_region is reentrant and processed without parallelization.
Possible Successors
do_ocr_multi, do_ocr_single
Module
Foundation
split_skeleton_lines splits lines represented by one pixel wide, non-branching regions into shorter lines
based on their curvature. A line is split if the maximum distance of a point on the line to the line segment
connecting its end points is larger than MaxDistance (split & merge algorithm). The start and end points of
the approximating line segments are returned in BeginRow, BeginCol, EndRow, and EndCol.
Attention
The input regions must represent non-branching lines, that is single branches of the skeleton.
Example
read_image(Image,’fabrik’)
edges_image (Image, ImaAmp, ImaDir, ’lanser2’, 0.5, ’nms’, 8, 16)
threshold (ImaAmp, RawEdges, 8, 255)
skeleton (RawEdges, Skeleton)
junctions_skeleton (Skeleton, EndPoints, JuncPoints)
difference (Skeleton, JuncPoints, SkelWithoutJunc)
connection (SkelWithoutJunc, SingleBranches)
select_shape (SingleBranches, SelectedBranches, ’area’, ’and’, 16, 99999)
split_skeleton_lines (SelectedBranches, 3, BeginRow, BeginCol, EndRow, EndCol)
Result
split_skeleton_lines always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case
of an empty input region via set_system(’empty_region_result’,<Result>), and the behavior in
case of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
split_skeleton_lines is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, select_shape, skeleton, junctions_skeleton, difference
Possible Successors
select_lines, partition_lines, disp_line
See also
split_skeleton_region, detect_edge_segments
Module
Foundation
split_skeleton_region ( SkeletonRegion : RegionLines : MaxDistance : )
read_image(Image,’fabrik’)
edges_image (Image, ImaAmp, ImaDir, ’lanser2’, 0.5, ’nms’, 8, 16)
threshold (ImaAmp, RawEdges, 8, 255)
skeleton (RawEdges, Skeleton)
junctions_skeleton (Skeleton, EndPoints, JuncPoints)
difference (Skeleton, JuncPoints, SkelWithoutJunc)
connection (SkelWithoutJunc, SingleBranches)
select_shape (SingleBranches, SelectedBranches, ’area’, ’and’, 16, 99999)
split_skeleton_region (SelectedBranches, Lines, 3)
Result
split_skeleton_region always returns the value 2 (H_MSG_TRUE). The behavior in case of empty in-
put (no regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>), and the behav-
ior in case of an empty result region via set_system(’store_empty_region’,<true/false>). If
necessary, an exception is raised.
Parallelization Information
split_skeleton_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, select_shape, skeleton, junctions_skeleton, difference
Possible Successors
count_obj, select_shape, select_obj, area_center, elliptic_axis,
smallest_rectangle2, get_region_polygon, get_region_contour
See also
split_skeleton_lines, get_region_polygon, gen_polygons_xld
Module
Foundation
Segmentation
15.1 Classification
add_samples_image_class_gmm ( Image, ClassRegions : : GMMHandle, Randomize : )
Add training samples from an image to the training data of a Gaussian Mixture Model.
add_samples_image_class_gmm adds training samples from the image Image to the Gaussian Mixture Model (GMM) given by GMMHandle. add_samples_image_class_gmm is used to store the training samples before training a classifier for the pixel classification of multichannel images with classify_image_class_gmm. add_samples_image_class_gmm works analogously
to add_sample_class_gmm. The Image must have a number of channels equal to NumDim, as spec-
ified with create_class_gmm. The training regions for the NumClasses pixel classes are passed in
ClassRegions. Hence, ClassRegions must be a tuple containing NumClasses regions. The order of
the regions in ClassRegions determines the class of the pixels. If there are no samples for a particular class
in Image an empty region must be passed at the position of the class in ClassRegions. With this mecha-
nism it is possible to use multiple images to add training samples for all relevant classes to the GMM by calling
add_samples_image_class_gmm multiple times with the different images and suitably chosen regions. The
regions in ClassRegions should contain representative training samples for the respective classes; hence, they need not cover the entire image. The regions in ClassRegions should not overlap each other, because samples from the overlapping areas would then be assigned to multiple classes in the training data, which may lower the classification performance. Image data of integer type can be particularly ill-suited for modeling with a GMM. Randomize can be used to overcome this problem, as explained in add_sample_class_gmm.
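The mechanism of passing one region per class, including an empty region for a class that has no samples in the current image, can be sketched as follows in HDevelop syntax (the create_class_gmm and train_class_gmm parameter values are illustrative; Class0 and Class1 are assumed to be previously obtained training regions):

```
* 3-channel images, 3 classes, 1 to 5 Gaussian centers per class
create_class_gmm (3, 3, [1,5], ’spherical’, ’normalization’, 0, 42, GMMHandle)
* class 2 has no samples in this image, so an empty region is passed
gen_empty_region (EmptyRegion)
concat_obj (Class0, Class1, Classes01)
concat_obj (Classes01, EmptyRegion, ClassRegions)
add_samples_image_class_gmm (Image, ClassRegions, GMMHandle, 2.0)
train_class_gmm (GMMHandle, 100, 0.001, ’training’, 0.0001, Centers, Iter)
```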
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; integer
GMM handle.
. Randomize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Standard deviation of the Gaussian noise added to the training data.
Default Value : 0.0
Suggested values : Randomize ∈ {0.0, 1.5, 2.0}
Restriction : Randomize ≥ 0.0
Result
If the parameters are valid, the operator add_samples_image_class_gmm returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
add_samples_image_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm, write_samples_class_gmm
Alternatives
read_samples_class_gmm
See also
classify_image_class_gmm, add_sample_class_gmm, clear_samples_class_gmm,
get_sample_num_class_gmm, get_sample_class_gmm
Module
Foundation
Add training samples from an image to the training data of a multilayer perceptron.
add_samples_image_class_mlp adds training samples from the image Image to the multilayer perceptron (MLP) given by MLPHandle. add_samples_image_class_mlp is used to store the training samples before training a classifier for the pixel classification of multichannel images with classify_image_class_mlp. add_samples_image_class_mlp works analogously to
add_sample_class_mlp. Because here the MLP is always used for classification, OutputFunction =
’softmax’ must be specified when the MLP is created with create_class_mlp. The image Image must have
a number of channels equal to NumInput, as specified with create_class_mlp. The training regions for
the NumOutput pixel classes are passed in ClassRegions. Hence, ClassRegions must be a tuple con-
taining NumOutput regions. The order of the regions in ClassRegions determines the class of the pixels. If
there are no samples for a particular class in Image an empty region must be passed at the position of the class
in ClassRegions. With this mechanism it is possible to use multiple images to add training samples for all
relevant classes to the MLP by calling add_samples_image_class_mlp multiple times with the different
images and suitably chosen regions. The regions in ClassRegions should contain representative training sam-
ples for the respective classes. Hence, they need not cover the entire image. The regions in ClassRegions
should not overlap each other, because samples from the overlapping areas would then be assigned to multiple classes in the training data, which may lead to slower convergence of the training and a lower classification performance.
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; integer
MLP handle.
Result
If the parameters are valid, the operator add_samples_image_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
add_samples_image_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
classify_image_class_mlp, add_sample_class_mlp, clear_samples_class_mlp,
get_sample_num_class_mlp, get_sample_class_mlp, add_samples_image_class_svm
Module
Foundation
Add training samples from an image to the training data of a support vector machine.
add_samples_image_class_svm adds training samples from the image Image to the support vec-
tor machine (SVM) given by SVMHandle. add_samples_image_class_svm is used to store
the training samples before training a classifier for the pixel classification of multichannel images
with classify_image_class_svm. add_samples_image_class_svm works analogously to
add_sample_class_svm.
The image Image must have a number of channels equal to NumFeatures, as specified with
create_class_svm. The training regions for the NumClasses pixel classes are passed in ClassRegions.
Hence, ClassRegions must be a tuple containing NumClasses regions. The order of the regions in
ClassRegions determines the class of the pixels. If there are no samples for a particular class in Image,
an empty region must be passed at the position of the class in ClassRegions. With this mechanism it
is possible to use multiple images to add training samples for all relevant classes to the SVM by calling
add_samples_image_class_svm multiple times with the different images and suitably chosen regions.
The regions in ClassRegions should contain representative training samples for the respective classes. Hence,
they need not cover the entire image. The regions in ClassRegions should not overlap each other, because
samples from the overlapping areas would then be assigned to multiple classes in the training data, which may lead to slower convergence of the training and a lower classification performance.
A further application of this operator is automatic novelty detection, where, e.g., anomalies in color or texture
can be detected. For this mode a training set that defines a sample region (e.g., skin regions for skin detection or
samples of the correct texture) is passed to the SVMHandle, which is created in the Mode ’novelty-detection’.
After training, regions that differ from the trained sample regions are detected (e.g., the rejection class for skin or
errors in texture).
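A sketch of the novelty detection mode in HDevelop syntax (the kernel and training parameter values are illustrative; SampleRegion is assumed to contain correct samples, e.g., skin pixels or flawless texture):

```
* 3-channel (e.g. RGB) images, one-class SVM for novelty detection
create_class_svm (3, ’rbf’, 0.01, 0.0005, 1, ’novelty-detection’, ’normalization’, 3, SVMHandle)
add_samples_image_class_svm (Image, SampleRegion, SVMHandle)
train_class_svm (SVMHandle, 0.001, ’default’)
* pixels that differ from the trained samples end up in the returned region
classify_image_class_svm (TestImage, Novel, SVMHandle)
```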
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; integer
SVM handle.
Result
If the parameters are valid, add_samples_image_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
add_samples_image_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_svm
Possible Successors
train_class_svm, write_samples_class_svm
Alternatives
read_samples_class_svm
See also
classify_image_class_svm, add_sample_class_svm, clear_samples_class_svm,
get_sample_num_class_svm, get_sample_class_svm, add_samples_image_class_mlp
Module
Foundation
(gr, gc) ∈ FeatureSpace
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HWindow win;
long nc;
image.Display (win);
win.SetColor ("green");
cout << "Draw the region of interrest " << endl;
win.SetDraw ("fill");
win.SetColor ("red");
feats.Display (win);
win.SetColor ("blue");
cd2reg.Display (win);
Complexity
Let A be the area of the input region. Then the runtime complexity is O(256² + A).
Result
class_2dim_sup returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to the input images and output regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_2dim_sup is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
histo_2dim, threshold, draw_region, dilation1, opening, shape_trans
Possible Successors
connection, select_shape, select_gray
Alternatives
class_ndim_norm, class_ndim_box, threshold
See also
histo_2dim
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HWindow w;
long nc;
colimg.Display (w);
w.SetDraw ("margin");
w.SetColored (12);
seg.Display (w);
w.Click ();
return (0);
}
Result
class_2dim_unsup returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_2dim_unsup is reentrant and processed without parallelization.
Possible Predecessors
decompose2, decompose3, median_image, anisotropic_diffusion, reduce_domain
Possible Successors
select_shape, select_gray, connection
Alternatives
threshold, histo_2dim, class_2dim_sup, class_ndim_norm, class_ndim_box
Module
Foundation
read_image(Image,’meer’)
disp_image(Image,WindowHandle)
set_color(WindowHandle,’green’)
fwrite_string(’Draw the learning region’)
fnew_line()
draw_region(Reg1,WindowHandle)
reduce_domain(Image,Reg1,Foreground)
set_color(WindowHandle,’red’)
fwrite_string(’Draw Background’)
fnew_line()
draw_region(Reg2,WindowHandle)
reduce_domain(Image,Reg2,Background)
fwrite_string(’Training’)
fnew_line()
create_class_box(ClassifHandle)
learn_ndim_box(Foreground,Background,Image,ClassifHandle)
fwrite_string(’Classification’)
fnew_line()
class_ndim_box(Image,Res,ClassifHandle)
set_draw(WindowHandle,’fill’)
disp_region(Res,WindowHandle)
close_class_box(ClassifHandle).
Complexity
Let N be the number of hyper-cuboids and A be the area of the input region. Then the runtime complexity is
O(N ∗ A).
Result
class_ndim_box returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_ndim_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, median_image, compose2, compose3, compose4,
compose5, compose6, compose7
Alternatives
class_ndim_norm, class_2dim_sup, class_2dim_unsup
See also
descript_class_box
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main ()
{
HImage image ("meer"),
t1, t2, t3,
m1, m2, m3, m;
HWindow w;
w.SetColor ("green");
image.Display (w);
HRegion empty;
Tuple cen, t;
w.SetColored (12);
reg.Display (w);
cout << "Result of classification" << endl;
return (0);
}
Complexity
Let N be the number of clusters and A be the area of the input region. Then the runtime complexity is O(N ∗ A).
Result
class_ndim_norm returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_ndim_norm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
learn_ndim_norm, compose2, compose3, compose4, compose5, compose6, compose7
Possible Successors
connection, select_shape, reduce_domain, select_gray
Alternatives
class_ndim_box, class_2dim_sup, class_2dim_unsup
Module
Foundation
Result
If the parameters are valid, the operator classify_image_class_gmm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Parallelization Information
classify_image_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
See also
add_samples_image_class_gmm, create_class_gmm
Module
Foundation
Result
If the parameters are valid, the operator classify_image_class_mlp returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Parallelization Information
classify_image_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
classify_image_class_svm, class_ndim_box, class_ndim_norm, class_2dim_sup
See also
add_samples_image_class_mlp, create_class_mlp
Module
Foundation
Classes := [Classes,IC]
create_class_svm (3, ’rbf’, 0.01, 0.01, 4, ’one-versus-all’,
’normalization’, 3, SVMHandle)
add_samples_image_class_svm (Image, Classes, SVMHandle)
train_class_svm (SVMHandle, 0.001, ’default’)
reduce_class_svm (SVMHandle, ’bottom_up’, 2, 0.01, SVMHandleReduced)
classify_image_class_svm (Image, ClassRegions, SVMHandleReduced)
dev_display (ClassRegions)
clear_class_svm (SVMHandleReduced)
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator classify_image_class_svm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Parallelization Information
classify_image_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm, read_class_svm, reduce_class_svm
Alternatives
classify_image_class_mlp, class_ndim_box, class_ndim_norm, class_2dim_sup
See also
add_samples_image_class_svm, create_class_svm
Module
Foundation
Complexity
Let N be the number of generated hyper-cuboids and A be the area of the larger input region. Then the runtime
complexity is O(N ∗ A).
Result
learn_ndim_box returns 2 (H_MSG_TRUE) if all parameters are correct and there is an active classifier.
The behavior with respect to the input images can be determined by setting the values of the flags ’no_object_result’
and ’empty_region_result’ with set_system. If necessary, an exception is raised.
Parallelization Information
learn_ndim_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, draw_region
Possible Successors
class_ndim_box, descript_class_box
Alternatives
learn_class_box, learn_ndim_norm
Module
Foundation
15.2 Edges
(see sobel_amp). Only pixels with a filter response larger than MinAmplitude are used as candidates for
edge points. These thresholded edge points are thinned and split into straight segments. Due to technical reasons,
edge points in which several edges meet are lost. Therefore, detect_edge_segments usually does not return
closed object contours. The parameter MaxDistance controls the maximum allowed distance of an edge point
to its approximating line. For efficiency reasons, the sum of the absolute values of the coordinate differences is
used instead of the Euclidean distance. MinLength controls the minimum length of the line segments. Lines
shorter than MinLength are not returned.
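As an illustration of the splitting criterion described above, the following plain-Python sketch recursively splits a chain of edge points into straight segments, using the sum of absolute coordinate differences (L1 distance) in place of the Euclidean distance. The function names are ours, not HALCON operators.

```python
# Illustrative sketch (not HALCON API): split a chain of edge points into
# straight segments so that no point deviates from its approximating line
# by more than max_distance, measured as |dr| + |dc| (L1) for speed.

def l1_deviation(p, a, b):
    """Approximate L1 distance from point p to the line through a and b."""
    (r, c), (r0, c0), (r1, c1) = p, a, b
    dr, dc = r1 - r0, c1 - c0
    length2 = dr * dr + dc * dc
    if length2 == 0:
        return abs(r - r0) + abs(c - c0)
    t = ((r - r0) * dr + (c - c0) * dc) / length2
    fr, fc = r0 + t * dr, c0 + t * dc   # foot point on the line
    return abs(r - fr) + abs(c - fc)

def split_segments(points, max_distance, min_length):
    """Recursively split; keep segments whose endpoints are far enough apart."""
    def rec(lo, hi, out):
        worst, idx = 0.0, None
        for i in range(lo + 1, hi):
            d = l1_deviation(points[i], points[lo], points[hi])
            if d > worst:
                worst, idx = d, i
        if idx is not None and worst > max_distance:
            rec(lo, idx, out)
            rec(idx, hi, out)
        else:
            (r0, c0), (r1, c1) = points[lo], points[hi]
            if abs(r1 - r0) + abs(c1 - c0) >= min_length:
                out.append((points[lo], points[hi]))
    result = []
    rec(0, len(points) - 1, result)
    return result
```

For an L-shaped chain of points, the sketch returns the two straight legs, dropping any segment whose endpoints are closer than min_length.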
Parameter
Htuple SobelSize,MinAmplitude,MaxDistance,MinLength;
Htuple RowBegin,ColBegin,RowEnd,ColEnd;
create_tuple(&SobelSize,1);
set_i(SobelSize,5,0);
create_tuple(&MinAmplitude,1);
set_i(MinAmplitude,32,0);
create_tuple(&MaxDistance,1);
set_i(MaxDistance,3,0);
create_tuple(&MinLength,1);
set_i(MinLength,10,0);
T_detect_edge_segments(Image,SobelSize,MinAmplitude,MaxDistance,MinLength,
&RowBegin,&ColBegin,&RowEnd,&ColEnd);
Result
detect_edge_segments returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the
behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
detect_edge_segments is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
sigma_image, median_image
Possible Successors
select_lines, partition_lines, select_lines_longest, line_position,
line_orientation
Alternatives
sobel_amp, threshold, skeleton
Module
Foundation
’hvnms’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values within
a search space of ± 5 pixels, either horizontally or vertically. Non-maximum points are removed from the
region, gray values remain unchanged.
’loc_max’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values of its
eight neighbors.
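The two labeling criteria can be sketched in plain Python on a list-of-lists image; these helper functions are illustrative, not HALCON API.

```python
# Illustrative sketch (not HALCON API) of the two label criteria of
# nonmax_suppression_amp on a plain list-of-lists "image".

def is_loc_max(img, r, c):
    """'loc_max': gray value >= all eight neighbors."""
    g = img[r][c]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]) and img[rr][cc] > g:
                return False
    return True

def is_hv_max(img, r, c, radius=5):
    """'hvnms': gray value >= all values within +-radius, horizontally OR vertically."""
    g = img[r][c]
    row_ok = all(img[r][cc] <= g
                 for cc in range(max(0, c - radius), min(len(img[0]), c + radius + 1)))
    col_ok = all(img[rr][c] <= g
                 for rr in range(max(0, r - radius), min(len(img), r + radius + 1)))
    return row_ok or col_ok
```

Note that ’hvnms’ accepts a point if either the horizontal or the vertical test succeeds, so ridge points survive along their ridge direction.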
Parameter
. ImgAmp (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Amplitude (gradient magnitude) image.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Image with thinned edge regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select horizontal/vertical or undirected NMS.
Default Value : ’hvnms’
List of values : Mode ∈ {’hvnms’, ’loc_max’}
Result
nonmax_suppression_amp returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with re-
spect to the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
nonmax_suppression_amp is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
sobel_amp
Possible Successors
threshold, hysteresis_threshold
Alternatives
local_max, nonmax_suppression_dir
See also
skeleton
References
S. Lanser: "Detektion von Stufenkanten mittels rekursiver Filter nach Deriche"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J. Canny: "Finding Edges and Lines in Images"; Report AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge, MA; 1983.
Module
Foundation
’nms’ Each point in the image is tested whether its gray value is a local maximum perpendicular to its direction.
In this mode only the two neighbors closest to the given direction are examined. If one of the two gray values
is greater than the gray value of the point to be tested, the point is suppressed (i.e., removed from the input
region; the corresponding gray value remains unchanged).
’inms’ Like ’nms’. However, the two gray values for the test are obtained by interpolation from four adjacent
points.
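A much simplified sketch of the ’nms’ test on a plain amplitude grid follows; the direction quantization and neighbor choice are our own simplifications, not the HALCON implementation.

```python
# Illustrative sketch (not HALCON API) of the 'nms' mode: a pixel survives
# only if neither of the two neighbors closest to its direction has a
# strictly larger amplitude.

def nms_keep(amp, direction_deg, r, c):
    """amp: list of lists; direction_deg: direction at (r, c) in degrees."""
    # Quantize the direction to one of four neighbor axes (0, 45, 90, 135 deg).
    sector = int(((direction_deg % 180) + 22.5) // 45) % 4
    dr, dc = [(0, 1), (-1, 1), (-1, 0), (-1, -1)][sector]
    g = amp[r][c]
    for sr, sc in ((dr, dc), (-dr, -dc)):
        rr, cc = r + sr, c + sc
        if 0 <= rr < len(amp) and 0 <= cc < len(amp[0]) and amp[rr][cc] > g:
            return False
    return True
```

’inms’ would replace the two quantized neighbors with values interpolated from four adjacent points, which suppresses the staircase artifacts of the quantized test.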
Parameter
See also
skeleton
References
S. Lanser: "Detektion von Stufenkanten mittels rekursiver Filter nach Deriche"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J. Canny: "Finding Edges and Lines in Images"; Report AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge; 1983.
Module
Foundation
15.3 Regiongrowing
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
expand_gray closes gaps between the input regions that resulted, for example, from the suppression of small
regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses
result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in which
the gray values or color differ from the gray values or color of neighboring pixels on the region’s border by at
most Threshold (in each channel). For images of type ’cyclic’ (e.g., direction images), also points with a gray
value difference of at least 255 − Threshold are added to the output region.
The expansion takes place only in regions, which are designated as not “forbidden” (parameter
ForbiddenArea). The number of iterations is determined by the parameter Iterations. By passing ’maxi-
mal’, expand_gray iterates until convergence, i.e., until no more changes occur. By passing 0 for this parameter,
all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) are different in the
following ways:
’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because expand_gray processes all regions
simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value. Over-
lapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.
Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
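The expansion step can be sketched as follows in plain Python; this is an illustrative re-implementation of one iteration of the idea, not HALCON code, and the 4-neighborhood and single-region restriction are our simplifications.

```python
# Illustrative sketch (not HALCON API): one expansion iteration of the
# expand_gray idea on a gray-value grid. A pixel is added to a region if it
# is 4-adjacent to the region, unassigned, not forbidden, and its gray value
# differs from the adjacent region pixel by at most threshold.

def expand_once(img, region, forbidden, threshold):
    added = set()
    for (r, c) in region:
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= rr < len(img) and 0 <= cc < len(img[0])):
                continue
            if (rr, cc) in region or (rr, cc) in forbidden:
                continue
            if abs(img[rr][cc] - img[r][c]) <= threshold:
                added.add((rr, cc))
    return region | added
```

Iterating this step until the region no longer changes corresponds to passing ’maximal’ for Iterations.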
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
seg.Display (win);
HRegionArray exp = seg.ExpandGray1 (image, empty_region,
"maximal", "image", 32);
win.SetDraw ("margin");
win.SetColored (12);
exp.Display (win);
win.Click ();
return (0);
}
Result
expand_gray always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty
input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception is raised.
Parallelization Information
expand_gray is reentrant and processed without parallelization.
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
expand_gray_ref, expand_region
Module
Foundation
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
expand_gray_ref closes gaps between the input regions that resulted, for example, from the suppression of
small regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both
uses result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in
which the gray values or color differ from a reference gray value or color by at most Threshold (in each
channel). For images of type ’cyclic’ (e.g., direction images), also points with a gray value difference of at least
255 − Threshold are added to the output region.
The expansion takes place only in regions, which are designated as not “forbidden” (parameter
ForbiddenArea). The number of iterations is determined by the parameter Iterations. By passing ’max-
imal’, expand_gray_ref iterates until convergence, i.e., until no more changes occur. By passing 0 for this
parameter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) are differ-
ent in the following ways:
’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because expand_gray_ref processes all
regions simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value.
Overlapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.
Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
win.SetDraw ("margin");
win.SetColored (12);
image.Display (win);
return (0);
}
Result
expand_gray_ref always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of
an empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception is raised.
Parallelization Information
expand_gray_ref is reentrant and processed without parallelization.
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
expand_gray, expand_region
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
win.SetDraw ("margin");
win.SetColored (12);
image.Display (win);
reg.Display (win);
win.Click ();
return (0);
}
Parallelization Information
expand_line is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, anisotropic_diffusion,
median_image, affine_trans_image, rotate_image
Possible Successors
intersection, opening, closing
Alternatives
regiongrowing_mean, expand_gray, expand_gray_ref
Module
Foundation
For rectangles larger than one pixel, usually the images should be smoothed with a lowpass filter with a size of at
least Row × Column before calling regiongrowing (so that the gray values at the centers of the rectangles
are “representative” for the whole rectangle). If the image contains little noise and the rectangles are small, the
smoothing can be omitted in many cases.
The resulting regions are collections of rectangles of the chosen size Row × Column . Only regions containing at
least MinSize points are returned.
Regiongrowing is a very fast operation, and thus suited for time-critical applications.
Attention
Column and Row are automatically converted to odd values if necessary.
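The rectangle-based growing idea can be sketched in plain Python as a flood fill over tile centers; this is an illustrative simplification (pixel-accurate tiling, no smoothing), and the function name is ours, not a HALCON operator.

```python
# Illustrative sketch (not HALCON API) of the rectangle-based growing idea:
# the image is tiled into height x width rectangles, neighboring tiles are
# merged when their center gray values differ by no more than tolerance,
# and only groups covering at least min_area pixels are kept.

def grow_tiles(img, height, width, tolerance, min_area):
    rows, cols = len(img) // height, len(img[0]) // width
    center = {(i, j): img[i * height + height // 2][j * width + width // 2]
              for i in range(rows) for j in range(cols)}
    seen, groups = set(), []
    for start in center:
        if start in seen:
            continue
        stack, group = [start], set()
        seen.add(start)
        while stack:
            i, j = stack.pop()
            group.add((i, j))
            for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if n in center and n not in seen and \
                        abs(center[n] - center[(i, j)]) <= tolerance:
                    seen.add(n)
                    stack.append(n)
        if len(group) * height * width >= min_area:
            groups.append(group)
    return groups
```

With height = width = 1 every pixel is its own rectangle, which makes the behavior easy to verify on tiny test images.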
Parameter
. Image (input_object) . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / int4 / real
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Segmented regions.
read_image(Image,’fabrik’)
mean_image(Image,Mean,Row,Column)
regiongrowing(Mean,Result,Row,Column,6.0,100).
Complexity
Let N be the number of found regions and M the number of points in one of these regions. Then the runtime
complexity is O(N ∗ log(M ) ∗ M ).
Result
regiongrowing returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
regiongrowing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, mean_image, gauss_image, smooth_image, median_image,
anisotropic_diffusion
Possible Successors
select_shape, reduce_domain, select_gray
Alternatives
regiongrowing_n, regiongrowing_mean, label_to_region
Module
Foundation
a = max {|gA|}
b = max {|gB|}
MinT ≤ |a − b| ≤ MaxT
’gray-max-ratio’: Ratio of the maximum gray values
a = max {|gA|}
b = max {|gB|}
MinT ≤ min(a/b, b/a) ≤ MaxT
’gray-min-diff’: Difference of the minimum gray values
a = min {|gA|}
b = min {|gB|}
MinT ≤ |a − b| ≤ MaxT
’gray-min-ratio’: Ratio of the minimum gray values
a = min {|gA|}
b = min {|gB|}
MinT ≤ min(a/b, b/a) ≤ MaxT
’variance-diff’: Difference of the variances over all gray values (channels)
MinT ≤ Var(gB) / Var(gA) ≤ MaxT
’mean-abs-diff’: Difference of the sum of absolute values over all gray values (channels)
a = Σ_{d,k,k<d} |gA(d) − gA(k)|
b = Σ_{d,k,k<d} |gB(d) − gB(k)|
MinT ≤ |a − b| / (number of summands) ≤ MaxT
’mean-abs-ratio’: Ratio of the sum of absolute values over all gray values (channels)
a = Σ_{d,k,k<d} |gA(d) − gA(k)|
b = Σ_{d,k,k<d} |gB(d) − gB(k)|
MinT ≤ min(a/b, b/a) ≤ MaxT
’max-abs-diff’: Difference of the maximum distance of the components
15.4 Threshold
auto_threshold ( Image : Regions : Sigma : )
Parallelization Information
auto_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
bin_threshold, char_threshold
See also
gray_histo, gray_histo_abs, histo_to_thresh, smooth_funct_1d_gauss, threshold
Module
Foundation
bin_threshold segments a single-channel gray value image using an automatically determined threshold.
First, the relative histogram of the gray values is determined. Then, relevant minima are extracted from the his-
togram, which are used as parameters for a thresholding operation. In order to reduce the number of minima, the
histogram is smoothed with a Gaussian, as in auto_threshold. The mask size is enlarged until there is only
one minimum in the smoothed histogram. The selected region contains the pixels with gray values from 0 to the
minimum. This operator is, for example useful for the segmentation of dark characters on a light paper.
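The histogram-smoothing loop can be sketched in plain Python; the smoothing schedule and border handling are our assumptions, not the HALCON implementation.

```python
# Illustrative sketch (not HALCON API) of the bin_threshold idea: smooth the
# gray-value histogram with a Gaussian of growing sigma until at most one
# interior minimum remains, then segment the values from 0 up to that minimum.
import math

def smooth(hist, sigma):
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    out = []
    for i in range(len(hist)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(hist) - 1)  # clamp at borders
            acc += k * hist[idx]
        out.append(acc)
    return out

def minima(hist):
    return [i for i in range(1, len(hist) - 1)
            if hist[i - 1] > hist[i] <= hist[i + 1]]

def bin_threshold_sketch(hist):
    sigma = 1.0
    while True:
        m = minima(smooth(hist, sigma))
        if len(m) <= 1:
            return m[0] if m else None
        sigma *= 1.5
```

For a clearly bimodal histogram the sketch returns the gray value of the valley between the two peaks, which is the threshold between dark objects and light background.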
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Dark regions of the image.
Example
Parallelization Information
bin_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
auto_threshold, char_threshold
See also
gray_histo, smooth_funct_1d_gauss, threshold
Module
Foundation
For example, if you choose Percent = 95 the operator locates the gray value whose frequency is at most 5
percent of the maximum frequency. Because char_threshold assumes that the characters are darker than the
background, the threshold is searched for “to the left” of the maximum.
In comparison to bin_threshold, this operator should be used if there is no clear minimum between the
histogram peaks corresponding to the characters and the background, respectively, or if there is no peak corre-
sponding to the characters at all. This may happen, e.g., if the image contains only few characters or in the case of
a non-uniform illumination.
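The search “to the left” of the histogram maximum can be sketched as follows; the function name and the raw (unsmoothed) histogram are our simplifications, not HALCON code.

```python
# Illustrative sketch (not HALCON API) of the char_threshold search: starting
# at the histogram maximum (the background peak) and walking toward darker
# gray values, take the first value whose frequency drops to at most
# (100 - percent) percent of the maximum frequency.

def char_threshold_sketch(hist, percent):
    peak = max(range(len(hist)), key=hist.__getitem__)
    limit = (100.0 - percent) / 100.0 * hist[peak]
    for g in range(peak, -1, -1):
        if hist[g] <= limit:
            return g
    return 0
```

Because only the background peak and a relative frequency are needed, this search works even when the character pixels form no peak of their own.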
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image.
. HistoRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region in which the histogram is computed.
. Characters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Dark regions (characters).
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma for the Gaussian smoothing of the histogram.
Default Value : 2.0
Suggested values : Sigma ∈ {0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.0 ≤ Sigma ≤ 50.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.2
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Percentage for the gray value difference.
Default Value : 95
Suggested values : Percent ∈ {90, 92, 95, 96, 97, 98, 99, 99.5, 100}
Typical range of values : 0.0 ≤ Percent ≤ 100.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 0.5
. Threshold (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Calculated threshold.
Example
Parallelization Information
char_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
bin_threshold, auto_threshold, gray_histo, smooth_funct_1d_gauss, threshold
Module
Foundation
This test is performed for all points of the domain (region) of Image, intersected with the domain of the translated
Pattern. All points fulfilling the above condition are aggregated in the output region. The two images may be
of different size. Typically, Pattern is smaller than Image.
Parameter
/* Simulation of dual_threshold */
dual_threshold(Laplace,Result,MinS,MinG,Threshold):
threshold(Laplace,Tmp1,Threshold,999999)
connection(Tmp1,Tmp2)
select_shape(Tmp2,Tmp3,’area’,’and’,MinS,999999)
select_gray(Laplace,Tmp3,Tmp4,’max’,’and’,MinG,999999)
threshold(Laplace,Tmp5,-999999,-Threshold)
connection(Tmp5,Tmp6)
select_shape(Tmp6,Tmp7,’area’,’and’,MinS,999999)
select_gray(Laplace,Tmp7,Tmp8,’min’,’and’,-999999,-MinG)
concat_obj(Tmp4,Tmp8,Result)
Result
dual_threshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
dual_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
min_max_gray, sobel_amp, binomial_filter, gauss_image, reduce_domain,
diff_of_gauss, sub_image, derivate_gauss, laplace_of_gauss, laplace,
expand_region
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
threshold, dyn_threshold, check_difference
See also
connection, select_shape, select_gray
Module
Foundation
go ≥ gt + Offset
go ≤ gt − Offset
Typically, the threshold images are smoothed versions of the original image (e.g., by applying mean_image,
binomial_filter, gauss_image, etc.). Then the effect of dyn_threshold is similar to applying
threshold to a highpass-filtered version of the original image (see highpass_image).
With dyn_threshold, contours of an object can be extracted, where the objects’ size (diameter) is determined
by the mask size of the lowpass filter and the amplitude of the objects’ edges:
The larger the mask size is chosen, the larger the found regions become. As a rule of thumb, the mask size should
be about twice the diameter of the objects to be extracted. It is important not to set the parameter Offset to zero
because in this case too many small regions will be found (noise). Values between 5 and 40 are a useful choice.
The larger Offset is chosen, the smaller the extracted regions become.
All points of the input image fulfilling the above condition are stored jointly in one region. If necessary, the
connected components can be obtained by calling connection.
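The condition for the ’light’ case can be sketched in plain Python with a 3 × 3 mean as the threshold image; the fixed mask size and border clamping are our assumptions, not the operator's defaults.

```python
# Illustrative sketch (not HALCON API) of dyn_threshold in 'light' mode with
# a 3x3 mean image as threshold image: select pixels whose gray value exceeds
# the local mean by at least offset.

def mean3x3(img, r, c):
    vals = [img[min(max(r + dr, 0), len(img) - 1)]
               [min(max(c + dc, 0), len(img[0]) - 1)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sum(vals) / 9.0

def dyn_threshold_light(img, offset):
    return {(r, c)
            for r in range(len(img)) for c in range(len(img[0]))
            if img[r][c] >= mean3x3(img, r, c) + offset}
```

An isolated bright pixel is selected because it exceeds its local mean by more than the offset, while the uniform surroundings are not.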
Attention
If Offset is chosen from −1 to 1 usually a very noisy region is generated, requiring large storage. If Offset
is chosen too large (> 60, say) it may happen that no points fulfill the threshold condition (i.e., an empty region is
returned). If Offset is chosen too small (< -60, say) it may happen that all points fulfill the threshold condition
(i.e., a full region is returned).
Parameter
Example
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
dyn_threshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
dyn_threshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
mean_image, smooth_image, binomial_filter, gauss_image
Possible Successors
connection, select_shape, reduce_domain, select_gray, rank_region, dilation1,
opening, erosion1
Alternatives
check_difference, threshold
See also
highpass_image, sub_image
Module
Foundation
MinGray ≤ g ≤ MaxGray .
To reduce processing time, the selection is done in two steps: First, all pixels along rows and columns with a
distance of MinSize are processed. In the next step, the neighborhood (size MinSize × MinSize) of all
previously selected points is processed.
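The two-step selection can be sketched in plain Python; the exact grid placement and neighborhood bounds are our simplifications, not the HALCON implementation.

```python
# Illustrative sketch (not HALCON API) of the two-step selection: first test
# only every min_size-th pixel along rows and columns, then test the full
# neighborhood of every coarse hit.

def fast_threshold_sketch(img, min_gray, max_gray, min_size):
    h, w = len(img), len(img[0])
    inside = lambda g: min_gray <= g <= max_gray
    coarse = [(r, c) for r in range(0, h, min_size)
                     for c in range(0, w, min_size) if inside(img[r][c])]
    result = set()
    half = min_size // 2
    for r, c in coarse:
        for rr in range(max(0, r - half), min(h, r + half + 1)):
            for cc in range(max(0, c - half), min(w, c + half + 1)):
                if inside(img[rr][cc]):
                    result.add((rr, cc))
    return result
```

The sketch makes the speed/accuracy tradeoff visible: pixels that are not near any coarse hit are never tested, so structures smaller than the grid spacing can be missed.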
Parameter
images with more than 10 bits per pixel, the quantization must be chosen greater than 1. The histogram returned
by gray_histo_abs should furthermore be restricted to the parts that contain salient information. For example,
for an image with 12 bits per pixel, the quantization should be set to 4. Only the first 1024 entries of the computed
histogram (which contains 16384 entries in this example) should be passed to histo_to_thresh. Finally,
MinThresh must be multiplied by 4 (i.e., the quantization), while MaxThresh must be multiplied by 4 and
increased by 3 (i.e., the quantization minus 1).
Parameter
/* Calculate thresholds from a 12 bit uint2 image and threshold the image. */
gray_histo_abs (Image, Image, 4, AbsoluteHisto)
AbsoluteHisto := AbsoluteHisto[0:1023]
histo_to_thresh (AbsoluteHisto, 16, MinThresh, MaxThresh)
MinThresh := MinThresh*4
MaxThresh := MaxThresh*4+3
threshold (Image, Region, MinThresh, MaxThresh)
Parallelization Information
histo_to_thresh is reentrant and processed without parallelization.
Possible Predecessors
gray_histo
Possible Successors
threshold
See also
auto_threshold, bin_threshold, char_threshold
Module
Foundation
MinGray ≤ g ≤ MaxGray .
All points of an image fulfilling the condition are returned as one region. If more than one gray value interval is
passed (tuples for MinGray and MaxGray), one separate region is returned for each interval.
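The interval semantics can be sketched in a few lines of plain Python; the point-set representation of regions is our simplification.

```python
# Illustrative sketch (not HALCON API): threshold with tuples for the bounds,
# returning one point set per [min_gray[i], max_gray[i]] interval.

def threshold_sketch(img, min_gray, max_gray):
    regions = []
    for lo, hi in zip(min_gray, max_gray):
        regions.append({(r, c)
                        for r in range(len(img)) for c in range(len(img[0]))
                        if lo <= img[r][c] <= hi})
    return regions
```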
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / vector_field
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Segmented region.
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Lower threshold for the gray values.
Default Value : 128.0
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Upper threshold for the gray values.
Default Value : 255.0
Suggested values : MaxGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Restriction : MaxGray ≥ MinGray
Example
read_image(Image,’fabrik’)
sobel_dir(Image,EdgeAmp,EdgeDir,’sum_abs’,3)
threshold(EdgeAmp,Seg,50,255)
skeleton(Seg,Rand)
connection(Rand,Lines)
select_shape(Lines,Edges,’area’,’and’,10,1000000).
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
threshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
threshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
histo_to_thresh, min_max_gray, sobel_amp, binomial_filter, gauss_image,
reduce_domain, fill_interlace
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
class_2dim_sup, hysteresis_threshold, dyn_threshold, bin_threshold,
char_threshold, auto_threshold, dual_threshold
See also
zero_crossing, background_seg, regiongrowing
Module
Foundation
In contrast to the operator threshold, threshold_sub_pix does not return regions, but the lines that
separate regions with a gray value less than Threshold from regions with a gray value greater than Threshold.
For the extraction, the input image is regarded as a surface, in which the gray values are interpolated bilinearly
between the centers of the individual pixels. Consistent with the surface thus defined, level crossing lines are
extracted for each pixel and linked into topologically sound contours. This means that the level crossing contours
are correctly split at junction points. If the image contains extended areas of constant gray value Threshold,
only the border of such areas is returned as level crossings.
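Because bilinear interpolation is linear along a grid edge, the crossing of the level Threshold between two neighboring pixel centers can be located by inverse linear interpolation. A minimal sketch of this one step (illustration only, not the HALCON implementation, which additionally links the crossings into contours):

```python
# Locate the level crossing on one grid edge between two neighboring
# pixel centers with gray values g0 and g1.
def edge_level_crossing(g0, g1, threshold):
    """Return the fractional position t in [0, 1] of the crossing,
    or None if the level is not crossed on this edge."""
    if (g0 - threshold) * (g1 - threshold) > 0:
        return None   # both values on the same side: no crossing
    if g0 == g1:
        return None   # constant edge; area borders are handled separately
    return (threshold - g0) / (g1 - g0)  # inverse linear interpolation

print(edge_level_crossing(100, 200, 150))  # 0.5
print(edge_level_crossing(100, 120, 150))  # None
```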
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Border (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject
Extracted level crossings.
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Threshold for the level crossings.
Default Value : 128
Suggested values : Threshold ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Example
read_image(Image,’fabrik’)
threshold_sub_pix(Image,Border,35)
disp_xld(Border,WindowHandle)
Result
threshold_sub_pix usually returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
threshold_sub_pix is reentrant and processed without parallelization.
Alternatives
threshold
See also
zero_crossing_sub_pix
Module
2D Metrology
LightDark = ’dark’:
g(x, y) ≤ m(x, y) − v(x, y).
LightDark = ’equal’:
m(x, y) − v(x, y) ≤ g(x, y) ≤ m(x, y) + v(x, y).
LightDark = ’not_equal’:
g(x, y) < m(x, y) − v(x, y) ∨ g(x, y) > m(x, y) + v(x, y).
All pixels fulfilling the above condition are aggregated into the resulting region Region.
For the parameter StdDevScale values between −1.0 and 1.0 are sensible choices, with 0.2 as a suggested
value. If the parameter is too high or too low, an empty or full region may be returned. The parameter
AbsThreshold places an additional threshold on StdDevScale ∗ dev(x, y). If StdDevScale ∗ dev(x, y)
is below AbsThreshold for positive values of StdDevScale, or above it for negative values of StdDevScale,
AbsThreshold is taken instead.
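The per-pixel decision for LightDark = ’dark’, including the AbsThreshold substitution described above, can be sketched as follows (illustration only; the function and parameter names are not part of HALCON):

```python
# Sketch of the per-pixel 'dark' test of var_threshold. m and dev are
# the local mean and standard deviation of the gray values inside the
# MaskWidth x MaskHeight window around the pixel.
def is_dark(g, m, dev, std_dev_scale, abs_threshold):
    v = std_dev_scale * dev
    # If v falls below AbsThreshold (for positive StdDevScale) or
    # above it (for negative StdDevScale), AbsThreshold is used instead.
    if std_dev_scale >= 0:
        v = max(v, abs_threshold)
    else:
        v = min(v, abs_threshold)
    return g <= m - v   # condition for LightDark = 'dark'

print(is_dark(g=90, m=100, dev=30, std_dev_scale=0.2, abs_threshold=2))
print(is_dark(g=99, m=100, dev=1, std_dev_scale=0.2, abs_threshold=2))
```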
Parameter
. Image (input_object) . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / int2 / int4 / uint2 / real
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Segmented regions.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Mask width for mean and deviation calculation.
Default Value : 15
Suggested values : MaskWidth ∈ {9, 11, 13, 15}
Restriction : MaskWidth ≥ 1
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Mask height for mean and deviation calculation.
Default Value : 15
Suggested values : MaskHeight ∈ {9, 11, 13, 15}
Restriction : MaskHeight ≥ 1
. StdDevScale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Factor for the standard deviation of the gray values.
Default Value : 0.2
Suggested values : StdDevScale ∈ {-0.2, -0.1, 0.1, 0.2}
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Minimum gray value difference from the mean.
Default Value : 2
Suggested values : AbsThreshold ∈ {-2, -1, 0, 1, 2}
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Threshold type.
Default Value : ’dark’
List of values : LightDark ∈ {’dark’, ’light’, ’equal’, ’not_equal’}
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
var_threshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
var_threshold is reentrant and automatically parallelized (on tuple level, domain level).
Alternatives
dyn_threshold, threshold
References
W.Niblack, ”An Introduction to Digital Image Processing”, Page 115-116, Englewood Cliffs, N.J., Prentice Hall,
1986
Module
Foundation
disp_xld(ZeroCrossings,WindowHandle)
Result
zero_crossing_sub_pix usually returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
zero_crossing_sub_pix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
laplace, laplace_of_gauss, diff_of_gauss, derivate_gauss
Alternatives
zero_crossing
See also
threshold_sub_pix
Module
2D Metrology
15.5 Topography
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
maxi.Display (win);
win.Click ();
return (0);
}
Parallelization Information
local_max is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
get_region_points, connection
Alternatives
nonmax_suppression_amp, plateaus, plateaus_center
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method for the calculation of the partial derivatives.
Default Value : ’facet’
List of values : Filter ∈ {’facet’, ’gauss’}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be set to 0.0 to suppress the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Minimum absolute value of the eigenvalues of the Hessian matrix.
Default Value : 5.0
Suggested values : Threshold ∈ {2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0}
Restriction : Threshold ≥ 0.0
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinates of the detected maxima.
. Col (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinates of the detected maxima.
Result
local_max_sub_pix returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during the
execution. If the input is empty, the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception is raised.
Parallelization Information
local_max_sub_pix is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
critical_points_sub_pix, local_min_sub_pix, saddle_points_sub_pix
See also
local_max, plateaus, plateaus_center
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
mins.Display (win);
win.Click ();
return (0);
}
Parallelization Information
local_min is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
get_region_points, connection
Alternatives
gray_skeleton, lowlands, lowlands_center
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
{
cout << "Usage : " << argv[0] << " <name of image>" << endl;
return (-1);
}
image.Display (win);
win.SetColored (12);
mins.Display (win);
win.Click ();
return (0);
}
Parallelization Information
lowlands is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
lowlands_center, gray_skeleton, local_min
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
{
if (argc != 2)
{
cout << "Usage : " << argv[0] << " <name of image>" << endl;
return (-1);
}
image.Display (win);
win.SetColored (12);
mins.Display (win);
win.Click ();
return (0);
}
Parallelization Information
lowlands_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
lowlands, gray_skeleton, local_min
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. Plateaus (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Extracted plateaus as regions (one region for each plateau).
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
maxi.Display (win);
win.Click ();
return (0);
}
Parallelization Information
plateaus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
plateaus_center, nonmax_suppression_amp, local_max
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. Plateaus (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Centers of gravity of the extracted plateaus as regions (one region for each plateau).
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
maxi.Display (win);
win.Click ();
return (0);
}
Parallelization Information
plateaus_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
plateaus, nonmax_suppression_amp, local_max
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
’all’ This is the normal mode of operation. All steps of the segmentation are performed. The regions are assigned
to maxima, and overlapping regions are split.
’maxima’ The segmentation only extracts the local maxima of the input image. No corresponding regions are
extracted.
’regions’ The segmentation extracts the local maxima of the input image and the corresponding regions, which
are uniquely determined. Areas that were assigned to more than one maximum are not split.
In order to prevent the algorithm from splitting a uniform background that is different from the rest of the image,
the parameters MinGray and MaxGray determine gray value thresholds for regions in the image that should
be regarded as background. All parts of the image having a gray value smaller than MinGray or larger than
MaxGray are disregarded for the extraction of the maxima as well as for the assignment of regions. For a complete
segmentation of the image, MinGray = 0 and MaxGray = 255 should be selected. MinGray < MaxGray must
be observed.
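The background masking by MinGray and MaxGray can be sketched as follows (illustration only, not the HALCON implementation): pixels outside the interval are excluded both from the extraction of maxima and from the assignment of regions.

```python
# Sketch of the MinGray/MaxGray background masking: only pixels whose
# gray value lies in [min_gray, max_gray] remain candidates for the
# extraction of maxima and the assignment of regions.
def candidate_pixels(image, min_gray, max_gray):
    return {(r, c)
            for r, row in enumerate(image)
            for c, g in enumerate(row)
            if min_gray <= g <= max_gray}

image = [[0, 40, 0],
         [35, 90, 20]]
# With min_gray = 10 the uniform background (gray value 0) is excluded:
print(candidate_pixels(image, 10, 255))
```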
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Segmented regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of operation.
Default Value : ’all’
List of values : Mode ∈ {’all’, ’maxima’, ’regions’}
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
All gray values smaller than this threshold are disregarded.
Default Value : 0
Suggested values : MinGray ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}
Typical range of values : 0 ≤ MinGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : MinGray ≥ 0
. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
All gray values larger than this threshold are disregarded.
Default Value : 255
Suggested values : MaxGray ∈ {100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240,
250, 255}
Typical range of values : 0 ≤ MaxGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (MaxGray ≤ 255) ∧ (MaxGray > MinGray)
Example
/* Segmentation of a histogram */
read_image(Image,’monkey’)
texture_laws(Image,Texture,’el’,2,5)
draw_region(Region,WindowHandle)
reduce_domain(Texture,Region,Testreg)
histo_2dim(Testreg,Texture,Region,Histo)
pouring(Histo,Seg,’all’,0,255)
Complexity
Let N be the number of pixels in the input image and M the number of segments found, where the enclosing
rectangle of segment i contains m_i pixels. Furthermore, let K_i be the number of chords in segment i. Then the
runtime complexity is:
Result
pouring usually returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
pouring is processed under mutual exclusion against itself and without parallelization.
Possible Predecessors
binomial_filter, gauss_image, smooth_image, mean_image
Alternatives
watersheds, local_max
See also
histo_2dim, expand_region, expand_gray, expand_gray_ref
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HRegion watersheds;
HRegionArray basins = gauss.Watersheds (&watersheds);
win.SetColored (12);
basins.Display (win);
win.Click ();
return (0);
}
Result
watersheds always returns 2 (H_MSG_TRUE). The behavior with respect to the input images and output
regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and
’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
watersheds is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, invert_image
Possible Successors
expand_region, select_shape, reduce_domain, opening
Alternatives
watersheds_threshold, pouring
References
L. Vincent, P. Soille: “Watersheds in Digital Space: An Efficient Algorithm Based on Immersion Simulations”;
IEEE Transactions on Pattern Analysis and Machine Intelligence; vol. 13, no. 6; pp. 583-598; 1991.
Module
Foundation
then no two basins are separated by a watershed exceeding Threshold, and hence, Basins will contain only
one region.
Parameter
System
16.1 Database
count_relation ( : : RelationName : NumOfTuples )
’image’: Image matrices. One matrix may also be the component of more than one image (no redundant storage).
’region’: Regions (the full and the empty region are always available). One region may of course also be the
component of more than one image object (no redundant storage).
’XLD’: eXtended Line Description: contours, polygons, parallels, lines, etc. XLD data types don’t have gray
values and are stored with subpixel accuracy.
’object’: Iconic objects. Composed of a region (called region) and optionally image matrices (called image).
’tuple’: In the compact mode, tuples of iconic objects are stored as a surrogate in this relation. Instead of working
with the individual object keys, only this tuple key is used. It depends on the host language, whether the
objects are passed individually (Prolog and C++) or as tuples (C, Smalltalk, Lisp, OPS-5).
Certain database objects are already created by the operator reset_obj_db and are therefore available at
all times (the undefined gray value component, the objects ’full’ (FULL_REGION in HALCON/C) and
’empty’ (EMPTY_REGION in HALCON/C), as well as the empty and full region contained in them). When calling
get_channel_info, this operator consequently also appears as the ’creator’ of the full and empty
region. count_relation can be used, for example, to check the completeness of the clear_obj operation.
966 CHAPTER 16. SYSTEM
Parameter
. RelationName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Relation of interest of the HALCON database.
Default Value : ’object’
List of values : RelationName ∈ {’image’, ’region’, ’XLD’, ’object’, ’tuple’}
. NumOfTuples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of tuples in the relation.
Example
reset_obj_db(512,512,3)
count_relation(’image’,I1)
count_relation(’region’,R1)
count_relation(’XLD’,X1)
count_relation(’object’,O1)
count_relation(’tuple’,T1)
read_image(X,’monkey’)
count_relation(’image’,I2)
count_relation(’region’,R2)
count_relation(’XLD’,X2)
count_relation(’object’,O2)
count_relation(’tuple’,T2)
/*
Result: I1 = 1 (undefined image)
R1 = 2 (full and empty region)
X1 = 0 (no XLD data)
O1 = 2 (full and empty objects)
T1 = 0 (always 0 in the normal mode)
*/
Result
If the parameter is correct, the operator count_relation returns the value 2 (H_MSG_TRUE). Otherwise an
exception is raised.
Parallelization Information
count_relation is reentrant and processed without parallelization.
Possible Predecessors
reset_obj_db
See also
clear_obj
Module
Foundation
Parameter
Result
The operator reset_obj_db returns the value 2 (H_MSG_TRUE) if the parameter values are correct. Otherwise
an exception will be raised.
Parallelization Information
reset_obj_db is reentrant and processed without parallelization.
See also
get_channel_info, count_relation
Module
Foundation
16.2 Error-Handling
get_check ( : : : Check )
Herror err;
char message[MAX_STRING];
set_check("~give_error");
err = send_region(region,socket_id);
set_check("give_error");
if (err != H_MSG_TRUE) {
get_error_text((long)err,message);
fprintf(stderr,"my error message: %s\n",message);
exit(1);
}
Result
The operator get_error_text always returns the value 2 (H_MSG_TRUE).
Parallelization Information
get_error_text is reentrant and processed without parallelization.
Possible Predecessors
set_check
See also
set_check
Module
Foundation
set_check ( : : Check : )
’color’: If this control mode is activated, only colors that are supported by the display for the currently active
window may be used. Otherwise an error message is displayed.
If this control mode is deactivated and a color does not exist, the nearest available color is used (see also
set_color, set_gray, set_rgb).
’text’: If this control mode is activated, the coordinates are checked, both when setting the text cursor and when
displaying strings (write_string), to determine whether part of a character would lie outside the window
frame (which the system does not forbid in principle).
If this control mode is deactivated, the text is clipped at the window frame.
’data’: (For program development)
Checks the consistency of image objects (regions and gray value components).
’interface’: If this control mode is activated, the interface between the host language and the HALCON proce-
dures is checked during execution (e.g., the types and number of the values).
’database’: This is a consistency check of the database (e.g., it checks whether an object that is to be deleted
actually exists).
’give_error’: Determines whether errors shall trigger exceptions or not. If this control mode is deactivated,
the application program must provide a suitable error treatment itself. Please note that errors which are
not reported usually lead to undefined output parameters which may cause an unpredictable reaction of the
program. Details about how to handle exceptions in the different HALCON language interfaces can be found
in the HALCON Programmer’s Guide and the HDevelop User’s Guide.
’father’: If this control mode is activated when calling the operators open_window or open_textwindow,
HALCON allows only the usage of the number of another HALCON window as the father window of the
new window; otherwise it allows also the usage of IDs of operating system windows as the father window.
This control mode is only relevant for windows of type ’X-Window’ and ’WIN32-Window’.
’region’: (For program development)
Checks the consistency of chords (this may lead to a notable speed reduction of routines).
’clear’: Normally, if a list of objects is to be deleted using clear_obj, an exception is raised in case
individual objects do not exist or no longer exist. If the ’clear’ mode is activated, such objects are ignored.
’memory’: (For program development)
Checks the memory blocks freed by the HALCON memory management for consistency and for overwritten
memory boundaries.
’all’: Activates all control modes.
’none’: Deactivates all control modes.
’default’: Default settings: [’give_error’,’database’]
Parameter
. Check (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Desired control mode.
Default Value : ’default’
List of values : Check ∈ {’color’, ’text’, ’database’, ’data’, ’interface’, ’give_error’, ’father’, ’region’, ’clear’,
’memory’, ’all’, ’none’, ’default’}
Result
The operator set_check returns the value 2 (H_MSG_TRUE), if the parameters are correct. Otherwise an
exception will be raised.
Parallelization Information
set_check is reentrant and processed without parallelization.
See also
get_check, set_color, set_rgb, set_hsi, write_string
Module
Foundation
set_spy(::’mode’,’on’:),
and deactivated by using
set_spy(::’mode’,’off’:).
The debugging tool can also be activated with the help of the environment variable HALCONSPY. Defining
this variable corresponds to calling set_spy with ’mode’ and ’on’.
The following control modes can be tuned (in any desired combination of course) with the help of Class/Value:
’operator’ When a routine is called, its name and the names of its parameters will be given (in TRIAS notation).
Value: ’on’ or ’off’
default: ’off’
’input_control’ When a routine is called, the names and values of the input control parameters will be given.
Value: ’on’ or ’off’
default: ’off’
’output_control’ When a routine is called, the names and values of the output control parameters are given.
Value: ’on’ or ’off’
default: ’off’
’parameter_values’ Additional information on ’input_control’ and ’output_control’: indicates how many values
per parameter are displayed at most (maximum tuple length of the output).
Value: tuple length (integer)
default: 4
’db’ Information concerning the 4 relations in the HALCON database. This is especially valuable when looking
for forgotten calls of clear_obj.
Value: ’on’ or ’off’
default: ’off’
’input_gray_window’ Any reading access of the gray-value component of an (input) image object will cause the
gray-value component to be shown in the indicated window (Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_region_window’ Any reading access of the region of an (input) iconic object will cause this region to be
shown in the indicated window (Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_xld_window’ Any reading access of an XLD will cause this XLD to be shown in the indicated window
(Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’time’ Processing time of the operator
Value: ’on’ or ’off’
default: ’off’
’halt’ Determines whether there is a halt after every individual action (’multiple’) or only at the end of each
operator (’single’). The parameter is only effective if the halt has been activated by ’timeout’ or ’button_window’.
Value: ’single’ or ’multiple’
default: ’multiple’
’timeout’ After every output there will be a halt of the indicated number of seconds.
Value: seconds (real)
default 0.0
’button_window’ Alternative to ’timeout’: after every output, spy waits until the cursor points into (’button_click’
= ’false’) or clicks into (’button_click’ = ’true’) the indicated window (Window-ID; ’none’ deactivates
this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’button_click’ Additional option for ’button_window’: determines whether or not a mouse-click has to be waited
for after an output.
Value: ’on’ or ’off’
default: ’off’
’button_notify’ If ’button_notify’ is activated, spy generates a beep after every output. This is useful in
combination with ’button_window’.
Value: ’on’ or ’off’
default: ’off’
’log_file’ Spy can divert its text output into a file that was opened with open_file.
Value: a file handle (see open_file)
’error’ If ’error’ is activated and an internal error occurs, spy will show the internal procedures (file/line)
concerned.
Value: ’on’ or ’off’
default: ’off’
’internal’ If ’internal’ is activated, spy will display the internal procedures and their parameters (file/line) while
a HALCON operator is processed.
Value: ’on’ or ’off’
default: ’off’
Parameter
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Control mode
Default Value : ’mode’
List of values : Class ∈ {’mode’, ’operator’, ’input_control’, ’output_control’, ’parameter_values’,
’input_gray_window’, ’input_region_window’, ’input_xld_window’, ’db’, ’time’, ’halt’, ’timeout’,
’button_window’, ’button_click’, ’button_notify’, ’log_file’, ’error’, ’internal’}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string / integer / real
State of the control mode to be set.
Default Value : ’on’
Suggested values : Value ∈ {’on’, ’off’, 1, 2, 3, 4, 5, 10, 50, 0.0, 1.0, 2.0, 5.0, 10.0}
Example (Syntax: C)
Result
The operator set_spy returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
set_spy is processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
See also
get_spy, query_spy
Module
Foundation
16.3 Information
get_chapter_info ( : : Chapter : Info )
:<Name>,’chapter’,Info:). The online texts are taken from the files english.hlp, english.sta, english.num,
english.key, and english.idx, which HALCON searches for in the currently used directory or in the directory
’help_dir’ (see also get_system and set_system).
Parameter
The texts are taken from the files english.hlp, english.sta, english.key, english.num, and english.idx, which
HALCON searches for in the currently used directory or in the directory ’help_dir’ (respectively
’user_help_dir’) (see also get_system and set_system). By adding ’.latex’ after the slot name, the text
of slots containing textual information can be made available in LaTeX notation.
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; string
Name of the operator on which more information is needed.
Default Value : ’get_operator_info’
. Slot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Desired information.
Default Value : ’abstract’
List of values : Slot ∈ {’short’, ’abstract’, ’procedure_class’, ’functionality’, ’effect’, ’complexity’,
’predecessor’, ’successor’, ’alternatives’, ’see_also’, ’keywords’, ’example’, ’attention’, ’result_state’,
’return_value’, ’references’, ’module’, ’html_path’, ’warning’}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Information (empty if no information is available)
Result
The operator get_operator_info returns the value 2 (H_MSG_TRUE) if the parameters are correct and the
help files are available. Otherwise an exception is raised.
Parallelization Information
get_operator_info is processed completely exclusively without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, query_operator_info,
query_param_info, get_param_info
Possible Successors
get_param_names, get_param_num, get_param_types
Alternatives
get_param_names
See also
query_operator_info, get_param_info, get_operator_name, get_param_num,
get_param_types
Module
Foundation
’type_list’: Permitted type(s) of data for parameter values. Values: ’real’, ’integer’, or ’string’ (for control parame-
ters); ’byte’, ’direction’, ’cyclic’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’complex’, ’vector_field’ (for images).
’default_type’: Default type for parameter values (for control parameters only). This is the type that
HALCON/C uses in the ’simple mode’. If ’none’ is indicated, the ’tuple mode’ must be used. Value:
’real’, ’integer’, ’string’, or ’none’.
’sem_type’: Semantic type of the parameter. This is important to allow the assignment of the parameters to object
classes in object-oriented languages (C++, .NET, COM). If more than one parameter belongs semantically to
one type, this fact is indicated as well. So far the following objects are supported:
object, image, region, xld,
xld_cont, xld_para, xld_poly, xld_ext_para, xld_mod_para,
integer, real, number, string,
channel, grayval, window,
histogram, distribution,
point(.x, .y), extent(.x, .y),
angle(.rad or .deg),
circle(.center.x, .center.y, .radius),
arc(.center.x, .center.y, .angle.rad, .begin.x, .begin.y),
ellipse(.center.x, .center.y, .angle.rad, .radius1, .radius2),
line(.begin.x, .begin.y, .end.x, .end.y),
rectangle(.origin.x, .origin.y, .corner.x, .corner.y
or .extent.x, .extent.y),
polygon(.x, .y), contour(.x, .y),
coordinates(.x, .y), chord(.x1, .x2, .y),
chain(.begin.x, .begin.y, .code).
’default_value’: Default value for the parameter (for input control parameters only). This is for information
only; the parameter value must be passed explicitly, even if the default value is used. The entry merely
serves as a hint and as a starting point for one’s own experiments. The values have been selected so that
they normally do not cause any errors but generate something that makes sense.
’multi_value’: ’true’, if more than one value is permitted in this parameter position, otherwise ’false’.
’multichannel’: ’true’, in case the input image object may be multichannel.
’mixed_type’: For control parameters only, and only if value tuples (’multi_value’ = ’true’) and various types
of data are permitted for the parameter values (’type_list’ having more than one value). In this case Slot
indicates whether values of various types may be mixed in one tuple (’true’ or ’false’).
’values’: Selection of values (optional).
’value_list’: In case a parameter can take only a limited number of values, this fact will be indicated explicitly
(optional).
’valuemin’: Minimum value of a value interval.
’valuemax’: Maximum value of a value interval.
’valuefunction’: Function describing the course of the values for a series of tests (lin, log, quadr, ...).
’steprec’: Recommended step width for the parameter values in a series of tests.
’stepmin’: Minimum step width of the parameter values in a series of tests.
’valuenumber’: Expression describing the number of parameters as such or in relation to other parameters.
’assertion’: Expression describing the parameter values as such or in relation to other parameters.
The online texts are taken from the files english.hlp, english.sta, english.key, english.num and english.idx,
which HALCON searches for in the current directory or in the directory ’help_dir’ (see also
get_system and set_system).
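A query sketch (the operator and parameter names are only illustrative choices):

   get_param_info ('threshold', 'MinGray', 'default_type', DefaultType)
   get_param_info ('threshold', 'MinGray', 'sem_type', SemType)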
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; string
Name of the procedure on whose parameter more information is needed.
Default Value : ’get_param_info’
. ParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the parameter on which more information is needed.
Default Value : ’Slot’
See also
get_param_num, get_param_types, get_operator_name
Module
Foundation
which allow more than one type, as for example write_string. The types of input parameters are
combined in the variable InpCtrlParType, whereas the types of output parameters are combined in the variable
OutpCtrlParType. The following types are possible:
’integer’: an integer.
’integer tuple’: an integer or a tuple of integers.
’real’: a floating point number.
’real tuple’: a floating point number or a tuple of floating point numbers.
’string’: a string.
’string tuple’: a string or a tuple of strings.
’no_default’: individual value of which the type cannot be determined.
’no_default tuple’: individual value or tuple of values of which the type cannot be determined.
’default’: individual value of unknown type, which the system assumes to be an ’integer’.
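A call sketch for querying these type descriptions; write_string is the example operator named above:

   /* Query the types of the control parameters of an operator */
   get_param_types ('write_string', InpCtrlParType, OutpCtrlParType)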
query_operator_info ( : : : Slots )
Possible Successors
get_operator_info
See also
get_operator_info
Module
Foundation
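query_operator_info returns the available slot names, which can then be passed on to get_operator_info, as sketched here:

   /* List all information slots, then query the first one */
   query_operator_info (Slots)
   get_operator_info ('get_operator_info', Slots[0], Information)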
query_param_info ( : : : Slots )
Module
Foundation
16.4 Operating-System
count_seconds ( : : : Seconds )
count_seconds(Start)
/* program segment to be measured */
count_seconds(End)
Seconds := End - Start
Result
The operator count_seconds always returns the value 2 (H_MSG_TRUE).
Parallelization Information
count_seconds is reentrant and processed without parallelization.
See also
set_system
Module
Foundation
system_call ( : : Command : )
Module
Foundation
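A call sketch for system_call; the command string is operating-system dependent and only illustrative:

   /* Execute a command in the system shell (Unix example) */
   system_call ('ls /tmp')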
wait_seconds ( : : Seconds : )
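wait_seconds can be combined with count_seconds to check the delay, as sketched here:

   count_seconds (Start)
   wait_seconds (0.5)
   count_seconds (End)
   /* End - Start is roughly 0.5 seconds */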
16.5 Parallelization
check_par_hw_potential ( : : AllInpPars : )
is necessary to start check_par_hw_potential once for each operating system in order to correctly measure
the rather strong influence of the operating system on the potential of exploiting multiprocessor hardware. Under
Windows, HALCON stores the parallelization knowledge that belongs to a specific machine in the machine’s
registry. For this, it uses a machine-specific registry key, which can be used by different users simultaneously.
Normally, this key can be written or changed by any user under Windows NT. Under Windows 2000, however,
the key may only be changed by users with administrator privileges or by users who at least belong to the
“power user” group. For all other users check_par_hw_potential has no effect (but does not return an
error). Under Linux/UNIX the parallelization information is stored in a file in the HALCON installation directory
($HALCONROOT). Again this means that check_par_hw_potential must be called by users with the
appropriate privileges, here by users who have write access to the HALCON directory. If HALCON is used within
a network under Linux/UNIX, the denoted file contains the information about every computer in the network for
which the hardware check has been successfully completed.
Attention
During its test loops check_par_hw_potential has to start every examined operator several times. Thus,
the processing of check_par_hw_potential can take rather a long time. check_par_hw_potential
is based on the automatic parallelization of operators, which is exclusively supported by Parallel HALCON. Thus,
check_par_hw_potential always returns an appropriate error if it is used with a non-parallel HALCON
version. check_par_hw_potential must be called by users with the appropriate privileges for storing the
parallelization information permanently (see the operator’s description above for more details on this subject).
load_par_knowledge ( : : FileName : )
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of parallelization knowledge file.
Default Value : ”
Result
load_par_knowledge returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
load_par_knowledge is local and processed completely exclusively without parallelization.
Possible Predecessors
store_par_knowledge
See also
store_par_knowledge, check_par_hw_potential
Module
Foundation
store_par_knowledge ( : : FileName : )
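A sketch of saving and restoring the parallelization knowledge explicitly; the file name is only illustrative:

   store_par_knowledge ('par_knowledge.dat')
   /* ... later, e.g. after a fresh start of the application ... */
   load_par_knowledge ('par_knowledge.dat')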
16.6 Parameters
get_system ( : : Query : Information )
a + in the list below. By passing the string ’?’ as the parameter Query, the names of all system parameters are
returned in Information.
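A query sketch:

   /* Names of all system parameters */
   get_system ('?', AllParams)
   /* A specific parameter, e.g. the HALCON version */
   get_system ('version', Version)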
The following system parameters can be queried:
Versions
’parallel_halcon’: The currently used variant of HALCON: Parallel HALCON (’true’) or Standard HAL-
CON (’false’)
’version’: HALCON version number, e.g.: 6.0
’last_update’: Date of creation of the HALCON library
’revision’: Revision number of the HALCON library, e.g.: 1
Upper Limits
’max_contour_length’: Maximum number of contour or polygon control points of a region.
’max_images’: Maximum number of images.
’max_channels’: Maximum number of channels of an image.
’max_obj_per_par’: Maximum number of image objects which may be passed per parameter in one call.
’max_inp_obj_par’: Maximum number of input parameters.
’max_outp_obj_par’: Maximum number of output parameters.
’max_inp_ctrl_par’: Maximum number of input control parameters.
’max_outp_ctrl_par’: Maximum number of output control parameters.
’max_window’: Maximum number of windows.
’max_window_types’: Maximum number of window systems.
’max_proc’: Maximum number of HALCON procedures (system defined + user defined).
Graphic
+’flush_graphic’: Determines whether the flush operation is called after each visualization operation
in HALCON. Unix operating systems flush the display buffer automatically, which makes this parameter
ineffective on those operating systems.
+’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values.
If the value is -1, the gray values will be scaled automatically (default).
+’backing_store’: Storage of the window contents in case of overlaps.
+’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graphics
window is displayed.
+’window_name’: (no description available)
+’default_font’: Name of the font to set at opening the window.
+’update_lut’: (no description available)
+’x_package’: Number of bytes which are sent to the X server during each transfer of data.
+’num_gray_4’: Number of colors reserved under X Windows concerning the output of graylevels (
disp_channel) on a machine with 4 bitplanes (16 colors).
+’num_gray_6’: Number of colors reserved under X Windows concerning the output of graylevels (
disp_channel) on a machine with 6 bitplanes (64 colors).
+’num_gray_8’: Number of colors reserved under X Windows concerning the output of graylevels (
disp_channel) on a machine with 8 bitplanes (256 colors).
+’num_gray_percentage’: HALCON reserves a certain amount of the available colors under X Windows
for the representation of graylevels ( disp_image). This should interfere with other X applications
as little as possible. However, if HALCON does not succeed in reserving a minimum percentage of
’num_gray_percentage’ of the necessary colors on the X server, a certain amount of the lookup table
will be claimed for the HALCON graylevels regardless of the consequences for other applications.
This may result in undesired shifts of color when switching between HALCON windows and windows
of other applications, or if (outside HALCON) a window dump is generated. The number of the real
graylevels to be reserved depends on the number of available bitplanes on the output machine (see also
’num_gray_*’). Naturally no colors will be reserved on monochrome machines - the graylevels will
instead be dithered when displayed. If graylevel displays are used, only different shades of gray will
be applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with 8 bit
pseudo-color displays. For machines with displays with 16 bits or more (true-color machines), no colors
are reserved for the display of gray levels in this case.
Note: Before the first window on a machine with x bitplanes is opened, num_gray_x indicates the
number of colors which have to be reserved for the display of graylevels; afterwards, however, it
indicates the number of colors which actually have been reserved.
+’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines
how many graphics colors (for use with set_color) should be reserved in the LUT on an 8 bit
pseudo-color display under X Windows.
+’num_graphic_2’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 2 bitplanes (4 colors).
+’num_graphic_4’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 4 bitplanes (16 colors).
+’num_graphic_6’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 6 bitplanes (64 colors).
+’num_graphic_8’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 8 bitplanes (256 colors).
Image Processing
+’neighborhood’: Using the 4 or 8 neighborhood.
+’init_new_image’: Initialization of images before applying grayvalue transformations.
+’no_object_result’: Behavior for empty object lists.
+’empty_region_result’: Reaction of procedures to input objects with empty regions, in cases where the
operation is actually not useful for such objects (e.g. certain region features, segmentation, etc.). Possible return
values:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
’neighborhood’: This parameter is used with all procedures which examine neighborhood relations:
connection, get_region_contour, get_region_chain, get_region_polygon,
get_region_thickness, boundary, paint_region, disp_region, fill_up,
contlength, shape_histo_all.
Value: 4 or 8
default: 8
’default_font’: Whenever a window is opened, a font is set for the text output, using the ’default_font’.
If the preset font cannot be found, another font name can be set before opening the window.
Value: file name of the font
default: fixed
’update_lut’: Determines whether the HALCON color tables are adapted according to their environment or not.
Value: ’true’ or ’false’
default: ’false’
’image_dir’: Image files (e.g. read_image and read_sequence) will be looked for in the current
directory and in ’image_dir’ (if no absolute paths are indicated). More than one directory name can be
indicated (search paths), separated by semicolons (Windows) or colons (Unix). The path can also be determined
using the environment variable HALCONIMAGES.
Value: name of the file path
default: ’$HALCONROOT/images’ or ’%HALCONROOT%/images’
’lut_dir’: Color tables ( set_lut) which are realized as an ASCII file will be looked for in the current
directory and in ’lut_dir’ (if no absolute paths are indicated). If HALCONROOT is set, HALCON will search
for the color tables in the sub-directory ’lut’.
Value: name of the file path
default: ’$HALCONROOT/lut’ or ’%HALCONROOT%/lut’
’help_dir’: The online text files german.* or english.* (.hlp, .sta, .key, .num and .idx) will be looked for in the
current directory or in ’help_dir’. This system parameter is necessary, for instance, when using the operators
get_operator_info and get_param_info. This parameter can also be set via the environment
variable HALCONROOT before initializing HALCON. In this case the variable must indicate the directory above
the help directories (that is, the HALCON home directory), e.g. ’/usr/local/halcon’.
Value: name of the file path
default: ’$HALCONROOT/help’ or ’%HALCONROOT%/help’
’init_new_image’: Determines whether new images shall be set to 0 before applying filters. This is not necessary
if the whole image is always filtered or if the data of unfiltered image areas are unimportant.
Value: ’true’ or ’false’
default: ’true’
’no_object_result’: Determines how operations processing iconic objects shall react if the object tuple is empty
(= no objects). Available values for Value:
’true’: the error will be ignored
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
default: ’true’
’empty_region_result’: Controls the reaction of procedures to input objects with empty regions, in cases where
the operation is actually not useful for such objects (e.g. certain region features, segmentation, etc.). Available
values for Value:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’filename_encoding’: This parameter determines how file and directory names are interpreted that are passed as
string parameters to and from HALCON. With the value ’locale’ these names are used unaltered, while with
the value ’utf8’ these names are interpreted as being UTF-8 encoded. In the latter case, HALCON tries to
translate input parameters from UTF-8 to the locale encoding according to the current system settings, and
output parameters from locale to UTF-8 encoding.
Value: ’locale’ or ’utf8’
default: ’locale’
’x_package’: The output of image data via the network may cause errors owing to the heavy load on the computer
or on the network. In order to avoid this, the data are transmitted in small packages. If the computer is used
locally, these units can be enlarged at will. This can lead to a notably improved output performance.
Value: package size (in bytes)
default: 20480
’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values. If the
value is -1, the gray values will be scaled automatically (default).
Value: -1 or 9..16
default: -1
’num_gray_4’: Number of colors to be reserved under X Windows to allow the output of graylevels
( disp_channel) on a machine with 4 bitplanes (16 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 12
default: 8
’num_gray_6’: Number of colors to be reserved under X Windows to allow the output of graylevels
( disp_channel) on a machine with 6 bitplanes (64 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 62
default: 50
’num_gray_8’: Number of colors to be reserved under X Windows to allow the output of graylevels
( disp_channel) on a machine with 8 bitplanes (256 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 254
default: 140
’num_gray_percentage’: Under X Windows HALCON reserves a part of the available colors for the
representation of gray values ( disp_channel). This should interfere with other X applications as little as possible.
However, if HALCON does not succeed in reserving a minimum percentage of ’num_gray_percentage’ of
the necessary colors on the X server, a certain amount of the lookup table will be claimed for the HALCON
graylevels regardless of the consequences. This may result in undesired shifts of color when switching
between HALCON windows and windows of other applications, or if (outside HALCON) a window dump is
generated. The number of the real graylevels to be reserved depends on the number of available bitplanes on
the output machine (see also ’num_gray_*’). Naturally no colors will be reserved on monochrome machines -
the graylevels will instead be dithered when displayed. If graylevel displays are used, only different shades
of gray will be applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with
8 bit pseudo-color displays. For machines with displays with 16 bits or more (true-color machines), no colors
are reserved for the display of gray levels in this case.
Note: This value may only be changed before the first window has been opened on the machine, because
before opening the first window on a machine with x bitplanes, num_gray_x indicates the number of colors
which have to be reserved for the display of graylevels; afterwards, however, it indicates the number of colors
which actually have been reserved.
Value: 0 - 100
default: 30
’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines how
many graphics colors (for use with set_color) should be reserved in the LUT on an 8 bit pseudo-color display
under X Windows.
default: 60
’int_zooming’: Determines if the zooming of images is done with integer arithmetic or with floating point
arithmetic.
default: ’true’
’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graphics
window is displayed.
default: ’default’
’num_graphic_2’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 2 bitplanes (4 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 2
default: 2
’num_graphic_4’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 4 bitplanes (16 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 14
default: 5
’num_graphic_6’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 6 bitplanes (64 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 62
default: 10
’num_graphic_8’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 8 bitplanes (256 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 64
default: 20
’graphic_colors’: HALCON reserves the first num_graphic_x colors from this list of color names as graphic
colors. As a default HALCON uses the same list which is also returned by query_all_colors.
However, the list can be changed individually: in this case a tuple of color names is passed as value. It is
recommended that such a tuple always include the colors ’black’ and ’white’, and optionally also ’red’,
’green’ and ’blue’. If ’default’ is set as Value, HALCON returns to the initial setting. Note: On graylevel
machines not the first x colors will be reserved, but the first x shades of gray from the list.
Attention: This value may only be changed before the first window has been opened on the machine.
Value: tuple of X Windows color names
default: see query_all_colors
’current_runlength_number’: Regions are stored internally in a certain runlength code. This parameter
determines the maximum number of chords which may be used for representing a region. Please note that
some procedures raise the number on their own if necessary.
The value can be enlarged as well as reduced.
Value: maximum number of chords
default: 50000
’clip_region’: Determines whether the regions of iconic objects of the HALCON database will be clipped to
the currently used image size or not. This is the case for example in procedures like gen_circle,
gen_rectangle1 or dilation1.
See also: reset_obj_db
Value: ’true’ or ’false’
default: ’true’
’do_low_error’: Determines whether HALCON should print low-level errors or not.
Value: ’true’ or ’false’
default: ’false’
’reentrant’: Determines whether HALCON must be reentrant for use within a parallel programming
environment (e.g. a multithreaded application). This parameter is only of importance for Parallel HALCON,
which can process several operators concurrently. Thus, the parameter is ignored by the sequentially working
HALCON version. If it is set to ’true’, Parallel HALCON internally uses synchronization mechanisms to
protect shared data objects from concurrent accesses. Though this is inevitable with any effectively
parallel working application, it may cause undesired overhead if used within an application which works purely
sequentially. The latter case can be signalled by setting ’reentrant’ to ’false’. This switches off all internal
synchronization mechanisms and thus reduces overhead. Of course, Parallel HALCON then is no longer
thread-safe, which causes another side effect: Parallel HALCON will then no longer use the internal
parallelization of operators, because this requires reentrancy. Setting ’reentrant’ to ’true’ resets Parallel HALCON
to its default state, i.e. it is reentrant (and thread-safe) and uses the automatic parallelization to speed up
the processing of operators on multiprocessor machines.
processing at the cost of higher memory consumption. Standard HALCON treats the value ’exclusive’ like
the value ’shared’.
Value: ’idle’, ’exclusive’, ’shared’
default: ’exclusive’
’temporary_mem_cache’: Flag indicating whether unused temporary memory of an operator should be cached
(’true’, default) or freed (’false’). A single-threaded application can be sped up by caching memory, whereas
freeing reduces the memory consumption of a multithreaded application at the expense of speed.
Value: ’true’ or ’false’
default: ’true’
’alloctmp_max_blocksize’: Maximum size of memory blocks to be allocated within temporary memory
management. (No effect if ’temporary_mem_cache’ == ’false’.)
Value: -1 or >= 0
default: -1
’mmx_enable’: Flag indicating whether MMX operations are used to accelerate selected image processing
operators (’true’) or not (’false’). (No effect if ’mmx_supported’ == ’false’; see also the operator get_system.)
default: ’true’ if the CPU supports MMX, else ’false’
’language’: Language used for error messages.
Value: ’english’ or ’german’
default: ’english’
Parameter
. SystemParameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the system parameter to be changed.
Default Value : ’image_dir’
List of values : SystemParameter ∈ {’alloctmp_max_blocksize’, ’backing_store’,
’border_shape_models’, ’clip_region’, ’clock_mode’, ’current_runlength_number’, ’default_font’,
’do_low_error’, ’empty_region_result’, ’extern_alloc_funct’, ’extern_free_funct’, ’filename_encoding’,
’flush_file’, ’flush_graphic’, ’global_mem_cache’, ’graphic_colors’, ’help_dir’, ’icon_name’,
’image_cache_capacity’, ’image_dir’, ’image_dpi’, ’init_new_image’, ’int2_bits’, ’int_zooming’, ’language’,
’lut_dir’, ’max_connection’, ’mmx_enable’, ’neighborhood’, ’no_object_result’, ’num_graphic_2’,
’num_graphic_4’, ’num_graphic_6’, ’num_graphic_8’, ’num_graphic_percentage’, ’num_gray_4’,
’num_gray_6’, ’num_gray_8’, ’num_gray_percentage’, ’ocr_trainf_version’, ’parallelize_operators’,
’pregenerate_shape_models’, ’reentrant’, ’store_empty_region’, ’temporary_mem_cache’, ’thread_num’,
’thread_pool’, ’update_lut’, ’x_package’}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer / real
New value of the system parameter.
Default Value : ’true’
Suggested values : Value ∈ {’true’, ’false’, 0, 4, 8, 100, 140, 255}
Result
The operator set_system returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception will be raised.
Parallelization Information
set_system is local and processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db, get_system, set_check
See also
get_system, set_check, count_seconds
Module
Foundation
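A sketch of typical settings (the directory path is only an illustrative value):

   set_system ('image_dir', '/home/user/images')
   set_system ('neighborhood', 8)
   set_system ('init_new_image', 'false')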
16.7 Serial
clear_serial ( : : SerialHandle, Channel : )
close_all_serials ( : : : )
close_serial ( : : SerialHandle : )
Parallelization Information
close_serial is reentrant and processed without parallelization.
Possible Predecessors
open_serial
See also
open_serial, close_file
Module
Foundation
serial devices usually are named ’/dev/tty*’. The parameters of the serial device, e.g., its speed or number of data
bits, are set to the system default values for the respective device after the device has been opened. They can be set
or changed by calling set_serial_param.
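A usage sketch; the port name, the communication parameter values, and the parameter order assumed for set_serial_param (baud rate, data bits, parity, stop bits, flow control, timeouts) are illustrative only:

   open_serial ('COM1', SerialHandle)
   /* assumed order: baud rate, data bits, parity, stop bits, flow, timeouts */
   set_serial_param (SerialHandle, 9600, 8, 'none', 1, 'none', 1000, 100)
   /* send the character codes of a string */
   write_serial (SerialHandle, ords('hello'))
   close_serial (SerialHandle)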
Parameter
. PortName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; string
Name of the serial port.
Default Value : ’COM1’
Suggested values : PortName ∈ {’COM1’, ’COM2’, ’COM3’, ’COM4’, ’/dev/ttya’, ’/dev/ttyb’,
’/dev/tty00’, ’/dev/tty01’, ’/dev/ttyd1’, ’/dev/ttyd2’, ’/dev/cua0’, ’/dev/cua1’}
. SerialHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; integer
Serial interface handle.
Result
If the parameters are correct and the device could be opened, the operator open_serial returns the value 2
(H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
open_serial is reentrant and processed without parallelization.
Possible Successors
set_serial_param, read_serial, write_serial, close_serial
See also
set_serial_param, get_serial_param, open_file
Module
Foundation
If the parameters are correct and the parameters of the device could be set, the operator set_serial_param
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
set_serial_param is reentrant and processed without parallelization.
Possible Predecessors
open_serial, get_serial_param
Possible Successors
read_serial, write_serial
See also
get_serial_param
Module
Foundation
16.8 Sockets
close_socket ( : : Socket : )
Close a socket.
close_socket closes a socket that was previously opened with open_socket_accept,
open_socket_connect, or socket_accept_connect. For a detailed example, see
open_socket_accept.
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; integer
Socket number.
Parallelization Information
close_socket is reentrant and processed without parallelization.
See also
open_socket_accept, open_socket_connect, socket_accept_connect
Module
Foundation
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; integer
Socket number.
. DataType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Data type of next HALCON data.
Parallelization Information
get_next_socket_data_type is reentrant and processed without parallelization.
See also
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
Module
Foundation
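get_next_socket_data_type can be used to dispatch on the type of the next incoming item, as sketched here; the exact type name strings (’image’, ’region’, ’tuple’) are assumed to correspond to the receive operators listed above:

   get_next_socket_data_type (Socket, DataType)
   if (DataType = 'image')
       receive_image (Image, Socket)
   elseif (DataType = 'region')
       receive_region (Region, Socket)
   elseif (DataType = 'tuple')
       receive_tuple (Socket, Tuple)
   endif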
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; integer
Socket number.
. SocketDescriptor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Socket descriptor used by the operating system.
Parallelization Information
get_socket_descriptor is reentrant and processed without parallelization.
Possible Predecessors
open_socket_accept, open_socket_connect, socket_accept_connect
See also
set_socket_timeout
Module
Foundation
/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
/* Busy wait for an incoming connection */
dev_error_var (Error, 1)
dev_set_check (’~give_error’)
OpenStatus := 5
while (OpenStatus # 2)
socket_accept_connect (AcceptingSocket, ’false’, Socket)
OpenStatus := Error
wait_seconds (0.2)
endwhile
dev_set_check (’give_error’)
/* Connection established */
receive_image (Image, Socket)
threshold (Image, Region, 0, 63)
send_region (Region, Socket)
receive_region (ConnectedRegions, Socket)
area_center (ConnectedRegions, Area, Row, Column)
send_tuple (Socket, Area)
send_tuple (Socket, Row)
send_tuple (Socket, Column)
close_socket (Socket)
close_socket (AcceptingSocket)
/* Process 2 */
dev_set_colored (12)
open_socket_connect ('localhost', 3000, Socket)
read_image (Image, 'fabrik')
send_image (Image, Socket)
receive_region (Region, Socket)
connection (Region, ConnectedRegions)
send_region (ConnectedRegions, Socket)
receive_tuple (Socket, Area)
receive_tuple (Socket, Row)
receive_tuple (Socket, Column)
close_socket (Socket)
Parallelization Information
open_socket_accept is reentrant and processed without parallelization.
Possible Successors
socket_accept_connect
See also
open_socket_connect, close_socket, get_socket_timeout, set_socket_timeout,
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
Module
Foundation
open_socket_connect opens a connection to an accepting socket on the computer HostName, which listens
on port Port. The listening socket in the other HALCON process must have been created earlier with the operator
open_socket_accept. The socket thus created is returned in Socket. To establish the connection, the
HALCON process, in which the accepting socket resides, must call socket_accept_connect. For a detailed
example, see open_socket_accept.
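The connect/accept handshake described above can be sketched with ordinary Python sockets (an illustrative analogue only; HALCON's own socket protocol is not reproduced here). One side listens; the other retries the connection until the listener is ready, mirroring the busy-wait loop in the open_socket_accept example:

```python
import socket
import time

# Rough analogue of the OpenStatus busy-wait loop in the HDevelop
# example: keep trying to connect until an accepting socket answers.
def connect_with_retry(host, port, retries=25, delay=0.2):
    for _ in range(retries):
        try:
            return socket.create_connection((host, port))
        except OSError:
            # No listener yet; wait and retry, like wait_seconds (0.2).
            time.sleep(delay)
    raise OSError("no accepting socket on %s:%d" % (host, port))
```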
Parameter
. Image (output_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Received image.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; integer
Socket number.
Parallelization Information
receive_image is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_image, send_region, receive_region, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation
receive_xld reads an XLD object that was sent over the socket connection determined by Socket by another
HALCON process using the operator send_xld. If no XLD object has been sent, the HALCON process calling
receive_xld blocks until enough data arrives. For a detailed example, see send_xld.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Image to be sent.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; integer
Socket number.
Parallelization Information
send_image is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_image, send_region, receive_region, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation
Parameter
/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
socket_accept_connect (AcceptingSocket, 'true', Socket)
receive_image (Image, Socket)
edges_sub_pix (Image, Edges, 'canny', 1.5, 20, 40)
send_xld (Edges, Socket)
receive_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, 'polygon', 1, 5)
gen_parallels_xld (Polygons, Parallels, 10, 30, 0.15, 'true')
send_xld (Parallels, Socket)
receive_xld (ModParallels, Socket)
receive_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
close_socket (AcceptingSocket)
/* Process 2 */
dev_set_colored (12)
open_socket_connect ('localhost', 3000, Socket)
read_image (Image, 'mreut')
send_image (Image, Socket)
receive_xld (Edges, Socket)
gen_polygons_xld (Edges, Polygons, 'ramer', 2)
send_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, 'polygon', 1, 5)
receive_xld (Parallels, Socket)
mod_parallels_xld (Parallels, Image, ModParallels, ExtParallels,
                   0.4, 160, 220, 10)
send_xld (ModParallels, Socket)
send_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
Parallelization Information
send_xld is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_xld, send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple, get_next_socket_data_type
Module
Foundation
Parameter
Tools
17.1 2D-Transformations
affine_trans_pixel ( : : HomMat2D, Row, Col : RowTrans, ColTrans )
Hence,
affine_trans_pixel (HomMat2D, Row, Col, RowTrans, ColTrans)
corresponds to the following operator sequence:
affine_trans_point_2d (HomMat2D, Row+0.5, Col+0.5, RowTmp, ColTmp)
RowTrans := RowTmp-0.5
ColTrans := ColTmp-0.5
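The operator sequence above can be sketched in plain Python (hypothetical helpers, not the HALCON operators): pixel coordinates address the center of a pixel, so affine_trans_pixel shifts by +0.5 before and by −0.5 after applying the point transformation. The matrix is assumed to be stored as the affine 6-tuple [ra, rb, tc, rd, re, tf]:

```python
# Apply an affine matrix stored as the 6-tuple [ra, rb, tc, rd, re, tf].
def affine_trans_point_2d(h, x, y):
    ra, rb, tc, rd, re, tf = h
    return ra * x + rb * y + tc, rd * x + re * y + tf

# Pixel variant: shift to the pixel center, transform, shift back.
def affine_trans_pixel(h, row, col):
    row_tmp, col_tmp = affine_trans_point_2d(h, row + 0.5, col + 0.5)
    return row_tmp - 0.5, col_tmp - 0.5

# For a pure translation by (10, 20) both conventions agree:
assert affine_trans_pixel([1, 0, 10, 0, 1, 20], 64, 64) == (74.0, 84.0)
```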
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real / integer
Input pixel(s) (row coordinate).
Default Value : 64
Suggested values : Row ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real / integer
Input pixel(s) (column coordinate).
Default Value : 64
Suggested values : Col ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. RowTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real
Output pixel(s) (row coordinate).
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
The transformation matrix can be created using the operators hom_mat2d_identity,
hom_mat2d_rotate, hom_mat2d_translate, etc., or can be the result of operators like
vector_angle_to_rigid.
For example, if HomMat2D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:
\[
\begin{pmatrix} Q_x \\ Q_y \\ 1 \end{pmatrix}
=
\begin{pmatrix} R & t \\ 0 \; 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} P_x \\ P_y \\ 1 \end{pmatrix}
=
\begin{pmatrix} R \cdot \begin{pmatrix} P_x \\ P_y \end{pmatrix} + t \\ 1 \end{pmatrix}
\]
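The rigid-body formula above can be checked numerically with a short Python sketch (an illustration, not HALCON code): a point P is rotated by the 2×2 matrix R(phi) and then translated by t, i.e. Q = R · P + t.

```python
from math import cos, sin, isclose, pi

# Apply a rotation by phi followed by a translation (tx, ty) to (px, py).
def rigid_trans(phi, tx, ty, px, py):
    qx = cos(phi) * px - sin(phi) * py + tx
    qy = sin(phi) * px + cos(phi) * py + ty
    return qx, qy

# Rotating (1, 0) by 90 degrees and translating by (2, 3) yields (2, 4).
qx, qy = rigid_trans(pi / 2, 2, 3, 1, 0)
assert isclose(qx, 2, abs_tol=1e-12) and isclose(qy, 4, abs_tol=1e-12)
```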
Parameter
Cols). The average projection error of the reconstructed points is returned in Error. This can be used to check
whether the optimization has converged to useful values.
Parameter
* Assume that Images contains the four images of the mosaic in the
* layout given in the above description. Then the following example
* computes the bundle-adjusted transformation matrices.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
    select_obj (Images, From[J], ImageF)
    select_obj (Images, To[J], ImageT)
    points_foerstner (ImageF, 1, 2, 3, 100, 0.1, 'gauss', 'true',
                      RowsF, ColsF, _, _, _, _, _, _, _, _)
    points_foerstner (ImageT, 1, 2, 3, 100, 0.1, 'gauss', 'true',
                      RowsT, ColsT, _, _, _, _, _, _, _, _)
    proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT,
                              'ncc', 10, 0, 0, 480, 640, 0, 0.5,
                              'gold_standard', 2, 42, HomMat2D,
                              Points1, Points2)
    HomMatrices2D := [HomMatrices2D,HomMat2D]
    Rows1 := [Rows1,subset(RowsF,Points1)]
    Cols1 := [Cols1,subset(ColsF,Points1)]
    Rows2 := [Rows2,subset(RowsT,Points2)]
    Cols2 := [Cols2,subset(ColsT,Points2)]
    NumMatches := [NumMatches,|Points1|]
endfor
bundle_adjust_mosaic (4, 1, From, To, HomMatrices2D, Rows1, Cols1,
                      Rows2, Cols2, NumMatches, 'rigid', MosaicMatrices)
/* Generate the mosaic from the bundle-adjusted matrices */
gen_bundle_adjusted_mosaic (Images, MosaicImage, MosaicMatrices,
                            'default', 'false', TransMat2D)
Result
If the parameters are valid, the operator bundle_adjust_mosaic returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
bundle_adjust_mosaic is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac
Possible Successors
gen_bundle_adjusted_mosaic
See also
gen_projective_mosaic
Module
Matching
hom_mat2d_compose ( : : HomMat2DLeft,
HomMat2DRight : HomMat2DCompose )
For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:
\[
\mathrm{HomMat2DCompose}
=
\begin{pmatrix} R_l & t_l \\ 0 \; 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} R_r & t_r \\ 0 \; 0 & 1 \end{pmatrix}
=
\begin{pmatrix} R_l \cdot R_r & R_l \cdot t_r + t_l \\ 0 \; 0 & 1 \end{pmatrix}
\]
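The composition formula above can be verified with 3×3 matrices in plain Python (hypothetical helpers, not the HALCON operator itself):

```python
# Multiply two 3x3 matrices given as nested lists.
def mat3_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Build a rigid matrix [[c, -s, tx], [s, c, ty], [0, 0, 1]].
def rigid(c, s, tx, ty):
    return [[c, -s, tx], [s, c, ty], [0, 0, 1]]

left = rigid(0, 1, 1, 2)    # rotation by 90 degrees, translation (1, 2)
right = rigid(1, 0, 3, 4)   # identity rotation, translation (3, 4)
composed = mat3_mul(left, right)
# Translation part is R_l * t_r + t_l = (-4 + 1, 3 + 2) = (-3, 5).
assert [row[2] for row in composed] == [-3, 5, 1]
```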
Parameter
Result
If the parameters are valid, the operator hom_mat2d_compose returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
hom_mat2d_compose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_compose, hom_mat2d_translate, hom_mat2d_translate_local,
hom_mat2d_scale, hom_mat2d_scale_local, hom_mat2d_rotate,
hom_mat2d_rotate_local, hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation
hom_mat2d_identity ( : : : HomMat2DIdentity )
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, HomMat2DIdentity is stored as the
tuple [1,0,0,0,1,0].
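The storage convention can be sketched in plain Python (an illustrative helper, not a HALCON call): the tuple [ra, rb, tc, rd, re, tf] is read row by row, and the constant last row [0, 0, 1] is appended.

```python
# Expand an affine 6-tuple to its full 3x3 homogeneous matrix.
def tuple_to_mat3(t):
    ra, rb, tc, rd, re, tf = t
    return [[ra, rb, tc], [rd, re, tf], [0, 0, 1]]

# The identity tuple [1,0,0,0,1,0] expands to the 3x3 identity matrix.
assert tuple_to_mat3([1, 0, 0, 0, 1, 0]) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```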
Parameter
. HomMat2DIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Transformation matrix.
Result
hom_mat2d_identity always returns 2 (H_MSG_TRUE).
Parallelization Information
hom_mat2d_identity is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation
The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathrm{HomMat2DRotate}
=
\begin{pmatrix} 1 & 0 & +P_x \\ 0 & 1 & +P_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} R & 0 \\ 0 \; 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\mathrm{HomMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_rotate_local.
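The chain T(+P) · R · T(−P) described above can be checked numerically (plain-Python sketch, not the HALCON implementation): rotating around the fixed point (px, py) leaves that point unchanged.

```python
from math import cos, sin, isclose, pi

# Multiply two 3x3 matrices given as nested lists.
def mat3_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(phi):
    return [[cos(phi), -sin(phi), 0], [sin(phi), cos(phi), 0], [0, 0, 1]]

# Build the chain T(+P) * R * T(-P) and apply it to the fixed point.
px, py = 64, 128
m = mat3_mul(mat3_mul(translate(px, py), rotate(pi / 3)), translate(-px, -py))
fx = m[0][0] * px + m[0][1] * py + m[0][2]
fy = m[1][0] * px + m[1][1] * py + m[1][2]
assert isclose(fx, px) and isclose(fy, py)   # fixed point is unchanged
```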
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_rotate returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
hom_mat2d_rotate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_rotate_local
Module
Foundation
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DRotate.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. HomMat2DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_rotate_local returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
hom_mat2d_rotate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_rotate
Module
Foundation
The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathrm{HomMat2DScale}
=
\begin{pmatrix} 1 & 0 & +P_x \\ 0 & 1 & +P_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} S & 0 \\ 0 \; 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\mathrm{HomMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_scale_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sx ≠ 0
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DScale.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
\[
\mathrm{Axis} = \text{'x'}: \quad
\mathrm{HomMat2DSlant}
=
\begin{pmatrix} \cos(\mathrm{Theta}) & 0 & 0 \\ \sin(\mathrm{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathrm{HomMat2D}
\]
\[
\mathrm{Axis} = \text{'y'}: \quad
\mathrm{HomMat2DSlant}
=
\begin{pmatrix} 1 & -\sin(\mathrm{Theta}) & 0 \\ 0 & \cos(\mathrm{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathrm{HomMat2D}
\]
The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DSlant. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the slant is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations for Axis = ’x’:
\[
\mathrm{HomMat2DSlant}
=
\begin{pmatrix} 1 & 0 & +P_x \\ 0 & 1 & +P_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} \cos(\mathrm{Theta}) & 0 & 0 \\ \sin(\mathrm{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\mathrm{HomMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_slant_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Coordinate axis that is slanted.
Default Value : 'x'
List of values : Axis ∈ {'x', 'y'}
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
\[
\mathrm{Axis} = \text{'x'}: \quad
\mathrm{HomMat2DSlant}
=
\mathrm{HomMat2D} \cdot
\begin{pmatrix} \cos(\mathrm{Theta}) & 0 & 0 \\ \sin(\mathrm{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
\mathrm{Axis} = \text{'y'}: \quad
\mathrm{HomMat2DSlant}
=
\mathrm{HomMat2D} \cdot
\begin{pmatrix} 1 & -\sin(\mathrm{Theta}) & 0 \\ 0 & \cos(\mathrm{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DSlant.
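The difference between hom_mat2d_slant and hom_mat2d_slant_local can be sketched in plain Python (illustrative only): the global variant multiplies the slant matrix from the left (Slant · HomMat2D), the local variant from the right (HomMat2D · Slant), so the two results generally differ.

```python
from math import cos, sin, pi

# Multiply two 3x3 matrices given as nested lists.
def mat3_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Slant matrix for Axis = 'x', as defined above.
def slant_x(theta):
    return [[cos(theta), 0, 0], [sin(theta), 1, 0], [0, 0, 1]]

hom = [[1, 0, 10], [0, 1, 20], [0, 0, 1]]        # translation by (10, 20)
global_variant = mat3_mul(slant_x(pi / 6), hom)  # like hom_mat2d_slant
local_variant = mat3_mul(hom, slant_x(pi / 6))   # like hom_mat2d_slant_local
assert global_variant != local_variant           # ordering matters
```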
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Coordinate axis that is slanted.
Default Value : ’x’
List of values : Axis ∈ {’x’, ’y’}
. HomMat2DSlant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_slant_local returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
hom_mat2d_slant_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_slant
Module
Foundation
Parameter
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_translate_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_translate returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
hom_mat2d_translate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_translate_local
Module
Foundation
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_translate_local returns 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Parallelization Information
hom_mat2d_translate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_translate
Module
Foundation
Possible Predecessors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_compose, hom_mat2d_invert
Module
Foundation
Since the image of a plane containing the points (x, y, f, 1)^T is to be calculated, the last two columns of Q can be
joined:
\[
R =
\begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}
=
\begin{pmatrix} q_{11} & q_{12} & f \cdot q_{13} + q_{14} \\ q_{21} & q_{22} & f \cdot q_{23} + q_{24} \\ q_{31} & q_{32} & f \cdot q_{33} + q_{34} \end{pmatrix}
=
Q \cdot
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & f \\ 0 & 0 & 1 \end{pmatrix}
\]
Finally, the columns and rows of R are swapped in a way that the first row of P contains the transformation of the
row coordinates and the second row contains the transformation of the column coordinates so that P can be used
directly in projective_trans_image:
HALCON 8.0.2
1030 CHAPTER 17. TOOLS
$$
P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot R \cdot \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
$$
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
3 × 4 3D transformation matrix.
. PrincipalPointRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Row coordinate of the principal point.
Default Value : 256
Suggested values : PrincipalPointRow ∈ {16, 32, 64, 128, 240, 256, 512}
. PrincipalPointCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Column coordinate of the principal point.
Default Value : 256
Suggested values : PrincipalPointCol ∈ {16, 32, 64, 128, 256, 320, 512}
. Focus (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Focal length in pixels.
Default Value : 256
Suggested values : Focus ∈ {1, 2, 5, 256, 32768}
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Homogeneous projective transformation matrix.
Parallelization Information
hom_mat3d_project is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_rotate, hom_mat3d_translate, hom_mat3d_scale
Possible Successors
projective_trans_image, projective_trans_point_2d, projective_trans_region,
projective_trans_contour_xld, hom_mat2d_invert
Module
Foundation
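The row/column swap P = S · R · S described above can be sketched in plain Python (not HDevelop); R here is an arbitrary example matrix, and matmul is a helper defined only for this sketch:

```python
def matmul(a, b):
    # Multiply two 3x3 matrices given as nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Permutation matrix that swaps the first two coordinates.
S = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]

# An arbitrary example 3x3 matrix R in (x, y, w) order.
R = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# P = S * R * S swaps both the rows and the columns, so P operates on
# (row, column, w) vectors instead of (x, y, w) vectors.
P = matmul(matmul(S, R), S)
print(P)  # [[5, 4, 6], [2, 1, 3], [8, 7, 9]]
```

Conjugating with the same permutation on both sides is what allows the resulting matrix to be passed directly to projective_trans_image, which expects (row, column) order.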
If fewer than 4 pairs of points (Px, Py, Pw), (Qx, Qy, Qw) are given, no unique solution exists; if exactly 4 pairs
are supplied, the matrix HomMat2D transforms them exactly as desired; and if more than 4 point pairs are given,
hom_vector_to_proj_hom_mat2d seeks to minimize the transformation error. To achieve such a minimization,
two different algorithms are available; the algorithm to use can be selected with the parameter Method. For
conventional geometric problems, Method=’normalized_dlt’ usually yields better results. However, if one of the
coordinates Qw or Pw equals 0, Method=’dlt’ must be chosen.
In contrast to vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d uses homogeneous
coordinates for the points, and hence points at infinity (Pw = 0 or Qw = 0) can be used to determine the transforma-
tion. If finite points are used, typically Pw and Qw are set to 1. In this case, vector_to_proj_hom_mat2d can
also be used. vector_to_proj_hom_mat2d has the advantage that one additional optimization method can
be used and that the covariances of the points can be taken into account. If the correspondence between the points
has not been determined, proj_match_points_ransac should be used to determine the correspondence as
well as the transformation.
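The effect of homogeneous coordinates, including points at infinity, can be sketched in plain Python (not HDevelop); the matrix H below is a hypothetical projective transformation, not one computed by the operator:

```python
def proj_trans(hom_mat2d, px, py, pw):
    # Apply a 3x3 projective matrix to the homogeneous point (px, py, pw).
    return tuple(row[0] * px + row[1] * py + row[2] * pw for row in hom_mat2d)

# Hypothetical projective matrix (note the nonzero entry in the last row).
H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 7.0],
     [0.2, 0.0, 1.0]]

# A finite point (w = 1) ...
print(proj_trans(H, 2.0, 3.0, 1.0))
# ... and a point at infinity (w = 0), which encodes only a direction; under a
# projective matrix it maps to a point with nonzero w, i.e. a finite vanishing point.
print(proj_trans(H, 1.0, 0.0, 0.0))
```

This is exactly why hom_vector_to_proj_hom_mat2d can use correspondences with Pw = 0 or Qw = 0, while a purely Euclidean formulation cannot.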
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Input points 1 (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Input points 1 (y coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Input points 1 (w coordinate).
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Input points 2 (x coordinate).
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Input points 2 (y coordinate).
. Qw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Input points 2 (w coordinate).
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Estimation algorithm.
Default Value : ’normalized_dlt’
List of values : Method ∈ {’normalized_dlt’, ’dlt’}
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Homogeneous projective transformation matrix.
Parallelization Information
hom_vector_to_proj_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, points_foerstner, points_harris
Possible Successors
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
vector_to_proj_hom_mat2d, proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Calibration
Compute a projective transformation matrix between two images by finding correspondences between points.
Given a set of coordinates of characteristic points (Cols1, Rows1) and (Cols2, Rows2) in both input images
Image1 and Image2, proj_match_points_ransac automatically determines corresponding points and
the homogeneous projective transformation matrix HomMat2D that best transforms the corresponding points
from the different images into each other. The characteristic points can, for example, be extracted with
points_foerstner or points_harris.
The transformation is determined in two steps: First, gray value correlations of mask windows around the input
points in the first and the second image are determined and an initial matching between them is generated using
the similarity of the windows in both images.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found in this way is accepted only if the value
of the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the algorithm’s performance, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the transformation contains a rotation, i.e., if the first image is rotated with respect to the second image, the
parameter Rotation may contain an estimate for the rotation angle or an angle interval in radians. A good guess
will increase the quality of the gray value matching. If the actual rotation differs too much from the specified
estimate, the matching will typically fail. The larger the given interval, the slower the operator is, since the entire
algorithm is run for all relevant angles within the interval.
Once the initial matching is complete, a randomized search algorithm (RANSAC) is used to determine the transfor-
mation matrix HomMat2D. It tries to find the matrix that is consistent with a maximum number of correspondences.
For a point to be accepted, its distance from the coordinates predicted by the transformation must not exceed the
threshold DistanceThreshold.
Once a choice has been made, the matrix is further optimized using all consistent points. For this optimization, the
EstimationMethod can be chosen to either be the slow but mathematically optimal ’gold_standard’ method
or the faster ’normalized_dlt’. Here, the algorithms of vector_to_proj_hom_mat2d are used.
Point pairs that still violate the consistency condition for the final transformation are dropped; the matched points
are returned as control values. Points1 contains the indices of the matched input points from the first image,
Points2 contains the indices of the corresponding points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to
obtain reproducible results. If RandSeed is set to a positive number, the operator yields the same result on every
call with the same parameters because the internally used random number generator is initialized with the seed
value. If RandSeed = 0, the random number generator is initialized with the current time. Hence, the results
may not be reproducible in this case.
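The RANSAC principle used by proj_match_points_ransac can be sketched with a much simpler model, a pure 2D translation (plain Python, not HDevelop; the operator itself estimates a full projective matrix and uses gray-value matching to obtain candidate pairs):

```python
import random

def ransac_translation(p1, p2, dist_threshold, iterations, seed):
    # Estimate a 2D translation between index-matched point lists that may
    # contain outliers: repeatedly pick one pair, hypothesize a translation,
    # and keep the hypothesis consistent with the most pairs.
    random.seed(seed)  # a fixed seed makes the result reproducible (cf. RandSeed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        i = random.randrange(len(p1))          # minimal sample: one pair
        dr = p2[i][0] - p1[i][0]
        dc = p2[i][1] - p1[i][1]
        inliers = [j for j in range(len(p1))
                   if abs(p1[j][0] + dr - p2[j][0]) <= dist_threshold
                   and abs(p1[j][1] + dc - p2[j][1]) <= dist_threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dr, dc), inliers
    return best_model, best_inliers

pts1 = [(0, 0), (10, 5), (20, 15), (3, 7)]
pts2 = [(2, 3), (12, 8), (22, 18), (99, 99)]   # the last pair is an outlier
model, inliers = ransac_translation(pts1, pts2, 0.5, 20, seed=42)
print(model, inliers)
```

As with RandSeed, the fixed seed makes the randomized search deterministic; the outlier pair never gathers enough support and is excluded from the consensus set.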
Parameter
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
hom_vector_to_proj_hom_mat2d, vector_to_proj_hom_mat2d
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
$$
\begin{pmatrix} \mathit{RTrans} \\ \mathit{CTrans} \\ \mathit{WTrans} \end{pmatrix} = \mathit{HomMat2D} \cdot \begin{pmatrix} \mathit{Row} \\ \mathit{Col} \\ 1 \end{pmatrix}
\qquad
\begin{pmatrix} \mathit{RowTrans} \\ \mathit{ColTrans} \end{pmatrix} = \begin{pmatrix} \mathit{RTrans} / \mathit{WTrans} \\ \mathit{CTrans} / \mathit{WTrans} \end{pmatrix}
$$
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d
Module
Foundation
To transform the homogeneous coordinates to Euclidean coordinates, they have to be divided by Qw:
$$
\begin{pmatrix} E_x \\ E_y \end{pmatrix} = \begin{pmatrix} Q_x / Q_w \\ Q_y / Q_w \end{pmatrix}
$$
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
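The division by Qw can be sketched in plain Python (not HDevelop); to_euclidean is a helper name introduced only for this sketch:

```python
def to_euclidean(qx, qy, qw):
    # Convert a homogeneous point (Qx, Qy, Qw) to Euclidean coordinates.
    if qw == 0:
        raise ValueError("point at infinity has no Euclidean representation")
    return qx / qw, qy / qw

print(to_euclidean(6.0, 9.0, 3.0))  # (2.0, 3.0)
```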
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Homogeneous projective transformation matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (y coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (w coordinate).
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Output point (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Output point (y coordinate).
. Qw (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Output point (w coordinate).
Parallelization Information
projective_trans_point_2d is reentrant and processed without parallelization.
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_pixel
Module
Foundation
The coordinates of the original point are passed in (Row1,Column1), while the corresponding angle is passed
in Angle1. The coordinates of the transformed point are passed in (Row2,Column2), while the corresponding
angle is passed in Angle2. The following equation describes the transformation of the point using homogeneous
vectors:
$$
\begin{pmatrix} \mathit{Row2} \\ \mathit{Column2} \\ 1 \end{pmatrix} = \mathit{HomMat2D} \cdot \begin{pmatrix} \mathit{Row1} \\ \mathit{Column1} \\ 1 \end{pmatrix}
$$
In particular, the operator vector_angle_to_rigid is useful to construct a rigid affine transformation from
the results of the matching operators (e.g., find_shape_model or best_match_rot_mg), which trans-
forms a reference image to the current image or (if the parameters are passed in reverse order) from the current
image to the reference image.
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
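The construction can be sketched in plain Python (not HDevelop); the translate-rotate-translate chain shown here is the standard way to build such a rigid transformation and is an assumption about the operator's internals, and the rotation sign convention in (row, column) image coordinates may differ:

```python
import math

def vector_angle_to_rigid(r1, c1, a1, r2, c2, a2):
    # Rigid transform: move (r1, c1) to the origin, rotate by the angle
    # difference, then move the origin to (r2, c2).  Returned as a 2x3
    # affine matrix [[ca, -sa, tr], [sa, ca, tc]] in (row, column) order.
    phi = a2 - a1
    ca, sa = math.cos(phi), math.sin(phi)
    tr = r2 - (ca * r1 - sa * c1)
    tc = c2 - (sa * r1 + ca * c1)
    return [[ca, -sa, tr], [sa, ca, tc]]

m = vector_angle_to_rigid(10.0, 20.0, 0.0, 30.0, 40.0, math.pi / 2)
# By construction, the original point maps exactly onto the transformed point.
row = m[0][0] * 10.0 + m[0][1] * 20.0 + m[0][2]
col = m[1][0] * 10.0 + m[1][1] * 20.0 + m[1][2]
print(row, col)
```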
Parameter
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Row coordinate of the original point.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Column coordinate of the original point.
. Angle1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Angle of the original point.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Row coordinate of the transformed point.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Column coordinate of the transformed point.
. Angle2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Angle of the transformed point.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Parallelization Information
vector_angle_to_rigid is reentrant and processed without parallelization.
Possible Predecessors
best_match_rot_mg, best_match_rot
Possible Successors
hom_mat2d_invert, affine_trans_image, affine_trans_region,
affine_trans_contour_xld, affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_rigid
See also
vector_field_to_hom_mat2d
Module
Foundation
The point correspondences are passed in the tuples (Px,Py) and (Qx,Qy), where corresponding points must be at
the same index positions in the tuples. If more than three point correspondences are passed, the transformation
is overdetermined. In this case, the returned transformation is the transformation that minimizes the distances
between the input points (Px,Py) and the transformed points (Qx,Qy), as described in the following equation
(points as homogeneous vectors):
$$
\sum_i \left\| \begin{pmatrix} \mathit{Qx}[i] \\ \mathit{Qy}[i] \\ 1 \end{pmatrix} - \mathit{HomMat2D} \cdot \begin{pmatrix} \mathit{Px}[i] \\ \mathit{Py}[i] \\ 1 \end{pmatrix} \right\|^2 = \text{minimum}
$$
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
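The minimized quantity can be written down directly in plain Python (not HDevelop); the matrix H below is a hypothetical pure translation:

```python
def residual(hom_mat2d, px, py, qx, qy):
    # Sum of squared distances between HomMat2D * (Px, Py, 1) and (Qx, Qy, 1),
    # i.e. the quantity the operator minimizes over all point pairs.
    total = 0.0
    for x, y, u, v in zip(px, py, qx, qy):
        tx = hom_mat2d[0][0] * x + hom_mat2d[0][1] * y + hom_mat2d[0][2]
        ty = hom_mat2d[1][0] * x + hom_mat2d[1][1] * y + hom_mat2d[1][2]
        total += (u - tx) ** 2 + (v - ty) ** 2
    return total

# For an exact correspondence the residual is zero: Q equals P shifted by (1, 2).
H = [[1.0, 0.0, 1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]]
print(residual(H, [0.0, 5.0], [0.0, 5.0], [1.0, 6.0], [2.0, 7.0]))  # 0.0
```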
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
If fewer than 4 pairs of points (Px,Py), (Qx,Qy) are given, no unique solution exists; if exactly 4 pairs are
supplied, the matrix HomMat2D transforms them exactly as desired; and if more than 4 point pairs are given,
vector_to_proj_hom_mat2d seeks to minimize the transformation error. To achieve such a minimization,
several different algorithms are available; the algorithm to use can be selected with the parameter Method.
Method=’dlt’ uses a fast and simple, but also rather inaccurate error estimation algorithm, while
Method=’normalized_dlt’ offers a good compromise between speed and accuracy. Finally,
Method=’gold_standard’ performs a mathematically optimal but slower optimization.
If ’gold_standard’ is used and the input points have been obtained from an operator like points_foerstner,
which provides a covariance matrix specifying the accuracy of each point, these covariances can be taken into
account by using the input parameters CovYY1, CovXX1, CovXY1 for the points in the first image and
CovYY2, CovXX2, CovXY2 for the points in the second image. The covariances are symmetric 2 × 2 matrices:
CovXX1/CovXX2 and CovYY1/CovYY2 are lists of the diagonal entries, while CovXY1/CovXY2 contain the
off-diagonal entry, which appears twice in a symmetric matrix. If a Method other than ’gold_standard’ is used or
the covariances are unknown, the covariance parameters can be left empty.
In contrast to hom_vector_to_proj_hom_mat2d, points at infinity cannot be used to
determine the transformation in vector_to_proj_hom_mat2d. If this is necessary,
hom_vector_to_proj_hom_mat2d must be used. If the correspondence between the points has not
been determined, proj_match_points_ransac should be used to determine the correspondence as well as
the transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 1 (row coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 1 (column coordinate).
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Input points in image 2 (row coordinate).
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Input points in image 2 (column coordinate).
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Estimation algorithm.
Default Value : ’normalized_dlt’
List of values : Method ∈ {’normalized_dlt’, ’gold_standard’, ’dlt’}
. CovXX1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Row coordinate variance of the points in image 1.
Default Value : []
. CovYY1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Column coordinate variance of the points in image 1.
Default Value : []
. CovXY1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Covariance of the points in image 1.
Default Value : []
. CovXX2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Row coordinate variance of the points in image 2.
Default Value : []
The point correspondences are passed in the tuples (Px, Py) and (Qx,Qy), where corresponding points must be
at the same index positions in the tuples. The transformation is always overdetermined. Therefore, the returned
transformation is the transformation that minimizes the distances between the original points (Px,Py) and the
transformed points (Qx,Qy), as described in the following equation (points as homogeneous vectors):
$$
\sum_i \left\| \begin{pmatrix} \mathit{Qx}[i] \\ \mathit{Qy}[i] \\ 1 \end{pmatrix} - \mathit{HomMat2D} \cdot \begin{pmatrix} \mathit{Px}[i] \\ \mathit{Py}[i] \\ 1 \end{pmatrix} \right\|^2 = \text{minimum}
$$
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
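For the rigid case, the least-squares problem has a well-known closed-form solution via the point centroids (plain Python, not HDevelop; this is the standard construction and not necessarily HALCON's internal algorithm):

```python
import math

def fit_rigid_2d(px, py, qx, qy):
    # Closed-form least-squares rigid fit: translate the centroids onto each
    # other and choose the rotation angle from sums of cross/dot products
    # of the centered points.
    n = len(px)
    mpx, mpy = sum(px) / n, sum(py) / n
    mqx, mqy = sum(qx) / n, sum(qy) / n
    s = sum((x - mpx) * (v - mqy) - (y - mpy) * (u - mqx)
            for x, y, u, v in zip(px, py, qx, qy))
    c = sum((x - mpx) * (u - mqx) + (y - mpy) * (v - mqy)
            for x, y, u, v in zip(px, py, qx, qy))
    phi = math.atan2(s, c)
    ca, sa = math.cos(phi), math.sin(phi)
    tx = mqx - (ca * mpx - sa * mpy)
    ty = mqy - (sa * mpx + ca * mpy)
    # 2x3 affine matrix of the fitted rigid transformation.
    return [[ca, -sa, tx], [sa, ca, ty]]

# Points rotated by 90 degrees about the origin and shifted by (1, 1).
m = fit_rigid_2d([0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                 [1.0, 1.0, 0.0], [1.0, 2.0, 1.0])
print(m)
```

The fit recovers the 90-degree rotation and the (1, 1) translation exactly, since the input contains no noise.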
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
X coordinates of the transformed points.
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Y coordinates of the transformed points.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Parallelization Information
vector_to_rigid is reentrant and processed without parallelization.
Possible Successors
affine_trans_image, affine_trans_region, affine_trans_contour_xld,
affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_hom_mat2d, vector_to_similarity
See also
vector_field_to_hom_mat2d
Module
Foundation
The point correspondences are passed in the tuples (Px, Py) and (Qx,Qy), where corresponding points must be
at the same index positions in the tuples. If more than two point correspondences are passed, the transformation
is overdetermined. In this case, the returned transformation is the transformation that minimizes the distances
between the original points (Px,Py) and the transformed points (Qx,Qy), as described in the following equation
(points as homogeneous vectors):
$$
\sum_i \left\| \begin{pmatrix} \mathit{Qx}[i] \\ \mathit{Qy}[i] \\ 1 \end{pmatrix} - \mathit{HomMat2D} \cdot \begin{pmatrix} \mathit{Px}[i] \\ \mathit{Py}[i] \\ 1 \end{pmatrix} \right\|^2 = \text{minimum}
$$
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
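For exactly two point pairs, a 2D similarity can be constructed in closed form using complex numbers (plain Python, not HDevelop; this two-pair trick is illustrative only, and the operator solves a least-squares problem for arbitrarily many pairs):

```python
def similarity_from_two_pairs(p1, p2, q1, q2):
    # A 2D similarity is w -> a*w + b over the complex numbers; two point
    # pairs determine a (rotation and scaling) and b (translation) exactly.
    zp1, zp2 = complex(*p1), complex(*p2)
    zq1, zq2 = complex(*q1), complex(*q2)
    a = (zq2 - zq1) / (zp2 - zp1)
    b = zq1 - a * zp1
    return a, b

a, b = similarity_from_two_pairs((0, 0), (1, 0), (1, 1), (1, 3))
# The scale factor is |a|; any further point follows the same mapping.
print(abs(a), a * complex(2, 0) + b)  # 2.0 (1+5j)
```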
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
X coordinates of the transformed points.
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Y coordinates of the transformed points.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Output transformation matrix.
Parallelization Information
vector_to_similarity is reentrant and processed without parallelization.
Possible Successors
affine_trans_image, affine_trans_region, affine_trans_contour_xld,
affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_hom_mat2d, vector_to_rigid
See also
vector_field_to_hom_mat2d
Module
Foundation
17.2 3D-Transformations
affine_trans_point_3d ( : : HomMat3D, Px, Py, Pz : Qx, Qy, Qz )
$$
\begin{pmatrix} \mathit{Qx} \\ \mathit{Qy} \\ \mathit{Qz} \\ 1 \end{pmatrix} = \mathit{HomMat3D} \cdot \begin{pmatrix} \mathit{Px} \\ \mathit{Py} \\ \mathit{Pz} \\ 1 \end{pmatrix}
$$
The transformation matrix can be created using the operators hom_mat3d_identity, hom_mat3d_scale,
hom_mat3d_rotate, hom_mat3d_translate, etc., or be the result of pose_to_hom_mat3d.
For example, if HomMat3D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:
$$
\begin{pmatrix} \mathit{Qx} \\ \mathit{Qy} \\ \mathit{Qz} \\ 1 \end{pmatrix}
= \begin{pmatrix} R & t \\ 0\ 0\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} \mathit{Px} \\ \mathit{Py} \\ \mathit{Pz} \\ 1 \end{pmatrix}
= \begin{pmatrix} R \cdot \begin{pmatrix} \mathit{Px} \\ \mathit{Py} \\ \mathit{Pz} \end{pmatrix} + t \\ 1 \end{pmatrix}
$$
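This transformation can be sketched in plain Python (not HDevelop); the 4 × 4 matrix H is a hypothetical rigid transformation:

```python
def affine_trans_point_3d(hom_mat3d, px, py, pz):
    # Apply a homogeneous 4x4 transformation to the point (px, py, pz, 1).
    p = (px, py, pz, 1.0)
    q = [sum(row[k] * p[k] for k in range(4)) for row in hom_mat3d]
    return q[0], q[1], q[2]

# Hypothetical rigid transform: rotate 90 degrees about z, then translate
# by t = (1, 2, 3).
H = [[0.0, -1.0, 0.0, 1.0],
     [1.0,  0.0, 0.0, 2.0],
     [0.0,  0.0, 1.0, 3.0],
     [0.0,  0.0, 0.0, 1.0]]
print(affine_trans_point_3d(H, 1.0, 0.0, 0.0))  # (1.0, 3.0, 3.0)
```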
Parameter
Result
convert_pose_type returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Parallelization Information
convert_pose_type is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration
Possible Successors
write_pose
See also
create_pose, get_pose_type, write_pose, read_pose
Module
Foundation
Create a 3D pose.
create_pose creates the 3D pose Pose. A pose describes a rigid 3D transformation, i.e., a transformation
consisting of an arbitrary translation and rotation, with 6 parameters: TransX, TransY, and TransZ specify the
translation along the x-, y-, and z-axis, respectively, while RotX, RotY, and RotZ describe the rotation.
3D poses are typically used in two ways: First, to describe the position and orientation of one coordinate system
relative to another (e.g., the pose of a part’s coordinate system relative to the camera coordinate system - in short:
the pose of the part relative to the camera) and secondly, to describe how coordinates can be transformed between
two coordinate systems (e.g., to transform points from part coordinates into camera coordinates).
Please note that you can “read” this chain in two ways: If you start from the right, the rotations are always
performed relative to the global (i.e., fixed or “old”) coordinate system. Thus, Rgba can be read as follows: First
rotate around the z-axis, then around the “old” y-axis, and finally around the “old” x-axis. In contrast, if you read
from the left to the right, the rotations are performed relative to the local (i.e., “new”) coordinate system. Then,
Rgba corresponds to the following: First rotate around the x-axis, then around the “new” y-axis, and finally around
the “new(est)” z-axis.
Reading Rgba from right to left corresponds to the following sequence of operator calls:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, RotZ, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, RotY, ’y’, 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, RotX, ’x’, 0, 0, 0, HomMat3DXYZ)
In contrast, reading from left to right corresponds to the following operator sequence:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate_local (HomMat3DIdent, RotX, ’x’, 0, 0, 0,
HomMat3DRotX)
hom_mat3d_rotate_local (HomMat3DRotX, RotY, ’y’, 0, 0, 0,
HomMat3DRotXY)
hom_mat3d_rotate_local (HomMat3DRotXY, RotZ, ’z’, 0, 0, 0, HomMat3DXYZ)
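That both operator sequences yield the same matrix can be checked numerically in plain Python (not HDevelop); rot and matmul are helpers defined only for this sketch:

```python
import math

def rot(axis, a):
    # 3x3 rotation matrix about a coordinate axis.
    c, s = math.cos(a), math.sin(a)
    if axis == 'x':
        return [[1, 0, 0], [0, c, -s], [0, s, c]]
    if axis == 'y':
        return [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

rx, ry, rz = rot('x', 0.1), rot('y', 0.2), rot('z', 0.3)
# Right-to-left reading (global axes): z first, then y, then x.
global_order = matmul(rx, matmul(ry, rz))
# Left-to-right reading (local axes): x first, then y, then z.
local_order = matmul(matmul(rx, ry), rz)
# Both are the product Rx * Ry * Rz; they differ only by rounding.
print(global_order)
```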
When passing ’abg’ in OrderOfRotation, the rotation corresponds to the following chain:

$$R_{abg} = R_z(\mathit{RotZ}) \cdot R_y(\mathit{RotY}) \cdot R_x(\mathit{RotX})$$
If you pass ’rodriguez’ in OrderOfRotation, the rotation parameters RotX, RotY, and RotZ are interpreted
as the x-, y-, and z-component of the so-called Rodriguez rotation vector. The direction of the vector defines the
(arbitrary) axis of rotation. The length of the vector usually defines the rotation angle with positive orientation.
Here, a variation of the Rodriguez vector is used, where the length of the vector defines the tangent of half the
rotation angle:
\[
R_{rodriguez} = \text{rotation around } \begin{pmatrix} \mathtt{RotX} \\ \mathtt{RotY} \\ \mathtt{RotZ} \end{pmatrix} \text{ by } 2 \cdot \arctan\!\Big(\sqrt{\mathtt{RotX}^2 + \mathtt{RotY}^2 + \mathtt{RotZ}^2}\Big)
\]
\[
H_{pose} = \begin{pmatrix} R & t \\ 0\,0\,0 & 1 \end{pmatrix}
= \begin{pmatrix} R(\mathtt{RotX},\mathtt{RotY},\mathtt{RotZ}) & \begin{smallmatrix}\mathtt{TransX}\\ \mathtt{TransY}\\ \mathtt{TransZ}\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix}
= \begin{pmatrix} I & \begin{smallmatrix}\mathtt{TransX}\\ \mathtt{TransY}\\ \mathtt{TransZ}\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R(\mathtt{RotX},\mathtt{RotY},\mathtt{RotZ}) & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
= H(t) \cdot H(R)
\]
Transformation of coordinates
The following equation describes how a point can be transformed from coordinate system 1 into coordinate system
2 with a pose, or more exactly, with the corresponding homogeneous transformation matrix 2 H1 (input and output
points as homogeneous vectors, see also affine_trans_point_3d). Note that to transform points from
coordinate system 1 into system 2, you use the transformation matrix that describes the pose of coordinate system
1 relative to system 2.
\[
\begin{pmatrix} p^{2} \\ 1 \end{pmatrix} = {}^{2}H_{1} \cdot \begin{pmatrix} p^{1} \\ 1 \end{pmatrix}
= \begin{pmatrix} R(\mathtt{RotX},\mathtt{RotY},\mathtt{RotZ}) \cdot p^{1} + \begin{smallmatrix}\mathtt{TransX}\\ \mathtt{TransY}\\ \mathtt{TransZ}\end{smallmatrix} \\ 1 \end{pmatrix}
\]
HALCON 8.0.2
1046 CHAPTER 17. TOOLS
If you select ’R(p-T)’ for OrderOfTransform, the (negated) translation is applied to the point before the rotation, i.e., the transformation matrix is composed in the reverse order:
\[
H_{R(p-T)} = \begin{pmatrix} R(\mathtt{RotX},\mathtt{RotY},\mathtt{RotZ}) & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} I & \begin{smallmatrix}-\mathtt{TransX}\\ -\mathtt{TransY}\\ -\mathtt{TransZ}\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix}
= H(R) \cdot H(-t)
\]
If you select ’coordinate_system’ for ViewOfTransform, the sequence of transformations remains constant,
but the rotation angles are negated. Please note that, contrary to its name, this is not equivalent to transforming a
coordinate system!
\[
H_{coordinate\_system} = \begin{pmatrix} I & \begin{smallmatrix}\mathtt{TransX}\\ \mathtt{TransY}\\ \mathtt{TransZ}\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R(-\mathtt{RotX},-\mathtt{RotY},-\mathtt{RotZ}) & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
= H(t) \cdot H(R(-\mathtt{RotX},-\mathtt{RotY},-\mathtt{RotZ}))
\]
You can convert poses into other representation types using convert_pose_type and query the type using
get_pose_type.
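To transform points with a pose, the pose is typically converted into a homogeneous transformation matrix first. A minimal sketch (the pose values are arbitrary examples):
create_pose (0.1, 0.2, 0.3, 90, 0, 90, ’Rp+T’, ’gba’, ’point’, Pose)
pose_to_hom_mat3d (Pose, HomMat3D)
affine_trans_point_3d (HomMat3D, X1, Y1, Z1, X2, Y2, Z2)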
Parameter
. TransX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Translation along the x-axis (in [m]).
Default Value : 0.1
Suggested values : TransX ∈ {-1.0, -0.75, -0.5, -0.25, -0.2, -0.125, -0.1, -0.01, 0, 0.01, 0.1, 0.125, 0.2, 0.25, 0.5, 0.75, 1.0}
. TransY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Translation along the y-axis (in [m]).
Default Value : 0.1
Suggested values : TransY ∈ {-1.0, -0.75, -0.5, -0.25, -0.2, -0.125, -0.1, -0.01, 0, 0.01, 0.1, 0.125, 0.2, 0.25, 0.5, 0.75, 1.0}
. TransZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Translation along the z-axis (in [m]).
Default Value : 0.1
Suggested values : TransZ ∈ {-1.0, -0.75, -0.5, -0.25, -0.2, -0.125, -0.1, -0.01, 0, 0.01, 0.1, 0.125, 0.2, 0.25, 0.5, 0.75, 1.0}
. RotX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Rotation around x-axis or x component of the Rodriguez vector (in [◦ ] or without unit).
Default Value : 90
Suggested values : RotX ∈ {90, 180, 270}
Typical range of values : 0 ≤ RotX ≤ 360
. RotY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Rotation around y-axis or y component of the Rodriguez vector (in [◦ ] or without unit).
Default Value : 90
Suggested values : RotY ∈ {90, 180, 270}
Typical range of values : 0 ≤ RotY ≤ 360
. RotZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Rotation around z-axis or z component of the Rodriguez vector (in [◦ ] or without unit).
Default Value : 90
Suggested values : RotZ ∈ {90, 180, 270}
Typical range of values : 0 ≤ RotZ ≤ 360
. OrderOfTransform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Order of rotation and translation.
Default Value : ’Rp+T’
Suggested values : OrderOfTransform ∈ {’Rp+T’, ’R(p-T)’}
. OrderOfRotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Meaning of the rotation values.
Default Value : ’gba’
Suggested values : OrderOfRotation ∈ {’gba’, ’abg’, ’rodriguez’}
. ViewOfTransform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
View of transformation.
Default Value : ’point’
Suggested values : ViewOfTransform ∈ {’point’, ’coordinate_system’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
3D pose.
Number of elements : 7
Example
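A minimal sketch with arbitrary example values (a pose translated by 0.1 m along each axis and rotated by 90° around the z-axis, then converted to the ’abg’ rotation order):
create_pose (0.1, 0.1, 0.1, 0, 0, 90, ’Rp+T’, ’gba’, ’point’, Pose)
convert_pose_type (Pose, ’Rp+T’, ’abg’, ’point’, PoseABG)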
Result
create_pose returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
create_pose is reentrant and processed without parallelization.
Possible Successors
pose_to_hom_mat3d, write_pose, camera_calibration, hand_eye_calibration
Alternatives
read_pose, hom_mat3d_to_pose
See also
hom_mat3d_rotate, hom_mat3d_translate, convert_pose_type, get_pose_type,
hom_mat3d_to_pose, pose_to_hom_mat3d, write_pose, read_pose
Module
Foundation
hom_mat3d_compose ( : : HomMat3DLeft,
HomMat3DRight : HomMat3DCompose )
hom_mat3d_compose computes the matrix product HomMat3DCompose = HomMat3DLeft · HomMat3DRight. For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a rotation and a translation, the resulting matrix is calculated as follows:
\[
\mathtt{HomMat3DCompose} = \begin{pmatrix} R_l & t_l \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R_r & t_r \\ 0\;0\;0 & 1 \end{pmatrix} = \begin{pmatrix} R_l \cdot R_r & R_l \cdot t_r + t_l \\ 0\;0\;0 & 1 \end{pmatrix}
\]
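The composition can be sketched in HDevelop as follows; here a translation is composed with a rotation around the z-axis (all numeric values are example assumptions):
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_translate (HomMat3DIdent, 0.1, 0.2, 0.3, HomMat3DT)
hom_mat3d_rotate (HomMat3DIdent, 0.78, ’z’, 0, 0, 0, HomMat3DR)
hom_mat3d_compose (HomMat3DT, HomMat3DR, HomMat3DCompose)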
Parameter
. HomMat3DLeft (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
Left input transformation matrix.
. HomMat3DRight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
Right input transformation matrix.
. HomMat3DCompose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat3d_compose returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
hom_mat3d_compose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_compose, hom_mat3d_translate, hom_mat3d_translate_local,
hom_mat3d_scale, hom_mat3d_scale_local, hom_mat3d_rotate,
hom_mat3d_rotate_local, pose_to_hom_mat3d
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
See also
affine_trans_point_3d, hom_mat3d_identity, hom_mat3d_rotate,
hom_mat3d_translate, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation
hom_mat3d_identity ( : : : HomMat3DIdentity )
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, HomMat3DIdentity is stored as the
tuple [1,0,0,0,0,1,0,0,0,0,1,0].
Parameter
. HomMat3DIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
Transformation matrix.
Result
hom_mat3d_identity always returns 2 (H_MSG_TRUE).
Parallelization Information
hom_mat3d_identity is reentrant and processed without parallelization.
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Alternatives
pose_to_hom_mat3d
Module
Foundation
hom_mat3d_rotate adds a rotation by the angle Phi around the axis passed in the parameter Axis to the
homogeneous 3D transformation matrix HomMat3D and returns the resulting matrix in HomMat3DRotate. The
axis can be specified by passing the strings ’x’, ’y’, or ’z’, or by passing a vector [x,y,z] as a tuple.
The rotation is described by a 3×3 rotation matrix R. It is performed relative to the global (i.e., fixed) coordinate
system; this corresponds to the following chain of transformation matrices:
Axis = ’x’:
\[
\mathtt{HomMat3DRotate} = \begin{pmatrix} R_x & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \mathtt{HomMat3D}
\qquad
R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathtt{Phi}) & -\sin(\mathtt{Phi}) \\ 0 & \sin(\mathtt{Phi}) & \cos(\mathtt{Phi}) \end{pmatrix}
\]
Axis = ’y’:
\[
\mathtt{HomMat3DRotate} = \begin{pmatrix} R_y & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \mathtt{HomMat3D}
\qquad
R_y = \begin{pmatrix} \cos(\mathtt{Phi}) & 0 & \sin(\mathtt{Phi}) \\ 0 & 1 & 0 \\ -\sin(\mathtt{Phi}) & 0 & \cos(\mathtt{Phi}) \end{pmatrix}
\]
Axis = ’z’:
\[
\mathtt{HomMat3DRotate} = \begin{pmatrix} R_z & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \mathtt{HomMat3D}
\qquad
R_z = \begin{pmatrix} \cos(\mathtt{Phi}) & -\sin(\mathtt{Phi}) & 0 \\ \sin(\mathtt{Phi}) & \cos(\mathtt{Phi}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
Axis = [x,y,z]:
\[
\mathtt{HomMat3DRotate} = \begin{pmatrix} R_a & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \mathtt{HomMat3D}
\qquad
R_a = u \cdot u^T + \cos(\mathtt{Phi}) \cdot (I - u \cdot u^T) + \sin(\mathtt{Phi}) \cdot S
\]
\[
u = \frac{\mathtt{Axis}}{\|\mathtt{Axis}\|} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
\qquad
I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\qquad
S = \begin{pmatrix} 0 & -z' & y' \\ z' & 0 & -x' \\ -y' & x' & 0 \end{pmatrix}
\]
The point (Px,Py,Pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat3DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathtt{HomMat3DRotate} = \begin{pmatrix} I & \begin{smallmatrix}+Px\\ +Py\\ +Pz\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} I & \begin{smallmatrix}-Px\\ -Py\\ -Pz\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \mathtt{HomMat3D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_rotate_local.
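The fixed-point behavior described above can be sketched as follows (example values: a rotation by π/2 around the z-axis that leaves the point (0.1,0.2,0.3) unchanged):
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, 1.5708, ’z’, 0.1, 0.2, 0.3, HomMat3DRotate)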
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
hom_mat3d_rotate_local adds a rotation by the angle Phi around the axis passed in the parameter Axis to the homogeneous 3D transformation matrix HomMat3D. In contrast to hom_mat3d_rotate, the rotation is performed relative to the local coordinate system, i.e., the one described by HomMat3D; this corresponds to the following chain of transformation matrices:
Axis = ’x’:
\[
\mathtt{HomMat3DRotate} = \mathtt{HomMat3D} \cdot \begin{pmatrix} R_x & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
\qquad
R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathtt{Phi}) & -\sin(\mathtt{Phi}) \\ 0 & \sin(\mathtt{Phi}) & \cos(\mathtt{Phi}) \end{pmatrix}
\]
Axis = ’y’:
\[
\mathtt{HomMat3DRotate} = \mathtt{HomMat3D} \cdot \begin{pmatrix} R_y & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
\qquad
R_y = \begin{pmatrix} \cos(\mathtt{Phi}) & 0 & \sin(\mathtt{Phi}) \\ 0 & 1 & 0 \\ -\sin(\mathtt{Phi}) & 0 & \cos(\mathtt{Phi}) \end{pmatrix}
\]
Axis = ’z’:
\[
\mathtt{HomMat3DRotate} = \mathtt{HomMat3D} \cdot \begin{pmatrix} R_z & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
\qquad
R_z = \begin{pmatrix} \cos(\mathtt{Phi}) & -\sin(\mathtt{Phi}) & 0 \\ \sin(\mathtt{Phi}) & \cos(\mathtt{Phi}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
Axis = [x,y,z]:
\[
\mathtt{HomMat3DRotate} = \mathtt{HomMat3D} \cdot \begin{pmatrix} R_a & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
\qquad
R_a = u \cdot u^T + \cos(\mathtt{Phi}) \cdot (I - u \cdot u^T) + \sin(\mathtt{Phi}) \cdot S
\]
\[
u = \frac{\mathtt{Axis}}{\|\mathtt{Axis}\|} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
\qquad
I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\qquad
S = \begin{pmatrix} 0 & -z' & y' \\ z' & 0 & -x' \\ -y' & x' & 0 \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat3DRotate.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Axis to rotate around.
Default Value : ’x’
Suggested values : Axis ∈ {’x’, ’y’, ’z’}
. HomMat3DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat3d_rotate_local returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
hom_mat3d_rotate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate_local, hom_mat3d_scale_local,
hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate_local, hom_mat3d_scale_local, hom_mat3d_rotate_local
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_rotate, pose_to_hom_mat3d,
hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation
The point (Px,Py,Pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat3DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathtt{HomMat3DScale} = \begin{pmatrix} I & \begin{smallmatrix}+Px\\ +Py\\ +Pz\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} S & 0 \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} I & \begin{smallmatrix}-Px\\ -Py\\ -Pz\end{smallmatrix} \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \mathtt{HomMat3D}
\qquad
S = \begin{pmatrix} \mathtt{Sx} & 0 & 0 \\ 0 & \mathtt{Sy} & 0 \\ 0 & 0 & \mathtt{Sz} \end{pmatrix}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_scale_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
The scaling is performed relative to the local coordinate system, i.e., the coordinate system described by HomMat3D; this corresponds to the following chain of transformation matrices:
\[
\mathtt{HomMat3DScale} = \mathtt{HomMat3D} \cdot \begin{pmatrix} S & 0 \\ 0\;0\;0 & 1 \end{pmatrix}
\qquad
S = \begin{pmatrix} \mathtt{Sx} & 0 & 0 \\ 0 & \mathtt{Sy} & 0 \\ 0 & 0 & \mathtt{Sz} \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat3DScale.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Result
hom_mat3d_to_pose returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
hom_mat3d_to_pose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_rotate, hom_mat3d_translate, hom_mat3d_invert
Possible Successors
camera_calibration, write_pose, disp_caltab, sim_caltab
See also
create_pose, camera_calibration, disp_caltab, sim_caltab, write_pose, read_pose,
pose_to_hom_mat3d, project_3d_point, get_line_of_sight, hom_mat3d_rotate,
hom_mat3d_translate, hom_mat3d_invert, affine_trans_point_3d
Module
Foundation
To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_translate_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Result
pose_to_hom_mat3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
pose_to_hom_mat3d is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, read_pose
Possible Successors
affine_trans_point_3d, hom_mat3d_invert, hom_mat3d_translate,
hom_mat3d_rotate, hom_mat3d_to_pose
See also
create_pose, camera_calibration, write_pose, read_pose, hom_mat3d_to_pose,
project_3d_point, get_line_of_sight, hom_mat3d_rotate, hom_mat3d_translate,
hom_mat3d_invert, affine_trans_point_3d
Module
Foundation
Parameter
\[
\mathtt{PoseNewOrigin} = \mathtt{PoseIn} \cdot \begin{pmatrix} 1 & 0 & 0 & DX \\ 0 & 1 & 0 & DY \\ 0 & 0 & 1 & DZ \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]
A typical application of this operator is the definition of a world coordinate system by placing the standard
calibration plate on the plane of measurements. In this case, the external camera parameters returned by
camera_calibration correspond to a coordinate system that lies above the measurement plane, because
the coordinate system of the calibration plate is located on its surface and the plate has a certain thickness. To
correct the pose, call set_origin_pose with the translation vector (0,0,D), where D is the thickness of the
calibration plate.
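For example, assuming a (hypothetical) plate thickness of 0.63 mm, the correction reads:
/* move the origin by D = 0.00063 m along the z-axis */
set_origin_pose (PoseIn, 0, 0, 0.00063, PoseNewOrigin)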
Parameter
A pose describes a rigid 3D transformation, i.e., a transformation consisting of an arbitrary translation and rotation,
with six parameters: three for the translation and three for the rotation. A seventh parameter indicates the pose type
(see create_pose).
A file generated by write_pose looks like the following:
Parameter
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .pose-array ; real / integer
3D pose.
Number of elements : 7
. PoseFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the exterior camera parameters.
Default Value : ’campose.dat’
List of values : PoseFile ∈ {’campose.dat’, ’campose.initial’, ’campose.final’}
Example
Result
write_pose returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been written successfully. If necessary, an exception is raised.
Parallelization Information
write_pose is local and processed completely exclusively without parallelization.
Possible Predecessors
camera_calibration, hom_mat3d_to_pose
See also
create_pose, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_pose, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation
17.3 Background-Estimator
close_all_bg_esti ( : : : )
close_bg_esti ( : : BgEstiHandle : )
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
Result
close_bg_esti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
close_bg_esti is local and processed completely exclusively without parallelization.
Possible Predecessors
run_bg_esti
See also
create_bg_esti
Module
Foundation
AdaptMode denotes whether the foreground/background decision threshold applied to the grayvalue difference
between the estimated and the actual value is fixed or whether it adapts itself depending on the grayvalue deviation
of the background pixels.
If AdaptMode is set to ’off’, the parameter MinDiff denotes a fixed threshold. The parameters StatNum,
ConfidenceC and TimeC are meaningless in this case.
If AdaptMode is set to ’on’, then MinDiff is interpreted as a base threshold. For each pixel an offset is added
to this threshold depending on the statistical evaluation of the pixel value over time. StatNum holds the number
of data sets (past frames) that are used for computing the grayvalue variance (FIR-Filter). ConfidenceC is used
to determine the confidence interval.
The confidence interval determines the values of the background statistics if background pixels are hidden by
a foreground object and thus are detected as foreground. According to Student's t-distribution, the confidence
constant is 4.30 (3.25, 2.82, 2.26) for a confidence interval of 99.8% (99.0%, 98.0%, 95.0%). TimeC holds a
time constant for the exponential function that raises the threshold in case of a foreground estimation of the pixel. That
means the threshold is raised in regions where movement is detected in the foreground. That way, larger changes in
illumination are tolerated if the background becomes visible again. The main reason for increasing this tolerance is
that illumination changes cannot be predicted while the background is hidden; therefore, no adaptation of the
estimated background image is possible during that time.
Attention
If GainMode was set to ’frame’, the run-time can be extremely long for large values of Gain1 or Gain2, because
the values for the gains’ table are determined by a simple binary search.
Parameter
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize 1. BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7.0,10,3.25,15.0,BgEstiHandle1)
/* initialize 2. BgEsti-Dataset with
frame orientated gains and fixed threshold */
create_bg_esti(InitImage,0.7,0.7,’frame’,30.0,4.0,
’off’,9.0,10,3.25,15.0,BgEstiHandle2)
Result
create_bg_esti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
create_bg_esti is local and processed completely exclusively without parallelization.
Possible Successors
run_bg_esti
See also
set_bg_esti_params, close_bg_esti
Module
Foundation
Parameter
/* read Init-Image:*/
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7.0,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
/* change only the gain parameter in dataset: */
get_bg_esti_params(BgEstiHandle,par1,par2,par3,par4,
par5,par6,par7,par8,par9,par10)
set_bg_esti_params(BgEstiHandle,par1,par2,par3,0.004,
0.08,par6,par7,par8,par9,par10)
/* read the next image in sequence: */
read_image(Image3,’Image_3’)
/* estimate the Background: */
run_bg_esti(Image3,Region3,BgEstiHandle)
/* display the foreground region: */
disp_region(Region3,WindowHandle)
/* etc. */
Result
get_bg_esti_params returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
get_bg_esti_params is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti
Possible Successors
run_bg_esti
See also
set_bg_esti_params
Module
Foundation
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* get the background image from the active dataset: */
give_bg_esti(BgImage,BgEstiHandle)
/* display the background image: */
disp_image(BgImage,WindowHandle)
Result
give_bg_esti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
give_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
run_bg_esti
Possible Successors
run_bg_esti, create_bg_esti, update_bg_esti
See also
run_bg_esti, update_bg_esti, create_bg_esti
Module
Foundation
The background estimation processes only single-channel images. Therefore, the background has to be adapted
separately for every channel.
The background estimation should be used on half- or even quarter-sized images. For this, the input images (and
the initialization image!) have to be reduced using zoom_image_factor. The advantage is a shorter run-time
on the one hand and a low-pass filtering on the other. The filtering eliminates high-frequency noise and results in a
more reliable estimation. As a result, the threshold (see create_bg_esti) can be lowered. The foreground
region returned by run_bg_esti then has to be enlarged again for further processing.
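The procedure described above might be sketched as follows (the zoom factor 0.5 and the remaining parameter values are example assumptions):
/* reduce the initialization image and the input images to half size */
zoom_image_factor (InitImage, InitImageSmall, 0.5, 0.5, ’constant’)
create_bg_esti (InitImageSmall, 0.7, 0.7, ’fixed’, 0.002, 0.02, ’on’, 7, 10, 3.25, 15.0, BgEstiHandle)
zoom_image_factor (Image1, Image1Small, 0.5, 0.5, ’constant’)
run_bg_esti (Image1Small, RegionSmall, BgEstiHandle)
/* enlarge the foreground region again for further processing */
zoom_region (RegionSmall, Region1, 2.0, 2.0)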
Attention
The passed image (PresentImage) must have the same type and size as the background image of the current
data set (initialized with create_bg_esti).
Parameter
. PresentImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / real
Current image.
. ForegroundRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region of the detected foreground.
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; integer
ID of the BgEsti data set.
Example
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
Result
run_bg_esti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
run_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti, update_bg_esti
Possible Successors
run_bg_esti, give_bg_esti, update_bg_esti
See also
set_bg_esti_params, create_bg_esti, update_bg_esti, give_bg_esti
Module
Foundation
/* read Init-Image:*/
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7.0,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
/* change parameter in dataset: */
set_bg_esti_params(BgEstiHandle,0.7,0.7,’fixed’,
0.004,0.08,’on’,9.0,10,3.25,20.0)
/* read the next image in sequence: */
read_image(Image3,’Image_3’)
/* estimate the Background: */
run_bg_esti(Image3,Region3,BgEstiHandle)
/* display the foreground region: */
disp_region(Region3,WindowHandle)
/* etc. */
Result
set_bg_esti_params returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
set_bg_esti_params is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti
Possible Successors
run_bg_esti
See also
update_bg_esti
Module
Foundation
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* use the Region and the information of a knowledge base */
/* to calculate the UpDateRegion */
update_bg_esti(Image1,UpdateRegion,BgEstiHandle)
/* then read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* etc. */
Result
update_bg_esti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
update_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
run_bg_esti
Possible Successors
run_bg_esti
See also
run_bg_esti, give_bg_esti
Module
Foundation
17.4 Barcode
clear_all_bar_code_models ( : : : )
Delete all bar code models and free the allocated memory
The operator clear_all_bar_code_models deletes all bar code models that were created by
create_bar_code_model. All memory used by the models is freed. After the operator call, all bar code
handles are invalid.
Attention
clear_all_bar_code_models exists solely for the purpose of implementing the “reset program” function-
ality in HDevelop. clear_all_bar_code_models must not be used in any application.
Result
The operator clear_all_bar_code_models returns the value 2 (H_MSG_TRUE) if all bar code models
were freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_all_bar_code_models is processed completely exclusively without parallelization.
Alternatives
clear_bar_code_model
See also
create_bar_code_model, find_bar_code
Module
Bar Code
clear_bar_code_model ( : : BarCodeHandle : )
Module
Bar Code
create_bar_code_model ( : : GenParamNames,
GenParamValues : BarCodeHandle )
The output value DecodedDataStrings contains the decoded string of the symbol for each bar code
result. The contents of the strings conform to the appropriate standard of the symbology. Typically,
DecodedDataStrings contains only data characters. For bar codes with a mandatory check character, the
check character is not included in the string. For bar codes with an optional check character, like, for example,
Code 39, Codabar, 2/5 Industrial, or 2/5 Interleaved, the result depends on the value of the ’check_char’ parameter,
which can be set in create_bar_code_model or set_bar_code_param. By default, ’check_char’
is ’absent’ and the check character is interpreted as a normal data character and hence included in the decoded
string. When ’check_char’ is set to ’present’, the correctness of the check character is tested first. If the check
character is correct, the decoded string contains just the data characters; if the check character is not correct, the bar
code is graded as unreadable. Accordingly, the symbol region and the decoded string do not appear in the list of
resulting strings (DecodedDataStrings) and in the list of resulting regions (SymbolRegions).
The underlying decoded reference data, including start/stop and check characters, can be queried by using the
get_bar_code_result operator with the option ’decoded_reference’.
The following bar code symbologies are supported: 2/5 Industrial, 2/5 Interleaved, Codabar, Code 39, Code 93, Code
128, EAN-8, EAN-8 Add-On 2, EAN-8 Add-On 5, EAN-13, EAN-13 Add-On 2, EAN-13 Add-On 5, UPC-A,
UPC-A Add-On 2, UPC-A Add-On 5, UPC-E, UPC-E Add-On 2, UPC-E Add-On 5, PharmaCode, RSS-14, RSS-
14 Truncated, RSS-14 Stacked, RSS-14 Stacked Omnidirectional, RSS Limited, RSS Expanded, RSS Expanded
Stacked.
Note that PharmaCode can be read in forward and backward direction, both yielding a valid result. Therefore,
both strings are returned, concatenated into a single string in DecodedDataStrings and separated by a comma.
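A typical reading sequence using the ’check_char’ behavior described above might look like this (the code type and parameter values are example assumptions):
create_bar_code_model ([], [], BarCodeHandle)
set_bar_code_param (BarCodeHandle, ’check_char’, ’present’)
find_bar_code (Image, SymbolRegions, BarCodeHandle, ’Code 39’, DecodedDataStrings)
get_bar_code_result (BarCodeHandle, ’all’, ’decoded_reference’, DecodedReference)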
Parameter
Access iconic objects that were created during the search or decoding of bar code symbols.
With the operator get_bar_code_object, iconic objects created during the last call of the operator
find_bar_code can be accessed. Besides the name of the object (ObjectName), the bar code model
(BarCodeHandle) must be passed to get_bar_code_object. In addition, in CandidateHandle an in-
dex to a single decoded symbol or a single symbol candidate must be passed. Alternatively, CandidateHandle
can be set to ’all’ and then all objects of the decoded symbols or symbol candidates are returned.
Setting ObjectName to ’symbol_regions’ will return regions of successfully decoded symbols. When choosing
’all’ as CandidateHandle, the regions of all decoded symbols are retrieved. The order of the returned objects
is the same as in find_bar_code. If there is a total of n decoded symbols, CandidateHandle can be chosen
between 0 and (n-1) to get the region of the respective decoded symbol.
Setting ObjectName to ’candidate_regions’ will return regions of potential bar codes. If there is a total of n
decoded symbols out of a total of m candidates then CandidateHandle can be chosen between 0 and (m-1).
With CandidateHandle between 0 and (n-1) the original segmented region of the respective decoded symbol
is retrieved. With CandidateHandle between n and (m-1) the region of the potential or undecodable symbol
is returned. In addition, CandidateHandle can be set to ’all’ to retrieve all candidate regions at the same time.
Setting ObjectName to ’scanlines_all’ or ’scanlines_valid’ will return XLD contours representing the particular
detected bars in the scanlines applied to the candidate regions. ’scanlines_all’ represents all scanlines that
find_bar_code would place in order to decode a bar code. ’scanlines_valid’ represents only those scanlines
that could be successfully decoded. For single-row bar codes, there will be at least one ’scanlines_valid’ if the
symbol was successfully decoded and none if it was not. For stacked bar codes (e.g., ’RSS-14 Stacked’ and
’RSS Expanded Stacked’) this rule applies similarly, but on a per-symbol-row basis rather than per symbol. Note
that get_bar_code_object returns all XLD contours merged into a single array of XLDs, so there is no way
to identify the contours corresponding to separate scanlines. Furthermore, if ’all’ is used as CandidateHandle,
the output object will contain the XLD contours of all symbols, and in this case it is also not possible to identify
the contours corresponding to separate symbols. However, the contours can still be used for visualization purposes.
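The indexing scheme described above can be summarized by a small sketch (a hypothetical Python helper for illustration only; HALCON performs this bookkeeping internally): given n decoded symbols among m candidates, indices 0 to n-1 address decoded symbols and n to m-1 address undecodable candidates.

```python
def classify_candidate_handle(handle, num_decoded, num_candidates):
    """Classify a CandidateHandle value as accepted by get_bar_code_object.

    Indices 0..n-1 refer to decoded symbols, n..m-1 to potential but
    undecodable symbols; the string 'all' selects every candidate at once.
    """
    if handle == "all":
        return "all_candidates"
    if not 0 <= handle < num_candidates:
        raise ValueError("CandidateHandle must lie between 0 and m-1")
    return "decoded" if handle < num_decoded else "undecodable"
```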
Parameter
. BarCodeObjects (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Objects that are created as intermediate results during the detection or evaluation of bar codes.
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; integer
Handle of the bar code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; string / integer
Index of the bar code result or candidate, respectively, for which the data is required.
Default Value : ’all’
Suggested values : CandidateHandle ∈ {0, 1, 2, ’all’}
. ObjectName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the iconic object to return.
Default Value : ’symbol_regions’
List of values : ObjectName ∈ {’symbol_regions’, ’candidate_regions’, ’scanlines_all’, ’scanlines_valid’}
Result
The operator get_bar_code_object returns the value 2 (H_MSG_TRUE) if the given parameters are correct
and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_bar_code_object is reentrant and processed without parallelization.
Possible Predecessors
find_bar_code
See also
get_bar_code_result
Module
Bar Code
get_bar_code_param ( : : BarCodeHandle,
GenParamNames : GenParamValues )
Get one or several parameters that describe the bar code model.
HALCON 8.0.2
1078 CHAPTER 17. TOOLS
The operator get_bar_code_param allows you to query parameters of a bar code model that are relevant for
a successful search and decoding of a respective class of bar codes.
The names of the desired parameters are passed in the generic parameter GenParamNames and the corresponding
values are returned in GenParamValues. All of these parameters can be set and changed at any time with the
operator set_bar_code_param.
The following parameters can be queried – ordered by different categories:
Size of the bar code elements:
’meas_thresh’: Threshold for the detection of edges in the bar code region.
’max_diff_orient’: Maximal difference in the orientation of edges in a bar code region. The difference in oriented
angles, given in degrees, refers to neighboring pixels.
Further details on the above parameters can be found in the description of the operator set_bar_code_param.
Parameter
Possible Successors
set_bar_code_param
Module
Bar Code
Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
The operator get_bar_code_result provides access to the alphanumeric results of the find and decode process.
To access a result, the handle of the bar code model (BarCodeHandle) and the index of the resulting
symbol (CandidateHandle) must be passed. CandidateHandle refers to the results in the same order as
returned by find_bar_code and can take values from 0 to (n-1), where n is the total number of successfully
decoded symbols. Alternatively, CandidateHandle can be set to ’all’ if all results are desired. The option ’all’
can be chosen only if each individual result is single-valued.
When ResultName is set to ’decoded_strings’ the decoded result is returned as a string in a human readable
format. This decoded string can be returned for a single result, i.e., CandidateHandle is for example 0, or for
all results simultaneously, i.e., CandidateHandle is set to ’all’. Note that the decoded string contains only
data characters. Start/stop characters are excluded, but can be referred to via ’decoded_reference’. For codes
with a facultative check character it depends on the settings whether the check character is returned or not. When
’check_char’ is set to the default value ’absent’, the decoded string treats the check character as a normal data
character. When ’check_char’ is set to ’present’ and the check character is correct, it is omitted from the string.
If the check character is wrong, the resulting string is empty.
When choosing ’decoded_reference’ as ResultName the underlying decoded reference data is returned. It com-
prises all original characters of the symbol, i.e., data characters, potential start or stop characters and check charac-
ters if present. For codes taking only numeric data, like, e.g., the EAN/UPC codes, the RSS-14 and RSS Limited
codes, or the 2/5 codes, the decoded reference data takes the same values as the decoded string data including check
characters. For codes with alphanumeric data, like for example Code 39 or Code 128, the decoded reference data
are the indices into the respective decoding table. For RSS Expanded and RSS Expanded Stacked the reference
values are the ASCII codes of the decoded data, where the special character FNC1 appears with the value 10.
Furthermore, for all codes from the RSS family the first reference value represents a linkage flag with a value of 1
if the flag is set and 0 otherwise. As the decoded reference is a tuple of whole numbers, it can only be queried for
a single result, meaning that CandidateHandle has to be the handle number of the corresponding decoded symbol.
When ResultName is set to ’composite_strings’ or ’composite_reference’, the decoded string or the reference
data of an RSS Composite component is returned, respectively. For further details see the description of the
parameter ’composite_code’ of set_bar_code_param.
When ResultName is set to ’orientation’, the orientation for the specified result is returned. The ’orientation’ of
a bar code is defined as the angle between its reading direction and the horizontal image axis. The angle is positive
in counterclockwise direction and is given in degrees. It can be in the range of [-180.0 . . . 180.0] degrees. Note
that the reading direction is perpendicular to the bars of the bar code. A single angle is returned when only one
result is specified, e.g., by entering 0 for CandidateHandle. Otherwise, when CandidateHandle is set to
’all’, a tuple containing the angles of all results is returned.
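As an illustration of this angle convention, the following Python sketch (a hypothetical helper, not part of HALCON) computes the orientation of a reading direction given as a column/row pixel displacement. The row component is negated because image rows grow downwards while the angle is counted positive counterclockwise:

```python
import math

def barcode_orientation(dcol, drow):
    """Angle in degrees between a reading direction, given as a pixel
    displacement (column delta, row delta), and the horizontal image axis.
    The row delta is negated because image rows grow downwards while the
    angle is counted positive in counterclockwise direction; the result
    lies in [-180.0 ... 180.0]."""
    return math.degrees(math.atan2(-drow, dcol))
```

For example, a symbol read from left to right yields 0.0, and one read from bottom to top yields 90.0.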
Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; integer
Handle of the bar code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; string / integer
Index of the bar code result or candidate, respectively, for which the data is required.
Default Value : ’all’
Suggested values : CandidateHandle ∈ {0, 1, 2, ’all’}
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Names of the resulting data to return.
Default Value : ’decoded_strings’
Suggested values : ResultName ∈ {’decoded_strings’, ’decoded_reference’, ’orientation’,
’composite_strings’, ’composite_reference’}
’element_size_min’: Minimal size of bar code elements, i.e. the minimal width of bars and spaces. For small bar
codes the value should be reduced to 1.5. In the case of huge bar codes the value should be increased, which
results in a shorter execution time and fewer candidates.
Typical values: [1.5 . . . 10.0]
Default: 2.0
’element_size_max’: Maximal size of bar code elements, i.e. the maximal width of bars and spaces. The value of
’element_size_max’ should be adequately low so that two neighboring bar codes are not fused into a single
one. On the other hand, the value should be sufficiently high in order to find the complete bar code region.
Typical values: [4.0 . . . 60.0]
Default: 8.0
’element_height_min’: Minimal bar code height. The default value of this parameter is -1, meaning that the bar
code reader automatically derives a reasonable height from the other parameters. Only for very flat or very
high bar codes can a manual adjustment of this parameter be necessary. In the case of a bar code with a height
of less than 16 pixels the respective height should be set by the user. Note that the minimal value is 8 pixels.
If the bar code is very high, i.e., 70 pixels and more, manually adjusting to the respective height can lead to a
speed-up of the subsequent finding and reading operation.
Typical values: [-1, 8 . . . 64]
Default: -1
’orientation’: Expected bar code orientation. A potential (candidate) bar code contains bars with similar ori-
entation. The ’orientation’ and ’orientation_tol’ parameters are used to specify the range [’orientation’-
’orientation_tol’, ’orientation’+’orientation_tol’]. find_bar_code processes a candidate bar code only
when the average orientation of its bars lies in this range. If the bar codes are expected to appear only in
certain orientations in the processed images, one can reduce the orientation range adequately. This enables
an early identification of false candidates and hence shorter execution times. This adjustment can be used for
images with a lot of texture, which includes fragments tending to result in false bar code candidates.
The actual orientation angle of a bar code is explained with get_bar_code_result(...,’orientation’,...),
with the only difference that for the early identification of false candidates the reading direction of the bar
codes is ignored, which restricts the relevant orientation values to the range [-90.0 . . . 90.0]. The only exception
to this rule is the bar code symbology PharmaCode, which possesses a forward and a backward
reading direction at the same time: here, ’orientation’ can take values in the range [-180.0 . . . 180.0] and the
decoded result is unique, corresponding to just one reading direction.
Typical values: [-90.0 . . . 90.0]
Default: 0.0
’orientation_tol’: Orientation tolerance. See the explanation of the ’orientation’ parameter. As explained there,
relevant orientation values are only in the range of [-90.0 . . . 90.0], which means that with ’orientation_tol’ =
90 the whole range is spanned. Therefore, valid values for ’orientation_tol’ are only in the range of [0.0
. . . 90.0]. The default value 90.0 means that no restriction on the bar code candidates is imposed.
Typical values: [0.0 . . . 90.0]
Default: 90.0
’meas_thresh’: The bar-space-sequence of a bar code is determined with a scanline measuring the position of the
edges. Finding these edges requires a threshold. ’meas_thresh’ defines this threshold which is a relative value
with respect to the dynamic range of the scanline pixels. In the case of disturbances in the bar code region or
a high noise level, the value of ’meas_thresh’ should be increased.
Typical values: [0.05 . . . 0.2]
Default: 0.05
’max_diff_orient’: A potential bar code region contains bars, and hence edges, with a similar orientation. The
value ’max_diff_orient’ denotes the maximal difference in this orientation between adjacent pixels and is given
in degrees. If a bar code is of bad quality with jagged edges, the parameter ’max_diff_orient’ should be set to
bigger values. If the bar code is of good quality, ’max_diff_orient’ can be set to smaller values, thus reducing
the number of potential but false bar code candidates.
Typical values: [2 . . . 20]
Default: 10
’check_char’: For bar codes with a facultative check character, this parameter determines whether the check character
is taken into account or not. If the bar code has a check character, ’check_char’ should be set to ’present’
and thus the check character is tested. In that case, a bar code result is returned only if the check sum is correct.
For ’check_char’ set to ’absent’ no check sum is computed and bar code results are returned as long as
they were successfully decoded. Bar codes with a facultative check character are, e.g., Code 39, Codabar, 25
Industrial and 25 Interleaved.
Values: [’absent’, ’present’]
Default: ’absent’
’composite_code’: EAN.UPC bar codes can have an additional 2D Composite code component appended. If
’composite_code’ is set to ’CC-A/B’ the composite component will be found and decoded. By default, ’com-
posite_code’ is set to ’none’ and thus it is disabled. If the searched bar code symbol has no attached composite
component, just the result of the bar code itself is returned by find_bar_code. Composite codes are sup-
ported only for bar codes of the RSS family.
Values: [’none’, ’CC-A/B’]
Default: ’none’
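As an illustration of the optional check character handling described for ’check_char’, the following Python sketch computes and verifies the modulo-43 check character of Code 39. The helper names are hypothetical; HALCON performs this test internally when ’check_char’ is set to ’present’:

```python
# Code 39 character set in value order (values 0..42).
CODE39_CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def code39_check_char(data):
    """Return the modulo-43 check character for the given data characters."""
    total = sum(CODE39_CHARS.index(ch) for ch in data)
    return CODE39_CHARS[total % 43]

def verify_code39(decoded):
    """Emulate 'check_char' = 'present': return the data characters without
    the check character if it is correct, or '' (symbol graded unreadable)
    if it is wrong."""
    data, check = decoded[:-1], decoded[-1]
    return data if code39_check_char(data) == check else ""
```

For example, code39_check_char("CODE39") yields "W", so a symbol decoded as "CODE39W" passes the test and only "CODE39" is reported.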
Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; integer
Handle of the bar code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Names of the generic parameters that shall be adjusted for finding and decoding bar codes.
Default Value : ’element_size_max’
List of values : GenParamNames ∈ {’element_size_min’, ’element_size_max’, ’element_height_min’,
’orientation’, ’orientation_tol’, ’meas_thresh’, ’max_diff_orient’, ’check_char’, ’composite_code’}
. GenParamValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; integer / string / real
Values of the generic parameters that are adjusted for finding and decoding bar codes.
Default Value : 8
Suggested values : GenParamValues ∈ {0.1, 1.5, 2, 8, 32, 45, ’present’, ’absent’, ’none’, ’CC-A/B’}
Result
The operator set_bar_code_param returns the value 2 (H_MSG_TRUE) if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
set_bar_code_param is reentrant and processed without parallelization.
Possible Predecessors
create_bar_code_model
Possible Successors
find_bar_code
Module
Bar Code
17.5 Calibration
caltab_points ( : : CalTabDescrFile : X, Y, Z )
Read the mark center points from the calibration plate description file.
caltab_points reads the mark center points from the calibration plate description file CalTabDescrFile
(see gen_caltab) and returns their coordinates in X, Y, and Z. The mark center points are 3D coordinates in
the calibration plate coordinate system and describe the 3D model of the calibration plate. The calibration plate
coordinate system is located in the middle of the surface of the calibration plate; its z-axis points into the calibration
plate, its x-axis to the right, and its y-axis downwards.
The mark center points are typically used as input parameters for the operator camera_calibration. This
operator projects the model points into the image, minimizes the distance between the projected points and the
observed 2D coordinates in the image (see find_marks_and_pose), and from this computes the exact values
for the interior and exterior camera parameters.
Parameter
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default Value : ’caltab.descr’
List of values : CalTabDescrFile ∈ {’caltab.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinates of the mark center points in the coordinate system of the calibration plate.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinates of the mark center points in the coordinate system of the calibration plate.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinates of the mark center points in the coordinate system of the calibration plate.
Example
read_image(Image1, ’calib-01’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576]
find_marks_and_pose(Image1,Caltab1,’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, RCoord1, CCoord1, StartCamPar,
StartPose1, ’all’, CamParam, FinalPose, Errors)
Result
caltab_points returns 2 (H_MSG_TRUE) if all parameter values are correct and the file
CalTabDescrFile has been read successfully. If necessary, an exception is raised.
Parallelization Information
caltab_points is reentrant and processed without parallelization.
Possible Successors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
project_3d_point, get_line_of_sight, gen_caltab
Module
Foundation
Result
If the parameters are valid, the operator cam_mat_to_cam_par returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
cam_mat_to_cam_par is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
See also
camera_calibration, cam_par_to_cam_mat
Module
Calibration
Result
If the parameters are valid, the operator cam_par_to_cam_mat returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
cam_par_to_cam_mat is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration
See also
stationary_camera_self_calibration, cam_mat_to_cam_par
Module
Calibration
Then, the point is projected into the image plane, i.e., onto the sensor chip.
For the modeling of this projection process that is determined by the used combination of camera, lens, and frame
grabber, HALCON provides the following three 3D camera models:
For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in CamParam
is greater than 0, the projection is described by the following equations:
    pc = (x, y, z)^T

    u = Focus · x / z    and    v = Focus · y / z
In contrast, if the focal length is passed as 0 in CamParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:

    pc = (x, y, z)^T

    u = x    and    v = y

In both cases, the lens distortions are then modeled by the following equations, which transform the undistorted
image plane coordinates (u, v) into the distorted coordinates (ũ, ṽ):

    ũ = 2u / (1 + sqrt(1 - 4κ(u² + v²)))    and    ṽ = 2v / (1 + sqrt(1 - 4κ(u² + v²)))

Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:

    c = ũ / Sx + Cx    and    r = ṽ / Sy + Cy
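The area scan projection steps described above can be traced with a small Python sketch (an illustrative re-implementation, not HALCON code; the parameter names mirror the manual):

```python
import math

def project_area_scan(x, y, z, focus, kappa, sx, sy, cx, cy):
    """Project a point given in camera coordinates to pixel coordinates
    [r, c] with the area scan model: pinhole projection if focus > 0,
    telecentric (parallel) projection if focus == 0, followed by the
    division-model radial distortion and the pixel transform."""
    if focus > 0.0:                         # pinhole camera
        u, v = focus * x / z, focus * y / z
    else:                                   # telecentric camera
        u, v = x, y
    # division model for the radial lens distortion (kappa = 0: no change)
    denom = 1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v))
    ut, vt = 2.0 * u / denom, 2.0 * v / denom
    # image plane coordinates -> pixel coordinates
    return vt / sy + cy, ut / sx + cx       # (row, column)
```

With kappa = 0 the distortion step leaves (u, v) unchanged, so the result reduces to the plain pinhole or telecentric projection.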
For line scan cameras, the relative motion between the camera and the object must also be modeled. In HALCON,
the following assumptions for this motion are made:
The motion is described by the motion vector V = (Vx, Vy, Vz)^T, which must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along -V.
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, the camera coordinate system also moves
relative to the object, i.e., each image line has been imaged from a different position. This means that there would
be an individual pose for each image line. To make things easier, in HALCON all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators find_marks_and_pose and camera_calibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows: Assuming

    pc = (x, y, z)^T ,

the following equations are solved for m, ũ, and t:

     m · D · ũ  = x - t · Vx
    -m · D · pv = y - t · Vy
     m · Focus  = z - t · Vz

with

    D  = 1 / (1 + κ(ũ² + pv²))
    pv = Sy · Cy

Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:

    c = ũ / Sx + Cx    and    r = t
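For the distortion-free case κ = 0 (and hence D = 1), the three equations above can be solved in closed form. The following Python sketch (illustrative only, not HALCON code) eliminates m from the last two equations to obtain t, then recovers ũ and the pixel coordinates:

```python
def project_line_scan(x, y, z, focus, sx, sy, cx, cy, vx, vy, vz):
    """Project a camera-coordinate point to [r, c] for a line scan pinhole
    camera, assuming kappa = 0 and hence D = 1. pv = Sy * Cy as above."""
    pv = sy * cy
    # Eliminate m from the last two equations:
    #   pv * (z - t*vz) = -focus * (y - t*vy)   =>   solve for t
    t = (pv * z + focus * y) / (focus * vy + pv * vz)
    m = (z - t * vz) / focus                # from m * Focus = z - t*Vz
    ut = (x - t * vx) / m                   # from m * u~ = x - t*Vx (D = 1)
    return t, ut / sx + cx                  # r = t, c = u~/Sx + Cx
```

The row coordinate r equals t, i.e., the scanline index at which the point crosses the sensor line.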
Camera parameters
The total of 14 camera parameters for area scan cameras and 17 camera parameters for line scan cameras, respec-
tively, can be divided into the interior and exterior camera parameters:
Interior camera parameters: These parameters describe the characteristics of the used camera, especially the
dimension of the sensor itself and the projection properties of the used combination of lens, camera, and
frame grabber.
For area scan cameras, the above described camera model contains the following 8 parameters:
Focus: Focal length of the lens. 0 for telecentric lenses.
Kappa (κ): Distortion coefficient to model the pincushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor. For pinhole cameras, it corresponds to the horizontal distance between two neighboring
cells on the sensor. For telecentric cameras, it represents the horizontal size of a pixel in world
coordinates. Attention: This value increases if the image is subsampled!
Sy : Scale factor. For pinhole cameras, it corresponds to the vertical distance between two neighboring
cells on the sensor. For telecentric cameras, it represents the vertical size of a pixel in world coordinates.
Since in most cases the image signal is sampled line-synchronously, this value is determined
by the dimension of the sensor and need not be estimated for pinhole cameras by the calibration
process. Attention: This value increases if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Row coordinate of the image center point (center of the radial distortion).
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsampled!
For line scan cameras, the above described camera model contains the following 11 parameters:
Focus: Focal length of the lens.
Kappa: Distortion coefficient to model the pincushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor, corresponds to the horizontal distance between two neighboring cells on the sensor.
Attention: This value increases if the image is subsampled!
Sy : Scale factor. During the calibration, it appears only in the form pv = Sy · Cy . pv describes the
distance of the image center point from the sensor line in [meters]. Attention: This value increases
if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Distance of the image center point (center of the radial distortion) from the sensor line in [scanlines].
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsam-
pled!
Vx : X-component of the motion vector.
Vy : Y-component of the motion vector.
Vz : Z-component of the motion vector.
Note that the term focal length is not quite correct and would be appropriate only for an infinite object
distance. To simplify matters, the term focal length is always used even if the image distance is meant.
Exterior camera parameters: These 6 parameters describe the 3D pose, i.e., the position and orientation, of the
world coordinate system relative to the camera coordinate system. For line scan cameras, the pose of the
world coordinate system refers to the camera coordinate system of the first image line. Three parameters
describe the translation, three the rotation. See create_pose for more information about 3D poses. Note
that camera_calibration operates with all types of 3D poses for NStartPose.
When using the standard calibration plate, the world coordinate system is defined by the coordinate system
of the calibration plate, which is located in the middle of the surface of the calibration plate, its z-axis pointing
into the calibration plate, its x-axis to the right, and its y-axis downwards.
How to generate an appropriate calibration plate? The simplest method to determine the interior parameters of
a camera is to use the standard calibration plate as generated by the operator gen_caltab. You can
obtain high-precision calibration plates in various sizes and materials from your local distributor. In case of
small distances between object and lens it may be sufficient to print the calibration pattern on a laser printer
and to mount it on cardboard. Otherwise, especially when using a wide-angle lens, it is possible to print
the PostScript file on a large ink-jet printer and to mount it on an aluminum plate. It is very important that
the mark coordinates in the calibration plate description file correspond to the real ones on the calibration
plate with high accuracy. Thus, the calibration plate description file has to be modified in accordance with
the measurement of the calibration plate!
How to take a set of suitable images? If you use the standard calibration plate, you can proceed in the following
way: With the combination of lens (fixed distance!), camera, and frame grabber to be calibrated a set of
images of the calibration plate has to be taken, see open_framegrabber and grab_image. The
following items have to be considered:
• In total, at least 10 to 20 images should be taken.
• The calibration plate has to be completely visible (incl. border!).
• Reflections etc. on the calibration plate should be avoided.
• Within the set of images the calibration plate should appear in different positions and orientations: once
on the left in the image, once on the right, once (left and right) at the bottom, once (left or right) at the top,
from different distances, etc. In doing so, the calibration plate should be rotated around its x- and/or y-axis
so that the perspective distortions of the calibration pattern are clearly visible. Thus, the exterior camera
parameters (camera pose with respect to the calibration plate) should be set to a large variety of different
values!
• The calibration plate should fill at least a quarter of the whole image to ensure the robust detection of the
marks.
How to extract the calibration marks in the images? If a standard calibration plate is used, you can use the
operators find_caltab and find_marks_and_pose to determine the coordinates of the calibration
marks in each image and to compute a rough estimate for the exterior camera parameters. The concatenation
of these values can directly be used as initial values for the exterior camera parameters (NStartPose) in
camera_calibration.
Obviously, images in which the segmentation of the calibration plate (find_caltab) has failed or in which
the calibration marks haven’t been determined successfully by find_marks_and_pose should not be used.
How to find suitable initial values for the interior camera parameters? The operators
find_marks_and_pose (determination of initial values for the exterior camera parameters) and
camera_calibration require initial values for the interior camera parameters. These parameters can be
provided by an appropriate text file (see read_cam_par), which can be generated by write_cam_par
or can be edited manually.
For area scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the used lens, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells depends on the dimension
of the used chip of the camera (see technical specifications of the camera). Generally, common
chips are either 1/3"-Chips (e.g., SONY XC-73, SONY XC-777), 1/2"-Chips (e.g., SONY XC-999,
Panasonic WV-CD50), or 2/3"-Chips (e.g., SONY DXC-151, SONY XC-77). Notice: The value of
Sx increases if the image is subsampled! Appropriate initial values are:
Full image (768*576) Subsampling (384*288)
1/3"-Chip 0.0000055 m 0.0000110 m
1/2"-Chip 0.0000086 m 0.0000172 m
2/3"-Chip 0.0000110 m 0.0000220 m
The value for Sx is calibrated, since the video signal of a camera normally isn’t sampled pixel-synchronously.
Sy : Since most off-the-shelf cameras have quadratic pixels, the same values for Sy are valid as for Sx .
In contrast to Sx the value for Sy will not be calibrated for pinhole cameras, because the video
signal of a camera normally is sampled line-synchronously. Thus, the initial value is equal to the
final value. Appropriate initial values are:
Full image (768*576) Subsampling (384*288)
1/3"-Chip 0.0000055 m 0.0000110 m
1/2"-Chip 0.0000086 m 0.0000172 m
2/3"-Chip 0.0000110 m 0.0000220 m
Cx and Cy : Initial values for the coordinates of the image center are half the image width and half the
image height, respectively. Notice: The values of Cx and Cy decrease if the image is subsampled!
Appropriate initial values are:
Full image (768*576) Subsampling (384*288)
Cx 384.0 192.0
Cy 288.0 144.0
ImageWidth and ImageHeight: These two parameters are determined by the frame grabber used
and are therefore not calibrated. Appropriate initial values are, for example:
Full image (768*576) Subsampling (384*288)
ImageWidth 768 384
ImageHeight 576 288
For line scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the lens used, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells can be taken from the
technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m, and 14e-6 m.
Notice: The value of Sx increases if the image is subsampled!
Sy : The initial value for the size of a cell in the direction perpendicular to the sensor line can also be
taken from the technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m,
and 14e-6 m. Notice: The value of Sy increases if the image is subsampled! In contrast to Sx , the
value for Sy will NOT be calibrated for line scan cameras, because it appears only in the form pv =
Sy · Cy . Therefore, it cannot be determined separately.
Cx : The initial value for the x-coordinate of the image center is half the image width. Notice: The
value of Cx decreases if the image is subsampled! Appropriate initial values are:
Image width: 1024 2048 4096 8192
Cx: 512 1024 2048 4096
Cy : The initial value for the y-coordinate of the image center can normally be set to 0.
ImageWidth and ImageHeight: These two parameters are determined by the used frame grabber and
therefore are not calibrated.
Vx , Vy , Vz : The initial values for the x-, y-, and z-component of the motion vector depend on the image
acquisition setup. Assuming a camera that looks perpendicularly onto a conveyor belt, and that is
rotated around its optical axis such that the sensor line is perpendicular to the conveyor belt, i.e., the
y-axis of the camera coordinate system is parallel to the conveyor belt, the initial values Vx = Vz =
0. The initial value for Vy can then be determined, e.g., from a line scan image of an object with
known size (e.g., calibration plate, ruler):
Vy = l[m]/l[row]
HALCON 8.0.2
1090 CHAPTER 17. TOOLS
with:
l[m] = Length of the object in object coordinates [meter]
l[row] = Length of the object in image coordinates [rows]
If, compared to the above setup, the camera is rotated 30 degrees around its optical axis, i.e., around
the z-axis of the camera coordinate system, the above determined initial values must be changed as
follows:
Vx′ = sin(30°) · Vy
Vy′ = cos(30°) · Vy
Vz′ = Vz = 0
If, compared to the first setup, the camera is rotated -20 degrees around the x-axis of the camera
coordinate system, the following initial values result:
Vx′ = Vx = 0
Vy′ = cos(−20°) · Vy
Vz′ = sin(−20°) · Vy
The quality of the initial values for Vx , Vy , and Vz is crucial for the success of the whole calibration.
If they are not precise enough, the calibration may fail.
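The determination of Vy from an object of known size and the subsequent adaptation of the motion vector to a rotated camera can be sketched in HDevelop as follows (the object length and row count are made-up example values, not values from this manual):

```hdevelop
* Assumed measurement: an object of 0.1 m length spans 200 rows
* in a line scan image of the conveyor belt.
LengthMeters := 0.1
LengthRows := 200
Vy := LengthMeters / LengthRows
* Camera rotated 30 degrees around its optical axis (z-axis):
VxRot := sin(rad(30)) * Vy
VyRot := cos(rad(30)) * Vy
VzRot := 0.0
```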
Which camera parameters have to be estimated? The input parameter EstimateParams is used to select
which camera parameters to estimate. Usually this parameter is set to ’all’, i.e., all 6 exterior camera pa-
rameters (translation and rotation) and all interior camera parameters are determined. If the interior camera
parameters already have been determined (e.g., by a previous call to camera_calibration) it is often
desired to only determine the pose of the world coordinate system in camera coordinates (i.e., the exterior
camera parameters). In this case, EstimateParams can be set to ’pose’. This has the same effect as
EstimateParams = [’alpha’,’beta’,’gamma’,’transx’,’transy’,’transz’]. Otherwise, EstimateParams
contains a tuple of strings indicating the combination of parameters to estimate. In addition, parameters can
be excluded from estimation by using the prefix ~. For example, the values ['pose','~transx'] have the same
effect as ['alpha','beta','gamma','transy','transz'], and ['all','~focus'] determines all interior and exterior
parameters except the focus. The prefix ~ can be used with all parameter values except
'all'.
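A typical call excluding one parameter might look as follows; this is only a sketch, and it assumes that the correspondence tuples NX, NY, NZ, NRow, NCol as well as StartCamParam and NStartPose have already been set up as described above:

```hdevelop
* Calibrate all parameters except the focal length, which is here
* assumed to be known exactly (e.g., from the lens data sheet).
camera_calibration (NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose, ['all','~focus'], CamParam, NFinalPose, Errors)
```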
What is the order within the individual parameters? The length of the tuple NStartPose corresponds to the
number of calibration images, e.g., using 15 images leads to a length of the tuple NStartPose equal to
15 · 7 = 105 (15 times the 7 exterior camera parameters). The first 7 values correspond to the pose of the
calibration plate in the first image, the next 7 values to the pose in the second image, etc.
This fixed number of calibration images has to be considered within the tuples with the coordinates of the 3D
model marks and the extracted 2D marks. If 15 images are used, the length of the tuples NRow and NCol
is 15 times the length of the tuples with the coordinates of the 3D model marks (NX, NY, and NZ). If every
image contains 49 marks, the length of the tuples NRow and NCol is 15 · 49 = 735, while the length of the
tuples NX, NY, and NZ is 49. The order of the values in NRow and NCol is “image after image”, i.e., using
49 marks the first 3D model point corresponds to the 1st, 50th, 99th, 148th, 197th, 246th, etc. extracted 2D
mark.
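The "image after image" ordering means that the marks belonging to one particular image occupy a contiguous slice of NRow and NCol. A small sketch of the index arithmetic (assuming 49 marks per image, as in the text):

```hdevelop
* Select the extracted 2D marks that belong to image I (0-based).
NumMarks := 49
I := 1
RowsImageI := NRow[I * NumMarks : (I + 1) * NumMarks - 1]
ColsImageI := NCol[I * NumMarks : (I + 1) * NumMarks - 1]
```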
The 3D model points can be read from a calibration plate description file using the operator
caltab_points. Initial values for the poses of the calibration plate can be determined by applying
find_marks_and_pose for each image. The tuple NStartPose is set by the concatenation of all
these poses.
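The collection of the input tuples over all calibration images can be sketched as follows. The file names and the threshold parameters of find_caltab and find_marks_and_pose are assumptions chosen for illustration, not prescribed values:

```hdevelop
* Collect 2D marks and start poses over 15 calibration images.
NRow := []
NCol := []
NStartPose := []
for I := 1 to 15 by 1
    read_image (Image, 'calib/calib_' + I$'02d')
    find_caltab (Image, Caltab, 'caltab.descr', 3, 112, 5)
    find_marks_and_pose (Image, Caltab, 'caltab.descr', StartCamParam, 128, 10, 18, 0.9, 15.0, 100.0, RCoord, CCoord, StartPose)
    NRow := [NRow, RCoord]
    NCol := [NCol, CCoord]
    NStartPose := [NStartPose, StartPose]
endfor
```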
What is the meaning of the output parameters? If the camera calibration process is finished successfully, i.e.,
the minimization process has converged, the output parameters CamParam and NFinalPose contain the
computed exact values for the interior and exterior camera parameters. The length of the tuple NFinalPose
corresponds to the length of the tuple NStartPose.
The representation type of NFinalPose corresponds to the representation type of the first pose in
NStartPose (see create_pose). You can convert the representation type with convert_pose_type.
The computed average errors (Errors) give an impression of the accuracy of the calibration. The error
values (deviations in x and y coordinates) are measured in pixels.
Must I use a planar calibration object? No. The operator camera_calibration is designed in a way that
the input tuples NX, NY, NZ, NRow, and NCol can contain any 3D/2D correspondences, see the above para-
graph explaining the order of the single parameters.
Thus, it makes no difference how the required 3D model marks and the corresponding extracted 2D marks are
determined. On the one hand, it is possible to use a 3D calibration pattern; on the other hand, you can also use any
characteristic points (natural landmarks) with known positions in the world. By setting EstimateParams
to ’pose’, it is thus possible to compute the pose of an object in camera coordinates! For this, at least three
3D/2D-correspondences are necessary as input. NStartPose can, e.g., be generated directly as shown in
the program example for create_pose.
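Estimating only the pose of an object from known 3D points can be sketched as follows; the point tuples, the calibrated interior parameters CamParam, and the values of the rough start pose are assumptions for illustration:

```hdevelop
* Estimate the pose of an object from at least three known
* 3D points (X, Y, Z) and their extracted image coordinates
* (Row, Col); the interior parameters stay fixed.
create_pose (0, 0, 0.5, 0, 0, 0, 'Rp+T', 'gba', 'point', StartPose)
camera_calibration (X, Y, Z, Row, Col, CamParam, StartPose, 'pose', CamParamOut, PoseObject, Errors)
```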
Attention
The minimization process of the calibration depends on the initial values of the interior (StartCamParam) and
exterior (NStartPose) camera parameters. The computed average errors Errors give an impression of the
accuracy of the calibration. The errors (deviations in x and y coordinates) are measured in pixels.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Ordered tuple with all x coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Ordered tuple with all y coordinates of the calibration marks (in meters).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Ordered tuple with all z coordinates of the calibration marks (in meters).
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real / integer
Initial values for the interior camera parameters.
. NStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Ordered tuple with all initial values for the exterior camera parameters.
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string / integer
Camera parameters to be estimated.
Default Value : ’all’
List of values : EstimateParams ∈ {’all’, ’pose’, ’alpha’, ’beta’, ’gamma’, ’transx’, ’transy’, ’transz’,
’focus’, ’kappa’, ’cx’, ’cy’, ’sx’, ’sy’, ’vx’, ’vy’, ’vz’}
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
. NFinalPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Ordered tuple with all exterior camera parameters.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Average error distances in pixels.
Example
Result
camera_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired camera
parameters have been determined by the minimization algorithm. If necessary, an exception handling is raised.
Parallelization Information
camera_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, pose_to_hom_mat3d, disp_caltab, sim_caltab
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab
Module
Calibration
• ’fixed’: Only Kappa is modified, the other interior camera parameters remain unchanged. In general, this
leads to a change of the visible part of the scene.
• ’fullsize’: The scale factors Sx and Sy and the image center point [Cx , Cy ]T are modified in order to preserve
the visible part of the scene. Thus, all points visible in the original image are also visible in the modified
(rectified) image. In general, this leads to undefined pixels in the modified image.
• ’adaptive’: A trade-off between the other modes: The visible part of the scene is slightly reduced to prevent
undefined pixels in the modified image. Similarly to ’fullsize’, the scale factors and the image center point
are modified.
• ’preserve_resolution’: As in the mode ’fullsize’, all points visible in the original image are also visible in
the modified (rectified) image, i.e., the scale factors Sx and Sy and the image center point [Cx , Cy ]T are
modified. In general, this leads to undefined pixels in the modified image. In contrast to the mode ’fullsize’
additionally the size of the modified image is increased such that the image resolution does not decrease in
any part of the image.
In all modes the radial distortion coefficient κ in CamParOut is set to Kappa. The transformation of a pixel in
the modified image into the image plane using CamParOut results in the same point as the transformation of a
pixel in the original image via CamParIn.
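Removing the radial distortion from contours might look as follows; this is a sketch, and it assumes CamParam comes from a previous camera_calibration call and Contours from a subpixel edge extraction:

```hdevelop
* Compute distortion-free camera parameters (Kappa = 0) and
* rectify the XLD contours accordingly.
change_radial_distortion_cam_par ('adaptive', CamParam, 0, CamParamRect)
change_radial_distortion_contours_xld (Contours, ContoursRect, CamParam, CamParamRect)
```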
Parameter
change_radial_distortion_contours_xld (
Contours : ContoursRectified : CamParIn, CamParOut : )
Parallelization Information
change_radial_distortion_contours_xld is reentrant and processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, gen_contours_skeleton_xld, edges_sub_pix,
smooth_contours_xld
Possible Successors
gen_polygons_xld, smooth_contours_xld
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_image
Module
Calibration
change_radial_distortion_image ( Image,
Region : ImageRectified : CamParIn, CamParOut : )
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_contours_xld
Module
Calibration
Transform an XLD contour into the plane z=0 of a world coordinate system.
The operator contour_to_world_plane_xld transforms contour points given in Contours into the plane
z=0 in a world coordinate system and returns the 3D contour points in ContoursTrans. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose. In CamParam
you must pass the interior camera parameters (see write_cam_par for the sequence of the parameters and the
underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image point in the
camera coordinate system, taking into account the radial distortions. The line of sight is then transformed into the
world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the 3D
coordinates of the transformed contour ContoursTrans are obtained.
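A typical measurement sequence might look like this sketch; the edge filter parameters are assumptions, and CamParam and WorldPose are taken to be results of camera_calibration:

```hdevelop
* Extract subpixel edges and transform them into the plane z=0
* of the world coordinate system, scaled to millimeters.
edges_sub_pix (Image, Edges, 'lanser2', 0.5, 20, 40)
contour_to_world_plane_xld (Edges, EdgesWorld, CamParam, WorldPose, 'mm')
```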
Parameter
Result
contour_to_world_plane_xld returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception handling is raised.
Parallelization Information
contour_to_world_plane_xld is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
image_points_to_world_plane
Module
Calibration
Generate a calibration plate description file and a corresponding PostScript file. (obsolete)
create_caltab has been replaced with the operator gen_caltab. The operator is contained and described
for compatibility reasons only.
create_caltab generates the description of a standard calibration plate for HALCON. This calibration plate
consists of 49 black circular marks on a white plane which are surrounded by a black frame. The parameter Width
sets the width (equal to the height) of the whole calibration plate in meters. Using a width of 0.8 m, the distance
between two neighboring marks becomes 10 cm, and the mark radius and the frame width are set to 2.5 cm. The
calibration plate coordinate system is located in the middle of the surface of the calibration plate; its z-axis points
into the calibration plate, its x-axis to the right, and its y-axis downwards.
The file CalTabDescrFile contains the calibration plate description, e.g., the number of rows and columns
of the calibration plate, the geometry of the surrounding frame (see find_caltab), and the coordinates and
the radius of all calibration plate marks given in the calibration plate coordinate system. A file generated by
create_caltab looks like the following (comments are marked by a ’#’ at the beginning of a line):
#
# Description of the standard calibration plate
# used for the camera calibration in HALCON
#
# 7 rows X 7 columns
# Distance between mark centers [meter]: 0.1
# Quadratic frame (with outer and inner border) around calibration plate
w 0.025
o -0.41 0.41 0.41 -0.41
i -0.4 0.4 0.4 -0.4
# calibration marks at y = 0 m
-0.3 0 0.025
-0.2 0 0.025
-0.1 0 0.025
0 0 0.025
0.1 0 0.025
0.2 0 0.025
0.3 0 0.025
The file CalTabFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Width of the calibration plate in meters.
Default Value : 0.8
Suggested values : Width ∈ {1.2, 0.8, 0.6, 0.4, 0.2, 0.1}
Recommended Increment : 0.1
Restriction : 0.0 < Width
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the calibration plate description.
Default Value : ’caltab.descr’
List of values : CalTabDescrFile ∈ {’caltab.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’}
. CalTabFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the PostScript file.
Default Value : ’caltab.ps’
Example
Result
create_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception handling is raised.
Parallelization Information
create_caltab is processed completely exclusively without parallelization.
Possible Successors
read_cam_par, caltab_points
See also
gen_caltab, find_caltab, find_marks_and_pose, camera_calibration, disp_caltab,
sim_caltab
Module
Foundation
Project and visualize the 3D model of the calibration plate in the image.
disp_caltab is used to visualize the calibration marks and the connecting lines between the marks of the
used calibration plate (CalTabDescrFile) in the window specified by WindowHandle. Additionally, the
x- and y-axes of the plate’s coordinate system are printed on the plate’s surface. For this, the 3D model of
the calibration plate is projected into the image plane using the interior (CamParam) and exterior camera pa-
rameters (CaltabPose, i.e., the pose of the calibration plate in camera coordinates). The underlying camera
model (pinhole, telecentric, or line scan camera with radial distortion) is described in write_cam_par and
camera_calibration.
Typically, disp_caltab is used to verify the result of the camera calibration (see camera_calibration)
by superimposing it onto the original image. The current line width can be set by set_line_width, the current
color can be set by set_color. Additionally, the font type of the labels of the coordinate axes can be set by
set_font.
The parameter ScaleFac influences the number of supporting points to approximate the elliptic contours of the
calibration marks. You should increase the number of supporting points, if the image part in the output window is
displayed with magnification (see set_part).
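Verifying a calibration result could be sketched as follows; the pose slice assumes the 7-value pose representation described for camera_calibration, and the display settings are arbitrary example choices:

```hdevelop
* Project the plate model of the first calibration image
* onto the displayed image to check the calibration visually.
PoseFirst := NFinalPose[0:6]
set_color (WindowHandle, 'green')
set_line_width (WindowHandle, 1)
disp_caltab (WindowHandle, 'caltab.descr', CamParam, PoseFirst, 1)
```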
Parameter
Result
disp_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception handling
is raised.
Parallelization Information
disp_caltab is reentrant, local, and processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par, read_pose
See also
find_marks_and_pose, camera_calibration, sim_caltab, write_cam_par,
read_cam_par, create_pose, write_pose, read_pose, project_3d_point,
get_line_of_sight
Module
Foundation
Result
find_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and an image region is
found. The behavior in case of empty input (no image given) can be set via set_system(::
’no_object_result’,<Result>:) and the behavior in case of an empty result region via set_system
(::’store_empty_region’,<true/false>:). If necessary, an exception handling is raised.
Parallelization Information
find_caltab is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
find_marks_and_pose
See also
find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
caltab_points, gen_caltab
Module
Foundation
Extract the 2D calibration marks from the image and calculate initial values for the exterior camera parameters.
find_marks_and_pose is used to determine the necessary input data for the subsequent camera calibration
(see camera_calibration): First, the 2D center points [RCoord,CCoord] of the calibration marks within
the region CalTabRegion of the input image Image are extracted and ordered. Secondly, a rough estimate for
the exterior camera parameters (StartPose) is computed, i.e., the 3D pose (= position and orientation) of the
calibration plate relative to the camera coordinate system (see create_pose for more information about 3D
poses).
In the input image Image an edge detector is applied (see edges_image, mode ’lanser2’) to the region
CalTabRegion, which can be found by applying the operator find_caltab. The filter parameter for this
edge detection can be tuned via Alpha. In the edge image closed contours are searched for: The number of closed
contours must correspond to the number of calibration marks as described in the calibration plate description file
CalTabDescrFile and the contours have to be elliptically shaped. Contours shorter than MinContLength are
discarded, just as contours enclosing regions with a diameter larger than MaxDiamMarks (e.g., the border of the
calibration plate).
For the detection of contours a threshold operator is applied on the resulting amplitudes of the edge detector. All
points with a high amplitude (i.e., borders of marks) are selected.
First, the threshold value is set to StartThresh. If the search for the closed contours or the successive pose
estimate fails, this threshold value is successively decreased by DeltaThresh down to a minimum value of
MinThresh.
Each of the found contours is refined with subpixel accuracy (see edges_sub_pix) and subsequently approx-
imated by an ellipse. The center points of these ellipses represent a good approximation of the desired 2D image
coordinates [RCoord,CCoord] of the calibration mark center points. The order of the values within these two tu-
ples must correspond to the order of the 3D coordinates of the calibration marks in the calibration plate description
file CalTabDescrFile, since this fixes the correspondences between extracted image marks and known model
marks (given by caltab_points)! If a triangular orientation mark is defined in a corner of the plate by the
plate description file (see gen_caltab), the mark will be detected and the point order is returned in row-major
order beginning with the corner mark in the (barycentric) negative quadrant with respect to the defined coordinate
system of the plate. Else, if no orientation mark is defined, the order of the center points is in row-major order
beginning at the upper left corner mark in the image.
Based on the ellipse parameters for each calibration mark, a rough estimate for the exterior camera parameters is
finally computed. For this purpose the fixed correspondences between extracted image marks and known model
marks are used. The estimate StartPose describes the pose of the calibration plate in the camera coordinate
system as required by the operator camera_calibration.
Parameter
Result
find_marks_and_pose returns 2 (H_MSG_TRUE) if all parameter values are correct and an estimation for
the exterior camera parameters has been determined successfully. If necessary, an exception handling is raised.
Parallelization Information
find_marks_and_pose is reentrant and processed without parallelization.
Possible Predecessors
find_caltab
Possible Successors
camera_calibration
See also
find_caltab, camera_calibration, disp_caltab, sim_caltab, read_cam_par,
The file CalTabPSFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
Result
gen_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception handling is raised.
Parallelization Information
gen_caltab is processed completely exclusively without parallelization.
Possible Successors
read_cam_par, caltab_points
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world
coordinate system.
gen_image_to_world_plane_map generates a projection map Map, which describes the mapping between
the image plane and the plane z=0 (plane of measurements) in a world coordinate system. This map can be used
to rectify an image with the operator map_image. The rectified image shows neither radial nor perspective dis-
tortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the plane
of measurements. The world coordinate system is chosen by passing its 3D pose relative to the camera coordinate
system in WorldPose. In CamParam you must pass the interior camera parameters (see write_cam_par for
the sequence of the parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
The size of the images to be mapped can be specified by the parameters WidthIn and HeightIn. The pixel
position of the upper left corner of the output image is determined by the origin of the world coordinate system.
The size of the output image can be chosen by the parameters WidthMapped, HeightMapped, and Scale.
WidthMapped and HeightMapped must be given in pixels.
With the parameter Scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
Scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
The mapping function is stored in the output image Map. Map has the same size as the resulting images after
the mapping. If no interpolation is chosen, Map consists of one image containing one channel, in which for each
pixel of the resulting image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen, Map consists of one image containing
five channels. In the first channel for each pixel in the resulting image the linearized coordinates of the pixel in
the input image is stored that is in the upper left position relative to the transformed coordinates. The four other
channels contain the weights of the four neighboring pixels of the transformed coordinates which are used for the
bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to
the transformed coordinates. If several images have to be mapped using the same camera parameters,
gen_image_to_world_plane_map in combination with map_image is much more efficient than the op-
erator image_to_world_plane because the mapping function needs to be computed only once.
Parameter
* -> determine output image size such that entire input image fits into it
ExtentX := MaxX-MinX
ExtentY := MaxY-MinY
WidthRectifiedImage := ExtentX/ScaleForSimilarPixelSize
HeightRectifiedImage := ExtentY/ScaleForSimilarPixelSize
* create mapping with the determined parameters
gen_image_to_world_plane_map(Map, FinalCamParam, PoseForEntireImage,
Width, Height,
WidthRectifiedImage, HeightRectifiedImage,
ScaleForSimilarPixelSize, ’bilinear’)
* transform grabbed images with the created map
while(1)
grab_image_async(Image, FGHandle, -1)
map_image(Image, Map, RectifiedImage)
endwhile
Result
gen_image_to_world_plane_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If neces-
sary, an exception handling is raised.
Parallelization Information
gen_image_to_world_plane_map is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Possible Successors
map_image
Alternatives
image_to_world_plane
See also
map_image, contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
gen_radial_distortion_map computes the mapping of images corresponding to a changing radial dis-
tortion in accordance to the interior camera parameters CamParIn and CamParOut which can be obtained,
e.g., using the operator camera_calibration. CamParIn and CamParOut contain the old and the new
camera parameters including the old and the new radial distortion, respectively (also see write_cam_par for
the sequence of the parameters and the underlying camera model). Each pixel of the potential output image is
transformed into the image plane using CamParOut and subsequently projected into a subpixel position of the
potential input image using CamParIn.
The mapping function is stored in the output image Map. The size of Map is given by the camera parameters
CamParOut and therefore defines the size of the resulting mapped images using map_image. The size of the
images to be mapped with map_image is determined by the camera parameters CamParIn. If no interpolation
is chosen (Interpolation = ’none’), Map consists of one image containing one channel, in which for each
pixel of the output image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen (Interpolation = ’bilinear’),
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image
the linearized coordinate of the pixel in the input image is stored that is in the upper left position relative to
the transformed coordinates. The four other channels contain the weights of the four neighboring pixels of the
transformed coordinates which are used for the bilinear interpolation, in the following order:
    2 3
    4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the transformed coordinates.
If CamParOut was computed via change_radial_distortion_cam_par, the mapping describes the
effect of a lens with a modified radial distortion. If κ is 0, the mapping corresponds to a rectification.
If several images have to be mapped using the same camera parameters, gen_radial_distortion_map
in combination with map_image is much more efficient than the operator
change_radial_distortion_image because the transformation must be computed only once.
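The typical call sequence can be sketched as follows (the frame grabber handle FGHandle and CamParIn are assumed to be available; the distortion is removed by setting the new kappa to 0 with change_radial_distortion_cam_par):
change_radial_distortion_cam_par ('fixed', CamParIn, 0, CamParOut)
gen_radial_distortion_map (Map, CamParIn, CamParOut, 'bilinear')
* apply the same map to all grabbed images
while (1)
    grab_image_async (Image, FGHandle, -1)
    map_image (Image, Map, RectifiedImage)
endwhile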
Parameter
HALCON 8.0.2
1110 CHAPTER 17. TOOLS
vector. The normal vectors are normalized and oriented such that they point away from the optical center which
is the origin of the camera coordinate system. If OutputType is set to ’center_normal’, the output parameters
Pose1 and Pose2 contain only six elements which describe the position and orientation of the circle instead of
the seven elements of the 3D pose that are returned if OutputType is set to ’pose’.
If more than one contour is passed in Contour, Radius must either contain a tuple that contains a value for
each contour or only one value which is then used for all contours. The resulting positions and orientations are
stored one after another in Pose1 and Pose2, i.e., Pose1 and Pose2 contain first the pose or the position and
the normal vector of the first contour, followed by the respective values for the second contour and so on.
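For example, with OutputType set to 'pose', the pose of the circle corresponding to contour index I can be selected from the concatenated tuples as follows (a sketch):
* 7 pose values per contour for OutputType = 'pose'
Pose1I := Pose1[I*7:I*7+6]
Pose2I := Pose2[I*7:I*7+6]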
Attention
The accuracy of the determined poses depends heavily on the accuracy of the extracted contours. The extraction of
curved edges using relatively large filter masks leads to a slightly shifted edge position. Edge extraction approaches
that are based on the first derivative of the image function (e.g., edges_sub_pix) yield edges that are shifted
towards the center of curvature, i.e., extracted ellipses will be slightly too small. Approaches that are based on the second derivative of the image function (e.g., laplace_of_gauss followed by zero_crossing_sub_pix)
result in edges that are shifted away from the center of curvature, i.e., extracted ellipses will be slightly too large.
These effects increase with the curvature of the edge and with the size of the filter mask that is used for the
edge extraction. Therefore, to achieve high accuracy, the ellipses should appear large in the image and the filter
parameter should be chosen such that small filter masks are used (see info_edges).
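For example, the edges could be extracted as follows (a sketch; the threshold values are assumptions that must be adapted to the application):
* for the 'canny' filter, a small Alpha yields a small filter mask
edges_sub_pix (Image, Edges, 'canny', 1.0, 20, 40)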
Parameter
The advantage of representing the line of sight as two points is that it is easier to transform the line in 3D. To do
so, all that is necessary is to apply the operator affine_trans_point_3d to the two points.
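For example, the line of sight can be transformed into a world coordinate system as follows (a sketch; CamParam and WorldPose are assumed to result from camera_calibration):
get_line_of_sight (Row, Column, CamParam, PX, PY, PZ, QX, QY, QZ)
* transform both points with the inverse of the world pose
pose_to_hom_mat3d (WorldPose, cam_H_world)
hom_mat3d_invert (cam_H_world, world_H_cam)
affine_trans_point_3d (world_H_cam, PX, PY, PZ, WPX, WPY, WPZ)
affine_trans_point_3d (world_H_cam, QX, QY, QZ, WQX, WQY, WQZ)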
Parameter
Result
get_line_of_sight returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
get_line_of_sight is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par, camera_calibration
Possible Successors
affine_trans_point_3d
See also
camera_calibration, disp_caltab, read_cam_par, project_3d_point,
affine_trans_point_3d
Module
Calibration
Output
The resulting Pose is of code-0 (see create_pose) and represents the pose of the center of the rectangle. You
can compute the pose of the corners of the rectangle as follows:
set_origin_pose (Pose, Width/2, -Height/2, 0, PoseCorner1)
set_origin_pose (Pose, Width/2, Height/2, 0, PoseCorner2)
set_origin_pose (Pose, -Width/2, Height/2, 0, PoseCorner3)
set_origin_pose (Pose, -Width/2, -Height/2, 0, PoseCorner4)
A rectangle is symmetric with respect to its x, y, and z axes, and one and the same contour can represent a rectangle in 4 different poses. The angles in Pose are normalized to lie in the range [−90; 90] degrees; the remaining poses can be computed by combining flips around the corresponding axes:
* NOTE: the following code works ONLY for poses of type code 0
* as returned by get_rectangle_pose
*
* flip around z-axis
PoseFlippedZ := Pose
PoseFlippedZ[5] := PoseFlippedZ[5]+180
* flip around y-axis
PoseFlippedY := Pose
PoseFlippedY[4] := PoseFlippedY[4]+180
PoseFlippedY[5] := -PoseFlippedY[5]
* flip around x-axis
PoseFlippedX := Pose
PoseFlippedX[3] := PoseFlippedX[3]+180
PoseFlippedX[4] := -PoseFlippedX[4]
PoseFlippedX[5] := -PoseFlippedX[5]
Note that if the rectangle is a square (Width == Height) the number of alternative poses is 8.
If more than one contour is given in Contour, a corresponding tuple of values has to be provided for both Width and Height as well. If only one value is provided for each of these arguments, this value is applied to each processed contour. A pose is estimated for each processed contour, and all poses are concatenated in Pose (see the example below).
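For example, the pose of the contour with index I can be selected from the concatenated tuple as follows (a sketch):
* 7 pose values per contour
PoseI := Pose[I*7:I*7+6]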
• ratio Width/Height
• length of the projected contour
• degree of perspective distortion of the contour
In order to achieve an accurate pose estimation, there are three corresponding criteria that should be considered:
The ratio Width/Height should fulfill 1/3 < Width/Height < 3.
For a rectangular object deviating from this criterion, the longer side dominates the determination of the pose. This causes instability in the estimation of the angle around the rectangle’s longer axis. In the extreme case, when one of the dimensions is 0, the rectangle degenerates to a line segment, whose pose cannot be estimated.
Secondly, the length of each side of the contour should be at least 20 pixels. An error is returned if a side of the contour is less than 5 pixels long.
Thirdly, the more the contour appears projectively distorted, the more stably the algorithm works. Therefore, the pose of a rectangle tilted w.r.t. the image plane can be estimated accurately, whereas the pose of a rectangle parallel to the image plane of the camera could be unstable. This is further discussed in the next paragraph.
Additionally, there is a rule of thumb that ensures projective distortion: the rectangle should be placed in space
such that its size in x and y dimension in the camera coordinate system should not be less than 1/10th of its
distance from the camera in z direction.
get_rectangle_pose provides two measures for the accuracy of the estimated Pose. Error is the average pixel error between the contour points and the modeled rectangle reprojected on the image. If Error exceeds 0.5, this indicates that the algorithm did not converge properly, and the resulting Pose should not be used. CovPose contains 36 entries representing the 6 × 6 covariance matrix of the first 6 entries of Pose. The above mentioned instability of the angle about the rectangle’s longer axis can be detected by checking that the absolute values of the variances and covariances of the rotations around the x and y axes (CovPose[21], CovPose[28], and CovPose[22] == CovPose[27]) do not exceed 0.05. Furthermore, unusually increased values of any of the covariances, and especially of the variances (the 6 values on the diagonal of CovPose with indices 0, 7, 14, 21, 28, and 35, respectively), indicate a poor quality of Pose.
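For a single contour, these checks can be sketched as follows (the 0.5 and 0.05 thresholds are the ones given above):
if (Error > 0.5)
    * the algorithm probably did not converge; discard Pose
endif
if (abs(CovPose[21]) > 0.05 or abs(CovPose[28]) > 0.05 or abs(CovPose[22]) > 0.05)
    * the rotation around the longer axis is unreliable
endif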
Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject
Contour(s) to be examined.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
Number of elements : CamParam = 8
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Width of the rectangle in meters.
Restriction : Width > 0
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Height of the rectangle in meters.
Restriction : Height > 0
. WeightingMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Weighting mode for the optimization phase.
Default Value : ’nonweighted’
List of values : WeightingMode ∈ {’nonweighted’, ’huber’, ’tukey’}
. ClippingFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Clipping factor for the elimination of outliers (typical: 1.0 for ’huber’ and 3.0 for ’tukey’).
Default Value : 2.0
Suggested values : ClippingFactor ∈ {1.0, 1.5, 2.0, 2.5, 3.0}
Restriction : ClippingFactor > 0
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
3D pose of the rectangle.
Number of elements : Pose = (7 · Contour)
. CovPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Covariances of the pose values.
Number of elements : CovPose = (36 · Contour)
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Root-mean-square value of the final residual error.
Number of elements : Error = Contour
Example
* ...
endfor
Result
get_rectangle_pose returns 2 (H_MSG_TRUE) if all parameter values are correct and the position of the rectangle has been determined successfully. If the provided contour(s) cannot be segmented as a quadrangle, get_rectangle_pose returns H_ERR_FIT_QUADRANGLE. If necessary, an exception is raised.
Parallelization Information
get_rectangle_pose is reentrant, local, and processed without parallelization.
Possible Predecessors
edges_sub_pix
See also
get_circle_pose, set_origin_pose, camera_calibration
References
G.Schweighofer and A.Pinz: “Robust Pose Estimation from a Planar Target”; Transactions on Pattern Analysis
and Machine Intelligence (PAMI), 28(12):2024-2030, 2006
Module
3D Metrology
Moving camera: cam Hcal = cam Htool · tool Hbase · base Hcal
In this chain, cam Htool corresponds to CamStartPose/CamFinalPose, tool Hbase to MRelPoses, and base Hcal to BaseStartPose/BaseFinalPose.
From the set of calibration images, the operator hand_eye_calibration determines the two transformations
at the ends of the chain, i.e., the pose of the robot tool in camera coordinates (cam Htool ) and the pose of the
calibration object in the robot base coordinate system (base Hcal ). In the input parameters CamStartPose and
BaseStartPose, you must specify suitable starting values for these transformations which are constant over
all calibration images. hand_eye_calibration then returns the calibrated values in CamFinalPose and
BaseFinalPose.
In contrast, the transformation in the middle of the chain, tool Hbase , is known but changes for each calibration
image, because it describes the pose of the robot moving the camera, or to be more exact its inverse pose (pose of
the base coordinate system in robot tool coordinates). You must specify the (inverse) robot poses in the calibration
images in the parameter MRelPoses.
Internally, hand_eye_calibration uses a Newton-type algorithm to minimize an error function based on
normal equations. Analogously to the calibration of the camera itself (see camera_calibration), the hand-
eye calibration becomes more robust if you use many calibration images that were acquired with different robot
poses.
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from a calibration image, i.e., the pose of the calibration object in camera coordinates (i.e.,
the external camera parameters), are equal to a chain of poses or homogeneous transformation matrices, this time
from the calibration object via the robot’s tool to its base and finally to the camera:
Stationary camera: cam Hcal = cam Hbase · base Htool · tool Hcal
In this chain, cam Hbase corresponds to CamStartPose/CamFinalPose, base Htool to MRelPoses, and tool Hcal to BaseStartPose/BaseFinalPose.
Analogously to the configuration with a moving camera, the operator hand_eye_calibration determines
the two transformations at the ends of the chain, here the pose of the robot base coordinate system in camera coordi-
nates (cam Hbase ) and the pose of the calibration object relative to the robot tool (tool Hcal ). In the input parameters
CamStartPose and BaseStartPose, you must specify suitable starting values for these transformations.
hand_eye_calibration then returns the calibrated values in CamFinalPose and BaseFinalPose.
Please note that the names of the parameters BaseStartPose and BaseFinalPose are misleading for this
configuration!
The transformation in the middle of the chain, base Htool , describes the pose of the robot moving the calibration
object, i.e., the pose of the tool relative to the base coordinate system. You must specify the robot poses in the
calibration images in the parameter MRelPoses.
How do I get 3D model points and their projections? 3D model points given in the world coordinate system
(NX, NY, NZ) and their associated projections in the image (NRow, NCol) form the basis of the hand-eye
calibration. In order to be able to perform a successful hand-eye calibration, you need images of the 3D
model points that were obtained for sufficiently many different poses of the manipulator.
In principle, you can use arbitrary known points for the calibration. However, it is usually most convenient to
use the standard calibration plate, e.g., the one that can be generated with gen_caltab. In this case, you
can use the operators find_caltab and find_marks_and_pose to extract the position of the cali-
bration plate and of the calibration marks and the operator caltab_points to access the 3D coordinates
of the calibration marks (see also the description of camera_calibration).
The parameter MPointsOfImage specifies the number of 3D model points used for each pose of the
manipulator, i.e., for each image. With this, the 3D model points which are stored in a linearized fashion
in NX, NY, NZ, and their corresponding projections (NRow, NCol) can be associated with the corresponding
pose of the manipulator (MRelPoses). Note that in contrast to the operator camera_calibration the
3D coordinates of the model points must be specified for each calibration image, not only once.
How do I acquire a suitable set of images? If a standard calibration plate is used, the following procedure
should be used:
• At least 10 to 20 images from different positions should be taken in which the position of the camera
with respect to the calibration plate is sufficiently different. The position of the calibration plate (moving
camera: relative to the robot’s tool; stationary camera: relative to the robot’s base) must not be changed
between images.
• In each image, the calibration plate must be completely visible (including its border).
• No reflections or other disturbances should be visible on the calibration plate.
• The set of images must show the calibration plate from very different positions of the manipulator.
The calibration plate can and should be visible in different parts of the images. Furthermore, it should
be slightly to moderately rotated around its x- or y-axis, in order to clearly exhibit distortions of the
calibration marks. In other words, the corresponding exterior camera parameters (pose of the calibration
plate in camera coordinates) should take on many different values.
• In each image, the calibration plate should fill at least one quarter of the entire image, in order to ensure
the robust detection of the calibration marks.
• The interior camera parameters of the camera to be used must have been determined earlier and must be
passed in CamParam (see camera_calibration). Note that changes of the image size, the focal length, the aperture, or the focus result in a change of the interior camera parameters.
• The camera must not be modified between the acquisition of the individual images, i.e., focal length,
aperture, and focus must not be changed, because all calibration images use the same interior camera
parameters. Please make sure that the focus is sufficient for the expected changes of the distance of the camera from the calibration plate. Bright lighting conditions for the calibration plate are therefore important, because then you can use smaller apertures, which result in a larger depth of focus.
How do I obtain suitable starting values? Depending on the used hand-eye configuration, you need starting val-
ues for the following poses:
Moving camera
BaseStartPose = pose of the calibration object in robot base coordinates
CamStartPose = pose of the robot tool in camera coordinates
Stationary camera
BaseStartPose = pose of the calibration object in robot tool coordinates
CamStartPose = pose of the robot base in camera coordinates
The camera’s coordinate system is oriented such that its optical axis corresponds to the z-axis, the x-axis
points to the right, and the y-axis downwards. The coordinate system of the standard calibration plate is located in the middle of the surface of the calibration plate; its z-axis points into the calibration plate, its x-axis to the right, and its y-axis downwards.
For more information about creating a 3D pose please refer to the description of create_pose which also
contains a short example.
In fact, you need a starting value only for one of the two poses (BaseStartPose or CamStartPose).
The other can be computed from one of the calibration images. This means that you can pick the pose that is
easier to determine and let HALCON compute the other one for you.
The main idea is to exploit the fact that the two poses for which we need starting values are connected via the
already described chain of transformations, here shown for a configuration with a moving camera:
Moving camera: cam Hcal = cam Htool · tool Hbase · base Hcal
Here, cam Htool corresponds to CamStartPose, tool Hbase to MRelPoses, and base Hcal to BaseStartPose.
In this configuration, it is typically easy to determine a starting value for cam Htool (CamStartPose). Thus, we solve the equation for base Hcal (BaseStartPose):
base Hcal = (cam Htool · tool Hbase)^(-1) · cam Hcal
Thus, to compute BaseStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for CamStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator find_marks_and_pose to determine the projections of the marks. An example program can
be found below.
For a configuration with a stationary camera, the chain of transformations is:
Stationary camera: cam Hcal = cam Hbase · base Htool · tool Hcal
Here, cam Hbase corresponds to CamStartPose, base Htool to MRelPoses, and tool Hcal to BaseStartPose.
In this configuration, it is typically easier to determine a starting value for tool Hcal (BaseStartPose). Thus, we solve the equation for cam Hbase (CamStartPose):
cam Hbase = cam Hcal · (base Htool · tool Hcal)^(-1)
Thus, to compute CamStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for BaseStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator find_marks_and_pose to determine the projections of the marks. An example program can
be found below.
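In HDevelop, this computation can be sketched as follows (the variable names are assumptions: tool_P_cal denotes the estimated BaseStartPose read from file, RelPose0 the robot pose base_H_tool of the selected image, and cam_P_cal the pose of the calibration plate in that image):
* CamStartPose = cam_H_cal * inverse(base_H_tool * tool_H_cal)
pose_to_hom_mat3d (tool_P_cal, tool_H_cal)
pose_to_hom_mat3d (RelPose0, base_H_tool)
pose_to_hom_mat3d (cam_P_cal, cam_H_cal)
hom_mat3d_compose (base_H_tool, tool_H_cal, base_H_cal)
hom_mat3d_invert (base_H_cal, cal_H_base)
hom_mat3d_compose (cam_H_cal, cal_H_base, cam_H_base)
hom_mat3d_to_pose (cam_H_base, CamStartPose)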
How do I obtain the poses of the robot? In the parameter MRelPoses you must pass the poses of the robot in
the calibration images (moving camera: pose of the robot base in robot tool coordinates; stationary camera:
pose of the robot tool in robot base coordinates) in a linearized fashion. We recommend creating the robot poses in a separate program and saving them in files using write_pose. In the calibration program you can then read and accumulate them in a tuple as shown in the example program below. In addition, we recommend saving the pose of the robot tool in robot base coordinates independent of the hand-eye configuration. When using a moving camera, you then invert the poses after reading them, before accumulating them. This is also shown in the example program.
Via the cartesian interface of the robot, you can typically obtain the pose of the tool in base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (OrderOfRotation = ’gba’
or ’abg’, see create_pose). In this case, you can directly use the pose values obtained from the robot as
input for create_pose.
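For example, assuming the robot reports the translation (Tx, Ty, Tz) in meters and the angles (Rx, Ry, Rz) with rotation order ’abg’ (a sketch):
create_pose (Tx, Ty, Tz, Rx, Ry, Rz, 'Rp+T', 'abg', 'point', RobPose)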
If the cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix
step by step using the operators hom_mat3d_rotate and hom_mat3d_translate and then convert
the matrix into a pose using hom_mat3d_to_pose. The following example code creates a pose from the
ZYZ representation described above:
hom_mat3d_identity (HomMat3DIdent)
* Phi1, Phi2, Phi3 correspond to the angles ϕ1, ϕ2, ϕ3 above
hom_mat3d_rotate (HomMat3DIdent, Phi3, 'z', 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, Phi2, 'y', 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, Phi1, 'z', 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate (HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose (base_H_tool, RobPose)
Please note that the hand-eye calibration only works if the robot poses MRelPoses are specified with high
accuracy!
How can I exclude individual pose parameters from the estimation? hand_eye_calibration estimates
a maximum of 12 pose parameters, i.e., 6 parameters each for the two computed poses CamFinalPose
and BaseFinalPose. However, it is possible to exclude some of these pose parameters from the esti-
mation. This means that the starting values of the poses remain unchanged and are assumed constant for
the estimation of all other pose parameters. The parameter ToEstimate is used to determine which pose
parameters should be estimated. In ToEstimate, a list of keywords for the parameters to be estimated is
passed. The possible values are:
BaseFinalPose:
’baseTx’ = translation along the x-axis
’baseTy’ = translation along the y-axis
’baseTz’ = translation along the z-axis
’baseRa’ = rotation around the x-axis
’baseRb’ = rotation around the y-axis
’baseRg’ = rotation around the z-axis
’base_pose’ = all 6 BaseFinalPose parameters
CamFinalPose:
’camTx’ = translation along the x-axis
’camTy’ = translation along the y-axis
’camTz’ = translation along the z-axis
’camRa’ = rotation around the x-axis
’camRb’ = rotation around the y-axis
’camRg’ = rotation around the z-axis
’cam_pose’ = all 6 CamFinalPose parameters
In order to estimate all 12 pose parameters, you can pass the keyword ’all’ (or of course a tuple containing
all 12 keywords listed above).
It is useful to exclude individual parameters from the estimation if those pose parameters have already been measured exactly. To do so, define a string tuple of the parameters that should be estimated, or prefix the strings of excluded parameters with a ’~’ sign. For example, ToEstimate = [’all’,’~camTx’] estimates all pose values except the x translation of the camera, whereas ToEstimate = [’base_pose’,’~baseRb’] estimates the pose of the base apart from the rotation around the y-axis. The latter is equivalent to ToEstimate = [’baseTx’,’baseTy’,’baseTz’,’baseRa’,’baseRg’].
Which terminating criteria can be used for the error minimization? The error minimization terminates either
after a fixed number of iterations or if the error falls below a given minimum error. The parameter
StopCriterion is used to choose between these two alternatives. If ’CountIterations’ is passed, the
algorithm terminates after MaxIterations iterations.
If StopCriterion is passed as ’MinError’, the algorithm runs until the error falls below the error threshold
given in MinError. If, however, the number of iterations reaches the number given in MaxIterations,
the algorithm terminates with an error message.
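A call that terminates after a fixed number of iterations can be sketched as follows (the tuples are accumulated as in the example program below; the values 100 and 0.0005 are assumptions):
hand_eye_calibration (XCoord, YCoord, ZCoord, RCoord, CCoord, NumMarker,
                      MRelPoses, BaseStartPose, CamStartPose, CamParam,
                      'all', 'CountIterations', 100, 0.0005,
                      BaseFinalPose, CamFinalPose, NumErrors)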
What is the order of the individual parameters? The length of the tuple MPointsOfImage corresponds to
the number of different positions of the manipulator and thus to the number of calibration images. The
parameter MPointsOfImage determines the number of model points used in the individual positions. If
the standard calibration plate is used, this means 49 points per position (image). If for example 15 images
were acquired, MPointsOfImage is a tuple of length 15, where all elements of the tuple have the value 49.
The number of calibration images which is determined by the length of MPointsOfImage, must also be
taken into account for the tuples for the 3D model points and for the extracted 2D marks, respectively. Hence,
for 15 calibration images with 49 model points each, the tuples NX, NY, NZ, NRow, and NCol must contain
15 · 49 = 735 values each. These tuples are ordered according to the image the respective points lie in, i.e.,
the first 49 values correspond to the 49 model points in the first image. The order of the 3D model points and
the extracted 2D model points must be the same in each image.
The length of the tuple MRelPoses also depends on the number of calibration images. If, for example, 15
images and therefore 15 poses are used, the length of the tuple MRelPoses is 15 · 7 = 105 (15 times 7 pose
parameters). The first seven parameters thus determine the pose of the manipulator in the first image, and so
on.
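For instance, for 15 images with the 49 marks of the standard calibration plate each, MPointsOfImage can be built as follows (a sketch):
MPointsOfImage := []
for i := 1 to 15 by 1
    MPointsOfImage := [MPointsOfImage, 49]
endfor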
What do the output parameters mean? If StopCriterion was set to ’CountIterations’, the output parameters BaseFinalPose and CamFinalPose are returned even if the algorithm did not converge. If, however, StopCriterion was set to ’MinError’, the error must fall below MinError in order for the output parameters to be returned.
The representation type of BaseFinalPose and CamFinalPose is the same as in the corresponding
starting values. It can be changed with the operator convert_pose_type. The description of the dif-
ferent representation types and of their conversion can be found with the documentation of the operator
create_pose.
The parameter NumErrors contains a list of (numerical) errors from the individual iterations of the algo-
rithm. Based on the evolution of the errors, it can be decided whether the algorithm has converged for the
given starting values. The error values are returned as 3D deviations in meters. Thus, the last entry of the
error list corresponds to an estimate of the accuracy of the returned pose parameters.
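A convergence check can be sketched as follows (the 0.001 m threshold is an assumption and must be adapted to the accuracy requirements of the application):
FinalError := NumErrors[|NumErrors|-1]
if (FinalError > 0.001)
    * accuracy worse than 1 mm: check starting values and robot poses
endif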
Attention
The quality of the calibration depends on the accuracy of the input parameters (position of the calibration marks,
robot poses MRelPoses, and the starting positions BaseStartPose, CamStartPose). Based on the returned
error measures NumErrors, it can be decided, whether the algorithm has converged. Furthermore, the accuracy
of the returned pose can be estimated. The error measures are 3D differences in meters.
Parameter
read_cam_par(’campar.dat’, CamParam)
CalDescr := ’caltab.descr’
caltab_points(CalDescr, X, Y, Z)
* process all calibration images
for i := 0 to NumImages-1 by 1
read_image(Image, ’calib_’+i$’02d’)
* find marks on the calibration plate in every image
find_caltab(Image, CalPlate, CalDescr, 3, 150, 5)
find_marks_and_pose(Image, CalPlate, CalDescr, CamParam, 128, 10,
RCoordTmp, CCoordTmp, StartPose)
* accumulate 2D and 3D coordinates of the marks
RCoord := [RCoord, RCoordTmp]
CCoord := [CCoord, CCoordTmp]
XCoord := [XCoord, X]
YCoord := [YCoord, Y]
ZCoord := [ZCoord, Z]
NumMarker := [NumMarker, |RCoordTmp|]
* read pose of the robot tool in robot base coordinates
read_pose(’robpose_’+i$’02d’+’.dat’, RobPose)
* moving camera? invert pose
if (IsMovingCameraConfig=’true’)
pose_to_hom_mat3d(RobPose, base_H_tool)
hom_mat3d_invert(base_H_tool, tool_H_base)
hom_mat3d_to_pose(tool_H_base, RobPose)
endif
* accumulate robot poses
MRelPoses := [MRelPoses, RobPose]
* store the pose of the calibration plate in the first image and the
* corresponding pose of the robot for later use
if (i=0)
cam_P_cal := StartPose
RelPose0 := RobPose
endif
endfor
* obtain starting values: read one, compute the other
if (IsMovingCameraConfig=’true’)
* mov. camera: read pose of robot tool in camera coordinates
* compute pose of calibration plate in robot base coordinates
read_pose(’cam_P_tool.dat’, CamStartPose)
* BaseStartPose = inverse(CamStartPose * RelPose0) * cam_P_cal
pose_to_hom_mat3d(CamStartPose, cam_H_tool)
pose_to_hom_mat3d(RelPose0, tool_H_base)
pose_to_hom_mat3d(cam_P_cal, cam_H_cal)
hom_mat3d_compose(cam_H_tool, tool_H_base, cam_H_base)
hom_mat3d_invert(cam_H_base, base_H_cam)
hom_mat3d_compose(base_H_cam, cam_H_cal, base_H_cal)
hom_mat3d_to_pose(base_H_cal, BaseStartPose)
else
Result
hand_eye_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the method converges with an error less than the specified minimum error (if StopCriterion = ’MinError’). If necessary, an exception is raised.
Parallelization Information
hand_eye_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose
Possible Successors
write_pose, convert_pose_type, pose_to_hom_mat3d, disp_caltab, sim_caltab
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab
Module
Calibration
Transform image points into the plane z=0 of a world coordinate system.
The operator image_points_to_world_plane transforms image points which are given in Rows and
Cols into the plane z=0 in a world coordinate system and returns their 3D coordinates in X and Y. The world
coordinate system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose.
In CamParam you must pass the interior camera parameters (see write_cam_par for the sequence of the
parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image contour points
in the camera coordinate system, taking into account the radial distortions. The line of sight is then transformed
into the world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the
3D coordinates X and Y are obtained.
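A typical use after a calibration can be sketched as follows (the variable names are assumptions; see camera_calibration for how the camera parameters and the pose are obtained):
camera_calibration (X, Y, Z, RCoord, CCoord, StartCamParam, StartPose,
                    'all', CamParam, FinalPose, Errors)
* transform measured image points into millimeters in the plane z=0
image_points_to_world_plane (CamParam, FinalPose, Rows, Cols, 'mm', WX, WY)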
Parameter
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; real / integer
Row coordinates of the points to be transformed.
Default Value : 100.0
. Cols (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; real / integer
Column coordinates of the points to be transformed.
Default Value : 100.0
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or dimension
Default Value : ’m’
Suggested values : Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’µm’, 1.0, 0.01, 0.001, ’1.0e-6’, 0.0254, 0.3048,
0.9144}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; real
X coordinates of the points in the world coordinate system.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; real
Y coordinates of the points in the world coordinate system.
Example
Result
image_points_to_world_plane returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
HALCON 8.0.2
1124 CHAPTER 17. TOOLS
Parallelization Information
image_points_to_world_plane is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
contour_to_world_plane_xld
Module
Calibration
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
image_to_world_plane rectifies an image Image by transforming it into the plane z=0 (plane of mea-
surements) in a world coordinate system. The resulting rectified image ImageWorld shows neither radial nor
perspective distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly
onto the plane of measurements. The world coordinate system is chosen by passing its 3D pose relative to the
camera coordinate system in WorldPose. In CamParam you must pass the interior camera parameters (see
write_cam_par for the sequence of the parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
The pixel position of the upper left corner of the output image ImageWorld is determined by the origin of the
world coordinate system. The size of the output image ImageWorld can be chosen by the parameters Width,
Height, and Scale. Width and Height must be given in pixels.
With the parameter Scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
Scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
If several images have to be rectified using the same parameters, gen_image_to_world_plane_map in
combination with map_image is much more efficient than the operator image_to_world_plane because
the mapping function needs to be computed only once.
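The benefit of precomputing the mapping can be sketched as follows (Python/NumPy rather than HDevelop; the toy vertical-flip map is an assumption standing in for the geometric map that gen_image_to_world_plane_map would compute):

```python
import numpy as np

# Build the mapping once. An assumed toy map that flips the image
# vertically stands in here for the world-plane rectification map.
h, w = 4, 5
map_rows, map_cols = np.meshgrid(np.arange(h)[::-1], np.arange(w),
                                 indexing='ij')

def apply_map(image):
    # Nearest-neighbor remapping, analogous to map_image: each output
    # pixel looks up one input pixel through the precomputed map.
    return image[map_rows, map_cols]

# Rectifying many images reuses the map; only the cheap per-pixel
# lookup is repeated per image.
images = [np.arange(h * w, dtype=float).reshape(h, w) + k
          for k in range(3)]
rectified = [apply_map(img) for img in images]
```

The expensive part (computing where each output pixel comes from) happens once; applying the map is a plain lookup, which is why the map/apply pair is preferable for image sequences.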
Parameter
Result
image_to_world_plane returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
image_to_world_plane is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Alternatives
gen_image_to_world_plane_map, map_image
See also
contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration
Result
project_3d_point returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
project_3d_point is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par, affine_trans_point_3d
Possible Successors
gen_region_points, gen_region_polygon, disp_polygon
See also
camera_calibration, disp_caltab, read_cam_par, get_line_of_sight,
affine_trans_point_3d
Module
Calibration
Result
If the parameters are valid, the operator radiometric_self_calibration returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
radiometric_self_calibration is reentrant and processed without parallelization.
Possible Predecessors
read_image, grab_image, grab_image_async, set_framegrabber_param, concat_obj,
proj_match_points_ransac, projective_trans_image
Possible Successors
lut_trans
See also
histo_2dim, gray_histo, gray_histo_abs, reduce_domain
Module
Calibration
Focus:foc: 0.00806039;
DOUBLE:0.0:;
"Focal length of the lens [meter]";
Kappa:kappa: -2253.5;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";
Sx:sx: 1.0629e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";
Sy:sy: 1.1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";
Cx:cx: 378.236;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";
Cy:cy: 297.587;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";
ImageWidth:imgw: 768;
INT:1:32767;
"Width of the used calibration images [pixel]";
ImageHeight:imgh: 576;
INT:1:32767;
"Height of the used calibration images [pixel]";
In addition to the 8 parameters of the parameter group Camera:Parameter, the parameter group LinescanCamera:
Parameter contains 3 parameters that describe the motion of the camera with respect to the object. With this,
the parameter group LinescanCamera:Parameter consists of the 11 parameters Focus, Kappa (κ), Sx, Sy, Cx, Cy,
ImageWidth, ImageHeight, Vx, Vy, and Vz. A suitable file can look like the following:
Focus:foc: 0.061;
DOUBLE:0.0:;
"Focal length of the lens [meter]";
Kappa:kappa: -16.9761;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";
Sx:sx: 1.06903e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";
Sy:sy: 1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";
Cx:cx: 930.625;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";
Cy:cy: 149.962;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";
ImageWidth:imgw: 2048;
INT:1:32767;
"Width of the used calibration images [pixel]";
ImageHeight:imgh: 3840;
INT:1:32767;
"Height of the used calibration images [pixel]";
Vx:vx: 1.41376e-06;
DOUBLE::;
"X-component of the motion vector [meter/scanline]";
Vy:vy: 5.45756e-05;
DOUBLE::;
"Y-component of the motion vector [meter/scanline]";
Vz:vz: 3.45872e-06;
DOUBLE::;
"Z-component of the motion vector [meter/scanline]";
Parameter
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of interior camera parameters.
Default Value : ’campar.dat’
List of values : CamParFile ∈ {’campar.dat’, ’campar.initial’, ’campar.final’}
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
Example
Result
read_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been read successfully. If necessary, an exception is raised.
Parallelization Information
read_cam_par is reentrant and processed without parallelization.
Possible Successors
find_marks_and_pose, sim_caltab, gen_caltab, disp_caltab, camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
write_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation
Parameter
. SimImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Simulated calibration image.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default Value : ’caltab.descr’
List of values : CalTabDescrFile ∈ {’caltab.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’}
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CaltabPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Exterior camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements : 7
. GrayBackground (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Gray value of image background.
Default Value : 128
Suggested values : GrayBackground ∈ {0, 32, 64, 96, 128, 160}
Restriction : (0 ≤ GrayBackground) ≤ 255
. GrayCaltab (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Gray value of calibration plate.
Default Value : 224
Suggested values : GrayCaltab ∈ {144, 160, 176, 192, 208, 224, 240}
Restriction : (0 ≤ GrayCaltab) ≤ 255
. GrayMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Gray value of calibration marks.
Default Value : 80
Suggested values : GrayMarks ∈ {16, 32, 48, 64, 80, 96, 112}
Restriction : (0 ≤ GrayMarks) ≤ 255
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scaling factor to reduce oversampling.
Default Value : 1.0
Suggested values : ScaleFac ∈ {1.0, 0.5, 0.25, 0.125}
Recommended Increment : 0.05
Restriction : 1.0 ≥ ScaleFac
Example
Result
sim_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
sim_caltab is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, find_marks_and_pose, read_pose, read_cam_par,
hom_mat3d_to_pose
Possible Successors
find_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, create_pose,
hom_mat3d_to_pose, project_3d_point, gen_caltab
Module
Calibration
x = PX .
Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3×4 projection matrix.
The projection matrix P can be decomposed as follows:
P = K(R|t) .
Here, R is a 3×3 rotation matrix and t is an inhomogeneous 3D vector. These two entities describe
the position (pose) of the camera in 3D space. This convention is analogous to the convention used in
camera_calibration, i.e., for R = I and t = 0 the x axis points to the right, the y axis downwards, and
the z axis points forward. K is the calibration matrix of the camera (the camera matrix) which can be described as
follows:
        ( a·f  s·f   u )
K  =    (  0    f    v )
        (  0    0    1 )
Here, f is the focal length of the camera in pixels, a the aspect ratio of the pixels, s is a factor that models the
skew of the image axes, and (u, v) is the principal point of the camera in pixels. In this convention, the x axis
corresponds to the column axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the
fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point.
Consequently, the fourth coordinate can be set to 0, and it can be seen that X can be regarded as a point at infinity,
and hence represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and hence
X can be regarded as inhomogeneous 3D vector which can only be determined up to scale since it represents a
direction. With this, the above projection equation can be written as follows:
x = KRX .
If two images of the same point are taken with a stationary camera, the following equations hold:
x1 = K1 R1 X
x2 = K2 R2 X
and consequently

x2 = K2 R2 R1^-1 K1^-1 x1 = K2 R12 K1^-1 x1 = H12 x1 ,

where R12 = R2 R1^-1.
If the camera parameters do not change when taking the two images, K1 = K2 holds. Because of the above, the
two images of the same 3D point are related by a projective 2D transformation. This transformation can be deter-
mined with proj_match_points_ransac. It needs to be taken into account that the order of the coordinates
of the projective 2D transformations in HALCON is the opposite of the above convention. Furthermore, it needs
to be taken into account that proj_match_points_ransac uses a coordinate system in which the origin
of a pixel lies in the upper left corner of the pixel, whereas stationary_camera_self_calibration
uses a coordinate system that corresponds to the definition used in camera_calibration, in which the
origin of a pixel lies in the center of the pixel. For projective 2D transformations that are determined with
proj_match_points_ransac the rows and columns must be exchanged and a translation of (0.5, 0.5) must
be applied. Hence, instead of H12 = K2 R12 K1^-1, the following equations hold in HALCON:

        ( 0  1  0.5 )                    ( 0  1  −0.5 )
H12 =   ( 1  0  0.5 ) · K2 R12 K1^-1 ·   ( 1  0  −0.5 )
        ( 0  0   1  )                    ( 0  0    1  )

and

                   ( 0  1  −0.5 )         ( 0  1  0.5 )
K2 R12 K1^-1  =    ( 1  0  −0.5 ) · H12 · ( 1  0  0.5 ) .
                   ( 0  0    1  )         ( 0  0   1  )
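These conversion equations can be checked numerically. The following Python/NumPy sketch (with an assumed camera matrix and rotation) verifies that the two permutation/offset matrices are inverses of each other and that K2 R12 K1^-1 can be recovered from H12:

```python
import numpy as np

# Assumed camera matrix (f = 1000 px, a = 1, s = 0, principal point
# (320, 240)) and a small rotation about the optical axis.
f, u0, v0 = 1000.0, 320.0, 240.0
K = np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])
phi = 0.1
R12 = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                [np.sin(phi),  np.cos(phi), 0.0],
                [0.0,          0.0,         1.0]])

M = K @ R12 @ np.linalg.inv(K)   # K2 R12 K1^-1 with K1 = K2 = K

# Conversion matrices: swap row/column order and shift the pixel
# origin by (0.5, 0.5); they are inverses of each other.
P = np.array([[0.0, 1.0, 0.5], [1.0, 0.0, 0.5], [0.0, 0.0, 1.0]])
P_inv = np.array([[0.0, 1.0, -0.5], [1.0, 0.0, -0.5], [0.0, 0.0, 1.0]])

H12 = P @ M @ P_inv        # homography in the HALCON convention
M_back = P_inv @ H12 @ P   # recovers K2 R12 K1^-1
```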
From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can
be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D
transformation between the two images. Let Hij be the projective transformation from image i to image j. Then,
Kj Kj^T = Hij Ki Ki^T Hij^T

Kj^-T Kj^-1 = Hij^-T Ki^-T Ki^-1 Hij^-1
From the second equation, linear constraints on the camera parameters can be derived. This method is used for
EstimationMethod = ’linear’. Here, all source images i given by MappingSource and all destination
images j given by MappingDest are used to compute constraints on the camera parameters. After the camera
parameters have been determined from these constraints, the rotation of the camera in the respective images can
be determined based on the equation Rij = Kj^-1 Hij Ki and by constructing a chain of transformations from the
reference image ReferenceImage to the respective image. From the first equation above, a nonlinear method
to determine the camera parameters can be derived by minimizing the following error:
E = Σ_{(i,j) ∈ {(s,d)}} || Kj Kj^T − Hij Ki Ki^T Hij^T ||_F²
Here, analogously to the linear method, {(s, d)} is the set of overlapping images specified by MappingSource
and MappingDest. This method is used for EstimationMethod = ’nonlinear’. To start the minimization,
the camera parameters are initialized with the results of the linear method. These two methods are very fast and
return acceptable results if the projective 2D transformations Hij are sufficiently accurate. For this, it is essential
that the images do not have radial distortions. It can also be seen that in the above two methods the camera
parameters are determined independently from the rotation parameters, and consequently the possible constraints
are not fully exploited. In particular, it can be seen that it is not enforced that the projections of the same 3D
point lie close to each other in all images. Therefore, stationary_camera_self_calibration offers
a complete bundle adjustment as a third method (EstimationMethod = ’gold_standard’). Here, the camera
parameters and rotations as well as the directions in 3D corresponding to the image points (denoted by the vectors
X above), are determined in a single optimization by minimizing the following error:
E = Σ_{i=1}^{n} ( Σ_{j=1}^{m} || xij − Ki Ri Xj ||² + (1/σ²)(ui² + vi²) )
In this equation, only the terms for which the reconstructed direction Xj is visible in image i are taken into account.
The starting values for the parameters in the bundle adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization the bundle adjustment requires a significantly longer execution
time than the two simpler methods. Nevertheless, because the bundle adjustment results in significantly better
results, it should be preferred.
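The first constraint above can be verified for a synthetic stationary camera. In the following Python/NumPy sketch (assumed camera matrix and rotation; the pixel-origin correction discussed above is omitted for brevity), the homography H = K R K^-1 of a purely rotating camera leaves ω = K K^T invariant:

```python
import numpy as np

# Assumed constant camera matrix and an arbitrary rotation between
# two images of a stationary (purely rotating) camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
a, b = 0.2, -0.1   # rotation angles about z and y [rad]
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
Ry = np.array([[np.cos(b), 0.0, np.sin(b)],
               [0.0,       1.0, 0.0],
               [-np.sin(b), 0.0, np.cos(b)]])
R = Rz @ Ry

# Inter-image homography of a rotating camera: H = K R K^-1.  Since
# R R^T = I, it satisfies  Kj Kj^T = Hij Ki Ki^T Hij^T  with Ki = Kj.
H = K @ R @ np.linalg.inv(K)
omega = K @ K.T
```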
In each of the three methods the camera parameters that should be computed can be specified. The remaining
parameters are set to a constant value. Which parameters should be computed is determined with the parameter
CameraModel which contains a tuple of values. CameraModel must always contain the value ’focus’ that
specifies that the focal length f is computed. If CameraModel contains the value ’principal_point’ the principal
point (u, v) of the camera is computed. If not, the principal point is set to (ImageWidth/2, ImageHeight/2).
If CameraModel contains the value ’aspect’ the aspect ratio a of the pixels is determined, otherwise it is set to
1. If CameraModel contains the value ’skew’ the skew of the image axes is determined, otherwise it is set to
0. Only the following combinations of the parameters are allowed: ’focus’, [’focus’, ’principal_point’], [’focus’,
’aspect’], [’focus’, ’principal_point’, ’aspect’], and [’focus’, ’principal_point’, ’aspect’, ’skew’].
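The allowed combinations can be encoded in a small validity check (Python sketch with hypothetical helper names; the ’kappa’ extension and the optional sigma suffix described below are ignored here):

```python
# The allowed CameraModel combinations, order-insensitive.
ALLOWED_MODELS = [
    {'focus'},
    {'focus', 'principal_point'},
    {'focus', 'aspect'},
    {'focus', 'principal_point', 'aspect'},
    {'focus', 'principal_point', 'aspect', 'skew'},
]

def camera_model_valid(camera_model):
    # 'focus' is always required; 'skew' additionally requires
    # 'aspect' and 'principal_point', as encoded in the list above.
    return set(camera_model) in ALLOWED_MODELS

ok = camera_model_valid(['focus', 'principal_point'])   # True
bad = camera_model_valid(['focus', 'skew'])             # False
```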
Additionally, it is possible to determine the parameter Kappa which models radial lens distortions, if
EstimationMethod = ’gold_standard’ has been selected and the camera parameters are assumed constant.
In this case, ’kappa’ can also be included in the parameter CameraModel.
When using EstimationMethod = ’gold_standard’ to determine the principal point, it is possible to penalize
estimations far away from the image center. This can be done by appending a sigma value to the parameter, e.g., ’principal_point:0.5’. If no sigma is given, the penalty term in the above error equation is omitted.
The parameter FixedCameraParams determines whether the camera parameters can change in each im-
age or whether they should be assumed constant for all images. To calibrate a camera so that it can
later be used for measuring with the calibrated camera, only FixedCameraParams = ’true’ is use-
ful. The mode FixedCameraParams = ’false’ is mainly useful to compute spherical mosaics with
gen_spherical_mosaic if the camera zoomed or if the focus changed significantly when the mosaic images
were taken. If a mosaic with constant camera parameters should be computed, of course FixedCameraParams
= ’true’ should be used. It should be noted that for FixedCameraParams = ’false’ the camera calibration problem is poorly determined, especially for long focal lengths. In these cases, often only the focal length can
be determined. Therefore, it may be necessary to use CameraModel = ’focus’ or to constrain the position of the
principal point by using a small Sigma for the penalty term for the principal point.
The number of images that are used for the calibration is passed in NumImages. Based on the number of images,
several constraints for the camera model must be observed. If only two images are used, even under the assumption
of constant parameters not all camera parameters can be determined. In this case, the skew of the image axes should
be set to 0 by not adding ’skew’ to CameraModel. If FixedCameraParams = ’false’ is used, the full set of
camera parameters can never be determined, no matter how many images are used. In this case, the skew should be
set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least
one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other
images. If this is not the case the computation of the aspect ratio should be suppressed by not adding ’aspect’ to
CameraModel.
As described above, to calibrate the camera it is necessary that the projective transformation for each overlapping
image pair is determined with proj_match_points_ransac. For example, for a 2×2 block of images in
the following layout
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2, 1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example
MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by ReferenceImage. On output, this image has the
identity matrix as its transformation matrix.
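Constructing MappingSource and MappingDest for a fully overlapping mosaic, and checking that every image is reachable from the reference image, can be sketched as follows (Python; hypothetical helper names, 1-based indices as expected by the operator):

```python
from itertools import combinations

def all_pair_mappings(num_images):
    # MappingSource / MappingDest for a mosaic in which every image
    # overlaps every other image (1-based indices).
    pairs = list(combinations(range(1, num_images + 1), 2))
    return [s for s, _ in pairs], [d for _, d in pairs]

def all_reachable(num_images, src, dst, reference):
    # Check that every image is connected to the reference image by a
    # chain of transformations (undirected graph search).
    adj = {i: set() for i in range(1, num_images + 1)}
    for s, d in zip(src, dst):
        adj[s].add(d)
        adj[d].add(s)
    seen, stack = {reference}, [reference]
    while stack:
        for n in adj[stack.pop()]:
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return len(seen) == num_images

src, dst = all_pair_mappings(4)   # [1,1,1,2,2,3], [2,3,4,3,4,4]
```

For the 2×2 example above this reproduces exactly the MappingSource and MappingDest tuples given in the text.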
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in
HomMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must
be passed in Rows1, Cols1, Rows2, and Cols2. They can be determined from the output of
proj_match_points_ransac with tuple_select or with the HDevelop function subset. To enable
stationary_camera_self_calibration to determine which point pair belongs to which image pair,
NumCorrespondences must contain the number of found point matches for each image pair.
The computed camera matrices Ki are returned in CameraMatrices as 3 × 3 matrices. For
FixedCameraParams = ’false’, NumImages matrices are returned. Since for FixedCameraParams =
’true’ all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations Ri
are returned in RotationMatrices as 3 × 3 matrices. RotationMatrices always contains NumImages
matrices.
If EstimationMethod = ’gold_standard’ is used, (X, Y, Z) contains the reconstructed directions Xj. In addition, Error contains the average projection error of the reconstructed directions. This can be used to check whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into image i, the respective camera matrix should be multiplied by the corresponding rotation matrix (using hom_mat2d_compose).
Parameter
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT,
’ncc’, 10, 0, 0, 480, 640, 0, 0.5,
’gold_standard’, 2, 42, HomMat2D,
Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
Result
If the parameters are valid, the operator stationary_camera_self_calibration returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
stationary_camera_self_calibration is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac
Possible Successors
gen_spherical_mosaic
See also
gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Module
Calibration
For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in CamParam
is greater than 0, the projection is described by the following equations:
pc = (x, y, z)^T
u = Focus · x/z   and   v = Focus · y/z
In contrast, if the focal length is passed as 0 in CamParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
pc = (x, y, z)^T

u = x   and   v = y
The radial lens distortions then transform (u, v) into the distorted image plane coordinates (ũ, ṽ):

ũ = 2u / (1 + √(1 − 4κ(u² + v²)))   and   ṽ = 2v / (1 + √(1 − 4κ(u² + v²)))
Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:
c = ũ/Sx + Cx   and   r = ṽ/Sy + Cy
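The complete projection chain for an area scan pinhole camera can be traced numerically. The following Python sketch uses assumed parameter values, not values prescribed by the manual:

```python
import math

# Assumed interior parameters of an area scan pinhole camera.
Focus = 0.008               # focal length [m]
Kappa = -2000.0             # radial distortion [1/m^2]
Sx, Sy = 1.0e-5, 1.0e-5     # cell size on the chip [m]
Cx, Cy = 320.0, 240.0       # principal point [pixel]

def project(x, y, z):
    # 1. Perspective projection into the image plane.
    u = Focus * x / z
    v = Focus * y / z
    # 2. Radial distortion of the image plane coordinates.
    root = math.sqrt(1.0 - 4.0 * Kappa * (u * u + v * v))
    ut = 2.0 * u / (1.0 + root)
    vt = 2.0 * v / (1.0 + root)
    # 3. Image plane -> pixel coordinates.
    return vt / Sy + Cy, ut / Sx + Cx   # (row, column)

r0, c0 = project(0.0, 0.0, 1.0)    # principal point: (240.0, 320.0)
r1, c1 = project(0.01, 0.0, 0.1)   # barrel distortion pulls c1 below 400
```

Points on the optical axis map exactly to the principal point; for the off-axis point, the negative κ pulls the undistorted column 400 slightly toward the image center.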
For line scan cameras, the relative motion between the camera and the object must also be modeled. In HALCON,
the following assumptions for this motion are made:
The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, the camera coordinate system also moves relative to the object, i.e., each image line has been imaged from a different position. This means that there would
be an individual pose for each image line. To make things easier, in HALCON, all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators find_marks_and_pose and camera_calibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming
pc = (x, y, z)^T ,

the projection is described by the following equations, which are solved for m, ũ, and t:

m · D · ũ = x − t · Vx
−m · D · pv = y − t · Vy
m · Focus = z − t · Vz

with

D = 1 / (1 + κ · (ũ² + (pv)²))   and   pv = Sy · Cy .

The pixel coordinates then result as:

c = ũ/Sx + Cx   and   r = t
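For κ = 0 (and hence D = 1) the system can be solved in closed form. The following Python sketch (assumed parameter values) eliminates m from the second and third equation, solves for t, and back-substitutes:

```python
# Assumed line scan camera parameters with kappa = 0, i.e. D = 1.
Focus = 0.061                   # [m]
Sx, Sy = 1.0e-5, 1.0e-5         # [m]
Cx, Cy = 930.0, 150.0           # [pixel]
Vx, Vy, Vz = 0.0, 5.0e-5, 0.0   # motion vector [meter/scanline]

pv = Sy * Cy

def project_linescan(x, y, z):
    # Eliminate m from the 2nd and 3rd equation:
    #   (t*Vy - y)/pv = (z - t*Vz)/Focus
    # solve for t, then back-substitute for m and u~.
    t = (pv * z + Focus * y) / (Focus * Vy + pv * Vz)
    m = (z - t * Vz) / Focus
    ut = (x - t * Vx) / m
    return t, ut / Sx + Cx      # (row, column) = (t, c)

row, col = project_linescan(0.0, 0.0, 0.5)
```

A point on the optical axis at z = 0.5 m maps to column Cx; the row coordinate equals the scanline t at which the point crosses the sensor line.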
The format of the text file CamParFile is a (HALCON-independent) generic parameter description. It allows arbitrary sets of parameters to be grouped hierarchically. The description of a single parameter within a parameter group consists of 3 lines, as illustrated in the example files shown with read_cam_par.
Depending on the number of elements of CamParam, the parameter group Camera:Parameter or LinescanCamera:Parameter, respectively, is written into the text file CamParFile (see read_cam_par for an example). The parameter group Camera:Parameter consists of the 8 interior camera parameters of the area scan camera. The
parameter group LinescanCamera:Parameter consists of the 11 interior camera parameters of the line scan camera.
Parameter
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of interior camera parameters.
Default Value : ’campar.dat’
List of values : CamParFile ∈ {’campar.dat’, ’campar.initial’, ’campar.final’}
Example
Result
write_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been written successfully. If necessary, an exception is raised.
Parallelization Information
write_cam_par is local and processed completely exclusively without parallelization.
Possible Predecessors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation
17.6 Datacode
clear_all_data_code_2d_models ( : : : )
Delete all 2D data code models and free the allocated memory.
The operator clear_all_data_code_2d_models deletes all 2D data code models that were created by
create_data_code_2d_model or read_data_code_2d_model. All memory used by the models is
freed. After the operator call all 2D data code handles are invalid.
Attention
clear_all_data_code_2d_models exists solely for the purpose of implementing the “reset program”
functionality in HDevelop. clear_all_data_code_2d_models must not be used in any application.
Result
The operator clear_all_data_code_2d_models returns the value 2 (H_MSG_TRUE) if all 2D data code
models were freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_all_data_code_2d_models is processed completely exclusively without parallelization.
Alternatives
clear_data_code_2d_model
See also
create_data_code_2d_model, read_data_code_2d_model
Module
Data Code
clear_data_code_2d_model ( : : DataCodeHandle : )
Parameter
* (2) Create a model for reading a wide range of Data Matrix ECC 200 codes
* (this model will also read light symbols on dark background)
create_data_code_2d_model (’Data Matrix ECC 200’, ’default_parameters’,
’enhanced_recognition’, DataCodeHandle)
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)
Result
The operator create_data_code_2d_model returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Parallelization Information
create_data_code_2d_model is processed completely exclusively without parallelization.
Possible Successors
set_data_code_2d_param, find_data_code_2d
Alternatives
read_data_code_2d_model
See also
clear_data_code_2d_model, clear_all_data_code_2d_models
Module
Data Code
find_data_code_2d ( Image : SymbolXLDs : DataCodeHandle, GenParamNames,
GenParamValues : ResultHandles, DecodedDataStrings )
Detect and read 2D data code symbols in an image or train the 2D data code model.
The operator find_data_code_2d detects 2D data code symbols in the input image (Image) and reads
the data that is coded in the symbol. Before calling find_data_code_2d, a model of a class of 2D data
codes that matches the symbols in the images must be created with create_data_code_2d_model or
read_data_code_2d_model. The handle returned by these operators is passed to find_data_code_2d
in DataCodeHandle. To look for more than one symbol in an image, the generic parameter
’stop_after_result_num’ can be passed in GenParamNames together with the number of requested symbols in
GenParamValues.
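For example, a call that searches for up to five symbols in one image might be sketched as follows (the value 5 is only illustrative):

* Search for up to 5 symbols in one image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle,
                   [’stop_after_result_num’], [5],
                   ResultHandles, DecodedDataStrings)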
As a result the operator returns for every successfully decoded symbol the surrounding XLD contour
(SymbolXLDs), a result handle, which refers to a candidate structure that stores additional information about
the symbol as well as the search and decoding process (ResultHandles), and the string that is encoded in
the symbol (DecodedDataStrings). If the string is longer than 1024 characters, it is shortened to 1020
characters followed by ’...’. In this case, the complete string can only be accessed with the operator
get_data_code_2d_results: passing the candidate handle from ResultHandles together with the
generic parameter ’decoded_data’, get_data_code_2d_results returns a tuple with the ASCII codes of
all characters of the string.
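Such an access might be sketched as follows (assuming that ResultHandles contains at least one handle; rebuilding the string with chr and sum is one possible approach):

* Query the complete data of the first decoded symbol
get_data_code_2d_results (DataCodeHandle, ResultHandles[0],
                          ’decoded_data’, ASCIICodes)
* Convert the tuple of ASCII codes back into a single string
DecodedString := sum(chr(ASCIICodes))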
Adjusting the model
If there is a symbol in the image that cannot be read, it should be verified whether the properties of the symbol
fit the model parameters. Special attention should be paid to the correct polarity (’polarity’, light-on-dark or dark-
on-light), the symbol size (’symbol_size’ for ECC 200, ’version’ for QR Code, ’symbol_rows’ and ’symbol_cols’
for PDF417), the module size (’module_size’ for ECC 200 and QR Code, ’module_width’ and ’module_aspect’
for PDF417), the possibility of a mirroring of the symbol (’mirrored’), and the specified minimum contrast (’con-
trast_min’). Further relevant parameters are the gap between neighboring foreground modules and, for ECC 200,
the maximum slant of the L-shaped finder pattern (’slant_max’). The current settings for these parameters are
returned by the operator get_data_code_2d_param. If necessary, the appropriate model parameters can be
adjusted with set_data_code_2d_param.
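For example, the polarity restriction of the model could be checked and relaxed as follows (a sketch; ’any’ is the least restrictive setting):

* Query the current polarity setting of the model
get_data_code_2d_param (DataCodeHandle, ’polarity’, Polarity)
* Allow both dark-on-light and light-on-dark symbols
set_data_code_2d_param (DataCodeHandle, ’polarity’, ’any’)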
Also for run-time reasons, it is recommended to adjust the model as closely as possible to the symbols in the images.
In general, the run-time of find_data_code_2d is higher for a more general model than for a more specific
model. One should take into account that a general model leads to a high run-time especially if no valid data code
can be found.
Train the model
Besides setting the model parameters manually with set_data_code_2d_param, the model can also be
trained with find_data_code_2d based on one or several sample images. For this the generic parameter
’train’ must be passed in GenParamNames. The corresponding value passed in GenParamValues determines
the model parameters that should be learned. The following values are possible:
’module_grid’: algorithm for calculating the module positions (fixed or variable grid).
• QR Code only:
’model_type’: whether the QR Code symbols follow the Model 1 or Model 2 specification.
It is possible to train several of these parameters in one call of find_data_code_2d by passing the generic pa-
rameter ’train’ in a tuple more than once in conjunction with the appropriate parameters: e.g., GenParamNames
= [’train’,’train’] and GenParamValues = [’polarity’,’module_size’]. Furthermore, in conjunction with ’train’
= ’all’ it is possible to exclude single parameters from training explicitly again by passing ’train’ more than once.
The names of the parameters to exclude, however, must be prefixed by ’~’: for example, GenParamNames =
[’train’,’train’] and GenParamValues = [’all’,’~contrast’] trains all parameters except the minimum contrast.
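The training call of this last example could thus be sketched as follows:

* Train all parameters except the minimum contrast
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle,
                   [’train’,’train’], [’all’,’~contrast’],
                   ResultHandles, DecodedDataStrings)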
For training the model, the following aspects should be considered:
• To use several images for the training, the operator find_data_code_2d must be called with the param-
eter ’train’ once for every sample image.
• It is also possible to train the model with several symbols in one image. Here, the generic parameter
’stop_after_result_num’ must be passed as a tuple to GenParamNames together with ’train’. The num-
ber of symbols in the image is passed in GenParamValues together with the training parameters.
• If the training image contains more symbols than the one that shall be used for the training, the domain of the
image should be reduced to the symbol of interest with reduce_domain.
• In an application with very similar images, one image for training may be sufficient if the following assumptions
are true: the symbol size (in modules) is the same for all symbols used in the application; foreground
and background modules are of the same size and there is no gap between neighboring foreground modules;
the background has no distinct texture; and the contrast of all images is almost the same. Otherwise, several
images should be used for training.
• In applications where the symbol size (in modules) is not fixed, the smallest as well as the biggest symbols
should be used for the training. If this cannot be guaranteed, the limits for the symbol size should be adapted
manually after the training, or the symbol size should entirely be excluded from the training.
• During the first call of find_data_code_2d in the training mode, the trained model parameters are
restricted to the properties of the detected symbol. Any successive training call will, where necessary, extend
the parameter range to cover the already trained symbols as well as the new symbols. Resetting the model with
set_data_code_2d_param to one of its default settings (’default_parameters’ = ’standard_recognition’
or ’enhanced_recognition’) will also reset the training state of the model.
• If find_data_code_2d is not able to read the symbol in the training image, no error is raised and
no exception occurs. This can simply be detected in the program by checking the results of
find_data_code_2d: SymbolXLDs, ResultHandles, and DecodedDataStrings. These tuples
will be empty, and the model will not be modified.
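A training loop over several sample images might be sketched as follows (the file names are only placeholders):

* Train the model with three sample images
for Index := 1 to 3 by 1
    read_image (Image, ’sample_’ + Index)
    find_data_code_2d (Image, SymbolXLDs, DataCodeHandle,
                       [’train’], [’all’], ResultHandles, DecodedDataStrings)
endfor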
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Display all symbols, the strings encoded in them, and the module size
dev_set_color (’green’)
for i := 0 to |ResultHandles| - 1 by 1
select_obj (SymbolXLDs, SymbolXLD, i + 1)
dev_display (SymbolXLD)
get_contour_xld (SymbolXLD, Row, Col)
endfor
Result
The operator find_data_code_2d returns the value 2 (H_MSG_TRUE) if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
find_data_code_2d is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model, read_data_code_2d_model, set_data_code_2d_param
Possible Successors
get_data_code_2d_results, get_data_code_2d_objects, write_data_code_2d_model
See also
create_data_code_2d_model, set_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects
Module
Data Code
get_data_code_2d_objects ( : DataCodeObjects : DataCodeHandle,
CandidateHandle, ObjectName : )
Access iconic objects that were created during the search for 2D data code symbols.
The operator get_data_code_2d_objects provides access to iconic objects that were created during
the last call of find_data_code_2d while searching for and reading the 2D data code symbols. Besides
the name of the object (ObjectName), the 2D data code model (DataCodeHandle) must be passed
to get_data_code_2d_objects. In addition, in CandidateHandle a handle of a result or candi-
date structure or a string identifying a group of candidates (see get_data_code_2d_results) must be
passed. These handles are returned by find_data_code_2d for all successfully decoded symbols and by
get_data_code_2d_results for a group of candidates. If these operators return several handles in a tuple,
the individual handles can be accessed by normal tuple operations.
Some objects are not accessible without setting the model parameter ’persistence’ to 1 (see
set_data_code_2d_param). The persistence must be set before calling find_data_code_2d, either
while creating the model with create_data_code_2d_model or with set_data_code_2d_param.
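For example, the persistence could be enabled before the symbol search as follows:

* Store intermediate results persistently in the model
set_data_code_2d_param (DataCodeHandle, ’persistence’, 1)
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)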
Currently, the following iconic objects can be retrieved:
Regions of the modules
These region arrays correspond to the areas that were used for the classification. The returned object is a region
array. Hence it cannot be requested for a group of candidates. Therefore, a single result handle must be passed in
CandidateHandle. The model persistence must be 1 for this object. In addition, requesting the module ROIs
makes sense only for symbols that were detected as valid symbols. For other candidates, whose processing was
aborted earlier, the module ROIs are not available.
XLD contour
This object can be requested for any group of results or for any single candidate or symbol handle. The persistence
setting is of no relevance.
Pyramid images
* Example demonstrating how to access the iconic objects of the data code
* search.
* Get the handles of all candidates that were detected as a symbol but
* could not be read
get_data_code_2d_results (DataCodeHandle, ’all_undecoded’, ’handle’,
HandlesUndecoded)
* For every undecoded symbol, get the contour and the classified
* module regions
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
dev_set_color (’blue’)
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
’candidate_xld’)
dev_display (SymbolXLD)
* Get the module regions of the foreground modules
dev_set_color (’green’)
get_data_code_2d_objects (ModuleFG, DataCodeHandle, HandlesUndecoded[i],
’module_1_rois’)
dev_display (ModuleFG)
* Get the module regions of the background modules
dev_set_color (’red’)
get_data_code_2d_objects (ModuleBG, DataCodeHandle, HandlesUndecoded[i],
’module_0_rois’)
dev_display (ModuleBG)
* Stop for inspecting the image
stop ()
endfor
Result
The operator get_data_code_2d_objects returns the value 2 (H_MSG_TRUE) if the given parameters are
correct and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_objects is reentrant and processed without parallelization.
Possible Predecessors
find_data_code_2d, query_data_code_2d_params
Possible Successors
get_data_code_2d_results
See also
query_data_code_2d_params, get_data_code_2d_results, get_data_code_2d_param,
set_data_code_2d_param
Module
Data Code
get_data_code_2d_param ( : : DataCodeHandle,
GenParamNames : GenParamValues )
Get one or several parameters that describe the 2D data code model.
The operator get_data_code_2d_param allows you to query the parameters that are used to describe the 2D
data code model. The names of the desired parameters are passed in the generic parameter GenParamNames,
the corresponding values are returned in GenParamValues. All these parameters can be set and changed at any
time with the operator set_data_code_2d_param. A list with the names of all parameters that are valid for
the used 2D data code type is returned by the operator query_data_code_2d_params.
The following parameters can be queried – ordered by different categories and data code types:
Size and shape of the symbol:
’symbol_cols_max’: maximum number of data columns in the symbol in codewords, i.e., excluding the
codewords of the start/stop pattern and of the two row indicators.
’symbol_rows_min’: minimum number of module rows in the symbol.
’symbol_rows_max’: maximum number of module rows in the symbol.
Appearance of the modules in the image:
• All data code types:
’polarity’: possible restrictions concerning the polarity of the modules, i.e., if they are printed dark on a light
background or vice versa: ’dark_on_light’, ’light_on_dark’, ’any’.
’mirrored’: describes whether the symbol is or may be mirrored (which is equivalent to swapping the rows
and columns of the symbol): ’yes’, ’no’, ’any’.
’contrast_min’: minimum contrast between the foreground and the background of the symbol (this measure
corresponds to the minimum gradient between the symbol’s foreground and the background).
• Data matrix ECC 200 and QR Code:
’module_size_min’: minimum module size in the image in pixels.
’module_size_max’: maximum module size in the image in pixels.
With the following parameters it is possible to specify whether neighboring foreground modules are con-
nected or whether there is or may be a gap between them (possible values are ’no’ (no gap) < ’small’ <
’big’):
’module_gap_col_min’: minimum gap in direction of the symbol columns.
’module_gap_col_max’: maximum gap in direction of the symbol columns.
’module_gap_row_min’: minimum gap in direction of the symbol rows.
’module_gap_row_max’: maximum gap in direction of the symbol rows.
• PDF417:
’module_width_min’: minimum module width in the image in pixels.
’module_width_max’: maximum module width in the image in pixels.
’module_aspect_min’: minimum module aspect ratio (module height to module width).
’module_aspect_max’: maximum module aspect ratio (module height to module width).
• Data matrix ECC 200:
’slant_max’: maximum slant of the L-shaped finder (the angle is returned in radians and corresponds to the
distortion that occurs when the symbol is printed or during the image acquisition).
’module_grid’: describes whether the size of the modules may vary (in a specific range) or not. Depending
on the parameter, different algorithms are used for the calculation of the modules’ center positions. If
it is set to ’fixed’, an equidistant grid is used. Allowing a variable module size (’variable’), the grid is
aligned only to the alternating side of the finder pattern. With ’any’ both approaches are tested one after
the other.
• QR Code:
’position_pattern_min’: Number of position detection patterns that have to be visible for generating a new
symbol candidate (2 or 3).
General model behavior:
• All data code types:
’persistence’: controls whether certain intermediate results of the symbol search with
find_data_code_2d are stored only temporarily or persistently in the model: 0 (temporary),
1 (persistent).
’strict_model’: controls the behavior of find_data_code_2d while detecting symbols that could be
read but that do not fit the model restrictions concerning the size of the symbols: ’yes’ (strict: such
symbols are rejected), ’no’ (not strict: all readable symbols are returned as a result independent of their
size and the size specified in the model).
It is possible to query the values of several or all parameters with a single operator call by passing a tuple con-
taining the names of all desired parameters to GenParamNames. As a result a tuple of the same length with the
corresponding values is returned in GenParamValues.
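For example, several parameters can be queried with a single call as follows:

* Query three model parameters at once
get_data_code_2d_param (DataCodeHandle,
                        [’polarity’,’contrast_min’,’mirrored’], Values)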
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; integer
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Names of the generic parameters that are to be queried for the 2D data code model.
Default Value : ’contrast_min’
List of values : GenParamNames ∈ {’strict_model’, ’persistence’, ’polarity’, ’mirrored’, ’contrast_min’,
’model_type’, ’version_min’, ’version_max’, ’symbol_size_min’, ’symbol_size_max’, ’symbol_cols_min’,
’symbol_cols_max’, ’symbol_rows_min’, ’symbol_rows_max’, ’symbol_shape’, ’module_size_min’,
’module_size_max’, ’module_width_min’, ’module_width_max’, ’module_aspect_min’,
’module_aspect_max’, ’module_gap_col_min’, ’module_gap_col_max’, ’module_gap_row_min’,
’module_gap_row_max’, ’slant_max’, ’module_grid’, ’position_pattern_min’}
. GenParamValues (output_control) . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Values of the generic parameters.
Result
The operator get_data_code_2d_param returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_param is reentrant and processed without parallelization.
Possible Predecessors
query_data_code_2d_params, set_data_code_2d_param, find_data_code_2d
Possible Successors
find_data_code_2d, write_data_code_2d_model
Alternatives
write_data_code_2d_model
See also
query_data_code_2d_params, set_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects, find_data_code_2d
Module
Data Code
get_data_code_2d_results ( : : DataCodeHandle, CandidateHandle,
ResultNames : ResultValues )
Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
The operator get_data_code_2d_results allows you to access several alphanumerical results that were
calculated while searching for and reading the 2D data code symbols. These results describe the search process in general
or one of the investigated candidates – independently of whether it could be read or not. The results are in most
cases not related to the symbol with the highest resolution but depend on the pyramid level that was investigated
when the reading process was aborted. To access a result, the name of the parameter (ResultNames) and the 2D
data code model (DataCodeHandle) must be passed. In addition, in CandidateHandle a handle of a result
or candidate structure or a string identifying a group of candidates must be passed. These handles are returned by
find_data_code_2d for all successfully decoded symbols and by get_data_code_2d_results for a
group of candidates. If these operators return several handles in a tuple, the individual handles can be accessed by
normal tuple operations.
Most results consist of one value. Several of these results can be queried for a specific candidate in a single call.
The values returned in ResultValues correspond to the appropriate parameter names in the ResultNames
tuple. As an alternative, these results can also be queried for a group of candidates (see below). In this case, only
one parameter can be requested per call, and ResultValues contains one value for every candidate.
Furthermore, there exists another group of results that consist of more than one value (e.g., ’bin_module_data’),
which are returned as a tuple. These parameters must always be queried exclusively: one result for one specific
candidate.
Apart from the candidate-specific results there are a number of results referring to the search process in general.
This is indicated by passing the string ’general’ in CandidateHandle instead of a candidate handle.
Candidate groups
The following candidate group names are predefined and can be passed as CandidateHandle instead of a
single handle:
’general’: This value is used for results that refer to the last find_data_code_2d call in general but not to a
specific candidate.
’all_candidates’: All candidates (including the successfully decoded symbols) that were investigated during the
last call of find_data_code_2d.
’all_results’: All symbols that were successfully decoded during the last call of find_data_code_2d.
’all_undecoded’: All candidates of the last call of find_data_code_2d that were detected as 2D data code
symbols, but could not be decoded. For these candidates the error correction detected too many errors, or
there was a failure while decoding the error-corrected data because of inconsistent data.
’all_aborted’: All candidates of the last call of find_data_code_2d that could not be identified as valid 2D
data code symbols and for which the processing was aborted.
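For example, a candidate-group query that requests one value per candidate might be sketched as follows (the ’status’ result is the status message described further below):

* Query the status of every candidate of the last search
get_data_code_2d_results (DataCodeHandle, ’all_candidates’, ’status’,
                          StatusValues)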
Supported results
Currently, the access to the following results, which are returned in ResultValues, is supported:
General results that do not depend on specific candidates (all data code types) – ’general’:
’decoding_error’: decoding error – for successfully decoded symbols this is the number of errors that were
detected and corrected by the error correction. The number of errors corresponds here to the number of
code words that lead to errors when trying to read them. If the error correction failed, a negative error
code is returned.
’symbology_ident’: The Symbology Identifier is used to indicate that the data code contains the FNC1 and/or
ECI characters.
FNC1 (Function 1 Character) is used if the data formatting conforms to specific predefined industry
standards.
The ECI protocol (Extended Channel Interpretation) is used to change the default interpretation of the
encoded data. A 6-digit code number after the ECI character switches the interpretation of the following
characters from the default to a specific code page like an international character set. In the output stream
the ECI switch is coded as ’\nnnnnn’. Therefore, all backslashes (’\’, ASCII code 92) that occur in the
normal output stream have to be doubled.
The ’symbology_ident’ parameter returns only the actual identifier value m (m ∈ [0, 6] (ECC 200 and QR
Code) and m ∈ [0, 2] (PDF417)) according to the specification of Data matrix, QR Codes, and PDF417
but not the identifier prefixes ’]d’, ’]Q’, and ’]L’ for Data matrix, QR Codes, and PDF417 respectively.
If required, this Symbology Identifier, composed of the prefix and the value m, has to be prepended to the
decoded string manually (normally only if m > 1). Symbols that contain ECI codes (and hence doubled
backslashes) can be recognized by the following identifier values: ECC 200: 4, 5, and 6; QR Code: 2, 4,
and 6; PDF417: 1.
• QR Codes:
’version’: version number that corresponds to the size of the symbol (version 1 = 21 × 21, version 2 = 25 ×
25, . . . , version 40 = 177 × 177).
’symbol_size’: detected size of the symbol in modules.
’model_type’: Type of the QR Code model. In HALCON, both the older, original specification for QR Codes
(Model 1) and the newer, enhanced form (Model 2) are supported.
’mask_pattern_ref’, ’error_correction_level’: If a candidate is recognized as a QR Code, the first step is
to read the format information encoded in the symbol. This includes a code for the pattern that was
used for masking the data modules (0 ≤ ’mask_pattern_ref’ ≤ 7) and the level of the error correction
(’error_correction_level’ ∈ [’L’, ’M’, ’Q’, ’H’]).
• PDF417:
’module_aspect’: module aspect ratio; this corresponds to the ratio of ’module_height’ to ’module_width’.
’error_correction_level’: If a candidate is recognized as a PDF417, the first step is to read the format
information encoded in the symbol. This includes the error correction level that was used during encoding
(’error_correction_level’ ∈ [0, 8]).
Results that return a tuple of values and hence can be requested only separately and only for a single candidate:
’corr_coded_data’: data obtained after applying the error correction: erroneous bits are corrected and all
redundant words are removed, but the words are still encoded according to the coding scheme that is
specific for the data code type.
’decoded_data’: tuple with the decoded data words (= characters of the decoded data string) as ASCII code
or – for QR Code – as JIS8 and Shift JIS characters. In contrast to the decoded data string, there is no
restriction concerning the maximum length of 1024 characters.
’quality_isoiec15415’: tuple with the assessment of print quality in compliance with the international stan-
dard ISO/IEC 15415. The first element always contains the overall print quality of the symbol; the
length of the tuple and the denotation of the remaining elements depend on the specific data code type.
According to the standard the grades are whole numbers from 0 to 4, where 0 is the lowest and 4 the
highest grade. It is important to note that, even though the implementation is strictly based on the stan-
dard, the computation of the print quality grades depends on the preceding decoding algorithm. Thus,
different data code readers (of different vendors) can potentially produce slightly different results in the
print quality assessment.
For the 2D data codes ECC200 and QR Code, the print quality is described in a tuple with eight ele-
ments: (overall quality, contrast, modulation, fixed pattern damage, decode, axial nonuniformity, grid
nonuniformity, unused error correction).
The definition of the respective elements is as follows: The overall quality is the minimum of all indi-
vidual grades. The contrast is the range between the minimal and the maximal pixel intensity in the data
code domain, and a strong contrast results in a good grading. The modulation indicates how strong the
amplitudes of the data code modules are. Big amplitudes make the assignment of the modules to black
or white more certain, resulting in a high modulation grade. Note that the computation of the
modulation grade is influenced by the specific level of error correction capacity, meaning that the
modulation degrades less for codes with higher error correction capacity. The fixed pattern of both ECC200
and QR Code is of high importance for detecting and decoding the codes. Degradation or damage of the
fixed pattern, or the respective quiet zones, is assessed with the fixed pattern damage quality. The decode
quality always takes the grade 4, meaning that the code could be decoded. Naturally, codes which cannot
be decoded cannot be assessed concerning print quality either. Originally, data codes have square
modules, i.e., the width and height of the modules are the same. Due to a potentially oblique view
of the camera onto the data code or a defective fabrication of the data code itself, the width to height
ratio can be distorted. This deterioration results in a degraded axial nonuniformity. If, apart from an
affine distortion, the data code is also subject to perspective or any other distortions, this degrades the grid
nonuniformity quality. As data codes are redundant codes, errors in the modules or codewords can be
corrected. The amount of error correction capacity that is not already used by the present data code
symbol is expressed in the unused error correction quality. In a way, this grade reflects the reliability of
the decoding process. Note that even codes with an unused error correction grade of 0, which could
possibly indicate a false decoding result, can be decoded reliably by the find_data_code_2d operator,
because the implemented decoding functionality is more sophisticated and robust than the reference
decode algorithm proposed by the standard.
For the 2D stacked code PDF417 the print quality is described in a tuple with seven elements: (overall
quality, start/stop pattern, codeword yield, unused error correction, modulation, decodability, defects).
The definition of the respective elements is as follows: The overall quality is the minimum of all individ-
ual grades. As the PDF417 data code is a stacked code, which can be read by line scan devices as well,
print quality assessment is basically based on techniques for linear bar codes: a set of scan reflectance
profiles is generated across the symbol followed by the evaluation of the respective print qualities within
each scan, which are finally subsumed as overall print qualities. For more details the user is referred
to the standard for linear symbols, ISO/IEC 15416. In start/stop pattern, the start and stop patterns are
assessed concerning the quality of the reflectance profile and the correctness of the bar and space se-
quence. The grade codeword yield counts and evaluates the relative number of correctly decoded words
acquired by the set of scan profiles. For the grade unused error correction, the relative number of falsely
decoded words within the error correction blocks is counted. As for 2D data codes, the modulation
grade indicates how strong the amplitudes, i.e. the extremal intensities, of the bars and spaces are. The
grade decodability measures the deviation of the actual length of bars and spaces with respect to their
reference length. Finally, the grade defects measures how perfect the reflectance
profiles of bars and spaces are.
• PDF417:
’macro_exist’: symbols that are part of a group of symbols are called ’Macro PDF417’ symbols. These
symbols contain additional information within a control block. For macro symbols ’macro_exist’ returns
the value 1 while for conventional symbols 0 is returned.
’macro_segment_index’: returns the index of the symbol in the group. For macro symbols this information
is obligatory.
’macro_file_id’: returns the group identifier as a string. For macro symbols this information is obligatory.
’macro_segment_count’: returns the number of symbols that belong to the group. For macro symbols this
information is optional.
’macro_time_stamp’: returns the time stamp on the source file expressed as the elapsed time in seconds since
1970:01:01:00:00:00 GMT as a string. For macro symbols this information is optional.
’macro_checksum’: returns the CRC checksum computed over the entire source file using the CCITT-16
polynomial. For macro symbols this information is optional.
’macro_last_symbol’: returns 1 if the symbol is the last one within the group of symbols. Otherwise 0 is
returned. For macro symbols this information is optional.
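Tuple results such as ’quality_isoiec15415’ can be queried for a single candidate as sketched below (assuming a successfully decoded symbol in ResultHandles[0]):

* Query the ISO/IEC 15415 print quality of the first decoded symbol
get_data_code_2d_results (DataCodeHandle, ResultHandles[0],
                          ’quality_isoiec15415’, Quality)
* The first element contains the overall grade (0..4)
OverallGrade := Quality[0]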
Status message
The status parameter that can be queried for all candidates reveals why and where in the evaluation phase a candi-
date was discarded. The following list shows the most important status messages in the order of their generation
during the evaluation phase:
• QR Code:
’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted adjusting: finder patterns’ – It is not possible to determine the exact position of the finder pattern
in the processing image.
’aborted symbol: different number of rows and columns’ – It is not possible to determine a consistent
symbol size for both dimensions from the size and the position of the detected finder pattern. When reading
Model 2 symbols, this error may occur only with small symbols (< version 7 or 45 × 45 modules). For
bigger symbols the size is coded within the symbol in the version information region. The estimated size
is used only as a hint for finding the version information region.
’aborted symbol: invalid size’ – The size determined by the size and the position of the detected finder pat-
tern is too small or (only Model 1) too big.
’decoding of version information failed’ – While processing a Model 2 symbol, the symbol version as deter-
mined by the finder pattern is at least 7 (≥ 45 × 45 modules). However, reading the version from the
appropriate region in the symbol failed.
’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is valid, it is
not inside the range predefined by the model.
’decoding of format information failed’ – Reading the format information (mask pattern and error correction
level) from the appropriate region in the symbol failed.
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.
• PDF417:
’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is valid, it is
not inside the range predefined by the model.
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: special decoding reader requested’ – The decoded data contains a message for program-
ming the data code reader. This feature is not supported.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.
While processing a candidate, internally several iterations for reading the symbol may be performed. If all
attempts fail, normally the last abortion state is stored in the candidate structure. E.g., if the QR Code model
enables symbols with both the Model 1 and the Model 2 specification, find_data_code_2d first tries to
interpret the symbol as a Model 2 type. If this fails, the Model 1 interpretation is performed. If this also fails,
the status variable is set to the latest failure state of the Model 1 interpretation. In order to obtain the error
state of the Model 2 branch, the ’model_type’ parameter of the data code model must be restricted accordingly
(with set_data_code_2d_param).
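The restriction described above can be sketched as follows (handle and variable names assumed; the value 2 for ’model_type’ is an assumption based on the Model 2 naming):

```hdevelop
* Hypothetical sketch: restrict the QR Code model to the Model 2
* specification so that 'status' reflects the Model 2 branch
set_data_code_2d_param (DataCodeHandle, 'model_type', 2)
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)
* Query the abortion state of all undecoded candidates
get_data_code_2d_results (DataCodeHandle, 'all_undecoded', 'status', Status)
```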
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; integer
Handle of the 2D data code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; string / integer
Handle of the 2D data code candidate or name of a group of candidates for which the data is required.
Default Value : ’all_candidates’
Suggested values : CandidateHandle ∈ {0, 1, 2, ’general’, ’all_candidates’, ’all_results’,
’all_undecoded’, ’all_aborted’}
. ResultNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Names of the results of the 2D data code to return.
Default Value : ’status’
Suggested values : ResultNames ∈ {’min_search_level’, ’max_search_level’, ’pass_num’, ’result_num’,
’candidate_num’, ’undecoded_num’, ’aborted_num’, ’handle’, ’pass’, ’status’, ’search_level’, ’process_level’,
* Example demonstrating how to access the results of the data code search.
* For every undecoded symbol, get the contour, the symbol size, and
* the binary module data
dev_set_color (’red’)
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
’candidate_xld’)
* Get the symbol size
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
[’symbol_rows’,’symbol_cols’], SymbolSize)
* Get the binary module data (has to be queried exclusively)
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
’bin_module_data’, BinModuleData)
* Stop for inspecting the data
stop ()
endfor
Result
The operator get_data_code_2d_results returns the value 2 (H_MSG_TRUE) if the given parameters are
correct and the requested results are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_results is reentrant and processed without parallelization.
Possible Predecessors
find_data_code_2d, query_data_code_2d_params
HALCON 8.0.2
1160 CHAPTER 17. TOOLS
Possible Successors
get_data_code_2d_objects
See also
query_data_code_2d_params, get_data_code_2d_objects, get_data_code_2d_param,
set_data_code_2d_param
Module
Data Code
query_data_code_2d_params ( : : DataCodeHandle,
QueryName : GenParamNames )
For a given 2D data code model, get the names of the generic parameters or objects that can be used in the
other 2D data code operators.
The operator query_data_code_2d_params returns the names of the generic parameters that are sup-
ported by the 2D data code operators set_data_code_2d_param, get_data_code_2d_param,
find_data_code_2d, get_data_code_2d_results, and get_data_code_2d_objects. The
parameter QueryName is used to select the desired parameter group. The returned parameter list depends
only on the type of the data code and not on the current state of the model or its results.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; integer
Handle of the 2D data code model.
. QueryName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the parameter group.
Default Value : ’get_result_params’
List of values : QueryName ∈ {’get_model_params’, ’set_model_params’, ’find_params’,
’get_result_params’, ’get_result_objects’}
. GenParamNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string
List containing the names of the supported generic parameters.
Example
* This example demonstrates how the names of all available model parameters
* can be queried. This is used to request first the settings of the
* untrained and then the settings of the trained model.
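A minimal sketch of such a query (model handle assumed; parameter group names as in the value list below):

```hdevelop
* List the generic parameters that can be set for this model ...
query_data_code_2d_params (DataCodeHandle, 'set_model_params', GenParamNames)
* ... and request their current values
get_data_code_2d_param (DataCodeHandle, GenParamNames, GenParamValues)
```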
Result
The operator query_data_code_2d_params returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Parallelization Information
query_data_code_2d_params is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model
Possible Successors
get_data_code_2d_param, get_data_code_2d_results, get_data_code_2d_objects
Module
Data Code
read_data_code_2d_model ( : : FileName : DataCodeHandle )
Read a 2D data code model from a file and create a new model.
The operator read_data_code_2d_model reads the 2D data code model file FileName and creates a new
model that is an identical copy of the saved model. The parameter DataCodeHandle returns the handle of the
new model. The model file FileName must be created by the operator write_data_code_2d_model.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the 2D data code model file.
Default Value : ’data_code_model.dcm’
. DataCodeHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; integer
Handle of the created 2D data code model.
Example
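A minimal sketch (file name as in the default value; image input and further processing assumed):

```hdevelop
* Restore a previously saved model and use it for a symbol search
read_data_code_2d_model ('data_code_model.dcm', DataCodeHandle)
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)
```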
Result
The operator read_data_code_2d_model returns the value 2 (H_MSG_TRUE) if the named 2D data code
file was found and correctly read. Otherwise, an exception will be raised.
Parallelization Information
read_data_code_2d_model is processed completely exclusively without parallelization.
Possible Successors
find_data_code_2d
Alternatives
create_data_code_2d_model
See also
write_data_code_2d_model, clear_data_code_2d_model,
clear_all_data_code_2d_models
Module
Data Code
’contrast_min’: minimum contrast between the foreground and the background of the symbol (this measure
corresponds to the minimum gradient between the symbol’s foreground and the background).
Values: [1 . . . 100]
Default: 30 (enhanced: 10)
• Data Matrix ECC 200 and QR Code:
’module_size_min’: minimum size of the modules in the image in pixels.
Values: [2 . . . 100]
Default: 6 (enhanced: 2)
’module_size_max’: maximum size of the modules in the image in pixels.
Values: [2 . . . 100]
Default: 20 (enhanced: 100)
’module_size’: set ’module_size_min’ and ’module_size_max’ to the same value.
It is possible to specify whether neighboring foreground modules are connected or whether there is or may be
a gap between them. If the foreground modules are connected and fill the module space completely the gap
parameter can be set to ’no’. The parameter is set to ’small’ if there is a very small gap between two modules;
it can be set to ’big’ if the gap is slightly bigger. The last two settings may also be useful if the foreground
modules – although being connected – appear thinner than their allotted space (e.g., as a result of blooming
caused by a bright illuminant). If the foreground modules appear only as very small dots (in relation to the
module size: < 50%), in general, an appropriate preprocessing of the image for detecting or enlarging the
modules will be necessary (e.g., by gray_erosion_shape or gray_dilation_shape):
’module_gap_col_min’: minimum gap in direction of the symbol columns.
Values: ’no’, ’small’, ’big’
Default: ’no’
’module_gap_col_max’: maximum gap in direction of the symbol columns.
Values: ’no’, ’small’, ’big’
Default: ’small’ (enhanced: ’big’)
’module_gap_row_min’: minimum gap in direction of the symbol rows.
Values: ’no’, ’small’, ’big’
Default: ’no’
’module_gap_row_max’: maximum gap in direction of the symbol rows.
Values: ’no’, ’small’, ’big’
Default: ’small’ (enhanced: ’big’)
’module_gap_col’: set ’module_gap_col_min’ and ’module_gap_col_max’ to the same value.
’module_gap_row’: set ’module_gap_row_min’ and ’module_gap_row_max’ to the same value.
’module_gap_min’: set ’module_gap_col_min’ and ’module_gap_row_min’ to the same value.
’module_gap_max’: set ’module_gap_col_max’ and ’module_gap_row_max’ to the same value.
’module_gap’: set ’module_gap_col_min’, ’module_gap_col_max’, ’module_gap_row_min’, and ’mod-
ule_gap_row_max’ to the same value.
• PDF417:
’module_width_min’: minimum module width in the image in pixels.
Values: [2 . . . 100]
Default: 3 (enhanced: 2)
’module_width_max’: maximum module width in the image in pixels.
Values: [2 . . . 100]
Default: 15 (enhanced: 100)
’module_width’: set ’module_width_min’ and ’module_width_max’ to the same value.
’module_aspect_min’: minimum module aspect ratio (module height to module width).
Values: [0.5 . . . 20.0]
Default: 1.0
’module_aspect_max’: maximum module aspect ratio (module height to module width).
Values: [0.5 . . . 20.0]
Default: 4.0 (enhanced: 10.0)
’module_aspect’: set ’module_aspect_min’ and ’module_aspect_max’ to the same value.
• Data matrix ECC 200:
’slant_max’: maximum deviation of the angle of the L-shaped finder pattern from the (ideal) right angle (the
angle is specified in radians and corresponds to the distortion that occurs when the symbol is printed or
during the image acquisition).
Value range: [0.0 . . . 0.5235]
Default: 0.1745 = 10° (enhanced: 0.5235 = 30°)
’module_grid’: describes whether the size of the modules may vary (in a specific range) or not. Depending
on this parameter, different algorithms are used for calculating the module center positions. If it is set to
’fixed’, an equidistant grid is used. If a variable module size is allowed (’variable’), the grid is aligned only
to the alternating side of the finder pattern. With ’any’, both approaches are tested one after the other.
Values: ’fixed’, ’variable’, ’any’
Default: ’fixed’ (enhanced: ’any’)
• QR Code:
’position_pattern_min’: Number of position detection patterns that have to be visible for generating a new
symbol candidate.
Value range: [2, 3]
Default: 3 (enhanced: 2)
When setting the model parameters, particular attention should be paid to the following issues:
• Symbols whose size does not comply with the size restrictions made in the model (with the generic parameters
’symbol_rows*’, ’symbol_cols*’, ’symbol_size*’, or ’version*’) will not be read if ’strict_model’ is set to
’yes’, which is the default. This behavior is useful if symbols of a specific size have to be detected while
other symbols should be ignored. On the other hand, neglecting this parameter can lead to problems, e.g.,
if one symbol of an image sequence is used to adjust the model (including the symbol size), but later in the
application the symbol size varies, which is quite common in practice.
• The run-time of find_data_code_2d depends mostly on the following model parameters, namely in
cases where the requested number of symbols cannot be found in the image: ’polarity’, ’module_size_min’
(ECC 200 and QR Code) and ’module_size_min’ together with ’module_aspect_min’ (PDF417), and if the
minimum module size is very small also the parameters ’module_gap_*’ (ECC 200 and QR Code), for QR
Code also ’position_pattern_min’.
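Along these lines, the search can be sped up by tightening exactly these parameters; a sketch with assumed values:

```hdevelop
* Fix the polarity and raise the minimum module size to reduce the
* run-time of an unsuccessful symbol search
set_data_code_2d_param (DataCodeHandle, ['polarity', 'module_size_min'],
                        ['dark_on_light', 6])
```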
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; integer
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Names of the generic parameters that shall be adjusted for the 2D data code.
Default Value : ’polarity’
List of values : GenParamNames ∈ {’default_parameters’, ’strict_model’, ’persistence’, ’polarity’,
’mirrored’, ’contrast_min’, ’model_type’, ’version’, ’version_min’, ’version_max’, ’symbol_size’,
’symbol_size_min’, ’symbol_size_max’, ’symbol_cols’, ’symbol_cols_min’, ’symbol_cols_max’,
’symbol_rows’, ’symbol_rows_min’, ’symbol_rows_max’, ’symbol_shape’, ’module_size’,
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)
Result
The operator set_data_code_2d_param returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
set_data_code_2d_param is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model, read_data_code_2d_model
Possible Successors
get_data_code_2d_param, find_data_code_2d, write_data_code_2d_model
Alternatives
read_data_code_2d_model
See also
query_data_code_2d_params, get_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects
Module
Data Code
write_data_code_2d_model ( : : DataCodeHandle, FileName : )
The operator write_data_code_2d_model writes a 2D data code model, which was created by
create_data_code_2d_model, into a file with the name FileName. This facilitates creating an identi-
cal copy of the saved model in a later session with the operator read_data_code_2d_model. The handle of
the model to write is passed in DataCodeHandle.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; integer
Handle of the 2D data code model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the 2D data code model file.
Default Value : ’data_code_model.dcm’
Example
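A minimal sketch (symbol type and file name assumed):

```hdevelop
* Create and parametrize a model, then save it for a later session
create_data_code_2d_model ('Data Matrix ECC 200', [], [], DataCodeHandle)
set_data_code_2d_param (DataCodeHandle, 'polarity', 'dark_on_light')
write_data_code_2d_model (DataCodeHandle, 'data_code_model.dcm')
```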
Result
The operator write_data_code_2d_model returns the value 2 (H_MSG_TRUE) if the passed handle is
valid and if the model can be written into the named file. Otherwise, an exception will be raised.
Parallelization Information
write_data_code_2d_model is reentrant and processed without parallelization.
Possible Predecessors
set_data_code_2d_param, find_data_code_2d
Alternatives
get_data_code_2d_param
See also
create_data_code_2d_model, set_data_code_2d_param, find_data_code_2d
Module
Data Code
17.7 Fourier-Descriptor
Normalization of the Fourier coefficients with respect to the displacement of the starting point.
The operator abs_invar_fourier_coeff normalizes the Fourier coefficients with regard to displace-
ments of the starting point. These occur when an object is rotated. The contour tracer get_region_contour
starts recording the contour in the upper left-hand corner of the region and follows the contour clockwise. If
the object is rotated, the starting point of the contour point chain is different, which leads to a phase shift in the
frequency space. The following two kinds of normalization are available:
abs_amount: The phase information is eliminated; this normalization does not retain the structure, i.e., if the
AZ-invariants are transformed back, no similarity with the pattern can be recognized anymore.
az_invar1: AZ-invariants of the 1st order perform the normalization with respect to the displacement of the
starting point so that the structure is retained; they are, however, more prone to local and global disturbances,
in particular to projective distortions.
Parameter
get_region_contour(single,&row,&col);
length_of_contour = length_tuple(row);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);
Parallelization Information
abs_invar_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff
Possible Successors
fourier_1dim_inv, match_fourier_coeff
Module
Foundation
are treated like complex-valued curves. Therefore, in order to determine the Fourier coefficients, the Fourier
transform for periodic functions is used. Hereby the parameter MaxCoef determines the absolute value + 1
of the maximal number of Fourier coefficients, i.e., if n coefficients are indicated, the procedure will calculate
coefficients ranging from −n to n. The contour is approximated without loss if n = number of contour points;
n = 100 approximates the contour so well that an error can hardly be distinguished, whereas n ∈ [40, 50] is
sufficient for most applications. If the parameter MaxCoef is set to 0, all coefficients will be determined.
Parameter
get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
Parallelization Information
fourier_1dim is reentrant and processed without parallelization.
Possible Predecessors
prep_contour_fourier
Possible Successors
invar_fourier_coeff, disp_polygon
Module
Foundation
get_region_contour(single,&row,&col);
length_of_contour = row.Num();
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);
Parallelization Information
fourier_1dim_inv is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff, fourier_1dim
Possible Successors
disp_polygon
Module
Foundation
The control parameter InvarType indicates up to which level the affine representation shall be normalized.
Please note that indicating a certain level implies that the normalization is executed with regard to all levels
below. For most applications, a subsequent normalization of the starting point is recommended.
Parameter
. RealCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Real parts of the Fourier coefficients.
. ImaginaryCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Imaginary parts of the Fourier coefficients.
. NormPar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Input of the normalizing coefficients.
Default Value : 1
Suggested values : NormPar ∈ {1, 2}
Restriction : NormPar ≥ 1
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
Parallelization Information
invar_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
fourier_1dim
Possible Successors
invar_fourier_coeff
Module
Foundation
none: No attenuation.
1/index: Absolute amounts of the Fourier coefficients will be divided by their index.
1/(index*index): Absolute amounts of the Fourier coefficients will be divided by their square index.
The higher the result value, the greater the differences between the pattern and the test contour. If the number of
coefficients is not the same, only the first n coefficients will be compared. The parameter MaxCoef indicates the
number of the coefficients to be compared. If MaxCoef is set to zero, all coefficients will be used.
Parameter
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,
"az_invar1",&absrow,&abscol);
match_fourier_coeff(contur1_row, contur1_col,
contur2_row, contur2_col, 50,
"1/index", &Distance_wert);
Parallelization Information
match_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff
Module
Foundation
Please note that, in contrast to the signed or unsigned area, the affine mapping of the radian is not transformed
linearly.
Parameter
get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
Parallelization Information
prep_contour_fourier is reentrant and processed without parallelization.
Possible Predecessors
move_contour_orig
Possible Successors
fourier_1dim
Module
Foundation
17.8 Function
abs_funct_1d ( : : Function : FunctionAbsolute )
ComposedFunction(x) = Function2(Function1(x)) .
ComposedFunction has the same domain (x-range) as Function1. If the range (y-value range) of
Function1 is larger than the domain of Function2, the parameter Border determines the border treatment of
Function2. For Border=’zero’ values outside the domain of Function2 are set to 0, for Border=’constant’
they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored at the border, and for
Border=’cyclic’ they are continued cyclically. To obtain y-values, Function2 is interpolated linearly.
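The border treatment can be sketched as follows (function values assumed):

```hdevelop
* Function1 takes y-values up to 6.0, but Function2 (four samples,
* x in [0,3]) is not defined there; with 'constant' the out-of-domain
* values are clamped to the border value of Function2
create_funct_1d_array ([0.0, 2.0, 4.0, 6.0], Function1)
create_funct_1d_array ([1.0, 3.0, 5.0, 7.0], Function2)
compose_funct_1d (Function1, Function2, 'constant', ComposedFunction)
```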
Parameter
. Function1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Input function 1.
. Function2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Input function 2.
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Border treatment for the input functions.
Default Value : ’constant’
List of values : Border ∈ {’zero’, ’constant’, ’mirror’, ’cyclic’}
. ComposedFunction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Composed function.
Parallelization Information
compose_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation
Possible Successors
write_funct_1d, gnuplot_plot_funct_1d, y_range_funct_1d, get_pair_funct_1d,
transform_funct_1d
Alternatives
create_funct_1d_pairs, read_funct_1d
See also
funct_1d_to_pairs
Module
Foundation
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Input function
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of derivative
Default Value : ’first’
List of values : Mode ∈ {’first’, ’second’}
. Derivative (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Derivative of the input function
Parallelization Information
derivate_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array, smooth_funct_1d_gauss,
smooth_funct_1d_mean
Module
Foundation
Parallelization Information
get_y_value_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation
y1(x) = a1 · y2(a3 · x + a4) + a2 .
The transformation parameters are determined by a least-squares minimization of the following function:
Σ_{i=0}^{n−1} ( y1(xi) − (a1 · y2(a3 · xi + a4) + a2) )²
The values of the function y2 are obtained by linear interpolation. The parameter Border determines the val-
ues of the function Function2 outside of its domain. For Border=’zero’ these values are set to 0, for
Border=’constant’ they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored
at the border, and for Border=’cyclic’ they are continued cyclically. The calculated transformation parameters
are returned as a 4-tuple in Params. If some of the parameter values are known, the respective parameters can
be excluded from the least-squares adjustment by setting the corresponding value in the tuple UseParams to the
value ’false’. In this case, the tuple ParamsConst must contain the known value of the respective parameter. If
a parameter is used for the adjustment (UseParams = ’true’), the corresponding parameter in ParamsConst is
ignored. On output, match_funct_1d_trans additionally returns the sum of the squared errors ChiSquare
of the resulting function, i.e., the function obtained by transforming the input function with the transformation pa-
rameters, as well as the covariance matrix Covar of the transformation parameters Params. These parameters
can be used to decide whether a successful matching of the functions was possible.
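A hypothetical sketch (function values assumed; the argument order ParamsConst before UseParams is an assumption):

```hdevelop
* Determine only the y-offset a2; a1, a3, and a4 are fixed to 1, 1, 0
* via ParamsConst and excluded from the adjustment via UseParams
create_funct_1d_array ([0.0, 1.0, 4.0, 9.0], Function1)
create_funct_1d_array ([5.0, 6.0, 9.0, 14.0], Function2)
match_funct_1d_trans (Function1, Function2, 'constant',
                      [1.0, 0.0, 1.0, 0.0], ['false', 'true', 'false', 'false'],
                      Params, ChiSquare, Covar)
```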
Parameter
Parallelization Information
scale_y_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation
yt(x) = a1 · y(a3 · x + a4) + a2 .
The output function TransformedFunction is obtained by transforming the x and y values of the input func-
tion separately with the above formula, i.e., the output function is not sampled again. Therefore, the parameter a3
is restricted to a3 ≠ 0.0. To resample a function, the operator sample_funct_1d can be used.
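A sketch with assumed values (the order Params = [a1, a2, a3, a4] is an assumption based on the formula above):

```hdevelop
* Shift the function by 10 in y and stretch it in x (a3 = 0.5);
* a3 must not be 0.0
create_funct_1d_array ([0.0, 1.0, 4.0, 9.0], Function)
transform_funct_1d (Function, [1.0, 10.0, 0.5, 0.0], TransformedFunction)
```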
Parameter
The operator write_funct_1d writes the contents of Function to a file. The data is written in an ASCII
format. Therefore, the file can be exchanged between different architectures. The data can be read by the operator
read_funct_1d. There is no specific extension for this kind of file.
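A minimal sketch (file name assumed):

```hdevelop
* Write a function to an ASCII file and read it back in
create_funct_1d_array ([0.0, 1.0, 4.0, 9.0], Function)
write_funct_1d (Function, 'function.dat')
read_funct_1d ('function.dat', FunctionRead)
```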
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Function to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file to be written.
Result
If the parameters are correct the operator write_funct_1d returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
write_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Alternatives
write_tuple, fwrite_string
See also
read_funct_1d, write_image, write_region, open_file
Module
Foundation
17.9 Geometry
RowA1 := 255
ColumnA1 := 10
RowA2 := 255
ColumnA2 := 501
disp_line (WindowHandle, RowA1, ColumnA1, RowA2, ColumnA2)
RowB1 := 255
ColumnB1 := 255
for i := 1 to 360 by 1
RowB2 := 255 + sin(rad(i)) * 200
ColumnB2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, RowB1, ColumnB1, RowB2, ColumnB2)
angle_ll (RowA1, ColumnA1, RowA2, ColumnA2,
RowB1, ColumnB1, RowB2, ColumnB2, Angle)
endfor
Result
angle_ll returns 2 (H_MSG_TRUE).
Parallelization Information
angle_ll is reentrant and processed without parallelization.
Alternatives
angle_lx
Module
Foundation
Calculate the angle between one line and the horizontal axis.
The operator angle_lx calculates the angle between one line and the abscissa. As input, the coordinates of
two points on the line (Row1,Column1, Row2,Column2) are expected. The calculation is performed as follows:
The line is interpreted as a vector with starting point Row1,Column1 and end point Row2,Column2. Rotating
the vector counterclockwise onto the abscissa (the center of rotation is the intersection point of the line with
the abscissa) yields the angle. The result depends on the order of the points on the line. The parameter Angle
returns the angle in radians, ranging from −π ≤ Angle ≤ π.
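A worked check (coordinates assumed):

```hdevelop
* The vector from (100,50) to (100,200) points in the direction of
* increasing column, so rotating it onto the abscissa yields angle 0;
* swapping the two points reverses the direction and changes the angle
angle_lx (100, 50, 100, 200, Angle)
```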
Parameter
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real / integer
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real / integer
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real / integer
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real / integer
Column coordinate of the second point of the line.
RowX1 := 255
ColumnX1 := 10
RowX2 := 255
ColumnX2 := 501
disp_line (WindowHandle, RowX1, ColumnX1, RowX2, ColumnX2)
Row1 := 255
Column1 := 255
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
angle_lx (Row1, Column1, Row2, Column2, Angle)
endfor
Result
angle_lx returns 2 (H_MSG_TRUE).
Parallelization Information
angle_lx is reentrant and processed without parallelization.
Alternatives
angle_ll
Module
Foundation
Example (Syntax: C)
Result
distance_cc returns 2 (H_MSG_TRUE).
Parallelization Information
distance_cc is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_pc, distance_cc_min
See also
distance_sr, distance_pr
Module
Foundation
Result
distance_cc_min returns 2 (H_MSG_TRUE).
Parallelization Information
distance_cc_min is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_pc, distance_cc
See also
distance_sr, distance_pr
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Input region.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real / integer
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real / integer
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real / integer
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real / integer
Column coordinate of the second point of the line.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Minimum distance between the line and the region.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Maximum distance between the line and the region.
Example
dev_close_window ()
read_image (Image, ’fabrik’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
5000, 100000000)
dev_clear_window ()
dev_set_color (’black’)
dev_display (SelectedRegions)
dev_set_color (’red’)
Row1 := 100
Row2 := 400
for Col := 50 to 400 by 4
disp_line (WindowHandle, Row1, Col+100, Row2, Col)
distance_lr (SelectedRegions, Row1, Col+100, Row2, Col,
DistanceMin, DistanceMax)
endfor
Result
distance_lr returns 2 (H_MSG_TRUE).
Parallelization Information
distance_lr is reentrant and processed without parallelization.
Alternatives
distance_lc, distance_pr, distance_sr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation
Parameter
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
dev_clear_window ()
dev_display (Region)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
10000, 100000000)
get_region_contour (SelectedRegions, Rows, Columns)
RowLine1 := 5
ColLine1 := 300
RowLine2 := 300
ColLine2 := 400
NumberTuple := |Rows|
dev_set_color (’red’)
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
dev_set_color (’green’)
for i := 0 to NumberTuple - 1 by 5
disp_line (WindowHandle, Rows[i], Columns[i]-2, Rows[i], Columns[i]+2)
disp_line (WindowHandle, Rows[i]-2, Columns[i], Rows[i]+2, Columns[i])
distance_pl (Rows[i], Columns[i], RowLine1, ColLine1,
RowLine2, ColLine2, Distance)
endfor
Result
distance_pl returns 2 (H_MSG_TRUE).
Parallelization Information
distance_pl is reentrant and processed without parallelization.
Alternatives
distance_ps
See also
distance_pp, distance_pr
Module
Foundation
Example
dev_close_window ()
read_image (Image, ’mreut’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
dev_display (Image)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
dev_clear_window ()
dev_display (Region)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’, 10000, 100000000)
get_region_contour (SelectedRegions, Rows, Columns)
RowPoint := 80
ColPoint := 250
NumberTuple := |Rows|
dev_set_color (’red’)
set_draw (WindowHandle, ’margin’)
disp_circle (WindowHandle, RowPoint, ColPoint, 10)
dev_set_color (’green’)
for i := 0 to NumberTuple - 1 by 10
disp_line (WindowHandle, Rows[i], Columns[i]-2, Rows[i], Columns[i]+2)
disp_line (WindowHandle, Rows[i]-2, Columns[i], Rows[i]+2, Columns[i])
distance_pp (RowPoint, ColPoint, Rows[i], Columns[i], Distance)
endfor
Result
distance_pp returns 2 (H_MSG_TRUE).
Parallelization Information
distance_pp is reentrant and processed without parallelization.
Alternatives
distance_ps
See also
distance_pl, distance_pr
Module
Foundation
Example
dev_close_window ()
read_image (Image, ’mreut’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
10000, 100000000)
Row1 := 255
Column1 := 255
dev_clear_window ()
dev_display (SelectedRegions)
dev_set_color (’red’)
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
distance_pr (SelectedRegions, Row2, Column2,
DistanceMin, DistanceMax)
endfor
Result
distance_pr returns 2 (H_MSG_TRUE).
Parallelization Information
distance_pr is reentrant and processed without parallelization.
Alternatives
distance_pc, distance_lr, distance_sr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation
Result
distance_ps returns 2 (H_MSG_TRUE).
Parallelization Information
distance_ps is reentrant and processed without parallelization.
Alternatives
distance_pl
See also
distance_pp, distance_pr
Module
Foundation
The calculation is carried out by comparing all contour pixels (see get_region_contour). This means in
particular that each region must consist of exactly one connected component and that holes in the regions are
ignored. Furthermore, it is not checked whether one region lies completely within the other region; in this case,
a minimum distance > 0 is returned. It is also not checked whether the two regions have a nonempty intersection;
in the latter case, a minimum distance of 0 or > 0 can be returned, depending on whether the contours of the
regions share a common point.
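The behavior described above can be illustrated with a minimal HDevelop sketch (the image name, threshold values, and the selected components are illustrative assumptions; the call follows the parameter list below):

```
* Minimum contour distance between two connected components.
read_image (Image, ’fabrik’)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
* Pick two components; both inputs must contain the same number of regions.
select_obj (ConnectedRegions, RegionA, 1)
select_obj (ConnectedRegions, RegionB, 2)
distance_rr_min (RegionA, RegionB, MinDistance, Row1, Column1, Row2, Column2)
```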
Attention
Both input parameters must contain the same number of regions. The regions must not be empty.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. MinDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Minimum distance between contours of the regions.
Assertion : 0 ≤ MinDistance
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; integer
Line index on contour in Regions1.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; integer
Column index on contour in Regions1.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; integer
Line index on contour in Regions2.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; integer
Column index on contour in Regions2.
Complexity
If N1 and N2 are the lengths of the contours, the runtime complexity is O(N1 ∗ N2).
Result
The operator distance_rr_min returns the value 2 (H_MSG_TRUE) if the input is not empty. Otherwise an
exception handling is raised.
Parallelization Information
distance_rr_min is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
distance_rr_min_dil, dilation1, intersection
Module
Foundation
NumberIterations ∗ 2 − 1.
The mask ’h’ has the effect that precisely the maximum metric is calculated.
Attention
Both parameters must contain the same number of regions. The regions must not be empty.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. MinDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Minimum distances of the regions.
Assertion : -1 ≤ MinDistance
Result
The operator distance_rr_min_dil returns the value 2 (H_MSG_TRUE) if the input is not empty. Otherwise an exception handling is raised.
Parallelization Information
distance_rr_min_dil is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
distance_rr_min, dilation1, intersection
Module
Foundation
dev_set_color (’black’)
RowLine1 := 400
ColLine1 := 200
RowLine2 := 200
ColLine2 := 400
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
dev_set_color (’green’)
n := 0
for Rows := 40 to 200 by 4
disp_line (WindowHandle, Rows+n, Columns+n, Rows, Columns+n)
distance_sl (Rows+n, Columns+n, Rows, Columns+n, RowLine1, ColLine1,
RowLine2, ColLine2,DistanceMin, DistanceMax)
n := n+10
endfor
Result
distance_sl returns 2 (H_MSG_TRUE).
Parallelization Information
distance_sl is reentrant and processed without parallelization.
Alternatives
distance_pl
See also
distance_ps, distance_pp
Module
Foundation
Result
distance_sr returns 2 (H_MSG_TRUE).
Parallelization Information
distance_sr is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_lr, distance_pr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation
dev_set_color (’black’)
RowLine1 := 400
ColLine1 := 200
RowLine2 := 240
ColLine2 := 400
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
dev_set_color (’red’)
n := 0
for Rows := 40 to 200 by 4
disp_line (WindowHandle, Rows, Columns, Rows+n, Columns+n)
Result
distance_ss returns 2 (H_MSG_TRUE).
Parallelization Information
distance_ss is reentrant and processed without parallelization.
Alternatives
distance_pp
See also
distance_pl, distance_ps
Module
Foundation
draw_ellipse (WindowHandle, Row, Column, Phi, Radius1, Radius2)
get_points_ellipse ([0,3.14], Row, Column, Phi, Radius1, Radius2, RowPoint, ColPoint)
Result
get_points_ellipse returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
get_points_ellipse is reentrant and processed without parallelization.
Possible Predecessors
fit_ellipse_contour_xld, draw_ellipse, gen_ellipse_contour_xld
See also
gen_ellipse_contour_xld
Module
Foundation
dev_set_color (’black’)
RowLine1 := 350
ColLine1 := 250
RowLine2 := 300
ColLine2 := 300
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
n := 0
Result
intersection_ll returns 2 (H_MSG_TRUE).
Parallelization Information
intersection_ll is reentrant and processed without parallelization.
Module
Foundation
dev_set_color (’black’)
RowLine1 := 400
ColLine1 := 200
RowLine2 := 240
ColLine2 := 400
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
n := 0
for Rows := 40 to 200 by 4
dev_set_color (’red’)
disp_circle (WindowHandle, Rows+n, Columns, 2)
projection_pl (Rows+n, Columns, RowLine1, ColLine1, RowLine2, ColLine2,
RowProj, ColProj)
dev_set_color (’blue’)
disp_line (WindowHandle, RowProj-2, ColProj, RowProj+2, ColProj)
disp_line (WindowHandle, RowProj, ColProj-2, RowProj, ColProj+2)
n := n+8
endfor
Result
projection_pl returns 2 (H_MSG_TRUE).
Parallelization Information
projection_pl is reentrant and processed without parallelization.
Module
Foundation
17.10 Grid-Rectification
connect_grid_points ( Image : ConnectingLines : Row, Col, Sigma,
MaxDist : )
Result
find_rectification_grid returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
find_rectification_grid is reentrant and processed without parallelization.
Possible Successors
dilation_circle, reduce_domain
Module
Calibration
Generate a projection map that describes the mapping between an arbitrarily distorted image and the rectified
image.
gen_arbitrary_distortion_map computes the mapping Map between an arbitrarily distorted image and
the rectified image. Assuming that the points (Row,Col) form a regular grid in the rectified image, each grid cell,
which is defined by the coordinates (Row,Col) of its four corners in the distorted image, is projected onto a square
of GridSpacing×GridSpacing pixels. The coordinates of the grid points must be passed line by line in Row
and Col. GridWidth is the width of the point grid in grid points. To compute the mapping Map, additionally
the width ImageWidth and height ImageHeight of the images to be rectified must be passed.
Map consists of one image containing five channels. In the first channel, for each pixel in the resulting image,
the linearized coordinates of the pixel in the input image that lies to the upper left of the transformed
coordinates are stored. The other four channels contain the weights of the four neighboring pixels of the
transformed coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the transformed coordinates.
In contrast to gen_grid_rectification_map, gen_arbitrary_distortion_map is used when
the coordinates (Row,Col) of the grid points in the distorted image are already known, or when the relevant part
of the image consists of regular grid structures from which the coordinates can be derived.
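A hedged sketch of the typical workflow, assuming the grid coordinates Row and Col are already known (all numeric values are illustrative):

```
* Row and Col hold the grid corners in the distorted image, line by line,
* with GridWidth points per grid line (assumed to be known beforehand).
gen_arbitrary_distortion_map (Map, 10, Row, Col, GridWidth, 652, 494)
* Rectify the distorted image with the computed map.
map_image (DistortedImage, Map, RectifiedImage)
```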
Parameter
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject : int4 / uint2
Image containing the mapping data.
. GridSpacing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Distance of the grid points in the rectified image.
Restriction : GridSpacing > 0
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinates of the grid points in the distorted image.
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinates of the grid points in the distorted image.
Restriction : number(Row) = number(Col)
. GridWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the point grid (number of grid points).
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images to be rectified.
Restriction : ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.y ; integer
Height of the images to be rectified.
Restriction : ImageHeight > 0
Result
gen_arbitrary_distortion_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception handling is raised.
Parallelization Information
gen_arbitrary_distortion_map is reentrant and processed without parallelization.
Possible Successors
map_image
See also
create_rectification_grid, find_rectification_grid, connect_grid_points,
gen_grid_rectification_map
Module
Calibration
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
gen_grid_rectification_map calculates the mapping between the grid points (Row,Col), which have
actually been detected in the distorted image Image (typically using saddle_points_sub_pix), and the
corresponding grid points of the ideal regular point grid. First, all paths that lead from their initial point via
exactly four different connecting lines back to the initial point are assembled from the grid points (Row,Col) and
the connecting lines ConnectingLines (detected by connect_grid_points). If the input of grid points
(Row,Col) and connecting lines ConnectingLines was meaningful, one such ’mesh’ corresponds to exactly
one grid cell in the rectification grid. Afterwards, the meshes are combined into the point grid. According to the
value of Rotation, the point grid is rotated by 0, 90, 180, or 270 degrees. Note that the point grid does not
necessarily have the correct orientation. When passing ’auto’ in Rotation, the point grid is rotated such that the
black circular mark in the rectification grid is positioned to the left of the white one (see also
create_rectification_grid). Finally, the mapping Map between the distorted image and the rectified
image is calculated by interpolation between the grid points. Each grid cell, for which the coordinates (Row,Col)
of all four corner points are known, is projected onto a square of GridSpacing × GridSpacing pixels.
Map consists of one image containing five channels. In the first channel, for each pixel in the resulting image,
the linearized coordinates of the pixel in the input image that lies to the upper left of the transformed
coordinates are stored. The other four channels contain the weights of the four neighboring pixels of the
transformed coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the transformed coordinates.
gen_grid_rectification_map additionally returns the calculated meshes as XLD contours in Meshes.
In contrast to gen_arbitrary_distortion_map, gen_grid_rectification_map and its
predecessors are used when the coordinates (Row,Col) of the grid points in the distorted image are neither
known nor can be derived from the image contents.
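The complete grid-rectification chain can be sketched as follows (the filter name and all numeric parameters are illustrative assumptions, not recommended values):

```
* Locate the rectification grid and restrict processing to it.
find_rectification_grid (Image, GridRegion, 0.9, 10)
reduce_domain (Image, GridRegion, ImageReduced)
* Detect the grid points and the lines connecting them.
saddle_points_sub_pix (ImageReduced, ’facet’, 1.5, 5, Row, Col)
connect_grid_points (ImageReduced, ConnectingLines, Row, Col, 1.0, 5.0)
* Derive the mapping and rectify the image.
gen_grid_rectification_map (ImageReduced, ConnectingLines, Map, Meshes,
                            10, ’auto’, Row, Col)
map_image (Image, Map, ImageRectified)
```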
Attention
Each input XLD contour in ConnectingLines must have the global attribute ’bright_dark’, as described for
connect_grid_points.
Parameter
Possible Predecessors
connect_grid_points
Possible Successors
map_image
See also
gen_arbitrary_distortion_map
Module
Calibration
17.11 Hough
Parameter
. RegionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which the circles are to be detected.
. RegionOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Centres of those circles of which at least Percent percent is present in the edge image.
Number of elements : RegionOut = ((Radius · Percent) · Mode)
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Radius of the circle to be searched in the image.
Default Value : 12
Typical range of values : 2 ≤ Radius ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : (1 ≤ Radius) ≤ 500
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Indicates the percentage (approximately) of the (ideal) circle which must be present in the edge image
RegionIn.
Default Value : 60
Typical range of values : 10 ≤ Percent ≤ 100 (lin)
Minimum Increment : 1
Recommended Increment : 5
Number of elements : (1 ≤ Percent) ≤ 100
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
The mode defines the position of the circle in question:
0 - the radius corresponds to the outer border of the set pixels.
1 - the radius corresponds to the centres of the circle line’s pixels.
2 - both 0 and 1 (a little fuzzier, but more robust against slightly differently set circles; requires
50 % more processing time than 0 or 1 alone).
List of values : Mode ∈ {0, 1, 2}
Number of elements : (1 ≤ Mode) ≤ 3
Result
The operator hough_circles returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_circles is reentrant and processed without parallelization.
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which lines are to be detected.
. HoughImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : int2
Hough transform for lines.
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Adjusting the resolution in the angle area.
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
Result
The operator hough_line_trans returns the value 2 (H_MSG_TRUE) if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_line_trans is reentrant and processed without parallelization.
Possible Predecessors
threshold, skeleton
Possible Successors
threshold, local_max
See also
hough_circle_trans, gen_region_hline
Module
Foundation
Compute the Hough transform for lines using local gradient direction.
The operator hough_line_trans_dir calculates the Hough transform for lines in those regions passed in
the domain of ImageDir. To do so, the angles and the lengths of the lines’ normal vectors are registered in the
parameter space (the so-called Hough or accumulator space).
In contrast to hough_line_trans, additionally the edge direction in ImageDir (e.g., returned by
sobel_dir or edges_image) is taken into account. This results in a more efficient computation and in a
reduction of the noise in the Hough space.
The parameter DirectionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with DirectionUncertainty = 10 a horizontal line (i.e.,
edge direction = 0 degrees) may contain points with an edge direction between -10 and +10 degrees. The higher
DirectionUncertainty is chosen, the higher the computation time will be. For
DirectionUncertainty = 180, hough_line_trans_dir shows the same behavior as
hough_line_trans, i.e., the edge direction is ignored. DirectionUncertainty should be chosen at
least as high as the step width of the edge direction stored in ImageDir. The minimum step width is 2 degrees
(defined by the image type ’direction’).
The result is stored in a newly generated UINT2 image (HoughImage), where the x-axis (i.e., columns)
represents the angle between the normal vector and the x-axis of the original image, and the y-axis (i.e., rows)
represents the distance of the line from the origin. The angle ranges from -90 to 180 degrees and is stored with a
resolution of 1/AngleResolution, which means that one pixel in x-direction is equivalent to
1/AngleResolution degrees and that the HoughImage has a width of 270 ∗ AngleResolution + 1
pixels. The height of the HoughImage corresponds to the distance between the lower right corner of the
surrounding rectangle of the input region and the origin.
The local maxima in the result image are equivalent to the parameter values of the lines in the original image.
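A hedged sketch of a typical call (the edge filter and all thresholds are illustrative assumptions):

```
* Extract edge amplitude and direction; restrict the direction image
* to the edge pixels, which form the domain evaluated by the operator.
edges_image (Image, ImaAmp, ImaDir, ’lanser2’, 0.5, ’nms’, 20, 40)
threshold (ImaAmp, EdgeRegion, 1, 255)
reduce_domain (ImaDir, EdgeRegion, ImageDirReduced)
hough_line_trans_dir (ImageDirReduced, HoughImage, 10, 4)
* The local maxima in HoughImage correspond to the detected lines.
local_max (HoughImage, Maxima)
```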
Parameter
. ImageDir (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : direction
Image containing the edge direction. The edges must be described by the image domain.
. HoughImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : uint2
Hough transform.
. DirectionUncertainty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; integer
Uncertainty of the edge direction (in degrees).
Default Value : 2
Typical range of values : 2 ≤ DirectionUncertainty ≤ 180
Minimum Increment : 2
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Resolution in the angle area (in 1/degrees).
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
Result
The operator hough_line_trans_dir returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input is set via the operator set_system(’no_object_result’,<Result>).
If necessary an exception handling is raised.
Parallelization Information
hough_line_trans_dir is reentrant and processed without parallelization.
Possible Predecessors
edges_image, sobel_dir, threshold, hysteresis_threshold,
nonmax_suppression_dir, reduce_domain
Possible Successors
binomial_filter, gauss_image, threshold, local_max, plateaus_center
See also
hough_line_trans, hough_lines, hough_lines_dir
Module
Foundation
Detect lines in edge images with the help of the Hough transform and return them in HNF.
The operator hough_lines allows the selection of line-like structures in a region, whereby it is not necessary
that the individual points of a line are connected. This process is based on the Hough transform. The lines are
returned in HNF, that is, by the direction and length of their normal vector.
The parameter AngleResolution defines how accurately the angles are determined; the accuracy amounts
to 1/AngleResolution degrees. The parameter Threshold determines by how many points of the original
region a line hypothesis must at least be supported in order to be taken over into the output. The parameters
AngleGap and DistGap define a neighborhood of the points in the Hough image in order to determine the
local maxima.
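A hedged sketch of a typical call (the threshold and gap values are illustrative); gen_region_hline converts the returned HNF parameters back into displayable regions:

```
threshold (Image, EdgeRegion, 128, 255)
skeleton (EdgeRegion, Skeleton)
* Angle and Dist describe each detected line by its normal vector (HNF).
hough_lines (Skeleton, 4, 100, 5, 5, Angle, Dist)
gen_region_hline (LineRegions, Angle, Dist)
```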
Parameter
. RegionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which the lines are to be detected.
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Adjusting the resolution in the angle area.
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Threshold value in the Hough image.
Default Value : 100
Typical range of values : 2 ≤ Threshold
Detect lines in edge images with the help of the Hough transform using local gradient direction and return them in
normal form.
The operator hough_lines_dir selects line-like structures in a region based on the Hough transform. The
individual points of a line can be unconnected. The region is given by the domain of ImageDir. The lines are
returned in Hessian normal form (HNF), that is by the direction and length of their normal vector.
In contrast to hough_lines, additionally the edge direction in ImageDir (e.g., returned by sobel_dir or
edges_image) is taken into account. This results in a more efficient computation and in a reduction of the noise
in the Hough space.
The parameter DirectionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with DirectionUncertainty = 10 a horizontal line (i.e.,
edge direction = 0 degrees) may contain points with an edge direction between -10 and +10 degrees. The higher
DirectionUncertainty is chosen, the higher the computation time will be. For
DirectionUncertainty = 180, hough_lines_dir shows the same behavior as hough_lines, i.e.,
the edge direction is ignored. DirectionUncertainty should be chosen at least as high as the step width
of the edge direction stored in ImageDir. The minimum step width is 2 degrees (defined by the image type
’direction’).
The parameter AngleResolution defines how accurately the angles are determined. The accuracy amounts to
1/AngleResolution degrees. A subsequent smoothing of the Hough space results in an increased stability.
The smoothing filter can be selected by Smoothing, the degree of smoothing by the parameter FilterSize
(see mean_image or gauss_image for details). The parameter Threshold determines by how many
points of the original region a line’s hypothesis must at least be supported in order to be selected into the output.
The parameters AngleGap and DistGap define a neighborhood of the points in the Hough image in order to
determine the local maxima: AngleGap describes the minimum distance of two maxima in the Hough image
in angle direction and DistGap in distance direction, respectively. Thus, maxima exceeding Threshold but
lying close to an even higher maximum are eliminated. This can particularly be helpful when searching for short
and long lines simultaneously. Besides the unsmoothed Hough image HoughImage, the lines are returned in
HNF (Angle, Dist). If the parameter GenLines is set to ’true’, additionally those regions in ImageDir are
returned that contributed to the local maxima in Hough space. They are stored in the parameter Lines.
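A hedged sketch of the workflow (the edge filter and all numeric values are illustrative assumptions; the parameter order follows the description above and should be checked against the operator signature):

```
* Edge direction image restricted to the edge pixels.
edges_image (Image, ImaAmp, ImaDir, ’lanser2’, 0.5, ’nms’, 20, 40)
threshold (ImaAmp, EdgeRegion, 1, 255)
reduce_domain (ImaDir, EdgeRegion, ImageDirReduced)
* Mean smoothing of the Hough space; also return the contributing regions.
hough_lines_dir (ImageDirReduced, HoughImage, Lines, 10, 4, ’mean’, 5,
                 100, 5, 5, ’true’, Angle, Dist)
```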
Parameter
Result
The operator hough_lines_dir returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_lines_dir is reentrant and processed without parallelization.
Possible Predecessors
edges_image, sobel_dir, threshold, nonmax_suppression_dir, reduce_domain,
skeleton
Possible Successors
gen_region_hline, select_matching_lines
See also
hough_line_trans_dir, hough_line_trans, gen_region_hline, hough_circles
Module
Foundation
Select those lines from a set of lines (in HNF) which fit best into a region.
With the operator select_matching_lines, lines that fit best into a region can be selected from a set of
lines given in HNF; the region itself is passed as a parameter (RegionIn). The width of the lines can be
specified with the parameter LineWidth. The selected lines are returned in HNF and as regions
(RegionLines).
The lines are selected iteratively in a loop: First, the line with the greatest overlap with the input region is
selected from the set of input lines. This line is added to the output set, and all points belonging to it are
excluded from the overlap computation in the subsequent steps. The loop terminates when the maximum overlap
between the region and the remaining lines falls below a threshold value (Thresh). The selected lines are
returned both as regions and in HNF.
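The iterative selection can be sketched as follows (the line width and threshold values are illustrative assumptions):

```
* Candidate lines in HNF, e.g., from the Hough transform.
hough_lines (EdgeRegion, 4, 100, 5, 5, AngleIn, DistIn)
* Greedily keep the lines that best cover the region.
select_matching_lines (EdgeRegion, RegionLines, AngleIn, DistIn, 3, 40,
                       AngleOut, DistOut)
```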
Parameter
17.12 Image-Comparison
clear_all_variation_models ( : : : )
clear_train_data_variation_model ( : : ModelID : )
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; integer
ID of the variation model.
Result
clear_train_data_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
clear_train_data_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
prepare_variation_model
Possible Successors
compare_variation_model, compare_ext_variation_model, write_variation_model
Module
Matching
clear_variation_model ( : : ModelID : )
For Mode = ’dark’, Region contains all points that are too dark, i.e., all points for which c(x, y) < t_l(x, y).
Finally, for Mode = ’light_dark’ two regions are returned in Region. The first region contains the result of Mode
= ’light’, while the second region contains the result of Mode = ’dark’. The respective regions can be selected
with select_obj.
Parameter
Result
compare_ext_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct and
if the internal threshold images have been generated with prepare_variation_model or
prepare_direct_variation_model.
Parallelization Information
compare_ext_variation_model is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
Possible Successors
select_obj, connection
Alternatives
compare_variation_model, dyn_threshold
See also
get_thresh_images_variation_model
Module
Matching
Result
compare_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct and
if the internal threshold images have been generated with prepare_variation_model or
prepare_direct_variation_model.
Parallelization Information
compare_variation_model is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
Possible Successors
connection
Alternatives
compare_ext_variation_model, dyn_threshold
See also
get_thresh_images_variation_model
Module
Matching
value morphology (e.g., gray_erosion_shape and gray_dilation_shape), and then training the model with the syn-
thetically modified images. An alternative way to create the variation model from a single image is to create
the model with Mode=’direct’. In this case, the variation model can only be trained by specifying the ideal image
and the variation image directly with prepare_direct_variation_model. Since the variation typically
is large at the edges of the object, edge operators like sobel_amp, edges_image, or gray_range_rect
should be used to create the variation image.
Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images to be compared.
Default Value : 640
Suggested values : Width ∈ {160, 192, 320, 384, 640, 768}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the images to be compared.
Default Value : 480
Suggested values : Height ∈ {120, 144, 240, 288, 480, 576}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the images to be compared.
Default Value : ’byte’
Suggested values : Type ∈ {’byte’, ’int2’, ’uint2’}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method used for computing the variation model.
Default Value : ’standard’
Suggested values : Mode ∈ {’standard’, ’robust’, ’direct’}
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; integer
ID of the variation model.
Complexity
A variation model created with create_variation_model requires 12 ∗ Width ∗ Height bytes of memory
for Mode = ’standard’ and Mode = ’robust’ for Type = ’byte’. For Type = ’uint2’ and Type = ’int2’,
14 ∗ Width ∗ Height bytes are required. For Mode = ’direct’ and after the training data has been cleared with
clear_train_data_variation_model, 2 ∗ Width ∗ Height bytes are required for Type = ’byte’ and
4 ∗ Width ∗ Height bytes for the other image types.
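As a quick illustration, the stated memory formulas can be written as a small Python helper (a sketch of the formulas above only; the function name is made up and not part of HALCON):

```python
def variation_model_bytes(width, height, type_="byte", mode="standard",
                          train_data_cleared=False):
    """Memory requirement of a variation model, per the formulas above.

    'standard'/'robust' with training data: 12*W*H bytes for 'byte',
    14*W*H for 'uint2'/'int2'. 'direct' mode, or after the training data
    has been cleared: 2*W*H for 'byte', 4*W*H for the other types.
    """
    if mode == "direct" or train_data_cleared:
        return (2 if type_ == "byte" else 4) * width * height
    return (12 if type_ == "byte" else 14) * width * height
```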
Result
create_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
create_variation_model is processed completely exclusively without parallelization.
Possible Successors
train_variation_model, prepare_direct_variation_model
See also
prepare_variation_model, clear_variation_model,
clear_train_data_variation_model, find_shape_model, affine_trans_image
Module
Matching
get_thresh_images_variation_model ( : MinImage,
MaxImage : ModelID : )
Return the threshold images used for image comparison by a variation model.
get_thresh_images_variation_model returns the threshold images of the variation
model ModelID in MaxImage and MinImage. The threshold images must be computed
with prepare_variation_model or prepare_direct_variation_model before
they can be read out. The formula used for calculating the threshold images is described with
prepare_variation_model or prepare_direct_variation_model. The threshold images
are used in compare_variation_model and compare_ext_variation_model to detect too large
deviations of an image with respect to the model. As described with compare_variation_model and
compare_ext_variation_model, gray values outside the interval given by MinImage and MaxImage
are regarded as errors.
Parameter
. MinImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2
Threshold image for the lower threshold.
. MaxImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2
Threshold image for the upper threshold.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; integer
ID of the variation model.
Result
get_thresh_images_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
get_thresh_images_variation_model is reentrant and processed without parallelization.
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
See also
compare_variation_model, compare_ext_variation_model
Module
Matching
t_u(x, y) = i(x, y) + max{a_u, b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l, b_l · v(x, y)}
If the current image c(x, y) is compared to the variation model using compare_variation_model, the output
region contains all points that differ substantially from the model, i.e., that fulfill the following condition:
c(x, y) > t_u(x, y) ∨ c(x, y) < t_l(x, y)
In compare_ext_variation_model, extended comparison modes are available, which return only too
bright errors, only too dark errors, or bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
get_thresh_images_variation_model.
It should be noted that, in order to save memory, RefImage and VarImage themselves are not stored in the
model as the ideal image and variation image.
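The threshold computation and the comparison described above can be sketched with NumPy (a simplified model only; function names are hypothetical, and the real operators additionally handle image types, domains, and the different comparison modes):

```python
import numpy as np

def prepare_thresholds(i, v, abs_u, abs_l, var_u, var_l):
    """Compute the threshold images from the ideal image i and the
    variation image v, following
        t_u = i + max(a_u, b_u * v),   t_l = i - max(a_l, b_l * v)."""
    t_u = i + np.maximum(abs_u, var_u * v)
    t_l = i - np.maximum(abs_l, var_l * v)
    return t_u, t_l

def compare(c, t_u, t_l, mode="light_dark"):
    """Boolean error masks in the style of compare_ext_variation_model:
    pixels outside the interval [t_l, t_u] are regarded as errors."""
    light = c > t_u   # too bright
    dark = c < t_l    # too dark
    if mode == "light":
        return light
    if mode == "dark":
        return dark
    return light, dark
```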
Parameter
Result
prepare_direct_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
prepare_direct_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
sobel_amp, edges_image, gray_range_rect
Possible Successors
compare_variation_model, compare_ext_variation_model,
get_thresh_images_variation_model, write_variation_model
Alternatives
prepare_variation_model
See also
create_variation_model
Module
Matching
t_u(x, y) = i(x, y) + max{a_u, b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l, b_l · v(x, y)}
If the current image c(x, y) is compared to the variation model using compare_variation_model, the output
region contains all points that differ substantially from the model, i.e., that fulfill the following condition:
c(x, y) > t_u(x, y) ∨ c(x, y) < t_l(x, y)
In compare_ext_variation_model, extended comparison modes are available, which return only too
bright errors, only too dark errors, or bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
get_thresh_images_variation_model. Furthermore, the training data can be deleted with
clear_train_data_variation_model to save memory.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; integer
ID of the variation model.
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
. VarThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Threshold for the differences based on the variation of the variation model.
Default Value : 2
Suggested values : VarThreshold ∈ {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
Restriction : VarThreshold ≥ 0
Result
prepare_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
prepare_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
train_variation_model
Possible Successors
compare_variation_model, compare_ext_variation_model,
get_thresh_images_variation_model, clear_train_data_variation_model,
write_variation_model
Alternatives
prepare_direct_variation_model
See also
create_variation_model
Module
Matching
Result
train_variation_model returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
train_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
create_variation_model, find_shape_model, affine_trans_image, concat_obj
Possible Successors
prepare_variation_model
See also
prepare_variation_model, compare_variation_model, compare_ext_variation_model,
clear_variation_model
Module
Matching
17.13 Kalman-Filter
filter_kalman ( : : Dimension, Model, Measurement,
PredictionIn : PredictionOut, Estimate )
Estimate the current state of a system with the help of the Kalman filtering.
The operator filter_kalman returns an estimate of the current state (or also a prediction of a future state)
of a discrete, stochastically disturbed, linear system. In practice, Kalman filters are used successfully in image
processing in the analysis of image sequences (background identification, lane tracking with the help of line tracing
or region analysis, etc.). A short introduction concerning the theory of the Kalman filters will be followed by a
detailed description of the routine filter_kalman itself.
KALMAN FILTER: A discrete, stochastically disturbed, linear system is characterized by the following features:
• State x(t): Describes the current state of the system (speeds, temperatures, ...).
The output function and the transition function are linear. Their application can therefore be written as a multipli-
cation with a matrix.
The transition function is described with the help of the transition matrix A(t) and the parameter matrix G(t); the
output function is described by the measurement matrix C(t). Hereby A(t) characterizes the dependency of the new
state on the old one, and G(t) indicates the dependency on the parameters. In practice it is rarely possible (or at least
too time consuming) to describe a real system and its behaviour completely and exactly. Normally only a relatively
small number of variables is used to simulate the behaviour of the system. This leads to an error, the so-called
system error (also called system disturbance) v(t).
The output function, too, is usually not exact. Each measurement is faulty. The measurement errors will be called
w(t). Therefore the following system equations arise:
x(t + 1) = A(t)x(t) + G(t)u(t) + v(t)
y(t) = C(t)x(t) + w(t)
The system error v(t) and the measurement error w(t) are not known. As far as systems are concerned which
are interpreted with the help of the Kalman filter, these two errors are considered as Gaussian distributed random
vectors (therefore the expression "stochastically disturbed systems"). Therefore the system can be calculated, if
the corresponding expected values for v(t) and w(t) as well as the covariance matrices are known.
The estimation of the state of the system is carried out in the same way as in the Gaussian-Markov-estimation.
However, the Kalman filter is a recursive algorithm which is based only on the current measurements y(t) and the
latest state x(t). The latter implicitly also includes the knowledge about earlier measurements.
A suitable estimate value x_0, which is interpreted as the expected value of a random variable for x(0), must be
indicated for the initial value x(0). This variable should have an expected error value of 0 and the covariance
matrix P_0, which also has to be indicated. At a certain time t the expected values of both disturbances v(t) and
w(t) should be 0 and their covariances should be Q(t) and R(t). x(t), v(t) and w(t) will usually be assumed to be
not correlated (any kind of noise-process can be modelled - however the development of the necessary matrices by
the user will be considerably more demanding). The following conditions must be met by the estimate
values x_t:
• The estimate values x_t depend linearly on the actual value x(t) and on the measurement sequence
y(0), y(1), · · · , y(t).
• x_t is hereby assumed to be unbiased, i.e. E[x_t] = E[x(t)].
• The quality criterion for x_t is that of minimal variance, i.e. the variance of the estimation error, defined
as x(t) − x_t, must be as small as possible.
(K-III) K(t) = P̂(t) C'(t) (C(t) P̂(t) C'(t) + R(t))^(-1)
(K-IV) x_t = x̂(t) + K(t) (y(t) − C(t) x̂(t))
(K-V) P̃(t) = P̂(t) − K(t) C(t) P̂(t)
(K-I) x̂(t + 1) = A(t) x_t + G(t) u(t)
(K-II) P̂(t + 1) = A(t) P̃(t) A'(t) + Q(t)
Hereby P̃(t) is the covariance matrix of the estimation error, x̂(t) is the extrapolation or prediction value of the
state, P̂(t) is the covariance matrix of the prediction error x̂ − x, K is the amplification matrix (the so-called
Kalman gain), and X' denotes the transpose of a matrix X.
Please note that the prediction of the future state is also possible with the equation (K-I). Sometimes this is very
useful in image processing in order to determine "regions of interest" in the next image.
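A single iteration of the recursion (K-I) to (K-V) can be written down directly with NumPy. This is a minimal sketch under the assumptions of uncorrelated noise and no control input (G and u omitted); the function name is made up:

```python
import numpy as np

def kalman_step(A, C, Q, R, y, x_pred, P_pred):
    """One Kalman iteration: update (K-III..K-V), then predict (K-I, K-II).

    x_pred, P_pred: prediction x^(t) and its covariance P^(t)
    Returns the estimate and the prediction for the next time step.
    """
    # (K-III) Kalman gain
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    # (K-IV) state estimate from the innovation y - C x^
    x_est = x_pred + K @ (y - C @ x_pred)
    # (K-V) covariance of the estimation error
    P_est = P_pred - K @ C @ P_pred
    # (K-I) / (K-II) prediction of the next state (no control term G u)
    x_next = A @ x_est
    P_next = A @ P_est @ A.T + Q
    return x_est, P_est, x_next, P_next
```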
As mentioned above, it is much more demanding to model any kind of noise processes. If for example the system
noise and the measurement noise are correlated with the corresponding covariance matrix L, the equations for the
Kalman gain and the error covariance matrix have to be modified:
(K-III) K(t) = (P̂(t) C'(t) + L(t)) (C(t) P̂(t) C'(t) + C(t) L(t) + L'(t) C'(t) + R(t))^(-1)
(K-V) P̃(t) = P̂(t) − K(t) C(t) P̂(t) − K(t) L'(t)
This means that the user himself has to establish the linear system equations from (K-I) up to (K-V) with respect to
the actual problem. The user must therefore develop a mathematical model upon which the solution to the problem
can be based. Statistical characteristics describing the inaccuracies of the system as well as the measurement
errors, which are to be expected, thereby have to be estimated if they cannot be calculated exactly. Therefore the
following individual steps are necessary:
As mentioned above, the initialization of the system (point 7) requires an estimate x_0 of the state of the system
at time 0 and the corresponding covariance matrix P_0. If the exact initial state is not known, it is recommended
to set the components of the vector x_0 to the average values of their corresponding ranges, and to set large
values for P_0 (about the size of the squares of those ranges). After a few iterations (when the total number of
accumulated measurement values has exceeded the number of system variables), the values determined in this
way become usable.
If on the other hand the initial state is known exactly, all entries for P_0 have to be set to 0, because P_0 describes
the covariances of the error between the estimated value x_0 and the actual value x(0).
THE FILTER ROUTINE:
A Kalman filter is dependent on a range of data which can be organized in four groups:
Model parameter: transition matrix A, control matrix G including the parameter u and the measurement matrix
C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L, and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂
Thereby many systems can work without input "from outside", i.e. without G and u. Further, system errors and
measurement errors are normally not correlated (L is dropped).
Actually the data necessary for the routine will be set by the following parameters:
Dimension: This parameter includes the dimensions of the status vector, the measurement vector and the con-
troller vector. Dimension thereby is a vector [n,m,p], whereby n indicates the number of the state variables,
m the number of the measurement values and p the number of the controller members. For a system without
determining control (i.e. without influence "from outside") therefore [n,m,0] has to be passed.
Model: This parameter includes the lined up matrices (vectors) A,C,Q,G,u and (if necessary) L having been stored
in row-major order. Model therefore is a vector of the length n × n + n × m + n × n + n × p + p[+n × m].
The last summand is dropped, in case the system errors and measurement errors are not correlated, i.e. there
is no value for L.
Measurement: This parameter includes the matrix R which has been stored in row-major order, and the mea-
surement vector y lined up. Measurement therefore is a vector of the dimension m × m + m.
PredictionIn / PredictionOut: These two parameters include the matrix P̂ (the extrapolation-error co-
variance matrix) which has been stored in row-major order and the extrapolation vector x̂ lined up. This
means, they are vectors of the length n × n + n. PredictionIn therefore is an input parameter, which
must contain P̂ (t) and x̂(t) at the current time t. With PredictionOut the routine returns the correspond-
ing predictions P̂ (t + 1) and x̂(t + 1).
Estimate: With this parameter the routine returns the matrix P̃ (the estimation-error covariance matrix) which
has been stored in row-major order and the estimated state x̃ lined up. Estimate therefore is a vector of
the length n × n + n.
Please note that the covariance matrices (Q, R, P̂ , P̃ ) must of course be symmetric.
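The row-major "lining up" of the matrices into the Model tuple can be illustrated in Python (a hypothetical helper, consistent with the length formula n × n + n × m + n × n + n × p + p [+ n × m] given above):

```python
import numpy as np

def pack_model(A, C, Q, G=None, u=None, L=None):
    """Line up A, C, Q, optionally (G, u) and L in row-major order,
    as expected in the Model parameter of filter_kalman.

    A: n x n, C: m x n, Q: n x n, G: n x p, u: p, L: n x m.
    The resulting tuple has length n*n + n*m + n*n [+ n*p + p] [+ n*m].
    """
    parts = [A, C, Q]
    if G is not None:
        parts += [G, u]
    if L is not None:
        parts.append(L)
    # ravel() flattens each matrix row by row (row-major order).
    return [float(x) for m in parts for x in np.asarray(m).ravel()]
```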
Parameter
. Dimension (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
The dimensions of the state vector, the measurement vector, and the controller vector.
Default Value : [3,1,0]
Typical range of values : 0 ≤ Dimension ≤ 30
. Model (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The lined up matrices A, C, Q, possibly G and u, and if necessary L which have been stored in row-major
order.
Default Value : [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
Typical range of values : 0.0 ≤ Model ≤ 10000.0
. Measurement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The matrix R stored in row-major order and the measurement vector y lined up.
Default Value : [1.2,1.0]
Typical range of values : 0.0 ≤ Measurement ≤ 10000.0
. PredictionIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The matrix P̂ (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x̂
lined up.
Default Value : [0.0,0.0,0.0,0.0,180.5,0.0,0.0,0.0,100.0,0.0,100.0,0.0]
Typical range of values : 0.0 ≤ PredictionIn ≤ 10000.0
. PredictionOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The matrix P̂ (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x̂
lined up.
. Estimate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The matrix P̃ (the estimation-error covariances) stored in row-major order and the estimated state x̃ lined up.
Example
// Typical procedure:
// 1. Initialize the variables that describe the model, e.g. with
read_kalman(’kalman.init’,Dim,Mod,Meas,Pred)
// 2. Generate the first measurements (typically from the first image of
//    an image sequence) with an appropriate problem-specific routine
//    (here the fictitious routine extract_features):
extract_features(Image1,Meas,Meas1)
// 3. First Kalman filtering:
filter_kalman(Dim,Mod,Meas1,Pred,Pred1,Est1)
// 4. Use the estimate (and, if needed, the prediction) with a
//    problem-specific routine (here use_est):
use_est(Est1)
// 5. Get the next measurements (e.g. from the next image):
extract_next_features(Image2,Meas1,Meas2)
// (if necessary: update of the model parameters; here a constant model)
// 6. Second Kalman filtering:
filter_kalman(Dim,Mod,Meas2,Pred1,Pred2,Est2)
use_est(Est2)
extract_next_features(Image3,Meas2,Meas3)
// etc.
Result
If the parameter values are correct, the operator filter_kalman returns the value 2 (H_MSG_TRUE). Other-
wise an exception handling will be raised.
Parallelization Information
filter_kalman is reentrant and processed without parallelization.
Possible Predecessors
read_kalman, sensor_kalman
Possible Successors
update_kalman
See also
read_kalman, update_kalman, sensor_kalman
References
W. Hartinger: "Entwurf eines anwendungsunabhängigen Kalman-Filters mit Untersuchungen im Bereich der
Bildfolgenanalyse"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof.
Radig; 1991.
R.E. Kalman: "A New Approach to Linear Filtering and Prediction Problems"; Transactions ASME, Ser. D: Jour-
nal of Basic Engineering; Vol. 82, pp. 34-45; 1960.
R.E. Kalman, P.L. Falb, M.A. Arbib: "Topics in Mathematical System Theory"; McGraw-Hill Book Company, New
York; 1969.
K.-P. Karmann, A. von Brandt: "Moving Object Recognition Using an Adaptive Background Memory"; Time-
Varying Image Processing and Moving Object Recognition 2 (ed.: V. Cappellini), Proc. of the 3rd International
Workshop, Florence, Italy, May 29-31, 1989; Elsevier, Amsterdam; 1990.
Module
Foundation
Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Estimate of the initial state of the system: state x0 and corresponding covariance matrix P0
Many systems do not need input "from outside", and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). The characteristics mentioned above can be
stored in an ASCII file and then be read with the help of the operator read_kalman. This ASCII file must
have the following structure:
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
[ + matrix G + vector u ]
[ + matrix L ]
+ matrix R
[ + matrix P0 ]
[ + vector x0 ]
Dimension: This parameter includes the dimensions of the status vector, the measurement vector and the con-
troller vector. Dimension thereby is a vector [n,m,p], whereby n indicates the number of the state variables,
m the number of the measurement values and p the number of the controller members. For a system without
determining control (i.e. without influence "from outside") therefore Dimension = [n,m,0].
Model: This parameter includes the lined up matrices (vectors) A, C, Q, G, u and (if necessary) L having been
stored in row-major order. Model therefore is a vector of the length n×n+n×m+n×n+n×p+p[+n×m].
The last summand is dropped, in case the system errors and measurement errors are not correlated, i.e. there
is no value for L.
Measurement: This parameter includes the matrix R which has been stored in row-major order.
Measurement therefore is a vector of the dimension m × m.
Prediction: This parameter includes the matrix P0 (the error covariance matrix of the initial state estimate)
and the initial state estimate x0 lined up. This means, it is a vector of the length n × n + n.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Description file for a Kalman filter.
Default Value : ’kalman.init’
. Dimension (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
The dimensions of the state vector, the measurement vector and the controller vector.
. Model (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The lined up matrices A, C, Q, possibly G and u, and if necessary L stored in row-major order.
. Measurement (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The matrix R stored in row-major order.
. Prediction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The matrix P0 (error covariance matrix of the initial state estimate) stored in row-major order and the initial
state estimate x0 lined up.
Example
Result
If the description file is readable and correct, the operator read_kalman returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
read_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
update_kalman, filter_kalman, sensor_kalman
Module
Foundation
Each filtering is based on certain measurement values. How these values are extracted from images or
sensor data depends strongly on the individual application and is therefore entirely up to the user. However,
the operator sensor_kalman allows an interactive input of (fictitious) measurement values y and the corre-
sponding measurement-error covariance matrix R. This facilitates, in particular, the testing of Kalman filters
during development.
The parameters MeasurementIn and MeasurementOut include the matrix R which has been stored in
row-major order and the measurement vector y lined up, i.e. they are vectors of the length Dimension ×
Dimension + Dimension.
Parameter
Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂
Many systems do not need input "from outside" and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). Some of the characteristics mentioned above
may change dynamically (from one iteration to the next). The operator update_kalman serves to modify parts
of the system according to an update file (ASCII) with the following structure (see also read_kalman):
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
+ matrix G + vector u
+ matrix L
+ matrix R
DimensionIn / DimensionOut: These parameters include the dimensions of the state vector, measurement
vector and controller vector and therefore are vectors [n,m,p], whereby n indicates the number of the state
variables, m the number of the measurement values and p the number of the controller members. n and m are
invariant for a given system, i.e. they must not differ from the corresponding input values of the update file. For
a system without influence "from outside", p = 0.
ModelIn / ModelOut: These parameters include the lined up matrices (vectors) A, C, Q, G, u and if necessary
L which have been stored in row-major order. ModelIn / ModelOut therefore are vectors of the length
n × n + n × m + n × n + n × p + p[+n × m]. The last summand is dropped if system errors and measurement
errors are not correlated, i.e. no value has been set for L.
MeasurementIn / MeasurementOut: These parameters include the matrix R stored in row-major order, and
therefore are vectors of the dimension m × m.
Parameter
Result
If the update file is readable and correct, the operator update_kalman returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
update_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
read_kalman, filter_kalman, sensor_kalman
Module
Foundation
17.14 Measure
close_all_measures ( : : : )
close_measure ( : : MeasureHandle : )
fuzzy_measure_pairing serves to extract straight edge pairs that lie perpendicular to the major axis of a
rectangle or an annular arc. In addition to measure_pos it uses fuzzy member functions to evaluate and select
the edge pairs.
The extraction algorithm is identical to fuzzy_measure_pos. In addition, the edges are grouped to pairs: If
Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the major axis of the
rectangle or the annular arc are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the cor-
responding edges with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond.
If Transition = ’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge
defines the transition for RowEdgeFirst and ColumnEdgeFirst.
Having extracted subpixel edge locations, the edges are paired. The features of a possible edge pair are evaluated
by a fuzzy function, set by set_fuzzy_measure. Which edge pairs are selected can be determined with the
parameter FuzzyThresh, which constitutes a threshold on the weight over all fuzzy sets, i.e., the geometric
mean of the weights of the defined fuzzy membership functions. As an extension to fuzzy_measure_pairs,
the pairing algorithm can be restricted by Pairing. Currently only ’no_restriction’ is available, which returns all
possible edge pairs, allowing interleaving and inclusion of pairs. Finally, the best scored NumPairs edge pairs
are returned; a value of 0 returns all found edge combinations.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc. The
corresponding edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond, the fuzzy scores in
FuzzyScore. In addition, the distance between each edge pair is returned in IntraDistance, corresponding
to the distance between EdgeFirst[i] and EdgeSecond[i].
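The overall score of a candidate pair, the geometric mean of the individual fuzzy weights compared against FuzzyThresh, can be sketched in Python (hypothetical helper names; the actual pairing logic of the operator is more involved):

```python
import math

def pair_score(weights):
    """Geometric mean of the weights of all defined fuzzy membership
    functions; a pair is kept if this score reaches FuzzyThresh."""
    if any(w <= 0.0 for w in weights):
        return 0.0
    return math.exp(sum(math.log(w) for w in weights) / len(weights))

def select_pairs(candidates, fuzzy_thresh, num_pairs=0):
    """candidates: list of (pair, [weights]); keep pairs whose score
    reaches fuzzy_thresh, best scored first; num_pairs == 0 keeps all."""
    scored = [(pair_score(w), p) for p, w in candidates]
    kept = sorted((s, p) for s, p in scored if s >= fuzzy_thresh)[::-1]
    if num_pairs > 0:
        kept = kept[:num_pairs]
    return kept
```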
Attention
fuzzy_measure_pairing only returns meaningful results if the assumptions that the edges are straight and
perpendicular to the major axis of the rectangle or annular arc are fulfilled. Thus, it should not be used to extract
edges from curved objects, for example. Furthermore, the user should ensure that the rectangle or annular arc is
as close to perpendicular as possible to the edges in the image. Additionally, Sigma must not become larger than
approx. 0.5 * Length1 (for Length1 see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pairing ignores the domain of Image for efficiency reasons.
If certain regions in the image should be excluded from the measurement, a new measure object with appropriately
modified parameters should be generated.
Parameter
HALCON 8.0.2
1240 CHAPTER 17. TOOLS
The edge extraction algorithm is described in the documentation of the operator measure_pos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
Interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For Interpolation = ’bilinear’, bilinear interpolation is used,
while for Interpolation = ’bicubic’, bicubic interpolation is used.
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator gen_measure_arc. For this, an optimized data structure, a so-called
measure object, is constructed and returned in MeasureHandle. The size of the images in which measurements
will be performed must be specified in the parameters Width and Height.
The system parameter ’int_zooming’ (see set_system) affects the accuracy and speed of the calculations used
to construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
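The trade-off between the two modes can be illustrated with a minimal fixed-point sketch. The scale factor and layout below are assumptions chosen for illustration; they do not describe HALCON's internal representation:

```c
#include <stdint.h>

/* Illustrative fixed-point representation: values are scaled by
 * 2^16 and stored as 32-bit integers, so fractional coordinates
 * can be handled with integer arithmetic only. */
#define FP_SHIFT 16
#define FP_ONE   (1 << FP_SHIFT)

static int32_t fp_from_double(double x)
{
    return (int32_t)(x * FP_ONE + (x >= 0 ? 0.5 : -0.5));
}

static double fp_to_double(int32_t x)
{
    return (double)x / FP_ONE;
}

/* Multiply two fixed-point numbers; the intermediate product needs
 * 64 bits before shifting back to the fixed-point scale. */
static int32_t fp_mul(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * b) >> FP_SHIFT);
}
```

The quantization to 1/65536 is what costs a little geometric accuracy, while the integer operations are what make this mode faster than floating point on many platforms.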
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.
Parameter
-1.57080, -0.78540, 0.78540, 1.57080, 2.35619, 3.14159, 3.92699, 4.71239, 5.49779, 6.28318}
Typical range of values : -6.28318 ≤ AngleExtent ≤ 6.28318 (lin)
Minimum Increment : 0.03142
Recommended Increment : 0.31416
Restriction : AngleExtent ≠ 0.0
. AnnulusRadius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Radius (half width) of the annulus.
Default Value : 10.0
Suggested values : AnnulusRadius ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ AnnulusRadius ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : AnnulusRadius ≤ Radius
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the image to be processed subsequently.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Typical range of values : 0 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the image to be processed subsequently.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Typical range of values : 0 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation to be used.
Default Value : ’nearest_neighbor’
List of values : Interpolation ∈ {’nearest_neighbor’, ’bilinear’, ’bicubic’}
. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; integer
Measure object handle.
Result
If the parameter values are correct, the operator gen_measure_arc returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
gen_measure_arc is reentrant and processed without parallelization.
Possible Predecessors
draw_circle
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing
Alternatives
edges_sub_pix
See also
gen_measure_rectangle2
Module
1D Metrology
gen_measure_rectangle2 prepares the extraction of straight edges which lie perpendicular to the major
axis of a rectangle. The center of the rectangle is passed in the parameters Row and Column, the direction of
the major axis of the rectangle in Phi, and the length of the two axes, i.e., half the diameter of the rectangle, in
Length1 and Length2.
The edge extraction algorithm is described in the documentation of the operator measure_pos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
Interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For Interpolation = ’bilinear’, bilinear interpolation is used,
while for Interpolation = ’bicubic’, bicubic interpolation is used.
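For illustration, the bilinear case can be sketched as follows. This is a simplified stand-in for the interpolation described above, not the operator's internal code; the caller must ensure that the 2x2 neighborhood lies inside the image (see the margin requirement in the Attention section):

```c
/* Bilinear interpolation of a gray value at the subpixel position
 * (row, col) in a byte image stored row-major with the given width.
 * The four neighboring pixels are weighted by the fractional parts
 * of the coordinates. */
double interpolate_bilinear(const unsigned char *img, int width,
                            double row, double col)
{
    int r0 = (int)row, c0 = (int)col;     /* top-left neighbor */
    double dr = row - r0, dc = col - c0;  /* fractional parts  */
    double g00 = img[r0 * width + c0];
    double g01 = img[r0 * width + c0 + 1];
    double g10 = img[(r0 + 1) * width + c0];
    double g11 = img[(r0 + 1) * width + c0 + 1];
    return (1 - dr) * ((1 - dc) * g00 + dc * g01)
         +      dr  * ((1 - dc) * g10 + dc * g11);
}
```

Nearest-neighbor interpolation would simply round (row, col) to the closest pixel; bicubic interpolation uses a 4x4 neighborhood instead, which is why it needs a two-pixel margin.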
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator gen_measure_rectangle2. For this, an optimized data structure,
a so-called measure object, is constructed and returned in MeasureHandle. The size of the images in which
measurements will be performed must be specified in the parameters Width and Height.
The system parameter ’int_zooming’ (see set_system) affects the accuracy and speed of the calculations used
to construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.
Parameter
the rectangle are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the corresponding edges
with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond. If Transition =
’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge defines the transition
for RowEdgeFirst and ColumnEdgeFirst. That is, depending on the position of the measure object, edge
pairs with a light-dark-light transition or edge pairs with a dark-light-dark transition are returned. This is suitable,
e.g., for measuring objects whose brightness differs from that of the background.
If more than one consecutive edge with the same transition is found, the first one is used as a pair element. This
behavior may cause problems in applications in which the threshold Threshold cannot be selected high enough
to suppress consecutive edges of the same transition. For these applications, a second pairing mode exists that only
selects the respective strongest edges of a sequence of consecutive rising and falling edges. This mode is selected
by appending ’_strongest’ to any of the above modes for Transition, e.g., ’negative_strongest’. Finally, it is
possible to select which edge pairs are returned. If Select is set to ’all’, all edge pairs are returned. If it is set to
’first’, only the first of the extracted edge pairs is returned, while if it is set to ’last’, only the last one is returned.
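The ’_strongest’ variants can be pictured as a preprocessing step on the sequence of signed edge amplitudes: from each run of consecutive edges with the same transition, only the strongest one survives. The following sketch uses an invented interface for illustration:

```c
#include <math.h>

/* From each run of consecutive edges with the same transition
 * (i.e., the same sign of the signed amplitude), keep only the
 * strongest edge. Writes the surviving indices to 'kept' and
 * returns their number. */
int select_strongest(const double *amplitude, int num, int *kept)
{
    int num_kept = 0;
    int i = 0;
    while (i < num) {
        int best = i;
        int j = i + 1;
        /* extend the run while the transition sign stays the same */
        while (j < num && (amplitude[j] > 0) == (amplitude[i] > 0)) {
            if (fabs(amplitude[j]) > fabs(amplitude[best]))
                best = j;
            ++j;
        }
        kept[num_kept++] = best;
        i = j;
    }
    return num_kept;
}
```

After this reduction, the remaining edges alternate in transition direction, so the subsequent pairing is unambiguous even when Threshold cannot suppress weak repeated edges.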
The extracted edges are returned as single points which lie on the major axis of the rectangle. The corresponding
edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond. In addition, the distance between
each edge pair is returned in IntraDistance and the distance between consecutive edge pairs is returned
in InterDistance. Here, IntraDistance[i] corresponds to the distance between EdgeFirst[i] and EdgeSecond[i],
while InterDistance[i] corresponds to the distance between EdgeSecond[i] and EdgeFirst[i+1], i.e., the
tuple InterDistance contains one element less than the tuples of the edge pairs.
Attention
measure_pairs only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_pairs ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement, a new measure object with appropriately modified
parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; integer
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default Value : 30.0
Suggested values : Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of gray value transition that determines how edges are grouped to edge pairs.
Default Value : ’all’
List of values : Transition ∈ {’all’, ’positive’, ’negative’, ’all_strongest’, ’positive_strongest’,
’negative_strongest’}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of edge pairs.
Default Value : ’all’
List of values : Select ∈ {’all’, ’first’, ’last’}
(Transition = ’all’). Finally, it is possible to select which edge points are returned. If Select is set to ’all’,
all edge points are returned. If it is set to ’first’, only the first of the extracted edge points is returned, while if it is
set to ’last’, only the last one is returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle or arc in
(RowEdge,ColumnEdge). The corresponding edge amplitudes are returned in Amplitude. In addition, the
distance between consecutive edge points is returned in Distance. Here, Distance[i] corresponds to the distance
between Edge[i] and Edge[i+1], i.e., the tuple Distance contains one element less than the tuples RowEdge and
ColumnEdge.
Attention
measure_pos only returns meaningful results if the assumptions that the edges are straight and perpendicular to
the major axis of the rectangle or arc are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle or arc is as close to perpendicular as possible
to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1
see gen_measure_rectangle2).
It should be kept in mind that measure_pos ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement, a new measure object with appropriately modified
parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; integer
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default Value : 30.0
Suggested values : Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Light/dark or dark/light edge.
Default Value : ’all’
List of values : Transition ∈ {’all’, ’positive’, ’negative’}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of end points.
Default Value : ’all’
List of values : Select ∈ {’all’, ’first’, ’last’}
. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the edge.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the edge.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the edge (with sign).
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edges.
Result
If the parameter values are correct, the operator measure_pos returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
measure_pos is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pos
See also
measure_pairs, fuzzy_measure_pairs, fuzzy_measure_pairing
Module
1D Metrology
Extracting points with a particular gray value along a rectangle or an annular arc.
measure_thresh extracts points for which the gray value within a one-dimensional gray value profile is equal
to the specified threshold Threshold. The gray value profile is projected onto the major axis of the measure
rectangle, which is passed with the parameter MeasureHandle, so the threshold points calculated within the
gray value profile correspond to certain image coordinates on the rectangle’s major axis. These coordinates are
returned as the operator’s results in RowThresh and ColumnThresh.
If the gray value profile intersects the threshold line several times, the parameter Select determines which
values to return. Possible settings are ’first’, ’last’, ’first_last’ (first and last), or ’all’. For the last two cases,
Distance returns the distances between the calculated points.
The gray value profile is created by averaging the gray values along all line segments, which are defined by the
measure rectangle as follows:
1. The segments are perpendicular to the major axis of the rectangle,
2. they have an integer distance to the center of the rectangle,
3. the rectangle bounds the segments.
For every line segment, the average of the gray values of all points with an integer distance to the major axis is
calculated. Due to translation and rotation of the measure rectangle with respect to the image coordinates the input
image Image is in general sampled at subpixel positions.
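The computation can be pictured as a subpixel threshold-crossing search on the averaged profile. The sketch below is a simplified illustration operating on a plain 1D array, not the operator's internal code, which works on the projected profile of the measure object:

```c
/* Find subpixel positions where a 1D gray value profile crosses a
 * given threshold, using linear interpolation between neighboring
 * samples. Writes up to max_pos positions (in profile coordinates)
 * and returns their number. */
int threshold_crossings(const double *profile, int len,
                        double threshold, double *positions,
                        int max_pos)
{
    int n = 0;
    for (int i = 0; i + 1 < len && n < max_pos; ++i) {
        double a = profile[i] - threshold;
        double b = profile[i + 1] - threshold;
        if (a == 0.0)
            positions[n++] = (double)i;        /* sample hits threshold */
        else if (a * b < 0.0)                  /* sign change: crossing  */
            positions[n++] = i + a / (a - b);  /* linear interpolation   */
    }
    return n;
}
```

For Select = ’first’ or ’last’, only positions[0] or positions[n-1] would be reported; ’first_last’ keeps both ends.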
Since this involves some calculations which can be used repeatedly in several projections, the operator
gen_measure_rectangle2 is used to perform these calculations only once in advance. Here, the measure
object MeasureHandle is generated and different interpolation schemes can be selected.
Attention
measure_thresh only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_thresh ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement, a new measure object with appropriately modified
parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; integer
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.0, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Threshold.
Default Value : 128.0
Typical range of values : 0 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 1
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of points.
Default Value : ’all’
List of values : Select ∈ {’all’, ’first’, ’last’, ’first_last’}
be defined by one function each. Such a specified feature is called a fuzzy set. Specifying no function for a fuzzy
set means that this feature is not used for the final edge evaluation. Setting a second fuzzy function for a set
discards the first defined function and replaces it by the second one. A previously defined fuzzy member function
can be discarded completely by reset_fuzzy_measure.
Functions for five different fuzzy set types, selected by the SetType parameter, can be defined, the subtypes of a
set being mutually exclusive:
• ’contrast’ will use the fuzzy function to evaluate the amplitudes of the edge candidates. When extracting
edge pairs, the fuzzy evaluation is obtained by the geometric average of the fuzzy contrast scores of both
edges.
• The fuzzy function of ’position’ evaluates the distance of each edge candidate to the reference point of the
measure object, generated by gen_measure_arc or gen_measure_rectangle2. By default, the reference
point is located at the beginning of the one-dimensional gray value profile, whereas ’position_center’ or
’position_end’ sets the reference point to the middle or the end of the profile instead. If the fuzzy position
evaluation depends on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’
sets the reference point at the position of the first/last extracted edge. When extracting edge pairs, the position
of a pair is referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the distance of each edge pair to the reference point of
the measure object. The position of a pair is defined by the center point between both edges. The object’s
reference point can be set by ’position_pair_center’, ’position_pair_end’, and ’position_first_pair’ /
’position_last_pair’, respectively. Contrary to ’position’, this set is only used by fuzzy_measure_pairs /
fuzzy_measure_pairing.
• ’size’ denotes a fuzzy set that evaluates the normed distance of the two edges of a pair in pixels. This set
is only used by fuzzy_measure_pairs / fuzzy_measure_pairing. Specifying an upper bound
for the size by terminating the member function with a corresponding fuzzy value of 0.0 will speed up
fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible pairs need to be considered.
• ’gray’ sets a fuzzy function to weight the mean projected gray value between two edges of a pair. This set is
only used by fuzzy_measure_pairs / fuzzy_measure_pairing.
A fuzzy member function is defined as a piecewise linear function by at least two pairs of values, sorted in
ascending order by their x value. The x values represent the edge feature and must lie within the parameter space
of the set type, i.e., in case of the ’contrast’ and ’gray’ features and, e.g., byte images, within the range 0.0 ≤ x ≤
255.0. In case of ’size’, x has to satisfy 0.0 ≤ x, whereas in case of ’position’, x can be any real number. The
y values of the fuzzy function represent the weight of the corresponding feature value and have to satisfy the
range 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined by the smallest and the greatest x value, the
y values of the interval borders are continued constantly. Such fuzzy member functions can be generated by
create_funct_1d_pairs.
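Such a function, piecewise linear between the given pairs and constant outside their interval, can be sketched as follows (an illustrative evaluator, not HALCON's internal representation):

```c
/* Evaluate a piecewise linear fuzzy member function given as pairs
 * (x[i], y[i]), i = 0..n-1, sorted by ascending x. Outside the
 * interval [x[0], x[n-1]] the border y values are continued
 * constantly, as described above. */
double fuzzy_eval(const double *x, const double *y, int n, double v)
{
    if (v <= x[0])     return y[0];      /* constant continuation left  */
    if (v >= x[n - 1]) return y[n - 1];  /* constant continuation right */
    for (int i = 0; i + 1 < n; ++i) {
        if (v <= x[i + 1]) {
            /* linear interpolation within segment [x[i], x[i+1]] */
            double t = (v - x[i]) / (x[i + 1] - x[i]);
            return y[i] + t * (y[i + 1] - y[i]);
        }
    }
    return y[n - 1]; /* not reached for sorted input */
}
```

A triangular membership function for a 'size' set, for example, would be given by the pairs (10, 0.0), (20, 1.0), (30, 0.0); terminating with a y value of 0.0 also provides the upper bound mentioned above.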
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameter
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; integer
Measure object handle.
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of the fuzzy set.
Default Value : ’contrast’
List of values : SetType ∈ {’position’, ’position_center’, ’position_end’, ’position_first_edge’,
’position_last_edge’, ’position_pair_center’, ’position_pair_end’, ’position_first_pair’, ’position_last_pair’,
’size’, ’gray’, ’contrast’}
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; real / integer
Fuzzy member function.
Example
Parallelization Information
set_fuzzy_measure is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs,
transform_funct_1d
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
Alternatives
set_fuzzy_measure_norm_pair
See also
reset_fuzzy_measure
Module
1D Metrology
• ’size’ denotes a fuzzy set that evaluates the normalized distance d of the two edges of a pair in pixels:
x = d / s (x ≥ 0).
Specifying an upper bound x_max for the size by terminating the member function with a corresponding
fuzzy value of 0.0 will speed up fuzzy_measure_pairs / fuzzy_measure_pairing because not
all possible pairs must be considered. Additionally, this fuzzy set can also be specified as a normalized size
difference by ’size_diff’,
x = (s − d) / s (x ≤ 1),
or as a normalized absolute size difference,
x = |s − d| / s (0 ≤ x ≤ 1).
• The fuzzy function of ’position’ evaluates the signed distance p of each edge candidate to the reference point
of the measure object, generated by gen_measure_arc or gen_measure_rectangle2:
x = p / s.
By default, the reference point is located at the beginning of the one-dimensional gray value profile, whereas
’position_center’ or ’position_end’ sets the reference point to the middle or the end of the profile instead.
If the fuzzy position evaluation depends on the position of the object along the profile, ’position_first_edge’ /
’position_last_edge’ sets the reference point at the position of the first/last extracted edge. When extracting
edge pairs, the position of a pair is referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the signed distance of each edge pair to the reference point
of the measure object. The position of a pair is defined by the center point between both edges. The object’s
reference point can be set by ’position_pair_center’, ’position_pair_end’, and ’position_first_pair’ /
’position_last_pair’, respectively. Contrary to ’position’, this set is only used by fuzzy_measure_pairs /
fuzzy_measure_pairing.
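The size normalizations above, with pair distance d and expected pair size s (assumed s > 0 here), amount to the following one-liners, shown purely for illustration:

```c
#include <math.h>

/* Normalized size (x >= 0), normalized size difference (x <= 1),
 * and normalized absolute size difference (0 <= x <= 1). */
double size_norm(double d, double s)          { return d / s; }
double size_diff_norm(double d, double s)     { return (s - d) / s; }
double size_abs_diff_norm(double d, double s) { return fabs(s - d) / s; }
```

Note that the signed variant distinguishes pairs that are too wide (x < 0) from pairs that are too narrow (x > 0), whereas the absolute variant treats both deviations alike.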
A normalized fuzzy member function is defined as a piecewise linear function by at least two pairs of values,
sorted in ascending order by their x value. The y values of the fuzzy function represent the weight of the
corresponding feature value and must lie within the range 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined
by the smallest and the greatest x value, the y values of the interval borders are continued constantly. Such fuzzy
member functions can be generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameter
Parallelization Information
set_fuzzy_measure_norm_pair is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs
Possible Successors
fuzzy_measure_pairs, fuzzy_measure_pairing
Alternatives
transform_funct_1d, set_fuzzy_measure
See also
reset_fuzzy_measure
Module
1D Metrology
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing, measure_thresh
Alternatives
gen_measure_rectangle2, gen_measure_arc
See also
close_measure
Module
1D Metrology
17.15 OCV
close_all_ocvs ( : : : )
close_ocv ( : : OCVHandle : )
read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);
Result
close_ocv returns 2 (H_MSG_TRUE) if the handle is valid. Otherwise, an exception is raised.
Parallelization Information
close_ocv is processed completely exclusively without parallelization.
Possible Predecessors
read_ocv, create_ocv_proj
See also
close_ocr
Module
OCR/OCV
create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");
Result
create_ocv_proj returns 2 (H_MSG_TRUE) if the parameters are correct. Otherwise, an exception
is raised.
Parallelization Information
create_ocv_proj is processed completely exclusively without parallelization.
Possible Successors
traind_ocv_proj, write_ocv, close_ocv
Alternatives
read_ocv
See also
create_ocr_class_box
Module
OCR/OCV
Possible Successors
close_ocv
See also
create_ocv_proj
Module
OCR/OCV
read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);
Result
read_ocv returns 2 (H_MSG_TRUE) if the file is correct. Otherwise, an exception is raised.
Parallelization Information
read_ocv is processed completely exclusively without parallelization.
Possible Predecessors
write_ocv
Possible Successors
do_ocv_simple, close_ocv
See also
read_ocr
Module
OCR/OCV
a pattern consists of an image with a reduced domain (ROI) for the area of the pattern. Note that the pattern should
not only contain foreground pixels (e.g., the dark pixels of a character) but also background pixels. This can be
achieved, e.g., by using the smallest surrounding rectangle of the pattern. Without this context, an evaluation of the
pattern is not possible.
If more than one pattern has to be trained, this can be achieved by multiple calls (one for each pattern) or by calling
traind_ocv_proj once with all patterns and a tuple of the corresponding names. The result will be the same
in both cases. However, using multiple calls will normally result in a longer execution time than using one call
with all patterns.
Parameter
create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");
Result
traind_ocv_proj returns 2 (H_MSG_TRUE) if the handle and the training pattern(s) are correct. Otherwise,
an exception is raised.
Parallelization Information
traind_ocv_proj is processed completely exclusively without parallelization.
Possible Predecessors
write_ocr_trainf, create_ocv_proj, read_ocv, threshold, connection,
select_shape
Possible Successors
close_ocv
See also
traind_ocr_class_box
Module
OCR/OCV
Parameter
. OCVHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; integer
Handle of the OCV tool to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file where the tool has to be saved.
Default Value : ’test_ocv’
Result
write_ocv returns 2 (H_MSG_TRUE) if the data is correct and the file can be written. Otherwise, an exception
is raised.
Parallelization Information
write_ocv is reentrant and processed without parallelization.
Possible Predecessors
traind_ocv_proj
Possible Successors
close_ocv
See also
write_ocr
Module
OCR/OCV
17.16 Shape-from
Parameter
. MultiFocusImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte
Multichannel gray image consisting of multiple focus levels.
. Depth (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte
Depth image.
. Confidence (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte
Confidence of depth estimation.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Filter used to find sharp pixels.
Default Value : ’highpass’
List of values : Filter ∈ {’highpass’, ’bandpass’}
. Selection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Method used to find sharp pixels.
Default Value : ’next_maximum’
List of values : Selection ∈ {’next_maximum’, ’local’}
compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,’highpass’,’next_maximum’);
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
threshold(Confidence,HighConfidence,10,255);
reduce_domain(SharpImage,HighConfidence,ConfidentSharp);
Parallelization Information
depth_from_focus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
compose2, compose3, compose4, add_channels, read_image, read_sequence
Possible Successors
select_grayvalues_from_channels, mean_image, binomial_filter, gauss_image,
threshold
See also
count_channels
Module
3D Metrology
estimate_sl_al_lr estimates the Slant of a light source, i.e., the angle between the light source and the
positive z-axis, and the albedo of the surface in the input image Image, i.e. the percentage of light reflected by
the surface, using the algorithm of Lee and Rosenfeld.
Attention
The Albedo is assumed to be constant for the entire surface depicted in the image.
Parameter
estimate_tilt_lr estimates the tilt of a light source, i.e. the angle between the light source and the x-axis
after projection into the xy-plane, from the image Image using the algorithm of Lee and Rosenfeld.
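The slant/tilt parameterization used by these operators corresponds to a unit light-direction vector in the usual way. The following minimal sketch (plain Python, not a HALCON call; the function name is hypothetical) illustrates the relation:

```python
import math

def light_direction(slant_deg, tilt_deg):
    """Unit vector pointing toward the light source: the slant is the
    angle to the positive z-axis, the tilt the angle of the vector's
    projection onto the xy-plane, measured from the x-axis."""
    slant = math.radians(slant_deg)
    tilt = math.radians(tilt_deg)
    return (math.cos(tilt) * math.sin(slant),
            math.sin(tilt) * math.sin(slant),
            math.cos(slant))

# A light source directly above the surface has slant 0:
print(light_direction(0.0, 0.0))  # (0.0, 0.0, 1.0)
```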
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which the tilt is to be estimated.
. Tilt (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Result
estimate_tilt_lr always returns the value 2 (H_MSG_TRUE).
Parallelization Information
estimate_tilt_lr is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte
Shaded input image with at least three channels.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : real
Reconstructed height field.
. Slants (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg-array ; real / integer
Angle between the light sources and the positive z-axis (in degrees).
Default Value : 45.0
Suggested values : Slants ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slants ≤ 180.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Tilts (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg-array ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 45.0
Suggested values : Tilts ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilts ≤ 360.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
Result
If all parameters are correct, phot_stereo returns the value 2 (H_MSG_TRUE). Otherwise, an exception is
raised.
Parallelization Information
phot_stereo is reentrant and processed without parallelization.
Possible Predecessors
estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr, estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology
select_grayvalues_from_channels ( MultichannelImage,
IndexImage : Selected : : )
compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,’highpass’,’next_maximum’);
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
Parallelization Information
select_grayvalues_from_channels is reentrant and automatically parallelized (on tuple level, domain
level).
Possible Predecessors
depth_from_focus, mean_image
Possible Successors
disp_image
See also
count_channels
Module
Foundation
case, the calculated heights must be multiplied by the step width after the call to sfs_pentland. Internally, a
Cartesian coordinate system with the origin in the lower left corner of the image is used. sfs_pentland can
only handle byte images.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default Value : 45.0
Suggested values : Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 45.0
Suggested values : Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default Value : 1.0
Suggested values : Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Typical range of values : 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Albedo ≥ 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of ambient light.
Default Value : 0.0
Suggested values : Ambient ∈ {0.1, 0.5, 1.0}
Typical range of values : 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Ambient ≥ 0.0
Result
If all parameters are correct, sfs_pentland returns the value 2 (H_MSG_TRUE). Otherwise, an exception is
raised.
Parallelization Information
sfs_pentland is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology
shade_height_field computes a shaded image from the height field ImageHeight as if the image were
illuminated by an infinitely far away light source. It is assumed that the surface described by the height field has
Lambertian reflection properties determined by Albedo and Ambient. The parameter Shadows determines
whether shadows are to be calculated.
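The underlying reflection model can be sketched for a single surface point as follows (plain Python, not HALCON code; a simplified form without the shadow computation and derivative estimation the operator performs):

```python
def lambertian(normal, light, albedo, ambient):
    """Lambertian brightness: albedo times the clamped cosine between
    the unit surface normal and the unit light direction, plus the
    ambient term."""
    n_dot_l = sum(n * l for n, l in zip(normal, light))
    return albedo * max(0.0, n_dot_l) + ambient

# Flat surface lit from straight above: full diffuse term plus ambient.
print(lambertian((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 0.8, 0.1))  # ~0.9
```

A point whose normal faces away from the light receives only the ambient term.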
Attention
shade_height_field assumes that the heights are given on a lattice with step width 1. If this is not the
case, the heights must be divided by the step width before the call to shade_height_field. Otherwise, the
derivatives used internally to compute the orientation of the surface will be estimated too steep or too flat. Example:
the height field is given on 100*100 points on the square [0,1]*[0,1]. Then the step width is 1/100, and hence the
heights must be divided by 1/100 (i.e., multiplied by 100) first. Internally, a Cartesian coordinate system with the
origin in the lower left corner of the image is used.
Parameter
Module
Foundation
17.17 Stereo
binocular_calibration ( : : NX, NY, NZ, NRow1, NCol1, NRow2, NCol2,
StartCamParam1, StartCamParam2, NStartPose1, NStartPose2,
EstimateParams : CamParam1, CamParam2, NFinalPose1, NFinalPose2,
RelPose, Errors )
Parameter
Rows1 := []
Cols1 := []
StartPoses1 := []
Rows2 := []
Cols2 := []
StartPoses2 := []
Result
binocular_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired
parameters have been determined by the minimization algorithm. Otherwise, an exception is raised.
Parallelization Information
binocular_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, write_cam_par, pose_to_hom_mat3d, disp_caltab,
gen_binocular_rectification_map
See also
find_caltab, sim_caltab, read_cam_par, create_pose, convert_pose_type,
read_pose, hom_mat3d_to_pose, create_caltab, binocular_disparity,
binocular_distance
Module
3D Metrology
with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = 1/N · Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} g(r', c'): mean value within the correlation window of width 2m + 1 and height 2n + 1.
It should be noted that the quality of correlation decreases with rising S for the methods ’sad’ and ’ssd’ (the best
quality value is 0), but increases for the method ’ncc’ (the best quality value is 1.0).
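As an illustration of the three measures (a plain-Python sketch with numpy, not the HALCON implementation), the scores for a single window position can be computed directly from the definitions:

```python
import numpy as np

def sad(w1, w2):
    # Summed Absolute Differences, normalized by the window size N
    return np.abs(w1 - w2).mean()

def ssd(w1, w2):
    # Summed Squared Differences, normalized by N
    return ((w1 - w2) ** 2).mean()

def ncc(w1, w2):
    # Normalized Cross Correlation of the mean-corrected windows
    d1 = w1 - w1.mean()
    d2 = w2 - w2.mean()
    return (d1 * d2).sum() / np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())

w1 = np.array([[10.0, 20.0], [30.0, 40.0]])
w2 = w1 + 5.0          # same texture, globally brighter
print(sad(w1, w2))     # 5.0: nonzero despite identical structure
print(ncc(w1, w2))     # ~1.0: invariant to the brightness offset
```

This also illustrates the note about the score directions: a pure brightness offset is penalized by ’sad’ and ’ssd’ but not by ’ncc’.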
The size of the correlation window, referenced by 2m + 1 and 2n + 1, must be odd and is passed in
MaskWidth and MaskHeight. The search space is confined by the minimum and maximum disparity values
MinDisparity and MaxDisparity. Because pixel values are not defined beyond the image border, the resulting
domain of Disparity and Score is not set along the image border within a margin of height (MaskHeight-1)/2
at the top and bottom border and of width (MaskWidth-1)/2 at the left and right border. For the same reason,
the maximum disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum statistical
spread of gray values within the correlation window can be defined in TextureThresh. This threshold is applied
on both input images Image1 and Image2. In addition, ScoreThresh guarantees the matching quality and
defines the maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting
Filter to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a
concurrent direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_disparity is determined by
NumLevels. Following a coarse-to-fine scheme disparity images of higher levels are computed and segmented
into rectangular subimages of similar disparity to reduce the disparity range on the next lower pyramid level.
TextureThresh and ScoreThresh are applied on every level and the returned domain of the Disparity
and Score images arises from the intersection of the resulting domains of every single level. Generally, pyramid
structures are the more advantageous the more the disparity image can be segmented into regions of homogeneous
disparities and the bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose
important texture information, which can result in deficient disparity values.
Finally, the value ’interpolation’ for parameter SubDisparity performs subpixel refinement of disparities. It is
switched off by setting the parameter to ’none’.
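The coarse-to-fine reduction of the disparity search range can be sketched as follows (a hypothetical helper in plain Python, not HALCON code): on pyramid level l, coordinates and disparities shrink by a factor of 2^l, and each finer level searches only around the doubled result of the coarser level, widened by a small safety margin:

```python
def pyramid_ranges(min_disp, max_disp, num_levels, margin=2):
    """Disparity search ranges per pyramid level, coarsest level first.
    On level l all coordinates (and disparities) are scaled by 1/2**l;
    each finer level searches only around the doubled coarse result,
    widened by a small safety margin."""
    coarsest = num_levels - 1
    ranges = [(min_disp >> coarsest, max_disp >> coarsest)]
    for _ in range(coarsest):
        lo, hi = ranges[-1]
        ranges.append((2 * lo - margin, 2 * hi + margin))
    return ranges

print(pyramid_ranges(0, 64, 3))  # [(0, 16), (-2, 34), (-6, 70)]
```

The total search effort per fine-level pixel stays small even for a large overall disparity range, which is the benefit described above.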
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpos.dat’, RelPose)
Result
binocular_disparity returns 2 (H_MSG_TRUE) if all parameter values are correct. Otherwise, an exception
is raised.
Parallelization Information
binocular_disparity is reentrant and automatically parallelized (on domain level).
Possible Predecessors
map_image
Possible Successors
threshold, disparity_to_distance
Alternatives
binocular_distance
See also
map_image, gen_binocular_rectification_map, binocular_calibration
Module
3D Metrology
binocular_distance computes pixel-wise correspondences between two images of a rectified stereo rig
using correlation techniques. In contrast to binocular_disparity, this operator transforms these pixel
correspondences into distances of the corresponding 3D world points to the stereo camera system.
The algorithm requires a reference image Image1 and a search image Image2 which must be rectified,
i.e., corresponding epipolar lines are parallel and lie on identical image rows (r1 = r2). In case this
assumption is violated the images can be rectified by using the operators binocular_calibration,
gen_binocular_rectification_map and map_image. Hence, given a pixel in the reference image
Image1 the homologous pixel in Image2 is selected by searching along the corresponding row in Image2 and
matching a local neighborhood within a rectangular window of size MaskWidth and MaskHeight. For each
defined reference pixel the pixel correspondences are transformed into distances of the world points defined by the
intersection of the lines of sight of a corresponding pixel pair to the z = 0 plane of the rectified stereo system.
These distances are returned in the single channel image Distance. For this transformation the rectified internal
camera parameters CamParamRect1 of the projective camera 1 and CamParamRect2 of the projective camera
2, and the external parameters RelPoseRect have to be defined. The latter characterizes the relative pose of
both cameras to each other and specifies a point transformation from the rectified camera system 2 to the recti-
fied camera system 1. These parameters can be obtained from the operators binocular_calibration and
gen_binocular_rectification_map. Finally, a quality measure for each distance value is returned in
Score, containing the best result of the matching function S of a reference pixel. For the matching, the gray
values of the original unprocessed images are used.
The matching function used is defined by the parameter Method, which offers three different correlation measures:

• ’sad’: Summed Absolute Differences
S(r, c, d) = 1/N · Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} |g1(r', c') − g2(r', c' + d)|,
with 0 ≤ S(r, c, d) ≤ 255.

• ’ssd’: Summed Squared Differences
S(r, c, d) = 1/N · Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} (g1(r', c') − g2(r', c' + d))²,
with 0 ≤ S(r, c, d) ≤ 65025.

• ’ncc’: Normalized Cross Correlation
S(r, c, d) = [ Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} (g1(r', c') − ḡ1(r, c)) · (g2(r', c' + d) − ḡ2(r, c + d)) ] /
sqrt( [ Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} (g1(r', c') − ḡ1(r, c))² ] · [ Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} (g2(r', c' + d) − ḡ2(r, c + d))² ] )
with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = 1/N · Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} g(r', c'): mean value within the correlation window of width 2m + 1 and height 2n + 1.
It should be noted that the quality of correlation decreases with rising S for the methods ’sad’ and ’ssd’ (the best
quality value is 0), but increases for the method ’ncc’ (the best quality value is 1.0).
The size of the correlation window has to be odd numbered and is passed in MaskWidth and MaskHeight. The
search space is confined by the minimum and maximum disparity value MinDisparity and MaxDisparity.
Due to pixel values not defined beyond the image border the resulting domain of Distance and Score is
generally not set along the image border within a margin of height MaskHeight/2 at the top and bottom border
and of width MaskWidth/2 at the left and right border. For the same reason, the maximum disparity range is
reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum variance
within the correlation window can be defined in TextureThresh. This threshold is applied on both input
images Image1 and Image2. In addition, ScoreThresh guarantees the matching quality and defines the
maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting Filter
to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a concurrent
direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_distance is determined by
NumLevels. Following a coarse-to-fine scheme, disparity images of higher levels are computed and segmented
into rectangular subimages to reduce the disparity range on the next lower pyramid level. TextureThresh and
ScoreThresh are applied on every level, and the returned domain of the Distance and Score images arises
from the intersection of the resulting domains of every single level. Generally, pyramid structures are the more
advantageous the more the distance image can be segmented into regions of homogeneous distance values and the
bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose important texture
information, which can result in deficient distance values.
Finally, the value ’interpolation’ for the parameter SubDistance performs subpixel refinement and thereby
increases the accuracy of the distance values. It is switched off by setting the parameter to ’none’.
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpose.dat’, RelPose)
Result
binocular_distance returns 2 (H_MSG_TRUE) if all parameter values are correct. Otherwise, an exception
is raised.
Parallelization Information
binocular_distance is reentrant and automatically parallelized (on domain level).
Possible Predecessors
map_image
Possible Successors
threshold
Alternatives
binocular_disparity
See also
map_image, gen_binocular_rectification_map, binocular_calibration,
distance_to_disparity, disparity_to_distance
Module
3D Metrology
Transform a disparity value into a distance value in a rectified binocular stereo system.
disparity_to_distance transforms a disparity value into a distance of an object point to the binocular
stereo system. The cameras of this system must be rectified and are defined by the rectified internal parameters
CamParamRect1 of the projective camera 1 and CamParamRect2 of the projective camera 2, and the external
parameters RelPoseRect. The latter specifies the relative pose of both cameras to each other by defining a point
transformation from the rectified camera system 2 to the rectified camera system 1. These parameters can be obtained from
the operators binocular_calibration and gen_binocular_rectification_map. The disparity
value Disparity is defined by the column difference of the image coordinates of two corresponding points
on an epipolar line according to the equation d = c2 − c1 (see also binocular_disparity). This value
characterizes a set of 3D object points of equal distance to a plane being parallel to the rectified image plane of
the stereo system. The distance of these points to the plane z = 0, which is parallel to the rectified image plane and contains
the optical centers of both cameras, is returned in Distance.
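For an idealized rectified rig with focal length f (in pixels) and baseline b, this conversion reduces to z = f · b / d. A minimal numeric sketch under these simplifying assumptions (plain Python with hypothetical parameters, not the HALCON operator, which works on the full camera parameter sets):

```python
def disparity_to_distance(d, focal_px, baseline):
    """Distance z of the fronto-parallel plane producing disparity d
    in an idealized rectified stereo rig: z = f * b / d."""
    if d <= 0.0:
        raise ValueError("disparity must be positive for points in front of the rig")
    return focal_px * baseline / d

# 800-pixel focal length, 0.1 m baseline, 40-pixel disparity:
print(disparity_to_distance(40.0, 800.0, 0.1))  # ~2.0 (meters)
```

Note the reciprocal relation: halving the disparity doubles the distance, which is why depth resolution degrades for far objects.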
Parameter
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Given an image point of the rectified camera 1, specified by its image coordinates (Row1,Col1), and its disparity in
a rectified binocular stereo system, disparity_to_point_3d computes the corresponding three-dimensional
object point. The disparity value Disparity defines the column difference of the image coordinates
of two corresponding features on an epipolar line according to the equation d = c2 − c1. The rectified binocular
camera system is specified by its internal camera parameters CamParamRect1 of the projective camera 1 and
CamParamRect2 of the projective camera 2, and the external parameters RelPoseRect defining the pose of
the rectified camera 2 in relation to the rectified camera 1. These camera parameters can be obtained from the
operators binocular_calibration and gen_binocular_rectification_map. The 3D point is
returned in Cartesian coordinates (X,Y,Z) of the rectified camera system 1.
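Under idealized pinhole assumptions (square pixels, focal length f in pixels, baseline b, principal point (cx, cy); all names and the simplified model are hypothetical, not the HALCON computation on full parameter sets), the triangulation can be sketched as:

```python
def point_3d(row, col, disparity, f_px, b, cx, cy):
    """3D point (x, y, z) in the frame of rectified camera 1:
    depth from the disparity, then back-projection of the pixel
    through the pinhole model."""
    z = f_px * b / disparity
    x = (col - cx) * z / f_px
    y = (row - cy) * z / f_px
    return x, y, z

x, y, z = point_3d(240.0, 400.0, 40.0, 800.0, 0.1, 320.0, 240.0)
print((x, y, z))  # ~(0.2, 0.0, 2.0)
```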
Parameter
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Point transformation from rectified camera 2 to rectified camera 1.
Number of elements : 7
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Distance of a world point to camera 1.
Restriction : 0 < Distance
. Disparity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Disparity between the images of the point.
Result
distance_to_disparity returns 2 (H_MSG_TRUE) if all parameter values are correct. Otherwise, an
exception is raised.
Parallelization Information
distance_to_disparity is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map
Possible Successors
binocular_disparity
Module
3D Metrology
Image coordinates result from 3D direction vectors by multiplication with the camera matrix CamMat:
(col, row, 1)^T = CamMat · (X, Y, 1)^T.
Therefore, the fundamental matrix FMatrix is calculated from the essential matrix EMatrix and the camera
matrices CamMat1, CamMat2 by the following formula:
FMatrix = (CamMat2^-1)^T · EMatrix · CamMat1^-1.
The transformation of the essential matrix to the fundamental matrix goes along with the propagation of the co-
variance matrices CovEMat to CovFMat. If CovEMat is empty CovFMat will be empty too.
The conversion operator essential_to_fundamental_matrix is used especially for a subsequent visu-
alization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
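This conversion is the standard relation F = (CamMat2^-1)^T · E · CamMat1^-1 between the essential and fundamental matrices. A numpy sketch of the formula (with toy camera matrices; the covariance propagation also performed by the operator is omitted):

```python
import numpy as np

def essential_to_fundamental(E, K1, K2):
    """F = K2^(-T) @ E @ K1^(-1): standard relation between the
    essential matrix E and the fundamental matrix F for camera
    matrices K1 and K2."""
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

K = np.diag([800.0, 800.0, 1.0])     # toy camera matrix for both cameras
E = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])      # essential matrix of a pure x-translation
F = essential_to_fundamental(E, K, K)

p1 = np.array([100.0, 50.0, 1.0])    # point in image 1 (col, row, 1)
p2 = np.array([300.0, 50.0, 1.0])    # corresponding point on the same row
print(p2 @ F @ p1)                   # ~0: the epipolar constraint holds
```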
Parameter
. EMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real / integer
Essential matrix.
. CovEMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
9 × 9 covariance matrix of the essential matrix.
Default Value : []
. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real / integer
Camera matrix of the first camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real / integer
Camera matrix of the second camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
Parallelization Information
essential_to_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
vector_to_essential_matrix
Alternatives
rel_pose_to_fundamental_matrix
Module
3D Metrology
In the case of a known covariance matrix CovFMat of the fundamental matrix FMatrix, the covariance matrix
CovFMatRect of the rectified fundamental matrix above is calculated. This can help to improve the stereo
matching process, because the covariance matrix defines, in terms of probabilities, the image domain in which to
find a corresponding match.
Similar to the operator gen_binocular_rectification_map the output images Map1 and Map2 describe
the transformation, also called mapping, of the original images to the rectified ones. The parameter Mapping
specifies whether bilinear interpolation (’bilinear_map’) should be applied between the pixels in the input image
or whether the gray value of the nearest neighboring pixel should be taken (’nn_map’). The size and resolution
of the maps and of the transformed images can be adjusted by the parameter SubSampling, which applies a
sub-sampling factor to the original images. For example, a factor of two will halve the image sizes. If just the two
homographies are required, Mapping can be set to ’no_map’ and no maps will be returned. For speed reasons,
this option should be used if for a specific stereo configuration the images must be rectified only once. If the stereo
setup is fixed, the maps should be generated only once and both images should be rectified with map_image;
this will result in the smallest computational cost for on-line rectification.
When using the maps, the transformed images are of the same size as their maps. Each pixel in the map contains
the description of how the new pixel at this position is generated. The images Map1 and Map2 are single channel
images if Mapping is set to ’nn_map’ and five channel images if it is set to ’bilinear_map’. In the first channel,
which is of type int4, the pixels contain the linear coordinates of their reference pixels in the original image. With
Mapping equal to ’nn_map’ this reference pixel is the nearest neighbor to the back-transformed pixel coordinates
of the map. In the case of bilinear interpolation the reference pixel is the next upper left pixel relative to the back-
transformed coordinates. The following scheme shows the ordering of the pixels in the original image next to the
back-transformed pixel coordinates, where the reference pixel takes the number 2.
2 3
4 5
The channels 2 to 5, which are of type uint2, contain the weights of the relevant pixels for the bilinear interpolation.
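The per-pixel resampling encoded by such a five-channel map can be sketched as follows (plain numpy, not HALCON code; a hypothetical layout with float weights that sum to 1, whereas the real maps store uint2 weights):

```python
import numpy as np

def apply_bilinear_map(image, ref_linear, weights):
    """Resample 'image' with a map: 'ref_linear' holds, per output pixel,
    the linear coordinate of the upper-left reference pixel (number 2 in
    the scheme 2 3 / 4 5); 'weights' holds the four interpolation weights."""
    _, w = image.shape
    flat = image.ravel()
    offsets = np.array([0, 1, w, w + 1])           # neighbors 2, 3, 4, 5
    neighbors = flat[ref_linear[..., None] + offsets]
    return (neighbors * weights).sum(axis=-1)

img = np.arange(16, dtype=float).reshape(4, 4)
ref = np.array([[5]])                              # reference pixel (1, 1)
wts = np.array([[[0.25, 0.25, 0.25, 0.25]]])       # equal weights
print(apply_bilinear_map(img, ref, wts))           # [[7.5]] = mean of 5, 6, 9, 10
```

Storing linear coordinates plus precomputed weights makes the per-frame resampling a pure gather-and-multiply, which is why fixed maps are fast for on-line rectification.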
Based on the rectified images, the disparity can be computed using binocular_disparity. In contrast to stereo
with fully calibrated cameras, i.e., using the operator gen_binocular_rectification_map and its succes-
sors, metric depth information cannot be derived for weakly calibrated cameras. The disparity map gives just a
qualitative depth ordering of the scene.
Parameter
. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : int4 / uint2
Image coding the rectification of the first image.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : int4 / uint2
Image coding the rectification of the second image.
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real / integer
Fundamental matrix.
. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
9 × 9 covariance matrix of the fundamental matrix.
Default Value : []
. Width1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the first image.
Default Value : 512
List of values : Width1 ∈ {128, 256, 512, 1024}
Restriction : Width1 > 0
. Height1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the first image.
Default Value : 512
List of values : Height1 ∈ {128, 256, 512, 1024}
Restriction : Height1 > 0
. Width2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the second image.
Default Value : 512
List of values : Width2 ∈ {128, 256, 512, 1024}
Restriction : Width2 > 0
. Height2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the second image.
Default Value : 512
List of values : Height2 ∈ {128, 256, 512, 1024}
Restriction : Height2 > 0
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Subsampling factor.
Default Value : 1
List of values : SubSampling ∈ {1, 2, 3, 1.5}
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common
rectified image plane.
Given a pair of stereo images, rectification determines a transformation of each image plane in a way that pairs of
conjugate epipolar lines become collinear and parallel to the horizontal image axes. The rectified epipolar images
can be thought of as acquired by a new stereo rig, obtained by rotating the original cameras. The camera centers of
this virtual rig are maintained whereas the image planes coincide, which means that the focal lengths are set equal,
and the optical axes parallel.
To achieve the transformation map for epipolar images gen_binocular_rectification_map requires the
internal camera parameters CamParam1 of the projective camera 1 and CamParam2 of the projective camera 2,
as well as the relative pose RelPose defining a point transformation from camera 2 to camera 1. These parameters
can be obtained, e.g., from the operator binocular_calibration.
The projection onto a common plane has many degrees of freedom which are implicitly restricted by selecting a
certain method in Method (currently only one method available):
• ’geometric’ specifies the orientation of the common image plane by the cross product of the base line and the
line of intersection of the original image planes. The new focal lengths are determined in such a way that the
old principal points have the same distance to the new common image plane.
In case of bilinear interpolation, each map contains one five-channel image. The first channel contains for each
pixel of the respective map the linear coordinate of the pixel in the respective input image that is in the upper left
position with respect to the transformed coordinate. The remaining four channels of each map contain the weights
of the four neighboring pixels of the transformed coordinates which are used for the bilinear interpolation. The
mapping of the channel numbers to the neighboring pixels is as follows:
2 (upper left) 3 (upper right)
4 (lower left) 5 (lower right)
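The application of such a bilinear map can be sketched outside HALCON; the following NumPy fragment (illustrative only, not HALCON code, all names hypothetical) applies a map split into the linear-coordinate channel and the four weight channels in the order shown above:

```python
import numpy as np

def apply_bilinear_map(image, lin_index, weights):
    # image: (H, W) gray image; lin_index: (h, w) linear coordinates of the
    # upper-left neighbor in `image` (channel 1 of the map); weights:
    # (h, w, 4) bilinear weights in the channel order 2/3/4/5 above:
    # upper-left, upper-right, lower-left, lower-right.
    H, W = image.shape
    flat = image.ravel().astype(np.float64)
    neighbors = np.stack([flat[lin_index],           # upper left  (channel 2)
                          flat[lin_index + 1],       # upper right (channel 3)
                          flat[lin_index + W],       # lower left  (channel 4)
                          flat[lin_index + W + 1]],  # lower right (channel 5)
                         axis=-1)
    return (neighbors * weights).sum(axis=-1)
```

With weight 1 on the first channel the map degenerates to a pure nearest-pixel lookup, which is a quick way to sanity-check the linear coordinates.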
In addition, gen_binocular_rectification_map returns the modified internal and external camera pa-
rameters of the rectified stereo rig. CamParamRect1 and CamParamRect2 contain the modified internal pa-
rameters of camera 1 and camera 2, respectively. The rotation of the rectified camera in relation to the original
camera is specified by CamPoseRect1 and CamPoseRect2, respectively. Finally, RelPoseRect returns
the modified relative pose of the rectified camera system 2 in relation to the rectified camera system 1 defining
a translation in x only. Generally, the transformations are defined in a way that the rectified camera 1 is left of
the rectified camera 2. This means that the optical center of camera 2 has a positive x coordinate in the rectified
coordinate system of camera 1.
Parameter
. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : int4 / uint2
Image containing the mapping data of camera 1.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : int4 / uint2
Image containing the mapping data of camera 2.
. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Internal parameters of the projective camera 1.
Number of elements : 8
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Internal parameters of the projective camera 2.
Number of elements : 8
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Point transformation from camera 2 to camera 1.
Number of elements : 7
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Subsampling factor.
Default Value : 1.0
Suggested values : SubSampling ∈ {0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0}
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of rectification.
Default Value : ’geometric’
List of values : Method ∈ {’geometric’}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation.
Default Value : ’bilinear’
List of values : Interpolation ∈ {’none’, ’bilinear’}
. CamParamRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Rectified internal parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Rectified internal parameters of the projective camera 2.
Number of elements : 8
. CamPoseRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Point transformation from the rectified camera 1 to the original camera 1.
Number of elements : 7
. CamPoseRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Point transformation from the rectified camera 2 to the original camera 2.
Number of elements : 7
. RelPoseRect (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements : 7
Example
// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpos.dat’, RelPose)
// generate the rectification maps
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose, 1, ’geometric’, ’bilinear’, CamParamRect1, CamParamRect2, CamPoseRect1, CamPoseRect2, RelPoseRect)
// rectify the images of camera 1
while (1)
    grab_image_async (Image1, FGHandle1, -1)
    map_image (Image1, Map1, ImageMapped1)
    // ...
endwhile
Result
gen_binocular_rectification_map returns 2 (H_MSG_TRUE) if all parameter values are correct. Otherwise, an
exception is raised.
Parallelization Information
gen_binocular_rectification_map is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration
Possible Successors
map_image
Alternatives
gen_image_to_world_plane_map
See also
map_image, gen_image_to_world_plane_map, contour_to_world_plane_xld,
image_points_to_world_plane
Module
3D Metrology
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Given two lines of sight from different cameras, specified by their image points (Row1,Col1) of camera 1 and
(Row2,Col2) of camera 2, intersect_lines_of_sight computes the 3D point of intersection of these
lines. The binocular camera system is specified by its internal camera parameters CamParam1 of the projective
camera 1 and CamParam2 of the projective camera 2, and the external parameters RelPose defining the pose
of the cameras by a point transformation from camera 2 to camera 1. These camera parameters can be obtained,
e.g., from the operator binocular_calibration, if the coordinates of the image points (Row1,Col1) and
(Row2,Col2) refer to the respective original image coordinate system. In case of rectified image coordinates
(e.g., obtained from epipolar images), the rectified camera parameters must be passed, as they are returned, e.g.,
by gen_binocular_rectification_map.
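A common way to carry out such an intersection is the midpoint method: both lines of sight are expressed in the camera-1 frame and the midpoint of their shortest connecting segment is taken as the 3D point. A hedged NumPy sketch of this idea (not the HALCON implementation; all names are made up):

```python
import numpy as np

def intersect_midpoint(d1, d2, R, t):
    # Line 1: through the origin of camera 1 along direction d1.
    # Line 2: through t (center of camera 2 in camera-1 coordinates)
    #         along R @ d2, since p1 = R @ p2 + t maps camera-2 points
    #         into the camera-1 frame.
    a, b = np.asarray(d1, float), R @ np.asarray(d2, float)
    # Solve min_{s,u} || s*a - (t + u*b) ||  in the least-squares sense.
    A = np.stack([a, -b], axis=1)                  # 3x2 system matrix
    (s, u), *_ = np.linalg.lstsq(A, t, rcond=None)
    return 0.5 * (s * a + (t + u * b))             # midpoint of the segment
```

For perfectly consistent input rays the two lines intersect exactly and the midpoint coincides with the 3D point; with noisy image points the two lines are skew and the midpoint is a reasonable compromise.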
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2 along with known internal camera parameters, specified by the camera matrices CamMat1
and CamMat2, match_essential_matrix_ransac automatically determines the geometry of the stereo
setup and finds the correspondences between the characteristic points. The geometry of the stereo setup is repre-
sented by the essential matrix EMatrix and all corresponding points have to fulfill the epipolar constraint.
The operator match_essential_matrix_ransac is designed to deal with a linear camera model. The
internal camera parameters are passed by the arguments CamMat1 and CamMat2, which are 3×3 upper triangular
matrices describing an affine transformation. The relation between a vector (X,Y,1), representing the direction from
the camera to the viewed 3D space point, and its (projective) 2D image coordinates (col,row,1) is:
\begin{pmatrix} col \\ row \\ 1 \end{pmatrix} = CamMat \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
\quad \text{where} \quad
CamMat = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} .
Note the column/row ordering in the point coordinates which has to be compliant with the x/y notation of the
camera coordinate system. The focal length is denoted by f , sx , sy are scaling factors, s describes a skew factor
and (cx , cy ) indicates the principal point. Mainly, these are the elements known from the camera parameters as
used for example in camera_calibration. Alternatively, the elements of the camera matrix can be described
in a different way, see e.g. stationary_camera_self_calibration. Multiplied by the inverse of the
camera matrices the direction vectors in 3D space are obtained from the (projective) image coordinates. For known
camera matrices the epipolar constraint is given by:
\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 .
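The step of multiplying image coordinates by the inverse camera matrix to obtain direction vectors can be sketched as follows (NumPy; all parameter values are hypothetical):

```python
import numpy as np

# Hypothetical camera matrix: f/sx = 800, f/sy = 810, skew s = 0,
# principal point (cx, cy) = (320, 240).
CamMat = np.array([[800.0,   0.0, 320.0],
                   [  0.0, 810.0, 240.0],
                   [  0.0,   0.0,   1.0]])

# Projective image coordinates (col, row, 1) -> direction vector (X, Y, 1).
pix = np.array([400.0, 300.0, 1.0])
direction = np.linalg.inv(CamMat) @ pix
direction /= direction[2]                # normalize the third component
```

With zero skew this reduces to X = (col − cx)·sx/f and Y = (row − cy)·sy/f, i.e. a shift by the principal point followed by a division by the focal length in pixels.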
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the essential matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found this way is only accepted if the value
of the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
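The three metrics can be written down directly. A small NumPy sketch (illustrative only, not the operator's internal code):

```python
import numpy as np

def window_score(w1, w2, method):
    # Compare two equally sized gray-value mask windows.
    w1 = np.asarray(w1, float); w2 = np.asarray(w2, float)
    if method == 'ssd':        # sum of squared differences (minimized)
        return ((w1 - w2) ** 2).sum()
    if method == 'sad':        # sum of absolute differences (minimized)
        return np.abs(w1 - w2).sum()
    if method == 'ncc':        # normalized cross correlation (maximized)
        a, b = w1 - w1.mean(), w2 - w2.mean()
        return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
    raise ValueError(method)
```

Note that ’ncc’ subtracts the window means, so it is invariant to additive brightness offsets, whereas ’ssd’ and ’sad’ are not.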
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the essen-
tial matrix EMatrix. It tries to find the essential matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
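The RANSAC principle itself is independent of the model being fitted. The following NumPy sketch illustrates it on a much simpler model (a 2D line instead of the essential matrix); the threshold plays the role of DistanceThreshold and the fixed seed corresponds to a positive RandSeed:

```python
import numpy as np

def ransac_line(points, threshold, n_iter=200, seed=7):
    # RANSAC skeleton: draw a minimal sample, build a candidate model,
    # count inliers, keep the candidate with the most inliers.
    points = np.asarray(points, float)
    rng = np.random.default_rng(seed)      # fixed seed -> reproducible runs
    best, best_count = None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        n = np.array([p[1] - q[1], q[0] - p[0]])   # normal of the line p-q
        norm = np.linalg.norm(n)
        if norm == 0.0:                            # degenerate sample
            continue
        n /= norm
        dist = np.abs((points - p) @ n)            # point-to-line distances
        count = int((dist <= threshold).sum())
        if count > best_count:
            best, best_count = (p, n), count
    return best, best_count
```

In the operator, the minimal sample is a set of point correspondences, the model is the essential matrix, and the residual is the distance to the epipolar line; the consensus-maximization loop is the same.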
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix CovEMat as well. Here, ’normalized_dlt’ and ’gold_standard’ stand for the
direct linear transformation and the gold standard algorithm, respectively. Note that, in general, the found
correspondences differ depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_essential_matrix_ransac a special configuration of scene points and cameras
exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras, the solution
for the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by
the operator. This means that the output parameters EMatrix, CovEMat and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter
Module
3D Metrology
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between
image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2, match_fundamental_matrix_ransac automatically finds the correspondences
between the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and all corresponding points
have to fulfill the epipolar constraint, namely:
\begin{pmatrix} Cols2 \\ Rows2 \\ 1 \end{pmatrix}^{T} \cdot FMatrix \cdot \begin{pmatrix} Cols1 \\ Rows1 \\ 1 \end{pmatrix} = 0 .
Note the column/row ordering in the point coordinates: because the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation has to be compliant with the camera
coordinate system. So, (x,y) coordinates correspond to (column,row) pairs.
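This constraint can be checked numerically on synthetic data. The NumPy sketch below builds a fundamental matrix as K2^(-T)·E·K1^(-1) with E = ([t]×R)^T as in the essential-matrix operator above; the intrinsics and pose are made-up values:

```python
import numpy as np

def skew(t):
    # 3x3 skew-symmetric matrix realizing the cross product with t
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

K = np.array([[800.0, 0.0, 320.0],        # same (hypothetical) intrinsics
              [0.0, 800.0, 240.0],        # for both cameras
              [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
t = np.array([0.5, 0.0, 0.1])

# Fundamental matrix consistent with E = ([t]x R)^T:  F = K2^-T . E . K1^-1
F = np.linalg.inv(K).T @ (skew(t) @ R).T @ np.linalg.inv(K)

# Project one 3D point into both images; p1 = R @ p2 + t maps camera-2
# coordinates into the camera-1 frame, matching the manual's convention.
P2 = np.array([0.3, -0.2, 2.0])
P1 = R @ P2 + t
x1 = K @ (P1 / P1[2])                      # (col1, row1, 1) in image 1
x2 = K @ (P2 / P2[2])                      # (col2, row2, 1) in image 2
residual = x2 @ F @ x1                     # vanishes up to rounding
```

For any scene point projected into both images the residual is zero by construction, which is exactly the epipolar constraint stated above.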
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an initial
matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algo-
rithm is applied to find the fundamental matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found this way is only accepted if the value
of the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the fun-
damental matrix FMatrix. It tries to find the matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
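The acceptance test amounts to a point-to-line distance in image 2, where FMatrix · (Cols1, Rows1, 1)^T is the epipolar line. A NumPy sketch (illustrative only):

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    # Distance in pixels of point p2 = (col2, row2) in image 2 from the
    # epipolar line l = F @ (col1, row1, 1) induced by its candidate
    # partner p1 = (col1, row1) in image 1.
    l = F @ np.array([p1[0], p1[1], 1.0])
    return abs(l @ np.array([p2[0], p2[1], 1.0])) / np.hypot(l[0], l[1])
```

A correspondence is accepted if this distance does not exceed DistanceThreshold; averaging it over all accepted pairs yields a quantity like Error below.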
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. If left and right camera are identical and the relative orien-
tation between them is a pure translation then choose EstimationMethod equal to ’trans_normalized_dlt’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed camera
looking onto a moving conveyor belt. In order to get a unique solution in the correspondence problem the min-
imum required number of corresponding points is eight in the general case and three in the special, translational
case.
The fundamental matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and additionally
returns the covariance of the fundamental matrix CovFMat. Here, ’normalized_dlt’ and ’gold_standard’ stand for
the direct linear transformation and the gold standard algorithm, respectively.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 1.
Restriction : (length(Rows1) ≥ 8) ∨ (length(Rows1) ≥ 3)
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 1.
Restriction : length(Cols1) = length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 2.
Restriction : (length(Rows2) ≥ 8) ∨ (length(Rows2) ≥ 3)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 2.
Restriction : length(Cols2) = length(Rows2)
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value comparison metric.
Default Value : ’ssd’
List of values : GrayMatchMethod ∈ {’ssd’, ’sad’, ’ncc’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default Value : 10
Typical range of values : 3 ≤ MaskSize ≤ 15
Restriction : MaskSize ≥ 1
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default Value : 200
Typical range of values : 50 ≤ RowTolerance ≤ 200
Restriction : RowTolerance ≥ 1
Compute the relative orientation between two cameras by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo
images Image1 and Image2 along with known internal camera parameters CamPar1 and CamPar2,
match_rel_pose_ransac automatically determines the geometry of the stereo setup and finds the corre-
spondences between the characteristic points. The geometry of the stereo setup is represented by the relative
pose RelPose and all corresponding points have to fulfill the epipolar constraint. RelPose indicates the rel-
ative pose of camera 1 with respect to camera 2 (See create_pose for more information about poses and
their representations.). This is in accordance with the explicit calibration of a stereo setup using the operator
binocular_calibration. Now, let R, t be the rotation and translation of the relative pose. Then, the essen-
tial matrix E is defined as E = ([t]× R)T , where [t]× denotes the 3 × 3 skew-symmetric matrix realising the cross
product with the vector t. The pose can be determined from the epipolar constraint:
\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot ([t]_{\times} R)^{T} \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0
\quad \text{where} \quad
[t]_{\times} = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} .
Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that
the translation vector of the relative pose can be determined only up to scale, too. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a subsequent three-dimensional reconstruction
of the scene, using for instance vector_to_rel_pose, can be carried out only up to a single global scaling
factor.
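The relation E = ([t]× R)^T and the resulting epipolar constraint can be verified numerically on synthetic data (NumPy sketch; the pose values are hypothetical):

```python
import numpy as np

def skew(t):
    # 3x3 skew-symmetric matrix realizing the cross product with t
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose: p1 = R @ p2 + t (camera 2 -> camera 1).
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, -0.1])
t /= np.linalg.norm(t)          # translation is recoverable only up to scale

E = (skew(t) @ R).T             # essential matrix as defined in the text

P2 = np.array([0.3, -0.2, 2.0]) # scene point in camera-2 coordinates
P1 = R @ P2 + t
x1 = P1 / P1[2]                 # direction vector (X1, Y1, 1)
x2 = P2 / P2[2]                 # direction vector (X2, Y2, 1)
residual = x2 @ E @ x1          # epipolar constraint, vanishes
```

Normalizing t to unit length leaves the constraint unchanged, which illustrates why the scale of the translation cannot be recovered from image correspondences alone.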
The operator match_rel_pose_ransac is designed to deal with a camera model that includes lens distortions.
This is in contrast to the operator match_essential_matrix_ransac, which supports only straight-line-
preserving cameras. The camera parameters are passed in CamPar1 and CamPar2. The
3D direction vectors (X1 , Y1 , 1) and (X2 , Y2 , 1) are calculated from the point coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of projection (see camera_calibration).
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the relative pose that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found this way is only accepted if the value
of the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the rel-
ative pose RelPose. It tries to find the relative pose that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as well the
covariance of the relative pose CovRelPose. Here, ’normalized_dlt’ and ’gold_standard’ stand for the direct
linear transformation and the gold standard algorithm, respectively. Note that, in general, the found correspondences
differ depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_rel_pose_ransac a special configuration of scene points and cameras exists: if all
3D points lie in a single plane and additionally are all closer to one of the two cameras, the solution for the
essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by the
operator. This means that the output parameters RelPose, CovRelPose and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_rel_pose, gen_binocular_rectification_map
See also
binocular_calibration, match_fundamental_matrix_ransac,
match_essential_matrix_ransac, create_pose
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology
Compute the fundamental matrix from the relative orientation of two cameras.
Cameras including lens distortions can be modeled by the following set of parameters: the focal length f , two
scaling factors sx , sy , the coordinates of the principal point (cx , cy ) and the distortion coefficient κ. For a more
detailed description see the operator camera_calibration. Only cameras with a distortion coefficient equal
to zero project straight lines in the world onto straight lines in the image. Then, image projection is a linear
mapping and the camera, i.e. the set of internal parameters, can be described by the camera matrix CamM at:
CamMat = \begin{pmatrix} f/s_x & 0 & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} .
Going from a nonlinear model to a linear model is an approximation of the real underlying camera. For a variety of
camera lenses, especially lenses with long focal length, the error induced by this approximation can be neglected.
Following the formula E = ([t]× R)T , the essential matrix E is derived from the translation t and the rotation
R of the relative pose RelPose (see also operator vector_to_rel_pose). In the linearized framework the
fundamental matrix can be calculated from the relative pose and the camera matrices according to the formula
presented under essential_to_fundamental_matrix:
FMatrix = (CamMat2^{-1})^{T} \cdot EMatrix \cdot CamMat1^{-1} .
The transformation from a relative pose to a fundamental matrix goes along with the propagation of the covariance
matrices CovRelPose to CovFMat. If CovRelPose is empty CovFMat will be empty too.
The conversion operator rel_pose_to_fundamental_matrix is used especially for a subsequent visual-
ization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
Parameter
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Relative orientation of the cameras (3D pose).
. CovRelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
6 × 6 covariance matrix of relative pose.
Default Value : []
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Parameters of camera 1.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Parameters of camera 2.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
Parallelization Information
rel_pose_to_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
vector_to_rel_pose
Alternatives
essential_to_fundamental_matrix
See also
camera_calibration
Module
3D Metrology
Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D
points.
For a stereo configuration with known camera matrices the geometric relation between the two images is defined
by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix
EMatrix from, in general, at least six given point correspondences that fulfill the epipolar constraint:
\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 .
The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is
in contrast to the operator vector_to_rel_pose, which also handles lens distortions. The internal camera
parameters are passed by the arguments CamMat1 and CamMat2, which are 3 × 3 upper triangular matrices
describing an affine transformation. The relation between the vector (X,Y,1), defining the direction from the camera
to the viewed 3D point, and its (projective) 2D image coordinates (col,row,1) is: