Automation in Videogrammetry
ABSTRACT
In the last few years the automation of digital industrial photogrammetric systems has increased dramatically. Due to advancements
in digital image processing software, coded targets and auto-correlating methods, a large number of photogrammetric measurement
tasks can now be fully automated. In many cases a "one button click" is enough to provide 3D-coordinates of measurement points
without any manual interaction, as soon as digital images are acquired. The evolving technology of intelligent cameras is the next
logical step towards automated photogrammetric measurements. An intelligent camera containing an integrated computer can
process the image immediately after it is taken. The technology provides not only a much shorter processing time for the images but
also more control over the measurement process precisely when it is needed most, namely during image acquisition. This control
takes the form of real-time feedback from the camera itself. This paper describes the role of a digital intelligent camera in
the automation of an industrial photogrammetric measurement system and gives an overview of existing automation techniques in
industrial photogrammetry. As an example of an intelligent camera, the performance of the new digital intelligent camera INCA,
developed and manufactured by Geodetic Services Inc., will be described with reference to a number of example measurement
applications.
2.2 Software
Advances in automation in system software are clearly central to the goal of achieving a fully automated system. Automation software can be broadly divided into two categories. The first variety is designed to relieve the user of many of the functions associated with image processing and reduction, whilst the second makes decisions for the user.

Three common repetitive functions that are easily automated are the processes of line following, driveback and exterior orientation determination. The use of uniformly spaced retro-reflective target strips within industry is commonplace. This is especially the case in the antenna and aerospace fields. In order to simplify the labeling or re-labeling process it is possible to use two points at the start of a strip to define the separation and direction of the subsequent strip targets. With each subsequent target measured, the direction of the line is recomputed and the search patch modified. The process terminates when no target is found in the search patch. From a practical point of view, strip targets are measured instantaneously.

Another feature easily implemented is a process known as driveback. Once an approximate camera location has been established through space resection it is possible to compute the initial target xy locations in image space. These xy locations are used in combination with a search patch. If a target is found within this search window then it is measured; if no target is found then the point is skipped in that image.

Determining the exterior orientation of an image in an automated fashion is considerably more difficult to implement than line following or driveback. The orientation of an image is typically determined by identifying four or more points of known approximate XYZ coordinates. Once these have been identified, the camera exterior orientation can be computed using a closed-form space resection. To automate the space resection procedure it is necessary to use exterior orientation (EO) devices and/or coded targets. Examples of these are shown in Figures 2a and 2b. If either an EO device or coded targets are seen in an image they are identified and decoded, and if enough object points with approximately known 3D coordinates are available the exterior orientation can be completed.

Figure 2b. Examples of coded targets.

The second variety of automation feature, which is designed to relieve the user of some of the decision-making process, is more difficult to implement given the complex nature of decision making. The most common and time-consuming process is that of new point determination. To coordinate new points it is necessary to locate and label each and every unknown point in at least two images. Obviously this is a tedious process which is prone to operator error. To facilitate unambiguous labeling it is often necessary to introduce target identification labels.

To automate the coordination of target points it is first necessary to identify potential targets in each image. This identification process will also locate false targets such as overhead lights, flash hot spots or even returns from discarded retro-reflective material. With these potential target locations it is possible to combine matching targets and derive an XYZ coordinate for each point that has been qualified as a valid target. In some cases false targets will be found, but with appropriate checks it is possible to almost always identify and remove these errant points. This process is termed AutoMatching within the V-STARS system and, as the case studies will show, it is a very powerful automation tool in vision metrology.

Yet another powerful automation tool is the construction template, or macro. Construction templates can be utilised to complete all manner of repeated analysis. As an example, consider the situation in Figure 3 where the perpendicularity of two planes is required. To solve this problem it is first necessary to determine the location of at least three points on each plane (four for redundancy), fit planes to the data and then compute the intersection angles between the planes. This is a fairly simple piece of analysis and might only take a few minutes to complete. Now consider the case where the same information is required for 100 planes. It is obvious that the remaining 99 will be solved in the same manner as the first.

A construction template would facilitate the automation of the remainder of this analysis. This makes it possible to extend the “one button” notion for photogrammetric triangulation all the way to the final 3D coordinate analysis phase. The emergence of construction templates will bridge the gap between simple XYZ data and meaningful dimensions. It will also facilitate the introduction of real-time or quasi real-time measurement analysis in production facilities.

p1 p2 p3 p4 → P1
b1 b2 b3 b4 → B1
P1 B1 → α1

p5 p6 p7 p8 → P2
b5 b6 b7 b8 → B2
P2 B2 → α2

...

pn pn-1 pn-2 pn-3 → Pn
bn bn-1 bn-2 bn-3 → Bn
Pn Bn → αn

Where p = vertical plane points
      b = horizontal plane points
      P = vertical plane
      B = horizontal plane
      α = angle of intersection between P & B

Figure 3: Example of construction templates.

3. CASE STUDIES

3.1 Overview

In order to quantify the advantages of automation, two case studies will be considered. The first involves a modest sized network whereas the second is more complex, requiring a large number of images. The data collected has been processed both manually and automatically. Driveback and line following are such common features in modern videogrammetry systems that these have been used in the manual case even though they are clearly automation features.

Each of the case studies was carried out using the following equipment:

Camera: GSI INCA 6.3 (3K x 2K sensor)
Storage: Viper 340MB PCMCIA Hard Disk
Processor: Pentium 133MHz laptop with 48MB RAM
Software: GSI V-STARS/S

The analysis was carried out by recording the time taken to complete the following key functions:

Image Acquisition: Image compression reduces the amount of time required to write the data to disk.

Image Transfer: Image transfer involves the transfer of the acquired images from the PCMCIA disk to the local PC hard drive. In the automated case these are compressed images while in the manual case there is no image compression. The time difference develops due to the difference in image size. This step is optional given that videogrammetry software can typically access the image data directly from the PCMCIA disk. For archiving, it is necessary to transfer the images from the PCMCIA disk to long-term storage media.

Initial Exterior Orientation: The Initial EO is necessary to approximately locate the camera positions at the time of exposure. As mentioned earlier, in the automated case this is achieved through coded targets and/or an EO device. In the manual case this involves manually locating coordinated points.

Initial Bundle Adjustment: The objective of the Initial Bundle Adjustment in the automated case is the coordination of coded targets. This is based on the coordinate system established by the EO device (i.e. the AutoBar). In the manual case the bundle adjustment is used to triangulate points measured manually to facilitate driveback in subsequent images.

New Point Determination: In the automated case new points are determined through the use of a set of mathematical rules governing image point correspondences, which must be satisfied by AutoMatching. To determine a new point in a manual system it is necessary to label the point in at least two images in which the point is seen.

Final Bundle Adjustment: Once all the points have been measured in the image files they can be combined in the Final Bundle Adjustment to produce the XYZ position of each point. The time taken to complete this task will not vary between the automated and manual cases.

Point Renaming: In the automated case the points found are assigned arbitrary labels. If specific labels are required, then each point must be located and re-labeled. This only has to be completed in one image for each point. Re-labeling is not a requirement in the manual case as the points are assigned the correct labels at the time of initial measurement. The underlying need for labels in the manual case is driven by the fact that the user needs to establish point correspondence between images. In most cases the labels of the points are not important and the step of re-labeling can be bypassed. In the case of repeat measurements the new points need only be placed approximately and a label transformation can be utilised to automatically re-label points.

Clean up and ‘Re-Bundle’: In some instances, the AutoMatch will coordinate points that satisfy the prescribed mathematical rules but are not actual target points. These points are typically easily identified and deleted. Once these points are eliminated it is best, but not necessary, to ‘Re-Bundle’. The corresponding problem in the manual case involves incorrect identification of points in the two initial images. This problem will normally emerge prior to the Final Bundle process.
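The plane-fit-and-intersect analysis behind the construction template of Figure 3 can be sketched as follows. This is a minimal illustration using a least-squares plane fit; the function names and the sample coordinates are illustrative and are not part of V-STARS:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3 or more points: (unit normal, centroid)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred point cloud is the direction of least spread: the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

def intersection_angle(plane_a, plane_b):
    """Dihedral angle in degrees between two fitted planes."""
    n1, _ = plane_a
    n2, _ = plane_b
    cosang = abs(np.dot(n1, n2))  # abs(): the sign of a fitted normal is arbitrary
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# Four points per plane (one redundant), mirroring p1..p4 -> P1, b1..b4 -> B1.
P1 = fit_plane([(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1)])  # vertical plane
B1 = fit_plane([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)])  # horizontal plane
alpha1 = intersection_angle(P1, B1)
print(alpha1)  # 90.0 for perpendicular planes
```

A construction template would simply repeat this point-group-to-plane-to-angle mapping for each plane pair in the list.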
in all). The antenna and target configuration are shown in Figure 4.

Figure 6. Times recorded in seconds for the Manual and Automated cases (Image Acquisition, Image Transfer, Initial EO, Initial Bundle, New Point Determination).
Four stations were used and the camera was rolled through 90 degrees at each of these stations. The images collected were processed both manually and with as much automation as possible. The times recorded in seconds for each of the steps described earlier are shown in Table 1 and represented graphically in Figure 6.

Table 1. Times recorded for each key component, Manual and Automated cases.

Category                  Man (sec)   Auto (sec)
Image Acquisition            180          30
Image Transfer                75          15
Initial EO                    45          30
Initial Bundle                 5           5
New Point Determination      270          15
Final Bundle                  15          15
Point Renaming                 0         135
Clean up and Re-Bundle        15          20
Total                        605         265

Figure 7. The final object point cloud for the antenna measurement.

3.3 Case Study 2 – Automotive Master Model

The second case study involved the measurement of a more complex object. The object selected was an automotive master model cubing block. The block consists of a number of parallel and perpendicular plates with component-securing points located in a grid. Each of the master model components is attached to the block in its prescribed location. These components can then be checked against the CAD model that represents the particular vehicle. Production line parts can also be attached to the block to verify that production components meet design standards. The block consisted of five plates of interest. These plates were targeted with 83 coded targets, an AutoBar and 205 stick-on targets. The object is shown in Figure 8.
These results are represented graphically in Figure 10. It is
clear from the listed times in Table 2 that the timesaving is even
greater for complicated measurement networks. In this
particular case approximately 2.5 hours were saved. Once
again, if no line following or driveback had been employed in
the manual case then the time saving would have been far
greater. Also, if there was no requirement for pre-determined
point labels then a further 30 minutes could have been saved,
reducing the overall time to just over 30 minutes. Figure 11
graphs the results in the case that no pre-defined labels were
required. The biggest savings were achieved at the stages of
image acquisition and new point determination. At
the new point determination stage a time of 2 hours was saved.
this stage the master model was still in place.

A total of 71 images covering the 293 object points on the

(Chart: time in minutes for the Manual and Automated cases; legend: Image Transfer, Initial EO, Initial Bundle, New Point Determination, Clean Up and Re-bundle.)
3.4 Evaluation
5. ACKNOWLEDGEMENTS
6. REFERENCES