Structured-Light 3D Surface Imaging: A Tutorial
Jason Geng
IEEE Intelligent Transportation System Society, 11001 Sugarbush Terrace, Rockville,
Maryland 20852, USA ([email protected])
Received September 22, 2010; revised December 10, 2010; accepted December 20,
2010; published March 31, 2011 (Doc. ID 134160)
We provide a review of recent advances in 3D surface imaging technologies.
We focus particularly on noncontact 3D surface measurement techniques
based on structured illumination. The high-speed and high-resolution pattern
projection capability offered by the digital light projection technology, together
with the recent advances in imaging sensor technologies, may enable new
generation systems for 3D surface measurement applications that will provide
much better functionality and performance than existing ones in terms of
speed, accuracy, resolution, modularization, and ease of use. Performance
indexes of 3D imaging systems are discussed, and various 3D surface imaging
schemes are categorized, illustrated, and compared. Calibration techniques are
also discussed, since they play critical roles in achieving the required precision.
Numerous applications of 3D surface imaging technologies are discussed with
several examples.
© 2011 Optical Society of America
OCIS codes: 150.6910, 110.6880
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
2. Sequential Projection Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
2.1. Binary Patterns and Gray Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
2.2. Gray-Level Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
2.3. Phase Shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2.4. Hybrid Method: Phase Shift + Gray Coding . . . . . . . . . . . . . . . . . . 137
2.5. Photometrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
3. Full-Frame Spatially Varying Color Pattern . . . . . . . . . . . . . . . . . . . . . . . 139
3.1. Rainbow 3D Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
3.2. Continuously Varying Color Coding . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4. Stripe Indexing (Single Shot). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140
4.1. Stripe Indexing Using Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.2. Stripe Indexing Using Segment Pattern . . . . . . . . . . . . . . . . . . . . . . . 141
4.3. Stripe Indexing Using Repeated Gray-Scale Pattern . . . . . . . . . . . 141
4.4. Stripe Indexing Based on De Bruijn Sequence . . . . . . . . . . . . . . . . . 142
5. Grid Indexing: 2D Spatial Grid Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.1. Pseudo-random Binary Array (PRBA) . . . . . . . . . . . . . . . . . . . . . . . . 144
Advances in Optics and Photonics 3, 128–160 (2011) doi:10.1364/AOP.3.000128
© OSA
1943-8206/11/020128-33/$15.00
1. Introduction
The physical world around us is three-dimensional (3D); yet traditional
cameras and imaging sensors are able to acquire only two-dimensional
(2D) images that lack depth information. This fundamental
restriction greatly limits our ability to perceive and to understand the
complexity of real-world objects. The past several decades have marked
tremendous advances in research, development, and commercialization
of 3D surface imaging technologies, stimulated by application demands
in a variety of market segments, advances in high-resolution and high-speed
electronic imaging sensors, and ever-increasing computational
power. In this paper, we provide an overview of recent advances in
surface imaging technologies by use of structured light.
The term 3D imaging refers to techniques that are able to acquire
true 3D data, i.e., values of some property of a 3D object, such as
the distribution of density, as a function of the 3D coordinates (x, y, z).
Examples from the medical imaging field are computed tomography
(CT) and magnetic resonance imaging (MRI), which acquire volumetric
pixels (or voxels) of the measured target, including its internal structure.
By contrast, surface imaging deals with measurement of the (x, y, z)
coordinates of points on the surface of an object. Since the surface is,
in general, nonplanar, it is described in a 3D space, and the imaging
problem is called 3D surface imaging. The result of the measurement
may be regarded as a map of the depth (or range) z as a function of the
position (x, y) in a Cartesian coordinate system, and it may be expressed
in the digital matrix form {z_ij = f(x_i, y_j), i = 1, 2, . . . , L, j = 1, 2, . . . , M}.
This process is also referred to as 3D surface measurement, range finding,
range sensing, depth mapping, surface scanning, etc. These terms are
used in different application fields and usually refer to loosely equivalent
basic surface imaging functionality, differing only in details of system
design, implementation, and/or data formats.
A more general 3D surface imaging system is able to acquire a scalar
value, such as surface reflectance, associated with each point on the
nonplanar surface. The result is a point cloud {Pi = (xi , yi , zi , fi ), i =
1, 2, . . . , N}, where fi represents the value at the ith surface point in
the data set. Likewise, a color surface image is represented by {Pi =
(xi , yi , zi , ri , gi , bi ), i = 1, 2, . . . , N}, where the vector (ri , gi , bi ) represents
the red, green, and blue color components associated with the ith surface
Figure 1
Triangulation principle of structured-light 3D imaging: the range is obtained
from the baseline and the angle relation R = B sin(θ)/sin(α + θ).
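The law-of-sines relation above can be evaluated directly. The following is a minimal sketch, assuming baseline B and angles α and θ assigned as in the standard structured-light triangulation geometry (the exact symbol assignment in Fig. 1 is an assumption here):

```python
import math

def triangulate_range(B, alpha, theta):
    """Range R to a surface point from baseline B (any length unit) and the
    two triangulation angles alpha and theta (radians), via the law of sines:
    R = B * sin(theta) / sin(alpha + theta)."""
    return B * math.sin(theta) / math.sin(alpha + theta)

# Example: 200 mm baseline, alpha = 60 deg, theta = 50 deg
R = triangulate_range(200.0, math.radians(60), math.radians(50))
```

When α + θ = 90°, the relation reduces to R = B sin(θ), a quick sanity check on the geometry.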
Figure 2
Schematic showing the camera, the sensed image, and the 3D surface.
Figure 3
Figure 4
possesses a unique binary code that differs from the codes of all other
points. In general, N patterns can code 2^N stripes. Figure 4
shows a simplified 5-bit projection pattern. Once this sequence of
patterns is projected onto a static scene, there are 32 (= 2^5) unique
areas coded with unique stripes. The 3D coordinates (x, y, z) can be
computed (based on a triangulation principle) for all 32 points along each
horizontal line, thus forming a full frame of the 3D image.
The binary coding technique is very reliable and less sensitive to surface
characteristics, since only binary values exist at each pixel. However, to
achieve high spatial resolution, a large number of sequential patterns
need to be projected. All objects in the scene have to remain static. The
entire duration of 3D image acquisition may be longer than a practical
3D application allows for.
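Decoding a captured stack of binary patterns into per-pixel stripe indices is straightforward. The following is a minimal sketch, not taken from the paper, assuming N thresholded images ordered most significant bit first:

```python
import numpy as np

def decode_binary_patterns(images, threshold=0.5):
    """Decode a stack of N binary stripe images, shape (N, H, W) with values
    in [0, 1], into per-pixel stripe indices in [0, 2**N - 1].
    The first image is taken to hold the most significant bit."""
    bits = (np.asarray(images) > threshold).astype(np.uint32)
    n = bits.shape[0]
    # Bit weights 2^(n-1), ..., 2, 1, matching MSB-first ordering
    weights = (2 ** np.arange(n - 1, -1, -1)).astype(np.uint32)
    return np.tensordot(weights, bits, axes=1)  # (H, W) array of stripe codes
```

With five patterns, each pixel receives one of the 32 codes; in practice a per-pixel threshold (e.g., from an all-white and an all-black reference image) is more robust than the fixed value used here.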
Figure 5
Gray-level coding patterns f1(x), f2(x), and f3(x), each shown over pixel
positions 0–120 with intensities ranging from 0 to 1.
Figure 6
Phase shift with three projection patterns I1(x), I2(x), and I3(x), each with
offset I0 and modulation Imod, and an example of a fringe image.
The wrapped phase is obtained from the three fringe images:

    φ′(x, y) = arctan [ √3 (I1(x, y) − I3(x, y)) / (2 I2(x, y) − I1(x, y) − I3(x, y)) ].
The discontinuity of the arctangent function at 2π can be removed by
adding or subtracting multiples of 2π to the φ′(x, y) value (Fig. 7):

    φ(x, y) = φ′(x, y) + 2kπ,
where k is an integer representing the projection period. Note that
unwrapping methods only provide a relative unwrapping and do not
solve for the absolute phase. The 3D (x, y, z) coordinates can be calculated
based on the difference between the measured phase φ(x, y) and the phase
value from a reference plane [9]. Figure 8 illustrates a simple case, where
    d / B = Z / (L − Z),  or equivalently  Z = L d / (B + d).

In practice L ≫ Z (and B ≫ d), so the depth is approximately proportional to
the lateral shift d, and hence to the measured phase difference:

    Z ≈ (L / B) d ∝ (L / B)(φ − φ0).
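The three-step phase recovery described in this section can be sketched in a few lines of NumPy. This is a minimal illustration (not the implementation used in any cited system), assuming three ideal fringe images with a 2π/3 phase step:

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase phi' in (-pi, pi] from three fringe images taken with a
    120-degree phase step: phi' = arctan(sqrt(3)*(I1 - I3) / (2*I2 - I1 - I3)).
    Both background I0 and modulation Imod cancel out of the ratio."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Synthetic check: three shifted fringes built from a known phase ramp
phi = np.linspace(-np.pi, np.pi, 256, endpoint=False)  # ground-truth phase
step = 2.0 * np.pi / 3.0
I0, Imod = 0.5, 0.4                                    # offset and modulation
I1 = I0 + Imod * np.cos(phi - step)
I2 = I0 + Imod * np.cos(phi)
I3 = I0 + Imod * np.cos(phi + step)
recovered = wrapped_phase(I1, I2, I3)                  # equals phi here
```

Using arctan2 rather than a plain arctangent resolves the quadrant directly; the remaining 2kπ ambiguity is exactly the unwrapping problem discussed above.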
Figure 7
Figure 8
Figure 9
Combining gray code with phase shift. Reproduced from C. Brenner, J. Boehm, and
J. Guehring, "Photogrammetric calibration and accuracy evaluation of a cross-pattern
stripe projector," Proc. SPIE 3641, 164–172 (1999).
2.5. Photometrics
Photometric stereo, pioneered by Woodham [13], is a variant approach
to shape from shading. It estimates local surface orientation by using a
sequence of images of the same surface taken from the same viewpoint
but under illumination from different directions [14–16] (Fig. 10). It
thus solves the ill-posed problems in shape from shading by using
multiple images.

Figure 10
Photometric stereo scheme where eight images of the same object are taken
under illumination from eight different locations.

Figure 11

Photometric stereo requires all light sources to be point light sources, and
it estimates only the local surface orientation (gradients p, q). It
assumes continuity of the 3D surface and needs a starting point (a
point on the object surface whose (x, y, z) coordinates are known) for its 3D
reconstruction algorithms.
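Under the Lambertian assumption, the per-pixel orientation estimate reduces to a small least-squares problem. The following is a minimal sketch under that assumption, with calibrated point-light directions; the function name and interface are illustrative, not from the cited works:

```python
import numpy as np

def photometric_stereo(L, I):
    """Lambertian photometric stereo. L is (k, 3): k unit light directions.
    I is (k, n): observed intensities for n pixels under each light.
    Solves L @ G = I in the least-squares sense, where G = albedo * normal.
    Returns unit normals (3, n) and albedo (n,)."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G has shape (3, n)
    albedo = np.linalg.norm(G, axis=0)          # |G| recovers the albedo
    normals = G / np.maximum(albedo, 1e-12)     # normalize, guarding zeros
    return normals, albedo
```

The surface gradients mentioned above follow from each unit normal (nx, ny, nz) as p = −nx/nz and q = −ny/nz, which the reconstruction then integrates from the known starting point.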
Figure 12
Red Channel Intensity Variation Pattern
Figure 13
Figure 14
Figure 15
Figure 16
Figure 17
Figure 18
Figure 19
Figure 20
Figure 21
Figure 22
Figure 23
Figure 24
Each homography can provide two constraints on the intrinsic parameters. Since
K^(-T) K^(-1) in the equation above is a symmetric matrix, it can be described
by a 6D vector:

    A ≡ K^(-T) K^(-1) = | A1  A2  A4 |
                        | A2  A3  A5 |
                        | A4  A5  A6 |,

    a = [A1, A2, A3, A4, A5, A6]^T.

Let the ith column vector of H be hi = [hi1, hi2, hi3]^T. Then we have

    hi^T A hj = vij^T a,

where vij = [hi1 hj1, hi1 hj2 + hi2 hj1, hi2 hj2, hi3 hj1 + hi1 hj3, hi3 hj2 +
hi2 hj3, hi3 hj3]^T. The two constraints can then be rewritten as a homogeneous
equation in a. To solve for a, at least three images from different
viewpoints are needed. In practice, more images are used to reduce
the effect of noise, and a least-squares solution is obtained with
singular value decomposition. Finally, the result can be refined by
minimizing the reprojection error, i.e., the energy function

    Σ_{i=1}^{n} Σ_{j=1}^{m} || m_ij − m̂(K, Ri, ti, Mj) ||²,

where m_ij is the observed image point in view i and m̂(K, Ri, ti, Mj) is the
projection of model point Mj under the estimated parameters.
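The homogeneous system above can be assembled and solved numerically. The following sketch, a minimal illustration of the linear step rather than a production calibration routine, builds the vij constraint rows from each homography and extracts a (up to scale) via SVD:

```python
import numpy as np

def v_ij(H, i, j):
    """Constraint row v_ij built from homography columns h_i, h_j (0-based),
    so that h_i^T A h_j = v_ij . a with a = [A1, A2, A3, A4, A5, A6]."""
    h, g = H[:, i], H[:, j]
    return np.array([h[0] * g[0],
                     h[0] * g[1] + h[1] * g[0],
                     h[1] * g[1],
                     h[2] * g[0] + h[0] * g[2],
                     h[2] * g[1] + h[1] * g[2],
                     h[2] * g[2]])

def solve_intrinsic_vector(homographies):
    """Stack two constraints per homography (h1^T A h2 = 0 and
    h1^T A h1 = h2^T A h2) and return the right singular vector of the
    smallest singular value as the least-squares solution for a."""
    V = []
    for H in homographies:
        V.append(v_ij(H, 0, 1))                  # orthogonality constraint
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # equal-norm constraint
    _, _, Vt = np.linalg.svd(np.asarray(V))
    return Vt[-1]  # a, determined only up to scale
```

Given a, the intrinsic matrix K is recovered in closed form (Cholesky-style decomposition of A), and the nonlinear reprojection-error refinement above is then applied.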
Figure 25
Intensity calibration of the projector. The blue curve is a plot of the fitted
function. The green curve is the inverse function. The red curve is rectified
intensity, which ought to be a straight line.
Figure 26
Figure 27
Figure 28
Figure 29
Figure 30
Figure 31
hearing aid device within a one-day time frame (Fig. 30). More
importantly, such digital impression technology could improve the quality
of fit, thus enhancing hearing functionality for hearing-impaired people.
Figure 32
Figure 33
Figure 34
can illustrate the facts clearly and effectively. An accident scene can be
re-enacted from any camera angle, with a virtually unlimited number of
possible vantage points (Fig. 34).
9. Conclusions
This tutorial provided a comprehensive review of recent advances in
structured-light 3D surface imaging technologies and a few examples of
their applications to a variety of fields. We established a classification
framework to accommodate vast variations of 3D imaging techniques,
organized and presented in a systematic manner. Representative
techniques are briefly described and illustrative figures are provided
to help readers grasp their basic concepts. Performance indexes of 3D
imaging systems are also reviewed, and calibration techniques for both
camera and projector are presented. Selected examples of applications of
3D imaging technology are given.
There is a reason why so many 3D imaging techniques have been
developed to date. There is no single technique that can be applied to
each and every application scenario. Each 3D imaging technique has its
own set of advantages and disadvantages. When selecting a 3D imaging
technique for a specific application, readers are encouraged to make
careful trade-offs among their specific application requirements and to
consider key performance indexes such as accuracy, resolution, speed,
cost, and reliability. Sometimes, multiple-modality sensor systems will
be needed to address demands that cannot be met by a single modality.
3D imaging is an interdisciplinary technology that draws contributions
from optical design, structural design, sensor technology, electronics,
packaging, and hardware and software. Traditionally, the 3D imaging
research activities in these disciplines have been pursued more or less
independently, with different emphases. The recent trend in 3D
imaging research calls for an integrated approach, sometimes called
the computational imaging approach, in which optical design, the
sensor characteristics, and software processing capability are taken
into consideration simultaneously. This new approach promises to
significantly improve the performance and cost-effectiveness of future 3D
imaging systems and is a worthwhile direction for future 3D imaging
technology development.
The field of 3D imaging technology is still quite young, compared
with its 2D counterpart, which has developed over several decades
with multibillion-dollar investments. It is our hope that our work
in developing and applying 3D imaging technologies to a variety of
applications can help attract more talented researchers, from both
theoretical and applied backgrounds, to this fascinating field of research
and development.