Camera Calibration Using Three Sets of Parallel Lines
Abstract. This paper presents a new method for three-dimensional camera calibration in which the rotation parameters are decoupled from the translation parameters. First, the rotation parameters are obtained by projecting three sets of parallel lines, independently of the translation parameters and the imaging distance from the lens to the image plane. The virtual line passing through the image center, which is calculated by perspective projection of a set of parallel lines, depends only on the rotation parameters. Next, the translation parameters and the imaging distance are analytically obtained. Experimental results are used to show how the camera model can be accurately reconstructed in an easily prepared environment.

Key Words: camera calibration, rotation parameters, vanishing point, algebraic solution, reconstructed geometric camera model

Address reprint requests to: Tomio Echigo, IBM Research, Tokyo Research Laboratory, 5-19, Samban-cho, Chiyoda-ku, Tokyo 102, Japan.

1 Introduction

The purpose of camera calibration is to establish the relationship between three-dimensional world coordinates and their corresponding two-dimensional image coordinates seen by the camera. Once this relationship is established, three-dimensional information can be inferred from two-dimensional images by using a stereo method or a structured-lighting method. This paper describes a new method of three-dimensional camera calibration. A camera has 6 degrees of freedom; that is, the relationship between the three-dimensional world coordinates and the camera coordinates consists of six parameters: three rotation parameters that give the orientation of the camera and three translation parameters that give its location. The imaging distance from the lens to the image plane should also be defined as a calibration parameter, because it is adjusted whenever the camera is focused. The parameters must be calibrated every time the camera is fixed, and therefore it is necessary to establish a convenient method of camera calibration. The purpose of this research is to develop an easy and robust method of determining camera parameters.

The problem of camera calibration has been extensively studied. Sobel (1974) and Gennery (1979) solved this problem by using full-scale nonlinear optimization. Ganapathy (1984) derived a noniterative technique using linear equations. Martins, Birk, and Kelley (1981) developed the two-planes method, which does not assume that the camera model is a pinhole model. Tsai (1985) described a two-stage technique that decomposes the camera parameters into two groups: those that can be obtained by solving linear equations with five unknowns and those that can be obtained by nonlinear optimization, using an initial estimation given by solving linear equations with two unknowns. In these conventional methods the parameters are detected from points in three-dimensional space and from their corresponding projection points on the image plane. Such methods require accurate positioning of the projection points on the image plane for accurate detection of the parameters. However, as the positions of the projection points on the image plane are easily shifted by noise or changing light sources, it is difficult to measure them accurately. To determine accurate parameters, it is important to ignore the effects of isolated points that are in gross error.

In the method presented here the parameters are obtained from an image of parallel straight lines, whose features can be extracted more accurately than those of projection points. Furthermore, the influence of errors in the extraction of projected lines is suppressed, because this technique uses
features calculated from several projected lines by the least-squares method. A characteristic of the new method is that the rotation parameters can be derived independently of the translation parameters and the imaging distance from the lens to the image plane, so that the rotation parameters are not influenced by errors in the translation parameters and the imaging distance.

2 Definition of the Camera Parameters and the Geometric Camera Model

The camera parameters are classified into intrinsic parameters, which are fixed by the camera, the lens, and the signal converter from analog image signals to digitized pixel data, and extrinsic parameters, which are changeable whenever the camera is set up.

The intrinsic parameters are defined as follows:

- Aspect ratio of an image frame
- Coordinates of the intersection of the optical axis of the lens with the two-dimensional image plane (image center)

The size of a discrete array device (such as a CCD or MOS) is specified by the device manufacturer, and its spacing between sensor elements is accurate. The vertical factor of the image plane is fixed, because of the television (TV) scanning signal. The horizontal factor of the image plane is determined by the camera and the analog-to-digital (A/D) converter from TV signals to pixel data. Data from the discrete array device are converted into TV signals consisting of sequences of scanning lines, with the internal clock of the camera as a horizontal synchronous signal. These signals are then transferred to image acquisition hardware. The analog value between horizontal synchronous pulses is sampled by the clock of an A/D converter in the image acquisition hardware and transformed into pixel data. Thus, the aspect ratio is obtained as the ratio of a row and a column of elements of the discrete array device and the timing ratio of the camera and A/D converter. The image center is the intersection of the optical axis of the lens and the image plane, which is the surface of the discrete device. The location of the image center depends on the accuracy with which the manufacturer attached the device to the camera and on the quality of the camera and lens. For the same camera, different lenses may yield different image centers.

The extrinsic parameters are defined as follows:

- Direction of the optical axis of the lens
- Location of the center of the lens
- The imaging distance from the lens to the image plane

The first group of parameters defines the orientation of the camera, which consists of three rotation parameters. The second defines the location of the camera, which consists of three translation parameters. The third is often adjusted so that the lens is focused on the image plane. In this method the imaging distance is measured in pixels. Consequently, the scaling factor of the discrete device is not used, and the camera coordinates are represented in pixels. Since the intrinsic parameters are fixed by the camera, the lens, and the image acquisition hardware, once calibrated, they should not be changed. However, the extrinsic parameters must be calibrated every time the camera is set up. This paper gives a method for determining the extrinsic parameters.

In this method the ideal camera is represented by a pinhole model. The image of a three-dimensional point is the intersection of the line-of-sight ray with a front image plane. The image plane is at a distance f (the imaging distance) from the lens center on the optical axis and intersects perpendicularly with the optical axis. All lines of sight intersect at the lens center. Figure 1 shows a schematic of the world coordinate system and the camera coordinate system. The world coordinate system is denoted as O-XYZ and the camera coordinate system as o-xyz; both are right-handed coordinate systems.

Let the origin of the camera coordinate system be the lens center and let the z-axis correspond to the optical axis of the lens. The relation of the two coordinate systems can be described as follows, using transformation matrices for rotation (R) and translation (T):

\[
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = R \begin{pmatrix} x \\ y \\ z \end{pmatrix} + T \qquad (1)
\]

Figure 1. World coordinates (O-XYZ) and camera coordinates (o-xyz).
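To make the pinhole model and the coordinate relation of Eq. (1) concrete, the following short sketch (mine, not the paper's; the numeric values of R, T, and f are made up, and image coordinates are assumed to be measured from the image center in pixels, as defined above) converts a world point into camera coordinates with R^{-1} and T' = -R^{-1}T and projects it onto the image plane at distance f.

```python
import numpy as np

# Hypothetical extrinsic parameters (illustrative values only).
# As in Eq. (1), R maps camera coordinates to world coordinates,
# T is the lens center expressed in world coordinates (mm), and
# f is the imaging distance in pixels.
theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([-400.0, 0.0, -700.0])
f = 1200.0

def world_to_camera(P_world):
    """Inverse of Eq. (1): p' = R^-1 P + T', with T' = -R^-1 T."""
    R_inv = R.T                     # R is orthonormal, so R^-1 = R^T
    T_prime = -R_inv @ T
    return R_inv @ np.asarray(P_world, dtype=float) + T_prime

def project(P_world):
    """Pinhole projection onto the front image plane at distance f."""
    Xp, Yp, Zp = world_to_camera(P_world)
    return f * Xp / Zp, f * Yp / Zp   # image coordinates (x, y) in pixels

if __name__ == "__main__":
    # Prints the image coordinates of a world point in front of the camera.
    print(project([100.0, 50.0, 0.0]))
```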
E_z can be calculated from its orthogonality to E_x and E_y. Details are given in Appendix B. Thus, all the factors of the rotation matrix R can be obtained from three sets of parallel lines such as those in Figure 3. In the new method the rotation parameters can be derived from three sets of parallel lines independently of the translation parameters and the imaging distance.

Another characteristic of the new method is that the effect of a projected line in gross error can be removed: a line whose error, given by the following equation, is large is neglected, the three straight lines are calculated again, and the rotation parameters are then redetermined.

\[
\varepsilon_i = \sum_j \left| (a_i - a_j)\,E_{xx} + (b_i - b_j)\,E_{yx} \right| \qquad (9)
\]

This procedure is repeated until the value ε_i is less than some preset threshold, or for a preset number of iterations.

Figure 3. Rotation parameters from three sets of parallel lines.

Figure 4. Parallel projection of a line of sight (to the x-y plane of the camera coordinates).
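The reject-and-refit iteration of Section 3 can be sketched as follows. This is not code from the paper: fit_rotation_from_line_sets stands in for the closed-form solution of Section 3, and line_error stands in for the error ε_i of Eq. (9); both are hypothetical placeholders used only to show the structure of the loop.

```python
import numpy as np

def iterate_rotation(line_sets, fit_rotation_from_line_sets, line_error,
                     threshold=1e-3, max_iter=10):
    """Reject-and-refit loop for the rotation parameters.

    line_sets: three lists of image lines, each line given as (a_i, b_i, c_i)
        for a_i x + b_i y = c_i, one list per set of parallel scene lines.
    fit_rotation_from_line_sets: callable returning the rotation factors
        (placeholder for the paper's closed-form solution of Section 3).
    line_error: callable (line, its_set, rotation) -> epsilon_i
        (placeholder for Eq. (9)).
    """
    sets = [list(s) for s in line_sets]
    rotation = fit_rotation_from_line_sets(sets)
    for _ in range(max_iter):
        # Score every image line against the current rotation estimate.
        errors = [[line_error(line, s, rotation) for line in s] for s in sets]
        worst = max(max(e) for e in errors if e)
        if worst < threshold:
            break                      # every remaining line is consistent
        # Neglect the line(s) in gross error, recalculate the three straight
        # lines, and redetermine the rotation parameters.
        sets = [[ln for ln, e in zip(s, es) if e < worst]
                for s, es in zip(sets, errors)]
        rotation = fit_rotation_from_line_sets(sets)
    return rotation
```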
4 Translation Parameters

The translation parameters can be determined from some known points in world coordinates and their corresponding projection points on the image plane. Figure 4 illustrates the parallel projection of a point (X_i, Y_i, Z_i) in a three-dimensional scene and of its image point (x_i, y_i) onto the x-y plane in camera
coordinates. In this figure the world coordinates X_i and Y_i are expressed by the following camera coordinates:

\[
\begin{pmatrix} X_i' \\ Y_i' \\ Z_i' \end{pmatrix} = R^{-1} \begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix} + T' \qquad (10)
\]

where

\[
R^{-1} = \begin{pmatrix} E_{xx} & E_{xy} & E_{xz} \\ E_{yx} & E_{yy} & E_{yz} \\ E_{zx} & E_{zy} & E_{zz} \end{pmatrix}, \qquad
T' = -R^{-1} T = \begin{pmatrix} T_x' \\ T_y' \\ T_z' \end{pmatrix}
\]

From the ratio of (X_i', Y_i') to (x_i, y_i),

\[
\frac{X_i'}{x_i} = \frac{Y_i'}{y_i} \qquad (11)
\]

the following equation is derived by substituting Eq. (10) into Eq. (11) and cross-multiplying:

\[
\begin{pmatrix} -y_i & x_i \end{pmatrix} \begin{pmatrix} T_x' \\ T_y' \end{pmatrix}
= -x_i \left( E_{yx} X_i + E_{yy} Y_i + E_{yz} Z_i \right) + y_i \left( E_{xx} X_i + E_{xy} Y_i + E_{xz} Z_i \right) \qquad (12)
\]

The factor T_z' of the translation parameters in the z direction and the imaging distance f from the lens to the image plane are also determined, on the plane that includes the line of sight and the optical axis of the lens, as shown in Figure 5. In the same way, from the ratio

\[
\frac{f}{\sqrt{x_i^2 + y_i^2}} = \frac{Z_i'}{\sqrt{X_i'^2 + Y_i'^2}} \qquad (13)
\]

the following equation is derived:

\[
\begin{pmatrix} \sqrt{X_i'^2 + Y_i'^2} & -\sqrt{x_i^2 + y_i^2} \end{pmatrix} \begin{pmatrix} f \\ T_z' \end{pmatrix}
= \sqrt{x_i^2 + y_i^2}\,\left( E_{zx} X_i + E_{zy} Y_i + E_{zz} Z_i \right) \qquad (14)
\]

Figure 5. Plane including a line of sight and the optical axis of the lens (using the camera coordinates).

Since Eqs. (12) and (14) are both linear, T_x', T_y', T_z', and f are calculated from more than two points. The translation matrix T is then obtained from the following equation:

\[
T = -R\,T' \qquad (15)
\]
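One straightforward reading of this linear solution (my sketch, not the paper's code; the two-stage ordering, solving Eq. (12) first and then Eq. (14), and the use of a least-squares solver are my assumptions) stacks Eq. (12) over all reference points to get (T_x', T_y'), stacks Eq. (14) to get (f, T_z'), and recovers T from Eq. (15).

```python
import numpy as np

def solve_translation(points_world, points_image, R):
    """Estimate T_x', T_y', T_z', f from Eqs. (12), (14), then T from Eq. (15).

    points_world: (N, 3) array of known points (X_i, Y_i, Z_i), N >= 3.
    points_image: (N, 2) array of their image points (x_i, y_i) in pixels,
        measured from the image center.
    R: 3x3 rotation matrix of Eq. (1); the rows of its inverse are E_x, E_y, E_z.
    """
    R_inv = np.linalg.inv(R)
    Ex, Ey, Ez = R_inv                       # rows of R^-1
    P = np.asarray(points_world, dtype=float)
    x, y = np.asarray(points_image, dtype=float).T

    # Eq. (12): [-y_i, x_i][Tx', Ty']^T = -x_i (E_y.P_i) + y_i (E_x.P_i)
    A12 = np.column_stack([-y, x])
    b12 = -x * (P @ Ey) + y * (P @ Ex)
    Txp, Typ = np.linalg.lstsq(A12, b12, rcond=None)[0]

    # Camera-frame X', Y' of each point (Eq. (10)), then Eq. (14):
    # sqrt(X'^2 + Y'^2) f - sqrt(x^2 + y^2) Tz' = sqrt(x^2 + y^2) (E_z.P_i)
    Xp = P @ Ex + Txp
    Yp = P @ Ey + Typ
    r_img = np.hypot(x, y)
    A14 = np.column_stack([np.hypot(Xp, Yp), -r_img])
    b14 = r_img * (P @ Ez)
    f, Tzp = np.linalg.lstsq(A14, b14, rcond=None)[0]

    T_prime = np.array([Txp, Typ, Tzp])
    return -R @ T_prime, f                   # Eq. (15): T = -R T'
```

Using more than the minimum number of points simply makes both least-squares solves overdetermined, which is consistent with the statement above that the parameters are calculated from more than two points.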
5 Accuracy Assessment

In order to evaluate the accuracy of the calculated camera parameters, the geometric camera model is reconstructed. When a three-dimensional point is given in world coordinates, its corresponding image point is projected onto the real image plane. When the same point is given to the calculated camera model, however, the point of intersection of the virtual image plane of the calculated camera model and the line connecting the three-dimensional point and the calculated lens center is obtained, as in Figure 6. The real image plane and the virtual image plane are overlapped, and the calculated point on the virtual plane is compared with the real projected point on the image plane. Assuming that the calculated camera model conforms to the geometry of the real camera, the calculated point on the virtual plane coincides with the real projected point. However, the calculated point obtained from a camera model containing errors may be plotted at a different position from the real projected point. In this paper the measure for evaluating the camera parameters determined by the proposed method is defined as the distance between the two points.

Figure 6. Comparison of the real image point and the calculated point from the geometric camera model reconstructed by using calibrated parameters.
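A minimal sketch of this evaluation measure (mine, not the paper's; it assumes the calibrated parameters are available as R, T, and f as defined in Sections 2 and 4) projects a world point through the reconstructed model and reports its pixel distance from the observed image point.

```python
import numpy as np

def reprojection_distance(P_world, p_observed, R, T, f):
    """Distance (pixels) between the observed image point and the point
    predicted by the reconstructed camera model (the measure of Section 5).
    Image coordinates are measured from the image center."""
    p_cam = np.linalg.inv(R) @ (np.asarray(P_world, float) - T)   # Eq. (10)
    p_calc = f * p_cam[:2] / p_cam[2]                             # virtual image plane
    return float(np.linalg.norm(p_calc - np.asarray(p_observed, float)))
```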
6 Experimental Results

6.1 Real Experiments

Before the camera was set up, the intrinsic parameters were determined.
The points calculated from the reconstructed camera model were compared with real projection points on the real image plane. When the real image plane was compared with the calculated virtual plane, as in Figure 10, the distances from the real image points to the calculated points were obtained, as shown in Figure 11. The image obtained for a different camera setting and the calculated points are shown in Figure 12. In the same way, the distances from the real points to the calculated points are shown in Figure 13. The results of the two experiments were similar, despite the different settings of the cameras. The new method was able to determine camera parameters robustly, independently of the setting of the camera. In the experiments this calibration method was able to determine the camera parameters from which the geometric camera model was reconstructed with an accuracy of coincidence between the real imaging position and the calculated position of about two pixels. In practice, the lens center was about 850 mm from the origin of the world coordinates. Therefore, a line of sight computed from the calculated geometric camera model was accurate to within 0.4 mm in the plane perpendicular to the optical axis of the camera within the field of view.

Figure 11. Distances from the real image points to the calculated points.

Figure 13. Distances from the real image points to the calculated points in another setting of the camera.

6.2 Simulation Experiments

In the simulation experiments the camera setting was the same as in the first real experiment described earlier. When the intrinsic and extrinsic parameters are selected to have the same values as in the first real experiment, the geometric camera model can be reconstructed.
Table 1. Comparison of simulation results using our method and Tsai's method (columns: added noise range, our method, Tsai's method).
When a three-dimensional point is given in world coordinates, its corresponding projected point can be calculated on the virtual image plane. The data of the image lines for the simulation experiments can be calculated by the least-squares method from projections of three-dimensional points along parallel lines in the scene. The simulation experiments were performed using image lines determined from projected points to which random noise had been added on the image plane. Noise is added to a projected point as (x_i + α, y_i + β), where α and β are subpixel values in the range [-n, +n]. In the X-Y plane of the world coordinates, five lines parallel to the X-axis and five lines parallel to the Y-axis, and in the Y-Z plane, five lines parallel to the Y-axis and five lines parallel to the Z-axis, were used for the simulation experiments, and the seven extrinsic camera parameters were obtained from them. For evaluation, the simulation results were verified by accuracy assessment in the same way as in the real experiments. The intersections of the simulation data in the noise-free case were compared with the projected points of the intersections of a 10-mm grid pattern on the X-Y plane and the Y-Z plane in world coordinates, using the camera model reconstructed from the simulation results. Table 1 shows the results of the accuracy evaluation when the noise range is from within one pixel to within three pixels. The results of the simulation experiments show that the calculated parameters are reasonably accurate. The other results in Table 1 were obtained from a simulation using Tsai's single-plane calibration method (Tsai 1985), which uses image points as data for calibration. The data for this simulation were the intersections of the same lines as in the simulation of our method. The simulation results show that the method presented here, which uses image lines as features, is more robust to noise than the conventional method, which uses image points as features.
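The data generation for these simulations can be sketched as follows (my illustration, not the paper's code; project is assumed to be a pinhole projection such as the one sketched after Section 2, and the least-squares line fit shown here is one standard way of obtaining a_i x + b_i y = c_i from noisy points).

```python
import numpy as np

def noisy_image_line(points_world, project, n, rng=np.random.default_rng(0)):
    """Project collinear scene points, perturb them by uniform subpixel noise
    in [-n, +n], and fit an image line a x + b y = c by least squares."""
    pts = np.array([project(P) for P in points_world], dtype=float)
    pts += rng.uniform(-n, n, size=pts.shape)     # (x_i + alpha, y_i + beta)

    # Fit a x + b y = c with a^2 + b^2 = 1: the unit normal (a, b) is the
    # direction of least scatter of the centered points.
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    a, b = vt[-1]
    c = float(np.dot([a, b], pts.mean(axis=0)))
    return a, b, c
```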
7 Conclusions

This paper presented a new method of three-dimensional camera calibration. One of the characteristics of this method is that the rotation parameters can be derived independently of the translation parameters and the imaging distance from the lens to the image plane, so that the rotation parameters are not influenced by errors in other extrinsic parameters. Another characteristic is that the influence of errors in the extraction of projected lines is suppressed, because the method uses three straight lines passing through the image center, which are calculated from several projected lines by the least-squares method. Experimental results showed that the geometric camera model could be accurately reconstructed and used for three-dimensional data acquisition.

Accuracy assessment is defined in this paper as finding the differences between real image points and points calculated by using the proposed model. It serves to clarify the limits of the area to be searched for correspondences of stereo images. Correspondences of stereo images exist within this area, which is centered on the coordinates calculated by the constraint of epipolar geometry.

Appendix A Determining the Straight Line Passing Through the Image Center and the Vanishing Point from the Image Lines

The line passing through the image center and the vanishing point of a group of lines can be calculated from the image lines projected from a set of parallel lines in the scene by the least-squares method. Let the equation of an image line be

\[
a_i x + b_i y = c_i \qquad (\mathrm{A.1})
\]

The surface normal N_i of a plane including the lens center and the image line is then expressed as

\[
N_i = a_i E_x + b_i E_y - \frac{c_i}{f} E_z \qquad (\mathrm{A.2})
\]
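Appendix A's construction can be realized numerically as in the following hedged sketch, which is not the paper's closed-form least-squares algebra: the vanishing point is estimated directly as the least-squares common intersection of the image lines, and the virtual line joins the image center to it. Note that no knowledge of the imaging distance f is needed, which is consistent with the claim that the virtual line depends only on the rotation parameters.

```python
import numpy as np

def virtual_line_direction(image_lines):
    """Direction of the line through the image center and the vanishing point.

    image_lines: iterable of (a_i, b_i, c_i) for image lines a_i x + b_i y = c_i
        (one set of projected parallel scene lines), with x, y measured from
        the image center. Returns a unit 2-vector along the virtual line.
    """
    L = np.asarray(list(image_lines), dtype=float)
    A, c = L[:, :2], L[:, 2]
    # Least-squares common intersection of the image lines = vanishing point.
    vp, *_ = np.linalg.lstsq(A, c, rcond=None)
    # The virtual line joins the image center (origin of x-y) to the vanishing
    # point, so its direction is simply the normalized vanishing-point vector.
    return vp / np.linalg.norm(vp)
```

Applying this to each of the three sets of parallel lines yields the three virtual lines from which Section 3 recovers the rotation factors.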
References

In: Proceedings of CVPR-86, pp 594-601

Sobel I (1974) On calibrating computer controlled cameras for perceiving 3-D scenes. Artificial Intelligence 5:185-198

Tsai RY (1985) A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IBM Research Report RC51342, May 8