Geometric Camera Parameters

• What assumptions have we made so far?

- All equations we have derived so far are written in the camera reference frame.

- These equations are valid only when:

    (1) all distances are measured in the camera’s reference frame.

    (2) the image coordinates have their origin at the principal point.




- In general, the world and pixel coordinate systems are related by a set of physical
parameters such as:

    * the focal length of the lens
    * the size of the pixels
    * the position of the principal point
    * the position and orientation of the camera

• Types of parameters (Trucco 2.4)

- Two types of parameters need to be recovered in order for us to reconstruct the 3D
structure of a scene from the pixel coordinates of its image points:

Extrinsic camera parameters: the parameters that define the location and orientation
of the camera reference frame with respect to a known world reference frame.

Intrinsic camera parameters: the parameters necessary to link the pixel coordinates of
an image point with the corresponding coordinates in the camera reference frame.

                    Object Coordinates (3D)
                              |
                              v
                    World Coordinates (3D)
                              |
                              v   <-- extrinsic camera parameters
                    Camera Coordinates (3D)
                              |
                              v
                    Image Plane Coordinates (2D)
                              |
                              v   <-- intrinsic camera parameters
                    Pixel Coordinates (2D, int)

• Extrinsic camera parameters

- These are the parameters that identify uniquely the transformation between the
unknown camera reference frame and the known world reference frame.

- Typically, determining these parameters means:

    (1) finding the translation vector between the relative positions of the origins of
    the two reference frames.

    (2) finding the rotation matrix that brings the corresponding axes of the two
    frames into alignment (i.e., onto each other).

- Using the extrinsic camera parameters, we can find the relation between the coordinates of a point P in world ($P_w$) and camera ($P_c$) coordinates:

$$P_c = R(P_w - T), \qquad \text{where } R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$$

- If $P_c = (X_c, Y_c, Z_c)^T$ and $P_w = (X_w, Y_w, Z_w)^T$, then

$$\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \begin{pmatrix} X_w - T_x \\ Y_w - T_y \\ Z_w - T_z \end{pmatrix}$$

or

$$X_c = R_1^T (P_w - T), \qquad Y_c = R_2^T (P_w - T), \qquad Z_c = R_3^T (P_w - T)$$

where $R_i^T$ denotes the $i$-th row of the rotation matrix $R$.
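As a minimal illustration of this world-to-camera mapping, here is a small numpy sketch; the rotation, translation, and point below are made-up example values, not the parameters of any real camera.

```python
import numpy as np

# Hypothetical extrinsic parameters (illustrative values only):
# a rotation of 30 degrees about the Z axis and an arbitrary translation T.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.5, -0.2, 2.0])

P_w = np.array([1.0, 2.0, 10.0])      # a point in world coordinates

# P_c = R (P_w - T): each camera coordinate is one row of R dotted with (P_w - T)
P_c = R @ (P_w - T)
X_c, Y_c, Z_c = P_c
print(X_c, Y_c, Z_c)
```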

• Intrinsic camera parameters

- These are the parameters that characterize the optical, geometric, and digital characteristics of the camera:

     (1) the perspective projection (focal length f ).
     (2) the transformation between image plane coordinates and pixel coordinates.
     (3) the geometric distortion introduced by the optics.

From Camera Coordinates to Image Plane Coordinates

     - Apply perspective projection:

$$x = f\,\frac{X_c}{Z_c} = f\,\frac{R_1^T (P_w - T)}{R_3^T (P_w - T)}, \qquad y = f\,\frac{Y_c}{Z_c} = f\,\frac{R_2^T (P_w - T)}{R_3^T (P_w - T)}$$
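A short sketch of this projection step; the focal length and camera-frame point are assumed example values.

```python
f = 8.0                            # assumed focal length in mm (example value)
X_c, Y_c, Z_c = 1.2, -0.4, 9.5     # a point already expressed in camera coordinates

# Perspective projection onto the image plane (image-plane coordinates in mm)
x = f * X_c / Z_c
y = f * Y_c / Z_c
print(x, y)
```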

From Image Plane Coordinates to Pixel Coordinates


[Figure: the camera frame ($X_c$, $Y_c$, $Z_c$) with its origin at the center of perspective projection and $Z_c$ along the optical axis; the image plane frame ($x$, $y$) with the principal point at ($o_x$, $o_y$); the pixel frame ($x_{im}$, $y_{im}$); and the world frame ($X_w$, $Y_w$, $Z_w$) containing a point P.]


$$x = -(x_{im} - o_x)\,s_x \quad \text{or} \quad x_{im} = -x/s_x + o_x$$

$$y = -(y_{im} - o_y)\,s_y \quad \text{or} \quad y_{im} = -y/s_y + o_y$$

     where $(o_x, o_y)$ are the coordinates of the principal point in pixels (e.g., $o_x = N/2$,
     $o_y = M/2$ if the principal point is the center of an $M \times N$ image) and $s_x$, $s_y$ are the
     effective sizes of a pixel in the horizontal and vertical directions (in millimeters).

- Using matrix notation:

$$\begin{pmatrix} x_{im} \\ y_{im} \\ 1 \end{pmatrix} = \begin{pmatrix} -1/s_x & 0 & o_x \\ 0 & -1/s_y & o_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

Relating pixel coordinates to world coordinates

$$-(x_{im} - o_x)\,s_x = f\,\frac{R_1^T (P_w - T)}{R_3^T (P_w - T)}, \qquad -(y_{im} - o_y)\,s_y = f\,\frac{R_2^T (P_w - T)}{R_3^T (P_w - T)}$$

or

$$x_{im} = -\frac{f}{s_x}\,\frac{R_1^T (P_w - T)}{R_3^T (P_w - T)} + o_x, \qquad y_{im} = -\frac{f}{s_y}\,\frac{R_2^T (P_w - T)}{R_3^T (P_w - T)} + o_y$$

Image distortions due to optics

     Assuming radial distortion:
$$x = x_d (1 + k_1 r^2 + k_2 r^4)$$

$$y = y_d (1 + k_1 r^2 + k_2 r^4)$$

     where $(x_d, y_d)$ are the coordinates of the distorted point and $r^2 = x_d^2 + y_d^2$.

     $k_1$ and $k_2$ are intrinsic parameters too but will not be considered here...
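A small sketch applying the radial-distortion relation above to recover ideal coordinates from distorted ones; the coefficients $k_1$, $k_2$ and the distorted point are arbitrary illustrative values.

```python
k1, k2 = 1e-3, 1e-6                 # hypothetical radial distortion coefficients
x_d, y_d = 3.5, -2.0                # distorted image-plane coordinates (mm)

r2 = x_d**2 + y_d**2                # r^2 = x_d^2 + y_d^2
scale = 1.0 + k1 * r2 + k2 * r2**2  # 1 + k1*r^2 + k2*r^4
x, y = x_d * scale, y_d * scale     # ideal (undistorted) coordinates
print(x, y)
```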

• Combine extrinsic with intrinsic camera parameters

- The matrix containing the intrinsic camera parameters:

$$M_{in} = \begin{pmatrix} -f/s_x & 0 & o_x \\ 0 & -f/s_y & o_y \\ 0 & 0 & 1 \end{pmatrix}$$

- The matrix containing the extrinsic camera parameters:

$$M_{ex} = \begin{pmatrix} r_{11} & r_{12} & r_{13} & -R_1^T T \\ r_{21} & r_{22} & r_{23} & -R_2^T T \\ r_{31} & r_{32} & r_{33} & -R_3^T T \end{pmatrix}$$

- Using homogeneous coordinates:

$$\begin{pmatrix} x_h \\ y_h \\ w \end{pmatrix} = M_{in} M_{ex} \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix} = M \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix} \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}$$

- Homogenization is needed to obtain the pixel coordinates:

$$x_{im} = \frac{x_h}{w} = \frac{m_{11} X_w + m_{12} Y_w + m_{13} Z_w + m_{14}}{m_{31} X_w + m_{32} Y_w + m_{33} Z_w + m_{34}}$$

$$y_{im} = \frac{y_h}{w} = \frac{m_{21} X_w + m_{22} Y_w + m_{23} Z_w + m_{24}}{m_{31} X_w + m_{32} Y_w + m_{33} Z_w + m_{34}}$$

- M is called the projection matrix (it is a 3 x 4 matrix).

Note: the relation between 3D points and their 2D projections can be seen as a linear transformation from the projective space $(X_w, Y_w, Z_w, 1)^T$ to the projective plane $(x_h, y_h, w)^T$.
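Putting the pieces together, the sketch below builds $M = M_{in} M_{ex}$ from assumed intrinsic and extrinsic values and projects a world point to pixel coordinates by homogenization; all numeric values are illustrative, not calibrated parameters.

```python
import numpy as np

# Assumed intrinsic parameters (example values)
f, s_x, s_y = 8.0, 0.01, 0.01
o_x, o_y = 256.0, 256.0
M_in = np.array([[-f / s_x, 0.0,      o_x],
                 [0.0,      -f / s_y, o_y],
                 [0.0,      0.0,      1.0]])

# Assumed extrinsic parameters: rotation about Z and an arbitrary translation
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.5, -0.2, 2.0])
M_ex = np.hstack([R, (-R @ T).reshape(3, 1)])  # i-th row: [r_i1, r_i2, r_i3, -R_i^T T]

M = M_in @ M_ex                                # the 3 x 4 projection matrix

P_w = np.array([1.0, 2.0, 10.0, 1.0])          # a world point in homogeneous coordinates
x_h, y_h, w = M @ P_w
x_im, y_im = x_h / w, y_h / w                  # homogenization yields pixel coordinates
print(x_im, y_im)
```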

• The perspective camera model (using matrix notation)

- Assuming $o_x = o_y = 0$ and $s_x = s_y = 1$:

$$M_p = \begin{pmatrix} -f r_{11} & -f r_{12} & -f r_{13} & f R_1^T T \\ -f r_{21} & -f r_{22} & -f r_{23} & f R_2^T T \\ r_{31} & r_{32} & r_{33} & -R_3^T T \end{pmatrix}$$

- Let's verify the correctness of the above matrix:

$$p = M_p P_w = \begin{pmatrix} -f R_1^T & f R_1^T T \\ -f R_2^T & f R_2^T T \\ R_3^T & -R_3^T T \end{pmatrix} \begin{pmatrix} P_w \\ 1 \end{pmatrix} = \begin{pmatrix} -f R_1^T (P_w - T) \\ -f R_2^T (P_w - T) \\ R_3^T (P_w - T) \end{pmatrix}$$

- After homogenization (we recover the same perspective projection equations derived earlier):

$$x = -f\,\frac{R_1^T (P_w - T)}{R_3^T (P_w - T)}, \qquad y = -f\,\frac{R_2^T (P_w - T)}{R_3^T (P_w - T)}$$

• The weak perspective camera model (using matrix notation)

$$M_{wp} = \begin{pmatrix} -f r_{11} & -f r_{12} & -f r_{13} & f R_1^T T \\ -f r_{21} & -f r_{22} & -f r_{23} & f R_2^T T \\ 0 & 0 & 0 & R_3^T (\bar{P} - T) \end{pmatrix}$$

where $\bar{P}$ is the centroid of the object, so that $R_3^T (\bar{P} - T)$ is the object's average distance (depth) from the camera.

- We can verify the correctness of the above matrix:

$$p = M_{wp} P_w = \begin{pmatrix} -f R_1^T & f R_1^T T \\ -f R_2^T & f R_2^T T \\ 0 \;\; 0 \;\; 0 & R_3^T (\bar{P} - T) \end{pmatrix} \begin{pmatrix} P_w \\ 1 \end{pmatrix} = \begin{pmatrix} -f R_1^T (P_w - T) \\ -f R_2^T (P_w - T) \\ R_3^T (\bar{P} - T) \end{pmatrix}$$


- After homogenization:

$$x = -f\,\frac{R_1^T (P_w - T)}{R_3^T (\bar{P} - T)}, \qquad y = -f\,\frac{R_2^T (P_w - T)}{R_3^T (\bar{P} - T)}$$
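The sketch below contrasts the weak perspective model with full perspective under the same assumptions ($o_x = o_y = 0$, $s_x = s_y = 1$); the camera pose, focal length, and object points are illustrative values only.

```python
import numpy as np

f = 8.0                                  # assumed focal length
R = np.eye(3)                            # for simplicity, camera and world axes aligned
T = np.zeros(3)

# A small, shallow object: a few world points and their centroid
pts = np.array([[0.5,  0.2, 20.0],
                [0.7, -0.1, 20.5],
                [0.4,  0.3, 19.8]])
P_bar = pts.mean(axis=0)
z_bar = R[2] @ (P_bar - T)               # average depth R_3^T (P_bar - T)

for P_w in pts:
    P_c = R @ (P_w - T)
    # Full perspective: divide by each point's own depth
    x_p, y_p = -f * P_c[0] / P_c[2], -f * P_c[1] / P_c[2]
    # Weak perspective: divide by the common average depth of the object
    x_wp, y_wp = -f * P_c[0] / z_bar, -f * P_c[1] / z_bar
    print((x_p, y_p), (x_wp, y_wp))      # close when depth variation is small vs. z_bar
```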

• The affine camera model

- The entries of the projection matrix are unconstrained: apart from the zero pattern in the last row, they need not derive from physical parameters such as $f$, $s_x$, $s_y$, $R$, or $T$:

$$M_a = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ 0 & 0 & 0 & a_{34} \end{pmatrix}$$

- The affine model does not appear to correspond to any physical camera.

- Leads to simple equations and appealing geometric properties.

- Does not preserve angles but does preserve parallelism.
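As a quick numerical check of the last property, the sketch below projects two parallel 3D segments with a hypothetical affine camera (arbitrary entries, last row $(0, 0, 0, a_{34})$); because $w$ is the same constant for every point, the mapping is affine and the projected segments have identical direction vectors.

```python
import numpy as np

# Hypothetical affine camera: arbitrary first two rows, last row (0, 0, 0, a34)
M_a = np.array([[ 2.0, 0.3, -0.5, 10.0],
                [-0.2, 1.8,  0.4,  5.0],
                [ 0.0, 0.0,  0.0,  2.0]])

def project_affine(M, P_w):
    # w equals a34 for every point, so there is no division by the point's own depth
    p = M @ np.append(P_w, 1.0)
    return p[:2] / p[2]

# Two parallel 3D segments sharing the direction vector d
d = np.array([1.0, 2.0, 3.0])
a0, b0 = np.array([0.0, 0.0, 5.0]), np.array([4.0, -1.0, 7.0])

# Projected direction vectors come out identical, so parallelism is preserved
v_a = project_affine(M_a, a0 + d) - project_affine(M_a, a0)
v_b = project_affine(M_a, b0 + d) - project_affine(M_a, b0)
print(v_a, v_b)                          # equal up to floating-point error
```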
