Indira Gandhi National Open University
MCS-230 Digital Image Processing and Computer Vision

BLOCK 3 COMPUTER VISION-I

Unit 8: Introduction to Computer Vision
Unit 9: Single Camera Models
Unit 10: Multiple Cameras

PROGRAMME DESIGN COMMITTEE
Prof. (Retd.) S.K. Gupta, IIT, Delhi
Prof. Ela Kumar, IGDTUW, Delhi
Prof. T.V. Vijay Kumar, JNU, New Delhi
Prof. Gayatri Dhingra, GVMITM, Sonipat
Mr. Milind Mahajan, Impressico Business Solutions, New Delhi
Sh. Shashi Bhushan Sharma, Associate Professor, SOCIS, IGNOU
Sh. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. P. Venkata Suresh, Associate Professor, SOCIS, IGNOU
Dr. V.V. Subrahmanyam, Associate Professor, SOCIS, IGNOU
Sh. M.P. Mishra, Assistant Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU

COURSE DESIGN COMMITTEE
Prof. T.V. Vijay Kumar, JNU, New Delhi
Prof. S. Balasundaram, JNU, New Delhi
Prof. D.P. Vidyarthi, JNU, New Delhi
Prof. Anjana Gosain, USICT, GGSIPU, New Delhi
Dr. Ayesha Choudhary, JNU, New Delhi
Sh. Shashi Bhushan Sharma, Associate Professor, SOCIS, IGNOU
Sh. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. P. Venkata Suresh, Associate Professor, SOCIS, IGNOU
Dr. V.V. Subrahmanyam, Associate Professor, SOCIS, IGNOU
Sh. M.P. Mishra, Assistant Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU

SOCIS FACULTY
Prof. P. Venkata Suresh, Director, SOCIS, IGNOU
Prof. V.V. Subrahmanyam, SOCIS, IGNOU
Prof. Sandeep Singh Rawat, SOCIS, IGNOU
Prof. Divakar Yadav, SOCIS, IGNOU, New Delhi
Dr. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. M.P. Mishra, Associate Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU

COURSE PREPARATION TEAM
This course is adapted from MMTE-003 Digital Image Processing and Pattern Recognition of the School of Sciences, IGNOU.
Course Writer (Units 8, 9 and 10): Dr. Ayesha Choudhary, School of Computer and Systems Sciences, JNU, New Delhi
Content Editor: Prof. S. Balasundaram, School of Computer and Systems Sciences, JNU, New Delhi

COURSE COORDINATOR
Dr. Sudhansh Sharma, Assistant Professor, School of Computer and Information Sciences, IGNOU

PRINT PRODUCTION
Sh. Sanjay Aggarwal, Assistant Registrar, MPDD, IGNOU, New Delhi

© Indira Gandhi National Open University, 2023
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any other means, without permission in writing from the Indira Gandhi National Open University. Further information on the Indira Gandhi National Open University courses may be obtained from the University's office at Maidan Garhi, New Delhi-110068.
Printed and published on behalf of the Indira Gandhi National Open University, New Delhi by MPDD, IGNOU.
Laser Typesetter: Tessa Media & Computers, C-206, Shaheen Bagh, Jamia Nagar, New Delhi-110025

BLOCK 3 INTRODUCTION

This block covers various topics relevant from the point of view of computer vision: what computer vision actually means, the various camera models, and the transformations involved in computer vision. The block also covers the concepts related to single-camera and multiple-camera environments. The unit-wise distribution of content is given below.

Unit 8 introduces the actual meaning of computer vision, along with various camera models and the transformations involved in computer vision.
Unit 9 covers the concepts related to the single-camera environment, viz. perspective projection, homography, camera calibration, and affine motion models.

Finally, Unit 10 covers the various concepts relevant to the multiple-camera environment, such as stereo vision, point correspondence, epipolar geometry and optical flow.

UNIT 8 INTRODUCTION TO COMPUTER VISION

Structure
8.1 Introduction
8.2 Objectives
8.3 Introduction to Computer Vision
8.4 Camera Models
8.5 Projections
8.6 Transformations
8.7 Summary
8.8 Questions and Solutions
8.9 References

8.1 INTRODUCTION

The purpose of this unit is to introduce the subject of computer vision. Till now, we have been going through the concepts of image processing: in the previous units, we learnt about digital images and the various algorithms for processing them. We would now like to introduce the subject of computer vision and its various concepts. We know that a camera is modelled on human vision, and that we require two eyes to be able to see in the three-dimensional world. As we read further, we shall first discuss the camera models and the various transformations involved in forming an image.

In Section 8.2, we state the objectives of the unit. In Section 8.3, we introduce the subject of computer vision. We then discuss the various camera models in Section 8.4, projections in Section 8.5 and transformations in Section 8.6. Finally, we summarise the discussion in Section 8.7, and in Section 8.8 we pose some problems and give their solutions.

8.2 OBJECTIVES

The objective of this unit is to introduce the subject of computer vision. "A picture is worth a thousand words" holds true, as one can see that an image of any scene carries a lot of detailed information. Further, it is easy to note that a colour image has more information than a grey-scale image. In today's world, camera technology has become very cheap and ubiquitous. Cameras are the instruments through which we capture an image, and the mathematics behind the camera model helps us to understand computer vision. Therefore, we need to understand the various camera models. As discussed earlier in digital image processing, a digital image is a matrix of non-negative integers; therefore, any transformation that can be applied to a matrix can be applied to a digital image. In this unit, we shall study various geometric transformations, and we shall finally summarise the unit.

8.3 INTRODUCTION TO COMPUTER VISION

Computer vision is the field of study that endeavours to develop techniques to help machines understand the content of images and videos captured by single and/or multiple cameras. It seems a trivial problem for human beings to understand and recognise the contents of images and videos once seen; however, images and videos have a lot of content, and it is not easy for a machine to focus on the relevant portions of an image, understand the content, recognise familiar objects and faces, and "tell the story". For machines to carry out intelligent tasks by "seeing" their surroundings, current computer vision and machine learning techniques need to be learnt, understood and further developed. Applications of computer vision exist in every sphere of life, from medical image understanding to autonomous vehicles and robotics. Therefore, it is an important field of study.
8.4 CAMERA MODELS

A camera is a basic tool in computer vision: to analyse the scene around us, it is essential first to record an image of it. Therefore, it is important to learn the model of the camera. In general, a camera is a physical device that captures an image by allowing light to pass through a small hole (the aperture) onto a light-sensitive surface. The lens in a camera focuses the light entering through the aperture onto the imaging, or light-sensitive, surface, and a shutter mechanism controls the amount of time for which the photosensitive surface is exposed to light. A camera therefore maps the 3D world onto a 2D plane (the image).

There are two major classes of camera models: camera models with a finite centre and camera models with centre at infinity. We shall focus our attention on the camera model with a finite centre and discuss the simplest camera model: the pin-hole camera model.

Figure 1: The pin-hole camera geometry. Figure taken from [1].

8.4.1 The Pin-Hole Camera Model

In a pin-hole camera, a barrier with a small hole (the pin-hole) is placed between the 3D object and the 2D image plane (the light-sensitive plane). Each point on the 3D object emits or reflects multiple light rays, out of which only one or very few fall on the image plane by passing through the pin-hole. Thereby, one can see that there exists a mapping between the points of the 3D object and the 2D plane forming an image, and such a mapping is a projection where the pin-hole is called the centre of projection.

Formulation of the pin-hole camera model

Assume that the centre of projection is the origin of the 3D Euclidean coordinate system, and let Z = f be the image plane, or focal plane. Then the centre of projection C is called the camera centre or optical centre, and f is the focal length. The principal axis, or principal ray, is the line from the camera centre that is perpendicular to the image plane. The point of intersection of the principal axis and the image plane is known as the principal point. The principal plane (projection plane) is the plane passing through the principal point parallel to the image plane, i.e., the image plane becomes the projection plane.

Under the pin-hole camera projection, a 3D point X = (X, Y, Z)^T is mapped to a 2D point x = (x, y)^T on the image plane, where the line joining the camera centre C to the 3D point X intersects the image plane at x. From Figure 1, it can be seen by the rules of similar triangles that

x/f = X/Z and y/f = Y/Z, i.e., x = fX/Z and y = fY/Z.

This implies that the point X = (X, Y, Z)^T in R^3 is mapped to the point x = (fX/Z, fY/Z)^T in R^2 on the image plane. Therefore, a pin-hole camera performs a perspective projection of a 3D scene onto a 2D plane. Note that it is not possible to write this perspective projection directly as a matrix acting on (X, Y, Z)^T. To overcome this problem, we introduce the homogeneous coordinate system.

8.4.2 Homogeneous Coordinates

In the 2D homogeneous coordinate system, every point has three coordinates (x, y, w)^T with w ≠ 0, and two points (x1, y1, w1)^T and (x2, y2, w2)^T represent the same point whenever x1/w1 = x2/w2 and y1/w1 = y2/w2. These points belong to R^3 \ {(0, 0, 0)^T}.

A two-dimensional point in the Euclidean coordinate system can be represented as a point in the homogeneous coordinate system and vice versa. In fact, for a 2D point (x, y)^T, its corresponding point in the homogeneous coordinate system is taken as (x, y, 1)^T. Similarly, a given point (x, y, w)^T with w ≠ 0 in homogeneous coordinates satisfies (x, y, w)^T ~ (x/w, y/w, 1)^T, and therefore its corresponding 2D point is (x/w, y/w)^T.
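The projection and the homogeneous-coordinate conventions above are easy to check numerically. The following is a minimal Python/NumPy sketch (our own illustration, not from the original text; the function names and numeric values are made up):

import numpy as np

def to_homogeneous(p):
    """Append w = 1 to a Euclidean point."""
    return np.append(p, 1.0)

def from_homogeneous(p):
    """Divide by the last coordinate to recover the Euclidean point."""
    return p[:-1] / p[-1]

def pinhole_project(X_world, f):
    """Perspective projection x = fX/Z, y = fY/Z through a pin-hole at the origin."""
    X, Y, Z = X_world
    return np.array([f * X / Z, f * Y / Z])

f = 2.0                          # focal length, so the image plane is Z = f
X = np.array([4.0, 2.0, 8.0])    # a 3D point in camera coordinates
print(pinhole_project(X, f))     # -> [1.0, 0.5]

# The same image point in homogeneous coordinates: scaling the homogeneous
# vector by any non-zero factor leaves the underlying 2D point unchanged.
x_h = np.array([2.0, 1.0, 2.0])
print(from_homogeneous(x_h))         # -> [1.0, 0.5]
print(from_homogeneous(3.0 * x_h))   # -> [1.0, 0.5], the same point
print(to_homogeneous(np.array([1.0, 0.5])))   # -> [1.0, 0.5, 1.0]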
Likewise, in the 3D case a point in the homogeneous coordinate system has four coordinates (X, Y, Z, W)^T with W ≠ 0, and two points X1 = (X1, Y1, Z1, W1)^T and X2 = (X2, Y2, Z2, W2)^T represent the same point whenever X1/W1 = X2/W2, Y1/W1 = Y2/W2 and Z1/W1 = Z2/W2. Clearly, these points belong to R^4 \ {(0, 0, 0, 0)^T}. As in the 2D case, a 3D point (X, Y, Z)^T is associated with its homogeneous coordinates (X, Y, Z, 1)^T. Conversely, any point (X, Y, Z, W)^T of the homogeneous coordinate system is associated with the 3D Euclidean point (X/W, Y/W, Z/W)^T. An important point to remember is that all scalar multiples of a homogeneous vector represent the same point.

8.5 PROJECTIONS

(i) Perspective projection

Consider the pin-hole camera model as the perspective projection transformation discussed in Section 8.4. Using a matrix transformation in homogeneous coordinates,

(fX, fY, Z)^T = [f 0 0 0; 0 f 0 0; 0 0 1 0] (X, Y, Z, 1)^T,

and the projected point in Euclidean space is obtained as x = fX/Z, y = fY/Z. In this way, the perspective projection of a 3D point through the centre of projection (the pin-hole) onto a point on the image plane Z = f can be obtained in matrix form.

(ii) Orthographic projection

Orthographic projection is the projection of a 3D object onto a plane by a set of parallel rays that are orthogonal to the image plane, i.e., it is a parallel projection. In this projection, the centre of projection is taken at infinity. For any point (X, Y, Z)^T of the object, its orthographic projection on the image plane Z = f is given by x = X, y = Y. The important properties of the orthographic projection are that parallel lines remain parallel and the size of the object does not change.

Figure 2: Orthographic projection. (Image source: Internet)

(iii) Weak perspective projection

A perspective projection is a non-linear transformation; however, under certain circumstances we can approximate it by a linear transformation, that is, by a scaled orthographic projection. The important conditions for this are that the object lies close to the optical axis and that the dimensions of the object are small compared to its average distance Z̄ from the camera. The equations of the weak perspective projection are

x = fX/Z̄, y = fY/Z̄,

in which each point is scaled by f/Z̄.
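To make the difference between these projections concrete, the short NumPy sketch below (our own illustration with made-up values) projects two points of a shallow object using the 3x4 perspective matrix of Section 8.5(i) and compares the result with the weak perspective and orthographic approximations:

import numpy as np

f = 2.0
# The 3x4 perspective projection matrix from Section 8.5(i).
P_persp = np.array([[f, 0, 0, 0],
                    [0, f, 0, 0],
                    [0, 0, 1, 0]], dtype=float)

def project(P, X):
    """Apply a 3x4 projection to a 3D point given in homogeneous coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two points of a small object far from the camera, average depth Z_bar = 20.
points = np.array([[1.0, 0.5, 19.5],
                   [1.0, 0.5, 20.5]])
Z_bar = 20.0

for X in points:
    persp = project(P_persp, X)
    weak = f / Z_bar * X[:2]     # weak perspective: uniform scaling by f / Z_bar
    ortho = X[:2]                # orthographic: simply drop the Z coordinate
    print(persp, weak, ortho)

# The perspective and weak-perspective results nearly coincide because the
# depth variation (about +/- 0.5) is small relative to Z_bar, exactly the
# condition stated in Section 8.5(iii).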
8.6 TRANSFORMATIONS

Geometric transformations play an important role in computer vision. In this section, we discuss the important transformations that we shall require later in this course.

8.6.1 Euclidean Transformations

The most important property of Euclidean transformations is that they preserve lengths and angle measures. They are the most commonly used transformations, consisting of translation and rotation.

(i) Translation

When a point is moved from one location to another along a straight-line path, it is known as a translation. In 2D, let (x, y)^T be any point and let t_x and t_y denote the translations along the x- and y-directions respectively. Then the new coordinates of the point, (x', y')^T, are given by

x' = x + t_x and y' = y + t_y.    ... (1)

In homogeneous coordinates, translation is given as a matrix-vector product:

(x', y', 1)^T = [1 0 t_x; 0 1 t_y; 0 0 1] (x, y, 1)^T.

Fig. (a) An object; (b) the object translated to the origin.

For example, suppose a rectangle with coordinates (2, 2), (2, 6), (5, 2) and (5, 6) is to be translated to the origin. Then the translation is (t_x, t_y) = (-2, -2), and, for instance,

[1 0 -2; 0 1 -2; 0 0 1] (2, 2, 1)^T = (0, 0, 1)^T.

Similarly, (2, 6) maps to (0, 4), (5, 2) to (3, 0) and (5, 6) to (3, 4).

Fig. (a) The square before translation, with corners (2, 2), (2, 6), (5, 6), (5, 2); (b) the square after translation, with corners (0, 0), (0, 4), (3, 4), (3, 0).

(ii) Rotation

A rotation is specified by an angle θ and a pivot point (x_p, y_p)^T about which the object is to be rotated. A two-dimensional rotation applied to an object re-positions it along a circular path in the 2D plane. A positive value of θ defines a counter-clockwise rotation about the pivot point, while a negative value of θ defines a clockwise rotation. Rotation about the origin is given as

(x', y')^T = R (x, y)^T, where R = [cos θ  -sin θ; sin θ  cos θ]

is known as the rotation matrix. In homogeneous coordinates, the rotation about the origin by an angle θ is given as

(x', y', 1)^T = [cos θ  -sin θ  0; sin θ  cos θ  0; 0 0 1] (x, y, 1)^T.

Fig. (c) Object at the origin; (d) object rotated at the origin.

Example 2: Perform a 45° rotation of a triangle ABC with coordinates A = (0, 0), B = (1, 1), C = (5, 2) about the origin.

Solution: We can represent the given triangle in matrix form, using the homogeneous coordinates of the vertices as columns:

[A B C] = [0 1 5; 0 1 2; 1 1 1].

The matrix of rotation is

R_45 = [√2/2  -√2/2  0; √2/2  √2/2  0; 0 0 1].

So the new coordinates A'B'C' of the rotated triangle can be found as

[A' B' C'] = R_45 [A B C] = [0  0  3√2/2; 0  √2  7√2/2; 1 1 1].

Thus A' = (0, 0), B' = (0, √2) and C' = (3√2/2, 7√2/2). Figure (a) shows the original triangle ABC and Figure (b) shows the triangle after the rotation.

Rigid body transformations are combinations of rotations and translations. A general rigid body transformation can be represented using homogeneous coordinates as

(x', y', 1)^T = [cos θ  -sin θ  t_x; sin θ  cos θ  t_y; 0 0 1] (x, y, 1)^T.

Fig. (e) Object at the origin; (f) object rotated and translated from its initial position.

The rigid body transformations preserve angles between vectors and lengths of vectors; therefore, parallel lines remain parallel after a rigid body transformation.

Example: Consider the triangle (1, 0), (0, 1), (1, 1). Rotate it by θ = 30° and then translate it by (t_x, t_y) = (1, 0):

[x1' x2' x3'; y1' y2' y3'; 1 1 1] = [cos 30°  -sin 30°  1; sin 30°  cos 30°  0; 0 0 1] [1 0 1; 0 1 1; 1 1 1]
= [1 + cos 30°   1 - sin 30°   cos 30° - sin 30° + 1; sin 30°   cos 30°   sin 30° + cos 30°; 1 1 1]
≈ [1.866  0.5  1.366; 0.5  0.866  1.366; 1 1 1].
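Both worked examples above can be verified in a few lines of NumPy. The following is our own sketch, with the vertices stored as homogeneous columns:

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Triangle of Example 2, one homogeneous column per vertex.
ABC = np.array([[0, 1, 5],
                [0, 1, 2],
                [1, 1, 1]], dtype=float)

# 45-degree rotation about the origin.
print(rotation(np.radians(45)) @ ABC)
# columns: A' = (0, 0), B' = (0, sqrt(2)), C' = (3*sqrt(2)/2, 7*sqrt(2)/2)

# Rigid-body motion of the last example: rotate by 30 degrees, then translate
# by (1, 0), i.e. x' = R x + t, which is the single homogeneous matrix above.
M = translation(1, 0) @ rotation(np.radians(30))
tri = np.array([[1, 0, 1],
                [0, 1, 1],
                [1, 1, 1]], dtype=float)
print(M @ tri)   # columns approx. (1.866, 0.5), (0.5, 0.866), (1.366, 1.366)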
8.6.2 Affine Transformations

Euclidean transformations do not change the shape of an object: lines transform to lines, planes to planes and circles to circles, and lengths and angles are preserved. Affine transformations are an extension of the Euclidean transformations which do not preserve lengths and angles. That is, under an affine transformation a circle may transform into an ellipse; a line, however, will still transform into a line. The important affine transformations are scaling and shear. Moreover, translation and rotation are also affine transformations, since affine transformations are an extension of the Euclidean transformations.

(i) Scaling

By scaling, the dimensions of an object are either compressed or expanded. A scaling transformation is carried out by multiplying the coordinate values of each vertex (x, y)^T by scaling factors S_x and S_y in the x- and y-directions respectively, to produce the transformed coordinates:

(x', y')^T = [S_x 0; 0 S_y] (x, y)^T.

Example: Consider the triangle ABC with coordinates A = (0, 0), B = (1, 0) and C = (1, 1). If the scaling factor along the X-axis is S_x = 4 and along the Y-axis is S_y = 1, calculate the new coordinates of the scaled triangle ABC.

In matrix form, the scaling transformation is

(x', y')^T = [4 0; 0 1] (x, y)^T,

so that A' = (0, 0), B' = (4, 0) and C' = (4, 1). Therefore, the scaled triangle A'B'C' has the coordinates A' = (0, 0), B' = (4, 0) and C' = (4, 1).

Fig. (a) The original triangle ABC; (b) the scaled triangle A'B'C'.

(ii) Shear

A transformation that slants the shape of an object is called a shear transformation. Shearing can be carried out in both the X- and Y-directions or in only one of them. The new coordinates after shearing in the X-direction are given by x' = x + sh_x·y, y' = y, and after shearing in the Y-direction by x' = x, y' = y + sh_y·x. Here sh_x and sh_y are the shearing factors along the X- and Y-directions respectively and are given as inputs. In matrix form,

(x', y')^T = [1 sh_x; sh_y 1] (x, y)^T.

Example: Consider the triangle ABC with coordinates A = (0, 0), B = (1, 0) and C = (1, 1). If the shearing factor along the X-axis is sh_x = 4 and along the Y-axis is sh_y = 1, calculate the new coordinates of the sheared triangle ABC.

In matrix form, the shear transformation is

(x', y')^T = [1 4; 1 1] (x, y)^T.

Then A' = (0, 0), B' = (1, 1) and C' = (5, 2). Therefore, the sheared triangle A'B'C' has the coordinates A' = (0, 0), B' = (1, 1) and C' = (5, 2).

Fig. (c) The original triangle ABC; (d) the sheared triangle A'B'C'.
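A quick NumPy check of the two examples above (our own sketch; the triangle and the factors are those of the examples):

import numpy as np

# Triangle A = (0,0), B = (1,0), C = (1,1) as homogeneous columns.
tri = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 1, 1]], dtype=float)

scale = np.array([[4, 0, 0],   # S_x = 4
                  [0, 1, 0],   # S_y = 1
                  [0, 0, 1]], dtype=float)
print(scale @ tri)   # columns: A' = (0,0), B' = (4,0), C' = (4,1)

shear = np.array([[1, 4, 0],   # sh_x = 4
                  [1, 1, 0],   # sh_y = 1
                  [0, 0, 1]], dtype=float)
print(shear @ tri)   # columns: A' = (0,0), B' = (1,1), C' = (5,2)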
General Affine Transformation

Using homogeneous coordinates, a general affine transformation is defined by a matrix of the form

A = [a b c; d e f; 0 0 1],

where the upper-left 2x2 sub-matrix [a b; d e] need not be a rotation matrix. An affine transformation preserves parallelism but does not preserve angles.

8.6.3 Projective Transformations

Projective transformations are the most general form of linear transformations in homogeneous coordinates. They preserve neither angles nor distances, and therefore not parallelism either: under a projective transformation, parallel lines do not remain parallel. The only property preserved is collinearity of points, so straight lines remain straight. A projective transformation in its most generic form in the homogeneous coordinate system is given by a non-singular matrix

H = [a b c; d e f; g h i].

8.7 SUMMARY

In this unit, we have studied an introduction to computer vision and the basic pin-hole camera model. Homogeneous coordinates were introduced, which help in defining the projection transformations in terms of matrices. We discussed perspective projection, orthographic projection and weak perspective projection. Geometric transformations are important for computer vision, and we discussed Euclidean, affine and projective transformations and their representation in homogeneous coordinates.

8.8 QUESTIONS AND SOLUTIONS

Q1. Which projection does the pin-hole camera represent?
Ans. 1 The pin-hole camera represents the perspective projection: it defines a mapping between the points on a 3D object and its image formed on the 2D plane.

Q2. What are the rigid body transformations?
Ans. 2 The rigid body transformations are translation and rotation. Translation is movement along a line, while rotation moves a point by an angle around a pivot point.

Q3. What are the three classes of transformations?
Ans. 3 The three classes of transformations are projective, affine and Euclidean. The projective transformation preserves only collinearity, the affine transformation preserves parallelism, while the Euclidean transformation preserves lengths and angles.

8.9 REFERENCES

[1] Richard Hartley and Andrew Zisserman, "Multiple View Geometry in Computer Vision", Second Edition, Cambridge University Press, April 2004.

UNIT 9 SINGLE CAMERA

Structure
9.1 Introduction
9.2 Objectives
9.3 Camera Models
9.4 Perspective Projection
9.5 Homography
9.6 Camera Calibration
9.7 Affine Motion Models
9.8 Summary
9.9 References

9.1 INTRODUCTION

In this unit, we shall study the various aspects of computer vision related to a single camera. This is important since in many applications we may have data from only a single camera to work with. In the previous unit, we discussed the pin-hole camera model, perspective projection and homogeneous coordinates, and saw how homogeneous coordinates help in representing the perspective projection as a matrix. In this unit, we shall discuss the pin-hole camera model and perspective projection in more detail, and also discuss the camera matrix. We shall then discuss the camera calibration process for a single camera, and the affine motion models. Finally, we shall summarise this unit in Section 9.8.

9.2 OBJECTIVES

The objectives of this unit are as follows:
• To learn about the camera model in detail.
• To understand the concept of the camera matrix.
• To understand the process of camera calibration.
• To understand the affine motion model.
• To give an overview of the aspects of computer vision related to a single camera and the estimation of 3D parameters from a single camera.

9.3 CAMERA MODELS

As discussed in Unit 8, a pin-hole camera performs a perspective projection of a 3D scene onto a 2D plane.

Figure 1: The pin-hole camera model [1].

Under the pin-hole camera projection, a 3D point X = (X, Y, Z)^T is mapped to a 2D point x = (x, y)^T on the image plane, where the line joining the camera centre C to the 3D point X intersects the image plane at x. Using homogeneous coordinates, we represent X = (X, Y, Z, 1)^T and x = (x, y, 1)^T, and therefore the pin-hole camera model can be represented as

(x, y, 1)^T ~ [f 0 0 0; 0 f 0 0; 0 0 1 0] (X, Y, Z, 1)^T.    ... (1)

In this simplified version, we have assumed that the origin of the world coordinate system is at the centre of projection (the pin-hole) and that f is the focal length of the camera, i.e., the distance from the camera centre (pin-hole/centre of projection) to the image plane. So that the image is not inverted, the image plane is taken to be in front of the pin-hole, although physically the image plane lies behind it.

Therefore, we see that in the pin-hole camera model a perspective projection occurs, and the image coordinates are related to the world coordinates by Equation (1) when the camera centre is at the origin of the world coordinate system. However, in real life it is not always possible to keep the camera (pin-hole/centre of the lens) at the origin of the world coordinate system. To bring the object into the camera's view, we may need to move the camera away from the origin. In Unit 8 we studied transformations; in the real world, we shall have to carry out a sequence of translations and rotations of the camera to bring the object of interest into its view.
In the next section, we shall discuss two important concepts: (a) the intrinsic parameters of a camera and (b) the extrinsic parameters of a camera. Before we discuss these parameters, we have to understand that the pixel coordinate system and the world coordinate system are related by the following physical parameters: (a) the size of the pixels, (b) the position of the principal point, (c) the focal length of the lens and (d) the position and orientation of the camera. The internal, or intrinsic, parameters of the camera define the relation between the pixel coordinates of a point on the image and the corresponding coordinates in the camera reference frame. The external, or extrinsic, parameters of the camera define the location and orientation of the camera coordinate frame with respect to a known world coordinate frame.

Questions to check your progress:
1. What is a pin-hole camera?
2. What does f, the focal length, represent in the pin-hole camera model?

9.4 PERSPECTIVE PROJECTION

A camera projects the 3D world onto a 2D image plane. This is a perspective projection and is represented by a 3x4 matrix whose left 3x3 sub-matrix is non-singular. We shall now see how the camera matrix encodes the camera parameters.

9.4.1 External/Extrinsic Parameters of a Camera

The external camera parameters define the relation between the known world coordinate frame and the unknown camera coordinate frame. The two coordinate systems are related by a rotation and a translation. Therefore, determining the external parameters of the camera implies: (a) finding the translation vector between the relative positions of the origins of the camera coordinate frame and the world coordinate frame, and (b) finding the elements of the rotation matrix that aligns the corresponding axes of the two frames. The extrinsic camera parameters thus help us relate the coordinates of a 3D point in the world coordinate system to its coordinates in the camera coordinate system.

Let R be the rotation matrix,

R = [r11 r12 r13; r21 r22 r23; r31 r32 r33],

and let T = (T_x, T_y, T_z)^T be the translation vector. Then, for a 3D point A with world coordinates A_w = (X_w, Y_w, Z_w)^T and camera coordinates A_c = (X_c, Y_c, Z_c)^T,

A_c = R (A_w - T),    ... (2)

i.e., X_c = R_1^T (A_w - T), Y_c = R_2^T (A_w - T), Z_c = R_3^T (A_w - T),

where R_i^T corresponds to the i-th row of the rotation matrix. Equation (2) gives the relation between the coordinates of a 3D point A in the camera coordinate system and the world coordinate system using the extrinsic camera parameters.

Questions for review of progress:
1. Define and describe the external parameters of a camera.
2. What does the rotation matrix in the camera external parameters represent?
3. What does the translation vector in the camera external parameters represent?
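Equation (2) is easy to exercise numerically. The sketch below (our own illustration; the rotation about the Z-axis and the translation are made-up values) maps a world point into the camera frame and confirms that each camera coordinate is one row of R dotted with (A_w - T):

import numpy as np

theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])   # rotation about the Z axis
T = np.array([2.0, 0.0, -5.0])   # world coordinates of the camera centre

A_w = np.array([3.0, 1.0, 4.0])  # a 3D point in the world frame
A_c = R @ (A_w - T)              # Equation (2): the same point in the camera frame
print(A_c)

# Z_c is the third row of R dotted with (A_w - T), as in the component form above.
print(R[2] @ (A_w - T))          # equals A_c[2]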
9.4.2 Internal/Intrinsic Parameters of a Camera

Equation (1) assumes that the coordinates in the image plane are measured with the principal point as the origin. As discussed in Unit 8, the principal axis, or principal ray, is the line from the camera centre that is perpendicular to the image plane, and the principal point is the point of intersection of the principal axis and the image plane. In general, the pin-hole camera model is a geometric model, implying that all points and the focal length are measured in millimetres, centimetres or metres. The camera coordinates, however, are measured in pixels, i.e., in terms of the height and width of a pixel (the sampling distance).

Therefore, the internal camera parameters characterise the following:
a) the perspective projection (focal length);
b) the relation between pixel size and image plane coordinates;
c) the geometric distortions introduced by the optics.

The relation between the camera coordinates and the image plane coordinates is given by the perspective projection:

x = f X_c / Z_c = f R_1^T (A_w - T) / R_3^T (A_w - T),
y = f Y_c / Z_c = f R_2^T (A_w - T) / R_3^T (A_w - T).    ... (3)

9.4.2.1 Relation between image plane coordinates and pixels

Let u_x and u_y be the coordinates of the principal point in pixels, and let s_x and s_y be the sizes of a pixel in the horizontal and vertical directions in millimetres. Then

x = (x_im - u_x) s_x  =>  x_im = x / s_x + u_x,
y = (y_im - u_y) s_y  =>  y_im = y / s_y + u_y,    ... (4)

where (x_im, y_im) are the pixel coordinates of the image of the 3D point. Hence, we can relate the pixel coordinates to the world coordinates from Equations (3) and (4). The matrix of intrinsic parameters, also known as the camera calibration matrix, is therefore

K = [f/s_x 0 u_x; 0 f/s_y u_y; 0 0 1],    ... (5)

and the matrix representing the external, or extrinsic, parameters of the camera is

[R|T] = [r11 r12 r13 T_x; r21 r22 r23 T_y; r31 r32 r33 T_z].

In homogeneous coordinates,

(x_im, y_im, 1)^T ~ K [R|T] (X_w, Y_w, Z_w, 1)^T.    ... (6)

P = K[R|T] is called the camera projection matrix; it is a 3x4 matrix, where K represents the intrinsic parameters and [R|T] the extrinsic parameters of the camera.

Questions for review of progress:
1. What do the internal parameters of a camera represent?
2. What role do the coordinates of the principal point play?
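The sketch below (our own, with made-up values) assembles a calibration matrix K and a projection matrix P and projects a world point to pixel coordinates. Note one detail we assume here: because the extrinsic model of Equation (2) maps A_w to R(A_w - T), the translation column of the 3x4 extrinsic matrix is taken as t = -RT, with T the camera centre in world coordinates:

import numpy as np

f, sx, sy = 0.008, 1e-5, 1e-5     # focal length and pixel sizes, in metres
ux, uy = 320.0, 240.0             # principal point, in pixels

K = np.array([[f / sx, 0,      ux],
              [0,      f / sy, uy],
              [0,      0,      1]])

R = np.eye(3)                     # camera axes aligned with the world axes
T = np.array([0.0, 0.0, -10.0])   # camera centre 10 m behind the world origin
t = -R @ T                        # so that R A_w + t = R (A_w - T), as in Eq. (2)
P = K @ np.hstack([R, t.reshape(3, 1)])   # the 3x4 camera projection matrix

X = np.array([0.5, 0.25, 5.0, 1.0])       # a homogeneous world point
x = P @ X
print(x[:2] / x[2])               # pixel coordinates, approx. (346.67, 253.33)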
9.5 HOMOGRAPHY

A planar homography is a transformation that relates two planes. A homography therefore relates two images of the same scene, i.e., it maps an image from one view to another. Given points in one image and the homography matrix H, we can find the corresponding points in the other image.

A transformation, or mapping, h: P^2 -> P^2 is a homography if and only if there exists a non-singular 3x3 matrix H such that for any point x in P^2, h(x) = Hx. An important property of a homography is that it maps collinear points to collinear points: if x1, x2 and x3 lie on a line, then h(x1), h(x2) and h(x3) also lie on a line.

Fig. 2: The projection maps points on one plane to points on another plane. These points are related by a homography H such that x' = Hx. (Figure taken from [1].)

9.5.1 Homography Estimation: Direct Linear Transformation Algorithm

We can estimate the homography between two images of the same scene under the assumption that the point correspondences are given and that the imaged scene is planar (or that the camera undergoes a pure rotation). Let x_i' <-> x_i be the given set of point correspondences. We need to compute the 3x3 homography matrix H such that, for each correspondence, x_i' ~ H x_i.

Each pair of corresponding points gives two independent equations. There are 8 unknowns: H has 9 entries but is defined only up to scale, so we may divide each entry of H by h_33 and fix the last entry to 1. Therefore, H has 8 degrees of freedom, and since each point correspondence gives 2 independent equations, at least 4 point correspondences are needed to find the 8 unknowns. However, in practice the point correspondences may not be free of noise, so a larger number of point correspondences is used to estimate the best solution.

Direct Linear Transformation Algorithm

Writing x_i = (x_i, y_i, w_i)^T and x_i' = (x_i', y_i', w_i')^T, the correspondence x_i' ~ H x_i gives two independent equations:

x_i'/w_i' = (h11 x_i + h12 y_i + h13 w_i) / (h31 x_i + h32 y_i + h33 w_i),
y_i'/w_i' = (h21 x_i + h22 y_i + h23 w_i) / (h31 x_i + h32 y_i + h33 w_i).

On simplifying, we get the equation A_i h = 0, where h = (h11, h12, ..., h33)^T and

A_i = [w_i' x_i   w_i' y_i   w_i' w_i   0   0   0   -x_i' x_i   -x_i' y_i   -x_i' w_i;
       0   0   0   w_i' x_i   w_i' y_i   w_i' w_i   -y_i' x_i   -y_i' y_i   -y_i' w_i].

By stacking all the equations of this form, we get the system of equations A h = 0. To obtain its solution, we compute the Singular Value Decomposition (SVD) [2] of A. If SVD(A) = U D V^T, where D is the diagonal matrix with the singular values arranged in descending order, then the solution h is the last column of V: for the system A h = 0, the solution is the singular vector corresponding to the smallest singular value.

Planar homography finds a wide range of applications, such as removing perspective distortion, creating panoramas, 3D reconstruction, etc.

Fig. 3: Four point correspondences suffice to remove the perspective distortion from a planar building facade. (Figure taken from [1].)

Questions for review of progress:
1. Planar homography helps in relating the points on one plane with the corresponding points on another plane. True or False?
2. Write out the complete equations to find the system A h = 0.
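The DLT algorithm above fits in a few lines of NumPy. The sketch below is our own illustration; it assumes inhomogeneous input points (w_i = w_i' = 1), generates five correspondences with a known homography, and recovers it up to scale:

import numpy as np

def estimate_homography(pts, pts_prime):
    """DLT estimate of H from n >= 4 correspondences x_i' ~ H x_i."""
    rows = []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.array(rows, dtype=float)
    # Solution of A h = 0: the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

# Check against a known homography (here a simple affine map).
H_true = np.array([[1.0, 0.2, 3.0], [0.1, 0.9, -1.0], [0.0, 0.0, 1.0]])
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [2, 3]], dtype=float)
pts_h = np.column_stack([pts, np.ones(len(pts))]) @ H_true.T
pts_prime = pts_h[:, :2] / pts_h[:, 2:]

H = estimate_homography(pts, pts_prime)
print(H / H[2, 2])   # matches H_true up to scale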
The 3D points should also be normalised such that centroid of the points is translated to the origin and root mean squared distance from the origin is 13. (This works well for the case when the variation in depth of the points is less such as in a calibration object). Step 2: Direct Linear Transform (DLT) Assuming that the camera matrix is represented by My, My Ms Mg M Moz Mag MagMog M3, Mgq_ Mag Mag, 232For each corresponding point, we have x, =MX, ‘Therefore, Xy X, x y_ | fm m2 ms mally’ ve} =MY OC" |=) mama ms mag || 7 w Tf Ls msa mss mse I} “7 Xa _ MiXy + mnaYw + mnisZy + mig a Th Mu FM TE my Xp + M32Vy + My3Zy + 3g ya the mg Xwy + magVy + m3Zy + M24 Ww m3) Xy + my7¥y + mg3Zy, + M34 Therefore, each point correspondence gives us two independent equations. Therefore, Ag = [Whe Wwe wi 0 0 0 O- Xe —Xnle —XnZw— Xn LO 0 0 OWN, WYWZy W—YnXw —YakwtnZw —Yn and, We can write by stacking 4y for cach corresponding point, we generate the 2nx12 matrix A, such that Am = 0, where m is the vector that contains the entries of the matrix M, The solution of Am = 0, subject to ||zl| = 1 is then obtained from the unit singular vector of A corresponding to the smallest singular value of A. We use Singular Value Decomposition (SVD) to find the solution as in the case of homography computation. This gives us the linear solution for M, which is used as an initial estimate of M. Step 3: The measurement errors need to be reduced. Therefore, we minimize the geometric error ¥, d(x;,MX,)? over M using the linear estimate as the starting point and an iterative algorithm such as Levenberg-Marquardt Single CameraComputer Vision-1 234 Step 4: De-normalization: Finally, the camera matrix for the original (unnormalized) coordinates is obtained as M'= T"'MU Therefore, M'is the camera matrix. Using QR-decomposition{2], the camera matrix can be decomposed as M' = MinM,eey Where, Ming consists of the internal parameters of the camera and is an upper-triangular matrix and M,. is the matrix of external parameters of the camera. Questions for review: 1, What are the minimum number of point correspondences between 3D points and 2D points required for camera calibration? 2. Write down the equations needed to set up the system Am = 0. 9.7__ AFFINE MOTION MODE We discussed the Affine transformation in Unit 8. To recall, an affine transformation is a geometric transformation that preserves lines and parallelism. In general, an affine projection is a combination of a linear mapping + translation in homogeneous coordinates. That is, a [1 G2 15) x] 6) a [ee a2, al [| * fl =e4 where, bdenotes the projection of the world origin. ‘We now define an affine camera, 9.7.1 Affine Camera An affine camera is the camera whose projection matrix M has the last row of the form (0,0,0,1). An important property of the affine camera is that it maps the points at infinity to points at infinity. Therefore, an affine camera is a camera at infinity, implying that the camera center lies on the plane at infinity. The affine camera preserves parallelism. As we calibrate a projective camera, similarly we can estimate an affine camera also, An affine camera matrix is a 3x4 projection matrix with the last row given as (0,0,0,1). In general, the affine motion model is used for approximating the flow pattern of the camera motion in a video.9.8 SUMMARY In this unit, we have discussed the single camera geometry, including the camera model, perspective projection, the camera parameters, homography, camera calibration and the affine motion model. 
The camera parameters are of two types, the internal camera parameters and the external camera parameters which together form the camera matrix. We also discussed how image to image homography can be computed with the knowledge of sufficient number of corresponding points. We also discussed that if sufficient number of world to image corresponding points are known then the camera can be calibrated and the projection matrix can be estimated. This can be done easily with a calibration object. We also discussed the affine transformations, the affine camera, and the affine motion model 9.9 | REFERENCES [1] Richard Hartley and Andrew Zisserman “Multiple View Geometry in Computer Vision”, Second Edition, Cambridge University Press, 2004. [2] Gene H. Golub and Charles F. Van Loan, “Matrix Computations”, Third Edition, The John Hopkins University Press, 1996. Single Camera 235236 UNIT 10 MULTIPLE CAMERA Structure Page No. 10.1 Introduction 236 10.2. Objectives 236 10.3: Stereo Vision 237 10.4 Point correspondences 237 10.5 Epipolar Geometry 238 10.6 Motion: Optical Flow 240 10.7 Summary 242 10.8 Solutions / Answers: 242 10.1_ INTRODUCTION In the previous unit, we learnt about the camera models and how an image is created, We also learnt that there is loss of depth when a picture is taken by a single camera. In this unit, we shall see how we can recover the depth of a scene when there are multiple cameras. In this course, we shall study about the case when there are two cameras, that is also known as Stereo Vision. There are more limitations of a single camera system, Apart from loss of depth information, a camera is a sensor that has a fixed view volume. This implies that there is a range of scene that a single camera can see. Therefore, if the object of interest moves out of the yiew volume, it can no longer be observed by the camera. This may cause issues in applications like security and surveillance in wide areas. Moreover, as we saw in the study of the camera model, we can only see what lies between the camera lens and the image plane, implying that if the object of interest lies behind another object, then occlusion occurs and we are not able to view the object of interest. However, if there was another camera that could view the object of interest from another view, then we will be able to observe the object of interest despite occlusion from one view. Therefore, multiple camera systems have many advantages over single camera systems. However, this also leads to questions such as how many cameras are enough, where should the cameras be placed and how much overlap should the cameras have between their views. The answers to these questions are dependent on factors such as the cost of equipment, the type of cameras used and the application of the camera system, 10.2 OBJECTIVES The objectives of this unit are to : ‘+ Learn about the stereo vision system. ‘© Discuss the concept of point correspondences and epipolar geometry. ‘© Discuss the concept of motion and optical flow that allows a computer vision system to find the movement pattern in a video.10.3 STEREO VISION A stereo vision system consists of two cameras such that both the cameras can capture the same scene, however with some disparity between the views. ‘One stereo vision system that we can easily relate to are the two eyes that we have. The process of using two images of the scene captured by a stereo camera system to extract 3D information is known as stereo vision. 
Since the stereo pair, or the images taken by the stereo system, enable us to get the 3D information of the scene, therefore, it finds wide application in autonomous navigation of robots, autonomous cars, virtual and augmented reality, ete. In stereo vision, the 3D information is obtained by estimating the relative depth of the points in the scene. The corresponding points in the stereo pair are matched and a disparity map is created which helps in estimating the depth of the points. We shall first understand the concept of corresponding points, 10.4 POINT CORRESPONDENCES Fig10.1 The figure shows the concept of a stereo pair. C and C’ are the two camera centers and x and x’ are the images of the 3D point X in the corresponding image planes. x and x’ are said to be corresponding points. Fig taken from [1] ‘As shown in Figure 10.1, the stereo pair consist of two cameras that are at a certain distance apart, The line joining the camera centers is known as the baseline. The view volume of the two cameras is such that they view a ‘common area in the scene. A 3D point that lies in the common view volume of the two cameras will be imaged by both the cameras. Therefore, as shown in Figure 10.1, the 3D point X is visible to both the cameras and therefore, it has an image x in Camera | (with camera center C) and image x’ in Camera 2 (with camera center C’). Therefore, x and x" are called corresponding points. ‘Multiple Camera 237Computer Vision-1 238 Given a point in one image, we can find it corresponding point in the other image because of epipolar geometry which we shall study next. 10.5 EPIPOLAR GEOMETRY @ ) Figure 10.2. (a) (b) Epipolar geometry (Image taken from [2]) Epipolar geometry describes the geometric relation between two views of the same scene captured by a stereo vision system. In Figure 10.2 (a), it can be seen that the two cameras (Cy and C,) are related by a rotation (R) and translation (¢), implying that the relative pose of the cameras are an inherent part of epipolar geometry. The 3D point pis imaged in both the image planes In and 1s, In Jo, the image of p is denoted by x9 and in /:, the image of p i denoted by xy. It can also be seen that the two camera centres, Co and C; and the 3D point p are coplanar. This plane JZ is known as the Epipolar plane. Moreover, the ray joining Co and xgextends towards infinity pz and is intersected by the ray joining C; and x, at the 3D point p. Therefore, we can see that in the case of stereo vision, it is possible to find the 3D point if it is imaged by both the cameras and the point correspondences are known. ‘The line joining the two camera centres Co and C1 is known as the baseline. The baseline intersects the two image planes at @y and e, respectively. The point of intersection of the baseline with an image plane is known as an epipole. Therefore, eg is the left epipole and e, is the right epipole. The left epipole eg is the image of the camera center Cy in the left image plane Ip while, the right epipole e, is the image of the camera centreC, in the right image planel,. The epipolar line is the intersection of the epipolar plane with the image plane. An important point to be noted is that every epipolar line passes through the epipole, therefore, the point of intersection of all the epipolar lines is the epipole. 
Another point to be noted is that the epipolarline 1, in Z,is the image of the back projected ray joining the camera centreCy with the 3D pointp, while the epipolar lime Up in Ig is the image of the back projected ray, that is the ray from the camera centreC,with the 3D point p Therefore, given xq, the corresponding point x, in the second image is constrained to lie on the corresponding epipolar line. The Fundamental matrix, F, represents the epipolar geometry algebraically. An important point to be noted is that the fundamental matrix defines the relation between corresponding points as given by Equation 10.1 xiF x =0 (10.1) More precisely, the fundamental matrix F maps a given point xq rom the first image to the epipolar line I,in the second image, 1,= Fxgand since the corresponding point x,0f xp in the second image lies on L, implies Equation 10.1 is satisfied. Since x71, = 0, and 1,= Fxg therefore] F x5 F is a matrix of size 3x3 with rank 2 and therefore, if we have at least 8 correspondences, then F can be computed from the point correspondences. An important property of the fundamental matrix is that I= Fary and Ig= Fx, Moreover, since the epipole lies on the epipolar lines, therefore, Fey = Oand e[ F = Solving for the Fundamental Matrix Given point correspondences, x; and x} we can setup a system of equations using Equation (1) to solve for F. xj" F x, = Gimplies fr ow [fa fie fs fax fia fra, e: which gives Equation 10.3 on solving xthorit fav t+ Xifis + Views + Yiheyi + Vifia + faaxit fay t fa3 = 0 fio fia fis\p% pl-e (10.2) 1 (10.3) ‘Multiple Camera 239Computer Vision-1 240 If we consider msuch correspondences, then we can solve the system of ‘equation for the 9 unknowns in F. Te BY MT MH OM eh Mm 1) I fy mith mh, Em Yn Yat Ym Bn th LS | Fe (10.4) This system requires atleast 8 points to solve since F has 8 degrees of freedom. Therefore, this is also known as the 8-point algorithm. We use Singular Value Decomposition (SVD) to solve the system of equations in Equation 10.4.If the cameras are calibrated and the matrices KandK'of internal parameters are known then, we can use the fundamental matrix, F, to compute the essential matrix, E, given by Equation 10.5. E=K'FK (10.5) ‘The Essential matrix, E, is used to determine the camera pose ie. the positioning and alignment of camera. 10.6 MOTION In our daily lives, we perceive, understand and predict motion rather easily. Motion perception is very strong in the human vision system. Motion or perception of motion during the imaging process can be caused by some of the following reasons: The camera is static but the object is moving, ‘The camera is moving but the object is static. Both the camera and objects are moving The camera is static but light source and objects are moving, ae S Motion perception plays an important role in applications of computer vision such as activity recognition, surveillance, 3D reconstruction, and many others. Therefore, it is important to estimate the motion from images. 10.6.1 Optical Flow Estimating motion of the pixels in a sequence of images or videos has a very large number of applications. Optical flow is used to compute where the motion information between sequence of images. It helps to determine a dense point to point correspondence between pixels of an image at time twith the pixels in the image at time t1. Therefore, optical flow is said to Multiple Camera compute the motion in a video or sequence of images. 
10.7 SUMMARY

In this unit, we learned about various concepts related to multiple-camera models required for computer vision. The concept of stereo vision was discussed and extended to point correspondences and epipolar geometry; finally, the unit covered the motion-oriented concepts, including optical flow.

10.8 QUESTIONS AND SOLUTIONS

Q1. What is stereo vision?
Sol. Refer to Section 10.3.
Q2. Describe the concept of point correspondence.
Sol. Refer to Section 10.4.
Q3. Discuss the term epipolar geometry.
Sol. Refer to Section 10.5.
Q4. Explain optical flow in the context of motion perception in computer vision.
Sol. Refer to Section 10.6.

10.9 REFERENCES

[1] Richard Hartley and Andrew Zisserman, "Multiple View Geometry in Computer Vision", Second Edition, Cambridge University Press, 2004.
[2] Richard Szeliski, "Computer Vision: Algorithms and Applications", Springer.
[3] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", DARPA Image Understanding Workshop, 1981, pp. 121-130 (also IJCAI'81, pp. 674-679).
[4] B. K. P. Horn and B. G. Schunck, "Determining Optical Flow", Artificial Intelligence 17, 1981, pp. 185-204.