(AoPS Competition Preparation) Richard Rusczyk - Sandor Lehoczky - The Art of Problem Solving, Volume 2 - and Beyond. 2-AoPS Incorporated (2003) PDF
1. Show that 2”? = 1. (This problem originally appeared on a contest used to determine the Chinese national team.) (MOP)

the ART of PROBLEM SOLVING: Volume 2

the BIG PICTURE

Carl Friedrich Gauss was, among his many other achievements, one of the primary popularizers of complex numbers. One of the discoveries of which Gauss himself was proudest was the constructibility of the regular 17-gon. The ancient Greeks were able to construct the equilateral triangle and the regular pentagon, but no other regular polygons with a prime number of sides; Gauss was at last able to extend this repertory. Given a segment of length 1, Gauss knew that any integer-length segment could be constructed. Moreover, the sum, difference, and quotient of two segments can be constructed, and the square root of a segment can be constructed. Thus any segment whose length is an expression made up of sums, differences, quotients, and square roots of integers can be constructed. As a simple example of such an expression, Gauss used the 17 seventeenth roots of unity to show that
\[
\cos\frac{360^\circ}{17} = -\frac{1}{16} + \frac{1}{16}\sqrt{17} + \frac{1}{16}\sqrt{34-2\sqrt{17}} + \frac{1}{8}\sqrt{17 + 3\sqrt{17} - \sqrt{34-2\sqrt{17}} - 2\sqrt{34+2\sqrt{17}}}.
\]
Once he could construct a segment of length cos 360°/17, Gauss could construct the point (cos 360°/17, sin 360°/17) by laying off cos 360°/17 along the x axis and drawing a perpendicular to the x axis at that point. The intersection, P, of this line and the unit circle has polar angle 360°/17; copying the angle between the positive x axis and OP (O is the origin) seventeen times around the unit circle provides the seventeen vertices of a 17-gon. Gauss's construction of the 17-gon is one of the most compelling examples of the geometry of complex numbers, and Gauss asked that his tombstone be made in the shape of this wonderful, constructible polygon.

Chapter 10

Vectors and Matrices

10.1 What is a Vector?
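The nested radicals in Gauss's expression are easy to mistranscribe, so here is a quick numerical sanity check. No code appears in the original text; this Python sketch is purely illustrative, evaluating the closed form and comparing it with cos 360°/17 computed directly.

```python
import math

# Evaluate Gauss's closed form for cos(360°/17) from nested square roots.
s17 = math.sqrt(17)
r1 = math.sqrt(34 - 2 * s17)
r2 = math.sqrt(34 + 2 * s17)
gauss = (-1 + s17 + r1 + 2 * math.sqrt(17 + 3 * s17 - r1 - 2 * r2)) / 16

direct = math.cos(2 * math.pi / 17)  # 360°/17, in radians
print(gauss, direct)  # the two values agree to machine precision
```

The agreement of the two printouts is exactly the point of Gauss's result: a value defined by trigonometry equals a value built only from integers, field operations, and square roots.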
A vector is simply an arrow from one point to another. For example, at right we have drawn some vectors in two dimensions (2D). Vectors can also be in three or, abstractly, even more dimensions. A vector is typically given a variable-type name, like v, and is denoted by v⃗. The base point of a vector is called the tail and the end of the arrow the head. The length of the vector v⃗ (the distance from tail to head) is denoted by ‖v⃗‖. A vector is typically regarded as depending only on its length and direction; the location of the starting point of the arrow is immaterial.

Since the starting point doesn't matter, we can "add" two vectors by moving the tail of one vector to the head of the other, as at left, where the boldface vector is the sum of the other two. We can easily verify that vector addition defined in this way is commutative, so that v⃗ + w⃗ = w⃗ + v⃗. Just draw the two additions and note that the two copies each of v⃗ and w⃗ are parallel, as at right.

Using our definition of vector addition we can quickly expand to multiplication by a positive real number: the vector cv⃗ for c a positive real is the vector in the same direction as v⃗ but with length c‖v⃗‖. (Convince yourself that this makes sense.) The vector 0⃗ is defined to be the vector with length zero. Similarly, the vector −v⃗ is just the vector with the same length as v⃗, but in the opposite direction. This way we get v⃗ + (−v⃗) = 0⃗, as we would normally expect. We can then define w⃗ − v⃗ = w⃗ + (−v⃗), and we see from the diagram that w⃗ − v⃗ is the vector that runs from the head of v⃗ to the head of w⃗. This should not be a surprise, since v⃗ + (w⃗ − v⃗) = w⃗.

10.2 The Dot Product

The length of the vector v⃗ − w⃗ can be found using the law of cosines. Since v⃗, w⃗, and v⃗ − w⃗ form a triangle whose sides have lengths ‖v⃗‖, ‖w⃗‖, and ‖v⃗ − w⃗‖, we have
\[
\|\vec v - \vec w\|^2 = \|\vec v\|^2 + \|\vec w\|^2 - 2\|\vec v\|\|\vec w\|\cos\theta, \qquad (10.1)
\]
where θ is the angle between v⃗ and w⃗. The expression ‖v⃗‖‖w⃗‖ cos θ is called the dot product of v⃗ and w⃗; it is denoted by v⃗ · w⃗. We can then write (10.1) as
\[
\|\vec v - \vec w\|^2 = \|\vec v\|^2 + \|\vec w\|^2 - 2\,\vec v\cdot\vec w,
\]
so that the dot product is given explicitly as
\[
\vec v\cdot\vec w = \frac{\|\vec v\|^2 + \|\vec w\|^2 - \|\vec v-\vec w\|^2}{2}.
\]
We can establish certain nice properties of the dot product.

1. v⃗ · w⃗ = w⃗ · v⃗. (The dot product is commutative.)
2. v⃗ · w⃗ = 0 for nonzero v⃗ and w⃗ if and only if v⃗ and w⃗ are perpendicular.
3. (cv⃗) · w⃗ = c(v⃗ · w⃗) for any real number c.
4. u⃗ · (v⃗ + w⃗) = u⃗ · v⃗ + u⃗ · w⃗. (The dot product is distributive.)

EXAMPLE 10-1 Prove property 1 above.

Proof: Let θ_{v,w} be the angle from w⃗ to v⃗, so that θ_{w,v} = −θ_{v,w}. Now we just write
\[
\vec v\cdot\vec w = \|\vec v\|\|\vec w\|\cos\theta_{v,w} = \|\vec w\|\|\vec v\|\cos(-\theta_{w,v}) = \|\vec w\|\|\vec v\|\cos\theta_{w,v} = \vec w\cdot\vec v,
\]
where we have used the fact that cos(−θ) = cos θ.

EXERCISE 10-1 Prove properties 2 and 3 above.

Property 4 is proved using coordinates in the next section (see Example 10-2). Properties 3 and 4 mean that the dot product is linear.

The use of vectors as abstract "arrows" is most useful in vector geometry, which we do not treat until Chapter 12. In the next section we will begin to examine vectors in a particular coordinate system, which is more pertinent to elementary problems.

10.3 Coordinate Representation of Vectors

The standard way to represent vectors in a coordinate system is to define an origin and place the tails of our vectors there. We can then use regular rectangular coordinates with the given origin at the center, as at right. We associate a vector with the coordinates of its head; if the head coordinates are (x, y), the vector is represented as
\[
(x \;\; y) \quad\text{or}\quad \begin{pmatrix} x \\ y \end{pmatrix}.
\]
The former representation is called a row vector and the latter a column vector. We'll generally use row vectors because they take up less space.

The power of the coordinate representation comes from the fact that we can use regular coordinate techniques. For example, the vector sum (x₁ y₁) + (x₂ y₂) is just ((x₁ + x₂) (y₁ + y₂)).

EXERCISE 10-2 What is the length of the vector (2 3)?

In rectangular coordinate form, the dot product has a nice form. Consider two vectors v⃗ = (x₁ y₁) and w⃗ = (x₂ y₂) which form angles θ₁ and θ₂ with the positive x axis.
Their dot product is found using polar coordinates:
\[
\begin{aligned}
\vec v\cdot\vec w &= \|\vec v\|\|\vec w\|\cos(\theta_1 - \theta_2) \\
&= \|\vec v\|\|\vec w\|(\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2) \\
&= (\|\vec v\|\cos\theta_1)(\|\vec w\|\cos\theta_2) + (\|\vec v\|\sin\theta_1)(\|\vec w\|\sin\theta_2) \\
&= x_1x_2 + y_1y_2.
\end{aligned}
\]

EXAMPLE 10-2 Prove that the dot product is distributive.

Proof: Let u⃗ = (x₀ y₀), v⃗ = (x₁ y₁), and w⃗ = (x₂ y₂). We have
\[
\vec u\cdot(\vec v+\vec w) = x_0(x_1+x_2) + y_0(y_1+y_2) = (x_0x_1 + y_0y_1) + (x_0x_2 + y_0y_2) = \vec u\cdot\vec v + \vec u\cdot\vec w,
\]
as desired.

We can easily extend the coordinate representation into three (or more) dimensions. The dot product in three dimensions (3D) is x₁x₂ + y₁y₂ + z₁z₂, and a similar expression holds in higher dimensions, even though we run out of letters.

EXERCISE 10-3 Show that (1 17 −3 2) is perpendicular to (−6 1 5 2).

The notion of vector addition gives us a swift, nice proof of the Triangle Inequality for complex numbers (page 90). If we view v⃗ and w⃗ as complex numbers v and w, so that v⃗ = (v₁ v₂) = v₁ + v₂i and ‖v⃗‖ = |v|, the graph at right represents the addition v + w in the complex plane. Since the vectors v⃗, w⃗, and v⃗ + w⃗ form either a triangle or a straight line (when v⃗ and w⃗ are in the same direction), we have ‖v⃗‖ + ‖w⃗‖ ≥ ‖v⃗ + w⃗‖ (since ‖v⃗‖, ‖w⃗‖, and ‖v⃗ + w⃗‖ are the sides of a triangle, which may be degenerate). In complex number notation, this becomes |v| + |w| ≥ |v + w|, and the equality condition v = cw follows from the observation that ‖v⃗‖ + ‖w⃗‖ = ‖v⃗ + w⃗‖ only if v⃗ and w⃗ are in the same direction, so that v₁ = cw₁ and v₂ = cw₂.

10.4 What is a Matrix?

To understand what a matrix is, we will solve the following standard problem: Given a point (x, y), what is the new point obtained after rotating (x, y) by an angle θ in the plane?

We can do this using polar coordinates as we did on page 49. Here, we will rotate the point, not the axes, through an angle of θ counterclockwise. If the polar coordinates of the point are initially (r, α), then after the rotation the coordinates are (r, α + θ). Then we can go back to rectangular coordinates.
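The coordinate formula v⃗ · w⃗ = x₁x₂ + y₁y₂ and its higher-dimensional analogue can be sketched in a few lines of Python. This is an illustration only (no code appears in the original text); the four-dimensional pair is the one from Exercise 10-3.

```python
def dot(v, w):
    """Dot product of two coordinate vectors of the same dimension."""
    if len(v) != len(w):
        raise ValueError("vectors must have the same dimension")
    return sum(a * b for a, b in zip(v, w))

print(dot((2, 3), (2, 3)))                 # 13, the squared length of (2 3)
print(dot((1, 17, -3, 2), (-6, 1, 5, 2)))  # 0: the Exercise 10-3 vectors are perpendicular
```

Note that the same one-line sum works in any dimension, which is the point of the remark about "running out of letters."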
Let's call the new coordinates (x′, y′); thus, we find
\[
\begin{aligned}
x' &= r\cos(\alpha+\theta) = r(\cos\alpha\cos\theta - \sin\alpha\sin\theta) \\
y' &= r\sin(\alpha+\theta) = r(\sin\alpha\cos\theta + \cos\alpha\sin\theta).
\end{aligned}
\]
Since r cos α = x and r sin α = y, we can write these in terms of the original coordinates:
\[
\begin{aligned}
x' &= x\cos\theta - y\sin\theta \\
y' &= x\sin\theta + y\cos\theta. \qquad (10.3)
\end{aligned}
\]
(Notice that these equations are different from those on page 49. This is because here we are rotating the point counterclockwise rather than the axes. We can see that the above equations are the same as those by noting that θ on page 49 equals −θ here.)

We have boiled the rotation down to a function from the old coordinates to the new coordinates. The functions for x′ and y′ are linear in x and y, so they are completely specified by four coefficients; we can use this to develop an efficient notation for the transformation. If we write the old and new points as the vectors \(\binom{x}{y}\) and \(\binom{x'}{y'}\), then we can encapsulate the information in the transformation equations (10.3) with the form
\[
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \qquad (10.4)
\]
Compare this closely to (10.3). The object by which we "multiply" the vector \(\binom{x}{y}\) is a matrix; each item in it is called an entry. Each entry corresponds to a particular coefficient in equations (10.3), as a comparison shows. The upper left entry is the coefficient from x to x′, the upper right is from y to x′, the lower left is from x to y′, and the lower right is from y to y′.

We have gone from regarding the transformation (10.3) as transforming each coordinate according to an equation, to seeing it as the application of a transformation matrix to a vector. Let us examine the method by which, given only the right side of (10.4), we can get the left. In other words, given only the initial vector and the transformation matrix, how do we compute the transformed vector?

The answer is quite simple. To compute the first element of the new vector, we go across the first row of the matrix, multiplying the elements of the row by the corresponding elements of the vector.
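Equations (10.3) translate directly into code. The following Python sketch is illustrative (not part of the original text); it rotates the point (1, 0) a quarter turn counterclockwise.

```python
import math

def rotate(point, theta):
    """Rotate (x, y) counterclockwise by theta radians, per equations (10.3)."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y = rotate((1.0, 0.0), math.pi / 2)  # quarter turn of the point (1, 0)
print(x, y)  # approximately (0, 1)
```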
The resulting sum is the first element of the new vector. To compute the second element, we do the same with the second row of the matrix.

EXAMPLE 10-3 Let's carry out the process described above to multiply the vector \(\binom{a}{b}\) by the matrix \(\begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix}\). The first element of the product will be found by going across the top row of the matrix and down the vector to get (2)(a) + (3)(b). The second element is formed with the second row of the matrix, going down the vector as before to get (4)(a) + (5)(b). Thus we have
\[
\begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2a+3b \\ 4a+5b \end{pmatrix}.
\]

EXERCISE 10-4 Find the product Gs %) ().

EXERCISE 10-5
i. What is the effect of the matrix ( ') on a general vector? Geometrically, what kind of a transformation is this?
ii. Find the matrix which takes any vector to itself. This is called the identity matrix.
iii. Can you guess what matrix multiplication looks like for 3D vectors?

Matrices are often named with letters, like vectors. Usually these are underlined, so the matrix named A is written A. It is easy to verify that matrix multiplication of a vector is linear, meaning that A(v⃗ + w⃗) = Av⃗ + Aw⃗ and A(cv⃗) = cAv⃗.

EXERCISE 10-6 Verify that matrix multiplication of a vector is linear.

10.5 Matrix Multiplication

Once we understand the role of matrices as transformations of vectors, we can ask: what happens when we apply two transformations in a row? Say we have the vector \(\binom{1}{1}\), and we apply first the matrix \(\begin{pmatrix} 3 & -4 \\ 5 & -6 \end{pmatrix}\), then the matrix \(\begin{pmatrix} -1 & 2 \\ 3 & -4 \end{pmatrix}\). The result will be
\[
\begin{pmatrix} -1 & 2 \\ 3 & -4 \end{pmatrix}\left[\begin{pmatrix} 3 & -4 \\ 5 & -6 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right].
\]

EXERCISE 10-7 If you still feel uncomfortable with multiplying a vector by a matrix, evaluate the above product explicitly.

Applying the matrices one at a time is fine; but suppose we wish to consider the two transformations as one, composite transformation? That is, we want to find a matrix C such that
\[
\begin{pmatrix} -1 & 2 \\ 3 & -4 \end{pmatrix}\left[\begin{pmatrix} 3 & -4 \\ 5 & -6 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}\right] = C\begin{pmatrix} x \\ y \end{pmatrix}.
\]
We shall define "matrix multiplication" so that the above holds with C the product of the two matrices. How shall we define this product matrix C?
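The row-times-vector recipe just described can be sketched as a short Python function. This is an illustration, not part of the original text; the check below is Example 10-3 with a = 1, b = 2.

```python
def mat_vec(matrix, vector):
    """Go across each row of the matrix, pairing entries with the vector."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Example 10-3 with a = 1, b = 2: (2 3; 4 5)(1; 2) = (2a+3b; 4a+5b)
print(mat_vec([[2, 3], [4, 5]], [1, 2]))  # [8, 14]
```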
In general we have
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}\left[\begin{pmatrix} e & f \\ g & h \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}\right] = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} ex+fy \\ gx+hy \end{pmatrix} = \begin{pmatrix} a(ex+fy) + b(gx+hy) \\ c(ex+fy) + d(gx+hy) \end{pmatrix} = \begin{pmatrix} (ae+bg)x + (af+bh)y \\ (ce+dg)x + (cf+dh)y \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.
\]
We shall thus define the product \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} e & f \\ g & h \end{pmatrix}\) to be the matrix \(\begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}\); then everything will work nicely. If this seems like a rigged game, it is! We have chosen matrix multiplication so that it means something we want.

Observe that the way to do matrix multiplication is the same as when we multiplied a vector by a matrix. In fact, if we consider only the first column of \(\begin{pmatrix} e & f \\ g & h \end{pmatrix}\) and the first column of the product, we have matrix multiplication of a vector; the same is true for the second columns.

EXAMPLE 10-4 Find \(\begin{pmatrix} -1 & 2 \\ 3 & -4 \end{pmatrix}\begin{pmatrix} 3 & -4 \\ 5 & -6 \end{pmatrix}\).

Solution: To get the top left entry in the product, we go across the top row of the first matrix and down the left column of the second, to get (−1)(3) + (2)(5) = 7. To get the top right entry, we go across the top row of the first matrix and down the right column of the second, getting (−1)(−4) + (2)(−6) = −8. To get the bottom left, we go across the bottom row of the first matrix and down the left column, getting (3)(3) + (−4)(5) = −11. To get the bottom right, we use the bottom row of the first matrix and the right column of the second, to get (3)(−4) + (−4)(−6) = 12. Thus the product is
\[
\begin{pmatrix} 7 & -8 \\ -11 & 12 \end{pmatrix}.
\]

EXERCISE 10-8 Verify that column by column, matrix multiplication of matrices looks just like matrix multiplication of vectors.

EXAMPLE 10-5 By equation (10.4), a 90° counterclockwise rotation is given by \(\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\), and a 180° rotation by \(\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\). Doing one and then the other gives the transformation
\[
\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.
\]
To find the upper left entry in the product, go across the first row of \(\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\) and down the first column of \(\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\) to get (−1)(0) + (0)(1) = 0. For the upper right, go across the same row of the first matrix, but down the second column of the second, to get (−1)(−1) + (0)(0) = 1.
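The entry-by-entry recipe generalizes to a few lines of Python. This sketch is not from the text; it is checked against Example 10-4.

```python
def mat_mul(A, B):
    """Entry (i, j) of the product: go across row i of A and down column j of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Example 10-4: (-1 2; 3 -4)(3 -4; 5 -6)
print(mat_mul([[-1, 2], [3, -4]], [[3, -4], [5, -6]]))  # [[7, -8], [-11, 12]]
```

The `zip(*B)` trick iterates over the columns of B, which mirrors the "down the column" step in the text.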
For the bottom entries we do the same thing, but going across the second row of the first matrix. The result is
\[
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]

EXERCISE 10-9 To what rotation does the product above correspond? Is this what you would expect?

EXERCISE 10-10 Evaluate \(\begin{pmatrix} 2 & -3 \\ 3 & 3 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}\).

EXERCISE 10-11 Geometrically, what do you get when you reflect through the x axis and then through the y axis? Show you get the right result in matrices by multiplying the matrix for x reflection, \(\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\), by the matrix for y reflection, \(\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\).

Since matrix multiplication is associative, A(BC) = (AB)C. We usually just write ABC and do the multiplication in whichever order we want.

WARNING: It seems sensible that matrix multiplication would be commutative, so that AB = BA. However, this is NOT true!

EXERCISE 10-12 Show that matrix multiplication is not commutative by finding a simple counterexample.

Since we can multiply matrices, we can also take them to positive integral powers, just writing AAAA as A⁴, for example.

EXERCISE 10-13 Write down a matrix A for a rotation by 60°. Find A⁶ without any computation.

10.6 Matrices in Higher Dimensions

A 2 × 2 matrix has been used to represent a transformation from one 2D vector to another. If you thought about Exercise 10-5, you may have figured out how to extend this to 3D vectors. In three-dimensional space each point has an x and y coordinate as in 2D, but also has a z coordinate to denote its distance above or below the xy plane. The positive x, y, and z axes are situated as shown. Here a transformation from a vector (x y z) to a vector (x′ y′ z′) has the form x′ = ax + by + cz, etc. There are nine coefficients to the transformation: that from x to x′, from y to x′, from z to x′, from x to y′, etc. (Compare this to the discussion of 2 × 2 matrix entries on page 104.) We represent the coefficients in exactly the same way as for 2D matrices.

EXERCISE 10-14 Write down the 3 × 3 identity matrix.
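The idea behind Exercise 10-13, that six 60° rotations compose to a full turn, can be confirmed numerically. This Python sketch is ours, not the book's, and of course the exercise asks you to see the answer without any computation.

```python
import math

def mul(A, B):
    """2 x 2 matrix product: rows of A against columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A rotation by 60 degrees; six of them should compose to a full turn.
t = math.pi / 3
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
P = [[1, 0], [0, 1]]  # start from the identity
for _ in range(6):
    P = mul(A, P)
print(P)  # numerically the identity matrix
```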
EXAMPLE 10-6 We can easily write down the 3 × 3 matrix for a rotation by angle θ about the z axis. Clearly, the new z coordinate is the same as the old z coordinate, so the coefficient from z to z′ is 1, while the coefficients from x and y to z′ are 0 (no contribution). Also, neither x′ nor y′ is affected by z, so these two coefficients are 0. Finally, the other coefficients come from the standard rotation matrix of equation (10.4). The matrix is thus
\[
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]

EXERCISE 10-15 Find the 3 × 3 matrix for each of the following:
i. Rotation about the x axis.
ii. Squashing all vectors to the origin.
iii. Reflection in the xy plane.

It is pretty simple to show that 3 × 3 matrices work the same as 2 × 2's in terms of associativity, noncommutativity, etc.

Since we can imagine vectors of more than 3 dimensions, we can similarly write down matrices which transform those vectors, such as 4 × 4, 5 × 5, etc. We can even write down matrices which are not square! For example, a 2 × 3 matrix (number of rows goes first) takes 3D vectors to 2D ones:
\[
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a+2b+3c \\ 4a+5b+6c \end{pmatrix}. \qquad (10.5)
\]
All you need to remember is that to find an entry in the product, we go across the corresponding row and down the corresponding column. Study the above example if this is still unclear.

EXERCISE 10-16 Write down and multiply:
i. Two 3 × 3 matrices.
ii. A 2 × 4 and a 4 × 3 matrix.
iii. A 1 × 3 and a 3 × 1 matrix. (To what does this correspond?)
Compare the dimensions of the products to the dimensions of the original matrices in these three cases. Is there a pattern?

It seems strange, but only under certain circumstances can we multiply a k × l matrix by an m × n matrix. To see why, let the first matrix be A and the second B.

EXAMPLE 10-7 Consider the multiplication Ax⃗. Find the dimensions of x⃗ and Ax⃗, where A is a k × l matrix.

Solution: In the multiplication we will be going across rows of A and down x⃗. Since A has l columns, each of its rows is l entries long.
Thus x⃗ has l entries also, so is dimension l. On the other hand, there will be one entry in the product Ax⃗ for each row of A, as each entry in the product is formed by going across one row of A. Thus the product will have k entries, so will be dimension k.

EXERCISE 10-17 Compare the preceding discussion to equation (10.5). Do the two agree?

For vectors we have (AB)x⃗ = A(Bx⃗), by associativity. Clearly x⃗ must be an n-dimensional vector if B, which takes n-dimensional vectors to m-dimensional ones, is to transform it. The product Bx⃗ will be m-dimensional. But this product must be l-dimensional to be transformed by A! Thus we must have m = l. Furthermore, the dimension of ABx⃗ will be k, since A was used last and its outputs are k-dimensional. Thus the product AB takes n-dimensional vectors to k-dimensional ones, and must be a k × n matrix. It is this pattern that we were looking for in Exercise 10-16; go back and verify that it is true in those cases if you have not already.

10.7 Better Matrix Notation

Up to now we have written a general 2 × 2 matrix as A = \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\). A better way to write this is by labelling each entry by its row and column numbers. We would then write
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}. \qquad (10.6)
\]

WARNING: As always with matrices, ROWS GO FIRST: for example, a₄₂ is an entry in the fourth row, second column of A, not the other way around.

We'll often use the a_{ij} notation because it is very efficient.

EXAMPLE 10-8 The 3 × 3 identity matrix is completely specified in this notation by
\[
a_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j. \end{cases}
\]
Do you see why?

EXAMPLE 10-9 The product rule takes a nice form in this notation. Given AB = C, with A an l × m matrix, B an m × n, and C therefore an l × n, we have
\[
c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{im}b_{mj},
\]
or, to compress even more,
\[
c_{ij} = \sum_{k=1}^{m} a_{ik}b_{kj}.
\]
If these expressions are unclear, write A and B in the form (10.6) and multiply out to see. Try starting with small values, like l = 2, m = 3, n = 4.
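The compact product rule of Example 10-9 is exactly what one writes in code, dimension check included. This Python sketch is illustrative (the function name is ours, not the book's):

```python
def mat_product(A, B):
    """Product rule of Example 10-9: c_ij is the sum over k of a_ik * b_kj."""
    l, m = len(A), len(A[0])
    if len(B) != m:
        raise ValueError("columns of A must equal rows of B")
    n = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(n)]
            for i in range(l)]

# An l x m times an m x n gives an l x n result: here a 2x3 times a 3x4.
C = mat_product([[1, 2, 3], [4, 5, 6]],
                [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]])
print(C)                  # [[1, 2, 3, 6], [4, 5, 6, 15]]
print(len(C), len(C[0]))  # 2 4, matching the k x n pattern in the text
```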
EXERCISE 10-18 Express the matrix \(\begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}\) in the manner of Example 10-8.

Problems to Solve for Chapter 10

155. Write down the 2D matrix for a rotation by 45°.

156. Show how the multiplication of rotation matrices can be used to remember the trig identities sin(x + y) = sin x cos y + sin y cos x and cos(x + y) = cos x cos y − sin x sin y.

157. Find the product \(\begin{pmatrix} 2 & 1 & 9 \\ 6 & 0 & -3 \\ 1 & 3 & 2 \end{pmatrix}\begin{pmatrix} -2 & 1 & -1 \\ 4 & 4 & -3 \\ 3 & 2 & 1 \end{pmatrix}\).

158. Three vertices of parallelogram PQRS are P(−3, −2), Q(1, −8), R(9, 1), with P and R diagonally opposite. What are the coordinates of S? (AHSME 1963)

159. Find the cosine of the angle between the vectors (3 4 5) and (−1 4 3).

160. Matrix A has two rows and three columns. Matrix B has four rows and two columns. The existing product of these two matrices consists of how many elements? (MAΘ 1992)

161. Let A be the matrix (° 3) and let x be the sum of the entries of a matrix B such that AB = BA. Find the smallest value of x over all matrices B whose entries are positive integers. (Mandelbrot #3)

162. What is the image of \(\begin{pmatrix} 3 \\ 1 \\ 2 \end{pmatrix}\) under the mapping \(\begin{pmatrix} 1 & 4 & 1 \\ -2 & 0 & 0 \\ 3 & 2 & -3 \end{pmatrix}\)? (MAΘ 1991)

163. Find min f and max f where x and y are real numbers and f(x, y) = 2 sin x cos y + 3 sin x sin y + 6 cos x. (MAΘ 1991)

the BIG PICTURE

Imagine an atom (particle of matter) sitting in space. There are many possible "states" the atom could be in: vibrating fast or slow, spinning around in different ways, and so on. But as we pointed out in a BIG PICTURE in Volume 1, quantum mechanics can't tell us exactly which state the atom is in, just the probabilities of its being in each state. These probabilities are often thought of as a vector, P⃗ = (P₁ P₂ P₃ ⋯), where Pᵢ is the probability of being in the ith state. So far we're on solid ground. But what if the atom has infinitely many possible states? Suddenly our vector has become infinite-dimensional!
Now imagine a particle of light, a photon, flies onto the scene and our atom absorbs it. This will cause our atom to switch from its initial state to some other state. How do we know what other state? We don't! Again, we only know the probabilities. If the atom started in state i, there is some probability A_{i→1} that it ends up in state 1, some probability A_{i→2} that it ends up in state 2, and so on. (What does A_{i→i} represent?) Our original probability vector P⃗ thus transforms into a new probability vector Q⃗, which describes the probabilities after the absorption. How do we transform one vector to another? With a matrix, of course! We define a transition matrix A, so that AP⃗ = Q⃗:
\[
\begin{pmatrix} A_{1\to1} & A_{2\to1} & \cdots \\ A_{1\to2} & A_{2\to2} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} P_1 \\ P_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} Q_1 \\ Q_2 \\ \vdots \end{pmatrix}.
\]
If you are familiar with the rules of probability, you might be able to understand this in more depth. Why, for example, is Q₁, the probability we end up in state 1, equal to P₁A_{1→1} + P₂A_{2→1} + P₃A_{3→1} + ⋯? Can you see that we must have P₁ + P₂ + P₃ + ⋯ = Q₁ + Q₂ + Q₃ + ⋯ = 1?

To calculate the probability vectors and transition matrices takes some doing, but the mathematical apparatus of the infinite-dimensional vectors involved is more or less the same as that for the humble 2D vectors we define in the text.

Chapter 11

Cross Products and Determinants

11.1 The Cross Product

We have seen in Chapter 10 that there is a connection between the dot product and vector lengths. We now define a special product between vectors which allows us to discuss areas. So, given two 3D vectors v⃗ and w⃗, we shall define the cross product v⃗ × w⃗ to be the vector z⃗ such that

• z⃗ is perpendicular to both v⃗ and w⃗ (so that z⃗ · v⃗ = z⃗ · w⃗ = 0);
• the length of z⃗ is the area of the parallelogram spanned by v⃗ and w⃗, as in the figure at right.

EXAMPLE 11-1 For any vectors v⃗ and w⃗, we have (v⃗ × w⃗) · v⃗ = 0, since we have defined the cross product to be such that (v⃗ × w⃗) ⊥ v⃗.

EXERCISE 11-1 What is the area of the parallelogram spanned by v⃗ and w⃗ in terms of ‖v⃗‖, ‖w⃗‖, and θ, the angle between the two vectors?
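As an illustration of the transition-matrix idea (the two-state matrix below is our own made-up example, not from the text), a matrix whose columns each sum to 1 sends one probability vector to another:

```python
# Hypothetical two-state transition matrix: entry A[j][i] is the probability
# of moving from state i+1 to state j+1, so each column sums to 1.
A = [[0.9, 0.3],
     [0.1, 0.7]]
P = [0.5, 0.5]  # equal chance of starting in either state

# Q_j = sum over i of A_{i -> j} * P_i, exactly the matrix product AP.
Q = [sum(A[j][i] * P[i] for i in range(2)) for j in range(2)]
print(Q)        # the post-absorption probability vector
print(sum(Q))   # total probability is still 1
```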
Given vectors v⃗ and w⃗, there are two vectors which satisfy our criteria for the cross product: one which points 'up' from the plane containing v⃗ and w⃗ and another which points 'down' from the plane. How do we know which vector is v⃗ × w⃗? There's no sound mathematical reason to choose one or the other, so we must adopt a convention which we can apply to any pair of vectors. This brings us to the dreaded right hand rule. This is nothing more than a way to remember in which of the two possible directions we choose our cross product to be.

Consider the equation v⃗ × w⃗ = z⃗. If you extend the index finger of your right hand along v⃗ and the middle finger along w⃗, then z⃗ will be along your thumb, pointing perpendicular to the other two fingers.

EXAMPLE 11-2 If we take v⃗ = (0 1 0) and w⃗ = (1 0 0), then find v⃗ × w⃗.

Solution: Take the desired vector to be z⃗ = (a b c). Then the dot products of z⃗ with v⃗ and w⃗ are b and a respectively; by the first condition on the cross product these must both equal zero, so z⃗ = (0 0 c). By the second condition, the length ‖z⃗‖ = |c| must be 1 (why?), so that c is ±1. The right hand rule tells us that z⃗ is pointing down, so z⃗ = (0 0 −1).

EXERCISE 11-2 As w⃗ stays fixed and v⃗ rotates in a full circle, describe the path followed by the tip of v⃗ × w⃗.

EXERCISE 11-3 Using the right hand rule, find the relationship between v⃗ × w⃗ and w⃗ × v⃗.

WARNING: Exercise 11-3 shows an important difference between the cross product and other products with which you are familiar. The cross product is NOT commutative; that is, v⃗ × w⃗ ≠ w⃗ × v⃗.

11.2 The Cross Product in Coordinates

Like the dot product, the cross product takes on a fairly simple form in the coordinate representation. If we take v⃗ = (x₁ y₁ z₁) and w⃗ = (x₂ y₂ z₂), then it can be shown that the vector
\[
\vec v \times \vec w = \bigl((y_1z_2 - y_2z_1) \;\; (z_1x_2 - z_2x_1) \;\; (x_1y_2 - x_2y_1)\bigr) \qquad (11.1)
\]
has the desired properties.
Since we have seen that there is only one vector satisfying all of the defining conditions, this must be the desired cross product.

EXERCISE 11-4 Show that the vector v⃗ × w⃗ defined in (11.1) is perpendicular to both v⃗ and w⃗.

Although the cross product is not commutative, it is still linear, so v⃗ × (w⃗₁ + w⃗₂) = (v⃗ × w⃗₁) + (v⃗ × w⃗₂) and v⃗ × (cw⃗) = c(v⃗ × w⃗). These properties can be easily verified using (11.1).

When we take the cross product of two vectors which are only two dimensional, we extend to three dimensions: we pretend that our vectors are actually three dimensional, with a z-component of 0. So if the vectors are (x₁ y₁) and (x₂ y₂), we write them as (x₁ y₁ 0) and (x₂ y₂ 0), tacking on a component of zero in the third dimension. We can then use (11.1).

EXERCISE 11-5 Use (11.1) to verify that the cross product of two 2D vectors points either straight up or straight down (in three dimensions).

11.3 The Determinant

We are now able to ask (and answer) a question which has been hanging about since the introduction of matrices, namely, is there a "size" for matrices? The size of a vector is simply its length. For matrices, however, the size depends on area, which we are only now ready to tackle.

A very common way to represent a vector is to break it down in terms of the fundamental unit vectors i⃗ = (1 0) and j⃗ = (0 1):
\[
(a \;\; b) = a\,\vec\imath + b\,\vec\jmath.
\]
(Similarly, in 3D we have (a b c) = a i⃗ + b j⃗ + c k⃗,
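Equation (11.1) can be checked in a few lines of Python. This sketch is illustrative, not part of the original text; it recomputes Example 11-2 and confirms the perpendicularity condition.

```python
def cross(v, w):
    """Cross product of two 3D vectors, following equation (11.1)."""
    x1, y1, z1 = v
    x2, y2, z2 = w
    return (y1 * z2 - y2 * z1, z1 * x2 - z2 * x1, x1 * y2 - x2 * y1)

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

# Example 11-2 again: (0 1 0) x (1 0 0) should point straight down.
z = cross((0, 1, 0), (1, 0, 0))
print(z)                                     # (0, 0, -1)
print(dot(z, (0, 1, 0)), dot(z, (1, 0, 0)))  # 0 0: perpendicular to both
```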