Hanilen G. Catama February 5, 2014 BSCoE 3 EMath11
MATRIX A matrix is a rectangular array of numbers or other mathematical objects, for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars from F. Most of this article focuses on real and complex matrices, i.e., matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:
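(The displayed example did not survive extraction; the matrix below is an illustrative stand-in, its entries are not from the original.)

\[
A = \begin{bmatrix} 1.5 & -2 & 0 \\ 4 & 3.7 & 6 \end{bmatrix}
\]

This matrix has 2 rows and 3 columns, so it is a 2 × 3 ("two by three") real matrix.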
The numbers, symbols or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. SQUARE MATRIX In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. For instance, this is a square matrix of order 4:
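(The 4-by-4 example itself was lost in extraction; the matrix below is a reconstruction whose main diagonal matches the entries quoted in the next paragraph, a_11 = 9, a_22 = 11, a_33 = 4, a_44 = 10; the off-diagonal entries are illustrative assumptions.)

\[
A = \begin{bmatrix} 9 & 13 & 5 & 2 \\ 1 & 11 & 7 & 6 \\ 3 & 7 & 4 & 1 \\ 6 & 0 & 7 & 10 \end{bmatrix}
\]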
The entries a_ii form the main diagonal of a square matrix. For instance, the main diagonal of the 4-by-4 matrix above contains the elements a_11 = 9, a_22 = 11, a_33 = 4, a_44 = 10. Any two square matrices of the same order can be added and multiplied. Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after that rotation. If v is a row vector, the same transformation can be obtained using vR^T, where R^T is the transpose of R.
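As a concrete illustration (added, not from the original text), a 2 × 2 rotation matrix acting on a column vector:

\[
R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad
v = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad
Rv = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix},
\]

so the point (1, 0) is rotated by the angle θ about the origin.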
DETERMINANTS Determinants are mathematical objects that are very useful in the analysis and solution of systems of linear equations. As shown by Cramer's rule, a nonhomogeneous system of linear equations has a unique solution iff the determinant of the system's matrix is nonzero (i.e., the matrix is nonsingular). For example, eliminating x, y, and z from the equations
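(The displayed system was lost in extraction; the reconstruction below follows the standard three-variable presentation and is an assumption consistent with the surrounding text.)

\[
\begin{aligned}
a_1 x + a_2 y + a_3 z &= 0\\
b_1 x + b_2 y + b_3 z &= 0\\
c_1 x + c_2 y + c_3 z &= 0
\end{aligned}
\]

gives the expression

\[
a_1 b_2 c_3 - a_1 b_3 c_2 + a_2 b_3 c_1 - a_2 b_1 c_3 + a_3 b_1 c_2 - a_3 b_2 c_1
\equiv \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix},
\]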
which is called the determinant for this system of equations. Determinants are defined only for square matrices. If the determinant of a matrix is 0, the matrix is said to be singular, and if the determinant is 1, the matrix is said to be unimodular. The determinant of a square matrix A is commonly denoted det(A), det A, or |A|, or in component notation as |a_ij| (Muir 1960, p. 17). Note that the notation det(A) may be more convenient when indicating the absolute value of a determinant, i.e., |det(A)| instead of ||A||. The determinant is implemented in Mathematica as Det[m].
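For instance (an added illustration, not from the original), the determinant of a general 2 × 2 matrix is

\[
\det\begin{bmatrix} a & b \\ c & d \end{bmatrix}
= \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc .
\]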
TRANSPOSE OF A MATRIX In linear algebra, the transpose of a matrix A is another matrix A^T (also written A′, A^tr, ^tA or A^t) created by any one of the following equivalent actions:
- reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^T
- write the rows of A as the columns of A^T
- write the columns of A as the rows of A^T
Formally, the i-th row, j-th column element of A^T is the j-th row, i-th column element of A: [A^T]_ij = [A]_ji.
If A is an m × n matrix, then A^T is an n × m matrix. The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley.
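A small example (added for illustration):

\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \qquad
A^{T} = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix},
\]

so the 2 × 3 matrix A becomes the 3 × 2 matrix A^T, and, for instance, [A^T]_12 = A_21 = 4.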
COFACTOR MATRIX In mathematics, a matrix is a rectangular table of numbers. The co-factors of a matrix are generally used to find the adjoint of the matrix, the inverse of the matrix, etc. The co-factor of the entry in the i-th row and j-th column of a matrix Z is (−1)^(i+j) times the minor of that entry, i.e. the determinant of the submatrix obtained by deleting the i-th row and j-th column of Z. The matrix formed by the co-factors taken in order is the co-factor matrix of Z.
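(The matrix Z used in the original worked example was lost; the 3 × 3 matrix below is an illustrative assumption, worked through with the rule C_ij = (−1)^(i+j) M_ij, where M_ij is the minor obtained by deleting row i and column j.)

\[
Z = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 6 \end{bmatrix}, \qquad
C = \begin{bmatrix} 24 & 5 & -4 \\ -12 & 3 & 2 \\ -2 & -5 & 4 \end{bmatrix}.
\]

For example, C_11 = +\begin{vmatrix} 4 & 5 \\ 0 & 6 \end{vmatrix} = 24 and C_12 = -\begin{vmatrix} 0 & 5 \\ 1 & 6 \end{vmatrix} = 5.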
FINDING INVERSE MATRIX How do we calculate the Inverse? Well, for a 2x2 Matrix the Inverse is:
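(The displayed formula was lost in extraction; the standard 2 × 2 inverse formula it refers to, matching the description in the next sentence, is:)

\[
A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad
A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.
\]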
In other words: swap the positions of a and d, put negatives in front of b and c, and divide everything by the determinant (ad − bc). Let us try an example:
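(The original worked example did not survive extraction; the numbers below are an illustrative choice.)

\[
A = \begin{bmatrix} 4 & 7 \\ 2 & 6 \end{bmatrix}, \qquad
A^{-1} = \frac{1}{4\cdot 6 - 7\cdot 2}\begin{bmatrix} 6 & -7 \\ -2 & 4 \end{bmatrix}
= \frac{1}{10}\begin{bmatrix} 6 & -7 \\ -2 & 4 \end{bmatrix}
= \begin{bmatrix} 0.6 & -0.7 \\ -0.2 & 0.4 \end{bmatrix}.
\]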
How do we know this is the right answer? Remember it must be true that: A × A^(-1) = I
So, let us check to see what happens when we multiply the matrix by its inverse:
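Using the illustrative matrix and inverse from above:

\[
A A^{-1} = \begin{bmatrix} 4 & 7 \\ 2 & 6 \end{bmatrix}\begin{bmatrix} 0.6 & -0.7 \\ -0.2 & 0.4 \end{bmatrix}
= \begin{bmatrix} 4(0.6)+7(-0.2) & 4(-0.7)+7(0.4) \\ 2(0.6)+6(-0.2) & 2(-0.7)+6(0.4) \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\]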
And, hey, we end up with the Identity Matrix! So it must be right. It should also be true that: A^(-1) × A = I
PROPERTIES OF DETERMINANTS The determinant has many properties. Some basic properties of determinants are:
1. det(I_n) = 1, where I_n is the n × n identity matrix.
2. det(A^T) = det(A).
3. det(A^(-1)) = 1/det(A) = [det(A)]^(-1).
4. det(AB) = det(A) det(B) for square matrices A and B of equal size.
5. det(cA) = c^n det(A) for an n × n matrix A and a scalar c.
6. If A is a triangular matrix, i.e. a_ij = 0 whenever i > j or, alternatively, whenever i < j, then its determinant equals the product of the diagonal entries: det(A) = a_11 a_22 ··· a_nn.
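As an added illustration of property 6 (not from the original), for an upper triangular matrix:

\[
\det\begin{bmatrix} 2 & 3 & 1 \\ 0 & 5 & 4 \\ 0 & 0 & 7 \end{bmatrix} = 2 \cdot 5 \cdot 7 = 70 .
\]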
This can be deduced from some of the properties below, but it follows most easily directly from the Leibniz formula (or from the Laplace expansion), in which the identity permutation is the only one that gives a non-zero contribution. A number of additional properties relate to the effects on the determinant of changing particular rows or columns:
7. Viewing an n × n matrix as being composed of n columns, the determinant is an n-linear function. This means that if one column of a matrix A is written as a sum v + w of two column vectors, and all other columns are left unchanged, then the determinant of A is the sum of the determinants of the matrices obtained from A by replacing the column by v and then by w (and a similar relation holds when writing a column as a scalar multiple of a column vector).
8. This n-linear function is an alternating form. This means that whenever two columns of a matrix are identical, or more generally some column can be expressed as a linear combination of the other columns (i.e. the columns of the matrix form a linearly dependent set), its determinant is 0.

Properties 1, 7 and 8, which all follow from the Leibniz formula, completely characterize the determinant; in other words the determinant is the unique function from n × n matrices to scalars that is n-linear alternating in the columns, and takes the value 1 for the identity matrix (this characterization holds even if scalars are taken in any given commutative ring). To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by property 8) or else ±1 (by properties 1 and 11 below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear. For matrices over non-commutative rings, properties 7 and 8 are incompatible for n ≥ 2, so there is no good definition of the determinant in this setting.

Property 2 above implies that properties for columns have their counterparts in terms of rows:
9. Viewing an n × n matrix as being composed of n rows, the determinant is an n-linear function.
10. This n-linear function is an alternating form: whenever two rows of a matrix are identical, its determinant is 0.
11. Interchanging two columns of a matrix multiplies its determinant by −1. This follows from properties 7 and 8 (it is a general property of multilinear alternating maps). Iterating gives that more generally a permutation of the columns multiplies the determinant by the sign of the permutation. Similarly a permutation of the rows multiplies the determinant by the sign of the permutation.
12. Adding a scalar multiple of one column to another column does not change the value of the determinant. This is a consequence of properties 7 and 8: by property 7 the determinant changes by a multiple of the determinant of a matrix with two equal columns, which determinant is 0 by property 8. Similarly, adding a scalar multiple of one row to another row leaves the determinant unchanged.

These properties can be used to facilitate the computation of determinants by simplifying the matrix to the point where the determinant can be determined immediately. Specifically, for matrices with coefficients in a field, properties 11 and 12 can be used to transform any matrix into a triangular matrix, whose determinant is given by property 6; this is essentially the method of Gaussian elimination. For example, the determinant of
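(The displayed matrices did not survive extraction; the starting matrix below is an assumption, chosen so that the row operations and the final value det(A) = 18 described in the next paragraph work out.)

\[
A = \begin{bmatrix} -2 & 2 & -3 \\ -1 & 1 & 3 \\ 2 & 0 & -1 \end{bmatrix}, \quad
B = \begin{bmatrix} -2 & 2 & -3 \\ 0 & 0 & 4.5 \\ 2 & 0 & -1 \end{bmatrix}, \quad
C = \begin{bmatrix} -2 & 2 & -3 \\ 0 & 0 & 4.5 \\ 0 & 2 & -4 \end{bmatrix}, \quad
D = \begin{bmatrix} -2 & 2 & -3 \\ 0 & 2 & -4 \\ 0 & 0 & 4.5 \end{bmatrix}.
\]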
Here, B is obtained from A by adding −1/2 × the first row to the second, so that det(A) = det(B). C is obtained from B by adding the first to the third row, so that det(C) = det(B). Finally, D is obtained from C by exchanging the second and third row, so that det(D) = −det(C). The determinant of the (upper) triangular matrix D is the product of its entries on the main diagonal: (−2) · 2 · 4.5 = −18. Therefore det(A) = −det(D) = +18.
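A quick numerical check of this worked example (using the reconstructed matrix assumed above) can be done with NumPy:

import numpy as np

# Assumed reconstruction of the example matrix A (the original display was lost)
A = np.array([[-2.0,  2.0, -3.0],
              [-1.0,  1.0,  3.0],
              [ 2.0,  0.0, -1.0]])

print(np.linalg.det(A))  # prints approximately 18.0, matching det(A) = +18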