Vector and Matrices - Some Articles
Contents
Articles
Dot product
Cross product
Triple product
Binet–Cauchy identity
Inner product space
Sesquilinear form
Scalar multiplication
Euclidean space
Orthonormality
Cauchy–Schwarz inequality
Orthonormal basis
Vector space
Matrix multiplication
Determinant
Exterior algebra
Geometric algebra
Levi-Civita symbol
Jacobi triple product
Rule of Sarrus
Laplace expansion
Lie algebra
Orthogonal group
Rotation group
Vector-valued function
Gramian matrix
Lagrange's identity
Quaternion
Skew-symmetric matrix
Xyzzy
Quaternions and spatial rotation
Seven-dimensional cross product
Octonion
Multilinear algebra
Pseudovector
Bivector
References
Article Sources and Contributors
Image Sources, Licenses and Contributors
Article Licenses
License
Dot product
In mathematics, the dot product is an algebraic operation that takes two equal-length sequences of numbers (usually
coordinate vectors) and returns a single number obtained by multiplying corresponding entries and then summing
those products. The name is derived from the centered dot "·" that is often used to designate this operation; the
alternative name scalar product emphasizes the scalar (rather than vector) nature of the result. At a basic level, the
dot product is used to obtain the cosine of the angle between two vectors.
The principal use of this product is the inner product in a Euclidean vector space: when two vectors are expressed
on an orthonormal basis, the dot product of their coordinate vectors gives their inner product. For this geometric
interpretation, scalars must be taken to be real numbers; while the dot product can be defined in a more general
setting (for instance with complex numbers as scalars) many properties would be different. The dot product contrasts
(in three dimensional space) with the cross product, which produces a vector as result.
Definition
The dot product of two vectors a = [a1, a2, ... , an] and b = [b1, b2, ... , bn] is defined as
a · b = Σ aibi = a1b1 + a2b2 + ... + anbn,
where Σ denotes summation over i from 1 to n and n is the dimension of the vector space.
In dimension 2, the dot product of vectors [a,b] and [c,d] is ac + bd. Similarly, in dimension 3, the dot product of
vectors [a,b,c] and [d,e,f] is ad + be + cf. For example, the dot product of the two three-dimensional vectors [1, 3, −5]
and [4, −2, −1] is
[1, 3, −5] · [4, −2, −1] = (1)(4) + (3)(−2) + (−5)(−1) = 4 − 6 + 5 = 3.
The dot product can also be obtained via transposition and matrix multiplication as
a · b = aT b,
where both vectors are interpreted as column vectors, and aT denotes the transpose of a, in other words the
corresponding row vector.
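To make these formulas concrete, here is a short NumPy sketch (not part of the original article) that evaluates the worked example above three ways: by the summation definition, with np.dot, and as the matrix product aT b.

```python
import numpy as np

a = np.array([1, 3, -5])
b = np.array([4, -2, -1])

# Summation definition: a . b = sum_i a_i * b_i
dot_by_sum = sum(a_i * b_i for a_i, b_i in zip(a, b))

# Built-in dot product and the matrix form a^T b (row vector times column vector)
dot_builtin = np.dot(a, b)
dot_matrix = a.reshape(1, 3) @ b.reshape(3, 1)   # a 1x1 matrix holding the same number

print(dot_by_sum, dot_builtin, dot_matrix[0, 0])  # all three print 3
```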
Geometric interpretation
In Euclidean geometry, the dot product of vectors expressed in an orthonormal basis is related to their length and
angle. For such a vector a, the dot product of a with itself is the square of the length of a:
a · a = |a|².
The terminal points of both unit vectors lie on the unit circle, where the values of the six trigonometric functions
are found. After substitution, the first vector component is a cosine and the second vector component is a sine, i.e.
a unit vector has the form (cos θ, sin θ) for some angle θ. The dot product of two unit vectors then takes
(cos a, sin a) and (cos b, sin b) for angles a and b and returns
cos a cos b + sin a sin b = cos(a − b),
where a − b is the angle between the two vectors.
As the cosine of 90° is zero, the dot product of two orthogonal vectors is always zero. Moreover, two vectors of
nonzero length can be considered orthogonal if and only if their dot product is zero. This property provides a
simple test for orthogonality.
Sometimes these properties are also used for "defining" the dot product, especially in 2 and 3 dimensions; this
definition is equivalent to the above one. For higher dimensions the formula can be used to define the concept of
angle.
The geometric properties rely on the basis being orthonormal, i.e. composed of pairwise perpendicular vectors with
unit length.
Scalar projection
If both a and b have length one (i.e., they are unit vectors), their dot product simply gives the cosine of the angle
between them.
If only b is a unit vector, then the dot product a · b gives |a| cos θ, i.e., the magnitude of the projection of a in
the direction of b, with a minus sign if the direction is opposite. This is called the scalar projection of a onto b,
or the scalar component of a in the direction of b (see figure). This property of the dot product has several useful
applications (for instance, see next section).
If neither a nor b is a unit vector, then the magnitude of the projection of a in the direction of b is a · b / |b|, as
the unit vector in the direction of b is b / |b|.
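A minimal NumPy sketch of the scalar projection just described; the helper name scalar_projection is illustrative, not from the article.

```python
import numpy as np

def scalar_projection(a, b):
    """Scalar component of a in the direction of b: |a| cos(theta) = a.b / |b|."""
    b = np.asarray(b, dtype=float)
    return np.dot(a, b) / np.linalg.norm(b)

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])            # already a unit vector
print(scalar_projection(a, b))       # 3.0: projection of a onto the x-axis
print(scalar_projection(a, 2 * b))   # still 3.0: scaling b does not change the projection
```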
Rotation
A rotation of the orthonormal basis in terms of which vector is represented is obtained with a multiplication of
by a rotation matrix . This matrix multiplication is just a compact representation of a sequence of dot products.
For instance, let
• and be two different orthonormal bases of the same space , with
obtained by just rotating ,
• represent vector in terms of ,
• represent the same vector in terms of the rotated basis ,
• , , , be the rotated basis vectors , , represented in terms of .
Then the rotation from to is performed as follows:
Notice that the rotation matrix is assembled by using the rotated basis vectors , , as its rows, and
these vectors are unit vectors. By definition, consists of a sequence of dot products between each of the three
rows of and vector . Each of these dot products determines a scalar component of in the direction of a
rotated basis vector (see previous section).
If is a row vector, rather than a column vector, then must contain the rotated basis vectors in its columns, and
must post-multiply :
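The rotation described above can be checked numerically. In this illustrative sketch the rows of a 2×2 rotation matrix R are the rotated basis vectors expressed in the original basis, and each new coordinate is the dot product of one row with the old coordinate vector; the names R, v1, v2 are not from the article.

```python
import numpy as np

theta = np.pi / 6                                   # rotate the basis by 30 degrees
R = np.array([[ np.cos(theta), np.sin(theta)],      # first rotated basis vector (as a row)
              [-np.sin(theta), np.cos(theta)]])     # second rotated basis vector (as a row)

v1 = np.array([2.0, 1.0])   # coordinates of v in the old basis
v2 = R @ v1                 # coordinates of the same vector in the rotated basis

# Each component of v2 is the dot product of a row of R with v1
assert np.allclose(v2, [np.dot(R[0], v1), np.dot(R[1], v1)])
print(v2)
```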
Physics
In physics, vector magnitude is a scalar in the physical sense, i.e. a physical quantity independent of the coordinate
system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also
a scalar in this sense, given by the formula, independent of the coordinate system. Examples:
• Mechanical work is the dot product of force and displacement vectors.
• Magnetic flux is the dot product of the magnetic field and the area vectors.
Properties
The following properties hold if a, b, and c are real vectors and r is a scalar.
The dot product is commutative, a · b = b · a, and distributive over vector addition, a · (b + c) = a · b + a · c.
The triple product expansion
a × (b × c) = b(a · c) − c(a · b)
is easier to remember as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula
is commonly used to simplify vector calculations in physics.
Proof of the geometric interpretation
Consider a vector v = v1 e1 + v2 e2 + ... + vn en expressed in an orthonormal basis. Repeated application of the
Pythagorean theorem yields for its length |v|
|v|² = v1² + v2² + ... + vn²,
which is the same as v · v, so we conclude that taking the dot product of a vector v with itself yields the squared
length of the vector.
Lemma 1: v · v = |v|².
Now consider two vectors a and b extending from the origin, separated by an angle θ. A third vector c may be
defined as
c = a − b,
creating a triangle with sides a, b, and c. According to the law of cosines, we have
|c|² = |a|² + |b|² − 2 |a| |b| cos θ.
Substituting dot products for the squared lengths according to Lemma 1, we get
c · c = a · a + b · b − 2 |a| |b| cos θ.   (1)
But as c ≡ a − b, we also have
c · c = (a − b) · (a − b),
which, according to the distributive law, expands to
c · c = a · a + b · b − 2 (a · b).   (2)
Merging the two c · c equations, (1) and (2), we obtain
a · a + b · b − 2 (a · b) = a · a + b · b − 2 |a| |b| cos θ,
and hence
a · b = |a| |b| cos θ.
Q.E.D.
Generalization
The inner product generalizes the dot product to abstract vector spaces and is usually denoted by ⟨a, b⟩. Due to
the geometric interpretation of the dot product, the norm ||a|| of a vector a in such an inner product space is defined as
||a|| = √⟨a, a⟩,
such that it generalizes length, and the angle θ between two vectors a and b is defined by
cos θ = ⟨a, b⟩ / (||a|| ||b||).
In particular, two vectors are considered orthogonal if their inner product is zero:
⟨a, b⟩ = 0.
For vectors with complex entries, using the given definition of the dot product would lead to quite different
geometric properties. For instance the dot product of a vector with itself can be an arbitrary complex number, and
can be zero without the vector being the zero vector; this in turn would have severe consequences for notions like
length and angle. Many geometric properties can be salvaged, at the cost of giving up the symmetric and bilinear
properties of the scalar product, by alternatively defining
a · b = Σ ai b̄i,
where b̄i is the complex conjugate of bi. Then the scalar product of any vector with itself is a non-negative real
number, and it is nonzero except for the zero vector. However this scalar product is not linear in b (but rather
conjugate linear), and the scalar product is not symmetric either, since a · b is the complex conjugate of b · a.
This type of scalar product is nevertheless quite useful, and leads to the notions of Hermitian form and of general
inner product spaces.
The Frobenius inner product generalizes the dot product to matrices. It is defined as the sum of the products of the
corresponding components of two matrices having the same size.
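A short sketch of the two generalizations just mentioned: the conjugate-symmetric product for vectors with complex entries, and the Frobenius inner product of two same-size matrices. The sample data is arbitrary.

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1j])

# Complex scalar product sum_i a_i * conj(b_i); it is real and non-negative when b = a
complex_product = np.sum(a * np.conj(b))
print(complex_product, np.sum(a * np.conj(a)))   # second value is |a|^2 = 15 (real)

# Frobenius inner product: sum of products of corresponding entries of same-size matrices
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
frobenius = np.sum(A * B)                        # 1*5 + 2*6 + 3*7 + 4*8 = 70
assert np.isclose(frobenius, np.trace(A @ B.T))  # equal to the trace formulation
print(frobenius)
```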
Generalization to tensors
The dot product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2. The dot product is
calculated by multiplying and summing across a single index in both tensors: if A and B are two tensors with
elements Ai...jk and Bkl...m, the elements of the dot product are given by
(A · B)i...jl...m = Σk Ai...jk Bkl...m.
This definition naturally reduces to the standard vector dot product when applied to vectors, and to matrix
multiplication when applied to matrices.
Occasionally, a double dot product is used to represent multiplying and summing across two indices. The double dot
product between two 2nd order tensors is a scalar.
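One way to make the "multiply and sum across a single index" rule concrete is with NumPy's tensordot and einsum. This sketch (not from the article) checks that the single-index contraction reduces to the ordinary dot product and to matrix multiplication, and evaluates a double dot product of two 2nd-order tensors.

```python
import numpy as np

def tensor_dot(A, B):
    """Contract the last index of A with the first index of B (a single-index dot product)."""
    return np.tensordot(A, B, axes=1)

a = np.array([1.0, 2.0, 3.0])
M = np.arange(9.0).reshape(3, 3)
N = np.arange(9.0, 18.0).reshape(3, 3)

print(tensor_dot(a, a))                        # ordinary dot product, a scalar (14.0)
print(np.allclose(tensor_dot(M, N), M @ N))    # matrix multiplication, prints True

# Double dot product of two 2nd-order tensors: sum over both indices, a scalar
double_dot = np.einsum('ij,ij->', M, N)
print(double_dot)
```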
External links
• Weisstein, Eric W., "Dot product [1]" from MathWorld.
• A quick geometrical derivation and interpretation of dot product [2]
• Interactive GeoGebra Applet [3]
• Java demonstration of dot product [4]
• Another Java demonstration of dot product [5]
• Explanation of dot product including with complex vectors [6]
• "Dot Product" [7] by Bruce Torrence, Wolfram Demonstrations Project, 2007.
References
[1] http://mathworld.wolfram.com/DotProduct.html
[2] http://behindtheguesses.blogspot.com/2009/04/dot-and-cross-products.html
[3] http://xahlee.org/SpecialPlaneCurves_dir/ggb/Vector_Dot_Product.html
[4] http://www.falstad.com/dotproduct/
[5] http://www.cs.brown.edu/exploratories/freeSoftware/repository/edu/brown/cs/exploratories/applets/dotProduct/dot_product_guide.html
[6] http://www.mathreference.com/la,dot.html
[7] http://demonstrations.wolfram.com/DotProduct/
Cross product
In mathematics, the cross product, vector product, or Gibbs vector product is a binary operation on two vectors
in three-dimensional space. It results in a vector which is perpendicular to both of the vectors being multiplied and
normal to the plane containing them. It has many applications in mathematics, engineering and physics.
If either of the vectors being multiplied is zero or the vectors are parallel then their cross product is zero. More
generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular for
perpendicular vectors this is a rectangle and the magnitude of the product is the product of their lengths. The cross
product is anticommutative, distributive over addition and satisfies the Jacobi identity. The space and product form
an algebra over a field, which is neither commutative nor associative, but is a Lie algebra with the cross product
being the Lie bracket.
Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on the
choice of orientation or "handedness". The product can be generalized in various ways; it can be made independent
of orientation by changing the result to pseudovector, or in arbitrary dimensions the exterior product of vectors can
be used with a bivector or two-form result. Also, using the orientation and metric structure just as for the traditional
3d cross product, one can in n dimensions take the product of n - 1 vectors to produce a vector perpendicular to all of
them. But if the product is limited to non-trivial binary products with vector results it exists only in three and seven
dimensions.
Definition
The cross product of two vectors a and b is denoted by a × b. In
physics, sometimes the notation a∧b is used,[1] though this is avoided
in mathematics to avoid confusion with the exterior product.
The cross product a × b is defined as a vector c that is perpendicular to
both a and b, with a direction given by the right-hand rule and a
magnitude equal to the area of the parallelogram that the vectors span.
The cross product is defined by the formula[2] [3]
a × b = a b sin(θ) n,
where θ is the measure of the smaller angle between a and b (0° ≤ θ ≤ 180°), a and b are the magnitudes of vectors a
and b (i.e., a = |a| and b = |b|), and n is a unit vector perpendicular to the plane containing a and b in the direction
given by the right-hand rule as illustrated. If the vectors a and b are parallel (i.e., the angle θ between them is either
0° or 180°), then by the above formula the cross product of a and b is the zero vector 0.
The direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand
in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb (see
the picture on the right). Using this rule implies that the cross-product is anti-commutative, i.e., b × a = -(a × b). By
pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the
opposite direction, reversing the sign of the product vector.
Using the cross product requires the handedness of the coordinate system to be taken into account (as is explicit in the
definition above). If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand
rule and points in the opposite direction.
Coordinate notation
The basis vectors i, j, and k satisfy the following equalities:
i × j = k,  j × k = i,  k × i = j,
i × i = j × j = k × k = 0.
Together with the skew-symmetry and bilinearity of the product, these three identities are sufficient to determine the
cross product of any two vectors. In particular, the following component identity can be established:
a × b = (a2b3 − a3b2) i + (a3b1 − a1b3) j + (a1b2 − a2b1) k.
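The basis identities and the component formula above can be verified with np.cross; the sample vectors are arbitrary.

```python
import numpy as np

i, j, k = np.eye(3)                   # the standard basis vectors as rows of the identity

assert np.allclose(np.cross(i, j), k)
assert np.allclose(np.cross(j, k), i)
assert np.allclose(np.cross(k, i), j)

a = np.array([2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0])
components = np.array([a[1]*b[2] - a[2]*b[1],     # i component
                       a[2]*b[0] - a[0]*b[2],     # j component
                       a[0]*b[1] - a[1]*b[0]])    # k component
assert np.allclose(np.cross(a, b), components)
print(np.cross(a, b))                 # [-3.  6. -3.]
```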
Matrix notation
The definition of the cross product can also be represented by the determinant of a formal matrix:
a × b = det [ i  j  k ; a1  a2  a3 ; b1  b2  b3 ].
Using cofactor expansion along the first row, this expands to[4]
a × b = (a2b3 − a3b2) i − (a1b3 − a3b1) j + (a1b2 − a2b1) k.
Properties
Geometric meaning
The magnitude of the cross product can be interpreted as
the positive area of the parallelogram having a and b as
sides (see Figure 1):
|a × b| = |a| |b| sin θ.
Indeed, one can also compute the volume V of a parallelepiped having a, b and c as edges by using a combination of
a cross product and a dot product, called the scalar triple product (see Figure 2). Since the result of the scalar triple
product may be negative, the volume of the parallelepiped is given by its absolute value; for instance,
V = |a · (b × c)|.
Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product
can be thought of as a measure of "perpendicularness" in the same way that the dot product is a measure of
"parallelness". Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a
magnitude of zero if the two are parallel. The opposite is true for the dot product of two unit vectors.
Algebraic properties
The cross product is anticommutative,
a × b = −(b × a),
distributive over addition,
a × (b + c) = (a × b) + (a × c),
and compatible with scalar multiplication, so that (ra) × b = a × (rb) = r(a × b).
Distributivity, linearity and the Jacobi identity show that R3 together with vector addition and the cross product forms a
Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3).
The cross product does not obey the cancellation law: a × b = a × c with non-zero a does not imply that b = c.
Instead, if a × b = a × c, then by distributivity
a × (b − c) = 0.
If neither a nor b − c is zero then, from the definition of the cross product, the angle between them must be zero and
they must be parallel. They are related by a scale factor, so one of b or c can be expressed in terms of the other, for
example c = b + t a for some scalar t.
If, in addition, a · b = a · c for the same non-zero a, then likewise a · (b − c) = 0, so b − c is both parallel and
perpendicular to the non-zero vector a, something that is only possible if b − c = 0, so
they are identical.
From the geometrical definition, the cross product is invariant under rotations about the axis defined by a × b. More
generally, the cross product obeys the following identity under matrix transformations:
(Ma) × (Mb) = (det M) (M⁻¹)ᵀ (a × b),
where M is an invertible 3×3 matrix.
Differentiation
The product rule applies to the cross product in a similar manner:
d/dt (a × b) = (da/dt) × b + a × (db/dt).
This identity can be easily proved using the matrix multiplication representation.
Triple product expansion
The scalar triple product of three vectors is defined as
a · (b × c).
It is the signed volume of the parallelepiped with edges a, b and c, and as such the vectors can be used in any order
that is an even permutation of the above ordering. The following therefore are equal:
a · (b × c) = b · (c × a) = c · (a × b).
The vector triple product is the cross product of a vector with the result of another cross product, and is related to the
dot product by the following formula:
a × (b × c) = b(a · c) − c(a · b).
The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This
formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector
calculus, is
∇ × (∇ × f) = ∇(∇ · f) − ∇²f,
where ∇² is the vector Laplacian.
Alternative formulation
The cross product and the dot product are related by:
|a × b|² = |a|² |b|² − (a · b)².
The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the
vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in
terms of the angle θ between the two vectors, as a · b = |a| |b| cos θ, the above relationship can be rewritten as
|a × b| = |a| |b| √(1 − cos²θ) = |a| |b| sin θ,
which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by
a and b (see definition above).
The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b
provides an alternative definition of the cross product.[5]
Lagrange's identity
The relation
|a × b|² = |a|² |b|² − (a · b)²
can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as:[6]
Σ1≤i<j≤n (aibj − ajbi)² = |a|² |b|² − (a · b)²,
where a and b may be n-dimensional vectors. In the case n = 3, combining these two equations results in the
expression for the magnitude of the cross product in terms of its components:[7]
|a × b|² = (a1b2 − a2b1)² + (a2b3 − a3b2)² + (a3b1 − a1b3)².
The same result is found directly using the components of the cross product found from
a × b = (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1).
In R3 Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra.
It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case
of the Binet-Cauchy identity:[8] [9]
(a × b) · (c × d) = (a · c)(b · d) − (a · d)(b · c).
Conversion to matrix multiplication
The vector cross product can also be expressed as the product of a skew-symmetric matrix and a vector,
a × b = [a]× b = (aT [b]×)T,
where superscript T refers to the transpose matrix, and [a]× is defined by:
[a]× = [ 0, −a3, a2 ; a3, 0, −a1 ; −a2, a1, 0 ].
This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors
can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is
equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to
vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher
dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to
vectors.
This notation is also often much easier to work with, for example, in epipolar geometry.
From the general properties of the cross product it follows immediately that
[a]× a = 0   and   aT [a]× = 0,
and from the fact that [a]× is skew-symmetric it follows that
bT [a]× b = 0.
The above-mentioned triple product expansion (bac-cab rule) can be easily proven using this notation.
The above definition of means that there is a one-to-one mapping between the set of 3×3 skew-symmetric
matrices, also known as the Lie algebra of SO(3), and the operation of taking the cross product with some vector .
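A sketch of the skew-symmetric representation described above: the illustrative helper skew builds [a]×, and multiplying it by b reproduces a × b.

```python
import numpy as np

def skew(a):
    """Return the skew-symmetric matrix [a]_x such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[    0, -a[2],  a[1]],
                     [ a[2],     0, -a[0]],
                     [-a[1],  a[0],     0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

assert np.allclose(skew(a) @ b, np.cross(a, b))
assert np.allclose(skew(a) @ a, 0)          # a x a = 0
assert np.allclose(skew(a), -skew(a).T)     # the matrix is skew-symmetric
print(skew(a) @ b)                          # [-3.  6. -3.]
```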
Index notation
The cross product can alternatively be defined in terms of the Levi-Civita symbol εijk:
(a × b)i = Σj Σk εijk aj bk,
where the indices i, j, k correspond, as in the previous section, to orthogonal vector components. This
characterization of the cross product is often expressed more compactly using the Einstein summation convention as
(a × b)i = εijk aj bk,
in which repeated indices are summed from 1 to 3. Note that this representation is another form of the
skew-symmetric representation of the cross product:
([a]×)ij = −εijk ak.
In classical mechanics: representing the cross-product with the Levi-Civita symbol can cause
mechanical-symmetries to be obvious when physical-systems are isotropic in space. (Quick example: consider a
particle in a Hooke's Law potential in three-space, free to oscillate in three dimensions; none of these dimensions are
"special" in any sense, so symmetries lie in the cross-product-represented angular-momentum which are made clear
by the abovementioned Levi-Civita representation).
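The index-notation definition can be evaluated directly by storing the Levi-Civita symbol as a 3×3×3 array and contracting it with einsum. This is only a numerical illustration, not an efficient way to compute cross products.

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations of (1, 2, 3)
    eps[i, k, j] = -1.0   # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# c_i = eps_ijk a_j b_k, summing over the repeated indices j and k
c = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(c, np.cross(a, b))
print(c)
```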
Mnemonic
The word "xyzzy" can be used to remember the definition of the cross product.
If
a = b × c,
where a = ax i + ay j + az k, b = bx i + by j + bz k, and c = cx i + cy j + cz k,
then:
ax = by cz − bz cy,
ay = bz cx − bx cz,
az = bx cy − by cx.
The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z
→ x. The problem, of course, is how to remember the first equation, and two options are available for this purpose:
either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy
sequence.
Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned matrix, the first
three letters of the word xyzzy can be very easily remembered.
Cross Visualization
Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation.
While this method does not have any real mathematical basis, it may help you to remember the correct Cross Product
formula.
If
then:
If we want to obtain the formula for we simply drop the and from the formula, and take the next two
components down -
It should be noted that when doing this for the next two elements down should "wrap around" the matrix so that
after the z component comes the x component. For clarity, when performing this operation for , the next two
components should be z and x (in that order). While for the next two components should be taken as x and y.
For then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we
can take the first element on the left and simply multiply by the element that the cross points to in the right hand
matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as
well. This results in our formula -
We can do this in the same way for and to construct their associated formulas.
Applications
Computational geometry
The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in
computer graphics.
In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by
three points p1 = (x1, y1), p2 = (x2, y2) and p3 = (x3, y3). It corresponds to the direction of the cross product of the two
coplanar vectors defined by the pairs of points p1, p2 and p1, p3, i.e., by the sign of the expression
P = (x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1).
In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points
constitute a positive (counter-clockwise) angle of rotation around p1 from p2 to p3, otherwise a negative angle. From
another point of view, the sign of P tells whether p3 lies to the left or to the right of line p1 p2.
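A sketch of the orientation test just described; the helper orientation is illustrative and returns the sign of the z-component of the cross product of the two coplanar vectors.

```python
import numpy as np

def orientation(p1, p2, p3):
    """Sign of (p2 - p1) x (p3 - p1): >0 counter-clockwise, <0 clockwise, 0 collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return np.sign((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))

print(orientation((0, 0), (1, 0), (0, 1)))   #  1.0: counter-clockwise turn
print(orientation((0, 0), (0, 1), (1, 0)))   # -1.0: clockwise turn
print(orientation((0, 0), (1, 1), (2, 2)))   #  0.0: collinear points
```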
Mechanics
The moment M of a force F applied at point B around point A is given as
M = rAB × F,
where rAB is the vector from A to B.
Other
The cross product occurs in the formula for the vector operator curl. It is also used to describe the Lorentz force
experienced by a moving electrical charge in a magnetic field. The definitions of torque and angular momentum also
involve the cross product.
The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and
multi-view geometry, in particular when deriving matching constraints.
Cross product as an exterior product
The cross product can be viewed in terms of the exterior product: a × b is the Hodge dual of the bivector a ∧ b.
This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three
dimensions is the result an oriented line element – a vector – whereas, for example, in 4 dimensions the Hodge dual
of a bivector is two-dimensional – another oriented plane element. So, in three dimensions only is the cross product
of a and b the vector dual to the bivector a∧b: it is perpendicular to the bivector, with orientation dependent on the
coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a∧b has relative to
the unit bivector; precisely the properties described above.
Generalizations
There are several ways to generalize the cross product to higher dimensions.
Lie algebra
The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are
axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity.
Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory.
For example, the Heisenberg algebra gives another Lie algebra structure on R3: in the basis {x, y, z} the product is
[x, y] = z, [x, z] = [y, z] = 0.
Quaternions
The cross product can also be described in terms of quaternions, and this is why the letters i, j, k are a convention for
the standard basis on R3. The unit vectors i, j, k correspond to "binary" (180°) rotations about their respective
axes (Altmann, S. L., 1986, Ch. 12), said rotations being represented by "pure" quaternions (zero scalar part) with
unit norms.
For instance, the above given cross product relations among i, j, and k agree with the multiplicative relations among
the quaternions i, j, and k. In general, if a vector [a1, a2, a3] is represented as the quaternion a1i + a2j + a3k, the cross
product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result.
The real part will be the negative of the dot product of the two vectors.
Alternatively and more straightforwardly, using the above identification of the 'purely imaginary' quaternions with
, the cross product may be thought of as half of the commutator of two quaternions.
Octonions
A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the
quaternions. The nonexistence of such cross products of two vectors in other dimensions is related to the result that
the only normed division algebras are the ones with dimension 1, 2, 4, and 8; Hurwitz theorem.
Wedge product
In general dimension, there is no direct analogue of the binary cross product. There is however the wedge product,
which has similar properties, except that the wedge product of two vectors is now a 2-vector instead of an ordinary
vector. As mentioned above, the cross product can be interpreted as the wedge product in three dimensions after
using Hodge duality to identify 2-vectors with vectors.
The wedge product and dot product can be combined to form the Clifford product.
Multilinear algebra
In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor) obtained from
the 3-dimensional volume form,[10] a (0,3)-tensor, by raising an index.
In detail, the 3-dimensional volume form defines a product by taking the determinant of the matrix
given by these 3 vectors. By duality, this is equivalent to a function (fixing any two inputs gives a
function by evaluating on the third input) and in the presence of an inner product (such as the dot product;
more generally, a non-degenerate bilinear form), we have an isomorphism and thus this yields a map
which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a
(1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index".
Translating the above algebra into geometry, the function "volume of the parallelepiped defined by " (where
the first two vectors are fixed and the last is an input), which defines a function , can be represented uniquely
as the dot product with a vector: this vector is the cross product From this perspective, the cross product is
defined by the scalar triple product,
In the same way, in higher dimensions one may define generalized cross products by raising indices of the
n-dimensional volume form, which is a -tensor. The most direct generalizations of the cross product are to
define either:
• a -tensor, which takes as input vectors, and gives as output 1 vector – an -ary vector-valued
product, or
• a -tensor, which takes as input 2 vectors and gives as output skew-symmetric tensor of rank n−2 – a
binary product with rank n−2 tensor values. One can also define -tensors for other k.
These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity.
The -ary product can be described as follows: given vectors in define their generalized
cross product as:
• perpendicular to the hyperplane defined by the
• magnitude is the volume of the parallelotope defined by the which can be computed as the Gram determinant
of the
• oriented so that is positively oriented.
This is the unique multilinear, alternating product which evaluates to en when applied to (e1, e2, ..., en−1), and so forth
for cyclic permutations of indices.
In coordinates, one can give a formula for this -ary analogue of the cross product in Rn by:
This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the
row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the
ordered vectors (v1,...,vn-1,Λ(v1,...,vn-1)) have a positive orientation with respect to (e1,...,en). If n is odd, this
modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product.
In the case that n is even, however, the distinction must be kept. This -ary form enjoys many of the same
properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each
argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector
cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the
arguments.
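A sketch of the (n − 1)-ary product in coordinates, following the convention above that the row of basis vectors is the last row of the determinant: the i-th component is the determinant of the matrix whose rows are v1, ..., vn−1 followed by the i-th standard basis vector. The helper generalized_cross is illustrative.

```python
import numpy as np

def generalized_cross(*vectors):
    """(n-1)-ary cross product in R^n: perpendicular to all inputs, with the parallelotope volume as its length."""
    vs = np.array(vectors, dtype=float)
    n = vs.shape[1]
    assert vs.shape == (n - 1, n), "need n-1 vectors of dimension n"
    # i-th component: determinant of the rows v1..v_{n-1} with e_i appended as the last row
    return np.array([np.linalg.det(np.vstack([vs, np.eye(n)[i]])) for i in range(n)])

# In R^3 this reduces to the ordinary cross product
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(generalized_cross(a, b), np.cross(a, b))

# In R^4 the product of three vectors is perpendicular to each of them
u, v, w = np.eye(4)[:3]
x = generalized_cross(u, v, w)
print(x)                                       # [0. 0. 0. 1.]
assert np.allclose([np.dot(x, y) for y in (u, v, w)], 0)
```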
History
In 1773, Joseph Louis Lagrange introduced the component form of both the dot and cross products in order to study
the tetrahedron in three dimensions.[11] In 1843 the Irish mathematical physicist Sir William Rowan Hamilton
introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0,
v], where u and v are vectors in R3, their quaternion product can be summarized as [−u·v, u×v]. James Clerk
Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other
reasons quaternions for a time were an essential part of physics education.
In 1878 William Kingdon Clifford published his Elements of Dynamic which brought together many
mathematical ideas. He defined the product of two vectors to have magnitude equal to the area of the parallelogram
of which they are two sides, and direction perpendicular to their plane.
Oliver Heaviside in England and Josiah Willard Gibbs, a professor at Yale University in Connecticut, also felt that
quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted. Thus,
about forty years after the quaternion product, the dot product and cross product were introduced—to heated
opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce
the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today.[12]
Largely independent of this development, and largely unappreciated at the time, Hermann Grassmann created a
geometric algebra not tied to dimension two or three, with the exterior product playing a central role. William
Kingdon Clifford combined the algebras of Hamilton and Grassmann to produce Clifford algebra, where in the case
of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross
product.
The cross notation, which began with Gibbs, inspired the name "cross product". Originally it appeared in privately
published notes for his students in 1881 as Elements of Vector Analysis. The utility for mechanics was noted by
Aleksandr Kotelnikov. Gibbs's notation —and the name— later reached a wide audience through Vector Analysis, a
textbook by Edwin Bidwell Wilson, a former student. Wilson rearranged material from Gibbs's lectures, together
with material from publications by Heaviside, Föppl, and Hamilton. He divided vector analysis into three parts:
First, that which concerns addition and the scalar and vector products of vectors. Second, that which concerns
the differential and integral calculus in its relations to scalar and vector functions. Third, that which contains
the theory of the linear vector function.
Two main kinds of vector multiplications were defined, and they were called as follows:
• The direct, scalar, or dot product of two vectors
• The skew, vector, or cross product of two vectors
Several kinds of triple products and products of more than three vectors were also examined. The above mentioned
triple product expansion was also included.
Notes
[1] Jeffreys, H and Jeffreys, BS (1999). Methods of mathematical physics (http://worldcat.org/oclc/41158050?tab=details). Cambridge University Press.
[2] Wilson 1901, p. 60–61
[3] Dennis G. Zill, Michael R. Cullen (2006). "Definition 7.4: Cross product of two vectors" (http://books.google.com/?id=x7uWk8lxVNYC&pg=PA324). Advanced engineering mathematics (3rd ed.). Jones & Bartlett Learning. p. 324. ISBN 076374591X.
[4] Dennis G. Zill, Michael R. Cullen (2006). "Equation 7: a × b as sum of determinants" (http://books.google.com/?id=x7uWk8lxVNYC&pg=PA321). Cited work. Jones & Bartlett Learning. p. 321. ISBN 076374591X.
[5] WS Massey (Dec. 1983). "Cross products of vectors in higher dimensional Euclidean spaces" (http://www.jstor.org/stable/2323537). The American Mathematical Monthly 90 (10): 697–701. doi:10.2307/2323537.
[6] Vladimir A. Boichenko, Gennadiĭ Alekseevich Leonov, Volker Reitmann (2005). Dimension theory for ordinary differential equations (http://books.google.com/?id=9bN1-b_dSYsC&pg=PA26). Vieweg+Teubner Verlag. p. 26. ISBN 3519004372.
[7] Pertti Lounesto (2001). Clifford algebras and spinors (http://books.google.com/?id=kOsybQWDK4oC&pg=PA94) (2nd ed.). Cambridge University Press. p. 94. ISBN 0521005515.
[8] Shuangzhe Liu and Gõtz Trenkler (2008). "Hadamard, Khatri-Rao, Kronecker and other matrix products" (http://www.math.ualberta.ca/ijiss/SS-Volume-4-2008/No-1-08/SS-08-01-17.pdf). Int J Information and Systems Sciences 4 (1): 160–177.
[9] Eric W. Weisstein (2003). "Binet-Cauchy identity" (http://books.google.com/?id=8LmCzWQYh_UC&pg=PA228). CRC Concise Encyclopedia of Mathematics (2nd ed.). CRC Press. p. 228. ISBN 1584883472.
[10] By a volume form one means a function that takes in n vectors and gives out a scalar, the volume of the parallelotope defined by the vectors. This is an n-ary multilinear skew-symmetric form. In the presence of a basis, such as on Rn, this is given by the determinant, but in an abstract vector space this is added structure.
[11] Lagrange, JL (1773). "Solutions analytiques de quelques problèmes sur les pyramides triangulaires". Oeuvres. vol 3.
[12] Nahin, Paul J. (2000). Oliver Heaviside: the life, work, and times of an electrical genius of the Victorian age. JHU Press. pp. 108–109. ISBN 0-801-86909-9.
References
• Cajori, Florian (1929). A History Of Mathematical Notations Volume II (http://www.archive.org/details/
historyofmathema027671mbp). Open Court Publishing. p. 134. ISBN 978-0-486-67766-8
• William Kingdon Clifford (1878) Elements of Dynamic (http://dlxs2.library.cornell.edu/cgi/t/text/
text-idx?c=math;cc=math;view=toc;subview=short;idno=04370002), Part I, page 95, London: MacMillan & Co;
online presentation by Cornell University Historical Mathematical Monographs.
• E. A. Milne (1948) Vectorial Mechanics, Chapter 2: Vector Product, pp 11 –31, London: Methuen Publishing.
• Wilson, Edwin Bidwell (1901). Vector Analysis: A text-book for the use of students of mathematics and physics,
founded upon the lectures of J. Willard Gibbs (http://www.archive.org/details/117714283). Yale University
Press
External links
• Weisstein, Eric W., " Cross Product (http://mathworld.wolfram.com/CrossProduct.html)" from MathWorld.
• A quick geometrical derivation and interpretation of cross products (http://behindtheguesses.blogspot.com/
2009/04/dot-and-cross-products.html)
• Z.K. Silagadze (2002). Multi-dimensional vector product. Journal of Physics. A35, 4949 (http://uk.arxiv.org/
abs/math.la/0204357) (it is only possible in 7-D space)
• Real and Complex Products of Complex Numbers (http://www.cut-the-knot.org/arithmetic/algebra/
RealComplexProducts.shtml)
• An interactive tutorial (http://physics.syr.edu/courses/java-suite/crosspro.html) created at Syracuse
University - (requires java)
• W. Kahan (2007). Cross-Products and Rotations in Euclidean 2- and 3-Space. University of California, Berkeley
(PDF). (http://www.cs.berkeley.edu/~wkahan/MathH110/Cross.pdf)
Triple product
In mathematics, the triple product is a product of three vectors. The name "triple product" is used for two different
products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product.
Geometric interpretation
Geometrically, the scalar triple product
a · (b × c)
is the (signed) volume of the parallelepiped defined by the three vectors given.
Properties
The scalar triple product can be evaluated numerically using any one of the following equivalent characterizations:
a · (b × c) = b · (c × a) = c · (a × b).
Switching the two vectors in the cross product negates the triple product, i.e.:
a · (b × c) = −a · (c × b).
The parentheses may be omitted without causing ambiguity, since the dot product cannot be evaluated first. If it
were, it would leave the cross product of a scalar and a vector, which is not defined.
The scalar triple product can also be understood as the determinant of the 3 × 3 matrix having the three vectors as its
rows or columns (the determinant of a transposed matrix is the same as the original); this quantity is invariant under
coordinate rotation.
Note that if the scalar triple product is equal to zero, then three vectors a, b, and c are coplanar, since the
"parallelepiped" defined by them would be flat and have no volume.
There is also this property of triple products:
Scalar or pseudoscalar
Although the scalar triple product gives the volume of the parallelepiped it is the signed volume, the sign depending
on the orientation of the frame or the parity of the permutation of the vectors. This means the product is negated if
the orientation is reversed, for example by a parity transformation, and so is more properly described as a
pseudoscalar if the orientation can change.
This also relates to the handedness of the cross product; the cross product transforms as a pseudovector under parity
transformations and so is properly described as a pseudovector. The dot product of two vectors is a scalar but the dot
product of a pseudovector and a vector is a pseudoscalar, so the scalar triple product must be pseudoscalar valued.
As an exterior product
In exterior algebra and geometric algebra the exterior product of two
vectors is a bivector, while the exterior product of three vectors is a
trivector. A bivector is an oriented plane element and a trivector is an
oriented volume element, in the same way that a vector is an oriented
line element. Given vectors a, b and c, the product
a ∧ b ∧ c
is a trivector with magnitude equal to the scalar triple product, and is the pseudoscalar dual of the triple product. As
the exterior product is associative, brackets are not needed as it does not matter which of a ∧ b or b ∧ c is calculated
first, though the order of the vectors in the product does matter. Geometrically the trivector a ∧ b ∧ c corresponds to
the parallelepiped spanned by a, b, and c, with bivectors a ∧ b, b ∧ c and a ∧ c matching the parallelogram faces of
the parallelepiped.
Vector triple product
The vector triple product is the cross product of one vector with the cross product of the other two:
a × (b × c) = b(a · c) − c(a · b),
(a × b) × c = −c × (a × b) = b(c · a) − a(c · b).
The first formula is known as triple product expansion, or Lagrange's formula,[1] [2] although the latter name is
ambiguous (see disambiguation page). Its right hand member is easier to remember by using the mnemonic "BAC
minus CAB", provided one keeps in mind which vectors are dotted together. A proof is provided below.
These formulas are very useful in simplifying vector calculations in physics. A related identity regarding gradients
and useful in vector calculus is Lagrange's formula of the vector cross-product identity:[3]
∇ × (∇ × f) = ∇(∇ · f) − ∇²f.
This can be also regarded as a special case of the more general Laplace-de Rham operator Δ = dδ + δd.
Proof
The x component of u × (v × w) is given by:
uy(vxwy − vywx) − uz(vzwx − vxwz)
or
vx(uywy + uzwz) − wx(uyvy + uzvz).
By adding and subtracting uxvxwx, this becomes
vx(uxwx + uywy + uzwz) − wx(uxvx + uyvy + uzvz) = vx(u · w) − wx(u · v).
Similarly, the y and z components of u × (v × w) are given by:
vy(u · w) − wy(u · v)
and
vz(u · w) − wz(u · v).
By combining these three components we obtain:
u × (v × w) = v(u · w) − w(u · v).[4]
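The expansion proved above can be spot-checked numerically; the vectors used here are arbitrary.

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])
w = np.array([2.0, 5.0, 1.0])

lhs = np.cross(u, np.cross(v, w))
rhs = v * np.dot(u, w) - w * np.dot(u, v)   # "BAC minus CAB"
assert np.allclose(lhs, rhs)
print(lhs)
```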
Vector or pseudovector
Where parity transformations need to be considered, so the cross product is treated as a pseudovector, the vector
triple product is vector rather than pseudovector valued, as it is the product of a vector a and a pseudovector b × c.
This can also be seen from the expansion in terms of the dot product, which consists only of a sum of vectors
multiplied by scalars so must be vector valued.
Notation
Using the Levi-Civita symbol, the scalar triple product is
a · (b × c) = εijk ai bj ck,
and the vector triple product is
(a × (b × c))i = εijk aj εklm bl cm = (δil δjm − δim δjl) aj bl cm = bi (a · c) − ci (a · b),
with summation over repeated indices.
Note
[1] Joseph Louis Lagrange did not develop the cross product as an algebraic product on vectors, but did use an equivalent form of it in components: see Lagrange, J-L (1773). "Solutions analytiques de quelques problèmes sur les pyramides triangulaires". Oeuvres. vol 3. He may have written a formula similar to the triple product expansion in component form. See also Lagrange's identity and Kiyoshi Itō (1987). Encyclopedic Dictionary of Mathematics. MIT Press. p. 1679. ISBN 0262590204.
[2] Kiyoshi Itō (1993). "§C: Vector product" (http://books.google.com/books?id=azS2ktxrz3EC&pg=PA1679). Encyclopedic dictionary of mathematics (2nd ed.). MIT Press. p. 1679. ISBN 0262590204.
[3] Pengzhi Lin (2008). Numerical Modelling of Water Waves: An Introduction to Engineers and Scientists (http://books.google.com/books?id=x6ALwaliu5YC&pg=PA13). Routledge. p. 13. ISBN 0415415780.
[4] J. Heading (1970). Mathematical Methods in Science and Engineering. American Elsevier Publishing Company, Inc. pp. 262–263.
Binet–Cauchy identity
In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy,
states that [1]
(Σi aici)(Σj bjdj) = (Σi aidi)(Σj bjcj) + Σ1≤i<j≤n (aibj − ajbi)(cidj − cjdi)
for every choice of real or complex numbers (or more generally, elements of a commutative ring). Setting ai = ci and
bi = di, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the
Euclidean space Rn.
In the three-dimensional case the identity can be written in terms of cross and dot products as
(a × b) · (c × d) = (a · c)(b · d) − (a · d)(b · c),
where a, b, c, and d are vectors. It may also be written as a formula giving the dot product of two wedge products, as
(a ∧ b) · (c ∧ d) = (a · c)(b · d) − (a · d)(b · c).
In the special case a = c and b = d, the formula yields
|a ∧ b|² = |a|² |b|² − (a · b)².
When both vectors are unit vectors, we obtain the usual relation
sin²θ + cos²θ = 1,
where θ is the angle between the vectors.
Proof
Expanding the last term,
where the second and fourth terms are the same and artificially added to complete the sums as follows:
This completes the proof after factoring out the terms indexed by i.
Generalization
A general form, also known as the Cauchy–Binet formula, states the following: Suppose A is an m×n matrix and B is
an n×m matrix. If S is a subset of {1, ..., n} with m elements, we write AS for the m×m matrix whose columns are
those columns of A that have indices from S. Similarly, we write BS for the m×m matrix whose rows are those rows
of B that have indices from S. Then the determinant of the matrix product of A and B satisfies the identity
det(AB) = ΣS det(AS) det(BS),
where the sum extends over all possible subsets S of {1, ..., n} with m elements.
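A numerical spot-check of the Cauchy–Binet formula for a small example; the helper names and the random test matrices are illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
m, n = 2, 4
A = rng.integers(-3, 4, size=(m, n)).astype(float)   # m x n
B = rng.integers(-3, 4, size=(n, m)).astype(float)   # n x m

lhs = np.linalg.det(A @ B)
rhs = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
          for S in combinations(range(n), m))         # all m-element subsets of {0, ..., n-1}
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```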
We get the original identity as a special case by setting m = 2 and taking the rows of A to be (a1, ..., an) and
(b1, ..., bn) and the columns of B to be (c1, ..., cn) and (d1, ..., dn).

Inner product space
In mathematics, an inner product space is a vector space with an additional structure called an inner product, which
associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors.
An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space. A
complete space with an inner product is called a Hilbert space. An incomplete space with an inner product is called a
pre-Hilbert space, since its completion with respect to the norm, induced by the inner product, becomes a Hilbert
space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces.
Definition
In this article, the field of scalars denoted F is either the field of real numbers R or the field of complex numbers C.
Formally, an inner product space is a vector space V over the field F together with an inner product, i.e., with a
map
⟨·, ·⟩ : V × V → F
that satisfies the following three axioms for all vectors x, y, z in V and all scalars a in F:[1] [2]
• Conjugate symmetry:
⟨x, y⟩ = ⟨y, x⟩*, where the star denotes complex conjugation.
• Linearity in the first argument:
⟨ax, y⟩ = a⟨x, y⟩ and ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩.
• Positive-definiteness:
⟨x, x⟩ ≥ 0,
with equality only for x = 0.
Notice that conjugate symmetry implies that ⟨x, x⟩ is real for all x, since we have ⟨x, x⟩ = ⟨x, x⟩*.
Conjugate symmetry and linearity in the first variable gives
so an inner product is a sesquilinear form. Conjugate symmetry is also called Hermitian symmetry, and a conjugate
symmetric sesquilinear form is called a Hermitian form. While the above axioms are more mathematically
economical, a compact verbal definition of an inner product is a positive-definite Hermitian form.
In the case of F = R, conjugate symmetry reduces to symmetry, and sesquilinear reduces to bilinear. So, an
inner product on a real vector space is a positive-definite symmetric bilinear form.
From the linearity property it is derived that x = 0 implies ⟨x, x⟩ = 0, while from the positive-definiteness
axiom we obtain the converse, ⟨x, x⟩ = 0 implies x = 0. Combining these two, we have the property that
⟨x, x⟩ = 0 if and only if x = 0.
The property of an inner product space that
⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩
and
⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩
is known as additivity.
Remark: Some authors, especially in physics and matrix algebra, prefer to define the inner product and the
sesquilinear form with linearity in the second argument rather than the first. Then the first argument becomes
conjugate linear, rather than the first. In those disciplines we would write the product as ⟨x | y⟩ (the
bra-ket notation of quantum mechanics), respectively x†y (dot product as a case of the convention of forming the
matrix product AB as the dot products of rows of A with columns of B). Here the kets and columns are identified
with the vectors of V and the bras and rows with the dual vectors or linear functionals of the dual space V*, with
conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature,
e.g., Emch [1972], taking ⟨x, y⟩ to be conjugate linear in x rather than y. A few instead find a middle ground by
recognizing both ⟨·, ·⟩ and ⟨· | ·⟩ as distinct notations differing only in which argument is conjugate linear.
There are various technical reasons why it is necessary to restrict the basefield to R and C in the definition.
Briefly, the basefield has to contain an ordered subfield (in order for non-negativity to make sense) and therefore has
to have characteristic equal to 0. This immediately excludes finite fields. The basefield has to have additional
structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of R or C will suffice
for this purpose, e.g., the algebraic numbers, but when it is a proper subfield (i.e., neither R nor C) even
finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner
product spaces over R or C, such as those used in quantum computation, are automatically metrically complete and
hence Hilbert spaces.
hence Hilbert spaces.
In some cases we need to consider non-negative semi-definite sesquilinear forms. This means that ⟨x, x⟩ is only
required to be non-negative. We show how to treat these below.
Examples
• A simple example is the real numbers with the standard multiplication as the inner product
⟨x, y⟩ := xy.
More generally, any Euclidean space Rn with the dot product is an inner product space:
⟨x, y⟩ := x · y = Σ xiyi.
• The general form of an inner product on Cn is given by (see also the sketch after this list):
⟨x, y⟩ := y*Mx
with M any Hermitian positive-definite matrix, and y* the conjugate transpose of y. For the real case this
corresponds to the dot product of the results of directionally differential scaling of the two vectors, with
positive scale factors and orthogonal directions of scaling. Up to an orthogonal transformation it is a
weighted-sum version of the dot product, with positive weights.
• The article on Hilbert space has several examples of inner product spaces wherein the metric induced by the inner
product yields a complete metric space. An example of an inner product which induces an incomplete metric
occurs with the space C[a, b] of continuous complex valued functions on the interval [a, b]. The inner product is
⟨f, g⟩ := ∫ab f(t) g(t)* dt.
This space is not complete; consider for example, for the interval [−1,1] the sequence of "step" functions
{ fk }k where
• fk(t) is 0 for t in the subinterval [−1,0]
• fk(t) is 1 for t in the subinterval [1/k, 1]
• fk is affine in [0, 1/k].
This sequence is a Cauchy sequence which does not converge to a continuous function.
• For random variables X and Y, the expected value of their product
⟨X, Y⟩ := E(XY)
is an inner product. In this case, ⟨X, X⟩ = 0 if and only if Pr(X = 0) = 1 (i.e., X = 0 almost surely). This definition of
expectation as inner product can be extended to random vectors as well.
• For square real matrices, ⟨A, B⟩ := tr(ABT), with transpose as conjugation (⟨A, B⟩ = ⟨BT, AT⟩), is an
inner product (see the sketch below).
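Two of the examples in the list above, sketched numerically: the Hermitian form ⟨x, y⟩ = y*Mx with a (real) symmetric positive-definite M, and the trace inner product on real square matrices. The particular matrices and vectors are arbitrary.

```python
import numpy as np

# <x, y> = y* M x with M Hermitian (here real symmetric) positive-definite
M = np.array([[2.0, 1.0], [1.0, 3.0]])
x = np.array([1.0 + 1.0j, 2.0])
y = np.array([0.5, -1.0j])

def ip(x, y):
    return np.conj(y) @ (M @ x)   # y* M x

print(ip(x, y))
print(ip(x, x))                   # (20+0j): always a non-negative real number

# Trace inner product on real square matrices: <A, B> = trace(A B^T)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.trace(A @ B.T))          # 5.0
```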
The sequence space ℓp with p ≠ 2 is a normed space but not an inner product space, because its norm does not satisfy
the parallelogram equality required of a norm to have an inner product associated with it.[3] [4]
However, inner product spaces have a naturally defined norm based upon the inner product of the space itself that
does satisfy the parallelogram equality:
||x|| := √⟨x, x⟩.
This is well defined by the nonnegativity axiom of the definition of inner product space. The norm is thought of as
the length of the vector x. Directly from the axioms, we can prove the following:
• Cauchy–Schwarz inequality: for x, y elements of V,
|⟨x, y⟩| ≤ ||x|| ||y||,
with equality if and only if x and y are linearly dependent. This is one of the most important inequalities in
mathematics. It is also known in the Russian mathematical literature as the Cauchy–Bunyakowski–Schwarz
inequality.
Because of its importance, its short proof should be noted.
It is trivial to prove the inequality true in the case y = 0. Thus we assume ⟨y, y⟩ is nonzero, giving us the
following:
• Angle: in the case F = R, the angle between two non-zero vectors x and y is defined by
∠(x, y) = arccos( ⟨x, y⟩ / (||x|| ||y||) ).
We assume the value of the angle is chosen to be in the interval [0, +π]. This is in analogy to the situation in
two-dimensional Euclidean space.
In the case F = C, the angle in the interval [0, +π/2] is typically defined by
∠(x, y) = arccos( |⟨x, y⟩| / (||x|| ||y||) ).
Correspondingly, we will say that non-zero vectors x and y of V are orthogonal if and only if their inner
product is zero.
• Homogeneity: for x an element of V and r a scalar,
||rx|| = |r| ||x||.
The last two properties show the function defined is indeed a norm.
Because of the triangle inequality and because of axiom 2, we see that ||·|| is a norm which turns V into a
normed vector space and hence also into a metric space. The most important inner product spaces are the ones
which are complete with respect to this metric; they are called Hilbert spaces. Every inner product space V is a
dense subspace of some Hilbert space. This Hilbert space is essentially uniquely determined by V and is
constructed by completing V.
• Pythagorean theorem: Whenever x, y are in V and ⟨x, y⟩ = 0, then
||x||² + ||y||² = ||x + y||².
The proof of the identity requires only expressing the definition of norm in terms of the inner product and
multiplying out, using the property of additivity of each component.
The name Pythagorean theorem arises from the geometric interpretation of this result as an analogue of the
theorem in synthetic geometry. Note that the proof of the Pythagorean theorem in synthetic geometry is
considerably more elaborate because of the paucity of underlying structure. In this sense, the synthetic
Pythagorean theorem, if correctly demonstrated is deeper than the version given above.
An induction on the Pythagorean theorem yields:
• If x1, ..., xn are orthogonal vectors, that is, ⟨xj, xk⟩ = 0 for distinct indices j ≠ k, then
Σi ||xi||² = ||Σi xi||².
In view of the Cauchy-Schwarz inequality, we also note that ⟨·, ·⟩ is continuous from V × V to F. This allows
us to extend Pythagoras' theorem to infinitely many summands:
• Parseval's identity: Suppose V is a complete inner product space. If {xk} are mutually orthogonal vectors in V then
Σk ||xk||² = ||Σk xk||²,
provided the infinite series on the left is convergent. Completeness of the space is needed to ensure that the
sequence of partial sums, which is easily shown to be a Cauchy sequence, converges.
The parallelogram law is, in fact, a necessary and sufficient condition for the existence of a scalar product
corresponding to a given norm. If it holds, the scalar product is defined by the polarization identity; in the real case,
⟨x, y⟩ = ( ||x + y||² − ||x − y||² ) / 4.
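A quick numerical check of the parallelogram law and of the real polarization identity for the standard dot product on R3; the vectors are arbitrary.

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5])
y = np.array([3.0, 1.0, -1.0])
norm = np.linalg.norm

# Parallelogram law: ||x+y||^2 + ||x-y||^2 = 2 ||x||^2 + 2 ||y||^2
assert np.isclose(norm(x + y)**2 + norm(x - y)**2, 2 * norm(x)**2 + 2 * norm(y)**2)

# Real polarization identity: <x, y> = (||x+y||^2 - ||x-y||^2) / 4
assert np.isclose(np.dot(x, y), (norm(x + y)**2 - norm(x - y)**2) / 4)
print(np.dot(x, y))
```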
Orthonormal sequences
Let V be a finite dimensional inner product space of dimension n. Recall that every basis of V consists of exactly n
linearly independent vectors. Using the Gram-Schmidt Process we may start with an arbitrary basis and transform it
into an orthonormal basis. That is, into a basis in which all the elements are orthogonal and have unit norm. In
symbols, a basis {e1, ..., en} is orthonormal if ⟨ei, ej⟩ = 0 for every i ≠ j and ⟨ei, ei⟩ = ||ei||² = 1 for each i.
This definition of orthonormal basis generalizes to the case of infinite dimensional inner product spaces in the
following way. Let V be any inner product space. Then a collection E = {ea}a∈A is a basis for V if the
subspace of V generated by finite linear combinations of elements of E is dense in V (in the norm induced by the
inner product). We say that E is an orthonormal basis for V if it is a basis and
⟨ea, eb⟩ = 0 if a ≠ b, and ⟨ea, ea⟩ = 1,
for all a, b ∈ A.
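A minimal sketch of the Gram-Schmidt process mentioned above, for finitely many linearly independent vectors in Rn; gram_schmidt is an illustrative helper, not a library routine.

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis of their span."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for e in basis:
            w = w - np.dot(w, e) * e      # remove the component along each earlier basis vector
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

E = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(np.round(E @ E.T, 10))              # identity matrix: the rows are orthonormal
```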
Using an infinite-dimensional analog of the Gram-Schmidt process one may show:
Theorem. Any separable inner product space V has an orthonormal basis.
Using the Hausdorff Maximal Principle and the fact that in a complete inner product space orthogonal projection
onto linear subspaces is well-defined, one may also show that
Theorem. Any complete inner product space V has an orthonormal basis.
The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The
answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from
Halmos's A Hilbert Space Problem Book (see the references).
Proof
Recall that the dimension of an inner product space is the cardinality of a maximal orthonormal system that it contains (by Zorn's lemma it contains
at least one, and any two have the same cardinality). An orthonormal basis is certainly a maximal orthonormal system, but as we shall see, the
converse need not hold. Observe that if G is a dense subspace of an inner product space H, then any orthonormal basis for G is automatically an
orthonormal basis for H. Thus, it suffices to construct an inner product space H with a dense subspace G whose dimension is strictly smaller
than that of H.
Let K be a Hilbert space of dimension (for instance, ). Let E be an orthonormal basis of K, so . Extend E to a Hamel
basis for K, where . Since it is known that the Hamel dimension of K is c, the cardinality of the continuum, it must be that
.
Let L be a Hilbert space of dimension c (for instance, ). Let B be an orthonormal basis for L, and let be a bijection.
Then there is a linear transformation such that for , and for .
Let and let be the graph of T. Let be the closure of G in H; we will show . Since for
any we have , it follows that .
Next, if , then for some , so ; since as well, we also have . It
follows that , so , and G is dense in H.
Finally, is a maximal orthonormal set in G; if
for all then certainly , so is the zero vector in G. Hence the dimension of G is , whereas it is clear
that the dimension of H is c. This completes the proof.
Theorem. Let V be the inner product space C[−π, π]. Then the sequence (indexed on the set of all integers) of
continuous functions
ek(t) = e^(ikt) / √(2π)
is an orthonormal basis of the space with the L2 inner product.
Normality of the sequence is by design, that is, the coefficients are chosen so that the norm comes out to 1. Finally
the fact that the sequence has a dense algebraic span, in the inner product norm, follows from the fact that the
sequence has a dense algebraic span, this time in the space of continuous periodic functions on with the
uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
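Concretely, with ek(t) = (2π)^(−1/2) e^(ikt) as above and the L2 inner product on [−π, π], both orthogonality and normality reduce to a single elementary integral:

⟨ej, ek⟩ = (1/2π) ∫ e^(i(j−k)t) dt over [−π, π] = 1 if j = k, and 0 if j ≠ k,

since e^(imt) integrates to zero over a full period for every nonzero integer m.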
Generalizations
Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are
closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is
weakened.
Related products
The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in
coordinates, the inner product is the product of a 1×n covector with an n×1 vector, yielding a 1×1 matrix (a scalar),
while the outer product is the product of an m×1 vector with a 1×n covector, yielding an m×n matrix. Note that the
outer product is defined for different dimensions, while the inner product requires the same dimension. If the
dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined
for square matrices).
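As a small numerical illustration of this coordinate description (using NumPy, which is of course not part of the subject matter; the vectors are chosen arbitrarily), one can form both products and check that the trace of the outer product recovers the inner product:

import numpy as np

v = np.array([1.0, 2.0, 3.0])       # an n x 1 vector (here n = 3)
w = np.array([4.0, -1.0, 0.5])      # another vector of the same dimension

inner = v @ w                       # covector (1 x n) times vector (n x 1): a scalar
outer = np.outer(v, w)              # vector (n x 1) times covector (1 x n): an n x n matrix

# When the dimensions agree, the inner product is the trace of the outer product.
assert np.isclose(inner, np.trace(outer))
print(inner, outer.shape)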
On an inner product space, or more generally a vector space with a nondegenerate form (so an isomorphism
V → V∗), vectors can be sent to covectors (in coordinates, via transpose), so one can take the inner product and
outer product of two vectors, not simply of a vector and a covector.
In a quip: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".
More abstractly, the outer product is the bilinear map W × V∗ → Hom(V, W) sending a vector and a covector
to a rank 1 linear transformation (simple tensor of type (1,1)), while the inner product is the bilinear evaluation map
V∗ × V → F given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the
covector/vector distinction.
The inner product and outer product should not be confused with the interior product and exterior product, which are
instead operations on vector fields and differential forms, or more generally on the exterior algebra.
As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are
combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors
(1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this
context the exterior product is usually called the "outer (alternatively, wedge) product". The inner product is more
correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive
definite (need not be an inner product).
References
• Axler, Sheldon (1997). Linear Algebra Done Right (2nd ed.). Berlin, New York: Springer-Verlag.
ISBN 978-0-387-98258-8
• Emch, Gerard G. (1972). Algebraic methods in statistical mechanics and quantum field theory.
Wiley-Interscience. ISBN 978-0-471-23900-0
• Young, Nicholas (1988). An introduction to Hilbert space. Cambridge University Press.
ISBN 978-0-521-33717-5
Sesquilinear form
In mathematics, a sesquilinear form on a complex vector space V is a map V × V → C that is linear in one argument
and antilinear in the other. The name originates from the numerical prefix sesqui- meaning "one and a half".
Compare with a bilinear form, which is linear in both arguments; although many authors, especially when working
solely in a complex setting, refer to sesquilinear forms as bilinear forms.
A motivating example is the inner product on a complex vector space, which is not bilinear, but instead sesquilinear.
See geometric motivation below.
A sesquilinear form can also be regarded as a complex bilinear map V̄ × V → C, where V̄ is the complex conjugate vector space to V. By the universal property of tensor products these are in
one-to-one correspondence with (complex) linear maps V̄ ⊗ V → C.
For a fixed z in V, the map obtained from φ by holding the antilinear argument fixed at z is a linear functional on V (i.e. an element of the dual space V*).
Likewise, holding the linear argument fixed at z yields a conjugate-linear functional on V.
Given any sesquilinear form φ on V we can define a second sesquilinear form ψ via the conjugate transpose: ψ(w, z) is defined to be the complex conjugate of φ(z, w).
In general, ψ and φ will be different. If they are the same then φ is said to be Hermitian. If they are negatives of one
another, then φ is said to be skew-Hermitian. Every sesquilinear form can be written as a sum of a Hermitian form
and a skew-Hermitian form.
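Explicitly, with ψ obtained from φ by the conjugate transpose as above, one such decomposition is

φ = ½(φ + ψ) + ½(φ − ψ),

where the first summand is Hermitian and the second is skew-Hermitian.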
Geometric motivation
Bilinear forms are to squaring (z2) what sesquilinear forms are to the Euclidean norm (|z|2 = z*z).
The norm associated to a sesquilinear form is invariant under multiplication by the complex circle (complex numbers
of unit norm), while the norm associated to a bilinear form is equivariant (with respect to squaring). Bilinear forms
are algebraically more natural, while sesquilinear forms are geometrically more natural.
If B is a bilinear form on a complex vector space and |x|B := B(x,x) is the associated norm, then |ix|B = B(ix,ix)=i2
B(x,x) = -|x|B.
By contrast, if S is a sesquilinear form on a complex vector space and |x|S := S(x, x) is the associated norm,
then |ix|S = S(ix, ix) = (−i)(i) S(x, x) = |x|S.
Hermitian form
The term Hermitian form may also refer to a different concept than that explained below: it may refer to a
certain differential form on a Hermitian manifold.
A Hermitian form (also called a symmetric sesquilinear form) is a sesquilinear form h : V × V → C such that h(w, z) is the complex conjugate of h(z, w) for all w, z in V.
More generally, the inner product on any complex Hilbert space is a Hermitian form.
A vector space with a Hermitian form (V,h) is called a Hermitian space.
If V is a finite-dimensional space, then relative to any basis {ei} of V, a Hermitian form is represented by a Hermitian
matrix H:
Skew-Hermitian form
A skew-Hermitian form (also called an antisymmetric sesquilinear form) is a sesquilinear form ε : V × V → C
such that ε(w, z) is the negative of the complex conjugate of ε(z, w) for all w, z in V.
References
• Hazewinkel, Michiel, ed. (2001), "Sesquilinear form" [1], Encyclopaedia of Mathematics, Springer,
ISBN 978-1556080104
References
[1] http:/ / eom. springer. de/ s084710. htm
Scalar multiplication
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra[1] [2] [3]
(or more generally, a module in abstract algebra[4] [5] ). In an intuitive geometrical context, scalar multiplication of a
real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction.
The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is
different from the scalar product, which is an inner product between two vectors.
Definition
In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V. The
result of applying this function to c in K and v in V is denoted cv.
Scalar multiplication obeys the following rules (vector in boldface):
• Left distributivity: (c + d)v = cv + dv;
• Right distributivity: c(v + w) = cv + cw;
• Associativity: (cd)v = c(dv);
• Multiplying by 1 does not change a vector: 1v = v;
• Multiplying by 0 gives the null vector: 0v = 0;
• Multiplying by -1 gives the additive inverse: (-1)v = -v.
Here + is addition either in the field or in the vector space, as appropriate; and 0 is the additive identity in either.
Juxtaposition indicates either scalar multiplication or the multiplication operation in the field.
Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space.
A geometric interpretation to scalar multiplication is a stretching or shrinking of a vector.
As a special case, V may be taken to be K itself and scalar multiplication may then be taken to be simply the
multiplication in the field. When V is Kn, then scalar multiplication is defined component-wise.
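As a sketch of the component-wise case (here V = R3, realized with NumPy; the particular scalars and vectors are arbitrary), the rules listed above are immediate to verify numerically:

import numpy as np

c, d = 2.0, -3.5
v = np.array([1.0, 4.0, -2.0])
w = np.array([0.5, 0.0, 7.0])

assert np.allclose((c + d) * v, c * v + d * v)   # left distributivity
assert np.allclose(c * (v + w), c * v + c * w)   # right distributivity
assert np.allclose((c * d) * v, c * (d * v))     # associativity
assert np.allclose(1 * v, v) and np.allclose(0 * v, np.zeros(3))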
The same idea goes through with no change if K is a commutative ring and V is a module over K. K can even be a
rig, but then there is no additive inverse. If K is not commutative, then the only change is that the order of the
multiplication may be reversed from what we've written above.
References
[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
[4] Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
[5] Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.
Euclidean space
In mathematics, Euclidean space is the Euclidean
plane and three-dimensional space of Euclidean
geometry, as well as the generalizations of these
notions to higher dimensions. The term “Euclidean”
distinguishes these spaces from the curved spaces of
non-Euclidean geometry and Einstein's general theory
of relativity, and is named for the Greek mathematician
Euclid of Alexandria.
From the modern viewpoint, there is essentially only one Euclidean space of each dimension. In dimension one this
is the real line; in dimension two it is the Cartesian plane; and in higher dimensions it is the real coordinate space
with three or more real number coordinates. Thus a point in Euclidean space is a tuple of real numbers, and distances
are defined using the Euclidean distance formula. Mathematicians often denote the n-dimensional Euclidean space
by Rn, or sometimes En if they wish to emphasize its Euclidean nature. Euclidean spaces have finite dimension.
Intuitive overview
One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of
distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means
a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is
rotation about a fixed point in the plane, in which every point in the plane turns about that fixed point through the
same angle. One of the basic tenets of Euclidean geometry is that two figures (that is, subsets) of the plane should be
considered equivalent (congruent) if one can be transformed into the other by some sequence of translations,
rotations and reflections. (See Euclidean group.)
In order to make all of this mathematically precise, one must clearly define the notions of distance, angle, translation,
and rotation. The standard way to do this, as carried out in the remainder of this article, is to define the Euclidean
plane as a two-dimensional real vector space equipped with an inner product. For then:
• the vectors in the vector space correspond to the points of the Euclidean plane,
• the addition operation in the vector space corresponds to translation, and
• the inner product implies notions of angle and distance, which can be used to define rotation.
Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to
arbitrary dimensions. For the most part, the vocabulary, formulas, and calculations are not made any more difficult
by the presence of more dimensions. (However, rotations are more subtle in high dimensions, and visualizing
high-dimensional spaces remains difficult, even for experienced mathematicians.)
A final wrinkle is that Euclidean space is not technically a vector space but rather an affine space, on which a vector
space acts. Intuitively, the distinction just says that there is no canonical choice of where the origin should go in the
space, because it can be translated anywhere. In this article, this technicality is largely ignored.
A point of Rn is an n-tuple x = (x1, x2, ..., xn) where each xi is a real number. The vector space operations on Rn are defined by
x + y = (x1 + y1, x2 + y2, ..., xn + yn) and ax = (ax1, ax2, ..., axn).
Rn is the prototypical example of a real n-dimensional vector space. In fact, every real n-dimensional vector space V
is isomorphic to Rn. This isomorphism is not canonical, however. A choice of isomorphism is equivalent to a choice
of basis for V (by looking at the image of the standard basis for Rn in V). The reason for working with arbitrary
vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner (that is, without
choosing a preferred basis).
Euclidean structure
Euclidean space is more than just a real coordinate space. In order to apply Euclidean geometry one needs to be able
to talk about the distances between points and the angles between lines or vectors. The natural way to obtain these
quantities is by introducing and using the standard inner product (also known as the dot product) on Rn. The inner
product of any two vectors x and y is defined by x · y = x1y1 + x2y2 + ... + xnyn.
The result is always a real number. Furthermore, the inner product of x with itself is always nonnegative. This
product allows us to define the "length" of a vector x as ‖x‖ = √(x · x) = √(x1² + x2² + ... + xn²).
This length function satisfies the required properties of a norm and is called the Euclidean norm on Rn.
The (non-reflex) angle θ (0° ≤ θ ≤ 180°) between x and y is then given by θ = arccos( x · y / (‖x‖ ‖y‖) ).
Finally, the Euclidean norm gives a distance function d(x, y) = ‖x − y‖ between any two points of Rn. This distance function is called the Euclidean metric. It can be viewed as a form of the Pythagorean theorem.
Real coordinate space together with this Euclidean structure is called Euclidean space and often denoted En. (Many
authors refer to Rn itself as Euclidean space, with the Euclidean structure being understood). The Euclidean structure
makes En an inner product space (in fact a Hilbert space), a normed vector space, and a metric space.
Rotations of Euclidean space are then defined as orientation-preserving linear transformations T that preserve angles
and lengths: Tx · Ty = x · y and ‖Tx‖ = ‖x‖ for all vectors x and y in Rn.
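The quantities defined above are easy to compute; the following NumPy sketch (illustrative only, with arbitrarily chosen vectors) evaluates the inner product, norm, angle and distance in R2 and checks that a rotation preserves the dot product:

import numpy as np

x = np.array([3.0, 4.0])
y = np.array([-1.0, 2.0])

dot = x @ y
norm_x = np.sqrt(x @ x)                                  # Euclidean norm of x
theta = np.arccos(dot / (norm_x * np.linalg.norm(y)))    # angle between x and y
dist = np.linalg.norm(x - y)                             # Euclidean metric d(x, y)

a = np.radians(30)
T = np.array([[np.cos(a), -np.sin(a)],   # rotation matrix: orientation-preserving,
              [np.sin(a),  np.cos(a)]])  # preserves dot products and lengths
assert np.isclose((T @ x) @ (T @ y), dot)
print(dot, norm_x, np.degrees(theta), dist)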
Generalizations
In modern mathematics, Euclidean spaces form the prototypes for other, more complicated geometric objects. For
example, a smooth manifold is a Hausdorff topological space that is locally diffeomorphic to Euclidean space.
Diffeomorphism does not respect distance and angle, so these key concepts of Euclidean geometry are lost on a
smooth manifold. However, if one additionally prescribes a smoothly varying inner product on the manifold's
tangent spaces, then the result is what is called a Riemannian manifold. Put differently, a Riemannian manifold is a
space constructed by deforming and patching together Euclidean spaces. Such a space enjoys notions of distance and
angle, but they behave in a curved, non-Euclidean manner. The simplest Riemannian manifold, consisting of Rn with
a constant inner product, is essentially identical to Euclidean n-space itself.
If one alters a Euclidean space so that its inner product becomes negative in one or more directions, then the result is
a pseudo-Euclidean space. Smooth manifolds built from such spaces are called pseudo-Riemannian manifolds.
Perhaps their most famous application is the theory of relativity, where empty spacetime with no matter is
represented by the flat pseudo-Euclidean space called Minkowski space, spacetimes with matter in them form other
pseudo-Riemannian manifolds, and gravity corresponds to the curvature of such a manifold.
Our universe, being subject to relativity, is not Euclidean. This becomes significant in theoretical considerations of
astronomy and cosmology, and also in some practical problems such as global positioning and airplane navigation.
Nonetheless, a Euclidean model of the universe can still be used to solve many other practical problems with
sufficient precision.
References
• Kelley, John L. (1975). General Topology. Springer-Verlag. ISBN 0-387-90125-6.
• Munkres, James (1999). Topology. Prentice-Hall. ISBN 0-13-181629-2.
Orthonormality
In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and both of unit
length. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit
length. An orthonormal set which forms a basis is called an orthonormal basis.
Intuitive overview
The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular
vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be perpendicular if the angle
between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining
the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero.
Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length
of a vector to higher-dimensional spaces. In Cartesian space, the norm of a vector is the square root of the vector
dotted with itself. That is, ‖x‖ = √(x · x).
Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is
easier to deal with vectors of unit length. That is, it often simplifies things to only consider vectors whose norm
equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be
given a special name. Two vectors which are orthogonal and of length 1 are said to be orthonormal.
Simple example
What does a pair of orthonormal vectors in 2-D Euclidean space look like?
Let u = (x1, y1) and v = (x2, y2). Consider the restrictions on x1, x2, y1, y2 required to make u and v form an
orthonormal pair.
• From the orthogonality restriction, u • v = 0.
• From the unit length restriction on u, ||u|| = 1.
• From the unit length restriction on v, ||v|| = 1.
Expanding these terms gives 3 equations:
1. x1x2 + y1y2 = 0
2. x1² + y1² = 1
3. x2² + y2² = 1
Converting from Cartesian to polar coordinates, and considering Equations (2) and (3), immediately gives
the result r1 = r2 = 1. In other words, requiring the vectors be of unit length restricts the vectors to lie on the unit
circle.
After substitution, Equation (1) becomes cos θ1 cos θ2 + sin θ1 sin θ2 = 0. Rearranging gives
tan θ1 = −cot θ2. Using a trigonometric identity to convert the cotangent term gives tan θ1 = tan(θ2 + 90°), so θ1 = θ2 + 90° (up to a multiple of 180°).
It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals
90°.
Definition
Let V be an inner-product space. A set of vectors {u1, u2, ..., un, ...} is called orthonormal if and only if ⟨ui, uj⟩ = δij for all i and j,
where δij is the Kronecker delta and ⟨·, ·⟩ is the inner product defined over V.
Significance
Orthonormal sets are not especially significant on their own. However, they display certain features that make them
fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.
Properties
Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.
• Theorem. If {e1, e2,...,en} is an orthonormal list of vectors, then ‖a1e1 + a2e2 + ... + anen‖² = |a1|² + |a2|² + ... + |an|² for all scalars a1, ..., an.
Existence
• Gram-Schmidt theorem. If {v1, v2,...,vn} is a linearly independent list of vectors in an inner-product space ,
then there exists an orthonormal list {e1, e2,...,en} of vectors in such that span(e1, e2,...,en) = span(v1, v2,...,vn).
Proof of the Gram-Schmidt theorem is constructive, and discussed at length elsewhere. The Gram-Schmidt theorem,
together with the axiom of choice, guarantees that every vector space admits an orthonormal basis. This is possibly
the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in
terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the
diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized
by the Spectral Theorem.
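The constructive step behind the Gram-Schmidt theorem is short enough to write down as code; the following sketch (using NumPy and the standard dot product, with the input rows assumed linearly independent) orthonormalizes a list of vectors:

import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (the rows of a 2-D array)."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for e in basis:
            w = w - (w @ e) * e            # remove the component along each earlier e
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

V = np.array([[3.0, 1.0], [2.0, 2.0]])
E = gram_schmidt(V)
print(np.round(E @ E.T, 10))               # identity matrix: the rows are orthonormal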
Examples
Standard basis
The standard basis for the coordinate space Fn is
{e1, e2, ..., en}, where
e1 = (1, 0, ..., 0)
e2 = (0, 1, ..., 0)
en = (0, 0, ..., 1)
Any two vectors ei, ej where i≠j are orthogonal, and all vectors are clearly of unit length. So {e1, e2,...,en} forms an
orthonormal basis.
Real-valued functions
When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two
functions φ and ψ are orthonormal over the interval [a, b] if ⟨φ, ψ⟩ = ∫[a, b] φ(x) ψ(x) dx = 0 and ‖φ‖ = ‖ψ‖ = 1.
Fourier series
The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions. Taking
C[-π,π] to be the space of all real-valued functions continuous on the interval [-π,π] and taking the inner product to
be ⟨f, g⟩ = ∫[−π, π] f(x) g(x) dx, one obtains an infinite-dimensional inner product space.
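With this inner product, the familiar orthonormal set underlying the Fourier series consists of a constant together with scaled sines and cosines; the scaling factors are chosen so that every norm equals 1:

{ 1/√(2π), (cos x)/√π, (sin x)/√π, (cos 2x)/√π, (sin 2x)/√π, ... }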
References
• Axler, Sheldon (1997), Linear Algebra Done Right (2nd ed.), Berlin, New York: Springer-Verlag,
ISBN 978-0-387-98258-8
Cauchy–Schwarz inequality
In mathematics, the Cauchy–Schwarz inequality (also known as the Bunyakovsky inequality, the Schwarz
inequality, or the Cauchy–Bunyakovsky–Schwarz inequality), is a useful inequality encountered in many
different settings, such as linear algebra, analysis, in probability theory, and other areas. It is considered to be one of
the most important inequalities in all of mathematics.[1] It has a number of generalizations, among them Hölder's
inequality.
The inequality for sums was published by Augustin-Louis Cauchy (1821), while the corresponding inequality for
integrals was first stated by Viktor Bunyakovsky (1859) and rediscovered by Hermann Amandus Schwarz (1888)
(often misspelled "Schwartz").
The inequality states that, for all vectors x and y of an inner product space, |⟨x, y⟩|² ≤ ⟨x, x⟩ · ⟨y, y⟩,
where ⟨·, ·⟩ is the inner product. Equivalently, by taking the square root of both sides, and referring to the norms of
the vectors, the inequality is written as |⟨x, y⟩| ≤ ‖x‖ ‖y‖.
Moreover, the two sides are equal if and only if x and y are linearly dependent (or, in a geometrical sense, they are
parallel or one of the vectors is equal to zero).
If x1, ..., xn and y1, ..., yn are any complex numbers and the inner product is the standard inner
product then the inequality may be restated in a more explicit way as follows:
|x1ȳ1 + x2ȳ2 + ... + xnȳn|² ≤ (|x1|² + |x2|² + ... + |xn|²)(|y1|² + |y2|² + ... + |yn|²).
When viewed in this way the numbers x1, ..., xn, and y1, ..., yn are the components of x and y with respect to an
orthonormal basis of V.
Even more compactly written: |Σi xiȳi|² ≤ Σj |xj|² · Σk |yk|².
Equality holds if and only if x and y are linearly dependent, that is, one is a scalar multiple of the other (which
includes the case when one or both are zero).
The finite-dimensional case of this inequality for real vectors was proved by Cauchy in 1821, and in 1859 Cauchy's
student Bunyakovsky noted that by taking limits one can obtain an integral form of Cauchy's inequality. The general
result for an inner product space was obtained by Schwarz in 1885.
Proof
Let u, v be arbitrary vectors in a vector space V over F with an inner product, where F is the field of real or complex
numbers. We prove the inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖.
This inequality is trivial in the case v = 0, so we may assume from hereon that v is nonzero. In fact, as both sides of
the inequality clearly multiply by the same factor when v is multiplied by a positive scaling factor, it suffices
to consider only the case where v is normalized to have magnitude 1, as we shall assume for convenience in the rest
of this section.
Any vector can be decomposed into a sum of components parallel and perpendicular to v; in particular, u can be
decomposed as u = ⟨u, v⟩v + z, where z is a vector orthogonal to v (this orthogonality can be seen by noting that
⟨z, v⟩ = ⟨u, v⟩ − ⟨u, v⟩⟨v, v⟩ = 0, since ⟨v, v⟩ = 1).
Accordingly, by the Pythagorean theorem (which is to say, by simply expanding out the calculation of ‖u‖²), we
find that ‖u‖² = |⟨u, v⟩|² + ‖z‖² ≥ |⟨u, v⟩|², with equality if and only if z = 0 (i.e., in the case where u is
a multiple of v). This establishes the theorem.
Rn
In Euclidean space Rn with the standard inner product, the Cauchy–Schwarz inequality is
(x1y1 + x2y2 + ... + xnyn)² ≤ (x1² + ... + xn²)(y1² + ... + yn²).
To prove this form of the inequality, consider the following quadratic polynomial in z:
p(z) = (x1z + y1)² + (x2z + y2)² + ... + (xnz + yn)² = (Σi xi²) z² + 2 (Σi xiyi) z + Σi yi².
Since it is nonnegative it has at most one real root in z, whence its discriminant is less than or equal to zero, that is,
(Σi xiyi)² − (Σi xi²)(Σi yi²) ≤ 0, which is the stated inequality.
Alternatively, expanding both sides and collecting together identical terms (albeit with different summation indices) we find
(Σi xi²)(Σi yi²) − (Σi xiyi)² = ½ Σi Σj (xiyj − xjyi)².
Because the right-hand side of this equation is a sum of the squares of real numbers it is greater than or equal to zero,
thus the left-hand side is nonnegative as well and the inequality again follows.
When n = 3 the Cauchy–Schwarz inequality can also be deduced from Lagrange's identity, which takes the form
(x1² + x2² + x3²)(y1² + y2² + y3²) − (x1y1 + x2y2 + x3y3)² = (x1y2 − x2y1)² + (x2y3 − x3y2)² + (x3y1 − x1y3)²,
that is, ‖x‖²‖y‖² − (x · y)² = ‖x × y‖².
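Both the inequality and Lagrange's identity are easy to confirm numerically; the following NumPy sketch (arbitrary sample vectors, for illustration only) does so for n = 3:

import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.5, -1.0])

lhs = (x @ y) ** 2
rhs = (x @ x) * (y @ y)
assert lhs <= rhs                          # Cauchy–Schwarz for the standard inner product

cross_sq = np.linalg.norm(np.cross(x, y)) ** 2
assert np.isclose(rhs - lhs, cross_sq)     # Lagrange's identity in dimension 3
print(lhs, rhs, cross_sq)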
L2
For the inner product space of square-integrable complex-valued functions, one has
| ∫ f(x) ḡ(x) dx |² ≤ ∫ |f(x)|² dx · ∫ |g(x)|² dx.
Use
The triangle inequality for the norm induced by the inner product is often shown as a consequence of the Cauchy–Schwarz inequality, as
follows: given vectors x and y, one expands ‖x + y‖² and estimates the cross term with the inequality, as sketched below.
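For a real inner product space the standard chain of estimates is the following (in the complex case the cross term 2⟨x, y⟩ is replaced by 2 Re⟨x, y⟩):

‖x + y‖² = ⟨x + y, x + y⟩ = ‖x‖² + 2⟨x, y⟩ + ‖y‖²
         ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖²          (Cauchy–Schwarz)
         = (‖x‖ + ‖y‖)²,

so ‖x + y‖ ≤ ‖x‖ + ‖y‖.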
The Cauchy–Schwarz inequality also allows one to define the angle between two nonzero vectors x and y of a real inner product space by cos θ = ⟨x, y⟩ / (‖x‖ ‖y‖): the inequality proves that this definition is sensible, by showing that the right hand side lies in the
interval [−1, 1], and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space.
It can also be used to define an angle in complex inner product spaces, by taking the absolute value of the right hand
side, as is done when extracting a metric from quantum fidelity.
The Cauchy–Schwarz is used to prove that the inner product is a continuous function with respect to the topology
induced by the inner product itself.
The Cauchy–Schwarz inequality is usually used to show Bessel's inequality.
Probability theory
In probability theory, the inequality gives, for random variables X and Y, the bound E(XY)² ≤ E(X²) E(Y²), and, applied to centered variables, Cov(X, Y)² ≤ Var(X) Var(Y); for the multivariate case the analogous statement is phrased in terms of covariance matrices.
Generalizations
Various generalizations of the Cauchy–Schwarz inequality exist in the context of operator theory, e.g. for
operator-convex functions, and operator algebras, where the domain and/or range of φ are replaced by a C*-algebra
or W*-algebra.
This section lists a few of such inequalities from the operator algebra setting, to give a flavor of results of this type.
Since < ƒ, ƒ > ≥ 0, φ(ƒ*ƒ) ≥ 0 for all ƒ in L2(m), where ƒ* is the pointwise conjugate of ƒ. So φ is positive. Conversely
every positive functional φ gives a corresponding inner product < ƒ, g >φ = φ(g*ƒ). In this language, the
Cauchy–Schwarz inequality becomes |φ(g*ƒ)|2 ≤ φ(ƒ*ƒ) φ(g*g).
Since φ is a positive linear map whose range, the complex numbers C, is a commutative C*-algebra, φ is completely
positive. Therefore
This is precisely the Cauchy–Schwarz inequality. If ƒ and g are elements of a C*-algebra, f* and g* denote their
respective adjoints.
We can also deduce from above that every positive linear functional is bounded, corresponding to the fact that the
inner product is jointly continuous.
Positive maps
Positive functionals are special cases of positive maps. A linear map Φ between C*-algebras is said to be a positive
map if a ≥ 0 implies Φ(a) ≥ 0. It is natural to ask whether inequalities of Schwarz-type exist for positive maps. In
this more general setting, usually additional assumptions are needed to obtain such results.
Kadison-Schwarz inequality
The following theorem is named after Richard Kadison.
Theorem. If Φ is a unital positive map, then for every normal element a in its domain, we have Φ(a*a) ≥ Φ(a*)Φ(a)
and Φ(a*a) ≥ Φ(a)Φ(a*).
This extends the fact φ(a*a) · 1 ≥ φ(a)*φ(a) = |φ(a)|2, when φ is a linear functional.
The case when a is self-adjoint, i.e. a = a*, is sometimes known as Kadison's inequality.
2-positive maps
When Φ is 2-positive, a stronger assumption than merely positive, one has something that looks very similar to the
original Cauchy–Schwarz inequality:
Theorem (Modified Schwarz inequality for 2-positive maps) For a 2-positive map Φ between C*-algebras, for all a,
b in its domain,
i) Φ(a)*Φ(a) ≤ ||Φ(1)|| Φ(a*a).
ii) ||Φ(a*b)||2 ≤ ||Φ(a*a)|| · ||Φ(b*b)||.
A simple argument for ii) is as follows. Consider the positive matrix
M = [ a*a  a*b ; b*a  b*b ],
which is positive since it factors as the product of the row (a  b) with its adjoint. By 2-positivity of Φ, the entrywise image
[ Φ(a*a)  Φ(a*b) ; Φ(b*a)  Φ(b*b) ]
is positive. The desired inequality then follows from the properties of positive 2 × 2 (operator) matrices.
Physics
The general formulation of the Heisenberg uncertainty principle is derived using the Cauchy–Schwarz inequality in
the Hilbert space of quantum observables.
References
In-line references
[1] The Cauchy-Schwarz Master Class: an Introduction to the Art of Mathematical Inequalities, Ch. 1 (http:/ / www-stat. wharton. upenn. edu/
~steele/ Publications/ Books/ CSMC/ CSMC_index. html) by J. Michael Steele.
General references
• Bityutskov, V.I. (2001), "Bunyakovskii inequality" (http://eom.springer.de/b/b017770.htm), in Hazewinkel,
Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
• Bouniakowsky, V. (1859), "Sur quelques inegalités concernant les intégrales aux différences finies" (http://
www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/bunyakovsky.pdf) (PDF), Mem. Acad. Sci.
St. Petersbourg 7 (1): 9
• Cauchy, A. (1821), Oeuvres 2, III, p. 373
• Dragomir, S. S. (2003), "A survey on Cauchy-Bunyakovsky-Schwarz type discrete inequalities" (http://jipam.
vu.edu.au/article.php?sid=301), JIPAM. J. Inequal. Pure Appl. Math. 4 (3): 142 pp
• Kadison, R.V. (1952), "A generalized Schwarz inequality and algebraic invariants for operator algebras" (http://
jstor.org/stable/1969657), Ann. Of Math. 56 (3): 494, doi:10.2307/1969657.
• Lohwater, Arthur (1982), Introduction to Inequalities (http://www.mediafire.com/?1mw1tkgozzu), Online
e-book in PDF format
• Paulsen, V. (2003), Completely Bounded Maps and Operator Algebras, Cambridge University Press.
• Schwarz, H. A. (1888), "Über ein Flächen kleinsten Flächeninhalts betreffendes Problem der Variationsrechnung"
(http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/Schwarz.pdf) (PDF), Acta
Societatis scientiarum Fennicae XV: 318
• Solomentsev, E.D. (2001), "Cauchy inequality" (http://eom.springer.de/C/c020880.htm), in Hazewinkel,
Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
External links
• Earliest Uses: The entry on the Cauchy-Schwarz inequality has some historical information. (http://jeff560.
tripod.com/c.html)
Orthonormal basis
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is
a basis for V whose vectors are orthonormal.[1] [2] [3] For example, the standard basis for a Euclidean space Rn is an
orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis
under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for
Rn arises in this fashion.
For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates
on V. Under these coordinates, the inner product becomes dot product of vectors. Thus the presence of an
orthonormal basis reduces the study of a finite-dimensional inner product space to the study of Rn under dot product.
Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary
basis using the Gram–Schmidt process.
In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional)
inner product spaces (or pre-Hilbert spaces).[4] Given a pre-Hilbert space H, an orthonormal basis for H is an
orthonormal set of vectors with the property that every vector in H can be written as an infinite linear combination of
the vectors in the basis. In this case, the orthonormal basis is sometimes called a Hilbert basis for H. Note that an
orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required.
Specifically, the linear span of the basis must be dense in H, but it may not be the entire space.
Examples
• The set of vectors {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)} (the standard basis) forms an orthonormal basis of
R3.
Proof: A straightforward computation shows that the inner products of these vectors equal zero, <e1,
e2> = <e1, e3> = <e2, e3> = 0 and that each of their magnitudes equals one, ||e1|| = ||e2|| = ||e3|| = 1. This
means {e1, e2, e3} is an orthonormal set. All vectors (x, y, z) in R3 can be expressed as a sum of the basis
vectors scaled, (x, y, z) = xe1 + ye2 + ze3,
so {e1,e2,e3} spans R3 and hence must be a basis. It may also be shown that the standard basis rotated
about an axis through the origin or reflected in a plane through the origin forms an orthonormal basis of
R3.
• The set {fn : n ∈ Z} with fn(x) = exp(2πinx) forms an orthonormal basis of the complex space L2([0,1]). This is
fundamental to the study of Fourier series.
• The set {eb : b ∈ B} with eb(c) = 1 if b = c and 0 otherwise forms an orthonormal basis of ℓ 2(B).
• Eigenfunctions of a Sturm–Liouville eigenproblem.
• An orthogonal matrix is a matrix whose column vectors form an orthonormal set.
Basic formula
If B is an orthogonal basis of H, then every element x of H may be written as
x = Σb∈B ( ⟨x, b⟩ / ‖b‖² ) b.
Even if B is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore
well-defined. This sum is also called the Fourier expansion of x, and the formula is usually known as Parseval's
identity. See also Generalized Fourier series.
If B is an orthonormal basis of H, then H is isomorphic to ℓ 2(B) in the following sense: there exists a bijective linear
map Φ : H → ℓ 2(B) such that ⟨Φ(x), Φ(y)⟩ = ⟨x, y⟩ for all x and y in H.
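In the finite-dimensional case this can be made very concrete: the coordinates of a vector relative to an orthonormal basis are obtained by inner products, and the coordinate map preserves inner products. The following NumPy sketch (arbitrary data, illustration only) verifies this for a rotated orthonormal basis of R2:

import numpy as np

a = np.radians(25)
b1 = np.array([np.cos(a), np.sin(a)])    # an orthonormal basis of R^2
b2 = np.array([-np.sin(a), np.cos(a)])   # (the standard basis rotated by 25 degrees)

x = np.array([3.0, -1.0])
y = np.array([0.5, 2.0])

cx = np.array([x @ b1, x @ b2])          # coordinates ⟨x, b⟩ of x in the basis
cy = np.array([y @ b1, y @ b2])

assert np.allclose(cx[0] * b1 + cx[1] * b2, x)   # the Fourier expansion reconstructs x
assert np.isclose(cx @ cy, x @ y)                # the coordinate map preserves ⟨x, y⟩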
Existence
Using Zorn's lemma and the Gram–Schmidt process (or more simply well-ordering and transfinite recursion), one
can show that every Hilbert space admits a basis and thus an orthonormal basis; furthermore, any two orthonormal
bases of the same space have the same cardinality. A Hilbert space is separable if and only if it admits a countable
orthonormal basis.
As a homogeneous space
The set of orthonormal bases for a space is a principal homogeneous space for the orthogonal group O(n), and is
called the Stiefel manifold of orthonormal n-frames.
In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given
an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one
correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a
given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any
orthogonal basis to any other orthogonal basis.
The other Stiefel manifolds Vk(Rn) for k < n of incomplete orthonormal bases (orthonormal k-frames) are still
homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to
any other k-frame by an orthogonal map, but this map is not uniquely determined.
References
[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
[4] Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.
Vector space
A vector space is a mathematical structure formed by a collection
of vectors: objects that may be added together and multiplied
("scaled") by numbers, called scalars in this context. Scalars are
often taken to be real numbers, but one may also consider vector
spaces with scalar multiplication by complex numbers, rational
numbers, or even more general fields instead. The operations of
vector addition and scalar multiplication have to satisfy certain
requirements, called axioms, listed below. An example of a vector
space is that of Euclidean vectors, which are often used to represent
physical quantities such as forces: any two forces (of the same
type) can be added to yield a third, and the multiplication of a force vector by a real factor is another force vector.
[Figure: Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2·w.]
Vector spaces are the subject of linear algebra and are well understood from this point of view, since vector spaces
are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the
space. The theory is further enhanced by introducing on a vector space some additional structure, such as a norm or
inner product. Such spaces arise naturally in mathematical analysis, mainly in the guise of infinite-dimensional
function spaces whose vectors are functions. Analytical problems call for the ability to decide if a sequence of
vectors converges to a given vector. This is accomplished by considering vector spaces with additional data, mostly
spaces endowed with a suitable topology, thus allowing the consideration of proximity and continuity issues. These
topological vector spaces, in particular Banach spaces and Hilbert spaces, have a richer theory.
Historically, the first ideas leading to vector spaces can be traced back as far as 17th century's analytic geometry,
matrices, systems of linear equations, and Euclidean vectors. The modern, more abstract treatment, first formulated
by Giuseppe Peano in the late 19th century, encompasses more general objects than Euclidean space, but much of
the theory can be seen as an extension of classical geometric ideas like lines, planes and their higher-dimensional
analogs.
Today, vector spaces are applied throughout mathematics, science and engineering. They are the appropriate
linear-algebraic notion to deal with systems of linear equations; offer a framework for Fourier expansion, which is
employed in image compression routines; or provide an environment that can be used for solution techniques for
partial differential equations. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with
geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds
by linearization techniques. Vector spaces may be generalized in several directions, leading to more advanced
notions in geometry and abstract algebra.
Definition
A vector space over a field F is a set V together with two binary operators that satisfy 8 axioms listed below.
Elements of V are called vectors. Elements of F are called scalars. In this article, vectors are differentiated from
scalars by boldface.[1] In the two examples above, our set consists of the planar arrows with fixed starting point and
of pairs of real numbers, respectively, while our field is the real numbers. The first operation, vector addition, takes
any two vectors v and w and assigns to them a third vector which is commonly written as v + w, and called the sum
of these two vectors. The second operation takes any scalar a and any vector v and gives another vector a · v. In view
of the first example, where the multiplication is done by rescaling the vector v by a scalar a, the multiplication is
called scalar multiplication of v by a.
To qualify as a vector space, the set V and the operations of addition and multiplication have to adhere to a number
of requirements called axioms.[2] In the list below, let u, v, w be arbitrary vectors in V, and a, b be scalars in F.
• Associativity of addition: u + (v + w) = (u + v) + w.
• Commutativity of addition: v + w = w + v.
• Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
• Inverse elements of addition: for every v ∈ V, there exists an element w ∈ V, called the additive inverse of v, such that v + w = 0. The additive inverse is denoted −v.
• Distributivity of scalar multiplication with respect to vector addition: a(v + w) = av + aw.
• Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv.
• Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v.
• Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
These axioms generalize properties of the vectors introduced in the above examples. Indeed, the result of addition of
two ordered pairs (as in the second example above) does not depend on the order of the summands:
(xv, yv) + (xw, yw) = (xw, yw) + (xv, yv),
Likewise, in the geometric example of vectors as arrows, v + w = w + v, since the parallelogram defining the sum of
the vectors is independent of the order of the vectors. All other axioms can be checked in a similar manner in both
examples. Thus, by disregarding the concrete nature of the particular type of vectors, the definition incorporates
these two and many more examples in one notion of vector space.
Subtraction of two vectors and division by a (non-zero) scalar can be performed via
v − w = v + (−w),
v / a = (1 / a) · v.
The concept introduced above is called a real vector space. The word "real" refers to the fact that vectors can be
multiplied by real numbers, as opposed to, say, complex numbers. When scalar multiplication is defined for complex
numbers, the denomination complex vector space is used. These two cases are the ones used most often in
engineering. The most general definition of a vector space allows scalars to be elements of a fixed field F. Then, the
notion is known as F-vector spaces or vector spaces over F. A field is, essentially, a set of numbers possessing
addition, subtraction, multiplication and division operations.[4] For example, rational numbers also form a field.
In contrast to the intuition stemming from vectors in the plane and higher-dimensional cases, there is, in general
vector spaces, no notion of nearness, angles or distances. To deal with such matters, particular types of vector spaces
are introduced; see below.
There are a number of direct consequences of the axioms: for example, the zero vector 0 and the additive inverse −v of a
vector v are unique. Other properties follow from the distributive law, for example av equals 0 if and only if a equals
0 or v equals 0.
History
Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional
space. Around 1636, Descartes and Fermat founded analytic geometry by equating solutions to an equation of two
variables with points on a plane curve.[7] To achieve geometric solutions without using coordinates, Bolzano
introduced, in 1804, certain operations on points, lines and planes, which are predecessors of vectors.[8] This work
was made use of in the conception of barycentric coordinates by Möbius in 1827.[9] The foundation of the definition
of vectors was Bellavitis' notion of the bipoint, an oriented segment one of whose ends is the origin and the other one
a target. Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the
inception of quaternions and biquaternions by the latter.[10] They are elements in R2, R4, and R8; treating them using
linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.
In 1857, Cayley introduced the matrix notation which allows for a harmonization and simplification of linear maps.
Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract
objects endowed with operations.[11] In his work, the concepts of linear independence and dimension, as well as
scalar products are present. Actually Grassmann's 1844 work exceeds the framework of vector spaces, since his
considering multiplication, too, led him to what are today called algebras. Peano was the first to give the modern
definition of vector spaces and linear maps in 1888.[12]
An important development of vector spaces is due to the construction of function spaces by Lebesgue. This was later
formalized by Banach and Hilbert, around 1920.[13] At that time, algebra and the new field of functional analysis
began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.[14] Vector
spaces, including infinite-dimensional ones, then became a firmly established notion, and many mathematical
branches started making use of this concept.
Examples
Linear equations
Systems of homogeneous linear equations are closely tied to vector spaces.[18] For example, the solutions of
a + 3b + c =0
4a + 2b + 2c = 0
are given by triples with arbitrary a, b = a/2, and c = −5a/2. They form a vector space: sums and scalar multiples of
such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to
condense multiple linear equations as above into one vector equation, namely
Ax = 0,
where A = [[1, 3, 1], [4, 2, 2]] is the matrix containing the coefficients of the given equations, x is the vector (a, b, c), Ax
denotes the matrix product and 0 = (0, 0) is the zero vector. In a similar vein, the solutions of homogeneous linear
differential equations form vector spaces. For example
ƒ''(x) + 2ƒ'(x) + ƒ(x) = 0
yields ƒ(x) = a e−x + bx e−x, where a and b are arbitrary constants, and ex is the natural exponential function.
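The solution space of the first example can also be checked directly; the following NumPy sketch (illustration only) verifies that the triples (a, a/2, −5a/2) are annihilated by the coefficient matrix and remain solutions under addition and scaling:

import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])          # coefficients of the two equations

def solution(a):
    return np.array([a, a / 2, -5 * a / 2])

for a in (1.0, -2.0, 0.3):
    assert np.allclose(A @ solution(a), 0)           # each triple solves Ax = 0

s = solution(1.0) + 4.0 * solution(-2.0)             # sums and scalar multiples
assert np.allclose(A @ s, 0)                         # are again solutions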
Field extensions
Field extensions F / E ("F over E") provide another class of examples of vector spaces, particularly in algebra and
algebraic number theory: a field F containing a smaller field E becomes an E-vector space, by the given
multiplication and addition operations of F.[19] For example the complex numbers are a vector space over R. A
particularly interesting type of field extension in number theory is Q(α), the extension of the rational numbers Q by a
fixed complex number α. Q(α) is the smallest field containing the rationals and a fixed complex number α. Its
dimension as a vector space over Q depends on the choice of α.
Bases allow one to represent vectors by a sequence of scalars: every vector v in V can be written as a finite sum
v = a1vi1 + a2vi2 + ... + anvin,
where the ak are scalars and the vik (k = 1, ..., n) are elements of the basis B. Minimality, on the other hand, is made formal
by requiring B to be linearly independent. A set of vectors is said to be linearly independent if none of its elements
can be expressed as a linear combination of the remaining ones. Equivalently, an equation
a1vi1 + a2vi2 + ... + anvin = 0
can only hold if all scalars a1, ..., an equal zero. Linear independence ensures that the representation of any vector in
terms of basis vectors, the existence of which is guaranteed by the requirement that the basis span V, is unique.[20]
This is referred to as the coordinatized viewpoint of vector spaces, by viewing basis vectors as generalizations of
coordinate vectors x, y, z in R3 and similarly in higher-dimensional cases.
The coordinate vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, ..., 0, 1) form a basis of Fn, called the
standard basis, since any vector (x1, x2, ..., xn) can be uniquely expressed as a linear combination of these vectors:
(x1, x2, ..., xn) = x1(1, 0, ..., 0) + x2(0, 1, 0, ..., 0) + ... + xn(0, ..., 0, 1) = x1e1 + x2e2 + ... + xnen.
Every vector space has a basis. This follows from Zorn's lemma, an equivalent formulation of the axiom of
choice.[21] Given the other axioms of Zermelo-Fraenkel set theory, the existence of bases is equivalent to the axiom
of choice.[22] The ultrafilter lemma, which is weaker than the axiom of choice, implies that all bases of a given
vector space have the same number of elements, or cardinality.[23] It is called the dimension of the vector space,
denoted dim V. If the space is spanned by finitely many vectors, the above statements can be proven without such
fundamental input from set theory.[24]
The dimension of the coordinate space Fn is n, by the basis exhibited above. The dimension of the polynomial ring
F[x] introduced above is countably infinite, a basis is given by 1, x, x2, ... A fortiori, the dimension of more general
function spaces, such as the space of functions on some (bounded or unbounded) interval, is infinite.[25] Under
suitable regularity assumptions on the coefficients involved, the dimension of the solution space of a homogeneous
ordinary differential equation equals the degree of the equation.[26] For example, the solution space for the above
equation is generated by e−x and xe−x. These two functions are linearly independent over R, so the dimension of this
space is two, as is the degree of the equation.
The dimension (or degree) of the field extension Q(α) over Q depends on α. If α satisfies some polynomial equation
qnαn + qn−1αn−1 + ... + q0 = 0, with rational coefficients qn, ..., q0.
("α is algebraic"), the dimension is finite. More precisely, it equals the degree of the minimal polynomial having α as
a root.[27] For example, the complex numbers C are a two-dimensional real vector space, generated by 1 and the
imaginary unit i. The latter satisfies i2 + 1 = 0, an equation of degree two. Thus, C is a two-dimensional R-vector
space (and, as any field, one-dimensional as a vector space over itself, C). If α is not algebraic, the dimension of
Q(α) over Q is infinite. For instance, for α = π there is no such equation, in other words π is transcendental.[28]
Once a basis of V is chosen, linear maps ƒ : V → W are completely determined by specifying the images of the basis
vectors, because any element of V is expressed uniquely as a linear combination of them.[34] If dim V = dim W, a
1-to-1 correspondence between fixed bases of V and W gives rise to a linear map that maps any basis element of V to
the corresponding basis element of W. It is an isomorphism, by its very definition.[35] Therefore, two vector spaces
are isomorphic if their dimensions agree and vice versa. Another way to express this is that any vector space is
completely classified (up to isomorphism) by its dimension, a single number. In particular, any n-dimensional
F-vector space V is isomorphic to Fn. There is, however, no "canonical" or preferred isomorphism; actually an
isomorphism φ: Fn → V is equivalent to the choice of a basis of V, by mapping the standard basis of Fn to V, via φ.
Appending an automorphism, i.e. an isomorphism ψ: V → V yields another isomorphism ψ∘φ: Fn → V, the
composition of ψ and φ, and therefore a different basis of V. The freedom of choosing a convenient basis is
particularly useful in the infinite-dimensional context, see below.
Matrices
Matrices are a useful notion to encode linear maps.[36] They are
written as a rectangular array of scalars as in the image at the right.
Any m-by-n matrix A = (aij) gives rise to a linear map from Fn to Fm, by
the following assignment: x = (x1, x2, ..., xn) ↦ (Σj a1jxj, Σj a2jxj, ..., Σj amjxj),
[Figure: A typical matrix]
or, using the matrix multiplication of the matrix A with the coordinate vector x:
x ↦ Ax.
Moreover, after choosing bases of V and W, any linear map ƒ : V → W is uniquely represented by a matrix via this
assignment.[37]
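Concretely, the assignment x ↦ Ax is additive and homogeneous in x, which a short NumPy check (arbitrary matrix and vectors, for illustration) makes visible:

import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, -3.0, 4.0]])          # a 2-by-3 matrix: a linear map F^3 -> F^2

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.0, 5.0, 3.0])
c = -2.5

assert np.allclose(A @ (x + y), A @ x + A @ y)    # additivity
assert np.allclose(A @ (c * x), c * (A @ x))      # homogeneity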
A scalar λ is an eigenvalue of an endomorphism ƒ : V → V, i.e. there is a nonzero vector v with ƒ(v) = λ · v, precisely when det (ƒ − λ · Id) = 0.
By spelling out the definition of the determinant, the expression on the left hand side can be seen to be a polynomial
function in λ, called the characteristic polynomial of ƒ.[41] If the field F is large enough to contain a zero of this
polynomial (which automatically happens for F algebraically closed, such as F = C) any linear map has at least one
eigenvector. The vector space V may or may not possess an eigenbasis, a basis consisting of eigenvectors. This
phenomenon is governed by the Jordan canonical form of the map.[42] The set of all eigenvectors corresponding to a
particular eigenvalue of ƒ forms a vector space known as the eigenspace corresponding to the eigenvalue (and ƒ) in
question. To achieve the spectral theorem, the corresponding statement in the infinite-dimensional case, the
machinery of functional analysis is needed, see below.
Basic constructions
In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield
vector spaces related to given ones. In addition to the definitions given below, they are also characterized by
universal properties, which determine an object X by specifying the linear maps from X to any other vector space.
The kernel ker(ƒ) of a linear map ƒ: V → W consists of vectors v that are mapped to 0 in W.[47] Both kernel and image
im(ƒ) = {ƒ(v), v ∈ V} are subspaces of V and W, respectively.[48] The existence of kernels and images is part of the
statement that the category of vector spaces (over a fixed field F) is an abelian category, i.e. a corpus of
mathematical objects and structure-preserving maps between them (a category) that behaves much like the category
of abelian groups.[49] Because of this, many statements such as the first isomorphism theorem (also called
rank-nullity theorem in matrix-related terms)
V / ker(ƒ) ≅ im(ƒ).
and the second and third isomorphism theorem can be formulated and proven in a way very similar to the
corresponding statements for groups.
An important example is the kernel of a linear map x ↦ Ax for some fixed matrix A, as above. The kernel of this
map is the subspace of vectors x such that Ax = 0, which is precisely the set of solutions to the system of
homogeneous linear equations belonging to A. This concept also extends to linear differential equations
a0ƒ + a1ƒ′ + a2ƒ′′ + ... + anƒ(n) = 0, where the coefficients ai are functions in x, too. In the corresponding map ƒ ↦ D(ƒ) = a0ƒ + a1ƒ′ + ... + anƒ(n),
the derivatives of the function ƒ appear linearly (as opposed to ƒ''(x)2, for example). Since differentiation is a linear
procedure (i.e., (ƒ + g)' = ƒ' + g ' and (c·ƒ)' = c·ƒ' for a constant c) this assignment is linear, called a linear differential
operator. In particular, the solutions to the differential equation D(ƒ) = 0 form a vector space (over R or C).
Tensor product
The tensor product V ⊗F W, or simply V ⊗ W, of two vector spaces V and W is one of the central notions of
multilinear algebra which deals with extending notions such as linear maps to several variables. A map g: V × W →
X is called bilinear if g is linear in both variables v and w. That is to say, for fixed w the map v ↦ g(v, w) is linear in
the sense above and likewise for fixed v.
The tensor product is a particular vector space that is a universal recipient of bilinear maps g, as follows. It is defined
as the vector space consisting of finite (formal) sums of symbols called tensors
v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn,
subject to the rules
a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w), where a is a scalar,
(v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w, and
v ⊗ (w1 + w2) = v ⊗ w1 + v ⊗ w2.[51]
These rules ensure that the map ƒ from V × W to V ⊗ W that
maps a tuple (v, w) to v ⊗ w is bilinear. The universality states
that given any vector space X and any bilinear map g: V × W → X,
there exists a unique map u, shown in the diagram with a dotted
arrow, whose composition with ƒ equals g: u(v ⊗ w) = g(v, w).[52]
This is called the universal property of the tensor product, an
instance of the method—much used in advanced abstract
algebra—to indirectly define objects by specifying maps from or to this object.
[Figure: Commutative diagram depicting the universal property of the tensor product.]
In R2, this reflects the common notion of the angle between two vectors x and y, by the law of cosines:
x · y = ‖x‖ ‖y‖ cos ∠(x, y).
Because of this, two vectors satisfying ⟨x, y⟩ = 0 are called orthogonal. An important variant of the standard dot
product is used in Minkowski space: R4 endowed with the Lorentz product
⟨x | y⟩ = x1y1 + x2y2 + x3y3 − x4y4.[56]
In contrast to the standard dot product, it is not positive definite: ⟨x | x⟩ also takes negative values, for example for
x = (0, 0, 0, 1). Singling out the fourth coordinate—corresponding to time, as opposed to three
space-dimensions—makes it useful for the mathematical treatment of special relativity.
An infinite sum of vectors ƒ1 + ƒ2 + ƒ3 + ... denotes the limit of the corresponding finite partial sums of the sequence (ƒi)i∈N of elements of V. For example, the ƒi
could be (real or complex) functions belonging to some function space V, in which case the series is a function
series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases,
pointwise convergence and uniform convergence are two prominent examples.
Banach and Hilbert spaces are complete topological spaces whose topologies are given, respectively, by a norm and
an inner product. Their study—a key piece of functional analysis—focusses on infinite-dimensional vector spaces,
since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence.[62] The
image at the right shows the equivalence of the 1-norm and ∞-norm on R2: as the unit "balls" enclose each other, a
sequence converges to zero in one norm if and only if it so does in the other norm. In the infinite-dimensional case,
however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer
than that of vector spaces without additional data.
From a conceptual point of view, all notions related to topological vector spaces should match the topology. For
example, instead of considering all linear maps (also called functionals) V → W, maps between topological vector
spaces are required to be continuous.[63] In particular, the (topological) dual space V∗ consists of continuous
functionals V → R (or C). The fundamental Hahn–Banach theorem is concerned with separating subspaces of
appropriate topological vector spaces by continuous functionals.[64]
Banach spaces
Banach spaces, introduced by Stefan Banach, are complete normed vector spaces.[65] A first example is the vector
space ℓ p consisting of infinite vectors with real entries x = (x1, x2, ...) whose p-norm (1 ≤ p ≤ ∞), given by
‖x‖p = (Σi |xi|p)1/p for p < ∞ and ‖x‖∞ = supi |xi|,
is finite. The topologies on the infinite-dimensional space ℓ p are inequivalent for different p. E.g. the sequence of
vectors xn = (2−n, 2−n, ..., 2−n, 0, 0, ...), i.e. the first 2n components are 2−n, the following ones are 0, converges to the
zero vector for p = ∞, but does not for p = 1:
‖xn‖∞ = 2−n → 0 as n → ∞, but ‖xn‖1 = 2n · 2−n = 1 for all n.
More generally than sequences of real numbers, functions ƒ: Ω → R are endowed with a norm that replaces the above
sum by the Lebesgue integral |ƒ|p = ( ∫Ω |ƒ(x)|p dx )1/p.
The space of integrable functions on a given domain Ω (for example an interval) satisfying |ƒ|p < ∞, and equipped
with this norm are called Lebesgue spaces, denoted Lp(Ω).[66] These spaces are complete.[67] (If one uses the
Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration
theory.[68] ) Concretely this means that for any sequence of Lebesgue-integrable functions ƒ1, ƒ2, ... with |ƒn|p < ∞,
satisfying the condition limk,n→∞ ∫Ω |ƒk(x) − ƒn(x)|p dx = 0,
there exists a function ƒ(x) belonging to the vector space Lp(Ω) such that limn→∞ ∫Ω |ƒ(x) − ƒn(x)|p dx = 0.
Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces.[69]
Hilbert spaces
In quantum mechanics, for example, measurable quantities such as energy or momentum correspond to eigenvalues of a certain (linear) differential operator and the associated
wavefunctions are called eigenstates. The spectral theorem decomposes a linear compact operator acting on
functions in terms of these eigenfunctions and their eigenvalues.[79]
Applications
Vector spaces have manifold applications as they occur in many circumstances, namely wherever functions with
values in some field are involved. They provide a framework to deal with analytical and geometrical problems, or
are used in the Fourier transform. This list is not exhaustive: many more applications exist, for example in
optimization. The minimax theorem of game theory, stating the existence of a unique payoff when all players play
optimally, can be formulated and proven using vector space methods.[85] Representation theory fruitfully transfers
the good understanding of linear algebra and vector spaces to other mathematical domains such as group theory.[86]
Distributions
A distribution (or generalized function) is a linear map assigning a number to each "test" function, typically a
smooth function with compact support, in a continuous way: in the above terminology the space of distributions is
the (continuous) dual of the test function space.[87] The latter space is endowed with a topology that takes into
account not only ƒ itself, but also all its higher derivatives. A standard example is the result of integrating a test
function ƒ over some domain Ω:
I(ƒ) = ∫Ω ƒ(x) dx.
When Ω = {p}, the set consisting of a single point, this reduces to the Dirac distribution, denoted by δ, which
associates to a test function ƒ its value at the point p: δ(ƒ) = ƒ(p). Distributions are a powerful instrument to solve
differential equations. Since all standard analytic notions such as derivatives are linear, they extend naturally to the
space of distributions. Therefore the equation in question can be transferred to a distribution space, which is bigger
than the underlying function space, so that more flexible methods are available for solving the equation. For
example, Green's functions and fundamental solutions are usually distributions rather than proper functions, and can
then be used to find solutions of the equation with prescribed boundary conditions. The found solution can then in
some cases be proven to be actually a true function, and a solution to the original equation (e.g., using the
Lax–Milgram theorem, a consequence of the Riesz representation theorem).[88]
Fourier analysis
Resolving a periodic function into a sum of trigonometric
functions forms a Fourier series, a technique much used in physics
and engineering.[89] [90] The underlying vector space is usually the
Hilbert space L2(0, 2π), for which the functions sin mx and cos mx
(m an integer) form an orthogonal basis.[91] The Fourier expansion
of an L2 function f is
f(x) = a0/2 + ∑m≥1 (am cos(mx) + bm sin(mx)).
The coefficients am and bm are called Fourier coefficients of ƒ, and are calculated by the formulas[92]
am = (1/π) ∫ ƒ(t) cos(mt) dt and bm = (1/π) ∫ ƒ(t) sin(mt) dt, the integrals taken over [0, 2π].
In physical terms the function is represented as a superposition of sine waves and the coefficients give information
about the function's frequency spectrum.[93] A complex-number form of Fourier series is also commonly used.[92]
The concrete formulae above are consequences of a more general mathematical duality called Pontryagin duality.[94]
Applied to the group R, it yields the classical Fourier transform; an application in physics are reciprocal lattices,
where the underlying group is a finite-dimensional real vector space endowed with the additional datum of a lattice
encoding positions of atoms in crystals.[95]
Fourier series are used to solve boundary value problems in partial differential equations.[96] In 1822, Fourier first
used this technique to solve the heat equation.[97] A discrete version of the Fourier series can be used in sampling
applications where the function value is known only at a finite number of equally spaced points. In this case the
Fourier series is finite and its value is equal to the sampled values at all points.[98] The set of coefficients is known as
the discrete Fourier transform (DFT) of the given sample sequence. The DFT is one of the key tools of digital signal
processing, a field whose applications include radar, speech encoding, image compression.[99] The JPEG image
format is an application of the closely-related discrete cosine transform.[100]
The fast Fourier transform is an algorithm for rapidly computing the discrete Fourier transform.[101] It is used not
only for calculating the Fourier coefficients but, using the convolution theorem, also for computing the convolution
of two finite sequences.[102] They in turn are applied in digital filters[103] and as a rapid multiplication algorithm for
polynomials and large integers (Schönhage-Strassen algorithm).[104] [105]
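As an illustration of the discrete Fourier transform mentioned above, the following minimal Python sketch (assuming NumPy is available; the sampled function and the number of sample points are arbitrary choices) recovers the frequency content of a sampled periodic function:

import numpy as np

# Sample f(x) = 3*sin(2x) + cos(5x) at N equally spaced points on [0, 2*pi).
N = 64
x = np.arange(N) * 2 * np.pi / N
samples = 3 * np.sin(2 * x) + np.cos(5 * x)

coeffs = np.fft.fft(samples)   # the DFT of the sample sequence
# Apart from rounding error, nonzero coefficients appear only at frequencies 2 and 5
# (and their mirror images), matching the sine and cosine terms of the sampled function.
for k in range(N // 2):
    if abs(coeffs[k]) > 1e-9:
        print(k, coeffs[k] / N)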
Differential geometry
The tangent plane to a surface at a point is naturally a vector space
whose origin is identified with the point of contact. The tangent
plane is the best linear approximation, or linearization, of a surface
at a point.[106] Even in a three-dimensional Euclidean space, there
is typically no natural way to prescribe a basis of the tangent
plane, and so it is conceived of as an abstract vector space rather
than a real coordinate space. The tangent space is the
generalization to higher-dimensional differentiable manifolds.[107]
The tangent space to the 2-sphere at some point is the infinite plane touching the sphere in this point.
Riemannian manifolds are manifolds whose tangent spaces are endowed with a suitable inner product.[108] Derived therefrom, the
Riemann curvature tensor encodes all curvatures of a manifold in
one object, which finds applications in general relativity, for example, where the Einstein curvature tensor describes
the matter and energy content of space-time.[109] [110] The tangent space of a Lie group can be given naturally the
structure of a Lie algebra and can be used to classify compact Lie groups.[111]
Generalizations
Vector bundles
A vector bundle is a family of vector spaces
parametrized continuously by a topological space
X.[107] More precisely, a vector bundle over X is a
topological space E equipped with a continuous map
π:E→X
such that for every x in X, the fiber π−1(x) is a vector
space. The case dim V = 1 is called a line bundle. For
any vector space V, the projection X × V → X makes the
product X × V into a "trivial" vector bundle. Vector
bundles over X are required to be locally a product of X
and some (fixed) vector space V: for every x in X, there
is a neighborhood U of x such that the restriction of π to
π−1(U) is isomorphic[112] to the trivial bundle U × V →
U. Despite their locally trivial character, vector bundles
may (depending on the shape of the underlying space X) be "twisted" in the large, i.e., the bundle need not be
(globally isomorphic to) the trivial bundle X × V. For example, the Möbius strip can be seen as a line bundle over the
circle S1 (by identifying open intervals with the real line). It is, however, different from the cylinder S1 × R, because
the latter is orientable whereas the former is not.[113]
A Möbius strip. Locally, it looks like U × R.
Properties of certain vector bundles provide information about the underlying topological space. For example, the
tangent bundle consists of the collection of tangent spaces parametrized by the points of a differentiable manifold.
The tangent bundle of the circle S1 is globally isomorphic to S1 × R, since there is a global nonzero vector field on
S1.[114] In contrast, by the hairy ball theorem, there is no (tangent) vector field on the 2-sphere S2 which is
everywhere nonzero.[115] K-theory studies the isomorphism classes of all vector bundles over some topological
space.[116] In addition to deepening topological and geometrical insight, it has purely algebraic consequences, such
as the classification of finite-dimensional real division algebras: R, C, the quaternions H and the octonions.
The cotangent bundle of a differentiable manifold consists, at every point of the manifold, of the dual of the tangent
space, the cotangent space. Sections of that bundle are known as differential forms. They are used to do integration
on manifolds.
Modules
Modules are to rings what vector spaces are to fields. The very same axioms, applied to a ring R instead of a field F
yield modules.[117] The theory of modules, compared to vector spaces, is complicated by the presence of ring
elements that do not have multiplicative inverses. For example, modules need not have bases, as the Z-module (i.e.,
abelian group) Z/2Z shows; those modules that do (including all vector spaces) are known as free modules.
Nevertheless, a vector space can be compactly defined as a module over a ring which is a field with the elements
being called vectors. The algebro-geometric interpretation of commutative rings via their spectrum allows the
development of concepts such as locally free modules, the algebraic counterpart to vector bundles.
The solutions of a system of inhomogeneous linear equations
Ax = b
generalize the homogeneous case b = 0 above.[119] The space of solutions is the affine subspace x + V where x is a
particular solution of the equation, and V is the space of solutions of the homogeneous equation (the nullspace of A).
The set of one-dimensional subspaces of a fixed finite-dimensional vector space V is known as projective space; it
may be used to formalize the idea of parallel lines intersecting at infinity.[120] Grassmannians and flag manifolds
generalize this by parametrizing linear subspaces of fixed dimension k and flags of subspaces, respectively.
Convex analysis
Over an ordered field, notably the real numbers, there are the added
notions of convex analysis, most basically a cone, which allows only
non-negative linear combinations, and a convex set, which allows only
non-negative linear combinations that sum to 1. A convex set can be
seen as combining the axioms of an affine space and of a cone,
which is reflected in the standard space for it, the n-simplex, being the
intersection of an affine hyperplane and an orthant. Such spaces are
particularly used in linear programming.
A polytope maps into the standard simplex via generalized barycentric coordinates, and there is a dual map from a polytope into the orthant (of dimension equal to the
number of faces) given by slack variables, but these are rarely isomorphisms – most polytopes are not a simplex or
an orthant.
Notes
[1] It is also common, especially in physics, to denote vectors with an arrow on top: v⃗.
[2] Roman 2005, ch. 1, p. 27
[3] This axiom is not asserting the associativity of an operation, since there are two operations in question, scalar multiplication: bv; and field
multiplication: ab.
[4] Some authors (such as Brown 1991) restrict attention to the fields R or C, but most of the theory is unchanged over an arbitrary field.
[5] van der Waerden 1993, Ch. 19
[6] Bourbaki 1998, Section II.1.1. Bourbaki calls the group homomorphisms ƒ(a) homotheties.
[7] Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91
[8] Bolzano 1804
[9] Möbius 1827
[10] Hamilton 1853
[11] Grassmann 2000
[12] Peano 1888, ch. IX
[13] Banach 1922
[14] Dorier 1995, Moore 1995
[15] Lang 1987, ch. I.1
[16] e.g. Lang 1993, ch. XII.3., p. 335
[17] Lang 1987, ch. IX.1
[18] Lang 1987, ch. VI.3.
[19] Lang 2002, ch. V.1
[20] Lang 1987, ch. II.2., pp. 47–48
[21] Roman 2005, Theorem 1.9, p. 43
[22] Blass 1984
[23] Halpern 1966, pp. 670–673
[24] Artin 1991, Theorem 3.3.13
[25] The indicator functions of intervals (of which there are infinitely many) are linearly independent, for example.
[26] Braun 1993, Th. 3.4.5, p. 291
[27] Stewart 1975, Proposition 4.3, p. 52
[28] Stewart 1975, Theorem 6.5, p. 74
[29] Roman 2005, ch. 2, p. 45
[30] Lang 1987, ch. IV.4, Corollary, p. 106
[31] Lang 1987, Example IV.2.6
[32] Lang 1987, ch. VI.6
[33] Halmos 1974, p. 28, Ex. 9
[34] Lang 1987, Theorem IV.2.1, p. 95
[35] Roman 2005, Th. 2.5 and 2.6, p. 49
[36] Lang 1987, ch. V.1
[37] Lang 1987, ch. V.3., Corollary, p. 106
[38] Lang 1987, Theorem VII.9.8, p. 198
[39] The nomenclature derives from German "eigen", which means own or proper.
[40] Roman 2005, ch. 8, p. 135–156
[41] Lang 1987, ch. IX.4
[42] Roman 2005, ch. 8, p. 140. See also Jordan–Chevalley decomposition.
[43] Roman 2005, ch. 1, p. 29
[44] Roman 2005, ch. 1, p. 35
[45] Roman 2005, ch. 3, p. 64
[46] Some authors (such as Roman 2005) choose to start with this equivalence relation and derive the concrete shape of V/W from this.
[47] Lang 1987, ch. IV.3.
[48] Roman 2005, ch. 2, p. 48
[49] Mac Lane 1998
[50] Roman 2005, ch. 1, pp. 31–32
[51] Lang 2002, ch. XVI.1
[52] Roman 2005, Th. 14.3. See also Yoneda lemma.
[53] Schaefer & Wolff 1999, pp. 204–205
[54] Bourbaki 2004, ch. 2, p. 48
Footnotes
References
Linear algebra
• Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
• Blass, Andreas (1984), "Existence of bases implies the axiom of choice", Axiomatic set theory (Boulder,
Colorado, 1983), Contemporary Mathematics, 31, Providence, R.I.: American Mathematical Society, pp. 31–33,
MR763890
• Brown, William A. (1991), Matrices and vector spaces, New York: M. Dekker, ISBN 978-0-8247-8419-5
• Lang, Serge (1987), Linear algebra, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96412-6
• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, 211 (Revised third ed.), New York:
Springer-Verlag, ISBN 978-0-387-95385-4, MR1878556
• Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra (http://www.matrixanalysis.com/), SIAM,
ISBN 978-0-89871-454-8
• Roman, Steven (2005), Advanced Linear Algebra, Graduate Texts in Mathematics, 135 (2nd ed.), Berlin, New
York: Springer-Verlag, ISBN 978-0-387-24766-3
• Spindler, Karlheinz (1993), Abstract Algebra with Applications: Volume 1: Vector spaces and groups, CRC,
ISBN 978-0-82479-144-5
• (German) van der Waerden, Bartel Leendert (1993), Algebra (9th ed.), Berlin, New York: Springer-Verlag,
ISBN 978-3-540-56799-8
Analysis
• Bourbaki, Nicolas (1987), Topological vector spaces, Elements of mathematics, Berlin, New York:
Springer-Verlag, ISBN 978-3-540-13627-9
• Bourbaki, Nicolas (2004), Integration I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-41129-1
• Braun, Martin (1993), Differential equations and their applications: an introduction to applied mathematics,
Berlin, New York: Springer-Verlag, ISBN 978-0-387-97894-9
• BSE-3 (2001), "Tangent plane" (http://eom.springer.de/T/t092180.htm), in Hazewinkel, Michiel,
Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
• Choquet, Gustave (1966), Topology, Boston, MA: Academic Press
• Dennery, Philippe; Krzywicki, Andre (1996), Mathematics for Physicists, Courier Dover Publications,
ISBN 978-0-486-69193-0
• Dudley, Richard M. (1989), Real analysis and probability, The Wadsworth & Brooks/Cole Mathematics Series,
Pacific Grove, CA: Wadsworth & Brooks/Cole Advanced Books & Software, ISBN 978-0-534-10050-6
• Dunham, William (2005), The Calculus Gallery, Princeton University Press, ISBN 978-0-691-09565-3
• Evans, Lawrence C. (1998), Partial differential equations, Providence, R.I.: American Mathematical Society,
ISBN 978-0-8218-0772-9
• Folland, Gerald B. (1992), Fourier Analysis and Its Applications, Brooks-Cole, ISBN 978-0-534-17094-3
• Gasquet, Claude; Witomski, Patrick (1999), Fourier Analysis and Applications: Filtering, Numerical
Computation, Wavelets, Texts in Applied Mathematics, New York: Springer-Verlag, ISBN 0-387-98485-2
• Ifeachor, Emmanuel C.; Jervis, Barrie W. (2001), Digital Signal Processing: A Practical Approach (2nd ed.),
Harlow, Essex, England: Prentice-Hall (published 2002), ISBN 0-201-59619-9
• Krantz, Steven G. (1999), A Panorama of Harmonic Analysis, Carus Mathematical Monographs, Washington,
DC: Mathematical Association of America, ISBN 0-88385-031-1
• Kreyszig, Erwin (1988), Advanced Engineering Mathematics (6th ed.), New York: John Wiley & Sons,
ISBN 0-471-85824-2
• Kreyszig, Erwin (1989), Introductory functional analysis with applications, Wiley Classics Library, New York:
John Wiley & Sons, ISBN 978-0-471-50459-7, MR992618
• Lang, Serge (1983), Real analysis, Addison-Wesley, ISBN 978-0-201-14179-5
• Lang, Serge (1993), Real and functional analysis, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94001-4
• Loomis, Lynn H. (1953), An introduction to abstract harmonic analysis, Toronto-New York–London: D. Van
Nostrand Company, Inc., pp. x+190
• Schaefer, Helmut H.; Wolff, M.P. (1999), Topological vector spaces (2nd ed.), Berlin, New York:
Springer-Verlag, ISBN 978-0-387-98726-2
• Treves, François (1967), Topological vector spaces, distributions and kernels, Boston, MA: Academic Press
Historical references
• (French) Banach, Stefan (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations
intégrales (On operations in abstract sets and their application to integral equations)" (http://matwbn.icm.edu.
pl/ksiazki/fm/fm3/fm3120.pdf), Fundamenta Mathematicae 3, ISSN 0016-2736
• (German) Bolzano, Bernard (1804), Betrachtungen über einige Gegenstände der Elementargeometrie
(Considerations of some aspects of elementary geometry) (http://dml.cz/handle/10338.dmlcz/400338)
• (French) Bourbaki, Nicolas (1969), Éléments d'histoire des mathématiques (Elements of history of mathematics),
Paris: Hermann
• Dorier, Jean-Luc (1995), "A general outline of the genesis of vector space theory" (http://www.sciencedirect.
com/science?_ob=ArticleURL&_udi=B6WG9-45NJHDR-C&_user=1634520&_coverDate=12/31/1995&
_rdoc=2&_fmt=high&_orig=browse&
_srch=doc-info(#toc#6817#1995#999779996#308480#FLP#display#Volume)&_cdi=6817&_sort=d&
_docanchor=&_ct=9&_acct=C000054038&_version=1&_urlVersion=0&_userid=1634520&
md5=fd995fe2dd19abde0c081f1e989af006), Historia Mathematica 22 (3): 227–261,
doi:10.1006/hmat.1995.1024, MR1347828
• (French) Fourier, Jean Baptiste Joseph (1822), Théorie analytique de la chaleur (http://books.google.com/
?id=TDQJAAAAIAAJ), Chez Firmin Didot, père et fils
• (German) Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (http:/
/books.google.com/?id=bKgAAAAAMAAJ&pg=PA1&dq=Die+Lineale+Ausdehnungslehre+ein+neuer+
Zweig+der+Mathematik), O. Wigand, reprint: Hermann Grassmann. Translated by Lloyd C. Kannenberg.
(2000), Kannenberg, L.C., ed., Extension Theory, Providence, R.I.: American Mathematical Society,
ISBN 978-0-8218-2031-5
Further references
• Ashcroft, Neil; Mermin, N. David (1976), Solid State Physics, Toronto: Thomson Learning,
ISBN 978-0-03-083993-1
• Atiyah, Michael Francis (1989), K-theory, Advanced Book Classics (2nd ed.), Addison-Wesley,
ISBN 978-0-201-09394-0, MR1043170
• Bourbaki, Nicolas (1998), Elements of Mathematics : Algebra I Chapters 1-3, Berlin, New York:
Springer-Verlag, ISBN 978-3-540-64243-5
• Bourbaki, Nicolas (1989), General Topology. Chapters 1-4, Berlin, New York: Springer-Verlag,
ISBN 978-3-540-64241-1
• Coxeter, Harold Scott MacDonald (1987), Projective Geometry (2nd ed.), Berlin, New York: Springer-Verlag,
ISBN 978-0-387-96532-1
• Eisenberg, Murray; Guy, Robert (1979), "A proof of the hairy ball theorem", The American Mathematical
Monthly (Mathematical Association of America) 86 (7): 572–574, doi:10.2307/2320587, JSTOR 2320587
• Eisenbud, David (1995), Commutative algebra, Graduate Texts in Mathematics, 150, Berlin, New York:
Springer-Verlag, ISBN 978-0-387-94268-1; 978-0-387-94269-8, MR1322960
• Goldrei, Derek (1996), Classic Set Theory: A guided independent study (1st ed.), London: Chapman and Hall,
ISBN 0-412-60610-0
• Griffiths, David J. (1995), Introduction to Quantum Mechanics, Upper Saddle River, NJ: Prentice Hall,
ISBN 0-13-124405-1
• Halmos, Paul R. (1974), Finite-dimensional vector spaces, Berlin, New York: Springer-Verlag,
ISBN 978-0-387-90093-3
• Halpern, James D. (Jun 1966), "Bases in Vector Spaces and the Axiom of Choice", Proceedings of the American
Mathematical Society (American Mathematical Society) 17 (3): 670–673, doi:10.2307/2035388, JSTOR 2035388
• Husemoller, Dale (1994), Fibre Bundles (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94087-8
• Jost, Jürgen (2005), Riemannian Geometry and Geometric Analysis (4th ed.), Berlin, New York: Springer-Verlag,
ISBN 978-3-540-25907-7
• Kreyszig, Erwin (1991), Differential geometry, New York: Dover Publications, pp. xiv+352,
ISBN 978-0-486-66721-8
• Kreyszig, Erwin (1999), Advanced Engineering Mathematics (8th ed.), New York: John Wiley & Sons,
ISBN 0-471-15496-2
• Luenberger, David (1997), Optimization by vector space methods, New York: John Wiley & Sons,
ISBN 978-0-471-18117-0
• Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York:
Springer-Verlag, ISBN 978-0-387-98403-2
• Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973), Gravitation, W. H. Freeman,
ISBN 978-0-7167-0344-0
• Naber, Gregory L. (2003), The geometry of Minkowski spacetime, New York: Dover Publications,
ISBN 978-0-486-43235-9, MR2044239
• (German) Schönhage, A.; Strassen, Volker (1971), "Schnelle Multiplikation großer Zahlen (Fast multiplication of
big numbers)" (http://www.springerlink.com/content/y251407745475773/fulltext.pdf), Computing 7:
281–292, ISSN 0010-485X
• Spivak, Michael (1999), A Comprehensive Introduction to Differential Geometry (Volume Two), Houston, TX:
Publish or Perish
• Stewart, Ian (1975), Galois Theory, Chapman and Hall Mathematics Series, London: Chapman and Hall,
ISBN 0-412-10800-3
• Varadarajan, V. S. (1974), Lie groups, Lie algebras, and their representations, Prentice Hall,
ISBN 978-0-13-535732-3
• Wallace, G.K. (Feb 1992), "The JPEG still picture compression standard", IEEE Transactions on Consumer
Electronics 38 (1): xviii–xxxiv, ISSN 0098-3063
• Weibel, Charles A. (1994), An introduction to homological algebra, Cambridge Studies in Advanced
Mathematics, 38, Cambridge University Press, ISBN 978-0-521-55987-4, OCLC 36131259, MR1269324
External links
• A lecture (http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/
lecture-9-independence-basis-and-dimension/) about fundamental concepts related to vector spaces (given at
MIT)
• A graphical simulator (http://code.google.com/p/esla/) for the concepts of span, linear dependency, base and
dimension
Matrix multiplication
In mathematics, matrix multiplication is a binary operation that takes a pair of matrices, and produces another
matrix.
Matrix product
Non-Technical Details
The result of matrix multiplication is a matrix whose elements are found by multiplying the elements of a row of the
first matrix by the corresponding elements of a column of the second matrix and summing the products. To find the
element in a given row of the result and a given column of the result, multiply the first element of that row of the
first matrix by the first element of that column of the second matrix, add the product of the second elements, then the
product of the third elements, and so on, until the last element of that row of the first matrix has been multiplied by
the last element of that column of the second matrix and added to the running sum.
Non-Technical Example
Technical Details
The matrix product is the most commonly used type of product of matrices. Matrices offer a concise way of
representing linear transformations between vector spaces, and matrix multiplication corresponds to the composition
of linear transformations. The matrix product of two matrices can be defined when their entries belong to the same
ring, and hence can be added and multiplied, and, additionally, the number of the columns of the first matrix matches
the number of the rows of the second matrix. The product of an m×p matrix A with a p×n matrix B is an m×n matrix
denoted AB whose entries are
(AB)ij = ai1b1j + ai2b2j + ... + aipbpj = ∑k=1..p aik bkj,
where 1 ≤ i ≤ m is the row index and 1 ≤ j ≤ n is the column index. This definition can be restated by postulating that
the matrix product is left and right distributive and the matrix units are multiplied according to the following rule:
Eik Elj = δkl Eij,
where the first factor Eik is the m×p matrix with 1 at the intersection of the ith row and the kth column and zeros
elsewhere, the second factor Elj is the p×n matrix with 1 at the intersection of the lth row and the jth column and
zeros elsewhere, and δkl is 1 if k = l and 0 otherwise.
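The defining sum translates directly into code. A minimal Python sketch (the function name matmul is an illustrative choice):

def matmul(A, B):
    """Product of an m x p matrix A and a p x n matrix B, both given as lists of rows."""
    m, p, n = len(A), len(B), len(B[0])
    assert all(len(row) == p for row in A), "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

# Example: a 2x3 matrix times a 3x2 matrix gives a 2x2 matrix.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))   # [[58, 64], [139, 154]]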
Application Example
A company sells cement, chalk and plaster in bags weighing 25, 10, and 5 kg respectively. Four construction firms
Arcen, Build, Construct and Demolish, buy these products from this company. The number of bags the clients buy in
a specific year may be arranged in a 4×3-matrix A, with columns for the products and rows representing the clients:
We see for instance that (A)32 = 12, indicating that client Construct has bought 12 bags of chalk that year.
A bag of cement costs $12, a bag of chalk $9 and a bag of plaster $8. The 3×2-matrix B shows prices and weights of
the three products:
To find the total amount firm Arcen has spent that year, we calculate:
,
in which we recognize the first row of the matrix A (Arcen) and the first column of the matrix B (prices).
The total weight of the product bought by Arcen is calculated in a similar manner:
,
in which we now recognize the first row of the matrix A (Arcen) and the second column of the matrix B (weight).
We can make similar calculations for the other clients. Together they form the matrix AB as the matrix product of the
matrices A and B:
Properties
In general, matrix multiplication is not commutative. More precisely, AB and BA need not be simultaneously
defined; if they are, they may have different dimensions; and even if A and B are square matrices of the same order n,
so that AB and BA are also square matrices of order n, if n is greater than or equal to 2, AB need not be equal to BA. For
example,
whereas
However, if A and B are both diagonal square matrices and of the same order then AB = BA.
Matrix multiplication is associative:
(AB)C = A(BC),
and it is compatible with scalar multiplication:
c(AB) = (cA)B = A(cB),
where c is a scalar (for the second identity to hold, c must belong to the center of the ground ring — this condition is
automatically satisfied if the ground ring is commutative, in particular, for matrices over a field).
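These properties are easy to verify numerically. A minimal NumPy sketch (the particular matrices are illustrative and are not the ones from the omitted example above):

import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])
C = np.array([[2, 3], [4, 5]])

print(A @ B)   # [[1 0] [0 0]]
print(B @ A)   # [[0 0] [0 1]]  -- AB and BA differ
print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True: associativity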
If A and B are both n×n matrices with entries in a field then the determinant of their product is the product of their
determinants:
det(AB) = det(A) det(B).
Thus the matrix of the composition (or the product) of linear transformations is the product of their matrices with
respect to the given bases.
Illustration
The figure to the right illustrates the product of two matrices A and B,
showing how each intersection in the product matrix corresponds to a
row of A and a column of B. The size of the output matrix is always the
largest possible, i.e. for each row of A and for each column of B there
are always corresponding intersections in the product matrix. The
product matrix AB consists of all combinations of dot products of rows
of A and columns of B.
The first coordinate in matrix notation denotes the row and the second the column; this order is used both in indexing
and in giving the dimensions. The element at the intersection of row i and column j of the product matrix is
the dot product (or scalar product) of row i of the first matrix and column j of the second matrix. This explains
why the width and the height of the matrices being multiplied must match: otherwise the dot product is not defined.
Alternative descriptions
The Euclidean inner product and outer product are the simplest special cases of the matrix product. The inner
product of two column vectors x and y is xTy, where T denotes the matrix transpose. More
explicitly,
xTy = x1y1 + x2y2 + ... + xnyn.
Matrix multiplication can be viewed in terms of these two operations by considering the effect of the matrix product
on block matrices.
Suppose that the first factor, A, is decomposed into its rows, which are row vectors and the second factor, B, is
decomposed into its columns, which are column vectors:
where
This is an outer product where the product inside is replaced with the inner product. In general, block matrix
multiplication works exactly like ordinary matrix multiplication, but the real product inside is replaced with the
matrix product.
An alternative method results when the decomposition is done the other way around, i.e. the first factor, A, is
decomposed into column vectors and the second factor, B, is decomposed into row vectors:
This method emphasizes the effect of individual column/row pairs on the result, which is a useful point of view with
e.g. covariance matrices, where each such pair corresponds to the effect of a single sample point. An example for a
small matrix:
One more description of the matrix product may be obtained in the case when the second factor, B, is decomposed
into the columns and the first factor, A, is viewed as a whole. Then A acts on the columns of B. If x is a vector and A
is decomposed into columns, then
.
Because of the nature of matrix operations and the layout of matrices in memory, substantial performance gains are
typically possible through parallelisation and vectorization. As a result, algorithms with lower asymptotic time
complexity on paper may incur hidden costs on real machines.
Scalar multiplication
The scalar multiplication of a matrix A = (aij) and a scalar r gives a product r A of the same size as A. The entries of
r A are given by
(r A)ij = r aij.
For example, if
then
If we are concerned with matrices over a more general ring, then the above multiplication is the left multiplication of
the matrix A with scalar r while the right multiplication is defined to be
(A r)ij = aij r.
When the underlying ring is commutative, for example, the real or complex number field, the two multiplications are
the same. However, if the ring is not commutative, such as the quaternions, they may be different. For example
Hadamard product
For two matrices of the same dimensions, we have the Hadamard product (named after French mathematician
Jacques Hadamard), also known as the entrywise product and the Schur product.[6]
Formally, for two matrices A and B of the same dimensions:
(A ∘ B)ij = aij bij.
Note that the Hadamard product is a principal submatrix of the Kronecker product.
The Hadamard product is commutative, associative and distributive over addition.
The Hadamard product of two positive-(semi)definite matrices is positive-(semi)definite,[7] and for
positive-semidefinite and
.
For vectors x and y, and corresponding diagonal matrices Dx and Dy with these vectors as their leading
diagonals, the following identity holds:[8]
x∗(A ∘ B)y = tr(Dx∗ A Dy BT),
where x∗ denotes the conjugate transpose of x. In particular, using vectors of ones, this shows that the sum of all
elements in the Hadamard product is the trace of ABT.
A related result for square A and B is that the row-sums of their Hadamard product are the diagonal elements of
ABT.[9]
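A minimal NumPy sketch of the entrywise product and of the two trace identities just stated (the numerical values are arbitrary illustrations):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

H = A * B                                 # Hadamard (entrywise) product
print(H)                                  # [[ 5. 12.] [21. 32.]]
print(H.sum(), np.trace(A @ B.T))         # both 70.0: the sum of all entries equals tr(A B^T)
print(H.sum(axis=1), np.diag(A @ B.T))    # the row sums equal the diagonal of A B^T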
Powers of matrices
Square matrices can be multiplied by themselves repeatedly in the same way that ordinary numbers can. This
repeated multiplication can be described as a power of the matrix. Using the ordinary notion of matrix multiplication,
the following identities hold for an n-by-n matrix A, a positive integer k, and a scalar c:
The naive computation of matrix powers is to multiply k times the matrix A to the result, starting with the identity
matrix just like the scalar case. This can be improved using the binary representation of k, a method commonly used
for scalars. An even better method is to use the eigenvalue decomposition of A.
Calculating high powers of matrices can be very time-consuming, but the complexity of the calculation can be
dramatically decreased by using the Cayley-Hamilton theorem, which takes advantage of an identity found using the
matrices' characteristic polynomial and gives a much more effective equation for Ak, which instead raises a scalar to
the required power, rather than a matrix.
When raising an arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to diagonalize the
matrix first.
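A minimal Python sketch of the binary (exponentiation-by-squaring) method mentioned above (the function name matrix_power is an illustrative choice; NumPy is assumed):

import numpy as np

def matrix_power(A, k):
    """A**k for a square matrix A and a non-negative integer k, using the binary representation of k."""
    result = np.eye(A.shape[0], dtype=A.dtype)
    base = A.copy()
    while k > 0:
        if k & 1:                # the current bit of k is set
            result = result @ base
        base = base @ base       # square for the next bit
        k >>= 1
    return result

A = np.array([[1, 1], [1, 0]])   # the Fibonacci matrix
print(matrix_power(A, 10))       # [[89 55] [55 34]]

Only about log2(k) matrix multiplications are needed, instead of k − 1 for the naive method.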
Notes
[1] Mary L. Boas, "Mathematical Methods in the Physical Sciences", Third Edition, Wiley, 2006, page 115
[2] Press 2007, p. 108.
[3] Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication" (http:/ / www. siam. org/ pdf/ news/ 174. pdf), SIAM News
38 (9),
[4] Robinson, 2005.
[5] Eve, 2009.
[6] (Horn & Johnson 1985, Ch. 5).
[7] (Styan 1973)
[8] (Horn & Johnson 1991)
[9] (Styan 1973)
References
• Henry Cohn, Robert Kleinberg, Balazs Szegedy, and Chris Umans. Group-theoretic Algorithms for Matrix
Multiplication. arXiv:math.GR/0511460. Proceedings of the 46th Annual Symposium on Foundations of
Computer Science, 23–25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379–388.
• Henry Cohn, Chris Umans. A Group-theoretic Approach to Fast Matrix Multiplication. arXiv:math.GR/0307321.
Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 11–14 October 2003,
Cambridge, MA, IEEE Computer Society, pp. 438–449.
• Coppersmith, D., Winograd S., Matrix multiplication via arithmetic progressions, J. Symbolic Comput. 9,
p. 251-280, 1990.
• Eve, James. On O(n^2 log n) algorithms for n x n matrix operations. Technical Report No. 1169, School of
Computing Science, Newcastle University, August 2009. PDF (http://www.cs.ncl.ac.uk/publications/trs/
papers/1169.pdf)
• Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press,
ISBN 978-0-521-38632-6
• Horn, Roger A.; Johnson, Charles R. (1991), Topics in Matrix Analysis, Cambridge University Press,
ISBN 978-0-521-46713-1
• Knuth, D.E., The Art of Computer Programming Volume 2: Seminumerical Algorithms. Addison-Wesley
Professional; 3 edition (November 14, 1997). ISBN 978-0201896848. pp. 501.
• Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (2007), Numerical Recipes: The
Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8.
• Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on
Theory of computing. ACM Press, 2002. doi:10.1145/509907.509932.
• Robinson, Sara, Toward an Optimal Algorithm for Matrix Multiplication, SIAM News 38(9), November 2005.
PDF (http://www.siam.org/pdf/news/174.pdf)
• Strassen, Volker, Gaussian Elimination is not Optimal, Numer. Math. 13, p. 354-356, 1969.
• Styan, George P. H. (1973), "Hadamard Products and Multivariate Statistical Analysis", Linear Algebra and its
Applications 6: 217–240, doi:10.1016/0024-3795(73)90023-2
External links
• The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication
(http://arxiv.org/abs/cs/0703145)
• WIMS Online Matrix Multiplier (http://wims.unice.fr/~wims/en_tool~linear~matmult.html)
• Matrix Multiplication Problems (http://ceee.rice.edu/Books/LA/mult/mult4.html#TOP)
• Block Matrix Multiplication Problems (http://www.gordon-taft.net/MatrixMultiplication.html)
• Matrix Multiplication in C (http://www.edcc.edu/faculty/paul.bladek/Cmpsc142/matmult.htm)
• Wijesuriya, Viraj B., Daniweb: Sample Code for Matrix Multiplication using MPI Parallel Programming
Approach (http://www.daniweb.com/forums/post1428830.html#post1428830), retrieved 2010-12-29
• Linear algebra: matrix operations (http://www.umat.feec.vutbr.cz/~novakm/algebra_matic/en) Multiply or
add matrices of a type and with coefficients you choose and see how the result was computed.
• Visual Matrix Multiplication (http://www.wefoundland.com/project/Visual_Matrix_Multiplication) An
interactive app for learning matrix multiplication.
• Online Matrix Calculator (http://www.numberempire.com/matrixbinarycalculator.php)
• Matrix Multiplication in Java – Dr. P. Viry (http://www.ateji.com/px/whitepapers/Ateji PX MatMult
Whitepaper v1.2.pdf?phpMyAdmin=95wsvAC1wsqrAq3j,M3duZU3UJ7)
Determinant
In linear algebra, the determinant is a value associated with a square matrix. The determinant provides important
information when the matrix is that of the coefficients of a system of linear equations, or when it corresponds to a
linear transformation of a vector space: in the former case the system has a unique solution if and only if the
determinant is nonzero, in the latter case that same condition means that the transformation has an inverse operation.
An intuitive interpretation can be given to the value of the determinant of a square matrix with real entries: the
absolute value of the determinant gives the scale factor by which area or volume is multiplied under the associated
linear transformation, while its sign indicates whether the transformation preserves orientation. Thus a 2 × 2 matrix
with determinant −2, when applied to a region of the plane with finite area, will transform that region into one with
twice the area, while reversing its orientation.
Determinants occur throughout mathematics. They appear in calculus as the Jacobian determinant in the substitution
rule for integrals of functions of several variables. They are used to define the characteristic polynomial of a matrix
that is an essential tool in eigenvalue problems in linear algebra. In some cases they are used just as a compact
notation for expressions that would otherwise be unwieldy to write down.
The determinant of a matrix A is denoted det(A), or without parentheses: det A. An alternative notation, used for
compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a
matrix by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses. For instance
.
Although most often used for matrices whose entries are real or complex numbers, the definition of the determinant
only involves addition, subtraction and multiplication, and so it can be defined for square matrices with entries
taken from any commutative ring. Thus for instance the determinant of a matrix with integer coefficients will be an
integer, and the matrix has an inverse with integer coefficients if and only if this determinant is 1 or −1 (these being
the only invertible elements of the integers). For square matrices with entries in a non-commutative ring, for instance
the quaternions, there is no unique definition for the determinant, and no definition having all the usual properties of
determinants exists.
Definition
The determinant of a square matrix A, one with the same number of rows and columns, is a value that can be
obtained by multiplying certain sets of entries of A, and adding and subtracting such products, according to a given
rule: it is a polynomial expression of the matrix entries. This expression grows rapidly with the size of the matrix (an
n-by-n matrix contributes n! terms), so it will first be given explicitly for the case of 2-by-2 matrices and 3-by-3
matrices, followed by the rule for arbitrary size matrices, which subsumes these two cases.
Assume A is a square matrix with n rows and n columns, so that it can be written as
The entries can be numbers or expressions (as happens when the determinant is used to define a characteristic
polynomial); the definition of the determinant depends only on the fact that they can be added and multiplied
together in a commutative manner.
The determinant of A is denoted as det(A), or it can be denoted directly in terms of the matrix entries by writing
enclosing bars instead of brackets:
2-by-2 matrices
The determinant of a 2×2 matrix with rows (a, b) and (c, d) is defined by
det A = ad − bc.
If the matrix entries are real numbers, the matrix A can be used to represent two linear mappings: one that maps the
standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the
basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram
defined by the rows of the above matrix is the one with vertices at (0,0), (a,b), (a + c, b + d), and (c,d), as shown in
the accompanying diagram. The absolute value of ad − bc is the area of the parallelogram, and thus represents the
scale factor by which areas are transformed by A. (The parallelogram formed by the columns of A is in general a
different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be
the same.)
The absolute value of the determinant together with the sign becomes the oriented area of the parallelogram. The
oriented area is the same as the usual area, except that it is negative when the angle from the first to the second
vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for
the identity matrix).
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by A. When the
determinant is equal to one, the linear mapping defined by the matrix is equi-areal and
orientation-preserving.
3-by-3 matrices
The determinant of a 3×3 matrix with rows (a, b, c), (d, e, f) and (g, h, i) is
aei + bfg + cdh − ceg − bdi − afh.
The volume of this parallelepiped is the absolute value of the determinant of the
matrix formed by the rows r1, r2, and r3.
n-by-n matrices
The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.
The Leibniz formula for the determinant of an n-by-n matrix A is
det(A) = ∑σ∈Sn sgn(σ) ∏i=1..n ai,σi.
Here the sum is computed over all permutations σ of the set {1, 2, ..., n}. A permutation is a function that reorders
this set of integers. The position of the element i after the reordering σ is denoted σi. For example, for n = 3, the
original sequence 1, 2, 3 might be reordered to S = [2, 3, 1], with S1 = 2, S2 = 3, S3 = 1. The set of all such
permutations (also known as the symmetric group on n elements) is denoted Sn. For each permutation σ, sgn(σ)
denotes the signature of σ; it is +1 for even σ and −1 for odd σ. Evenness or oddness can be defined as follows: the
permutation is even (odd) if the new sequence can be obtained by an even number (odd, respectively) of switches of
numbers. For example, starting from [1, 2, 3] and switching the positions of 2 and 3 yields [1, 3, 2], switching once
more yields [3, 1, 2], and finally, after a total of three (an odd number) switches, [3, 2, 1] results. Therefore [3, 2, 1]
is an odd permutation. Similarly, the permutation [2, 3, 1] is even ([1, 2, 3] → [2, 1, 3] → [2, 3, 1], with an even
number of switches). It is explained in the article on parity of a permutation why a permutation cannot be
simultaneously even and odd.
In any of the summands, the term
∏i=1..n ai,σi
is notation for the product of the entries at positions (i, σi), where i ranges from 1 to n:
a1,σ1 · a2,σ2 · ... · an,σn.
This agrees with the rule of Sarrus given in the previous section.
The formal extension to arbitrary dimensions was made by Tullio Levi-Civita, see (Levi-Civita symbol) using a
pseudo-tensor symbol.
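The Leibniz formula can be transcribed directly into code; since it sums n! terms it is only practical for small matrices. A minimal Python sketch (the function names are illustrative choices):

from itertools import permutations

def sign(perm):
    """Signature of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s           # every inversion flips the sign
    return s

def det_leibniz(A):
    """Determinant of a square matrix (given as a list of rows) via the Leibniz formula."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]   # entry at position (i, sigma(i))
        total += term
    return total

print(det_leibniz([[1, 2], [3, 4]]))                     # -2
print(det_leibniz([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))    # 24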
Levi-Civita symbol
The determinant for an n-by-n matrix can be expressed in terms of the totally antisymmetric Levi-Civita symbol as
follows:
det(A) = ∑i1,...,in εi1...in a1,i1 ··· an,in.
1. The determinant of a triangular matrix (one whose entries below, or above, the main diagonal all vanish) is
the product of the diagonal entries of A. For example, the determinant of the identity matrix
is one.
2. If B results from A by interchanging two rows or two columns, then det(B) = −det(A). The determinant is called
alternating (as a function of the rows or columns of the matrix).
3. If B results from A by multiplying one row or column with a number c, then det(B) = c · det(A). As a
consequence, multiplying the whole matrix by c yields det(c·A) = cn · det(A).
4. If B results from A by adding a multiple of one row to another row, or a multiple of one column to another
column, then det(B) = det(A).
These properties can be shown by inspecting the definition via the Leibniz formula. For example, the first property
holds because, for triangular matrices, the product ∏i ai,σ(i) is zero for any permutation σ different from the identity
permutation (the one not changing the order of the numbers 1, 2, ..., n), since then at least one ai,σ(i) is zero.
These four properties can be used to compute determinants of any matrix, using Gaussian elimination. This is an
algorithm that transforms any given matrix to a triangular matrix, only by using the operations in the last three items.
Since the effect of these operations on the determinant can be traced, the determinant of the original matrix is known,
once Gaussian elimination is performed. For example, the determinant of A can be computed using the following
matrices:
Here, B is obtained from A by adding −1/2 × the first row to the second, so that det(A) = det(B). C is obtained from B
by adding the first to the third row, so that det(C) = det(B). Finally, D is obtained from C by exchanging the second
and third row, so that det(D) = −det(C). The determinant of the (upper) triangular matrix D is the product of its
entries on the main diagonal: (−2) · 2 · 4.5 = −18. Therefore det(A) = +18.
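A minimal Python sketch of this elimination procedure (the example matrix is not reproduced in this copy; the entries below are chosen to be consistent with the row operations and the result det(A) = 18 described above):

def det_gauss(A):
    """Determinant via Gaussian elimination with partial pivoting.
    Row swaps flip the sign; adding a multiple of one row to another leaves the determinant unchanged."""
    A = [row[:] for row in A]           # work on a copy
    n, sign = len(A), 1
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if A[pivot][col] == 0:
            return 0                    # singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    prod = sign
    for i in range(n):
        prod *= A[i][i]                 # product of the diagonal of the triangular matrix
    return prod

print(det_gauss([[-2, 2, -3],
                 [-1, 1,  3],
                 [ 2, 0, -1]]))         # 18.0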
Further properties
In addition to the above-mentioned properties characterizing the determinant, there are a number of further basic
properties. For example, a matrix and its transpose have the same determinant:
det(AT) = det(A).
These properties are chiefly important from a theoretical point of view. For instance, the relation of the determinant
and eigenvalues is not typically used to numerically compute either one, especially for large matrices, because of
efficiency and numerical stability considerations.
In this section all matrices are assumed to be n-by-n matrices.
The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) · det(B).
Thus the determinant is a multiplicative map. This property is a consequence of the characterization given above of
the determinant as the unique n-linear alternating function of the columns with value 1 on the identity matrix, since
the function Mn(K) → K that maps M ↦ det(AM) can easily be seen to be n-linear and alternating in the columns of
M, and takes the value det(A) at the identity. The formula can be generalized to (square) products of rectangular
matrices, giving the Cauchy-Binet formula, which also provides an independent proof of the multiplicative property.
The determinant det(A) of a matrix A is non-zero if and only if A is invertible or, yet another equivalent statement, if
its rank equals the size of the matrix. If so, the determinant of the inverse matrix is given by
det(A−1) = 1 / det(A).
In particular, products and inverses of matrices with determinant one still have this property. Thus, the set of such
matrices (of fixed size n) form a group known as the special linear group. More generally, the word "special"
indicates the subgroup of another matrix group of matrices of determinant one. Examples include the special
orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group.
The eigenvalues of A are the roots of the characteristic polynomial det(A − λ·I),
where I is the identity matrix of the same dimension as A. Conversely, det(A) is the product of the eigenvalues of A,
counted with their algebraic multiplicities. The product of all non-zero eigenvalues is referred to as
pseudo-determinant.
A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is
equivalent to the determinants of the upper left k×k submatrices being positive, for all k between 1 and n.
The identity det(exp(A)) = exp(tr(A)) holds, where exp(A) denotes the matrix exponential of A, because every eigenvalue λ of A corresponds to the eigenvalue
exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying
exp(L) = A,
the determinant of A is given by det(A) = exp(tr(L)).
Calculating det(A) by means of that formula is referred to as expanding the determinant along a row or column. For
the example 3-by-3 matrix , Laplace expansion along the second column (j = 2, the sum
Cramer's rule
For a matrix equation
Ax = b,
the solution is given componentwise by
xi = det(Ai) / det(A),
where Ai is the matrix formed by replacing the i-th column of A by the column vector b. This fact is implied by the
following identity
It has recently been shown that Cramer's rule can be implemented in O(n3) time, which is comparable to more
common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.
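A minimal Python sketch of Cramer's rule (using NumPy's determinant routine for brevity; this is an illustration, not an efficient general-purpose solver):

import numpy as np

def cramer_solve(A, b):
    """Solve A x = b via Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                    # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

# Example: 2x + y = 5, x + 3y = 10 has the solution x = 1, y = 3.
print(cramer_solve([[2, 1], [1, 3]], [5, 10]))   # [1. 3.]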
Block matrices
Suppose A, B, C, and D are n×n-, n×m-, m×n-, and m×m-matrices, respectively. Then
This can be seen from the Leibniz formula or by induction on n. When A is invertible, employing the following
identity
leads to
When D is invertible, a similar identity with det(D) factored out can be derived analogously,[2] that is,
When the blocks are square matrices of the same order further formulas hold. For example, if C and D commute (i.e.,
CD = DC), then the following formula comparable to the determinant of a 2-by-2 matrix holds:[3]
Derivative
By definition, e.g., using the Leibniz formula, the determinant of real (or analogously for complex) square matrices
is a polynomial function from Rn×n to R. As such it is everywhere differentiable. Its derivative can be expressed
using Jacobi's formula:
d det(A) = tr(adj(A) dA),
where adj(A) denotes the adjugate of A.
This identity is used in describing the tangent space of certain matrix Lie groups.
If the matrix A is written as A = [a b c], where a, b, c are vectors, then the gradient over one of the three
vectors may be written as the cross product of the other two:
∇a det(A) = b × c, ∇b det(A) = c × a, ∇c det(A) = a × b.
Determinant of an endomorphism
The above identities concerning the determinant of products and inverses of matrices imply that similar matrices
have the same determinant: two matrices A and B are similar, if there exists an invertible matrix X such that A =
X−1BX. Indeed, repeatedly applying the above identities yields
det(A) = det(X)−1 det(B) det(X) = det(B).
The determinant is therefore also called a similarity invariant. The determinant of a linear transformation T : V → V
for some finite dimensional vector space V is defined to be the determinant of the matrix describing it, with respect
to an arbitrary choice of basis in V. By similarity invariance, this determinant is independent of the choice of the
basis for V and therefore only depends on the endomorphism T.
Exterior algebra
The determinant can also be characterized as the unique function
D : Mn(K) → K
from the set of all n-by-n matrices with entries in a field K to this field satisfying the following three properties: first,
D is an n-linear function: considering all but one column of A fixed, the determinant is linear in the remaining
column, that is
D(v1, ..., a·vi + b·w, ..., vn) = a·D(v1, ..., vi, ..., vn) + b·D(v1, ..., w, ..., vn)
for any column vectors v1, ..., vn, and w and any scalars (elements of K) a and b. Second, D is an alternating function:
for any matrix A with two identical columns D(A) = 0. Finally, D(In) = 1. Here In is the identity matrix.
This fact also implies that every other n-linear alternating function F: Mn(K) → K satisfies
F(M) = F(I) · det(M).
The last part in fact follows from the preceding statement: one easily sees that if F is nonzero it satisfies F(I) ≠ 0,
and the function that associates F(M)/F(I) to M satisfies all conditions of the theorem. The importance of stating this part
is mainly that it remains valid[4] if K is any commutative ring rather than a field, in which case the given argument
does not apply.
The determinant of a linear transformation A : V → V of an n-dimensional vector space V can be formulated in a
coordinate-free manner by considering the n-th exterior power ΛnV of V. A induces a linear map
ΛnA : ΛnV → ΛnV, v1 ∧ v2 ∧ ... ∧ vn ↦ Av1 ∧ Av2 ∧ ... ∧ Avn.
As ΛnV is one-dimensional, the map ΛnA is given by multiplying with some scalar. This scalar coincides with the
determinant of A, that is to say
(ΛnA)(v1 ∧ ... ∧ vn) = det(A) · v1 ∧ ... ∧ vn.
This definition agrees with the more concrete coordinate-dependent definition, which follows from the
characterization of the determinant given above. For example, switching two columns changes the parity of the
determinant; likewise, permuting the vectors in the exterior product v1 ∧ v2 ∧ ... ∧ vn to v2 ∧ v1 ∧ v3 ∧ ... ∧ vn, say,
also alters the parity.
For this reason, the highest non-zero exterior power Λn(V) is sometimes also called the determinant of V and
similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix
can also be cast in this setting, by considering lower alternating forms ΛkV with k < n.
The commutativity requirement rs = sr is supposed to hold for all elements r and s of the ring. For example, the integers form a commutative ring.
Many of the above statements and notions carry over mutatis mutandis to determinants of these more general
matrices: the determinant is multiplicative in this more general situation, and Cramer's rule also holds. A square
matrix over a commutative ring R is invertible if and only if its determinant is a unit in R, that is, an element having a
(multiplicative) inverse. (If R is a field, this latter condition is equivalent to the determinant being nonzero, thus
giving back the above characterization.) For example, a matrix A with entries in Z, the integers, is invertible (in the
sense that the inverse matrix has again integer entries) if the determinant is +1 or −1. Such a matrix is called
unimodular.
The determinant defines a mapping
det : GLn(R) → R×
between the group of invertible n×n matrices with entries in R and the multiplicative group of units in R. Since it respects the
multiplication in both groups, this map is a group homomorphism. Secondly, given a ring homomorphism f: R → S,
there is a map GLn(R) → GLn(S) given by replacing all entries in R by their images under f. The determinant
respects these maps, i.e., given a matrix A = (ai,j) with entries in R, the identity
f(det((ai,j))) = det((f(ai,j)))
holds. For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of
its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction
modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m (the latter
determinant being computed using modular arithmetic). In the more high-brow parlance of category theory, the
determinant is a natural transformation between the two functors GLn and (⋅)×.[5] Adding yet another layer of
abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear
group to the multiplicative group,
Infinite matrices
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over
directly. For example, in Leibniz' formula, an infinite sum (all of whose terms are infinite products) would have to be
calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional
situations, which however only work for particular kinds of operators.
The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate
generalization of the formula
Further variants
Determinants of matrices in superrings (that is, Z/2-graded rings) are known as Berezinians or superdeterminants.[6]
The permanent of a matrix is defined as the determinant, except that the factors sgn(σ) occurring in Leibniz' rule are
omitted. The immanant generalizes both by introducing a character of the symmetric group Sn in Leibniz' rule.
Calculation
Naive methods of implementing an algorithm to compute the determinant include using Leibniz' formula or
Laplace's formula. Both these approaches are extremely inefficient for large matrices, though, since the number of
required operations grows very quickly: it is of order n! (n factorial) for an n×n matrix M. For example, Leibniz'
formula requires to calculate n! products. Therefore, more involved techniques have been developed for calculating
determinants.
Decomposition methods
Given a matrix A, some methods compute its determinant by writing A as a product of matrices whose determinants
can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU
decomposition, Cholesky decomposition or the QR decomposition. These methods are of order O(n3), which is a
significant improvement over O(n!).
The LU decomposition expresses A in terms of a lower triangular matrix L, an upper triangular matrix U and a
permutation matrix P:
The determinants of L and U can be quickly calculated, since they are the products of the respective diagonal entries.
The determinant of P is just the sign of the corresponding permutation. The determinant of A is then
Moreover, the decomposition can be chosen such that L is a unitriangular matrix and therefore has determinant 1, in
which case the formula further simplifies to
.
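A minimal sketch of this computation using SciPy's LU factorization (the choice of scipy.linalg.lu is illustrative; any pivoted LU routine works the same way):

import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

P, L, U = lu(A)                         # A = P @ L @ U, with L unit lower triangular
sign = round(np.linalg.det(P))          # the sign (+1 or -1) of the permutation
det_A = sign * np.prod(np.diag(L)) * np.prod(np.diag(U))
print(det_A)                            # -6.0 (up to rounding)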
Further methods
If the determinant of A and the inverse of A have already been computed, the matrix determinant lemma allows one to
quickly calculate the determinant of A + uvT, where u and v are column vectors.
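A quick numerical check of the lemma (a NumPy sketch; the matrix and vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))

lhs = np.linalg.det(A + u @ v.T)
rhs = (1.0 + (v.T @ np.linalg.inv(A) @ u).item()) * np.linalg.det(A)
print(np.isclose(lhs, rhs))   # True: det(A + u v^T) = (1 + v^T A^-1 u) det(A)
```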
Since the definition of the determinant does not need divisions, a question arises: do fast algorithms exist that do not
need divisions? This is especially interesting for matrices over rings. Indeed, algorithms with run time proportional to
n^4 exist. An algorithm of Mahajan and Vinay, and Berkowitz[7] is based on closed ordered walks (called clows for short). It
computes more products than the determinant definition requires, but some of these products cancel and the sum of
these products can be computed more efficiently. The final algorithm looks very much like an iterated product of
triangular matrices.
If two matrices of order n can be multiplied in time M(n), where M(n) ≥ n^a for some a > 2, then the determinant can be
computed in time O(M(n)).[8] This means, for example, that an O(n^2.376) algorithm exists based on the
Coppersmith–Winograd algorithm.
Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to
store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition)
method is of order O(n^3), but the bit length of intermediate values can become exponentially long.[9] The Bareiss
algorithm, on the other hand, is an exact-division method based on Sylvester's identity; it is also of order n^3, but its bit
complexity is roughly the bit size of the original entries in the matrix times n.
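A sketch of Bareiss' fraction-free elimination on an integer matrix: every division is exact, so all intermediate entries remain integers of modest size. The function name and the simple pivoting strategy are ours, meant only to illustrate the idea.

```python
def det_bareiss(M):
    """Exact integer determinant by Bareiss' fraction-free (exact-division) elimination."""
    A = [row[:] for row in M]       # work on a copy
    n = len(A)
    sign, prev = 1, 1               # prev holds the pivot of the previous step
    for k in range(n - 1):
        if A[k][k] == 0:            # find a nonzero pivot, swapping rows if needed
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # division by the previous pivot is exact (Sylvester's identity)
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

print(det_bareiss([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```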
History
Historically, determinants were considered without reference to matrices: originally, a determinant was defined as a
property of a system of linear equations. The determinant "determines" whether the system has a unique solution
(which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese
mathematics textbook The Nine Chapters on the Mathematical Art (九章算術), written by Chinese scholars around the 3rd
century BC. In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and
larger ones by Leibniz.[10] [11] [12] [13]
In Europe, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrent law
was first announced by Bézout (1764).
It was Vandermonde (1771) who first recognized determinants as independent functions.[10] Laplace (1772) [14] [15]
gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had
already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third
order. Lagrange was the first to apply determinants to questions of elimination theory; he proved many special cases
of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers.
He introduced the word determinants (Laplace had used resultant), though not in the present signification, but rather
as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and
came very near the multiplication theorem.
The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of
two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On
the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the
subject. (See Cauchy-Binet formula.) In this he used the word determinant in its present sense,[16] [17] summarized
and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with
a proof more satisfactory than Binet's.[10] [18] With him begins the theory in its generality.
The next important figure was Jacobi[11] (from 1827). He early used the functional determinant which Sylvester later
called the Jacobian, and in his memoirs in Crelle for 1841 he specially treats this subject, as well as the class of
alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work on determinants.
Applications
Orientation of a basis
One often thinks of the determinant as assigning a number to every sequence of n vectors in Rn, by using the
square matrix whose columns are the given vectors. For instance, an orthogonal matrix with entries in
represents an orthonormal basis in Euclidean space. The determinant of such a matrix determines whether the
orientation of the basis is consistent with or opposite to the orientation of the standard basis. Namely, if the
determinant is +1, the basis has the same orientation. If it is −1, the basis has the opposite orientation.
More generally, if the determinant of A is positive, A represents an orientation-preserving linear transformation (if A
is an orthogonal 2×2 or 3×3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis.
Volume
Determinants are used to calculate volumes in vector calculus: the absolute value of the determinant of a matrix of real vectors is
equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if the linear map f : Rn → Rn
is represented by the matrix A, and S is any measurable subset of Rn, then the volume of f(S)
is given by |det(A)| × volume(S). More generally, if the linear map f : Rn → Rm is represented by the
m-by-n matrix A, and S is any measurable subset of Rn, then the n-dimensional volume of f(S) is given
by √(det(AT A)) × volume(S). By calculating the volume of the tetrahedron bounded by four points, they can
be used to identify skew lines.
The volume of any tetrahedron, given its vertices a, b, c, and d, is (1/6)·|det(a − b, b − c, c − d)|, or any other
combination of pairs of vertices that would form a spanning tree over the vertices.
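A small numeric sketch of the tetrahedron formula above (vertex names are arbitrary):

```python
import numpy as np

def tetrahedron_volume(a, b, c, d):
    """Volume of the tetrahedron with vertices a, b, c, d: (1/6)|det(a-b, b-c, c-d)|."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    return abs(np.linalg.det(np.column_stack([a - b, b - c, c - d]))) / 6.0

print(tetrahedron_volume([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))   # 1/6 ≈ 0.1667
```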
Jacobian determinant
Given a differentiable function f : Rn → Rn, the n × n matrix of its first-order partial derivatives, J(f) = (∂fi/∂xj),
is called the Jacobian matrix of f. Its determinant, the Jacobian determinant, appears in the higher-dimensional
version of integration by substitution. It also occurs in the inverse function theorem.
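As a hedged illustration (not from the article), the Jacobian determinant can be estimated numerically by finite differences; the polar-coordinate map is used here because its Jacobian determinant is known to be r.

```python
import numpy as np

def jacobian_det(f, x, h=1e-6):
    """Estimate det of the Jacobian of f: R^n -> R^n at x by central differences."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)   # j-th column: partial derivatives
    return np.linalg.det(J)

# (r, theta) -> (x, y); the exact Jacobian determinant is r
polar = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
print(jacobian_det(polar, [2.0, 0.3]))   # ≈ 2.0
```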
Notes
[1] Proofs can be found in http://web.archive.org/web/20080113084601/http://www.ee.ic.ac.uk/hp/staff/www/matrix/proof003.html
[2] These identities were taken from http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/proof003.html
[3] Proofs are given at http://www.mth.kcl.ac.uk/~jrs/gazette/blocks.pdf
[4] Roger Godement, Cours d'Algèbre, seconde édition, Hermann (1966), §23, Théorème 5, p. 303
[5] Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics 5 (2nd ed.), Springer-Verlag, ISBN 0-387-98403-8
[6] http://books.google.de/books?id=sZ1-G4hQgIIC&pg=PA116&dq=Berezinian
[7] http://page.inf.fu-berlin.de/~rote/Papers/pdf/Division-free+algorithms.pdf
[8] J. R. Bunch and J. E. Hopcroft, Triangular factorization and inversion by fast matrix multiplication, Mathematics of Computation, 28 (1974) 231–236.
[9] Fang, Xin Gui; Havas, George (1997). "On the worst-case complexity of integer Gaussian elimination" (http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/ft_gateway.cfm.pdf). Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation. ISSAC '97. Kihei, Maui, Hawaii, United States: ACM. pp. 28–31. doi:10.1145/258726.258740. ISBN 0-89791-875-4.
[10] Campbell, H: "Linear Algebra With Applications", pages 111–112. Appleton Century Crofts, 1971
[11] Eves, H: "An Introduction to the History of Mathematics", pages 405, 493–494, Saunders College Publishing, 1990.
[12] A Brief History of Linear Algebra and Matrix Theory: http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html
[13] Cajori, F. A History of Mathematics p. 80 (http://books.google.com/books?id=bBoPAAAAIAAJ&pg=PA80#v=onepage&f=false)
[14] Expansion of determinants in terms of minors: Laplace, Pierre-Simon (de), "Recherches sur le calcul intégral et sur le système du monde," Histoire de l'Académie Royale des Sciences (Paris), seconde partie, pages 267–376 (1772).
[15] Muir, Sir Thomas, The Theory of Determinants in the Historical Order of Development [London, England: Macmillan and Co., Ltd., 1906].
[16] The first use of the word "determinant" in the modern sense appeared in: Cauchy, Augustin-Louis, "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et des signes contraires par suite des transpositions opérées entre les variables qu'elles renferment," which was first read at the Institut de France in Paris on November 30, 1812, and which was subsequently published in the Journal de l'École Polytechnique, Cahier 17, Tome 10, pages 29–112 (1815).
[17] Origins of mathematical terms: http://jeff560.tripod.com/d.html
[18] History of matrices and determinants: http://www-history.mcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html
[19] The first use of vertical lines to denote a determinant appeared in: Cayley, Arthur, "On a theorem in the geometry of position," Cambridge Mathematical Journal, vol. 2, pages 267–271 (1841).
[20] History of matrix notation: http://jeff560.tripod.com/matrices.html
[21] Down with Determinants: http://www.axler.net/DwD.html
References
• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0387982590
• de Boor, Carl (1990), "An empty exercise" (http://ftp.cs.wisc.edu/Approx/empty.pdf), ACM SIGNUM
Newsletter 25 (2): 3–7, doi:10.1145/122272.122273.
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley,
ISBN 978-0321287137
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra (http://www.matrixanalysis.
com/DownloadChapters.html), Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0898714548
• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
External links
• WebApp to calculate determinants and descriptively solve systems of linear equations (http://sole.ooz.ie/en)
• Determinant Interactive Program and Tutorial (http://people.revoledu.com/kardi/tutorial/LinearAlgebra/
MatrixDeterminant.html)
• Online Matrix Calculator (http://matri-tri-ca.narod.ru/en.index.html)
• Linear algebra: determinants. (http://www.umat.feec.vutbr.cz/~novakm/determinanty/en/) Compute
determinants of matrices up to order 6 using Laplace expansion you choose.
• Matrices and Linear Algebra on the Earliest Uses Pages (http://www.economics.soton.ac.uk/staff/aldrich/
matrices.htm)
• Determinants explained in an easy fashion in the 4th chapter as a part of a Linear Algebra course. (http://algebra.
math.ust.hk/course/content.shtml)
• Instructional video on taking the determinant of an n×n matrix (Khan Academy) (http://khanexercises.appspot.com/video?v=H9BWRYJNIv4)
• Online matrix calculator (determinant, trace, inverse, adjoint, transpose) (http://www.stud.feec.vutbr.cz/~xvapen02/vypocty/matreg.php?language=english) Compute determinant of matrix up to order 8
Exterior algebra
In mathematics, the exterior product or wedge product of vectors is
an algebraic construction used in Euclidean geometry to study areas,
volumes, and their higher-dimensional analogs. The exterior product of
two vectors u and v, denoted by u ∧ v, lies in a space called the exterior
square, a different geometrical space (vector space) than the original
space of vectors. The magnitude[1] of u ∧ v can be interpreted as the
area of the parallelogram with sides u and v, which can also be
computed using the cross product of the two vectors. Also like the
cross product, the exterior product is anticommutative, meaning that u
∧ v = −v ∧ u for all vectors u and v. One way to visualize the exterior
product of two vectors is as a family of parallelograms all lying in the same plane, having the same area, and with
the same orientation of their boundaries (a decision of clockwise or counterclockwise). When thought of in this
manner (common in geometric algebra) the exterior product of two vectors is called a 2-blade. More generally, the
exterior product of any number k of vectors can be defined and is sometimes called a k-blade. It lives in a
geometrical space known as the k-th exterior power. The magnitude of the resulting k-blade is the volume of the
k-dimensional parallelotope whose sides are the given vectors, just like the magnitude of the scalar triple product of
vectors in three dimensions gives the volume of the parallelepiped spanned by those vectors.
[Figure: The cross product (blue vector) in relation to the exterior product (light blue parallelogram). The length of the cross product is to the length of the parallel unit vector (red) as the size of the exterior product is to the size of the reference parallelogram (light red).]
The exterior algebra (also known as the Grassmann algebra, after Hermann Grassmann[2] ) is the algebraic system
whose product is the exterior product. The exterior algebra provides an algebraic setting in which to answer
geometric questions. For instance, whereas blades have a concrete geometrical interpretation, objects in the exterior
algebra can be manipulated according to a set of unambiguous rules. The exterior algebra contains objects that are
not just k-blades, but sums of k-blades. The k-blades, because they are simple products of vectors, are called the
simple elements of the algebra. The rank of any element of the exterior algebra is defined to be the smallest number
of simple elements of which it is a sum. An example when k = 2 is a symplectic form, which is an element of the
exterior square whose rank is maximal. The exterior product extends to the full exterior algebra, so that it makes sense
to multiply any two elements of the algebra. Equipped with this product, the exterior algebra is an associative
algebra, which means that α ∧ (β ∧ γ) = (α ∧ β) ∧ γ for any elements α, β, γ. The elements of the algebra that are
sums of k-blades are called the degree k-elements, and when elements of different degrees are multiplied, the degrees
add (like multiplication of polynomials). This means that the exterior algebra is a graded algebra.
In a precise sense (given by what is known as a universal construction), the exterior algebra is the largest algebra
that supports an alternating product on vectors, and can be easily defined in terms of other known objects such as
tensors. The definition of the exterior algebra makes sense for spaces not just of geometric vectors, but of other
vector-like objects such as vector fields or functions. In full generality, the exterior algebra can be defined for
modules over a commutative ring, and for other structures of interest in abstract algebra. It is one of these more
general constructions where the exterior algebra finds one of its most important applications, where it appears as the
algebra of differential forms that is fundamental in areas that use differential geometry. Differential forms are
mathematical objects that represent infinitesimal areas of infinitesimal parallelograms (and higher-dimensional
bodies), and so can be integrated over surfaces and higher dimensional manifolds in a way that generalizes the line
integrals from calculus. The exterior algebra also has many algebraic properties that make it a convenient tool in
algebra itself. The association of the exterior algebra to a vector space is a type of functor on vector spaces, which
means that it is compatible in a certain way with linear transformations of vector spaces. The exterior algebra is one
example of a bialgebra, meaning that its dual space also possesses a product, and this dual product is compatible with
the wedge product. This dual algebra is precisely the algebra of alternating multilinear forms on V, and the pairing
between the exterior algebra and its dual is given by the interior product.
Motivating examples
Suppose that
are a pair of given vectors in R2, written in components. There is a unique parallelogram having v and w as two of its
sides. The area of this parallelogram is given by the standard determinant formula:
where the first step uses the distributive law for the wedge product, and the last uses the fact that the wedge product
is alternating, and in particular e2 ∧ e1 = −e1 ∧ e2. Note that the coefficient in this last expression is precisely the
determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w
may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an
area is called the signed area of the parallelogram: the absolute value of the signed area is the ordinary area, and the
sign determines its orientation.
The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior
product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if
A(v, w) denotes the signed area of the parallelogram determined by the pair of vectors v and w, then A must satisfy
the following properties:
1. A(av, bw) = a b A(v, w) for any real numbers a and b, since rescaling either of the sides rescales the area by the
same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram).
2. A(v,v) = 0, since the area of the degenerate parallelogram determined by v (i.e., a line segment) is zero.
3. A(w,v) = −A(v,w), since interchanging the roles of v and w reverses the orientation of the parallelogram.
4. A(v + aw,w) = A(v,w), since adding a multiple of w to v affects neither the base nor the height of the
parallelogram and consequently preserves its area.
5. A(e1, e2) = 1, since the area of the unit square is one.
With the exception of the last property, the wedge product satisfies the same formal properties as the area. In a
certain sense, the wedge product generalizes the final property by allowing the area of a parallelogram to be
compared to that of any "standard" chosen parallelogram. In other words, the exterior product in two dimensions is a
basis-independent formulation of area.[3]
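In coordinates the signed area is just the 2×2 determinant, as a short sketch (values chosen arbitrarily) makes explicit:

```python
def signed_area(v, w):
    """Signed area of the parallelogram with sides v, w in R^2 (coefficient of e1 ∧ e2)."""
    return v[0] * w[1] - v[1] * w[0]

print(signed_area((2, 0), (0, 3)))   #  6: counterclockwise orientation
print(signed_area((0, 3), (2, 0)))   # -6: swapping the sides reverses the orientation
print(signed_area((2, 0), (2, 0)))   #  0: degenerate parallelogram
```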
and
is
where {e1 ∧ e2, e3 ∧ e1, e2 ∧ e3} is the basis for the three-dimensional space Λ2(R3). This imitates the usual
definition of the cross product of vectors in three dimensions.
Bringing in a third vector
where e1 ∧ e2 ∧ e3 is the basis vector for the one-dimensional space Λ3(R3). This imitates the usual definition of the
triple product.
The cross product and triple product in three dimensions each admit both geometric and algebraic interpretations.
The cross product u×v can be interpreted as a vector which is perpendicular to both u and v and whose magnitude is
equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector
consisting of the minors of the matrix with columns u and v. The triple product of u, v, and w is geometrically a
(signed) volume. Algebraically, it is the determinant of the matrix with columns u, v, and w. The exterior product in
three dimensions allows for similar interpretations. In fact, in the presence of a positively oriented orthonormal
basis, the exterior product generalizes these notions to higher dimensions.
hence
Conversely, it follows from the anticommutativity of the product that the product is alternating, unless K has
characteristic two.
More generally, if x1, x2, ..., xk are elements of V, and σ is a permutation of the integers [1,...,k], then
xσ(1) ∧ xσ(2) ∧ ⋯ ∧ xσ(k) = sgn(σ) x1 ∧ x2 ∧ ⋯ ∧ xk,
where sgn(σ) is the signature of the permutation σ.
If α ∈ Λk(V), then α is said to be a k-multivector. If, furthermore, α can be expressed as a wedge product of k
elements of V, then α is said to be decomposable. Although decomposable multivectors span Λk(V), not every
element of Λk(V) is decomposable. For example, in R4, the following 2-multivector is not decomposable:
α = e1 ∧ e2 + e3 ∧ e4.
If the dimension of V is n and {e1, ..., en} is a basis of V, then the set of all products ei1 ∧ ei2 ∧ ⋯ ∧ eik with i1 < i2 < ⋯ < ik
is a basis for Λk(V). The reason is the following: given any wedge product of the form v1 ∧ v2 ∧ ⋯ ∧ vk,
then every vector vj can be written as a linear combination of the basis vectors ei; using the bilinearity of the wedge
product, this can be expanded to a linear combination of wedge products of those basis vectors. Any wedge product
in which the same basis vector appears more than once is zero; any wedge product in which the basis vectors do not
appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general,
the resulting coefficients of the basis k-vectors can be computed as the minors of the matrix that describes the vectors
vj in terms of the basis ei.
By counting the basis elements, the dimension of Λk(V) is equal to the binomial coefficient n choose k, where n is the dimension of V.
The full exterior algebra Λ(V) is the direct sum of the spaces Λk(V) for k = 0, 1, ..., n
(where by convention Λ0(V) = K and Λ1(V) = V), and therefore its dimension is equal to the sum of the binomial
coefficients, which is 2^n.
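For a concrete count (a sketch with n = 4 chosen arbitrarily):

```python
from math import comb

n = 4
dims = [comb(n, k) for k in range(n + 1)]   # dimension of each exterior power, k = 0..n
print(dims, sum(dims))                       # [1, 4, 6, 4, 1] 16, i.e. 2**4
```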
Rank of a multivector
If α ∈ Λk(V), then it is possible to express α as a linear combination of decomposable multivectors:
α = α(1) + α(2) + ⋯ + α(s), where each α(i) is decomposable.
The rank of the multivector α is the minimal number of decomposable multivectors in such an expansion of α. This
is similar to the notion of tensor rank.
Rank is particularly important in the study of 2-multivectors (Sternberg 1974, §III.6) (Bryant et al. 1991). The rank
of a 2-multivector α can be identified with half the rank of the matrix of coefficients of α in a basis. Thus if ei is a
basis for V, then α can be expressed uniquely as
α = Σ(i<j) aij ei ∧ ej,
where aij = −aji (the matrix of coefficients is skew-symmetric). The rank of the matrix aij is therefore even, and is
twice the rank of the form α.
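A small numeric sketch of this identification, using the non-decomposable example α = e1 ∧ e2 + e3 ∧ e4 from above (the assembly of the skew-symmetric coefficient matrix is ours):

```python
import numpy as np

# Coefficients a_ij (i < j) of α = e1∧e2 + e3∧e4 in R^4, packed into a skew-symmetric matrix
a = np.zeros((4, 4))
a[0, 1] = 1.0          # coefficient of e1 ∧ e2
a[2, 3] = 1.0          # coefficient of e3 ∧ e4
A = a - a.T            # skew-symmetric matrix of coefficients

print(np.linalg.matrix_rank(A) // 2)   # 2: a single 2-blade would give rank 1
```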
In characteristic 0, the 2-multivector α has rank p if and only if
α ∧ ⋯ ∧ α (p factors) is nonzero and α ∧ ⋯ ∧ α (p + 1 factors) is zero.
Graded structure
The wedge product of a k-multivector with a p-multivector is a (k+p)-multivector, once again invoking bilinearity.
As a consequence, the direct sum decomposition of the preceding section
gives the exterior algebra the additional structure of a graded algebra. Symbolically,
Moreover, the wedge product is graded anticommutative, meaning that if α ∈ Λk(V) and β ∈ Λp(V), then α ∧ β = (−1)^(kp) β ∧ α.
In addition to studying the graded structure on the exterior algebra, Bourbaki (1989) studies additional graded
structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already
carries its own gradation).
Universal property
Let V be a vector space over the field K. Informally, multiplication in Λ(V) is performed by manipulating symbols
and imposing a distributive law, an associative law, and using the identity v ∧ v = 0 for v ∈ V. Formally, Λ(V) is the
"most general" algebra in which these rules hold for the multiplication, in the sense that any unital associative
K-algebra containing V with alternating multiplication on V must contain a homomorphic image of Λ(V). In other
words, the exterior algebra has the following universal property:[7]
Given any unital associative K-algebra A and any K-linear map j : V → A such that j(v)j(v) = 0 for every v in V, then
there exists precisely one unital algebra homomorphism f : Λ(V) → A such that j(v) = f(i(v)) for all v in V.
To construct the most general algebra that contains V and whose multiplication is alternating on V, it is natural to
start with the most general algebra that contains V, the tensor algebra T(V), and then enforce the alternating property
by taking a suitable quotient. We thus take the two-sided ideal I in T(V) generated by all elements of the form v⊗v
for v in V, and define Λ(V) as the quotient Λ(V) = T(V) / I
(and use ∧ as the symbol for multiplication in Λ(V)). It is then straightforward to show that Λ(V) contains V and
satisfies the above universal property.
As a consequence of this construction, the operation of assigning to a vector space V its exterior algebra Λ(V) is a
functor from the category of vector spaces to the category of algebras.
Rather than defining Λ(V) first and then identifying the exterior powers Λk(V) as certain subspaces, one may
alternatively define the spaces Λk(V) first and then combine them to form the algebra Λ(V). This approach is often
used in differential geometry and is described in the next section.
Generalizations
Given a commutative ring R and an R-module M, we can define the exterior algebra Λ(M) just as above, as a suitable
quotient of the tensor algebra T(M). It will satisfy the analogous universal property. Many of the properties of Λ(M)
also require that M be a projective module. Where finite-dimensionality is used, the properties further require that M
be finitely generated and projective. Generalizations to the most common situations can be found in (Bourbaki
1989).
Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential
differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of
the exterior algebra of finitely-generated projective modules, by the Serre-Swan theorem. More general exterior
algebras can be defined for sheaves of modules.
Duality
Alternating operators
Given two vector spaces V and X, an alternating operator (or anti-symmetric operator) from Vk to X is a multilinear
map f : Vk → X such that whenever v1, ..., vk are linearly dependent vectors in V, then f(v1, ..., vk) = 0. The map w : Vk → Λk(V),
which associates to k vectors from V their wedge product, i.e. their corresponding k-vector, is also alternating. In
fact, this map is the "most general" alternating operator defined on Vk: given any other alternating operator f : Vk →
X, there exists a unique linear map φ : Λk(V) → X with f = φ ∘ w. This universal property characterizes the space
Λk(V) and can serve as its definition.
When X = K, such a map is called an alternating multilinear form. The set of all alternating multilinear forms is a vector space, as the sum
of two such maps, or the product of such a map with a scalar, is again alternating. By the universal property of the
exterior power, the space of alternating forms of degree k on V is naturally isomorphic with the dual vector space
(ΛkV)∗. If V is finite-dimensional, then the latter is naturally isomorphic to Λk(V∗). In particular, the dimension of
the space of anti-symmetric maps from Vk to K is the binomial coefficient n choose k.
Under this identification, the wedge product takes a concrete form: it produces a new anti-symmetric map from two
given ones. Suppose ω : Vk → K and η : Vm → K are two anti-symmetric maps. As in the case of tensor products of
multilinear maps, the number of variables of their wedge product is the sum of the numbers of their variables. It is
defined as follows:
where the alternation Alt of a multilinear map is defined to be the signed average of the values over all the
permutations of its variables:
This definition of the wedge product is well-defined even if the field K has finite characteristic, if one considers an
equivalent version of the above that does not use factorials or any constants:
where here Shk,m ⊂ Sk+m is the subset of (k,m) shuffles: permutations σ of the set {1,2,…,k+m} such that σ(1) < σ(2)
< … < σ(k), and σ(k+1) < σ(k+2)< … <σ(k+m).[8]
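A minimal sketch of this shuffle definition, with alternating forms represented as plain Python functions (the names and the representation are ours, not the article's):

```python
from itertools import combinations

def perm_sign(p):
    """Sign of a permutation of 0..n-1."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def wedge(omega, k, eta, m):
    """Wedge of an alternating k-form omega and m-form eta via (k, m)-shuffles."""
    def product(*vectors):                                  # expects k + m vectors
        total = 0
        for head in combinations(range(k + m), k):          # arguments routed to omega
            tail = tuple(i for i in range(k + m) if i not in head)
            total += perm_sign(head + tail) * omega(*(vectors[i] for i in head)) \
                                            * eta(*(vectors[i] for i in tail))
        return total
    return product

# Two 1-forms on R^2; their wedge is the signed-area form v[0]*w[1] - v[1]*w[0]
alpha = lambda v: v[0]
beta = lambda v: v[1]
area = wedge(alpha, 1, beta, 1)
print(area((2, 0), (0, 3)))   # 6
```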
Bialgebra structure
In formal terms, there is a correspondence between the graded dual of the graded algebra Λ(V) and alternating
multilinear forms on V. The wedge product of multilinear forms defined above is dual to a coproduct defined on
Λ(V), giving the structure of a coalgebra.
The coproduct is a linear function Δ : Λ(V) → Λ(V) ⊗ Λ(V) given on decomposable elements by
For example,
This extends by linearity to an operation defined on the whole exterior algebra. In terms of the coproduct, the wedge
product on the dual space is just the graded dual of the coproduct:
where the tensor product on the right-hand side is of multilinear maps (extended by zero on elements of
incompatible homogeneous degree: more precisely, α ∧ β = ε ∘ (α ⊗ β) ∘ Δ, where ε is the counit, as defined
presently).
The counit is the homomorphism ε : Λ(V) → K which returns the 0-graded component of its argument. The
coproduct and counit, along with the wedge product, define the structure of a bialgebra on the exterior algebra.
With an antipode defined on homogeneous elements by S(x) = (−1)^(deg x) x, the exterior algebra is furthermore a Hopf
algebra.[9]
Interior product
Suppose that V is finite-dimensional. If V* denotes the dual space to the vector space V, then for each α ∈ V*, it is
possible to define an antiderivation on the algebra Λ(V),
iα : Λk(V) → Λk−1(V).
This derivation is called the interior product with α, or sometimes the insertion operator, or contraction by α.
Suppose that w ∈ ΛkV. Then w is a multilinear mapping of V* to K, so it is defined by its values on the k-fold
Cartesian product V* × V* × ... × V*. If u1, u2, ..., uk−1 are k−1 elements of V*, then define
(iαw)(u1, u2, ..., uk−1) = w(α, u1, u2, ..., uk−1).
In fact, these three properties are sufficient to characterize the interior product as well as define it in the general
infinite-dimensional case.
Further properties of the interior product include:
•
•
Hodge duality
Suppose that V has finite dimension n. Then the interior product induces a canonical isomorphism of vector spaces
In the geometrical setting, a non-zero element of the top exterior power Λn(V) (which is a one-dimensional vector
space) is sometimes called a volume form (or orientation form, although this term may sometimes lead to
ambiguity.) Relative to a given volume form σ, the isomorphism is given explicitly by
If, in addition to a volume form, the vector space V is equipped with an inner product identifying V with V*, then the
resulting isomorphism is called the Hodge dual (or more commonly the Hodge star operator)
The composite of * with itself maps Λk(V) → Λk(V) and is always a scalar multiple of the identity map. In most
applications, the volume form is compatible with the inner product in the sense that it is a wedge product of an
orthonormal basis of V. In this case,
where I is the identity, and the inner product has metric signature (p,q) — p plusses and q minuses.
Inner product
For V a finite-dimensional space, an inner product on V defines an isomorphism of V with V∗, and so also an
isomorphism of ΛkV with (ΛkV)∗. The pairing between these two spaces also takes the form of an inner product. On
decomposable k-multivectors,
⟨v1 ∧ ⋯ ∧ vk, w1 ∧ ⋯ ∧ wk⟩ = det(⟨vi, wj⟩),
the determinant of the matrix of inner products. In the special case vi = wi, the inner product is the square norm of the
multivector, given by the determinant of the Gramian matrix (⟨vi, vj⟩). This is then extended bilinearly (or
sesquilinearly in the complex case) to a non-degenerate inner product on ΛkV. If ei, i=1,2,...,n, form an orthonormal
basis of V, then the vectors of the form ei1 ∧ ei2 ∧ ⋯ ∧ eik, with i1 < i2 < ⋯ < ik, form an orthonormal basis for Λk(V).
With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically,
for v ∈ Λk−1(V), w ∈ Λk(V), and x ∈ V,
⟨x ∧ v, w⟩ = ⟨v, i(x♭) w⟩,
where x♭ ∈ V∗ is the linear functional defined by x♭(y) = ⟨x, y⟩
for all y ∈ V. This property completely characterizes the inner product on the exterior algebra.
Functoriality
Suppose that V and W are a pair of vector spaces and f : V → W is a linear transformation. Then, by the universal
construction, there exists a unique homomorphism of graded algebras
Λ(f) : Λ(V) → Λ(W)
such that Λ(f)(v) = f(v) for every v in V = Λ1(V).
In particular, Λ(f) preserves homogeneous degree. The k-graded components of Λ(f) are given on decomposable
elements by
Λ(f)(x1 ∧ … ∧ xk) = f(x1) ∧ … ∧ f(xk).
Let Λk(f) denote the restriction of Λ(f) to Λk(V).
The components of the transformation Λk(f) relative to a basis of V and W form the matrix of k × k minors of f. In
particular, if V = W and V is of finite dimension n, then Λn(f) is a mapping of a one-dimensional vector space Λn(V) to
itself, and is therefore given by a scalar: the determinant of f.
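A sketch of the k-graded component of Λ(f) as the matrix of k × k minors, restricted here to a square matrix F acting on R3 (the function name and the basis ordering are ours):

```python
import numpy as np
from itertools import combinations

def exterior_power_matrix(F, k):
    """Matrix of Λ^k(f) in the basis of wedges of standard basis vectors: all k×k minors."""
    n = F.shape[0]
    idx = list(combinations(range(n), k))        # index sets i1 < ... < ik
    M = np.empty((len(idx), len(idx)))
    for a, rows in enumerate(idx):
        for b, cols in enumerate(idx):
            M[a, b] = np.linalg.det(F[np.ix_(rows, cols)])
    return M

F = np.array([[2., 1., 0.], [0., 3., 1.], [1., 0., 1.]])
print(exterior_power_matrix(F, 3))   # 1x1 matrix: the determinant of F
print(np.linalg.det(F))              # 7.0, the same scalar
```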
Exactness
If
Direct sums
In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras:
is exact.[12]
where the sum is taken over the symmetric group of permutations on the symbols {1,...,r}. This extends by linearity
and homogeneity to an operation, also denoted by Alt, on the full tensor algebra T(V). The image Alt(T(V)) is the
alternating tensor algebra, denoted A(V). This is a vector subspace of T(V), and it inherits the structure of a graded
vector space from that on T(V). It carries an associative graded product defined by
Although this product differs from the tensor product, the kernel of Alt is precisely the ideal I (again, assuming that K
has characteristic 0), and there is a canonical isomorphism A(V) ≅ Λ(V).
Index notation
Suppose that V has finite dimension n, and that a basis e1, ..., en of V is given. Then any alternating tensor t ∈ Ar(V) ⊂
Tr(V) can be written in index notation as
The components of this tensor are precisely the skew part of the components of the tensor product s ⊗ t, denoted by
square brackets on the indices:
The interior product may also be described in index notation as follows. Let t be an antisymmetric
tensor of rank r. Then, for α ∈ V*, iαt is an alternating tensor of rank r − 1, given by
Applications
Linear algebra
In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the
determinant and the minors of a matrix. For instance, it is well-known that the magnitude of the determinant of a
square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix. This suggests
that the determinant can be defined in terms of the exterior product of the column vectors. Likewise, the k×k minors
of a matrix can be defined by looking at the exterior products of column vectors chosen k at a time. These ideas can
be extended not just to matrices but to linear transformations as well: the magnitude of the determinant of a linear
transformation is the factor by which it scales the volume of any given reference parallelotope. So the determinant of
a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action
of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the
transformation.
Linear geometry
The decomposable k-vectors have geometric interpretations: the bivector u ∧ v represents the plane spanned by the
vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides u and v. Analogously,
the 3-vector u ∧ v ∧ w represents the spanned 3-space weighted by the volume of the oriented parallelepiped with
edges u, v, and w.
Projective geometry
Decomposable k-vectors in ΛkV correspond to weighted k-dimensional subspaces of V. In particular, the
Grassmannian of k-dimensional subspaces of V, denoted Grk(V), can be naturally identified with an algebraic
subvariety of the projective space P(ΛkV). This is called the Plücker embedding.
Differential geometry
The exterior algebra has notable applications in differential geometry, where it is used to define differential forms. A
differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the
point. Equivalently, a differential form of degree k is a linear functional on the k-th exterior power of the tangent
space. As a consequence, the wedge product of multilinear forms defines a natural wedge product for differential
forms. Differential forms play a major role in diverse areas of differential geometry.
In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a
differential algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and
it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior
derivative, is a differential complex whose cohomology is called the de Rham cohomology of the underlying
manifold and plays a vital role in the algebraic topology of differentiable manifolds.
Representation theory
In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector
spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible
representations of the general linear group; see fundamental representation.
Physics
The exterior algebra is an archetypal example of a superalgebra, which plays a fundamental role in physical theories
pertaining to fermions and supersymmetry. For a physical discussion, see Grassmann number. For various other
applications of related ideas to physics, see superspace and supergroup (physics).
The Jacobi identity holds if and only if ∂∂ = 0, and so this is a necessary and sufficient condition for an
anticommutative nonassociative algebra L to be a Lie algebra. Moreover, in that case ΛL is a chain complex with
boundary operator ∂. The homology associated to this complex is the Lie algebra homology.
Homological algebra
The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in
homological algebra.
History
The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of
Ausdehnungslehre, or Theory of Extension.[14] This referred more generally to an algebraic (or axiomatic) theory of
extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also
published similar ideas of exterior calculus for which he claimed priority over Grassmann.[15]
The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's
theory of multivectors. It was thus a calculus, much like the propositional calculus, except focused exclusively on
the task of formal reasoning in geometrical terms.[16] In particular, this new development allowed for an axiomatic
characterization of dimension, a property that had previously only been examined from the coordinate point of view.
The import of this new theory of vectors and multivectors was lost to mid 19th century mathematicians,[17] until
being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of
the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie
Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms.
A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his
universal algebra. This then paved the way for the 20th century developments of abstract algebra by placing the
axiomatic notion of an algebraic system on a firm logical footing.
Notes
[1] Strictly speaking, the magnitude depends on some additional structure, namely that the vectors be in a Euclidean space. We do not generally
assume that this structure is available, except where it is helpful to develop intuition on the subject.
[2] Grassmann (1844) introduced these as extended algebras (cf. Clifford 1878). He used the word äußere (literally translated as outer, or
exterior) only to indicate the produkt he defined, which is nowadays conventionally called exterior product, probably to distinguish it from the
outer product as defined in modern linear algebra.
[3] This axiomatization of areas is due to Leopold Kronecker and Karl Weierstrass; see Bourbaki (1989, Historical Note). For a modern
treatment, see MacLane & Birkhoff (1999, Theorem IX.2.2). For an elementary treatment, see Strang (1993, Chapter 5).
[4] This definition is a standard one. See, for instance, MacLane & Birkhoff (1999).
[5] A proof of this can be found in more generality in Bourbaki (1989).
[6] See Sternberg (1964, §III.6).
[7] See Bourbaki (1989, III.7.1), and MacLane & Birkhoff (1999, Theorem XVI.6.8). More detail on universal properties in general can be found
in MacLane & Birkhoff (1999, Chapter VI), and throughout the works of Bourbaki.
[8] Some conventions, particularly in physics, define the wedge product as
This convention is not adopted here, but is discussed in connection with alternating tensors.
[9] Indeed, the exterior algebra of V is the enveloping algebra of the abelian Lie superalgebra structure on V.
[10] This part of the statement also holds in greater generality if V and W are modules over a commutative ring: That Λ converts epimorphisms to
epimorphisms. See Bourbaki (1989, Proposition 3, III.7.2).
[11] This statement generalizes only to the case where V and W are projective modules over a commutative ring. Otherwise, it is generally not the
case that Λ converts monomorphisms to monomorphisms. See Bourbaki (1989, Corollary to Proposition 12, III.7.9).
[12] Such a filtration also holds for vector bundles, and projective modules over a commutative ring. This is thus more general than the result
quoted above for direct sums, since not every short exact sequence splits in other abelian categories.
[13] See Bourbaki (1989, III.7.5) for generalizations.
[14] Kannenberg (2000) published a translation of Grassmann's work in English; he translated Ausdehnungslehre as Extension Theory.
[15] J Itard, Biography in Dictionary of Scientific Biography (New York 1970-1990).
[16] Authors have in the past referred to this calculus variously as the calculus of extension (Whitehead 1898; Forder 1941), or extensive algebra
(Clifford 1878), and recently as extended vector algebra (Browne 2007).
[17] Bourbaki 1989, p. 661.
References
Mathematical references
• Bishop, R.; Goldberg, S.I. (1980), Tensor analysis on manifolds, Dover, ISBN 0-486-64039-6
Includes a treatment of alternating tensors and alternating forms, as well as a detailed discussion of
Hodge duality from the perspective adopted in this article.
• Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9
This is the main mathematical reference for the article. It introduces the exterior algebra of a module
over a commutative ring (although this article specializes primarily to the case when the ring is a field),
including a discussion of the universal property, functoriality, duality, and the bialgebra structure. See
chapters III.7 and III.11.
• Bryant, R.L.; Chern, S.S.; Gardner, R.B.; Goldschmidt, H.L.; Griffiths, P.A. (1991), Exterior differential systems,
Springer-Verlag
This book contains applications of exterior algebras to problems in partial differential equations. Rank
and related concepts are developed in the early chapters.
• MacLane, S.; Birkhoff, G. (1999), Algebra, AMS Chelsea, ISBN 0-8218-1646-2
Chapter XVI sections 6-10 give a more elementary account of the exterior algebra, including duality,
determinants and minors, and alternating forms.
• Sternberg, Shlomo (1964), Lectures on Differential Geometry, Prentice Hall
Contains a classical treatment of the exterior algebra as alternating tensors, and applications to
differential geometry.
Historical references
• Bourbaki, Nicolas (1989), "Historical note on chapters II and III", Elements of mathematics, Algebra I,
Springer-Verlag
• Clifford, W. (1878), "Applications of Grassmann's Extensive Algebra", American Journal of Mathematics (The
Johns Hopkins University Press) 1 (4): 350–358, doi:10.2307/2369379, JSTOR 2369379
• Forder, H. G. (1941), The Calculus of Extension, Cambridge University Press
• Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (http://books.
google.com/books?id=bKgAAAAAMAAJ&pg=PA1&dq=Die+Lineale+Ausdehnungslehre+ein+neuer+
Zweig+der+Mathematik) (The Linear Extension Theory - A new Branch of Mathematics) alternative reference
(http://resolver.sub.uni-goettingen.de/purl?PPN534901565)
• Kannenberg, Llyod (2000), Extension Theory (translation of Grassmann's Ausdehnungslehre), American
Mathematical Society, ISBN 0821820311
• Peano, Giuseppe (1888), Calcolo Geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle
Operazioni della Logica Deduttiva; Kannenberg, Lloyd (1999), Geometric calculus: According to the
Ausdehnungslehre of H. Grassmann, Birkhäuser, ISBN 978-0817641269.
• Whitehead, Alfred North (1898), a Treatise on Universal Algebra, with Applications (http://historical.library.
cornell.edu/cgi-bin/cul.math/docviewer?did=01950001&seq=5), Cambridge
Geometric algebra
Geometric algebra (along with an associated Geometric calculus, Spacetime algebra and Conformal Geometric
algebra, together GA) provides an alternative and comprehensive approach to the algebraic representation of
classical, computational and relativistic geometry. GA now finds application throughout physics, in graphics and robotics,
as well as in the mathematics of its formal parents, the Grassmann and Clifford algebras.
A distinguishing characteristic of GA is that its products are used and interpreted geometrically due to the natural
correspondence between geometric entities and the elements of the algebra. GA allows one to manipulate subspaces
directly and is a coordinate-free formalism.
Proponents argue it provides compact and intuitive descriptions in many areas including classical and quantum
mechanics, electromagnetic theory and relativity among others. They strive to work with real algebras wherever that
is possible and they argue that it is generally possible and usually enlightening to identify the presence of an
imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square
to -1, and these have geometric significance because of the properties of the algebra and the interaction of its various
subspaces.
which is a scalar (0-vector) and equal to the usual Euclidean inner product if both vectors have positive
signature.
• The outer product
decomposing the geometric product of a vector with a k-blade into a (k − 1)-blade and a
(k + 1)-blade.
From now on, a vector is something in the algebra itself. Vectors will be represented by lower-case letters (e.g. a), and
multivectors by upper-case letters (e.g. A). Scalars will be represented by Greek characters.
For example
In general, a multivector has grade-0 scalar, grade-1 vector, grade-2 bivector, ..., grade-(p+q) pseudoscalar parts.
The definition and the associativity of geometric product entails the concept of the inverse of a vector (or division by
vector) expressed by
Although not all the elements of the algebra are invertible, the inversion concept extends to the geometric product
and multivectors.
and
Similarly, the even subalgebra of Cℓ3,0(R), with basis {1, e1e2, e2e3, e3e1}, is isomorphic to the quaternions if, for
example, we identify the basis bivectors with i, j and k (up to sign).
We know that every associative algebra has a matrix representation, and it turns out that the Pauli matrices are a
representation of Cℓ3,0(R) and the Dirac matrices of Cℓ1,3(R), a matter of some interest to physicists.
Rotations
If we have a product of vectors R = a1a2 ⋯ ar then we denote the reverse as
R† = ar ⋯ a2a1.
Now assume that R = ab, so that R† = ba.
Setting RR† = 1, the map v ↦ R v R† leaves the length of v unchanged. We can also show that it preserves the angles
between vectors, so the transformation preserves both length and angle. It therefore can be identified as a rotation; R is
called a rotor and is an instance of what is known in GA as a versor (presumably for historical reasons; see Versor).
There is a general method for rotating a vector, involving the formation of a multivector of the form R = e^(−Bθ/2),
which produces an anticlockwise rotation by the angle θ in the plane defined by the unit bivector B.
Rotors can be seen as the generalization of quaternions to n-dimensional spaces.
For more about reflections, rotations and "sandwiching" products like RvR†, see Plane of Rotation.
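As a numeric sketch of the sandwich construction, the quaternion isomorphism mentioned above can be used to rotate a vector in three dimensions; the quaternion convention and helper names below are ours and are meant only to illustrate the R v R† pattern, not to serve as a general GA implementation.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(v, axis, angle):
    """Rotate v about a unit axis by the sandwich product R v R~, where R~ reverses R."""
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    R = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    R_rev = R * np.array([1.0, -1.0, -1.0, -1.0])           # reverse (conjugate) of R
    v_q = np.concatenate(([0.0], np.asarray(v, dtype=float)))
    return quat_mul(quat_mul(R, v_q), R_rev)[1:]

print(rotate([1, 0, 0], [0, 0, 1], np.pi / 2))   # ≈ [0, 1, 0]
```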
Geometric Calculus
Geometric Calculus extends the formalism to include differentiation and integration including differential geometry
and differential forms.[1]
Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,
as a geometric product, effectively generalizing Stokes theorem (including the differential forms version of it).
In when A is a curve with endpoints and , then
reduces to
This again extends the language of GA: the conformal model of Euclidean space is embedded in the CGA via
the identification of Euclidean points with vectors in the null cone, adding a point at infinity and
normalizing all points to the hyperplane. This allows all of conformal algebra to be done by
combinations of rotations and reflections, and the language is covariant, permitting the extension of incidence
relations of projective geometry to circles and spheres.
Specifically, we add such that and such that to the orthonormal basis of
allowing the creation of
representing an ideal point (point at infinity)(see Compactification)
Software
GA is a very practically oriented subject but there is a reasonably steep initial learning curve associated with it, that
can be eased somewhat by the use of applicable software.
The following is a list of freely available software that does not require ownership of commercial software or
purchase of any commercial products for this purpose:
• GA Viewer Fontijne, Dorst, Bouma & Mann [5]
The link provides a manual, introduction to GA and sample material as well as the software.
• CLUViz Perwass [6]
Software allowing script creation and including sample visualizations, manual and GA introduction.
• Gaigen Fontijne [7]
For programmers, this is a code generator with support for C, C++, C# and Java.
• Cinderella Visualizations Hitzer [8] and Dorst [9].
Applications
Algebraic Examples
Consider a line L defined by points T and P (which we seek) and a plane defined by a bivector B containing points P
and Q.
We may define the line parametrically by where p and t are position vectors for points T and P and v
is the direction vector for the line.
Then
and
so
and
Boosts in this Lorentzian metric space have the same expression as rotations in Euclidean space, where the bivector
involved is generated by the time and space directions, whereas in the Euclidean case it is the bivector
generated by the two space directions, strengthening the "analogy" to almost identity.
Rotational Systems
The mathematical description of rotational forces such as torque and angular momentum makes use of the cross
product.
The cross product can be viewed in terms of the outer product, allowing
a more natural geometric interpretation of the cross product as a
bivector using the dual relationship a × b = −I (a ∧ b), where I is the unit pseudoscalar.
For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or
work per unit angle.
Suppose a circular path in an arbitrary plane containing orthonormal vectors u and v is parameterized by the angle θ.
Unlike the cross product description of torque, τ = r × F, the geometric-algebra description does not introduce a
vector in the normal direction; a vector that does not exist in two dimensions and that is not unique in more than three
dimensions. The unit bivector u ∧ v describes the plane and the orientation of the rotation, and the sense of the rotation is
relative to the angle between the vectors u and v.
History
Although the connection of geometry with algebra dates as far back at least to Euclid's Elements in the 3rd century
B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was
used in a systematic way to describe the geometrical properties and transformations of a space. In that year,
Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous
to the propositional calculus) which encoded all of the geometrical information of a space.[11] Grassmann's algebraic
system could be applied to a number of different kinds of spaces: the chief among them being Euclidean space,
affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's
algebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view,
the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra described
certain properties (or Strecken such as length, area, and volume). His contribution was to define a new product —
the geometric product — on an existing Grassmann algebra, which realized the quaternions as living within that
algebra. Subsequently Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied
them to the geometry of rotations in n dimensions. Later these developments would lead other 20th-century
mathematicians to formalize and explore the properties of the Clifford algebra.
Nevertheless, another revolutionary development of the 19th-century would completely overshadow the geometric
algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector
analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express
and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared
to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit
of choice, particularly following the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, following
lectures of Gibbs.
In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in
1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector
analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of
quaternionic analysis in vector analysis can be seen in the use of i, j, k to indicate the basis vectors of R3: it is
being thought of as the purely imaginary quaternions. From the perspective of geometric algebra, quaternions can be
identified as Cℓ+3,0(R), the even part of the Clifford algebra on Euclidean 3-space, which unifies the three
approaches.
References
[1] Clifford Algebra to Geometric Calculus, a Unified Language for Mathematics and Physics (Dordrecht/Boston: G. Reidel Publ. Co., 1984).
[2] Geometric Algebra Computing in Engineering and Computer Science, E. Bayro-Corrochano & Gerik Scheuermann (Eds), Springer 2010. Extract online at http://geocalc.clas.asu.edu/html/UAFCG.html #5 New Tools for Computational Geometry and rejuvenation of Screw Theory
[3] Dorst, Leo; Fontijne, Daniel; Mann, Stephen (2007). Geometric algebra for computer science: an object-oriented approach to geometry (http://www.geometricalgebra.net/). Amsterdam: Elsevier/Morgan Kaufmann. ISBN 978-0-12-369465-2. OCLC 132691969.
[4] Hongbo Li (2008) Invariant Algebras and Geometric Reasoning, Singapore: World Scientific. Extract online at http://www.worldscibooks.com/etextbook/6514/6514_chap01.pdf
[5] http://www.geometricalgebra.net/downloads.html
[6] http://www.clucalc.info/
[7] http://sourceforge.net/projects/g25/
[8] http://sinai.apphy.u-fukui.ac.jp/gcj/software/GAcindy-1.4/GAcindy.htm
[9] http://staff.science.uva.nl/~leo/cinderella/
[10] Hestenes, David (1966). Space-time Algebra. New York: Gordon and Breach. ISBN 0677013906. OCLC 996371.
[11] Grassmann, Hermann (1844). Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert (http://books.google.com/?id=bKgAAAAAMAAJ). Leipzig: O. Wigand. OCLC 20521674.
[12] Doran, Chris J. L. (February 1994). Geometric Algebra and its Application to Mathematical Physics (http://www.mrao.cam.ac.uk/~clifford/publications/abstracts/chris_thesis.html) (Ph.D. thesis). University of Cambridge. OCLC 53604228.
Further reading
• Baylis, W. E., ed., 1996. Clifford (Geometric) Algebra with Applications to Physics, Mathematics, and
Engineering. Boston: Birkhäuser.
• Baylis, W. E., 2002. Electrodynamics: A Modern Geometric Approach, 2nd ed. Birkhäuser. ISBN 0-8176-4025-8
• Nicolas Bourbaki, 1980. Eléments de Mathématique. Algèbre. Chpt. 9, "Algèbres de Clifford". Paris: Hermann.
• Hestenes, D., 1999. New Foundations for Classical Mechanics, 2nd ed. Springer Verlag ISBN 0-7923-5302-1
• Lasenby, J., Lasenby, A. N., and Doran, C. J. L., 2000, " A Unified Mathematical Language for Physics and
Engineering in the 21st Century (http://www.mrao.cam.ac.uk/~clifford/publications/ps/dll_millen.pdf),"
Philosophical Transactions of the Royal Society of London A 358: 1-18.
• Chris Doran & Anthony Lasenby (2003). Geometric algebra for physicists (http://assets.cambridge.org/
052148/0221/sample/0521480221WS.pdf). Cambridge University Press. ISBN 978-0-521-71595-9.
• J Bain (2006). "Spacetime structuralism: §5 Manifolds vs. geometric algebra" (http://books.google.com/
?id=OI5BySlm-IcC&pg=PT72). In Dennis Geert Bernardus Johan Dieks. The ontology of spacetime. Elsevier.
p. 54 ff. ISBN 0444527680.
External links
• Imaginary Numbers are not Real - the Geometric Algebra of Spacetime (http://www.mrao.cam.ac.uk/
~clifford/introduction/intro/intro.html). Introduction (Cambridge GA group).
• Physical Applications of Geometric Algebra (http://www.mrao.cam.ac.uk/~clifford/ptIIIcourse/). Final-year
undergraduate course by Chris Doran and Anthony Lasenby (Cambridge GA group; see also 1999 version (http://
www.mrao.cam.ac.uk/~clifford/ptIIIcourse/course99/)).
• Maths for (Games) Programmers: 5 - Multivector methods (http://www.iancgbell.clara.net/maths/).
Comprehensive introduction and reference for programmers, from Ian Bell.
• Geometric Algebra (http://planetmath.org/?op=getobj&from=objects&id=3770) on PlanetMath
• Clifford algebra, geometric algebra, and applications (http://arxiv.org/abs/0907.5356) Douglas Lundholm,
Lars Svensson Lecture notes for a course on the theory of Clifford algebras, with special emphasis on their wide
range of applications in mathematics and physics.
• IMPA Summer School 2010 (http://www.visgraf.impa.br/Courses/ga/). Fernandes Oliveira. Intro and slides.
Research groups
• Geometric Calculus International (http://sinai.apphy.u-fukui.ac.jp/gcj/gc_int.html). Links to Research
groups, Software, and Conferences, worldwide.
• Cambridge Geometric Algebra group (http://www.mrao.cam.ac.uk/~clifford/). Full-text online publications,
and other material.
• University of Amsterdam group (http://www.science.uva.nl/ga/)
• Geometric Calculus research & development (http://geocalc.clas.asu.edu/) (Arizona State University).
• GA-Net blog (http://gaupdate.wordpress.com/) and newsletter archive (http://sinai.apphy.u-fukui.ac.jp/
GA-Net/archive/index.html). Geometric Algebra/Clifford Algebra development news.
Levi-Civita symbol
The Levi-Civita symbol, also called the permutation symbol, antisymmetric symbol, or alternating symbol, is a
mathematical symbol used in particular in tensor calculus. It is named after the Italian mathematician and physicist
Tullio Levi-Civita.
Definition
In three dimensions, the Levi-Civita symbol is defined as follows:

$$\varepsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is } (1,2,3),\ (2,3,1) \text{ or } (3,1,2), \\ -1 & \text{if } (i,j,k) \text{ is } (3,2,1),\ (1,3,2) \text{ or } (2,1,3), \\ \;\;\,0 & \text{if } i=j \text{ or } j=k \text{ or } k=i, \end{cases}$$

i.e. ε_ijk is 1 if (i, j, k) is an even permutation of (1,2,3), −1 if it is an odd permutation, and 0 if any index is repeated.
A compact formula for the three-dimensional Levi-Civita symbol is

$$\varepsilon_{ijk} = \tfrac{1}{2}(i-j)(j-k)(k-i).$$
Under an orthogonal transformation of Jacobian determinant −1 (i.e., a rotation composed with a reflection), the Levi-Civita symbol acquires a minus sign; it is therefore a pseudotensor rather than a true tensor. Because the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector, not a vector.
Note that under a general coordinate change, the components of the permutation tensor get multiplied by the
Jacobian of the transformation matrix. This implies that in coordinate frames different from the one in which the
tensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor. If the frame
is orthonormal, the factor will be ±1 depending on whether the orientation of the frame is the same or not.
(In Einstein notation, the duplication of the i index implies the sum on i.)
Generalization to n dimensions
The Levi-Civita symbol can be generalized to higher dimensions:

$$\varepsilon_{i_1 i_2 \dots i_n} = \begin{cases} +1 & \text{if } (i_1, \dots, i_n) \text{ is an even permutation of } (1, 2, \dots, n), \\ -1 & \text{if } (i_1, \dots, i_n) \text{ is an odd permutation of } (1, 2, \dots, n), \\ \;\;\,0 & \text{otherwise.} \end{cases}$$

Thus, it is the sign of the permutation in the case of a permutation of (1, 2, ..., n), and zero otherwise.
The generalized formula is

$$\varepsilon_{i_1 i_2 \dots i_n} = \prod_{1 \le p < q \le n} \operatorname{sgn}(i_q - i_p),$$

whose validity follows from the facts that (a) every permutation is either even or odd, (b) (+1)² = (−1)² = 1, and (c) the permutations
of any n-element set number exactly n!.
In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual.
In general n dimensions one can write the product of two Levi-Civita symbols as a determinant of Kronecker deltas:

$$\varepsilon_{i_1 \dots i_n}\,\varepsilon_{j_1 \dots j_n} = \det\begin{pmatrix} \delta_{i_1 j_1} & \cdots & \delta_{i_1 j_n} \\ \vdots & \ddots & \vdots \\ \delta_{i_n j_1} & \cdots & \delta_{i_n j_n} \end{pmatrix}.$$
Properties
(in these examples, superscripts should be considered equivalent with subscripts)
1. In two dimensions, when all are in ,
Proofs
For equation 1, both sides are antisymmetric with respect to and . We therefore only need to consider the
case and . By substitution, we see that the equation holds for , i.e., for and
. (Both sides are then one). Since the equation is antisymmetric in and , any set of values for
these can be reduced to the above case (which holds). The equation thus holds for all values of and . Using
equation 1, we have for equation 2
.
Here we used the Einstein summation convention with going from to . Equation 3 follows similarly from
equation 2. To establish equation 4, let us first observe that both sides vanish when . Indeed, if , then
one can not choose and such that both permutation symbols on the left are nonzero. Then, with fixed,
there are only two ways to choose and from the remaining two indices. For any such indices, we have
(no summation), and the result follows. Property (5) follows since and for any
distinct indices in , we have (no summation).
Examples
1. The determinant of an n × n matrix A = (a_ij) can be written as

$$\det A = \varepsilon_{i_1 \dots i_n}\, a_{1 i_1} \cdots a_{n i_n},$$

where each index i_k is summed over 1, ..., n.

2. If a = (a¹, a², a³) and b = (b¹, b², b³) are vectors in R³, their cross product can be written componentwise as

$$(\mathbf{a} \times \mathbf{b})^i = \varepsilon^{ijk} a_j b_k.$$

For instance, the first component of a × b is a²b³ − a³b². From the above expression for the cross product, it is clear that a × b = −b × a. Further, if c is a vector like a and b, then the triple scalar product equals

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \varepsilon^{ijk} a_i b_j c_k.$$

From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any adjacent arguments. For example, a · (b × c) = −b · (a × c).

3. Suppose F = (F¹, F², F³) is a vector field defined on some open set of R³ with Cartesian coordinates x = (x¹, x², x³). Then the ith component of the curl of F equals

$$(\nabla \times \mathbf{F})^i(\mathbf{x}) = \varepsilon^{ijk} \frac{\partial}{\partial x^j} F^k(\mathbf{x}).$$
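As a quick illustration of the formulas above, here is a small Python sketch (not part of the original article) that builds the three-dimensional symbol from the permutation sign and uses it to form a cross product and a 3 × 3 determinant; indices are 0-based for convenience.

    def eps(i, j, k):
        # sign of the permutation (i, j, k) of (0, 1, 2); 0 if an index repeats
        if len({i, j, k}) < 3:
            return 0
        return 1 if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)} else -1

    def cross(a, b):
        # (a x b)^i = eps_{ijk} a_j b_k, with j and k summed over 0..2
        return [sum(eps(i, j, k) * a[j] * b[k] for j in range(3) for k in range(3))
                for i in range(3)]

    def det3(m):
        # det M = eps_{ijk} m_{1i} m_{2j} m_{3k} (rows and columns indexed from 0 here)
        return sum(eps(i, j, k) * m[0][i] * m[1][j] * m[2][k]
                   for i in range(3) for j in range(3) for k in range(3))

    print(cross([1, 3, -5], [4, -2, -1]))            # [-13, -19, -14]
    print(det3([[2, 0, 1], [1, 1, 0], [0, 3, 4]]))   # 11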
Notation
A shorthand notation for anti-symmetrization is denoted by a pair of square brackets. For example, in arbitrary dimensions, for a rank 2 covariant tensor M,

$$M_{[ab]} = \tfrac{1}{2!}\,(M_{ab} - M_{ba}).$$
Tensor density
In any arbitrary curvilinear coordinate system and even in the absence of a metric on the manifold, the Levi-Civita
symbol as defined above may be considered to be a tensor density field in two different ways. It may be regarded as
a contravariant tensor density of weight +1 or as a covariant tensor density of weight -1. In four dimensions,
Notice that the value, and in particular the sign, does not change.
Ordinary tensor
In the presence of a metric tensor field, one may define an ordinary contravariant tensor field which agrees with the
Levi-Civita symbol at each event whenever the coordinate system is such that the metric is orthonormal at that event.
Similarly, one may also define an ordinary covariant tensor field which agrees with the Levi-Civita symbol at each
event whenever the coordinate system is such that the metric is orthonormal at that event. These ordinary tensor
fields should not be confused with each other, nor should they be confused with the tensor density fields mentioned
above. One of these ordinary tensor fields may be converted to the other by raising or lowering the indices with the
metric as is usual, but a minus sign is needed if the metric signature contains an odd number of negatives. For
example, in Minkowski space (the four dimensional spacetime of special relativity)
References
• Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation, (1970) W.H. Freeman, New York;
ISBN 0-7167-0344-0. (See section 3.5 for a review of tensors in general relativity).
This article incorporates material from Levi-Civita permutation symbol on PlanetMath, which is licensed under the
Creative Commons Attribution/Share-Alike License.
Jacobi triple product
The Jacobi Triple Product also allows the Jacobi theta function to be written as an infinite product as follows:
Let and .
Then the Jacobi theta function
Using the Jacobi Triple Product Identity we can then write the theta function as the product
There are many different notations used to express the Jacobi triple product. It takes on a concise form when
expressed in terms of q-Pochhammer symbols:
Proof
This proof uses a simplified model of the Dirac sea and follows the proof in Cameron (13.3) which is attributed to
Richard Borcherds. It treats the case where the power series are formal. For the analytic case, see Apostol. The
Jacobi triple product identity can be expressed as
A level is a half-integer. The vacuum state is the set of all negative levels. A state is a set of levels whose symmetric
difference with the vacuum state is finite. The energy of the state is
An unordered choice of the presence of finitely many positive levels and the absence of finitely many negative levels
(relative to the vacuum) corresponds to a state, so the generating function for the number
of states of energy with particles can be expressed as
On the other hand, any state with particles can be obtained from the lowest energy particle state,
, by rearranging particles: take a partition of and move the top particle up
by levels, the next highest particle up by levels, etc.... The resulting state has energy , so the
where is the partition function. The uses of random partitions [2] by Andrei Okounkov contains a picture of a
partition exciting the vacuum.
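Since the displayed form of the identity was lost in extraction above, the following Python sketch (not part of the original article) numerically checks what I take to be its classical statement, ∏_{m≥1} (1 − x^{2m})(1 + x^{2m−1}y²)(1 + x^{2m−1}y^{−2}) = Σ_{n∈Z} x^{n²} y^{2n} for |x| < 1 and y ≠ 0; treat the exact form as an assumption rather than a quotation.

    def lhs(x, y, terms=200):
        # truncated triple product
        p = 1.0
        for m in range(1, terms + 1):
            p *= (1 - x**(2*m)) * (1 + x**(2*m - 1) * y**2) * (1 + x**(2*m - 1) / y**2)
        return p

    def rhs(x, y, terms=200):
        # truncated two-sided theta-type sum
        return sum(x**(n*n) * y**(2*n) for n in range(-terms, terms + 1))

    print(lhs(0.3, 1.7), rhs(0.3, 1.7))   # the two values agree to machine precision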
Notes
[1] Remmert, R. (1998). Classical Topics in Complex Function Theory (pp. 28-30). New York: Springer.
[2] http://arxiv.org/abs/math-ph/0309015
References
• See chapter 14, theorem 14.6 of Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate
Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR0434929
• Peter J. Cameron, Combinatorics: Topics, Techniques, Algorithms, (1994) Cambridge University Press, ISBN
0-521-45761-0
Rule of Sarrus
Sarrus' rule or Sarrus' scheme is a method and a
memorization scheme to compute the determinant of a 3×3
matrix. It is named after the French mathematician Pierre
Frédéric Sarrus.
Consider a 3×3 matrix

$$M = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$

Its determinant can be computed by copying the first two columns of the matrix to its right, then adding the products along the three diagonals running from upper left to lower right and subtracting the products along the three diagonals running from lower left to upper right:

$$\det M = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}.$$

A similar scheme based on diagonals works for 2×2 matrices. Both are special cases of the Leibniz formula, which however does not yield similar memorization schemes for larger matrices. Sarrus' rule can also be derived by looking at the Laplace expansion of a 3×3 matrix.
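A minimal Python sketch (not part of the original article) of the rule as stated above:

    def sarrus(m):
        # determinant of a 3x3 matrix: three "down" diagonals minus three "up" diagonals
        return (  m[0][0] * m[1][1] * m[2][2]
                + m[0][1] * m[1][2] * m[2][0]
                + m[0][2] * m[1][0] * m[2][1]
                - m[2][0] * m[1][1] * m[0][2]
                - m[2][1] * m[1][2] * m[0][0]
                - m[2][2] * m[1][0] * m[0][1])

    print(sarrus([[2, 0, 1], [1, 1, 0], [0, 3, 4]]))   # 11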
References
• Khattar, Dinesh (2010). The Pearson Guide to Complete Mathematics for AIEEE [1] (3rd ed.). Pearson Education
India. p. 6-2. ISBN 9788131721261.
• Fischer, Gerd (1985) (in German). Analytische Geometrie (4th ed.). Wiesbaden: Vieweg. p. 145.
ISBN 3528372354.
External links
• Sarrus' rule at Planetmath [2]
• Linear Algebra: Rule of Sarrus of Determinants [3] at khanacademy.org
References
[1] http://books.google.de/books?id=7cwSfkQYJ_EC&pg=SA6-PA2
[2] http://planetmath.org/encyclopedia/RuleOfSarrus.html
[3] http://www.youtube.com/watch?v=4xFIi0JF2AM
Laplace expansion
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an
expression for the determinant |B| of an n × n square matrix B that is a weighted sum of the determinants of n
sub-matrices of B, each of size (n–1) × (n–1). The Laplace expansion is of theoretical interest as one of several ways
to view the determinant, as well as of practical use in determinant computation.
The i,j cofactor of B is the scalar Cij defined by

$$C_{ij} = (-1)^{i+j} \det(M_{ij}),$$
where Mij is the i,j minor matrix of B, that is, the (n–1) × (n–1) matrix that results from deleting the i-th row and the
j-th column of B.
Then the Laplace expansion is given by the following
Theorem. Suppose B = (bij) is an n × n matrix and i, j ∈ {1, 2, ..., n}.
Then its determinant |B| is given by:

$$|B| = \sum_{j'=1}^{n} b_{i j'} C_{i j'} = \sum_{i'=1}^{n} b_{i' j} C_{i' j}.$$
Examples
Consider the matrix
The determinant of this matrix can be computed by using the Laplace expansion along the first row:
It is easy to see that the result is correct: the matrix is singular because the sum of its first and third column is twice
the second column, and hence its determinant is zero.
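Because the worked example above lost its matrix in extraction, here is a small Python sketch (not part of the original article) of a recursive Laplace expansion along the first row; the sample matrix is an illustrative choice whose first and third columns sum to twice the second, so its determinant is zero, matching the remark above.

    def minor(m, i, j):
        # delete row i and column j
        return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

    def det(m):
        # cofactor expansion along the first row
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

    print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # 0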
Proof
Suppose is an n × n matrix and For clarity we also label the entries of that compose
its minor matrix as
for
Consider the terms in the expansion of that have as a factor. Each has the form
for some permutation τ ∈ Sn with , and a unique and evidently related permutation which
selects the same minor entries as Similarly each choice of determines a corresponding i.e. the
correspondence is a bijection between and The permutation can be
derived from as follows.
Define by for and . Then and
References
• David Poole: Linear Algebra. A Modern Introduction. Cengage Learning 2005, ISBN 0534998453, p. 265-267 (
restricted online copy [1] at Google Books)
• Harvey E. Rose: Linear Algebra. A Pure Mathematical Approach. Springer 2002, ISBN 3764369051, p. 57-60 (
restricted online copy [2] at Google Books)
External links
• Laplace expansion [3] at PlanetMath
References
[1] http://books.google.com/books?id=oBk3u2fDFc8C&pg=PA265
[2] http://books.google.com/books?id=mTdAj-Yn4L4C&pg=PA57
[3] http://planetmath.org/encyclopedia/LaplaceExpansion2.html
Lie algebra
In mathematics, a Lie algebra ( /ˈliː/, not /ˈlaɪ/) is an algebraic structure whose main use is in studying geometric
objects such as Lie groups and differentiable manifolds. Lie algebras were introduced to study the concept of
infinitesimal transformations. The term "Lie algebra" (after Sophus Lie) was introduced by Hermann Weyl in the
1930s. In older texts, the name "infinitesimal group" is used.
Definition
A Lie algebra is a vector space 𝔤 over some field F together with a binary operation [·, ·] : 𝔤 × 𝔤 → 𝔤, called the Lie bracket, which satisfies the following axioms:
• Bilinearity: [ax + by, z] = a[x, z] + b[y, z] and [z, ax + by] = a[z, x] + b[z, y] for all scalars a, b in F and all x, y, z in 𝔤.
• The alternating property: [x, x] = 0 for all x in 𝔤.
• The Jacobi identity: [x, [y, z]] + [z, [x, y]] + [y, [z, x]] = 0 for all x, y, z in 𝔤.
Note that the bilinearity and alternating properties imply anticommutativity, i.e., [x, y] = −[y, x] for all elements
x, y in 𝔤, while anticommutativity only implies the alternating property if the field's characteristic is not 2.[1]
For any associative algebra A with multiplication ∗, one can construct a Lie algebra L(A). As a vector space, L(A) is the same as A. The Lie bracket of two elements of L(A) is defined to be their commutator in A:

[a, b] = a ∗ b − b ∗ a.

The associativity of the multiplication ∗ in A implies the Jacobi identity of the commutator in L(A). In particular, the associative algebra of n × n matrices over a field F gives rise to the general linear Lie algebra gl(n, F). The associative algebra A is called an enveloping algebra of the Lie algebra L(A). It is known that every Lie algebra can be embedded into one that arises from an associative algebra in this fashion. See universal enveloping algebra.
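A brief numerical sketch (not part of the original article) of the construction just described, using the matrix commutator:

    import numpy as np

    def bracket(x, y):
        # the commutator [X, Y] = XY - YX
        return x @ y - y @ x

    rng = np.random.default_rng(0)
    x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

    jacobi = bracket(x, bracket(y, z)) + bracket(z, bracket(x, y)) + bracket(y, bracket(z, x))
    print(np.allclose(jacobi, 0))           # True: the Jacobi identity holds
    print(np.allclose(bracket(x, x), 0))    # True: the alternating property holds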
A subspace I of a Lie algebra 𝔤 that satisfies [𝔤, I] ⊆ I is called an ideal in the Lie algebra 𝔤.[2] A Lie algebra in which the commutator is not identically zero and which has no proper ideals is called simple. A homomorphism between two Lie algebras (over the same ground field) is a linear map that is compatible with the commutators:

f([x, y]) = [f(x), f(y)]

for all elements x and y in 𝔤. As in the theory of associative rings, ideals are precisely the kernels of homomorphisms: given a Lie algebra 𝔤 and an ideal I in it, one constructs the factor algebra 𝔤/I, and the first isomorphism theorem holds for Lie algebras. Given two Lie algebras 𝔤 and 𝔤′, their direct sum is the vector space 𝔤 ⊕ 𝔤′ consisting of the pairs (x, x′), with the operation

[(x, x′), (y, y′)] = ([x, y], [x′, y′]).
Examples
• Any vector space V endowed with the identically zero Lie bracket becomes a Lie algebra. Such Lie algebras are
called abelian, cf. below. Any one-dimensional Lie algebra over a field is abelian, by the antisymmetry of the Lie
bracket.
• The three-dimensional Euclidean space R3 with the Lie bracket given by the cross product of vectors becomes a
three-dimensional Lie algebra.
• The Heisenberg algebra is a three-dimensional Lie algebra with generators x, y, z (see also the definition at Generating set) whose Lie brackets are [x, y] = z and [x, z] = [y, z] = 0; it can be realized as the algebra of 3×3 strictly upper-triangular matrices.
for all real numbers t. The Lie bracket of is given by the commutator of matrices. As a concrete example,
consider the special linear group SL(n,R), consisting of all n × n matrices with real entries and determinant 1.
This is a matrix Lie group, and its Lie algebra consists of all n × n matrices with real entries and trace 0.
• The real vector space of all n × n skew-hermitian matrices is closed under the commutator and forms a real Lie
algebra denoted . This is the Lie algebra of the unitary group U(n).
• An important class of infinite-dimensional real Lie algebras arises in differential topology. The space of smooth
vector fields on a differentiable manifold M forms a Lie algebra, where the Lie bracket is defined to be the
commutator of vector fields. One way of expressing the Lie bracket is through the formalism of Lie derivatives,
which identifies a vector field X with a first order partial differential operator LX acting on smooth functions by
letting LX(f) be the directional derivative of the function f in the direction of X. The Lie bracket [X,Y] of two
vector fields is the vector field defined through its action on functions by the formula:

$$L_{[X,Y]} f = L_X (L_Y f) - L_Y (L_X f).$$
A Lie algebra 𝔤 is nilpotent if the lower central series

𝔤 ⊇ [𝔤, 𝔤] ⊇ [[𝔤, 𝔤], 𝔤] ⊇ [[[𝔤, 𝔤], 𝔤], 𝔤] ⊇ ...

becomes zero eventually. By Engel's theorem, a Lie algebra is nilpotent if and only if for every u in 𝔤 the adjoint endomorphism

ad(u)(v) = [u, v]

is nilpotent.
More generally still, a Lie algebra is said to be solvable if the derived series

𝔤 ⊇ [𝔤, 𝔤] ⊇ [[𝔤, 𝔤], [𝔤, 𝔤]] ⊇ ...

becomes zero eventually.
Classification
In many ways, the classes of semisimple and solvable Lie algebras are at the opposite ends of the full spectrum of
the Lie algebras. The Levi decomposition expresses an arbitrary Lie algebra as a semidirect sum of its solvable
radical and a semisimple Lie algebra, almost in a canonical way. Semisimple Lie algebras over an algebraically
closed field have been completely classified through their root systems. The classification of solvable Lie algebras is
a 'wild' problem, and cannot be accomplished in general.
Cartan's criterion gives conditions for a Lie algebra to be nilpotent, solvable, or semisimple. It is based on the notion of the Killing form, a symmetric bilinear form on 𝔤 defined by the formula

K(x, y) = tr(ad(x) ad(y)),

where tr denotes the trace of a linear operator. A Lie algebra is semisimple if and only if the Killing form is nondegenerate. A Lie algebra is solvable if and only if K(𝔤, [𝔤, 𝔤]) = 0.
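As a small illustration (not part of the original article), the sketch below computes the Killing form of the three-dimensional Lie algebra R³ with the cross product as bracket (isomorphic to so(3)) and checks that it is nondegenerate, consistent with semisimplicity; the basis and sign conventions are choices made for this example.

    import numpy as np

    def ad(v):
        # matrix of ad(v) = v x (.) in the standard basis of R^3
        x, y, z = v
        return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

    basis = np.eye(3)
    K = np.array([[np.trace(ad(u) @ ad(v)) for v in basis] for u in basis])
    print(K)                          # -2 times the identity matrix
    print(np.linalg.det(K) != 0)      # True: the Killing form is nondegenerate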
Notes
[1] Humphreys p. 1
[2] Due to the anticommutativity of the commutator, the notions of a left and right ideal in a Lie algebra coincide.
[3] Humphreys p.2
References
• Hall, Brian C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer, 2003. ISBN
0-387-40122-9
• Erdmann, Karin & Wildon, Mark. Introduction to Lie Algebras, 1st edition, Springer, 2006. ISBN 1-84628-040-0
• Humphreys, James E. Introduction to Lie Algebras and Representation Theory, Second printing, revised.
Graduate Texts in Mathematics, 9. Springer-Verlag, New York, 1978. ISBN 0-387-90053-5
• Jacobson, Nathan, Lie algebras, Republication of the 1962 original. Dover Publications, Inc., New York, 1979.
ISBN 0-486-63832-4
• Kac, Victor G. et al. Course notes for MIT 18.745: Introduction to Lie Algebras, http://www-math.mit.edu/
~lesha/745lec/
• O'Connor, J. J. & Robertson, E.F. Biography of Sophus Lie, MacTutor History of Mathematics Archive, http://
www-history.mcs.st-and.ac.uk/Biographies/Lie.html
• O'Connor, J. J. & Robertson, E.F. Biography of Wilhelm Killing, MacTutor History of Mathematics Archive,
http://www-history.mcs.st-and.ac.uk/Biographies/Killing.html
• Steeb, W.-H. Continuous Symmetries, Lie Algebras, Differential Equations and Computer Algebra, second
edition, World Scientific, 2007, ISBN 978-981-270-809-0
• Varadarajan, V. S. Lie Groups, Lie Algebras, and Their Representations, 1st edition, Springer, 2004. ISBN
0-387-90969-9
Orthogonal group
In mathematics, the orthogonal group of degree n over a field F (written as O(n,F)) is the group of n-by-n
orthogonal matrices with entries from F, with the group operation of matrix multiplication. This is a subgroup of the
general linear group GL(n,F) given by
where QT is the transpose of Q. The classical orthogonal group over the real numbers is usually just written O(n).
More generally the orthogonal group of a non-singular quadratic form over F is the group of linear operators
preserving the form – the above group O(n, F) is then the orthogonal group of the sum-of-n-squares quadratic
form.[1] The Cartan–Dieudonné theorem describes the structure of the orthogonal group for non-singular form. This
article only discusses definite forms – the orthogonal group of the positive definite form (equivalent to sum of n
squares) and negative definite forms (equivalent to the negative sum of n squares) are identical – O(n, 0) = O(0, n) – though the associated Pin groups differ; for other non-singular forms O(p,q), see indefinite
orthogonal group.
Every orthogonal matrix has determinant either 1 or −1. The orthogonal n-by-n matrices with determinant 1 form a
normal subgroup of O(n,F) known as the special orthogonal group, SO(n,F). (More precisely, SO(n,F) is the kernel
of the Dickson invariant, discussed below.) By analogy with GL/SL (general linear group, special linear group), the
orthogonal group is sometimes called the general orthogonal group and denoted GO, though this term is also
sometimes used for indefinite orthogonal groups O(p,q).
The derived subgroup Ω(n,F) of O(n,F) is an often studied object because when F is a finite field Ω(n,F) is often a
central extension of a finite simple group.
Both O(n,F) and SO(n,F) are algebraic groups, because the condition that a matrix be orthogonal, i.e. have its own
transpose as inverse, can be expressed as a set of polynomial equations in the entries of the matrix.
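A short numerical sketch (not part of the original article) of these membership conditions:

    import numpy as np

    def is_orthogonal(q, tol=1e-10):
        # Q belongs to O(n) iff Q^T Q = I
        return np.allclose(q.T @ q, np.eye(q.shape[0]), atol=tol)

    def is_special_orthogonal(q, tol=1e-10):
        # Q belongs to SO(n) iff additionally det Q = +1
        return is_orthogonal(q, tol) and np.isclose(np.linalg.det(q), 1.0, atol=tol)

    theta = 0.7
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    refl = np.array([[1.0, 0.0], [0.0, -1.0]])
    print(is_special_orthogonal(rot))                          # True: a rotation
    print(is_orthogonal(refl), is_special_orthogonal(refl))    # True False: a reflection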
where the matrices R1,...,Rk are 2-by-2 rotation matrices in orthogonal planes of rotation. As a special case, known as
Euler's rotation theorem, any (non-identity) element of SO(3,R) is a rotation about a uniquely defined axis.
The orthogonal group is generated by reflections (two reflections give a rotation), as in a Coxeter group,[2] and
elements have length at most n (require at most n reflections to generate; this follows from the above classification,
noting that a rotation is generated by 2 reflections, and is true more generally for indefinite orthogonal groups, by the
Cartan–Dieudonné theorem). A longest element (element needing the most reflections) is reflection through the
origin (the map ), though so are other maximal combinations of rotations (and a reflection, in odd
dimension).
The symmetry group of a circle is O(2,R), also called Dih (S1), where S1 denotes the multiplicative group of complex
numbers of absolute value 1.
SO(2,R) is isomorphic (as a Lie group) to the circle S1 (circle group). This isomorphism sends the complex number exp(φi) = cos(φ) + i sin(φ) to the orthogonal matrix

$$\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}.$$
The group SO(3,R), understood as the set of rotations of 3-dimensional space, is of major importance in the sciences
and engineering. See rotation group and the general formula for a 3 × 3 rotation matrix in terms of the axis and the
angle.
In terms of algebraic topology, for n > 2 the fundamental group of SO(n,R) is cyclic of order 2, and the spinor group
Spin(n) is its universal cover. For n = 2 the fundamental group is infinite cyclic and the universal cover corresponds
to the real line (the spinor group Spin(2) is the unique 2-fold cover).
Lie algebra
The Lie algebra associated to the Lie groups O(n,R) and SO(n,R) consists of the skew-symmetric real n-by-n
matrices, with the Lie bracket given by the commutator. This Lie algebra is often denoted by o(n,R) or by so(n,R),
and called the orthogonal Lie algebra or special orthogonal Lie algebra. These Lie algebras are the compact real
forms of two of the four families of semisimple Lie algebras: in odd dimension 2k + 1 one obtains so(2k+1), of type B_k, while in even dimension 2k one obtains so(2k), of type D_k.
More intrinsically, given a vector space with an inner product, the special orthogonal Lie algebra is given by the
bivectors on the space, which are sums of simple bivectors (2-blades) . The correspondence is given by the
map where is the covector dual to the vector v; in coordinates these are exactly
the elementary skew-symmetric matrices.
This characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation
or "curl", hence the name. Generalizing the inner product with a nondegenerate form yields the indefinite orthogonal
Lie algebras
The representation theory of the orthogonal Lie algebras includes both representations corresponding to linear
representations of the orthogonal groups, and representations corresponding to projective representations of the
orthogonal groups (linear representations of spin groups), the so-called spin representation, which are important in
physics.
Conformal group
Being isometries (preserving distances), orthogonal transforms also preserve angles, and are thus conformal maps,
though not all conformal linear transforms are orthogonal. In classical terms this is the difference between
congruence and similarity, as exemplified by SSS (Side-Side-Side) congruence of triangles and AAA
(Angle-Angle-Angle) similarity of triangles. The group of conformal linear maps of Rn is denoted CO(n) for the
conformal orthogonal group, and consists of the product of the orthogonal group with the group of dilations. If n is
odd, these two subgroups do not intersect, and they are a direct product: CO(2k+1) = O(2k+1) × R∗, where R∗ is the group of non-zero real scalars, while if n is even, these subgroups intersect in ±I, so this is not a direct product, but it is a direct product with the subgroup of dilation by a positive scalar: CO(2k) = O(2k) × R⁺.
Similarly one can define CSO(n); note that this is always CSO(n) = CO(n) ∩ GL⁺(n) = SO(n) × R⁺.
Topology
Low dimensional
The low dimensional (real) orthogonal groups are familiar spaces:
Homotopy groups
The homotopy groups of the orthogonal group are related to homotopy groups of spheres, and thus are in general
hard to compute.
However, one can compute the homotopy groups of the stable orthogonal group (aka the infinite orthogonal group),
defined as the direct limit of the sequence of inclusions
(as the inclusions are all closed inclusions, hence cofibrations, this can also be interpreted as a union).
is a homogeneous space for , and one has the following fiber bundle:
which can be understood as "The orthogonal group acts transitively on the unit sphere , and the
stabilizer of a point (thought of as a unit vector) is the orthogonal group of the perpendicular complement, which is
an orthogonal group one dimension lower". The map is the natural inclusion.
Thus the inclusion is (n − 1)-connected, so the homotopy groups stabilize, and
for : thus the homotopy groups of the stable space equal the lower homotopy
groups of the unstable spaces.
Via Bott periodicity, Ω⁸O ≃ O; thus the homotopy groups of O are 8-fold periodic, meaning π_{k+8}(O) = π_k(O), and one needs only to compute the lower 8 homotopy groups to compute them all.
Relation to KO-theory
Via the clutching construction, homotopy groups of the stable space O are identified with stable vector bundles on
spheres (up to isomorphism), with a dimension shift of 1: .
Setting (to make fit into the periodicity), one obtains:
Low-dimensional groups
The first few homotopy groups can be calculated by using the concrete descriptions of low-dimensional groups.
• from orientation-preserving/reversing (this class survives to and hence
stably)
yields
• , which is spin
• , which surjects onto ; this latter thus vanishes.
Lie groups
From general facts about Lie groups, always vanishes, and is free (free abelian).
Vector bundles
From the vector bundle point of view, is vector bundles over , which is two points. Thus over each
point, the bundle is trivial, and the non-triviality of the bundle is the difference between the dimensions of the vector
spaces over the two points, so
is dimension.
Loop spaces
Using concrete descriptions of the loop spaces in Bott periodicity, one can interpret higher homotopy of O as lower
homotopy of simple to analyze spaces. Using , O and O/U have two components, and
have components, and the rest are connected.
where are hyperbolic lines and contains no singular vectors. If , then is of plus type. If
then has odd dimension. If has dimension 2, is of minus type.
In the special case where n = 1, is a dihedral group of order .
We have the following formulas for the order of these groups, O(n,q) = { A in GL(n,q) : A·At=I }, when the
characteristic is greater than two
If −1 is a square in
If −1 is a nonsquare in
Here μ2 is the algebraic group of square roots of 1; over a field of characteristic not 2 it is roughly the same as a
two-element group with trivial Galois action. The connecting homomorphism from H0(OV), which is simply the
group OV(F) of F-valued points, to H1(μ2) is essentially the spinor norm, because H1(μ2) is isomorphic to the
multiplicative group of the field modulo squares.
There is also the connecting homomorphism from H1 of the orthogonal group, to the H2 of the kernel of the spin
covering. The cohomology is non-abelian, so that this is as far as we can go, at least with the conventional
definitions.
Related groups
The orthogonal groups and special orthogonal groups have a number of important subgroups, supergroups, quotient
groups, and covering groups. These are listed below.
The inclusions and are part of a
sequence of 8 inclusions used in a geometric proof of the Bott periodicity theorem, and the corresponding quotient
spaces are symmetric spaces of independent interest – for example, is the Lagrangian Grassmannian.
Lie subgroups
In physics, particularly in the areas of Kaluza–Klein compactification, it is important to find out the subgroups of the
orthogonal group. The main ones are:
– preserves an axis
– U(n) are those that preserve a compatible complex structure or a compatible
symplectic structure – see 2-out-of-3 property; SU(n) also preserves a complex orientation.
Lie supergroups
The orthogonal group O(n) is also an important subgroup of various Lie groups:
Discrete subgroups
As the orthogonal group is compact, discrete subgroups are equivalent to finite subgroups.[6] These subgroups are
known as point group and can be realized as the symmetry groups of polytopes. A very important class of examples
are the finite Coxeter groups, which include the symmetry groups of regular polytopes.
Dimension 3 is particularly studied – see point groups in three dimensions, polyhedral groups, and list of spherical
symmetry groups. In 2 dimensions, the finite groups are either cyclic or dihedral – see point groups in two
dimensions.
Other finite subgroups include:
• Permutation matrices (the Coxeter group An)
• Signed permutation matrices (the Coxeter group Bn); also equals the intersection of the orthogonal group with the
integer matrices.[7]
Notes
[1] Away from 2, it is equivalent to use bilinear forms or quadratic forms, but at 2 these differ – notably in characteristic 2, but also when
generalizing to rings where 2 is not invertible, most significantly the integers, where the notions of even and odd quadratic forms arise.
[2] The analogy is stronger: Weyl groups, a class of (representations of) Coxeter groups, can be considered as simple algebraic groups over the
field with one element, and there are a number of analogies between algebraic groups and vector spaces on the one hand, and Weyl groups and
sets on the other.
[3] John Baez "This Week's Finds in Mathematical Physics" week 105 (http://math.ucr.edu/home/baez/week105.html)
[4] (Taylor 1992, page 160)
[5] (Grove 2002, Theorem 6.6 and 14.16)
[6] Infinite subsets of a compact space have an accumulation point and are not discrete.
[7] equals the signed permutation matrices because an integer vector of norm 1 must have a single non-zero entry,
which must be ±1 (if it has two non-zero entries or a larger entry, the norm will be larger than 1), and in an orthogonal matrix these entries
must be in different coordinates, which is exactly the signed permutation matrices.
[8] In odd dimension, SO(2k+1) = PSO(2k+1) is centerless (but not simply connected), while in even dimension SO(2k) is neither centerless
nor simply connected.
References
• Grove, Larry C. (2002), Classical groups and geometric algebra, Graduate Studies in Mathematics, 39,
Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2019-3, MR1859189
• Taylor, Donald E. (1992), The Geometry of the Classical Groups, 9, Berlin: Heldermann Verlag,
ISBN 3-88538-009-9, MR1189139
External links
• John Baez "This Week's Finds in Mathematical Physics" week 105 (http://math.ucr.edu/home/baez/week105.
html)
• John Baez on Octonions (http://math.ucr.edu/home/baez/octonions/node10.html)
• (Italian) n-dimensional Special Orthogonal Group parametrization (http://ansi.altervista.org)
Rotation group
In mechanics and geometry, the rotation group is the group of all rotations about the origin of three-dimensional
Euclidean space R3 under the operation of composition.[1] By definition, a rotation about the origin is a linear
transformation that preserves length of vectors (it is an isometry) and preserves orientation (i.e. handedness) of
space. A length-preserving transformation which reverses orientation is an improper rotation, that is a reflection or
more generally a rotoinversion.
Composing two rotations results in another rotation; every rotation has a unique inverse rotation; and the identity
map satisfies the definition of a rotation. Owing to the above properties, the set of all rotations is a group under
composition. Moreover, the rotation group has a natural manifold structure for which the group operations are
smooth; so it is in fact a Lie group. The rotation group is often denoted SO(3) for reasons explained below.
It follows that any length-preserving transformation in R3 preserves the dot product, and thus the angle between
vectors. Rotations are often defined as linear transformations that preserve the inner product on R3. This is
equivalent to requiring them to preserve length.
Every rotation matrix R must therefore satisfy

$$R^{T} R = R R^{T} = I,$$

where RT denotes the transpose of R and I is the 3 × 3 identity matrix. Matrices for which this property holds are
called orthogonal matrices. The group of all 3 × 3 orthogonal matrices is denoted O(3), and consists of all proper and
improper rotations.
In addition to preserving length, proper rotations must also preserve orientation. A matrix will preserve or reverse
orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix R,
note that det RT = det R implies (det R)2 = 1 so that det R = ±1. The subgroup of orthogonal matrices with
determinant +1 is called the special orthogonal group, denoted SO(3).
Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since
composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special
orthogonal group SO(3).
Improper rotations correspond to orthogonal matrices with determinant −1, and they do not form a group because the
product of two improper rotations is a proper rotation.
Group structure
The rotation group is a group under function composition (or equivalently the product of linear transformations). It is
a subgroup of the general linear group consisting of all invertible linear transformations of Euclidean space.
Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference.
For example, a quarter turn around the positive x-axis followed by a quarter turn around the positive y-axis is a
different rotation than the one obtained by first rotating around y and then x.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper
rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Axis of rotation
Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of R3 which is called
the axis of rotation (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in
the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary
3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis.
(Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or
counterclockwise with respect to this orientation).
For example, counterclockwise rotation about the positive z-axis by angle φ is given by

$$R_z(\varphi) = \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Given a unit vector n in R3 and an angle φ, let R(φ, n) represent a counterclockwise rotation about the axis through n
(with orientation determined by n). Then
• R(0, n) is the identity transformation for any n
• R(φ, n) = R(−φ, −n)
• R(π + φ, n) = R(π − φ, −n).
Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ π
and a unit vector n such that
• n is arbitrary if φ = 0
• n is unique if 0 < φ < π
• n is unique up to a sign if φ = π (that is, the rotations R(π, ±n) are identical).
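The axis–angle description above can be made concrete with Rodrigues' rotation formula; the following Python sketch (not part of the original article) builds R(φ, n) and checks one of the listed properties.

    import numpy as np

    def rotation(phi, n):
        # Rodrigues' formula: R = I + sin(phi) K + (1 - cos(phi)) K^2,
        # where K is the cross-product matrix of the unit axis n
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        k = np.array([[0.0, -n[2], n[1]],
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
        return np.eye(3) + np.sin(phi) * k + (1 - np.cos(phi)) * (k @ k)

    r = rotation(np.pi / 3, [0, 0, 1])
    print(np.allclose(r.T @ r, np.eye(3)), np.isclose(np.linalg.det(r), 1.0))    # True True
    print(np.allclose(rotation(0.4, [1, 2, 2]), rotation(-0.4, [-1, -2, -2])))   # R(phi, n) = R(-phi, -n)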
Topology
Consider the solid ball in R3 of radius π (that is, all points of R3 of distance π or less from the origin). Given the
above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle
equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the
ball. A rotation through an angle between 0 and −π corresponds to the point on the same axis, at the same distance from the origin, but on the opposite side of the origin. The one remaining issue is that the two rotations through π and through −π are
the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we
arrive at a topological space homeomorphic to the rotation group.
Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to
the rotation group. It is also diffeomorphic to the real 3-dimensional projective space RP3, so the latter can also serve
as a topological model for the rotation group.
These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with
antipodal surface points identified, consider the path running from the "north pole" straight through the interior down
to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be
shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else
the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the
z-axis starting and ending at the identity rotation (i.e. a series of rotation through an angle φ where φ runs from 0 to
2π).
Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north
pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole,
so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths
continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then
be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the
surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole
without problems.
The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of
order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects
known as spinors, and is an important tool in the development of the spin-statistics theorem.
The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary
group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of unit quaternions
(i.e. those with absolute value 1). The connection between quaternions and rotations, commonly exploited in
computer graphics, is explained in quaternions and spatial rotations. The map from S3 onto SO(3) that identifies
antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a
two-to-one covering map.
Lie algebra
Since SO(3) is a Lie subgroup of the general linear group GL(3), its Lie algebra can be identified with a Lie
subalgebra of gl(3), the algebra of 3×3 matrices with the commutator given by

[A, B] = AB − BA.

Differentiating the orthogonality condition at the identity shows that a tangent matrix A must satisfy A + Aᵀ = 0, and so the Lie algebra so(3) consists of all skew-symmetric 3×3 matrices.
Representations of rotations
We have seen that there are a variety of ways to represent rotations:
• as orthogonal matrices with determinant 1,
• by axis and rotation angle
• in quaternion algebra with versors and the map S3 → SO(3) (see quaternions and spatial rotations).
Another method is to specify an arbitrary rotation by a sequence of rotations about some fixed axes. See:
• Euler angles
See charts on SO(3) for further discussion.
Generalizations
The rotation group generalizes quite naturally to n-dimensional Euclidean space, Rn. The group of all proper and
improper rotations in n dimensions is called the orthogonal group, O(n), and the subgroup of proper rotations is
called the special orthogonal group, SO(n).
In special relativity, one works in a 4-dimensional vector space, known as Minkowski space rather than
3-dimensional Euclidean space. Unlike Euclidean space, Minkowski space has an inner product with an indefinite
signature. However, one can still define generalized rotations which preserve this inner product. Such generalized
rotations are known as Lorentz transformations and the group of all such transformations is called the Lorentz group.
The rotation group SO(3) can be described as a subgroup of E+(3), the Euclidean group of direct isometries of R3.
This larger group is the group of all motions of a rigid body: each of these is a combination of a rotation about an
arbitrary axis and a translation along the axis, or put differently, a combination of an element of SO(3) and an
arbitrary translation.
In general, the rotation group of an object is the symmetry group within the group of direct isometries; in other
words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same
as the full symmetry group.
Notes
[1] Jacobson (2009), p. 34, Ex. 14.
References
• A. W. Joshi Elements of Group Theory for Physicists (2007 New Age International) pp. 111ff.
• Weisstein, Eric W., " Rotation Group (http://mathworld.wolfram.com/RotationGroup.html)" from
MathWorld.
• Mathematical Methods in the Physical Sciences by Mary L Boas pp. 120,127,129,155ff and 535
• Jacobson, Nathan (2009), Basic algebra, 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1
Vector-valued function
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. Often the input of a vector-valued function is a scalar, but in general the input can be a vector of complex or real variables.
[Figure: a graph of the vector-valued function r(t) = <2 cos t, 4 sin t, t>, indicating a range of solutions and the vector when evaluated near t = 19.5.]
Example
A common example of a vector-valued function is one that depends on a single real number parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, this specific type of vector-valued function is given by expressions such as
• r(t) = f(t)i + g(t)j + h(t)k or
• r(t) = f(t)i + g(t)j,
where f(t), g(t) and h(t) are the coordinate functions of the parameter t. The vector r(t) has its tail at the origin and
its head at the coordinates evaluated by the function.
The vector shown in the graph to the right is the evaluation of the function near t=19.5 (between 6π and 6.5π; i.e.,
somewhat more than 3 rotations). The spiral is the path traced by the tip of the vector as t increases from zero
through 8π.
Vector functions can also be referred to in a different notation:
• r(t) = <f(t), g(t), h(t)> or
• r(t) = <f(t), g(t)>
Properties
The domain of a vector-valued function is the intersection of the domain of the functions f, g, and h.
The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then
the derivative is the velocity of the particle
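As a small illustration (not part of the original article), the sketch below evaluates the function r(t) = <2 cos t, 4 sin t, t> from the figure and approximates its derivative, i.e. the velocity, by a central difference.

    import numpy as np

    def r(t):
        return np.array([2 * np.cos(t), 4 * np.sin(t), t])

    def velocity(t, h=1e-6):
        # central-difference approximation of dr/dt
        return (r(t + h) - r(t - h)) / (2 * h)

    t = 19.5
    print(r(t))          # the position plotted near t = 19.5 in the figure
    print(velocity(t))   # approximately <-2 sin t, 4 cos t, 1>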
Partial derivative
The partial derivative of a vector function a with respect to a scalar variable q is defined as[1]

$$\frac{\partial \mathbf{a}}{\partial q} = \sum_{i=1}^{3} \frac{\partial a_i}{\partial q}\,\mathbf{e}_i,$$
where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot
product. The vectors e1,e2,e3 form an orthonormal basis fixed in the reference frame in which the derivative is being
taken.
Ordinary derivative
If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the
first ordinary time derivative of a with respect to t,[1]

$$\frac{d\mathbf{a}}{dt} = \sum_{i=1}^{3} \frac{d a_i}{dt}\,\mathbf{e}_i.$$
Total derivative
If the vector a is a function of a number n of scalar variables qr (r = 1,...,n), and each qr is only a function of time t,
then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as[1]
Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs
from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the
variables qr.
Reference frames
Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a
vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is
not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be
computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice
of reference frame will, in general, produce a different derivative function. The derivative functions in different
reference frames have a specific kinematical relationship.
where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is
taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame
where e1,e2,e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is
equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself.[1] Thus,
after substitution, the formula relating the derivative of a vector function in two reference frames is[1]
where NωE is the angular velocity of the reference frame E relative to the reference frame N.
One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in
the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in
inertial reference frame N of a rocket R located at position rR can be found using the formula
where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of
position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution,
where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
In the case of dot multiplication, for two vectors a and b that are both functions of q,[1]

$$\frac{\partial}{\partial q}(\mathbf{a} \cdot \mathbf{b}) = \frac{\partial \mathbf{a}}{\partial q} \cdot \mathbf{b} + \mathbf{a} \cdot \frac{\partial \mathbf{b}}{\partial q}.$$

Similarly, the derivative of the cross product of two vector functions is[1]

$$\frac{\partial}{\partial q}(\mathbf{a} \times \mathbf{b}) = \frac{\partial \mathbf{a}}{\partial q} \times \mathbf{b} + \mathbf{a} \times \frac{\partial \mathbf{b}}{\partial q}.$$
Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., with domain Rⁿ, or even Y, where Y is an infinite-dimensional vector space).
N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed
componentwise: if
Notes
[1] Kane & Levinson 1996, p. 29–37
[2] In fact, these relations are derived applying the product rule componentwise.
References
• Kane, Thomas R.; Levinson, David A. (1996), "1–9 Differentiation of Vector Functions", Dynamics Online,
Sunnyvale, California: OnLine Dynamics, Inc., pp. 29–37
External links
• Vector-valued functions and their properties (from Lake Tahoe Community College) (http://ltcconline.net/
greenl/courses/202/vectorFunctions/vectorFunctions.htm)
• Weisstein, Eric W., " Vector Function (http://mathworld.wolfram.com/VectorFunction.html)" from
MathWorld.
• Everything2 article (http://www.everything2.com/index.pl?node_id=1525585)
• 3 Dimensional vector-valued functions (from East Tennessee State University) (http://math.etsu.edu/
MultiCalc/Chap1/Chap1-6/part1.htm)
Gramian matrix
In linear algebra, the Gramian matrix (or Gram matrix or Gramian) of a set of vectors v1, ..., vn in an inner product space is the Hermitian matrix of inner products, whose entries are given by G_ij = ⟨v_i, v_j⟩.
An important application is to compute linear independence: a set of vectors is linearly independent if and only if the
Gram determinant (the determinant of the Gram matrix) is non-zero.
It is named after Jørgen Pedersen Gram.
Examples
Most commonly, the vectors are elements of a Euclidean space, or are functions in an L² space, such as continuous functions on a compact interval [a, b] (which are a subspace of L²([a, b])).
Given real-valued functions ℓ1, ..., ℓn on the interval [a, b], the Gram matrix G = (G_ij) is given by the standard inner product on functions:

$$G_{ij} = \int_a^b \ell_i(t)\,\ell_j(t)\,dt.$$
Given a real matrix A, the matrix ATA is a Gram matrix (of the columns of A), while the matrix AAT is the Gram
matrix of the rows of A.
For a general bilinear form B on a finite-dimensional vector space over any field we can define a Gram matrix G attached to a set of vectors v1, ..., vn by G_ij = B(v_i, v_j). The matrix will be symmetric if the bilinear form B is symmetric.
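A brief numerical sketch (not part of the original article) of the Gram matrices of the columns and rows of a real matrix, and of the Gram-determinant test for linear independence; the matrix is an illustrative choice.

    import numpy as np

    a = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [1.0, 3.0]])

    gram_cols = a.T @ a     # Gram matrix of the columns of A
    gram_rows = a @ a.T     # Gram matrix of the rows of A

    print(gram_cols)
    print(np.linalg.det(gram_cols) != 0)   # True: the two columns are linearly independent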
Applications
• If the vectors are centered random variables, the Gramian is proportional to the covariance matrix, with the
scaling determined by the number of elements in the vector.
• In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
• In control theory (or more generally systems theory), the controllability Gramian and observability Gramian
determine properties of a linear system.
• Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied
Psychological Measurement, Volume 18, pp. 79–94).
• In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional
space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional
subspace.
• In machine learning, kernel functions are often represented as Gram matrices.[1]
Properties
Positive semidefinite
The Gramian matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some
set of vectors. This set of vectors is not in general unique: the Gramian matrix of any orthonormal basis is the
identity matrix.
The infinite-dimensional analog of this statement is Mercer's theorem.
Change of basis
Under change of basis represented by an invertible matrix P, the Gram matrix will change by a matrix congruence to
PTGP.
Gram determinant
The Gram determinant or Gramian is the determinant of the Gram matrix:

$$G(x_1, \dots, x_n) = \det\begin{pmatrix} \langle x_1, x_1\rangle & \cdots & \langle x_1, x_n\rangle \\ \vdots & \ddots & \vdots \\ \langle x_n, x_1\rangle & \cdots & \langle x_n, x_n\rangle \end{pmatrix}.$$
Geometrically, the Gram determinant is the square of the volume of the parallelotope formed by the vectors. In
particular, the vectors are linearly independent if and only if the Gram determinant is nonzero (if and only if the
Gram matrix is nonsingular).
The Gram determinant can also be expressed in terms of the exterior product of vectors by

$$G(x_1, \dots, x_n) = \| x_1 \wedge \dots \wedge x_n \|^2.$$
References
[1] Lanckriet, G. R. G., N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan. “Learning the kernel matrix with semidefinite programming.”
The Journal of Machine Learning Research 5 (2004): 29.
• Barth, Nils (1999). "The Gramian and K-Volume in N-Space: Some Classical Results in Linear Algebra" (http://
www.jyi.org/volumes/volume2/issue1/articles/barth.html). Journal of Young Investigators 2.
External links
• Volumes of parallelograms (http://www.owlnet.rice.edu/~fjones/chap8.pdf) by Frank Jones
Lagrange's identity
In algebra, Lagrange's identity, named after Joseph Louis Lagrange, is:[1] [2]

$$\Biggl(\sum_{k=1}^n a_k^2\Biggr)\Biggl(\sum_{k=1}^n b_k^2\Biggr) - \Biggl(\sum_{k=1}^n a_k b_k\Biggr)^{\!2} = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} (a_i b_j - a_j b_i)^2,$$
which applies to any two sets {a1, a2, . . ., an} and {b1, b2, . . ., bn} of real or complex numbers (or more generally,
elements of a commutative ring). This identity is a special form of the Binet–Cauchy identity.
In a more compact vector notation, Lagrange's identity is expressed as:[3]
where a and b are n-dimensional vectors with components that are real numbers. The extension to complex numbers
requires the interpretation of the dot product as an inner product or Hermitian dot product. Explicitly, for complex
numbers, Lagrange's identity can be written in the form:[4]
Hence, it can be seen as a formula which gives the length of the wedge product of two vectors, which is the area of
the parallelogram they define, in terms of the dot products of the two vectors, as
Using the definition of angle based upon the dot product (see also the Cauchy–Schwarz inequality), the left-hand side is

$$|\mathbf{a}|^2 |\mathbf{b}|^2 \left(1 - \cos^2\theta\right) = |\mathbf{a}|^2 |\mathbf{b}|^2 \sin^2\theta,$$

where θ is the angle formed by the vectors a and b. The area of a parallelogram with sides |a| and |b| and angle θ is known in elementary geometry to be

$$|\mathbf{a}|\,|\mathbf{b}|\,\sin\theta,$$
so the left-hand side of Lagrange's identity is the squared area of the parallelogram. The cross product appearing on
the right-hand side is defined by

$$\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1),$$
which is a vector whose components are equal in magnitude to the areas of the projections of the parallelogram onto
the yz, zx, and xy planes, respectively.
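A quick numerical sketch (not part of the original article) of the identity in R³ and of its general form as a sum over index pairs i < j:

    import numpy as np

    def lagrange_rhs(a, b):
        # sum over pairs i < j of (a_i b_j - a_j b_i)^2
        n = len(a)
        return sum((a[i] * b[j] - a[j] * b[i]) ** 2
                   for i in range(n) for j in range(i + 1, n))

    a = np.array([1.0, 3.0, -5.0])
    b = np.array([4.0, -2.0, -1.0])
    lhs = np.dot(a, a) * np.dot(b, b) - np.dot(a, b) ** 2
    print(np.isclose(lhs, lagrange_rhs(a, b)))                       # True
    print(np.isclose(lhs, np.dot(np.cross(a, b), np.cross(a, b))))   # True in three dimensions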
Seven dimensions
For a and b as vectors in ℝ7, Lagrange's identity takes on the same form as in the case of ℝ3:[8]

$$|\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2.$$

However, the cross product in 7 dimensions does not share all the properties of the cross product in 3 dimensions. For example, the direction of a × b in 7 dimensions may be the same as c × d even though c and d are linearly independent of a and b. Also, the seven-dimensional cross product is not compatible with the Jacobi identity.[8]
Quaternions
A quaternion p is defined as the sum of a scalar t and a vector v:

p = t + v.

The multiplicativity of the norm in the quaternion algebra provides, for quaternions p and q:[9]

|pq| = |p| |q|.

The quaternions p and q are called imaginary if their scalar part is zero; equivalently, if p = v and q = w, since, by definition,
(1)
which means that the product of a column of as and a row of bs yields (a sum of elements of) a square of abs, which
can be broken up into a diagonal and a pair of triangles on either side of the diagonal.
The second term on the left side of Lagrange's identity can be expanded as:
(2)
which means that a symmetric square can be broken up into its diagonal and a pair of equal triangles on either side of
the diagonal.
To expand the summation on the right side of Lagrange's identity, first expand the square within the summation:
Now exchange the indices i and j of the second term on the right side, and permute the b factors of the third term,
yielding:
(3)
Back to the left side of Lagrange's identity: it has two terms, given in expanded form by Equations (1) and (2). The
first term on the right side of Equation (2) ends up canceling out the first term on the right side of Equation (1),
yielding
(1) - (2) =
which is the same as Equation (3), so Lagrange's identity is indeed an identity, Q.E.D.
References
[1] Eric W. Weisstein (2003). CRC Concise Encyclopedia of Mathematics (http://books.google.com/?id=8LmCzWQYh_UC&pg=PA228) (2nd ed.). CRC Press. ISBN 1584883472.
[2] Robert E. Greene and Steven G. Krantz (2006). "Exercise 16" (http://www.amazon.com/Function-Complex-Variable-Graduate-Mathematics/dp/082182905X). Function Theory of One Complex Variable (3rd ed.). American Mathematical Society. p. 22. ISBN 0821839624.
[3] Vladimir A. Boichenko, Gennadiĭ Alekseevich Leonov, Volker Reitmann (2005). Dimension Theory for Ordinary Differential Equations (http://books.google.com/?id=9bN1-b_dSYsC&pg=PA26). Vieweg+Teubner Verlag. p. 26. ISBN 3519004372.
[4] J. Michael Steele (2004). "Exercise 4.4: Lagrange's identity for complex numbers" (http://books.google.com/?id=bvgBdZKEYAEC&pg=PA68). The Cauchy–Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. Cambridge University Press. pp. 68–69. ISBN 052154677X.
[5] Greene, Robert E.; Krantz, Steven G. (2002). Function Theory of One Complex Variable. Providence, R.I.: American Mathematical Society. p. 22, Exercise 16. ISBN 978-0-8218-2905-9;
Palka, Bruce P. (1991). An Introduction to Complex Function Theory. Berlin, New York: Springer-Verlag. p. 27, Exercise 4.22. ISBN 978-0-387-97427-9.
[6] Howard Anton, Chris Rorres (2010). "Relationships between dot and cross products" (http://books.google.com/?id=1PJ-WHepeBsC&pg=PA162). Elementary Linear Algebra: Applications Version (10th ed.). John Wiley and Sons. p. 162. ISBN 0470432055.
[7] Pertti Lounesto (2001). Clifford Algebras and Spinors (http://books.google.com/?id=kOsybQWDK4oC&pg=PA94) (2nd ed.). Cambridge University Press. p. 94. ISBN 0521005515.
[8] Pertti Lounesto (2001). Clifford Algebras and Spinors (http://books.google.com/?id=kOsybQWDK4oC&printsec=frontcover) (2nd ed.). Cambridge University Press. ISBN 0521005515. See particularly § 7.4 Cross products in ℝ7 (http://books.google.be/books?id=kOsybQWDK4oC&pg=PA96), p. 96.
[9] Jack B. Kuipers (2002). "§5.6 The norm" (http://books.google.com/?id=_2sS4mC0p-EC&pg=PA111). Quaternions and Rotation Sequences: A Primer with Applications to Orbits. Princeton University Press. p. 111. ISBN 0691102988.
[10] See, for example, Frank Jones, Rice University (http://www.owlnet.rice.edu/~fjones/chap7.pdf), page 4 in Chapter 7 of a book still to be published (http://www.owlnet.rice.edu/~fjones/).
Quaternion
In mathematics, the quaternions are a number
system that extends the complex numbers. They
were first described by Irish mathematician Sir
William Rowan Hamilton in 1843 and applied to
mechanics in three-dimensional space. A striking
feature of quaternions is that the product of two
quaternions is noncommutative, meaning that the
product of two quaternions depends on which factor
is to the left of the multiplication sign and which
factor is to the right. Hamilton defined a quaternion
as the quotient of two directed lines in a
three-dimensional space[1] or equivalently as the
quotient of two vectors.[2] Quaternions can also be
represented as the sum of a scalar and a vector.
The unit quaternions can therefore be thought of as a choice of a group structure on the 3-sphere S³, the group
Spin(3), the group SU(2), or the universal cover of SO(3).
History
Quaternion algebra was introduced by Irish mathematician Sir William
Rowan Hamilton in 1843.[4] Important precursors to this work included
Euler's four-square identity (1748) and Olinde Rodrigues'
parameterization of the general rotation by four parameters (1840), but
neither of these authors treated the four-parameter rotations as an
algebra.[5] [6] Gauss had also discovered quaternions in 1819, but this
work was only published in 1900.[7]
However, quaternions have had a revival since the late 20th century, primarily due to their utility in describing
spatial rotations. Representations of rotations by quaternions are more compact and faster to compute than
representations by matrices and unlike Euler angles are not susceptible to gimbal lock. For this reason, quaternions
are used in computer graphics,[8] computer vision, robotics, control theory, signal processing, attitude control,
physics, bioinformatics, molecular dynamics computer simulation and orbital mechanics. For example, it is common
for spacecraft attitude-control systems to be commanded in terms of quaternions. Quaternions have received another
boost from number theory because of their relation to quadratic forms.
Since 1989, the Department of Mathematics of the National University of Ireland, Maynooth has organized a
pilgrimage, where scientists (including physicists Murray Gell-Mann in 2002, Steven Weinberg in 2005, and
mathematician Andrew Wiles in 2003) take a walk from Dunsink Observatory to the Royal Canal bridge where,
unfortunately, no trace of Hamilton's carving remains.
Definition
As a set, the quaternions H are equal to R4, a four-dimensional vector space over the real numbers. H has three
operations: addition, scalar multiplication, and quaternion multiplication. The sum of two elements of H is defined to
be their sum as elements of R4. Similarly the product of an element of H by a real number is defined to be the same
as the product in R4. To define the product of two elements in H requires a choice of basis for R4. The elements of
this basis are customarily denoted as 1, i, j, and k. Every element of H can be uniquely written as a linear
combination of these basis elements, that is, as a1 + bi + cj + dk, where a, b, c, and d are real numbers. The basis
element 1 will be the identity element of H, meaning that multiplication by 1 does nothing, and for this reason,
elements of H are usually written a + bi + cj + dk, suppressing the basis element 1. Given this basis, associative
quaternion multiplication is defined by first defining the products of basis elements and then defining all other
products using the distributive law. The equations
i² = j² = k² = ijk = −1,
where i, j, and k are basis elements of H, determine all the possible products of i, j, and k. For example, since
ijk = −1, multiplying this relation on the right by k gives ijk² = −k, that is, ij = k.
All the other possible products can be determined by similar methods, resulting in
ij = k,   ji = −k,   jk = i,   kj = −i,   ki = j,   ik = −j,
which can be arranged as a table whose rows represent the left factor of the product and whose columns represent the
right factor:
Quaternion multiplication
× 1 i j k
1 1 i j k
i i −1 k −j
j j −k −1 i
k k j −i −1
Hamilton product
For two elements a1 + b1i + c1j + d1k and a2 + b2i + c2j + d2k, their Hamilton product (a1 + b1i + c1j + d1k)(a2 + b2i
+ c2j + d2k) is determined by the products of the basis elements and the distributive law. The distributive law makes
it possible to expand the product so that it is a sum of products of basis elements. This gives the following
expression:
Now the basis elements can be multiplied using the rules given above to get:[4]
\[
\begin{aligned}
(a_1 + b_1 i + c_1 j + d_1 k)(a_2 + b_2 i + c_2 j + d_2 k) = {} & a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2 \\
& + (a_1 b_2 + b_1 a_2 + c_1 d_2 - d_1 c_2)\, i \\
& + (a_1 c_2 - b_1 d_2 + c_1 a_2 + d_1 b_2)\, j \\
& + (a_1 d_2 + b_1 c_2 - c_1 b_2 + d_1 a_2)\, k .
\end{aligned}
\]
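As an illustration, the product formula above can be implemented directly. The following Python sketch (function and variable names are illustrative only) multiplies quaternions represented as 4-tuples (a, b, c, d):

```python
# Hamilton product of two quaternions given as 4-tuples (a, b, c, d),
# i.e. a + bi + cj + dk, using the component formulas above.
def hamilton_product(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # scalar part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component

# ij = k while ji = -k, so the product is noncommutative:
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(hamilton_product(i, j))  # (0, 0, 0, 1)  -> k
print(hamilton_product(j, i))  # (0, 0, 0, -1) -> -k
```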
Remarks
Noncommutative
Unlike multiplication of real or complex numbers, multiplication of quaternions is not commutative: for example,
ij = k, while ji = −k. The noncommutativity of multiplication has some unexpected consequences, among
them that polynomial equations over the quaternions can have more distinct solutions than the degree of the
polynomial. The equation z² + 1 = 0, for instance, has infinitely many quaternion solutions z = bi + cj + dk
with b² + c² + d² = 1, so that these solutions lie on the two-dimensional surface of a sphere centered on zero in
the three-dimensional subspace of quaternions with zero real part. This sphere intersects the complex plane at the
two poles i and −i.
Conjugation can be used to extract the scalar and vector parts of a quaternion. The scalar part of p is (p + p*)/2, and
the vector part of p is (p − p*)/2.
The square root of the product of a quaternion with its conjugate is called its norm and is denoted ||q||. (Hamilton
called this quantity the tensor of q, but this conflicts with modern usage. See tensor.) It has the formula
‖q‖ = √(q q*) = √(a² + b² + c² + d²).
This is always a non-negative real number, and it is the same as the Euclidean norm on H considered as the vector
space R4. Multiplying a quaternion by a real number scales its norm by the absolute value of the number. That is, if
α is real, then
‖αq‖ = |α| ‖q‖.
This is a special case of the fact that the norm is multiplicative, meaning that
‖pq‖ = ‖p‖ ‖q‖
for any two quaternions p and q. Multiplicativity is a consequence of the formula for the conjugate of a product.
Alternatively, multiplicativity follows directly from the corresponding property of determinants of square matrices
and the formula
\[
\|q\|^2 = \det\begin{pmatrix} a+bi & c+di \\ -c+di & a-bi \end{pmatrix}.
\]
Using the norm to define the distance d(p, q) = ‖p − q‖ makes H into a metric space. Addition and multiplication are continuous in the metric topology.
A unit quaternion is a quaternion of norm one. Dividing a non-zero quaternion q by its norm produces a unit
quaternion Uq called the versor of q:
Uq = q / ‖q‖.
This makes it possible to divide two quaternions p and q in two different ways. That is, their quotient can be either
pq−1 or q−1p. The notation p/q is ambiguous because it does not specify whether q divides on the left or the right.
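As an illustrative sketch (helper names chosen for exposition, not taken from any particular library), the conjugate, norm, versor, and inverse of a quaternion (a, b, c, d) can be computed as follows:

```python
# Conjugate, norm, versor and inverse of a quaternion (a, b, c, d) = a + bi + cj + dk.
import math

def conjugate(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def norm(q):
    return math.sqrt(sum(x * x for x in q))      # ||q|| = sqrt(a^2 + b^2 + c^2 + d^2)

def versor(q):
    n = norm(q)
    return tuple(x / n for x in q)                # Uq = q / ||q||

def inverse(q):
    n2 = sum(x * x for x in q)                    # q^-1 = q* / ||q||^2
    return tuple(x / n2 for x in conjugate(q))

print(norm(versor((1.0, 2.0, 3.0, 4.0))))         # 1.0: a versor is a unit quaternion
```

The two quotients p q⁻¹ and q⁻¹ p mentioned above can then be formed with any quaternion-multiplication routine, such as the one sketched earlier.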
Algebraic properties
The set H of all quaternions is a vector space over the real numbers
with dimension 4. (In comparison, the real numbers have dimension 1,
the complex numbers have dimension 2, and the octonions have
dimension 8.) The quaternions have a multiplication that is associative
and that distributes over vector addition, but which is not commutative.
Therefore the quaternions H are a non-commutative associative algebra
over the real numbers. Even though H contains copies of the complex
numbers, it is not an associative algebra over the complex numbers.
Because the product of any two basis vectors is plus or minus another basis vector, the set {±1, ±i, ±j, ±k} forms a
group under multiplication. This group is called the quaternion group and is denoted Q8.[11] The real group ring of
Q8 is a ring RQ8 which is also an eight-dimensional vector space over R. It has one basis vector for each element of
Q8. The quaternions are the quotient ring of RQ8 by the ideal generated by the elements 1 + (−1), i + (−i), j + (−j),
and k + (−k). Here the first term in each of the differences is one of the basis elements 1, i, j, and k, and the second
term is one of basis elements −1, −i, −j, and −k, not the additive inverses of 1, i, j, and k.
The dot product of p = a₁ + b₁i + c₁j + d₁k and q = a₂ + b₂i + c₂j + d₂k is
p · q = a₁a₂ + b₁b₂ + c₁c₂ + d₁d₂.
This is equal to the scalar parts of p*q, qp*, pq*, and q*p. (Note that the vector parts of these four products are
different.) It also has the formulas
p · q = ½(p*q + q*p) = ½(pq* + qp*).
The cross product of p and q relative to the orientation determined by the ordered basis i, j, and k is
p × q = (c₁d₂ − d₁c₂) i + (d₁b₂ − b₁d₂) j + (b₁c₂ − c₁b₂) k,
(Recall that the orientation is necessary to determine the sign.) This is equal to the vector part of the product pq (as
quaternions), as well as the vector part of −q*p*. It also has the formula
where ps and qs are the scalar parts of p and q, and v and w are the vector parts of p and q. Then we have the
formula
pq = (ps qs − v · w) + ps w + qs v + v × w.
This shows that the noncommutativity of quaternion multiplication comes from the multiplication of pure imaginary
quaternions. It also shows that two quaternions commute if and only if their vector parts are collinear.
Matrix representations
Just as complex numbers can be represented as matrices, so can quaternions. There are at least two ways of
representing quaternions as matrices in such a way that quaternion addition and multiplication correspond to matrix
addition and matrix multiplication. One is to use 2×2 complex matrices, and the other is to use 4×4 real matrices. In
the terminology of abstract algebra, these are injective homomorphisms from H to the matrix rings M2(C) and
M4(R), respectively.
Using 2×2 complex matrices, the quaternion a + bi + cj + dk can be represented as
\[
\begin{pmatrix} a+bi & c+di \\ -c+di & a-bi \end{pmatrix}.
\]
In this representation, complex numbers (c = d = 0) correspond to diagonal matrices, the conjugate of a quaternion
corresponds to the conjugate transpose of the matrix, and the determinant of the matrix equals the squared norm
a² + b² + c² + d². Using 4×4 real matrices, that same quaternion can be written (for one standard choice, the matrix
of left multiplication by the quaternion) as
\[
\begin{pmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{pmatrix}.
\]
In this representation, the conjugate of a quaternion corresponds to the transpose of the matrix. The fourth power of
the norm of a quaternion is the determinant of the corresponding matrix. Complex numbers are block diagonal
matrices with two 2×2 blocks.
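The 2×2 complex representation can be checked numerically; the following Python sketch (an illustration using NumPy) verifies a few basis products and the determinant property stated above:

```python
# The map (a, b, c, d) -> [[a+bi, c+di], [-c+di, a-bi]] sends quaternion
# multiplication to matrix multiplication.
import numpy as np

def to_matrix(q):
    a, b, c, d = q
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert np.allclose(to_matrix(i) @ to_matrix(j), to_matrix(k))        # ij = k
assert np.allclose(to_matrix(j) @ to_matrix(i), -to_matrix(k))       # ji = -k
assert np.allclose(to_matrix(i) @ to_matrix(i), -to_matrix(one))     # i^2 = -1
# the determinant equals the squared norm a^2 + b^2 + c^2 + d^2
print(np.linalg.det(to_matrix((1, 2, 3, 4))).real)                   # 30.0
```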
A quaternion a + bi + cj + dk can also be viewed as a pair of complex numbers, written (a + bi) + (c + di)j. If we
define j² = −1 and ij = −ji, then we can multiply two such expressions using the distributive law. Writing k in place of
the product ij leads to the same rules for multiplication as the usual quaternions. Therefore the above pair of
complex numbers corresponds to the quaternion a + bi + cj + dk. If we write the elements of C² as ordered pairs and
quaternions as quadruples, then the correspondence is
(a + bi, c + di) ↔ (a, b, c, d).
Square roots of −1
In the complex numbers, there are just two numbers, i and −i, whose square is −1. In H there are infinitely many
square roots of minus one: the quaternion solutions of q² = −1 form the surface of a unit sphere in
3-space. To see this, let q = a + bi + cj + dk be a quaternion, and assume that its square is −1. In terms of a, b, c, and
d, this means
a² − b² − c² − d² = −1,  2ab = 0,  2ac = 0,  2ad = 0.
To satisfy the last three equations, either a = 0 or b, c, and d are all 0. The latter is impossible because a is a real
number and the first equation would imply that a2 = −1. Therefore a = 0 and b2 + c2 + d2 = 1. In other words, a
quaternion squares to −1 if and only if it is a vector (that is, pure imaginary) with norm 1. By definition, the set of all
such vectors forms the unit sphere.
Only negative real quaternions have an infinite number of square roots. All others have just two (or one in the case
of 0).
The identification of the square roots of minus one in H was given by Hamilton[14] but was frequently omitted in
other texts. By 1971 the sphere was included by Sam Perlis in his three page exposition included in Historical Topics
in Algebra (page 39) published by the National Council of Teachers of Mathematics. More recently, the sphere of
square roots of minus one is described in Ian R. Porteous's book Clifford Algebras and the Classical Groups
(Cambridge, 1995) in proposition 8.13 on page 60. Also in Conway (2003) On Quaternions and Octonions we read
on page 40: "any imaginary unit may be called i, and perpendicular one j, and their product k", another statement of
the sphere.
In the language of abstract algebra, each such map a + bi ↦ a + bq (for q a square root of −1 in H) is an injective
ring homomorphism from C to H. The images of the embeddings corresponding to q and −q are identical.
Every non-real quaternion lies in a unique copy of C. Write q as the sum of its scalar part and its vector part:
q = qs + v.
Decompose the vector part further as the product of its norm and its versor:
q = qs + ‖v‖·Uv.
(Note that this is not the same as qs + ‖q‖·Uq.) The versor of the vector part of q, Uv, is a pure imaginary
unit quaternion, so its square is −1. Therefore it determines a copy of the complex numbers by the function
a + bi ↦ a + b Uv.
Under this function, q is the image of the complex number qs + ‖v‖ i. Thus H is the union of complex planes
intersecting in a common real line, where the union is taken over the sphere of square roots of minus one.
Commutative subrings
The relationship of quaternions to each other within the complex subplanes of H can also be identified and expressed
in terms of commutative subrings. Specifically, since two quaternions p and q commute (p q = q p) only if they lie in
the same complex subplane of H, the profile of H as a union of complex planes arises when one seeks to find all
commutative subrings of the quaternion ring. This method of commutative subrings is also used to profile the
coquaternions and 2 × 2 real matrices.
The polar decomposition of a quaternion q = a + v may accordingly be written
q = ‖q‖ (cos θ + n sin θ),[15]
where the angle θ and the unit vector n are defined by:
a = ‖q‖ cos θ
and
v = n ‖v‖ = n ‖q‖ sin θ.
Generalizations
If F is any field with characteristic different from 2, and a and b are elements of F, one may define a
four-dimensional unitary associative algebra over F with basis 1, i, j, and ij, where i² = a, j² = b and ij = −ji (so
(ij)² = −ab). These algebras are called quaternion algebras and are isomorphic to the algebra of 2×2 matrices over F or
form division algebras over F, depending on the choice of a and b.
If these fundamental basis elements are taken to represent vectors in 3D space, then it turns out that the reflection of
a vector r in a plane perpendicular to a unit vector w can be written:
r′ = −w r w.
Two reflections make a rotation by an angle twice the angle between the two reflection planes, so
r″ = σ₂σ₁ r σ₁σ₂
corresponds to a rotation of 180° in the plane containing σ₁ and σ₂. This is very similar to the corresponding
quaternion formula,
r″ = q r q⁻¹.
In this picture, quaternions correspond not to vectors but to bivectors, quantities with magnitude and orientations
associated with particular 2D planes rather than 1D directions. The relation to complex numbers becomes clearer,
too: in 2D, with two vector directions σ1 and σ2, there is only one bivector basis element σ1σ2, so only one
imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ1σ2, σ2σ3, σ3σ1, so three
imaginaries.
This reasoning extends further. In the Clifford algebra Cℓ4,0(R), there are six bivector basis elements, since with four
different basic vector directions, six different pairs and therefore six different linearly independent planes can be
defined. Rotations in such spaces using these generalisations of quaternions, called rotors, can be very useful for
applications involving homogeneous coordinates. But it is only in 3D that the number of basis bivectors equals the
number of basis vectors, and each bivector can be identified as a pseudovector.
Dorst et al. identify the following advantages for placing quaternions in this wider setting:[16]
• Rotors are natural and non-mysterious in geometric algebra and easily understood as the encoding of a double
reflection.
• In geometric algebra, a rotor and the objects it acts on live in the same space. This eliminates the need to change
representations and to encode new data structures and methods (which is required when augmenting linear
algebra with quaternions).
• A rotor is universally applicable to any element of the algebra, not just vectors and other quaternions, but also
lines, planes, circles, spheres, rays, and so on.
• In the conformal model of Euclidean geometry, rotors allow the encoding of rotation, translation and scaling in a
single element of the algebra, universally acting on any element. In particular, this means that rotors can represent
rotations around an arbitrary axis, whereas quaternions are limited to an axis through the origin.
• Rotor-encoded transformations make interpolation particularly straightforward.
For further detail about the geometrical uses of Clifford algebras, see Geometric algebra.
Brauer group
The quaternions are "essentially" the only (non-trivial) central simple algebra (CSA) over the real numbers, in the
sense that every CSA over the reals is Brauer equivalent to either the reals or the quaternions. Explicitly, the Brauer
group of the reals consists of two classes, represented by the reals and the quaternions, where the Brauer group is the
set of all CSAs, up to equivalence relation of one CSA being a matrix ring over another. By the Artin–Wedderburn
theorem (specifically, Wedderburn's part), CSAs are all matrix algebras over a division algebra, and thus the
quaternions are the only non-trivial division algebra over the reals.
CSAs – rings over a field, which are simple algebras (have no non-trivial 2-sided ideals, just as with fields) whose
center is exactly the field – are a noncommutative analog of extension fields, and are more restrictive than general
ring extensions. The fact that the quaternions are the only non-trivial CSA over the reals (up to equivalence) may be
compared with the fact that the complex numbers are the only non-trivial field extension of the reals.
Quotes
• "I regard it as an inelegance, or imperfection, in quaternions, or rather in the state to which it has been hitherto
unfolded, whenever it becomes or seems to become necessary to have recourse to x, y, z, etc." — William Rowan
Hamilton (ed. Quoted in a letter from Tait to Cayley).
• "Time is said to have only one dimension, and space to have three dimensions. […] The mathematical quaternion
partakes of both these elements; in technical language it may be said to be "time plus space", or "space plus time":
and in this sense it has, or at least involves a reference to, four dimensions. And how the One of Time, of Space
the Three, Might in the Chain of Symbols girdled be." — William Rowan Hamilton (Quoted in R.P. Graves, "Life
of Sir William Rowan Hamilton").
• "Quaternions came from Hamilton after his really good work had been done; and, though beautifully ingenious,
have been an unmixed evil to those who have touched them in any way, including Clerk Maxwell." — Lord
Kelvin, 1892.
• "Neither matrices nor quaternions and ordinary vectors were banished from these ten [additional] chapters. For, in
spite of the uncontested power of the modern Tensor Calculus, those older mathematical languages continue, in
my opinion, to offer conspicuous advantages in the restricted field of special relativity. Moreover, in science as
well as in every-day life, the mastery of more than one language is also precious, as it broadens our views, is
conducive to criticism with regard to, and guards against hypostasy [weak-foundation] of, the matter expressed by
words or mathematical symbols." — Ludwik Silberstein, preparing the second edition of his Theory of Relativity
in 1924.
• "… quaternions appear to exude an air of nineteenth century decay, as a rather unsuccessful species in the
struggle-for-life of mathematical ideas. Mathematicians, admittedly, still keep a warm place in their hearts for the
remarkable algebraic properties of quaternions but, alas, such enthusiasm means little to the harder-headed
physical scientist." — Simon L. Altmann, 1986.
• "...the thing about a Quaternion 'is' is that we're obliged to encounter it in more than one guise. As a vector
quotient. As a way of plotting complex numbers along three axes instead of two. As a list of instructions for
turning one vector into another..... And considered subjectively, as an act of becoming longer or shorter, while at
the same time turning, among axes whose unit vector is not the familiar and comforting 'one' but the altogether
disquieting square root of minus one. If you were a vector, mademoiselle, you would begin in the 'real' world,
change your length, enter an 'imaginary' reference system, rotate up to three different ways, and return to 'reality'
a new person. Or vector..." — Thomas Pynchon, Against the Day, 2006.
Notes
[1] Hamilton (http://books.google.com/?id=TCwPAAAAIAAJ&printsec=frontcover&dq=quaternion+quotient+lines+tridimensional+space+time#PPA60,M1). Hodges and Smith. 1853. p. 60.
[2] Hardy 1881, p. 32 (http://books.google.com/?id=YNE2AAAAMAAJ&printsec=frontcover&dq=quotient+two+vectors+called+quaternion#PPA32,M1). Ginn, Heath, & Co. 1881.
[3] Journal of Theoretics. http://www.journaloftheoretics.com/articles/3-6/qm-pub.pdf
[4] See Hazewinkel et al. (2004), p. 12.
[5] Conway, John Horton; Smith, Derek Alan (2003). On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry (http://books.google.com/books?id=E_HCwwxMbfMC&pg=PA9). p. 9. ISBN 1-56881-134-9.
[6] Robert E. Bradley, Charles Edward Sandifer (2007). Leonhard Euler: Life, Work and Legacy (http://books.google.com/books?id=75vJL_Y-PvsC&pg=PA193). p. 193. ISBN 0-444-52728-1. They mention Wilhelm Blaschke's 1959 claim that "the quaternions were first identified by L. Euler in a letter to Goldbach written on May 4, 1748" and comment that "it makes no sense whatsoever to say that Euler 'identified' the quaternions in this letter... this claim is absurd."
[7] Simon L. Altmann (December 1989). "Hamilton, Rodrigues, and the Quaternion Scandal" (http://www.jstor.org/stable/2689481). Mathematics Magazine 62 (5): 306.
[8] Ken Shoemake (1985). "Animating Rotation with Quaternion Curves" (http://www.cs.cmu.edu/~kiranb/animation/p245-shoemake.pdf). Computer Graphics 19 (3): 245–254. doi:10.1145/325165.325242. Presented at SIGGRAPH '85. Tomb Raider (1996) is often cited as the first mass-market computer game to have used quaternions to achieve smooth 3D rotation. See e.g. Nick Bobick, "Rotating Objects Using Quaternions" (http://www.gamasutra.com/view/feature/3278/rotating_objects_using_quaternions.php), Game Developer magazine, July 1998.
[9] Hamilton, Sir William Rowan (1866). Hamilton Elements of Quaternions, article 285 (http://books.google.com/?id=fIRAAAAAIAAJ&pg=PA117&dq=quaternion#PPA310,M1). p. 310.
[10] Hardy, Elements of Quaternions (http://dlxs2.library.cornell.edu/cgi/t/text/pageviewer-idx?c=math;cc=math;q1=right quaternion;rgn=full text;idno=05140001;didno=05140001;view=image;seq=81). library.cornell.edu. p. 65.
[11] "quaternion group" (http://www.wolframalpha.com/input/?i=quaternion+group). Wolframalpha.com.
[12] Vector Analysis (http://books.google.com/?id=RC8PAAAAIAAJ&printsec=frontcover&dq=right+tensor+dyadic#PPA428,M1). Gibbs–Wilson. 1901. p. 428.
[13] Wolframalpha.com (http://www.wolframalpha.com/input/?i=det+{{a+b*i,+c+d*i},+{-c+d*i,+a-b*i)
[14] Hamilton (1899). Elements of Quaternions (2nd ed.). p. 244. ISBN 1108001718.
[15] Lce.hut.fi (http://www.lce.hut.fi/~ssarkka/pub/quat.pdf)
[16] Quaternions and Geometric Algebra (http://www.geometricalgebra.net/quaternions.html). Accessed 2008-09-12. See also: Leo Dorst, Daniel Fontijne, Stephen Mann (2007), Geometric Algebra For Computer Science (http://www.geometricalgebra.net/index.html), Morgan Kaufmann. ISBN 0-12-369465-5.
• Joly, Charles Jasper (1905), "A manual of quaternions". London, Macmillan and co., limited; New York, The
Macmillan company. LCCN 05036137 //r84
• Macfarlane, Alexander (1906), "Vector analysis and quaternions", 4th ed. 1st thousand. New York, J. Wiley &
Sons; [etc., etc.]. LCCN es 16000048
• 1911 encyclopedia: " Quaternions (http://www.1911encyclopedia.org/Quaternions)".
• Finkelstein, David, Josef M. Jauch, Samuel Schiminovich, and David Speiser (1962), "Foundations of quaternion
quantum mechanics". J. Mathematical Phys. 3, pp. 207–220, MathSciNet.
• Du Val, Patrick (1964), "Homographies, quaternions, and rotations". Oxford, Clarendon Press (Oxford
mathematical monographs). LCCN 64056979 //r81
• Crowe, Michael J. (1967), A History of Vector Analysis: The Evolution of the Idea of a Vectorial System,
University of Notre Dame Press. Surveys the major and minor vector systems of the 19th century (Hamilton,
Möbius, Bellavitis, Clifford, Grassmann, Tait, Peirce, Maxwell, Macfarlane, MacAuley, Gibbs, Heaviside).
• Altmann, Simon L. (1986), "Rotations, quaternions, and double groups". Oxford [Oxfordshire] : Clarendon Press
; New York : Oxford University Press. LCCN 85013615 ISBN 0-19-855372-2
• Altmann, Simon L. (1989), "Hamilton, Rodrigues, and the Quaternion Scandal". Mathematics Magazine. Vol. 62,
No. 5. p. 291–308, Dec. 1989.
• Adler, Stephen L. (1995), "Quaternionic quantum mechanics and quantum fields". New York : Oxford University
Press. International series of monographs on physics (Oxford, England) 88. LCCN 94006306 ISBN
0-19-506643-X
• Trifonov, Vladimir (http://members.cox.net/vtrifonov/) (1995), "A Linear Solution of the Four-Dimensionality
Problem", Europhysics Letters, 32 (8) 621–626, DOI: 10.1209/0295-5075/32/8/001 (http://dx.doi.org/10.
1209/0295-5075/32/8/001)
• Ward, J. P. (1997), "Quaternions and Cayley Numbers: Algebra and Applications", Kluwer Academic Publishers.
ISBN 0-7923-4513-4
• Kantor, I. L. and Solodnikov, A. S. (1989), "Hypercomplex numbers, an elementary introduction to algebras",
Springer-Verlag, New York, ISBN 0-387-96980-2
• Gürlebeck, Klaus and Sprössig, Wolfgang (1997), "Quaternionic and Clifford calculus for physicists and
engineers". Chichester ; New York : Wiley (Mathematical methods in practice; v. 1). LCCN 98169958 ISBN
0-471-96200-7
• Kuipers, Jack (2002), "Quaternions and Rotation Sequences: A Primer With Applications to Orbits, Aerospace,
and Virtual Reality" (reprint edition), Princeton University Press. ISBN 0-691-10298-8
• Conway, John Horton, and Smith, Derek A. (2003), "On Quaternions and Octonions: Their Geometry,
Arithmetic, and Symmetry", A. K. Peters, Ltd. ISBN 1-56881-134-9 ( review (http://nugae.wordpress.com/
2007/04/25/on-quaternions-and-octonions/)).
• Kravchenko, Vladislav (2003), "Applied Quaternionic Analysis", Heldermann Verlag ISBN 3-88538-228-8.
• Hanson, Andrew J. (http://www.cs.indiana.edu/~hanson/quatvis/) (2006), "Visualizing Quaternions",
Elsevier: Morgan Kaufmann; San Francisco. ISBN 0-12-088400-3
• Trifonov, Vladimir (http://members.cox.net/vtrifonov/) (2007), "Natural Geometry of Nonzero
Quaternions", International Journal of Theoretical Physics, 46 (2) 251–257, DOI: 10.1007/s10773-006-9234-9
(http://dx.doi.org/10.1007/s10773-006-9234-9)
• Ernst Binz & Sonja Pods (2008) Geometry of Heisenberg Groups American Mathematical Society, Chapter 1:
"The Skew Field of Quaternions" (23 pages) ISBN 978-0-8218-4495-3.
• Vince, John A. (2008), Geometric Algebra for Computer Graphics, Springer, ISBN 978-1-84628-996-5.
• For molecules that can be regarded as classical rigid bodies molecular dynamics computer simulation employs
quaternions. They were first introduced for this purpose by D.J. Evans, (1977), "On the Representation of
Orientation Space", Mol. Phys., vol 34, p 317.
• David Erickson, Defence Research and Development Canada (DRDC), Complete derivation of rotation matrix
from unitary quaternion representation in DRDC TR 2005-228 paper. Drdc-rddc.gc.ca (http://aiss.suffield.
drdc-rddc.gc.ca/uploads/quaternion.pdf)
• Alberto Martinez, University of Texas Department of History, "Negative Math, How Mathematical Rules Can Be
Positively Bent", Utexas.edu (https://webspace.utexas.edu/aam829/1/m/NegativeMath.html)
• D. Stahlke, Quaternions in Classical Mechanics Stahlke.org (http://www.stahlke.org/dan/phys-papers/
quaternion-paper.pdf) (PDF)
• Morier-Genoud, Sophie, and Valentin Ovsienko. "Well, Papa, can you multiply triplets?", arxiv.org (http://arxiv.
org/abs/0810.5562) describes how the quaternions can be made into a skew-commutative algebra graded by
Z/2×Z/2×Z/2.
• Curious Quaternions (http://plus.maths.org/content/os/issue32/features/baez/index) by Helen Joyce hosted
by John Baez.
Software
• Quaternion Calculator (http://www.bluetulip.org/programs/quaternions.html) [javascript], bluetulip.org
• Quaternion Calculator (http://theworld.com/~sweetser/java/qcalc/qcalc.html) [Java], theworld.com
• Quaternion Toolbox for Matlab (http://qtfm.sourceforge.net/), qtfm.sourceforge.net
• Boost library support for Quaternions in C++ (http://www.boost.org/doc/libs/1_41_0/libs/math/doc/
quaternion/html/index.html), boost.org
• Mathematics of flight simulation >Turbo-PASCAL software for quaternions, Euler angles and Extended Euler
angles (http://www.xs4all.nl/~jemebius/Eea.htm), xs4all.nl
Skew-symmetric matrix
In mathematics, and in particular linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a
square matrix A whose transpose is also its negative; that is, it satisfies the equation A = −Aᵀ. If the entry in the i-th
row and j-th column is aij, i.e. A = (aij), then the skew-symmetric condition becomes aij = −aji. For example, the following
matrix is skew-symmetric:
\[
\begin{pmatrix} 0 & 2 & -1 \\ -2 & 0 & -4 \\ 1 & 4 & 0 \end{pmatrix}.
\]
Properties
We assume that the underlying field is not of characteristic 2: that is, that 1 + 1 ≠ 0 where 1 denotes the
multiplicative identity and 0 the additive identity of the given field. Otherwise, a skew-symmetric matrix is just the
same thing as a symmetric matrix.
Sums and scalar multiples of skew-symmetric matrices are again skew-symmetric. Hence, the skew-symmetric
matrices form a vector space. Its dimension is n(n−1)/2.
Let Matn denote the space of n × n matrices. A skew-symmetric matrix is determined by n(n − 1)/2 scalars (the
number of entries above the main diagonal); a symmetric matrix is determined by n(n + 1)/2 scalars (the number of
entries on or above the main diagonal). If Skewn denotes the space of n × n skew-symmetric matrices and Symn
denotes the space of n × n symmetric matrices, then since Matn = Skewn + Symn and Skewn ∩ Symn = {0},
Matn is the direct sum Skewn ⊕ Symn.
Notice that ½(A − Aᵀ) ∈ Skewn and ½(A + Aᵀ) ∈ Symn. This is true for every square matrix A with entries from any
field whose characteristic is different from 2.
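For illustration, the following Python sketch (using NumPy, with an arbitrary example matrix) carries out this decomposition:

```python
# Split an arbitrary square matrix into its skew-symmetric and symmetric parts:
# A = 1/2 (A - A^T) + 1/2 (A + A^T).
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
skew = 0.5 * (A - A.T)
sym  = 0.5 * (A + A.T)
assert np.allclose(skew, -skew.T)   # skew part is skew-symmetric
assert np.allclose(sym, sym.T)      # symmetric part is symmetric
assert np.allclose(skew + sym, A)   # the two parts sum back to A
```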
As to equivalent conditions, notice that the relation of skew-symmetry, A = −Aᵀ, holds for a matrix A if and only if
one has xᵀAy = −yᵀAx for all vectors x and y. This is also equivalent to xᵀAx = 0 for all x (one implication being
obvious, the other a plain consequence of (x + y)ᵀA(x + y) = 0 for all x and y).
All main diagonal entries of a skew-symmetric matrix must be zero, so the trace is zero. If A = (aij) is
skew-symmetric, aij = −aji; hence aii = 0.
3×3 skew-symmetric matrices can be used to represent cross products as matrix multiplications, as illustrated below.
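A minimal Python sketch of this correspondence (the helper name cross_matrix is illustrative only):

```python
# The 3x3 skew-symmetric matrix [v]_x representing "cross product with v",
# so that [v]_x w = v x w for every vector w.
import numpy as np

def cross_matrix(v):
    x, y, z = v
    return np.array([[0., -z,  y],
                     [ z,  0., -x],
                     [-y,  x,  0.]])

v = np.array([1., 2., 3.])
w = np.array([4., -1., 2.])
assert np.allclose(cross_matrix(v) @ w, np.cross(v, w))
assert np.allclose(cross_matrix(v), -cross_matrix(v).T)   # it is skew-symmetric
```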
Determinant
Let A be an n×n skew-symmetric matrix. The determinant of A satisfies
det(A) = det(Aᵀ) = det(−A) = (−1)ⁿ det(A). Hence det(A) = 0 when n is odd.
In particular, if n is odd, and since the underlying field is not of characteristic 2, the determinant vanishes. This result
is called Jacobi's theorem, after Carl Gustav Jacobi (Eves, 1980).
The even-dimensional case is more interesting. It turns out that the determinant of A for n even can be written as the
square of a polynomial in the entries of A (a theorem of Thomas Muir):
det(A) = Pf(A)².
This polynomial is called the Pfaffian of A and is denoted Pf(A). Thus the determinant of a real skew-symmetric
matrix is always non-negative.
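For illustration, the Pfaffians of the generic skew-symmetric matrices of orders 2 and 4 are:
\[
\operatorname{Pf}\begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} = a, \qquad
\operatorname{Pf}\begin{pmatrix} 0 & a & b & c \\ -a & 0 & d & e \\ -b & -d & 0 & f \\ -c & -e & -f & 0 \end{pmatrix} = af - be + cd ,
\]
and squaring these expressions recovers the corresponding determinants.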
The number of distinct terms s(n) in the expansion of the determinant of a skew-symmetric matrix of order n has
been considered already by Cayley, Sylvester, and Pfaff. Due to cancellations, this number is quite small compared
with the number of terms of a generic matrix of order n, which is n!. The sequence s(n) (sequence A002370 [2]
in OEIS) is
1, 0, 2, 0, 6, 0, 120, 0, 5250, 0, 395010, 0, …
and it is encoded in the exponential generating function
The numbers of positive and negative terms are approximately half of the total, although their difference takes
larger and larger positive and negative values as n increases (sequence A167029 [3] in OEIS).
Spectral theory
The eigenvalues of a skew-symmetric matrix always come in pairs ±λ (except in the odd-dimensional case where
there is an additional unpaired 0 eigenvalue). For a real skew-symmetric matrix the nonzero eigenvalues are all pure
imaginary and thus are of the form iλ1, −iλ1, iλ2, −iλ2, … where each of the λk are real.
Real skew-symmetric matrices are normal matrices (they commute with their adjoints) and are thus subject to the
spectral theorem, which states that any real skew-symmetric matrix can be diagonalized by a unitary matrix. Since
the nonzero eigenvalues of a real skew-symmetric matrix are imaginary rather than real, it is not possible to diagonalize one by a real matrix.
However, it is possible to bring every skew-symmetric matrix to a block diagonal form by an orthogonal
transformation. Specifically, every 2n × 2n real skew-symmetric matrix can be written in the form A = Q Σ QT where
Skew-symmetric matrix 177
Q is orthogonal and
for real λk. The nonzero eigenvalues of this matrix are ±iλk. In the odd-dimensional case Σ always has at least one
row and column of zeros.
More generally, every complex skew-symmetric matrix can be written in the form A = U Σ Uᵀ where U is unitary
and Σ has the block-diagonal form given above with complex λk. This is an example of the Youla decomposition of a
complex square matrix.[4]
Alternating forms
We begin with a special case of the definition. An alternating form φ on a vector space V over a field K, not of
characteristic 2, is defined to be a bilinear form
φ:V×V→K
such that
φ(v,w) = −φ(w,v).
This defines a form with desirable properties for vector spaces over fields of characteristic not equal to 2, but in a
vector space over a field of characteristic 2, the definition fails, as every element is its own additive inverse. That is,
symmetric and alternating forms are equivalent, which is clearly false in the case above. However, we may extend
the definition to vector spaces over fields of characteristic 2 as follows:
In the case where the vector space V is over a field of arbitrary characteristic including characteristic 2, we may state
that for all vectors v in V
φ(v,v) = 0.
This reduces to the above case when the field is not of characteristic 2 as seen below
0 = φ(v + w,v + w) = φ(v,v) + φ(v,w) + φ(w,v) + φ(w,w) = φ(v,w) + φ(w,v)
Whence,
φ(v,w) = −φ(w,v).
Thus, we have a definition that now holds for vector spaces over fields of all characteristics.
Such a φ will be represented by a skew-symmetric matrix A, via φ(v, w) = vᵀAw, once a basis of V is chosen; and
conversely an n×n skew-symmetric matrix A on Kⁿ gives rise to an alternating form sending (v, w) to vᵀAw.
Infinitesimal rotations
Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group O(n) at
the identity matrix; formally, the special orthogonal Lie algebra. In this sense, then, skew-symmetric matrices can be
thought of as infinitesimal rotations.
Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra o(n) of the Lie group
O(n). The Lie bracket on this space is given by the commutator:
[A, B] = AB − BA.
It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric:
[A, B]ᵀ = BᵀAᵀ − AᵀBᵀ = BA − AB = −[A, B].
The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that
contains the identity element. In the case of the Lie group O(n), this connected component is the special orthogonal
group SO(n), consisting of all orthogonal matrices with determinant 1. So R = exp(A) will have determinant +1.
Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every
orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix. In the
particular important case of dimension n = 2, the exponential representation for an orthogonal matrix reduces to the
well-known polar form of a complex number of unit modulus. Indeed, if n = 2, a special orthogonal matrix has the
form
\[
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\]
which corresponds exactly to the polar form cos θ + i sin θ = e^{iθ} of a complex number of unit modulus. The
exponential representation of an orthogonal matrix of order n can also be obtained starting from the fact that in
dimension n any special orthogonal matrix R can be written as R = Q S Qᵀ, where Q is orthogonal and S is a block
diagonal matrix with blocks of order 2, plus one of order 1 if n is odd; since each single block of order 2 is
also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix S can be written as the exponential of a
skew-symmetric block matrix Σ of the form above, S = exp(Σ), so that R = Q exp(Σ) Qᵀ = exp(Q Σ Qᵀ), the exponential of
the skew-symmetric matrix Q Σ Qᵀ. Conversely, the surjectivity of the exponential map, together with the
above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal
matrices.
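For illustration, the following Python sketch (using NumPy and SciPy's matrix exponential) exponentiates a 2×2 skew-symmetric matrix and recovers the rotation matrix discussed above:

```python
# Exponentiating a skew-symmetric matrix yields a rotation:
# an orthogonal matrix with determinant +1.
import numpy as np
from scipy.linalg import expm

theta = 0.7
A = np.array([[0., -theta],
              [theta, 0.]])           # skew-symmetric generator
R = expm(A)                           # equals [[cos t, -sin t], [sin t, cos t]]
assert np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R @ R.T, np.eye(2)) and np.isclose(np.linalg.det(R), 1.0)
```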
Coordinate-free
More intrinsically (i.e., without using coordinates), skew-symmetric matrices on a vector space V with an inner
product may be defined as the bivectors on the space, which are sums of simple bivectors (2-blades) v ∧ w. The
correspondence is given by the map v ∧ w ↦ v* ⊗ w − w* ⊗ v, where v* is the covector dual to the vector v;
in coordinates these are exactly the elementary skew-symmetric matrices. This characterization is used in
interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name.
Skew-symmetrizable matrix
An n-by-n matrix A is said to be skew-symmetrizable if there exists an invertible diagonal matrix D and a
skew-symmetric matrix S such that A = DS. For real n-by-n matrices, the condition that D have positive
entries is sometimes added.[5]
References
[1] Richard A. Reyment, K. G. Jöreskog, Leslie F. Marcus (1996). Applied Factor Analysis in the Natural Sciences. Cambridge University Press.
p. 68. ISBN 0521575567.
[2] http://en.wikipedia.org/wiki/Oeis%3Aa002370
[3] http://en.wikipedia.org/wiki/Oeis%3Aa167029
[4] Youla, D. C. (1961). "A normal form for a matrix under the unitary congruence group". Canad. J. Math. 13: 694–704.
doi:10.4153/CJM-1961-059-8.
[5] Fomin, Sergey; Zelevinsky, Andrei (2001). "Cluster algebras I: Foundations". arXiv:math/0104151.
Further reading
• Eves, Howard (1980). Elementary Matrix Theory. Dover Publications. ISBN 978-0-486-63946-8.
• Suprunenko, D. A. (2001), "Skew-symmetric matrix" (http://eom.springer.de/S/s085720.htm), in Hazewinkel,
Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
• Aitken, A. C. (1944). "On the number of distinct terms in the expansion of symmetric and skew determinants.".
Edinburgh Math. Notes.
External links
• "Antisymmetric matrix" (http://mathworld.wolfram.com/AntisymmetricMatrix.html). Wolfram Mathworld.
Xyzzy
Xyzzy is a magic word from the Colossal Cave Adventure computer game.
In computing, the word is sometimes used as a metasyntactic variable or as a video game cheat code, the canonical
"magic word". In mathematics, the word is used as a mnemonic for the cross product.[1]
Origin
Modern usage derives primarily from one of the earliest computer games, Colossal Cave Adventure, in which the
idea is to explore an underground cave with many rooms, collecting the treasures found there. By typing "xyzzy" at
the appropriate time, the player could move instantly between two otherwise distant points. As Colossal Cave
Adventure was both the first adventure game and the first interactive fiction, hundreds of later interactive fiction
games included responses to the command "xyzzy" in tribute.[2]
The origin of the word has been the subject of debate. Rick Adams pointed out that the mnemonic "XYZZY" has
long been taught by math teachers to remember the process for performing cross products (as a mnemonic that lists
the order of subscripts to be multiplied first).[1] Crowther, author of Colossal Cave Adventure, states that he was
unaware of the mnemonic, and that he "made it up from whole cloth" when writing the game.[3]
Uses
Xyzzy has actually been implemented as an undocumented no-op command on several operating systems; in Data
General's AOS, for example, it would typically respond "Nothing happens", just as the game did if the magic was
invoked at the wrong spot or before a player had performed the action that enabled the word. The 32-bit version,
AOS/VS, would respond "Twice as much happens".[1] On several computer systems from Sun Microsystems, the
command "xyzzy" is used to enter the interactive shell of the u-boot bootloader.[4] Early versions of Zenith Z-DOS
(a re-branded variant of MS-DOS 1.25) had the command "xyzzy" which took a parameter of "on" or "off". Xyzzy
by itself would print the status of the last "xyzzy on" or "xyzzy off" command.
The popular Minesweeper game under Microsoft Windows has a cheat mode triggered by entering the command
xyzzy, then pressing the key sequence shift and then enter, which turns a single pixel in the top-left corner of the
entire screen into a small black or white dot depending on whether or not the mouse pointer is over a mine.[5] This
feature is present in all versions except for Windows Vista and Windows 7, but under Windows 95, 98 and NT 4.0
the pixel is only visible if the standard Explorer desktop is not running.[6]
The low-traffic Usenet newsgroup alt.xyzzy is used for test messages, to which other readers (if there are any)
customarily respond, "Nothing happens" as a note that the test message was successfully received. The Google
IMAP service documents a CAPABILITY called XYZZY when the CAPABILITY command is issued. If the
command XYZZY is given, the server responds "OK Nothing happens."; in mIRC and Pidgin, entering the
command /xyzzy will display the response "Nothing happens".
A "deluxe chatting program" for DIGITAL's VAX/VMS written by David Bolen in 1987 and distributed via
BITNET took the name xyzzy. It enabled users on the same system or on linked DECnet nodes to communicate via
text in real time. There was a compatible program with the same name for IBM's VM/CMS.[7]
Xyzzy was the inspiration for the name of the interactive fiction competition the XYZZY Awards.
xYzZY is used as the default boundary marker by the Perl HTTP::Message module for multipart MIME messages,[8]
and was used in Apple's AtEase for workgroups as the default administrator password in the 1990s.
In the game Zork, type xyzzy and press enter. The game responds with: A hollow voice says "fool."
References
[1] Rick Adams. "Everything you ever wanted to know about…the magic word XYZZY" (http://www.rickadams.org/adventure/c_xyzzy.html). The Colossal Cave Adventure page.
[2] David Welbourn. "xyzzy responses" (http://webhome.idirect.com/~dswxyz/sol/xyzzy.html). A web page giving responses to "xyzzy" in many games of interactive fiction.
[3] Dennis G. Jerz. "Somewhere Nearby is Colossal Cave: Examining Will Crowther's Original "Adventure" in Code and in Kentucky" (http://www.digitalhumanities.org/dhq/vol/001/2/000009.html).
[4] "Page 17" (http://dlc.sun.com/pdf/820-4783-10/820-4783-10.pdf) (PDF). Retrieved 2009-08-20.
[5] eeggs.com. "Windows 2000 Easter Eggs" (http://www.eeggs.com/items/6818.html). Retrieved 2009-08-20.
[6] "Minesweeper Cheat codes" (http://cheatcodes.com/minesweeper-pc-cheats/).
[7] VAX/VMS XYZZY Reference Card, created by David Bolen (http://web.inter.nl.net/users/fred/relay/xyzzy.html).
[8] Sean M. Burke (2002). Perl and LWP, p. 82. O'Reilly Media, Inc. ISBN 0596001789.
Quaternions and spatial rotation
Notice that a number of characteristics of such rotations and their representations can be seen by this visualization.
The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and
this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two
antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the
fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about
an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a
particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the
north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360
degrees. (the "longitude" of a point then represents a particular axis of rotation.) Note however that this set of
rotations is not closed under composition. Two successive rotations with axes in the xy plane will not necessarily
give a rotation whose axis lies in the xy plane, and thus cannot be represented as a point on the sphere. This will not
be the case with a general rotation in 3-space, in which rotations do form a closed set under composition.
In the same way the hyperspherical space of 3D rotations can be parameterized by three angles (Euler angles), but
any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock.
We can avoid this by using four Euclidean coordinates w, x, y, z, with w² + x² + y² + z² = 1. The point (w, x, y, z)
represents a rotation around the axis directed by the vector (x, y, z) by an angle
α = 2 cos⁻¹ w = 2 sin⁻¹ √(x² + y² + z²).
Quaternions briefly
The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra
and additionally the rule i² = −1. This is sufficient to reproduce all of the rules of complex number arithmetic: for
example,
(a + bi)(c + di) = ac + adi + bci + bdi² = (ac − bd) + (bc + ad)i.
In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules i² = j²
= k² = ijk = −1 and the usual algebraic rules except the commutative law of multiplication (a familiar example of
such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic
follow: for example, one can show that
(a + bi + cj + dk)(e + fi + gj + hk) = (ae − bf − cg − dh) + (af + be + ch − dg)i + (ag − bh + ce + df)j + (ah + bg − cf + de)k.
The imaginary part bi + cj + dk of a quaternion behaves like a vector v = (b, c, d) in a three-dimensional vector
space, and the real part a behaves like a scalar in R. When quaternions are used in geometry, it is more convenient
to define them as a scalar plus a vector:
a + v.
Those who have studied vectors at school might find it strange to add a number to a vector, as they are objects of
very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one
remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In
other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and
another one with zero scalar/real part:
a + v = (a, 0) + (0, v).
We can express quaternion multiplication in the modern language of vector cross and dot products (which were
actually inspired by the quaternions in the first place). In place of the rules i² = j² = k² = ijk = −1 we have the
quaternion multiplication rule:
(s₁ + v₁)(s₂ + v₂) = (s₁s₂ − v₁ · v₂) + (s₁v₂ + s₂v₁ + v₁ × v₂),
where:
• (s₁s₂ − v₁ · v₂) + (s₁v₂ + s₂v₁ + v₁ × v₂) is the resulting quaternion,
• v₁ × v₂ is the vector cross product (a vector),
• v₁ · v₂ is the vector scalar product (a scalar).
Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while
scalar-scalar and scalar-vector multiplications commute; further identities follow immediately from these rules (see details).
The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm
ratio (see details):
q⁻¹ = q* / ‖q‖².
Let q = cos(α/2) + u sin(α/2), where u is a unit vector. Let also v be an ordinary vector in 3 dimensional space, considered as a quaternion with a
real coordinate equal to zero. Then it can be shown (see next section) that the quaternion product
q v q⁻¹
yields the vector v rotated by an angle α around the axis u. The rotation is
clockwise if our line of sight points in the direction pointed by u. This operation is known as conjugation by q.
It follows that quaternion multiplication is composition of rotations, for if p and q are quaternions representing
rotations, then rotation (conjugation) by pq is
pq v (pq)⁻¹ = pq v q⁻¹p⁻¹ = p (q v q⁻¹) p⁻¹,
which is the same as rotating (conjugating) by q and then by p.
The quaternion inverse of a rotation is the opposite rotation, since q⁻¹ (q v q⁻¹) q = v. The square of a quaternion
rotation is a rotation by twice the angle around the same axis. More generally qⁿ is a rotation by n times the angle
around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial
orientations; see Slerp.
Proof of the quaternion rotation identity
Let u be a unit vector (the rotation axis) and let q = cos(α/2) + u sin(α/2). Our goal is to show that
v′ = q v q⁻¹
yields the vector v rotated by an angle α around the axis u. Expanding out, we have
v′ = v∥ + v⊥ cos α + (u × v⊥) sin α,
where v⊥ and v∥ are the components of v perpendicular and parallel to u respectively. This is the formula of a
rotation by α around the u axis.
Example
Consider the rotation f around the axis u = i + j + k, with a rotation angle of 120°, or 2π⁄3 radians.
The length of u is √3, the half angle is π⁄3 (60°) with cosine ½ (cos 60° = 0.5) and sine √3⁄2 (sin 60° ≈ 0.866). We
are therefore dealing with a conjugation by the unit quaternion
q = ½ + ½i + ½j + ½k.
It can be proved that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary
components. As a consequence,
q⁻¹ = ½ − ½i − ½j − ½k
and
f(ai + bj + ck) = q (ai + bj + ck) q⁻¹.
This can be simplified, using the ordinary rules for quaternion arithmetic, to
f(ai + bj + ck) = ci + aj + bk.
As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long
diagonal through the fixed point (observe how the three axes are permuted cyclically).
Carrying out the multiplication gives the expected result. As we can see, such computations are relatively long and tedious if done manually;
however, in a computer program, this amounts to calling the quaternion multiplication routine twice.
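The following Python sketch (helper names are illustrative only) implements rotation by conjugation and reproduces the 120° example above:

```python
# Rotate a 3D vector by quaternion conjugation v -> q v q^-1, checked against
# the 120-degree rotation about the (1, 1, 1) diagonal discussed above.
import math

def mul(p, q):  # Hamilton product of (w, x, y, z) quadruples
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    # unit rotation quaternion q = cos(angle/2) + sin(angle/2) * (unit axis)
    n = math.sqrt(sum(a*a for a in axis))
    s, c = math.sin(angle/2) / n, math.cos(angle/2)
    q = (c, s*axis[0], s*axis[1], s*axis[2])
    q_conj = (q[0], -q[1], -q[2], -q[3])      # inverse of a unit quaternion
    return mul(mul(q, (0.0,) + tuple(v)), q_conj)[1:]

print(rotate((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 2*math.pi/3))
# approximately (0, 1, 0): the x, y, z axes are permuted cyclically
```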
Non-commutativity
The multiplication of quaternions is non-commutative. Since the multiplication of unit quaternions corresponds to
the composition of three dimensional rotations, this property can be made intuitive by showing that three
dimensional rotations are not commutative in general.
A simple exercise of applying two rotations to an asymmetrical object (e.g., a book) can explain it. First, rotate a
book 90 degrees clockwise around the z axis. Next flip it 180 degrees around the x axis and memorize the result.
Then restore the original orientation, so that the book title is again readable, and apply those rotations in opposite
order. Compare the outcome to the earlier result. This shows that, in general, the composition of two different
rotations around two distinct spatial axes will not commute.
will be a real number. If r is zero the matrix is the identity matrix, and the
quaternion must be the identity quaternion (1, 0, 0, 0). Otherwise the quaternion can be calculated as follows:
Beware the vector convention: There are two conventions for rotation matrices: one assumes row vectors on the left;
the other assumes column vectors on the right; the two conventions generate matrices that are the transpose of each
other. The above matrix assumes column vectors on the right. In general, a matrix for vertex transformation is ambiguous
unless the vector convention is also mentioned. Historically, the column-on-the-right convention comes from
mathematics and classical mechanics, whereas row-vector-on-the-left comes from computer graphics, where
typesetting row vectors was easier back in the early days.
(Compare the equivalent general formula for a 3 × 3 rotation matrix in terms of the axis and the angle.)
Fitting quaternions
The above section described how to recover a quaternion q from a 3 × 3 rotation matrix Q. Suppose, however, that
we have some matrix Q that is not a pure rotation — due to round-off errors, for example — and we wish to find the
quaternion q that most accurately represents Q. In that case we construct a symmetric 4×4 matrix
and find the eigenvector (x,y,z,w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a
pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q [3]
Results
Storage requirements
Method Storage
Rotation matrix 9
Quaternion 4
Angle/axis 3*
* Note: angle-axis can be stored as 3 elements by multiplying the unit rotation axis by the rotation angle, forming the
logarithm of the quaternion, at the cost of additional calculations.
Performance comparison of rotation chaining operations
Method             Multiplies   Add/subtracts   Total operations
Rotation matrices  27           18              45
Quaternions        16           12              28
Performance comparison of vector rotating operations
Method             Multiplies   Add/subtracts   sin/cos   Total operations
Rotation matrix    9            6               0         15
Quaternions        21           18              0         39
Angle/axis         23           16              2         41
Used methods
There are three basic approaches to rotating a vector v:
1. Compute the matrix product of a 3×3 rotation matrix R and the original 3×1 column matrix representing v (see
the sketch after this list). This requires 3·(3 multiplications + 2 additions) = 9 multiplications and 6 additions, the
most efficient method for rotating a vector.
2. Use the quaternion rotation formula derived above, v′ = q v q⁻¹. Computing this result is equivalent to
transforming the quaternion to a rotation matrix R using the formula above then multiplying with a vector.
Performing some common subexpression elimination yields an algorithm that costs 21 multiplies and 18 adds. As
a second approach, the quaternion could first be converted to its equivalent angle/axis representation then the
angle/axis representation used to rotate the vector. However, this is both less efficient and less numerically stable
when the quaternion nears the no-rotation point.
3. Use the angle-axis formula to convert an angle/axis to a rotation matrix R then multiplying with a vector.
Converting the angle/axis to R using common subexpression elimination costs 14 multiplies, 2 function calls (sin,
cos), and 10 add/subtracts; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a
total of 23 multiplies, 16 add/subtracts, and 2 function calls (sin, cos).
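As an illustration of the first approach, the following Python sketch (assuming a unit quaternion in (w, x, y, z) order and the column-vector convention discussed earlier) converts a quaternion to a rotation matrix and applies it; with q = ½(1 + i + j + k) it reproduces the cyclic permutation of the axes from the earlier example:

```python
# Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix and rotate
# a vector with a single matrix-vector product (column-vector convention).
import numpy as np

def quat_to_matrix(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

R = quat_to_matrix((0.5, 0.5, 0.5, 0.5))      # 120 degrees about (1, 1, 1)
print(R @ np.array([1.0, 0.0, 0.0]))           # -> [0, 1, 0]: axes permuted cyclically
```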
It is straightforward to check that for each matrix M, M Mᵀ = I; that is, each matrix (and hence both matrices
together) represents a rotation. Note that since quaternion multiplication is associative, the two matrices must commute. Therefore,
there are two commuting subgroups of the set of four dimensional rotations. Arbitrary four dimensional rotations
have 6 degrees of freedom, each matrix represents 3 of those 6 degrees of freedom.
Since an infinitesimal four-dimensional rotation can be represented by a pair of quaternions (as follows), all
(non-infinitesimal) four-dimensional rotations can also be represented.
References
[1] Kuipers, Jack B. (1999). Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace, and Virtual Reality. Princeton University Press.
[2] Altmann, Simon L. (1986). Rotations, Quaternions, and Double Groups. Dover Publications (see especially Ch. 12).
[3] Bar-Itzhack, Itzhack Y. (2000), "New method for extracting the quaternion from a rotation matrix", AIAA Journal of Guidance, Control and Dynamics 23 (6): 1085–1087 (Engineering Note), doi:10.2514/2.4654, ISSN 0731-5090
Seven-dimensional cross product
Example
The postulates underlying construction of the seven-dimensional cross product are presented in the section
Definition. As context for that discussion, the historically first example of the cross product is tabulated below using
e1 to e7 as basis vectors.[3] [4] This table is one of 480 independent multiplication tables fitting the pattern that each
unit vector appears once in each column and once in each row.[5] Thus, each unit vector appears as a product in the
table six times, three times with a positive sign and three with a negative sign because of antisymmetry about the
diagonal of zero entries. For example, e1 = e2 × e3 = e4 × e5 = e7 × e6 and the negative entries are the reversed
cross-products.
(In that table the unit vectors e1 to e7 may also be labelled by the letters i, j, k, l, il, jl, kl, or alternatively
i, j, k, l, m, n, o; the full 7 × 7 table itself is not reproduced here.)
Entries in the interior give the product of the corresponding vectors on the left and the top in that order (the product
is anti-commutative). Some entries are highlighted to emphasize the symmetry.
The table can be summarized by the relation[4]

    e_i × e_j = ε_ijk e_k ,

where ε_ijk is a completely antisymmetric tensor with a positive value +1 when ijk = 123, 145, 176, 246, 257, 347,
365. By picking out the factors leading to the unit vector e1, for example, one finds the formula for the e1 component
of x × y, namely

    (x × y)_1 = x2 y3 − x3 y2 + x4 y5 − x5 y4 + x7 y6 − x6 y7 .
The top left 3 × 3 corner of the table is the same as the cross product in three dimensions. It also may be noticed that
orthogonality of the cross product to its constituents x and y is a requirement upon the entries in this table. However,
because of the many possible multiplication tables, general results for the cross product are best developed using a
basis-independent formulation, as introduced next.
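As an illustration of how such a table is used, the sketch below builds the cross product from the positive index triples quoted above (123, 145, 176, 246, 257, 347, 365) together with their cyclic completions and antisymmetry. The helper names are illustrative, not taken from the article.

```python
# A minimal sketch of the seven-dimensional cross product built from the positive
# index triples quoted above. Vectors are 7-element sequences indexed 1..7
# (stored 0-based). The triple list is taken directly from the epsilon values
# in this section.
POSITIVE_TRIPLES = [(1, 2, 3), (1, 4, 5), (1, 7, 6), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 6, 5)]

def cross7(x, y):
    """Seven-dimensional cross product of two length-7 sequences."""
    z = [0.0] * 7
    for (i, j, k) in POSITIVE_TRIPLES:
        # e_i x e_j = e_k, together with its cyclic and antisymmetric completions.
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            z[c - 1] += x[a - 1] * y[b - 1] - x[b - 1] * y[a - 1]
    return z

# The top-left 3x3 corner reproduces the ordinary cross product:
e1 = [1, 0, 0, 0, 0, 0, 0]
e2 = [0, 1, 0, 0, 0, 0, 0]
print(cross7(e1, e2))   # e3, i.e. [0, 0, 1, 0, 0, 0, 0]
```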
Definition
We can define a cross product on a Euclidean space V as a bilinear map from V × V to V mapping vectors x and y in
V to another vector x × y also in V, where x × y has the properties[1] [6]
    x · (x × y) = y · (x × y) = 0    (orthogonality),
and:
    |x × y|² = |x|² |y|² − (x · y)²    (magnitude),
where (x·y) is the Euclidean dot product and |x| is the vector norm. The first property states that the cross product is
perpendicular to its arguments, while the second property gives the magnitude of the cross product. An equivalent
expression in terms of the angle θ between the vectors[7] is[8]

    |x × y| = |x| |y| sin θ ,
or the area of the parallelogram in the plane of x and y with the two vectors as sides.[9] As a third alternative the
following can be shown to be equivalent to either expression for the magnitude:[10]
Further properties follow from the definition, including the following identities:
    x × y = −(y × x)    (anticommutativity),
    x · (y × z) = y · (z × x) = z · (x × y)    (scalar triple product),
    (Malcev identity),[8]
Other properties follow only in the three dimensional case, and are not satisfied by the seven dimensional cross
product, notably,
    x × (y × z) = (x · z) y − (x · y) z    (vector triple product),
    x × (y × z) + y × (z × x) + z × (x × y) = 0    (Jacobi identity).[8]
Coordinate expressions
To define a particular cross product, an orthonormal basis {ej} may be selected and a multiplication table provided
that determines all the products {ei× ej}. One possible multiplication table is described in the Example section, but it
is not unique.[5] Unlike three dimensions, there are many tables because every pair of unit vectors is perpendicular to
five other unit vectors, allowing many choices for each cross product.
Once we have established a multiplication table, it is then applied to general vectors x and y by expressing x and y in
terms of the basis and expanding x×y through bilinearity.
Lounesto's multiplication table: using e1 to e7 for the basis vectors, a different multiplication table from the one in
the Example section, leading to a different cross product, is given with anticommutativity by[8]

    e_i × e_(i+1) = e_(i+3) ,

with i = 1, ..., 7 modulo 7 and the indices i, i + 1 and i + 3 allowed to permute evenly. Together with anticommutativity
this generates the product. This rule directly produces the two diagonals immediately adjacent to the diagonal of
zeros in the table.
As the cross product is bilinear the operator x × – can be written as a matrix. The entries of that matrix can also be
obtained directly from the Fano diagram, with the rule that any two unit vectors on a straight line are connected by
multiplication to the third unit vector on that straight line, with signs according to the arrows (the sign of the
permutation that orders the unit vectors).
It can be seen that both multiplication rules follow from the same Fano diagram by simply renaming the unit vectors,
and changing the sense of the center unit vector. The question arises: how many multiplication tables are there?[14]
The question of possible multiplication tables arises, for example, when one reads another article on octonions,
which uses a different one from the one given by [Cayley, say]. Usually it is remarked that all 480 possible
ones are equivalent, that is, given an octonionic algebra with a multiplication table and any other valid
multiplication table, one can choose a basis such that the multiplication follows the new table in this basis.
One may also take the point of view, that there exist different octonionic algebras, that is, algebras with
different multiplication tables. With this interpretation...all these octonionic algebras are isomorphic.
—Jörg Schray, Corinne A Manogue, Octonionic representations of Clifford algebras and triality (1994)
This is bilinear, alternate, has the desired magnitude, but is not vector valued. The vector, and so the cross product,
comes from the product of this bivector with a trivector. In three dimensions up to a scale factor there is only one
trivector, the pseudoscalar of the space, and a product of the above bivector and one of the two unit trivectors gives
the vector result, the dual of the bivector.
A similar calculation is done in seven dimensions, except as trivectors form a 35-dimensional space there are many
trivectors that could be used, though not just any trivector will do. The trivector that gives the same product as the
above coordinate transform is
This is combined with the exterior product to give the cross product
Conversely, suppose V is a 7-dimensional Euclidean space with a given cross product. Then one can define a bilinear
multiplication on R ⊕ V as follows:

    (a, x)(b, y) = (ab − x · y, ay + bx + x × y).
The space R⊕V with this multiplication is then isomorphic to the octonions.[16]
The cross product only exists in three and seven dimensions as one can always define a multiplication on a space of
one higher dimension as above, and this space can be shown to be a normed division algebra. By Hurwitz's theorem
such algebras only exist in one, two, four, and eight dimensions, so the cross product must be in zero, one, three or
seven dimensions. The products in zero and one dimensions are trivial, so non-trivial cross products only exist in
three and seven dimensions.[17] [18]
The failure of the seven-dimensional cross product to satisfy the Jacobi identity is due to the nonassociativity of the
octonions.
Rotations
In three dimensions the cross product is invariant under the action of the rotation group SO(3), so the cross product
of x and y after they are rotated is the image of x × y under the rotation. But this invariance does not hold in seven
dimensions; that is, the cross product is not invariant under the group of rotations in seven dimensions, SO(7).
Instead it is invariant under the exceptional Lie group G2, a subgroup of SO(7).[16] [8]
Generalizations
Non-trivial binary cross products exist only in three and seven dimensions. But if the restriction that the product is
binary is lifted, so products of more than two vectors are allowed, then more products are possible.[19] [20] As with
the binary cross product, the product must be vector valued, linear, and anti-commutative in any two of the vectors in
the product. The product should satisfy orthogonality, so it is orthogonal to all its members. This means no more than
n − 1 vectors can be used in n dimensions. The magnitude of the product should equal the volume of the parallelotope
with the vectors as edges, which can be calculated using the Gram determinant. So the conditions are
    (a1 × ··· × ak) · ai = 0 for each i    (orthogonality)
    |a1 × ··· × ak|² = det(ai · aj)    (Gram determinant)
The Gram determinant is the squared volume of the parallelotope with a1, ..., ak as edges. If there are just two
vectors x and y it simplifies to the condition for the binary cross product given above, that is
where v is the same trivector as used in seven dimensions, ⌋ is again the left contraction, and w = -ve12...7 is a
4-vector.
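The orthogonality and Gram determinant conditions above can be checked numerically. One convenient way to realise such a product of n − 1 vectors in n dimensions is the cofactor (formal determinant) construction sketched below; this particular construction is an illustration chosen here, not a formula quoted from the article.

```python
# A minimal NumPy sketch of an (n-1)-fold product in n dimensions, computed
# component-wise from cofactors of the matrix whose rows are the input vectors.
# This makes it orthogonal to every argument and gives it the Gram-determinant
# magnitude, matching the two conditions stated above.
import numpy as np

def multi_cross(vectors):
    """Product of n-1 vectors in n dimensions (rows of an (n-1) x n array)."""
    A = np.asarray(vectors, dtype=float)
    k, n = A.shape
    assert k == n - 1
    result = np.empty(n)
    for i in range(n):
        minor = np.delete(A, i, axis=1)             # remove column i
        result[i] = (-1) ** i * np.linalg.det(minor)
    return result

a = np.array([1.0, 0.0, 0.0, 2.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
c = np.array([0.0, 0.0, 1.0, 1.0])
p = multi_cross([a, b, c])
print(np.dot(p, a), np.dot(p, b), np.dot(p, c))        # all zero (orthogonality)
gram = np.array([[np.dot(x, y) for y in (a, b, c)] for x in (a, b, c)])
print(np.isclose(np.dot(p, p), np.linalg.det(gram)))   # True (Gram determinant)
```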
Notes
[1] WS Massey (1983). "Cross products of vectors in higher dimensional Euclidean spaces" (http://www.jstor.org/stable/2323537). The American Mathematical Monthly (Mathematical Association of America) 90 (10): 697–701. doi:10.2307/2323537.
[2] WS Massey (1983). "cited work" (http://www.jstor.org/stable/2323537). The American Mathematical Monthly 90 (10): 697–701. "If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space."
[3] This table is due to Arthur Cayley (1845) and John T. Graves (1843). See G Gentili, C Stoppato, DC Struppa and F Vlacci (2009). "Recent developments for regular functions of a hypercomplex variable" (http://books.google.com/?id=H-5v6pPpyb4C&pg=PA168). In Irene Sabadini, M Shapiro, F Sommen. Hypercomplex analysis (Conference on quaternionic and Clifford analysis; proceedings ed.). Birkhäuser. p. 168. ISBN 9783764398927.
[4] Lev Vasilʹevitch Sabinin, Larissa Sbitneva, I. P. Shestakov (2006). "§17.2 Octonion algebra and its regular bimodule representation" (http://books.google.com/?id=_PEWt18egGgC&pg=PA235). Non-associative algebra and its applications. CRC Press. p. 235. ISBN 0824726693.
[5] Rafał Abłamowicz, Pertti Lounesto, Josep M. Parra (1996). "§ Four octonionic basis numberings" (http://books.google.com/?id=OpbY_abijtwC&pg=PA202). Clifford algebras with numeric and symbolic computations. Birkhäuser. p. 202. ISBN 0817639071.
[6] Mappings are restricted to be bilinear by (Massey 1993) and Robert B Brown and Alfred Gray (1967). "Vector cross products" (http://www.springerlink.com/content/a42n878560522255/). Commentarii Mathematici Helvetici (Birkhäuser Basel) 42 (1): 222–236. doi:10.1007/BF02564418.
[7] The definition of angle in n dimensions is ordinarily given in terms of the dot product as

    cos θ = (x · y) / (|x| |y|),

where θ is the angle between the vectors. Consequently, this property of the cross product provides its magnitude as

    |x × y| = |x| |y| sin θ.
References
• Brown, Robert B.; Gray, Alfred (1967). "Vector cross products". Commentarii Mathematici Helvetici 42 (1):
222–236. doi:10.1007/BF02564418.
• Lounesto, Pertti (2001). Clifford algebras and spinors (http://books.google.com/?id=kOsybQWDK4oC).
Cambridge, UK: Cambridge University Press. ISBN 0-521-00551-5.
• Silagadze, Z.K. (2002). "Multi-dimensional vector product" (http://iopscience.iop.org/0305-4470/35/23/310).
J Phys A: Math Gen 35: 4949. doi:10.1088/0305-4470/35/23/310. Also available as ArXiv reprint
arXiv:math.RA/0204357.
• Massey, W.S. (1983). "Cross products of vectors in higher dimensional Euclidean spaces". JSTOR 2323537.
Octonion
In mathematics, the octonions are a normed division algebra over the real numbers. There are only four such
algebras, the other three being the quaternions, complex numbers and real numbers. The octonions are the largest
such algebra, with eight dimensions, double the number of dimensions of the quaternions, from which they are an
extension. They are noncommutative and nonassociative, but satisfy a weaker form of associativity, power
associativity. The octonion algebra is usually represented by the capital letter O, using boldface O or blackboard bold 𝕆.
Octonions are not as well known as the quaternions and complex numbers, which are much more widely studied and
used. Despite this they have some interesting properties and are related to a number of exceptional structures in
mathematics, among them the exceptional Lie groups. Additionally, octonions have applications in fields such as
string theory, special relativity, and quantum logic.
History
The octonions were discovered in 1843 by John T. Graves, inspired by his friend William Hamilton's discovery of
quaternions. Graves called his discovery octaves. They were discovered independently by Arthur Cayley (1845).
They are sometimes referred to as Cayley numbers or the Cayley algebra.
Definition
The octonions can be thought of as octets (or 8-tuples) of real numbers. Every octonion is a real linear combination
of the unit octonions {e0, e1, e2, e3, e4, e5, e6, e7}, where e0 is the scalar element. That is, every octonion x can be
written in the form

    x = x0 e0 + x1 e1 + x2 e2 + x3 e3 + x4 e4 + x5 e5 + x6 e6 + x7 e7

with real coefficients {xi}.
Addition of octonions is accomplished by adding corresponding coefficients, as with the complex numbers and
quaternions. By linearity, multiplication of octonions is completely determined once given a multiplication table for
the unit octonions such as that below.[1]
(The 8 × 8 multiplication table of the unit octonions is not reproduced here; in it the units e1 to e7 are also
numbered 1 to 7, lettered i, j, k, l, il, jl, kl, or alternatively i, j, k, l, m, n, o.)
Some entries are tinted to emphasize the table symmetry. This particular table has a diagonal from lower left to
upper right of e7's, and the bottom row and rightmost column are a reverse ordering of the index rows at the top and
left of the table.
The table can be summarized by the relation:[2]

    e_i e_j = −δ_ij e0 + ε_ijk e_k ,

where ε_ijk is a completely antisymmetric tensor with a positive value +1 when ijk = 123, 145, 176, 246, 257, 347,
365, and:

    e0 e_i = e_i e0 = e_i ,   e0 e0 = e0 ,

so that e0 is the identity element.
Cayley–Dickson construction
A more systematic way of defining the octonions is via the Cayley–Dickson construction. Just as quaternions can be
defined as pairs of complex numbers, the octonions can be defined as pairs of quaternions. Addition is defined
pairwise. The product of two pairs of quaternions (a, b) and (c, d) is defined by

    (a, b)(c, d) = (a c − d* b, d a + b c*),

where z* denotes the conjugate of the quaternion z. This definition is equivalent to the one given above when the
eight unit octonions are identified with the pairs
(1,0), (i,0), (j,0), (k,0), (0,1), (0,i), (0,j), (0,k)
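The Cayley–Dickson product above is easy to implement directly on pairs of quaternions. The sketch below does so in plain Python (the helper names are illustrative) and reproduces a few entries of the multiplication table, including a failure of associativity; the pair product follows the formula (a, b)(c, d) = (a c − d* b, d a + b c*) used above.

```python
# A minimal sketch of the Cayley-Dickson construction described above: octonions as
# pairs of quaternions. Quaternions are 4-tuples (w, x, y, z).
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def qsub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def omul(x, y):
    # x = (a, b), y = (c, d) are pairs of quaternions: (a,b)(c,d) = (ac - d*b, da + bc*).
    a, b = x
    c, d = y
    return (qsub(qmul(a, c), qmul(qconj(d), b)),
            qadd(qmul(d, a), qmul(b, qconj(c))))

# Unit octonions as pairs, following the identification in the text:
e1 = ((0, 1, 0, 0), (0, 0, 0, 0))   # (i, 0)
e2 = ((0, 0, 1, 0), (0, 0, 0, 0))   # (j, 0)
e4 = ((0, 0, 0, 0), (1, 0, 0, 0))   # (0, 1)

print(omul(e1, e2))                 # (k, 0) = e3
print(omul(e1, e4))                 # (0, i) = e5
print(omul(omul(e1, e2), e4))       # e3 e4 = e7
print(omul(e1, omul(e2, e4)))       # e1 e6 = -e7: multiplication is not associative
```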
(Figure: a simple mnemonic for the products of the unit octonions, the Fano plane.)
Let (a, b, c) be an ordered triple of points lying on a given line with the order specified by the direction of the
arrow. Then multiplication is given by

    ab = c and ba = −c,
together with cyclic permutations. These rules together with
• 1 is the multiplicative identity,
• e_i² = −1 for each point e_i in the diagram
completely define the multiplicative structure of the octonions. Each of the seven lines generates a subalgebra of O
isomorphic to the quaternions H.
Properties
Octonionic multiplication is neither commutative:

    e_i e_j = −e_j e_i ≠ e_j e_i    (for distinct non-zero i and j),

nor associative:

    (e_i e_j) e_k = −e_i (e_j e_k) ≠ e_i (e_j e_k)    (for distinct non-zero i, j, k with e_i e_j ≠ ±e_k).
The octonions do satisfy a weaker form of associativity: they are alternative. This means that the subalgebra
generated by any two elements is associative. Actually, one can show that the subalgebra generated by any two
elements of O is isomorphic to R, C, or H, all of which are associative. Because of their non-associativity, octonions
don't have matrix representations, unlike quaternions.
The octonions do retain one important property shared by R, C, and H: the norm on O satisfies

    ‖xy‖ = ‖x‖ ‖y‖.
This implies that the octonions form a nonassociative normed division algebra. The higher-dimensional algebras
defined by the Cayley–Dickson construction (e.g. the sedenions) all fail to satisfy this property. They all have zero
divisors.
Wider number systems exist which have a multiplicative modulus (e.g. 16 dimensional conic sedenions). Their
modulus is defined differently from their norm, and they also contain zero divisors.
It turns out that the only normed division algebras over the reals are R, C, H, and O. These four algebras also form
the only alternative, finite-dimensional division algebras over the reals (up to isomorphism).
Not being associative, the nonzero elements of O do not form a group. They do, however, form a loop, indeed a
Moufang loop.
Automorphisms
An automorphism, A, of the octonions is an invertible linear transformation of O which satisfies

    A(xy) = A(x) A(y).
The set of all automorphisms of O forms a group called G2. The group G2 is a simply connected, compact, real Lie
group of dimension 14. This group is the smallest of the exceptional Lie groups and is isomorphic to the subgroup of
SO(7) that preserves any chosen particular vector in its 8-dimensional real spinor representation.
See also: PSL(2,7) - the automorphism group of the Fano plane.
Quotes
“The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on.
The complex numbers are a slightly flashier but still respectable younger brother: not ordered, but
algebraically complete. The quaternions, being noncommutative, are the eccentric cousin who is shunned at
important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic: they are
nonassociative.”
—John Baez
References
• Baez, John (2002), "The Octonions" (http://www.ams.org/bull/2002-39-02/S0273-0979-01-00934-X/home.
html), Bull. Amer. Math. Soc. 39 (02): 145–205, doi:10.1090/S0273-0979-01-00934-X. Online HTML versions at
Baez's site (http://math.ucr.edu/home/baez/octonions/) or see lanl.arXiv.org copy (http://xxx.lanl.gov/abs/
math/0105155v4)
• Cayley, Arthur (1845), "On Jacobi's elliptic functions, in reply to the Rev..; and on quaternions", Philos. Mag. 26:
208–211. Appendix reprinted in "The Collected Mathematical Papers", Johnson Reprint Co., New York, 1963,
p. 127.
• Conway, John Horton; Smith, Derek A. (2003), On Quaternions and Octonions: Their Geometry, Arithmetic, and
Symmetry, A. K. Peters, Ltd., ISBN 1-56881-134-9. ( Review (http://nugae.wordpress.com/2007/04/25/
on-quaternions-and-octonions/)).
External links
• Octonions and the Fano Plane Mnemonic (video demonstration) (http://www.youtube.com/
watch?v=5sLnYi_AbEI)
Multilinear algebra
In mathematics, multilinear algebra extends the methods of linear algebra. Just as linear algebra is built on the
concept of a vector and develops the theory of vector spaces, multilinear algebra builds on the concepts of p-vectors
and multivectors with Grassmann algebra.
Origin
In a vector space of dimension n, one usually considers only the vectors. For Hermann Grassmann and others this
presumption misses the complexity of considering the structures of pairs, triples, and general multivectors. Since
there are several combinatorial possibilities, the space of multivectors turns out to have 2ⁿ dimensions. The abstract
formulation of the determinant is the most immediate application. Multilinear algebra also has applications in
mechanical study of material response to stress and strain with various moduli of elasticity. This practical reference
led to the use of the word tensor to describe the elements of the multilinear space. The extra structure in a multilinear
space has led it to play an important role in various studies in higher mathematics. Though Grassmann started the
subject in 1844 with his Ausdehnungslehre, and re-published in 1862, his work was slow to find acceptance as
ordinary linear algebra provided sufficient challenges to comprehension.
The topic of multilinear algebra is applied in some studies of multivariate calculus and manifolds where the Jacobian
matrix comes into play. The infinitesimal differentials of single variable calculus become differential forms in
multivariate calculus, and their manipulation is done with exterior algebra.
After some preliminary work by Elwin Bruno Christoffel, a major advance in multilinear algebra came in the work
of Gregorio Ricci-Curbastro and Tullio Levi-Civita (see references). It was the absolute differential calculus form of
multilinear algebra that Marcel Grossmann and Michele Besso introduced to Albert Einstein. The publication in 1915
by Einstein of a general relativity explanation for the precession of the perihelion of Mercury established multilinear
algebra and tensors as important mathematics.
Indeed what was done is almost precisely to explain that tensor spaces are the constructions required to reduce
multilinear problems to linear problems. This purely algebraic attack conveys no geometric intuition.
Its benefit is that by re-expressing problems in terms of multilinear algebra, there is a clear and well-defined 'best
solution': the constraints the solution exerts are exactly those you need in practice. In general there is no need to
invoke any ad hoc construction, geometric idea, or recourse to co-ordinate systems. In the category-theoretic jargon,
everything is entirely natural.
References
• Hermann Grassmann (2000) Extension Theory, American Mathematical Society. Translation by Lloyd
Kannenberg of the 1862 Ausdehnungslehre.
• Wendell H. Fleming (1965) Functions of Several Variables, Addison-Wesley.
Second edition (1977) Springer ISBN 3540902066.
Chapter: Exterior algebra and differential calculus # 6 in 1st ed, # 7 in 2nd.
• Ricci-Curbastro, Gregorio; Levi-Civita, Tullio (1900), "Méthodes de calcul différentiel absolu et leurs
applications", Mathematische Annalen 54 (1): 125–201, doi:10.1007/BF01454201, ISSN 1432-1807
Pseudovector
In physics and mathematics, a pseudovector (or axial vector) is a
quantity that transforms like a vector under a proper rotation, but gains
an additional sign flip under an improper rotation such as a reflection.
Geometrically it is the opposite, of equal magnitude but in the opposite
direction, of its mirror image. This is as opposed to a true or polar
vector (more formally, a contravariant vector), which on reflection
matches its mirror image.
In three dimensions the pseudovector p is associated with the cross product of two polar vectors a and b:[2]

    p = a × b.

(Figure: a loop of wire (black), carrying a current, creates a magnetic field (blue). If the position and current of the
wire are reflected across the dotted line, the magnetic field it generates would not be reflected: instead, it would be
reflected and reversed. The position of the wire and its current are (polar) vectors, but the magnetic field is a
pseudovector.[1])
The vector p calculated this way is a pseudovector. One example is the normal to a plane. A plane can be defined by
two non-parallel vectors, a and b,[3] which can be said to span the plane. The vector a × b is a normal to the plane
(there are two normals, one on each side; which one is obtained can be determined by the right-hand rule), and is a
pseudovector. This has consequences in computer graphics where it has to be considered when transforming surface
normals.
A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and
angular velocity. In mathematics pseudovectors are equivalent to three dimensional bivectors, from which the
transformation rules of pseudovectors can be derived. More generally in n-dimensional geometric algebra
pseudovectors are the elements of the algebra with dimension n − 1, written Λn−1Rn. The label 'pseudo' can be
further generalized to pseudoscalars and pseudotensors, both of which gain an extra sign flip under improper
rotations compared to a true scalar or tensor.
Physical examples
Physical examples of pseudovectors include the magnetic field, torque, vorticity, and the angular momentum.
Often, the distinction between vectors and pseudovectors is overlooked, but it becomes important in understanding
and exploiting the effect of symmetry on the solution to physical systems. For example, consider the case of an
electrical current loop in the z = 0 plane, which has a magnetic field at z = 0 that is oriented in the z direction. This
system is symmetric (invariant) under mirror reflections through the plane (an improper rotation), so the magnetic
field should be unchanged by the reflection. But reflecting the magnetic field through that plane naively appears to
change its sign if it is viewed as a vector field; this contradiction is resolved by realizing that the mirror reflection
of the field induces an extra sign flip because of its pseudovector nature, so the mirror flip in the end leaves the
magnetic field unchanged as expected.
Details
The definition of a "vector" in physics (including both polar vectors and pseudovectors) is more specific than the
mathematical definition of "vector" (namely, any element of an abstract vector space). Under the physics definition,
a "vector" is required to have components that "transform" in a certain way under a proper rotation: In particular, if
everything in the universe were rotated, the vector would rotate in exactly the same way. (The coordinate system is
fixed in this discussion; in other words this is the perspective of active transformations.) Mathematically, if
everything in the universe undergoes a rotation described by a rotation matrix R, so that a displacement vector x is
transformed to x′ = Rx, then any "vector" v must be similarly transformed to v′ = Rv. This important requirement is
what distinguishes a vector (which might be composed of, for example, the x, y, and z components of velocity) from
any other triplet of physical quantities. (For example, the length, width, and height of a rectangular box cannot be
considered the three components of a vector, since rotating the box does not appropriately transform these three
components.)
(In the language of differential geometry, this requirement is equivalent to defining a vector to be a tensor of
contravariant rank one.)
The discussion so far only relates to proper rotations, i.e. rotations about an axis. However, one can also consider
improper rotations, i.e. a mirror-reflection possibly followed by a proper rotation. (One example of an improper
rotation is inversion.) Suppose everything in the universe undergoes an improper rotation described by the rotation
matrix R, so that a position vector x is transformed to x′ = Rx. If the vector v is a polar vector, it will be transformed
to v′ = Rv. If it is a pseudovector, it will be transformed to v′ = -Rv.
The transformation rules for polar vectors and pseudovectors can be compactly stated as

    v′ = R v    (polar vector)
    v′ = (det R) (R v)    (pseudovector)

where the symbols are as described above, and the rotation matrix R can be either proper or improper. The symbol
det denotes the determinant; this formula works because the determinants of proper and improper rotation matrices
are +1 and −1, respectively.
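These two rules are easy to check numerically with a reflection, which has determinant −1. The NumPy sketch below builds a pseudovector as a cross product and confirms that it follows the second rule rather than the first.

```python
# A minimal NumPy sketch of the two transformation rules above, using a reflection
# (an improper rotation, det R = -1) and a cross product as the pseudovector.
import numpy as np

R = np.diag([1.0, 1.0, -1.0])          # reflection in the z = 0 plane, det R = -1
a = np.array([1.0, 2.0, 3.0])          # polar vectors
b = np.array([0.0, 1.0, 1.0])
p = np.cross(a, b)                     # pseudovector built from them

polar_rule  = R @ p                               # wrong for a pseudovector
pseudo_rule = np.linalg.det(R) * (R @ p)          # correct transformation

print(np.allclose(np.cross(R @ a, R @ b), pseudo_rule))   # True
print(np.allclose(np.cross(R @ a, R @ b), polar_rule))    # False
```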
So v3 is also a pseudovector. Similarly one can show that the difference between two pseudovectors is a
pseudovector, that the sum or difference of two polar vectors is a polar vector, that multiplying a polar vector by any
real number yields another polar vector, and that multiplying a pseudovector by any real number yields another
pseudovector.
On the other hand, suppose v1 is known to be a polar vector, v2 is known to be a pseudovector, and v3 is defined to
be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to

    v3′ = R v1 + (det R) (R v2).

Therefore, v3 is neither a polar vector nor a pseudovector. For an improper rotation, v3 does not in general even keep
the same magnitude:

    |v3| = |v1 + v2|, but |v3′| = |v1 − v2|.
If the magnitude of v3 were to describe a measurable physical quantity, that would mean that the laws of physics
would not appear the same if the universe was viewed in a mirror. In fact, this is exactly what happens in the weak
interaction: Certain radioactive decays treat "left" and "right" differently, a phenomenon which can be traced to the
summation of a polar vector with a pseudovector in the underlying theory. (See parity violation.)
    (R v1) × (R v2) = (det R) (R (v1 × v2)),

where v1 and v2 are any three-dimensional vectors. (This equation can be proven either through a geometric
argument or through an algebraic calculation, and is well known.)
Suppose v1 and v2 are known polar vectors, and v3 is defined to be their cross product, v3 = v1 × v2. If the universe is
transformed by a rotation matrix R, then v3 is transformed to

    v3′ = (R v1) × (R v2) = (det R) (R v3).
Examples
From the definition, it is clear that a displacement vector is a polar vector. The velocity vector is a displacement
vector (a polar vector) divided by time (a scalar), so is also a polar vector. Likewise, the momentum vector is the
velocity vector (a polar vector) times mass (a scalar), so is a polar vector. Angular momentum is the cross product of
a displacement (a polar vector) and momentum (a polar vector), and is therefore a pseudovector. Continuing this
way, it is straightforward to classify any vector as either a pseudovector or polar vector.
Geometric algebra
In geometric algebra the basic elements are vectors, and these are used to build an hierarchy of elements using the
definitions of products in this algebra. In particular, the algebra builds pseudovectors from vectors.
The basic multiplication in the geometric algebra is the geometric product, denoted by simply juxtaposing two
vectors as in ab. This product is expressed as:

    ab = a · b + a ∧ b,

where the leading term is the customary vector dot product and the second term is called the wedge product. Using
the postulates of the algebra, all combinations of dot and wedge products can be evaluated. A terminology to
describe the various combinations is provided. For example, a multivector is a summation of k-fold wedge products
of various k-values. A k-fold wedge product also is referred to as a k-blade.
In the present context the pseudovector is one of these combinations. This term is attached to a different multivector
depending upon the dimensions of the space (that is, the number of linearly independent vectors in the space). In
three dimensions, the most general 2-blade or bivector can be expressed as a single wedge product and is a
pseudovector.[5] In four dimensions, however, the pseudovectors are trivectors.[6] In general, it is an (n − 1)-blade,
where n is the dimension of the space and algebra.[7] An n-dimensional space has n vectors and also n pseudovectors.
Each pseudovector is formed from the outer (wedge) product of all but one of the n vectors. For instance, in four
dimensions where the vectors are: {e1, e2, e3, e4}, the pseudovectors can be written as: {e234, e134, e124, e123}.
where superscripts label vector components. On the other hand, the plane of the two vectors is represented by the
exterior product or wedge product, denoted by a ∧ b. In this context of geometric algebra, this bivector is called a
pseudovector, and is the dual of the cross product.[9] The dual of e1 is introduced as e23 ≡ e2e3 = e2 ∧ e3, and so forth.
That is, the dual of e1 is the subspace perpendicular to e1, namely the subspace spanned by e2 and e3. With this
understanding,[10]
For details see Hodge dual. Comparison shows that the cross product and wedge product are related by:
Using the above relations, it is seen that if the vectors a and b are inverted by changing the signs of their components
while leaving the basis vectors fixed, both the pseudovector and the cross product are invariant. On the other hand, if
the components are fixed and the basis vectors eℓ are inverted, then the pseudovector is invariant, but the cross
product changes sign. This behavior of cross products is consistent with their definition as vector-like elements that
change sign under transformation from a right-handed to a left-handed coordinate system, unlike polar vectors.
Note on usage
As an aside, it may be noted that not all authors in the field of geometric algebra use the term pseudovector, and
some authors follow the terminology that does not distinguish between the pseudovector and the cross product.[14]
However, because the cross product does not generalize beyond three dimensions,[15] the notion of pseudovector
based upon the cross product also cannot be extended to higher dimensions. The pseudovector as the (n–1)-blade of
an n-dimensional space is not so restricted.
Another important note is that pseudovectors, despite their name, are "vectors" in the common mathematical sense,
i.e. elements of a vector space. The idea that "a pseudovector is different from a vector" is only true with a different
and more specific definition of the term "vector" as discussed above.
Notes
[1] Stephen A. Fulling, Michael N. Sinyakov, Sergei V. Tischchenko (2000). Linearity and the mathematics of several variables (http:/ / books.
google. com/ books?id=Eo3mcd_62DsC& pg=RA1-PA343& dq=pseudovector+ "magnetic+ field"& lr=& as_drrb_is=q& as_minm_is=0&
as_miny_is=& as_maxm_is=0& as_maxy_is=& as_brr=0& cd=1#v=onepage& q=pseudovector "magnetic field"& f=false). World Scientific.
p. 343. ISBN 9810241968. .
[2] Aleksandr Ivanovich Borisenko, Ivan Evgenʹevich Tarapov (1979). Vector and tensor analysis with applications (http:/ / books. google. com/
books?id=CRIjIx2ac6AC& pg=PA125& dq="C+ is+ a+ pseudovector. + Note+ that"& lr=& as_drrb_is=q& as_minm_is=0& as_miny_is=&
as_maxm_is=0& as_maxy_is=& as_brr=0& cd=1#v=onepage& q="C is a pseudovector. Note that"& f=false) (Reprint of 1968 Prentice-Hall
ed.). Courier Dover. p. 125. ISBN 0486638332. .
[3] RP Feynman: §52-5 Polar and axial vectors (http:/ / student. fizika. org/ ~jsisko/ Knjige/ Opca Fizika/ Feynman Lectures on Physics/ Vol 1
Ch 52 - Symmetry in Physical Laws. pdf) from Chapter 52: Symmetry and physical laws, in: Feynman Lectures in Physics, Vol. 1
[4] See Feynman Lectures (http:/ / student. fizika. org/ ~jsisko/ Knjige/ Opca Fizika/ Feynman Lectures on Physics/ ).
[5] William M Pezzaglia Jr. (1992). "Clifford algebra derivation of the characteristic hypersurfaces of Maxwell's equations" (http:/ / books.
google. com/ books?id=KfNgBHNUW_cC& pg=PA131). In Julian Ławrynowicz. Deformations of mathematical structures II. Springer.
p. 131 ff. ISBN 0792325761. .
[6] In four dimensions, such as a Dirac algebra, the pseudovectors are trivectors. Venzo De Sabbata, Bidyut Kumar Datta (2007). Geometric
algebra and applications to physics (http:/ / books. google. com/ books?id=AXTQXnws8E8C& pg=PA64& dq=bivector+ trivector+
pseudovector+ "geometric+ algebra"& lr=& as_drrb_is=b& as_minm_is=0& as_miny_is=2005& as_maxm_is=0& as_maxy_is=2010&
as_brr=0& cd=1#v=onepage& q=bivector trivector pseudovector "geometric algebra"& f=false). CRC Press. p. 64. ISBN 1584887729. .
[7] William E Baylis (2004). "§4.2.3 Higher-grade multivectors in Cℓn: Duals" (http://books.google.com/books?id=oaoLbMS3ErwC&pg=PA100). Lectures on Clifford (geometric) algebras and applications. Birkhäuser. p. 100. ISBN 0817632573.
[8] William E Baylis (1994). Theoretical methods in the physical sciences: an introduction to problem solving using Maple V (http:/ / books.
google. com/ books?id=pEfMq1sxWVEC& pg=PA234). Birkhäuser. p. 234, see footnote. ISBN 081763715X. .
[9] R Wareham, J Cameron & J Lasenby (2005). "Application of conformal geometric algebra in computer vision and graphics" (http:/ / books.
google. com/ books?id=uxofVAQE3LoC& pg=PA330& dq="is+ termed+ the+ dual+ of+ x"& lr=& as_drrb_is=q& as_minm_is=0&
as_miny_is=& as_maxm_is=0& as_maxy_is=& as_brr=0& cd=1#v=onepage& q="is termed the dual of x"& f=false). Computer algebra and
geometric algebra with applications. Springer. p. 330. ISBN 3540262962. . In three dimensions, a dual may be right-handed or left-handed;
see Leo Dorst, Daniel Fontijne, Stephen Mann (2007). "Figure 3.5: Duality of vectors and bivectors in 3-D" (http:/ / books. google. com/
books?id=-1-zRTeCXwgC& pg=PA82). Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry (2nd ed.).
Morgan Kaufmann. p. 82. ISBN 0123749425. .
[10] Christian Perwass (2009). "§1.5.2 General vectors" (http:/ / books. google. com/ books?id=8IOypFqEkPMC& pg=PA17#v=onepage& q=&
f=false). Geometric Algebra with Applications in Engineering. Springer. p. 17. ISBN 354089067X. .
[11] David Hestenes (1999). "The vector cross product" (http:/ / books. google. com/ books?id=AlvTCEzSI5wC& pg=PA60). New foundations
for classical mechanics: Fundamental Theories of Physics (2nd ed.). Springer. p. 60. ISBN 0792353021. .
[12] Venzo De Sabbata, Bidyut Kumar Datta (2007). "The pseudoscalar and imaginary unit" (http:/ / books. google. com/
books?id=AXTQXnws8E8C& pg=PA53). Geometric algebra and applications to physics. CRC Press. p. 53 ff. ISBN 1584887729. .
[13] Eduardo Bayro Corrochano, Garret Sobczyk (2001). Geometric algebra with applications in science and engineering (http:/ / books. google.
com/ books?id=GVqz9-_fiLEC& pg=PA126). Springer. p. 126. ISBN 0817641998. .
[14] For example, Bernard Jancewicz (1988). Multivectors and Clifford algebra in electrodynamics (http:/ / books. google. com/
books?id=seFyL-UWoj4C& pg=PA11#v=onepage& q=& f=false). World Scientific. p. 11. ISBN 9971502909. .
[15] Stephen A. Fulling, Michael N. Sinyakov, Sergei V. Tischchenko (2000). Linearity and the mathematics of several variables (http:/ / books.
google. com/ books?id=Eo3mcd_62DsC& pg=RA1-PA340). World Scientific. p. 340. ISBN 9810241968. .
General references
• Richard Feynman, Feynman Lectures on Physics, Vol. 1 Chap. 52. See §52-5: Polar and axial vectors, p. 52-6
(http://student.fizika.org/~jsisko/Knjige/Opca Fizika/Feynman Lectures on Physics/Vol 1 Ch 52 - Symmetry
in Physical Laws.pdf)
• George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists (Harcourt: San Diego, 2001). (ISBN
0-12-059815-9)
• John David Jackson, Classical Electrodynamics (Wiley: New York, 1999). (ISBN 0-471-30932-X)
• Susan M. Lea, "Mathematics for Physicists" (Thompson: Belmont, 2004) (ISBN 0-534-37997-4)
• Chris Doran and Anthony Lasenby, "Geometric Algebra for Physicists" (Cambridge University Press: Cambridge,
2007) (ISBN 978-0-521-71595-9)
• William E Baylis (2004). "Chapter 4: Applications of Clifford algebras in physics" (http://books.google.com/
books?id=oaoLbMS3ErwC&pg=PA100). In Rafał Abłamowicz, Garret Sobczyk. Lectures on Clifford
(geometric) algebras and applications. Birkhäuser. p. 100 ff. ISBN 0817632573.: The dual of the wedge product
a b is the cross product a × b.
Bivector
In mathematics, a bivector or 2-vector is a quantity in geometric algebra or exterior algebra that generalises the idea
of a vector. If a scalar is considered a zero dimensional quantity, and a vector is a one dimensional quantity, then a
bivector can be thought of as two dimensional. Bivectors have applications in many areas of mathematics and
physics. They are related to complex numbers in two dimensions and to both pseudovectors and quaternions in three
dimensions. They can be used to generate rotations in any dimension, and are a useful tool for classifying such
rotations. They also are used in physics, tying together a number of otherwise unrelated quantities.
Bivectors are generated by the exterior product on vectors – given two vectors a and b their exterior product a ∧ b is
a bivector. But not all bivectors can be generated this way, and in higher dimensions a sum of exterior products is
often needed. More precisely a bivector that requires only a single exterior product is simple; in two and three
dimensions all bivectors are simple, but in higher dimensions this is not generally the case.[1] The exterior product is
antisymmetric, so b ∧ a negates the bivector, producing a rotation with the opposite sense, and a ∧ a is the zero
bivector.
Geometrically, a simple bivector can be interpreted as an oriented plane
segment, much as vectors can be thought of as directed line segments.[3]
Specifically for the bivector a ∧ b, its magnitude is the area of the
parallelogram with edges a and b, its attitude that of any plane specified
by a and b, and its orientation the sense of the rotation that would align a
with b. It does not have a definite location or position.[3] [4]
History
The bivector was first defined in 1844 by German mathematician
Hermann Grassmann in exterior algebra, as the result of the exterior
product. Around the same time in 1843 in Ireland William Rowan
Hamilton discovered quaternions. It was not until English mathematician
William Kingdon Clifford in 1878 added the geometric product to Grassmann's algebra, incorporating the ideas of
both Hamilton and Grassmann, and founded Clifford algebra, that the bivector as it is known today was fully
understood.
(Figure: parallel plane segments with the same orientation and area corresponding to the same bivector a ∧ b.[2])
Around this time Josiah Willard Gibbs and Oliver Heaviside developed vector calculus which included separate
cross product and dot products, derived from quaternion multiplication.[5] [6] [7] The success of vector calculus, and
of the book Vector Analysis by Gibbs and Wilson, meant the insights of Hamilton and Clifford were overlooked for
a long time, as much of 20th century mathematics and physics was formulated in vector terms. Gibbs instead
described bivectors as vectors, and used "bivector" to describe an unrelated quantity, a use that has sometimes been
copied.[8] [9] [10]
Today the bivector is largely studied as a topic in geometric algebra, a more restricted Clifford algebra over real or
complex vector spaces with nondegenerate quadratic form. Its resurgence was led by David Hestenes who, along
with others, discovered a range of new applications in physics for geometric algebra.[11]
Formal definition
For this article the bivector will be considered only in real geometric algebras. This in practice is not much of a
restriction, as all useful applications are drawn from such algebras. Also unless otherwise stated all examples have a
Euclidean metric and so a quadratic form with signature 1.
Contraction:

    a² = Q(a) = ϵa |a|²,

where Q is the quadratic form, |a| is the magnitude of a and ϵa is the signature of the vector. For a space with
Euclidean metric ϵa is 1, so it can be omitted, and the contraction condition becomes:

    a² = |a|².

From the contraction rule applied to the sum of two vectors, the quantity

    ½(ab + ba) = ½((a + b)² − a² − b²)

is a sum of scalars and so a scalar. From the law of cosines on the triangle formed by the vectors its value is
|a||b| cos θ, where θ is the angle between the vectors. It is therefore identical to the interior product between two
vectors, and is written the same way,

    a · b = ½(ab + ba).
It is symmetric, scalar valued, and can be used to determine the angle between two vectors: in particular if a and b
are orthogonal the product is zero.
By addition:

    ab = a · b + a ∧ b.
That is the geometric product is the sum of the symmetric interior product and antisymmetric exterior product.
To calculate a ∧ b consider the sum
Bivector 216
With a negative square it cannot be a scalar or vector quantity, so it is a new sort of object, a bivector. It has
magnitude |a||b|sinθ, where θ is the angle between the vectors, and so is zero for parallel vectors.
To distinguish them from vectors, bivectors are written here with bold capitals, for example:

    A = a ∧ b,

although other conventions are used, in particular as vectors and bivectors are both elements of the geometric
algebra.
Properties
The even subalgebra of the geometric algebra, Cℓ+n(ℝ), has dimension 2^(n−1), and contains Λ2ℝn as a linear subspace with dimension ½ n(n − 1) (a triangular number). In two
and three dimensions the even subalgebra contains only scalars and bivectors, and each is of particular interest. In
two dimensions the even subalgebra is isomorphic to the complex numbers, ℂ, while in three it is isomorphic to the
quaternions, ℍ. More generally the even subalgebra can be used to generate rotations in any dimension, and can be
generated by bivectors in the algebra.
Magnitude
As noted in the previous section the magnitude of a simple bivector, that is one that is the exterior product of two
vectors a and b, is |a||b|sin θ, where θ is the angle between the vectors. It is written |B|, where B is the bivector.
For general bivectors the magnitude can be calculated by taking the norm of the bivector considered as a vector in
the space Λ2ℝn. If the magnitude is zero then all the bivector's components are zero, and the bivector is the zero
bivector which as an element of the geometric algebra equals the scalar zero.
Unit bivectors
A unit bivector is one with unit magnitude. It can be derived from any non-zero bivector B by dividing the bivector by
its magnitude, that is

    B / |B|.

Of particular interest are the unit bivectors formed from the products of the standard basis. If ei and ej are distinct
basis vectors then the product ei ∧ ej is a bivector. As the vectors are orthogonal this is just eiej, written eij, with unit
magnitude as the vectors are unit vectors. The set of all such bivectors form a basis for Λ2ℝn. For instance in four
dimensions the basis for Λ2ℝ4 is (e1e2, e1e3, e1e4, e2e3, e2e4, e3e4) or (e12, e13, e14, e23, e24, e34).[13]
Simple bivectors
The exterior product of two vectors is a bivector, but not all bivectors are exterior products of two vectors. For
example in four dimensions the bivector

    e1 ∧ e2 + e3 ∧ e4

cannot be written as the exterior product of two vectors. A bivector that can be written as the exterior product of two
vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions; in four
dimensions every bivector is the sum of at most two exterior products. A bivector has a real square if and only if it is
simple, and only simple bivectors can be represented geometrically by an oriented plane area.[1]
The geometric product of two bivectors, A and B, can be split as AB = A · B + A × B + A ∧ B. The quantity A · B is
the scalar valued interior product, while A ∧ B is the grade 4 exterior product that arises in four or more dimensions.
The quantity A × B is the bivector valued commutator product, given by

    A × B = ½(AB − BA).[14]
The space of bivectors Λ2ℝn is a Lie algebra over ℝ, with the commutator product as the Lie bracket. The full
geometric product of bivectors generates the even subalgebra.
Of particular interest is the product of a bivector with itself. As the commutator product is antisymmetric the product
simplifies to

    B B = B · B + B ∧ B.

If the bivector is simple the last term is zero and the product is the scalar valued B · B, which can be used as a check
for simplicity. In particular the exterior product of bivectors only exists in four or more dimensions, so all bivectors
in two and three dimensions are simple.[1]
Two dimensions
When working with coordinates in geometric algebra it is usual to write the basis vectors as (e1, e2, ...), a convention
that will be used here.
A vector in real two dimensional space ℝ2 can be written a = a1e1 + a2e2, where a1 and a2 are real numbers and e1
and e2 are orthonormal basis vectors. The geometric product of two such vectors is

    ab = (a1e1 + a2e2)(b1e1 + b2e2) = a1b1 + a2b2 + (a1b2 − a2b1) e1e2.

This can be split into the symmetric, scalar valued, interior product and an antisymmetric, bivector valued, exterior
product:

    a · b = a1b1 + a2b2 ,    a ∧ b = (a1b2 − a2b1) e12 .
All bivectors in two dimensions are of this form, that is multiples of the bivector e1e2, written e12 to emphasise it is a
bivector rather than a vector. The magnitude of e12 is 1, with

    e12² = e1e2e1e2 = −e1e1e2e2 = −1,
so it is called the unit bivector. The term unit bivector can be used in other dimensions but it is only uniquely
defined in two dimensions and all bivectors are multiples of e12. As the highest grade element of the algebra e12 is
also the pseudoscalar which is given the symbol i.
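A small sketch of this two-dimensional algebra makes the splitting concrete: multiplication over the basis (1, e1, e2, e12) is given by a 4 × 4 table, the geometric product of two vectors has only a scalar and an e12 part, and e12 squares to −1. The table and helper names below are an illustration, not code from any particular library.

```python
# A minimal sketch of the two-dimensional geometric algebra Cl(2), with elements stored
# as coefficient lists over the basis (1, e1, e2, e12). It checks the splitting
# ab = a.b + a^b and that the unit bivector e12 squares to -1, as stated above.

# (sign, index) of the product of basis elements i * j, for i, j over (1, e1, e2, e12).
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(1, 1), (1, 0), (1, 3), (1, 2)],
    [(1, 2), (-1, 3), (1, 0), (-1, 1)],
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],
]

def gp(a, b):
    """Geometric product of two multivectors given as 4-element coefficient lists."""
    out = [0.0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign, k = TABLE[i][j]
            out[k] += sign * ai * bj
    return out

def vector(a1, a2):
    return [0.0, a1, a2, 0.0]

a = vector(2.0, 1.0)
b = vector(0.5, 3.0)
print(gp(a, b))          # [a.b, 0, 0, a^b] = [4.0, 0, 0, 5.5]
e12 = [0.0, 0.0, 0.0, 1.0]
print(gp(e12, e12))      # [-1, 0, 0, 0]
```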
Complex numbers
With the properties of negative square and unit magnitude the unit bivector can be identified with the imaginary unit
from complex numbers. The bivectors and scalars together form the even subalgebra of the geometric algebra, which
is isomorphic to the complex numbers ℂ. The even subalgebra has basis (1, e12), the whole algebra has basis (1, e1,
e2, e12).
The complex numbers are usually identified with the coordinate axes and two dimensional vectors, which would
mean associating them with the vector elements of the geometric algebra. There is no contradiction in this, as to get
from a general vector to a complex number an axis needs to be identified as the real axis, e1 say. Multiplying all
vectors by e1 then generates the elements of the even subalgebra.
All the properties of complex numbers can be derived from bivectors, but two are of particular interest. First as with
complex numbers products of bivectors and so the even subalgebra are commutative. This is only true in two
dimensions, so properties of the bivector in two dimensions that depend on commutativity do not usually generalise
to higher dimensions.
Second, a general bivector can be written

    θ e12 ,

where θ is a real number. Putting this into the Taylor series for the exponential map and using the property e12² = −1
results in a bivector version of Euler's formula,

    exp(θ e12) = cos θ + e12 sin θ,
which when multiplied by any vector rotates it through an angle θ about the origin:
The product of a vector with a bivector in two dimensions is anticommutative, so the following products all generate
the same rotation
Of these the last product is the one that generalises into higher dimensions. The quantity needed is called a rotor and
is given the symbol R, so in two dimensions a rotor that rotates through angle θ can be written
Three dimensions
In three dimensions the geometric product of two vectors is

    ab = (a1b1 + a2b2 + a3b3) + (a2b3 − a3b2) e2e3 + (a3b1 − a1b3) e3e1 + (a1b2 − a2b1) e1e2.

This can be split into the symmetric, scalar valued, interior product and the antisymmetric, bivector valued, exterior
product:

    a · b = a1b1 + a2b2 + a3b3 ,
    a ∧ b = (a2b3 − a3b2) e23 + (a3b1 − a1b3) e31 + (a1b2 − a2b1) e12 .
In three dimensions all bivectors are simple and so the result of an exterior product. The unit bivectors e23, e31 and
e12 form a basis for the space of bivectors Λ2ℝ3, which is itself a three-dimensional linear space. So a general
bivector can be written

    A = A23 e23 + A31 e31 + A12 e12.
which can be split into symmetric scalar and antisymmetric bivector parts as follows
This is another version of Euler's formula, but with a general bivector in three dimensions. Unlike in two dimensions
bivectors are not commutative so properties that depend on commutativity do not apply in three dimensions. For
example in general eA + B ≠ eAeB in three (or more) dimensions.
The full geometric algebra in three dimensions, Cℓ3(ℝ), has basis (1, e1, e2, e3, e23, e31, e12, e123). The element e123
is a trivector and the pseudoscalar for the geometry. Bivectors in three dimensions are sometimes identified with
pseudovectors[16] to which they are related, as discussed below.
Quaternions
Bivectors are not closed under the geometric product, but the even subalgebra is. In three dimensions it consists of
all scalar and bivector elements of the geometric algebra, so a general element can be written for example a + A,
where a is the scalar part and A is the bivector part. It is written Cℓ+3 and has basis (1, e23, e31, e12). The product of
two general elements of the even subalgebra is

    (a + A)(b + B) = ab + aB + bA + A · B + A × B.
The even subalgebra, that is the algebra consisting of scalars and bivectors, is isomorphic to the quaternions, ℍ. This
can be seen by comparing the basis to the quaternion basis, or from the above product which is identical to the
quaternion product, except for a change of sign which relates to the negative products in the bivector interior product
A · B. Other quaternion properties can be similarly related to or derived from geometric algebra.
This suggests that the usual split of a quaternion into scalar and vector parts would be better represented as a split
into scalar and bivector parts; if this is done there is no special quaternion product, there is just the normal geometric
product on the elements. It also relates quaternions in three dimensions to complex numbers in two, as each is
isomorphic to the even subalgebra for the dimension, a relationship that generalises to higher dimensions.
Rotation vector
The rotation vector, from the axis angle representation of rotations, is a compact way of representing rotations in
three dimensions. In its most compact form it consists of a vector, the product of a unit vector that is the axis of
rotation and the angle of rotation, so the magnitude of the vector is the rotation angle.
In geometric algebra this vector is classified as a bivector. This can be seen in its relation to quaternions. If the axis
is ω and the angle of rotation is θ then the rotation vector is ωθ. The quaternion associated with the rotation is the
exponential of half of the bivector Ωθ, where Ω is the bivector dual to the unit axis vector ω.
So rotation vectors are bivectors, just as quaternions are elements of the geometric algebra, and they are related by
the exponential map in that algebra.
Rotors
The bivector Ωθ generates a rotation through the exponential map. The even elements generated this way rotate a
general vector in three dimensions in the same way as quaternions:
As in two dimensions the quantity e^(Ωθ) is called a rotor and written R. The quantity e^(−Ωθ) is then R⁻¹, and they
generate rotations as follows
This is identical to two dimensions, except here rotors are four-dimensional objects isomorphic to the quaternions.
This can be generalised to all dimensions, with rotors, elements of the even subalgebra with unit magnitude, being
generated by the exponential map from bivectors. They form a double cover over the rotation group, so the rotors R
and −R represent the same rotation.
Matrices
Bivectors are isomorphic to skew-symmetric matrices; the general bivector B23e23 + B31e31 + B12e12 maps to the
matrix
This multiplied by vectors on both sides gives the same vector as the product of a vector and bivector; an example is
the angular velocity tensor.
Skew symmetric matrices generate orthogonal matrices with determinant 1 through the exponential map. In
particular the exponent of a bivector associated with a rotation is a rotation matrix, that is the rotation matrix MR
given by the above skew-symmetric matrix is
The rotation described by MR is the same as that described by the rotor R given by
Bivectors are related to the eigenvalues of a rotation matrix. Given a rotation matrix M the eigenvalues can be
calculated by solving the characteristic equation for that matrix, 0 = det(M − λI). By the fundamental theorem of
algebra this has three roots, but only one real root as there is only one eigenvector, the axis of rotation. The other
roots must be a complex conjugate pair. They have unit magnitude so purely imaginary logarithms, equal to the
magnitude of the bivector associated with the rotation, which is also the angle of rotation. The eigenvectors
associated with the complex eigenvalues are in the plane of the bivector, so the exterior product of two non-parallel
eigenvectors result in the bivector, or at least a multiple of it.
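The correspondence can be sketched numerically: pack the bivector components into a skew-symmetric matrix and exponentiate it (here via Rodrigues' formula rather than a general matrix exponential). The sign convention used to place B23, B31, B12 in the matrix is an assumption, since the matrix in this section is not reproduced above.

```python
# A minimal NumPy sketch: bivector components -> skew-symmetric matrix -> rotation
# matrix through the (matrix) exponential map, evaluated with Rodrigues' formula.
import numpy as np

def skew(b23, b31, b12):
    # Skew-symmetric matrix associated with the bivector B23 e23 + B31 e31 + B12 e12
    # (sign convention chosen here for illustration).
    return np.array([[0.0, -b12,  b31],
                     [b12,  0.0, -b23],
                     [-b31, b23,  0.0]])

def rotation_from_bivector(b23, b31, b12):
    theta = np.sqrt(b23**2 + b31**2 + b12**2)   # magnitude of the bivector = rotation angle
    if theta == 0.0:
        return np.eye(3)
    K = skew(b23, b31, b12) / theta
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A bivector of magnitude pi/2 in the e12 (xy) plane rotates e1 into e2:
Rz = rotation_from_bivector(0.0, 0.0, np.pi / 2)
print(np.round(Rz @ np.array([1.0, 0.0, 0.0]), 6))   # ~ [0, 1, 0]
```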
Axial vectors
The rotation vector is an example of an axial vector. Axial vectors or pseudovectors are vectors that undergo a sign
change compared to normal or polar vectors under inversion, that is when reflected or otherwise inverted. Examples
include quantities like torque, angular momentum and vector magnetic fields. Such quantities can be described as
bivectors in geometric algebra; that is quantities that might use axial vectors in vector algebra are better represented
by bivectors in geometric algebra.[17] More precisely, the Hodge dual gives the isomorphism between axial vectors
and bivectors, so each axial vector is associated with a bivector and vice-versa; that is
where * indicates the Hodge dual. Alternately, using the unit pseudoscalar in Cℓ3(ℝ), i = e1e2e3 gives
This is easier to use as the product is just the geometric product. But it is antisymmetric because (as in two
dimensions) the unit pseudoscalar i squares to −1, so a negative is needed in one of the products.
This relationship extends to operations like the vector valued cross product and bivector valued exterior product, as
when written as determinants they are calculated in the same way:
Bivectors have a number of advantages over axial vectors. They better disambiguate axial and polar vectors, that is
the quantities represented by them, so it is clearer which operations are allowed and what their results are. For
example the inner product of a polar vector and an axial vector resulting from the cross product in the triple product
should result in a pseudoscalar, a result which is more obvious if the calculation is framed as the exterior product of
a vector and a bivector. Bivectors also generalise to other dimensions; in particular bivectors can be used to describe
quantities like torque and angular momentum in two as well as three dimensions. Also, they closely match geometric
intuition in a number of ways, as seen in the next section.[18]
Geometric interpretation
As suggested by their name and that of the algebra, one of the attractions
of bivectors is that they have a natural geometric interpretation. This can
be described in any dimension but is best done in three where parallels
can be drawn with more familiar objects, before being applied to higher
dimensions. In two dimensions the geometric interpretation is trivial, as
the space is two dimensional so has only one plane, and all bivectors are
associated with it differing only by a scale factor.
All bivectors can be interpreted as planes, or more precisely as directed
plane segments. In three dimensions there are three properties of a
bivector that can be interpreted geometrically:
• The arrangement of the plane in space, precisely the attitude of the plane (or alternately the rotation, geometric orientation or gradient of the plane), is associated with the ratio of the bivector components. In particular the three basis bivectors, e23, e31 and e12, or scalar multiples of them, are associated with the yz-plane, xz-plane and xy-plane respectively.
[Figure: Parallel plane segments with the same orientation and area corresponding to the same bivector a ∧ b.[2]]
• The magnitude of the bivector is associated with the area of the plane segment. The area does not have a
particular shape so any shape can be used. It can even be represented in other ways, such as by an angular
measure. But if the vectors are interpreted as lengths the bivector is usually interpreted as an area with the same
units, as follows.
• Like the direction of a vector, the plane associated with a bivector has a direction: a circulation or a sense of rotation in the plane, which takes two values, seen as clockwise and counterclockwise when viewed from a viewpoint not in
the plane. This is associated with a change of sign in the bivector, that is if the direction is reversed the bivector is
negated. Alternately if two bivectors have the same attitude and magnitude but opposite directions then one is the
negative of the other.
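In three dimensions the magnitude of a bivector a ∧ b formed from two vectors a and b is
|a ∧ b| = |a| |b| sin θ,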
where θ is the angle between the vectors. This is the area of the parallelogram with edges a and b, as shown in the
diagram. One interpretation is that the area is swept out by b as it moves along a. The exterior product is
antisymmetric, so reversing the order of a and b to make a move along b results in a bivector with the opposite
direction that is the negative of the first. The plane of bivector a ∧ b contains both a and b so they are both parallel
to the plane.
Bivectors and axial vectors are related by the Hodge dual. In a real vector space the Hodge dual relates a subspace to its
orthogonal complement, so if a bivector is represented by a plane then the axial vector associated with it is simply
the plane's surface normal. The plane has two normals, one on each side, giving the two possible orientations for the
plane and bivector.
This relates the cross product to the exterior product. It can also be
used to represent physical quantities, like torque and angular
momentum. In vector algebra they are usually represented by
vectors, perpendicular to the plane of the force, linear momentum
or displacement that they are calculated from. But if a bivector is
used instead the plane is the plane of the bivector, so it is a more natural way to represent the quantities and the way they act. It also, unlike the vector representation, generalises to other dimensions.
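Two bivectors can also be multiplied together. For bivectors A and B the geometric product AB contains a symmetric, scalar-valued part, the inner product A · B, and an antisymmetric, bivector-valued part, the commutator product A × B.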
Like vectors these have magnitudes |A · B| = |A||B| cos θ and |A × B| = |A||B| sin θ, where θ is the angle between the
planes. In three dimensions it is the same as the angle between the normal vectors dual to the planes, and it
generalises to some extent in higher dimensions.
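In three dimensions all bivectors are simple, so the sum of two bivectors is again a simple bivector; for example the bivectors a ∧ b and a ∧ c sum to a ∧ (b + c).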
This can be interpreted geometrically as seen in the diagram: the two areas sum to give a third, with the three areas
forming faces of a prism with a, b, c and b + c as edges. This corresponds to the two ways of calculating the area
using the distributivity of the exterior product:
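a ∧ b + a ∧ c = a ∧ (b + c).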
This only works in three dimensions as it is the only dimension where a vector parallel to both bivectors must exist.
In higher dimensions bivectors generally are not associated with a single plane, or if they are (simple bivectors) two
bivectors may have no vector in common, and so sum to a non-simple bivector.
Four dimensions
In four dimensions the basis elements for the space Λ2ℝ4 of bivectors are (e12, e13, e14, e23, e24, e34), so a general
bivector is of the form
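A = A12e12 + A13e13 + A14e14 + A23e23 + A24e24 + A34e34.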
Orthogonality
In four dimensions bivectors are orthogonal to bivectors. That is, the dual of a bivector is a bivector, and the space Λ2ℝ4 is dual to itself in Cℓ4(ℝ). Normal vectors are not unique; instead every plane is orthogonal to all the vectors in its dual space. This can be used to partition the bivectors into two 'halves', for example into two sets of three unit bivectors each. There are only four distinct ways to do this, and whenever it is done one basis vector appears in the bivectors of only one of the two halves; for example in the split (e12, e13, e14) and (e23, e24, e34) the vector e1 appears only in the first half.
Simple bivectors in 4D
In four dimensions bivectors are generated by the exterior product of vectors in ℝ4, but with one important difference
from ℝ3 and ℝ2. In four dimensions not all bivectors are simple. There are bivectors such as e12 + e34 that cannot be
generated by the exterior product of two vectors. This also means they do not have a real, that is scalar, square. In
this case
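(e12 + e34)² = e12e12 + e12e34 + e34e12 + e34e34 = −2 + 2e1234.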
The element e1234 is the pseudoscalar in Cℓ4, distinct from the scalar, so the square is non-scalar.
All bivectors in four dimensions can be generated using at most two exterior products and four vectors. The above
bivector can be written as
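e12 + e34 = e1 ∧ e2 + e3 ∧ e4.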
Alternately every bivector can be written as the sum of two simple bivectors. It is useful to choose two orthogonal
bivectors for this, and this is always possible to do. Moreover for a general bivector the choice of simple bivectors is
unique, that is there is only one way to decompose into orthogonal bivectors. This is true also for simple bivectors,
except one of the orthogonal parts is zero. The exception is when the two orthogonal bivectors have equal
magnitudes (as in the above example): in this case the decomposition is not unique.[1]
Rotations in ℝ4
As in three dimensions, bivectors in four dimensions generate rotations through the exponential map, and all rotations
can be generated this way. As in three dimensions if B is a bivector then the rotor R is eB/2 and rotations are
generated in the same way:
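x′ = R x R⁻¹.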
Bivectors in general do not commute, but one exception is orthogonal bivectors and exponents of them. So if the
bivector B = B1 + B2, where B1 and B2 are orthogonal simple bivectors, is used to generate a rotation it decomposes
into two simple rotations that commute as follows:
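R = e(B1 + B2)/2 = eB1/2 eB2/2 = eB2/2 eB1/2.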
It is always possible to do this as all bivectors can be expressed as sums of orthogonal bivectors.
Spacetime rotations
Spacetime is a mathematical model for our universe used in special relativity. It consists of three space dimensions
and one time dimension combined into a single four dimensional space. It is naturally described using geometric
algebra and bivectors, with the Euclidean metric replaced by a Minkowski metric. That is the algebra is identical to
that of Euclidean space, except the signature is changed, so
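e1² = e2² = e3² = 1,   e4² = −1.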
(Note the order and indices above are not universal – here e4 is the time-like dimension). The geometric algebra is
Cℓ3,1(ℝ), and the subspace of bivectors is Λ2ℝ3,1. The bivectors are of two types. The bivectors e23, e31 and e12 have
negative squares and correspond to the bivectors of the three dimensional subspace corresponding to Euclidean
space, ℝ3. These bivectors generate normal rotations in ℝ3.
The bivectors e14, e24 and e34 have positive squares, and the planes they define each span a space dimension and the time dimension.
These also generate rotations through the exponential map, but instead of trigonometric functions hyperbolic
functions are needed, which generates a rotor as follows:
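eΩe34/2 = cosh(Ω/2) + e34 sinh(Ω/2),
with Ω here playing the role of the rapidity of a boost along e3.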
These are Lorentz transformations, expressed in a particularly compact way, using the same algebra as in ℝ3 and ℝ4.
In general all spacetime rotations are generated from bivectors through the exponential map, that is, a general rotor
generated by bivector A is of the form
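R = eA/2.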
The set of all rotations in spacetime forms the Lorentz group, and from them most of the consequences of special relativity can be deduced. More generally this shows how transformations in Euclidean space and spacetime can all
be described using the same algebra.
Maxwell's equations
(Note: in this section traditional 3-vectors are indicated by lines over the symbols and spacetime vectors and bivectors
by bold symbols, with the vectors J and A exceptionally in uppercase)
Maxwell's equations are used in physics to describe the relationship between electric and magnetic fields. Normally
given as four differential equations they have a particularly compact form when the fields are expressed as a
spacetime bivector from Λ2ℝ3,1. If the electric and magnetic fields in ℝ3 are E and B then the electromagnetic
bivector is
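F = Ee4 + cBe123 (the placement of the factor of c depends on the choice of units),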
where e4 is again the basis vector for the time-like dimension and c is the speed of light. The quantity Be123 is the
bivector dual to B in three dimensions, as discussed above, while Ee4 as a product of orthogonal vectors is also
bivector valued. As a whole it is the electromagnetic tensor expressed more compactly as a bivector, and is used as
follows. First it is related to the 4-current J, a vector quantity given by
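J = j + cρe4,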
where j is current density and ρ is charge density. They are related by a differential operator ∂, which is
The operator ∇ is a differential operator in geometric algebra, acting on the space dimensions and given by ∇M =
∇·M + ∇∧M. When applied to vectors ∇·M is the divergence and ∇∧M is the curl but with a bivector rather than
vector result, that is dual in three dimensions to the curl. For a general quantity M they act as grade lowering and
raising differential operators. In particular if M is a scalar then this operator is just the gradient, and it can be thought
of as a geometric algebraic del operator.
Together these can be used to give a particularly compact form for Maxwell's equations in a vacuum:
This, when decomposed according to geometric algebra, using geometric products which have both grade raising and
grade lowering effects, is equivalent to Maxwell's four equations. This is the form in a vacuum, but the general form
is only a little more complex. It is also related to the electromagnetic four-potential, a vector A given by
where A is the vector magnetic potential and V is the electric potential. It is related to the electromagnetic bivector as
follows
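F = ∂ ∧ A.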
Higher dimensions
As has been suggested in earlier sections much of geometric algebra generalises well into higher dimensions. The
geometric algebra for the real space ℝn is Cℓn(ℝ), and the subspace of bivectors is Λ2ℝn.
The number of simple bivectors needed to form a general bivector rises with the dimension, so for n odd it is (n - 1) /
2, for n even it is n / 2. So for four and five dimensions only two simple bivectors are needed but three are required
for six and seven dimensions. For example in six dimensions with standard basis (e1, e2, e3, e4, e5, e6) the bivector
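B = e12 + e34 + e56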
is the sum of three simple bivectors but no less. As in four dimensions it is always possible to find orthogonal simple
bivectors for this sum.
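As in lower dimensions rotors are generated from bivectors by the exponential map, so that
R = eB/2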
is the rotor generated by bivector B. Simple rotations, which take place in a plane of rotation around a fixed blade of dimension (n − 2), are generated by simple bivectors, while other bivectors generate more complex rotations
which can be described in terms of the simple bivectors they are sums of, each related to a plane of rotation. All
bivectors can be expressed as the sum of orthogonal and commutative simple bivectors, so rotations can always be
decomposed into a set of commutative rotations about the planes associated with these bivectors. The group of the
rotors in n dimensions is the spin group, Spin(n).
One notable feature, related to the number of simple bivectors and so of rotation planes, is that in odd dimensions every rotation has a fixed axis, though it is misleading to call it an axis of rotation, as in higher dimensions rotations take place in multiple planes orthogonal to it. This is related to bivectors, as bivectors in odd dimensions decompose into the same number of simple bivectors as in the even dimension below, so they have the same number of planes but one extra dimension. As each plane generates rotations in only two dimensions, in odd dimensions there must be one dimension, that is an axis, which is not being rotated.[21]
Bivectors are also related to the rotation matrix in n dimensions. As in three dimensions the characteristic equation of
the matrix can be solved to find the eigenvalues. In odd dimensions this has one real root, with eigenvector the fixed
axis, and in even dimensions it has no real roots, so either all or all but one of the roots are complex conjugate pairs.
Each pair is associated with a simple component of the bivector associated with the rotation. In particular the logarithms of each pair are purely imaginary, with magnitude equal to that of the corresponding simple bivector, while eigenvectors generated from the roots are parallel to the plane of that bivector and so can be used to generate it. In general the eigenvalues and bivectors are unique, and the set of eigenvalues gives the full
decomposition into simple bivectors; if roots are repeated then the decomposition of the bivector into simple
bivectors is not unique.
Projective geometry
Geometric algebra can be applied to projective geometry in a straightforward way. The geometric algebra used is
Cℓn(ℝ), n ≥ 3, the algebra of the real vector space ℝn. This is used to describe objects in the real projective space ℝℙn - 1. The non-zero vectors in Cℓn(ℝ) or ℝn are associated with points in the projective space so vectors that differ only
by a scale factor, so their exterior product is zero, map to the same point. Non-zero simple bivectors in Λ2ℝn
represent lines in ℝℙn - 1, with bivectors differing only by a (positive or negative) scale factor representing the same
line.
A description of the projective geometry can be constructed in the geometric algebra using basic operations. For
example given two distinct points in ℝℙn - 1 represented by vectors a and b the line between them is given by a ∧ b
(or b ∧ a). Two lines intersect in a point if A ∧ B = 0 for their bivectors A and B. This point is given by the vector
The operation "⋁" is the meet, which can be defined as above in terms of the join, J = A ∧ B for non-zero A ∧ B.
Using these operations projective geometry can be formulated in terms of geometric algebra. For example given a
third (non-zero) bivector C the point p lies on the line given by C if and only if
where the angle brackets denote the scalar part of the geometric product. In the same way all projective space
operations can be written in terms of geometric algebra, with bivectors representing general lines in projective space,
so the whole geometry can be developed using geometric algebra.[14]
Notes
[1] Lounesto (2001) p. 87
[2] Leo Dorst, Daniel Fontijne, Stephen Mann (2009). Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry (http://books.google.com/books?id=-1-zRTeCXwgC&pg=PA32#v=onepage&q=&f=false) (2nd ed.). Morgan Kaufmann. p. 32. ISBN 0123749425. "The algebraic bivector is not specific on shape; geometrically it is an amount of oriented area in a specific plane, that's all."
[3] David Hestenes (1999). New foundations for classical mechanics: Fundamental Theories of Physics (http://books.google.com/books?id=AlvTCEzSI5wC&pg=PA21) (2nd ed.). Springer. p. 21. ISBN 0792353021.
[4] Lounesto (2001) p. 33
[5] Karen Hunger Parshall, David E. Rowe (1997). The Emergence of the American Mathematical Research Community, 1876-1900 (http://books.google.com/books?id=uMvcfEYr6tsC&pg=PA31). American Mathematical Society. p. 31 ff. ISBN 0821809075.
[6] Rida T. Farouki (2007). "Chapter 5: Quaternions" (http://books.google.com/books?id=xg2fBfXUtGgC&pg=PA60). Pythagorean-hodograph curves: algebra and geometry inseparable. Springer. p. 60 ff. ISBN 3540733973.
[7] A discussion of quaternions from these years is Alexander McAulay (1911). "Quaternions" (http://books.google.com/books?id=KjwEAAAAYAAJ&pg=PA718). The encyclopædia britannica: a dictionary of arts, sciences, literature and general information. Vol. 22 (11th ed.). Cambridge University Press. p. 718 et seq.
[8] Josiah Willard Gibbs, Edwin Bidwell Wilson (1901). Vector analysis: a text-book for the use of students of mathematics and physics (http://books.google.com/books?id=abwrAAAAYAAJ&pg=PA431&dq="directional+ellipse"&lr=&as_drrb_is=q&as_minm_is=0&as_miny_is=&as_maxm_is=0&as_maxy_is=&as_brr=0&cd=2#v=onepage&q="directional ellipse"&f=false). Yale University Press. p. 481 ff.
[9] Philippe Boulanger, Michael A. Hayes (1993). Bivectors and waves in mechanics and optics (http://books.google.com/books?id=QN0Ks3fTPpAC&pg=PR11). Springer. ISBN 0412464608.
[10] PH Boulanger & M Hayes (1991). "Bivectors and inhomogeneous plane waves in anisotropic elastic bodies" (http://books.google.com/books?id=2fwUdSTN_6gC&pg=PA280). In Julian J. Wu, Thomas Chi-tsai Ting, David M. Barnett. Modern theory of anisotropic elasticity and applications. Society for Industrial and Applied Mathematics (SIAM). p. 280 et seq. ISBN 0898712890.
[11] David Hestenes. op. cit. (http://books.google.com/books?id=AlvTCEzSI5wC&pg=PA61). p. 61. ISBN 0792353021.
[12] Lounesto (2001) p. 35
[13] Lounesto (2001) p. 86
[14] Hestenes, David; Ziegler, Renatus (1991). "Projective Geometry with Clifford Algebra" (http://geocalc.clas.asu.edu/pdf/PGwithCA.pdf). Acta Applicandae Mathematicae 23: 25–63.
[15] Lounesto (2001) p. 29
[16] William E Baylis (1994). Theoretical methods in the physical sciences: an introduction to problem solving using Maple V (http://books.google.com/books?id=pEfMq1sxWVEC&pg=PA234). Birkhäuser. p. 234, see footnote. ISBN 081763715X. "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector (...the pseudovector) from its dual (...the axial vector)."
[17] Chris Doran, Anthony Lasenby (2003). Geometric algebra for physicists (http://books.google.com/books?id=6uI7bQb6qJ0C&pg=PA56&dq="dispensing+with+the+traditional+definition+of+angular+momentum"&lr=&as_brr=0&cd=1#v=onepage&q="dispensing with the traditional definition of angular momentum"&f=false). Cambridge University Press. p. 56. ISBN 0521480221.
[18] Lounesto (2001) pp. 37–39
[19] Lounesto (2001) pp. 89–90
[20] Lounesto (2001) pp. 109–110
[21] Lounesto (2001) p. 222
[22] Lounesto (2001) p. 193
[23] Lounesto (2001) p. 217
General references
• Leo Dorst, Daniel Fontijne, Stephen Mann (2009). "§ 2.3.3 Visualizing bivectors" (http://books.google.com/
books?id=-1-zRTeCXwgC&pg=PA31). Geometric Algebra for Computer Science: An Object-Oriented
Approach to Geometry (2nd ed.). Morgan Kaufmann. p. 31 ff. ISBN 0123749425.
• Whitney, Hassler (1957). Geometric Integration Theory. Princeton: Princeton University Press. ISBN 0486445836, 9780486445830.
• Lounesto, Pertti (2001). Clifford algebras and spinors (http://books.google.com/books?id=kOsybQWDK4oC).
Cambridge: Cambridge University Press. ISBN 978-0-521-00551-7.
• Chris Doran and Anthony Lasenby (2003). "§ 1.6 The outer product" (http://books.google.com/
books?id=nZ6MsVIdt88C&pg=PA11). Geometric Algebra for Physicists. Cambridge: Cambridge University
Press. p. 11 et seq. ISBN 978-0-521-71595-9.
Article Sources and Contributors
Cross product Source: http://en.wikipedia.org/w/index.php?oldid=430489806 Contributors: 21655, Aaron D. Ball, Acdx, Ajl772, Albmont, Aliotra, Altenmann, Andres, Andros 1337,
Anonauthor, Anonymous Dissident, Antixt, Arthur Rubin, Astronautics, AxelBoldt, BD2412, Balcer, Ben pcc, BenFrantzDale, Bobo192, Booyabazooka, BrainMagMo, Brews ohare, Bth, Can't
sleep, clown will eat me, Charles Matthews, Chinju, Chris the speller, Chrismurf, Christian List, Clarahamster, Conny, Conrad.Irwin, Coolkid70, Cortiver, CrazyChemGuy, Crunchy Numbers, D,
DVdm, Dan Granahan, David Tombe, Decumanus, Discospinster, Doshell, Dougthebug, Dr Dec, Dude1818, Dysprosia, Edgerck, El C, Emersoni, Empro2, Enviroboy, Eraserhead1, Esseye,
Eulerianpath, FDT, FoC400, Fresheneesz, FyzixFighter, Fzix info, Fæ, Gaelen S., Gandalf61, Gauge, Giftlite, Hanshuo, Henrygb, Hu12, HurukanLightWarrior, Iamnoah, Ino5hiro, Iorsh,
Iridescent, Isaac Dupree, Isnow, Ivokabel, J7KY35P21OK, JEBrown87544, Jaredwf, Jimduck, JohnBlackburne, KHamsun, KSmrq, KYN, Kevmitch, Kisielk, Kmhkmh, LOL, Lambiam,
Lantonov, Leland McInnes, Lethe, Lockeownzj00, Loodog, M simin, MFNickster, Magnesium, MarcusMaximus, Mark A Oakley, Mark Foskey, MathKnight, Mckaysalisbury, Mcnallod,
Melchoir, Michael Hardy, Michael Ross, Monguin61, Mrahner, Mycroft IV4, Nbarth, Neparis, Nikola Smolenski, Octahedron80, Oleg Alexandrov, Onco p53, Oneirist, Paolo.dL, Patrick, Paul
August, Paul Breeuwsma, Pi.1415926535, Pingveno, Pip2andahalf, Plugwash, Pouchkidium, PraShree, Quietly, Qwfp, R'n'B, Rainwarrior, Raiontov, RaulMiller, Rausch, Rbgros754, Reddevyl,
Reindra, ReinisFMF, Rgdboer, Rich Farmbrough, Robinh, Romanm, SCEhardt, SDC, Salix alba, SeventyThree, Silly rabbit, Simskoarl, Spinningspark, Sreyan, Ssafarik, Stefano85, Svick,
Sławomir Biały, T-rex, Tadeu, TakuyaMurata, Tarquin, TeH nOmInAtOr, Template namespace initialisation script, Teorth, Tesseran, Tetracube, The Thing That Should Not Be, TheSolomon,
Thinking of England, Thoreaulylazy, Tim Starling, Timrem, Timwi, Tobias Bergemann, Tomruen, Uberjivy, Voorlandt, WISo, WVhybrid, Wars, WhiteCrane, Whitepaw, WikHead,
Wilfordbrimley, Willking1979, Windchaser, Wolfkeeper, Wolfrock, Wshun, Wwoods, X42bn6, Xaos, Yellowdesk, Ysangkok, ZamorakO o, ZeroOne, Zundark, 373 anonymous edits
Triple product Source: http://en.wikipedia.org/w/index.php?oldid=427899190 Contributors: 08glasgow08, Antixt, Anxogcd, Art Carlson, Ato, BenFrantzDale, BenRG, Bjordan555, Brews
ohare, Brianjd, Charles Matthews, ChrKrabbe, Dashed, DavidCBryant, Dhirsbrunner, Dhollm, Dmharvey, Edgerck, Endofskull, Fresheneesz, Giftlite, Gr3gf, Isnow, J04n, Jacopo Werther, Jimp,
Jogloran, JohnBlackburne, Jorgenumata, Kalossoph, Kroberge, LOL, Leland McInnes, MagiMaster, Marino-slo, Melchoir, Michael Hardy, Mrh30, Nbarth, Neilc, Neparis, Nol Aders, Oleg
Alexandrov, Onomou, Paolo.dL, Patrick, Rxc, Samuel Huang, Silly rabbit, StarLight, The Sculler, Timgoh0, Usgnus, 虞海, 61 anonymous edits
Binet–Cauchy identity Source: http://en.wikipedia.org/w/index.php?oldid=407060845 Contributors: Brews ohare, Chungc, Gene Ward Smith, Giftlite, GregorB, Mhym, Michael Hardy,
Michael Slone, Petergacs, RDBury, ReyBrujo, Schmock, Tkuvho, 1 anonymous edits
Inner product space Source: http://en.wikipedia.org/w/index.php?oldid=428039145 Contributors: 0, 212.29.241.xxx, 2andrewknyazev, AHM, Aetheling, Aitter, Anakata, Ancheta Wis,
AnnaFrance, Archelon, Army1987, Artem M. Pelenitsyn, Atlantia, AxelBoldt, Ayda D, BenFrantzDale, BenKovitz, Benji1986, Bhanin, Blablablob, Brews ohare, Buster79, CSTAR, Cacadril,
Catgut, Charles Matthews, Conversion script, Corkgkagj, Dan Granahan, Darker Dreams, Dgrant, Dr. Universe, Droova, Dysprosia, Eliosh, Filu, Fpahl, Frau Hitt, Frencheigh, Functor salad,
FunnyMan3595, Fwappler, Gelbukh, Giftlite, Goodralph, Graham87, Henry Delforn, Ht686rg90, Ideyal, Ixfd64, JMK, Jakob.scholbach, James pic, Jeff3000, Jitse Niesen, JohnBlackburne,
Jpkotta, KSmrq, Karl-Henner, KittySaturn, Larryisgood, Leperous, Lethe, Linas, Lupin, LutzL, MFH, Markus Schmaus, MathKnight, MathMartin, Meanmetalx, Mets501, MiNombreDeGuerra,
Michael Hardy, Michael Slone, Mikez, Mm, Nbarth, Oleg Alexandrov, Padicgroup, Pankovpv, Patrick, Paul August, Paul D. Anderson, Pokus9999, Poor Yorick, Pred, Rdrosson, Rich
Farmbrough, Rlupsa, Rodrigo Rocha, Rudjek, SGBailey, Salix alba, Sam Hocevar, ScottSteiner, Seb35, Selfstudier, Ses, Severoon, SixWingedSeraph, Splat, Stigin, Struway, Sławomir Biały,
TakuyaMurata, Tarquin, Tbackstr, Tbsmith, TeH nOmInAtOr, TedPavlic, Thecaptainzap, Toby Bartels, Tomo, Tony Liao, Tsirel, Urdutext, Ustadny, Vanished User 0001, Vaughan Pratt,
Vilemiasma, Waltpohl, WinterSpw, Zhaoway, Zundark, 120 anonymous edits
Sesquilinear form Source: http://en.wikipedia.org/w/index.php?oldid=430666439 Contributors: Albmont, Almit39, Centrx, Charles Matthews, Evilbu, Fropuff, Giftlite, Hongooi, Jaksmata,
Maksim-e, Markus Schmaus, Mhss, Michael Slone, Nbarth, Phys, Plclark, Summentier, Sverdrup, Uncle G, Мыша, 21 anonymous edits
Scalar multiplication Source: http://en.wikipedia.org/w/index.php?oldid=397258198 Contributors: AJRobbins, Algebraist, Bob.v.R, Filemon, Giftlite, Jim.belk, Lantonov, LokiClock,
MathKnight, Neparis, Ohnoitsjamie, Oleg Alexandrov, Orhanghazi, PV=nRT, Patrick, Smultiplication, Stefano85, Sławomir Biały, TakuyaMurata, Toby Bartels, Waltpohl, WissensDürster,
Zsinj, יש דוד, 4 anonymous edits
Euclidean space Source: http://en.wikipedia.org/w/index.php?oldid=428167445 Contributors: Aiko, Alxeedo, Andycjp, Angela, Arthur Rubin, Avicennasis, AxelBoldt, Brian Tvedt, Caltas,
Charles Matthews, Conversion script, CrizCraig, Da Joe, DabMachine, Dan Gluck, David Shay, Dcljr, DefLog, DhananSekhar, Dysprosia, Eddie Dealtry, Epolk, Fresheneesz, Fropuff, Gapato,
GargoyleMT, Giftlite, Grendelkhan, Gwguffey, Hede2000, Heqs, Hongooi, IWhisky, Iantresman, Isnow, JDspeeder1, JPD, Jim.belk, JoeKearney, JohnArmagh, KSmrq, Kaarel, Karada,
Ksuzanne, Lethe, Lightmouse, Looxix, MarSch, MathMartin, Mav, Mgnbar, Mhss, MiNombreDeGuerra, Michael Hardy, Mineralquelle, Msh210, Nucleophilic, Oderbolz, Orionus, PAR,
Paolo.dL, Patrick, Paul August, Philip Trueman, Philomath3, Pizza1512, Pmod, Qwertyus, RG2, Rgdboer, Rich Farmbrough, Richardohio, RoyLeban, Rudjek, Salgueiro, Sandip90, Silly rabbit,
SpuriousQ, Sriehl, Sławomir Biały, Tarquin, The Rationalist, TheChard, Tide rolls, Tomas e, Tomo, Tomruen, Tosha, Trigamma, Tzanko Matev, VKokielov, Vsage, WereSpielChequers,
Wikivictory, Woohookitty, XJamRastafire, Yggdrasil014, Youandme, Zundark, 虞海, 90 anonymous edits
Orthonormality Source: http://en.wikipedia.org/w/index.php?oldid=423065055 Contributors: 09sacjac, Albmont, Amaurea, Axcte, Beland, Cyrius, DavidHouse, Fropuff, Gantlord, Gmh5016,
KnightRider, LOL, Lunch, MathKnight, Michael Hardy, Misza13, NatusRoma, Nikai, Oleg Alexandrov, Paolo.dL, Pcgomes, Plasticup, Pomte, Pred, Shahab, Squids and Chips, Stan
Lioubomoudrov, Stevertigo, Thurth, Timc, Wickey-nl, Winston365, Woohookitty, Wshun, Yuval madar, 11 anonymous edits
Cauchy–Schwarz inequality Source: http://en.wikipedia.org/w/index.php?oldid=430057061 Contributors: 209.218.246.xxx, ARUNKUMAR P.R, Alexmorgan, Algebra123230, Andrea Allais,
AnyFile, Arbindkumarlal, Arjunarikeri, Arved, Avia, AxelBoldt, BWDuncan, Belizefan, Bender235, Bh3u4m, Bkell, CJauff, CSTAR, Cbunix23, Chas zzz brown, Chinju, Christopherlumb,
Conversion script, Cyan, D6, Dale101usa, Dicklyon, DifferCake, EagleScout, Eliosh, FelixP, Gauge, Giftlite, Graham87, Haham hanuka, HairyFotr, Hede2000, Jackzhp, Jmsteele,
JohnBlackburne, Justin W Smith, Katzmik, Kawanoz, Krakhan, Madmath789, MarkSweep, MathMartin, Mathtyke, Mboverload, Mct mht, Mertdikmen, Mhym, Michael Hardy, Michael Slone,
Microcell, Miguel, Missyouronald, Nicol, Njerseyguy, Nsk92, Okaygecko, Oleg Alexandrov, Orange Suede Sofa, Orioane, Paul August, PaulGarner, Phil Boswell, Phys, Prari, Primalbeing, Q0k,
Qwfp, R.e.b., Rajb245, Rchase, Reflex Reaction, Rjwilmsi, S tyler, Salgueiro, Sbyrnes321, Schutz, Sl, SlipperyHippo, Small potato, Somewikian, Stevo2001, Sverdrup, Sławomir Biały,
TakuyaMurata, TedPavlic, Teorth, Tercer, ThibautLienart, Tobias Bergemann, TomyDuby, Tosha, Uncia, Vaughan Pratt, Vovchyck, Zdorovo, Zenosparadox, Zundark, Zuphilip, Zvika, 116
anonymous edits
Orthonormal basis Source: http://en.wikipedia.org/w/index.php?oldid=422267003 Contributors: Algebraist, Alway Arptonshay, Arthena, Bad2101, Bunyk, Charles Matthews, Conrad.Irwin,
Constructive editor, Cyan, Dave3457, Derbeth, Drbreznjev, Dysprosia, FrankTobia, Gauge, Geometry guy, George Butler, Giftlite, Guan, Infovarius, Jim.belk, Jitse Niesen, Josh Cherry, Lethe,
Looxix, MFH, Mhss, Michael Angelkovich, Michael Hardy, Mikez, Nbarth, Paolo.dL, Patrick, Pred, RaulMiller, Sina2, T68492, TakuyaMurata, Tarquin, Thenub314, Tkuvho, Wshun, Yuval
madar, Zundark, 24 anonymous edits
Vector space Source: http://en.wikipedia.org/w/index.php?oldid=427520038 Contributors: 0ladne, ABCD, Aenar, Akhil999in, Algebraist, Alksentrs, AndrewHarvey4, Andris, Anonymous
Dissident, Antiduh, Army1987, Auntof6, AxelBoldt, Barnaby dawson, BenFrantzDale, BraneJ, Brews ohare, Bryan Derksen, CRGreathouse, CambridgeBayWeather, Cartiod, Charitwo,
CharlotteWebb, Charvest, Chowbok, ChrisUK, Ciphergoth, Cjfsyntropy, Cloudmichael, Cmdrjameson, CommonsDelinker, Complexica, Conversion script, Cpl Syx, Cybercobra, Cícero, Dan
Granahan, Daniel Brockman, Daniele.tampieri, Danman3459, David Eppstein, Davidsiegel, Deconstructhis, DifferCake, Dysprosia, Eddie Dealtry, Englebert, Eric Kvaalen, FractalFusion,
Freiddie, Frenchef, Fropuff, Gabriele ricci, Geneb1955, Geometry guy, GeometryGirl, Giftlite, Glenn, Gombang, Graham87, Guitardemon666, Guruparan, Gwib, Hairy Dude, Hakankösem, Hans
Adler, Headbomb, Henry Delforn, Hfarmer, Hlevkin, Hrrr, Humanengr, Igni, Ilya, Imasleepviking, Infovarius, Inquisitus, Iulianu, Ivan Štambuk, JackSchmidt, Jakob.scholbach, James084,
January, Jasanas, Javierito92, Jheald, Jitse Niesen, Jludwig, Jogloran, Jorgen W, Jujutacular, Kan8eDie, Kbolino, Kiefer.Wolfowitz, Kinser, Kku, Koavf, Kri, Kwantus, Lambiam, Ldboer, Lethe,
Levineps, Lisp21, Lockeownzj00, Lonerville, Looie496, MFH, Madmath789, Magioladitis, MarSch, MarcelB612, MarkSweep, Markan, Martinwilke1980, MathKnight, Mathstudent3000,
Mattpat, Maximaximax, Mct mht, Mechakucha, Mh, Michael Hardy, Michael Kinyon, Michael Slone, Michaelp7, Mikewax, Mindmatrix, Mitsuruaoyama, Mpatel, Msh210, N8chz, Nbarth, Ncik,
Newbyguesses, Nick, Nihiltres, Nixdorf, Notinasnaid, Nsk92, Oleg Alexandrov, Olivier, Omnieiunium, Orimosenzon, Ozob, P0lyglut, Paolo.dL, Paranomia, Patrick, Paul August, Paul D.
Anderson, Pcb21, Phys, Pizza1512, Point-set topologist, Portalian, Profvk, Python eggs, R160K, RDBury, Rama, Randomblue, Rckrone, Rcowlagi, Rgdboer, Rjwilmsi, RobHar, Romanm, SMP,
Salix alba, SandyGeorgia, Setitup, Shoujun, Silly rabbit, Sligocki, Spacepotato, Sreyan, Srleffler, Ssafarik, Staka, Stephen Bain, Stpasha, Sławomir Biały, TakuyaMurata, Taw, Tbjw, TeH
nOmInAtOr, TechnoFaye, TedPavlic, Terabyte06, Terry Bollinger, The Anome, Thehotelambush, Thenub314, Therearenospoons, Tim Starling, TimothyRias, Tiptoety, Titoxd, Tobias
Bergemann, Tommyinla, Tomo, Topology Expert, Tsirel, Uncia, Urdutext, VKokielov, Vanished User 0001, Waltpohl, Wapcaplet, Whackawhackawoo, WhatamIdoing, Wikithesource,
Wikomidia, Woodstone, Woohookitty, Wshun, Youandme, Zundark, ^demon, 246 anonymous edits
Matrix multiplication Source: http://en.wikipedia.org/w/index.php?oldid=429950528 Contributors: (:Julien:), A bit iffy, A little insignificant, Achurch, Acolombi, Adrian 1001, Alephhaz,
AlexG, Arcfrk, Arvindn, AugPi, AxelBoldt, BenFrantzDale, Bender2k14, Bentogoa, Bkell, Boud, Brian Randell, Bryan Derksen, Calbaer, CanaDanjl, Cburnett, Chris Q, Christos Boutsidis,
Citrus538, Cloudmichael, Coffee2theorems, Copyeditor42, Countchoc, Damian Yerrick, Damirgraffiti, Dandin1, Dbroadwell, Dcoetzee, DennyColt, Dominus, Doobliebop, Doshell, Ejrh, Erik
Zachte, Fleminra, Fresheneesz, FrozenUmbrella, Gandalf61, Gauge, Ged.R, Giftlite, Haham hanuka, Happy-melon, Harris000, HenningThielemann, Hermel, Hmonroe, Hoyvin-Mayvin, Ino5hiro,
JakeVortex, Jakob.scholbach, Jitse Niesen, Jivee Blau, JohnBlackburne, Jon Awbrey, Joshuav, Jérôme, K.menin, KelvSYC, Kevin Baas, Kvng, Lakeworks, Lambiam, Liao, LkNsngth, Man It's
So Loud In Here, Marc Venot, Marc van Leeuwen, MathMartin, MattTait, Max Schwarz, Mdd4696, Melchoir, Mellum, Michael Slone, Miquonranger03, Miym, Mononomic, MuDavid,
NellieBly, NeonMerlin, Neparis, Ngvrnd, Nijdam, Nikola Smolenski, Nobar, Nsk92, Olathe, Oleg Alexandrov, Oli Filth, Orderud, PV=nRT, Paolo.dL, Parerga, Patrick, Paul August, Paul D.
Anderson, Psychlohexane, Pt, R.e.b., RDBury, Radius, Ratiocinate, RexNL, Risk one, Robertwb, Rockn-Roll, Rpspeck, Running, Salih, Sandeepr.murthy, Schneelocke, Shabbychef, Shahab,
Skaraoke, Sr3d, Ssola, Stephane.magnenat, Sterrys, StitchProgramming, Svick, Swift1337, SyntaxError55, Sławomir Biały, Tarquin, Terry Bollinger, The Fish, The Thing That Should Not Be,
Thenub314, Throwaway85, Tide rolls, Tyrantbrian, Umofomia, Vgmddg, Vincisonfire, Wshun, Zazpot, Zhangleisk, Zmoboros, Zorakoid, 258 anonymous edits
Determinant Source: http://en.wikipedia.org/w/index.php?oldid=430327749 Contributors: 01001, 165.123.179.xxx, A-asadi, A. B., AbsolutDan, Adam4445, Adamp, Ae77, Ahoerstemeier,
Alex Sabaka, Alexandre Martins, Algebraist, Alison, Alkarex, Alksub, Anakata, Andres, Anonymous Dissident, Anskas, Ardonik, ArnoldReinhold, Arved, Asmeurer, AugPi, AxelBoldt, Balagen,
Barking Mad142, BenFrantzDale, Bender2k14, Betacommand, Big Jim Fae Scotland, BjornPoonen, BrianOfRugby, Bryan Derksen, Burn, CBM, CRGreathouse, Campuzano85, Carbonrodney,
Catfive, Cbogart2, Ccandan, Cesarth, Charles Matthews, Chocochipmuffin, Christopher Parham, Chu Jetcheng, Cjkstephenson, Closedmouth, Cobi, Connelly, Conversion script, Cowanae,
Crasshopper, Cronholm144, Crystal whacker, Cthulhu.mythos, Cwkmail, Danaman5, Dantestyrael, Dark Formal, Datahaki, Dcoetzee, Delirium, Demize, Dmbrown00, Dmcq, Doctormatt,
Dysprosia, EconoPhysicist, Elphion, Entropeneur, Epbr123, Euphrat1508, EverettYou, Executive Outcomes, Ffatossaajvazii, Fredrik, Fropuff, Gauge, Gejikeiji, Gene Ward Smith, Gershwinrb,
Giftlite, Graham87, GrewalWiki, Guiltyspark, Gwernol, Hangitfresh, Heili.brenna, HenkvD, HenningThielemann, Hlevkin, Ian13, Icairns, Ijpulido, Ino5hiro, Istcol, Itai, JJ Harrison,
JackSchmidt, Jackzhp, Jagged 85, Jakob.scholbach, Jasonevans, Jeff G., Jerry, Jersey Devil, Jewbacca, Jheald, Jim.belk, Jitse Niesen, Joejc, Jogers, Jordgette, Joriki, Josp-mathilde, Josteinaj,
Jrgetsin, Jshen6, Juansempere, Justin W Smith, Kaarebrandt, Kallikanzarid, Kaspar.jan, Kd345205, Khabgood, Kingpin13, Kmhkmh, Kokin, Kstueve, Kunal Bhalla, Kurykh, Kwantus,
LAncienne, LOL, Lagelspeil, Lambiam, Lavaka, Leakeyjee, Lethe, Lhf, Lightmouse, LilHelpa, Logapragasan, Luiscardona89, MackSalmon, Marc van Leeuwen, Marek69, MartinOtter,
MathMartin, McKay, Mcconnell3, Mcstrother, Mdnahas, Merge, Mets501, Michael Hardy, Michael P. Barnett, Michael Slone, Mikael Häggström, Mild Bill Hiccup, Misza13, Mmxx,
Mobiusthefrost, Mrsaad31, Msa11usec, MuDavid, N3vln, Nachiketvartak, NeilenMarais, Nekura, Netdragon, Nethgirb, Netrapt, Nickj, Nicolae Coman, Nistra, Nsaa, Numbo3, Obradovic Goran,
Octahedron80, Oleg Alexandrov, Oli Filth, Paolo.dL, Patamia, Patrick, Paul August, Pedrose, Pensador82, Personman, PhysPhD, Pigei, Priitliivak, Protonk, Pt, Quadell, Quadrescence, Quantling,
R.e.b., RDBury, RIBEYE special, Rbb l181, Recentchanges, Reinyday, RekishiEJ, RexNL, Rgdboer, Rich Farmbrough, Robinh, Rocchini, Rogper, Rpchase, Rumblethunder, SUL, Sabri76,
Salgueiro, Sandro.bosio, Sangwine, Sayahoy, Shai-kun, Shreevatsa, Siener, Simon Sang, SkyWalker, Slady, Smithereens, Snoyes, Spartan S58, Spireguy, Spoon!, Ssd, Stdazi, Stefano85, Stevenj,
StradivariusTV, Supreme fascist, Swerdnaneb, SwordSmurf, Sławomir Biały, T8191, Tarif Ezaz, Tarquin, Taw, TedPavlic, Tegla, Tekhnofiend, Tgr, The Thing That Should Not Be,
TheEternalVortex, TheIncredibleEdibleOompaLoompa, Thehelpfulone, Thenub314, Timberframe, Tobias Bergemann, TomViza, Tosha, TreyGreer62, Trifon Triantafillidis, Trivialsegfault,
Truthnlove, Ulisse0, Unbitwise, Urdutext, Vanka5, Vincent Semeria, Wellithy, Wik, Wolfrock, Woscafrench, Wshun, Xaos, Ztutz, Zzedar, ^demon, 380 anonymous edits
Exterior algebra Source: http://en.wikipedia.org/w/index.php?oldid=423094119 Contributors: Acannas, Aetheling, Akriasas, Algebraist, Amillar, Aponar Kestrel, Arcfrk, AugPi, AxelBoldt,
Billlion, Buka, Charles Matthews, Darkskynet, DomenicDenicola, Dr.CD, Dysprosia, Entropeter, Fropuff, Gauge, Gene Ward Smith, Gene.arboit, Geometry guy, Giftlite, Hephaestos,
JackSchmidt, Jao, JasonSaulG, Jhp64, Jitse Niesen, Jmath666, Jogloran, JohnBlackburne, Jrf, Juan Marquez, Kallikanzarid, Karl-Henner, Katzmik, Kclchan, Keenan Pepper, Kevs, Kilva, Leo
Gumpert, Lethe, MarSch, Marino-slo, Michael Hardy, Michael Slone, Muhandes, Myasuda, Naddington, Nageh, Nbarth, Nishkid64, Paolo.dL, Phys, Pjacobi, Pldx1, ProperFraction, Qutezuce,
Qwfp, Rainwarrior, Reyk, Rgdboer, Robert A West, Schizobullet, Schneelocke, Silly rabbit, Sillybanana, Slawekb, Sopholatre, Sreyan, StradivariusTV, Sławomir Biały, TakuyaMurata,
TimothyRias, Tkuvho, TobinFricke, Tosha, Varuna, WhatamIdoing, Ylebru, Zero sharp, Zhoubihn, Zinoviev, Кирпичик, 136 anonymous edits
Geometric algebra Source: http://en.wikipedia.org/w/index.php?oldid=427431154 Contributors: Anakata, Avriette, BenFrantzDale, Bomazi, Brews ohare, BryanG, CBM, Cabrer7, Charles
Matthews, Chessfan, Chowbok, Chris Capoccia, Cmdrjameson, Conversion script, D6, David Haslam, Dentdelion, Derek Ross, Devoutb3nji, DrCowsley, Fuzzypeg, Gaius Cornelius, Giftlite,
Gnfnrf, HEL, Hyperlinker, Ibfw, Icairns, Ironholds, Ixfd64, Jagged 85, Jason Quinn, Jheald, John of Reading, JohnBlackburne, Jontintinjordan, Jrf, LAUBO, Leo Zaza, LLB, LLM, Linas,
Lionelbrits, Madmardigan53, MathMartin, Michael Hardy, Nbarth, Neonumbers, Nightkey, Oleg Alexandrov, Paolo.dL, Paul D. Anderson, Peeter.joot, Populus, Quondum, RHaworth,
RainerBlome, Reyk, Rich Farmbrough, Rjwilmsi, Robert L, Selfstudier, Shreevatsa, Silly rabbit, Star trooper man, SunCreator, Tbriggs845, The Anome, Tiddly Tom, Tide rolls, Tpikonen,
Trond.olsen, Usgnus, Woohookitty, WriterHound, Xavic69, 80 anonymous edits
Levi-Civita symbol Source: http://en.wikipedia.org/w/index.php?oldid=405477600 Contributors: Akriesch, Antixt, Attilios, AugPi, AxelBoldt, BenFrantzDale, Bo Jacoby, Burn, Catem117,
Charles Matthews, Chuunen Baka, Dustimagic, Eclecticerudite, FilippoSidoti, Gaius Cornelius, Gene.arboit, Giftlite, Grafen, Hadilllllll, Ivancho.was.here, J.Rohrer, JRSpriggs, JabberWok,
Jbaber, Kipton, Legendre17, Leperous, Linas, Looxix, Lzur, Mani1, Marco6969, MathKnight, Michael Hardy, Mike Rosoft, Mpatel, Mythealias, NOrbeck, Paolo.dL, Patrick, PaulTanenbaum,
Physicistjedi, Plaes, Pnrj, Pt, Qniemiec, Rich Farmbrough, Roonilwazlib, Shirt58, Skyone.wiki, Sverdrup, Tarquin, TheObtuseAngleOfDoom, Thric3, Tomhosking, Trogsworth, Vuldoraq,
XJamRastafire, Zaslav, Zero sharp, 73 anonymous edits
Jacobi triple product Source: http://en.wikipedia.org/w/index.php?oldid=405948291 Contributors: Boothy443, CRGreathouse, Ceyockey, Charles Matthews, Giftlite, Kbdank71, Linas, Lunae,
Michael Hardy, Radicalt, RobHar, Salvatore Ingala, Triona, 10 anonymous edits
Rule of Sarrus Source: http://en.wikipedia.org/w/index.php?oldid=430122147 Contributors: Adam Zivner, Battlecruiser, Bender235, BiT, Diwas, Giftlite, Kmhkmh, M-le-mot-dit, Silver
Spoon, Svick, Tekhnofiend, Zhou Yu, 11 anonymous edits
Laplace expansion Source: http://en.wikipedia.org/w/index.php?oldid=398693100 Contributors: .mau., Anonymous Dissident, Attys, Bigmonachus, Catfive, Charles Matthews, Error792,
Fantasi, Giftlite, HappyCamper, Jitse Niesen, Kmhkmh, Lambiam, Luca Antonelli, Michael Hardy, MiloszD, Nihiltres, PV=nRT, Polyade, Tim R, Tobias Bergemann, 11 anonymous edits
Lie algebra Source: http://en.wikipedia.org/w/index.php?oldid=428450332 Contributors: Adam cohenus, AlainD, Arcfrk, Arthena, Asimy, AxelBoldt, BenFrantzDale, Bogey97, CSTAR,
Chameleon, Charles Matthews, Conversion script, CryptoDerk, Curps, Dachande, David Gerard, Dd314, DefLog, Deflective, Drbreznjev, Drorata, Dysprosia, Englebert, Foobaz, Freiddie,
Fropuff, Gauge, Geometry guy, Giftlite, Grendelkhan, Grokmoo, Grubber, Hairy Dude, Harold f, Hesam7, Iorsh, Isnow, JackSchmidt, Jason Quinn, Jason Recliner, Esq., Jeremy Henty, Jkock,
Joel Koerwer, [email protected], Juniuswikiae, Kaoru Itou, Kragen, Kwamikagami, Lenthe, Lethe, Linas, Loren Rosen, MarSch, Masnevets, Maurice Carbonaro, Michael Hardy, Michael Larsen,
Michael Slone, Miguel, Msh210, NatusRoma, Nbarth, Ndbrian1, Niout, Noegenesis, Oleg Alexandrov, Paolo.dL, Phys, Pizza1512, Pj.de.bruin, Prtmrz, Pt, Pyrop, Python eggs, R'n'B, Rausch,
Reinyday, RexNL, RobHar, Rossami, Rschwieb, Sbyrnes321, Shirulashem, Silly rabbit, Spangineer, StevenJohnston, Suisui, Supermanifold, TakuyaMurata, Thomas Bliem, Tobias Bergemann,
Tosha, Twri, Vanish2, Veromies, Wavelength, Weialawaga, Wood Thrush, Wshun, Zundark, 84 anonymous edits
Orthogonal group Source: http://en.wikipedia.org/w/index.php?oldid=429952786 Contributors: Algebraist, AxelBoldt, BenFrantzDale, Chadernook, Charles Matthews, DreamingInRed,
Drschawrz, Ettrig, Fropuff, Gauge, Giftlite, Guardian of Light, HappyCamper, JackSchmidt, Jim.belk, KnightRider, Kwamikagami, Lambiam, Looxix, Loren Rosen, Masnevets, MathMartin,
Mathiscool, Michael Hardy, Monsterman222, Msh210, Nbarth, Niout, Noideta, Oleg Alexandrov, Patrick, Paul D. Anderson, Phys, Pt, R.e.b., Renamed user 1, Salix alba, Schmloof, Shell
Kinney, Softcafe, Somethingcompletelydifferent, Technohead1980, The Anome, Thehotelambush, TooMuchMath, Ulner, Unyoyega, Weialawaga, Wolfrock, Zundark, 45 anonymous edits
Rotation group Source: http://en.wikipedia.org/w/index.php?oldid=429851471 Contributors: AxelBoldt, Banus, Beland, Cesiumfrog, Charles Matthews, CryptoDerk, Dan Gluck, Dr Dec,
Fropuff, G. Blaine, Gaius Cornelius, Giftlite, Grubber, Helder.wiki, Hyacinth, Jim.belk, JohnBlackburne, Juansempere, Linas, Looxix, M0rph, Maproom, Mct mht, Michael Hardy, Oleg
Alexandrov, PAR, Paolo.dL, Patrick, PaulGEllis, QFT, RDBury, Rgdboer, Robsavoie, Samuel Huang, Silly rabbit, Spurts, Stevenj, The Anome, Thurth, Tkuvho, V madhu, Wiki me, Yurivict,
Zundark, 34 anonymous edits
Vector-valued function Source: http://en.wikipedia.org/w/index.php?oldid=419513511 Contributors: Ac44ck, BrokenSegue, CBM, Charles Matthews, EconoPhysicist, FilipeS, Giftlite, Gurch,
Hannes Eder, Ht686rg90, JackSchmidt, Jecowa, MATThematical, MarcusMaximus, Michael Hardy, Neparis, Nillerdk, PV=nRT, Paolo.dL, Parodi, Plastikspork, Richie, Rror, Salix alba, Spoon!,
StradivariusTV, Sławomir Biały, TexasAndroid, User A1, 10 anonymous edits
Gramian matrix Source: http://en.wikipedia.org/w/index.php?oldid=427175745 Contributors: ABCD, Akhram, AugPi, Charles Matthews, Chnv, Cvalente, Eijkhout, Giftlite, IlPesso, Intellec7,
Jamshidian, Jitse Niesen, Keyi, MathMartin, Michael Hardy, Nbarth, Olegalexandrov, PV=nRT, Peskydan, Playingviolin1, Rausch, RicardoFachada, Shabbychef, Sharov, Sławomir Biały,
Tammojan, Tbackstr, Vanish2, Vonkje, Xelnx, Xodarap00, Ybungalobill, 18 anonymous edits
Lagrange's identity Source: http://en.wikipedia.org/w/index.php?oldid=410745245 Contributors: A. Pichler, AugPi, Brews ohare, David Tombe, FDT, Gene Ward Smith, Giftlite, Jitse Niesen,
JohnBlackburne, Justin545, Kbdank71, Michael Hardy, Michael Slone, Nicolas Bray, Paolo.dL, Rjwilmsi, Schmock, Selfworm, Simetrical, Stdazi, Sławomir Biały, TakuyaMurata,
XJamRastafire, Youngjinmoon
Quaternion Source: http://en.wikipedia.org/w/index.php?oldid=429220151 Contributors: 62.100.19.xxx, 95j, A. di M., AManWithNoPlan, Afteread, AjAldous, Aleks kleyn, Amantine,
Andrej.westermann, Andrewa, Anniepoo, Arved, AugPi, AxelBoldt, Baccyak4H, Bdesham, Ben Standeven, BenBaker, BenFrantzDale, BenRG, Bidabadi, Bob A, Bob Loblaw, Boing! said
Zebedee, Brion VIBBER, C quest000, CRGreathouse, Caylays wrath, Cffk, Charles Matthews, Chas zzz brown, Chris Barista, ChrisHodgesUK, Chtito, Ckoenigsberg, Cleared as filed,
Cochonfou, CommonsDelinker, Conversion script, Cronholm144, Crust, Cullinane, Cyp, D.M. from Ukraine, D6, DPoon, DanMS, DanielPenfield, Daqu, DaveRErickson, David Eppstein, David
Haslam, DavidCary, DavidWBrooks, Davidleblanc, Decrypt3, DeltaIngegneria, DemonThing, Denevans, Diocles, Dmmaus, Docu, Dreish, Dstahlke, Dysprosia, EdJohnston, Eequor, Eeyore22,
Elimisteve, ElonNarai, Encephalon, Equendil, Eriatarka, Eric Kvaalen, Excirial, FDT, Fgnievinski, Frank Lofaro Jr., Frazzydee, Frencheigh, Fropuff, Gabeh, Gandalf61, Gdr, Geometry guy,
Giftlite, Godvjrt, Goochelaar, Graham87, GregorB, Greyhawthorn, Grzegorj, Hairy Dude, Helder.wiki, Henry Delforn, Hgrosser, Hkuiper, Hobojaks, Homebum, Hotlorp, Hu, Hubbard rox 2008,
Hyacinth, Icairns, Ida Shaw, Ideyal, Iluvcapra, Imagi-King, Irregulargalaxies, JWWalker, JackSchmidt, JadeNB, JakeVortex, JamesBWatson, Jan Hidders, Jay Gatsby, Jeff02, JeffBobFrank,
Jemebius, Jespdj, Jheald, Jitse Niesen, Jj137, Jkominek, Joanjoc, Joe Kress, JoeBruno, JohnBlackburne, Jondel, Joriki, Jtoft, Jumbuck, Jwynharris, KMcD, KSmrq, Kainous, Katzmik, Kbk,
KickAssClown, Knutux, Koeplinger, Kri, Kwiki, Linas, Lockeownzj00, LokiClock, Looxix, LordEniac, Lotje, Lotu, Lupin, Macrakis, Makeemlighter, MarkMYoung, MathMartin, Mav, Menchi,
Mets501, Mezzaluna, Michael C Price, Michael Hardy, Michael.Pohoreski, Mkch, Mrh30, Mskfisher, Muhandes, Nbarth, Neilbeach, Niac2, Nigholith, Nneonneo, Noeckel, Nousernamesleft,
OTB, OlEnglish, Oleg Alexandrov, OneWeirdDude, Oreo Priest, Orionus, Ozob, P0mbal, PAR, Pablo X, Pak21, Paolo.dL, Papadim.G, Patrick, Patsuloi, Paul D. Anderson, Pdn, Phil Boswell,
PhilipO, Phys, Pmanderson, Pmg, Possum, Pred, ProkopHapala, Prosfilaes, Pvazteixeira, QuatSkinner, Quaternionist, Qutezuce, R'n'B, R.e.b., R3m0t, RMcGuigan, Raiden10, Rdnzl, Reddi,
Revolver, Reywas92, Rgdboer, Rjwilmsi, Robinh, Rogper, Rs2, Ruud Koot, SDC, Salgueiro, Sam Staton, Sangwine, Sanjaynediyara, Schmock, Scythe33, Shawn in Montreal, Shmorhay,
Shsilver, Silverfish, Simetrical, Siroxo, Sjoerd visscher, Skarebo, SkyWalker, Slow Smurf, Sneakums, Soler97, Spiral5800, Spoon!, Stamcose, StevenDH, Stevenj, Stwalczyk, SuperUser79,
Tachikoma's All Memory, Taits Wrath, TakuyaMurata, Taw, TenPoundHammer, Terry Bollinger, TheLateDentarthurdent, TheTito, Thenry1337, Thorwald, Titi2, Tkuvho, TobyNorris,
Tomchiukc, Trifonov, Tsemii, Turgidson, Varlaam, Virginia-American, Vkravche, Voltagedrop, WegianWarrior, WhiteHatLurker, Wiki101012, William Allen Simpson, Wleizero, Wshun,
Wwoods, XJamRastafire, Xantharius, Yoderj, Zedall, Zundark, Zy26, 327 anonymous edits
Skew-symmetric matrix Source: http://en.wikipedia.org/w/index.php?oldid=427764290 Contributors: Algebraist, Andres, AxelBoldt, BenFrantzDale, Bourbakista, Brienanni, Burn, Calle,
Charles Matthews, Domminico, Dr Dec, Fropuff, Giftlite, Grinevitski, Haseldon, Herbee, Jitse Niesen, Josh Cherry, Jshadias, Juansempere, KSmrq, Kevin Baas, Kiefer.Wolfowitz, Kyap, LOL,
LilHelpa, Lizard86, Lunch, Maksim-e, Mattfister, Mcbeth50, Melchoir, Michael Hardy, Msh210, Nbarth, Neparis, Ocolon, Octahedron80, Oleg Alexandrov, Oli Filth, PMajer, Paolo.dL, Patrick,
RDBury, Rjwilmsi, Syp, TakuyaMurata, Tarquin, Tbackstr, The tree stump, Tobias Bergemann, Username314, Zero0000, 43 anonymous edits
Xyzzy Source: http://en.wikipedia.org/w/index.php?oldid=423621735 Contributors: 61.9.128.xxx, A More Perfect Onion, Alinnisawest, Altenmann, AndrewHZ, Antixt, Armandoban, B4hand,
BillMcGonigle, Biscuitforce, CarrotMan, Cleared as filed, Conversion script, Courtarro, CraigBox, Creidieki, Denimadept, Dravecky, Eaglizard, Ed Poor, Eleland, ElfQrin, Elliotbay, Erpelchen,
Fanx, Flewis, Frank Lofaro Jr., Furrykef, Gerry Ashton, GoingBatty, Guy Harris, Harwasch, Hfodf, JIP, Jao, Jc3s5h, Jeffrey296, Jekader, JoshuaZ, Ken444444, Kinema, Kizor, Komap, Liftarn,
Lordfeepness, Lowellian, Martarius, MartinHarper, McGeddon, Mdz, Mrbartjens, Mrh30, Mycroft IV4, Nandesuka, NetRolller 3D, One more night, PGSONIC, Perrella, Pinkunicorn, Poromenos,
Rafwuk, RevBooyah, Rickadams, RockMFR, Rory O'Kane, RoySmith, Search4Lancer, Semifor, Spidey104, Stephen Gilbert, SuperSuperBoi, Th1rt3en, Thenickdude, Thumperward, Tom
Ketchum, Tony Sidaway, Truthanado, WarrenA, Welsh, Willphase, XMog, 82 anonymous edits
Quaternions and spatial rotation Source: http://en.wikipedia.org/w/index.php?oldid=430632116 Contributors: Albmont, ArnoldReinhold, AxelBoldt, Ben pcc, BenFrantzDale, BenRG,
Bjones410, Bmju, Brews ohare, Bulee, CALR, Catskul, Ceyockey, Charles Matthews, CheesyPuffs144, Cyp, Daniel Brockman, Daniel.villegas, Darkbane, David Eppstein, Denevans, Depakote,
Dionyziz, Dl2000, Ebelular, Edward, Endomorphic, Enosch, Eugene-elgato, Fgnievinski, Fish-Face, Fropuff, Fyrael, Gaius Cornelius, GangofOne, Genedial, Giftlite, Gutza, HenryHRich,
Hyacinth, Ig0r, J04n, Janek Kozicki, Jemebius, Jermcb, Jheald, Jitse Niesen, JohnBlackburne, JohnPritchard, Josh Triplett, KSmrq, Kborer, Kordas, Lambiam, LeandraVicci, Lemontea, Light
current, Linas, Lkesteloot, Looxix, Lotu, Lourakis, LuisIbanez, ManoaChild, Markus Kuhn, MathsPoetry, Michael C Price, Michael Hardy, Mike Stramba, Mild Bill Hiccup, Oleg Alexandrov,
Orderud, PAR, Paddy3118, Paolo.dL, Patrick, Patrick Gill, Patsuloi, Ploncomi, Pt, Qz, RJHall, Rainwarrior, Randallbsmith, Reddi, Rgdboer, Robinh, RzR, Samuel Huang, Sebsch, Short Circuit,
Sigmundur, SlavMFM, Soler97, TLKeller, Tamfang, Terry Bollinger, Timo Honkasalo, Tkuvho, TobyNorris, User A1, WVhybrid, WaysToEscape, Yoderj, Zhw, Zundark, 159 anonymous edits
Seven-dimensional cross product Source: http://en.wikipedia.org/w/index.php?oldid=425935260 Contributors: Aghitza, Antixt, Bkell, Brews ohare, Charles Matthews, DVdm, David Tombe,
Eric119, FDT, Fropuff, Gauge, Giftlite, Holmansf, Hooperbloob, JohnBlackburne, Magnesium, Melchoir, Monguin61, Ozob, Paolo.dL, Rgdboer, Robinh, Silly rabbit, Staecker, Sławomir Biały,
Vaughan Pratt, 18 anonymous edits
Octonion Source: http://en.wikipedia.org/w/index.php?oldid=429410487 Contributors: 345Kai, 62.100.19.xxx, Afteread, Aleks kleyn, Antixt, AxelBoldt, Baccyak4H, Bdesham, Bender2k14,
Brews ohare, Buttonius, Casimir9999, Charles Matthews, ChongDae, Conversion script, DRLB, Dmmaus, Donarreiskoffer, Fropuff, Georg Muntingh, Giftlite, Graham87, Helder.wiki, Henry
Delforn, Heptadecagon, Hgrosser, Howard McCay, Icairns, Ideyal, Iida-yosiaki, Isaac Rabinovitch, Isnow, Janet Davis, Jeppesn, JoeBruno, JohnBlackburne, Karl Dickman, Koeplinger,
Lanthanum-138, Linas, Lumidek, Machine Elf 1735, Markherm, MatthewMain, Mav, Michael C Price, Michael Hardy, Mjb, Ms2ger, Nousernamesleft, Oleg Alexandrov, OneWeirdDude,
PV=nRT, Pill, Pmanderson, Qutezuce, R.e.b., Robinh, Sabbut, Sam Hocevar, Saxbryn, Silly rabbit, TakuyaMurata, Taw, Template namespace initialisation script, Tesseran, TheTito, Tobias
Bergemann, Tosha, Tristanb, Zoicon5, Zundark, یکیو یلع, 57 anonymous edits
Multilinear algebra Source: http://en.wikipedia.org/w/index.php?oldid=422590431 Contributors: Alodyne, Brews ohare, Bsilverthorn, CesarB, Charles Matthews, D6, Fredrik, Giftlite, JaGa,
Japanese Searobin, Jheald, Juan Marquez, Karada, Loren Rosen, Modify, Naddy, Neonumbers, NoVomit, Rgdboer, Silly rabbit, Sławomir Biały, The Anome, Usgnus, WikiMSL, Zeroparallax, 13
anonymous edits
Pseudovector Source: http://en.wikipedia.org/w/index.php?oldid=426858492 Contributors: Bloodsnort, Brews ohare, CBM, Corkgkagj, CosineKitty, Enon, Eteq, Gauge, Gerbrant, Giftlite,
Hancypetric, Hondje, JasonSaulG, JohnBlackburne, Jormundgard, Juansempere, KHamsun, Kwamikagami, Ligulem, Mangledorf, Mennsa, Mets501, Nk, Oblivious, PAR, Paolo.dL, Patrick,
Quibik, Roadrunner, RockMagnetist, Sbyrnes321, SeventyThree, Silly rabbit, Smack, Stevenj, Strait, Sysy, Tarquin, Thinking of England, Vonregensburg, 22 anonymous edits
Bivector Source: http://en.wikipedia.org/w/index.php?oldid=430803965 Contributors: Antixt, Brews ohare, Charles Matthews, Dr.K., Giraffedata, Hyacinth, Jenny Harrison, John of Reading,
JohnBlackburne, Joy, Kbk, LokiClock, Magioladitis, Mathewsyriac, Michael Hardy, NOrbeck, Nbarth, Noraft, Paul August, Peter Karlsen, Phys, R'n'B, RDBury, Reallyskeptic, Rgdboer,
Sander123, Welsh, WikHead, Xavic69, 10 anonymous edits
Image Sources, Licenses and Contributors
Image:Impulsmoment van autowiel onder inversie.svg Source: http://en.wikipedia.org/w/index.php?title=File:Impulsmoment_van_autowiel_onder_inversie.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: User:Gerbrant
Image:Uitwendig product onder inversie.svg Source: http://en.wikipedia.org/w/index.php?title=File:Uitwendig_product_onder_inversie.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: User:Gerbrant
File:Wedge product.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Wedge_product.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Brews ohare
File:Exterior calc cross product.png Source: http://en.wikipedia.org/w/index.php?title=File:Exterior_calc_cross_product.png License: Public Domain Contributors: Original uploader was
Leland McInnes at en.wikipedia
File:Torque animation.gif Source: http://en.wikipedia.org/w/index.php?title=File:Torque_animation.gif License: Public Domain Contributors: Yawe
File:Bivector Sum.svg Source: http://en.wikipedia.org/w/index.php?title=File:Bivector_Sum.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JohnBlackburne
File:Tesseract.gif Source: http://en.wikipedia.org/w/index.php?title=File:Tesseract.gif License: Public domain Contributors: Amada44, Bryan, Mattes, Mentifisto, Rocket000, Rovnet,
Sarregouset, SharkD, Sl-Ziga, 1 anonymous edits
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/