
Index Gymnastics and Einstein Convention


Part (i)
Let $g$ be the metric, $g = g_{\mu\nu}\, e^{\mu} \otimes e^{\nu}$ with $g_{\mu\nu} = e_{\mu} \cdot e_{\nu}$. Then:
\[
\begin{aligned}
x \cdot x &= g(x, x) \\
          &= g_{\mu\nu}\, e^{\mu}(x^{\alpha} e_{\alpha})\, e^{\nu}(x^{\beta} e_{\beta}) \\
          &= g_{\mu\nu}\, x^{\alpha} x^{\beta}\, e^{\mu}(e_{\alpha})\, e^{\nu}(e_{\beta}) \\
          &= g_{\mu\nu}\, x^{\alpha} x^{\beta}\, \delta^{\mu}_{\ \alpha}\, \delta^{\nu}_{\ \beta} \\
          &= g_{\mu\nu}\, x^{\mu} x^{\nu}
\end{aligned}
\]

Part (ii)
The same analysis proceeds for two distinct vectors:
\[
\begin{aligned}
x \cdot y &= g(x, y) \\
          &= g_{\mu\nu}\, e^{\mu}(x^{\alpha} e_{\alpha})\, e^{\nu}(y^{\beta} e_{\beta}) \\
          &= g_{\mu\nu}\, x^{\alpha} y^{\beta}\, e^{\mu}(e_{\alpha})\, e^{\nu}(e_{\beta}) \\
          &= g_{\mu\nu}\, x^{\mu} y^{\nu}
\end{aligned}
\]

Part (iii)
We wish to show that $\delta_{ij}\delta_{jk} = \delta_{ik}$. Explicitly, this is:
\[
\sum_{j=1}^{3} \delta_{ij}\delta_{jk}
= \delta_{i1}\delta_{1k} + \delta_{i2}\delta_{2k} + \delta_{i3}\delta_{3k}
\]
Now we can proceed on a case-by-case basis:
1. $i = 1$: only the $j = 1$ term survives, so the sum is $\delta_{11}\delta_{1k} = \delta_{1k}$.
2. $i = 2$: only the $j = 2$ term survives, so the sum is $\delta_{22}\delta_{2k} = \delta_{2k}$.
3. $i = 3$: only the $j = 3$ term survives, so the sum is $\delta_{33}\delta_{3k} = \delta_{3k}$.
In all possible cases the result is $1$ if $i = k$ and $0$ otherwise; so this is identically $\delta_{ik}$.

Part (iv)
We wish to show that $\delta_{ij}a_j = a_i$. Explicitly, we have:
\[
\delta_{ij}a_j = a_1 \delta_{i1} + a_2 \delta_{i2} + a_3 \delta_{i3}
\]
Again, we can proceed on a case-by-case basis, and we will see that only the $j = i$ term of the summation survives; so $\delta_{ij}a_j = a_i$.

Part (v)
We wish to show that $\delta^{i}_{\ i} = 3$ (in 3 dimensions). Explicitly, this is the statement that:
\[
\sum_{i=1}^{3} \delta_{ii} = 3
\]
That this is true is directly verified:
\[
\delta_{11} + \delta_{22} + \delta_{33} = 1 + 1 + 1 = 3
\]
In general, we have that $\delta^{i}_{\ i} = N$, where $N$ is the dimension of the vector space.

Part (a)
We wish to show that:
\[
(A_i B_j)(C_i D_j) = (A \cdot C)(B \cdot D)
\]
This can be verified directly:
\[
(A_i B_j)(C_i D_j)
= \sum_{i=1}^{3}\sum_{j=1}^{3} A_i B_j C_i D_j
= \sum_{i=1}^{3}\sum_{j=1}^{3} A_i C_i B_j D_j
= \left(\sum_{i=1}^{3} A_i C_i\right)\left(\sum_{j=1}^{3} B_j D_j\right)
= (A \cdot C)(B \cdot D)
\]

Part (b)
If $A^{ij} = -A^{ji}$ (antisymmetric) and $B^{ij} = B^{ji}$ (symmetric), we have that:
\[
A_{ij}B^{ij}
= \sum_{i=1}^{3}\sum_{j=1}^{3} A_{ij}B^{ij}
= A_{11}B^{11} + A_{12}B^{12} + A_{13}B^{13}
+ A_{21}B^{21} + A_{22}B^{22} + A_{23}B^{23}
+ A_{31}B^{31} + A_{32}B^{32} + A_{33}B^{33}
\]
Noting now that the term $A_{ij}B^{ij}$ (no summation) is equal to $-A_{ji}B^{ji}$, the off-diagonal terms cancel in pairs:
\[
A_{12}B^{12} + A_{21}B^{21} = 0, \qquad
A_{13}B^{13} + A_{31}B^{31} = 0, \qquad
A_{23}B^{23} + A_{32}B^{32} = 0
\]
and by skew-symmetry, we must have $A_{11} = A_{22} = A_{33} = 0$. Therefore,
\[
A_{ij}B^{ij} = 0
\]
Alternatively, we can simply write that:
\[
A_{ij}B^{ij}
= -A_{ji}B^{ij} \ \text{(by antisymmetry of $A$)}
= -A_{ji}B^{ji} \ \text{(by symmetry of $B$)}
= -A_{ij}B^{ij} \ \text{(by relabelling the dummy indices)}
\]
So we end up with $A_{ij}B^{ij} = -A_{ij}B^{ij}$, which can only be true if $A_{ij}B^{ij} = 0$.
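These index identities are easy to spot-check numerically. The following is a minimal sketch (not part of the original solution) using NumPy's einsum on random vectors and tensors; all variable names are illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Part (v): delta_ii = N, i.e. the trace of the identity is the dimension.
delta = np.eye(3)
assert np.isclose(np.trace(delta), 3.0)

# Part (iv): contracting with the Kronecker delta just relabels the index.
a = rng.normal(size=3)
assert np.allclose(np.einsum('ij,j->i', delta, a), a)

# Part (a): (A_i B_j)(C_i D_j) = (A.C)(B.D).
A, B, C, D = rng.normal(size=(4, 3))
lhs = np.einsum('i,j,i,j->', A, B, C, D)
assert np.isclose(lhs, np.dot(A, C) * np.dot(B, D))

# Part (b): an antisymmetric tensor contracted with a symmetric one vanishes.
M = rng.normal(size=(3, 3))
A_anti = M - M.T          # antisymmetric: A^{ij} = -A^{ji}
B_sym = M + M.T           # symmetric:     B^{ij} =  B^{ji}
assert np.isclose(np.einsum('ij,ij->', A_anti, B_sym), 0.0)

print("all index-gymnastics checks passed")
\end{verbatim}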

Antisymmetry

Part (a)
Consider that in 3 dimensions, the even permutations of $\{1, 2, 3\}$ are:
\[
(1, 2, 3), \ (2, 3, 1), \ (3, 1, 2)
\]
while the odd permutations are:
\[
(1, 3, 2), \ (3, 2, 1), \ (2, 1, 3)
\]
Therefore we have that:
\[
\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = +1, \qquad
\epsilon_{132} = \epsilon_{321} = \epsilon_{213} = -1
\]
so the cyclic permutations have parity $+1$.

Consider that in 4 dimensions, the even permutations of $\{1, 2, 3, 4\}$ are:
\[
(1,2,3,4),\ (1,3,4,2),\ (1,4,2,3),\ (2,1,4,3),\ (2,3,1,4),\ (2,4,3,1),\ (3,1,2,4),\ (3,2,4,1),\ (3,4,1,2),\ (4,1,3,2),\ (4,2,1,3),\ (4,3,2,1)
\]
while the odd permutations are:
\[
(1,2,4,3),\ (1,3,2,4),\ (1,4,3,2),\ (2,1,3,4),\ (2,3,4,1),\ (2,4,1,3),\ (3,1,4,2),\ (3,2,1,4),\ (3,4,2,1),\ (4,1,2,3),\ (4,2,3,1),\ (4,3,1,2)
\]
In this case then, it is instead that $\epsilon_{1234} = 1$ but $\epsilon_{2341} = -1$: in 4 dimensions a cyclic permutation is odd.

Part (b)
We wish to derive the general expression:
\[
\epsilon_{ijk}\,\epsilon_{lmn} =
\det \begin{pmatrix}
\delta_{il} & \delta_{im} & \delta_{in} \\
\delta_{jl} & \delta_{jm} & \delta_{jn} \\
\delta_{kl} & \delta_{km} & \delta_{kn}
\end{pmatrix}
\]
Consider that $\epsilon_{ijk}$ is effectively defined as $\epsilon_{ijk} = \mathrm{sgn}(\sigma)$, where $\sigma \in S_3$ is the permutation that takes $(1,2,3)$ to $(i,j,k)$. Since $\epsilon_{ijk}$ is the sign of the permutation from $(1,2,3)$ to $(i,j,k)$ and $\epsilon_{lmn}$ is the sign of the permutation from $(1,2,3)$ to $(l,m,n)$, then effectively $\epsilon_{ijk}\epsilon_{lmn}$ is the sign of the permutation $\sigma'$ that takes $(i,j,k)$ to $(l,m,n)$. As such, we can define:
\[
\epsilon_{ijk}\,\epsilon_{lmn} = \mathrm{sgn}(\sigma')
\]
Now, we can artificially extend the definition as:
\[
\epsilon_{ijk}\,\epsilon_{lmn} = \sum_{\sigma \in S_3} \mathrm{sgn}(\sigma)\, \delta_{\sigma(i)l}\, \delta_{\sigma(j)m}\, \delta_{\sigma(k)n}
\]
since if $\sigma \neq \sigma'$, then $\delta_{\sigma(i)l}\,\delta_{\sigma(j)m}\,\delta_{\sigma(k)n} = 0$. Note however that a sum of signed products of this form is the very definition of a determinant: making the identification $p_{1a} = \delta_{l i_a}$, $p_{2b} = \delta_{m i_b}$, $p_{3c} = \delta_{n i_c}$, with $(i_1, i_2, i_3) = (i, j, k)$, we have
\[
\epsilon_{ijk}\,\epsilon_{lmn}
= \sum_{a,b,c} \epsilon_{abc}\, p_{1a}\, p_{2b}\, p_{3c}
= \det(p)
= \det \begin{pmatrix}
\delta_{li} & \delta_{lj} & \delta_{lk} \\
\delta_{mi} & \delta_{mj} & \delta_{mk} \\
\delta_{ni} & \delta_{nj} & \delta_{nk}
\end{pmatrix}
\]
but since the determinant of a matrix and its transpose are the same, we have that:
\[
\epsilon_{ijk}\,\epsilon_{lmn} =
\det \begin{pmatrix}
\delta_{il} & \delta_{im} & \delta_{in} \\
\delta_{jl} & \delta_{jm} & \delta_{jn} \\
\delta_{kl} & \delta_{km} & \delta_{kn}
\end{pmatrix}
\]
Expanding this gives us:
\[
\epsilon_{ijk}\,\epsilon_{lmn}
= \delta_{il}\,(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km})
+ \delta_{im}\,(\delta_{jn}\delta_{kl} - \delta_{jl}\delta_{kn})
+ \delta_{in}\,(\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl})
\]
In the homework notation (writing the second triple of indices as $i', j', k'$), this is:
\[
\epsilon_{ijk}\,\epsilon_{i'j'k'}
= \delta_{ii'}\,(\delta_{jj'}\delta_{kk'} - \delta_{jk'}\delta_{kj'})
+ \delta_{ij'}\,(\delta_{jk'}\delta_{ki'} - \delta_{ji'}\delta_{kk'})
+ \delta_{ik'}\,(\delta_{ji'}\delta_{kj'} - \delta_{jj'}\delta_{ki'})
\]

Part (c)
We wish to show the contraction formula:
\[
\epsilon_{ajk}\,\epsilon_{amn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}
\]
Consider the expansion from part (b), replacing $i \to a$, $l \to a$, and summing over $a$:
\[
\begin{aligned}
\epsilon_{ajk}\,\epsilon_{amn}
&= \delta_{aa}\,(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km})
 + \delta_{am}\,(\delta_{jn}\delta_{ka} - \delta_{ja}\delta_{kn})
 + \delta_{an}\,(\delta_{ja}\delta_{km} - \delta_{jm}\delta_{ka}) \\
&= 3\,(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km})
 + (\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn})
 + (\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}) \\
&= \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}
\end{aligned}
\]

Part (d)
Proceeding in a manner similar to part (b), we will have that:
\[
\epsilon_{ijkl}\,\epsilon_{abcd} =
\det \begin{pmatrix}
\delta_{ia} & \delta_{ib} & \delta_{ic} & \delta_{id} \\
\delta_{ja} & \delta_{jb} & \delta_{jc} & \delta_{jd} \\
\delta_{ka} & \delta_{kb} & \delta_{kc} & \delta_{kd} \\
\delta_{la} & \delta_{lb} & \delta_{lc} & \delta_{ld}
\end{pmatrix}
\]
Expanding along the first row:
\[
\epsilon_{ijkl}\,\epsilon_{abcd}
= \delta_{ia}\det\!\begin{pmatrix}\delta_{jb}&\delta_{jc}&\delta_{jd}\\ \delta_{kb}&\delta_{kc}&\delta_{kd}\\ \delta_{lb}&\delta_{lc}&\delta_{ld}\end{pmatrix}
- \delta_{ib}\det\!\begin{pmatrix}\delta_{ja}&\delta_{jc}&\delta_{jd}\\ \delta_{ka}&\delta_{kc}&\delta_{kd}\\ \delta_{la}&\delta_{lc}&\delta_{ld}\end{pmatrix}
+ \delta_{ic}\det\!\begin{pmatrix}\delta_{ja}&\delta_{jb}&\delta_{jd}\\ \delta_{ka}&\delta_{kb}&\delta_{kd}\\ \delta_{la}&\delta_{lb}&\delta_{ld}\end{pmatrix}
- \delta_{id}\det\!\begin{pmatrix}\delta_{ja}&\delta_{jb}&\delta_{jc}\\ \delta_{ka}&\delta_{kb}&\delta_{kc}\\ \delta_{la}&\delta_{lb}&\delta_{lc}\end{pmatrix}
\]
and, recognising each $3 \times 3$ delta determinant as a product of three-index epsilons (part (b)), we have:
\[
\epsilon_{ijkl}\,\epsilon_{abcd}
= \delta_{ia}\,\epsilon_{jkl}\epsilon_{bcd}
- \delta_{ib}\,\epsilon_{jkl}\epsilon_{acd}
+ \delta_{ic}\,\epsilon_{jkl}\epsilon_{abd}
- \delta_{id}\,\epsilon_{jkl}\epsilon_{abc}
\]
Taking $i \to s$, $a \to s$ and contracting:
\[
\begin{aligned}
\epsilon_{sjkl}\,\epsilon_{sbcd}
&= \delta_{ss}\,\epsilon_{jkl}\epsilon_{bcd}
 - \delta_{sb}\,\epsilon_{jkl}\epsilon_{scd}
 + \delta_{sc}\,\epsilon_{jkl}\epsilon_{sbd}
 - \delta_{sd}\,\epsilon_{jkl}\epsilon_{sbc} \\
&= 4\,\epsilon_{jkl}\epsilon_{bcd}
 - \epsilon_{jkl}\epsilon_{bcd}
 + \epsilon_{jkl}\epsilon_{cbd}
 - \epsilon_{jkl}\epsilon_{dbc} \\
&= 4\,\epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} \\
&= \epsilon_{jkl}\epsilon_{bcd}
\end{aligned}
\]
So we have that:
\[
\epsilon_{sjkl}\,\epsilon_{sbcd} = \epsilon_{jkl}\,\epsilon_{bcd}
\]
Vector Products

Part (i)
We wish to show that for vectors $a, b, c$ we have:
\[
a \cdot (b \times c) = b \cdot (c \times a) = c \cdot (a \times b)
\]
Note that this is really a consequence of the invariance of $\epsilon_{ijk}$ under cyclic permutations of its indices. Consider that:
\[
a \cdot (b \times c)
= \epsilon_{ijk}\, a_i b_j c_k
= \epsilon_{jki}\, a_i b_j c_k
= \epsilon_{ijk}\, a_k b_i c_j
= \epsilon_{ijk}\, b_i c_j a_k
= b \cdot (c \times a)
\]
(in the third step we have relabelled the dummy indices), and likewise:
\[
b \cdot (c \times a)
= \epsilon_{ijk}\, b_i c_j a_k
= \epsilon_{jki}\, b_i c_j a_k
= \epsilon_{ijk}\, b_k c_i a_j
= \epsilon_{ijk}\, c_i a_j b_k
= c \cdot (a \times b)
\]
we obtain:

Part (ii)
We wish to show that $a \times (b \times c) = (a \cdot c)\, b - (a \cdot b)\, c$. In components, this is:
\[
\begin{aligned}
[a \times (b \times c)]_i
&= \epsilon_{ijk}\, a_j\, \epsilon_{kmn}\, b_m c_n
 = \epsilon_{ijk}\epsilon_{kmn}\, a_j b_m c_n
 = \epsilon_{kij}\epsilon_{kmn}\, a_j b_m c_n \\
&= (\delta_{im}\delta_{jn} - \delta_{in}\delta_{jm})\, a_j b_m c_n
 = \delta_{im}\delta_{jn}\, a_j b_m c_n - \delta_{in}\delta_{jm}\, a_j b_m c_n \\
&= a_n b_i c_n - a_m b_m c_i
 = (a \cdot c)\, b_i - (a \cdot b)\, c_i
\end{aligned}
\]

Part (iii)
We wish to show that
\[
(a \times b) \cdot (c \times d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)
\]
This can be obtained from parts (i) and (ii). From part (i) (the cyclic property of the triple product), we have that:
\[
(a \times b) \cdot (c \times d) = c \cdot \big(d \times (a \times b)\big)
\]
and from part (ii), we have that:
\[
c \cdot \big(d \times (a \times b)\big)
= c \cdot \big((d \cdot b)\, a - (d \cdot a)\, b\big)
= (d \cdot b)(c \cdot a) - (d \cdot a)(c \cdot b)
= (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)
\]
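Both of these identities are straightforward to check numerically; the following sketch (not part of the original solution) does so on random vectors.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = rng.normal(size=(4, 3))

# Part (ii): the "BAC-CAB" expansion of the vector triple product.
assert np.allclose(np.cross(a, np.cross(b, c)),
                   np.dot(a, c) * b - np.dot(a, b) * c)

# Part (iii): Lagrange's identity for (a x b).(c x d).
assert np.isclose(np.dot(np.cross(a, b), np.cross(c, d)),
                  np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c))

print("vector-product identities verified")
\end{verbatim}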
Part (iv)
We wish to show that (i) gives the spherical sine law, and that (iii) gives the spherical cosine law.

First the sine law. The spherical sine law is the statement that:
\[
\frac{\sin A}{\sin a} = \frac{\sin B}{\sin b} = \frac{\sin C}{\sin c}
\]
where $a, b, c$ are the (arc) lengths of the sides opposite to the angles $A, B, C$, respectively. Let $a, b, c$ also denote unit vectors at the origin, such that $A$ is the angle at the endpoint of $a$, $B$ is the angle at the endpoint of $b$, and $C$ is the angle at the endpoint of $c$. Then the side $a$ is the arc traced between $b$ and $c$, the side $b$ is the arc traced between $a$ and $c$, and the side $c$ is the arc traced between $a$ and $b$.

Now, suppose we can show that:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = a \cdot (b \times c)
\]
Then we will have solved our problem; because by (i), the quantity $a \cdot (b \times c) = b \cdot (c \times a) = c \cdot (a \times b)$ is invariant under cyclic permutations of the arguments $a, b, c$, so likewise we will have:
\[
a \cdot \big((a \times b) \times (a \times c)\big)
= b \cdot \big((b \times c) \times (b \times a)\big)
= c \cdot \big((c \times a) \times (c \times b)\big)
\]
Consider that:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = |a|\, \big|(a \times b) \times (a \times c)\big| \cos\theta
\]
where $\theta$ is the angle between the vectors $a$ and $(a \times b) \times (a \times c)$; however, by their construction, these two vectors are necessarily colinear, so that $\cos\theta = 1$. Additionally, since $a$ is a unit vector, $|a| = 1$. Therefore, we have that:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = \big|(a \times b) \times (a \times c)\big|
\]
Using the fact that $|u \times v| = |u|\,|v| \sin\theta_{uv}$, we obtain:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = |a \times b|\, |a \times c|\, \sin\theta_{ab,ac}
\]
Note again, however, that by its construction $\theta_{ab,ac} = A$: the angle between $a \times b$ and $a \times c$ is exactly the angle between the tangent vectors at $a$ in the directions along $b$ and $c$. Therefore, with $|a \times b| = \sin c$ and $|a \times c| = \sin b$, we have:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = |a \times b|\, |a \times c| \sin A = \sin c \, \sin b \, \sin A
\]
Finally, cyclically permuting the variables $a, b, c$ (and repeating the same procedure) gives us:
\[
\sin c \, \sin b \, \sin A = \sin a \, \sin c \, \sin B = \sin b \, \sin a \, \sin C
\]
Now, dividing everything by $\sin a \, \sin b \, \sin c$ gives us:
\[
\frac{\sin A}{\sin a} = \frac{\sin B}{\sin b} = \frac{\sin C}{\sin c}
\]
This is the spherical sine law.

To actually show that $a \cdot ((a \times b) \times (a \times c)) = a \cdot (b \times c)$, we use part (ii):
\[
a \cdot \big((a \times b) \times (a \times c)\big)
= a \cdot \big(((a \times b) \cdot c)\, a - ((a \times b) \cdot a)\, c\big)
= ((a \times b) \cdot c)\, |a|^2 - ((a \times b) \cdot a)(a \cdot c)
\]
Note however that $a \times b$ is perpendicular to $a$, so $(a \times b) \cdot a = 0$; and $a$ is a unit vector, so $|a|^2 = 1$. Therefore we obtain:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = (a \times b) \cdot c = c \cdot (a \times b)
\]
and using (i), we have:
\[
a \cdot \big((a \times b) \times (a \times c)\big) = a \cdot (b \times c)
\]

Now the cosine law. Using (iii), we have that:
\[
(a \times b) \cdot (c \times d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)
\]
Taking $d = b$, we have:
\[
(a \times b) \cdot (c \times b) = (a \cdot c)(b \cdot b) - (a \cdot b)(b \cdot c)
\]
Explicitly, this is:
\[
|a \times b|\, |c \times b| \cos\theta_{ab,cb} = \cos b - \cos c \, \cos a
\]
Again, by our construction we have $\theta_{ab,cb} = B$; and with $|a \times b| = \sin c$, $|c \times b| = \sin a$, we have:
\[
\sin c \, \sin a \, \cos B = \cos b - \cos c \, \cos a
\]
Rearranging terms gives us:
\[
\cos b = \cos c \, \cos a + \sin c \, \sin a \, \cos B
\]
This is the spherical cosine law.
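Both spherical laws can be sanity-checked on a randomly generated spherical triangle. The sketch below (not part of the original solution) constructs the vertex angles exactly as in the argument above, from the normals $a \times b$, $a \times c$, etc.; the helper name angle is illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def angle(u, v):
    """Angle between two vectors."""
    return np.arccos(np.clip(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))

# Three random unit vectors = vertices of a spherical triangle.
a, b, c = (v / np.linalg.norm(v) for v in rng.normal(size=(3, 3)))

# Side lengths (arcs): side a lies between b and c, etc.
side_a, side_b, side_c = angle(b, c), angle(a, c), angle(a, b)

# Vertex angles: A is the angle at the endpoint of a, between the planes (a,b) and (a,c).
A = angle(np.cross(a, b), np.cross(a, c))
B = angle(np.cross(b, c), np.cross(b, a))
C = angle(np.cross(c, a), np.cross(c, b))

# Spherical sine law: sin A / sin a = sin B / sin b = sin C / sin c.
ratios = np.sin([A, B, C]) / np.sin([side_a, side_b, side_c])
assert np.allclose(ratios, ratios[0])

# Spherical cosine law: cos b = cos c cos a + sin c sin a cos B.
assert np.isclose(np.cos(side_b),
                  np.cos(side_c) * np.cos(side_a)
                  + np.sin(side_c) * np.sin(side_a) * np.cos(B))
print("spherical sine and cosine laws verified numerically")
\end{verbatim}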

Bernoulli and Vector Products

Given Euler's equation for fluid motion:
\[
\frac{\partial v}{\partial t} + (v \cdot \nabla)\, v = -\nabla h
\]
In Cartesian coordinates, this is:
\[
\frac{\partial v^i}{\partial t} + v^s \partial_{x^s} v^i = -\partial_{x^i} h
\]
We wish to show that:
\[
v^s \partial_{x^s} v^i = -\,\epsilon_{ijk}\, v^j\, \epsilon_{kab}\, \partial_{x^a} v^b + \partial_{x^i}\!\left(\tfrac{1}{2} v^2\right)
\]
that is, $(v \cdot \nabla)v = -\,v \times (\nabla \times v) + \nabla\!\left(\tfrac{1}{2}v^2\right)$. On the RHS, we have that:
\[
\epsilon_{ijk}\,\epsilon_{kab}\, v^j \partial_{x^a} v^b
= \epsilon_{kij}\,\epsilon_{kab}\, v^j \partial_{x^a} v^b
= (\delta_{ia}\delta_{jb} - \delta_{ib}\delta_{ja})\, v^j \partial_{x^a} v^b
= v^j \partial_{x^i} v^j - v^j \partial_{x^j} v^i
\]
so that:
\[
\text{RHS} = -\,v^s \partial_{x^i} v^s + v^s \partial_{x^s} v^i + \partial_{x^i}\!\left(\tfrac{1}{2} v^2\right)
\]
Note however that:
\[
\partial_{x^i}\!\left(\tfrac{1}{2} v^2\right)
= \partial_{x^i}\!\left(\tfrac{1}{2} v^s v^s\right)
= \tfrac{1}{2}\, v^s \partial_{x^i} v^s + \tfrac{1}{2}\, (\partial_{x^i} v^s)\, v^s
= v^s \partial_{x^i} v^s
\]
Therefore:
\[
\text{RHS} = v^s \partial_{x^s} v^i
\]
so we have proven our claim. As such, we have shown that:
\[
(v \cdot \nabla)\, v = -\,v \times (\nabla \times v) + \nabla\!\left(\tfrac{1}{2} v^2\right)
\]
Therefore Euler's equation becomes:
\[
\frac{\partial v}{\partial t} - v \times (\nabla \times v) + \nabla\!\left(\tfrac{1}{2} v^2\right) = -\nabla h
\]
that is,
\[
\frac{\partial v}{\partial t} + \omega \times v = -\nabla\!\left(\tfrac{1}{2} v^2 + h\right)
\]
for which we have identified $\omega = \nabla \times v$ as the vorticity. As such, if $\partial v / \partial t = 0$ (steady flow), then we are left with:
\[
\omega \times v = -\nabla\!\left(\tfrac{1}{2} v^2 + h\right)
\]
Along a streamline $r(t)$, we have (by definition) that:
\[
v = \frac{dr}{dt}
\]
As such, we have:
\[
\frac{d}{dt}\!\left(\tfrac{1}{2} v^2 + h\right)
= \frac{dr}{dt} \cdot \nabla\!\left(\tfrac{1}{2} v^2 + h\right)
= -\,v \cdot (\omega \times v)
= 0
\]
since $\omega \times v$ is perpendicular to $v$. This gives us that $\tfrac{1}{2} v^2 + h$ is constant along a streamline. Therefore, we must have:
\[
\tfrac{1}{2} v^2 + h = \text{const}
\]
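The key vector-calculus identity used above can be confirmed symbolically. The following is a minimal SymPy sketch (not part of the original solution) for an arbitrary differentiable velocity field; the component function names vx, vy, vz are placeholders.

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')
# Arbitrary (symbolic) velocity field components.
vx, vy, vz = (sp.Function(name)(x, y, z) for name in ('vx', 'vy', 'vz'))
v = sp.Matrix([vx, vy, vz])
coords = (x, y, z)

grad = lambda f: sp.Matrix([sp.diff(f, q) for q in coords])
curl = lambda u: sp.Matrix([sp.diff(u[2], y) - sp.diff(u[1], z),
                            sp.diff(u[0], z) - sp.diff(u[2], x),
                            sp.diff(u[1], x) - sp.diff(u[0], y)])

# (v . grad) v, computed component by component.
advect = sp.Matrix([sum(v[s] * sp.diff(v[i], coords[s]) for s in range(3))
                    for i in range(3)])

# RHS of the identity: -v x (curl v) + grad(v^2 / 2).
rhs = -v.cross(curl(v)) + grad(v.dot(v) / 2)

assert (advect - rhs).applyfunc(sp.expand) == sp.zeros(3, 1)
print("(v.grad)v = -v x (curl v) + grad(v^2/2) verified symbolically")
\end{verbatim}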

Antisymmetry and Determinants

Part (a)
We begin with the classical definition of the determinant of $A$, an $n \times n$ matrix:
\[
\det(A) = \epsilon_{i_1 i_2 \ldots i_n}\, A_{1 i_1} A_{2 i_2} \cdots A_{n i_n}
\]
We wish to show that:
\[
\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{j_1 \ldots j_n}\, A_{i_1 j_1} A_{i_2 j_2} \cdots A_{i_n j_n}
\]
We begin by using the definition of $\det(A)$ to obtain:
\[
\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{i_1 \ldots i_n}\, \epsilon_{j_1 \ldots j_n}\, A_{1 j_1} \cdots A_{n j_n}
\]
Now, since the product $A_{1 j_1} \cdots A_{n j_n}$ is commutative, we can imagine rearranging the order of the factors, at no cost, so that the first index of $A_{kl}$ runs in the order $i_1 \ldots i_n$:
\[
\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{i_1 \ldots i_n}\, \epsilon_{j_1 \ldots j_n}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}
\]
Now, we can rearrange the indices of $\epsilon_{j_1 \ldots j_n}$ into the order $j_{i_1} \ldots j_{i_n}$, at the cost of a permutation sign:
\[
\epsilon_{i_1 \ldots i_n} \det(A) = \mathrm{sgn}(\sigma)\, \epsilon_{i_1 \ldots i_n}\, \epsilon_{j_{i_1} \ldots j_{i_n}}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}
\]
where $\sigma$ is the permutation $1 \ldots n \to i_1 \ldots i_n$. However, $\mathrm{sgn}(\sigma)$ is exactly the definition of $\epsilon_{i_1 \ldots i_n}$; that is, $\epsilon_{i_1 \ldots i_n} = \mathrm{sgn}(\sigma)$. So this gives us:
\[
\epsilon_{i_1 \ldots i_n} \det(A)
= \big(\mathrm{sgn}(\sigma)\big)^2\, \epsilon_{j_{i_1} \ldots j_{i_n}}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}
= \epsilon_{j_{i_1} \ldots j_{i_n}}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}
\]
Finally, the indices $j_{i_k}$ are just dummy indices, so we may as well relabel them $j_{i_k} \to j_k$:
\[
\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{j_1 \ldots j_n}\, A_{i_1 j_1} \cdots A_{i_n j_n}
\]
This gives us our desired equality.

As an example, take $n = 4$. We have:
\[
\epsilon_{ijkl} \det(A) = \epsilon_{ijkl}\, \epsilon_{abcd}\, A_{1a} A_{2b} A_{3c} A_{4d}
\]
Let us take, say, $ijkl = 4132$. Then:
\[
\epsilon_{4132} \det(A)
= \epsilon_{4132}\, \epsilon_{abcd}\, A_{1a} A_{2b} A_{3c} A_{4d}
= \epsilon_{4132}\, \epsilon_{abcd}\, A_{4d} A_{1a} A_{3c} A_{2b}
= \mathrm{sgn}(\sigma)\, \epsilon_{4132}\, \epsilon_{dacb}\, A_{4d} A_{1a} A_{3c} A_{2b}
\]
where $\sigma : abcd \to dacb$; equivalently, this is $\sigma : 1234 \to 4132$. So $\mathrm{sgn}(\sigma) = \epsilon_{4132}$, and at the end of the day we have:
\[
\epsilon_{4132} \det(A) = \epsilon_{dacb}\, A_{4d} A_{1a} A_{3c} A_{2b}
\]
and then we can relabel the dummy indices as we wish.

From this, we now wish to show that $\det(A)\det(B) = \det(AB)$. We have:
\[
\det(A)\det(B) = \det(B)\det(A)
= \big(\epsilon_{i_1 \ldots i_n}\, B_{1 i_1} \cdots B_{n i_n}\big)\, \big(\epsilon_{j_1 \ldots j_n}\, A_{1 j_1} \cdots A_{n j_n}\big)
= \epsilon_{j_1 \ldots j_n}\, \big(\epsilon_{i_1 \ldots i_n}\, B_{1 i_1} \cdots B_{n i_n}\big)\, A_{1 j_1} \cdots A_{n j_n}
\]
Applying the identity just derived (with $B$ in place of $A$), $\epsilon_{j_1 \ldots j_n} \det(B) = \epsilon_{i_1 \ldots i_n}\, B_{j_1 i_1} \cdots B_{j_n i_n}$, so:
\[
\det(A)\det(B)
= \epsilon_{i_1 \ldots i_n}\, B_{j_1 i_1} \cdots B_{j_n i_n}\, A_{1 j_1} \cdots A_{n j_n}
= \epsilon_{i_1 \ldots i_n}\, (AB)_{1 i_1} \cdots (AB)_{n i_n}
= \det(AB)
\]
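The classical epsilon definition of the determinant, the identity $\epsilon_{i_1 \ldots i_n}\det(A) = \epsilon_{j_1 \ldots j_n}A_{i_1 j_1}\cdots A_{i_n j_n}$, and the product rule can all be checked numerically. The sketch below (not part of the original solution) uses 0-based indices; names are illustrative.

\begin{verbatim}
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation, via inversion counting."""
    n = len(perm)
    return (-1) ** sum(perm[a] > perm[b] for a in range(n) for b in range(a + 1, n))

def det_leibniz(A):
    """det(A) = eps_{i1...in} A_{1 i1} ... A_{n in} (indices 0-based)."""
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[r, p[r]] for r in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(4)
A, B = rng.normal(size=(2, 4, 4))

# The epsilon definition agrees with the usual determinant.
assert np.isclose(det_leibniz(A), np.linalg.det(A))

# det(A) det(B) = det(AB).
assert np.isclose(det_leibniz(A) * det_leibniz(B), det_leibniz(A @ B))

# eps_{i1..in} det(A) = eps_{j1..jn} A_{i1 j1} ... A_{in jn}, for a few index choices.
for idx in [(0, 1, 2, 3), (3, 0, 2, 1), (2, 0, 3, 1)]:
    lhs = sign(idx) * det_leibniz(A)
    rhs = sum(sign(j) * np.prod([A[idx[r], j[r]] for r in range(4)])
              for j in permutations(range(4)))
    assert np.isclose(lhs, rhs)

print("determinant identities verified")
\end{verbatim}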

Part (b)
(i) Let $V$ be an $n$-dimensional vector space, and let $\ell : V^n \to \mathbb{C}$ be a completely antisymmetric multilinear form. First, we wish to show that, up to a multiplicative constant, only one such map exists; that is, we wish to show that the space of all such $\ell$ is one-dimensional.

Let $\{e_i\}_{i=1}^{n}$ be a basis for $V$; then for $x \in V$ we have $x = x^i e_i$. Consider then $\ell\big(x_{(1)}, x_{(2)}, \ldots, x_{(n)}\big)$. We can rewrite this as:
\[
\ell\big(x_{(1)}, \ldots, x_{(n)}\big)
= \ell\big(x_{(1)}^{i_1} e_{i_1}, \ldots, x_{(n)}^{i_n} e_{i_n}\big)
= x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \ell\big(e_{i_1}, \ldots, e_{i_n}\big)
\]
However, due to the complete antisymmetry of $\ell$, we need not sum over all indices $i_1 \ldots i_n$, but only over those that are permutations of $1, 2, \ldots, n$ (any repeated index gives a vanishing term). Let $S_n$ be the set of permutations of $1, \ldots, n$. Then we can equally write:
\[
\ell\big(x_{(1)}, \ldots, x_{(n)}\big)
= \sum_{(i_1 \ldots i_n) \in S_n} x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \ell\big(e_{i_1}, \ldots, e_{i_n}\big)
\]
Now, we can re-order the arguments $e_{i_1}, \ldots, e_{i_n}$ of $\ell$, picking up a permutation sign in the process:
\[
\ell\big(x_{(1)}, \ldots, x_{(n)}\big)
= \sum_{(i_1 \ldots i_n) \in S_n} x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \mathrm{sgn}(\sigma)\, \ell\big(e_1, \ldots, e_n\big)
\]
where $\sigma^{-1}$ is the permutation $\sigma^{-1} : i_1, \ldots, i_n \to 1, \ldots, n$; that means $\sigma : 1, \ldots, n \to i_1, \ldots, i_n$. But since $\mathrm{sgn}(\sigma) = \mathrm{sgn}(\sigma^{-1})$, we can equally write $\mathrm{sgn}(\sigma) = \epsilon_{i_1 \ldots i_n}$. Therefore, we can more compactly write:
\[
\ell\big(x_{(1)}, \ldots, x_{(n)}\big)
= x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \epsilon_{i_1 \ldots i_n}\, \ell\big(e_1, \ldots, e_n\big)
= D\, \ell\big(e_1, \ldots, e_n\big)
\]
where $D$ is the proportionality factor $D = x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \epsilon_{i_1 \ldots i_n}$ (the letter $D$ suggestively chosen to stand for "determinant"). As such, we see that any such map $\ell$ is completely determined by its value on $(e_1, \ldots, e_n)$. If $\ell(e_1, \ldots, e_n) = 0$, then $\ell$ is identically zero.

We can therefore consider, without loss of generality, only functions with $\ell(e_1, \ldots, e_n) \neq 0$ (if $\ell$ were identically zero, it would trivially be proportional to any other completely antisymmetric multilinear function). Suppose now we have another such function $\tilde{\ell}$. By the same procedure, we will also arrive at:
\[
\tilde{\ell}\big(x_{(1)}, \ldots, x_{(n)}\big) = D\, \tilde{\ell}\big(e_1, \ldots, e_n\big)
\]
where the $D$ is the same $D$ as for $\ell$: in fact, $D$ makes no reference to $\ell$ or $\tilde{\ell}$, and depends only on the arguments $x_{(k)}$. As such:
\[
\frac{\ell\big(x_{(1)}, \ldots, x_{(n)}\big)}{\ell\big(e_1, \ldots, e_n\big)}
= D
= \frac{\tilde{\ell}\big(x_{(1)}, \ldots, x_{(n)}\big)}{\tilde{\ell}\big(e_1, \ldots, e_n\big)}
\]
so:
\[
\tilde{\ell}\big(x_{(1)}, \ldots, x_{(n)}\big)
= \frac{\tilde{\ell}\big(e_1, \ldots, e_n\big)}{\ell\big(e_1, \ldots, e_n\big)}\, \ell\big(x_{(1)}, \ldots, x_{(n)}\big)
\]
for all $x_{(1)}, \ldots, x_{(n)} \in V$. So every such function is proportional to every other by a multiplicative constant, and the space of such forms is one-dimensional.

(ii) Assume now that $\ell$ is not identically zero. We wish to show that, given a set of vectors $x_{(1)}, \ldots, x_{(n)}$, the set is linearly independent iff $\ell\big(x_{(1)}, \ldots, x_{(n)}\big) \neq 0$.

First, we show that if $x_{(1)}, \ldots, x_{(n)}$ is linearly dependent, then $\ell\big(x_{(1)}, \ldots, x_{(n)}\big) = 0$. For if $x_{(1)}, \ldots, x_{(n)}$ is linearly dependent, then we can without loss of generality write $x_{(n)} = \sum_{i=1}^{n-1} a_i x_{(i)}$. As such, we will have that:
\[
\ell\Big(x_{(1)}, \ldots, x_{(n-1)}, \sum_{i=1}^{n-1} a_i x_{(i)}\Big)
= \sum_{i=1}^{n-1} a_i\, \ell\big(x_{(1)}, \ldots, x_{(n-1)}, x_{(i)}\big)
\]
However, throughout this summation, the last argument $x_{(i)}$ always coincides with one of the other arguments. Therefore, $\ell$ always has a repeated argument, and so each term is zero by antisymmetry. Therefore, $\ell\big(x_{(1)}, \ldots, x_{(n)}\big) = 0$.

Next, we show that if $x_{(1)}, \ldots, x_{(n)}$ is linearly independent, then $\ell\big(x_{(1)}, \ldots, x_{(n)}\big) \neq 0$. The key point is that since $x_{(1)}, \ldots, x_{(n)}$ is a set of $n$ linearly independent vectors in an $n$-dimensional vector space, this set forms a basis for $V$. Hence, any vector $y \in V$ can be expressed as $y = y^i x_{(i)}$. Consider now:
\[
\ell\big(y_{(1)}, \ldots, y_{(n)}\big)
= y_{(1)}^{i_1} \cdots y_{(n)}^{i_n}\, \epsilon_{i_1 \ldots i_n}\, \ell\big(x_{(1)}, \ldots, x_{(n)}\big)
\]
where the $y_{(k)}$ are arbitrary vectors in $V$. As such, we cannot have $\ell\big(x_{(1)}, \ldots, x_{(n)}\big) = 0$; otherwise $\ell\big(y_{(1)}, \ldots, y_{(n)}\big) = 0$ for arbitrary $y$'s, meaning that $\ell$ would be identically zero, contradicting our original assumption. Therefore, $\ell\big(x_{(1)}, \ldots, x_{(n)}\big) \neq 0$ iff $x_{(1)}, \ldots, x_{(n)}$ are linearly independent.

We now define the determinant of a linear map $A : V \to V$ as:
\[
\det(A)\, \ell\big(x_{(1)}, \ldots, x_{(n)}\big) = \ell\big(Ax_{(1)}, \ldots, Ax_{(n)}\big)
\]
We must show that this coincides with the original definition. Define:
\[
\ell_A\big(x_{(1)}, \ldots, x_{(n)}\big) = \ell\big(Ax_{(1)}, \ldots, Ax_{(n)}\big)
\]
It is easily seen that $\ell_A$ is also a completely antisymmetric multilinear form. Therefore, by part (i), $\ell_A$ must be proportional to $\ell$. Let this constant of proportionality be $D$, and write $\ell_A = D\, \ell$; this means that $\ell\big(Ax_{(1)}, \ldots, Ax_{(n)}\big) = D\, \ell\big(x_{(1)}, \ldots, x_{(n)}\big)$ for all $x_{(1)}, \ldots, x_{(n)} \in V$. If we can compute the value of $D$ and show that $D = \det(A)$, then we are done.

Since $\ell_A = D\, \ell$, this relationship must hold in particular when evaluated on a basis $\{e_i\}$, in which $A$ acts as $A e_i = A_{ik}\, e_k$. Using this basis, we have:
\[
\ell_A\big(e_1, \ldots, e_n\big) = D\, \ell\big(e_1, \ldots, e_n\big)
\]
and:
\[
\ell_A\big(e_1, \ldots, e_n\big)
= \ell\big(A e_1, \ldots, A e_n\big)
= \ell\big(A_{1 i_1} e_{i_1}, \ldots, A_{n i_n} e_{i_n}\big)
= \epsilon_{i_1 \ldots i_n}\, A_{1 i_1} \cdots A_{n i_n}\, \ell\big(e_1, \ldots, e_n\big)
= \det(A)\, \ell\big(e_1, \ldots, e_n\big)
\]
Consequently, we have that:
\[
D\, \ell\big(e_1, \ldots, e_n\big) = \det(A)\, \ell\big(e_1, \ldots, e_n\big)
\]
and since the $e_i$ are a basis, they are linearly independent, so $\ell(e_1, \ldots, e_n) \neq 0$. As such, we must have $D = \det(A)$. Hence our new definition coincides with our old one.

Finally, we wish to show that $\det(AB) = \det(A)\det(B)$. Consider:
\[
\begin{aligned}
\det(AB)\, \ell\big(x_{(1)}, \ldots, x_{(n)}\big)
&= \ell\big(ABx_{(1)}, \ldots, ABx_{(n)}\big)
 = \ell_A\big(Bx_{(1)}, \ldots, Bx_{(n)}\big) \\
&= \det(A)\, \ell\big(Bx_{(1)}, \ldots, Bx_{(n)}\big)
 = \det(A)\, \ell_B\big(x_{(1)}, \ldots, x_{(n)}\big) \\
&= \det(A)\det(B)\, \ell\big(x_{(1)}, \ldots, x_{(n)}\big)
\end{aligned}
\]
Since this is true for arbitrary $x$'s, we must have:
\[
\det(AB) = \det(A)\det(B)
\]
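As a concrete illustration (not part of the original solution), the determinant of vectors stacked as columns realises such a form $\ell$ on $\mathbb{R}^3$, and the defining property $\ell(Ax_{(1)}, \ldots, Ax_{(n)}) = \det(A)\,\ell(x_{(1)}, \ldots, x_{(n)})$ and the linear-dependence criterion can be checked numerically:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def ell(*vectors):
    """A concrete completely antisymmetric multilinear form on R^3:
    the determinant of the vectors stacked as columns."""
    return np.linalg.det(np.column_stack(vectors))

x1, x2, x3 = rng.normal(size=(3, 3))
A = rng.normal(size=(3, 3))

# ell(A x1, ..., A xn) = det(A) * ell(x1, ..., xn)
assert np.isclose(ell(A @ x1, A @ x2, A @ x3), np.linalg.det(A) * ell(x1, x2, x3))

# ell vanishes exactly when its arguments are linearly dependent.
assert np.isclose(ell(x1, x2, 2.0 * x1 - 0.5 * x2), 0.0)

print("multilinear-form characterisation of det verified")
\end{verbatim}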
