EM Short Formulas
Vector Algebra
If i, j, k are orthonormal vectors and A = A_x i + A_y j + A_z k then |A|² = A_x² + A_y² + A_z². [Orthonormal vectors ≡ orthogonal unit vectors.]
Scalar product
A · B = |A| |B| cos θ, where θ is the angle between the vectors; in orthonormal components, A · B = A_x B_x + A_y B_y + A_z B_z.
Equation of a line
A point r ≡ ( x, y, z) lies on a line passing through a point a and parallel to vector b if
r = a + λb
with λ a real number.
Equation of a plane
A point r ≡ ( x, y, z) is on a plane if either
(a) r · d̂ = |d|, where d is the normal from the origin to the plane and d̂ = d/|d|, or
(b) x/X + y/Y + z/Z = 1, where X, Y, Z are the intercepts on the axes.
Vector product
A × B = n | A| | B| sin θ, where θ is the angle between the vectors and n is a unit vector normal to the plane containing
A and B in the direction for which A, B, n form a right-handed set of axes.
A × ( B × C ) = ( A · C ) B − ( A · B)C, ( A × B) × C = ( A · C ) B − ( B · C ) A
Non-orthogonal basis
A = A1 e1 + A2 e2 + A3 e3
A₁ = e′₁ · A     where     e′₁ = (e₂ × e₃) / [e₁ · (e₂ × e₃)]
Similarly for A2 and A3 .
Summation convention
a = a_i e_i implies summation over i = 1 … 3
a · b = a_i b_i
(a × b)_i = ε_{ijk} a_j b_k     where ε₁₂₃ = 1;  ε_{ijk} = −ε_{ikj}
ε_{ijk} ε_{klm} = δ_{il} δ_{jm} − δ_{im} δ_{jl}
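As an illustrative cross-check (not part of the original sheet), a short NumPy sketch that verifies the ε–δ identity above by brute force over all index values:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k] and Kronecker delta
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
delta = np.eye(3)

# Left side: sum over k of eps_ijk eps_klm; right side: delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,klm->ijlm', eps, eps)
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
assert np.allclose(lhs, rhs)   # identity holds for all i, j, l, m
```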
Matrix Algebra
Unit matrices
The unit matrix I of order n is a square matrix with all diagonal elements equal to one and all off-diagonal elements zero, i.e., (I)_ij = δ_ij. If A is a square matrix of order n, then AI = IA = A. Also I = I⁻¹.
I is sometimes written as I_n if the order needs to be stated explicitly.
Products
If A is an (n × l) matrix and B is an (l × m) matrix, then the product AB is defined by
(AB)_ij = Σ_{k=1}^{l} A_ik B_kj
In general AB ≠ BA.
Transpose matrices
If A is a matrix, then the transpose matrix A^T is such that (A^T)_ij = (A)_ji.
Inverse matrices
If A is a square matrix with non-zero determinant, then its inverse A −1 is such that AA−1 = A−1 A = I.
(A⁻¹)_ij = (cofactor of A_ji) / |A|
where the cofactor of A_ij is (−1)^{i+j} times the determinant of the matrix formed by deleting the i-th row and j-th column of A.
Determinants
If A is a square matrix then the determinant of A, | A| (≡ det A) is defined by
|A| = Σ_{i,j,k,…} ε_{ijk…} A_1i A_2j A_3k …
where the number of the suffixes is equal to the order of the matrix.
2×2 matrices
If A = [a  b; c  d] then
|A| = ad − bc,     A^T = [a  c; b  d],     A⁻¹ = (1/|A|) [d  −b; −c  a]
Product rules
(AB…N)^T = N^T … B^T A^T
(AB…N)⁻¹ = N⁻¹ … B⁻¹ A⁻¹     (if the individual inverses exist)
|AB…N| = |A| |B| … |N|     (if the individual matrices are square)
Orthogonal matrices
An orthogonal matrix Q is a square matrix whose columns q i form a set of orthonormal vectors. For any orthogonal
matrix Q,
Q−1 = Q T , | Q| = ±1, Q T is also orthogonal.
Solving sets of linear simultaneous equations
If A is square then Ax = b has a unique solution x = A⁻¹b if A⁻¹ exists, i.e., if |A| ≠ 0.
If A is square then Ax = 0 has a non-trivial solution if and only if | A| = 0.
An over-constrained set of equations Ax = b is one in which A has m rows and n columns, where m (the number of equations) is greater than n (the number of variables). The best solution x (in the sense that it minimizes the error |Ax − b|) is the solution of the n equations A^T A x = A^T b. If the columns of A are orthonormal vectors then x = A^T b.
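A minimal NumPy sketch of the normal-equations solution described above (the data values are arbitrary, purely illustrative):

```python
import numpy as np

# Over-constrained system: m = 4 equations, n = 2 unknowns (illustrative data)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.1, 1.9, 4.1, 5.9])

# Best (least-squares) solution of A x = b from the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against the library least-squares routine
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```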
Hermitian matrices
The Hermitian conjugate of A is A† = (A*)^T, where A* is the matrix each of whose components is the complex conjugate of the corresponding component of A. If A = A† then A is called a Hermitian matrix.
If S is a symmetric matrix, Λ is the diagonal matrix whose diagonal elements are the eigenvalues of S, and U is the matrix whose columns are the normalized eigenvectors of S, then
U^T S U = Λ   and   S = U Λ U^T.
If x is an approximation to an eigenvector of A then x T Ax/( x T x) (Rayleigh’s quotient) is an approximation to the
corresponding eigenvalue.
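A short NumPy sketch (illustrative, with a randomly generated symmetric matrix) checking the diagonalization statement and the Rayleigh-quotient estimate above:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
S = S + S.T                        # make S symmetric

w, U = np.linalg.eigh(S)           # eigenvalues (ascending) and orthonormal eigenvectors
Lam = np.diag(w)
assert np.allclose(U.T @ S @ U, Lam)    # U^T S U = Lambda
assert np.allclose(S, U @ Lam @ U.T)    # S = U Lambda U^T

# Rayleigh quotient of an approximate eigenvector approximates the eigenvalue
x = U[:, 0] + 1e-3 * rng.standard_normal(4)
rq = x @ S @ x / (x @ x)
print(rq, w[0])                    # rq is close to the smallest eigenvalue
```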
Commutators
[ A, B] ≡ AB − BA
[ A, B] = −[ B, A]
[ A, B]† = [ B† , A† ]
[ A + B, C ] = [ A, C ] + [ B, C ]
[ AB, C ] = A[ B, C ] + [ A, C ] B
[ A, [ B, C ]] + [ B, [C, A]] + [C, [ A, B]] = 0
Hermitian algebra
b† = (b₁*, b₂*, …)
Rayleigh–Ritz
Lowest eigenvalue:
λ₀ ≤ (b* · A · b) / (b* · b)     (matrix form)
λ₀ ≤ ∫ ψ* Oψ dτ / ∫ ψ*ψ dτ = ⟨ψ|O|ψ⟩ / ⟨ψ|ψ⟩     (operator form)
Pauli spin matrices
σ_x = [0  1; 1  0],     σ_y = [0  −i; i  0],     σ_z = [1  0; 0  −1]
σ_x σ_y = iσ_z,   σ_y σ_z = iσ_x,   σ_z σ_x = iσ_y,   σ_x σ_x = σ_y σ_y = σ_z σ_z = I
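An illustrative NumPy check of the Pauli-matrix products listed above (not part of the original sheet):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

assert np.allclose(sx @ sy, 1j * sz)   # sigma_x sigma_y = i sigma_z
assert np.allclose(sy @ sz, 1j * sx)
assert np.allclose(sz @ sx, 1j * sy)
for s in (sx, sy, sz):
    assert np.allclose(s @ s, I2)      # each sigma squares to the identity
```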
Vector Calculus
Notation
φ is a scalar function of a set of position coordinates. In Cartesian coordinates
φ = φ( x, y, z); in cylindrical polar coordinates φ = φ(ρ, ϕ, z); in spherical
polar coordinates φ = φ(r, θ , ϕ); in cases with radial symmetry φ = φ(r).
A is a vector function whose components are scalar functions of the position
coordinates: in Cartesian coordinates A = iA x + jA y + kA z , where A x , A y , A z
are independent functions of x, y, z.
In Cartesian coordinates ∇ (‘del’) ≡ i ∂/∂x + j ∂/∂y + k ∂/∂z ≡ (∂/∂x, ∂/∂y, ∂/∂z)
grad φ = ∇φ, div A = ∇ · A, curl A = ∇ × A
Identities
grad(φ1 + φ2 ) ≡ grad φ1 + grad φ2 div( A1 + A2 ) ≡ div A1 + div A2
grad(φ1φ2 ) ≡ φ1 grad φ2 + φ2 grad φ1
curl(A₁ + A₂) ≡ curl A₁ + curl A₂
div(φ A) ≡ φ div A + (grad φ) · A, curl(φ A) ≡ φ curl A + (grad φ) × A
Cartesian, cylindrical polar (ρ, ϕ, z) and spherical polar (r, θ, ϕ) forms:

Conversion to Cartesian coordinates
  Cylindrical: x = ρ cos ϕ,  y = ρ sin ϕ,  z = z
  Spherical:   x = r sin θ cos ϕ,  y = r sin θ sin ϕ,  z = r cos θ

Vector A
  Cartesian:   A_x i + A_y j + A_z k
  Cylindrical: A_ρ ρ̂ + A_ϕ ϕ̂ + A_z ẑ
  Spherical:   A_r r̂ + A_θ θ̂ + A_ϕ ϕ̂

Gradient ∇φ
  Cartesian:   ∂φ/∂x i + ∂φ/∂y j + ∂φ/∂z k
  Cylindrical: ∂φ/∂ρ ρ̂ + (1/ρ) ∂φ/∂ϕ ϕ̂ + ∂φ/∂z ẑ
  Spherical:   ∂φ/∂r r̂ + (1/r) ∂φ/∂θ θ̂ + (1/(r sin θ)) ∂φ/∂ϕ ϕ̂

Divergence ∇ · A
  Cartesian:   ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z
  Cylindrical: (1/ρ) ∂(ρA_ρ)/∂ρ + (1/ρ) ∂A_ϕ/∂ϕ + ∂A_z/∂z
  Spherical:   (1/r²) ∂(r²A_r)/∂r + (1/(r sin θ)) ∂(A_θ sin θ)/∂θ + (1/(r sin θ)) ∂A_ϕ/∂ϕ

Curl ∇ × A (written as determinants)
  Cartesian:   | i  j  k ; ∂/∂x  ∂/∂y  ∂/∂z ; A_x  A_y  A_z |
  Cylindrical: (1/ρ) | ρ̂  ρϕ̂  ẑ ; ∂/∂ρ  ∂/∂ϕ  ∂/∂z ; A_ρ  ρA_ϕ  A_z |
  Spherical:   (1/(r² sin θ)) | r̂  rθ̂  r sin θ ϕ̂ ; ∂/∂r  ∂/∂θ  ∂/∂ϕ ; A_r  rA_θ  r sin θ A_ϕ |

Laplacian ∇²φ
  Cartesian:   ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²
  Cylindrical: (1/ρ) ∂/∂ρ (ρ ∂φ/∂ρ) + (1/ρ²) ∂²φ/∂ϕ² + ∂²φ/∂z²
  Spherical:   (1/r²) ∂/∂r (r² ∂φ/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂φ/∂θ) + (1/(r² sin²θ)) ∂²φ/∂ϕ²
Transformation of integrals
L = the distance along some curve ‘C’ in space and is measured from some fixed point.
S = a surface area
τ = a volume contained by a specified surface
t̂ = the unit tangent to C at the point P
Green’s Theorem
∫_S ψ∇φ · dS = ∫_τ ∇ · (ψ∇φ) dτ = ∫_τ [ψ∇²φ + (∇ψ) · (∇φ)] dτ
Complex Variables
Complex numbers
The complex number z = x + iy = r(cos θ + i sin θ) = r e^{i(θ + 2nπ)}, where i² = −1 and n is an arbitrary integer. The real quantity r is the modulus of z and the angle θ is the argument of z. The complex conjugate of z is z* = x − iy = r(cos θ − i sin θ) = r e^{−iθ};  zz* = |z|² = x² + y²
De Moivre’s theorem
(cos θ + i sin θ)ⁿ = e^{inθ} = cos nθ + i sin nθ

tan⁻¹ z = z − z³/3 + z⁵/5 − ···
This series converges both on and within the circle |z| = 1, except at the points z = ±i.

(1 + z)ⁿ = 1 + nz + n(n − 1)/2! z² + n(n − 1)(n − 2)/3! z³ + ···
This series converges both on and within the circle |z| = 1, except at the point z = −1.
Trigonometric Formulae
sin(A ± B) = sin A cos B ± cos A sin B          cos A cos B = [cos(A + B) + cos(A − B)] / 2
cos(A ± B) = cos A cos B ∓ sin A sin B          sin A sin B = [cos(A − B) − cos(A + B)] / 2
cosh x = (eˣ + e⁻ˣ)/2 = 1 + x²/2! + x⁴/4! + ···     valid for all x
sinh x = (eˣ − e⁻ˣ)/2 = x + x³/3! + x⁵/5! + ···     valid for all x
cosh ix = cos x cos ix = cosh x
sinh ix = i sin x sin ix = i sinh x
tanh x = sinh x / cosh x          sech x = 1 / cosh x
coth x = cosh x / sinh x          cosech x = 1 / sinh x
cosh²x − sinh²x = 1
Inverse functions
sinh⁻¹(x/a) = ln[(x + √(x² + a²)) / a]     for −∞ < x < ∞
cosh⁻¹(x/a) = ln[(x + √(x² − a²)) / a]     for x ≥ a
tanh⁻¹(x/a) = (1/2) ln[(a + x)/(a − x)]     for x² < a²
coth⁻¹(x/a) = (1/2) ln[(x + a)/(x − a)]     for x² > a²
sech⁻¹(x/a) = ln[a/x + √(a²/x² − 1)]     for 0 < x ≤ a
cosech⁻¹(x/a) = ln[a/x + √(a²/x² + 1)]     for x ≠ 0
Limits
(1 + x/n)ⁿ → eˣ as n → ∞,     x ln x → 0 as x → 0
If f(a) = g(a) = 0 then  lim_{x→a} f(x)/g(x) = f′(a)/g′(a)     (l’Hôpital’s rule)
Differentiation
(uv)′ = u′v + uv′,     (u/v)′ = (u′v − uv′)/v²

d/dx (sin x) = cos x               d/dx (sinh x) = cosh x
d/dx (cos x) = −sin x              d/dx (cosh x) = sinh x
d/dx (tan x) = sec²x               d/dx (tanh x) = sech²x
d/dx (sec x) = sec x tan x         d/dx (sech x) = −sech x tanh x
d/dx (cot x) = −cosec²x            d/dx (coth x) = −cosech²x
d/dx (cosec x) = −cosec x cot x    d/dx (cosech x) = −cosech x coth x
Integration
Standard forms
∫ xⁿ dx = x^{n+1}/(n + 1) + c     for n ≠ −1

∫ (1/x) dx = ln x + c          ∫ ln x dx = x(ln x − 1) + c

∫ e^{ax} dx = (1/a) e^{ax} + c          ∫ x e^{ax} dx = e^{ax} (x/a − 1/a²) + c

∫ x ln x dx = (x²/2)(ln x − 1/2) + c

∫ 1/(a² + x²) dx = (1/a) tan⁻¹(x/a) + c

∫ 1/(a² − x²) dx = (1/a) tanh⁻¹(x/a) + c = (1/2a) ln[(a + x)/(a − x)] + c     for x² < a²

∫ 1/(x² − a²) dx = −(1/a) coth⁻¹(x/a) + c = (1/2a) ln[(x − a)/(x + a)] + c     for x² > a²

∫ x/(x² ± a²)ⁿ dx = −1 / [2(n − 1)(x² ± a²)^{n−1}] + c     for n ≠ 1

∫ x/(x² ± a²) dx = (1/2) ln(x² ± a²) + c

∫ 1/√(a² − x²) dx = sin⁻¹(x/a) + c

∫ 1/√(x² ± a²) dx = ln[x + √(x² ± a²)] + c

∫ x/√(x² ± a²) dx = √(x² ± a²) + c

∫ √(a² − x²) dx = (1/2)[x √(a² − x²) + a² sin⁻¹(x/a)] + c

∫₀^∞ 1/[(1 + x) x^p] dx = π cosec(pπ)     for p < 1

∫₀^∞ cos(x²) dx = ∫₀^∞ sin(x²) dx = (1/2)√(π/2)

∫_{−∞}^{∞} exp(−x²/2σ²) dx = σ√(2π)

∫_{−∞}^{∞} xⁿ exp(−x²/2σ²) dx = 1 × 3 × 5 × ··· × (n − 1) σ^{n+1} √(2π)  for n ≥ 2 and even;  = 0  for n ≥ 1 and odd

∫ sin x dx = −cos x + c            ∫ sinh x dx = cosh x + c
∫ cos x dx = sin x + c             ∫ cosh x dx = sinh x + c
∫ tan x dx = −ln(cos x) + c        ∫ tanh x dx = ln(cosh x) + c
∫ cosec x dx = ln(cosec x − cot x) + c     ∫ cosech x dx = ln[tanh(x/2)] + c
∫ sec x dx = ln(sec x + tan x) + c         ∫ sech x dx = 2 tan⁻¹(eˣ) + c
∫ cot x dx = ln(sin x) + c         ∫ coth x dx = ln(sinh x) + c

∫ sin mx sin nx dx = sin[(m − n)x] / [2(m − n)] − sin[(m + n)x] / [2(m + n)] + c     if m² ≠ n²

∫ cos mx cos nx dx = sin[(m − n)x] / [2(m − n)] + sin[(m + n)x] / [2(m + n)] + c     if m² ≠ n²
Standard substitutions
If the integrand is a function of:              substitute:
(a² − x²) or √(a² − x²)                         x = a sin θ  or  x = a cos θ
(x² + a²) or √(x² + a²)                         x = a tan θ  or  x = a sinh θ
(x² − a²) or √(x² − a²)                         x = a sec θ  or  x = a cosh θ
If the integrand is a rational function of sin x or cos x or both, substitute t = tan( x/2) and use the results:
sin x = 2t/(1 + t²),     cos x = (1 − t²)/(1 + t²),     dx = 2 dt/(1 + t²).

For ∫ dx / [(ax + b)√(px + q)]     substitute     px + q = u².
For ∫ dx / [(ax + b)√(px² + qx + r)]     substitute     ax + b = 1/u.
Integration by parts
∫ₐᵇ u dv = [uv]ₐᵇ − ∫ₐᵇ v du
Differentiation of an integral
If f ( x, α ) is a function of x containing a parameter α and the limits of integration a and b are functions of α then
d/dα ∫_{a(α)}^{b(α)} f(x, α) dx = f(b, α) db/dα − f(a, α) da/dα + ∫_{a(α)}^{b(α)} ∂f(x, α)/∂α dx.
Special case:
d/dx ∫ₐˣ f(y) dy = f(x).
Dirac δ-‘function’
δ(t − τ) = (1/2π) ∫_{−∞}^{∞} exp[iω(t − τ)] dω.
If f(t) is an arbitrary function of t then ∫_{−∞}^{∞} δ(t − τ) f(t) dt = f(τ).
δ(t) = 0 if t ≠ 0;  also ∫_{−∞}^{∞} δ(t) dt = 1
Reduction formulae
Factorials
n! = n(n − 1)(n − 2) . . . 1, 0! = 1.
Stirling’s formula for large n: ln(n!) ≈ n ln n − n.
For any p > −1,  ∫₀^∞ x^p e^{−x} dx = p ∫₀^∞ x^{p−1} e^{−x} dx = p!.   (−1/2)! = √π,  (1/2)! = √π/2,  etc.
For any p, q > −1,  ∫₀¹ x^p (1 − x)^q dx = p! q! / (p + q + 1)!.
Trigonometrical
If m, n are integers,
∫₀^{π/2} sin^m θ cos^n θ dθ = [(m − 1)/(m + n)] ∫₀^{π/2} sin^{m−2} θ cos^n θ dθ = [(n − 1)/(m + n)] ∫₀^{π/2} sin^m θ cos^{n−2} θ dθ
and can therefore be reduced eventually to one of the following integrals:
∫₀^{π/2} sin θ cos θ dθ = 1/2,   ∫₀^{π/2} sin θ dθ = 1,   ∫₀^{π/2} cos θ dθ = 1,   ∫₀^{π/2} dθ = π/2.
Other
If Iₙ = ∫₀^∞ xⁿ exp(−αx²) dx then Iₙ = [(n − 1)/(2α)] I_{n−2},   I₀ = (1/2)√(π/α),   I₁ = 1/(2α).
Differential Equations
Wave equation
∇²ψ = (1/c²) ∂²ψ/∂t²
Legendre’s equation
(1 − x²) d²y/dx² − 2x dy/dx + l(l + 1) y = 0,
solutions of which are Legendre polynomials P_l(x), where
P_l(x) = (1/(2^l l!)) (d/dx)^l (x² − 1)^l     (Rodrigues’ formula),
so P₀(x) = 1,  P₁(x) = x,  P₂(x) = (1/2)(3x² − 1),  etc.
Recursion relation
P_l(x) = (1/l) [(2l − 1) x P_{l−1}(x) − (l − 1) P_{l−2}(x)]
Orthogonality
∫_{−1}^{1} P_l(x) P_{l′}(x) dx = [2/(2l + 1)] δ_{ll′}
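A short Python sketch (illustrative only) that builds P_l(x) from the recursion above and checks the orthogonality relation numerically:

```python
import numpy as np

def legendre_p(l, x):
    """P_l(x) from the recursion P_l = [(2l - 1) x P_{l-1} - (l - 1) P_{l-2}] / l."""
    p_prev, p = np.ones_like(x), np.asarray(x, dtype=float)
    if l == 0:
        return p_prev
    for k in range(2, l + 1):
        p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
    return p

# Orthogonality: integral over [-1, 1] of P_2 P_3 is 0, of P_2 P_2 is 2/(2*2 + 1) = 0.4
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
trap = lambda f: dx * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # simple trapezoidal integral
print(trap(legendre_p(2, x) * legendre_p(3, x)))   # ~0
print(trap(legendre_p(2, x) * legendre_p(2, x)))   # ~0.4
```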
Bessel’s equation
x² d²y/dx² + x dy/dx + (x² − m²) y = 0,
solutions of which are Bessel functions J_m(x) of order m.
J_m(x) = Σ_{k=0}^{∞} (−1)^k (x/2)^{m+2k} / [k! (m + k)!]     (integer m).
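An illustrative Python check of the series, using the standard identity J₀′(x) = −J₁(x) (this identity is standard but not quoted on the sheet):

```python
import math

def bessel_j(m, x, terms=30):
    """Partial sum of J_m(x) = sum_k (-1)^k (x/2)^(m+2k) / (k! (m+k)!), integer m >= 0."""
    return sum((-1) ** k * (x / 2) ** (m + 2 * k) / (math.factorial(k) * math.factorial(m + k))
               for k in range(terms))

# Check J_0'(x) = -J_1(x) with a central finite difference
x, h = 1.7, 1e-5
dJ0 = (bessel_j(0, x + h) - bessel_j(0, x - h)) / (2 * h)
print(dJ0, -bessel_j(1, x))   # the two values agree closely
```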
Laplace’s equation
∇²u = 0
If expressed in two-dimensional polar coordinates (see section 4), a solution is
u(ρ, ϕ) = (Aρⁿ + Bρ⁻ⁿ)(C exp(inϕ) + D exp(−inϕ))
where A, B, C, D are constants and n is a real integer.
P_l⁰(1) = 1.
Spherical harmonics
The normalized solutions Y_l^m(θ, ϕ) of the equation
[(1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin²θ) ∂²/∂ϕ²] Y_l^m + l(l + 1) Y_l^m = 0
are called spherical harmonics, and have values given by
Y_l^m(θ, ϕ) = √[(2l + 1)/(4π) × (l − |m|)!/(l + |m|)!]  P_l^{|m|}(cos θ) e^{imϕ} × { (−1)^m for m ≥ 0;  1 for m < 0 }
i.e.,  Y_0^0 = √(1/4π),   Y_1^0 = √(3/4π) cos θ,   Y_1^{±1} = ∓√(3/8π) sin θ e^{±iϕ},   etc.
Orthogonality
∫_{4π} Y_l^{m*} Y_{l′}^{m′} dΩ = δ_{ll′} δ_{mm′}
Calculus of Variations
The condition for I = ∫ₐᵇ F(y, y′, x) dx to have a stationary value is  ∂F/∂y = (d/dx)(∂F/∂y′),  where y′ = dy/dx. This is the Euler–Lagrange equation.
Functions of Several Variables
If φ = f(x, y, z, …) then ∂φ/∂x implies differentiation with respect to x keeping y, z, … constant.
dφ = (∂φ/∂x) dx + (∂φ/∂y) dy + (∂φ/∂z) dz + ···     and     δφ ≈ (∂φ/∂x) δx + (∂φ/∂y) δy + (∂φ/∂z) δz + ···
where x, y, z, … are independent variables. ∂φ/∂x is also written as (∂φ/∂x)_{y,…} when the variables kept constant need to be stated explicitly.
If φ is a well-behaved function then ∂²φ/∂x∂y = ∂²φ/∂y∂x, etc.
If φ = f(x, y),
(∂φ/∂x)_y = 1 / (∂x/∂φ)_y,     (∂φ/∂x)_y (∂x/∂y)_φ (∂y/∂φ)_x = −1.
Stationary points
A function φ = f(x, y) has a stationary point when ∂φ/∂x = ∂φ/∂y = 0. Unless ∂²φ/∂x² = ∂²φ/∂y² = ∂²φ/∂x∂y = 0, the following conditions determine whether it is a minimum, a maximum or a saddle point.

Minimum:       ∂²φ/∂x² > 0  or  ∂²φ/∂y² > 0,   and   (∂²φ/∂x²)(∂²φ/∂y²) > (∂²φ/∂x∂y)²
Maximum:       ∂²φ/∂x² < 0  or  ∂²φ/∂y² < 0,   and   (∂²φ/∂x²)(∂²φ/∂y²) > (∂²φ/∂x∂y)²
Saddle point:  (∂²φ/∂x²)(∂²φ/∂y²) < (∂²φ/∂x∂y)²

If ∂²φ/∂x² = ∂²φ/∂y² = ∂²φ/∂x∂y = 0 the character of the turning point is determined by the next higher derivative.
Changing variables: the chain rule
If φ = f(x, y, …) and the variables x, y, … are functions of the independent variables u, v, … then
∂φ/∂u = (∂φ/∂x)(∂x/∂u) + (∂φ/∂y)(∂y/∂u) + ···
∂φ/∂v = (∂φ/∂x)(∂x/∂v) + (∂φ/∂y)(∂y/∂v) + ···
etc.
Changing variables in surface and volume integrals – Jacobians
If an area A in the x, y plane maps into an area A′ in the u, v plane then
∫_A f(x, y) dx dy = ∫_{A′} f(u, v) J du dv,     where     J = | ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v |
The Jacobian J is also written as ∂(x, y)/∂(u, v). The corresponding formula for volume integrals is
∫_V f(x, y, z) dx dy dz = ∫_{V′} f(u, v, w) J du dv dw,     where now     J = | ∂x/∂u  ∂x/∂v  ∂x/∂w ; ∂y/∂u  ∂y/∂v  ∂y/∂w ; ∂z/∂u  ∂z/∂v  ∂z/∂w |
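An illustrative SymPy sketch (assuming SymPy is available) computing the Jacobian for the familiar polar-coordinate map, which recovers the well-known factor ρ:

```python
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
x = rho * sp.cos(phi)
y = rho * sp.sin(phi)

# Jacobian d(x, y)/d(rho, phi) for the polar-coordinate map
J = sp.Matrix([x, y]).jacobian([rho, phi]).det()
print(sp.simplify(J))   # rho, i.e. dx dy = rho drho dphi
```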
Fourier series
If y( x) is a function defined in the range −π ≤ x ≤ π then
y(x) ≈ c₀ + Σ_{m=1}^{M} c_m cos mx + Σ_{m=1}^{M′} s_m sin mx,
or in complex form y(x) ≈ Σ_m C_m e^{imx}, with m taking all integer values in the range ±M. This approximation converges to y(x) as M → ∞ under the same conditions as the real form.
For other ranges the formulae are:
Variable t, range 0 ≤ t ≤ T, frequency ω = 2π/T:
y(t) = Σ_{m=−∞}^{∞} C_m e^{imωt},     C_m = (ω/2π) ∫₀^T y(t) e^{−imωt} dt.

Variable x′, range 0 ≤ x′ ≤ L:
y(x′) = Σ_{m=−∞}^{∞} C_m e^{i2mπx′/L},     C_m = (1/L) ∫₀^L y(x′) e^{−i2mπx′/L} dx′.
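A quick numerical illustration (not from the original sheet) of the C_m formula, using a single harmonic y(t) = cos ωt, for which C_{±1} = 1/2 and all other coefficients vanish:

```python
import numpy as np

T = 2.0
w = 2 * np.pi / T
t = np.linspace(0.0, T, 20001)
y = np.cos(w * t)              # test function: a single harmonic

def coeff(m):
    """C_m = (omega / 2 pi) * integral_0^T y(t) exp(-i m omega t) dt (trapezoidal estimate)."""
    f = y * np.exp(-1j * m * w * t)
    dt = t[1] - t[0]
    return (w / (2 * np.pi)) * dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

print(coeff(1), coeff(-1), coeff(2))   # ~0.5, ~0.5, ~0
```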
Specific cases
y(t) = a for |t| ≤ τ, = 0 for |t| > τ  (‘Top Hat’):
    ŷ(ω) = 2a sin(ωτ)/ω ≡ 2aτ sinc(ωτ),   where sinc(x) = sin(x)/x

y(t) = a(1 − |t|/τ) for |t| ≤ τ, = 0 for |t| > τ  (‘Saw-tooth’):
    ŷ(ω) = (2a/ω²τ)(1 − cos ωτ) = aτ sinc²(ωτ/2)

y(t) = exp(−t²/t₀²)  (Gaussian):
    ŷ(ω) = t₀ √π exp(−ω²t₀²/4)

Conversely, (xy)^ = x̂ ∗ ŷ.
Parseval’s theorem
∫_{−∞}^{∞} y*(t) y(t) dt = (1/2π) ∫_{−∞}^{∞} ŷ*(ω) ŷ(ω) dω     (if ŷ is normalised as on page 21)
Fourier transforms in three dimensions
V̂(k) = ∫ V(r) e^{−ik·r} d³r
     = (4π/k) ∫₀^∞ V(r) r sin(kr) dr     if V is spherically symmetric
V(r) = (1/(2π)³) ∫ V̂(k) e^{ik·r} d³k

Examples:
V(r)                  V̂(k)
1/(4πr)               1/k²
e^{−λr}/(4πr)         1/(k² + λ²)
∇V(r)                 i k V̂(k)
∇²V(r)                −k² V̂(k)
Laplace Transforms
If y(t) is a function defined for t ≥ 0, the Laplace transform ȳ(s) is defined by the equation
ȳ(s) = L{y(t)} = ∫₀^∞ e^{−st} y(t) dt

Function                                        Transform
e^{−at} y(t)                                    ȳ(s + a)
y(t − τ) θ(t − τ)                               e^{−sτ} ȳ(s)
t y(t)                                          −dȳ/ds
dy/dt                                           s ȳ(s) − y(0)
dⁿy/dtⁿ                                         sⁿ ȳ(s) − s^{n−1} y(0) − s^{n−2} (dy/dt)|₀ − ··· − (d^{n−1}y/dt^{n−1})|₀
∫₀ᵗ y(τ) dτ                                     ȳ(s)/s
∫₀ᵗ x(τ) y(t − τ) dτ = ∫₀ᵗ x(t − τ) y(τ) dτ     x̄(s) ȳ(s)     (convolution theorem)

[Note that if y(t) = 0 for t < 0 then the Fourier transform of y(t) is ŷ(ω) = ȳ(iω).]
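An illustrative SymPy check (assuming SymPy is available) of the shift rule in the table, using y(t) = sin t so that ȳ(s) = 1/(s² + 1):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

# Shift rule: L{e^{-a t} sin t} should equal ybar(s + a) with ybar(s) = 1/(s^2 + 1)
lhs = sp.laplace_transform(sp.exp(-a * t) * sp.sin(t), t, s, noconds=True)
rhs = 1 / ((s + a) ** 2 + 1)
print(sp.simplify(lhs - rhs))   # 0
```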
Numerical Analysis
Approximating to derivatives
dy/dx |ₙ ≈ (y_{n+1} − y_n)/h ≈ (y_n − y_{n−1})/h ≈ (δy_{n+1/2} + δy_{n−1/2})/(2h),     where h = x_{n+1} − x_n
d²y/dx² |ₙ ≈ (y_{n+1} − 2y_n + y_{n−1})/h² = δ²y_n/h²
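A quick numerical illustration of these difference approximations on a function with known derivatives (values chosen arbitrarily):

```python
import numpy as np

h = 1e-3
x = 1.0
y = np.sin   # sample function with known derivatives

forward  = (y(x + h) - y(x)) / h                       # first-derivative formula
central2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2     # second-derivative formula
print(forward, np.cos(x))      # first derivative: agrees to O(h)
print(central2, -np.sin(x))    # second derivative: agrees to O(h^2)
```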
Trapezoidal rule
The interval of integration is divided into n equal sub-intervals, each of width h; then
∫ₐᵇ f(x) dx ≈ h [ (1/2) f(a) + f(x₁) + ··· + f(x_j) + ··· + (1/2) f(b) ]
where h = (b − a)/n and x_j = a + jh.
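A minimal Python sketch of the composite trapezoidal rule as stated above (the test integrand is arbitrary):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal sub-intervals of width h = (b - a)/n."""
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    h = (b - a) / n
    return h * (fx[0] / 2 + fx[1:-1].sum() + fx[-1] / 2)

print(trapezoid(np.sin, 0.0, np.pi, 100))   # ~2.0 (exact value is 2)
```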
Simpson’s rule
The interval of integration is divided into an even number (say 2n) of equal sub-intervals, each of width h =
(b − a)/2n; then
∫ₐᵇ f(x) dx ≈ (h/3) [ f(a) + 4f(x₁) + 2f(x₂) + 4f(x₃) + ··· + 2f(x_{2n−2}) + 4f(x_{2n−1}) + f(b) ]
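A matching Python sketch of Simpson's rule with the 1, 4, 2, 4, …, 4, 1 weights; for smooth integrands it converges much faster than the trapezoidal rule:

```python
import numpy as np

def simpson(f, a, b, n2):
    """Composite Simpson's rule; n2 must be even (2n sub-intervals of width h = (b - a)/n2)."""
    x = np.linspace(a, b, n2 + 1)
    fx = f(x)
    h = (b - a) / n2
    return (h / 3) * (fx[0] + 4 * fx[1:-1:2].sum() + 2 * fx[2:-2:2].sum() + fx[-1])

print(simpson(np.sin, 0.0, np.pi, 100) - 2.0)   # error of order 1e-8
```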
Gauss’s integration formulae
These have the general form  ∫_{−1}^{1} y(x) dx ≈ Σ_{i=1}^{n} c_i y(x_i)
For n = 2:  x_i = ±0.5773;  c_i = 1, 1  (exact for any cubic).
For n = 3:  x_i = −0.7746, 0.0, 0.7746;  c_i = 0.555, 0.888, 0.555  (exact for any quintic).
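A short NumPy illustration: the library Gauss–Legendre routine reproduces the tabulated nodes and weights for n = 3 and integrates a quintic exactly:

```python
import numpy as np

# Gauss-Legendre nodes and weights for n = 3
nodes, weights = np.polynomial.legendre.leggauss(3)
print(nodes)     # [-0.77459667  0.          0.77459667]
print(weights)   # [0.55555556  0.88888889  0.55555556]

# Exact for any quintic, e.g. integral of x^4 over [-1, 1] is 2/5
print(np.sum(weights * nodes**4))   # 0.4
```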
Range method
A quick but crude method of estimating σ is to find the range r of a set of n readings, i.e., the difference between
the largest and smallest values, then
σ ≈ r / √n.
This is usually adequate for n less than about 12.
Combination of errors
If Z = Z(A, B, …) (with A, B, etc. independent) then
(σ_Z)² = (∂Z/∂A)² σ_A² + (∂Z/∂B)² σ_B² + ···

So if
(i)   Z = A ± B ± C,      (σ_Z)² = (σ_A)² + (σ_B)² + (σ_C)²
(ii)  Z = AB or A/B,      (σ_Z/Z)² = (σ_A/A)² + (σ_B/B)²
(iii) Z = A^m,            σ_Z/Z = m σ_A/A
(iv)  Z = ln A,           σ_Z = σ_A/A
(v)   Z = exp A,          σ_Z/Z = σ_A
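A simple Monte Carlo cross-check of rule (ii) for a product Z = AB (the means and standard deviations below are arbitrary, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
A0, sA = 10.0, 0.2
B0, sB = 5.0, 0.1

# Independent Gaussian samples of A and B; Z = A * B
A = rng.normal(A0, sA, N)
B = rng.normal(B0, sB, N)
Z = A * B

mc = Z.std() / Z.mean()
formula = np.hypot(sA / A0, sB / B0)   # sqrt((sA/A)^2 + (sB/B)^2)
print(mc, formula)                     # the two fractional errors agree closely
```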
Statistics
Probability distributions
Error function:  erf(x) = (2/√π) ∫₀ˣ e^{−y²} dy

Binomial:  f(x) = C(n, x) p^x q^{n−x},  where C(n, x) = n!/[x!(n − x)!] and q = 1 − p;  μ = np,  σ² = npq,  p < 1.

Poisson:   f(x) = (μ^x / x!) e^{−μ},  and σ² = μ

Normal:    f(x) = [1/(σ√(2π))] exp[−(x − μ)²/(2σ²)]
Estimates for the variances of α̂ and β̂ are σ̂²/n and σ̂²/(n s_x²) respectively.

Correlation coefficient:  ρ̂ = r = s_xy / (s_x s_y),  where s_xy is the sample covariance of x and y.