THE GAUSSIAN INTEGRAL
KEITH CONRAD
Let
\[ I = \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx, \qquad J = \int_0^{\infty} e^{-x^2}\,dx, \qquad \text{and} \qquad K = \int_{-\infty}^{\infty} e^{-\pi x^2}\,dx. \]
These numbers are positive, and $J = I/(2\sqrt{2})$ and $K = I/\sqrt{2\pi}$.
Theorem. With notation as above, $I = \sqrt{2\pi}$, or equivalently $J = \sqrt{\pi}/2$, or equivalently $K = 1$.
We will give multiple proofs of this result. (Other lists of proofs are in [4] and [9].) The theorem
is subtle because there is no simple antiderivative for $e^{-\frac{1}{2}x^2}$ (or $e^{-x^2}$ or $e^{-\pi x^2}$). For comparison, $\int_0^{\infty} xe^{-\frac{1}{2}x^2}\,dx$ can be computed using the antiderivative $-e^{-\frac{1}{2}x^2}$: this integral is 1.
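As a quick numerical sanity check of the definitions and of the stated relations between $I$, $J$, and $K$ (not part of any proof), the integrals can be evaluated by standard quadrature. The sketch below assumes NumPy and SciPy are available.

import numpy as np
from scipy.integrate import quad

# The three Gaussian integrals defined above.
I, _ = quad(lambda x: np.exp(-x**2 / 2), -np.inf, np.inf)
J, _ = quad(lambda x: np.exp(-x**2), 0, np.inf)
K, _ = quad(lambda x: np.exp(-np.pi * x**2), -np.inf, np.inf)

print(I, np.sqrt(2 * np.pi))      # I = sqrt(2*pi) ~ 2.5066
print(J, I / (2 * np.sqrt(2)))    # J = I/(2*sqrt(2)) = sqrt(pi)/2
print(K, I / np.sqrt(2 * np.pi))  # K = I/sqrt(2*pi) = 1

# The comparison integral, which does have an elementary antiderivative.
val, _ = quad(lambda x: x * np.exp(-x**2 / 2), 0, np.inf)
print(val)                        # equals 1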
but instead of using polar coordinates we make a change of variables $x = yt$ on the inside integral, with $dx = y\,dt$, so
\[ J^2 = \int_0^{\infty} \int_0^{\infty} e^{-y^2(t^2+1)}\,y\,dt\,dy = \int_0^{\infty} \int_0^{\infty} ye^{-y^2(t^2+1)}\,dy\,dt. \]
Since $\int_0^{\infty} ye^{-ay^2}\,dy = \frac{1}{2a}$ for $a > 0$, we have
\[ J^2 = \int_0^{\infty} \frac{dt}{2(t^2+1)} = \frac{1}{2} \cdot \frac{\pi}{2} = \frac{\pi}{4}, \]
so $J = \sqrt{\pi}/2$. This approach is due to Laplace [7, pp. 94–96] and historically precedes the more
familiar technique in the first proof above. We will see in the eighth proof that this was not
Laplace’s first method.
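Both reductions used above are easy to confirm numerically: the inner integral $\int_0^{\infty} ye^{-ay^2}\,dy = 1/(2a)$ at a sample value of $a$, and the resulting $t$-integral giving $J^2 = \pi/4$. A minimal sketch, assuming SciPy:

import numpy as np
from scipy.integrate import quad

# Inner integral: int_0^inf y*exp(-a*y^2) dy = 1/(2a), at a sample value of a.
a = 3.7
inner, _ = quad(lambda y: y * np.exp(-a * y**2), 0, np.inf)
print(inner, 1 / (2 * a))

# Outer integral: J^2 = int_0^inf dt/(2(t^2+1)) = pi/4.
Jsq, _ = quad(lambda t: 1 / (2 * (t**2 + 1)), 0, np.inf)
print(Jsq, np.pi / 4)
print(np.sqrt(Jsq), np.sqrt(np.pi) / 2)  # hence J = sqrt(pi)/2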
the right side tends to $-\int_0^1 \frac{dx}{1+x^2} + C = -\pi/4 + C$. Thus $C = \pi/4$, so (3.1) becomes
\[ \left( \int_0^t e^{-x^2}\,dx \right)^2 = \frac{\pi}{4} - \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx. \]
Letting $t \to \infty$ in this equation, we obtain $J^2 = \pi/4$, so $J = \sqrt{\pi}/2$.
A comparison of this proof with the first proof is in [20].
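The identity (3.1) can also be tested numerically: for each $t$, the two sides should agree, and the left side should approach $\pi/4$ as $t$ grows. A sketch assuming SciPy:

import numpy as np
from scipy.integrate import quad

def lhs(t):
    # (int_0^t e^{-x^2} dx)^2
    v, _ = quad(lambda x: np.exp(-x**2), 0, t)
    return v**2

def rhs(t):
    # pi/4 - int_0^1 e^{-t^2(1+x^2)}/(1+x^2) dx
    v, _ = quad(lambda x: np.exp(-t**2 * (1 + x**2)) / (1 + x**2), 0, 1)
    return np.pi / 4 - v

for t in (0.5, 1.0, 2.0, 5.0):
    print(t, lhs(t), rhs(t))   # the two columns agree

print(lhs(10.0), np.pi / 4)    # as t -> infinity the left side tends to J^2 = pi/4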
For b > 0, integrate both sides from 0 to b and use the Fundamental Theorem of Calculus:
\[ \int_0^b F'(t)\,dt = -2J \int_0^b e^{-t^2}\,dt \implies F(b) - F(0) = -2J \int_0^b e^{-t^2}\,dt. \]
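The function $F$ is defined earlier in this proof and does not appear in the excerpt above. Purely as a numerical illustration, the sketch below assumes the Laplace-style choice $F(t) = \int_0^{\infty} e^{-t^2(1+x^2)}/(1+x^2)\,dx$ (an assumption for this check, not the author's stated definition) and verifies the integrated identity for several values of $b$.

import numpy as np
from scipy.integrate import quad

J, _ = quad(lambda x: np.exp(-x**2), 0, np.inf)

def F(t):
    # Assumed definition of F (not shown in this excerpt):
    # F(t) = int_0^inf e^{-t^2(1+x^2)}/(1+x^2) dx, so F(0) = pi/2.
    v, _ = quad(lambda x: np.exp(-t**2 * (1 + x**2)) / (1 + x**2), 0, np.inf)
    return v

for b in (0.5, 1.0, 3.0):
    left = F(b) - F(0)
    right, _ = quad(lambda t: -2 * J * np.exp(-t**2), 0, b)
    print(b, left, right)   # the two columns agree under the assumed F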
$V = \int_{-\infty}^{\infty} A(x)\,dx$, where $A(x)$ is the area of the $x$-slice:
\[ A(x) = \int_{-\infty}^{\infty} e^{-\frac{1}{2}(x^2+y^2)}\,dy = e^{-\frac{1}{2}x^2} \int_{-\infty}^{\infty} e^{-\frac{1}{2}y^2}\,dy = e^{-\frac{1}{2}x^2} I. \]
Thus
\[ V = \int_{-\infty}^{\infty} A(x)\,dx = \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2} I\,dx = I \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx = I^2. \]
Comparing the two formulas for $V$, we have $2\pi = I^2$, so $I = \sqrt{2\pi}$.
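As a numerical check (independent of the slicing argument), the volume under $z = e^{-\frac{1}{2}(x^2+y^2)}$ can be computed directly as a double integral and compared with $I^2$ and $2\pi$. A sketch assuming SciPy:

import numpy as np
from scipy.integrate import quad, dblquad

I, _ = quad(lambda x: np.exp(-x**2 / 2), -np.inf, np.inf)

# Volume under z = e^{-(x^2+y^2)/2} over the whole plane.
V, _ = dblquad(lambda y, x: np.exp(-(x**2 + y**2) / 2),
               -np.inf, np.inf,
               lambda x: -np.inf, lambda x: np.inf)

print(V, I**2, 2 * np.pi)   # all three agree: V = I^2 = 2*pi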
Set $x = y = 1/2$:
\[ \Gamma\!\left(\frac{1}{2}\right)^2 = \int_0^1 \frac{dt}{\sqrt{t(1-t)}}. \]
Note
\[ \Gamma\!\left(\frac{1}{2}\right) = \int_0^{\infty} \sqrt{t}\,e^{-t}\,\frac{dt}{t} = \int_0^{\infty} \frac{e^{-t}}{\sqrt{t}}\,dt = \int_0^{\infty} \frac{e^{-x^2}}{x}\,2x\,dx = 2\int_0^{\infty} e^{-x^2}\,dx = 2J, \]
so $4J^2 = \int_0^1 dt/\sqrt{t(1-t)}$. With the substitution $t = (\sin\theta)^2$,
\[ 4J^2 = \int_0^{\pi/2} \frac{2\sin\theta\cos\theta\,d\theta}{\sin\theta\cos\theta} = 2 \cdot \frac{\pi}{2} = \pi, \]
so $J = \sqrt{\pi}/2$. Equivalently, $\Gamma(1/2) = \sqrt{\pi}$. Any method that proves $\Gamma(1/2) = \sqrt{\pi}$ is also a method that calculates $\int_0^{\infty} e^{-x^2}\,dx$.
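Each step here is easy to confirm numerically: $\Gamma(1/2) = 2J = \sqrt{\pi}$ and $\int_0^1 dt/\sqrt{t(1-t)} = \pi = 4J^2$. A sketch assuming SciPy (scipy.special.gamma for the Gamma function):

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

J, _ = quad(lambda x: np.exp(-x**2), 0, np.inf)
print(gamma(0.5), 2 * J, np.sqrt(np.pi))    # Gamma(1/2) = 2J = sqrt(pi)

# The beta-type integral: int_0^1 dt/sqrt(t(1-t)) = pi = 4J^2.
B, _ = quad(lambda t: 1 / np.sqrt(t * (1 - t)), 0, 1)
print(B, np.pi, 4 * J**2)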
To show $kI_k^2 \to \pi/2$, first we compute several values of $I_k$ explicitly by a recursion. Using integration by parts,
\[ I_k = \int_0^{\pi/2} (\cos\theta)^k\,d\theta = \int_0^{\pi/2} (\cos\theta)^{k-1}\cos\theta\,d\theta = (k-1)(I_{k-2} - I_k), \]
so
\[ (7.4) \qquad I_k = \frac{k-1}{k}\,I_{k-2}. \]
Using (7.4) and the initial values $I_0 = \pi/2$ and $I_1 = 1$, the first few values of $I_k$ are computed and listed in Table 1.

  k    I_k              k    I_k
  0    π/2              1    1
  2    (1/2)(π/2)       3    2/3
  4    (3/8)(π/2)       5    8/15
  6    (15/48)(π/2)     7    48/105

Table 1.
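The recursion (7.4) and the limit $kI_k^2 \to \pi/2$ can both be checked numerically; the sketch below (assuming SciPy) compares the recursion against direct quadrature of $\int_0^{\pi/2}(\cos\theta)^k\,d\theta$.

import numpy as np
from scipy.integrate import quad

# I_k via the recursion (7.4), starting from I_0 = pi/2 and I_1 = 1.
I = {0: np.pi / 2, 1: 1.0}
for k in range(2, 51):
    I[k] = (k - 1) / k * I[k - 2]

# Compare against direct quadrature of int_0^{pi/2} (cos t)^k dt.
for k in (2, 3, 4, 5, 6, 7):
    direct, _ = quad(lambda t: np.cos(t)**k, 0, np.pi / 2)
    print(k, I[k], direct)           # recursion matches quadrature

print(50 * I[50]**2, np.pi / 2)      # k*I_k^2 slowly approaches pi/2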
and with the change of variables $t = (\cos\theta)^2$ for $0 \le \theta \le \pi/2$, the integral on the right is equal to $2\int_0^{\pi/2} (\cos\theta)^k\,d\theta = 2I_k$, so (7.5) is the same as
\[ I_{2n}I_{2n+1} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})}{2\Gamma(\frac{2n+2}{2})} \cdot \frac{\Gamma(\frac{2n+2}{2})\Gamma(\frac{1}{2})}{2\Gamma(\frac{2n+3}{2})} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})^2}{4\Gamma(\frac{2n+1}{2}+1)} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})^2}{4 \cdot \frac{2n+1}{2}\,\Gamma(\frac{2n+1}{2})} = \frac{\Gamma(\frac{1}{2})^2}{2(2n+1)}. \]
By (7.5), $\pi = \Gamma(1/2)^2$. We saw in the fifth proof that $\Gamma(1/2) = \sqrt{\pi}$ if and only if $J = \sqrt{\pi}/2$.
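The closed form $I_{2n}I_{2n+1} = \Gamma(\frac{1}{2})^2/(2(2n+1))$ can be confirmed against the recursion (7.4); a sketch assuming SciPy:

import numpy as np
from scipy.special import gamma

# I_k via the recursion (7.4).
I = {0: np.pi / 2, 1: 1.0}
for k in range(2, 22):
    I[k] = (k - 1) / k * I[k - 2]

# Check I_{2n} * I_{2n+1} = Gamma(1/2)^2 / (2(2n+1)).
for n in (0, 1, 2, 5, 10):
    left = I[2 * n] * I[2 * n + 1]
    right = gamma(0.5)**2 / (2 * (2 * n + 1))
    print(n, left, right)   # the two columns agree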
This function comes out of nowhere, so our first task is to motivate its introduction.
We seek a meromorphic function $f(z)$ to integrate around the rectangular contour $\gamma_R$ in the figure below, with vertices at $-R$, $R$, $R+ib$, and $-R+ib$, where $b$ will be fixed and we let $R \to \infty$. Suppose $f(z) \to 0$ along the right and left sides of $\gamma_R$ uniformly as $R \to \infty$. Then by applying the residue theorem and letting $R \to \infty$, we would obtain (if the integrals converge)
\[ \int_{-\infty}^{\infty} f(x)\,dx + \int_{\infty}^{-\infty} f(x+ib)\,dx = 2\pi i \sum_a \operatorname{Res}_{z=a} f(z), \]
where the sum is over poles of $f(z)$ with imaginary part between 0 and $b$. This is equivalent to
\[ \int_{-\infty}^{\infty} (f(x) - f(x+ib))\,dx = 2\pi i \sum_a \operatorname{Res}_{z=a} f(z). \]
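This shifted-contour identity can be illustrated numerically for a concrete function satisfying the decay hypothesis; the choice $f(z) = 1/(z^2+1)$ with $b = 2$ below is purely illustrative and is not the function used in this proof. Its only pole with imaginary part between 0 and 2 is $z = i$, with residue $1/(2i)$, so the right side is $\pi$.

import numpy as np
from scipy.integrate import quad

# Illustrative choice (not the f used in the proof): f(z) = 1/(z^2+1), b = 2.
b = 2.0
f = lambda z: 1 / (z**2 + 1)
g = lambda x: f(x) - f(x + 1j * b)

# Integrate real and imaginary parts of f(x) - f(x+ib) separately.
re, _ = quad(lambda x: g(x).real, -np.inf, np.inf)
im, _ = quad(lambda x: g(x).imag, -np.inf, np.inf)

# Only enclosed pole: z = i with residue 1/(2i), so 2*pi*i*(1/(2i)) = pi.
print(re + 1j * im, np.pi)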
• Using integration by parts on the Fourier transform of $f$, with $u = f(x)$ and $dv = e^{-ixy}\,dx$, we obtain
\[ (\mathcal{F}f')(y) = iy(\mathcal{F}f)(y). \]
• If we apply the Fourier transform twice then we recover the original function up to interior and exterior scaling:
\[ (11.1) \qquad (\mathcal{F}^2 f)(x) = 2\pi f(-x). \]
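Both properties can be checked numerically under the convention $(\mathcal{F}f)(y) = \int_{-\infty}^{\infty} f(x)e^{-ixy}\,dx$. The sketch below uses the test function $f(x) = 1/(1+x^2)$, whose transform under this convention is the standard pair $\pi e^{-|y|}$; the test function and that pair are illustrative choices, not taken from the text.

import numpy as np
from scipy.integrate import quad

# Fourier transform (Ff)(y) = int f(x) e^{-ixy} dx, by quadrature.
def ft(h, y):
    re, _ = quad(lambda x: (h(x) * np.exp(-1j * x * y)).real, -np.inf, np.inf)
    im, _ = quad(lambda x: (h(x) * np.exp(-1j * x * y)).imag, -np.inf, np.inf)
    return re + 1j * im

f  = lambda x: 1 / (1 + x**2)
fp = lambda x: -2 * x / (1 + x**2)**2        # f'(x)

y = 1.3
print(ft(fp, y), 1j * y * ft(f, y))          # (Ff')(y) = iy(Ff)(y)

# Double transform: apply F to the known transform pi*e^{-|t|} of f.
Ff = lambda t: np.pi * np.exp(-abs(t))
x = 0.7
print(ft(Ff, x), 2 * np.pi * f(-x))          # (F^2 f)(x) = 2*pi*f(-x)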
Let's show the appearance of $2\pi$ in (11.1) is equivalent to the evaluation of $I$ as $\sqrt{2\pi}$.
Fixing $a > 0$, set $f(x) = e^{-ax^2}$, so
\[ f'(x) = -2axf(x). \]
Applying the Fourier transform to both sides of this equation implies $iy(\mathcal{F}f)(y) = -2a\frac{1}{-i}(\mathcal{F}f)'(y)$, which simplifies to $(\mathcal{F}f)'(y) = -\frac{1}{2a}y(\mathcal{F}f)(y)$. The general solution of $g'(y) = -\frac{1}{2a}yg(y)$ is $g(y) = Ce^{-y^2/(4a)}$, so
\[ (\mathcal{F}f)(y) = Ce^{-y^2/(4a)} \]
for some constant $C$. Letting $a = \frac{1}{2}$, so $f(x) = e^{-x^2/2}$, we obtain
\[ (\mathcal{F}f)(y) = Ce^{-y^2/2} = Cf(y). \]
Setting $y = 0$, the left side is $(\mathcal{F}f)(0) = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = I$, so $I = Cf(0) = C$.
Applying the Fourier transform to both sides of the equation $(\mathcal{F}f)(y) = Cf(y)$, we get $2\pi f(-x) = C(\mathcal{F}f)(x) = C^2 f(x)$. At $x = 0$ this becomes $2\pi = C^2$, so $I = C = \pm\sqrt{2\pi}$. Since $I > 0$, the number $I$ is $\sqrt{2\pi}$. If we didn't know the constant on the right side of (11.1) were $2\pi$, whatever its value is would wind up being $C^2$, so saying $2\pi$ appears on the right side of (11.1) is equivalent to saying $I = \sqrt{2\pi}$.
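A direct numerical check of the key computation: for $f(x) = e^{-ax^2}$, the transform $(\mathcal{F}f)(y)$ computed by quadrature should equal $Ce^{-y^2/(4a)}$ with $C = (\mathcal{F}f)(0)$, and for $a = 1/2$ that constant is $I = \sqrt{2\pi}$. A sketch assuming SciPy:

import numpy as np
from scipy.integrate import quad

# (Ff)(y) = int f(x) e^{-ixy} dx, by quadrature.
def ft(f, y):
    re, _ = quad(lambda x: (f(x) * np.exp(-1j * x * y)).real, -np.inf, np.inf)
    im, _ = quad(lambda x: (f(x) * np.exp(-1j * x * y)).imag, -np.inf, np.inf)
    return re + 1j * im

a = 0.5                              # so f(x) = e^{-x^2/2}
f = lambda x: np.exp(-a * x**2)

C = ft(f, 0).real                    # C = (Ff)(0) = I
print(C, np.sqrt(2 * np.pi))         # C = sqrt(2*pi)

for y in (0.5, 1.0, 2.0):
    print(ft(f, y), C * np.exp(-y**2 / (4 * a)))   # (Ff)(y) = C e^{-y^2/(4a)}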
References
[1] D. Bell, “Poisson’s remarkable calculation – a method or a trick?” Elem. Math. 65 (2010), 29–36.
[2] C. A. Berenstein and R. Gay, Complex Variables, Springer-Verlag, New York, 1991.
[3] A. L. Delgado, “A Calculation of $\int_0^{\infty} e^{-x^2}\,dx$,” The College Math. J. 34 (2003), 321–323.
[4] H. Iwasawa, “Gaussian Integral Puzzle,” Math. Intelligencer 31 (2009), 38–41.
[5] T. P. Jameson, “The Probability Integral by Volume of Revolution,” Mathematical Gazette 78 (1994), 339–340.
[6] H. Kneser, Funktionentheorie, Vandenhoeck and Ruprecht, 1958.
[7] P. S. Laplace, Théorie Analytique des Probabilités, Courcier, 1812.
[8] P. S. Laplace, “Mémoire sur la probabilité des causes par les évènemens,” Œuvres Complètes 8, 27–65. (English
trans. by S. Stigler as “Memoir on the Probability of Causes of Events,” Statistical Science 1 (1986), 364–378.)
[9] P. M. Lee, http://www.york.ac.uk/depts/maths/histstat/normal_history.pdf.
[10] L. Mirsky, “The Probability Integral,” Math. Gazette 33 (1949), 279. Online at http://www.jstor.org/stable/3611303.
[11] C. P. Nicholas and R. C. Yates, “The Probability Integral,” Amer. Math. Monthly 57 (1950), 412–413.
[12] G. Polya, “Remarks on Computing the Probability Integral in One and Two Dimensions,” pp. 63–78 in Berkeley
Symp. on Math. Statist. and Prob., Univ. California Press, 1949.
[13] R. Remmert, Theory of Complex Functions, Springer-Verlag, 1991.
[14] M. Rozman, “Evaluate Gaussian integral using differentiation under the integral sign,” Course notes for Physics
2400 (UConn), Spring 2016.
[15] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, 1976.
[16] M. Spivak, Calculus, W. A. Benjamin, 1967.
[17] S. Stigler, “Laplace’s 1774 Memoir on Inverse Probability,” Statistical Science 1 (1986), 359–363.
[18] J. van Yzeren, “Moivre’s and Fresnel’s Integrals by Simple Integration,” Amer. Math. Monthly 86 (1979),
690–693.
[19] G. N. Watson, Complex Integration and Cauchy’s Theorem, Cambridge Univ. Press, Cambridge, 1914.
[20] http://gowers.wordpress.com/2007/10/04/when-are-two-proofs-essentially-the-same/#comment-239.
[21] http://math.stackexchange.com/questions/34767/int-infty-infty-e-x2-dx-with-complex-analysis.
[22] http://math.stackexchange.com/questions/390850/integrating-int-infty-0-e-x2-dx-using-feynmans-parametrization-trick