0% found this document useful (0 votes)
3 views

Stable_Distributions

Uploaded by

davicoutinho
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
3 views

Stable_Distributions

Uploaded by

davicoutinho
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 45

Stable Distributions

Lecture Notes

Prof. Dr. Evgeny Spodarev

Ulm
2016
Contents
1 Introduction 2

2 Properties of stable laws 7


2.1 Equivalent definitions of stability . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Strictly stable laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Properties of stable laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Limit theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5 Further properties of stable laws . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3 Simulation of stable variables 34

4 Additional exercises 38

Literature 43

i
Forewords
These lecture notes are based on the bachelor course “Stable distributions” which originally
tool place at Ulm University during the summer term 2016.
In modern applications, there is a need to model phenomena that can be measured by very
high numerical values which occur rarely. In probability theory, one talks about distributions
with heavy tails. One class of such distributions are stable laws which (apart from the Gaussian
one) do not have a finite variance. So, the aim of this course was to give an introduction into
the theory of stable distributions, its basic facts and properties.
The choice of material of the course is selective and was mainly dictated by its introductory
nature and limited lecture times. The main topics of these lecture notes are
1) Stability with respect to convolution
2) Characteristic functions and densities
3) Non-Gaussian limit theorem for i.i.d. random summands
4) Representations and tail properties, symmetry and skewness
5) Simulation.
For each topic, several exercises are included for deeper understanding of the subject. Since
the target audience are bachelor students of mathematics, no prerequisites other than basic
probability course are assumed.
You can find more information about this course at: https://www.uni-ulm.de/mawi/
mawi-stochastik/lehre/ss16/stable-distributions/
The author hopes you find these notes helpful. If you notice an error or would like to discuss
a topic further, please do not hesitate to contact the author at [email protected].
The author is also grateful to Dr. Vitalii Makogin for typesetting this lectures in LATEX,
making illustrations, and the selection of exercises.

02.11.2016 Prof. Dr. Evgeny Spodarev

1
1 Introduction
Let (Ω, F, P) be an abstract probability space. The property of stability of random variables
with respect to (w.r.t.) convolution is known for you from the basic course of probability. Let
X1 ∼ N (µ1 , σ12 ) and X2 ∼ N (µ2 , σ22 ) be independent random variables. Then X1 + X2 ∼
d d
N (µ1 + µ2 , σ12 + σ22 ). One can restate this property as follows. Let X1 = X2 = X ∼ N (0, 1)
and X, X1 , X2 are independent. Then ∀a, b ∈ R aX1 + bX2 ∼ N (0, a2 + b2 ), and so
d
p
aX1 + bX2 = a2 + b2 X. (1.0.1)
| {z }
c≥0
d
Additionally, for any random variables X1 , . . . , Xn i.i.d., Xi = X, ∀i = 1, . . . , n, it holds
Pn d √
i=1 Xi = nX. Property (1.0.1) rewrites in terms of cumulative distribution functions of
Rx 2
X1 , X2 , X as Φ xa ? Φ xb = Φ xc , x ∈ R, where Φ(x) = √12π −∞ e−t /2 dt, x ∈ R, and ? is the
 

convolution operation.
It turns out that the normal law is not unique satisfying (1.0.1). Hence, it motivates the
following definition.
Definition 1.0.1
A random variable X is stable if ∀a, b ∈ R+ ∃c, d ∈ R, c > 0 s.t.
d
aX1 + bX2 = cX + d, (1.0.2)
where X1 , X2 are independent copies of X. X as above is called strictly stable if d = 0.
Remark 1.0.1
Let FX be the cumulative distribution function (c.d.f.) of X,i.e., FX (y) = P(X ≤ y), y ∈ R.
Then the property (1.0.2) rewrites as FX ay ? FX yb = FX y−d
 
c , y ∈ R, if a, b, c =
/ 0. The
case c = 0 corresponds to X ≡ const a.s., which is a degenerate case. Obviously, a constant
random variable is always stable. The property (1.0.1) shows that X ∼ N (0, 1) is strictly
stable.
Exercise 1.0.1
Show that X ∼ N (µ, σ 2 ) is stable for any µ ∈ R, σ 2 > 0. Find the parameters c and d in (1.0.2)
for it. Prove that X ∼ N (µ, σ 2 ) is strictly stable if and only if (iff) µ = 0.
The notion of (strictly) stability has first introduced by Paul Lévy in his book Calcul des
probabilités (1925). However, stable distributions (different from the normal ones), were known
long before. Thus, French mathematicians Poisson and Cauchy some 150 years before Lévy
found the distribution with density
λ
fλ (x) = , x ∈ R, (1.0.3)
π(x2 + λ2 )
depending on parameter λ > 0. Now this distribution bears theR name of Cauchy, and it is
known to be strictly stable. Its characteristic function ϕλ (t) = R eitx fλ (x)dx, t ∈ R has the
form ϕλ (t) = e−λ|t| .

2
1 Introduction 3

In 1919 the Danish astronomer J. Holtsmark found a law of random fluctuation of gravita-
3/2
tional field of some stars in space, which had characteristic function ϕ(t) = e−λktk , t ∈ R3 ,
which led to the family of characteristic functions
α
ϕ(t) = e−λ|t| , t ∈ R, λ > 0. (1.0.4)

For α = 3/2, it appeared to be strictly stable and now bears the name of Holtsmark. It needed
some time till it has proven by P.Lévy in 1927 that ϕ(t) as in (1.0.4) is a valid characteristic
function of some (strictly stable) distribution only for α ∈ (0, 2]. The theory of stable random
variables took its modern form after 1938 when the books by P.Lévy and A. Khinchin where
published.
Let us give further examples of stable laws and of their applications.
Example 1.0.1 (Constants):
Any constant c is evidently a stable random variable.
Example 1.0.2 (Cauchy distribution in nuclear physics):
Let a point source of radiation R be located at (0, 0, 1) and radiate its elementary particles
onto a screen S = {(x, y, 0), x, y ∈ R}. The screen S is covered by a thin layer of metal so
that it yields light flashes as the emitted particles reach it. Let (u, v, 0) be the coordinates

Figure 1.1:

of one of these (random) flashes. Due to the symmetry of this picture (the whole process of
radiation is rotationally symmetric around axis RA cf. Fig. 1.1) it is sufficient to find the
d
distribution of one coordinate of (u, v), say, u = v. Project the whole picture onto the plane
(x, z). Let Fu (x) = P(u ≤ x) be the c.d.f. of U. The angle α to the ray RU varies in (0, π)
if it arrives at S. It is logic to assume that α ∼ U [0, π]. Since tg α − 2 = u1 = u, it follows
π


α = π/2 + arctan x. Then for any x > 0 {U ≤ x} = {tg(α − π/2) ≤ x} = {α ≤ π/2 + arctan x}.
So,

π/2 + arctan x 1 1
FU (x) = P(α ≤ π/2 + arctan x) = = + arctan x
Z x
π 2 π
Z x
1 dy
= 2
= f1 (y)dy,
−∞ π 1 + y −∞

with f1 (·) as in (1.0.3), λ = 1. So, U ∼ Cauchy(0, 1). For instance, it describes the distribution
of energy of unstable states on nuclear reactions (Lorenz law).
4 1 Introduction

Example 1.0.3 (Theory of random matrices):


(n) n
 
Let An Yn = Bn be a random system of n linear equations, where An = Xij be a random
i,j=1
(n) n
 
(n × n)−matrices, and Bn = Bi be a random n − dim vector in Rn . If det(An ) =
/ 0,
i=1
its solution is Yn = A−1 = 0, put Yn = 0 a.s.). As n → ∞, the solution Yn is
n Bn (for det(An )
numerically very hard to compute. Then the following approximation (as n → ∞) is helpful.
(n) (n)
Assume that for each n ∈ N An and Bn are mutually independent, EXij = EBi = 0,
 
(n) (n) (n) (n)
VarXij = VarBi = 1 ∀i, j = 1, . . . , n. If supn,i,j E |Xij |5 + |Bi |5 < ∞ then for any
(n) 1 1 (n)
1 ≤ i, j ≤ n, i =
/ j limn→∞ P(Yi ) = 2 + π arctan x, x > 0, where Yn = (Yi )i=1...n . Hence,
(n)
here again, Yi ∼ Cauchy(0, 1), i = 1 . . . n, compare Exercise 1.0.1
Exercise 1.0.2
d Y1
Show that if X ∈ Cauchy(0, 1) then X = Y2 , where Y1 , Y2 are i.i.d. N (0, 1)−distributed
random variables.
Exercise 1.0.3
1) Prove that Cauchy distribution is stable. If it is centered, i.e., X ∼ Cauchy(0, λ), then it is
strictly stable.
d
2) Show that if X ∼ Cauchy(0, λ) then X = X1 .
d Pn d
In fact, it can be shown that for X ∼ Cauchy(0, 1), X1 , . . . , Xn i.i.d. and Xi = X, i=1 Xi =
Pn d
nX, i.e., the constant c in (1.0.2) is equal to 2 here. The property i=1 Xi = nX rewrites
1 Pn d
X n := n i=1 Xi = X, i.e., the arithmetic mean of Xi is distributed exactly as one of Xi .
Example 1.0.4 (Lévy distribution in branching processes):
Consider the following branching process in discrete time. A population of particles evolves
in time as follows: at each time step, each particle (independently from others) dies with
probability p > 0, doubles (i.e., is divided into two new similar particles) with probability
p > 0, or simply stays untouched (with complimentary probability 1 − 2p). Let G(s) = p + (1 −
2p)s + ps2 , |s| ≤ 1 be the generating function describing this evolution in one step. Let ν0 (k)
be the number of particles in generation k − 1, which died in k-th step. Let ν = ∞
P
k=1 ν0 (k) be
the total number of died particles during the whole evolution of the process. Assuming that
there is only one particle at time k = 0, put ν0 (0) = 0, and denote qn = P(ν = n), n ∈ N0 . Let


X
ϕ(s) = qn sn , |s| < 1 (1.0.5)
n=0

be the generating probability function of ν.


Exercise 1.0.4
Show that ϕ(s) = G(ϕ(s)) + p(s − 1), |s| < 1.
2 2
√ it follows ϕ(s) = p+(1−2p)ϕ(s)+pϕ (s)+p(s−1), or ϕ (s)−2ϕ(s)−s
From this evolution, √ =
0 =⇒ ϕ(s) − 1 = ± 1 − s, |s| <√ 1. Since |ϕ(s)| ≤ 1 ∀s : |s| < 1, then ϕ(s) = 1 − 1 − s > 1 is
not a solution =⇒ ϕ(s) = 1 − 1 − s, |s| < 1. Expanding it in the Taylor series, we get


1 X Γ(n − 1/2) n
ϕ(s) = √ s , |s| < 1, (1.0.6)
2 π n=1 n!
1 Introduction 5

Γ(n−1/2)
which follows from ϕ(0) = 0, ϕ0 (0) = √1
2 1−0
= 12 , ϕ00 (0) = 32 , and so on: ϕ(n) (0) = 2Γ(1/2) , n ∈
N.
Exercise 1.0.5
Prove it inductively.
q
2π x x µ(x)

Recall the Stirling’s formula for Gamma function: ∀x > 0 Γ(x) = x e e , where
1
0 < µ < 12x . Comparing the form (1.0.5) and (1.0.6), we get

n−1/2 n−1/2 µ(n−1/2)


q  

Γ(n − 1/2) n−1/2 e e
qn = √ = √ q
2 πΓ(n + 1) 2 π 2π n n µ(n)

n n e e
n−1 √
1 n − 1/2 e µ(n−1/2)−µ(n) 1 1 n−1 1 1/2+µ(n−1/2)−µ(n)
  
= √ e ∼ √ 1 − e
2 π n n3/2 2 π 2n n3/2
n−3/2 1 1
   
∼ √ exp + (n − 1) log 1 − + o(1)
2 π 2 2n
n−3/2 1 1 n−3/2
   
∼ √ exp + (n − 1) − + o(1) ∼ √ , n → ∞.
2 π 2 2n 2 π
−3/2
Summarizing, qn ∼ n2√π , n → ∞.
Now assume that the whole process starts with n particles at the initial moment of time
d
n = 0. Then, the total number of died particles is a sum ni=1 νi of i.i.d. r.v.’s νi = ν. We
P
d
will be able to show n12 ni=1 νi → X, n → ∞, where X is a standard Lévy distributed random
P

variable with density


1 1
 
fX (x) = √ x−3/2 exp − , x > 0. (1.0.7)
2π 2x
Exercise 1.0.6
Let X be as above. Then
d
1. X = Y −2 , where Y ∼ N (0, 1).

2. fX (s) ∼ √1 x−3/2 , x → +∞.


3. EX = VarX = ∞.

4. The standard Lévy distribution is strictly stable with c = 4 in (1.0.2), i.e., for independent
d d d
X1 = X2 = X : X1 + X2 = 4X.

The graph of fX (·) looks like it has its mode at x = 1/3, and f (0) = 0 by continuity, since
X1 +X2 d
limx→+0 f (x) = 0. Relation (1.0.2) from Exercise 1.0.6(4) can be interpreted as 2 = 2X,
d d
the arithmetic mean of X1 = X2 = X is distributed as 2X. Compare with the same property
of Cauchy distribution.
6 1 Introduction

Figure 1.2: Graph of fX


2 Properties of stable laws

2.1 Equivalent definitions of stability


Now we would like to give an number of further definitions of stability which appear to be
equivalent. At the same time, they give important properties of stable laws.
Definition 2.1.1
A random variable X is stable if there exists a family of i.i.d. r.v.’s {Xi }∞
i=1 and number
sequences {an }n∈N , {bn }n∈N , bn > 0∀n ∈ N s.t.
n
1 X d
Xi − an → X, n → ∞. (2.1.1)
bn i=1

Remark 2.1.1
Notice that this definition does not require the r.v. X1 to have a finite variance or even a finite
mean. But if σ 2 = VarX1 ∈ (0, +∞) then X ∼ N (0, 1) according to the central limit theorem
√ √
with bn = nσ, an = √nµ nσ
= n σµ , where µ = EX1 .
Definition 2.1.2
A non-constant random variable X is stable if its characteristic function has the form ϕX (s) =
eη(s) , s ∈ R, where η(s) = λ(isγ − |s|α + isω(s, α, β)), s ∈ R with
(
|s|α−1 βtg π2 α ,

α=/ 1,
ω(s, α, β) = (2.1.2)
−β π2 log |s|, α = 1,

α ∈ (0, 2], β ∈ [−1, 1], γ ∈ R, λ > 0. Here α is called stability index, β is the coefficient of
skewness, λ is the scale parameter, and µ = λγ is the shift parameter.
We denote the class of all stable distributions with given above parameters (α, β, λ, γ) by
Sα (λ, β, γ). Sometimes, the shift parameter µ is used instead of γ : Sα (λ, β, µ). X ∈ Sα (λ, β, γ)
means that X is a stable r.v. with parameters (α, β, λ, γ).
Unfortunately, the parametrisation of η(s) in Definition 2.1.2 is not a continuous function
of parameters (α, β, λ, γ). It can be easily seen that ω(s, α, β) → ∞ as α → 1 for any β = / 0,
instead of tending to −β 2 log |s|. To remedy this, we can introduce an additive shift +λβtg π2 α
π

to get η(s) = λ(isγM − |s|α + isωM (s, α, β)), s ∈ R, where


( (
π π
(|s|α−1 − 1)βtg
 
γ + βtg 2α , α=
/1 2α , α=/ 1,
γM = , ωM (s, α, β) = (2.1.3)
γ, α=1 −β π2 log |s|, α = 1.

(M stands for “modified”)


Exercise 2.1.1
Check that this modified parametrisation is a continuous function of all parameters.

7
8 2 Properties of stable laws

Another possibility to parametrise η(s) is given as follows:


η(s) = λB (isγB − |s|α + isωB (s, α, βB )), s ∈ R, where
(
exp −i π2 βB K(α)sign(s) , α =

/ 1,
ωB (s, α, βB ) = π K(α) = α − 1 + sign(1 − α),
2 + iβB log |s|sign(s), α = 1,
π π
  .  
and for α =
/ 1 : λ = λB cos βB K(α) , γ = γB cos βB K(α) ,
2 2
π π
   
β = ctg α tg βB K(α) ;
2 2
π 2
for α = 1 : λ = λB , γ = γB , β = βB .
2 π

(B stays for “bounded”representation). In this form η(s) is again not continuous at α = 1,


but for α → 1, α = / 1 the whole function η(s) does not go to +∞ as in (2.1.2), but has a
limiting finite form which correspondsto a characteristic function of a stable law with η(s) =
π
λB is γB ± sin 2 βB − |s| cos π2 βB . Here, the “+”sign is chosen for α → 1 + 0, and “−”


for α → 1 − 0.
Exercise 2.1.2
Show this convergence for α → 1 ± 0.
Let us give two more definitions of stability.
Definition 2.1.3
d
A random variable X is stable if for the sequence of i.i.d. r.v.’s {Xi }i∈N , Xi = X, ∀i ∈ N, for
any n ≥ 2 ∃cn > 0 and dn ∈ R s.t.
n
X d
Xi = cn X + dn . (2.1.4)
i=1

Definition 2.1.4
It turns out that this definition can be weakened Thus, it is sufficient for stability of X to
require (2.1.4) to hold only for n = 2, 3. We call it Definition 2.1.4.
Now let us formulate here equivalent statement.
Theorem 2.1.1
Definitions 1.0.1,2.1.1-2.1.4 are all equivalent for a non-degenerate random variable X (i.e.,
X≡ / const).
The proof of this result will require a number of auxiliary statements which now here to be
formulated. The first of them is a limit theorem describing domains of attraction of infinitely
divisible laws.
Theorem 2.1.2 (Khinchin):
Let {Xnj , j = 1 . . . kn , n ∈ N} be the sequence of series of independent random variables with
the property
lim max P(|Xnj | > ε) = 0, ∀ε > 0 (2.1.5)
n→∞ j=1...kn

Pkn
and with c.d.f. Fnj . Let Sn = j=1 Xnj − an , n ∈ N. Then a random variable X with c.d.f. FX
2 Properties of stable laws 9

Figure 2.1: Graph of H

d
is a weak limit of Sn (Sn → X, n → ∞) iff the characteristic function ϕX of X has the form
Z !
 
2 isx
ϕX (s) = exp isa − bs + e − 1 − is sin x dH(x) , s ∈ R, (2.1.6)
{x=0}
/

where a ∈ R, b > 0, H : R \ {0} → R is non-decreasing on R+ and R− , H(s) → 0, as |x| → +∞,


and 0<|x|<1 x2 dH(s) < ∞.
R

This theorem will be given without proof.


Remark 2.1.2 1. The condition (2.1.5) is called the asymptotic smallness condition of
Xnj .

2. Representation (2.1.6) is called the canonic representation of Lévy-Kninchin.

3. Laws of X with ch.f. ϕX as in (2.1.6) are called infinitely divisible. For more properties
of those, see lectures “Stochastics II”.

4. The function H is called a spectral density of X.


Exercise 2.1.3
Show that CLT is a special case of Theorem (2.1.2): find Xnj and an .
Another important result was obtained by B.V. Gnedenko.
Theorem 2.1.3 (Gnedenko):
Consider An (y) = kj=1 E(Xnj I(|Xnj | < y)), n ∈ N, where y ∈ R is a number s.t. y and −y are
P n

continuity points of H in (2.1.6). Introduce σnε = kj=1 Var(Xnj I(|Xnj | < y)), ε > 0. Let FX
P n

be a c.d.f. with ch.f. ϕX as in (2.1.6). Take

1
Z Z
an = An (y) − a − udH(u) + dH(u), n ∈ N.
|u|<y |u|≥y u

d
Then, Sn → X, n → ∞ (or Fn → F, n → ∞ weakly) iff

1) For each point x of continuity of H it holds


kn 
1
X 
lim Fnj (x) − (1 + signx) = H(x).
n→∞
j=1
2
10 2 Properties of stable laws

2) limε→0 lim supn→∞ σnε = limε→0 lim inf n→∞ σnε = 2b.
Without proof.
1. In order Sn = b1n ni=1 Xi − an from Definition 2.1.1 to fulfill condition
P
Remark 2.1.3
(2.1.5), it is sufficient to require bn → ∞, n → ∞. Indeed, in this case Xnj = Xj /bn ,
and, since Xj are i.i.d., lim→∞ maxj=1...kn P(|Xnj | > ε) = limn→∞ P(|X1 | > εbn ) = 0 if
bn → ∞.
2. Property (2.1.5) holds whenever Fn → FX weakly, where FX is non-degenerate, i.e.,
X≡ / const a.s. Indeed, let (2.1.5) does not hold, i.e., lim→∞ maxj=1...kn P(|Xnj | > ε) =
/0
for some ε > 0. Then ∃ a subsequence nk → ∞ as n → ∞ s.t. bnk = o(1). Since,
d d
Sn → X, n → ∞, s.t. ϕSnk (s) → ϕX (s), k → ∞, where
nk
s
Pnk  
is Xj /bnk −isank
ϕSnk (s) = Ee j=1 = e−isank ϕX1 , s ∈ R,
bnk
  nk
s
so, ϕSnk (s) = ϕX1 b nk (1+o(1)), k → ∞. Then for each small s ∈ Bδ (0) |ϕX1 (s)| =
|ϕX (sbnk )|1/nk (1 + o(1)) → 1, k → ∞, which can be only if |ϕX1 (s)| ≡ 1, ∀s ∈ R, and
hence |ϕX (s)| ≡ 1, which means X ≡ const a.s. This contradicts with our assumption
X≡ / const.
Definition 2.1.5 1) A function L : (0, +∞) → (0, +∞) is called slowly varying at infinity
if for any x > 0
L(tx)
→ 1, t → +∞.
L(t)
2) A function U : (0, +∞) → (0, +∞) is called regularly varying at infinity if U (x) =
xρ L(x), ∀x > 0, for some ρ ∈ R and some slowly varying (at infinity) function L.
Example 2.1.1 1. L(x) = | log(x)|p , x > 0 is slowly varying for each p ∈ R.
2. If limx→+∞ L(x) = p then L is slowly varying.
3. U (x) = (1 + x2 )p , x > 0 is regularly varying for each p ∈ R with ρ = 2p.
Lemma 2.1.1
A monotone function U : (0, +∞) → (0, +∞) is regularly varying at ∞ iff UU(tx)
(t) → ψ(x), t →
+∞ on a dense subset A of (0, +∞), and ψ(x) ∈ (0, +∞) on an interval I ∈ R+ .

Proof Let X1 , X2 ∈ A ∩ I. For t → +∞ we get


U (tx1 x2 ) U (tx1 x2 ) U (tx2 )
ψ(x1 x2 ) ← = → ψ(x1 )ψ(x2 ).
U (t) U (tx2 ) U (t)
Hence, ψ(x1 x2 ) = ψ(x1 )ψ(x2 ). Since U is monotone, so is ψ. By monotonicity, define ψ any-
where by continuity from the right. Then ψ(x1 x2 ) = ψ(x1 )ψ(x2 ) holds for any x1 , x2 ∈ I. Set
x = ey , ψ(ey ) = ϕ(y). The above equation transforms to ϕ(y1 + y2 ) = ϕ(y1 )ϕ(y2 ). One can
easily show that if has a unique (up to a constant ρ) solution bounded on any finite interval,
and it is ϕ(y)eρy ⇔ ψ(x) = xρ .

The proof of Theorem 2.1.1 will make use of the following important statement which is
interesting on its own right.
2 Properties of stable laws 11

Theorem 2.1.4
Let X be a stable r.v. in the sense of Definition 2.1.1 with characteristic function ϕX as in
(2.1.6). Then its spectral function H has the form
(
−c1 x−α , x>0
H(x) = where α ∈ (0, 2), c1 , c2 ≥ 0.
c2 (−x)−α , x < 0,

Proof Consider the non-trivial case of a non-degenerate distribution of X (otherwise c1 = c2 =


0). Denote by XH the set of all continuity points of the spectral function H.
Exercise 2.1.4
Prove that XH is at most countable.
Since X is stable in the sense of Definition 2.1.1, ∃ an i.i.d. sequence of r.v.’s {Xi }i∈N and
d
number sequences {an }n∈N , {bn }n∈N , bn > 0∀n ∈ N s.t. Sn = b1n ni=1 Xi − an → X, n → ∞.
P

Using Theorem 2.1.3, condition 1), it means that ∀x ∈ XH n(F (bn x) − 21 (1 + signx)) →
H(x), n → ∞, where F (y) = P(Xi ≤ y), y ∈ R.
Consider the case x > 0. If H(x) ≡ / 0 on R+ , so ∃x0 ∈ XH , x > 0 with q := −H(x0 ) > 0,
compare Fig. 2.1 For each t > 0, find an n = n(t) ∈ N s.t. n(t) = min{k : bk x0 ≤ t < bk+1 x0 }.
Since, F (x) = 1 − F (x) ↓ on R+ , we get

F (bn+1 x0 x) F (tx) F (bn x0 x)


≤ ≤ , ∀x > 0. (2.1.7)
F (bn x0 ) F (t) F (bn+1 x0 )

Since, n(t) → ∞, t → ∞, −(F (bn (x)) − 1) → H(x), x → ∞, we get for x0 x ∈ XH

F (bn+1 x0 x) −nF (bn+1 x0 x) H(x0 x) H(x0 x)


= → =− := L(x).
F (bn x0 ) −nF (bn x0 ) H(x0 ) q

The same holds for the right-hand side of (2.1.7). Hence, for any x, y > 0 s.t. x0 x, x0 y, x0 xy ∈
F (txy)
XH we have F (t)
→ L(xy), → +∞. Otherwise,

F (txy) F (txy) F (ty)


= → L(x)L(y), t → ∞
F (t) F (ty) F (t)

by the same reasoning. As a result, we get the separation L(xy) = L(x)L(y) which holds for all
x, y > 0. (may be except for a countable number of exceptions since XH is at most countable.)
By definition of L(x) := − H(xq0 x) , L : R+ → R+ is non-decreasing, L(1) = 1, L(∞) = 0. It
can be shown (cf. the proof of Lemma 2.1.1) that the solution of the equation
(
L(xy) = L(x)L(y),
L(1) = 1, L(∞) = 0

is L(x) = 1/xα , α > 0. RHence, for x > 0 H(x) = −qL(x/x0 ) = H(x0 )x−α /x−α α
0R = x0 H(x0 )x
−α =
−α 2
−c1 x , c1 ≥ 0. Since 0<|x|<1 x dH(x) < ∞ (cf. Theorem 2.1.2), it holds 0<|x|<1 x 2−α−1 dx <
∞ ⇐⇒ 2 − α > 0 ⇐⇒ α < 2. Hence, 0 < α < 2, c1 ≥ 0 can be arbitrary.
The case x < 0 is treated analogously and leads to the representation H(x) = c2 (−x)−δ , c2 ≥
0, 0 < δ < 2.
12 2 Properties of stable laws

F (tx)
Show that α = δ. Since F (t)
∼ x−α , t → ∞ for x > 0, it means that F (s) is regularly
varying by Lemma 2.1.1. Hence, exists a slowly varying function h1 : (0, +∞) → (0, +∞) s.t.
F (x) = x−α h1 (x), x > 0. By property 1) of Theorem 2.1.3, nF (bn x) = nb−α
n x
−α h (b x) →
1 n
−α h1 (bn x)
H(x) = c1 x , n → ∞. Since, h1 (bn ) → 1, n → ∞, it holds

h1 (bn x)
c1 ← nb−α −α
n h1 (bn x) = nbn h1 (bn ) ∼ nb−α
n h1 (bn ), n → ∞. (2.1.8)
h1 (bn )

Analogously, we get F (x) = (−x)−δ h2 (−x), x < 0, where h2 : (0, +∞) → (0, +∞) is slowly
varying, and nb−δ
n h1 (bn ) ∼ c2 . Assuming c1 , c2 > 0 (otherwise the statement get trivial since
h1 (bn ) c1
either α or δ can be chosen arbitrary), we get b−α+δ
n h2 (bn ) → c2 > 0, n → ∞, where h1 /h2 is
slowly varying at +∞, which is possible only if α = δ.

Corollary 2.1.1
Under the conditions of Theorem 2.1.4, assume that c1 +c2 > 0. Then the normalizing sequence
bn in Definition 2.1.1 behaves as bn ∼ n1/α h(n), where h : (0, +∞) → (0, +∞) is slowly varying
at +∞.

Proof Assume, for simplicity, c1 > 0. Then, formula (2.1.8) yields n ∼ c1 bαn h−1
1 (bn ), α ∈ (0, 2).
1/α −1/α 1/α 1/α −1 1/α
Hence, bn ∼ n c1 (h1 (bn )) = n h(n), where h(n) = (c1 h1 (bn )) is slowly varying
at +∞ due to the properties of h1 .

Proof of Theorem 2.1.1. 1) Show the equivalence of Definitions 2.1.1 and 2.1.2.
Let X be a non-constant r.v. with characteristic function ϕX as in (2.1.6). Assume that
X is stable in the (sence of Definition 2.1.1. By Theorem 2.1.4, its spectral function H has
−c1 /|x|α x > 0,
the form H(x) = , α ∈ (0, 2), c1 , c2 ≥ 0. Put it into the formula (2.1.6):
c2 /|x|α , x<0
log ϕX (x) = isa − bs2 + c1 Qα (s) + c2 Qα (s), s ∈ R, where
Z ∞ 
Qα (s) = − e−isc − 1 + is sin x dx−α = Re (ψα (i, t))|t=−is ,
0
R∞
e−zx − e−tx x−α dx for z, t ∈ C : Re z, Re t > 0, α ∈ (0, 2). Integrating by

and ψα (z, t) = t 0
parts, we get

t +∞ Z
ψα (z, t) = (ze−zx − te−tx )x1−α dx
1−α 0
Z +∞ Z +∞
t
 
α−1 −zx 1−α α−1 −tx 1−α xz = y
= z e (zx) d(zx) − t (e )(tx) d(tx) =
1−α 0 0 xt = y
t
 +∞ Z +∞ Z 
= z α−1 e−y y 2−α−1 dy − tα−1 e−y y 2−α−1 dy
1−α 0 0
tΓ(2 − α)  α−1 
= z − tα−1 , for any α = / 1, Re z, Re t > 0.
1−α

For fixed z, t ∈ C : Re z, Re t > 0 the function ψα (z, t) : (0, 2) → C as a function of α is


2 Properties of stable laws 13

continuous on (0, 2). Hence,

tΓ(2 − α) α−1
ψ1 (z, t) = lim ψα (z, t) = lim (z − tα−1 )
α→1 α→1 1−α
t
= lim (e(α−1) log z − e(α−1) log t ) = |1 − α = x|
1−α→0 1−α
t
= lim (1 − x log z − 1 + x log t + o(x)) = t(log t − log z) = t log(t/z).
x→0 x

Then for α =
/ 1 we get
−isΓ(2 − α)  
Qα (s) = Re (ei(π/2)(α−1) ) − (−is)α−1
1−α
−isΓ(2 − α)  
= Re (ei(π/2)(α−1) ) − e(α−1)i(−π/2)signs |s|α−1
1−α
π π
     
α−1
= −isΓ(1 − α) cos (α − 1) − i(signs) sin (α − 1) |s|
2 2
πα πα πα
     
α−1 2 α
= −is sin Γ(1 − α) − sin i(signs)|s| Γ(1 − α) + i |s| Γ(1 − α) cos
2 2 2
α α−1
= −Γ(1 − α) cos(πα/2)|s| − is(1 − |s| )Γ(1 − α) sin(πα/2).

For α = 1

Qα (s) = −is Re (log(t/i))|t=−is = −is log |t/i||t=−is = −is log(−is)


π
= −is(log |s| + i(−π/2)signs) = −|s| − is log |s|.
2
Then
|ϕX (s)| = exp{−bs2 − d|s|α }, (2.1.9)
where d = (c1 + c2 ) Γ(2−α) π

1−α sin 2 (1 − α) , α = / 1. For α = 1 get limit as α → 1 as a value of d:
(c1 + c2 )π/2. Show that bd = 0.
If, for instance, d > 0, then show that b = 0. By Definition 2.1.1, ∃ sequences {an }, {bn } ⊂ R :
bn → ∞ as n → ∞ and a characteristic function ϕX1 (s) s.t. e−isan ϕnX1 (s/bn ) → ϕX (s), n →
∞, s ∈ R. Hence, |ϕX1 (s/bn )|n → |ϕX (s)|, n → ∞ where bn = n1/α h(n) by Corollary 2.1.1.
Since, h is slowly varying, bbnk
n
→ k −1/α , n → ∞ for any k ∈ N. Then
 nk  nk
s bn −1
   k
|ϕX (s)| ← ϕX1 = ϕX1 s b → ϕX sk −1/α , ∀k ∈ N,
n→∞ bnk bnk n n→∞

i.e., by (2.1.9), exp{−bs2 − d|s|α } = exp{−bs2 k 1−2/α − d|s|α }, which is only possible if b = 0.
Now set
 (
 d, if c1 + c2
> 0,
λ=



b, if c1 + c2
= 0 (Gaussian case) ,




 (
(c1 − c2 )/λ, if c1 + c2 > 0,


β= (2.1.10)


 0, if c1 + c2 = 0 (Gaussian case) ,
 (
(c2 − c1 )Γ(1 − α), sin(πα/2) if α =
/ 1,


1

γ = λ (a + ā), where ā =



0, if α = 1.
14 2 Properties of stable laws

Then ϕX satisfies representation in Definition 2.1.2 with the above parameters λ, β, γ, α.


Vice versa, if ϕX satisfies Definition 2.1.2, then it can be represented as in (2.1.6) with
spectral function H as in Theorem 2.1.4, see the formula (2.1.10), where c1 , c2 can be restored
d
from λ, β, γ uniquely. By theorem 2.1.2, the limit theorem Sn → X, n → ∞ takes place.
Exercise 2.1.5
Show that {Xnj } can be chosen here as in Definition 2.1.1 (since bn = n1/α h(n) is clear, bn → ∞,
one has only to fix an , cf. Remark 2.1.3)
2) Show the equivalence of Definitions 2.1.1 and 1.0.1.
Let X be stable in the sense of Definition 1.0.1. By induction, if follows from the relation
d
aX1 + bX2 = cX + d of Definition 1.0.1 (with a = b = 1) that for any n ≥ 2 ∃ constants
d
bn > 0, an s.t. for independent copies Xi , i = 1 . . . n of X : X1 + · · · + Xn = bn X + an , or
1 Pn an d
bn i=1 Xi − bn = X. So, for n → ∞, the limiting distribution of the left-hand side coincides
with that of X, and Definition 2.1.1 holds.
Vice versa, we show that from Definition 2.1.2 (which is equivalent to Definition 2.1.1) it
follows Definition 1.0.1. Definition 1.0.1 can be rewritten in terms of characteristic function as
ϕX (as)ϕX (bs) = ϕX (cs)eisd , (2.1.11)
where a > 0 and b > 0 are arbitrary constants, and c > 0, d ∈ R are chosen as in Definition
1.0.1, ϕX (s) = Eeisλ . By Definition 2.1.2, ϕX (s) = exp{λ(isγ − |s|2 + isω(s, α, β))}, s ∈ R
with(ω(s, α, β) as in (2.1.2). It is quite easy to see that (2.1.2) follows with c = (aα + bα )1/α ,
λγ(a + b − c), α= / 1,
d= 2
λβ π (a log(a/c) + b log(b/c)), α = 1.
3) Show the equivalence of Definition 2.1.3 and Definition 1.0.1. Definition 2.1.3 follows from
Definition 1.0.1 as it was shown in 2). Vise versa, from Definition 2.1.3 it follows Definition
2.1.1 ( see 2) ), which is equivalent to Definition 1.0.1.
4) Show the equivalence of Definitions 2.1.3 and 2.1.4. In one direction (Definition 2.1.3
d
⇒ Definition 2.1.4) it is evident, in the other direction, assume that X1 + X2 = c2 X + d2 ,
d
X1 + X2 + X3 = c3 X + d3 for some c2 , c3 > 0, d2 , d3 ∈ R. In order to show Definition 2.1.3, it
is sufficient to check that
nη(s) = η(cn s) + isdn (2.1.12)
for any n ≥ 4, some cn > 0 and dn ∈ R, where η(s) = log ϕX (s), s (
∈ R. Since (by assumption)
(
2m cm
2 ,
(2.1.12) holds for n = 2, 3, it holds (by induction) for any n = m
with cn = ,
3 cm
3
d2 (1 + c2 + · · · + cm−1
(
2 ),
dn = m−1
m ∈ N. Hence, the distribution of X is infinitely divisible,
d3 (1 + c3 + · · · + c3 ).
and then |ϕ(s)| =/ 0, ∀s ∈ R.
From the said above, it holds
2j 3k η(s) = η(cj2 ck3 s) + iajk s (2.1.13)
for some c2 , c3 > 0, ajk ∈ R, j, k ∈ Z 1 . The set {2j 3k , j, k ∈ Z} is dense in R+ , since 2j 3k =
exp{j log 2 + k log 3}, and the set {j + ωk, j, k ∈ Z}, ω ∈ / Q is dense in R. Hence, for any n ∃
1
Let t = s/c2 then it follows from (2.1.12) that 12 η(t) = η(c−1 d2 1 −1 d3
2 t) − is c2 . Similarly we get 3 η(t) = η(c3 t) − is c3 .
So, formula (2.1.13) also holds for negative j, k ∈ Z.
2 Properties of stable laws 15

sequence {rm }m∈N , rm → n as m → ∞, and rm = 2jm 3km . Let cn (m) = cj2m ck3m , m ∈ N. Show
that {cn (m)}m∈N is bounded. It follows from (2.1.13) that rm Re (η(s)) = Re (η(cn (m)s)).
Assume that cn (m) is unbounded, then ∃ subsequence {cn (m0 )} such that |cn (m0 )| → ∞, m0 →
∞. Set s0 = scn (m0 ) in the last equation. Since rm0 → n, m0 → ∞, we get Re η(s0 ) =
s0 0
rm0 Re η( cn (m 0 ) ) → 0, m → ∞. Hence, |η(s)| ≡ 1, which can not be due to the assumption

that X ≡ / const.
Then {cn (m)}m∈N is bounded, and ∃ a subsequence {cn (m0 )}m0 ∈N such that |cn (m0 )| →
cn , m0 → ∞. Then ajm0 km0 = si (η(cn (m0 )) − rm0 η(s)) → si (η(cn − nη(s)) := dn . Hence, ∀n ∈ N
and s ∈ R it holds nη(s) = η(cn s) + isη(dn ), which is the statement of equation (2.1.12), so we
are done.

Remark 2.1.4
It follows from the proof of Theorem 2.1.1 1) that the parameter β = cc11 −c
+c2 , if c1 + c2 > 0 in
2

non-Gaussian case. Consider the extremal values of β = ±1. It is easy to see that for β = 1
c2 = 0, for β = −1 c1 = 0. This corresponds to the following situation in Definition 2.1.1:
a) Consider {Xn }n∈N to be i.i.d. and positive a.s., i.e., X1 > 0 a.s. By Theorem 2.1.3,1) it
follows that H(x) = 0, x < 0 =⇒ c2 = 0 =⇒ β = 1.
b) Consider {Xn }n∈N to be i.i.d. and negative a.s. As above, we conclude H(x) = 0, x > 0,
and c1 = 0 =⇒ β = −1.
Although this relation can not be inverted (from β ± 1 if does not follows that X > (<)0 a.s.),
it explains the situation of total skewness of a non-Gaussian X as a limit of sums of positive
or negative i.i.d. random variables Sn = b1n ni=1 Xi − an .
P

Remark 2.1.5
One can show that cn = n1/α in Definition 2.1.3, formula (2.1.4), for α ∈ (0, 2].

Proof We prove it only for strictly stable laws. First, for α = 2 (Gaussian case X, Xi ∼
d √
N (0, 1)) it holds ni=1 Xi ∼ N (0, n) = nX =⇒ cn = n1/α with α = 2.
P
Pn d
Now let α ∈ (0, 2). Let X be strictly stable, s.t. i=1 Xi = cn X. Take n = 2k , then
d d d
Sn = (X1 + X2 ) + (X3 + X4 ) + · · · + (Xn−1 + Xn ) = c2 (X10 + X20 + · · · + Xn/2
0
) = · · · = ck2 X,
| {z } | {z } | {z }
X10 X20 0
Xn/2

log n/ log 2
from which it follows cn = c2k = ck2 = c2 , so
log n
   
log cn = log c2 = log nlog c2 / log 2 , cn = n1/α2 , (2.1.14)
log 2
where α2 = log 2/ log c2 , for n = 2k , k ∈ N. Generalizing the above approach to n = mk turns,
we get

log m
cn = n1/αm , αm = , n = mk , k ∈ N. (2.1.15)
log cm
To prove that cn = n1/α0 it suffices to show that if cρ = r1/β then β = α0 . Now by (2.1.15)
crj = rj/αr and cρk = ρk/αρ . But for each k there exists a j such that rj < ρk ≤ rj+1 . Then

(crj )αr /αρ < cρk = ρk/αρ ≤ r1/αρ (crj )αr /αρ . (2.1.16)
16 2 Properties of stable laws

Note that Sm+n is the sum of the independent variables Sm and Sm+n − Sm distributed,
d
respectively, as cm X and cn X. Thus for symmetric stable distributions cm+n X = cm X1 + cn X2 .
Next put η = m + n and notice that due to the symmetry of the variables X, X1 , X2 we have for
t > 0 2P(X > t) ≥ P(X2 > tcη /cn ). It follows that for η > n the ratios cn /cη remain bounded.
So, it follows from (2.1.16) that
αr
cρk

r ≥ (cρk )αρ −αr
crj

and hence αr ≥ αρ . Interchanging the roles of r and ρ we find similarly that αr ≤ αρ and hence
αr = αρ ≡ α0 for any r, ρ ∈ N.
We get the conclusion that cn = n1/α0 , n ∈ N. It can be further shown that α0 = α.

Definition 2.1.6
d
A random variable X (or its distribution PX ) is said to be symmetric if X = −X. X is symmetric
about µ ∈ R if X − µ is symmetric. If X is α−stable and symmetric, we write X ∼ SαS. This
definition is justified by the property X ∼ Sα (λ, β, γ), X−symmetric ⇔ γ = β = 0, which will
be proven later.

2.2 Strictly stable laws


As it is clear from the definition of strict stability (Definition 1.0.1) X is stable iff for any
a, b ∈ R+ ∃c > 0 s.t. ϕX (as)ϕX (bs) = ϕX (cs), s ∈ R, where ϕX (s) = EeisX , s ∈ R.
Theorem 2.2.1
Let X ≡/ const a.s. It is strictly stable if its characteristic function admits one of the following
representations: ∀s ∈ R

1. ( (
λ(−|s|α + isω(s, α, β)) α=/ 1, γ = 0, α =
/ 1,
log ϕX (s) = i.e.
λ(isγ − |s|) α = 1, β = 0, α = 1
with ω(s, α, β) as in (2.1.2).

2. (form C) log ϕX (s) = −λC |s|α exp(− π2 θαsigns), where α ∈ (0, 2], λC > 0, θ ≤ θα =
2
min{1, α−1 }.

Proof 1. In the proof of Theorem 2.1.1, 2) it is shown that


( (
λγ(a + b − c), α=
/1 γ = 0, α =
/1
d= =0⇔
λβ π2 (a log(a/c) + b log(b/c)), α=1 β = 0, α = 1.

2. Take the parametrisation (B) of ϕX with parameters γ, β as in 1, and left α unchanged,



θ = βB K(α) , λC = λB , α=
/ 1,
α
   2 1/2
θ = 2 arctg 2 γ π 2
π π B , λC = λ B 4 + γ B , α = 1.
2 Properties of stable laws 17

2.3 Properties of stable laws


Here we consider further basic properties of α−stable distributions.
Theorem 2.3.1
Let Xi , i = 1, 2 be Sα (λi , βi , γi )-distributed independent random variables, X ∼ Sα (λ, β, γ).
Then

1) X has a density (i.e, has absolutely continuous distribution), which is bounded with all
its derivatives.

2) X1 + X2 ∼ Sα (λ, β, γ) with

β1 λ1 + β2 λ2 λ1 γ1 + λ2 γ2
λ = λ1 + λ2 , β= ,γ = .
λ1 + λ2 λ1 + λ2

3) X + a ∼ Sα (λ, β, γ + a/λ), where a ∈ R is a constant.

4) For a real constant a =


/ 0 it holds

Sα |a|α λ, sign(a)β, γ|a|1−α sign(a) ,

α=
/ 1,
aX ∼   
S1 |a|λ, sign(a)β, sign(a) γ − 2 (log |a|)β , α = 1.
π

5) For α ∈ (0, 2), X ∼ Sα (λ, β, 0) ⇔ −X ∼ Sα (λ, −β, 0).

6) X is symmetric iff β = γ = 0. It is symmetric about λγ iff β = 0.

7) Let α =
/ 1. X is strictly stable iff γ = 0.

Proof 1) Let ϕX , ϕXi be the characteristic function of X, Xi , i = 1, 2. It follows from Definition


α
2.1.2 that |ϕX (s)| = e−λ|s| , s ∈ R. Take the inversion formula for the characteristic function.
If |ϕX | is integrable on R (which is here the case) then the density fX of X exists and fX (s) =
1 R −isx ϕ (s)ds, x ∈ R. Additionally, the n−th derivative of f is
2π R e X X
 
n+1
(n) 1
Z Γ 2 n+1
fX (x) ≤ |s|n |ϕX (s)| ds = λ− 2 < ∞, x ∈ R, n ∈ N.
2π R | {z } πα
exp(−λ|s|α )

2) Prove it for the case α =


/ 1, the case α = 1 is treated similarly. Consider the characteristic
function of X1 + X2 , and take its logarithms:

log ϕX1 +X2 (s) = log(ϕX1 (s)ϕX2 (s)) = log ϕX1 (s) + log ϕX2 (s)
2
X  
= λj isγj − |s|α + s|s|α−1 iβj tg(πα/2)
j=1

= −|s|α (λ1 + λ2 ) + is(λ1 γ1 + λ2 γ2 ) + is|s|α−1 (λ1 β1 + λ2 β2 )tg(πα/2)


λ 1 γ1 + λ 2 γ2 λ−1 λ1 β1 + λ2 β2
 
λ
= (λ1 + λ2 ) is − |s| + is|s|
λ1 + λ2 λ1 + λ2
= λ(isγ − |s|α + isω(s, α, β)),
18 2 Properties of stable laws

with
λ1 γ1 + λ2 γ2 λ1 β1 + λ2 β2
λ = λ1 + λ2 , γ = ,β = .
λ1 + λ2 λ1 + λ2
So, X1 + X2 ∼ Sα (λ, β, γ) by Definition 2.1.2.
3) log ϕX+a (s) = isa + λisγ − λ|s|α + λisω(s, α, β) = λ(is(γ + a/λ) − |s|α + isω(s, λ, β)),
hence X + a ∼ Sα (λ, β, γ + a/λ).
4) Consider the case λ =/ 1.

log ϕaX (s) = log ϕX (as) = λ(iasγ − |as|α + iasω(as, α, β))


!
α a a|a|α−1 α−1
= λ|a| isγ α − |s|α + is |s| βtg(πα/2)
|a| |a|α
 
= λ|a|α isγ|a|1−α sign(a) − |s|α + issign(a)β|s|α−1 tg(πα/2) ,

hence aX ∼ Sα (λ|a|α , sign(a)β, γ|a|1−α sign(a)).


For α = 1, we have
2
 
log ϕaX (s) = log ϕX (as) = λ iasγ − |as| − iasβ log |as|
π
a a 2 a 2
 
= λ|a| isγ − is β log |a| − |s| − is β log |s|
|a| |a| π |a| π
2 2
   
= λ|a| isign(a)s γ − β log |a| − |s| − isign(a)sβ log |s|) ,
π π
  
hence aX ∼ S1 λ|a|, sign(a)β, sign(a) γ − β π2 log |a| .
5) follows from 4) with a = −1.
d
6) X is symmetric by definition iff X = −X, i.e., ϕX (s) = ϕ−X (s) = ϕX (−s), ∀s ∈ R, which
is only possible if ϕX (s) ∈ R, s ∈ R. Indeed, EeisX = E cos(sX) + iE sin(sX) = E cos(−sX) +
iE sin(−sX) = E cos(sX) − iE sin(sX) iff 2iE sin(sX) = 0, ∀s ∈ R. Using Definition 2.1.2,
ϕX (s) is real only if γ = 0 and ω(s, α, β) = 0, i.e., β = 0.
d
X is symmetric around λγ by definition iff X − λγ = −(X − λγ) = −X + λγ. By property
d
3) and 4), X − λγ ∼ Sα (λ, β, γ − γ), −X + γ ∼ Sα (λ, −β, −γ + γ). So, X − λγ = −X + λγ iff
β = 0.
7) Is already proven in Theorem 2.2.1.

Remark 2.3.1
1) The analytic form of the density of a stable law Sα (λ, β, γ) is explicitly known only in the
cases α = 2 (Gaussian law), α = 1 (Cauchy law), α = 1/2 (Lévy law).
2) Due to Property 3) of Theorem 2.3.1, the parameter γ (or sometimes λγ) is called shift
parameter.
3) Due to Property 4) of Theorem 2.3.1, the parameter λ (or sometimes λ1/α ) is called shape
or scale parameter. Notice that this name is natural for α = / 1 or α = 1, β = 0. In case
α = 1, β =/ 0, scaling of X by a results in a non-zero shift of the law of X by π2 β log |a|, hence
the use of this name in this particular case can namely be recommended.
4) Due to properties 5)-6) of Theorem 2.3.1, parameter β is called skewness parameter. If
β > 0(β < 0) then Sα (λ, β, γ) is said to be skewed to the right (left). Sα (λ, ±1, γ) is said to be
totally skewed to the right (for β = 1) or left (for β = −1).
2 Properties of stable laws 19

5) It follows from Theorem 2.2.1 and Theorem 2.3.1, 3) that if X ∼ Sα (λ, β, γ), α = / 1, then
X − λγ ∼ Sα (λ, β, 0) is strictly stable.
6) It follows from Theorem 2.2.1 and Definition 2.1.2 that no non-strictly 1-stable random
variable can be made strictly stable by shifting. Indeed, if S1 (λ, β, γ) is not strictly stable
then β =/ 0, which can not be eliminated due to log |s| in ω(s, λ, β). Analogously, every strictly
1-stable random variable can be made symmetric by shifting.
Corollary 2.3.1
Let Xi , i = 1, . . . , n be i.i.d. Sα (λ, β, γ)−distributed random variables, α ∈ (0, 2]. Then
(
d n1/α X1 + λγ(n − n1/α ), if α =
/ 1,
X1 + · · · + Xn =
nX1 + π2 λβn log n, if α = 1.

This means, cn and dn of Definition 2.1.3 has values


(
1/α λγ(n − n1/α ), if α =
/ 1,
cn = n , α ∈ (0, 2], dn = 2
π λβn log n, if α = 1.

Theorem 2.1.1 2). There, it is shown aX1 +


Proof It follows by induction from the proof of (
d λγ(a + b − c), α=/ 1,
bX2 = cX1 + d, with c = (aα + bα )1/α , d = 2
Take
λβ π (a log(a/c) + b log(b/c)), α = 1.
(
λγ(2 + 21/α ), α=/ 1,
n = 2, a = b = 1 ⇒ c2 = 21/α , d2 = The induction step is trivial.
λβ π2 2 log(2)), α = 1.

Corollary 2.3.2
It follows from theorem 2.3.1, 2) and 3) that if X1 , X2 ∼ Sα (λ, β, γ) are independent then
X1 − X2 ∼ Sα (2λ, 0, 0) and −X1 ∼ Sα (λ, −β, −γ).

Proposition 2.3.1. Let {Xn }n∈N be a sequence of random variables defined in the same prob-
ability space (Ω, F, P), Xn ∼ Sαn (λM M M M M
n , βn , γn ), n ∈ N, where αn ∈ (0, 2), λn > 0, βn ∈
M M M M M
[−1, 1], γn ∈ R. Assume that αn → α, λn → λ , βn → β as n → ∞ for some α ∈
d
(0, 2), λM > 0, β M ∈ [−1, 1], γ M ∈ R. Then Xn → X ∼ Sα (λM , β M , γ M ) as n → ∞. Here the
superscript “M” means the modified parametrisation, cf. formula (2.1.3) after Definition 2.1.2.
d
Proof Xn → X as n → ∞ is equivalent to ϕXn (s) → ϕX (s), n → ∞, s ∈ R, or, log ϕXn (s) =
λM M
n (isγn − |s|
αn + isω (s, α , β M )) → λM (isγ M − |s|α + isω (s, α, β M )) which is straight-
M n n M
n→∞
forward by the continuity of the modified parametrisation w.r.t. its parameters.

Our aim now is to prove the following result.

Proposition 2.3.2. Let X ∈ Sα (λ, 1, 0), λ > 0, α ∈ (0, 1). Then X ≥ 0 a.s.

This property justifies again the use of β as skewness parameter and brings a random variable
X ∈ Sα (λ, 1, 0) the name of stable subordinator. The above proposition will easily follows from
the next theorem.
Theorem 2.3.2
1) For α ∈ (0, 1), consider Xδ = N
P δ
k=1 Uδ,k to be compound Poisson distributed, where Nδ is
20 2 Properties of stable laws

a P oisson(δ −α )−distributed random variable, δ > 0, and {Uδ,k }k∈N are i.i.d. positive random
(
δ α /xα , x > δ,
variables, independent of Nδ , with P(Uδ,k > x) =
0, x ≤ δ.
d
Then Xδ → X, δ → 0, where X ∼ Sα (λ, 1, 0) with λ = Γ(1 − α) cos(πα/2).
2) Let X ∼ Sα (λ, 1, 0), α ∈ (0, 1). Then its Laplace transform ˆlX (s) := Ee−sX is equal to
ˆlX (s) = e−Γ(1−α)sα , s ≥ 0. (2.3.1)

Proof 1) Since the generating function of N ∼ P oisson(a) is equal to ĝN (z) = Ez N =


P∞ k −a ak k
P∞ k −a P∞ (az) = e−a eaz = ea(z−1) , z ∈ C, we have
k=0 z P(N = k) = k=0 z e k! = e k=0 k!
−α
ĝNδ (z) = eδ (z−1) , z ∈ C, and hence
     PNδ 
ϕXδ (s) = EeisXδ = E E eisXδ |Nδ = E E eis k=0
Uδ,k
|Nδ
 

Y δ −α (ϕUδ,1 (s)−1)
= E EeisUδ,1  = ĝNδ (ϕUδ,1 (s)) = e ,
k=1
R ∞ isx
dP(Uδ,1 ≤ x) = α δ∞ eisx δ α x−α−1 dx. So (since α δ∞ x−α−1 dx = −δ −α )
R R
where ϕUδ,1 (s) = 0 e
 Z ∞   Z ∞ 
ϕXδ (s) = exp α (eisx − 1)x−α−1 dx → exp α (eisx − 1)x−α−1 dx ,
δ δ→+0 0

which is of the form (2.1.6) with H(x) = −c1 x−α I(x > 0) as in Theorem 2.1.4 (c2 = 0).
 R ∞ isx
Consider ϕX (s) := exp α 0 (e − 1)x−α−1 dx , s ≥ 0, α ∈ (0, 1). Show that
Z ∞ isx
e −1 Γ(1 − α) −iαπ/2
dx = −sα e . (2.3.2)
0 xα+1 α
If it is true then log ϕX (s) = −|s|α Γ(1 − α)(cos(πα/2) − isign(s) sin(πα/2)) since for s < 0
we make the substitution s → −s, i → −i. Then, log ϕX (s) = −|s|α Γ(1 − α) cos(πα/2)(1 −
isign(s)tg(πα/2)), s ∈ R, which means that, according to Definition 2.1.2, X ∼ Sα (λ, 1, 0). Now
prove relation (2.3.2). It holds
Z ∞ isx Z ∞ isx−θx
e −1 e −1 1 ∞ −θx+isx
Z
dx = lim dx = lim − (e − 1)d(x−α )
0 xα+1 θ→+0 0 xα+1 θ→+0 α 0
!
1 −θx+isx 1 ∞ −θ + is ∞ e−θx+isx
Z
= lim − (e − 1) α + dx
θ→+0 α x 0 α 0 xα
θ − is ∞ eisx x1−α−1 e−θx
Z
= lim − 1−α Γ(1 − α)θ1−α dx
θ→+0 θ α 0 Γ(1 − α)
θ − is 1 (θ − is)1−1+α
= − lim 1−α Γ(1 − α) 1−α
= − lim 1−α Γ(1 − α)
θ→+0 θ α (1 − is/θ) θ→+0 θ α/θ1−α
(θ − is)α Γ(1 − α) Γ(1 − α) p α
= − lim =− lim θ2 + s2 eiξ
θ→+0 α α θ→+0
Γ(1 − α) α −i π α
=− s e 2 ,
α
where ξ = arg(θ − is) → −π/2.
θ→+0
2 Properties of stable laws 21

3) Similarly to said above,


 Z ∞   Z ∞ 
ĝXδ (s) = Ee−sXδ = exp α (eisx − 1)x−α−1 dx → α (eisx − 1)x−α−1 dx
δ δ→+0 0
 Z ∞   Z ∞ 
(sub. y = sx) = exp sα α(eiy − 1)y −α−1 dy = exp −sα x−α e−x dx
0 0
α
= exp{−s Γ(1 − α)}, s ≥ 0.

Proof of Proposition 2.3.2 Since Xδ ≥ 0, Xδ → X as in Theorem 2.3.2,1) it holds X ≥ 0.


δ→+0
This means that the support of the density f of X ∼ Sα (λ, 1, 0) is contained in R+ . Moreover,
one can show that suppf := {x ∈ R : f (x) > 0} = R+ by showing that ∀a, b > 0 : aα + bα = 1
it holds asuppf + bsuppf = suppf. It follows from this relation that suppf = R+ since it can
not be R.

Exercise 2.3.1
Show this!
Remark 2.3.2
/ 1, α ∈ (0, 2] : for X ∼ Sα (λ, 1, 0),
Actually, formula (2.3.1) is valid for all α =
 n o
exp − λ α ,
s / 1, α ∈ (0, 2],
α=
ˆlX (s) = n cos(πα/2) o s ≥ 0,
exp −λ 2 s log s , α = 1,
π


< 0,

 α ∈ (0, 1),
λ λ
where Γ(1 − α) = cos(πα/2) for α =
/ 1. Here, − cos(πα/2) = > 0, α ∈ (1, 2),


λ, α = 2.

Proposition 2.3.3. The support of Sα (λ, β, 0) is R, if β ∈ (−1, 1), α ∈ (0, 2).

Proof Let X ∼ Sα (λ, 1, 0), α ∈ (0, 2), β ∈ (−1, 1) with density f. It follows from properties
2)-4) of Theorem 2.3.1
(
that ∃ i.i.d. random variables Y1 , Y2 ∼ Sα (λ, 1, 0) and constants a, b >
d aY1 − bY2 , α=/ 1,
0, c ∈ R s.t. X = Since, Y1 ≥ 0 and −Y2 ≤ 0 a.s. by Proposition
aY1 − bY2 + c, α = 1.
2.3.2, and their support is the whole R+ (R− , resp.), it holds suppf = R.

Remark 2.3.3
One can prove that the support of Sα (λ, ±1, 0) is R as well, if α ∈ [1, 2).
Now consider the tail behavior of stable random variables. In the Gaussian case (α = 2), it
is exponential:
ϕ(x)
Proposition 2.3.4. Let X ∼ N (0, 1). Then, P(X < −x) = P(X > x) ∼ x ,x → ∞, where
2
ϕ(x) = √12π e−x /2 is the standard normal density.
22 2 Properties of stable laws

Proof Due to the symmetry of X, P(X < −x) = P(X > x), ∀x > 0. Prove the more accurate
inequality
1 1 ϕ(x)
 
− 3 ϕ(x) < P(X > x) < , ∀x > 0. (2.3.3)
x x x
ϕ(x)
The asymptotic P(X > x) ∼ x ,x → +∞ follows immediately from

it. 
2 /2 2 /2
First prove the left relation in (2.3.3). Since e−t < e−t 1+ 1
t2
, ∀t > 0, it holds
 
for x > 0 : P(X > x) = √12π x∞ e−t /2 dt ≤ √12π x∞ e−t /2 1 + t12 dt = √12π e−x /2 x1 , where
R 2 R 2 2

2
 
the last equality can be easily verified by differentiation w.r.t. x : − √12π e−x /2 1+ 1
x2
=
   0  
√1 − xx e −x2 /2 −e −x2 /2 1
= ϕ(x)
. Analogously, e −t2 /2 1− 3
<e −t2 /2 , ∀t > 0, hence
2π x2 x t2
Z ∞ Z ∞
1 1 1 3 1
   
2 /2 2 /2
− 3 ϕ(x) = √ e−t 1− 2
dt ≤ √ e−t dt = P(X > x), (2.3.4)
x x 2π x t 2π x

where again the left equality in (2.3.4) is proved by differentiation w.r.t. x.


Remark 2.3.4  
σ x−µ
If X ∈ N (µ, σ 2 ), then P(X > x) ∼ x−µ ϕ σ , x → +∞ accordingly.
However, for λ ∈ (0, 2), the behaviour of right and left hand side tail probabilities is polyno-
mial in x1 :
Proposition 2.3.5. Let X ∼ Sα (λ, β, γ), α ∈ (0, 2). Then
1+β 1−β
xα P(X > x) → cα λ, xα P(X < −x) → cα λ, as x → +∞,
2 2
where 
Z ∞ −1 1−α
sin x Γ(1−α) cos(πα/2) , α=
/1

cα = dx =
0 xn 2,
π α = 1.
Remark 2.3.5
1) The above proposition states, for β = ±1, that for
(
X ∼ Sα (λ, −1, 0), it holds P(X > x)xα → 0, x → +∞,
which means that the tails go to
X ∼ Sα (λ, 1, 0), it holds P(X < −x)xα → 0, x → +∞,
zero faster than x−α . But what is the correct asymptotic in this case? For α ∈ (0, 1) we know
that X is totally skewed to the left (right) and hence P(X > x) = 0, ∀x > 0 for β = −1 and
P(X < −x) = 0, ∀x > 0 for β = 1.
For α ≥ 1, this asymptotic is far from being trivial. Thus, it can be shown (see [6][Theorem
2.5.3]) that
  − α    α 
1
√ x 2(α−1) x (α−1)
P(X > x)
 ∼ αaα exp −(α − 1) αaα , α > 1,
x→+∞ 2πα(α−1)
  β = −1,
P(X > x)
 ∼ √12π exp − (π/2)λx−1
2 − e(π/2)λx−1 , α = 1,
x→+∞

where aα = (λ/ cos(π(2 − α)/2))1/α .


For β = 1 and P(X < −x) the same asymptotic applies, since P(X < −x) = P(−X > x),
and −X ∼ Sα (λ, −1, 0) with X ∼ Sα (λ, 1, 0).
2) In the specific case of SαS X, i.e., β = 0, X ∼ Sα (λ, 0, 0), Proposition 2.3.5 yields
P(X < −x) = P(X > x) ∼ λc2α x1α , x → +∞.
2 Properties of stable laws 23

Proposition 2.3.5 will be proved later after we have proven important results, needed for it.
Let us state now some corollaries.
Corollary 2.3.3
For any X ∼ Sα (λ, β, γ), 0 < α < 2 it holds E|X|p < ∞ iff p ∈ (0, α). In particular, E|X|α =
+∞.

Proof ItR follows immediately from the tail asymptotic of Proposition 2.3.5 and the formula
E|X|p = 0∞ P(|X|p > x)dx.

Proposition 2.3.6. Let X ∼ Sα (λ, β, 0) for 0 < α < 2, and β = 0 if α = 1. Then (E|X|p )1/p =
cα,β (p)λ1/α , where ∀p ∈ (0, α) and cα,β (p) is a constant s.t.
p/(2α)
2p−1 Γ(1 − p/α) απ p
   
cα,β (p) = R∞ 1 + β 2 tg2 cos arctg(βtg(απ/2)) .
p 0 u−p−1 sin2 udu 2 α

Proof We shell show only that (E|X|p )1/p = cα,β (p)λ1/α , where cα,β (p) = (E|X0 |p )1/p with
X0 ∼ Sα (1, β, 0). The exact calculation of cα,β (p) will be left without proof. The first statement
d
follows from Theorem 2.3.1,4), namely, since X = λ1/α X0 . Then (E|X|p )1/p = λ1/α (E|X0 |p )1/p =
cα,β (p).

2.4 Limit theorems


Let us reformulate Definition 2.1.1 as follows.
Definition 2.4.1
We say that the distribution fuction F belongs to the domain of attraction of distribution
function G if for a sequence of i.i.d. r.v.’s {Xn }n∈N , Xn ∼ F ∃ sequences of constants
{an }n∈N , {bn }n∈N : an ∈ R, bn > 0, ∀n ∈ N s.t.
n
1 X d
Xi − an → X ∼ G, n → ∞.
bn i=1

Let us state and prove the following result.


Theorem 2.4.1
1) G has a domain of attraction iff G is a distribution function of a stable law.
2) F belongs to the domain of attraction of N (µ, σ 2 ), σ > 0 iff
Z x
µ(x) := y 2 F (dy), x > 0
−x

is slowly varying at ∞. This holds, in particular, if F has a finite second moment (then
∃ limx→+∞ µ(x) = EX12 ).
3) F belongs to the domain of attraction of α-stable law, α ∈ (0, 2), iff
µ(x) ∼ x2−α L(x), (2.4.1)
where L : R+ → R+ is slowly varying at +∞ and it holds the tail balance condition
P(X > x) 1 − F (x) P(X < −x) F (−x)
= → p, = → q
P(|X| > x) 1 − F (x) + F (−x) x→+∞ P(|X| > x) 1 − F (x) + F (−x) x→+∞
(2.4.2)
24 2 Properties of stable laws

for some p, q ≥ 0 : p + q = 1 with X ∼ F.


4) Condition (2.4.1) equivalent to (2.4.3)

P(|X| > x) = 1 − F (x) + F (−x) ∼ x−α L(x). (2.4.3)


x→+∞

Remark 2.4.1
a) In Definition 2.4.1, one can choose bn = inf{x : P(|X1 | > x) ≤ n−1 }, an = nE(X1 I(|X1 | ≤
bn )).
b) It is quite clear that statements 2) and 3) are special cases of the following one:
1) F belongs to the domain of attraction of an α-stable law, α ∈ (0, 2], iff (2.4.1) and (2.4.2)
hold.
c) It can be shown that {bn } in Theorem 2.4.1 must satisfy the condition limn→∞ nL(b bα
n)
=
n
λcα , with cα as in Proposition 2.3.5. Then {an } can be chosen as

0,

 α ∈ (0, 1),
an = nb2n R sin(x/bn )dF (x),
R
α = 1,

 2R

nbn R xdF (x), α(1, 2).

Proof of Proposition 2.3.5 We just give the sketch of the proof. It is quite clear that
Sα (λ, β, γ) belongs to the domain of attraction of Sα (λ, β, 0) with bn = n1/α , cf. Theorems
2.1.3,2.1.4, Corollary 2.1.1 and Remark 2.1.5. Then the tail balance condition (2.4.2) holds
with p = 1+β 1−β
2 , q = 2 . By Remark 2.4.1 c), putting bn = n
1/α into it yields that L(x) in

(2.4.3) has the property limx→+∞ L(x) = cα λ. It follows from (2.4.2) and (2.4.3) of Theorem
2.4.1 that
xα P(X > x) ∼ xα pP(|X| > x) ∼ p, x → +∞.
1+β
xα x−α lim L(x) = pcα λ = cα λ,
x→+∞ 2
xα P(X < −x) ∼ qcα λ = cα 1−β
2 λ, x → +∞ is shown analogously.

Proof of Theorem 2.4.1 F belongs to the domain of attraction of a distribution function G


if, by Definition 2.4.1, ∃ i.i.d. r.v.’s {Xn }n∈N , Xn ∼ F, {an }n∈N {bn }n∈N ⊂ R : bn > 0, ∀n, s.t.
1
X −a b d
Sn = b1n ni=1 Xi − an = ni=1 i bn n → X ∼ G, n → ∞. Denote cn = an n1 , n ∈ N. In terms
P P n n

of characteristic functions, ϕSn (s) → ϕX (s) ∀s ∈ R, where


n→∞

n n
!
Xk − cn bn Xk − cn bn
X Y  
ϕSn (s) = E exp is = E exp is
k=1
bn k=1
bn
 n
= e−iscn ϕX1 (s/bn ) .
Xi −i.i.d.

Put ϕn (s) = ϕX1 (s/bn ), Fn (x) = F (bn x). Then the statement of Theorem 2.4.1 is equivalent to
 n
e−iscn ϕn (s) → ϕX (s), (2.4.4)
n→∞

where X is stable.
2 Properties of stable laws 25

Lemma 2.4.1
Under assumptions of Theorem 2.4.1, relation (2.4.4) is equivalent to

n(ϕn (s) − 1 − icn s) → η(s), n → ∞ (2.4.5)

where η(s) is a continuous function of the form η(s) = isa−bs2 + {x=0} isx −1−is sin x)dH(x)
R
/ (e
(cf. (2.1.6)) with H(·) from Theorem 2.1.2 and ϕX (s) = eη(s) , s ∈ R.
d
Proof 1) Show this equivalence in the symmetric case, i.e., if X1 = −X1 . Then it is clear that
we may assume cn = 0, ∀n ∈ N. Show that

ϕnn (s) → eη(s) ⇔ (2.4.6)


n→∞
n(ϕn (s) − 1) → η(s), (2.4.7)
n→∞

and η is continuous. First, if a characteristic function ϕ(s) = / 0∀s : |s| < s0 , then ∃! representa-
tion ϕ(s) = r(s)e iθ(s) , where θ(·) is continuous and θ(0) = 0. Hence, log ϕ(s) = log r(s) + iθ(s)
is well-defined, continuous, and log ϕ(0) = log r(0) + iθ(0) = log 1 + i0 = 0.
Let us show (2.4.7) ⇒ (2.4.6). It follows from (2.4.7) that ϕn (s) → 1 and by continuity
n→∞
theorem for characteristic functions, this convergence is uniform in any finite interval s ∈
(−s0 , s0 ). Then, log ϕn (s) is well-defined for large n (since ϕn (s) =
/ 0 there). Since

log z = z − 1 + o((z − 1)2 ) for |z − 1| < 1, (2.4.8)

it follows log ϕnn (s) = n log ϕn (s) = n(ϕn (s) − 1 + o((ϕn (s))2 )) ∼ n(ϕn (s) − 1) → η(s) by
n→∞ n→∞
(2.4.7). Then, ϕnn (s) → eη(s) , ∀s ∈ R and (2.4.6) holds.
n→∞
Let us show (2.4.6) ⇒ (2.4.7). Since η(0) = 0, then eη (s) = / 0 ∀s ∈ (−s0 , s0 ) for some
s0 > 0. Since the convergence of characteristic functions is uniform by continuity theorem,
ϕn (s) =/ 0 for all n large enough and for s ∈ (−s0 , s0 ). Taking logarithms in (2.4.6), we get
n log ϕn (s) → η(s). Using Taylor expansion (2.4.8), we get n(ϕn (s)−1) → η(s), and (2.4.7)
n→∞ n→∞
holds.
2) Show this equivalence in the general case cn =/ 0. More
  specifically, show that it holds if
ϕn (s) → 1 ∀s ∈ R, and nβn → 0, where βn = R sin bxn F (dx). Then
2
R
n→∞ n→∞

n(βn − cn ) → a, (2.4.9)
n→∞

and (2.4.5) writes equivalently as

n(ϕn (s) − 1 − iβn s) → η(s). (2.4.10)


n→∞

Without loss of generality set a = 0.


Notice that the proof of 1) does not essentially depend on the symmetry of X1 , i.e., equiv-
alence (2.4.6) ⇔ (2.4.7) holds for any characteristic functions {ϕn } s.t. ϕn (s) − 1 → 1
n→∞
∀s ∈ R. Applying this equivalence to {ϕn (s)e−iscn }n∈N leads to n(ϕn (s)e−icn s − 1) → η(s) =
n→∞
−bs2 + isx − 1 − is sin x)dH(x). Since we assumed that ϕ (s) → 1 it follows c
R
/ (e
{x=0} n n → 0,
n→∞ n→∞
while bn → ∞. Consider Im (n(ϕn (s) − eicn s )) → Im (eicn s η(s)) for s = 1. Since η(1) ∈ R
n→∞
and cn → 0, we get n (Im ϕn (1) − sin cn ) ∼ η(1) sin cn , since cn ∼ cn as cn → 0, where
n→∞
26 2 Properties of stable laws

R 
is/bn dF (x)
R
Im ϕn (1) = Im Re = R sin(x/bn )dF (x) = βn ⇒ n(βn − cn ) → 0. Hence,
s=1 n→∞
relation n(ϕn (s)e−icn s − 1) → η(s) one can write as n(ϕn (s)e−iβn s − 1) → η(s). But
n→∞ n→∞

n(ϕn (s)e−iβn s − 1) = n(ϕn (s) − 1 − iβn s)e−iβn s + n((1 + iβn s)e−iβn s − 1),
| {z }
→0,n→∞

since n((1 + iβn s)e−iβn s − 1) = n((1 + iβn s)(1 − iβn s + o(bn )) − 1) = n(1 + βn2 s2 + o(bn ) − 1) =
nb2n s2 + o(nbn ) → 0 by our assumption. We conclude that (2.4.4) ⇒ (2.4.5) holds.
n→∞
Conversely, if (2.4.9) and (2.4.8) hold then reading the above reasoning in reverse order we
go back to (2.4.4).
Now we have to show that ϕn (s) → 1, nβn2 → 0. The first statement is trivial since
n→∞ n→∞
2
ϕn (s) = ϕ(s/bn ) → ϕ(0) = 1, as bn → ∞. Let us show nβn2 = n (
R

R sin(x/bn )F (dx)) n→∞ 0.
By Corollary 2.1.1 bn ∼ n1/α h(n), n
→ ∞, where h(·) is slowly varying at +∞. It follows
p
from (2.4.3) that E|X1 | < ∞ ∀p ∈ (0, α). Then |βn | ≤ 2 0∞ bxn dF (x) = O(|βn |−p ) =
p
R

O(n−p/α h−p (n)) and nβn2 = O(n1−2p/α ) → 0 if β is chosen s.t. p > α/2.
n→∞

Now prove the following.


Lemma 2.4.2
Conditions of Theorem 2.4.1 are necessary and sufficient for relation (2.4.5) to hold with some
special sequences of constants {bn }, {cn }.
If this lemma is proven, then the proof of Theorem 2.4.1 is complete, since by Lemma 2.4.1
relation (2.4.5) and (2.4.4) are equivalent, and thus F belongs to the domain of attraction of
some α−stable law.

Proof of Lemma 2.4.2. Let relation (2.4.5) holds with some bn > 0 and an . This means,
d
equivalently, that Sn → X ∼ G. Since the case X ∼ N (0, 1) is covered by the CLT, let
n→∞
us exclude it as well as the trivial case X ≡ const. By Theorem 2.1.2-2.1.3 with kn = n
R R 1
Xnj = Xj /bn , an = An (y) − a − |u|<y udH(u) + |u|≥y u dH(u), X1 ∼ F,
Z ybn
X1 n n
 
An (y) = nE I(|X1 |/bn < y) = E(X1 I(|X1 | < bn y)) xdF (x),
bn bn bn −ybn

n(F (xbn ) − 1) → H(x), x > 0,
±y being continuity points of H, it follows that n→∞ and
nF (xbn ) → H(x), x < 0,
n→∞
 !2 
Z εbn Z εbn
n
lim lim sup  x2 dF (x) − x2 dF (x)  = b. (2.4.11)
ε→0 n→∞ b2 n −εbn −εbn

1) Show that bn → ∞+∞, bn+1 → 1, if X ≡


bn n→∞ / const a.s. By Remark 2.1.3 2), it holds property
(2.1.5), i.e., limn→∞ P(|X1 | < c) = 1 then the central limit theorem can be applied to {Xn }
with Pn
i=1 Xi − nEX1 d
√ √ → N (0, 1)
n VarX1 n→∞
2 Properties of stable laws 27


and it is not difficult to show (see Exercise 2.3.2 below) that bn = const n → ∞ in this
case. If ∃
/ c > 0 : P(|X1 | < c) = 1 then bn = / O(1), n → ∞ since that would contradict
limn→∞ P(|X1 | > bn ε) = 0 ⇒ ∃{nk }, nk → ∞ as k → ∞ : bnk → +∞. W.l.o.g. identify
sequences {n} and {nk }. Alternatively, one can agree that {Sn } is stochastically bounded
d
(which is the case if Sn → X) iff bn → +∞.
n→∞
Exercise 2.4.1
d d
Let {Fn }n∈N be a sequence of c.d.f. s.t. Fn (αn ·+βn ) → U (·), n → ∞, Fn (γn ·+δn ) → V (·), n →
∞ for some sequences {αn }, {βn }, {γn }, {δn } s.t. αn γn > 0, where U and V are c.d.f.’s, which
are not concentrated at one point. Then
γn δn − βn
→ a=
/ 0, → b
αn n→∞ αn n→∞
and V (x) = U (ax + b), ∀x ∈ R.
bn+1 d d Xn+1
Now show that bn → 1, n → ∞. Since Sn → X ≡
n→∞
/ const, it holds Sn+1 → X,
n→∞ bn+1 =
d n+1 P d d
Sn+1 − Sn → 0 ⇒ Xbn+1 → 0, n → ∞. 1
Thus, bn+1 Sn − an+1 → X and 1
bn Sn − an → X,
n→∞ n→∞ n→∞
bn+1
which means by Exercise 2.4.1, that →
bn n→∞ 1.
2) Prove the following.
αn+1
Proposition 2.4.1. Let βn → +∞, →
αn n→∞ 1. Let U be a monotone function s.t.
n→∞

lim αn U (βn x)ψ(x) (2.4.12)


n→∞

exists on a dense subset of R+ , where ψ(x) ∈ (0, +∞) on some interval I, then U is regularly
varying at +∞, ψ(x) = cxρ , ρ ∈ R.

Proof W.l.o.g. set ψ(1) = 1, and assume that U is non-decreasing and (2.4.12) holds for x = 1
(otherwise, a scaling in x can be applied). Set n = min{k ∈ N0 : βk+1 > t}. Then it holds
βn ≤ t < βn+1 , and
λn U (βn x) U (βn x) U (tx)
ψ(x) ∼ ∼ ≤
n→∞ λn+1 U (βn+1 ) n→∞ U (βn+1 ) U (t)
U (βn+1 x) λn+1 U (βn+1 x) ψ(x)
≤ ∼ ∼ = ψ(x)
U (βn ) n→∞ λn U (βn ) n→∞ ψ(1)
for all x, for which (2.4.12) holds. The application of Lemma 2.1.1 finishes the proof.
3) Apply Proposition 2.4.1 to

n(F(xbn) − 1) → H(x),    nF(−xbn) → H(−x),    x > 0,    n → ∞,

with αn = n, βn = bn. It follows that 1 − F(x) = P(X1 > x) and F(−x) = P(X1 < −x) are regularly varying at +∞, H(x) = c1 x^{ρ1}, H(−x) = c2 x^{ρ2}, and

P(X1 > x) ∼ x^{ρ1} L1(x),    P(X1 < −x) ∼ x^{ρ2} L2(x),    x → +∞,    (2.4.13)

where L1, L2 are slowly varying at +∞.

Since (2.4.11) holds, lim_{n→∞} (n/bn²) ( μ(εbn) − ( ∫_{−εbn}^{εbn} x dF(x) )² ) is a bounded function of ε in a neighbourhood of zero; hence, by Proposition 2.4.1 with αn = n/bn², βn = bn, the function μ(x) − ( ∫_{−x}^{x} y dF(y) )² is regularly varying at +∞. By Theorem 2.1.4, ρ1 = ρ2 = −α, c1 < 0, c2 > 0, and evidently,

P(|X1| > x) = 1 − F(x) + F(−x) ∼ x^{−α} (L1(x) + L2(x)) =: x^{−α} L(x),    x → +∞,

so (2.4.3) holds.

Exercise 2.4.2
Show that μ(x) ∼ x^{2−α} L3(x), x → +∞, with L3 slowly varying, is equivalent to (2.4.3). Show that the tail balance condition (2.4.2) follows from (2.4.13) with ρ1 = ρ2 = −α.
So we have proven that (2.4.5) ⇒ (2.4.2), (2.4.3) (or, equivalently, (2.4.1), (2.4.2)). Now let us prove the converse statement.

4) Let (2.4.1) hold. Since L is slowly varying, one can find a sequence {bn}, bn → ∞, n → ∞, s.t. (n/bn^α) L(bn) → c > 0, a constant (compare Remark 2.4.1, c)). Then

(n/bn²) μ(bn x) ∼ (n/bn²) (bn x)^{2−α} L(bn x) = (n/bn^α) L(bn x) x^{2−α} ∼ c x^{2−α},    x > 0,

and hence

n(F(xbn) − 1) → c1 x^{−α},    nF(−xbn) → c2 x^{−α},    n → ∞.    (2.4.14)

Exercise 2.4.3
1) Show the last relation. Then 1) of Theorem 2.1.3 holds.
2) Prove that 2) of Theorem 2.1.3 holds as well, as a consequence of (n/bn²) μ(bn x) ∼ c x^{2−α}, n → ∞, and (2.4.14).

Then, by Theorem 2.1.3, Sn →d X, n → ∞, and (2.4.5) holds. Lemma 2.4.2 is proven.

The proof of Theorem 2.4.1 is thus complete. Part a) and the second half of part c) of Remark 2.4.1 remain unproven.

2.5 Further properties of stable laws


Proposition 2.5.1. Let X ∼ Sα (λ, β, γ) with α ∈ (1, 2]. Then EX = λγ.

In addition to a proof using the law of large numbers (see Exercise 4.1.14), let us give an alternative proof here.

Proof By Corollary 2.3.3, E|X| < ∞ if α ∈ (1, 2); for α = 2, X is Gaussian, and hence E|X| < ∞ holds trivially. Set μ = λγ. By Remark 2.3.1, 5), X − μ is strictly stable, i.e., X1 − μ + X2 − μ =d c2(X − μ) by Definition 2.1.3, where X1 =d X2 =d X, all r.v.'s independent. Taking expectations on both sides yields 2E(X − μ) = c2 E(X − μ). Since cn = n^{1/α} by Remark 2.1.5, c2 = 2^{1/α} ≠ 2 for α > 1, and hence E(X − μ) = 0 ⇒ EX = μ = λγ.

Now we go on to derive a series representation of stable random variables. Some preparatory definitions are in order.
Definition 2.5.1
Let X and Y be two random variables, possibly defined on different probability spaces. One says that X is a representation of Y if X =d Y.

Definition 2.5.2
Let {τi}i∈N be a sequence of i.i.d. Exp(λ)-distributed random variables with λ > 0. Set Tn = Σ_{i=1}^n τi ∀n ∈ N, T0 = 0, and N(t) = max{n ∈ N0 : Tn ≤ t}, t ≥ 0. The random process N = {N(t), t ≥ 0} is called a Poisson process with intensity λ. The time instants Tn are called arrival times, and the τi are called interarrival times.
Exercise 2.5.1
Prove the following properties of a Poisson process N:

1. N(t) ∼ Poisson(λt), t > 0, and, in particular, EN(t) = VarN(t) = λt.

2. Let Ni = {Ni(t), t ≥ 0}, i = 1, 2, be two independent Poisson processes with intensities λi, i = 1, 2. Then {N1(t) + N2(t), t ≥ 0} is a Poisson process with intensity λ1 + λ2 (which is called the superposition N1 + N2 of N1 and N2).

3. Tn ∼ Γ(n, λ), n ∈ N, where Γ(n, λ) is the Gamma distribution with parameters n and λ, and ETn = n/λ.

Clearly, all Tn are dependent random variables.
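The construction in Definition 2.5.2 translates directly into a simulation recipe: cumulative sums of i.i.d. Exp(λ) interarrival times give the arrival times Tn. A minimal sketch in Python with NumPy (the function name is ours), which also checks EN(t) = λt from item 1 of the exercise empirically:

import numpy as np

rng = np.random.default_rng(0)

def poisson_arrival_times(lam, t_max, rng):
    # Arrival times T_1 < T_2 < ... of a Poisson process of intensity lam on [0, t_max],
    # built from i.i.d. Exp(lam) interarrival times as in Definition 2.5.2.
    arrivals = []
    t = rng.exponential(1.0 / lam)
    while t <= t_max:
        arrivals.append(t)
        t += rng.exponential(1.0 / lam)
    return np.array(arrivals)

# Empirical check of E N(t) = lam * t (Exercise 2.5.1, item 1).
lam, t = 2.0, 5.0
counts = [poisson_arrival_times(lam, t, rng).size for _ in range(10000)]
print(np.mean(counts), lam * t)  # both values should be close to 10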

Proposition 2.5.2. Let N = {N(t), t ≥ 0} be a Poisson process with intensity one (λ = 1) and arrival times {Tn}n∈N. Let {Rn}n∈N be a sequence of i.i.d. random variables, independent of {Tn}n∈N. Then X = Σ_{n=1}^∞ Tn^{−1/α} Rn is a strictly α-stable random variable, provided that α ∈ (0, 2] and this series converges a.s.

Proof Let Xi = Σ_{n=1}^∞ (Tn^{(i)})^{−1/α} Rn^{(i)}, i = 1, 2, 3, be three independent copies of X, where {Rn^{(i)}} =d {Rn}, {Tn^{(i)}} =d {Tn}, i = 1, 2, 3, and all three sequences are independent. By Definition 2.1.4 and Remark 2.1.5, it suffices to show that

X1 + X2 =d 2^{1/α} X,    X1 + X2 + X3 =d 3^{1/α} X.

Here 2^{1/α} X = Σ_{n=1}^∞ (Tn/2)^{−1/α} Rn, where {Tn/2}n∈N is the sequence of arrival times of a Poisson process 2N with intensity λ = 2, since τn/2 = (Tn − Tn−1)/2 ∀n and P(τn/2 ≥ x) = P(τn ≥ 2x) = exp(−2x), x ≥ 0. It is clear that X1 + X2 = Σ_{n=1}^∞ (T′n)^{−1/α} R′n, where {T′n} are the arrival times of the superposition N1 + N2 (being a Poisson process of intensity 2, cf. Exercise 2.5.1), and

R′n = Rk^{(1)}, if T′n = Tk^{(1)} for some k ∈ N,    R′n = Rm^{(2)}, if T′n = Tm^{(2)} for some m ∈ N.

Since {R′n}n∈N =d {Rn}n∈N and N1 + N2 =d 2N, we have X1 + X2 =d 2^{1/α} X, so we are done. For X1 + X2 + X3, the proof is analogous.

In order to get a series representation of an SαS random variable X, we have to ensure the a.s. convergence of this series. For that, we impose restrictions on α ∈ (0, 2) and on {Rn}: we assume Rn = εn Wn, where εn = sign(Rn) equals +1 if Rn > 0 and −1 if Rn ≤ 0, and Wn = |Rn| with EWn^α < ∞.
Theorem 2.5.1 (LePage representation):
Let {εn}, {Wn}, {Tn} be independent sequences of random variables, where {εn}n∈N are i.i.d. Rademacher random variables, i.e., εn = +1 with probability 1/2 and εn = −1 with probability 1/2, {Wn}n∈N are i.i.d. random variables with E|Wn|^α < ∞, α ∈ (0, 2), and {Tn}n∈N is the sequence of arrival times of a unit rate Poisson process N (λ = 1).
Then X := Σ_{n=1}^∞ εn Tn^{−1/α} Wn ∼ Sα(σ, 0, 0), where this series converges a.s., σ = E|W1|^α / cα, and cα is the constant introduced in Proposition 2.3.5.
Remark 2.5.1
1) Proposition 2.5.2 yields the fact that X ∼ SαS, but it does not give insight into the value of σ.
2) Since the distribution of X depends only on E|W1|^α, it does not matter which {Wn} we choose. A usual choice is Wn ∼ U[0, 1] or Wn ∼ N(0, 1). Hence, the Wn do not need to be non-negative, as in the comment before Theorem 2.5.1.
3) The LePage representation is not used to simulate stable variables, since the convergence of the series is rather slow. Instead, the methods of Chapter 3 are widely used.
4) Skewed stable variables have another series representation, which will be given (without proof) in Theorem 2.5.3 below.
5) It follows directly from Theorem 2.5.1 that any SαS random variable X ∼ Sα(λ, 0, 0) has the LePage representation X =d (cα λ / E|W1|^α)^{1/α} Σ_{n=1}^∞ εn Tn^{−1/α} Wn, where the sequences {εn}, {Wn}, {Tn} are chosen as above. In particular, choosing the law of W1 s.t. E|W1|^α = λ reduces the representation to X =d cα^{1/α} Σ_{n=1}^∞ εn Tn^{−1/α} Wn. Since Tn ↑ +∞ a.s. as n → ∞, the terms εn Tn^{−1/α} Wn decrease stochastically, and one can show that the very first term ε1 T1^{−1/α} W1 dominates the whole tail behaviour of X. In more detail, by Proposition 2.3.5 it holds that P(X > x) ∼ (1/2) cα λ x^{−α}, x → +∞, and it is not difficult to see that
a) P(cα^{1/α} ε1 T1^{−1/α} W1 > x) ∼ (1/2) cα λ x^{−α} as x → +∞,
b) P(Σ_{n=2}^∞ εn Tn^{−1/α} Wn > x) = o(x^{−α}) as x → +∞.
Exercise 2.5.2
Prove statement a) of item 5) in the previous remark.
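As a numerical illustration of Theorem 2.5.1 (not a practical sampler, cf. item 3) of the remark above), one can truncate the LePage series after finitely many terms. A minimal Python/NumPy sketch, assuming the choice Wn ∼ N(0, 1); the truncation level and the function name are ours:

import numpy as np

def lepage_sas(alpha, n_terms, rng):
    # Truncated LePage series sum_{n <= n_terms} eps_n T_n^{-1/alpha} W_n.
    # Truncation introduces a bias, and convergence in n_terms is slow (Remark 2.5.1, 3)).
    T = np.cumsum(rng.exponential(1.0, size=n_terms))  # arrival times, unit rate Poisson process
    eps = rng.choice([-1.0, 1.0], size=n_terms)        # Rademacher signs
    W = rng.standard_normal(n_terms)                   # any W with E|W|^alpha < infinity works
    return np.sum(eps * T ** (-1.0 / alpha) * W)

rng = np.random.default_rng(1)
samples = np.array([lepage_sas(1.5, 10000, rng) for _ in range(1000)])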

Proof of Theorem 2.5.1 1) Let {Un}n∈N be a sequence of i.i.d. U[0, 1]-distributed random variables, independent of {εn}n∈N and {Wn}n∈N. Then {Yn}n∈N given by Yn = εn Un^{−1/α} Wn, n ∈ N, is a sequence of symmetric i.i.d. random variables. Let us show that the law of Y1 lies in the domain of attraction of an SαS random variable. For that, compute its tail probability:

P(|Y1| > x) = P(U1^{−1/α} |W1| > x) = P(U1 < x^{−α} |W1|^α)
= ∫_0^∞ P(U1 < x^{−α} ω^α) dF|W|(ω) = x^{−α} ∫_0^x ω^α dF|W|(ω) + ∫_x^∞ dF|W|(ω)
= x^{−α} ∫_0^x ω^α dF|W|(ω) + P(|W1| > x),

where F|W|(x) = P(|W1| ≤ x). So,

lim_{x→+∞} x^α P(|Y1| > x) = ∫_0^∞ ω^α dF|W|(ω) + lim_{x→+∞} x^α P(|W1| > x) = E|W1|^α,

where the last limit equals 0 since E|W1|^α < ∞. Hence, condition (2.4.3) of Theorem 2.4.1 is satisfied. Due to the symmetry of Y1, the tail balance condition (2.4.2) is obviously true with p = q = 1/2. Then, by Theorem 2.4.1 and Corollary 2.1.1, it holds that (1/n^{1/α}) Σ_{k=1}^n Yk →d X ∼ Sα(σ, 0, 0), n → ∞, where the parameters (λ, β, γ) of the limiting stable law come from the proof of Theorem 2.1.1 with c1 = c2 = E|W1|^α / 2 (due to the symmetry of Y1 and X).

2) Rewrite (1/n^{1/α}) Σ_{k=1}^n Yk to show that its limiting random variable X coincides with Σ_{k=1}^∞ εk Tk^{−1/α} Wk.
Exercise 2.5.3
Let N be a Poisson process with intensity λ > 0 built upon arrival times {Tn}n∈N. Show that
a) under the condition {Tn+1 = t}, it holds that (T1/t, . . . , Tn/t) =d (u(1), . . . , u(n)), where u(k), k = 1, . . . , n, are the order statistics of a sample (u1, . . . , un) with uk ∼ U(0, 1) i.i.d. random variables;
b) (T1/Tn+1, . . . , Tn/Tn+1) =d (u(1), . . . , u(n)).

Reorder the terms Yk in the sum Σ_{k=1}^n Yk in order of ascending uk, so as to get Σ_{k=1}^n εk u(k)^{−1/α} Wk. Since the Wk and εk are i.i.d., this does not change the distribution of the whole sum. Then

(1/n^{1/α}) Σ_{k=1}^n Yk =d (1/n^{1/α}) Σ_{k=1}^n εk U(k)^{−1/α} Wk =d (1/n^{1/α}) Σ_{k=1}^n εk (Tk/Tn+1)^{−1/α} Wk = (Tn+1/n)^{1/α} Sn,

where Sn := Σ_{k=1}^n εk Tk^{−1/α} Wk, by Exercise 2.5.3, b). Then, by part 1), (Tn+1/n)^{1/α} Sn →d X, n → ∞, with X as above.
3) Show that Sn → Σ_{k=1}^∞ εk Tk^{−1/α} Wk a.s.; then we are done, since then X =d Σ_{k=1}^∞ εk Tk^{−1/α} Wk ∼ Sα(σ, 0, 0). By the strong law of large numbers, it holds that

Tn+1/n = (Tn+1/(n + 1)) · ((n + 1)/n) = ((1/(n + 1)) Σ_{i=1}^{n+1} τi) · ((n + 1)/n) → Eτ1 = 1 a.s., n → ∞,

since the Poisson process N has the unit rate, i.e., τ1 ∼ Exp(1). Then P(A) = 1, where A = {lim_{n→∞} Tn/n = 1} ∩ {T1 > 0}. Let us show that for every realisation of {Tn} with ω ∈ A, the series Σ_{k=1}^∞ εk (Tk(ω))^{−1/α} Wk converges a.s. Apply the following three-series theorem by Kolmogorov (without proof).
Theorem 2.5.2 (Three-series theorem by Kolmogorov):
Let {Yn}n∈N be a sequence of independent random variables. Then Σ_{n=1}^∞ Yn converges a.s. iff ∀s > 0

a) Σ_{n=1}^∞ P(|Yn| > s) < ∞,

b) Σ_{n=1}^∞ E(Yn I(|Yn| ≤ s)) converges,

c) Σ_{n=1}^∞ Var(Yn I(|Yn| ≤ s)) < ∞.

See the proof in [1, Theorem IX.9.2].


Let us check conditions a)–c) above, conditionally on {Tn} with ω ∈ A. ∀s > 0:

a) Σ_{n=1}^∞ P(|εn Tn^{−1/α} Wn| > s) = Σ_{n=1}^∞ P(|Wn|^α > s^α Tn) ≤ Σ_{n=1}^∞ P(|W1|^α > s^α c1 n) < ∞, since ∃c1, c2 > 0 : c1 n ≤ Tn(ω) ≤ c2 n ∀n > N(ω) (due to Tn(ω)/n → 1, n → ∞, ∀ω ∈ A) and E|W1|^α < ∞ by assumption.

b) It holds that E(εn Tn^{−1/α} Wn I(|εn Tn^{−1/α} Wn| ≤ s)) = Eεn · E(Tn^{−1/α} Wn I(|Tn^{−1/α} Wn| ≤ s)) = 0 by the independence of εn from Tn and Wn and by the symmetry of {εn}, since Eεn = 0. Then Σ_{n=1}^∞ E(εn Tn^{−1/α} Wn I(|εn Tn^{−1/α} Wn| ≤ s)) = 0 < ∞.
c)

Σ_{n=1}^∞ Var(εn Tn^{−1/α} Wn I(|εn Tn^{−1/α} Wn| ≤ s)) = (by b)) Σ_{n=1}^∞ E(Tn^{−2/α} Wn² I(|Tn^{−1/α} Wn| ≤ s))
≤ c1^{−2/α} Σ_{n=1}^∞ n^{−2/α} E(W1² I(|W1| ≤ s (c2 n)^{1/α})) = c1^{−2/α} Σ_{n=1}^∞ n^{−2/α} ∫_0^{s(c2 n)^{1/α}} w² dF|W|(w)
≤ c3 ∫_0^∞ x^{−2/α} ∫_0^{s(c2 x)^{1/α}} w² dF|W|(w) dx = (Fubini) c3 ∫_0^∞ w² dF|W|(w) ∫_{s^{−α} c2^{−1} w^α}^∞ x^{−2/α} dx
= c4 ∫_0^∞ w^α dF|W|(w) = c4 E|W1|^α < ∞,

where c3, c4 > 0, and the inner integral in x converges since 2/α > 1 for α < 2.

Hence, by Theorem 2.5.2, Sn → Σ_{k=1}^∞ εk Tk^{−1/α} Wk a.s., and X =d Σ_{k=1}^∞ εk Tk^{−1/α} Wk ∼ Sα(σ, 0, 0).

Theorem 2.5.3 (LePage representation for skewed stable variables):
Let {Wn}n∈N be a sequence of i.i.d. random variables and let N = {N(t), t ≥ 0} be a unit rate Poisson process with arrival times {Tn}n∈N, independent of {Wn}n∈N. Assume E|W1|^α < ∞ if α ∈ (0, 2), α ≠ 1, and E|W1 log(|W1|)| < ∞ if α = 1. Then X := Σ_{n=1}^∞ (Tn^{−1/α} Wn − κn^{(α)}) ∼ Sα(λ, β, 0), where this series converges a.s., λ = E|W1|^α / cα with cα being the constant introduced in Proposition 2.3.5, β = E(|W1|^α sign W1) / E|W1|^α, and

κn^{(α)} = 0, 0 < α < 1,
κn^{(α)} = E( W1 ∫_{|W1|/n}^{|W1|/(n−1)} (sin x)/x² dx ), α = 1,
κn^{(α)} = (α/(α − 1)) ( n^{(α−1)/α} − (n − 1)^{(α−1)/α} ) EW1, α > 1.

If α = 1, then

X := Σ_{n=1}^∞ ( Tn^{−1} Wn − E( W1 ∫_{|W1|/n}^{|W1|/(n−1)} (sin x)/x² dx ) ) ∼ S1(λ, β, γ),    (2.5.1)

with λ and β as above, and γ = −(1/λ) E(W1 log |W1|).

Proof see [3, §1.5].

Some remarks are in order.

Remark 2.5.2
1) The statement of Theorem 2.5.3 can easily be converted into a representation: a random variable X ∼ Sα(λ, β, γ), 0 < α < 2, has the representation X =d λγ + Σ_{n=1}^∞ (Tn^{−1/α} Wn − κn^{(α)}), where the i.i.d. random variables {Wn}n∈N satisfy E|W1|^α = cα λ and E(|W1|^α sign W1) = cα βλ. Apart from these restrictions on {Wn}n∈N, the choice of their distribution is arbitrary.
2) Theorem 2.5.1 is a special case of Theorem 2.5.3 if we replace Wn by εn Wn, where {εn}n∈N are i.i.d. Rademacher random variables, independent of {Wn}n∈N.
3) The LePage representation of a stable subordinator X ∼ Sα(λ, 1, 0), λ > 0, α ∈ (0, 1), follows easily from Theorem 2.5.3. Indeed, set Wn = 1 ∀n. Then Σ_{n=1}^∞ Tn^{−1/α} ∼ Sα(cα^{−1}, 1, 0), so X =d λ^{1/α} cα^{1/α} Σ_{n=1}^∞ Tn^{−1/α}.
4) For α ≥ 1, the series Σ_{n=1}^∞ Tn^{−1/α} Wn diverges in general if the Wn are not symmetric. Hence, the correction κn^{(α)} is needed, which is of the order of E(Wn Tn^{−1/α}). Indeed, for α > 1, E(Tn^{−1/α} Wn) = ETn^{−1/α} · EWn ∼ n^{−1/α} EW1 ∼ κn^{(α)}. Analogously, for α = 1, E(Tn^{−1} Wn) ∼ n^{−1} EW1 ∼ ∫_{1/n}^{1/(n−1)} ((sin x)/x²) dx · EW1, as in (2.5.1).
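Item 3) is perhaps the simplest instance of the series: with Wn ≡ 1 and α ∈ (0, 1), no centering is needed and all terms are positive. A minimal Python/NumPy sketch of a truncated version (the constant cα from Proposition 2.3.5 is passed in as a number rather than computed here, and truncation introduces a bias):

import numpy as np

def lepage_subordinator(alpha, lam, c_alpha, n_terms, rng):
    # Truncation of X = (lam * c_alpha)^{1/alpha} * sum_n T_n^{-1/alpha} ~ S_alpha(lam, 1, 0),
    # 0 < alpha < 1 (Remark 2.5.2, 3)). The full series converges a.s. since T_n ~ n and 1/alpha > 1.
    T = np.cumsum(rng.exponential(1.0, size=n_terms))  # unit rate arrival times
    return (lam * c_alpha) ** (1.0 / alpha) * np.sum(T ** (-1.0 / alpha))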
The following result yields an integral form of the cumulative distribution function of a stable law.

Theorem 2.5.4
1) Let X ∼ Sα(1, 0, 0) be an SαS random variable, α ≠ 1, α ∈ (0, 2]. Then, for x > 0,

(1/π) ∫_0^{π/2} exp( −x^{α/(α−1)} κα(t) ) dt = P(0 ≤ X ≤ x), if α ∈ (0, 1),
(1/π) ∫_0^{π/2} exp( −x^{α/(α−1)} κα(t) ) dt = P(X > x), if α ∈ (1, 2],

where

κα(t) = ( sin(αt)/cos t )^{α/(1−α)} · ( cos((1 − α)t)/cos t ),    t ∈ (0, π/2).

2) Let X ∼ Sα(1, 1, 0), α ∈ (0, 1]. Then

P(X ≤ x) = (1/π) ∫_{−π/2}^{π/2} exp( −x^{α/(α−1)} κ̄α(t) ) dt,    x > 0,

where

κ̄α(t) = ( sin(α(π/2 + t))/sin(π/2 + t) )^{α/(1−α)} · ( sin((1 − α)(π/2 + t))/sin(π/2 + t) ),    t ∈ (−π/2, π/2).

See [5, Remark 1, p. 78].
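Both integrals are one-dimensional with a bounded integrand, so they can be evaluated by standard numerical quadrature. A sketch for part 1) with α ∈ (1, 2], assuming SciPy is available; the small endpoint offsets merely avoid 0/0 expressions in floating point:

import numpy as np
from scipy.integrate import quad

def kappa(t, alpha):
    # kappa_alpha(t) from Theorem 2.5.4, 1)
    return ((np.sin(alpha * t) / np.cos(t)) ** (alpha / (1.0 - alpha))
            * np.cos((1.0 - alpha) * t) / np.cos(t))

def tail(x, alpha):
    # P(X > x) for X ~ S_alpha(1, 0, 0), alpha in (1, 2], x > 0
    val, _ = quad(lambda t: np.exp(-x ** (alpha / (alpha - 1.0)) * kappa(t, alpha)),
                  1e-9, np.pi / 2.0 - 1e-9)
    return val / np.pi

# Sanity check: for alpha = 2, S_2(1, 0, 0) is N(0, 2) and the formula reduces to
# Craig's formula for the Gaussian tail; tail(1.0, 2.0) is close to 1 - Phi(1/sqrt(2)).
print(tail(1.0, 2.0))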
3 Simulation of stable variables
In general, the simulation of stable laws can be demanding. However, in some particular cases,
it is quite easy.
Proposition 3.0.1 (Lévy distribution). Let X ∼ S1/2(λ, 1, γ). Then X can be simulated via the representation X =d λ² Y^{−2} + λγ, where Y ∼ N(0, 1).
Proof It follows from Exercise 1.0.6, 1) and Theorem 2.3.1, 3), 4).
Proposition 3.0.2 (Cauchy distribution). Let X ∼ S1(λ, 0, γ). Then X can be simulated via the representations
1) X =d λ Y1/Y2 + λγ, where Y1 and Y2 are i.i.d. N(0, 1) random variables;
2) X =d λ tg(π(U − 1/2)) + λγ, where U ∼ Uniform[0, 1].
Proof 1) Use Exercise 4.1.29 and the scaling properties of stable laws given in Theorem 2.3.1, 3), 4).
2) By Example 1.0.2, it holds that tg Y =d Z, where Y ∼ U[−π/2, π/2] =d π(U − 1/2) and Z ∼ Cauchy(0, 1) ∼ S1(1, 0, 0). Then use again Theorem 2.3.1, 3), 4) to get X =d λZ + λγ.
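Both propositions translate into one-line samplers; a minimal Python/NumPy sketch (function names are ours):

import numpy as np

rng = np.random.default_rng(2)

def levy(lam, gamma, size, rng):
    # X ~ S_{1/2}(lam, 1, gamma) via Proposition 3.0.1: X = lam^2 * Y^{-2} + lam * gamma
    y = rng.standard_normal(size)
    return lam ** 2 / y ** 2 + lam * gamma

def cauchy(lam, gamma, size, rng):
    # X ~ S_1(lam, 0, gamma) via Proposition 3.0.2, 2): X = lam * tan(pi (U - 1/2)) + lam * gamma
    u = rng.uniform(size=size)
    return lam * np.tan(np.pi * (u - 0.5)) + lam * gamma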
Now we have reduced the simulation of the Lévy and Cauchy laws to the simulation of U[0, 1] and N(0, 1) random variables. A realisation of U[0, 1] is given by the generators of pseudorandom numbers built into any programming language. The simulation of N(0, 1) is more involved, and we give it in Proposition 3.0.3 below. From this, it can be easily seen that the method of Proposition 3.0.2, 2) is much more efficient and faster than that of Proposition 3.0.2, 1).

Proposition 3.0.3. 1) Let R and θ be independent random variables, R² ∼ Exp(1/2), θ ∼ U[0, 2π]. Then X1 = R cos θ and X2 = R sin θ are independent N(0, 1)-distributed random variables.
2) A random variable X ∼ N(μ, σ²) can be simulated by X =d μ + σ √(−2 log U) cos(2πV), where U, V ∼ U[0, 1] are independent.
Proof 1) For any x, y ∈ R consider

P(X1 ≤ x, X2 ≤ y) = P(R cos θ ≤ x, R sin θ ≤ y)
= (1/2π) ∫_0^{2π} ∫_0^∞ I(√t cos ϕ ≤ x, √t sin ϕ ≤ y) (1/2) e^{−t/2} dt dϕ    (substitute t = r²)
= (1/2π) ∫_0^{2π} ∫_0^∞ I(r cos ϕ ≤ x, r sin ϕ ≤ y) r e^{−r²/2} dr dϕ    (substitute x1 = r cos ϕ, x2 = r sin ϕ)
= (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} I(x1 ≤ x, x2 ≤ y) e^{−(x1² + x2²)/2} dx1 dx2
= (1/√(2π)) ∫_{−∞}^{x} e^{−x1²/2} dx1 · (1/√(2π)) ∫_{−∞}^{y} e^{−x2²/2} dx2 = P(X1 ≤ x) P(X2 ≤ y).

Hence, X1, X2 ∼ N(0, 1) are independent.


2) If X ∼ N(μ, σ²), then Y = (X − μ)/σ ∼ N(0, 1). By 1), Y =d R cos Θ, where R² ∼ Exp(1/2) and Θ =d 2πV, V ∼ U[0, 1]. Simulate R² by the inversion method, i.e., show that R² =d −2 log U, where U ∼ U[0, 1] is independent of V. Indeed, P(−2 log U ≤ x) = P(log U ≥ −x/2) = P(U ≥ e^{−x/2}) = 1 − e^{−x/2}, x ≥ 0. Hence −2 log U ∼ Exp(1/2); then it holds that (X − μ)/σ =d √(−2 log U) cos(2πV), and we are done.
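A minimal Python/NumPy sketch of the resulting Box–Muller-type algorithm; each call consumes two independent uniform samples per pair of independent normals:

import numpy as np

def box_muller(mu, sigma, size, rng):
    # Proposition 3.0.3, 2): X = mu + sigma * sqrt(-2 log U) * cos(2 pi V);
    # the companion sine coordinate yields an independent second sample.
    u = rng.uniform(size=size)
    v = rng.uniform(size=size)
    r = np.sqrt(-2.0 * np.log(u))   # R with R^2 = -2 log U ~ Exp(1/2)
    x1 = mu + sigma * r * np.cos(2.0 * np.pi * v)
    x2 = mu + sigma * r * np.sin(2.0 * np.pi * v)
    return x1, x2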

Remark 3.0.1 (Inverse function method):
From the proof of Proposition 3.0.3, 2), it follows that for X ∼ Exp(λ) it holds that X =d −(1/λ) log U, U ∼ U[0, 1], λ > 0. This is a particular case of the so-called inverse function simulation method: for any random variable X with c.d.f. FX(x) = P(X ≤ x) s.t. FX is increasing on (a, b), −∞ ≤ a < b ≤ +∞, lim_{x→a+} FX(x) = 0, lim_{x→b−} FX(x) = 1, it holds that X =d FX^{−1}(U), where U ∼ U[0, 1] and FX^{−1} is the quantile function of X. Indeed, we may write P(FX^{−1}(U) ≤ x) = P(U ≤ FX(x)) = FX(x), x ∈ (a, b), since P(U ≤ y) = y ∀y ∈ [0, 1].
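As a minimal illustration of the inverse function method, here is the Exp(λ) case used above (note that 1 − U =d U, so −log U may replace −log(1 − U)):

import numpy as np

def exponential(lam, size, rng):
    # Inversion method (Remark 3.0.1): X = -(1/lam) * log U ~ Exp(lam)
    return -np.log(rng.uniform(size=size)) / lam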
Theorem 3.0.1 (Simulation of Sα(1, 0, 0)):
Let X ∼ Sα(1, 0, 0), α ∈ (0, 2]. Then X can be simulated via the representation

X =d ( sin(απ(U − 1/2)) / (cos(π(U − 1/2)))^{1/α} ) · ( cos((1 − α)π(U − 1/2)) / (−log V) )^{(1−α)/α},    (3.0.1)

where U, V ∼ U[0, 1] are independent random variables.

Proof Denote T = π(U − 1/2), W = −log V. By Remark 3.0.1, it is clear that T ∼ U[−π/2, π/2] and W ∼ Exp(1). So (3.0.1) reduces to

X =d ( sin(αT) / (cos T)^{1/α} ) · ( cos((1 − α)T) / W )^{(1−α)/α}.    (3.0.2)

1) α = 1: Then (3.0.2) reduces to X =d tg T, which was proven in Proposition 3.0.2, 2).

2) α ∈ (0, 1): Under the condition T > 0, relation (3.0.2) rewrites as X =d Y = (Kα(T)/W)^{(1−α)/α}, where

Kα(t) = ( sin(αt)/cos t )^{α/(1−α)} · ( cos((1 − α)t)/cos t )

is the function κα of Theorem 2.5.4. Then

P(0 ≤ Y ≤ x) = P(0 ≤ Y ≤ x, T > 0)    (since Y ≥ 0 ⇔ T > 0)
= P( 0 ≤ (Kα(T)/W)^{(1−α)/α} ≤ x, T > 0 ) = P( W ≥ Kα(T) x^{−α/(1−α)}, T > 0 )
= (1/π) ∫_0^{π/2} P( W ≥ Kα(t) x^{−α/(1−α)} ) dt = (1/π) ∫_0^{π/2} exp( −x^{α/(α−1)} Kα(t) ) dt,

using W ∼ Exp(1). Hence, Y ∼ Sα(1, 0, 0) by Theorem 2.5.4 ⇒ X =d Y ∼ Sα(1, 0, 0).

3) α ∈ (1, 2] is proven analogously to 2), considering 1 − α < 0 and P(Y ≥ x) = P(Y ≥ x, T > 0).
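Formula (3.0.1) is cheap to evaluate; a minimal Python/NumPy sketch (the degenerate case α = 1 is handled separately via Proposition 3.0.2, 2)):

import numpy as np

def sas_standard(alpha, size, rng):
    # X ~ S_alpha(1, 0, 0), alpha in (0, 2], via formula (3.0.1)/(3.0.2)
    t = np.pi * (rng.uniform(size=size) - 0.5)  # T ~ U[-pi/2, pi/2]
    w = -np.log(rng.uniform(size=size))         # W ~ Exp(1)
    if alpha == 1.0:
        return np.tan(t)                        # Cauchy case
    return (np.sin(alpha * t) / np.cos(t) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * t) / w) ** ((1.0 - alpha) / alpha))

For α = 2 this returns √2 · √(2W) sin T, i.e., an N(0, 2) sample, in line with Remark 3.0.2 below.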

Remark 3.0.2
In the Gaussian case α = 2, formula (3.0.2) reduces to

X =d ( sin(2T)/√(cos T) ) · √(W/cos T) = 2 sin T cos T √W / cos T = √2 · √(2W) sin T,

where W ∼ Exp(1) and T ∼ U[−π/2, π/2], so that 2W ∼ Exp(1/2). Hence X ∼ N(0, 2) is generated essentially by algorithm 2) of Proposition 3.0.3, so formula (3.0.1) contains Proposition 3.0.3, 2) as a special case.
Now let us turn to the general case of simulating a random variable X ∼ Sα(λ, β, γ). We show first that, to this end, it suffices to know how to simulate X ∼ Sα(1, 1, 0).

Lemma 3.0.1
Let X ∼ Sα(λ, β, γ), α ∈ (0, 2). Then

X =d λγ + λ^{1/α} Y, if α ≠ 1,    X =d λγ + (2/π) βλ log λ + λY, if α = 1,    (3.0.3)

where Y ∼ Sα(1, β, 0) can be simulated by

Y =d ((1 + β)/2)^{1/α} Y1 − ((1 − β)/2)^{1/α} Y2, if α ≠ 1,
Y =d ((1 + β)/2) Y1 − ((1 − β)/2) Y2 + (1/π) ( (1 + β) log((1 + β)/2) − (1 − β) log((1 − β)/2) ), if α = 1,    (3.0.4)

with Y1, Y2 ∼ Sα(1, 1, 0) being independent random variables.

Proof Relation (3.0.4) follows from the proof of Proposition 2.3.3 and Exercise 4.1.28 (applied with λ = 1). Relation (3.0.3) follows easily from Theorem 2.3.1, 3)–4).
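Lemma 3.0.1 reduces the general case to a sampler for Sα(1, 1, 0). A Python sketch of this reduction, taking such a sampler as an argument (sketches of samplers for α ∈ (0, 1) and α ∈ [1, 2) follow Lemma 3.0.2 and Theorem 3.0.2 below); for α = 1 it assumes |β| < 1 so that the logarithms in (3.0.4) are finite:

import numpy as np

def stable(alpha, lam, beta, gamma, size, rng, skewed):
    # skewed(alpha, size, rng) must return samples of S_alpha(1, 1, 0).
    y1 = skewed(alpha, size, rng)
    y2 = skewed(alpha, size, rng)
    p, q = (1.0 + beta) / 2.0, (1.0 - beta) / 2.0
    if alpha != 1.0:
        y = p ** (1.0 / alpha) * y1 - q ** (1.0 / alpha) * y2  # (3.0.4)
        return lam * gamma + lam ** (1.0 / alpha) * y          # (3.0.3)
    y = p * y1 - q * y2 + (2.0 * p * np.log(p) - 2.0 * q * np.log(q)) / np.pi
    return lam * gamma + (2.0 / np.pi) * beta * lam * np.log(lam) + lam * y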

Now let us simulate X ∼ Sα(1, 1, 0). First, we do it for α ∈ (0, 1).

Lemma 3.0.2
Let X ∼ Sα(1, 1, 0), α ∈ (0, 1). Then X can be simulated by

X =d ( sin(αθ) / (sin θ)^{1/α} ) · ( sin((1 − α)θ) / W )^{(1−α)/α},

where θ and W are independent random variables, θ ∼ U[0, π], W ∼ Exp(1). As before, θ and W can be simulated by θ =d πU, U ∼ U[0, 1], and W =d −log V, V ∼ U[0, 1], where U and V are independent.

Proof By Theorem 2.5.4, 2), we have the following representation formula for the c.d.f. FX(x) = P(X ≤ x):

FX(x) = (1/π) ∫_{−π/2}^{π/2} exp( −x^{α/(α−1)} κ̄α(t) ) dt,    x > 0,

where

κ̄α(t) = ( sin(α(π/2 + t))/sin(π/2 + t) )^{α/(1−α)} · ( sin((1 − α)(π/2 + t))/sin(π/2 + t) ),    t ∈ (−π/2, π/2).

The rest of the proof is exactly as in Theorem 3.0.1, 2), with θ = π/2 + t ∼ U[0, π].
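A minimal Python/NumPy sketch of the resulting sampler (this is Kanter's method for totally skewed stable variables):

import numpy as np

def skewed_01(alpha, size, rng):
    # X ~ S_alpha(1, 1, 0), alpha in (0, 1), via Lemma 3.0.2; all factors are
    # positive since alpha * theta and (1 - alpha) * theta lie in (0, pi).
    theta = np.pi * rng.uniform(size=size)  # theta ~ U[0, pi]
    w = -np.log(rng.uniform(size=size))     # W ~ Exp(1)
    return (np.sin(alpha * theta) / np.sin(theta) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * theta) / w) ** ((1.0 - alpha) / alpha))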

Similar results can be proven for α ∈ [1, 2) :



Theorem 3.0.2
The random variable X ∼ Sα(1, 1, 0), α ∈ [1, 2), can be simulated by

X =d (2/π) ( (π/2 + T) tg T − log( ((π/2) W cos T) / (π/2 + T) ) ), if α = 1,
X =d (1 + tg²(πα/2))^{1/(2α)} · ( sin(α(T + π/2)) / (cos T)^{1/α} ) · ( cos((1 − α)T − απ/2) / W )^{(1−α)/α}, if α ∈ (1, 2),

where W ∼ Exp(1) and T ∼ U[−π/2, π/2] are independent random variables.


Without proof.
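A Python sketch of the corresponding sampler. The α = 1 branch follows the theorem literally; for α ∈ (1, 2) we use the classical Chambers–Mallows–Stuck form with the shift angle C = arctan(tg(πα/2))/α written out for β = 1, which agrees with the formula of Theorem 3.0.2 up to trigonometric identities but keeps the base of the fractional power positive in floating point; the match with the notes' parametrization should be checked before serious use:

import numpy as np

def skewed_12(alpha, size, rng):
    # X ~ S_alpha(1, 1, 0), alpha in [1, 2), cf. Theorem 3.0.2.
    t = np.pi * (rng.uniform(size=size) - 0.5)  # T ~ U[-pi/2, pi/2]
    w = -np.log(rng.uniform(size=size))         # W ~ Exp(1)
    if alpha == 1.0:
        return (2.0 / np.pi) * ((np.pi / 2.0 + t) * np.tan(t)
                - np.log((np.pi / 2.0) * w * np.cos(t) / (np.pi / 2.0 + t)))
    c = np.arctan(np.tan(np.pi * alpha / 2.0)) / alpha  # shift angle for beta = 1
    b = (1.0 + np.tan(np.pi * alpha / 2.0) ** 2) ** (1.0 / (2.0 * alpha))
    return (b * np.sin(alpha * (t + c)) / np.cos(t) ** (1.0 / alpha)
            * (np.cos(t - alpha * (t + c)) / w) ** ((1.0 - alpha) / alpha))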
4 Additional exercises

4.1
Exercise 4.1.1
Let X1 , X2 be two i.i.d. r.v.’s with probability density ϕ. Find a probability density of aX1 +bX2 ,
where a, b ∈ R.
Exercise 4.1.2
Let X be a symmetric stable random variable and X1 , X2 be its two independent copies. Prove
that X is a strictly stable r.v., i.e., for any positive numbers A and B, there is a positive number
C such that
AX1 + BX2 =d CX.
Exercise 4.1.3
1. Prove that ϕ(x) = e^{−|x|}, x ∈ R, is a characteristic function. (Check Pólya's criterion for characteristic functions.1)

2. Let X be a real r.v. with characteristic function ϕ. Is X a stable random variable? (Verify the definition.)
Exercise 4.1.4
Let the real r.v. X be Lévy distributed (see Exercise Sheet 1, Ex. 1-4). Find the characteristic function of X. Give the parameters (α, σ, β, µ) of the stable random variable X.
Hint: You may use the following formulas.2

∫_0^∞ (e^{−1/(2x)} / x^{3/2}) cos(yx) dx = √(2π) e^{−√|y|} cos(√|y|),    y ∈ R,
∫_0^∞ (e^{−1/(2x)} / x^{3/2}) sin(yx) dx = √(2π) e^{−√|y|} sin(√|y|) sign y,    y ∈ R.
Exercise 4.1.5
Let Y be a Cauchy distributed r.v. Find the characteristic function of Y. Give parameters
(α, σ, β, µ) for the stable random variable Y.
Hint: Use Cauchy’s residue theorem.
Exercise 4.1.6
Let X ∼ S1(σ, β, µ) and a > 0. Is aX stable? If so, find the new parameters (α2, σ2, β2, µ2) of aX.
Exercise 4.1.7
Let X ∼ N (0, σ 2 ) and A be a positive α−stable r.v. Is the new r.v. AX stable, strictly stable?
If so, find its stability index α2 .
1
Pólya’s theorem. If ϕ is a real-valued, even, continuous function which satisfies the conditions ϕ(0) = 1,
ϕ is convex for t > 0, limt→∞ ϕ(t) = 0, then ϕ is the characteristic function of an absolutely continuous
symmetric distribution.
2 Oberhettinger, F. (1973). Fourier transforms of distributions and their inverses: a collection of tables. Academic Press, p. 25.


Exercise 4.1.8
Let L be a positive slowly varying function, i.e., ∀x > 0

lim_{t→+∞} L(tx)/L(t) = 1.    (4.1.1)

1. Prove that x^{−ε} ≤ L(x) ≤ x^{ε} for any fixed ε > 0 and all x sufficiently large.

2. Prove that the limit (4.1.1) is uniform on finite intervals 0 < a < x < b.

Hint: Use a representation theorem:3 a function Z varies slowly iff it is of the form Z(x) = a(x) exp( ∫_1^x (ε(y)/y) dy ), where ε(x) → 0 and a(x) → c < ∞ as x → ∞.
Definition 4.1.1 (Infinitely divisible distributions):
A distribution function F is called infinitely divisible if for all n ≥ 1 there is a distribution function Fn such that

Z =d Xn,1 + · · · + Xn,n,

where Z ∼ F and Xn,k, 1 ≤ k ≤ n, are i.i.d. r.v.'s with the distribution function Fn.
Exercise 4.1.9
For the following distribution functions check whether they are infinitely divisible.

1. (1 point) Gaussian distribution.

2. (1 point) Poisson distribution.

3. (1 point) Gamma distribution.

Exercise 4.1.10
Find parameters (a, b, H) in the canonic Lévy-Khintchin representation of a characteristic func-
tion for

1. (1 point) Gaussian distribution.

2. (1 point) Poisson distribution.

3. (1 point) Lévy distribution.

Exercise 4.1.11
What is wrong with the following argument? If X1 , . . . , Xn ∼ Gamma(α, β) are independent,
then X1 + · · · + Xn ∼ Gamma(nα, β), so gamma distributions must be stable distributions.
Exercise 4.1.12
Let Xi, i ∈ N, be i.i.d. r.v.'s with a density symmetric about 0 and continuous and positive at 0. Prove that

(1/n) (1/X1 + · · · + 1/Xn) →d X, n → ∞,

where X is a Cauchy distributed random variable.
Hint: First, apply Khintchin's theorem (T. 2.2 in the lecture notes). Then find parameters a, b and a spectral function H from Gnedenko's theorem (T. 2.3 in the lecture notes).
3
Feller, W. (1973). An Introduction to Probability Theory and its Applications. Vol 2, p.282

Exercise 4.1.13
Show that the sum of two independent stable random variables with different α-s is not stable.
Exercise 4.1.14
Let X ∼ Sα (λ, β, γ). Using the weak law of large numbers prove that when α ∈ (1, 2], the shift
parameter µ = λγ equals EX.
Exercise 4.1.15
Let X be a standard Lévy distributed random variable. Compute its Laplace transform

E exp(−γX), γ > 0.

Exercise 4.1.16
Let X ∼ Sα′(λ′, 1, 0) and A ∼ S_{α/α′}(λA, 1, 0), 0 < α < α′ < 1, be independent. The value of λA is chosen s.t. the Laplace transform of A is given by E exp(−γA) = exp(−γ^{α/α′}), γ > 0. Show that Z = A^{1/α′} X has an Sα(λ, 1, 0) distribution for some λ > 0.
Exercise 4.1.17
Let X ∼ Sα(λ, 1, 0), α < 1, and let the Laplace transform of X be given by E exp(−γX) = exp(−cα γ^α), γ > 0, where cα = λ^α / cos(πα/2).

1. Show that

lim_{x→∞} x^α P{X > x} = Cα,

where Cα is a positive constant.
Hint: Use the Tauberian theorem.4

2. (2 points) Prove that

E|X|^p < ∞ for any 0 < p < α,    and    E|X|^p = ∞ for any p ≥ α.

Exercise 4.1.18
Let X1 , X2 be two independent α-stable random variables with parameters (λ, β, γ). Prove that
X1 − X2 is a stable random variable and find its parameters (α1 , λ1 , β1 , γ1 ).
Exercise 4.1.19
Let X1, . . . , Xn be i.i.d. Sα(λ, β, γ) distributed random variables and Sn = X1 + · · · + Xn. Prove that the limiting distribution of

1. n^{−1/α} Sn, n → ∞, if α ∈ (0, 1);

2. n^{−1} (Sn − 2π^{−1} λβ n log n) − λγ, n → ∞, if α = 1;

3. n^{−1/α} (Sn − nλγ), n → ∞, if α ∈ (1, 2],

is Sα(λ, β, 0).

4 (Feller 1971, Theorem XIII.5.4.) If L is slowly varying at infinity and ρ ∈ R+, the following relations are equivalent:

U(t) ∼ (1/Γ(ρ + 1)) t^ρ L(t), t → ∞,    and    ∫_0^∞ e^{−τx} dU(x) ∼ (1/τ)^ρ L(1/τ), τ → 0.
Exercise 4.1.20
Let X1 , X2 . . . , be a sequence of i.i.d. random variables and let p > 0. Applying the Borel-
Cantelli lemmas, show that
1. E|X1 |p < ∞ if and only if limn→∞ n−1/p Xn = 0 a.s.,

2. E|X1 |p = ∞ if and only if lim supn→∞ n−1/p Xn = ∞ a.s.


Exercise 4.1.21
Let ξ be a non-negative random variable with the Laplace transform E exp(−λξ) = exp(−λ^α), λ ≥ 0. Prove that

Eξ^{αs} = Γ(1 − s) / Γ(1 − αs),    s ∈ (0, 1).
Exercise 4.1.22
Denote by

f̃(s) := ∫_0^∞ e^{−sx} f(x) dx

the Laplace transform of a real function f, defined for all s > 0 whenever f̃ is finite. For the following functions, find the Laplace transforms (in terms of f̃):
1. For a ∈ R, f1(x) := f(x − a), x ∈ R+, where f(x) = 0 for x < 0.
2. For b > 0, f2(x) := f(bx), x ∈ R+.
3. f3(x) := f′(x), x ∈ R+.
4. f4(x) := ∫_0^x f(u) du, x ∈ R+.
Exercise 4.1.23
Let f˜, g̃ be Laplace transforms of functions f, g : R+ → R+ .
1. Find the Laplace transform of the convolution f ∗ g.

2. Prove the final value theorem: lims→0 sf˜(s) = limt→∞ f (t).


Exercise 4.1.24
Let {Xn}n≥0 be i.i.d. r.v.'s with a density symmetric about 0 and continuous and positive at 0. Applying Theorem 2.8 from the lecture notes, prove that the cumulative distribution function F(x) := P(X1^{−1} ≤ x), x ∈ R, belongs to the domain of attraction of a stable law G. Find its parameters (α, λ, β, γ) and sequences an, bn s.t. (1/bn) Σ_{i=1}^n Xi^{−1} − an →d Y ∼ G as n → ∞.
Exercise 4.1.25
Let {Xn}n≥0 be i.i.d. r.v.'s with, for x > 1,

P(X1 > x) = θ x^{−δ},    P(X1 < −x) = (1 − θ) x^{−δ},

where 0 < δ < 2. Applying Theorem 2.8 from the lecture notes, prove that the c.d.f. F(x) := P(X1 ≤ x), x ∈ R, belongs to the domain of attraction of a stable law G. Find its parameters (α, λ, β, γ) and sequences an, bn s.t. (1/bn) Σ_{i=1}^n Xi − an →d Y ∼ G as n → ∞.

Exercise 4.1.26
Let X be a random variable with probability density function f(x). Assume that f(0) ≠ 0 and that f(x) is continuous at x = 0. Prove that

1. if 0 < r ≤ 1/2, then |X|^{−r} belongs to the domain of attraction of a Gaussian law,

2. if r > 1/2, then |X|^{−r} belongs to the domain of attraction of a stable law with stability index 1/r.

Exercise 4.1.27
Find a distribution F which has infinite second moment and yet it is in the domain of attraction
of the Gaussian law.
Exercise 4.1.28
Prove the following statement, which is used in the proof of Proposition 2.3.3.
Let X ∼ Sα(λ, β, 0) with α ∈ (0, 2). Then there exist two i.i.d. r.v.'s Y1 and Y2 with common distribution Sα(λ, 1, 0) s.t.

X =d ((1 + β)/2)^{1/α} Y1 − ((1 − β)/2)^{1/α} Y2, if α ≠ 1,
X =d ((1 + β)/2) Y1 − ((1 − β)/2) Y2 + (λ/π) ( (1 + β) log((1 + β)/2) − (1 − β) log((1 − β)/2) ), if α = 1.

Exercise 4.1.29
Prove that for α ∈ (0, 1) and fixed λ, the family of distributions Sα (λ, β, 0) is stochastically
ordered in β, i.e., if Xβ ∼ Sα (λ, β, 0) and β1 ≤ β2 then P(Xβ1 ≥ x) ≤ P(Xβ2 ≥ x) for x ∈ R.
Exercise 4.1.30
Prove the following theorem.
Theorem 4.1.1
A distribution function F is in the domain of attraction of a stable law with exponent α ∈ (0, 2) if and only if there are constants C+, C− ≥ 0, C+ + C− > 0, such that

1. lim_{y→+∞} F(−y)/(1 − F(y)) = C−/C+ if C+ > 0, and = +∞ if C+ = 0;

2. for every a > 0,

lim_{y→+∞} (1 − F(ay))/(1 − F(y)) = a^{−α}, if C+ > 0,
lim_{y→+∞} F(−ay)/F(−y) = a^{−α}, if C− > 0.
Bibliography
[1] W. Feller. An introduction to probability theory and its applications, volume 2. John Wiley
& Sons, 2008.

[2] J. Nolan. Stable Distributions: Models for Heavy-Tailed Data.

[3] G. Samorodnitsky and M. S. Taqqu. Stable non-Gaussian random processes: stochastic


models with infinite variance, volume 1. CRC press, 1994.

[4] K. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Ad-
vanced Mathematics. Cambridge University Press, 1999.

[5] V. V. Uchaikin and V. M. Zolotarev. Chance and stability: stable distributions and their
applications. Walter de Gruyter, 1999.

[6] V. M. Zolotarev. One-dimensional stable distributions, volume 65. American Mathematical


Soc., 1986.

[7] V. M. Zolotarev. Modern theory of summation of random variables. Walter de Gruyter,


1997.

