Stable Distributions
Lecture Notes
Ulm
2016
Contents
1 Introduction
4 Additional exercises
Literature
Foreword
These lecture notes are based on the bachelor course "Stable distributions", which originally
took place at Ulm University during the summer term 2016.
In modern applications, there is a need to model phenomena that are measured by very
high numerical values occurring rarely. In probability theory, one talks about distributions
with heavy tails. One class of such distributions is the stable laws, which (apart from the Gaussian
one) do not have a finite variance. So the aim of this course was to give an introduction to
the theory of stable distributions, their basic facts and properties.
The choice of material for the course is selective and was mainly dictated by its introductory
nature and the limited lecture time. The main topics of these lecture notes are
1) Stability with respect to convolution
2) Characteristic functions and densities
3) Non-Gaussian limit theorem for i.i.d. random summands
4) Representations and tail properties, symmetry and skewness
5) Simulation.
For each topic, several exercises are included for a deeper understanding of the subject. Since
the target audience is bachelor students of mathematics, no prerequisites other than a basic
probability course are assumed.
You can find more information about this course at: https://www.uni-ulm.de/mawi/
mawi-stochastik/lehre/ss16/stable-distributions/
The author hopes you find these notes helpful. If you notice an error or would like to discuss
a topic further, please do not hesitate to contact the author at [email protected].
The author is also grateful to Dr. Vitalii Makogin for typesetting these lectures in LaTeX,
making the illustrations, and selecting the exercises.
1 Introduction
Let (Ω, F, P) be an abstract probability space. The property of stability of random variables
with respect to (w.r.t.) convolution is known to you from the basic course of probability. Let
X1 ∼ N(µ1, σ1²) and X2 ∼ N(µ2, σ2²) be independent random variables. Then X1 + X2 ∼
N(µ1 + µ2, σ1² + σ2²). One can restate this property as follows. Let X1 =d X2 =d X ∼ N(0, 1),
where X, X1, X2 are independent and "=d" denotes equality in distribution. Then ∀a, b ∈ R,
aX1 + bX2 ∼ N(0, a² + b²), and so

aX1 + bX2 =d cX, where c = √(a² + b²) ≥ 0.    (1.0.1)

Additionally, for any i.i.d. random variables X1, ..., Xn with Xi =d X, ∀i = 1, ..., n, it holds
that Σ_{i=1}^n Xi =d √n X. Property (1.0.1) can be rewritten in terms of the cumulative
distribution functions of X1, X2, X as Φ(x/a) ∗ Φ(x/b) = Φ(x/c), x ∈ R, where
Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−t²/2} dt, x ∈ R, and ∗ is the convolution operation.
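Property (1.0.1) is easy to check by simulation. The following sketch (assuming NumPy is available; the sample size, seed and the choice of a, b are arbitrary) compares the empirical mean and standard deviation of aX1 + bX2 with those of √(a² + b²)X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a, b = 2.0, 3.0

# a*X1 + b*X2 for independent standard normal X1, X2
left = a * rng.standard_normal(n) + b * rng.standard_normal(n)
# sqrt(a^2 + b^2) * X for a single standard normal X
right = np.sqrt(a ** 2 + b ** 2) * rng.standard_normal(n)

# both samples should have mean ~ 0 and standard deviation ~ sqrt(13)
print(left.mean(), left.std(), right.mean(), right.std())
```

Both empirical standard deviations come out close to √13, in line with (1.0.1).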
It turns out that the normal law is not the only one satisfying (1.0.1). Hence, it motivates the
following definition.
Definition 1.0.1
A random variable X is stable if ∀a, b ∈ R+ ∃c > 0, d ∈ R s.t.

aX1 + bX2 =d cX + d,    (1.0.2)

where X1, X2 are independent copies of X. X as above is called strictly stable if d = 0.
Remark 1.0.1
Let FX be the cumulative distribution function (c.d.f.) of X, i.e., FX(y) = P(X ≤ y), y ∈ R.
Then property (1.0.2) can be rewritten as FX(y/a) ∗ FX(y/b) = FX((y − d)/c), y ∈ R, if
a, b, c ≠ 0. The case c = 0 corresponds to X ≡ const a.s., which is a degenerate case. Obviously,
a constant random variable is always stable. Property (1.0.1) shows that X ∼ N(0, 1) is strictly
stable.
Exercise 1.0.1
Show that X ∼ N (µ, σ 2 ) is stable for any µ ∈ R, σ 2 > 0. Find the parameters c and d in (1.0.2)
for it. Prove that X ∼ N (µ, σ 2 ) is strictly stable if and only if (iff) µ = 0.
The notion of (strict) stability was first introduced by Paul Lévy in his book Calcul des
probabilités (1925). However, stable distributions (different from the normal ones) were known
long before. Thus, the French mathematicians Poisson and Cauchy, some 150 years before Lévy,
found the distribution with density

fλ(x) = λ/(π(x² + λ²)), x ∈ R,    (1.0.3)
depending on a parameter λ > 0. Now this distribution bears the name of Cauchy, and it is
known to be strictly stable. Its characteristic function ϕλ(t) = ∫_R e^{itx} fλ(x) dx, t ∈ R, has
the form ϕλ(t) = e^{−λ|t|}.
In 1919, the Danish astronomer J. Holtsmark found a law of random fluctuation of the gravita-
tional field of some stars in space, which had characteristic function ϕ(t) = e^{−λ‖t‖^{3/2}}, t ∈ R³,
which led to the family of characteristic functions

ϕ(t) = e^{−λ|t|^α}, t ∈ R, λ > 0.    (1.0.4)

For α = 3/2, the corresponding law is strictly stable and now bears the name of Holtsmark. It took
some time until it was proven by P. Lévy in 1927 that ϕ(t) as in (1.0.4) is a valid characteristic
function of some (strictly stable) distribution only for α ∈ (0, 2]. The theory of stable random
variables took its modern form after 1938, when the books by P. Lévy and A. Khinchin were
published.
Let us give further examples of stable laws and of their applications.
Example 1.0.1 (Constants):
Any constant c is evidently a stable random variable.
Example 1.0.2 (Cauchy distribution in nuclear physics):
Let a point source of radiation R be located at (0, 0, 1) and radiate its elementary particles
onto a screen S = {(x, y, 0) : x, y ∈ R}. The screen S is covered by a thin layer of metal, so
that it yields light flashes as the emitted particles reach it. Let (u, v, 0) be the coordinates
of one of these (random) flashes. Due to the symmetry of this picture (the whole process of
radiation is rotationally symmetric around the axis RA, cf. Fig. 1.1), it is sufficient to find the
distribution of one coordinate of (u, v), say u (=d v). Project the whole picture onto the plane
(x, z). Let FU(x) = P(U ≤ x) be the c.d.f. of U. The angle α of the ray RU varies in (0, π)
if the particle arrives at S. It is logical to assume that α ∼ U[0, π]. Since tg(α − π/2) = u/1 = u,
it follows that for any x ∈ R,

{U ≤ x} = {tg(α − π/2) ≤ x} = {α ≤ π/2 + arctan x}.

So,

FU(x) = P(α ≤ π/2 + arctan x) = (π/2 + arctan x)/π = 1/2 + (1/π) arctan x
      = ∫_{−∞}^x dy/(π(1 + y²)) = ∫_{−∞}^x f1(y) dy,

with f1(·) as in (1.0.3), λ = 1. So U ∼ Cauchy(0, 1). For instance, this distribution describes the
distribution of the energy of unstable states in nuclear reactions (Lorentz law).
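The construction above also gives a simple way to simulate the Cauchy law: draw the angle α uniformly on (0, π) and take the tangent. A minimal sketch (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.uniform(0.0, np.pi, size=1_000_000)  # angle of the ray RU
u = np.tan(alpha - np.pi / 2)                    # flash coordinate

# F_U(x) = 1/2 + arctan(x)/pi: median 0, quartiles at -1 and +1,
# so about half of the flashes land in [-1, 1]
print(np.median(u), np.mean(np.abs(u) <= 1.0))
```

The empirical median is close to 0 and the fraction of |U| ≤ 1 is close to 1/2, as predicted by FU.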
ϕ(s) = Σ_{n=0}^∞ qn s^n, |s| < 1,    (1.0.5)

ϕ(s) = 1 − √(1 − s) = (1/(2√π)) Σ_{n=1}^∞ (Γ(n − 1/2)/n!) s^n, |s| < 1,    (1.0.6)
which follows from ϕ(0) = 0, ϕ′(0) = 1/(2√(1 − 0)) = 1/2, ϕ′′(0) = (1/4)(1 − 0)^{−3/2} = 1/4,
and so on: ϕ^(n)(0) = Γ(n − 1/2)/(2Γ(1/2)), n ∈ N.
Exercise 1.0.5
Prove it inductively.
Recall Stirling's formula for the Gamma function: ∀x > 0,

Γ(x) = √(2π/x) (x/e)^x e^{µ(x)}, where 0 < µ(x) < 1/(12x).

Comparing the forms (1.0.5) and (1.0.6), we get
3. EX = Var X = ∞.
4. The standard Lévy distribution is strictly stable with c = 4 in (1.0.2), i.e., for independent
X1 =d X2 =d X: X1 + X2 =d 4X.

The graph of fX(·) has its mode at x = 1/3, and f(0) = 0 by continuity, since
lim_{x→+0} f(x) = 0. Relation (1.0.2) from Exercise 1.0.6(4) can be interpreted as
(X1 + X2)/2 =d 2X: the arithmetic mean of X1 =d X2 =d X is distributed as 2X. Compare this
with the same property of the Cauchy distribution.
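The relation X1 + X2 =d 4X can be checked by simulation, assuming the standard Lévy law here is the one with density (1/√(2π)) x^{−3/2} e^{−1/(2x)}, x > 0 (which indeed has its mode at x = 1/3); that law can be sampled as 1/Z² with Z ∼ N(0, 1). A sketch (sample size, seed and tolerance are arbitrary; quantiles are compared since the mean is infinite):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# standard Levy samples via X = 1/Z^2, Z ~ N(0,1)  (assumed representation)
x  = 1.0 / rng.standard_normal(n) ** 2
x1 = 1.0 / rng.standard_normal(n) ** 2
x2 = 1.0 / rng.standard_normal(n) ** 2

# strict stability with c = 4: X1 + X2 should be distributed as 4X;
# compare empirical medians, since the mean of a Levy law is infinite
print(np.median(x1 + x2), np.median(4 * x))
```

The two medians agree closely, illustrating that the sum of two copies spreads out four times as much as a single one.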
Remark 2.1.1
Notice that this definition does not require the r.v. X1 to have a finite variance or even a finite
mean. But if σ² = Var X1 ∈ (0, +∞), then X ∼ N(0, 1) according to the central limit theorem,
with bn = √n σ and an = √n µ/σ, where µ = EX1.
Definition 2.1.2
A non-constant random variable X is stable if its characteristic function has the form ϕX(s) =
e^{η(s)}, s ∈ R, where η(s) = λ(isγ − |s|^α + isω(s, α, β)), s ∈ R, with

ω(s, α, β) = |s|^{α−1} β tg(πα/2) for α ≠ 1, and ω(s, α, β) = −β(2/π) log |s| for α = 1,    (2.1.2)

α ∈ (0, 2], β ∈ [−1, 1], γ ∈ R, λ > 0. Here α is called the stability index, β the coefficient of
skewness, λ the scale parameter, and µ = λγ the shift parameter.
We denote the class of all stable distributions with the parameters (α, β, λ, γ) given above by
Sα(λ, β, γ). Sometimes the shift parameter µ is used instead of γ: Sα(λ, β, µ). X ∈ Sα(λ, β, γ)
means that X is a stable r.v. with parameters (α, β, λ, γ).
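As a sanity check of this parametrisation: for α = 1, β = γ = 0, λ = 1, Definition 2.1.2 gives ϕX(s) = e^{−|s|}, the characteristic function of the Cauchy law (1.0.3) with λ = 1. A simulation sketch comparing the empirical characteristic function with e^{−|s|} (sample size, seed and evaluation points are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_cauchy(1_000_000)  # samples from S_1(1, 0, 0)

for s in (0.5, 1.0, 2.0):
    emp = np.mean(np.exp(1j * s * x))     # empirical E e^{isX}
    print(s, abs(emp - np.exp(-abs(s))))  # deviation from e^{-|s|}
```

The deviations are of Monte Carlo size (order 1/√n), consistent with ϕX(s) = e^{−|s|}.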
Unfortunately, the parametrisation of η(s) in Definition 2.1.2 is not a continuous function
of the parameters (α, β, λ, γ). It can easily be seen that ω(s, α, β) → ∞ as α → 1 for any β ≠ 0,
instead of tending to −β(2/π) log |s|. To remedy this, one can introduce an additive shift
+λβ tg(πα/2) into η(s) for α → 1 − 0.
Exercise 2.1.2
Show this convergence for α → 1 ± 0.
Let us give two more definitions of stability.
Definition 2.1.3
A random variable X is stable if, for a sequence of i.i.d. r.v.'s {Xi}i∈N with Xi =d X ∀i ∈ N,
and for any n ≥ 2, ∃cn > 0 and dn ∈ R s.t.

Σ_{i=1}^n Xi =d cn X + dn.    (2.1.4)
Definition 2.1.4
It turns out that Definition 2.1.3 can be weakened: it is sufficient for stability of X to
require (2.1.4) to hold only for n = 2, 3. We call this Definition 2.1.4.
Now let us formulate the equivalence statement.
Theorem 2.1.1
Definitions 1.0.1, 2.1.1-2.1.4 are all equivalent for a non-degenerate random variable X (i.e.,
X ≢ const).
The proof of this result will require a number of auxiliary statements, which will now be
formulated. The first of them is a limit theorem describing the domains of attraction of infinitely
divisible laws.
Theorem 2.1.2 (Khinchin):
Let {Xnj, j = 1, ..., kn, n ∈ N} be a sequence of series of independent random variables with
the property

lim_{n→∞} max_{j=1...kn} P(|Xnj| > ε) = 0, ∀ε > 0,    (2.1.5)

and with c.d.f.'s Fnj. Let Sn = Σ_{j=1}^{kn} Xnj − an, n ∈ N. Then a random variable X with
c.d.f. FX is a weak limit of Sn (Sn →d X, n → ∞) iff the characteristic function ϕX of X has
the form

ϕX(s) = exp( isa − bs² + ∫_{x≠0} (e^{isx} − 1 − is sin x) dH(x) ), s ∈ R,    (2.1.6)
3. Laws of X with ch.f. ϕX as in (2.1.6) are called infinitely divisible. For more properties
of those, see lectures “Stochastics II”.
continuity points of H in (2.1.6). Introduce σnε = Σ_{j=1}^{kn} Var(Xnj I(|Xnj| < ε)), ε > 0. Let

an = An(y) − a − ∫_{|u|<y} u dH(u) + ∫_{|u|≥y} (1/u) dH(u), n ∈ N.

Then Sn →d X, n → ∞ (or Fn → FX, n → ∞, weakly) iff

2) lim_{ε→0} lim sup_{n→∞} σnε = lim_{ε→0} lim inf_{n→∞} σnε = 2b.

Without proof.
Remark 2.1.3
1. In order for Sn = (1/bn) Σ_{i=1}^n Xi − an from Definition 2.1.1 to fulfill condition
(2.1.5), it is sufficient to require bn → ∞, n → ∞. Indeed, in this case Xnj = Xj/bn
and, since the Xj are i.i.d., lim_{n→∞} max_{j=1...kn} P(|Xnj| > ε) = lim_{n→∞} P(|X1| > εbn) = 0 if
bn → ∞.
2. Property (2.1.5) holds whenever Fn → FX weakly, where FX is non-degenerate, i.e.,
X ≢ const a.s. Indeed, assume that (2.1.5) does not hold, i.e., lim_{n→∞} max_{j=1...kn} P(|Xnj| > ε) ≠ 0
for some ε > 0. Then there exists a subsequence nk → ∞ as k → ∞ s.t. b_{nk} = O(1). Since
Sn →d X, n → ∞, it holds that ϕ_{S_{nk}}(s) → ϕX(s), k → ∞, where

ϕ_{S_{nk}}(s) = E exp( is Σ_{j=1}^{nk} Xj/b_{nk} − isa_{nk} ) = e^{−isa_{nk}} (ϕ_{X1}(s/b_{nk}))^{nk}, s ∈ R,

so ϕ_{S_{nk}}(s) = (ϕ_{X1}(s/b_{nk}))^{nk} (1 + o(1)), k → ∞. Then for each small s ∈ Bδ(0), |ϕ_{X1}(s)| =
|ϕX(s b_{nk})|^{1/nk} (1 + o(1)) → 1, k → ∞, which can only be if |ϕ_{X1}(s)| ≡ 1 ∀s ∈ R, and
hence |ϕX(s)| ≡ 1, which means X ≡ const a.s. This contradicts our assumption
X ≢ const.
Definition 2.1.5
1) A function L : (0, +∞) → (0, +∞) is called slowly varying at infinity if, for any x > 0,

L(tx)/L(t) → 1, t → +∞.

2) A function U : (0, +∞) → (0, +∞) is called regularly varying at infinity if U(x) =
x^ρ L(x), ∀x > 0, for some ρ ∈ R and some slowly varying (at infinity) function L.
Example 2.1.1
1. L(x) = |log(x)|^p, x > 0, is slowly varying for each p ∈ R.
2. If lim_{x→+∞} L(x) = p ∈ (0, +∞), then L is slowly varying.
3. U(x) = (1 + x²)^p, x > 0, is regularly varying for each p ∈ R, with ρ = 2p.
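These definitions can be illustrated numerically; the sketch below (the functions, exponents and evaluation points are arbitrary choices) checks L(tx)/L(t) → 1 for L(x) = (log x)² and the x^{2p} growth of U(x) = (1 + x²)^p:

```python
import math

def L(x):
    return math.log(x) ** 2        # slowly varying at infinity

def U(x, p=0.3):
    return (1 + x * x) ** p        # regularly varying with rho = 2p

# L(tx)/L(t) -> 1 (very slowly), U(tx)/U(t) -> x**(2p)
print(L(10 * 1e300) / L(1e300))    # close to 1
print(U(10 * 1e6) / U(1e6))        # close to 10**0.6
```

Note how slow the convergence for L is: t must be astronomically large before the ratio is near 1, which is typical of slowly varying functions.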
Lemma 2.1.1
A monotone function U : (0, +∞) → (0, +∞) is regularly varying at ∞ iff U(tx)/U(t) → ψ(x),
t → +∞, on a dense subset A of (0, +∞), with ψ(x) ∈ (0, +∞) on an interval I ⊆ R+.
The proof of Theorem 2.1.1 will make use of the following important statement, which is
interesting in its own right.
Theorem 2.1.4
Let X be a stable r.v. in the sense of Definition 2.1.1 with characteristic function ϕX as in
(2.1.6). Then its spectral function H has the form

H(x) = −c1 x^{−α} for x > 0, and H(x) = c2 (−x)^{−α} for x < 0, where α ∈ (0, 2), c1, c2 ≥ 0.
Using Theorem 2.1.3, condition 1), it means that ∀x ∈ XH, n(F(bn x) − (1/2)(1 + sign x)) →
H(x), n → ∞, where F(y) = P(Xi ≤ y), y ∈ R, and XH denotes the set of continuity points of H.
Consider the case x > 0. If H ≢ 0 on R+, then ∃x0 ∈ XH, x0 > 0, with q := −H(x0) > 0;
compare Fig. 2.1. For each t > 0, find an n = n(t) ∈ N s.t. n(t) = min{k : bk x0 ≤ t < b_{k+1} x0}.
Since F̄(x) := 1 − F(x) is non-increasing on R+, we get (2.1.7).
The same holds for the right-hand side of (2.1.7). Hence, for any x, y > 0 s.t. x0x, x0y, x0xy ∈
XH, we have F̄(txy)/F̄(t) → L(xy), t → +∞, by the same reasoning. As a result, we get the
separation property L(xy) = L(x)L(y), which holds for all x, y > 0 (maybe except for a countable
number of exceptions, since the complement of XH is at most countable).
By the definition of L(x) := −H(x0 x)/q, L : R+ → R+ is non-increasing, L(1) = 1, L(∞) = 0.
It can be shown (cf. the proof of Lemma 2.1.1) that the solution of the system

L(xy) = L(x)L(y), L(1) = 1, L(∞) = 0,

is L(x) = 1/x^α, α > 0. Hence, for x > 0, H(x) = −qL(x/x0) = H(x0) x^{−α}/x0^{−α} =
x0^α H(x0) x^{−α} = −c1 x^{−α}, c1 ≥ 0. Since ∫_{0<|x|<1} x² dH(x) < ∞ (cf. Theorem 2.1.2), it
holds that ∫_{0<x<1} x^{2−α−1} dx < ∞ ⇔ 2 − α > 0 ⇔ α < 2. Hence 0 < α < 2, and c1 ≥ 0 can
be arbitrary.
The case x < 0 is treated analogously and leads to the representation H(x) = c2 (−x)^{−δ}, c2 ≥
0, 0 < δ < 2.
Show that α = δ. Since F̄(tx)/F̄(t) ∼ x^{−α}, t → ∞, for x > 0, F̄ is regularly varying
by Lemma 2.1.1. Hence, there exists a slowly varying function h1 : (0, +∞) → (0, +∞) s.t.
F̄(x) = x^{−α} h1(x), x > 0. By property 1) of Theorem 2.1.3, nF̄(bn x) = n bn^{−α} x^{−α} h1(bn x) →
c1 x^{−α}, n → ∞. Since h1(bn x)/h1(bn) → 1, n → ∞, it holds that

c1 ← n bn^{−α} h1(bn x) = n bn^{−α} h1(bn) (h1(bn x)/h1(bn)) ∼ n bn^{−α} h1(bn), n → ∞.    (2.1.8)

Analogously, we get F(x) = (−x)^{−δ} h2(−x), x < 0, where h2 : (0, +∞) → (0, +∞) is slowly
varying, and n bn^{−δ} h2(bn) ∼ c2. Assuming c1, c2 > 0 (otherwise the statement becomes trivial,
since either α or δ can be chosen arbitrarily), we get bn^{−α+δ} h1(bn)/h2(bn) → c1/c2 > 0, n → ∞,
where h1/h2 is slowly varying at +∞, which is possible only if α = δ.
Corollary 2.1.1
Under the conditions of Theorem 2.1.4, assume that c1 + c2 > 0. Then the normalizing sequence
bn in Definition 2.1.1 behaves as bn ∼ n^{1/α} h(n), where h : (0, +∞) → (0, +∞) is slowly varying
at +∞.

Proof Assume, for simplicity, c1 > 0. Then formula (2.1.8) yields n ∼ c1 bn^α h1^{−1}(bn), α ∈ (0, 2).
Hence, bn ∼ n^{1/α} c1^{−1/α} (h1(bn))^{1/α} = n^{1/α} h(n), where h(n) = (c1^{−1} h1(bn))^{1/α} is slowly varying
at +∞ due to the properties of h1.
Proof of Theorem 2.1.1. 1) Show the equivalence of Definitions 2.1.1 and 2.1.2.
Let X be a non-constant r.v. with characteristic function ϕX as in (2.1.6). Assume that
X is stable in the sense of Definition 2.1.1. By Theorem 2.1.4, its spectral function H has
the form H(x) = −c1 x^{−α} for x > 0 and H(x) = c2 (−x)^{−α} for x < 0, α ∈ (0, 2), c1, c2 ≥ 0.
Plugging it into formula (2.1.6): log ϕX(s) = isa − bs² + c1 Qα(s) + c2 Q̄α(s), s ∈ R (the bar
denoting the complex conjugate), where
Qα(s) = −∫_0^∞ (e^{isx} − 1 + is sin x) d(x^{−α}) = Re(ψα(i, t))|_{t=−is},

and ψα(z, t) = t ∫_0^∞ (e^{−zx} − e^{−tx}) x^{−α} dx for z, t ∈ C with Re z, Re t > 0, α ∈ (0, 2).
Integrating by parts, we get
ψα(z, t) = (t/(1 − α)) ∫_0^{+∞} (z e^{−zx} − t e^{−tx}) x^{1−α} dx
= (t/(1 − α)) [ z^{α−1} ∫_0^{+∞} e^{−zx} (zx)^{1−α} d(zx) − t^{α−1} ∫_0^{+∞} e^{−tx} (tx)^{1−α} d(tx) ]   (substituting xz = y, xt = y)
= (t/(1 − α)) [ z^{α−1} ∫_0^{+∞} e^{−y} y^{2−α−1} dy − t^{α−1} ∫_0^{+∞} e^{−y} y^{2−α−1} dy ]
= (t Γ(2 − α)/(1 − α)) (z^{α−1} − t^{α−1}), for any α ≠ 1, Re z, Re t > 0.

For α = 1,

ψ1(z, t) = lim_{α→1} ψα(z, t) = lim_{α→1} (t Γ(2 − α)/(1 − α)) (z^{α−1} − t^{α−1})
= lim_{1−α→0} (t/(1 − α)) (e^{(α−1) log z} − e^{(α−1) log t})   (set x = 1 − α)
= lim_{x→0} (t/x) (1 − x log z − 1 + x log t + o(x)) = t (log t − log z) = t log(t/z).
Then for α ≠ 1 we get

Qα(s) = −is (Γ(2 − α)/(1 − α)) [ Re(e^{i(π/2)(α−1)}) − (−is)^{α−1} ]
= −is (Γ(2 − α)/(1 − α)) [ Re(e^{i(π/2)(α−1)}) − e^{−i(π/2)(α−1) sign s} |s|^{α−1} ]
= −is Γ(1 − α) [ sin(πα/2) − (sin(πα/2) + i (sign s) cos(πα/2)) |s|^{α−1} ]
= −Γ(1 − α) cos(πα/2) |s|^α − is (1 − |s|^{α−1}) Γ(1 − α) sin(πα/2).
For α = 1
i.e., by (2.1.9), exp{−bs² − d|s|^α} = exp{−bs² k^{1−2/α} − d|s|^α}, which is only possible if b = 0.
Now set

λ = d if c1 + c2 > 0, and λ = b if c1 + c2 = 0 (Gaussian case);

β = (c1 − c2)/λ if c1 + c2 > 0, and β = 0 if c1 + c2 = 0 (Gaussian case);    (2.1.10)

γ = (1/λ)(a + ā), where ā = (c2 − c1) Γ(1 − α) sin(πα/2) if α ≠ 1, and ā = 0 if α = 1.
sequence {rm}m∈N, rm → n as m → ∞, with rm = 2^{jm} 3^{km}. Let cn(m) = c2^{jm} c3^{km}, m ∈ N. Show
that {cn(m)}m∈N is bounded. It follows from (2.1.13) that rm Re(η(s)) = Re(η(cn(m) s)).
Assume that cn(m) is unbounded; then there is a subsequence {cn(m′)} such that |cn(m′)| → ∞,
m′ → ∞. Set s′ = s cn(m′) in the last equation. Since rm′ → n, m′ → ∞, we get Re η(s′) =
rm′ Re η(s′/cn(m′)) → 0, m′ → ∞. Hence |ϕX(s)| ≡ 1, which cannot be, due to the assumption
X ≢ const.
Thus {cn(m)}m∈N is bounded, and there is a subsequence {cn(m′)}m′∈N such that cn(m′) →
cn, m′ → ∞. Then a_{jm′ km′} = (1/(is)) (rm′ η(s) − η(cn(m′) s)) → (1/(is)) (n η(s) − η(cn s)) =: dn.
Hence, ∀n ∈ N and s ∈ R, it holds that nη(s) = η(cn s) + is dn, which is the statement of
equation (2.1.12), so we are done.
Remark 2.1.4
It follows from the proof of Theorem 2.1.1, 1) that the parameter β = (c1 − c2)/(c1 + c2) if
c1 + c2 > 0 in the non-Gaussian case. Consider the extremal values β = ±1. It is easy to see
that β = 1 corresponds to c2 = 0, and β = −1 to c1 = 0. This corresponds to the following
situations in Definition 2.1.1:
a) Consider {Xn}n∈N to be i.i.d. and positive a.s., i.e., X1 > 0 a.s. By Theorem 2.1.3, 1) it
follows that H(x) = 0, x < 0 ⇒ c2 = 0 ⇒ β = 1.
b) Consider {Xn}n∈N to be i.i.d. and negative a.s. As above, we conclude that H(x) = 0, x > 0,
and c1 = 0 ⇒ β = −1.
Although this relation cannot be inverted (from β = ±1 it does not follow that X > 0 (< 0) a.s.),
it explains the situation of total skewness of a non-Gaussian X as a limit of sums of positive
or negative i.i.d. random variables Sn = (1/bn) Σ_{i=1}^n Xi − an.
Remark 2.1.5
One can show that cn = n^{1/α} in Definition 2.1.3, formula (2.1.4), for α ∈ (0, 2].

Proof We prove it only for strictly stable laws. First, for α = 2 (Gaussian case, X, Xi ∼
N(0, 1)) it holds that Σ_{i=1}^n Xi ∼ N(0, n) =d √n X ⇒ cn = n^{1/α} with α = 2.
Now let α ∈ (0, 2). Let X be strictly stable, so that Σ_{i=1}^n Xi =d cn X. Take n = 2^k; then

Sn = (X1 + X2) + (X3 + X4) + ... + (X_{n−1} + Xn) =d c2 (X1′ + X2′ + ... + X′_{n/2}) =d ... =d c2^k X,

where the Xj′ denote independent copies of X. From this it follows that cn = c_{2^k} = c2^k =
c2^{log n/log 2}, so

log cn = (log n/log 2) log c2 = log n^{log c2/log 2}, i.e., cn = n^{1/α2},    (2.1.14)

where α2 = log 2/log c2, for n = 2^k, k ∈ N. Generalizing the above approach to n = m^k, we get

cn = n^{1/αm}, αm = log m/log cm, n = m^k, k ∈ N.    (2.1.15)

To prove that cn = n^{1/α0} for a single exponent α0, it suffices to show that αr = αρ for all r, ρ.
Now, by (2.1.15), c_{r^j} = r^{j/αr} and c_{ρ^k} = ρ^{k/αρ}. But for each k there exists a j such that
r^j < ρ^k ≤ r^{j+1}. Then

(c_{r^j})^{αr/αρ} < c_{ρ^k} = ρ^{k/αρ} ≤ r^{1/αρ} (c_{r^j})^{αr/αρ}.    (2.1.16)
Note that S_{m+n} is the sum of the independent variables Sm and S_{m+n} − Sm, distributed,
respectively, as cm X and cn X. Thus, for symmetric stable distributions, c_{m+n} X =d cm X1 + cn X2.
Next, put η = m + n and notice that, due to the symmetry of the variables X, X1, X2, we have for
t > 0: 2P(X > t) ≥ P(X2 > t cη/cn). It follows that for η > n the ratios cn/cη remain bounded.
So it follows from (2.1.16) that

r ≥ (c_{ρ^k}/c_{r^j})^{αr} (c_{ρ^k})^{αρ − αr},

and hence αr ≥ αρ. Interchanging the roles of r and ρ, we find similarly that αr ≤ αρ, and hence
αr = αρ ≡ α0 for any r, ρ ∈ N.
We conclude that cn = n^{1/α0}, n ∈ N. It can further be shown that α0 = α.
Definition 2.1.6
A random variable X (or its distribution PX) is said to be symmetric if X =d −X. X is symmetric
about µ ∈ R if X − µ is symmetric. If X is α-stable and symmetric, we write X ∼ SαS. This
definition is justified by the property: X ∼ Sα(λ, β, γ) is symmetric ⇔ γ = β = 0, which will
be proven later.
1. log ϕX(s) = λ(−|s|^α + isω(s, α, β)) for α ≠ 1, and log ϕX(s) = λ(isγ − |s|) for α = 1; i.e.,
γ = 0 for α ≠ 1 and β = 0 for α = 1, with ω(s, α, β) as in (2.1.2).
2. (form C) log ϕX(s) = −λC |s|^α exp(−i(π/2)θα sign s), where α ∈ (0, 2], λC > 0, |θ| ≤ θα =
min{1, 2/α − 1}.
1) X has a density (i.e., an absolutely continuous distribution), which is bounded together with
all its derivatives.
2) X1 + X2 ∼ Sα(λ, β, γ) with

λ = λ1 + λ2, β = (β1λ1 + β2λ2)/(λ1 + λ2), γ = (λ1γ1 + λ2γ2)/(λ1 + λ2).
7) Let α ≠ 1. X is strictly stable iff γ = 0.

log ϕ_{X1+X2}(s) = log(ϕX1(s) ϕX2(s)) = log ϕX1(s) + log ϕX2(s)
= Σ_{j=1}^2 λj (isγj − |s|^α + s|s|^{α−1} iβj tg(πα/2)) = λ(isγ − |s|^α + isω(s, α, β))

with

λ = λ1 + λ2, γ = (λ1γ1 + λ2γ2)/(λ1 + λ2), β = (λ1β1 + λ2β2)/(λ1 + λ2).

So X1 + X2 ∼ Sα(λ, β, γ) by Definition 2.1.2.
3) log ϕ_{X+a}(s) = isa + λisγ − λ|s|^α + λisω(s, α, β) = λ(is(γ + a/λ) − |s|^α + isω(s, α, β)),
hence X + a ∼ Sα(λ, β, γ + a/λ).
4) Consider the case λ ≠ 1.
Remark 2.3.1
1) The analytic form of the density of a stable law Sα (λ, β, γ) is explicitly known only in the
cases α = 2 (Gaussian law), α = 1 (Cauchy law), α = 1/2 (Lévy law).
2) Due to Property 3) of Theorem 2.3.1, the parameter γ (or sometimes λγ) is called shift
parameter.
3) Due to Property 4) of Theorem 2.3.1, the parameter λ (or sometimes λ^{1/α}) is called the shape
or scale parameter. Notice that this name is natural for α ≠ 1, or for α = 1, β = 0. In the case
α = 1, β ≠ 0, scaling of X by a results in a non-zero shift of the law of X by (2/π)β log |a|, hence
the use of this name in this particular case can hardly be recommended.
4) Due to properties 5)-6) of Theorem 2.3.1, parameter β is called skewness parameter. If
β > 0(β < 0) then Sα (λ, β, γ) is said to be skewed to the right (left). Sα (λ, ±1, γ) is said to be
totally skewed to the right (for β = 1) or left (for β = −1).
5) It follows from Theorem 2.2.1 and Theorem 2.3.1, 3) that if X ∼ Sα(λ, β, γ), α ≠ 1, then
X − λγ ∼ Sα(λ, β, 0) is strictly stable.
6) It follows from Theorem 2.2.1 and Definition 2.1.2 that no non-strictly 1-stable random
variable can be made strictly stable by shifting. Indeed, if S1(λ, β, γ) is not strictly stable,
then β ≠ 0, which cannot be eliminated due to the log |s| term in ω(s, α, β). Analogously, every
strictly 1-stable random variable can be made symmetric by shifting.
Corollary 2.3.1
Let Xi, i = 1, ..., n, be i.i.d. Sα(λ, β, γ)-distributed random variables, α ∈ (0, 2]. Then

X1 + ... + Xn =d n^{1/α} X1 + λγ(n − n^{1/α}) if α ≠ 1, and X1 + ... + Xn =d n X1 + (2/π)λβ n log n if α = 1.
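For α = 1, β = γ = 0 (the symmetric Cauchy case), the corollary gives X1 + ... + Xn =d n X1: the sample mean of i.i.d. Cauchy variables is again Cauchy(0, 1) and never concentrates. A simulation sketch (sample sizes, seed and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 100, 100_000

# sample means of n i.i.d. Cauchy(0,1) variables
means = rng.standard_cauchy((reps, n)).mean(axis=1)

# if X1+...+Xn =d n*X1, each mean is again Cauchy(0,1):
# median 0 and quartiles at -1, +1
q1, q3 = np.quantile(means, [0.25, 0.75])
print(np.median(means), q1, q3)
```

The quartiles of the averaged sample stay near ±1 no matter how large n is, in sharp contrast to the law of large numbers for finite-mean laws.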
Corollary 2.3.2
It follows from Theorem 2.3.1, 2) and 3) that if X1, X2 ∼ Sα(λ, β, γ) are independent, then
X1 − X2 ∼ Sα(2λ, 0, 0) and −X1 ∼ Sα(λ, −β, −γ).
Proposition 2.3.1. Let {Xn}n∈N be a sequence of random variables defined on the same prob-
ability space (Ω, F, P), Xn ∼ Sαn(λn^M, βn^M, γn^M), n ∈ N, where αn ∈ (0, 2), λn^M > 0, βn^M ∈
[−1, 1], γn^M ∈ R. Assume that αn → α, λn^M → λ^M, βn^M → β^M, γn^M → γ^M as n → ∞ for some α ∈
(0, 2), λ^M > 0, β^M ∈ [−1, 1], γ^M ∈ R. Then Xn →d X ∼ Sα(λ^M, β^M, γ^M) as n → ∞. Here the
superscript "M" refers to the modified parametrisation, cf. formula (2.1.3) after Definition 2.1.2.

Proof Xn →d X as n → ∞ is equivalent to ϕXn(s) → ϕX(s), n → ∞, s ∈ R, i.e., to
log ϕXn(s) = λn^M (isγn^M − |s|^{αn} + is ωM(s, αn, βn^M)) → λ^M (isγ^M − |s|^α + is ωM(s, α, β^M)),
n → ∞, which is straightforward by the continuity of the modified parametrisation w.r.t. its
parameters.
Proposition 2.3.2. Let X ∈ Sα(λ, 1, 0), λ > 0, α ∈ (0, 1). Then X ≥ 0 a.s.
This property justifies again the use of β as a skewness parameter and gives a random variable
X ∈ Sα(λ, 1, 0) the name of a stable subordinator. The above proposition will easily follow from
the next theorem.
Theorem 2.3.2
1) For α ∈ (0, 1), consider Xδ = Σ_{k=1}^{Nδ} Uδ,k to be compound Poisson distributed, where Nδ is
a Poisson(δ^{−α})-distributed random variable, δ > 0, and {Uδ,k}k∈N are i.i.d. positive random
variables, independent of Nδ, with P(Uδ,k > x) = δ^α/x^α for x > δ, and P(Uδ,k > x) = 1 for
x ≤ δ.
Then Xδ →d X, δ → 0, where X ∼ Sα(λ, 1, 0) with λ = Γ(1 − α) cos(πα/2).
2) Let X ∼ Sα(λ, 1, 0), α ∈ (0, 1). Then its Laplace transform l̂X(s) := E e^{−sX} is equal to

l̂X(s) = e^{−Γ(1−α) s^α}, s ≥ 0.    (2.3.1)
which is of the form (2.1.6) with H(x) = −c1 x^{−α} I(x > 0) as in Theorem 2.1.4 (c2 = 0).
Consider ϕX(s) := exp( α ∫_0^∞ (e^{isx} − 1) x^{−α−1} dx ), s ≥ 0, α ∈ (0, 1). Show that

∫_0^∞ (e^{isx} − 1)/x^{α+1} dx = −s^α (Γ(1 − α)/α) e^{−iαπ/2}.    (2.3.2)

If this is true, then log ϕX(s) = −|s|^α Γ(1 − α)(cos(πα/2) − i sign(s) sin(πα/2)), since for s < 0
we make the substitution s → −s, i → −i. Then log ϕX(s) = −|s|^α Γ(1 − α) cos(πα/2)(1 −
i sign(s) tg(πα/2)), s ∈ R, which means that, according to Definition 2.1.2, X ∼ Sα(λ, 1, 0). Now
prove relation (2.3.2). It holds that
∫_0^∞ (e^{isx} − 1)/x^{α+1} dx = lim_{θ→+0} ∫_0^∞ (e^{isx−θx} − 1)/x^{α+1} dx = lim_{θ→+0} [ −(1/α) ∫_0^∞ (e^{−θx+isx} − 1) d(x^{−α}) ]
= lim_{θ→+0} [ −(1/α) (e^{−θx+isx} − 1) x^{−α} |_0^∞ + ((−θ + is)/α) ∫_0^∞ e^{−θx+isx} x^{−α} dx ]
= lim_{θ→+0} [ −((θ − is)/α) Γ(1 − α) (θ − is)^{α−1} ]
= −(Γ(1 − α)/α) lim_{θ→+0} (θ − is)^α = −(Γ(1 − α)/α) lim_{θ→+0} (θ² + s²)^{α/2} e^{iαξ}
= −(Γ(1 − α)/α) s^α e^{−iπα/2},

where ξ = arg(θ − is) → −π/2 as θ → +0.
Exercise 2.3.1
Show this!
Remark 2.3.2
Actually, formula (2.3.1) is valid for all α ≠ 1, α ∈ (0, 2]: for X ∼ Sα(λ, 1, 0),

l̂X(s) = exp{ −(λ/cos(πα/2)) s^α } for α ≠ 1, α ∈ (0, 2], and l̂X(s) = exp{ −λ (2/π) s log s } for α = 1,   s ≥ 0,

where λ/cos(πα/2) = Γ(1 − α) for the choice λ = Γ(1 − α) cos(πα/2), α ≠ 1. Here

−λ/cos(πα/2) is < 0 for α ∈ (0, 1), > 0 for α ∈ (1, 2), and equals λ for α = 2.
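The compound Poisson construction of Theorem 2.3.2, 1) together with (2.3.1) can be checked by simulation: for small δ, the empirical E e^{−sXδ} should be close to e^{−Γ(1−α)s^α}. A sketch for α = 1/2 (the values of δ, s, the sample size and the tolerance are arbitrary choices; a small positive δ leaves a small positive bias):

```python
import math
import numpy as np

rng = np.random.default_rng(5)
alpha, delta, s, reps = 0.5, 1e-3, 1.0, 100_000

# N_delta ~ Poisson(delta^{-alpha}); U_{delta,k} with
# P(U > x) = delta^alpha / x^alpha for x > delta (Pareto on (delta, inf))
counts = rng.poisson(delta ** (-alpha), size=reps)
u = delta * (1.0 - rng.random(counts.sum())) ** (-1.0 / alpha)
idx = np.repeat(np.arange(reps), counts)
x = np.bincount(idx, weights=u, minlength=reps)  # compound Poisson sums X_delta

est = np.mean(np.exp(-s * x))                    # empirical Laplace transform
target = math.exp(-math.gamma(1.0 - alpha) * s ** alpha)
print(est, target)
```

The agreement improves as δ → 0, in line with Xδ →d X.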
Proof Let X ∼ Sα(λ, β, 0), α ∈ (0, 2), β ∈ (−1, 1), with density f. It follows from properties
2)-4) of Theorem 2.3.1 that there exist i.i.d. random variables Y1, Y2 ∼ Sα(λ, 1, 0) and constants
a, b > 0, c ∈ R s.t. X =d aY1 − bY2 for α ≠ 1, and X =d aY1 − bY2 + c for α = 1. Since Y1 ≥ 0
and −Y2 ≤ 0 a.s. by Proposition 2.3.2, and their supports are the whole of R+ (R−, resp.), it
holds that supp f = R.
Remark 2.3.3
One can prove that the support of Sα (λ, ±1, 0) is R as well, if α ∈ [1, 2).
Now consider the tail behavior of stable random variables. In the Gaussian case (α = 2), the
tails decay even faster than exponentially:

Proposition 2.3.4. Let X ∼ N(0, 1). Then P(X < −x) = P(X > x) ∼ ϕ(x)/x, x → +∞, where
ϕ(x) = (1/√(2π)) e^{−x²/2} is the standard normal density.
Proof Due to the symmetry of X, P(X < −x) = P(X > x), ∀x > 0. We prove the more accurate
inequality

(1/x − 1/x³) ϕ(x) < P(X > x) < ϕ(x)/x, ∀x > 0.    (2.3.3)

The asymptotic P(X > x) ∼ ϕ(x)/x, x → +∞, follows immediately from it.
First prove the right inequality in (2.3.3). Since e^{−t²/2} < e^{−t²/2}(1 + 1/t²), ∀t > 0, it holds
for x > 0 that

P(X > x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt < (1/√(2π)) ∫_x^∞ e^{−t²/2} (1 + 1/t²) dt = (1/√(2π)) e^{−x²/2} (1/x) = ϕ(x)/x,

where the last equality can easily be verified by differentiation w.r.t. x:
(d/dx)( −(1/√(2π)) e^{−x²/2} (1/x) ) = (1/√(2π)) e^{−x²/2} (1 + 1/x²). Analogously,
e^{−t²/2}(1 − 3/t⁴) < e^{−t²/2}, ∀t > 0, hence

(1/x − 1/x³) ϕ(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} (1 − 3/t⁴) dt < (1/√(2π)) ∫_x^∞ e^{−t²/2} dt = P(X > x),    (2.3.4)

where the first equality is verified by differentiating −(1/x − 1/x³) e^{−x²/2}.
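The bounds (2.3.3) are easy to verify numerically, writing the Gaussian tail through the complementary error function, P(X > x) = erfc(x/√2)/2. A sketch (the test points are arbitrary):

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail(x):  # P(X > x) for X ~ N(0, 1)
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in (1.0, 2.0, 4.0, 8.0):
    lower = (1 / x - 1 / x ** 3) * phi(x)
    upper = phi(x) / x
    print(x, lower < tail(x) < upper)
```

For growing x the two bounds squeeze together, which is exactly the asymptotic P(X > x) ∼ ϕ(x)/x.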
Proposition 2.3.5 will be proved later, after we have proven the important results needed for it.
Let us now state some corollaries.

Corollary 2.3.3
For any X ∼ Sα(λ, β, γ), 0 < α < 2, it holds that E|X|^p < ∞ iff p ∈ (0, α). In particular,
E|X|^α = +∞.

Proof It follows immediately from the tail asymptotics of Proposition 2.3.5 and the formula
E|X|^p = ∫_0^∞ P(|X|^p > x) dx.
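For the Cauchy law (α = 1, density f1 from (1.0.3) with λ = 1), the corollary predicts E|X|^{1/2} < ∞ but E|X| = ∞. A sketch illustrating both (the sample size and cutoffs T are arbitrary choices):

```python
import math
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_cauchy(1_000_000)

# p = 1/2 < alpha = 1: E|X|^{1/2} is finite (its exact value is sqrt(2))
print(np.mean(np.abs(x) ** 0.5))

# p = 1 = alpha: E|X| = infinity; the truncated moment
# int_{-T}^{T} |x| f_1(x) dx = log(1 + T^2)/pi grows without bound
for T in (1e2, 1e4, 1e6):
    print(T, math.log(1 + T * T) / math.pi)
```

The empirical half-moment stabilises near √2, while the truncated first moment gains a fixed amount per decade of T, reflecting the logarithmic divergence of E|X|.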
Proposition 2.3.6. Let X ∼ Sα(λ, β, 0) for 0 < α < 2, with β = 0 if α = 1. Then (E|X|^p)^{1/p} =
cα,β(p) λ^{1/α} for every p ∈ (0, α), where cα,β(p) is a constant s.t.

(cα,β(p))^p = ( 2^{p−1} Γ(1 − p/α) / ( p ∫_0^∞ u^{−p−1} sin² u du ) ) (1 + β² tg²(απ/2))^{p/(2α)} cos( (p/α) arctg(β tg(απ/2)) ).

Proof We shall show only that (E|X|^p)^{1/p} = cα,β(p) λ^{1/α} with cα,β(p) = (E|X0|^p)^{1/p}, where
X0 ∼ Sα(1, β, 0). The exact calculation of cα,β(p) is left without proof. The first statement
follows from Theorem 2.3.1, 4): namely, since X =d λ^{1/α} X0, we have (E|X|^p)^{1/p} =
λ^{1/α} (E|X0|^p)^{1/p} = λ^{1/α} cα,β(p).
is slowly varying at ∞. This holds, in particular, if F has a finite second moment (then
lim_{x→+∞} µ(x) = EX1² exists).
3) F belongs to the domain of attraction of an α-stable law, α ∈ (0, 2), iff

µ(x) ∼ x^{2−α} L(x),    (2.4.1)

where L : R+ → R+ is slowly varying at +∞, and the tail balance condition holds:

P(X > x)/P(|X| > x) = (1 − F(x))/(1 − F(x) + F(−x)) → p and P(X < −x)/P(|X| > x) = F(−x)/(1 − F(x) + F(−x)) → q as x → +∞.    (2.4.2)
Remark 2.4.1
a) In Definition 2.4.1, one can choose bn = inf{x : P(|X1| > x) ≤ n^{−1}}, an = n E(X1 I(|X1| ≤
bn)).
b) It is quite clear that statements 2) and 3) are special cases of the following one:
1) F belongs to the domain of attraction of an α-stable law, α ∈ (0, 2], iff (2.4.1) and (2.4.2)
hold.
c) It can be shown that {bn} in Theorem 2.4.1 must satisfy the condition lim_{n→∞} n L(bn)/bn^α =
λ cα, with cα as in Proposition 2.3.5. Then {an} can be chosen as

an = 0 for α ∈ (0, 1); an = n ∫_R sin(x/bn) dF(x) for α = 1; an = (n/bn) ∫_R x dF(x) for α ∈ (1, 2).
Proof of Proposition 2.3.5 We give just a sketch of the proof. It is quite clear that
Sα(λ, β, γ) belongs to the domain of attraction of Sα(λ, β, 0) with bn = n^{1/α}, cf. Theorems
2.1.3, 2.1.4, Corollary 2.1.1 and Remark 2.1.5. Then the tail balance condition (2.4.2) holds
with p = (1 + β)/2, q = (1 − β)/2. By Remark 2.4.1 c), putting bn = n^{1/α} into it yields that
L(x) in (2.4.3) has the property lim_{x→+∞} L(x) = cα λ. It follows from (2.4.2) and (2.4.3) of
Theorem 2.4.1 that

x^α P(X > x) ∼ x^α p P(|X| > x) ∼ p x^α x^{−α} lim_{x→+∞} L(x) = p cα λ = ((1 + β)/2) cα λ, x → +∞;

x^α P(X < −x) ∼ q cα λ = ((1 − β)/2) cα λ, x → +∞, is shown analogously.
ϕ_{Sn}(s) = E exp( is Σ_{k=1}^n (Xk − cn bn/n)/bn ) = Π_{k=1}^n E exp( is (Xk − cn bn/n)/bn ) = e^{−iscn} (ϕ_{X1}(s/bn))^n,

since the Xi are i.i.d. Put ϕn(s) = ϕ_{X1}(s/bn), Fn(x) = F(bn x). Then the statement of Theorem 2.4.1 is equivalent to

e^{−iscn} (ϕn(s))^n → ϕX(s), n → ∞,    (2.4.4)

where X is stable.
Lemma 2.4.1
Under the assumptions of Theorem 2.4.1, relation (2.4.4) is equivalent to

n(ϕn(s) e^{−icn s} − 1) → η(s), n → ∞,    (2.4.5)

where η(s) is a continuous function of the form η(s) = isa − bs² + ∫_{x≠0} (e^{isx} − 1 − is sin x) dH(x)
(cf. (2.1.6)), with H(·) from Theorem 2.1.2 and ϕX(s) = e^{η(s)}, s ∈ R.
Proof 1) Show this equivalence in the symmetric case, i.e., if X1 =d −X1. Then it is clear that
we may assume cn = 0, ∀n ∈ N. Show that

(ϕn(s))^n → e^{η(s)}, n → ∞,    (2.4.6)

is equivalent to

n(ϕn(s) − 1) → η(s), n → ∞,    (2.4.7)

given that η is continuous. First, if a characteristic function ϕ(s) ≠ 0 ∀s : |s| < s0, then there
exists a unique representation ϕ(s) = r(s) e^{iθ(s)}, where θ(·) is continuous and θ(0) = 0. Hence,
log ϕ(s) = log r(s) + iθ(s) is well-defined, continuous, and log ϕ(0) = log r(0) + iθ(0) =
log 1 + i0 = 0.
Let us show (2.4.7) ⇒ (2.4.6). It follows from (2.4.7) that ϕn(s) → 1, n → ∞, and, by the
continuity theorem for characteristic functions, this convergence is uniform on any finite interval
s ∈ (−s0, s0). Then log ϕn(s) is well-defined for large n (since ϕn(s) ≠ 0 there). It follows that
log (ϕn(s))^n = n log ϕn(s) = n(ϕn(s) − 1 + o(|ϕn(s) − 1|²)) ∼ n(ϕn(s) − 1) → η(s) by (2.4.7).
Then (ϕn(s))^n → e^{η(s)}, ∀s ∈ R, and (2.4.6) holds.
Let us show (2.4.6) ⇒ (2.4.7). Since η(0) = 0, e^{η(s)} ≠ 0 ∀s ∈ (−s0, s0) for some s0 > 0.
Since the convergence of characteristic functions is uniform by the continuity theorem,
ϕn(s) ≠ 0 for all n large enough and for s ∈ (−s0, s0). Taking logarithms in (2.4.6), we get
n log ϕn(s) → η(s). Using the Taylor expansion (2.4.8), we get n(ϕn(s) − 1) → η(s), and (2.4.7)
holds.
2) Show this equivalence in the general case cn ≠ 0. More specifically, show that it holds if
ϕn(s) → 1 ∀s ∈ R, n → ∞, and nβn² → 0, n → ∞, where βn = ∫_R sin(x/bn) F(dx). Then

n(βn − cn) → 0, n → ∞.    (2.4.9)

Indeed, Im ϕn(1) = Im ∫_R e^{ix/bn} dF(x) = ∫_R sin(x/bn) dF(x) = βn, which yields
n(βn − cn) → 0. Hence, the relation n(ϕn(s) e^{−icn s} − 1) → η(s) can be rewritten as
n(ϕn(s) e^{−iβn s} − 1) → η(s). But

n(ϕn(s) e^{−iβn s} − 1) = n(ϕn(s) − 1 − iβn s) e^{−iβn s} + n((1 + iβn s) e^{−iβn s} − 1),

where the second summand tends to 0 as n → ∞, since n((1 + iβn s) e^{−iβn s} − 1) =
n((1 + iβn s)(1 − iβn s + o(βn²)) − 1) = n βn² s² + o(n βn²) → 0 by our assumption. We conclude
that (2.4.4) ⇒ (2.4.5) holds.
Conversely, if (2.4.9) and (2.4.8) hold, then, reading the above reasoning in reverse order, we
go back to (2.4.4).
Now we have to show that ϕn(s) → 1 and nβn^2 → 0, n → ∞. The first statement is trivial, since ϕn(s) = ϕ(s/bn) → ϕ(0) = 1 as bn → ∞. Let us show that nβn^2 = n(∫_R sin(x/bn) dF(x))^2 → 0, n → ∞. By Corollary 2.1.1, bn ∼ n^{1/α} h(n), n → ∞, where h(·) is slowly varying at +∞. It follows from (2.4.3) that E|X1|^p < ∞ ∀p ∈ (0, α). Then, using |sin y| ≤ |y|^p for p ∈ (0, 1],

|βn| ≤ ∫_R |x/bn|^p dF(x) = O(bn^{−p}) = O(n^{−p/α} h^{−p}(n)),

and nβn^2 = O(n^{1−2p/α}) → 0, n → ∞, if p is chosen s.t. p > α/2.
Proof of Lemma 2.4.2. Let relation (2.4.5) hold with some bn > 0 and an. This means, equivalently, that Sn →d X ∼ G, n → ∞. Since the case X ∼ N(0, 1) is covered by the CLT, let us exclude it, as well as the trivial case X ≡ const. By Theorems 2.1.2-2.1.3 with kn = n,

Xnj = Xj/bn, an = An(y) − a − ∫_{|u|<y} u dH(u) + ∫_{|u|≥y} (1/u) dH(u), X1 ∼ F,

An(y) = nE((X1/bn) I(|X1|/bn < y)) = (n/bn) E(X1 I(|X1| < bn y)) = (n/bn) ∫_{−ybn}^{ybn} x dF(x),

±y being continuity points of H, it follows that

n(F(xbn) − 1) → H(x), x > 0, and nF(xbn) → H(x), x < 0, as n → ∞,

and

lim_{ε→0} limsup_{n→∞} (n/bn^2) [ ∫_{−εbn}^{εbn} x^2 dF(x) − ( ∫_{−εbn}^{εbn} x dF(x) )^2 ] = b, (2.4.11)

and it is not difficult to show (see Exercise 2.3.2 below) that bn = const·√n → ∞ in this case. If there is no c > 0 s.t. P(|X1| < c) = 1, then bn ≠ O(1), n → ∞, since that would contradict lim_{n→∞} P(|X1| > bn ε) = 0; hence ∃{nk}, nk → ∞ as k → ∞, s.t. bnk → +∞. W.l.o.g. identify the sequences {n} and {nk}. Alternatively, one can agree that {Sn} is stochastically bounded (which is the case if Sn →d X) iff bn → +∞.
Exercise 2.4.1
Let {Fn}n∈N be a sequence of c.d.f.'s s.t. Fn(αn·+βn) →d U(·) and Fn(γn·+δn) →d V(·), n → ∞, for some sequences {αn}, {βn}, {γn}, {δn} s.t. αn γn > 0, where U and V are c.d.f.'s which are not concentrated at one point. Then

γn/αn → a ≠ 0, (δn − βn)/αn → b, n → ∞,

and V(x) = U(ax + b) ∀x ∈ R.
Now show that bn+1/bn → 1, n → ∞. Since Sn →d X ≢ const, it holds Sn+1 →d X, and Xn+1/bn+1 →P 0, n → ∞. Thus, (1/bn+1) Σ_{i=1}^n Xi − an+1 →d X and (1/bn) Σ_{i=1}^n Xi − an →d X, n → ∞, which means, by Exercise 2.4.1, that bn+1/bn → 1, n → ∞.
2) Prove the following.

Proposition 2.4.1. Let βn → +∞, αn+1/αn → 1, n → ∞, and let U be a monotone function s.t. the limit

ψ(x) := lim_{n→∞} αn U(βn x) (2.4.12)

exists on a dense subset of R+, where ψ(x) ∈ (0, +∞) on some interval I. Then U is regularly varying at +∞ and ψ(x) = c·x^ρ, ρ ∈ R.

Proof W.l.o.g. set ψ(1) = 1 and assume that U is non-decreasing and that (2.4.12) holds for x = 1 (otherwise, a scaling in x can be applied). For t > 0, set n = min{k ∈ N0 : βk+1 > t}. Then βn ≤ t < βn+1, and

ψ(x) ∼ (αn/αn+1)·U(βn x)/U(βn+1) ∼ U(βn x)/U(βn+1) ≤ U(tx)/U(t) ≤ U(βn+1 x)/U(βn) ∼ (αn+1/αn)·U(βn+1 x)/U(βn) ∼ ψ(x)/ψ(1) = ψ(x), n → ∞,

for all x for which (2.4.12) holds. The application of Lemma 2.1.1 finishes the proof.
3) Apply Proposition 2.4.1 to

n(F(xbn) − 1) → H(x), x > 0, and nF(−xbn) → H(−x), x > 0, as n → ∞,

with αn = n, βn = bn. It follows that 1 − F(x) = P(X1 > x) and F(−x) = P(X1 < −x) are regularly varying at +∞, H(x) = c1 x^{ρ1}, H(−x) = c2 x^{ρ2}, and

P(X1 > x) ∼ x^{ρ1} L1(x), P(X1 < −x) ∼ x^{ρ2} L2(x), x → +∞, (2.4.13)

where L1, L2 are slowly varying at +∞. By Theorem 2.1.4, ρ1 = ρ2 = −α, c1 < 0, c2 > 0, and evidently

P(|X1| > x) = 1 − F(x) + F(−x) ∼ x^{−α}(L1(x) + L2(x)) =: x^{−α} L(x), x → +∞,

so (2.4.3) holds.
Exercise 2.4.2
Show that µ(x) ∼ x^{2−α} L3(x), x → +∞, is equivalent to (2.4.3). Show that the tail balance condition (2.4.2) follows from (2.4.13) with ρ1 = ρ2 = −α.
So we have proven that (2.4.5) ⇒ (2.4.2), (2.4.3) (or, equivalently, (2.4.1), (2.4.2)). Now let us prove the converse statement.
4) Let (2.4.1) hold. Since L is slowly varying, one can find a sequence {bn}, bn → ∞, n → ∞, s.t. n·bn^{−α} L(bn) → c > 0, some constant (compare Remark 2.4.1, c)). Then

(n/bn^2) µ(bn x) ∼ (n/bn^2)(bn x)^{2−α} L(bn x) = n·bn^{−α} L(bn x)·x^{2−α} ∼ c·x^{2−α}, x > 0,

and hence

n(F(xbn) − 1) → c1 x^{−α}, nF(−xbn) → c2 x^{−α}, n → ∞. (2.4.14)
Exercise 2.4.3
1) Show the last relation. Then condition 1) of Theorem 2.1.3 holds.
2) Prove that condition 2) of Theorem 2.1.3 holds as well, as a consequence of the above asymptotics of (n/bn^2) µ(bn x) and of (2.4.14).

Then, by Theorem 2.1.3, Sn →d X, n → ∞, and (2.4.5) holds. Lemma 2.4.2 is proven.
The proof of Theorem 2.4.1 is thus complete. Part a) and the second half of part c) of Remark 2.4.1 remain unproven.
In addition to the proof using the law of large numbers (see Exercise 4.1.14), let us give an alternative proof here.
Proof By Corollary 2.3.3, E|X| < ∞ if α ∈ (1, 2). For α = 2, X is Gaussian, and hence E|X| < ∞ trivially. By Remark 2.3.1, 5), X − µ with µ = λγ is strictly stable, i.e., X1 − µ + X2 − µ =d c2(X − µ) by Definition 2.1.3, where X1 =d X2 =d X, all independent r.v.'s. Taking expectations on both sides yields 2E(X − µ) = c2 E(X − µ). Since cn = n^{1/α} by Remark 2.1.5, c2 = 2^{1/α} ≠ 2, and hence E(X − µ) = 0 ⇒ EX = µ.
Definition 2.5.2
Let {Ti}i∈N be a sequence of i.i.d. Exp(λ)-distributed random variables with λ > 0. Set τn = Σ_{i=1}^n Ti ∀n ∈ N, τ0 = 0, and N(t) = max{n ∈ N0 : τn ≤ t}, t ≥ 0. The random process N = {N(t), t ≥ 0} is called a Poisson process with intensity λ. The time instants τi are called arrival times, and the Ti are interarrival times.
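A minimal simulation sketch of this construction (Python with NumPy; the function name and the truncation heuristic are our own): the arrival times τn are cumulative sums of i.i.d. Exp(λ) interarrival times, and N(t) counts the arrivals up to time t.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_arrivals(lam, t_max, rng):
    """Arrival times tau_1 < tau_2 < ... of a Poisson process with
    intensity lam, kept only up to time t_max."""
    # Draw more Exp(lam) interarrival times T_i than typically needed,
    # then truncate the cumulative sums at t_max.
    n_guess = int(2 * lam * t_max) + 50
    interarrivals = rng.exponential(scale=1.0 / lam, size=n_guess)
    arrivals = np.cumsum(interarrivals)
    return arrivals[arrivals <= t_max]

taus = poisson_arrivals(lam=3.0, t_max=10.0, rng=rng)
n_t = len(taus)  # N(t_max); averaged over many runs this is close to lam * t_max
```

Here N(t_max) is Poisson(λ·t_max)-distributed, which can be checked empirically over many runs.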
Exercise 2.5.1
Prove the following properties of a Poisson process N :
In order to get a series representation of a SαS random variable X, we'll have to ensure the a.s. convergence of this series. For that, we impose restrictions on α ∈ (0, 2) and on {Rn}: we assume Rn = εn Wn, where

εn = sign(Rn) = +1 if Rn > 0, and −1 if Rn ≤ 0,

Wn = |Rn|, and EWn^α < ∞.
Theorem 2.5.1 (LePage representation):
Let {εn}, {Wn}, {Tn} be independent sequences of random variables, where {εn}n∈N are i.i.d. Rademacher random variables (εn = +1 with probability 1/2 and εn = −1 with probability 1/2), {Wn}n∈N are i.i.d. random variables with E|Wn|^α < ∞, α ∈ (0, 2), and {Tn}n∈N is the sequence of arrival times of a unit rate Poisson process N (λ = 1). Then the series Σ_{n=1}^∞ εn Tn^{−1/α} Wn converges a.s., and its sum is a SαS random variable X ∼ Sα(σ, 0, 0) for some σ > 0.
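A sketch of how this representation can be used for simulation (Python with NumPy; the truncation level, W_n = 1, and the function name are our own choices): truncating the series after finitely many terms gives approximate SαS samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def lepage_sas(alpha, n_terms, n_samples, rng):
    """Approximate SaS samples via the truncated LePage series
    sum_{n <= n_terms} eps_n * T_n^(-1/alpha) * W_n, here with W_n = 1."""
    # Arrival times T_n of a unit rate Poisson process: cumsums of Exp(1).
    t = np.cumsum(rng.exponential(size=(n_samples, n_terms)), axis=1)
    eps = rng.choice([-1.0, 1.0], size=(n_samples, n_terms))  # Rademacher signs
    w = np.ones((n_samples, n_terms))                          # E|W_n|^alpha < infinity
    return np.sum(eps * t ** (-1.0 / alpha) * w, axis=1)

x = lepage_sas(alpha=1.5, n_terms=1000, n_samples=5000, rng=rng)
# The sample should look symmetric around 0 and heavy-tailed.
```

Since T_n ∼ n a.s., the tail of the series behaves like Σ εn n^{−1/α}, so the truncation error is small already for moderate n_terms.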
Proof of Theorem 2.5.1. 1) Let {Un}n∈N be a sequence of i.i.d. U[0, 1]-distributed random variables, independent of {εn}n∈N and {Wn}n∈N. Then {Yn}n∈N given by Yn = εn Un^{−1/α} Wn, n ∈ N, is a sequence of symmetric i.i.d. random variables. Let us show that the law of Y1 lies in the domain of attraction of a SαS random variable. For that, compute its tail probability:

P(|Y1| > x) = P(U1^{−1/α}|W1| > x) = P(U1 < x^{−α}|W1|^α)
= ∫_0^∞ P(U1 < x^{−α}ω^α) dF_{|W|}(ω) = x^{−α} ∫_0^x ω^α dF_{|W|}(ω) + ∫_x^∞ dF_{|W|}(ω)
= x^{−α} ∫_0^x ω^α dF_{|W|}(ω) + P(|W1| > x) ∼ E|W1|^α·x^{−α}, x → +∞,

since x^α P(|W1| > x) → 0 due to E|W1|^α < ∞. Hence, condition (2.4.3) of Theorem 2.4.1 is satisfied. Due to the symmetry of Y1, the tail balance condition (2.4.2) is obviously true with p = q = 1/2. Then, by Theorem 2.4.1 and Corollary 2.1.1, it holds that (1/n^{1/α}) Σ_{k=1}^n Yk →d X ∼ Sα(σ, 0, 0), n → ∞, where the parameters of the limiting stable law come from the proof of Theorem 2.1.1 with c1 = c2 = E|W1|^α/2 (due to the symmetry of Y1 and X).
2) Rewrite (1/n^{1/α}) Σ_{k=1}^n Yk to show that its limiting random variable X coincides with Σ_{k=1}^∞ εk Tk^{−1/α} Wk.
Exercise 2.5.3
Let N be the Poisson process with intensity λ > 0 built upon the arrival times {Tn}n∈N. Show that
a) under the condition {Tn+1 = t}, it holds (T1/t, ..., Tn/t) =d (u(1), ..., u(n)), where u(k), k = 1, ..., n, are the order statistics of a sample (u1, ..., un) with uk ∼ U(0, 1) i.i.d. random variables;
b) (T1/Tn+1, ..., Tn/Tn+1) =d (u(1), ..., u(n)).
Reorder the terms Yk in the sum Σ_{k=1}^n Yk in order of ascending uk, so as to obtain Σ_{k=1}^n εk u(k)^{−1/α} Wk. Since the Wk and εk are i.i.d., this does not change the distribution of the whole sum. Then

(1/n^{1/α}) Σ_{k=1}^n Yk =d (1/n^{1/α}) Σ_{k=1}^n εk U(k)^{−1/α} Wk =d (1/n^{1/α}) Σ_{k=1}^n εk (Tk/Tn+1)^{−1/α} Wk = (Tn+1/n)^{1/α} Σ_{k=1}^n εk Tk^{−1/α} Wk

by Exercise 2.5.3, b). Then, by part 1), (Tn+1/n)^{1/α} Sn →d X, n → ∞, with X as above, where Sn := Σ_{k=1}^n εk Tk^{−1/α} Wk.
3) Show that Sn →d Σ_{k=1}^∞ εk Tk^{−1/α} Wk; then we are done, since then X =d Σ_{k=1}^∞ εk Tk^{−1/α} Wk ∼ Sα(σ, 0, 0). By the strong law of large numbers, it holds that

Tn+1/n = (Tn+1/(n+1))·((n+1)/n) = ((1/(n+1)) Σ_{i=1}^{n+1} τi)·((n+1)/n) → Eτ1 = 1 a.s., n → ∞,

where τi are the i.i.d. Exp(1) interarrival times, since the Poisson process N has unit rate. Then P(A) = 1, where A = {lim_{n→∞} Tn/n = 1} ∩ {T1 > 0}. Let us show that ∀ω ∈ A the series Σ_{k=1}^∞ εk(ω)(Tk(ω))^{−1/α} Wk(ω) converges. Apply the following three-series theorem by Kolmogorov (without proof).
Theorem 2.5.2 (Three-series theorem by Kolmogorov):
Let {Yn}n∈N be a sequence of independent random variables. Then Σ_{n=1}^∞ Yn converges a.s. iff ∀s > 0
a) Σ_{n=1}^∞ P(|Yn| > s) < ∞,
b) Σ_{n=1}^∞ E(Yn I(|Yn| ≤ s)) converges,
c) Σ_{n=1}^∞ Var(Yn I(|Yn| ≤ s)) < ∞.
c)

Σ_{n=1}^∞ Var[εn Tn^{−1/α} Wn I(|εn Tn^{−1/α} Wn| ≤ s)] = Σ_{n=1}^∞ E[Tn^{−2/α} Wn^2 I(|Tn^{−1/α} Wn| ≤ s)] (by b))
≤ c1 Σ_{n=1}^∞ n^{−2/α} E[W1^2 I(|W1| ≤ s(c2 n)^{1/α})] = c1 Σ_{n=1}^∞ n^{−2/α} ∫_0^{s(c2 n)^{1/α}} w^2 dF_{|W|}(w)
≤ c3 ∫_0^∞ x^{−2/α} ∫_0^{s(c2 x)^{1/α}} w^2 dF_{|W|}(w) dx = c3 ∫_0^∞ w^2 dF_{|W|}(w) ∫_{s^{−α} c2^{−1} w^α}^∞ x^{−2/α} dx (Fubini)
= c4 ∫_0^∞ w^α dF_{|W|}(w) = c4 E|W1|^α < ∞,

where c1, ..., c4 > 0 are some constants.
Hence, by Theorem 2.5.2, Sn converges a.s. to Σ_{k=1}^∞ εk Tk^{−1/α} Wk, and X =d Σ_{k=1}^∞ εk Tk^{−1/α} Wk ∼ Sα(σ, 0, 0).
If α = 1, then

X := Σ_{n=1}^∞ ( Tn^{−1} Wn − E( W1 ∫_{|W1|/n}^{|W1|/(n−1)} (sin x/x^2) dx ) ) ∼ S1(λ, β, γ). (2.5.1)
3) The LePage representation of a stable subordinator X ∼ Sα(λ, 1, 0), λ > 0, α ∈ (0, 1), follows easily from Theorem 2.5.3. Indeed, set Wn = 1 ∀n. Then Σ_{n=1}^∞ Tn^{−1/α} ∼ Sα(cα^{−1}, 1, 0), so X =d λ^{1/α} cα^{1/α} Σ_{n=1}^∞ Tn^{−1/α}.
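A simulation sketch for this special case (Python with NumPy; the truncation level and function name are our own, and the constant cα is omitted, so the output is a positive stable sample only up to scaling): with Wn = 1 and α ∈ (0, 1), the series Σ Tn^{−1/α} converges quickly, since Tn ∼ n a.s.

```python
import numpy as np

rng = np.random.default_rng(2)

def subordinator_lepage(alpha, n_terms, n_samples, rng):
    """Truncated series sum_n T_n^(-1/alpha) with W_n = 1, alpha in (0, 1):
    positive alpha-stable (subordinator-type) samples, up to scaling."""
    # T_n: arrival times of a unit rate Poisson process.
    t = np.cumsum(rng.exponential(size=(n_samples, n_terms)), axis=1)
    return np.sum(t ** (-1.0 / alpha), axis=1)

x = subordinator_lepage(alpha=0.5, n_terms=5000, n_samples=2000, rng=rng)
# All samples are positive; the law is heavy-tailed (no finite mean).
```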
4) For α ≥ 1, the series Σ_{n=1}^∞ Tn^{−1/α} Wn diverges in general if the Wn are not symmetric. Hence, the correction terms κn^{(α)} are needed, which are of the order of E(Wn Tn^{−1/α}). Indeed, for α > 1,

E(Tn^{−1/α} Wn) = ETn^{−1/α}·EWn ∼ n^{−1/α} EW1 ∼ κn^{(α)}.

Analogously, for α = 1, E(Tn^{−1} Wn) ∼ n^{−1} EW1 ∼ EW1·∫_{1/n}^{1/(n−1)} (sin x/x^2) dx, as in (2.5.1).
The following result yields the integral form of the cumulative distribution function of a SαS law.
Theorem 2.5.4
1) Let X ∼ Sα(1, 0, 0) be a SαS random variable, α ≠ 1, α ∈ (0, 2]. Then

(1/π) ∫_0^{π/2} exp(−x^{α/(α−1)} κ̄α(t)) dt = P(0 ≤ X ≤ x) for α ∈ (0, 1), and = P(X > x) for α ∈ (1, 2],

where

κ̄α(t) = (sin(α(π/2 + t))/sin(π/2 + t))^{α/(1−α)} · sin((1 − α)(π/2 + t))/sin(π/2 + t), t ∈ (−π/2, π/2).

See [5, Remark 1, p. 78].
3 Simulation of stable variables
In general, the simulation of stable laws can be demanding. However, in some particular cases,
it is quite easy.
Proposition 3.0.1 (Lévy distribution). Let X ∼ S1/2(λ, 1, γ). Then X can be simulated via the representation X =d λ^2·Y^{−2} + λγ, where Y ∼ N(0, 1).
Proof It follows from Exercise 1.0.6, 1) and Theorem 2.3.1, 3), 4).
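A sketch of this recipe (Python with NumPy; function name ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_levy(lam, gamma, size, rng):
    """X ~ S_{1/2}(lam, 1, gamma) (Levy law) via X = lam^2 / Y^2 + lam*gamma."""
    y = rng.standard_normal(size)
    return lam ** 2 / y ** 2 + lam * gamma

x = simulate_levy(lam=1.0, gamma=0.0, size=100_000, rng=rng)
# Supported on (lam*gamma, infinity); the standard Levy median is about 2.2.
```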
Proposition 3.0.2 (Cauchy distribution). Let X ∼ S1(λ, 0, γ). Then X can be simulated by the representations
1) X =d λ·Y1/Y2 + λγ, where Y1 and Y2 are i.i.d. N(0, 1) random variables,
2) X =d λ·tg(π(U − 1/2)) + λγ, where U ∼ U[0, 1].
Proof 1) Use Exercise 4.1.29 and the scaling properties of stable laws given in Theorem 2.3.1, 3), 4).
2) By Example 1.0.2, it holds tg Y =d Z, where Y ∼ U[−π/2, π/2] =d π(U − 1/2) and Z ∼ Cauchy(0, 1) ∼ S1(1, 0, 0). Then use again Theorem 2.3.1, 3), 4) to get X =d λZ + λγ.
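Both recipes can be sketched as follows (Python with NumPy; λ = 1, γ = 0 for simplicity):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

# Method 1: ratio of two independent N(0, 1) variables.
y1, y2 = rng.standard_normal(n), rng.standard_normal(n)
cauchy_ratio = y1 / y2

# Method 2: inverse transform, X = tan(pi * (U - 1/2)), U ~ U[0, 1].
u = rng.random(n)
cauchy_tan = np.tan(np.pi * (u - 0.5))

# Both samples follow the standard Cauchy law: median 0, quartiles at -1 and +1
# (moments do not exist, so we compare quantiles, not means).
q_ratio = np.percentile(cauchy_ratio, [25, 50, 75])
q_tan = np.percentile(cauchy_tan, [25, 50, 75])
```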
Now we have reduced the simulation of the Lévy and Cauchy laws to the simulation of U[0, 1] and N(0, 1) random variables. A realisation of a U[0, 1] random variable is given by the pseudorandom number generators built into any programming language. The simulation of N(0, 1) is more involved; we give it in Proposition 3.0.3 below. From this, it can easily be seen that the method of Proposition 3.0.2, 2) is much more efficient than that of Proposition 3.0.2, 1).
Proposition 3.0.3. 1) Let R and θ be independent random variables, R^2 ∼ Exp(1/2), θ ∼ U[0, 2π]. Then X1 = R cos θ and X2 = R sin θ are independent N(0, 1)-distributed random variables.
2) A random variable X ∼ N(µ, σ^2) can be simulated by X =d µ + σ·√(−2 log U)·cos(2πV), where U, V ∼ U[0, 1] are independent.
Proof 1) For any x, y ∈ R consider

P(X1 ≤ x, X2 ≤ y) = P(R cos θ ≤ x, R sin θ ≤ y)
= (1/2π) ∫_0^{2π} ∫_0^∞ I(√t cos ϕ ≤ x, √t sin ϕ ≤ y)·(1/2)e^{−t/2} dt dϕ   (substitute t = r^2)
= (1/2π) ∫_0^{2π} ∫_0^∞ I(r cos ϕ ≤ x, r sin ϕ ≤ y)·r e^{−r^2/2} dr dϕ   (substitute x1 = r cos ϕ, x2 = r sin ϕ)
= (1/2π) ∫_{−∞}^∞ ∫_{−∞}^∞ I(x1 ≤ x, x2 ≤ y) e^{−(x1^2 + x2^2)/2} dx1 dx2
= (1/√(2π)) ∫_{−∞}^x e^{−x1^2/2} dx1 · (1/√(2π)) ∫_{−∞}^y e^{−x2^2/2} dx2 = P(X1 ≤ x)·P(X2 ≤ y).
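A sketch of the algorithm in Proposition 3.0.3, 2) (Python with NumPy; function name ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def box_muller(mu, sigma, size, rng):
    """Simulate N(mu, sigma^2) via X = mu + sigma*sqrt(-2 log U)*cos(2 pi V)."""
    u = 1.0 - rng.random(size)   # in (0, 1], avoids log(0)
    v = rng.random(size)
    return mu + sigma * np.sqrt(-2.0 * np.log(u)) * np.cos(2.0 * np.pi * v)

x = box_muller(mu=1.0, sigma=2.0, size=200_000, rng=rng)
# The sample mean and standard deviation should be close to 1 and 2.
```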
Theorem 3.0.1
Let U, V ∼ U[0, 1] be independent. Then a SαS random variable X ∼ Sα(1, 0, 0), α ∈ (0, 2], can be simulated by

X =d (sin(απ(U − 1/2))/(cos(π(U − 1/2)))^{1/α}) · (cos((1 − α)π(U − 1/2))/(−log V))^{(1−α)/α}. (3.0.1)
Proof Denote T = π(U − 1/2), W = −log V. By Remark 3.0.1, it is clear that T ∼ U[−π/2, π/2] and W ∼ Exp(1). So (3.0.1) reduces to

X =d (sin(αT)/(cos T)^{1/α}) · (cos((1 − α)T)/W)^{(1−α)/α}. (3.0.2)

1) α = 1: Then (3.0.2) reduces to X =d tg T, which was proven in Proposition 3.0.2, 2).
2) α ∈ (0, 1): Under the condition T > 0, relation (3.0.2) rewrites as X =d Y = (Kα(T)/W)^{(1−α)/α}, where

Kα(T) = (sin(αT))^{α/(1−α)}·cos((1 − α)T)/(cos T)^{1/(1−α)}

(cf. Theorem 2.5.4). Then, computing P(Y > x) by conditioning on T and using W ∼ Exp(1), one arrives at the integral representation of Theorem 2.5.4. Hence, Y ∼ Sα(1, 0, 0) by Theorem 2.5.4 ⇒ X =d Y ∼ Sα(1, 0, 0).
3) α ∈ (1, 2] is proven analogously to 2), considering that 1 − α < 0 and P(Y ≥ x) = P(Y ≥ x, T > 0).
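A direct implementation sketch of formula (3.0.2) (Python with NumPy; function name ours), valid for α ≠ 1:

```python
import numpy as np

rng = np.random.default_rng(4)

def sas_simulate(alpha, size, rng):
    """SaS sample X ~ S_alpha(1, 0, 0), alpha != 1, via formula (3.0.2):
    T = pi*(U - 1/2) ~ U[-pi/2, pi/2], W = -log V ~ Exp(1)."""
    t = np.pi * (rng.random(size) - 0.5)
    w = -np.log(1.0 - rng.random(size))   # Exp(1); argument kept in (0, 1]
    return (np.sin(alpha * t) / np.cos(t) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * t) / w) ** ((1.0 - alpha) / alpha))

x = sas_simulate(alpha=0.7, size=50_000, rng=rng)
# Symmetric law: the sample median should be near 0.
```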
Remark 3.0.2
In the Gaussian case α = 2, formula (3.0.2) reduces to

X =d √W·sin(2T)/cos T = 2√W·(sin T cos T)/cos T = √2·√(2W)·sin T,

where W ∼ Exp(1) and T ∼ U[−π/2, π/2], so that 2W ∼ Exp(1/2). Hence, X ∼ N(0, 2) is generated by the algorithm 2) of Proposition 3.0.3, so formula (3.0.1) contains Proposition 3.0.3, 2) as a special case.
Now let us turn to the general case of simulating a random variable X ∼ Sα(λ, β, γ). We show first that, to this end, it suffices to know how to simulate X ∼ Sα(1, 1, 0).

Lemma 3.0.1
Let X ∼ Sα(λ, β, γ), α ∈ (0, 2), and let Y ∼ Sα(1, β, 0). Then

X =d λγ + λ^{1/α} Y for α ≠ 1, and X =d λγ + (2/π)βλ log λ + λY for α = 1. (3.0.3)

Proof Relation (3.0.4) follows from the proof of Proposition 2.3.3 and Exercise 4.1.28. Relation (3.0.3) follows easily from Theorem 2.3.1, 3)-4).
Theorem 3.0.2
A random variable X ∼ Sα(1, 1, 0), α ∈ [1, 2), can be simulated by

X =d (2/π)·((π/2 + T)·tg T − log((π/2)·W·cos T/(π/2 + T))) for α = 1,

X =d (1 + tg^2(πα/2))^{1/(2α)} · (sin(α(T + π/2))/(cos T)^{1/α}) · (cos((1 − α)T − απ/2)/W)^{(1−α)/α} for α ∈ (1, 2),

where T ∼ U[−π/2, π/2] and W ∼ Exp(1) are independent.

Proof By Theorem 2.5.4, 2), we have the following representation formula for the c.d.f. FX(x) = P(X ≤ x):

FX(x) = (1/π) ∫_{−π/2}^{π/2} exp(−x^{α/(α−1)} K̄α(t)) dt, x > 0,

where

K̄α(t) = (sin(α(π/2 + t))/sin(π/2 + t))^{α/(1−α)} · sin((1 − α)(π/2 + t))/sin(π/2 + t), t ∈ (−π/2, π/2).

The rest of the proof is exactly as in Theorem 3.0.1, 2).
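A sketch of the α = 1 case (Python with NumPy; names ours, and we read the α = 1 line of Theorem 3.0.2 as X = (2/π)((π/2 + T)·tg T − log((π/2)·W·cos T/(π/2 + T))) with T ∼ U[−π/2, π/2] and W ∼ Exp(1)):

```python
import numpy as np

rng = np.random.default_rng(5)

def skewed_stable_alpha1(size, rng):
    """Totally skewed 1-stable sample X ~ S_1(1, 1, 0):
    X = (2/pi)*((pi/2 + T)*tan(T) - log((pi/2)*W*cos(T)/(pi/2 + T)))."""
    t = np.pi * (rng.random(size) - 0.5)   # T ~ U[-pi/2, pi/2]
    w = -np.log(1.0 - rng.random(size))    # W ~ Exp(1)
    h = np.pi / 2
    return (2.0 / np.pi) * ((h + t) * np.tan(t)
                            - np.log(h * w * np.cos(t) / (h + t)))

x = skewed_stable_alpha1(100_000, rng)
# Totally skewed to the right: the upper tail is much heavier than the lower.
```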
4 Additional exercises
Exercise 4.1.1
Let X1 , X2 be two i.i.d. r.v.’s with probability density ϕ. Find a probability density of aX1 +bX2 ,
where a, b ∈ R.
Exercise 4.1.2
Let X be a symmetric stable random variable and X1, X2 be its two independent copies. Prove that X is a strictly stable r.v., i.e., for any positive numbers A and B, there is a positive number C such that

AX1 + BX2 =d CX.
Exercise 4.1.3
1. Prove that ϕ(x) = e^{−|x|}, x ∈ R, is a characteristic function. (Check Pólya's criterion for characteristic functions.1)
2. Let X be a real r.v. with characteristic function ϕ. Is X a stable random variable? (Verify the definition.)
Exercise 4.1.4
Let real r.v. X be Lévy distributed (see Exercise Sheet 1, Ex. 1-4). Find the characteristic
function of X. Give parameters (α, σ, β, µ) for the stable random variable X.
Hint: You may use the following formulas.2

∫_0^∞ (e^{−1/(2x)}/x^{3/2}) cos(yx) dx = √(2π)·e^{−√|y|} cos(√|y|), y ∈ R,
∫_0^∞ (e^{−1/(2x)}/x^{3/2}) sin(yx) dx = √(2π)·e^{−√|y|} sin(√|y|)·sign y, y ∈ R.
Exercise 4.1.5
Let Y be a Cauchy distributed r.v. Find the characteristic function of Y. Give parameters
(α, σ, β, µ) for the stable random variable Y.
Hint: Use Cauchy’s residue theorem.
Exercise 4.1.6
Let X ∼ S1(σ, β, µ) and a > 0. Is aX stable? If so, find the new parameters (α2, σ2, β2, µ2) of aX.
Exercise 4.1.7
Let X ∼ N (0, σ 2 ) and A be a positive α−stable r.v. Is the new r.v. AX stable, strictly stable?
If so, find its stability index α2 .
1
Pólya’s theorem. If ϕ is a real-valued, even, continuous function which satisfies the conditions ϕ(0) = 1,
ϕ is convex for t > 0, limt→∞ ϕ(t) = 0, then ϕ is the characteristic function of an absolutely continuous
symmetric distribution.
2 Oberhettinger, F. (1973). Fourier Transforms of Distributions and Their Inverses: A Collection of Tables. Academic Press, p. 25.
Exercise 4.1.8
Let L be a positive slowly varying function, i.e., ∀x > 0

lim_{t→+∞} L(tx)/L(t) = 1. (4.1.1)

1. Prove that x^{−ε} ≤ L(x) ≤ x^{ε} for any fixed ε > 0 and all x sufficiently large.
2. Prove that the limit (4.1.1) is uniform on finite intervals 0 < a < x < b.
Exercise 4.1.10
Find the parameters (a, b, H) in the canonical Lévy-Khintchin representation of a characteristic function for:
Exercise 4.1.11
What is wrong with the following argument? If X1 , . . . , Xn ∼ Gamma(α, β) are independent,
then X1 + · · · + Xn ∼ Gamma(nα, β), so gamma distributions must be stable distributions.
Exercise 4.1.12
Let Xi, i ∈ N, be i.i.d. r.v.'s with a density symmetric about 0 and continuous and positive at 0. Prove

(1/n)(1/X1 + ··· + 1/Xn) →d X, n → ∞,

where X is a Cauchy distributed random variable.
Hint: First, apply Khintchin's theorem (T. 2.2 in the lecture notes). Then find the parameters a, b and a spectral function H from Gnedenko's theorem (T. 2.3 in the lecture notes).
3 Feller, W. (1973). An Introduction to Probability Theory and Its Applications. Vol. 2, p. 282.
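An empirical illustration of this convergence (Python with NumPy; not a proof, and the limiting Cauchy scale is only checked qualitatively): taking Xi ∼ N(0, 1), whose density is symmetric, continuous and positive at 0, the normalized reciprocal sums should look approximately Cauchy.

```python
import numpy as np

rng = np.random.default_rng(6)

# n_repl replications of (1/n) * (1/X_1 + ... + 1/X_n) with X_i ~ N(0, 1).
n, n_repl = 2_000, 5_000
x = rng.standard_normal((n_repl, n))
s = np.mean(1.0 / x, axis=1)

# A Cauchy(0, c) law has quartiles at -c and +c; check that the empirical
# quartiles of s are roughly symmetric around 0 and of order one.
q25, q75 = np.percentile(s, [25, 75])
```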
Exercise 4.1.13
Show that the sum of two independent stable random variables with different α-s is not stable.
Exercise 4.1.14
Let X ∼ Sα (λ, β, γ). Using the weak law of large numbers prove that when α ∈ (1, 2], the shift
parameter µ = λγ equals EX.
Exercise 4.1.15
Let X be a standard Lévy distributed random variable. Compute its Laplace transform
E exp(−γX), γ > 0.
Exercise 4.1.16
Let X ∼ Sα0(λ0, 1, 0) and A ∼ Sα/α0(λA, 1, 0), 0 < α < α0 < 1, be independent. The value of λA is chosen s.t. the Laplace transform of A is given by E exp(−γA) = exp(−γ^{α/α0}), γ > 0. Show that Z = A^{1/α0}·X has a Sα(λ, 1, 0) distribution for some λ > 0.
Exercise 4.1.17
Let X ∼ Sα(λ, 1, 0), α < 1, and let the Laplace transform of X be given by E exp(−γX) = exp(−cα γ^α), γ > 0, where cα = λ^α/cos(πα/2).
1. Show that

lim_{x→∞} x^α P{X > x} = Cα,
Exercise 4.1.18
Let X1 , X2 be two independent α-stable random variables with parameters (λ, β, γ). Prove that
X1 − X2 is a stable random variable and find its parameters (α1 , λ1 , β1 , γ1 ).
Exercise 4.1.19
Let X1, ..., Xn be i.i.d. Sα(λ, β, γ) distributed random variables and Sn = X1 + ··· + Xn. Prove that the limiting distribution of
is Sα(λ, β, 0).
Exercise 4.1.20
Let X1, X2, ... be a sequence of i.i.d. random variables and let p > 0. Applying the Borel-Cantelli lemmas, show that
1. E|X1|^p < ∞ if and only if lim_{n→∞} n^{−1/p} Xn = 0 a.s.,
Call f̃(s) := ∫_0^∞ e^{−sx} f(x) dx the Laplace transform of a real function f, defined for all s > 0 whenever f̃(s) is finite. For the following functions, find the Laplace transforms (in terms of f̃):
1. For a ∈ R: f1(x) := f(x − a), x ∈ R+, where f(x) = 0, x < 0.
2. For b > 0: f2(x) := f(bx), x ∈ R+.
3. f3(x) := f′(x), x ∈ R+.
4. f4(x) := ∫_0^x f(u) du, x ∈ R+.
Exercise 4.1.23
Let f˜, g̃ be Laplace transforms of functions f, g : R+ → R+ .
1. Find the Laplace transform of the convolution f ∗ g.
where 0 < δ < 2. Applying Theorem 2.8 from the lecture notes, prove that the c.d.f. F(x) := P(X1 ≤ x), x ∈ R, belongs to the domain of attraction of a stable law G. Find its parameters (α, λ, β, γ) and sequences an, bn s.t. (1/bn) Σ_{i=1}^n Xi − an →d Y ∼ G as n → ∞.
Exercise 4.1.26
Let X be a random variable with probability density function f(x). Assume that f(0) ≠ 0 and that f(x) is continuous at x = 0. Prove that
2. if r > 1/2, then |X|^{−r} belongs to the domain of attraction of a stable law with stability index 1/r.
Exercise 4.1.27
Find a distribution F which has an infinite second moment and yet lies in the domain of attraction of the Gaussian law.
Exercise 4.1.28
Prove the following statement which is used in the proof of Proposition 2.3.3.
Let X ∼ Sα (λ, β, 0) with α ∈ (0, 2). Then there exist two i.i.d. r.v.’s Y1 and Y2 with common
distribution Sα (λ, 1, 0) s.t.
Exercise 4.1.29
Prove that for α ∈ (0, 1) and fixed λ, the family of distributions Sα (λ, β, 0) is stochastically
ordered in β, i.e., if Xβ ∼ Sα (λ, β, 0) and β1 ≤ β2 then P(Xβ1 ≥ x) ≤ P(Xβ2 ≥ x) for x ∈ R.
Exercise 4.1.30
Prove the following theorem.
Theorem 4.1.1
A distribution function F is in the domain of attraction of a stable law with exponent α ∈ (0, 2) if and only if there are constants C+, C− ≥ 0, C+ + C− > 0, such that
1.

lim_{y→+∞} F(−y)/(1 − F(y)) = C−/C+ if C+ > 0, and = +∞ if C+ = 0,
[4] K. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Ad-
vanced Mathematics. Cambridge University Press, 1999.
[5] V. V. Uchaikin and V. M. Zolotarev. Chance and stability: stable distributions and their
applications. Walter de Gruyter, 1999.