
Lecture 14 Image Compression

1. What and why image compression
2. Basic concepts
3. Encoding/decoding, entropy
What is Data and Image Compression?

• Data compression is the art and science of representing
  information in a compact form.
• Data is a sequence of symbols taken from a discrete alphabet.
Why do we need Image Compression?

Still Image
• One page of A4 format at 600 dpi is > 100 MB.
• One color image in digital camera generates 10-30 MB.
• Scanned 3”×7” photograph at 300 dpi is 30 MB.

Digital Cinema
• 4K×2K×3 ×12 bits/pel = 48 MB/frame or 1 GB/sec
  or 70 GB/min.
Why do we need Image Compression?

1) Storage
2) Transmission
3) Data access

1990-2000:
Disc capacities: 100 MB -> 20 GB (200 times!)
but seek time: 15 milliseconds -> 10 milliseconds
and transfer rate: 1 MB/sec -> 2 MB/sec.

Compression improves overall response time in some applications.
Source of images

• Image scanner
• Digital camera
• Video camera
• Ultrasound (US), Computed Tomography (CT),
  Magnetic Resonance Imaging (MRI), digital X-ray (XR), infrared
• etc.
Image types

[Diagram: taxonomy of data — binary images, gray-scale images, colour images (true colour and palette), textual data, and video — grouped under image compression vs. universal compression]
Why can we compress images?

• Statistical redundancy:
1) Spatial correlation
a) Local - Pixels at neighboring locations have
similar intensities.
b) Global - Reoccurring patterns.
2) Spectral correlation – between color planes.
3) Temporal correlation – between consecutive frames.
• Tolerance to fidelity:
1) Perceptual redundancy.
2) Limitation of rendering hardware.
Lossy vs. Lossless compression

Lossless compression: reversible, information preserving


text compression algorithms,
binary images, palette images

Lossy compression: irreversible


grayscale, color, video

Near-lossless compression:
medical imaging, remote sensing.
Rate measures

Bitrate = (size of the compressed file) / (pixels in the image) = C / N   [bits/pel]

Compression ratio = (size of the original file) / (size of the compressed file) = (N · k) / C
Distortion measures

Mean absolute error (MAE):   MAE = (1/N) ∑_{i=1..N} |yi − xi|

Mean square error (MSE):     MSE = (1/N) ∑_{i=1..N} (yi − xi)²

Signal-to-noise ratio (SNR):       SNR = 10 · log10[σ² / MSE]   (decibels)

Peak signal-to-noise ratio (PSNR): PSNR = 10 · log10[A² / MSE]  (decibels)

A is the amplitude of the signal: A = 2⁸ − 1 = 255 for an 8-bit signal.
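A minimal Python/NumPy sketch of these measures (the arrays and the 8-bit amplitude are illustrative assumptions, not values from the lecture):

```python
import numpy as np

def mae(x, y):
    return float(np.mean(np.abs(y - x)))          # mean absolute error

def mse(x, y):
    return float(np.mean((y - x) ** 2))           # mean square error

def psnr(x, y, amplitude=255.0):
    """PSNR in decibels; amplitude = 2**8 - 1 = 255 for an 8-bit signal."""
    return 10.0 * np.log10(amplitude ** 2 / mse(x, y))

x = np.array([10.0, 20.0, 30.0, 40.0])            # original samples (toy values)
y = np.array([12.0, 19.0, 33.0, 40.0])            # reconstructed samples
print(mae(x, y), mse(x, y), psnr(x, y))
```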
Other issues
• Coder and decoder computation complexity
• Memory requirements
• Fixed rate or variable rate
• Error resilience
• Symmetric or asymmetric
• Decompress at multiple resolutions
• Decompress at various bit rates
• Standard or proprietary
Entropy

Set of symbols (alphabet): S = {s1, s2, …, sN},
where N is the number of symbols in the alphabet.
Probability distribution of the symbols: P = {p1, p2, …, pN}.

According to Shannon, the entropy H of an information source S is defined as follows:

H = −∑_{i=1..N} pi · log2(pi)
Entropy

The amount of information in symbol si, in other words the number
of bits to code (the code length) for the symbol si:

H(si) = −log2(pi)

The average number of bits for the source S:

H = −∑_{i=1..N} pi · log2(pi)
Entropy for binary source: N=2

S = {0, 1},  p0 = p,  p1 = 1 − p

H = −(p · log2(p) + (1 − p) · log2(1 − p))

H = 1 bit for p0 = p1 = 0.5
Entropy for uniform distribution: pi=1/N

Uniform distribution of probabilities: pi = 1/N:

H = −∑_{i=1..N} (1/N) · log2(1/N) = log2(N)

Examples:
N = 2:   pi = 1/2;   H = log2(2) = 1 bit
N = 256: pi = 1/256; H = log2(256) = 8 bits
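As a rough illustration, the entropy can be computed directly from this definition; the following Python sketch reproduces the N = 2 and N = 256 examples (the helper name `entropy` is just illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))        # 1.0 bit   (N = 2, uniform)
print(entropy([1/256] * 256))     # 8.0 bits  (N = 256, uniform)
```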
How to get the probability distribution?
1) Static modeling:
a) The same code table is applied to all input data.
b) One-pass method (encoding)
c) No side information
2) Semi-adaptive modeling:
a) Two-pass method: (1) analysis and (2) encoding.
b) Side information needed (model, code table)
3) Adaptive (dynamic) modeling:
a) One-pass method: analysis and encoding
b) Updating the model during encoding/decoding
c) No side information
Static vs. Dynamic: Example

S = {a,b,c};  Data: a,a,b,a,a,c,a,a,b,a.

1) Static model: pi = 1/3
   H = −log2(1/3) = 1.58 bits

2) Semi-adaptive method: p1 = 7/10; p2 = 2/10; p3 = 1/10
   H = −(0.7·log2 0.7 + 0.2·log2 0.2 + 0.1·log2 0.1) = 1.16 bits
3) Adaptive method: Example

S = {a,b,c};  Data: a,a,b,a,a,c,a,a,b,a.

Symbol   1     2     3     4     5     6     7     8     9     10
a        1     2     3     3     4     5     5     6     7     7
b        1     1     1     2     2     2     2     2     2     3
c        1     1     1     1     1     1     2     2     2     2
pi       0.33  0.5   0.2   0.5   0.57  0.13  0.56  0.60  0.18  0.58
H        1.58  1.0   2.32  1.0   0.81  3.0   0.85  0.74  2.46  0.78

H = (1/10)(1.58+1.0+2.32+1.0+0.81+3.0+0.85+0.74+2.46+0.78) = 1.45 bits/char

1.16 < 1.45 < 1.58
(Semi-adaptive < Adaptive < Static)
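A small sketch of the adaptive (dynamic) model used in this example: every count starts at 1 and the model is updated after each coded symbol; the per-symbol cost −log2(p) is the ideal code length (function and variable names are illustrative):

```python
import math
from collections import Counter

def adaptive_cost(data, alphabet):
    """Average ideal code length (bits/char) of an adaptive model
    that starts every symbol count at 1 and updates after each symbol."""
    counts = Counter({s: 1 for s in alphabet})
    total_bits = 0.0
    for s in data:
        p = counts[s] / sum(counts.values())   # current probability estimate
        total_bits += -math.log2(p)            # ideal code length for this symbol
        counts[s] += 1                         # update the model
    return total_bits / len(data)

print(adaptive_cost("aabaacaaba", "abc"))      # ≈ 1.45 bits/char, as in the table above
```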
Coding methods

• Shannon-Fano Coding
• Huffman Coding
• Predictive coding
• Block coding

• Arithmetic code
• Golomb-Rice codes
Shannon-Fano Code: A top-down approach

1) Sort symbols according to their probabilities:
   p1 ≤ p2 ≤ … ≤ pN

2) Recursively divide into parts, each with approximately the same
   number of counts (probability).
Shannon-Fano Code: Example (1 step)

Symbols and counts: A = 15/39, B = 7/39, C = 6/39, D = 6/39, E = 5/39.

Step 1 split: {A,B} (15+7 = 22) ← branch 0,   {C,D,E} (6+6+5 = 17) ← branch 1
Shannon-Fano Code: Example (2 step)

Symbols and counts as above.

Step 1 split: {A,B} (22) ← 0,   {C,D,E} (17) ← 1
Step 2 split: {A} (15) ← 0, {B} (7) ← 1;   {C} (6) ← 0, {D,E} (6+5 = 11) ← 1
Shannon-Fano Code: Example (3 step)

Symbols and counts as above.

Step 1 split: {A,B} (22) ← 0,   {C,D,E} (17) ← 1
Step 2 split: {A} (15) ← 0, {B} (7) ← 1;   {C} (6) ← 0, {D,E} (11) ← 1
Step 3 split: {D} (6) ← 0, {E} (5) ← 1
Shannon-Fano Code: Example (Result)

Symbol   pi      -log2(pi)   Code   Subtotal
A        15/39   1.38        00     2·15
B        7/39    2.48        01     2·7
C        6/39    2.70        10     2·6
D        6/39    2.70        110    3·6
E        5/39    2.96        111    3·5
Total:                              89 bits

[Binary tree: A=00, B=01, C=10, D=110, E=111]

Average code length: R = 89/39 = 2.28 bits
Shannon-Fano Code: Encoding

Code table: A=00, B=01, C=10, D=110, E=111

Message:   B  A  B  A  C  A  C  A  D   E
Codes:     01 00 01 00 10 00 10 00 110 111

Bitstream: 0100010010001000110111
Shannon-Fano Code: Decoding

Code table: A=00, B=01, C=10, D=110, E=111

Bitstream: 0100010010001000110111 (22 bits)
Codes:     01 00 01 00 10 00 10 00 110 111
Message:   B  A  B  A  C  A  C  A  D   E
Huffman Code: A bottom-up approach

INIT:
  Put all nodes in an OPEN list; keep it sorted at all times
  according to their probabilities.
REPEAT until one node remains:
  a) From OPEN pick the two nodes having the lowest
     probabilities and create a parent node for them.
  b) Assign the sum of the children's probabilities
     to the parent node and insert it into OPEN.
  c) Assign codes 0 and 1 to the two branches of the
     tree, and delete the children from OPEN.
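A minimal Python sketch of this bottom-up procedure, using a heap as the OPEN list (the probabilities are those of the example on the next slide; function names are illustrative):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code table from a {symbol: probability} map."""
    # OPEN list entries: (probability, tie-breaker, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)                      # lowest probability
        p1, _, c1 = heapq.heappop(heap)                      # second lowest
        merged = {s: "0" + c for s, c in c0.items()}         # branch 0
        merged.update({s: "1" + c for s, c in c1.items()})   # branch 1
        tie += 1
        heapq.heappush(heap, (p0 + p1, tie, merged))         # parent goes back into OPEN
    return heap[0][2]

table = huffman_code({'A': 15/39, 'B': 7/39, 'C': 6/39, 'D': 6/39, 'E': 5/39})
print(table)   # A gets a 1-bit code, B..E get 3-bit codes: 87 bits for the 39 symbols
```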
Huffman Code: Example

Symbol   pi      -log2(pi)   Code   Subtotal
A        15/39   1.38        0      1·15
B        7/39    2.48        100    3·7
C        6/39    2.70        101    3·6
D        6/39    2.70        110    3·6
E        5/39    2.96        111    3·5
Total:   39/39                      87 bits

[Binary tree: A=0; node 24/39 splits into 13/39 (B=100, C=101) and 11/39 (D=110, E=111)]

Average code length: R = 87/39 = 2.23 bits
Huffman Code: Decoding

Code table: A=0, B=100, C=101, D=110, E=111

Bitstream: 1000100010101010110111 (22 bits)
Codes:     100 0 100 0 101 0 101 0 110 111
Message:   B   A B   A C   A C   A D   E
Properties of Huffman code

• Optimum code for a given data set requires two passes.
• Code construction complexity: O(N log N).
• Fast lookup-table-based implementation.
• Requires at least one bit per symbol.
• Average codeword length is within one bit of the zero-order
  entropy (tighter bounds are known): H ≤ R < H + 1 bit.
• Susceptible to bit errors.
Unique prefix property

No code is a prefix of any other code; all symbols are leaf nodes of the code tree.
Shannon-Fano and Huffman codes are prefix codes.

[Counter-example: a code tree in which symbol D sits at an internal node is NOT a prefix code.]

Legend: Shannon (1948) and Fano (1949);
Huffman (1952) was a student of Fano at MIT.
Fano: ”Construct a minimum-redundancy code → the final exam is passed!”
Predictive coding

1) Calculate the prediction value: yi = f(neighbourhood of xi).
2) Calculate the prediction error: ei = yi − xi.
3) Encode the prediction error ei.
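A hedged sketch of steps 1–3 for a 1-D signal with the previous-pixel predictor, comparing the zero-order entropy of the raw samples with that of the prediction errors (the data values are made up):

```python
import numpy as np

def prediction_errors(x):
    """Previous-pixel predictor: e_i = x_i - x_{i-1}; the first sample is kept as-is."""
    e = np.empty_like(x)
    e[0] = x[0]
    e[1:] = x[1:] - x[:-1]
    return e

def entropy_of(values):
    """Zero-order entropy of the observed value distribution, bits per sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

row = np.array([100, 101, 102, 103, 104, 104, 105, 106, 107, 108])
print(entropy_of(row), entropy_of(prediction_errors(row)))
# for smooth data the residuals have a much smaller entropy than the raw samples
```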
Predictive model for grayscale images

y = xi − xi-1

[Figure: histograms of the original image and of the residual image]

Entropy: Ho = 7.8 bits/pel (?)   Hr = 5.1 bits/pel (?)
Coding without prediction

f0 = 8;  p0 = p = 8/64 = 0.125
f1 = 56; p1 = (1 − p) = 56/64 = 0.875

Entropy:
H = −((8/64)·log2(8/64) + (56/64)·log2(56/64)) = 0.544 bits/pel
Prediction for binary images by pixel above

f    p
16   16/64
48   48/64

Entropy:
H = −((16/64)·log2(16/64) + (48/64)·log2(48/64)) = 0.811 bits/pel
Wrong predictor!
Prediction for binary images pixel to the left

f    p
1    1/64
63   63/64

Entropy:
H = −((1/64)·log2(1/64) + (63/64)·log2(63/64)) = 0.116 bits/pel


Good predictor!
Comparison of predictors:

• Without prediction: H = 0.544 bits/pel
• Prediction by pixel above: H = 0.811 bits/pel (bad!)
• Prediction by pixel to the left: H = 0.116 bits/pel (good!)


Shortcoming of Huffman codes
Alphabet: a, b;  pa = p = 0.99, pb = q = 0.01

1) Entropy:
H1 = −(p·log2(p) + q·log2(q)) = 0.081 bits/pel

2) Huffman code: ca = '0', cb = '1'
Bitrate: R1 = 1·p + 1·q = p + q = 1 bit/pel!

Make a new alphabet by blocking symbols!
Block coding: n=2

New alphabet: ’A’=’aa’, ’B’=’ab’, ’C’=’ba’, ’D’=’bb’
pA = p² = 0.9801, pB = pq = 0.0099, pC = pq = 0.0099, pD = q² = 0.0001

1) Entropy:
H2 = −(0.9801·log2(0.9801) + 0.0099·log2(0.0099) +
      + 0.0099·log2(0.0099) + 0.0001·log2(0.0001)) / 2 =
   = (0.0284 + 0.0659 + 0.0659 + 0.0013)/2 = 0.081 bits/pel
Why is H2 = H1?

2) Huffman code: cA='0', cB='10', cC='110', cD='111'
LA = 1, LB = 2, LC = 3, LD = 3
Bitrate: R2 = (1·pA + 2·pB + 3·pC + 3·pD)/2 = 0.515 bits/pel
Block coding: n=3

’A’=’aaa’ -> pA = p³
’B’=’aab’, ’C’=’aba’, ’D’=’baa’ -> pB = pC = pD = p²q
’E’=’abb’, ’F’=’bab’, ’G’=’bba’ -> pE = pF = pG = pq²
’H’=’bbb’ -> pH = q³

Huffman code:
cA='0', cB='10', cC='110', cD='1110',
cE='111100', cF='111101', cG='111110', cH='111111'

Entropy H3?
Bitrate:
R3 = (1·pA + 2·pB + 3·pC + 4·pD + 6·(pE+pF+pG+pH))/3 = 0.353 bits/pel
Block coding: n→ ∞

pa = p = 0.99, pb = q = 0.01
Entropy: Hn = 0.081 bits/pel

Bitrate for the Huffman coder:
n = 1: R1 = 1.0 bit      2 symbols in alphabet
n = 2: R2 = 0.515 bits   4 symbols in alphabet
n = 3: R3 = 0.353 bits   8 symbols in alphabet

If block size n → ∞?  Hn ≤ Rn < Hn + 1/n:

−(1/n) ∑_{i=1..N} p(Bn)·log2 p(Bn)  ≤  R*  <  −(1/n) ∑_{i=1..N} p(Bn)·log2 p(Bn) + 1/n

Problem: alphabet size and Huffman table size grow
exponentially with the number n of symbols blocked.
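The behaviour above can be reproduced with a short sketch that blocks the two-symbol source and measures both the per-pixel entropy and the Huffman bitrate for n = 1, 2, 3 (assuming the same p = 0.99, q = 0.01; names are illustrative):

```python
import heapq, itertools, math

def huffman_lengths(probs):
    """Huffman codeword lengths for a list of probabilities (codes themselves not needed)."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, s0 = heapq.heappop(heap)
        p1, i1, s1 = heapq.heappop(heap)
        for s in s0 + s1:                 # every symbol under the new parent gets one more bit
            lengths[s] += 1
        heapq.heappush(heap, (p0 + p1, i1, s0 + s1))
    return lengths

p, q = 0.99, 0.01
for n in (1, 2, 3):
    blocks = list(itertools.product('ab', repeat=n))
    probs = [math.prod(p if c == 'a' else q for c in b) for b in blocks]
    H = -sum(pi * math.log2(pi) for pi in probs) / n                       # bits/pel
    R = sum(pi * L for pi, L in zip(probs, huffman_lengths(probs))) / n    # bits/pel
    print(n, round(H, 3), round(R, 3))
# Expected, as on the slides: R = 1.0, 0.515, 0.353 while H stays at about 0.081 bits/pel
```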
Block coding: Example 2, n=1

pa = 56/64
pb = 8/64

1) Entropy:
H = −((56/64)·log2(56/64) + (8/64)·log2(8/64)) = 0.544 bits/pel

2) Huffman code: a='0', b='1'
Bitrate: R = 1 bit/pel
Block coding: Example 2, n=4

pA = 12/16
pB = 4/16

1) Entropy:
H = −((12/16)·log2(12/16) + (4/16)·log2(4/16))/4 = 0.203 bits/pel

2) Huffman code: A='0', B='1'
Bitrate: R = (1·pA + 1·pB)/4 = 0.250 bits/pel
Binary image compression
• Run-length coding
• Predictive coding
• READ code
• Block coding
• G3 and G4
• JBIG: Prepared by Joint Bi-Level Image Expert Group in
1992
Compressed file size
Model size:
n=1: model size: pa, pb → 2¹·8 bits
n=2: model size: pA, pB, pC, pD → 2²·8 bits
n=k: model size: {pA, pB, …} → 2^k·8 bits

Compressed data size for S symbols in the input file:
R·S bits, where R is the bitrate (bits/pel)

Total size: model size + R·S bits

Difference between entropy H and bitrate R!


Run-length coding idea

• Pre-processing method, good when one symbol occurs with
  high probability or when symbols are dependent.
• Count how many repeated symbols occur.
• Source ’symbol’ = length of run.

Example: …, 4b, 9w, 2b, 2w, 6b, 6w, 2b, ...
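A tiny sketch of the run-counting step, using the example row above (the string encoding of the row is an assumption for illustration):

```python
from itertools import groupby

def run_lengths(row):
    """Turn a row of pixels into (value, run length) pairs."""
    return [(v, len(list(g))) for v, g in groupby(row)]

row = "bbbb" + "wwwwwwwww" + "bb" + "ww" + "bbbbbb" + "wwwwww" + "bb"
print(run_lengths(row))   # [('b', 4), ('w', 9), ('b', 2), ('w', 2), ('b', 6), ('w', 6), ('b', 2)]
```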


Run-length encoding: CCITT standard

Resolution: 1728 × 1188 pels, i.e. about 2 Mbits per page
Transmission time: T = 7 min
Run-length encoding: Example

RL   Code
4b   ’011’
9w   ’10100’
2b   ’11’
2w   ’0111’
6b   ’0010’
6w   ’1110’
2b   ’11’

Run-length Huffman encoding: 0 ≤ n ≤ 63

[Table of run-length Huffman codes omitted]

Run-length Huffman encoding: n > 63

Examples:
n=30w: code=’00000011’
n=94w=64w+30w: code=’11011 00000011’
n=64w=64w+ 0w: code=’11011 00110101’
Predictive coding: Idea

• Predict the pixel value on the basis of past pixel(s).
• Send ‘0’ if the prediction is correct, ‘1’ if it is not correct.

Predictor for xi: yi = xi-1
Prediction error: ei = xi − xi-1

Example: alphabet S = {0,1}
Data:   (0) 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0  → H = 1.0 bit
Errors:     0 0 0 0 1 0 0 0 0 0 0 0 -1 0 0 0
(If e < 0 then e = e+2)  Why 2?
Errors:     0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0  → H = 0.5 bit
Four-pixel prediction function

[Figure: prediction accuracy of a four-pixel prediction function for each context, values ranging from 61.41 % to 99.76 %]
READ Code (1)

• Code the location of a run boundary relative to the previous row.
  READ = ”Relative Element Address Designate”
• The READ code includes three coding modes:
  o Pass mode
  o Vertical mode
  o Horizontal mode
READ Code: Principles

• Vertical mode:
  The position of each color change is coded with respect
  to a nearby change position of the same color on the
  reference line, if one exists. "Nearby" is taken to mean
  within three pixels.
• Horizontal mode:
  If there is no nearby change position on the reference line,
  one-dimensional run-length coding is used instead.
• Pass mode:
  The reference line contains a run that has no counterpart
  in the current line; the next complete run of the opposite
  color in the reference line should be skipped.
READ: Codes for modes

wl = length of the white run      bl = length of the black run
Hw = Huffman code of the white run    Hb = Huffman code of the black run

(For Huffman codes see previous slides)
READ code

• There is an all-white line above the page, which is used as the
  reference line for the 1st scan line of the page.
• Each line is assumed to start with a white pixel, which is ignored
  by the receiver.
• Pointer a0 is set to an imaginary white pel on the left of the coding
  line, and a1 is set to point to the 1st black pel on the coding line.
  The first run length is |a0a1| − 1.
• Pointers b1 and b2 are set to point to the start of the 1st and 2nd
  runs on the reference line, respectively.
• The encoder assumes an extra pel on the right of the line, with a
  color opposite that of the last pixel.
[Figures: Pass mode (a); Vertical mode (b1, b2); Horizontal mode (c1, c2)]
Flowchart
READ Code: Example
[Figure: reference line and current (coding) line]

Mode:   vertical  vertical  horizontal         pass   vertical  vertical  horizontal
Value:  -1        0         3 white, 4 black          +2        -2        4 white, 7 black
Code:   010       1         001 1000 011       0001   000011    000010    001 1011 00011

code generated
Block Coding: Idea

• Divide the image into blocks of pixels.
• A totally white block (all-white block) is coded by ’0’.
• All other blocks (non-white blocks) thus contain at least
  one black pixel. They are coded with a 1-bit as a prefix,
  followed by the contents of the block (bit by bit in
  row-major order) or by a Huffman code.
• Block coding can be applied to the difference (error)
  image in a predictive coding approach.

(see also Lecture 2)
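A minimal sketch of this fixed-block scheme for a binary image, with the non-white blocks sent bit by bit in row-major order (the NumPy array and block size are assumptions for the example):

```python
import numpy as np

def block_code(img, k=4):
    """'0' for an all-white k*k block, otherwise '1' followed by the block bits
    (white = 0, black = 1) in row-major order."""
    bits = []
    h, w = img.shape
    for r in range(0, h, k):
        for c in range(0, w, k):
            block = img[r:r + k, c:c + k]
            if block.any():                                   # contains a black pixel
                bits.append('1' + ''.join(str(b) for b in block.flatten()))
            else:
                bits.append('0')                              # all-white block
    return ''.join(bits)

img = np.zeros((8, 8), dtype=int)
img[2, 3] = img[5, 5] = 1                                     # two isolated black pixels
print(len(block_code(img)), "bits for", img.size, "pixels")   # 36 bits for 64 pixels
```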


Block Coding: Huffman codes for k = 0, 1
Block Coding: Huffman codes for k = 2
Block Coding: Huffman codes for k = 3, 4
[Code tables omitted]
Hierarchical block encoding: Principle

• In the hierarchical variant of block coding the bit map
  is first divided into b×b blocks (typically 16×16).
• These blocks are then divided into a quadtree structure of
  blocks in the following manner:
  If a particular b×b block is all-white, it is coded by ’0’.
  Otherwise the block is coded by ’1’ and then divided into
  four equal-sized subblocks, which are recursively coded
  in the same manner.
Hierarchical block encoding: Steps

L=1: Code: ’1’
L=2: Code: ’0111’
L=3: Codes: 0011 0111 1000
L=4: Codes: 0111 1111 1111 0101 1010 1100

Total: 1 + 4 + 12 + 24 = 41 bits
Hierarchical block encoding: Example
Image to be compressed: [Figure: binary bit map with two clusters of black pixels]

Code bits:
L=1: 1
L=2: 0111
L=3: 0011 0111 1000
L=4: 0111 1111 1111 0101 1010 1100

Total: 1 + 4 + 12 + 24 = 41 bits
CCITT Group 3 (G3) and Group 4 (G4)

• The RLE and READ algorithms are included in the image
  compression standards known as CCITT G3 and G4
  (used in FAX machines).

[Diagram: pixel buffer feeding run lengths to an RLE coder and boundary points to a READ coder, both producing output bits]
CCITT Group 3 (G3)

• Every k-th line is coded by the RLE method and
  the READ code is applied to the rest of the lines.
• The first (virtual) pixel is white.
• EOL code after every line to synchronize the code.
• Six EOL codes after every page.
• Binary documents only.
CCITT Group 4 (G4)

• All lines are coded by READ.
• The first reference line (above the image) is white.
• EOL code after every line to synchronize the code.
• Six EOL codes after every page.
• Option for grayscale and color images.
G3 and G4: Results

Resolution     Low (200×100)      High (200×200)
Scheme         G3       G4        G3       G4
Bits per pel   0.13     0.11      0.09     0.07
Seconds        57       47        74       61

7 min → 1 min
Comparison of algorithms

[Bar chart: compression ratios for the schemes listed below; values range from 7.9 to 23.3]

COMPRESS = Unix standard compression software    2D-RLE = 2-dimensional RLE [WW92]
GZIP = Gnu compression software                  ORLE = Ordered RLE [NM80]
PKZIP = Pkware compression software              G3 = CCITT Group 3 [YA85]
BLOCK = Hierarchical block coding [KJ80]         G4 = CCITT Group 4 [YA85]
RLE = Run-length coding [NM80]                   JBIG = ISO/IEC Standard draft [PM93]
Quantization

Any analog quantity that is to be processed by a digital


computer or digital system must be converted to an
integer number proportional to its amplitude. The
conversion process between analog samples and
discrete-valued samples is called quantization.

Input signal Quantizer Quantized signal


Uniform quantizer: M=8 levels

Input-output characteristic of uniform quantizer


Nonuniform quantizer: M = 8 levels

Input-output characteristic of nonuniform quantizer



Quantization error

Input signal: x
Quantized signal: q(x)

Quantization error: e(x) = x − q(x)
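For illustration, a midtread uniform quantizer and its error can be sketched as follows (the step size and test signal are arbitrary choices, not from the slides):

```python
import numpy as np

def uniform_quantize(x, step):
    """Midtread uniform quantizer: round x to the nearest multiple of `step`."""
    return step * np.round(x / step)

x = np.linspace(-1.0, 1.0, 11)
q = uniform_quantize(x, step=0.25)
e = x - q                                   # quantization error e(x) = x - q(x)
print(np.abs(e).max() <= 0.25 / 2 + 1e-12)  # the error magnitude never exceeds step/2
```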
Distortion measure

Probability density function (pdf) of x is p(x).
Quantization error: e(x) = x − q(x)

Mean (average value) μ of the quantization error:

μ = E[x − q(x)] = ∑_{j=1..M} ∫_{a_(j−1)}^{a_j} (x − y_j) p(x) dx

Variance σ² of the quantization error as distortion measure:

σ² = E[(x − q(x))²] = ∑_{j=1..M} ∫_{a_(j−1)}^{a_j} (x − y_j)² p(x) dx
Optimal quantization problem

Given a signal x with probability density function
(or histogram) p(x), find a quantizer q(x) of x which
minimizes the quantization error variance σ²:

σ²_opt = min_{ {a_j},{y_j} } ∑_{j=1..M} ∫_{a_(j−1)}^{a_j} (x − y_j)² p(x) dx
Lossy image compression

• DPCM: Prediction error quantization


• Block Truncation Coding (BTC)
• Vector Quantization (VQ)
• Transform Coding (DCT, JPEG)
• Subband Coding
• Wavelet Coding (JPEG2000)
[Diagram: Data → Transformation → Quantization → Encoding → Bitstream, with a Model driving the stages]
Part 1: DPCM

y = xi − xi-1

[Figure: histograms of the original image and of the residual image]

Entropy: Ho = 7.8 bits/pel (?)   Hr = 5.1 bits/pel (?)
Prediction error quantization with open loop

ei=xi−xi-1 → q(ei)

DPCM is Differential Pulse Code Modulation


Quantization with open loop: Decoding

yi = yi-1 + q(ei)

Problem: error accumulation!

Without quantization            With quantization
xn = xn-1 + en            ⇒     yn = yn-1 + q(en)

yn − xn = [x1 + q(e2) + ... + q(en)] − [x1 + e2 + ... + en] =
        = (q(e2) − e2) + ... + (q(en) − en)

Variance: σ²y = σ²x + (n − 1)·σ²q
Closed loop: Encoding

ei = xi − xi-1 → q(ei)

ei = xi − zi-1
zi = zi-1 + q(ei)
Closed loop: Decoding

zi = zi-1 + q(ei)

Error accumulation? No!

Without quantization                   With quantization
en = xn − zn-1  or  xn = zn-1 + en  ⇒  zn = zn-1 + q(en)

xn − zn = (zn-1 + en) − (zn-1 + q(en)) = en − q(en)
Example

• Open loop: quantization step is 8
xj:      81   109   129   165   209   221
ej:            28    20    36    44    12
[ej/8]:         4     3     5     6     2
q(ej):         32    24    40    48    16
yj:      81   113   137   177   225   241

• Closed loop: quantization step is 8
xj:      81   109   129   165   209   221
ej:            28    16    36    40    12
q(ej):         32    16    40    40    16
zj:      81   113   129   169   209   225
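A hedged sketch of the closed-loop variant that reproduces the z values of this example (round-half-up is used so the numbers match the table; names are illustrative):

```python
import numpy as np

def dpcm_closed_loop(x, step=8):
    """Closed-loop DPCM: predict from the reconstructed value, quantize the error."""
    z = np.empty_like(x)
    z[0] = x[0]                                   # first sample transmitted as-is
    for i in range(1, len(x)):
        e = x[i] - z[i - 1]                       # prediction error against reconstruction
        q = step * int(np.floor(e / step + 0.5))  # uniform quantization (round half up)
        z[i] = z[i - 1] + q                       # same value the decoder reconstructs
    return z

x = np.array([81, 109, 129, 165, 209, 221])
print(dpcm_closed_loop(x))                        # [ 81 113 129 169 209 225], as in the table
```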
Entropy

Entropy reduction (prediction of xi from xi-1):

ΔH = H0 − H1 = log2(σ0/σ1),
σ1² = 2σ0²(1 − ρ(Δ)),
where σ0² is the variance of the data x,
      σ1² is the variance of the prediction error e,
      ρ(Δ) is the correlation coefficient of the pixels xi and xi-1,
or ΔH = −0.5·log2[2(1 − ρ(Δ))].

Example: if ρ(Δ) = 0.8 → −log2[2·0.2] = 1.3 bits
         if ρ(Δ) = 0.9 → −log2[2·0.1] = 2.3 bits
Optimum linear prediction
• 1-D linear predictor:  x̂i = ∑_{j=1..m} aj · xi−j     (usually m = 3)

• 2-D and 3-D linear predictors


Part 2: Block Truncation Coding

• Divide the image into 4×4 blocks;
• Quantize the block into two representative values a and b;
• Encode (1) the representative values a and b
  and (2) the significance map in the block.

Original         Bit-plane    Reconstructed
 2  9 12 15      0 1 1 1       2 12 12 12
 2 11 11  9      0 1 1 1       2 12 12 12
 2  3 12 15      0 0 1 1       2  2 12 12
 3  3  4 14      0 0 0 1       2  2  2 12

x̄ = 7.94   σ = 4.91   q = 9
a = 2.3 → a = [2.3] = 2        b = 12.3 → b = [12.3] = 12
1. How to construct the quantizer?

• The first-two-moments preserving quantization:

<x>  = (1/m) ∑_{i=1..m} xi
<x²> = (1/m) ∑_{i=1..m} xi²
σ² = <x²> − <x>²

• Threshold for quantization: T = <x>;  na + nb = m

m·<x>  = na·a + nb·b
m·<x²> = na·a² + nb·b²

a = <x> − σ·√(nb/na)        b = <x> + σ·√(na/nb)
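A small NumPy sketch of this moment-preserving construction for one block (the block is the 4×4 example from the slides; rounding of a and b to integers is done only for the reconstruction):

```python
import numpy as np

def btc_block(block):
    """Moment-preserving BTC of one block: returns (a, b, bit-plane)."""
    m = block.size
    mean, std = block.mean(), block.std()      # first two sample moments
    bits = block >= mean                       # significance map, threshold T = <x>
    nb = int(bits.sum())                       # pixels represented by b
    na = m - nb                                # pixels represented by a
    a = mean - std * np.sqrt(nb / na)
    b = mean + std * np.sqrt(na / nb)
    return a, b, bits

block = np.array([[2,  9, 12, 15],
                  [2, 11, 11,  9],
                  [2,  3, 12, 15],
                  [3,  3,  4, 14]])
a, b, bits = btc_block(block)
print(a, b)                                    # ≈ 2.4 and 12.3 (the slide quotes 2.3 and 12.3)
recon = np.where(bits, int(round(b)), int(round(a)))   # reconstructed block of 2's and 12's
```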
2. Optimal scalar quantizer (”AMBTC”)

• Minimize the quantization error:

D = min_{a,b,T} { ∑_{xi < T} (xi − a)² + ∑_{xi ≥ T} (xi − b)² }

• Max-Lloyd solution:

a = (1/na)·∑_{xi < T} xi        b = (1/nb)·∑_{xi ≥ T} xi        T = (a + b)/2

• How to find a, b, T? See the Max-Lloyd algorithm.
Example of BTC
Original         Bit-plane    Reconstructed
 2  9 12 15      0 1 1 1       2 12 12 12
 2 11 11  9      0 1 1 1       2 12 12 12
 2  3 12 15      0 0 1 1       2  2 12 12
 3  3  4 14      0 0 0 1       2  2  2 12

x̄ = 7.94   σ = 4.91   q = 9   na = 7   nb = 9
a = 2.3 → a = [2.3] = 2        b = 12.3 → b = [12.3] = 12

D = σa² + σb² = 7 + 43 = 50

[Number line 2 … 15 with a, T, b marked]
Example of optimal quantizer (”AMBTC”)

Original         Bit-plane    Reconstructed
 2  9 12 15      0 1 1 1       3 12 12 12
 2 11 11  9      0 1 1 1       3 12 12 12
 2  3 12 15      0 0 1 1       3  3 12 12
 3  3  4 14      0 0 0 1       3  3  3 12

x̄ = 7.94   σ = 4.91   q = 9   na = 7   nb = 9
a = [2.7] = 3        b = [12.0] = 12

D = σa² + σb² = 4 + 43 = 47

[Number line 2 … 15 with a, T, b marked]
Representative levels compression

• Main idea of BTC:
  Image → ”smooth part” + ”detailed part”
           (a and b)      (bit-planes)

• We can treat the set of a's and b's as an image:
  1. Predictive encoding of a and b
  2. Lossless image compression algorithm
     (FELICS, JPEG-LS, CALIC)
  3. Lossy compression: DCT (JPEG)
Significance bits compression

Binary image:
• Lossless binary image compression
  methods (JBIG, context modeling with
  arithmetic coding)
• Lossy image compression (vector
  quantization, with sub-sampling and
  interpolation of missing pixels, filtering)
Bitrate and Block size

The number of pixels in a block: k² pels

• BTC:
  1. Values ’a’ and ’b’: (8+8) bits
  2. Significance bits: k² bits
  Bitrate: R = (16 + k²)/k² = (1 + 16/k²) bits/pel
  Example: k=4: R = (1 + 16/4²) = 2 bits/pel

• Bigger block → smaller bitrate R, bigger distortion D
• Smaller block → bigger bitrate R, smaller distortion D

Trade-off between Rate and Distortion
Quadtree segmentation

1. Divide the image into blocks of m1×m1 size.
2. FOR EACH BLOCK:
   IF (σ < σ0) THEN apply BTC
   ELSE divide into four subblocks: m = m/2
3. REPEAT step 2 UNTIL (σ < σ0) OR m = m2,
   where m2 is the minimal block size.

The hierarchy of the blocks is represented by a quadtree structure.
Example of BTC
             AMBTC          HBTC-VQ
bpp:         2.00           1.62
mse:         40.51          15.62
Block size:  4×4            2×2 .. 32×32

(original image: 8 bpp)
JPEG

• JPEG = Joint Photographic Experts Group

• Lossy coding of continuous-tone still images (color and grayscale)
• Based on the Discrete Cosine Transform (DCT):
  0) The image is divided into N×N blocks
  1) The blocks are transformed with the 2-D DCT
  2) The DCT coefficients are quantized
  3) The quantized coefficients are encoded
JPEG: Encoding and Decoding

[Encoder: Source Image Data (8×8 blocks) → FDCT → Quantizer → Entropy Encoder → Compressed Image Data, with table specifications for the quantizer and the entropy encoder]

[Decoder: Compressed Image Data → Entropy Decoder → Dequantizer → IDCT → Reconstructed Image Data, with the corresponding table specifications]
Divide image into N×N blocks

[Figure: input image and one 8×8 block]

2-D DCT basis functions: N=8

[Figure: the 64 basis images of the 8×8 2-D DCT, horizontal frequency increasing from left (low) to right (high), vertical frequency from top (low) to bottom (high)]
2-D Transform Coding

[Figure: the block is expressed as a weighted sum of basis images with coefficients y00, y01, y10, y12, …, y23, …]
1-D DCT basis functions: N=8

[Figure: plots of the eight 1-D DCT basis functions, u = 0 … 7]

x_j = ∑_{k=0..N−1} α(k)·C(k)·cos[(2j + 1)kπ / (2N)],

α(k) = √(1/N) for k = 0,   √(2/N) for k = 1, 2, …, N−1
Zig-zag ordering of DCT coefficients

DC: Direct current
AC: Alternating current

Converting a 2-D matrix into a 1-D array, so that the
frequency (horizontal and vertical) increases in this order
and the coefficient variances decrease in this order.
Example of DCT for image block

Matlab: y=dct(x)
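An equivalent sketch in Python/SciPy for one block, covering steps 1–2 of the pipeline (2-D DCT followed by quantization) and the corresponding reconstruction. The block values and the flat quantization matrix are placeholders; JPEG itself uses the default table Q and a level shift of 128:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT-II (orthonormal), applied along both axes like Matlab's dct2."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeff):
    """Inverse 2-D DCT."""
    return idct(idct(coeff, axis=0, norm='ortho'), axis=1, norm='ortho')

x = np.arange(64, dtype=float).reshape(8, 8)   # placeholder 8x8 block
Q = np.full((8, 8), 16.0)                      # placeholder (flat) quantization matrix
y = dct2(x - 128.0)                            # level shift, forward DCT
yq = np.round(y / Q)                           # quantization: yq = round(y / Q)
z = idct2(yq * Q) + 128.0                      # dequantization and inverse DCT
print(np.abs(z - x).max())                     # reconstruction error due to quantization
```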
Distribution of DCT coefficients

[Figure: histograms of a DC coefficient and of an AC coefficient]

DC: uniformly distributed
AC: distribution resembles a Laplacian pdf
Bit allocation for DCT coefficients

• Lossy operation to reduce the bit rate
• Vector or scalar quantizer?
• Set of optimal scalar quantizers?
• Set of scalar quantizers with fixed quantization tables
Bit allocation for DCT coefficients

Minimize the total distortion D:

D = min_{bi} ∑_{i=1..N} hi·σi²·2^(−2bi)        (see Lecture 10)

subject to ∑_{i=1..N} bi = B,

where bi is the number of bits for coefficient yi,
B is a given total number of bits, and

hi = (1/12)·( ∫ [pi(x)]^(1/3) dx )³
Optimal bit allocation for DCT coefficients

Solution of the optimization task with the Lagrange multiplier method:

Bitrate:    bi = B/N + (1/2)·log2(σi²/θ²) + (1/2)·log2(hi/H)

Distortion: D = N·H·θ²·2^(−2B/N)

where θ² = ( ∏_{k=0..N−1} σk² )^(1/N);   H = ( ∏_{k=0..N−1} hk )^(1/N)
Minimal distortion

Distortion: D = N·H·θ²·2^(−2B/N),
where θ² = ( ∏_{k=0..N−1} σk² )^(1/N)

Distortion D is minimal if θ² is minimal.

The product of the diagonal elements is greater than or equal
to the determinant of a (positive semidefinite) matrix.
Equality is attained iff the matrix is diagonal.
The KLT provides the minimum of θ² (and the minimum of distortion D)
among transforms!
Default quantization matrix Q

yq(k,l) = round[y(k,l) / Q(k,l)]

Examples: 236/16 → 15,  −22/11 → −2        Matlab: Qy = quant(y)
Quantization of DCT coefficients: Example

Ordered DCT coefficients: 15,0,-2,-1,-1,-1,0,0,-1,-1, 54{’0’}.


Dequantization

z(k,l) = yq(k,l)·Q(k,l)

Examples: 15·16 → 240,  −2·11 → −22        Matlab: z = dequant(Qy)

[Figure: original DCT block and the dequantized block]

Inverse DCT

Matlab: x = idct(y)

[Figure: original block and the reconstructed block]
Encoding of quantized DCT coefficients

• Ordered data: 15,0,-2,-1,-1,-1,0,0,-1,-1, 54{’0’}.


• Encoding:
  ♦ DC: ?
  ♦ AC: ?
Encoding of quantized DCT coefficients

• The DC coefficient of the current block is predicted from
  that of the previous block, and the error is coded using
  Huffman coding.
• AC coefficients:
  (a) Huffman code or arithmetic code for non-zeroes
  (b) run-length encoding: (number of ’0’s, non-’0’ symbol)
Performance of JPEG algorithm

8 bpp 0.6 bpp

0.37 bpp 0.22 bpp


Compression of color images
RGB vs YCbCr

• 24-bit RGB representation: apply the DCT to each
  component separately
  - does not make use of the correlation between color components
  - does not make use of the lower sensitivity of the human eye
    to the chrominance components
• Convert RGB into a YCbCr representation: Y is luminance,
  and Cb, Cr are chrominance
  - Downsample the two chrominance components
RGB ⇔ YCbCr conversion

Luminance Y and two chrominances Cb and Cr
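The conversion matrix itself is not reproduced on this slide; as a sketch, the commonly used ITU-R BT.601 / JFIF definition is:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """RGB (0..255) -> YCbCr using the BT.601 / JFIF coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

print(rgb_to_ycbcr(np.array([[[255.0, 0.0, 0.0]]])))   # pure red: Y ~ 76, Cb ~ 85, Cr ~ 255
```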


Chrominance subsampling

4:4:4      4:2:2       4:1:1       4:2:0
1:1        2:1 Hor     4:1 Hor     2:1 Hor&Vert

[Figure: sampling grids showing Y pixels vs. Cb/Cr pixels for each format]
Quantization of DCT coefficients

For luminance                For chrominance


Performance of JPEG algorithm

• Grayscale 8-bit images:
  - 0.5 bpp: excellent quality

• Color 24-bit images:
  - 0.25-0.50 bpp: moderate to good
  - 0.50-0.75 bpp: good to very good
  - 0.75-1.00 bpp: excellent, sufficient for most applications
  - 1.00-2.00 bpp: indistinguishable from the original
JPEG ⇒ JPEG2000

[Figure: JPEG at 0.25 bpp vs. JPEG2000 at 0.25 bpp]


JPEG 2000

• JPEG 2000 is a new still image compression standard
• ”One-for-all” image codec:
  * Different image types: binary, grey-scale, color, multi-component
  * Different applications: natural images, scientific, medical,
    remote sensing, text, rendered graphics
  * Different imaging models: client/server, consumer electronics,
    image library archival, limited buffer and resources
History

• Call for Contributions in 1996
• The 1st Committee Draft (CD) Dec. 1999
• Final Committee Draft (FCD) in March 2000
• Accepted as Draft International Standard in Aug. 2000
• Published as ISO Standard in Jan. 2002
Key components
• Transform
  – Wavelet
  – Wavelet packet
  – Wavelet in tiles
• Quantization
  – Scalar
• Entropy coding
  – (EBCOT) code once, truncate anywhere
  – Rate-distortion optimization
  – Context modeling
  – Optimized coding order
Key components

• Visual
  – Weighting
  – Masking
• Region of interest (ROI)
• Lossless color transform
• Error resilience
2-D wavelet transform

Original:          128, 129, 125, 64, 65, …
Transform coeff.:  4123, -12.4, -96.7, 4.5, …
Quantization of wavelet coefficients

Transform coeff.:         4123, -12.4, -96.7, 4.5, …
Quantized coeff. (Q=64):  64, 0, -1, 0, …
Quantizer with dead zone

[Figure: uniform quantizer with a dead zone of width 2δ around zero and step δ elsewhere]

ν[m,n] = ⌊ |s[m,n]| / δ ⌋         (quantized magnitude)
χ[m,n] = 0 if s ≥ 0, 1 if s < 0   (sign)
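A short sketch of this dead-zone quantizer, applied to the wavelet coefficients of the earlier example (δ = 64; names are illustrative):

```python
import numpy as np

def deadzone_quantize(s, delta):
    """Dead-zone quantizer: magnitude floor(|s| / delta), sign coded separately."""
    magnitude = np.floor(np.abs(s) / delta).astype(int)
    sign = (s < 0).astype(int)            # 0 for non-negative, 1 for negative coefficients
    return magnitude, sign

coeffs = np.array([4123.0, -12.4, -96.7, 4.5])
print(deadzone_quantize(coeffs, delta=64))   # magnitudes [64 0 1 0], signs [0 1 1 0]
```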


Entropy coding

[Diagram: quantized coefficients (Q=64) 64, 0, -1, 0, … → entropy coder → coded bitstream 0 1 1 0 1 1 0 1 0 1 …]
EBCOT

• Key features of EBCOT (Embedded Block Coding with Optimized Truncation):
  – Low memory requirement in coding and decoding
  – Easy rate control
  – High compression performance
  – Region of interest (ROI) access
  – Error resilience
  – Modest complexity
Block structure in EBCOT

Encode each block separately &


record the bitstream of each block.
Block size is 64x64.
Progressive encoding
Quantizer with dead zone

ν[m,n] = ⌊ |s[m,n]| / δ ⌋         (quantized magnitude)
χ[m,n] = 0 if s ≥ 0, 1 if s < 0   (sign)


ROI: Region of interest

Scale down the coefficients outside the ROI so that they are
in lower bit-planes.
The ROI bits are decoded or refined before the rest of the image.
ROI: Region of interest

• Sequence-based mode:
  – ROI coefficients are coded as independent sequences
  – Allows random access to the ROI without fully decoding
  – Can specify exact quality/bitrate for the ROI and the BG
• Scaling-based mode:
  – Scale ROI mask coefficients up (the decoder scales them down)
  – During encoding the ROI mask coefficients are found
    significant at early stages of the coding
  – ROI always coded with better quality than the BG
  – Can't specify the rate for the BG and the ROI
Tiling

• Image ⇒ Component ⇒ Tile ⇒ Subband ⇒ Code-Block ⇒ Bit-Planes
JPEG 2000 vs JPEG

[Figures: JPEG uses the DCT, JPEG 2000 uses the wavelet transform (WT)]

JPEG 2000 vs JPEG: Quantization

[Figures: quantization in JPEG vs. JPEG 2000]

JPEG 2000 vs JPEG: 0.3 bpp

[Figures: reconstructions with JPEG vs. JPEG 2000 at 0.3 bpp]
JPEG 2000 vs JPEG: Bitrate = 0.3 bpp

JPEG:       MSE = 150   PSNR = 26.2 dB
JPEG 2000:  MSE = 73    PSNR = 29.5 dB

JPEG 2000 vs JPEG: Bitrate = 0.2 bpp

JPEG:       MSE = 320   PSNR = 23.1 dB
JPEG 2000:  MSE = 113   PSNR = 27.6 dB
