Image Compression
National University of Sciences and Technology (NUST)
School of Electrical Engineering and Computer Science (SEECS)
Khawar Khurshid
Motivation
Storage needed for a two-hour standard television movie (color):
Image size = 720 x 480 pixels
Frame rate = 30 fps (frames per second)

30 frames/sec x (720 x 480) pixels/frame x 3 bytes/pixel = 31,104,000 bytes/sec

For a 2-hour movie:
31,104,000 bytes/sec x 3600 sec/hr x 2 hrs = 2.24 x 10^11 bytes ≈ 224 GB
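The arithmetic above can be verified directly with a quick Python check:

```python
# Storage for a two-hour, 24-bit color movie at 720 x 480, 30 fps.
width, height = 720, 480
bytes_per_pixel = 3          # one byte per R, G, B channel
fps = 30

bytes_per_sec = fps * width * height * bytes_per_pixel
total_bytes = bytes_per_sec * 3600 * 2   # 2 hours

print(bytes_per_sec)   # 31104000
print(total_bytes)     # 223948800000, i.e. ~2.24e11 bytes (~224 GB)
```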
Image Compression
Principal objective
To minimize the number of bits required to represent an image
Applications
Transmission: broadcast TV, remote sensing via satellite, military communications via aircraft, radar and sonar, teleconferencing, and computer communications
Overview
Image data compression methods fall into two common categories:
Information-preserving (lossless) compression
Lossy compression
Compression Ratio and Relative Redundancy
Compression ratio: C = b / b'
where b is the number of bits of the first (original) dataset and b' the number of bits of the compressed dataset
Relative data redundancy: R = 1 - 1/C
Data Redundancy
Image compression techniques can be designed to reduce or eliminate data redundancy
Three basic data redundancies:
Spatial and temporal redundancy
Coding redundancy
Irrelevant information
Spatial Redundancy
Consider a computer-generated (synthetic) 8-bit image, M = N = 256, with these features:
All 256 gray levels are equally probable (uniform histogram), so variable-length coding cannot be applied
The gray levels of each line are selected randomly, so pixels are independent of one another in the vertical direction
Pixels along each line are identical; they are completely dependent on one another in the horizontal direction
The image therefore exhibits spatial redundancy
Spatial Redundancy
The spatial redundancy can be eliminated by using run-length pairs (a mapping scheme)
A run-length pair has two parts:
The start of a new intensity
The number of consecutive pixels having that intensity
Example (consider the image shown in the previous slide):
Each 256-pixel line of the original image is replaced by a single 8-bit intensity value and the length of the run of consecutive pixels having that intensity (256)
Compression ratio = (256 x 256 x 8) / ([256 + 256] x 8) = 128
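The run-length mapping can be sketched as follows (a minimal illustration, not a production codec; the per-row bit accounting mirrors the slide's [256 + 256] x 8 count):

```python
import random

def run_length_encode(row):
    """Return (intensity, run_length) pairs for one image row."""
    pairs = []
    i = 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1
        pairs.append((row[i], j - i))   # new intensity + run length
        i = j
    return pairs

# A 256 x 256 image where each row holds a single (random) intensity:
random.seed(0)
image = [[random.randrange(256)] * 256 for _ in range(256)]

pairs_per_row = [run_length_encode(row) for row in image]
# Each row collapses to one (intensity, 256) pair -> 2 values x 8 bits.
original_bits = 256 * 256 * 8
compressed_bits = sum(len(p) for p in pairs_per_row) * 2 * 8
print(original_bits / compressed_bits)   # 128.0
```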
Coding Redundancy
A natural m-bit coding method assigns m bits to each gray level without considering the probability with which that gray level occurs, so it is very likely to contain coding redundancy
Basic concept
Utilize the probability of occurrence of each gray level (histogram) to determine the length of the code representing that particular gray level: variable-length coding
Assign shorter code words to the gray levels that occur most frequently, and longer code words to those that occur rarely
Coding Redundancy
Let 0 <= r_k <= 1 be the gray levels (a discrete random variable)
p_r(r_k): probability of occurrence of r_k
n_k: frequency of gray level r_k
n: total number of pixels in the image
L: total number of gray levels
l(r_k): number of bits used to represent r_k
L_avg: average length of the code words assigned to the gray levels

L_avg = sum_{k=0}^{L-1} l(r_k) p_r(r_k), where p_r(r_k) = n_k / n, k = 0, 1, ..., L-1

Hence, the total number of bits required to code an M x N pixel image is M N L_avg
For a natural m-bit coding, L_avg = m
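The definition of L_avg translates directly into code. A minimal sketch; the 4-level source and the code lengths below are illustrative assumptions, not values from the slides:

```python
from collections import Counter

def l_avg(pixels, code_lengths):
    """L_avg = sum_k l(r_k) * p_r(r_k), with p_r(r_k) = n_k / n.

    code_lengths maps gray level r_k -> l(r_k) in bits.
    """
    n = len(pixels)
    hist = Counter(pixels)                      # n_k for each gray level
    return sum(code_lengths[r] * cnt / n for r, cnt in hist.items())

# Example source: 4 gray levels with probabilities 0.5, 0.25, 0.125, 0.125
pixels = [0] * 4 + [1] * 2 + [2] * 1 + [3] * 1
natural = {0: 2, 1: 2, 2: 2, 3: 2}       # natural 2-bit code: L_avg = m = 2
variable = {0: 1, 1: 2, 2: 3, 3: 3}      # shorter codes for likely levels
print(l_avg(pixels, natural))    # 2.0
print(l_avg(pixels, variable))   # 1.75
```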
Example: suppose a variable-length code achieves L_avg = 1.81 bits/pixel for a 256 x 256, 8-bit image
C = (256 x 256 x 8) / (256 x 256 x 1.81) = 4.42
R = 1 - 1/4.42 = 0.774
i.e., 77.4% of the data in the natural 8-bit encoding is redundant
Irrelevant Information
The eye does not respond with equal sensitivity to all visual information
Certain information has less relative importance than other information in normal visual processing
Example: a computer-generated (synthetic) 8-bit image, M = N = 256, that appears homogeneous; we can use its mean value alone to encode this image
Redundancy
Coding redundancy: due to the different occurrence rates of the gray levels
Inter-pixel (spatial) redundancy: due to the dependence between neighboring pixels
Psycho-visual redundancy: due to visually irrelevant information
Redundancy - Recap
Compression Ratio?
Relative Redundancy?
Fidelity Criteria
Fidelity criteria quantify the nature and extent of information loss
The level of information loss can be expressed as a function of the original (input) image and the compressed-then-decompressed (output) image
Given an M x N image f(x, y) and its compressed-then-decompressed approximation f̂(x, y), the error at each pixel is
e(x, y) = f̂(x, y) - f(x, y)
Total error:
sum_{x=0}^{M-1} sum_{y=0}^{N-1} [f̂(x, y) - f(x, y)]
Fidelity Criteria
Normally the objective fidelity criterion parameters are as follows:
Root-mean-square error:
e_rms = [ (1 / (M N)) sum_{x=0}^{M-1} sum_{y=0}^{N-1} [f̂(x, y) - f(x, y)]^2 ]^(1/2)
Mean-square signal-to-noise ratio:
SNR_ms = ( sum_{x=0}^{M-1} sum_{y=0}^{N-1} f̂(x, y)^2 ) / ( sum_{x=0}^{M-1} sum_{y=0}^{N-1} [f̂(x, y) - f(x, y)]^2 )
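Both criteria translate directly into code. A minimal sketch on toy 2 x 2 "images" (the pixel values are made up for illustration):

```python
import math

def e_rms(f, fhat):
    """Root-mean-square error between image f and approximation fhat."""
    M, N = len(f), len(f[0])
    total = sum((fhat[x][y] - f[x][y]) ** 2
                for x in range(M) for y in range(N))
    return math.sqrt(total / (M * N))

def snr_ms(f, fhat):
    """Mean-square signal-to-noise ratio of the output image fhat."""
    num = sum(v ** 2 for row in fhat for v in row)
    den = sum((fhat[x][y] - f[x][y]) ** 2
              for x in range(len(f)) for y in range(len(f[0])))
    return num / den

f    = [[10, 12], [14, 16]]
fhat = [[11, 12], [14, 15]]
print(e_rms(f, fhat))    # sqrt((1 + 0 + 0 + 1) / 4) = 0.707...
print(snr_ms(f, fhat))   # (121 + 144 + 196 + 225) / 2 = 343.0
```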
Compression Techniques
Recall
The computer-generated (synthetic) 8-bit image, M = N = 256, with spatial redundancy
Recall
Compression ratio = (256 x 256 x 8) / ([256 + 256] x 8) = 128
Run-length example: the initial 62 pixels are 1, the next 87 pixels are 0, and so on
Huffman Coding
Variable-length code
Error-free (lossless) compression technique
Reduces coding redundancy by minimizing L_avg: shorter code words are assigned to the most probable gray levels
Huffman Coding
Arrange the symbol probabilities p_i in decreasing order; consider them as leaf nodes of a tree
While there are more than two nodes:
Merge the two nodes with the smallest probabilities to form a new node whose probability is the sum of the two merged nodes
Arrange the combined node according to its probability in the tree
Repeat until only two nodes are left
Huffman Coding
Starting from the top, arbitrarily assign 1 and 0 to each pair of branches
merging into a node
Continue sequentially from the root node to the leaf node where the
symbol is located to complete the coding
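The merge-and-label procedure can be sketched with a priority queue. Tie-breaking between equal probabilities is arbitrary, so the exact bit patterns may differ from any particular figure while L_avg stays minimal; the six-symbol probabilities below are assumed from the standard example consistent with the decoding slide that follows:

```python
import heapq

def huffman_codes(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> bit string."""
    # Each heap entry: (probability, tie-break id, {symbol: partial code}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # label the two branches
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

probs = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
codes = huffman_codes(probs)
avg_len = sum(probs[s] * len(codes[s]) for s in probs)
print(avg_len)   # 2.2 bits/symbol for this source
```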
Huffman Coding
Consider the following encoded string of code symbols:
010100111100
The sequence can be decoded by examining the string from left to right:
010100111100 -> a3 a1 a2 a2 a6
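The left-to-right scan can be sketched directly. The codebook below is an assumption, taken from the standard six-symbol example that produces this decoding; because Huffman codes are prefix-free, the first match is always the correct symbol:

```python
# Assumed codebook for the worked example (prefix-free).
codebook = {"1": "a2", "00": "a6", "011": "a1", "0100": "a4",
            "01010": "a3", "01011": "a5"}

def huffman_decode(bits, codebook):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in codebook:          # prefix property: match is unambiguous
            out.append(codebook[buf])
            buf = ""
    return out

print(huffman_decode("010100111100", codebook))
# ['a3', 'a1', 'a2', 'a2', 'a6']
```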
LZW Coding
Example: consider a 4 x 4, 8-bit image whose rows are each 39 39 126 126. The dictionary is initialized with the gray levels 0-255; new entries are added starting at location 256. Encoding the pixel sequence produces the output
39 39 126 126 256 258 260 259 257 126
and builds the following dictionary entries:

Entry  Sequence
256    39 39
257    39 126
258    126 126
259    126 39
260    39 39 126
261    126 126 39
262    39 39 126 126
263    126 39 39
264    39 126 126
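A sketch of an LZW encoder that reproduces the output sequence and dictionary entries above:

```python
def lzw_encode(pixels):
    """LZW encoding: dictionary starts with gray levels 0-255."""
    dictionary = {(i,): i for i in range(256)}
    next_code = 256
    out, current = [], ()
    for p in pixels:
        candidate = current + (p,)
        if candidate in dictionary:      # grow the recognized sequence
            current = candidate
        else:                            # emit code, add new dictionary entry
            out.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = (p,)
    if current:
        out.append(dictionary[current])
    return out, dictionary

# Four rows of 39 39 126 126:
pixels = [39, 39, 126, 126] * 4
codes, d = lzw_encode(pixels)
print(codes)   # [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]
```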
Lossless Predictive Coding
A predictor forms an estimate of each pixel from the m previous pixels:
f̂_n = round( sum_{i=1}^{m} α_i f_{n-i} )
The prediction error
e_n = f_n - f̂_n
is then coded instead of the pixel itself
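The predictor/error pair can be sketched for the first-order case (m = 1, α = 1, an illustrative choice; sending the first sample as-is is also an assumption). The decoder recovers the pixels exactly, which is why the scheme is lossless:

```python
def predict_errors(f, alpha=1.0):
    """e_n = f_n - round(alpha * f_{n-1}); first sample sent as-is."""
    errors = [f[0]]
    for n in range(1, len(f)):
        fhat = round(alpha * f[n - 1])
        errors.append(f[n] - fhat)
    return errors

def reconstruct(errors, alpha=1.0):
    """Invert the predictor: f_n = e_n + round(alpha * f_{n-1})."""
    f = [errors[0]]
    for n in range(1, len(errors)):
        f.append(errors[n] + round(alpha * f[n - 1]))
    return f

row = [100, 102, 103, 103, 104, 110]
e = predict_errors(row)
print(e)                      # [100, 2, 1, 0, 1, 6] -- small, low-entropy values
print(reconstruct(e) == row)  # True: error-free
```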
Arithmetic Coding
Variable-length code
Error-free compression technique
A sequence of source symbols is assigned a single arithmetic code word
A one-to-one correspondence between source symbols and code words does not exist
Arithmetic Coding
The code word defines an interval of real numbers in the range [0, 1)
Each symbol of the message reduces the size of the interval in accordance with its probability of occurrence

set low = 0.0, high = 1.0
while there are still input symbols do
    get an input symbol
    range = high - low
    high = low + range * high_range(symbol)
    low  = low + range * low_range(symbol)
end while
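The pseudocode above translates almost line for line into Python. Floating point is used here as a sketch (practical coders use scaled integer arithmetic to avoid precision loss); the symbol ranges are those of the worked example on the next slides:

```python
# low_range / high_range per symbol, as (low, high) pairs.
ranges = {"a": (0.0, 0.2), "e": (0.2, 0.5), "i": (0.5, 0.6),
          "o": (0.6, 0.8), "u": (0.8, 0.9), "!": (0.9, 1.0)}

def arithmetic_encode(message):
    low, high = 0.0, 1.0
    for symbol in message:
        rng = high - low                 # current interval size
        lo_r, hi_r = ranges[symbol]
        high = low + rng * hi_r          # shrink interval to the symbol's
        low = low + rng * lo_r           # sub-range, scaled by probability
    return low, high     # any number in [low, high) encodes the message

low, high = arithmetic_encode("eaii!")
print(low, high)         # 0.23354 0.2336 (up to float rounding)
```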
Arithmetic Coding

Symbol  Probability  Range
a       .2           [0, 0.2)
e       .3           [0.2, 0.5)
i       .1           [0.5, 0.6)
o       .2           [0.6, 0.8)
u       .1           [0.8, 0.9)
!       .1           [0.9, 1)
Example: encoding the message eaii! with the table above
After seeing e, the interval is [0.2, 0.5)
After seeing a, the interval is [0.2, 0.26)
After seeing i, the interval is [0.23, 0.236)
After seeing i, the interval is [0.233, 0.2336)
After seeing !, the interval is [0.23354, 0.2336)
Any number in the final interval, e.g. 0.23354, encodes the message
Arithmetic Decoding
get the encoded number
do
    find the symbol whose range straddles the encoded number
    output the symbol
    range = high_range(symbol) - low_range(symbol)
    encoded number = (encoded number - low_range(symbol)) / range
until no more symbols
(using the same symbol table as before)
Apply decoding to the encoded number 0.23354
At each step, the number lies entirely within the interval the model allocates for exactly one symbol, which is output:

Encoded number  Output symbol  Low   High  Range
0.23354         e              0.2   0.5   0.3
0.1118          a              0.0   0.2   0.2
0.559           i              0.5   0.6   0.1
0.59            i              0.5   0.6   0.1
0.9             !              0.9   1.0   0.1

The ! symbol terminates the message
Khawar Khurshid
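The decoding loop can be sketched with exact rational arithmetic: `fractions.Fraction` avoids floating-point trouble at interval boundaries (e.g. testing whether 0.9 falls in [0.9, 1)). Treating "!" as a message terminator is an assumption carried over from the worked example:

```python
from fractions import Fraction as F

# Same symbol table as the encoder, as exact fractions.
ranges = {"a": (F(0), F(1, 5)), "e": (F(1, 5), F(1, 2)),
          "i": (F(1, 2), F(3, 5)), "o": (F(3, 5), F(4, 5)),
          "u": (F(4, 5), F(9, 10)), "!": (F(9, 10), F(1))}

def arithmetic_decode(number):
    out = []
    while True:
        for symbol, (lo, hi) in ranges.items():
            if lo <= number < hi:        # the range straddling the number
                out.append(symbol)
                number = (number - lo) / (hi - lo)   # rescale to [0, 1)
                break
        if out[-1] == "!":               # assumed terminator symbol
            return "".join(out)

print(arithmetic_decode(F("0.23354")))   # eaii!
```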
Arithmetic Coding - Exercise
Draw the encoding sequence using arithmetic coding for the following symbols. Clearly write all the calculated values and draw the figure for each iteration of the encoding process, starting with iteration zero.
symbols = {a, b, c, d}
corresponding probabilities = {0.2, 0.2, 0.4, 0.2}
End
Image Compression