Lec13: Image Compression
Compression ratio:
$C_R = \dfrac{n_1}{n_2}$
where $n_1$ and $n_2$ are the numbers of information-carrying units (e.g., bits) in the original and compressed data sets.

Relative data redundancy:
$R_d = 1 - \dfrac{1}{C_R}$

Example: if $C_R = 10$, then $R_d = 0.9$, i.e., 90% of the data in the original data set is redundant.
The reduction in file size is necessary to meet the bandwidth requirements of many transmission systems, and the storage requirements of computer databases.
Why Can We Compress?
• Spatial redundancy
  • Neighboring pixels are not independent but correlated
• Temporal redundancy
  • Adjacent frames in a video sequence are highly correlated
Fundamentals
Coding Redundancy

Example: average code length with a variable-length code:
$L_{avg} = \sum_{k=0}^{L-1} l_2(r_k)\, p_r(r_k)$
$= 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7$ bits

Compression ratio: $C_R = \dfrac{n_1}{n_2} = \dfrac{3}{2.7} \approx 1.11$

Relative data redundancy: $R_d = 1 - \dfrac{1}{C_R} = 1 - \dfrac{1}{1.11} \approx 0.099$
Spatial Redundancy (Geometric Redundancy)

Normalized autocorrelation along an image row:
$\gamma(\Delta n) = \dfrac{A(\Delta n)}{A(0)}$
where
$A(\Delta n) = \dfrac{1}{N - \Delta n} \sum_{y=0}^{N-1-\Delta n} f(x,y)\, f(x, y + \Delta n)$
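As a sketch (assuming NumPy; normalized_autocorr is a hypothetical helper name), the definition translates directly into code:

import numpy as np

def normalized_autocorr(row, max_shift):
    # gamma(dn) = A(dn) / A(0), where A(dn) averages f(y) * f(y + dn)
    N = len(row)
    A = [np.mean(row[:N - dn] * row[dn:]) for dn in range(max_shift + 1)]
    return np.array(A) / A[0]

row = np.array([10, 12, 11, 13, 12, 14, 13, 15], dtype=float)
print(normalized_autocorr(row, 3))  # values near 1 indicate strong correlation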
Inter-pixel Redundancy
[Figure: original image, binarized image, and its run-length representation]
$C_R = 2.63$, $R_d = 1 - \dfrac{1}{2.63} \approx 0.62$
Psycho-visual Redundancy
• Improved Gray-Scale (IGS) quantization: before truncating each pixel to fewer bits, add in the low-order bits of a running sum; the pseudo-random perturbation breaks up false contouring.
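A minimal sketch of IGS quantization, assuming 8-bit input and the usual rule that a pixel whose high-order bits are all ones is passed through unmodified (igs_quantize is a hypothetical helper name; NumPy assumed):

import numpy as np

def igs_quantize(image, bits=4):
    shift = 8 - bits                  # number of bits discarded per pixel
    low_mask = (1 << shift) - 1       # mask selecting the low-order bits
    high_max = 256 - (1 << shift)     # pixels >= this pass through unmodified
    out = np.zeros_like(image)
    flat_in, flat_out = image.ravel(), out.ravel()
    s = 0
    for i, p in enumerate(flat_in):
        p = int(p)
        # Add the previous sum's low-order bits unless that could overflow.
        s = p if p >= high_max else p + (s & low_mask)
        flat_out[i] = s >> shift      # keep only the high-order `bits` bits
    return out

The result has only 2^bits gray levels, but shows far less false contouring than plain truncation would.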
Fidelity Criteria
Objective fidelity:
The level of information loss is expressed as a function of the original image $f(x,y)$ and the reconstructed image $\hat{f}(x,y)$.

Root-mean-square error:
$e_{rms} = \left[\dfrac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\left[\hat{f}(x,y) - f(x,y)\right]^2\right]^{1/2}$

Mean-square signal-to-noise ratio:
$SNR_{ms} = \dfrac{\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\hat{f}(x,y)^2}{\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\left[\hat{f}(x,y) - f(x,y)\right]^2}$
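Both objective criteria translate directly into NumPy (a sketch; fidelity is a hypothetical helper name):

import numpy as np

def fidelity(f, f_hat):
    # e_rms: root-mean-square error between original f and reconstruction f_hat
    err = f_hat.astype(float) - f.astype(float)
    e_rms = np.sqrt(np.mean(err ** 2))
    # SNR_ms: power of the reconstructed signal over the power of the error
    snr_ms = np.sum(f_hat.astype(float) ** 2) / np.sum(err ** 2)
    return e_rms, snr_ms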
Fidelity Criteria
• By absolute rating
• By means of side-by-side comparison of $f(x,y)$ and $\hat{f}(x,y)$
Entropy:
$H = -\sum_{j} P(a_j)\,\log_2 P(a_j)$
(units of information! e.g., bits/pixel)

Redundancy (data vs. info):
$R = L_{avg} - H$
where $L_{avg}$ is the average number of bits per pixel actually used to code the image.
Entropy Estimation (cont’d)
• First-order estimate of H: use the image's normalized gray-level histogram as the probability estimates $P(a_j)$, and apply the entropy formula above.
Differences in Entropy Estimates
• What is the entropy of the pixel differences image?
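A sketch that applies the first-order estimate both to a toy image and to its horizontal pixel differences; the difference image's far lower entropy is exactly the inter-pixel redundancy the question points at (NumPy assumed; the toy image is a hypothetical example):

import numpy as np

def first_order_entropy(img):
    # Normalized histogram as probability estimates, then the entropy formula.
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

img = np.tile(np.arange(0, 256, 8, dtype=np.uint8), (32, 1))  # smooth ramp
diff = (np.diff(img.astype(int), axis=1) % 256).astype(np.uint8)
print(first_order_entropy(img))   # 5.0 bits/pixel (32 equally likely levels)
print(first_order_entropy(diff))  # 0.0 bits/pixel (differences are constant)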
• Criteria
• Subjective: based on human observers
• Objective: mathematically defined criteria
Subjective Fidelity Criteria
[Table: absolute rating scale, e.g., excellent / fine / passable / marginal / inferior / unusable]
Lossless Compression
Taxonomy of Lossless Methods
Huffman Coding
(addresses coding redundancy)
• Forward pass: order the symbol probabilities and successively merge the two least probable symbols.
• Backward pass: assign code symbols (0 and 1) going backwards through the merges.
Huffman Coding (cont’d)
• $L_{avg}$ assuming Huffman coding is computed as before, weighting each Huffman code-word length by its symbol probability.
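A compact Huffman construction, reusing the probabilities from the coding-redundancy example earlier; this is a sketch (the symbol names a1…a8 are placeholders), not the slide's exact code table:

import heapq

def huffman_code(probs):
    # Heap items: (probability, tiebreak, {symbol: code-so-far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)     # two least probable groups
        p2, _, c2 = heapq.heappop(heap)
        # The backward pass realized incrementally: prefix 0/1 onto each group.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a1": 0.19, "a2": 0.25, "a3": 0.21, "a4": 0.16,
         "a5": 0.08, "a6": 0.06, "a7": 0.03, "a8": 0.02}
code = huffman_code(probs)
L_avg = sum(probs[s] * len(code[s]) for s in probs)
print(code, L_avg)  # an optimal code; L_avg = 2.7 bits/symbol

For these probabilities the optimal $L_{avg}$ is 2.7 bits/symbol, matching the variable-length code of the earlier example.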
Arithmetic Coding
(addresses coding redundancy)
1) Start with the interval [0, 1)
2) Subdivide [0, 1) based on the probabilities of the $\alpha_i$
3) Narrow to the matching sub-interval as each symbol is encoded

Encode the message α1 α2 α3 α3 α4, with P(α1) = 0.2, P(α2) = 0.2, P(α3) = 0.4, P(α4) = 0.2:

[Figure: successive subdivision of [0, 1); initial boundaries 0, 0.2, 0.4, 0.8, 1]

Final interval: [0.06752, 0.0688); code: 0.068 (must be inside the sub-interval)
Example (cont’d)
• The message α1 α2 α3 α3 α4 is encoded using 3 decimal digits, or 3/5 = 0.6 decimal digits per source symbol.

Decode 0.572 (sequence length = 5) by repeatedly locating it within the nested sub-intervals:

[Figure: decoding diagram with symbol rows α1 … α4; successive boundary values from the slide: lower 0.0, 0.4, 0.56, 0.56, 0.5664; intermediate 0.2, 0.48, 0.592, 0.5664, 0.56768; upper 0.8, 0.72, 0.688, 0.5856, 0.57152]

Decoded message: α3 α3 α1 α2 α4
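The whole example can be replayed in a few lines of Python; the cumulative model below encodes the probabilities read off the diagrams (encode/decode are sketch helpers, and floating-point arithmetic stands in for the arbitrary-precision arithmetic a real coder uses, so this only works for short messages):

# Sub-interval [lo, hi) for each symbol: P = 0.2, 0.2, 0.4, 0.2.
CUM = {"a1": (0.0, 0.2), "a2": (0.2, 0.4), "a3": (0.4, 0.8), "a4": (0.8, 1.0)}

def encode(message):
    low, high = 0.0, 1.0
    for s in message:
        lo, hi = CUM[s]
        low, high = low + (high - low) * lo, low + (high - low) * hi
    return low, high        # any number inside [low, high) is a valid code

def decode(code, length):
    low, high, out = 0.0, 1.0, []
    for _ in range(length):
        for s, (lo, hi) in CUM.items():
            a = low + (high - low) * lo
            b = low + (high - low) * hi
            if a <= code < b:   # the code pins down the next symbol
                out.append(s)
                low, high = a, b
                break
    return out

print(encode(["a1", "a2", "a3", "a3", "a4"]))  # ~(0.06752, 0.0688)
print(decode(0.572, 5))                        # ['a3','a3','a1','a2','a4']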
LZW Coding
(addresses interpixel redundancy)
• Requires no prior knowledge of symbol probabilities.
Dictionary Location   Entry
0                     0
1                     1
.                     .
255                   255
256                   -
...                   ...
511                   -

The dictionary is initialized with the 256 gray levels; locations 256–511 are filled as new sequences are encountered.
LZW Coding (cont’d)
Example: As the encoder examines image pixels, gray-level sequences (i.e., blocks) that are not in the dictionary are assigned to a new entry.

39  39  126  126
39  39  126  126
39  39  126  126
39  39  126  126

- Is 39 in the dictionary?……..Yes
- What about 39-39?………….No
* Add 39-39 at location 256

Dictionary Location   Entry
0                     0
1                     1
.                     .
255                   255
256                   39-39
...                   ...
511                   -
Example
39  39  126  126        Concatenated sequence: CS = CR + P
39  39  126  126        (CR = currently recognized sequence, P = next pixel)
39  39  126  126
39  39  126  126

CR = empty
repeat
    P = next pixel
    CS = CR + P
    if CS is found in the dictionary:
        (1) no output
        (2) CR = CS
    else:
        (1) output D(CR)
        (2) add CS to D
        (3) CR = P
until all pixels are processed
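The loop above turns directly into working Python; this sketch assumes 8-bit pixels and an unbounded dictionary (a real coder would cap it at 512 entries for 9-bit codes):

def lzw_encode(pixels):
    dictionary = {(i,): i for i in range(256)}  # locations 0..255: gray levels
    next_code = 256
    cr = ()                                     # currently recognized sequence
    out = []
    for p in pixels:
        cs = cr + (p,)                          # CS = CR + P
        if cs in dictionary:
            cr = cs                             # keep growing the sequence
        else:
            out.append(dictionary[cr])          # output D(CR)
            dictionary[cs] = next_code          # add CS to D
            next_code += 1
            cr = (p,)                           # restart from the current pixel
    if cr:
        out.append(dictionary[cr])              # flush the final sequence
    return out

img = [39, 39, 126, 126] * 4
print(lzw_encode(img))  # [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]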
Decoding LZW
• Use the dictionary for decoding the “encoded output”
sequence.
• The dictionary need not be sent with the encoded
output.
• Can be built "on the fly" by the decoder as it reads the received code words.
Run-length coding (RLC)
(addresses interpixel redundancy)
• Reduce the size of a repeating string of symbols (i.e., runs):
e.g., (0,1)(1,1)(0,1)(1,0)(0,2)(1,4)(0,2)
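As a sketch, one common convention encodes a sequence as (value, run length) pairs (rle_encode is a hypothetical helper name):

def rle_encode(seq):
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([v, 1])       # start a new run
    return [tuple(r) for r in out]

print(rle_encode([0, 1, 0, 0, 1, 1, 1, 1, 0, 0]))
# [(0, 1), (1, 1), (0, 2), (1, 4), (0, 2)]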
Bit-plane coding
(addresses interpixel redundancy)
• Typical compression ratios of 0.5 to 1.2 are achieved with complex 8-bit monochrome images.
• The compression results using this method can be improved by preprocessing to reduce the number of gray levels, but then the compression is not lossless.
• Since adjacent pixel values are highly correlated, they tend to be close in gray level rather than identical; a small gray-level change (e.g., 127 → 128) can flip every bit plane at once, breaking up runs, which is problematic for RLC.
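A sketch showing both the bit-plane decomposition and why near-identical gray levels can still be hostile to bit-plane RLC (NumPy assumed; bit_planes is a hypothetical helper name):

import numpy as np

def bit_planes(image):
    # Split an 8-bit image into its 8 binary planes; plane 0 = LSB, 7 = MSB.
    return [(image >> b) & 1 for b in range(8)]

# 127 = 01111111 and 128 = 10000000: a one-level change flips all 8 planes,
# destroying the runs that bit-plane RLC depends on.
img = np.array([[127, 128]], dtype=np.uint8)
print([int(p[0, 0]) != int(p[0, 1]) for p in bit_planes(img)])  # all True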
Lossy Methods - Taxonomy
Lossy Compression
• Transform the image into some other domain to
reduce interpixel redundancy.
Example: Fourier transform, keeping only $K \times K$ coefficients:
$\hat{f}(x,y) = \sum_{u=0}^{K-1}\sum_{v=0}^{K-1} F(u,v)\, e^{\,j2\pi(ux + vy)/N}, \qquad K \ll N$
Transform Selection
DCT:

Forward:
$C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\cos\!\left[\dfrac{(2x+1)u\pi}{2N}\right]\cos\!\left[\dfrac{(2y+1)v\pi}{2N}\right]$

Inverse:
$f(x,y) = \sum_{u=0}^{N-1}\sum_{v=0}^{N-1} \alpha(u)\,\alpha(v)\, C(u,v)\cos\!\left[\dfrac{(2x+1)u\pi}{2N}\right]\cos\!\left[\dfrac{(2y+1)v\pi}{2N}\right]$

where $\alpha(u) = \sqrt{1/N}$ if $u = 0$ and $\alpha(u) = \sqrt{2/N}$ if $u > 0$ (similarly for $\alpha(v)$).
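SciPy's dctn/idctn with norm="ortho" correspond to the $\alpha(u)\,\alpha(v)$ scaling above, which makes the transform pair easy to sanity-check (a sketch, assuming SciPy is available):

import numpy as np
from scipy.fft import dctn, idctn

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
C = dctn(block, norm="ortho")      # forward 2-D DCT
back = idctn(C, norm="ortho")      # inverse 2-D DCT
print(np.allclose(block, back))    # True: the DCT is exactly invertible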
DCT (cont’d)
• Basis functions for a 4x4 image (i.e., cosines of different frequencies).
DCT (cont’d)
[Figure: reconstructions using the DFT, WHT, and DCT on 8 x 8 sub-images (64 coefficients per sub-image), after truncating 50% of the coefficients]
Reconstructions
• DFT has (implicit) n-point periodicity
• DCT has (implicit) 2n-point periodicity, so blocks meet without the sharp boundary discontinuities the DFT introduces
JPEG Compression
[Figure: JPEG encoder/decoder block diagram; the encoder ends with an entropy encoder, and the decoder begins with an entropy decoder]

• Accepted as an international image compression standard in 1992.
JPEG - Steps
1. Divide the image into 8x8 subimages.
2. Shift the gray levels by $-2^{k-1}$ (e.g., subtract 128 from an 8-bit image).
3. Apply the 2-D DCT to each subimage.
4. Quantization: $\hat{C}(u,v) = \mathrm{round}\!\left[\dfrac{C(u,v)}{Z(u,v)}\right]$, where $Z(u,v)$ is the quantization table.
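Steps 2–4 for a single block, as a sketch using SciPy's orthonormal DCT and the luminance quantization table from the JPEG spec (Annex K); real encoders also scale Z with a quality factor, omitted here:

import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table Z(u,v).
Z = np.array([[16, 11, 10, 16,  24,  40,  51,  61],
              [12, 12, 14, 19,  26,  58,  60,  55],
              [14, 13, 16, 24,  40,  57,  69,  56],
              [14, 17, 22, 29,  51,  87,  80,  62],
              [18, 22, 37, 56,  68, 109, 103,  77],
              [24, 35, 55, 64,  81, 104, 113,  92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103,  99]])

def block_forward(block):
    shifted = block.astype(float) - 128       # step 2: level shift
    C = dctn(shifted, norm="ortho")           # step 3: 2-D DCT
    return np.round(C / Z).astype(int)        # step 4: quantize

def block_inverse(Cq):
    return idctn(Cq * Z, norm="ortho") + 128  # de-quantize and invert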
JPEG Steps (cont’d)
5. Order the coefficients using zig-zag ordering
- Creates long runs of zeros (i.e., ideal for run-length encoding)
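The zig-zag order can be generated rather than hard-coded; this sketch sorts block positions by anti-diagonal, alternating direction (zigzag_indices is a hypothetical helper name):

def zigzag_indices(n=8):
    # Sort by anti-diagonal d = row + col; odd diagonals run top-to-bottom
    # (sort by row), even diagonals run bottom-to-top (sort by col).
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

print(zigzag_indices(3))
# [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (1, 2), (2, 1), (2, 2)]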
JPEG Steps (cont’d)
6. Encode the coefficients:
• Each nonzero coefficient is represented by an intermediate symbol pair, symbol_1 (SIZE) and symbol_2 (AMPLITUDE), where SIZE is the number of bits needed to encode AMPLITUDE.
• The DC coefficient is coded predictively: only its difference from the DC coefficient of the previous subimage is encoded.
• For the AC coefficients, symbol_1 also records the run of zeros preceding each nonzero coefficient (the intermediate symbol sequence for the AC coefficients).
Effect of Quantization: non-homogeneous 8 x 8 block
[Figure: original block, quantized and de-quantized coefficient arrays, and the reconstructed block; for this non-homogeneous block the reconstruction error is high!]
Case Study: Fingerprint
Compression
• The FBI is digitizing fingerprints at 500 dots per inch with 8 bits of grayscale resolution.
• A single fingerprint card turns into about 10 MB of data!
WSQ (Wavelet Scalar Quantization) Algorithm
• Wavelet-based, so there are no "blocky" artifacts.