
Multimedia Computing

Department of Computer Techniques Engineering

By
Dr. Fadwa Al Azzo
[email protected]
Multimedia Computing

Image Compression Technique (2)


Image Compression
Image Compression Models

 The Source Encoder and Decoder:

 The Source Encoder reduces or eliminates coding, interpixel, and psychovisual redundancies.


 The Source Encoder contains three processes (a minimal sketch of the pipeline follows the list):
• Mapper: Transforms the image into an array of coefficients, reducing interpixel redundancies. This is a reversible process and is not lossy.
• Quantizer: Reduces the accuracy, and hence the psychovisual redundancies, of a given image. This process is irreversible and therefore lossy.
• Symbol Encoder: The source-coding stage, in which a fixed- or variable-length code is used to represent the mapped and quantized data. This is a reversible (not lossy) process; it removes coding redundancy by assigning the shortest codes to the most frequently occurring output values.
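
To make the three stages concrete, here is a minimal Python sketch of the pipeline on a 1-D signal, using a simple difference mapper and a uniform quantizer. The function names and the step size of 16 are illustrative assumptions, not part of any standard codec.

import numpy as np

def mapper(pixels):
    # Reversible: keep the first pixel, then neighbor differences
    # (reduces interpixel redundancy in smooth regions).
    return np.concatenate(([pixels[0]], np.diff(pixels)))

def inverse_mapper(coeffs):
    # Exactly undoes the mapper: a cumulative sum restores the pixels.
    return np.cumsum(coeffs)

def quantizer(coeffs, step=16):
    # Irreversible: rounding discards precision -- the only lossy stage.
    return np.round(coeffs / step).astype(int)

def dequantizer(indices, step=16):
    # Approximate inverse; the rounding error cannot be recovered.
    return indices * step

pixels = np.array([21, 21, 21, 95, 169, 243, 243, 243])
reconstructed = inverse_mapper(dequantizer(quantizer(mapper(pixels))))
print(reconstructed)  # close to, but not exactly, the original pixels

Dropping the quantizer/dequantizer pair from this chain makes it exactly invertible, which is the error-free case discussed below.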
Image Compression
Image Compression Models

 The Source Encoder and Decoder:

 The Source Decoder contains two components.


• Symbol Decoder: The inverse of the symbol encoder; the variable-length coding is reversed.
• Inverse Mapper: Undoes the mapping, reconstructing the image from the coefficient array produced by the mapper.

 The only lossy element is the Quantizer, which removes the psychovisual redundancies and causes irreversible loss.
 Every lossy compression method contains the quantizer module.
 If error-free compression is desired, the quantizer module is removed.
Image Compression
 Information Theory-Entropy

• Measuring Information: The information in an image can be modeled as a probabilistic process,


where we first develop a statistical model of the image generation process. The information content
(entropy) can be estimated based on this model.

• Entropy encoding is a method of lossless compression that is performed on an image after the
quantization stage. It enables one to represent an image in a more efficient way with less memory
needed for storage or transmission.

• The information per source symbol (or pixel), also referred to as the entropy, is calculated by:

$H = -\sum_{j=1}^{J} P(a_j)\,\log_2 P(a_j)$  (bits/symbol)

where $P(a_j)$ refers to the source symbol/pixel probabilities and $J$ is the number of distinct symbols or pixel values.
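
As a quick illustration, here is a minimal Python sketch of this formula; the helper name entropy_bits is my own, and the probabilities are estimated from the gray-level histogram of the image:

import numpy as np

def entropy_bits(image):
    # Estimate P(a_j) as the relative frequency of each gray level,
    # then apply H = -sum_j P(a_j) * log2 P(a_j).
    _, counts = np.unique(image, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))  # bits per pixel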
Image Compression
 Information Theory-Entropy

• Measuring Information: The entropy of the given 8-bit image segment can be calculated by:

21  21  21  95 169 243 243 243
21  21  21  95 169 243 243 243
21  21  21  95 169 243 243 243
21  21  21  95 169 243 243 243

• The entropy of this image is calculated using the change-of-base rule $\log_2(x) = \frac{\ln(x)}{\ln(2)}$:

Solution: Of the 32 pixels, gray levels 21 and 243 each occur 12 times, so $P = 12/32 = 3/8$; levels 95 and 169 each occur 4 times, so $P = 4/32 = 1/8$.

$\log_2 \frac{3}{8} = \frac{\ln(3/8)}{\ln(2)} = \frac{-0.98}{0.69} = -1.42$

$\log_2 \frac{1}{8} = \frac{\ln(1/8)}{\ln(2)} = \frac{-2.07}{0.69} = -3.01$

$E = -\left[ \frac{3}{8}\log_2\frac{3}{8} + \frac{1}{8}\log_2\frac{1}{8} + \frac{1}{8}\log_2\frac{1}{8} + \frac{3}{8}\log_2\frac{3}{8} \right]$

E = − [(0.375)(−1.42) + (0.125)(−3.01) + (0.125)(−3.01) + (0.375)(−1.42)]

E = − [(−0.533) + (−0.376) + (−0.376) + (−0.533)]

E = − [−1.82] ≈ 1.8 bits/pixel
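
A quick check with the entropy_bits helper sketched earlier, using the segment above:

segment = np.array([[21, 21, 21, 95, 169, 243, 243, 243]] * 4)
print(entropy_bits(segment))  # ~1.81 bits/pixel, matching the hand calculation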


Image Compression
 Error-Free Compression

• Error-Free Compression is generally composed of two relatively independent operations:


(1) reduce the interpixel redundancies
(2) introduce a coding method to reduce the coding redundancies.
• The coding redundancy can be minimized by using a variable-length coding method in which the shortest codes are assigned to the most probable gray levels.
• The most popular variable-length coding method is the Huffman Coding.

• Huffman Coding: Huffman coding involves the following two steps (a minimal sketch follows the list):


(1) Create a series of source reductions by ordering the symbols by probability and combining the two
lowest-probability symbols into a single symbol, which replaces them in the next source reduction.
(2) Code each reduced source, starting with the smallest reduced source and working back to the original
source. Use 0 and 1 to code the simplest two-symbol source.
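
The following is a minimal Python sketch of these two steps, formulated with a priority queue of subtrees rather than explicit reduction tables; the function name huffman_code and the six-symbol probabilities are illustrative assumptions, chosen to be consistent with the average length of 2.2 bits/symbol quoted later.

import heapq
from itertools import count

def huffman_code(probabilities):
    # Build a Huffman code from a {symbol: probability} mapping.
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Step 1: combine the two lowest-probability (super)symbols.
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        # Step 2, implicitly: prepend 0/1 while working back to the source.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (p0 + p1, next(tick), merged))
    return heap[0][2]

probs = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
print(huffman_code(probs))  # a2 gets the shortest (1-bit) codeword

Note that Huffman codes are not unique: this sketch may assign different codewords than the figures below, but the codeword lengths always yield the same average length.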
Image Compression
 Error-Free Compression

• Huffman Coding

(1) Huffman source reductions (figure): the 𝑎𝑖 correspond to the available gray levels in a given image.

(2) Huffman code assignments (figure): the first code assignment is made for 𝑎2, the symbol with the
highest probability, and the last assignments are made for 𝑎3 and 𝑎5, the symbols with the lowest
probabilities.
Image Compression
 Error-Free Compression

• Huffman Coding: Note that the shortest codeword (1) is assigned to the symbol/pixel with the highest
probability (𝑎2), and the longest codeword (01011) is assigned to the symbol/pixel with the lowest
probability (𝑎5).
• The average length of the code is given by:

$L_{avg} = \sum_{j=1}^{J} l(a_j)\,P(a_j) = 2.2$ bits/symbol, where $l(a_j)$ is the length of the codeword for $a_j$.

• The entropy of the source is given by:

$H = -\sum_{j=1}^{J} P(a_j)\,\log_2 P(a_j) = 2.14$ bits/symbol

• The resulting Huffman coding efficiency is 97.3% (2.14/2.2).
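
A short sketch of the efficiency computation, reusing the assumed probabilities from the Huffman example above together with the textbook codeword lengths (both are illustrative assumptions):

import math

probs   = [0.4, 0.3, 0.1, 0.1, 0.06, 0.04]  # assumed P(a_j)
lengths = [1,   2,   3,   4,   5,    5]     # assumed codeword lengths l(a_j)

l_avg = sum(l * p for l, p in zip(lengths, probs))  # 2.2 bits/symbol
h = -sum(p * math.log2(p) for p in probs)           # ~2.14 bits/symbol
print(f"Lavg = {l_avg:.2f}, H = {h:.2f}, efficiency = {h / l_avg:.1%}")
# prints ~97.4%, matching the ~97.3% quoted above up to rounding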


• Note that Huffman coding is optimal only when symbols are coded one at a time; variants of it, as well as
other variable-length coding methods, can be more efficient in practice.
