
DC 6


Vector quantization:

Vector quantization (VQ) is a block-coding technique that quantizes blocks of data instead
of single samples. VQ exploits the correlation between neighboring signal samples by
quantizing them together.

 In general, a VQ scheme can be divided into two parts: the encoding procedure
and the decoding procedure, as depicted in the figure.
 At the encoder, the input image is partitioned into a set of non-overlapping image
blocks. The closest code word in the code book is then found for each image block.
 Here, the closest code word for a given block is the one in the code book that has
the minimum squared Euclidean distance from the input block.
 Next, the index of each closest code word found is transmitted to the decoder.
 Compression is achieved because the indices of the closest code words in the code
book are sent to the decoder instead of the image blocks themselves.
 The goal of VQ code-book generation is to find an optimal code book that yields
the lowest possible distortion when compared with all other code books of the
same size.
 VQ performance improves with the code-book size and the vector size.

Figure 36

 The computational complexity of a VQ technique increases exponentially with the
size of the vector blocks.
 Therefore, the blocks used by VQ are usually small. The encoder searches the code
book and attempts to minimize the distortion between the original image block
and the chosen vector from the code book according to some distortion metric.
 The search complexity increases with the number of vectors in the code book. To
minimize the search complexity, the tree-search vector quantization scheme was
introduced.
 VQ can be used to compress an image both in the spatial domain and in the
frequency domain.
 Vector quantization is a lossy data-compression scheme based on the principles of
block coding.
 A vector quantizer maps a data set in an n-dimensional data space into a finite set
of vectors. Each vector is called a code vector or a code word.
 The set of all code words is called a code book. Each input vector can be associated
with the index of a code word, and this index is transferred instead of the original
vector.
 The index can be decoded to get the code word that it represents.

LZW compression
LZW compression is a method to reduce the size of Tag Image File
Format (TIFF) or Graphics Interchange Format (GIF) files. It is a table-
based lookup algorithm to remove duplicate data and compress an original
file into a smaller file. LZW compression is also suitable
for compressing text and PDF files. The algorithm is loosely based on the
LZ78 algorithm that was developed by Abraham Lempel and Jacob Ziv in
1978.

Published by Terry Welch in 1984 as a refinement of Lempel and Ziv's earlier
work, the LZW compression algorithm is a type of lossless compression. Lossless
algorithms reduce bits in a file by removing statistical redundancy without
causing information loss. This makes LZW -- and other lossless algorithms,
like ZIP -- different from lossy compression algorithms, which reduce file size
by removing less important or unnecessary information and therefore cause
information loss.

The LZW algorithm is commonly used to compress GIF and
TIFF image files and occasionally PDF and TXT files. It is the basis of
the Unix compress utility. The method is simple
to implement, versatile and capable of high throughput in hardware
implementations. Consequently, LZW is often used for general-purpose
data compression in many PC utilities.

How LZW compression works


The LZW compression algorithm reads a sequence of symbols, groups
those symbols into strings and then converts each string into codes. It
takes each input sequence of bits of a given length -- say, 12 bits -- and
creates an entry in a table for that particular bit pattern, consisting of the
pattern itself and a shorter code. The table is also called
a dictionary or codebook. It stores character sequences chosen
dynamically from the input text and maintains correspondence between the
longest encountered words and a list of code values.

As the input is read, any repetitive results are substituted with the shorter
code, effectively compressing the total amount of input. The shorter code
takes up less space than the string it replaces, resulting in a smaller file. As
the number of long, repetitive words increases in the input data, the
algorithm's efficiency also increases. Compression occurs when the output
is a single code instead of a longer string of characters. This code can be
of any length and always has more bits than a single character.

The LZW algorithm does not analyze the incoming text. It simply adds
every new string of characters it sees into a code table. Because it tries to
recognize increasingly longer repetitive phrases and encode them,
LZW is referred to as a greedy algorithm.
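The greedy matching described above can be sketched as follows. This is a minimal illustration that works on a text string and emits codes as plain integers; a real implementation would pack the codes into fixed-width bit fields and cap the table size (typically at 4,096 entries).

```python
def lzw_compress(data):
    """Greedy LZW encoding: keep extending the current string while it
    is still in the table, then emit the code for the longest match."""
    # Codes 0-255 are pre-assigned to the single-byte character set.
    table = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for symbol in data:
        candidate = current + symbol
        if candidate in table:
            current = candidate            # keep growing the match
        else:
            output.append(table[current])  # emit longest known string
            table[candidate] = next_code   # learn the new string
            next_code += 1
            current = symbol
    if current:
        output.append(table[current])      # flush the final match
    return output

print(lzw_compress("ABABABA"))  # -> [65, 66, 256, 258]
```

In the example, "A" and "B" are emitted as their single-character codes (65, 66), while the repeated substrings "AB" and "ABA" are emitted as the learned codes 256 and 258: seven characters become four codes.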

Code table in LZW compression
A notable property of the LZW algorithm is that its lookup table of codes
does not need to be stored in the compressed file: the encoder and decoder
build the same table on the fly. Typically,
the number of table entries is 4,096. In the code table, codes 0-255 are
assigned to represent single bytes from the input file. Before the algorithm
starts encoding, the table contains only these first 256 entries; the rest of the
table is blank. In other words, the first 256 codes are assigned to the
standard character set by default.

The remaining codes are assigned to strings as the algorithm proceeds
with the compression. When encoding starts, the algorithm identifies
repeated sequences in the data and adds them to the code table so that it
fills up with more entries. For file compression, codes 256 through 4,095
are used to represent sequences of bytes. These codes refer to substrings,
while codes 0-255 refer to individual bytes.

The decoding program that decompresses the file can rebuild the table by
running the same algorithm as it processes the encoded input. It takes each code
from the compressed file and translates it through the code table that is
being built to find the string of characters that the code represents.
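The decoder's table rebuilding can be sketched as below. This is a minimal illustration matching the integer-code encoder sketch above, not a full implementation; the one subtlety is the code that the encoder has created but not yet transmitted in full (the branch commented below), which the decoder must synthesize from the previous string.

```python
def lzw_decompress(codes):
    """Rebuild the code table while decoding, mirroring the encoder."""
    table = {i: chr(i) for i in range(256)}
    next_code = 256
    previous = table[codes[0]]     # first code is always a single char
    output = [previous]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:
            # Special case: the encoder emitted a code it created on
            # this very step; its string is previous + its first char.
            entry = previous + previous[0]
        output.append(entry)
        # Recreate the entry the encoder added at this step.
        table[next_code] = previous + entry[0]
        next_code += 1
        previous = entry
    return "".join(output)

print(lzw_decompress([65, 66, 256, 258]))  # -> ABABABA
```

Because both sides add entries in the same order, the decoder's table is always one step behind the encoder's, and the special case above covers exactly that one-step gap.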

Advantages and drawbacks of LZW compression


The LZW algorithm quickly compresses large TIFF or GIF files. It works
especially well for files containing a lot of repetitive data, which is common
with monochrome images.

One drawback of LZW compression is that compressed files without
repetitive information can be large, defeating the purpose of compression.
Another issue is that some versions of the algorithm were patented, so
companies had to pay royalties or licensing fees to use them; these fees
could get added to the product cost. (The patents have since expired.)

Finally, LZW is not the most efficient compression algorithm. Other
algorithms are available that compress files faster and more efficiently.

LZW compression vs. ZIP compression


LZW and ZIP are both lossless compression methods, meaning no data is
lost after compression. TIFF files retain their quality after being compressed
into smaller files using either LZW or ZIP. That said, compressed TIFF files
can be slightly slower to work with because they require more processing
effort to open and close them.
LZW and ZIP provide good results with 8-bit TIFF files. For 16-bit TIFF
files, the ZIP algorithm performs better than LZW. In fact, LZW tends to
make 16-bit files larger. Generally, both algorithms work efficiently when
they can group a lot of similar data and work on images that are low on
detail and contain few tones. These images compress more than images
containing lots of detail or different tones.

GIF (Graphics Interchange Format)

GIF (Graphics Interchange Format) is an image file format, not a data
compression method in itself. The original version of GIF is known as GIF87a.
It is a graphical image format that uses a variant of LZW to compress the
graphical data and allows images to be exchanged between different computers.
It scans the image row by row, so it discovers correlations between pixels
within a row but not between rows. GIF uses a growing,
dynamic dictionary for compressing data.
Steps:
1. It takes the number of bits per pixel, b, as a parameter. For a
monochromatic image b = 2, and for an image with 256 colors or
shades b = 8.
2. It uses a dictionary with 2^(b+1) entries. Each time the dictionary
fills up, its size doubles, until it reaches 4,096 entries, after
which it remains static.
3. At this point, the encoder monitors the compression ratio and may
decide to discard the dictionary and start with a new, empty one.
4. When it decides to discard the dictionary, the encoder emits the value
2^b as a clear code, which signals the decoder to discard its
dictionary as well.

Pointers grow one bit longer each time the dictionary doubles in size, and
the output is written in blocks of up to 255 bytes. Each block is preceded
by a header byte giving its length, and the stream is terminated by a block
of length zero. Pointers are stored with the LSB (least significant bit)
first. The last code in the stream is the end-of-information value, 2^b + 1.
GIF compression is relatively inefficient because GIF scans one-dimensionally
while the correlation in an image is two-dimensional. For this reason, formats
such as PNG have largely replaced GIF for still images on the web.
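The variable pointer width described in the steps above can be sketched with a small helper. The function name is hypothetical; it only illustrates the rule that codes start at b + 1 bits, grow by one bit each time the dictionary doubles, and are capped at 12 bits (4,096 entries).

```python
def gif_code_width(b, dict_size):
    """Bits per LZW code in a GIF stream with b bits per pixel, given
    the current dictionary size. Starts at b + 1 bits, grows by one
    bit each time the dictionary doubles, capped at 12 bits."""
    width = b + 1
    while (1 << width) < dict_size and width < 12:
        width += 1
    return width

# For an 8-bit image the dictionary starts with 256 pixel values plus
# the clear code (256) and the end-of-information code (257).
print(gif_code_width(8, 258))   # -> 9 bits initially
print(gif_code_width(8, 600))   # -> 10 bits after the table doubles
print(gif_code_width(8, 4096))  # -> 12 bits, the cap
```

This is why a freshly cleared dictionary makes codes short again: emitting the clear code resets the dictionary to its initial size, dropping the code width back to b + 1 bits.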
