Encoder & Decoder
An encoder is a device used to change a signal (such as a bitstream) or data into a code.
The code may serve any of a number of purposes such as compressing information for
transmission or storage, encrypting or adding redundancies to the input code, or
translating from one code to another. This is usually done by means of a programmed
algorithm, especially if any part is digital, while most analog encoding is done with
analog circuitry.
I3 I2 I1 I0 | O1 O0
 0  0  0  1 |  0  0
 0  0  1  0 |  0  1
 0  1  0  0 |  1  0
 1  0  0  0 |  1  1
4-to-2 encoder
The encoder has the limitation that only one input can be active at any given time. If two
inputs are simultaneously active, the output produces an undefined combination. To
prevent this, we use a priority encoder.
I3 I2 I1 I0 | O1 O0
 0  0  0  d |  0  0
 0  0  1  d |  0  1
 0  1  d  d |  1  0
 1  d  d  d |  1  1
4-to-2 priority encoder (d = don't care)
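The two truth tables can also be modelled behaviourally. The following is a minimal
sketch in Python (the language and function names are assumptions for illustration;
real encoders are combinational gate logic):

    def encoder_4to2(i3, i2, i1, i0):
        """Plain 4-to-2 encoder: exactly one input line may be high."""
        if i3 + i2 + i1 + i0 != 1:
            raise ValueError("exactly one input must be active")
        o1 = i3 | i2   # O1 is high when I2 or I3 is active
        o0 = i3 | i1   # O0 is high when I1 or I3 is active
        return o1, o0

    def priority_encoder_4to2(i3, i2, i1, i0):
        """Priority encoder: the highest-numbered active input wins,
        so lower-numbered inputs become don't-cares (the d entries)."""
        if i3: return 1, 1
        if i2: return 1, 0
        if i1: return 0, 1
        return 0, 0

    print(encoder_4to2(0, 1, 0, 0))           # I2 active -> (1, 0)
    print(priority_encoder_4to2(0, 1, 1, 1))  # I2 outranks I1 and I0 -> (1, 0)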
Data compression
In computer science and information theory, data compression or source coding is the
process of encoding information using fewer bits (or other information-bearing units)
than an unencoded representation would, through the use of specific encoding schemes.
For example, the ZIP file format, which provides compression, also acts as an archiver,
storing many source files in a single destination output file.
As with any communication, compressed data communication only works when both the
sender and receiver of the information understand the encoding scheme. For example,
this text makes sense only if the receiver understands that it is intended to be interpreted
as characters representing the English language. Similarly, compressed data can only be
understood if the decoding method is known by the receiver.
Lossless compression schemes are reversible so that the original data can be
reconstructed, while lossy schemes accept some loss of data in order to achieve higher
compression.
However, lossless data compression algorithms will always fail to compress some files;
indeed, any compression algorithm will necessarily fail to compress any data containing
no discernible patterns. Attempts to compress data that has been compressed already will
therefore usually result in an expansion, as will attempts to compress encrypted data.
In practice, lossy data compression also reaches a point where compressing again gains
nothing, although an extremely lossy algorithm, for example one that always removes the
last byte of a file, will always "compress" a file further, up to the point where it is empty.
For example, consider the string

25.888888888

which can be losslessly compressed to

25.[9]8

Interpreted as "twenty-five point 9 eights", the original string is perfectly recreated, just
written in a smaller form. In a lossy system, using

26

instead, the original data is lost, at the benefit of a smaller file size.
Applications
The above is a very simple example of run-length encoding, wherein large runs of
consecutive identical data values are replaced by a simple code with the data value and
length of the run. This is an example of lossless data compression. It is often used to
optimize disk space on office computers, or better use the connection bandwidth in a
computer network. For symbolic data such as spreadsheets, text, executable programs,
etc., losslessness is essential because changing even a single bit cannot be tolerated
(except in some limited cases).
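As a concrete sketch of the run-length scheme behind the 25.[9]8 example above (the
bracketed [count] notation and the function names are assumptions chosen for
illustration):

    import re

    def rle_encode(s):
        """Replace each run of identical characters with [count]char."""
        out = []
        for m in re.finditer(r"(.)\1*", s):   # each maximal run
            run = m.group(0)
            out.append(f"[{len(run)}]{run[0]}" if len(run) > 1 else run)
        return "".join(out)

    def rle_decode(s):
        """Expand every [count]char back into the original run."""
        return re.sub(r"\[(\d+)\](.)",
                      lambda m: m.group(2) * int(m.group(1)), s)

    original = "25.888888888"
    packed = rle_encode(original)            # -> "25.[9]8"
    assert rle_decode(packed) == original    # lossless: the round trip is exact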
For visual and audio data, some loss of quality can be tolerated without losing the
essential nature of the data. By taking advantage of the limitations of the human sensory
system, a great deal of space can be saved while producing an output which is nearly
indistinguishable from the original. These lossy data compression methods typically offer
a three-way tradeoff between compression speed, compressed data size and quality loss.
Lossy image compression is used in digital cameras, to increase storage capacities with
minimal degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 codec for
video compression.
Theory
The theoretical background of compression is provided by information theory (which is
closely related to algorithmic information theory) and by rate-distortion theory. These
fields of study were essentially created by Claude Shannon, who published fundamental
papers on the topic in the late 1940s and early 1950s. Cryptography and coding theory are
also closely related. The idea of data compression is deeply connected with statistical
inference.
Many lossless data compression systems can be viewed in terms of a four-stage model.
Lossy data compression systems typically include even more stages, including, for
example, prediction, frequency transformation, and quantization.
The Lempel-Ziv (LZ) compression methods are among the most popular algorithms for
lossless storage. DEFLATE is a variation on LZ which is optimized for decompression
speed and compression ratio, although compression can be slow. DEFLATE is used in
PKZIP, gzip and PNG. LZW (Lempel-Ziv-Welch) is used in GIF images. Also
noteworthy are the LZR (LZ-Renau) methods, which serve as the basis of the Zip
method. LZ methods utilize a table-based compression model where table entries are
substituted for repeated strings of data. For most LZ methods, this table is generated
dynamically from earlier data in the input. The table itself is often Huffman encoded (e.g.
SHRI, LZX). A current LZ-based coding scheme that performs well is LZX, used in
Microsoft's CAB format.
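The table-based LZ model is easiest to see in code. Here is a minimal LZW encoder
sketch in Python (an illustrative toy, not the GIF implementation; the function name is
assumed):

    def lzw_encode(data):
        """Textbook LZW: grow a string table dynamically from earlier
        input, emitting one code per longest already-known prefix."""
        table = {chr(i): i for i in range(256)}  # seed with single characters
        next_code = 256
        w, out = "", []
        for c in data:
            if w + c in table:
                w += c                       # extend the current match
            else:
                out.append(table[w])         # emit code for the longest match
                table[w + c] = next_code     # add the new string to the table
                next_code += 1
                w = c
        if w:
            out.append(table[w])
        return out

    # Repeated substrings collapse to single codes as the table grows.
    print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))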
The very best compressors use probabilistic models whose predictions are coupled to an
algorithm called arithmetic coding. Arithmetic coding, invented by Jorma Rissanen, and
turned into a practical method by Witten, Neal, and Cleary, achieves superior
compression to the better-known Huffman algorithm, and lends itself especially well to
adaptive data compression tasks where the predictions are strongly context-dependent.
Arithmetic coding is used in the bilevel image-compression standard JBIG, and the
document-compression standard DjVu. The text entry system Dasher is an inverse
arithmetic coder.
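To make the interval-narrowing idea behind arithmetic coding concrete, here is a heavily
simplified floating-point sketch (real coders use integer arithmetic with renormalization
and adaptive rather than fixed models; the static probability table below is an
assumption):

    def build_intervals(probs):
        """Give each symbol a sub-interval of [0, 1) proportional to
        its probability."""
        intervals, low = {}, 0.0
        for symbol, p in probs.items():
            intervals[symbol] = (low, low + p)
            low += p
        return intervals

    def ac_encode(message, probs):
        """Narrow [low, high) once per symbol; any number inside the
        final interval identifies the whole message."""
        intervals = build_intervals(probs)
        low, high = 0.0, 1.0
        for symbol in message:
            s_low, s_high = intervals[symbol]
            width = high - low
            low, high = low + width * s_low, low + width * s_high
        return (low + high) / 2   # a representative point in the interval

    def ac_decode(code, length, probs):
        intervals = build_intervals(probs)
        out = []
        for _ in range(length):
            for symbol, (s_low, s_high) in intervals.items():
                if s_low <= code < s_high:
                    out.append(symbol)
                    code = (code - s_low) / (s_high - s_low)  # rescale
                    break
        return "".join(out)

    probs = {"a": 0.6, "b": 0.3, "c": 0.1}   # assumed fixed model
    code = ac_encode("abac", probs)
    assert ac_decode(code, 4, probs) == "abac"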
There is a close connection between machine learning and compression: a system that
predicts the posterior probabilities of a sequence given its entire history can be used for
optimal data compression (by using arithmetic coding on the output distribution), while
an optimal compressor can be used for prediction (by finding the symbol that compresses
best, given the previous history). This equivalence has been used as justification for data
compression as a benchmark for "general intelligence" [1].
Decoder
A decoder is a device which does the reverse of an encoder, undoing the encoding so that
the original information can be retrieved. Enable inputs must be on for the decoder to
function; otherwise its outputs assume a single "disabled" output code word. Decoding is
necessary in applications such as data multiplexing, 7-segment displays and memory
address decoding.
The simplest decoder circuit would be an AND gate, because the output of an AND gate
is "High" (1) only when all its inputs are "High". Such an output is called an "active high"
output. If a NAND gate is used instead of the AND gate, the output will be "Low" (0)
only when all its inputs are "High". Such an output is called an "active low" output.
A slightly more complex decoder would be the n-to-2^n type binary decoder. These
decoders are combinational circuits that convert binary information from n coded inputs
to a maximum of 2^n unique outputs. We say a maximum of 2^n outputs because, if the
n-bit coded information has unused bit combinations, the decoder may have fewer than
2^n outputs. Common examples are the 2-to-4, 3-to-8 and 4-to-16 decoders. We can
form a 3-to-8 decoder from two 2-to-4 decoders (with enable signals).
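Behaviourally, an n-to-2^n decoder with an enable input can be sketched in a few lines
of Python (an illustration only; the function name is assumed, and real decoders are
gate-level circuits):

    def decoder_n_to_2n(inputs, enable=1):
        """n-to-2^n binary decoder: exactly one of the 2^n output lines
        is high when enabled; all are low (the 'disabled' code word)
        otherwise. Inputs are given most-significant bit first."""
        index = 0
        for bit in inputs:
            index = (index << 1) | bit
        return tuple(int(bool(enable) and i == index)
                     for i in range(2 ** len(inputs)))

    print(decoder_n_to_2n((1, 0)))            # 2-to-4: input 10 -> (0, 0, 1, 0)
    print(decoder_n_to_2n((1, 0), enable=0))  # disabled -> all outputs low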
Similarly, we can also form a 4-to-16 decoder by combining two 3-to-8 decoders. In this
type of circuit design, the enable inputs of both 3-to-8 decoders originate from a 4th
input, which acts as a selector between the two 3-to-8 decoders. This allows the 4th input
to enable either the top or bottom decoder, which produces outputs of D(0) through D(7)
for the first decoder, and D(8) through D(15) for the second decoder.
It is important to note that a decoder that contains enable inputs is also known as a
decoder-demultiplexer. Thus, we have a 4-to-16 decoder produced by adding a 4th input
shared among both decoders, producing 16 outputs.
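The composition described above can be modelled directly: the 4th (most significant)
input drives the two enable lines in opposite senses. This is again an illustrative sketch,
not a definitive implementation, with the n-to-2^n helper repeated so the block stands
alone:

    def decoder_n_to_2n(inputs, enable=1):
        """n-to-2^n decoder with enable (see the previous sketch)."""
        index = 0
        for bit in inputs:
            index = (index << 1) | bit
        return tuple(int(bool(enable) and i == index)
                     for i in range(2 ** len(inputs)))

    def decoder_4to16(i3, i2, i1, i0):
        """4-to-16 decoder from two 3-to-8 decoders: i3 selects which
        half is enabled, producing D0..D7 or D8..D15."""
        top    = decoder_n_to_2n((i2, i1, i0), enable=1 - i3)  # D0..D7
        bottom = decoder_n_to_2n((i2, i1, i0), enable=i3)      # D8..D15
        return top + bottom

    out = decoder_4to16(1, 0, 0, 1)   # binary 1001 = 9
    print(out.index(1))               # -> 9 (only D9 is high)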