Research Article
DOI: https://doi.org/10.21203/rs.3.rs-434952/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
An Efficient Compression of Grayscale Images Using Wavelet Transform
Abstract
The rapid development of technology and the standardization of digital photography have led to
explosive growth in the distribution and reproduction of digital images. Growth in computer
storage capacity and advances in networking have not kept pace with the demands of handling,
storing and transmitting huge volumes of image data. Only proper image compression
technologies seem to offer a solution to this challenge. The importance of digital image
compression in multimedia applications has inspired extensive research all over the world. The
present study proposes a newly formulated algorithm that computes the Discrete Wavelet
Transform (DWT) in combination with thresholding and quadtree decomposition. Findings show
that the proposed technique is on par with the EZW image compression algorithm in quality at
the same bit rate, and obviates the need for any other conventional standard image compression
technique.
Keywords: Discrete Wavelet Transform, Huffman Coding, Image Compression, Quadtree
Decomposition, Thresholding.
1 Introduction
The rapid spread of computers, internet teleconferencing, and satellite communication has
inspired recent researchers to focus their attention on digital image compression while
maintaining a standard of quality for these multimedia applications. This heightens the need for
resourceful and sophisticated techniques that achieve an optimal level of compression and fulfill
the requirements of users. Compressing data [1] saves storage capacity, speeds up file transfer,
and reduces the cost of storage hardware and network bandwidth.
Data redundancy is one of the fundamental considerations in digital image compression. Broadly
speaking, three basic data redundancies have been identified in image pixels: coding redundancy,
inter-pixel redundancy and psycho-visual redundancy. By applying an appropriate encoding
method, coding redundancy can be eliminated and the information represented in the form of
codes. Inter-pixel redundancy arises from the failure to identify and utilize relationships in the
data. It is of two kinds, namely inter-pixel spatial redundancy and inter-pixel temporal
redundancy. Inter-pixel spatial redundancy occurs due to correlation between neighboring pixels
in an image and depends on the resolution of the image. Inter-pixel temporal redundancy, in
turn, is the statistical correlation between pixels in successive frames of a video sequence.
Psycho-visual redundancy, finally, exists because human perception of an image does not
involve a quantitative analysis of every pixel or luminance value.
Image compression is achieved when one or more of the above redundancies are reduced or
eliminated. Compression is either lossless or lossy. In the former, the reconstructed image is an
exact replica of the original, with no information lost during the compression process. In
contrast, in lossy compression the reconstructed image is only an approximation of the original.
Image transforms are popularly adopted for decorrelating the pixels in image compression. To
reduce the dependency between pixels, some standard transformation tools are employed,
including the Discrete Fourier Transform (DFT), the Karhunen-Loève Transform (KLT) [2],
Principal Component Analysis (PCA), Singular Value Decomposition (SVD), the Discrete
Cosine Transform (DCT) [3,4] and the Discrete Wavelet Transform (DWT) [5]. Of these, the
DCT underlies the widely recognized JPEG [6] image compression standard, whereas JPEG2000
[7] is based on the DWT. Studies reveal that the DWT has several advantages over the DCT.
The DCT is applied to blocks of the image and causes block artifacts, indicating loss of
information. The DWT, on the other hand, provides a much better compression ratio without
losing much image information, because it does not operate on blocks and its coefficients are
localized.
Of late, one increasingly popular method of decomposing a signal is the use of wavelets.
Entropy coding is applied to the DWT coefficients to compress the image and enable efficient
storage. The image is passed through an analysis filter bank by the DWT. The analysis filter
bank consists of low-pass and high-pass filters that extract both coarse and detail information.
Once the processing is over, the image coefficients are divided into approximation and detail
sub-bands. To obtain the reconstructed image from these sub-bands, the DWT uses a synthesis
filter bank.
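As an illustration of this analysis/synthesis round trip, the following minimal sketch uses the PyWavelets library (an illustrative choice on our part; the paper itself reports a MATLAB implementation):

```python
# One-level 2-D DWT: analysis filter bank, then synthesis filter bank.
import numpy as np
import pywt

image = np.random.rand(512, 512)          # stand-in for a grayscale image

# Analysis: one DWT level yields the approximation (LL) and the
# horizontal/vertical/diagonal detail sub-bands (LH, HL, HH).
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')
print(LL.shape)                           # (256, 256): each sub-band is 1/4 the image

# Synthesis: the inverse transform reconstructs the image exactly
# when no coefficients have been altered.
reconstructed = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
print(np.allclose(image, reconstructed))  # True
```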
Against this backdrop, the present authors adopt a novel scheme that combines the DWT with
thresholding and quadtree decomposition to reduce the number of coefficient symbols for
efficient image compression. The result parameters of the proposed algorithm have been
compared with the Improved Embedded Zerotree Wavelet coder (IEZW) [8] using the peak
signal-to-noise ratio (PSNR) at different low bit-rates. Findings reveal impressive results for
grayscale images. Moreover, the method is easily comprehensible and simple to implement in
MATLAB. The proposed model thus attempts to establish an alternative to EZW without
applying any other single conventional standard image compression method.
This article is divided into six sections. Section 1 is the introduction. Section 2 surveys previous
literature related to the present study. Section 3 explains the preliminaries. Section 4 describes
the proposed algorithm. Section 5 presents the experimental results, and Section 6 concludes the
study.
2 Related Work
G. K. Wallace first proposed the image compression standard known as the JPEG algorithm
[6][9]. This tool is widely used for the compression of grayscale images. It uses a lossy form of
compression based on the DCT, which is applied to 8×8 rectangular blocks of data. The signal
energy is packed into a few DCT coefficients, which capture the spatial redundancy. Its major
drawback is that it is subject to block artifacts. Conversely, as discussed in the introduction, the
DWT does not operate on blocks and its coefficients are more localized, giving it an advantage
over the DCT. Besides, the statistical qualities of the wavelet transform have been widely
explored, and wavelet-based image coding techniques are today considered among the most
useful developments in the field of image compression. Studies reveal that pyramid, or dyadic,
wavelet decomposition [10-11] offers high energy compaction with high-quality reconstructed
images and is especially useful in image compression [11-12]. For these reasons, within the past
few years the DWT has become effectively operational for the compression of digital images
[12-14] and has been suggested as a better alternative to the DCT.
Of all image compression methods based on the DWT known so far, the embedded zerotree
wavelet (EZW) coder [15] is the most popular. The algorithm was first introduced by J. M.
Shapiro in 1993 and is based primarily on four key concepts: first, a discrete wavelet transform
(hierarchical sub-band decomposition); second, prediction of the absence of significant
information across scales by exploiting the self-similarity inherent in images; third,
entropy-coded successive-approximation quantization (SAQ); and finally, universal lossless data
compression via adaptive arithmetic coding. Nevertheless, it has a few shortcomings. For
example, some redundancy remains in a few of the high-frequency sub-bands, because for each
SAQ iteration the EZW coder scans all wavelet coefficients in each sub-band with respect to a
given threshold. To overcome this problem, E. S. Kang et al. [16] proposed a modified technique
known as the improved embedded zerotree wavelet coder (IEZW). Unlike the EZW coder, the
IEZW coder scans only the coefficients of significant sub-bands and thereby significantly
reduces the bit redundancy of the EZW. To eradicate the same defect in EZW, J. Zhong designed
another technique [18] based on quantized coefficient partitioning using morphological
operations. In this mechanism, instead of encoding the coefficients in each sub-band line by line,
regions in which most quantized coefficients were significant were extracted by morphological
dilation and encoded first; zerotrees were then used to encode the remaining space, which
contained mostly zeros. Findings confirmed that the algorithm was far superior to the EZW, and
the authors reported that the results compared favorably with the most efficient wavelet-based
image compression algorithms reported so far. In [19], A. Ouafi et al. proposed "A Modified
Embedded Zerotree Wavelet (MEZW) Algorithm for Image Compression", which modifies
Shapiro's EZW. In their approach, the authors distributed entropy by using six symbols in place
of the four in EZW and also optimized the coding through a binary grouping of elements before
coding. The results showed remarkable improvement over the PSNR and compression ratio
obtained by Shapiro, without affecting the computing time.
Brahimi et al. [20] proposed a technique for reducing the scanning and symbol redundancy of
the existing EZW, likewise based on the use of six symbols instead of four. The main purpose of
this technique was to avoid encoding the children of each significant coefficient when it had no
significant descendant, which improved the quality of the reconstructed image after decoding.
In another technique, Chen and Mu [21] made use of Compressed Sensing (CS) theory together
with EZW. Much later, extensive research in these areas prompted T. Brahimi et al. [8] to
propose a still more effective technique, which reduces the number of zerotrees as well as the
scanning and symbol redundancy of the existing EZW by means of a new significant-symbol
map represented in a more efficient way.
In this paper, to exploit the properties of the DWT, quadtree decomposition and thresholding
techniques are developed in combination as a new image compression algorithm, expected to
yield significantly better PSNR performance at the same bit-rate without using the zerotree
concept of EZW. No single standard image compression method such as EBCOT [22], SPIHT
[10] or JPEG 2000 [7] is employed. Moreover, the method has been experimentally verified on
different images and shown to provide promising results.
3 Preliminaries
In order to better understand the proposed study, it is essential to discuss the concepts behind
the Discrete Wavelet Transform (DWT) and quadtree decomposition as used in image
compression. The former decomposes the image and reduces redundancy, whereas the latter is
used to achieve high compression ratios while preserving edge integrity.
3.1 Discrete Wavelet Transform
The DWT, a wavelet decomposition method, is used here for lossy compression coding. Its high
energy compaction is expected to provide a well-designed coding technique that reduces
redundancy in an image and achieves optimal compression. It works recursively, dividing the
image into low-pass and high-pass components.
The two-dimensional DWT is executed by a two-channel wavelet filter bank. The image is first
scanned in the horizontal direction and passed through the filters to produce low-pass and
high-pass frequency data. This output is then scanned in the vertical direction to create the
various sub-bands. The low-frequency LL sub-band carries the significant information of the
original image and is commonly called the approximate image; the LH, HL and HH sub-bands
contain the details of the image. Each sub-band is 1/4 the size of the actual image [1][22-24].
The basic underlying principle is best understood from the mathematical expression below:

$$f(x) = \sum_k c_{j_0}(k)\,\varphi_{j_0,k}(x) + \sum_{j=j_0}^{\infty} \sum_k d_j(k)\,\psi_{j,k}(x),$$

where $c_{j_0}(k)$ and $d_j(k)$ are the scaling and detail coefficients respectively.
If the function $f(x)$ is expanded in this way and the sequence of numbers representing $f(x)$ is
discrete, the resulting coefficients are recognized as the discrete wavelet transform. Most
interestingly, the DWT works well even when images are processed at multiple resolutions.
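A small numeric illustration, again using PyWavelets: for a discrete signal, wavedec returns exactly the scaling coefficients $c_{j_0}(k)$ and the detail coefficients $d_j(k)$ of the expansion above:

```python
# 1-D DWT of a short discrete signal: the returned arrays are the
# coarsest scaling coefficients followed by detail coefficients,
# ordered coarse to fine.
import pywt

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
cA2, cD2, cD1 = pywt.wavedec(signal, 'haar', level=2)
print(cA2)        # scaling (approximation) coefficients c_{j0}(k)
print(cD2, cD1)   # detail coefficients d_j(k), coarse to fine
```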
The working approach of the DWT is shown in Fig. 1, whose outputs are the DWT coefficients.
Here $h_\varphi(-n)$ and $h_\psi(-m)$ are the scaling and wavelet vectors, i.e. the low-pass and
high-pass decomposition filters respectively. The output of Fig. 1 is calculated as

$$w_\psi^H(j, m, n) = h_\psi(-m) * \Big[\, h_\varphi(-n) * w_\varphi(j+1, m, n)\,\big|_{\,n = 2k,\; k \ge 0} \Big]\Big|_{\,m = 2k,\; k \ge 0},$$

where $*$ denotes convolution. As Fig. 1 shows, decomposing the input yields four lower-scale
components: the approximation coefficients $w_\varphi$, created by applying the low-pass filter
in both directions, and the detail coefficients $\{w_\psi^i \ \text{for}\ i = H, V, D\}$.
The main application of the DWT is in image compression, where it decomposes the image into
lower and higher sub-bands. Although images can theoretically be decomposed indefinitely,
most researchers prefer to limit the decomposition to 3 levels [25-27].
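A sketch of such a 3-level decomposition with PyWavelets, showing how each level leaves one coarser approximation and three detail sub-bands:

```python
# 3-level 2-D DWT: wavedec2 applies the transform recursively to the
# LL band, leaving one coarse approximation plus (LH, HL, HH) per level.
import numpy as np
import pywt

image = np.random.rand(512, 512)
coeffs = pywt.wavedec2(image, 'haar', level=3)
LL3 = coeffs[0]                          # coarsest approximation, 64x64
for level, (LH, HL, HH) in enumerate(coeffs[1:], start=1):
    print(level, LH.shape)               # detail bands: 64x64, 128x128, 256x256
```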
3.2 Quadtrees
A quadtree is a tree data structure in which each internal node has four children. Quadtrees are
the two-dimensional analogue of octrees. They are generally used to partition a two-dimensional
space by recursively subdividing it into four quadrants or sections. The subdivided sections may
be square or rectangular, or may have arbitrary shapes. In MATLAB, quadtree decomposition is
provided by the qtdecomp function. It works by first dividing a square image into four
equal-sized square blocks and then testing each block against some criterion of homogeneity
(for example, whether all the pixels in the block lie within a specific dynamic range). If a block
meets the criterion, it is not divided further; if it fails, it is again subdivided into four blocks and
the test is reapplied. This process is repeated until each block meets the criterion, so the result
may contain blocks of several different sizes [28] (a recursive sketch is given after the list
below).
Some common uses of quadtrees are image representation and image compression.
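The following is a hedged Python analogue of this recursive subdivision (qtdecomp itself is a MATLAB function; this sketch assumes a square, power-of-two block and uses a max-minus-min homogeneity test):

```python
# Recursive quadtree decomposition: split a square block into four
# quadrants until every block satisfies a homogeneity criterion.
import numpy as np

def quadtree(block, tol, r=0, c=0, out=None):
    """Collect homogeneous leaf blocks as (row, col, size) triples."""
    if out is None:
        out = []
    n = block.shape[0]                    # assumes square, power-of-two size
    # Homogeneous (or 1x1) blocks are leaves and are not split further.
    if n == 1 or block.max() - block.min() <= tol:
        out.append((r, c, n))
        return out
    h = n // 2
    quadtree(block[:h, :h], tol, r,     c,     out)   # top-left
    quadtree(block[:h, h:], tol, r,     c + h, out)   # top-right
    quadtree(block[h:, :h], tol, r + h, c,     out)   # bottom-left
    quadtree(block[h:, h:], tol, r + h, c + h, out)   # bottom-right
    return out

img = (np.random.rand(8, 8) * 255).astype(np.float64)
blocks = quadtree(img, tol=40.0)
print(len(blocks), blocks[:4])            # leaf blocks of varying sizes
```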
4 Proposed Work
4.1 Basic Approach
The main objective of our approach is to use the features of DWT transformations. First, the
DWT decomposes the key image, after which Huffman encoding is applied for additional
improvement of the compression performance. The authors used a 2-level Haar wavelet
transform for decomposing 8-bit key images of size 512×512 pixels.
The DWT detail coefficients have zero mean and small variance. Before Huffman coding, the
DWT coefficients are restricted to the most significant ones, and the rest are ignored. The
proposed algorithm is shown below:
Encoding Algorithm:
Algorithmic Steps:
Step 1: Apply the DWT to the grayscale image I of size X×Y so that it is decomposed into lower
and higher sub-bands.
Step 2: Obtain the median of the approximate coefficients, use the median as the base of the
logarithm, and calculate the logarithmic coefficients accordingly. The purpose of this step is to
convert higher values into lower values to enhance the compression ratio. Preprocess the detail
coefficients so that they are converted to the nearest integer (see the sketch following these
steps).
Step 3: Apply entropy-based smoothing to the higher sub-bands according to their textural
features, accepting significant bits and discarding insignificant ones.
Step 4: Apply quadtree decomposition to reduce the symbols of the approximate and detail
coefficients. Quadtree decomposition of the approximate image is optional; it is used for higher
bit-rates and lower PSNR values.
Step 5: Encode the approximate and detail coefficients by Huffman coding.
Step 6: Obtain the common compressed bit stream.
Step 7: Find the compression ratio/bit-rate.
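As a rough illustration of Steps 1 and 2, the following Python sketch uses PyWavelets for the DWT; the median-base logarithm and the rounding of the detail coefficients are transcribed from Step 2, with a guard against non-positive values that the step itself does not specify:

```python
# Steps 1-2 of the encoder (illustrative sketch): decompose, then compress
# the dynamic range of the approximation band with a median-base logarithm.
import numpy as np
import pywt

image = np.random.randint(0, 256, (512, 512)).astype(np.float64)

# Step 1: decompose into lower (approximation) and higher (detail) sub-bands.
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')

# Step 2: logarithm with the median of LL as its base maps large coefficient
# values into a small range (assumes the median is positive, which holds for
# Haar on typical 8-bit images).
m = np.median(LL)
LL_log = np.log(np.maximum(LL, 1e-9)) / np.log(m)   # log base m, guarded
details = [np.rint(d) for d in (LH, HL, HH)]        # round to nearest integer
print(LL.max(), LL_log.max())                       # dynamic range shrinks
```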
Decoding Algorithm
Input: Common compressed bit stream
Output: Reconstructed approximate image
Algorithmic steps:
Step 1: Obtain the compressed lower and higher sub-band coefficient bit streams from the
common compressed bit stream.
Step 2: Recover the reconstructed lower and higher sub-band coefficients from the compressed
approximate and detail coefficient bit streams using reverse Huffman coding (a round-trip
sketch follows).
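The Huffman stage of Step 5 of the encoder and its reversal here can be illustrated by a generic encode/decode round trip; this heapq-based sketch is a standard Huffman coder, not the authors' exact implementation:

```python
# Generic Huffman encode/decode round trip for quantized coefficient symbols.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code {symbol: bitstring} from symbol frequencies."""
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {s: '0' for s in heap[0][2]}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

coeffs = [0, 0, 0, 1, 0, -1, 2, 0, 0, 1]     # toy quantized coefficients
code = huffman_code(coeffs)
bits = ''.join(code[s] for s in coeffs)      # encode

decode = {v: k for k, v in code.items()}     # reverse Huffman coding:
out, cur = [], ''                            # match prefixes bit by bit
for b in bits:
    cur += b
    if cur in decode:
        out.append(decode[cur])
        cur = ''
assert out == coeffs
print(len(bits), 'bits for', len(coeffs), 'symbols')
```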
In the proposed method, a 3-level Haar wavelet transform decomposes the grayscale image,
yielding the approximate and detail images. The approximate and detail coefficients are then
preprocessed, after which both sets of coefficients are encoded by Huffman coding to obtain a
sequence of digital data. To obtain the reconstructed image, the binary data is decoded by simply
reversing the encoding process. The concept behind the method is best illustrated by the
algorithm above and the flowchart given below:
Fig. 3 Overall scheme of the proposed image compression algorithm: the 512×512 original
image I(i, j) is decomposed by the DWT into (LL, LH, HL, HH) sub-bands, which undergo
adaptive quantization and quadtree decomposition, followed by Huffman encoding to produce
the compressed bit stream.
5 Experimental Results and Discussions
The performance of lossy compression approaches can be estimated with the help of certain
indicators mentioned below.
a. Peak signal-to-noise ratio (PSNR): PSNR is the usual measure of compressed image
quality. For the general case of 8 bits per pixel in the key image, the PSNR can be
expressed as [29-30]

$$\mathrm{psnr}\ (\mathrm{dB}) = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{mse}}\right), \qquad (1)$$

where 255 is the maximum possible value attainable by the image signal, and mse in (1)
denotes the mean squared error of the image, expressed as

$$\mathrm{mse} = \frac{1}{n} \sum_i \sum_j \big[f(i, j) - F(i, j)\big]^2 .$$

Here n is the total number of pixels, F(i, j) indicates a pixel value in the compressed
image and f(i, j) a pixel value in the original image.
b. Compression ratio (CR):

$$CR = \frac{S_{\mathrm{original}}}{S_{\mathrm{compressed}}},$$

where $S_{\mathrm{original}}$ is the size of the original image data and
$S_{\mathrm{compressed}}$ is the size (number of bits) of the compressed image data.
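These metrics translate directly into code; the following sketch implements Eq. (1), the MSE and the compression ratio for 8-bit images (the example sizes and noise are placeholders for illustration):

```python
# PSNR, MSE and compression ratio for 8-bit grayscale images.
import numpy as np

def psnr(original, compressed):
    """PSNR in dB per Eq. (1), with the 8-bit peak value 255."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compression_ratio(bits_original, bits_compressed):
    """CR = S_original / S_compressed."""
    return bits_original / bits_compressed

f = np.random.randint(0, 256, (512, 512))                    # original
F = np.clip(f + np.random.randint(-3, 4, f.shape), 0, 255)   # distorted copy
print(round(psnr(f, F), 2), 'dB')
print(compression_ratio(512 * 512 * 8, 65536))  # e.g. 0.25 bpp gives CR = 32
```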
The image compression algorithm proposed in this paper was applied to different grayscale
images. Fig. 5(a-f) shows the results on the test image 'Lena'. The performance of the proposed
algorithm on the test images 'Airplane', 'Lena', 'Peppers', 'Barbara', 'Goldhill' and 'Boat' of
size 512×512, shown in Fig. 4, is reported in Table 1.
Table 1 PSNR (dB) of the proposed coder (DWT) and IMPIEZW [8] at different bit-rates

Image     Bitrate (bpp)   Proposed (DWT)   IMPIEZW [8]
Airplane  0.03125         23.8106          22.3540
Airplane  0.0625          25.7746          25.7853
Airplane  0.125           28.1217          27.0395
Airplane  0.25            31.2195          29.6331
Airplane  0.5             34.9951          32.7346
Airplane  1               40.3689          34.5345
Lena      0.03125         23.8158          22.8452
Lena      0.0625          26.2563          25.7797
Lena      0.125           28.9792          29.0153
Lena      0.25            31.6442          31.5772
Lena      0.5             34.7579          34.2418
Lena      1               38.6311          35.6323
Peppers   0.03125         24.5045          23.0027
Peppers   0.0625          27.0365          27.9992
Peppers   0.125           29.8714          28.6553
Peppers   0.25            33.0125          29.9554
Peppers   0.5             35.7291          31.0221
Peppers   1               38.3256          32.4194
Barbara   0.03125         21.1572          21.3986
Barbara   0.0625          22.4375          23.8611
Barbara   0.125           23.4877          23.9054
Barbara   0.25            25.925           24.0173
Barbara   0.5             29.0292          26.0830
Barbara   1               33.8263          29.2387
Goldhill  0.03125         23.5271          24.2379
Goldhill  0.0625          24.4855          27.1712
Goldhill  0.125           26.9261          27.3161
Goldhill  0.25            28.7573          28.4243
Goldhill  0.5             31.1567          29.4883
Goldhill  1               34.4445          31.3273
Boat      0.03125         22.5333          21.9817
Boat      0.0625          24.6006          22.4062
Boat      0.125           26.3312          26.1665
Boat      0.25            28.7099          27.0335
Boat      0.5             31.9414          29.9632
Boat      1               36.0863          31.0084
(a) Airplane (b) Lena (c) Peppers (d) Barbara (e) Goldhill (f) Boat
Fig. 4 Grayscale test images of size 512×512
Fig. 5 Reconstructed Lena images (a-f) at bit-rates 0.03125, 0.0625, 0.125, 0.25, 0.5 and 1 bpp,
with PSNR 22.9172, 25.8127, 29.0219, 31.5810, 34.3612 and 35.6233 dB and SSIM 0.7013,
0.7436, 0.8133, 0.8519, 0.9023 and 0.9117 respectively.
Fig. 6 PSNR (dB) versus bit-rate (bpp): performance comparison of the proposed coder (DWT)
against IMPIEZW [8] for the Goldhill grayscale image over the bit-rates [0.03125, 0.0625,
0.125, 0.25, 0.5, 1] bpp.
6 Conclusion
In the present paper, the authors presented a technique in which the image is decomposed by a
DWT using the Haar wavelet. The method divides the image into two parts. One is the
approximate image, which plays a key role since the approximate coefficients carry the most
sensitive data in the DWT. As discussed earlier, the approximate image is a compressed
representation of the original and has therefore been handled with utmost care, since any
corruption of it would greatly reduce the PSNR of the reconstructed image. The second part is
the preprocessing stage, in which the detail coefficients are smoothed on the basis of their
textural features and the outcomes are quantized, reducing the number of insignificant symbols.
The quadtree decomposition then significantly reduces the data size and allows working on
smaller data. The outcomes of the proposed technique have been compared with a
state-of-the-art improved EZW coder [8]; the quantitative and visual results showed the
superiority of the proposed algorithm.
Declarations:
Funding: Nil
Conflict of interest: The authors declare that they have no conflict of interest.
Availability of data and material: Not applicable
Code availability: Not applicable
References
1. Ranjan, R. (2021) "Canonical Huffman Coding Based Image Compression using Wavelet", Wireless Personal Communications, 117(3): 2193-2206.
2. Leung, R. and Taubman, D. (2005) "Transform and embedded coding techniques for maximum efficiency and random accessibility in 3-D scalable compression", IEEE Transactions on Image Processing, 14(10): 1632-1646. https://doi.org/10.1109/TIP.2005.851707
3. Song, H. S. and Cho, N. I. (2009) "DCT-based embedded image compression with a new coefficient sorting method", IEEE Signal Processing Letters, 16(5): 410-413. https://doi.org/10.1109/LSP.2009.2016010
4. Ponomarenko, N. N., et al. (2007) "High-quality DCT-based image compression using partition schemes", IEEE Signal Processing Letters, 14(2): 105-108. https://doi.org/10.1109/LSP.2006.879861
5. Song, X., et al. (2016) "Three-dimensional separate descendant-based SPIHT algorithm for fast compression of high-resolution medical image sequences", IET Image Processing, 11(1): 80-87. https://doi.org/10.1049/ietipr.2016.0564
6. Wallace, G. K. (1992) "The JPEG still picture compression standard", IEEE Transactions on Consumer Electronics, 38(1): 18-34.
7. Christopoulos, C., Skodras, A. and Ebrahimi, T. (2000) "The JPEG2000 still image coding system: an overview", IEEE Transactions on Consumer Electronics, 46(4): 1103-1127. https://doi.org/10.1109/30.920468
8. Brahimi, T., Laouir, F., Boubchir, L. and Ali-Cherif, A. (2017) "An improved wavelet-based image coder for embedded greyscale and colour image compression", Int. J. Electron. Commun. (AEU), 73: 183-192.
9. Wallace, G. K. (1990) "Overview of the JPEG (ISO/CCITT) still image compression standard", Image Processing Algorithms and Techniques, Proceedings of the SPIE, 1244: 220-233.
10. Said, A. and Pearlman, W. A. (1996) "A new fast and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. Circ. Syst. Video Technol., 6(3): 243-250.
11. Brahimi, T., Melit, A., Khelifi, F. and Boutana, D. (2006) "Improvements to SPIHT for lossless image coding", in Proc. of ICTTA'06, pp. 1445-1450.
12. Mallat, S. (1989) "A theory for multiresolution signal decomposition: the wavelet representation", IEEE Trans. Pattern Analysis and Machine Intelligence, 11(7).
13. Antonini, M., Barlaud, M., Mathieu, P. and Daubechies, I. (1992) "Image coding using wavelet transform", IEEE Trans. Image Processing, 1(2): 205-220.
14. Welstead, S. (1999) Fractal and Wavelet Image Compression Techniques, SPIE Optical Engineering Press, Washington, USA.
15. Shapiro, J. M. (1993) "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. Signal Process., 41(12): 3445-3462.
16. Kang, E. S., Tanaka, T. and Ko, S. J. (1999) "Improved embedded zerotree wavelet coder", IEE Electronics Letters, 35(9): 705-706.
17. Cai, C., Mitra, S. K. and Ding, R. (2002) "Smart wavelet image coding: X-tree approach", Signal Processing, 82: 239-249.
18. Zhong, J. and Leung, C. H. (2000) "An improved embedded zerotree wavelet image coding method based on coefficient partitioning using morphological operation", International Journal of Pattern Recognition and Artificial Intelligence, 14(6): 795-807.
19. Ouafi, A., Ahmed, A. T., Baarir, Z. and Zitouni, A. (2008) "A modified embedded zerotree wavelet (MEZW) algorithm for image compression", J. Math. Imaging, 30: 298-307.
20. Brahimi, T., Laouir, F. and Kechacha, N. (2008) "An efficient wavelet-based image coder", IEEE Conf., pp. 1-4.
21. Chen, Z. and Mu, C. (2014) "An improvement of embedded zerotree wavelet coding based on compressed sensing", IEEE Conf.
22. Taubman, D. (2000) "High performance scalable image compression with EBCOT", IEEE Trans. Image Process., 9(7): 1158-1170.
23. Krinidis, M., Nikolaidis, N. and Pitas, I. (2007) "The discrete modal transform and its application to lossy image compression", Signal Processing: Image Communication, 22: 480-504.
24. ISO/IEC 10918-1 | ITU-T Rec. T.81, Information Technology - Digital Compression and Coding of Continuous-Tone Still Images, 1992.
25. ISO/IEC 15444-1 | ITU-T Rec. T.800, Information Technology - JPEG 2000 Image Coding System: Core Coding System, 2002.
26. Benchikh, S. and Corinthios, M. (2011) "A hybrid image compression technique based on DWT and DCT transforms", Advanced Infocom Technology 2011 (ICAIT 2011), IEEE, pp. 1-8.
27. Bruylants, T., Munteanu, A. and Schelkens, P. (2015) "Wavelet based volumetric medical image compression", Signal Processing: Image Communication, 31: 112-133.
28. Keissarian, F. "A new quadtree-based image compression technique using pattern matching algorithm", ICCES, 12(4): 137-143.
29. Cosman, P. C., Gray, R. M. and Olshen, R. A. (1994) "Evaluating quality of compressed medical images: SNR, subjective rating, and diagnostic accuracy", Proceedings of the IEEE, 30: 857-865.
30. Reddy, M. R., et al. (2018) "A new approach for the image compression to the medical images using PCA-SPIHT", Biomedical Research, Special Issue: S481-S486, ISSN 0970-938X.
31. Yildirim, O., Tan, R. S. and Acharya, U. R. (2018) "An efficient compression of ECG signals using deep convolutional autoencoders", Cognitive Systems Research, 52: 198-211.
32. Latha, P. M. and Fathima, A. A. (2018) "Collective compression of images using averaging and transform coding", Measurement, S0263-2241(18): 31181-3.
Fig. 7 Demonstration of the Barbara image at bit-rate 0.0625 bpp at different stages of the
proposed method: horizontal, vertical and diagonal detail images after EBS, after thresholding
and after quantization, together with the reconstructed image.