Why is image compression important?
How far has image compression come?
Image compression is nearly mature, but there is always room for improvement.
2. Goal
Store image data as efficiently as possible
Ideally, want to
– Maximize image quality
– Minimize storage space and processing resources
Can’t have best of both worlds
What are some good compromises?
3. Why is it possible to compress images?
Data != information/knowledge
Data >> information
Key idea in compression: only keep the info.
But why is data != info? Answer: Redundancy
Statistical redundancy
– Spatial redundancy and coding redundancy
Psychovisual redundancy
– Greyscale redundancy and frequency redundancy
5. Coding redundancy
Redundancy when mapping from the pixels (symbols)
to the final compressed binary code (Information theory)
Example:
Lavg,1 = 3 bits/symbol
Lavg,2 = 4×0.1 + 2×0.2 + 1×0.5 + 4×0.05 + 3×0.15 = 1.95 bits/symbol
Code 2 is also uniquely decodable, and shorter on average
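As a quick check of the arithmetic above, here is a minimal Python sketch (not part of the original slides); the symbol probabilities and code lengths are read off the slide's own sum.

```python
# Assumed symbol probabilities and code lengths, inferred from the slide's arithmetic.
probs = [0.1, 0.2, 0.5, 0.05, 0.15]
code1_lengths = [3, 3, 3, 3, 3]        # Code 1: fixed-length 3-bit code
code2_lengths = [4, 2, 1, 4, 3]        # Code 2: variable-length code

l_avg_1 = sum(p * l for p, l in zip(probs, code1_lengths))
l_avg_2 = sum(p * l for p, l in zip(probs, code2_lengths))
print(l_avg_1, l_avg_2)                # 3.0 and 1.95 bits/symbol
```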
6. Psychovisual redundancy
The "end-user" is a human => only represent the information which can be perceived by the Human Visual System (HVS)
From the data's point of view => lossy
From the HVS's point of view => lossless
Intensity redundancy
Frequency redundancy
[Diagram: incident light → low-level processing unit → high-level processing unit → perceived visual information]
7. Intensity redundancy
Weber's law: ΔI / I1 ≈ constant
where ΔI = I1 − I2 is the just-noticeable difference
The high (bright) values need a less accurate representation compared to the low (dark) values
Weber's law holds for all human senses!
[Figure: just-noticeable difference ΔI between intensities I1 and I2, shown for light intensity and for sound above a noise level]
8. Frequency redundancy
The human eye functions as a lowpass filter =>
– High frequencies in an image can be "ignored" without the HVS noticing
– Key issue in lossy image compression
Now we know why image compression is possible: redundancies
Next, investigate how to exploit these redundancies in algorithms
9. Two main schools of image compression
Lossless
– Stored image data can reproduce the original image exactly
– Takes more storage space
– Uses entropy coding only (or none at all)
– Examples: BMP, TIFF, GIF
Lossy
– Stored image data can reproduce something that looks "close" to the original image
– Uses both quantization and entropy coding
– Usually involves a transform into the frequency or another domain
– Examples: JPEG, JPEG-2000
11. BMP (Bitmap)
Use 3 bytes per pixel, one each for R, G, and B
Can represent up to 2^24 = 16.7 million colors
No entropy coding
File size in bytes = 3 × width × height, which can be very large
Can use fewer than 8 bits per color, but then you need to store the color palette
Performs well with ZIP, RAR, etc.
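For a sense of scale under this formula, an uncompressed 640 × 480 BMP occupies 3 × 640 × 480 = 921,600 bytes, roughly 0.9 MB, before any external compression such as ZIP.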
12. GIF (Graphics Interchange Format)
Can use up to 256 colors from the 24-bit RGB color space
– If the source image contains more than 256 colors, need to reprocess the image to fewer colors
Suitable for simpler images such as logos and textual graphics, not so much for photographs
Uses LZW lossless data compression
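Since the slides only name LZW, here is a minimal, illustrative LZW encoder sketch in Python; it shows the dictionary-growing idea but omits GIF-specific details such as clear/end codes and variable-width code output.

```python
def lzw_encode(data: bytes) -> list[int]:
    # Start with a dictionary of all single bytes; grow it as longer strings repeat.
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # keep extending the current match
        else:
            out.append(table[w])        # emit the code for the longest known prefix
            table[wc] = next_code       # add the new string to the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_encode(b"ABABABA"))           # [65, 66, 256, 258] -- repeats map to single codes
```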
13. JPEG (Joint Photographic Experts Group)
Most dominant image format today
Typical file size is about 10% of that of BMP (can vary depending on quality settings)
Unlike GIF, JPEG is suitable for photographs, not so much for logos and textual graphics
16. Preprocess
Shift values from [0, 2^P − 1] to [−2^(P−1), 2^(P−1) − 1]
– e.g. if P = 8, shift [0, 255] to [−128, 127]
– DCT requires the range to be centered around 0
Segment each component into 8x8 blocks
Interleave components (or not)
– components may be sampled at different rates
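A small NumPy sketch (not from the slides) of the level shift and 8x8 blocking described above; it assumes the channel dimensions are already multiples of 8.

```python
import numpy as np

def level_shift_and_block(channel: np.ndarray, p: int = 8) -> np.ndarray:
    # Shift a p-bit channel so values are centered on zero, then cut it into 8x8 blocks.
    shifted = channel.astype(np.int16) - 2 ** (p - 1)      # [0, 255] -> [-128, 127] for p = 8
    h, w = shifted.shape
    blocks = shifted.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    return blocks                                           # shape: (h//8, w//8, 8, 8)

channel = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
print(level_shift_and_block(channel).shape)                 # (2, 2, 8, 8)
```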
17. Interleaving
Non-interleaved: scan from left to right, top to bottom for each color component
Interleaved: compute one "unit" from each color component, then repeat
– full color pixels after each step of decoding
– but components may have different resolution
18. Color Transformation (optional)
Down-sample chrominance components
– compresses with little perceptible loss of quality (the HVS is less sensitive to chrominance)
– e.g., YUV 4:2:2 or 4:1:1
Example: 640 x 480 RGB to YUV 4:1:1
– Y is 640x480
– U is 160x120
– V is 160x120
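The sketch below averages 4x4 neighborhoods of a chroma plane to reproduce the slide's 640x480 -> 160x120 example; treat the factor of 4 as an assumption taken from that example, since real encoders more commonly use 4:2:0 (a factor of 2 in each direction) with filtered resampling.

```python
import numpy as np

def downsample_chroma(plane: np.ndarray, factor: int = 4) -> np.ndarray:
    # Average factor x factor neighborhoods of a chroma plane (illustrative sketch).
    h, w = plane.shape
    plane = plane[:h - h % factor, :w - w % factor]         # crop so the factor divides evenly
    h, w = plane.shape
    return plane.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

u = np.random.rand(480, 640)
print(downsample_chroma(u).shape)                           # (120, 160)
```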
19. Interleaving
The ith color component has dimensions (xi, yi)
– maximum dimension value is 2^16
– [X, Y] where X = max(xi) and Y = max(yi)
Sampling factors among components must be integral
– Hi and Vi must be within the range [1, 4]
– [Hmax, Vmax] where Hmax = max(Hi) and Vmax = max(Vi)
xi = X * Hi / Hmax
yi = Y * Vi / Vmax
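A tiny worked example of the dimension formulas above, using hypothetical sampling factors (luma (2, 2), both chroma components (1, 1)):

```python
# Hypothetical sampling factors, chosen only to illustrate the formulas above.
X_max, Y_max = 640, 480                                  # X = max(xi), Y = max(yi)
factors = {"Y": (2, 2), "Cb": (1, 1), "Cr": (1, 1)}      # (Hi, Vi) per component
h_max = max(h for h, _ in factors.values())
v_max = max(v for _, v in factors.values())
for name, (hi, vi) in factors.items():
    xi = X_max * hi // h_max                             # xi = X * Hi / Hmax
    yi = Y_max * vi // v_max                             # yi = Y * Vi / Vmax
    print(name, xi, yi)                                  # Y 640 480, Cb 320 240, Cr 320 240
```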
21. Forward DCT
Convert from the spatial to the frequency domain
– convert the intensity function into a weighted sum of periodic basis (cosine) functions
– identify bands of spectral information that can be thrown away without loss of quality
Intensity values in each color plane often change slowly
22. Understanding DCT
For example, in R^3 we can write (5, 2, 9) as the sum of a set of basis vectors
– we know that [(1,0,0), (0,1,0), (0,0,1)] provides one set of basis vectors in R^3
(5,2,9) = 5*(1,0,0) + 2*(0,1,0) + 9*(0,0,1)
DCT is the same process in the function domain
23. DCT Basis Functions
Decompose the intensity function into a weighted sum of cosine basis functions
25. 1D Forward DCT
Given a list of n intensity values I(x), where x = 0, …, n−1
Compute the n DCT coefficients:
F(u) = √(2/n) · C(u) · Σ_{x=0..n−1} I(x) · cos((2x+1)uπ / 2n),  u = 0, …, n−1
where C(u) = 1/√2 for u = 0, and C(u) = 1 otherwise
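The following NumPy sketch implements the 1D forward DCT exactly as written above and cross-checks it against SciPy's orthonormal DCT-II (SciPy is only used for the check).

```python
import numpy as np
from scipy.fftpack import dct

def dct_1d(signal: np.ndarray) -> np.ndarray:
    # F(u) = sqrt(2/n) * C(u) * sum_x I(x) * cos((2x+1) * u * pi / (2n))
    n = signal.shape[0]
    u = np.arange(n)[:, None]                              # frequency index
    x = np.arange(n)[None, :]                              # sample index
    c = np.where(np.arange(n) == 0, 1 / np.sqrt(2), 1.0)   # C(u)
    basis = np.cos((2 * x + 1) * u * np.pi / (2 * n))
    return np.sqrt(2 / n) * c * (basis @ signal)

signal = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)
print(np.allclose(dct_1d(signal), dct(signal, norm="ortho")))   # True
```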
26. 1D Inverse DCT
Given a list of n DCT coefficients F(u),
where u = 0, …, n-1
Compute the n intensity values:
I(x) = √(2/n) · Σ_{u=0..n−1} C(u) · F(u) · cos((2x+1)uπ / 2n),  x = 0, …, n−1
where C(u) = 1/√2 for u = 0, and C(u) = 1 otherwise
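And a matching sketch of the 1D inverse DCT, verified by a round trip through SciPy's forward transform (same normalization as above).

```python
import numpy as np
from scipy.fftpack import dct

def idct_1d(coeffs: np.ndarray) -> np.ndarray:
    # I(x) = sqrt(2/n) * sum_u C(u) * F(u) * cos((2x+1) * u * pi / (2n))
    n = coeffs.shape[0]
    u = np.arange(n)[None, :]                              # frequency index
    x = np.arange(n)[:, None]                              # sample index
    c = np.where(np.arange(n) == 0, 1 / np.sqrt(2), 1.0)   # C(u)
    basis = np.cos((2 * x + 1) * u * np.pi / (2 * n))
    return np.sqrt(2 / n) * (basis @ (c * coeffs))

signal = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)
print(np.allclose(idct_1d(dct(signal, norm="ortho")), signal))  # True: the transform round-trips
```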
27. Extend DCT from 1D to 2D
Perform 1D DCT on each row of the block
Again for each column of 1D coefficients
– alternatively, transpose the matrix and perform DCT on the rows
28. Equations for 2D DCT
Forward DCT:
F(u, v) = (2 / √(nm)) · C(u) · C(v) · Σ_{x=0..n−1} Σ_{y=0..m−1} I(x, y) · cos((2x+1)uπ / 2n) · cos((2y+1)vπ / 2m)
Inverse DCT:
I(x, y) = (2 / √(nm)) · Σ_{u=0..n−1} Σ_{v=0..m−1} C(u) · C(v) · F(u, v) · cos((2x+1)uπ / 2n) · cos((2y+1)vπ / 2m)
where C(·) = 1/√2 when its argument is 0, and 1 otherwise
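Putting the separability property and the equations above together, a compact sketch of the 2D DCT/IDCT for an 8x8 block, built from SciPy's 1D transforms (which share the normalization above):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct_2d(block: np.ndarray) -> np.ndarray:
    # Separability: 1D DCT along the rows, then 1D DCT along the columns.
    return dct(dct(block, axis=1, norm="ortho"), axis=0, norm="ortho")

def idct_2d(coeffs: np.ndarray) -> np.ndarray:
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.randint(-128, 128, (8, 8)).astype(float)   # a level-shifted 8x8 block
coeffs = dct_2d(block)
print(np.allclose(idct_2d(coeffs), block))                    # True: the 2D transform is invertible
```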
30. Quantization
Divide each coefficient by an integer in [1, 255]
– the divisors come in the form of a table, the same size as a block
– divide the block of coefficients element-wise by the table, and round each result to the nearest integer
In the decoding process, multiply the quantized coefficients by the table entries
– get back a number close to the original
– the error is at most 1/2 of the quantization number
Larger quantization numbers cause more loss
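A minimal quantize/dequantize sketch of the divide-round-multiply cycle; the table Q here is made up for illustration and is not the standard JPEG luminance table.

```python
import numpy as np

Q = np.arange(1, 65).reshape(8, 8)                  # hypothetical quantization table

def quantize(coeffs: np.ndarray) -> np.ndarray:
    return np.rint(coeffs / Q).astype(int)          # divide element-wise and round

def dequantize(qcoeffs: np.ndarray) -> np.ndarray:
    return qcoeffs * Q                              # the decoder multiplies back (approximate)

coeffs = np.random.uniform(-500, 500, (8, 8))
rec = dequantize(quantize(coeffs))
print(np.max(np.abs(rec - coeffs)) <= Q.max() / 2)  # True: error is at most half a quantization step
```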
32. Entropy Encoding
Compress the sequence of quantized DC and AC coefficients from the quantization step
– further increases compression, without loss
Separate DC from AC components
– DC components change slowly, thus will be encoded using difference encoding
33. DC Encoding
DC represents the average intensity of a block
– encode using a difference encoding scheme
– use a 3x3 pattern of blocks
Because the difference tends to be near zero, can use fewer bits in the encoding
– categorize the difference into difference classes
– send the index of the difference class, followed by bits representing the difference
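A small sketch of difference coding for DC values (not from the slides): it differences each block's DC against the previous block's DC, as baseline JPEG does, and uses the bit length of the difference as its difference class.

```python
def dc_differences(dc_values: list[int]) -> list[int]:
    # Difference each block's DC from the previous block's DC (first block against 0).
    prev, diffs = 0, []
    for dc in dc_values:
        diffs.append(dc - prev)
        prev = dc
    return diffs

def difference_class(diff: int) -> int:
    # Number of bits needed for |diff|; small differences fall into small classes.
    return abs(diff).bit_length()

diffs = dc_differences([120, 118, 119, 125])
print(diffs)                                     # [120, -2, 1, 6]
print([difference_class(d) for d in diffs])      # [7, 2, 1, 3]
```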
34. AC Encoding
Use zig-zag ordering of coefficients
– orders frequency components from low to high
– produces a maximal run of 0s at the end
Apply RLE to ordering
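An illustrative zigzag-plus-RLE sketch: the zigzag ordering is generated by sorting indices along anti-diagonals, and the run-length encoder emits (zero-run, value) pairs with an end-of-block marker, roughly in the JPEG style.

```python
import numpy as np

def zigzag_order(n: int = 8):
    # Walk the anti-diagonals (r + c constant), alternating direction, low frequencies first.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length_encode(values):
    # Emit (run-of-zeros, value) pairs; a trailing run of zeros becomes an end-of-block marker.
    out, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append("EOB")
    return out

block = np.zeros((8, 8), dtype=int)
block[0, 1], block[1, 0], block[1, 1] = 5, -3, 2
ac = [int(block[r, c]) for r, c in zigzag_order()][1:]   # skip the DC term at (0, 0)
print(run_length_encode(ac))                              # [(0, 5), (0, -3), (1, 2), 'EOB']
```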
35. Huffman Encoding
The sequence of DC difference indices and values along with the RLE of AC coefficients
Apply Huffman encoding to the sequence
– exploits the sequence's statistics by assigning frequently used symbols fewer bits than rare symbols
Attach appropriate headers
Finally we have the JPEG image!
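A minimal Huffman code construction to make the "fewer bits for frequent symbols" idea concrete; note that real JPEG files use predefined or canonical Huffman tables carried in the file header rather than building a tree like this.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    # Repeatedly merge the two least frequent subtrees; each merge prepends a 0/1 bit.
    freq = Counter(symbols)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                                  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code("aaaabbbccd")
print(codes)    # {'a': '0', 'b': '10', 'd': '110', 'c': '111'} -- frequent symbols get short codes
```

Here the average code length is (4·1 + 3·2 + 2·3 + 1·3) / 10 = 1.9 bits per symbol, versus 2 bits for a fixed-length code over four symbols, which is the coding-redundancy saving from slide 5 in miniature.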
36. JPEG Decoding Steps
Basically reverse the steps performed in the encoding process
– Parse the compressed image data and perform Huffman decoding to get RLE symbols
– Undo the RLE to get the DCT coefficient matrix
– Multiply by the quantization matrix
– Take the 2-D inverse DCT of this matrix to get the reconstructed image data
37. Reconstruction Error
The resulting image is "close" to the original image
Usually measure "closeness" with MSE (Mean Squared Error) and PSNR (Peak Signal to Noise Ratio)
– Want low MSE and high PSNR
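Both measures are straightforward to compute; a short NumPy sketch, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    return float(np.mean((original.astype(float) - reconstructed.astype(float)) ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
b = np.clip(a.astype(int) + np.random.randint(-2, 3, a.shape), 0, 255)   # a mildly distorted copy
print(mse(a, b), psnr(a, b))   # low MSE and high PSNR indicate a close reconstruction
```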
38. Example - One everyday photo with file size of 2.76 MB
39. Example - One everyday photo with file size of 600 KB
40. Example - One everyday photo with file size of 350 KB
41. Example - One everyday photo with file size of 240 KB
42. Example - One everyday photo with file size of 144 KB
43. Example - One everyday photo with file size of 88 KB
44. Analysis
Near perfect image at 2.76 MB, so-so image at 88 KB
Sharpness decreases as file size decreases
False contours visible at 144 KB and 88 KB
– Can be fixed by dithering the image before quantization
Which file size is the best?
– No correct answer to this question
– The answer depends upon how strict we are about image quality, what purpose the image is to be used for, and the resources available