
MT, UNIT - II

Types of Images

There are three types of images:

Binary Images

This is the simplest type of image. Each pixel takes only one of two values, black or white (0 or 1), so a binary image needs just 1 bit per pixel. Binary images are mostly used to capture a general shape or outline.

For Example: Optical Character Recognition (OCR).

Binary images are generated using a threshold operation: pixels above the threshold value are turned white ('1'), and pixels below it are turned black ('0').
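As a small illustration (a sketch in plain Python; the function name `to_binary` is our own, and pixels exactly at the threshold are assumed to become white):

```python
def to_binary(gray_pixels, threshold=128):
    """Convert a list of grey levels (0-255) to a binary image.

    Pixels at or above the threshold become white (1); the rest black (0).
    """
    return [1 if p >= threshold else 0 for p in gray_pixels]

# One row of grey values from a hypothetical image:
row = [12, 200, 130, 90, 255]
print(to_binary(row))  # -> [0, 1, 1, 0, 1]
```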

Gray-scale images

Grayscale images are monochrome images, meaning they have only one channel and carry no information about color. Each pixel stores one of the available grey levels.
A normal grayscale image uses 8 bits/pixel, which gives 256 different grey levels. In medical imaging and astronomy, 12- or 16-bit/pixel images are used.
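The number of grey levels follows directly from the bit depth (levels = 2 ** bits), which a few lines of Python can confirm:

```python
# Number of representable grey levels for common bit depths.
levels = {bits: 2 ** bits for bits in (1, 8, 12, 16)}
for bits, n in levels.items():
    print(f"{bits} bits/pixel -> {n} grey levels")
# 8 bits give 256 levels; 12 and 16 bits give 4096 and 65536 levels,
# which is why medical and astronomical images use the higher depths.
```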

KIRTI VERMA
ASSISTANT PROFESSOR
PIET (DCA)

Colour images

Colour images are three-band monochrome images in which each band carries a different color component; together the bands store the actual information of the digital image. A color image contains grey-level information in each spectral band.

The images are represented as red, green and blue (RGB images), and each color image has 24 bits/pixel: 8 bits for each of the three color bands (RGB).

8-bit color format

The 8-bit color format stores image information in a computer's memory or in an image file using one 8-bit byte per pixel. It has a range of 0-255, where 0 represents black, 255 represents white, and 127 represents mid-grey. The 8-bit format is also known as a grayscale image. It was initially used by the UNIX operating system.


16-bit color format

The 16-bit color format is also known as the high color format. It can represent 65,536 different color shades and was popularized in systems developed by Microsoft. The 16 bits are divided among the three channels Red, Green, and Blue (the RGB format).

In this RGB format, there are 5 bits for R, 6 bits for G, and 5 bits for B. The additional bit is given to green because the human eye is most sensitive to green.
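The 5-6-5 split can be made concrete by packing an 8-bit RGB triple into one 16-bit word (a sketch; the helper names are our own, and the simple bit-shift quantization shown is one common choice, not the only one):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B into a 16-bit 5-6-5 word.

    Keeps the top 5 bits of R, the top 6 of G, and the top 5 of B.
    """
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(word):
    """Recover approximate 8-bit channels from a 16-bit 5-6-5 word."""
    r = (word >> 11) << 3
    g = ((word >> 5) & 0x3F) << 2
    b = (word & 0x1F) << 3
    return r, g, b

white = pack_rgb565(255, 255, 255)
print(hex(white))            # 0xffff: all 16 bits set
print(unpack_rgb565(white))  # (248, 252, 248): the lost low bits cannot return
```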

24-bit color format

The 24-bit color format is also known as the true color format. The 24 bits are distributed equally among Red, Green, and Blue: 8 bits for R, 8 bits for G, and 8 bits for B.

RGB: The RGB colour model is the most common colour model used in digital image processing and OpenCV. A colour image consists of 3 channels, one per colour. Red, Green and Blue are the main colour components of this model; all other colours are produced by mixing these three in proportion. 0 represents black, and as the channel value increases, the colour intensity increases.
Properties:
 This is an additive colour model: colours are added on top of black.
 3 main channels: Red, Green and Blue.

 Used in DIP, OpenCV, and online logos

Colour combination:
Green(255) + Red(255) = Yellow
Green(255) + Blue(255) = Cyan
Red(255) + Blue(255) = Magenta
Red(255) + Green(255) + Blue(255) = White

CMYK: The CMYK colour model is widely used in printers. It stands for Cyan, Magenta, Yellow and Key (black). It is a subtractive colour model: 0 represents no ink and 1 represents full ink, so the point (1, 1, 1) represents black and (0, 0, 0) represents white. Because the model is subtractive, each channel value is obtained by subtracting the corresponding RGB value from 1.

CMY = 1 - RGB
Cyan is the complement (negative) of Red.
Magenta is the complement of Green.
Yellow is the complement of Blue.
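With the channels normalised to the 0-1 range, the complement relation above is a one-liner (a sketch; the helper name `rgb_to_cmy` is our own):

```python
def rgb_to_cmy(r, g, b):
    """Convert normalised RGB (0..1) to CMY using CMY = 1 - RGB."""
    return (1 - r, 1 - g, 1 - b)

print(rgb_to_cmy(1, 0, 0))  # pure red -> (0, 1, 1): no cyan ink at all
print(rgb_to_cmy(1, 1, 1))  # white   -> (0, 0, 0): the blank page
```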


Differences between RGB and CMYK color schemes:


RGB Color Scheme                          CMYK Color Scheme
Used for digital work.                    Used for print work.
Primary colors: Red, Green, Blue          Primary colors: Cyan, Magenta, Yellow, Black
Additive type mixing.                     Subtractive type mixing.
Colors of images are more vibrant.        Colors are less vibrant.
Has a wider range of colors than CMYK.    Has a narrower range of colors than RGB.
File formats: JPEG, PNG, GIF, etc.        File formats: PDF, EPS, etc.
Used for online logos, online ads,        Used for business cards, stationery,
digital graphics, website photographs,    stickers, posters, brochures, etc.
social media, apps, etc.


Color models in video


1. YUV Color Model

Description:

 The YUV color model is used primarily in video broadcasting and compression. It
separates the image into luminance (Y) and chrominance (UV) components, which
makes it useful for applications where color and brightness need to be handled
separately.

Components:

 Y (Luminance): Represents the brightness of the image. It carries the grayscale information and is crucial for the perception of detail.
 U (Chrominance Blue): Represents the difference between the blue color component and the luminance.
 V (Chrominance Red): Represents the difference between the red color component and the luminance.

Usage:

 Broadcast Television: Used in PAL (Phase Alternating Line) and SECAM (Séquentiel Couleur à Mémoire) television systems.
 Video Compression: The separation of luminance and chrominance helps in
compressing color information efficiently without significantly affecting perceived
quality.

Technical Details:

 The YUV color model can be converted to and from RGB, making it versatile for
video processing. The model separates color information from brightness, which
allows for more efficient compression and broadcasting.
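As a hedged illustration of the RGB-to-YUV conversion the paragraph mentions: the sketch below uses the common BT.601-derived luma weights, and the helper name is our own, not an API from the notes:

```python
def rgb_to_yuv(r, g, b):
    """Convert normalised RGB (0..1) to analog YUV.

    Uses the common BT.601-derived luma weights; U and V are scaled
    colour-difference signals built from (B - Y) and (R - Y).
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# Grey input has zero chrominance -- only the luminance is non-zero:
print(rgb_to_yuv(0.5, 0.5, 0.5))  # approximately (0.5, 0.0, 0.0)
```

Grey inputs collapsing to (Y, 0, 0) is exactly the separation of brightness from colour that makes YUV useful for broadcasting.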

2. YIQ Color Model

Description:

 The YIQ color model was developed for the NTSC (National Television System
Committee) television broadcasting standard. It also separates luminance from
chrominance, but the chrominance components are defined differently than in YUV.


Components:

 Y (Luminance): Represents the brightness of the image, similar to Y in the YUV model.
 I (In-phase Chrominance): Carries the color information along the orange-blue axis.
 Q (Quadrature Chrominance): Carries the color information along the purple-green axis.

Usage:

 NTSC Television: Used in the NTSC standard for color television broadcasting in
North America and other regions.
 Color Processing: Useful in systems that need to handle color and brightness
separately, with a focus on efficient broadcast transmission.

Technical Details:

 The YIQ model is designed to be compatible with the NTSC color broadcast system.
The I and Q components are orthogonal to each other, allowing for efficient encoding
and decoding of color information.
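A sketch of the RGB-to-YIQ conversion, using the approximate NTSC coefficients (the helper name is our own; published sources give slightly different precision for the I and Q weights):

```python
def rgb_to_yiq(r, g, b):
    """Convert normalised RGB (0..1) to YIQ (approximate NTSC coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

y, i, q = rgb_to_yiq(1.0, 1.0, 1.0)           # white
print(round(y, 3), round(i, 3), round(q, 3))  # luminance 1, chrominance ~0
```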

3. YCbCr Color Model

Description:

 The YCbCr color model is commonly used in video compression and digital
television. It’s a modern evolution of the YUV model, designed to work well with
digital video formats and compression algorithms.

Components:

 Y (Luminance): Represents the brightness of the image, similar to Y in both YUV and YIQ models.
 Cb (Chrominance Blue): Represents the difference between the blue color
component and the luminance (Y).
 Cr (Chrominance Red): Represents the difference between the red color component
and the luminance (Y).

Usage:

 Video Compression: Widely used in video compression standards like JPEG, MPEG-2, and H.264/AVC. The separation of luminance and chrominance

components allows for chroma subsampling, which reduces the amount of color data
while preserving visual quality.
 Digital Television: Used in digital TV broadcasting and digital video formats for its
efficiency and compatibility with modern compression techniques.

Technical Details:

 Chroma Subsampling: In formats like 4:2:2 or 4:2:0, chroma information (Cb and
Cr) is sampled at lower resolutions than the luminance (Y) data. This is based on the
observation that the human eye is less sensitive to color detail than to brightness.
 Conversion: YCbCr can be easily converted to and from RGB, making it suitable for
video processing and color management in various digital systems.
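The RGB-to-YCbCr conversion can be sketched as follows, using the full-range (JPEG-style) BT.601 equations for 8-bit channels; the helper name is our own:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range (JPEG-style) BT.601 conversion of 8-bit RGB to YCbCr.

    Cb and Cr are colour-difference signals offset by 128 so that
    neutral (grey) colours sit at the middle of the 0-255 range.
    """
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (0, 128, 128)
```

Both white and black map to chroma (128, 128), showing again that all the "colour-less" information lives in Y alone.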

Comparison of YUV, YIQ, and YCbCr

 YUV vs. YIQ:
o Both models separate luminance from chrominance but use different chrominance components.
o YUV is used in PAL and SECAM systems, while YIQ is used in NTSC
systems.
o YUV components are used in analog systems, whereas YIQ is specifically
designed for the NTSC broadcast system.
 YUV vs. YCbCr:
o YCbCr is a digital color model derived from YUV, adapted for digital video
and compression.
o YCbCr uses a different scaling and offset compared to YUV, which makes it
more suitable for modern video processing.
o Both models separate luminance from chrominance but YCbCr is more
aligned with digital encoding and is often used in video compression
standards.
 YIQ vs. YCbCr:
o YIQ is used for analog NTSC broadcasts, while YCbCr is used for digital
video formats.
o YIQ uses chrominance components I and Q, whereas YCbCr uses Cb and Cr,
with different interpretations and scales.
o YCbCr’s design aligns with digital video compression techniques, while YIQ
was designed for broadcast television.

What is Video?

A video is a series of electronic signals that presents a rapid succession of still pictures, simulating movement. Videos can be used for education, entertainment, or other purposes, and may combine pictures, graphics, or text. In modern times, there

are various websites or webpages that contain streaming or downloadable video, which can
be watched by the users on their computer or other similar devices.

Video signals are electrical signals that transmit video information from a source to a display
device. They are used in a wide range of applications, from television broadcasting to
surveillance systems. The quality of the video signal is determined by various factors such as
resolution, color depth, and refresh rate. There are several types of video signals used in
different applications, each with its unique features and specifications. In this article, we will
discuss different types of video signals in detail.
1. Composite Video Signal:

Composite video signal is the most basic type of video signal. It consists of a single signal
that carries all the video information, including the color and brightness. The composite video
signal is typically transmitted using an RCA connector or a BNC connector. The signal is not
separated into different components, which means that it has lower quality compared to other
types of video signals.

Composite video signal is widely used in consumer electronics such as DVD players, VCRs,
and older TVs. It has a resolution of 480i and a refresh rate of 60Hz. The color information is
encoded using the NTSC, PAL, or SECAM standard, depending on the region.

Advantages:

 Low cost of implementation


 Compatible with a wide range of devices
 Can be transmitted over long distances

Disadvantages:

 Poor video quality compared to digital signals


 Prone to noise and interference
 Limited color resolution

2. S-Video Signal:

S-Video signal, also known as Y/C signal, is an improvement over the composite video
signal. It separates the color and brightness information into two separate signals. This
separation improves the image quality by reducing the color bleeding and improving the
sharpness of the image.


S-Video signal is transmitted using a 4-pin mini-DIN connector. It has a resolution of 480i
and a refresh rate of 60Hz. S-Video signal is commonly used in older TVs, camcorders, and
video game consoles.

Advantages:

 Better video quality than composite video


 Reduced noise and interference
 Improved color resolution

Disadvantages:

 Not as widely supported as composite video


 Limited resolution compared to digital signals

3. Component Video Signal:

Component video signal is an analog video signal that separates the video information into three separate signals, most commonly luminance plus two colour-difference signals (YPbPr), or alternatively red, green, and blue (RGB). This separation allows for higher quality video transmission compared to composite and S-video signals.

Component video signal is typically transmitted using three RCA connectors or three BNC
connectors. It has a resolution of 480p, 720p, or 1080i, depending on the device. Component
video signal is commonly used in high-definition TVs, DVD players, and video game
consoles.

Advantages:

 Excellent video quality


 High color resolution
 Reduced noise and interference

Disadvantages:

 Requires three cables to transmit the signal


 Not as widely supported as composite video

Analog video standards refer to the systems and technologies used for broadcasting and
recording video signals before the widespread adoption of digital formats. These standards
were designed to transmit video and audio information over analog signals. Here’s an
overview of the most notable analog video standards:


ANALOG VIDEO STANDARDS

1. NTSC (National Television System Committee)

Description:

 NTSC is one of the most well-known analog television standards, primarily used in
North America and parts of South America and Asia.

Technical Details:

 Resolution: Approximately 480 lines of vertical resolution (480i).


 Frame Rate: 30 frames per second (fps), with each frame divided into two interlaced
fields.
 Aspect Ratio: 4:3 (standard definition).
 Color Encoding: YIQ color model, where Y represents luminance, I represents the
in-phase chrominance, and Q represents the quadrature chrominance.

Use Cases:

 Analog television broadcasting in the United States, Canada, and Japan.


 VHS tapes and early camcorders used in NTSC regions.

2. PAL (Phase Alternating Line)

Description:

 PAL is a color encoding system used in many countries across Europe, Africa, Asia,
and Australia. It was designed to improve color stability and reduce color artifacts
compared to NTSC.

Technical Details:

 Resolution: Approximately 576 lines of vertical resolution (576i).


 Frame Rate: 25 frames per second (fps), with each frame divided into two interlaced
fields.
 Aspect Ratio: 4:3 (standard definition).
 Color Encoding: YUV color model, where Y represents luminance, U represents the
chrominance blue difference, and V represents the chrominance red difference. The
phase of the color subcarrier is alternated to reduce color errors.

Use Cases:

 Analog television broadcasting in Europe, Australia, and parts of Asia and Africa.

 VHS tapes and camcorders used in PAL regions.

3. SECAM (Séquentiel Couleur à Mémoire)

Description:

 SECAM is an analog color television standard used primarily in France, parts of Eastern Europe, and some countries in Africa. It was developed to address color stability issues present in earlier systems.

Technical Details:

 Resolution: 625 total scan lines, of which approximately 576 are visible (576i).


 Frame Rate: 25 frames per second (fps), with each frame divided into two interlaced
fields.
 Aspect Ratio: 4:3 (standard definition).
 Color Encoding: Uses a unique method of color encoding where color information is
transmitted sequentially and stored in memory, reducing color artifacts compared to
NTSC.

Use Cases:

 Analog television broadcasting in France, former Soviet Union countries, and parts of
Africa.
 SECAM-compatible VHS tapes and camcorders.

Digital video standards refer to a set of established guidelines and specifications that define
how digital video data is encoded, transmitted, and decoded. These standards ensure
consistency, compatibility, and quality across various digital video systems and applications.
They cover aspects such as resolution, frame rate, color representation, compression
techniques, and data interfaces.

Here’s an overview of the key aspects of digital video standards:

1. Resolution and Aspect Ratio

Resolution:

 Defines the number of pixels in each dimension of a video image. Common resolutions include:
 Standard Definition (SD): 720x480 pixels (480p).
 High Definition (HD): 1280x720 pixels (720p) and 1920x1080 pixels (1080p).
 Ultra High Definition (UHD): 3840x2160 pixels (4K) and 7680x4320 pixels (8K).


Aspect Ratio:

 The ratio of the width to the height of the video frame. Common aspect ratios are:
 4:3: Traditional TV format.
 16:9: Widescreen format used for HD and UHD TVs.

2. Color Space and Depth

Color Space:

 Defines the range of colors that can be represented in a video. Common color spaces
include:
 Rec. 601: Used for SD video.
 Rec. 709: Used for HD video.
 Rec. 2020: Used for UHD video, providing a broader color gamut.

Color Depth:

 Refers to the number of bits used to represent each color channel. Higher color depth
allows for more colors and smoother gradients. Common depths are:
 8-bit: Provides 256 shades per channel.
 10-bit: Provides 1024 shades per channel, often used in professional video.

3. Frame Rate

Frame Rate:

 The number of frames displayed per second (fps). Common frame rates include:
 24 fps: Standard for film and many streaming services.
 30 fps: Common for TV and some online content.
 60 fps: Used for high-definition video and high-speed content.
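Resolution, colour depth, and frame rate together determine the raw (uncompressed) data rate, which is what makes the compression techniques below indispensable. A quick back-of-the-envelope calculation (the helper name is our own):

```python
def raw_bitrate_mbps(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in megabits per second (1 Mb = 10**6 bits)."""
    return width * height * bits_per_pixel * fps / 1e6

# 1080p, 24-bit colour, 30 fps:
print(raw_bitrate_mbps(1920, 1080, 24, 30))  # ~1493 Mbps, far above typical broadband
```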

4. Compression Techniques

Compression:

 Reduces the size of video files to make them easier to store and transmit. There are
two main types:
 Lossy Compression: Reduces file size by removing some data, which may affect
quality. Examples include:
 H.264 (AVC): Widely used for streaming and Blu-ray.
 H.265 (HEVC): Provides better compression efficiency than H.264.
 Lossless Compression: Reduces file size without losing any data. Examples include:
 ProRes: An intermediate codec used in professional video editing (strictly speaking lossy, but visually near-lossless).

 FFV1: A lossless video codec used in archival.

5. Data Interfaces and Formats

Data Interfaces:

 Defines how video data is transmitted between devices. Common interfaces include:
 HDMI (High-Definition Multimedia Interface): Transmits high-definition video
and audio.
 DisplayPort: Supports high resolutions and multiple displays.
 SDI (Serial Digital Interface): Used in professional video production.

File Formats:

 The container formats that bundle video data with audio and metadata. Common
formats include:
 MP4: Popular for streaming and storage.
 MKV (Matroska): Supports multiple video, audio, and subtitle streams.
 AVI: An older format with support for various codecs.

6. Standards and Specifications

MPEG Standards:

 Developed by the Moving Picture Experts Group for video compression:


 MPEG-1: For digital storage on CDs.
 MPEG-2: For DVDs and digital TV.
 MPEG-4: Includes multiple profiles for different applications.

ITU Standards:

 Set by the International Telecommunication Union:


 ITU-R BT.601: Standard for SD video.
 ITU-R BT.709: Standard for HD video.
 ITU-R BT.2020: Standard for UHD video.

Importance of Digital Video Standards

 Interoperability: Ensure compatibility between different devices and systems, allowing seamless playback and editing across various platforms.
 Quality: Define parameters to maintain high video quality across different resolutions
and formats.
 Efficiency: Use compression techniques to optimize storage and transmission,
balancing quality with file size.

 Consistency: Provide a consistent viewing experience across different devices and applications.

1. Chroma Subsampling

Introduction: Chroma subsampling is a critical concept in digital video processing that focuses on optimizing the representation of color information. In video and image data, the human eye is less sensitive to color detail compared to brightness detail. This property is leveraged to reduce the amount of color data without significantly affecting the perceived quality of the image or video.

Background:

 Human Vision Sensitivity: Research shows that the human visual system is more
sensitive to changes in brightness (luminance) than to changes in color (chrominance).
This difference is exploited to compress video data efficiently.
 Color Information vs. Luminance: In digital video, color information is less critical
to overall image quality than brightness information. By reducing the resolution of
color data, we can save on data bandwidth and storage while maintaining visual
quality.

Chroma Subsampling Formats:

 4:4:4: This format samples all color channels (Y, U, V or Y, Cb, Cr) at full
resolution. It provides the highest quality but requires the most data. It is typically
used in professional video editing and production where preserving the maximum
detail is crucial.
 4:2:2: Here, the chroma channels (U and V) are sampled at half the resolution of the
luminance channel (Y) horizontally. This strikes a balance between quality and data
size and is commonly used in video production and broadcasting.
 4:2:0: In this format, chroma channels are sampled at half the resolution of luminance
both horizontally and vertically. This is widely used in consumer video formats such
as DVDs, Blu-ray, and streaming video due to its efficient compression.
 4:1:1: This format samples chroma channels at one-quarter the resolution of
luminance horizontally, which is often used in certain digital video systems but is less
common today.
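A minimal sketch of the 4:2:0 idea in plain Python (the helper name is our own, and block-averaging is just one of several possible downsampling filters): each chroma plane is reduced by averaging 2x2 blocks while the luma plane is kept at full resolution.

```python
def subsample_420(chroma):
    """Downsample a chroma plane (a list of rows) by averaging 2x2 blocks.

    Assumes even dimensions; luma would be left untouched at full size.
    """
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block_sum = (chroma[y][x] + chroma[y][x + 1]
                         + chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(block_sum // 4)
        out.append(row)
    return out

cb = [[100, 102, 200, 202],
      [100, 102, 200, 202]]
print(subsample_420(cb))  # -> [[101, 201]]: 8 chroma samples reduced to 2
```

The stored chroma data shrinks by a factor of four, which is the source of 4:2:0's efficiency.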

Impact: Chroma subsampling is essential in digital video encoding as it significantly reduces the amount of data needed to represent color information. This reduction allows for more efficient compression and transmission of video while maintaining acceptable visual quality, making it ideal for various video applications, from streaming to broadcast.


2. CCIR Standards for Digital Video

Introduction: The International Radio Consultative Committee (CCIR), now known as the
International Telecommunication Union (ITU), established several important standards for
digital video. These standards define how video data should be formatted, transmitted, and
displayed to ensure compatibility and quality across different systems.

Key Standards:

 CCIR 601 (ITU-R BT.601):


 Overview: Introduced in the 1980s, CCIR 601 defines the digital encoding format for
standard definition (SD) video. It was a pioneering standard that laid the groundwork
for digital video processing.
 Resolution: Specifies SD resolutions, such as 720x480 pixels for NTSC (North
America) and 720x576 pixels for PAL (Europe).
 Color Space: Uses the YUV color model with 4:2:2 chroma subsampling, defining
how color information is encoded alongside luminance.
 Impact: Provided a standardized way to digitize and store video, which was crucial
for the transition from analog to digital video systems and enabled the development of
digital broadcasting and recording technologies.

 CCIR 656 (ITU-R BT.656):


 Overview: Defines a digital video interface standard for transmitting video data based
on the CCIR 601 specification.
 Resolution and Signal: Supports SD video formats and specifies the electrical
characteristics and format for digital video interfaces.
 Impact: Facilitated the interconnection of digital video equipment and systems,
enabling reliable and consistent video data transfer.

 ITU-R BT.709:
o Overview: This standard was introduced to define parameters for high-
definition television (HDTV) systems, including color space and resolution.
o Resolution: Supports HD resolutions like 1280x720 (720p) and 1920x1080
(1080p).
o Color Space: Similar to CCIR 601 but adapted for HD, providing a color
gamut and range suitable for high-definition video.
o Impact: Standardized high-definition video production and broadcasting,
supporting the transition from SD to HD content.
 ITU-R BT.2020 (Rec. 2020):
o Overview: Defines the parameters for ultra-high-definition television
(UHDTV), including 4K and 8K resolutions.
o Resolution: Supports 4K (3840x2160) and 8K (7680x4320) resolutions,
providing a significant leap in video detail and clarity.

o Color Space: Offers a broader color gamut than Rec. 709, enhancing color
reproduction and providing a more immersive viewing experience.
o Impact: Facilitates the development of advanced UHDTV systems and high-
resolution content, pushing the boundaries of video quality and viewer
experience.

3. HDTV (High-Definition Television)

Introduction: High-Definition Television (HDTV) represents a significant advancement over Standard Definition (SD) television, providing higher resolution, improved image quality, and enhanced audio. HDTV standards were developed to meet the growing demand for better viewing experiences and to leverage advances in digital technology.

HDTV Resolutions and Standards:

 HD Ready:
o Overview: Refers to a display or device that supports 720p resolution
(1280x720). It may not necessarily support higher resolutions like 1080p.
o Use Cases: Often used for entry-level HDTVs and early high-definition
content.
 Full HD (FHD):
o Overview: Refers to a resolution of 1080p (1920x1080). It provides a higher
level of detail compared to HD Ready and is widely used for HD broadcasts
and content.
o Use Cases: Standard for most HDTVs, Blu-ray players, and streaming
services.
 Ultra HD (UHD):
o Overview: Includes 4K (3840x2160) and 8K (7680x4320) resolutions,
offering even greater detail and clarity. UHD represents the latest
advancements in high-definition television.
o Use Cases: UHD content is used in advanced broadcasting, streaming
services, and high-end display technologies.

Color Standards:

 Rec. 709 (ITU-R BT.709): Specifies color space parameters for HD, defining the
color gamut and standard for HD television and content.
 Rec. 2020 (ITU-R BT.2020): Expands the color gamut and resolution capabilities for
UHD, providing a more extensive range of colors and higher resolutions.

Impact:

 Broadcasting and Streaming: HDTV standards have transformed how content is delivered, making high-definition video available for a wide range of applications, from television broadcasts to online streaming.

 Consumer Electronics: Modern HDTVs, Blu-ray players, and streaming devices adhere to these standards to provide enhanced viewing experiences.
 Video Production: High-definition standards have become the norm for professional
video production, offering greater detail and visual fidelity in content creation.
