8K Resolution Camera System

The document discusses high resolution 8K camera systems. It describes two approaches - one using four 8 megapixel sensors and another using three 33 megapixel sensors. It covers the components, imaging methods, features and advantages/disadvantages of these 8K camera systems.


SEMINAR REPORT

Entitled

“8K Resolution Camera System”


Submitted to the Department of Electronics Engineering
In Partial Fulfillment of the Requirement for the Degree of Bachelor of Technology in Electronics & Communication Engineering

: Presented & Submitted By :


Ms. Rutuja Pawar
(Roll No. U17EC052)
B. TECH. IV (EC), 7th Semester

: Guided By :
Prof. Prashant K. Shah
Professor or Associate Professor or Assistant Professor, ECED.

(Year: 2020-21)

DEPARTMENT OF ELECTRONICS ENGINEERING


Sardar Vallabhbhai National Institute of Technology
Surat-395007, Gujarat, INDIA.
Sardar Vallabhbhai National Institute of Technology
Surat-395007, Gujarat, INDIA.

ELECTRONICS ENGINEERING DEPARTMENT

This is to certify that the SEMINAR REPORT entitled “8K Resolution Camera
System” is presented and submitted by Ms. Rutuja J. Pawar, bearing Roll No.
U17EC052, of B.Tech. IV, 7th Semester, in partial fulfillment of the requirement for
the award of the B.Tech. degree in Electronics & Communication Engineering for the
academic year 2020-21.

She has successfully and satisfactorily completed her Seminar Exam in all
respects. We certify that the work is comprehensive, complete and fit for evaluation.

Prof. / Dr. P. K. Shah


Professor or Associate Professor or Assistant Professor &
Seminar Guide

SEMINAR EXAMINERS:
Name of Examiner Signature with date

1. Dr. J. N. Sarvaiya
2. Vindheshwari Singh
3.

Dr. A. D. Darji
Associate Professor &
Head, ECED, SVNIT.
(October - 2020)

DEPARTMENT SEAL
ACKNOWLEDGEMENT

I would like to take this opportunity to thank everyone who helped and encouraged me
in the process of writing this report, and everyone who helped me present the topics
discussed here more clearly and deepen my understanding of them. With great pleasure,
I present to you this report titled ‘8K Resolution Camera System.’

I offer sincere gratitude to my guide, Prof. P. K. Shah, for his invaluable guidance
and support. With his valuable assistance and suggestions for improvement, I have been
able to bring this report to successful completion.

I would also like to thank our Head of Department, Dr. A. D. Darji, for his cooperation
and encouragement. I am also grateful to my friends and family, who supported me
throughout this task. It is with sincere effort that I present this seminar report to
you.

Rutuja Pawar

U17EC052

ABSTRACT

SEMINAR TITLE: 8K Resolution Camera System

An imaging system incorporates electronics in conjunction with imaging optics. A digital
camera is an imaging system that includes image sensors and processors, each of which
is a subsystem of the imaging system. This seminar focuses on high resolution camera
systems with a resolution of 8K (7680 x 4320), i.e., about 33 million pixels. An
overview of two such camera systems is discussed.

One system uses a four-sensor imaging method with four 8-megapixel sensors; being
compactly packed, it is better suited to a practical camera system. The other system
uses three 33-megapixel sensors to achieve the desired resolution.

A high resolution camera system has to deliver the desired response while taking size
constraints into account. The structure of a practical camera system and its features
are also discussed in this report.

Signature of Student
Student Name: Ms. Rutuja Pawar
Roll No. U17EC052
Guide Name: Prof. P. K. Shah
Date of Seminar Exam: 07/ 11/ 2020
Timeslot of Seminar Exam: 10 a.m. to 1 p.m.
Abstract Submission Date: 01/ 11/ 2020

Examiner Name: Dr. J. N. Sarvaiya & Vindheshwari Singh

*****

TABLE OF CONTENTS

CHAPTER NO   TITLE

             Acknowledgement
             Abstract
             List of Figures
             List of Tables

1.  Introduction
    1.1  History
    1.2  Digital Camera
    1.3  Video Camera
2.  Image Sensors
    2.1  Sensor Construction
         2.1.1  Charge-Coupled Device (CCD)
         2.1.2  Complementary Metal Oxide Semiconductor (CMOS)
    2.2  Sensor Features
    2.3  Spectral Properties
3.  Display Resolution
    3.1  Standard-Definition Television
    3.2  Enhanced-Definition Television
    3.3  High-Definition Television
    3.4  Ultra-High-Definition Television
4.  Video Encoding and Formats
    4.1  Video Codec
    4.2  Types of Video Codec
    4.3  Video Formats
5.  SHV Camera System
    5.1  Four-Sensor Imaging Method
         5.1.1  Optical Block
         5.1.2  Enhancement of Image Resolution
         5.1.3  Additional Features
6.  Practical SHV Camera System
    6.1  System Structure
    6.2  Specific Features
7.  SHV Camera System with three 33-megapixel CMOS image sensors
    7.1  Camera Design
    7.2  33-megapixel CMOS Image Sensor
    7.3  High-bandwidth signal transmission interfaces
8.  Advantages and Disadvantages
    8.1  Advantages
    8.2  Disadvantages
9.  Summary
    References
    Acronyms

LIST OF FIGURES

FIGURE NO.   TITLE

1.1  Camera Obscura to trace an image
1.2  Pinhole Camera Diagram
1.3  Daguerreotype photographic camera
1.4  Film and digital cameras
1.5  Professional Video Camera
2.1  Image sensor
2.2  Block diagram of a charge-coupled device
2.3  Block diagram of a complementary metal oxide semiconductor
2.4  Illustration of camera sensor pixels with RGB colour
2.5  Illustration of standard sensor size dimensions
2.6  Single chip colour CCD sensor using Bayer filter
2.7  Three-chip colour CCD camera sensor using prism
3.1  Standard display resolutions for PC monitors
3.2  PAL
3.3  Aspect ratios
3.4  Comparison of 8K, 4K, HDTV, SDTV resolutions
3.5  98-inch 8K TV from Samsung
4.1  Video Codec
4.2  Video and Audio Codec
4.3  Lossless vs Lossy Compression
5.1  Colour separation prism structure for four-sensor imaging method
5.2  Spatial pattern of pixels
6.1  Practical SHV camera
6.2  Block Diagram of SHV Camera System
6.3  Checking for chromatic aberration
6.4  Before and after chromatic aberration correction
7.1  Block diagram of camera system
7.2  33-megapixel image sensor
7.3  Camera head appearance

LIST OF TABLES

TABLE NO.   TITLE

2.1  Comparison of CCD and CMOS sensors
4.1  Types of Video Codecs

CHAPTER 1

INTRODUCTION

Imaging systems incorporate imaging electronics in conjunction with imaging optics.
They deliver images on a display screen after processing the input from the camera
head, which contains the image sensors that capture the incident light. Camera systems
have evolved greatly over the years: the first attempt at a camera was the camera
obscura, followed gradually by the photographic camera, and later by digital and video
cameras.

Further development in camera systems has been in terms of image quality. Image quality
is essentially decided by the camera or device used to capture the image, and an
important parameter in determining it is the image resolution. In general, a higher
resolution gives higher image quality; this holds for the capturing device rather than
for the image processing, so capture devices with higher resolution deliver higher
image quality. Other factors, such as display resolution and physical conditions during
capture, also contribute to the quality.

Most TVs around the 1990s were 720 x 480, which is about 350,000 pixels. TV resolution
has risen dramatically since: there was a gradual shift from analog to digital
technology, then the development of the HDTV format, and in the last few years 4K
Ultra HD TVs. Correspondingly, there was also a need for displays with higher
resolution; for the change in resolution to be noticeable, larger screen sizes were
required. Similar development is expected for 8K TVs with a pixel count of 33 million.
Hence, this report discusses two 8K resolution camera systems.

1.1. History

Camera Obscura

Prior to the photographic camera, the first attempt at a camera was the camera obscura.
The camera obscura (Latin for "dark chamber") is the optical phenomenon that occurs
when light from a scene passes through a pin-sized hole in a wall and falls on the
opposite wall, forming an image inverted both vertically and horizontally. The earliest
known record of this principle is a description by the Han Chinese philosopher Mozi
(ca. 470 to ca. 391 BC), who correctly stated that the camera obscura image is inverted
because light travels in straight lines from its source. The use of a lens in the
opening of a wall or closed window shutter of a darkened room to project images as a
drawing aid has been recorded since 1550.

Fig 1.1. A craftsman using an eighteenth-century camera obscura to trace a picture.
(Courtesy of https://www.wikipedia.org/)

Pinhole Camera

Camera obscuras were used for drawing from around 1550, and from the late seventeenth
century portable camera obscura devices in tents and boxes came into use. There was a
growing understanding of the path light follows through the hole onto the opposite
wall to give a focused image, as shown in Fig 1.2.

Photographic Camera

The first photographic camera developed for commercial manufacture was a daguerreotype
camera, built by Alphonse Giroux in 1839. The camera consisted of two boxes, with a
landscape lens on the outer box and a focusing screen and image plate on the inner
box.

Fig 1.2. Pinhole Camera Diagram.


(Courtesy of https://www.rimstar.org/science_electronics_projects/)

By sliding the inner box, objects at different distances could be brought to as sharp
a focus as desired. For a long time, exposure times were long enough that the
photographer simply removed the lens cap, counted the number of seconds (or minutes)
estimated to be needed by the lighting conditions, and then replaced the cap. As more
sensitive photographic materials appeared, cameras began to use mechanical shutter
systems that permitted very short and precisely timed exposures. George Eastman, in
1885, was the first to use a paper photographic film, which was followed by celluloid
in 1889.

Fig 1.3. Daguerreotype photographic camera.

(Courtesy of http://www.fotovoyage.com/the-daguerreotype-camera-1839/)

1.2. Digital Camera
Instead of the photographic film used in earlier cameras, digital cameras use digital
memory cards to store images. Digital camera image sensor technology grew out of
metal-oxide-semiconductor (MOS) technology, which began in 1959 with the invention of
the MOSFET (MOS field-effect transistor) at Bell Labs. This led to the development of
digital semiconductor image sensors, including the charge-coupled device (CCD) and
later the CMOS sensor.

Fig 1.4. Film and digital cameras.

(Courtesy of https://www.slideshare.net/)

Analog cameras use film, in which a chemical reaction takes place upon contact with
incident light. Digital cameras instead convert the incident light directly into
electronic signals, and became less expensive than their analog counterparts.

1.3. Video Camera


Video cameras are cameras used for capturing digital motion pictures, and initially
served as a tool for the broadcast television system. Their use has since become very
common in other applications as well. The main application is direct image transfer to
a screen, which enables real-time surveillance in security systems and live telecasts
on TV. The other important application is storing digital moving pictures for further
processing at a later stage.

With the advent of the CCD, video camera systems shifted from tube-based sensors to
CCD image sensors. Although these gave lighter, more compact systems with ease of
handling, the early CCD cameras did not deliver good resolution compared to the tube
cameras. It became possible to use slimmer cables between the camera head and the
camera control unit (CCU). The CCU can have partial control of the colour balance,
shutter speed, etc., while focus and framing are controlled by the camera operator. In
more complex systems, more control rests with the CCU, which aims at delivering
maximum quality. Another unit dedicated to delivering better quality is the video
processor (VP), which also attends to dynamic range, flare, etc.

Fig 1.5. Professional Video Camera

(Courtesy of https://www.bhphotovideo.com/)

CHAPTER 2

IMAGE SENSORS

Imaging electronics, in addition to imaging optics, play a significant part in the
performance of an imaging system. Proper integration of everything, including the
camera, capture board, software, and cabling, results in optimal system performance.
Before diving into any additional topics, it is important to understand the camera
sensor and the key concepts and terminology related to it.

Fig 2.1. Image sensor

(Courtesy of https://www.ephotozine.com/)

The heart of any camera is the sensor; modern sensors are solid-state electronic
devices containing up to millions of discrete photodetector sites called pixels.
Although there are numerous camera manufacturers, the majority of sensors are produced
by just a handful of companies. Even so, two cameras with the same sensor can have
completely different performance and properties due to the design of the interface
electronics. In the past, cameras used phototubes such as Vidicons and Plumbicons as
image sensors. Although they are no longer used, their mark on the nomenclature
related to sensor size and format remains even today. Today, practically all sensors
in machine vision fall into one of two categories: Charge-Coupled Device (CCD) and
Complementary Metal Oxide Semiconductor (CMOS) imagers.

2.1. Sensor Construction

2.1.1. Charge-Coupled Device (CCD)

The charge-coupled device (CCD) was invented in 1969 by researchers at Bell Labs in
New Jersey, USA. For decades, it was the dominant technology for capturing images,
from digital astrophotography to machine vision inspection. The CCD sensor is a
silicon chip that contains an array of photosensitive sites (Fig 2.2). The term
charge-coupled device actually refers to the method by which charge packets are moved
around on the chip from the photosites to the readout shift register, much like a
bucket brigade. Clock pulses create potential wells that move the charge packets
across the chip, before they are converted to a voltage by a capacitor. The CCD sensor
is itself an analog device, but in digital cameras the output is immediately converted
to a digital signal by an analog-to-digital converter (ADC), either on or off chip. In
analog cameras, the voltage from each site is read out in a particular sequence, with
synchronization pulses added at some point in the signal chain for reconstruction of
the image.
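The bucket-brigade readout described above can be illustrated with a short Python sketch. This is a toy model only: the charge values and the conversion gain are illustrative numbers, not taken from any real sensor.

```python
# Toy model of CCD readout: charge packets are shifted one site at a
# time toward the output node (the "bucket brigade"), then converted
# to a voltage at the output capacitor.

def ccd_readout(row, gain=0.5):
    """Shift every charge packet out of the register and convert
    each one to a voltage as it reaches the output node."""
    voltages = []
    register = list(row)
    while register:
        packet = register.pop(0)        # packet nearest the output leaves first
        voltages.append(packet * gain)  # charge-to-voltage conversion
    return voltages

print(ccd_readout([100, 220, 40]))  # -> [50.0, 110.0, 20.0]
```

Because every packet passes through the same output stage, each one sees the same conversion, which mirrors the uniformity property discussed below.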

The charge packets are limited in the speed at which they can be transferred, so the
charge transfer is responsible for the main CCD disadvantage, speed, but it also gives
the CCD its high sensitivity and uniformity. Since each charge packet sees the same
voltage conversion, the CCD is uniform across its photosensitive sites. The charge
transfer also leads to blooming, wherein charge from one photosensitive site spills
over to neighbouring sites because of a finite well depth or charge capacity, setting
a maximum limit on the useful dynamic range of the sensor. This phenomenon shows
itself as the spreading out of bright spots in images from CCD cameras.

Fig 2.2. Block diagram of a charge-coupled device

(Courtesy of https://www.edmundoptics.com/)

To compensate for the low well depth in the CCD, micro-lenses are used to improve the
fill factor, or photosensitive area, making up for the space on the chip taken up by
the charge-coupled shift registers. This improves the efficiency of the pixels, but
raises the angular sensitivity for incoming light rays, requiring that they hit the
sensor close to normal incidence for efficient collection.

2.1.2. Complementary Metal Oxide Semiconductor (CMOS)

The complementary metal oxide semiconductor (CMOS) process was invented in 1963 by
Frank Wanlass. However, he did not receive a patent for it until 1967, and it did not
become widely used for imaging applications until the 1990s. In a CMOS sensor, the
charge from each pixel is converted to a voltage at the site itself and then passed
through several analog-to-digital converters (ADCs) present on the chip.

Fig 2.3. Block diagram of a complementary metal oxide semiconductor

(Courtesy of https://www.edmundoptics.com/)

Intrinsic to its design, the CMOS sensor is a digital device. Each site is essentially
a photodiode and three transistors that perform the functions of resetting or
activating the pixel, amplifying and converting the charge, and selection or
multiplexing. This leads to the fast readout of CMOS sensors, but also to lower
sensitivity as well as high fixed-pattern noise arising from the many
charge-to-voltage converting circuits.

The multiplexing arrangement of a CMOS sensor is usually combined with an electronic
rolling shutter; although, with extra transistors at the pixel site, a global shutter
can be implemented, wherein all pixels are exposed simultaneously and then read out
sequentially. An additional benefit of a CMOS sensor is its low power consumption and
dissipation compared to a CCD sensor, owing to less movement of charge, or current.
Likewise, the CMOS sensor's ability to handle high light levels without blooming
allows its use in special high-dynamic-range cameras, even capable of imaging welding
seams and the like. CMOS cameras also tend to be more compact than equivalent digital
CCD cameras, as digital CCD cameras require extra off-chip ADC hardware.

Table 2.1. Comparison of CCD and CMOS sensors

Sensor              CCD                CMOS

Pixel Signal        Electron Packet    Voltage
Chip Signal         Analog             Digital
Fill Factor         High               Moderate
Responsivity        Moderate           Moderate – High
Noise Level         Low                Moderate – High
Dynamic Range       High               Moderate
Uniformity          High               Low
Resolution          Low – High         Low – High
Speed               Moderate – High    High
Power Consumption   Moderate – High    Low
Complexity          Low                Moderate
Cost                Moderate           Moderate

The multilayer MOS fabrication process of a CMOS sensor doesn't allow the use of
micro-lenses on the chip, thereby diminishing the collection efficiency or fill factor
of the sensor compared with a CCD counterpart. The lower efficiency and pixel-to-pixel
inconsistency contribute to a lower signal-to-noise ratio, which gives poorer image
quality.

2.2. Sensor Features

Pixels

When light from an image falls on a camera sensor, it is collected by a 2D array of
small potential wells called pixels, and the image is partitioned into these small
discrete pixels. The data from these photosites is collected, organized, and
transferred to a screen to be displayed. The pixels may be photodiodes or
photocapacitors, for example, which generate a charge proportional to the amount of
light incident on that discrete spot of the sensor, spatially confining and storing
it.

Fig 2.4 Illustration of camera sensor pixels with RGB colour and infrared blocking
features. (Courtesy of https://www.edmundoptics.com/)

In digital cameras, pixels are usually square. Pixel sizes generally range between 3
and 10 μm. Although sensors are often identified simply by their number of pixels,
pixel size matters to the imaging optics. Large pixels generally have high charge
saturation capacities and high signal-to-noise ratios (SNRs). With smaller pixels, it
becomes fairly easy to achieve high resolution for a fixed sensor size and
magnification, although issues such as blooming become more severe and pixel crosstalk
lowers the contrast at high spatial frequencies. A direct measure of sensor resolution
is the number of pixels per millimetre.
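As a worked example of that measure, pixel density follows directly from the pixel pitch; the 5 μm pitch below is an illustrative value within the typical 3-10 μm range.

```python
def pixels_per_mm(pixel_pitch_um):
    """Convert pixel pitch in micrometres to pixels per millimetre."""
    return 1000.0 / pixel_pitch_um

# A hypothetical 5 um pixel pitch gives 200 pixels per millimetre,
# so a 10 mm wide sensor at that pitch carries 2000 pixels per row.
print(pixels_per_mm(5.0))  # -> 200.0
```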

Sensor Sizes

The size of a camera sensor's active area is significant in deciding the system's
field of view (FOV). Given a fixed primary magnification (determined by the imaging
lens), larger sensors yield larger FOVs. There are several standard area-scan sensor
sizes: ¼", 1/3", ½", 1/1.8", 2/3", 1" and 1.2", with bigger sizes available (Fig 2.5).
The nomenclature of these dates back to the Vidicon vacuum tubes used for TV imagers,
so note that the actual sizes of the sensors vary. Note: there is no direct connection
between the sensor size designation and its dimensions; it is just a convention.
Generally, an aspect ratio of 4:3 (horizontal:vertical) is used.

Fig 2.5 Illustration of sensor size dimensions for standard camera sensors

(Courtesy of https://www.edmundoptics.com/)
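The relation between sensor size, magnification and FOV can be sketched as follows; the 8.8 mm width of a 2/3" sensor and the 0.5x magnification are illustrative values, not tied to any particular camera.

```python
def field_of_view_mm(sensor_dim_mm, magnification):
    """FOV along one axis = sensor dimension / primary magnification."""
    return sensor_dim_mm / magnification

# A 2/3" sensor is roughly 8.8 mm wide; at 0.5x primary magnification
# the horizontal field of view covers 17.6 mm of the object plane.
print(field_of_view_mm(8.8, 0.5))  # -> 17.6
```

The same formula applied to a larger sensor at the same magnification gives a proportionally larger FOV, which is the point made above.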

2.3. Spectral Properties

Monochrome Cameras

Generally, the range of CMOS and CCD sensors is given as 400-1000 nm, though they
respond to wavelengths from around 350 to 1050 nm. Some advanced cameras have an
infrared (IR) filter for visible-spectrum imaging, which can be removed when working
in the near-IR region. CMOS sensors are generally more sensitive in the IR region than
CCD sensors.

Colour Cameras

Sensors are based on the photoelectric effect, and the number of electrons ejected
provides information about the intensity of light, not its wavelength; hence,
different colours cannot be distinguished directly. Colour CCD cameras are classified
as single-chip and three-chip. A single-chip camera uses a filter such as the Bayer
filter (Fig 2.6), which separates the incident light into different colours; different
sets of pixels are then used for each colour. These cameras have lower resolution than
monochrome cameras because multiple pixels are used to identify one colour. Different
manufacturers employ different algorithms, and hence the effective resolution differs
accordingly.
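To make the resolution trade-off concrete, here is a minimal sketch (not any manufacturer's actual algorithm) that collapses one 2x2 RGGB Bayer tile into a single RGB pixel:

```python
def demosaic_2x2(tile):
    """tile is a 2x2 RGGB Bayer pattern: [[R, G], [G, B]].
    Collapsing each tile into one RGB pixel shows why a single-chip
    colour sensor resolves less detail than a monochrome sensor with
    the same pixel count: four photosites yield one colour sample."""
    (r, g1), (g2, b) = tile
    return (r, (g1 + g2) / 2, b)  # average the two green samples

print(demosaic_2x2([[200, 120], [100, 60]]))  # -> (200, 110.0, 60)
```

Practical demosaicing algorithms interpolate across neighbouring tiles instead of collapsing them, which recovers more resolution; this sketch only illustrates the underlying sampling cost.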

Fig 2.6 Single chip colour CCD camera sensor using Bayer filter

(Courtesy of https://www.edmundoptics.com/)

A solution to the resolution problem is offered by three-chip colour CCD cameras.
These use a prism (Fig 2.7) to split the incident light into its RGB (red, green,
blue) components. As this method doesn't use an algorithm or a set of pixels to
determine the colour, it offers higher resolution, because each pixel records separate
intensity values for the different RGB components.

Fig 2.7. Three-chip colour CCD camera sensor using prism

(Courtesy of https://www.edmundoptics.com/)

Sensors are the fundamental components of a camera system. For greater image quality
it is essential to choose the right sensor technology; it is therefore vital to
understand camera sensors and their features. This helps in designing and choosing
physical devices compatible with them, and ultimately in delivering the desired high
image quality.

CHAPTER 3

DISPLAY RESOLUTION

Display resolution refers to the number of pixels in each direction of a digital
display screen such as a TV or computer monitor. A resolution of 1920 x 1080 means
1920 pixels in the horizontal direction and 1080 in the vertical. For any display
screen, its resolution is the actual number of pixels in the two-dimensional matrix;
since these values are fixed, any incoming video needs to be scaled by video or image
processors in order to be viewed at the fixed display resolution.

The term display resolution can be misleading for physical displays such as monitors
or phones, because it gives the total number of pixels of the screen but no
information about their density. Resolution can also mean pixel density, the number of
pixels per unit distance, usually given in pixels per inch (PPI) for digital
measurement. Resolution in analog measurement is found by taking the height of the
screen: a square of the same dimensions is then used to find the horizontal
resolution, expressed in ‘lines per picture height.’
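The PPI figure can be computed from the diagonal pixel count divided by the diagonal size in inches; the 5.5-inch and 24-inch diagonals below are illustrative, chosen only to show how the same resolution yields very different densities.

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal pixel count over diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Two screens with the same 1920x1080 resolution but different sizes:
print(round(pixels_per_inch(1920, 1080, 5.5)))  # phone   -> 401
print(round(pixels_per_inch(1920, 1080, 24)))   # monitor -> 92
```

This is exactly why resolution alone says little about perceived sharpness: the density, not the pixel count, changes with screen size.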

The display resolution also indicates the range of acceptable resolutions of an
incoming image that can be shown on the screen; inputs higher than the display
resolution have to be scaled down accordingly. Other factors affect the human
perception of display resolution. The aspect ratio of each pixel and that of the
screen need not be the same, which affects perception because the same information is
represented in a smaller or larger area depending on the difference in aspect ratios.
An input with a higher resolution than that of the display gives a sharper image, and
vice versa. Some standard resolutions are broadly classified into Standard-Definition
Television (SDTV), Enhanced-Definition Television (EDTV), High-Definition Television
(HDTV) and Ultra-High-Definition Television (UHDTV).

Fig 3.1. Standard display resolutions offered by a computer monitor screen.

3.1. Standard-Definition Television (SDTV):


It is called standard-definition because it was the dominant resolution for television
in the latter half of the 20th century. It was mainly characterised by 576i, coming
from the PAL and SECAM systems, and 480i from the NTSC system.

SDTV is the basic version of digital television, with primitive picture quality. It is
an improvement over the analog systems, but newer technologies offer far better
resolutions; hence, although it serves as a reference standard for newer technology,
the SDTV system is now obsolete.

PAL

PAL stands for Phase Alternating Line; it is a colour-encoding system for analog
television in broadcast TV systems. The common broadcasting resolution in different
countries is 576i (where ‘i’ stands for interlaced); 576 refers to the number of lines
of vertical resolution of the incoming image. Some additional lines carrying no
information are usually added to give the display screen time to process the
interlaced video. In an interlaced video, two fields are used to form a single frame:
one field contains the even-numbered lines and the other the odd-numbered lines, so
deinterlacing requires some time, giving an input delay. The name "Phase Alternating
Line" describes the way the phase of part of the colour information on the video
signal is reversed with each line, which automatically corrects phase errors in the
transmission of the signal by cancelling them out, at the cost of vertical frame
colour resolution. These systems use the YUV colour space to encode colour.

Fig 3.2. PAL

(Courtesy of https://www.wikipedia.org/)
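The weaving of two fields into one frame can be sketched in a few lines; scan lines are represented here by plain strings purely for illustration.

```python
def weave(even_field, odd_field):
    """Interleave two fields (lists of scan lines) into one frame:
    the even field supplies lines 0, 2, 4, ... and the odd field
    supplies lines 1, 3, 5, ..."""
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)
        frame.append(odd_line)
    return frame

# Four-line frame rebuilt from two two-line fields:
print(weave(["E0", "E2"], ["O1", "O3"]))  # -> ['E0', 'O1', 'E2', 'O3']
```

Because the two fields are captured at slightly different times, real deinterlacers must also handle motion between fields; this sketch shows only the line-ordering step.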

SECAM

SECAM is another colour encoding system alongside PAL and NTSC. SECAM represents
colour information as a chrominance (C) signal, while intensity information is
provided by the luminance (Y) signal. The colour information from the chrominance
signal has to be inserted into the monochrome signal, which reduces the luminance
signal strength to some extent. Only one C component is transmitted at a time.

NTSC

NTSC, or National Television System Committee, is another analog television colour
standard. NTSC also uses interlaced video signals and a luminance-chrominance system.
The signal contains the Y (luminance) information, calculated from the individual
colour signals, and the C (chrominance) component, which carries only colour
information. Hence, each colour source is treated like a monochrome source; to display
on black & white screens, the colour information is simply discarded. Thus, NTSC is
backward compatible with black & white systems.
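The luminance signal described above is a weighted sum of the colour components; the sketch below uses the ITU-R BT.601 weights associated with standard-definition systems.

```python
def luma(r, g, b):
    """ITU-R BT.601 luma: the Y signal that a black & white
    receiver displays directly, with no chrominance needed."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Green contributes most to perceived brightness, blue the least:
print(round(luma(255, 255, 255)))  # white -> 255
print(round(luma(0, 255, 0)))      # green -> 150
print(round(luma(0, 0, 255)))      # blue  -> 29
```

Discarding everything except this Y value is precisely what a black & white set does, which is why the scheme is backward compatible.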

3.2. Enhanced-Definition Television (EDTV):


Enhanced-Definition Television (EDTV) is the next format, superior to SDTV. The common
resolutions are 480p and 576p (where ‘p’ refers to progressive scan). As opposed to
the separate odd and even fields of interlaced scanning mentioned earlier, progressive
scanning refers to the sequential display of the lines of every frame. It is possible
to broadcast several EDTV signals simultaneously due to their lower bandwidths
compared to HDTV signals.

Fig 3.3. Aspect ratios for different resolutions

(Courtesy of https://idearocketanimation.com/)

480p

This refers to a vertical resolution of 480 pixels and the horizontal resolution can be 640
pixels (aspect ratio of 4:3) or 720 pixels (aspect ratio of 3:2), etc.

576p

Similar to 480p, this refers to a vertical resolution of 576 pixels and the horizontal
resolution depends on the aspect ratio.

3.3. High-Definition Television (HDTV):


This system offers much higher resolution than its predecessors and is currently the
most common video format in television broadcasts. The common resolutions under this
format are 720p, 1080i and 1080p; a comparison of frame sizes and aspect ratios with
EDTV can be seen in Fig 3.3. Not all content available today is in HD format, but
HDTVs can be used to watch the HD content that is available.

Fig 3.4. Comparison of 8K, 4K, HDTV and SDTV resolutions

(Courtesy of https://www.wikipedia.org/)

720p

This refers to a 720-pixel vertical resolution with progressive scanning. The aspect ratio
generally used is 16:9 giving a resolution of 1280 x 720. (Fig 3.3)

1080i and 1080p

This is often referred to as the Full HD format to distinguish it from the 720p HD
format. The vertical resolution of 1080 pixels comes in both interlaced and
progressive scanning patterns. It uses a widescreen aspect ratio of 16:9, giving a
resolution of 1920 x 1080, i.e., a total of about 2.1 million pixels (2.1 megapixels).
It is also loosely called a 2K format because of its horizontal resolution.

3.4. Ultra-High-Definition Television (UHDTV):


Ultra-High-Definition Television (UHDTV), also called Super Hi-Vision (SHV), includes
the 4K UHD and 8K UHD formats, offering even higher resolution than the HDTV formats
that preceded it. Although 4K TVs are widely available, as of today there isn't much
content available in the 4K format. Even rarer is the 8K format, with fewer and more
expensive TVs available. However, as 4K TVs become more prominent, 8K TVs are expected
to undergo a similar development over the coming years.

4K UHD

4K TV is the latest mainstream high-quality TV resolution, giving sharp and clear
images on large screen sizes of around 40 inches and above. The resolution is 3840 x
2160 pixels, i.e., a total of about 8 megapixels. As its horizontal resolution is
almost 4000 pixels, it is called the 4K format. Although there is a lack of native 4K
content, such TVs can upscale regular content to the 4K resolution. Other variations
of the resolution are available depending on the aspect ratio.

8K UHD

The highest resolution available today is the 8K resolution. There are not many 8K TVs
around, and the available ones are very expensive as well. (Fig 3.5)

8K refers to the horizontal resolution of approximately 8000 pixels. The common
resolution, with the aspect ratio of 16:9, is 7680 x 4320 which makes a total of about 33
million pixels. This can vary according to the aspect ratio.

Whether the new 8K format will become the standard is still a matter of speculation.
Although it is expected to become more popular in future, it will still account for a small
percentage of the total UHDTVs in the market. An increase in display resolution is
worthwhile only when paired with transmission circuitry capable of the correspondingly
high data rates, so that truly high-quality images can be delivered. Although 8K
resolution still has many constraints to overcome, its development is being encouraged,
in part because 8K content can be downscaled to give sharper images on 4K displays.
This offers a great advantage to filmmakers.

Fig 3.5. 98-inch 8K TV from Samsung

(Courtesy of https://www.samsung.com/us/)

From Full HD (1080p) TVs offering a 2-megapixel image, through 4K offering an
8-megapixel image, to 8K offering a 33-megapixel one, each new resolution offers a
sharper image with greater perceptible depth. 8K TVs have been around for some years,
though they are not yet popular. 8K content also exists but is rare, and its volume is
expected to grow. Hence, it is reasonable to expect a surge in the availability and
popularity of both 8K content and display devices.
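The pixel counts quoted above follow directly from the frame dimensions; a quick check (the dimensions are the standard 16:9 values for each format):

```python
# Total pixel count for each 16:9 format discussed above.
formats = {
    "Full HD (1080p)": (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

for name, (w, h) in formats.items():
    print(f"{name}: {w * h / 1e6:.1f} megapixels")
# Full HD ~2.1 MP, 4K ~8.3 MP, 8K ~33.2 MP
```

Note that each step quadruples the pixel count: 8K carries sixteen times the pixels of Full HD.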

CHAPTER 4

VIDEO ENCODING AND FORMATS

Video encoding is the process of compressing a video or converting it from one format
to another, including conversion between digital and analog formats. Compression aims
at reducing the size of the video by discarding some of the information in the signal.
When the video is decompressed, the regenerated video is almost, but not entirely, the
same as the original. The more the video is compressed, the more data is discarded and
the further the regenerated video deviates from the original.

Fig 4.1. Video codec

(Courtesy of https://www.muvi.com/)

Encoding is a crucial process because it makes transmission easier. The reduction in
bandwidth aids transmission while not compromising severely on quality. Internet
speeds are normally not adequate to stream raw video directly; hence, video compression
comes into the picture. Another important parameter is the bit rate, i.e., the information
transferred per unit time, which decides whether or not the video can be streamed
without interruption.
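The need for compression follows from simple arithmetic; a back-of-the-envelope sketch (the 8 Mbps encoded rate is an assumed, typical streaming figure, not one given in this report):

```python
# Uncompressed Full HD bit rate vs a typical encoded streaming rate.
width, height = 1920, 1080   # Full HD frame
bits_per_pixel = 24          # 8 bits each for R, G and B
fps = 30                     # frames per second

raw_bps = width * height * bits_per_pixel * fps
encoded_bps = 8e6            # assumed typical H.264 streaming bit rate

print(f"raw: {raw_bps / 1e9:.2f} Gbps")                       # ~1.49 Gbps
print(f"required compression: {raw_bps / encoded_bps:.0f}:1")
```

Even Full HD needs roughly 200:1 compression for ordinary internet connections, which is why codec efficiency matters so much for 8K.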

The most important reason for video encoding is compatibility with the playback system.
Sometimes, even when the content is compressed to a desired size, it is still re-encoded
to ensure compatibility, which essentially means that it can be handled by programs
requiring specific encoding specifications. This process is mainly governed by video
codecs, or compression standards.

4.1. Video Codec

Fig 4.2. Video and audio codec

(Courtesy of https://www.freemake.com/)

A video codec compresses and decompresses digital video, and can be implemented in
software or hardware. The compressed video is usually in a format that is a standard for
video compression. This is in general lossy compression, as some of the information
from the original video is discarded; hence, the decompressed video has a lower quality
than the original. Every codec consists of an encoder to compress and a decoder to
decompress, and the name ‘codec’ combines ‘co’ from coder and ‘dec’ from decoder.

Some video codecs are H.264, VP8, RV40, etc. These are used for the video signal, in
conjunction with an audio signal that has a separate codec (Fig 4.2). Some audio codecs
are MP3, FLAC, etc. These codecs should not be confused with video formats.

4.2. Types of Video Codec


All the codecs are classified into:

(a) Lossless codecs: These codecs (e.g., Lagarith, Huffyuv, and H.264 in its lossless
mode) reproduce a video exactly, without compromising quality. Encoding with a
lossless codec generally yields excellent quality; however, the files take a great
deal of hard drive space.

Fig 4.3. Lossless vs Lossy compression


(Courtesy of https://www.freemake.com/)
(b) Lossy codecs: Even though lossy codecs (Xvid, DivX, VP3, MPEG-4) lose some
amount of video data, recordings with such codecs consume less space than
lossless ones. Lossy codecs can be transform-based, predictive, or a blend of the
two. The first kind divides the file into blocks and quantizes them into a more
compact representation. The second kind discards redundant information and
thereby also saves space.
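The quantization step mentioned above is the heart of lossy coding and can be sketched in a few lines (a toy model, not a real codec): a coarser quantization step leaves fewer distinct values to encode, at the cost of a larger reconstruction error.

```python
import numpy as np

def quantize(samples, step):
    """Snap samples to multiples of `step`; the discarded precision is the loss."""
    return np.round(samples / step) * step

rng = np.random.default_rng(0)
signal = rng.random(1000)          # stand-in for transformed pixel data in [0, 1)

for step in (0.01, 0.1, 0.5):
    rec = quantize(signal, step)
    levels = len(np.unique(rec))   # fewer distinct values -> more compressible
    err = np.abs(signal - rec).max()
    print(f"step={step}: {levels} distinct values, max error {err:.3f}")
```

Real codecs apply this idea to transform coefficients rather than raw pixels, which is why the visual damage at a given compression ratio is far smaller than this toy suggests.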

MPEG Codecs

These codecs follow the MPEG (Moving Picture Experts Group) specifications. The
MPEG-1 codec gives good-quality video and MP3 sound that can be played on all
modern music devices. MPEG-2 is the main standard for DVD and Blu-ray. While the
MPEG-1 codec allows only progressive scanning, MPEG-2 additionally supports
interlacing. MPEG-4 handles both progressive and interlaced video, and it provides
better compression methods and smaller video sizes than MPEG-2.

H.264

One more well-known codec worth mentioning is H.264, the most popular choice for
HD video. It can use both lossy and lossless compression depending on the settings
chosen (frame rate, frame size, and file size). H.264 is up to 2 times more efficient than
basic MPEG-4 compression, which leads to smaller file sizes and seamless playback on
more devices. Hence, H.264 is the most popular standard.

XVID/ DIVX

DivX is used commercially, while XviD is open source. The DivX codec can pack long
video sequences into small sizes while maintaining relatively high visual quality. The
vast majority of DivX recordings use the AVI container and the DivX format.

HEVC

HEVC, or H.265, is a development of H.264 with double the compression efficiency. It
is used for 4K video and Ultra HD Blu-ray. Although HEVC is becoming well known,
it is not yet supported by a variety of software.

Table 4.1. Types of Video Codecs.

Codec     Formats                        Compression method
H.264     MP4, MKV, 3GP, FLV             Lossless / Lossy
HEVC      MKV                            Lossy
XviD      AVI, MKV                       Lossy
DivX      AVI                            Lossy
MPEG-1    Video CD, MPG                  Lossy
MPEG-2    DVD (VOB), Blu-ray (TS), MPG   Lossy
MPEG-4    MP4, AVI, MKV                  Lossy

4.3. Video Formats


After encoding and decoding by the video codec, the files are stored in video containers,
also called video formats.

Some of the video formats are as follows:

MP4

MPEG-4 Part 14, or MP4, is one of the oldest digital video formats, introduced in 2001.
Most digital systems and gadgets are compatible with MP4. An MP4 container can store
audio, video, images, and text. Moreover, MP4 gives high-quality video while keeping
file sizes relatively small.

MOV

MOV is a well-known video format from Apple, designed to be compatible with the
QuickTime player. A MOV file can contain video, audio, captions, timecodes and other
media types. It works across various versions of QuickTime Player, both for Mac and
Windows. Since it is a very high-quality video format, MOV files take considerably more
storage space on a PC.

FLV

FLV is used with Adobe Flash Player. It is one of the most well-known and flexible
video formats, supported by most video platforms and browsers. The FLV format is a
good choice for online video streaming sites like YouTube, as the files are relatively
small and therefore easy to download. The main disadvantage is that it is not compatible
with many mobile devices, such as iPhones.

AVI

The AVI file format was introduced in 1992 by Microsoft and is still widely used today.
AVI uses less compression than other video formats such as MPEG or MOV, which
results in very large file sizes, roughly 2-3 GB per minute of video. This can be a problem
for users with limited disc space. AVI video files can also be created with no
compression at all, making them lossless: such a file keeps its quality over time,
regardless of how often it is opened or saved, and it removes the need for codecs in
video players.

MKV

This format has audio, video and subtitles embedded together. MKV is flexible and
simple to use, and supports almost any video and audio codec.

CHAPTER 5

SHV CAMERA SYSTEM

5.1. Four-Sensor Imaging Method


A practical camera is urgently required for evaluating the SHV systems now under
development and for making demonstration programmes. To satisfy these needs, a four-
sensor imaging method for the prototype camera systems has been developed. In order
to obtain high-resolution, high-quality pictures with sensors having relatively few pixels,
this technique uses two image sensors for detecting green light, one sensor for red light,
and one for blue. Some of the features of four-sensor imaging are as follows:

5.1.1. Optical Block

The structure of the colour separation prism is shown in Fig 5.1. The incoming light is
split into four components by colour: red, blue and two green components. The red and
blue light are split with the same characteristics as in a 3-sensor imaging system with
red, green and blue (RGB) colours, and colour regeneration also behaves the same as in
those basic systems.

The major difference from 3-sensor imaging is the presence of a half-mirrored beam
splitter that further splits the green light into two identical components. A pixel offset in
the spatial domain is then applied to the green light in order to increase the resolution.

It is possible to keep the system compact, as the prism with a beam splitter can be made
of a similar size to the 3-sensor prism. The camera system is also not much bigger than
the earlier 3-sensor camera, as the extra hardware (sensor drive, etc.) does not contribute
much to the overall camera circuit size.

Fig 5.1. Colour separation prism structure for four-sensor imaging method [1]

5.1.2. Enhancement of Image Resolution

The spatial offset between the two green sensors is diagonal, while the red and blue
sensors are shifted in the horizontal and vertical directions, respectively. The resulting
arrangement of pixels resembles a Bayer filter (Fig 2.6). Because the offset is diagonal,
the Nyquist frequency of the green channel becomes twice that of a single image sensor
in both directions, so the resolution of the green signal effectively doubles. The green
signal contributes more to luminance than the other colours, and the luminance signal
carries most of the perceived resolution; hence, the resolution of the colour image is
effectively increased.
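The resolution doubling can be illustrated with a toy 1D calculation (a simplified sketch that collapses the diagonal offset onto a single axis; not the actual camera signal processing): interleaving two sample grids offset by half a pixel pitch doubles the sampling rate, and hence the Nyquist frequency.

```python
import numpy as np

pitch = 1.0                    # pixel pitch of a single green sensor
n = 8                          # pixels along one axis (toy sensor)

g1 = np.arange(n) * pitch      # sample positions of sensor G1
g2 = g1 + pitch / 2.0          # sensor G2, offset by half a pitch

combined = np.sort(np.concatenate([g1, g2]))
nyquist_single = 1.0 / (2.0 * pitch)                    # cycles per unit length
nyquist_combined = 1.0 / (2.0 * (combined[1] - combined[0]))

print(nyquist_combined / nyquist_single)   # -> 2.0: Nyquist frequency doubles
```

In the real camera the offset is diagonal, so the density doubling occurs in a quincunx pattern over the 2D plane rather than along a single row.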

5.1.3. Additional Features

This method of imaging improves the image resolution while using sensors with a small
number of pixels, and it has other advantages as well. When the optical format is kept
constant and the resolution is increased, the number of pixels has to increase, which in
turn shrinks the pixel size. A smaller pixel area degrades image quality, because the
saturation level drops, possibly along with the signal-to-noise ratio. Four-sensor imaging
avoids these issues, as its sensors have fewer pixels but a larger pixel area.

Fig 5.2 Spatial pattern of pixels [1]

The advantages of four-sensor imaging are as follows:

1. High sensitivity, as the aperture ratio can be increased.

2. A high saturation level of the sensor ensures a high dynamic range.

3. A larger pixel size gives better output, which reduces the effective cost.

Since a single green sensor saturates easily, splitting the green light between two sensors
is an improvement in this respect.

This method also has some disadvantages. An additional sensor leads to more power
consumption and increases the size of the camera. However, these do not affect the
system as badly as might be expected: despite the extra sensor, the lower number of
pixels in each one keeps the overall power consumption small. We can conclude that the
increase in size and power consumption can, in fact, be neglected.

CHAPTER 6

PRACTICAL SHV CAMERA SYSTEM

6.1. System Structure


A practical SHV camera is designed by applying the four-sensor imaging method. This
system uses 8-megapixel CMOS image sensors. The parts of this camera system include
a camera head, a camera control unit (CCU) and a video processor (VP). (Fig 6.2)

Fig 6.1 Practical SHV camera [1]

An optical format of 1.25-inch renders the camera head compact (Fig 6.1). The camera
has zoom lenses along with the fixed focal length lens. The focus can be adjusted from
the CCU, which makes the camera convenient to use with a high-resolution display
screen.

First, the fixed-pattern noise (FPN) is corrected in the camera head. This is followed by
adjustment of various image parameters for better quality. A pre-knee curve is then
applied while converting from 12 bits to 10 bits; its function is to deliver a higher
dynamic range and detailed output in highlight regions. The signals are converted into
16 1.5G serial digital interfaces (SDIs). (Fig 6.2)

Fig 6.2 Block Diagram of SHV Camera System. [1]

This data is then passed through three 10-Gbps interfaces, whose outputs are multiplexed
with wavelength division multiplexing (WDM). The resulting signal can be transferred
over an optical fibre to the CCU and VP, where it is converted back into 16 1.5G SDI
signals. In the next stage, the signal goes through image processing, after which a
33-megapixel (7680 x 4320 pixels) image is generated. These Super Hi-Vision (SHV)
signals can also be displayed on 4K display screens; for this, filter processing is applied.
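The downconversion for 4K display can be sketched as a simple filter-and-decimate step (an illustrative 2x2 block average, not the broadcast-grade filter used in the actual system):

```python
import numpy as np

def downscale_2x(frame):
    """Average each 2x2 pixel block: 7680x4320 input -> 3840x2160 output."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# A tiny stand-in for an 8K frame (a real frame is 4320 x 7680 x 3).
frame_8k = np.random.rand(8, 16, 3)
frame_4k = downscale_2x(frame_8k)
print(frame_4k.shape)   # -> (4, 8, 3): half the size in each dimension
```

Because each output pixel averages four captured samples, the downscaled image also gains a small noise-reduction benefit, which is part of why 8K-originated 4K footage looks sharper.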

6.2. Specific Features

Optical Format
Before beginning the design process, it is necessary to choose the optical format to be
used in the camera. A 2/3-in lens gives a 50% response for HDTV at its Nyquist
frequency; a similar response for SHV at its Nyquist frequency is possible with a 1.25-in
optical format. For better resolution, a larger optical format is generally used, but this
increases the camera head size. Hence, for a practical camera, the 1.25-in format is
suitable.

1.25-in 8-megapixel image sensor


The sensor contains 4112 x 2168 pixels, and the camera delivers an effective resolution
of 7680 x 4320 pixels. The number of pixels in each sensor is four times that of the 1080p
format; thus, circuitry used in HDTV systems can be made compatible with this format.
CMOS sensors are employed because of their comparatively lower power consumption
and smaller size than CCD sensors.

Resolution characteristics
After consideration of the various optical characteristics, an edge resolution of
approximately 4320 TV lines horizontally is expected, assuming a spatial offset of
exactly half a pixel pitch between the green sensors. If there is an error in the offset
position, the resolution deteriorates: with an assumed error of 0.1 pixel pitch, the edge
resolution decreases to 3400 TV lines. However, this is still greater than the edge
resolution of 2160 TV lines of a single-CMOS-sensor camera.

Fig 6.3 Checking for chromatic aberration [1]

Fig 6.4 Before and after chromatic aberration correction [1]

CHAPTER 7

SHV CAMERA SYSTEM WITH THREE 33-MEGAPIXEL CMOS IMAGE SENSORS

7.1. Camera Design


A block diagram of this system is shown in Fig 7.1. Because of its higher bandwidth,
this camera is expected to consume more power and to require more complex, physically
larger processing equipment.

The camera is split into the camera head and the CCU, which makes the system compact
while reducing power consumption. The camera head consists simply of the sensors,
their drivers and an interface for transmitting the signal to the CCU. In the camera head,
the sensor drive frequency is changed to a frequency suitable for transmission over a
standard optical fibre.

The high-resolution signal is split into its colour components, each carried as 24 HD
signals; thus, 72 HD signals in all are sent through the interface and cable. Upon
receiving the signal, the CCU converts it back into 72 HD signals. General processing
of the image parameters is then carried out by the signal processor.

There are two ways to achieve chromatic aberration correction: physically or by signal
processing. The physical method requires a correction lens, which contributes to an
increase in size. Therefore, signal processing is used in this system; along with the size
benefit, this also improves the efficiency of the correction.

Fig 7.1. Block diagram of camera system [1]

The main design considerations are the image sensors and their pixel size, the interface
for signal transmission, and the chromatic aberration correction.

7.2. 33-megapixel CMOS Image Sensor
To achieve the desired results, a CMOS-based pixel structure with three transistors is
used. The target camera SNR is 45 dB, a value derived from the 8.3-megapixel CMOS
image sensors, and the desired response is 20% or higher at the Nyquist frequency. After
various considerations with respect to full-well capacity, dark random noise, etc., it was
found that all of the constraints were met for a pixel size of 3.8 µm.

Fig 7.2. 33-megapixel image sensor [1]

Other features include an analog to digital converter for converting analog signals from
the sensors in order to make them compatible for the later stages of processing within the
device. The equipment includes multiple ports in an attempt to increase the data rate.

Fig 7.3 Camera head appearance [1]

7.3. High-Bandwidth Signal Transmission Interfaces
The interface that carries signals between the camera head and the CCU is compatible
with an HDTV camera cable. This cable consists of two single-mode fibres; the return
signal from the CCU takes up one of them, so the other is used for transmitting the
camera signal after WDM.

The transmission distance is expected to be similar to that of HDTV cameras. To achieve
the desired performance, a video interface was developed that can link a number of
10G SDI signals.

The 10G SDI conversion unit used here converts eight HD SDI signals' worth of image
information into one 10G SDI optical signal. The optical transmission hardware employs
nine of these units to convert the 72 HD SDI signals' worth of SHV signals output from
the three head boards into nine 10G SDI optical signals. Furthermore, to reduce the
power consumption and to enable these optical signals to be transmitted over the earlier
HDTV camera cables, dense wavelength division multiplexing (DWDM) is used to
multiplex the nine signals and pass them to the following stage, the CCU. Here, signal
processing is applied for image parameter adjustment and chromatic aberration
correction.
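The signal-count arithmetic above can be verified in a few lines (the numbers are taken directly from the description; line rates are deliberately left out):

```python
# Interface arithmetic for the three-sensor 33-megapixel camera.
colours = 3                  # R, G and B head boards
hd_signals_per_colour = 24   # each colour is carried as 24 HD signals
hd_per_10g_unit = 8          # one 10G SDI unit packs 8 HD SDI signals

total_hd = colours * hd_signals_per_colour
units_10g = total_hd // hd_per_10g_unit

print(total_hd)    # -> 72 HD SDI signals in total
print(units_10g)   # -> 9 optical signals, multiplexed by DWDM onto one fibre
```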

CHAPTER 8

ADVANTAGES AND DISADVANTAGES

8.1. Advantages
With 8K technology, the distinction between adjacent pixels becomes imperceptible to
the human eye. This effect sets in at different viewing distances for different screen sizes:
a greater screen size makes it possible to achieve sharp images with greater depth from
a farther distance.
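The advantage can be made concrete with a rough calculation (a sketch based on the common rule of thumb that the eye resolves about one arcminute; the screen size and the rule itself are illustrative assumptions, not figures from this report):

```python
import math

def pixel_blend_distance_m(diagonal_in, h_pixels, aspect=(16, 9),
                           eye_arcmin=1.0):
    """Viewing distance beyond which adjacent pixels merge, assuming the
    eye resolves about `eye_arcmin` arcminutes (a common rule of thumb)."""
    aw, ah = aspect
    width_m = diagonal_in * aw / math.hypot(aw, ah) * 0.0254
    pitch_m = width_m / h_pixels                 # pixel pitch in metres
    theta = math.radians(eye_arcmin / 60.0)      # eye resolution in radians
    return pitch_m / theta

# Same 65-inch screen: Full HD vs 8K
d_hd = pixel_blend_distance_m(65, 1920)
d_8k = pixel_blend_distance_m(65, 7680)
print(f"HD: {d_hd:.2f} m, 8K: {d_8k:.2f} m")   # 8K allows ~4x closer viewing
```

Since the pitch shrinks by a factor of four, an 8K viewer can sit four times closer to a screen of the same size, or use a much larger screen at the same distance, before pixels become visible.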

This system offers another application in filmmaking. Filmmakers can shoot at the
higher resolution with a widescreen aspect ratio and then downscale the footage in the
editing stage. This, too, yields a perceptibly sharper image.

8.2. Disadvantages
Four-sensor imaging has certain disadvantages, as discussed in the four-sensor imaging
section. The major issue with 8K systems, however, lies in compression: the codecs
employed work well at high bit rates, but the image quality deteriorates at lower bit rates.
The systems also use a higher bandwidth, because of the amount of data that must be
transmitted to achieve the higher resolution.

CHAPTER 9

SUMMARY

The camera obscura was the first attempt at a camera, followed by photographic, analog
and digital cameras. The basic units of a camera are its sensors. Sensors are classified
into monochrome and colour sensors, and colour sensors in turn into single-chip and
three-chip types. Single-chip sensors use filters such as the Bayer filter, together with
demosaicing algorithms, for colour information acquisition, while three-chip sensors
employ a colour separation prism. There are two types of sensors: CCD and CMOS
image sensors, of which CMOS is the newer and more compact development. Various
display resolutions have evolved over the years: the basic reference standard is SDTV,
and subsequent developments resulted in EDTV, HDTV and the latest UHDTV. A proper
codec is required for the SHV signals generated, and a variety of codecs and video
formats are available.

A four-sensor imaging method for achieving an 8K-resolution camera system was
discussed, along with a practical system applying the method and its design
considerations. It utilizes four 8-megapixel sensors in place of 33-megapixel ones, giving
a comparatively larger pixel size and the associated benefits. The four sensors are used
for different colour components, with two for green and one each for blue and red. This
improves the resolution of the green signal, which dominates the luminance signal, thus
improving the overall resolution considerably. The practical camera can be employed
for recording various events and generating SHV content.

A prototype camera system using 33-megapixel sensors was also discussed. It met
various constraints to achieve the desired results, attaining a response of greater than
20% with the desired SNR. Despite the constraints, the resolution achieved was far
higher than that of HDTV. The performance targets themselves were set by the results
obtained with the four-sensor imaging method. Further research is being conducted for
higher dynamic range and performance improvements.

REFERENCES

1. T. Yamashita and K. Mitani, "8K Extremely-High-Resolution Camera Systems,"
   in Proceedings of the IEEE, vol. 101, no. 1, pp. 74-88, Jan. 2013, doi:
   10.1109/JPROC.2012.2217371.

2. T. Watabe et al., "A 33Mpixel 120fps CMOS image sensor using 12b column-
   parallel pipelined cyclic ADCs," 2012 IEEE International Solid-State Circuits
   Conference, San Francisco, CA, 2012, pp. 388-390, doi:
   10.1109/ISSCC.2012.6177047.

3. S. Sakaida et al., "The Super Hi-Vision Codec," 2007 IEEE International
   Conference on Image Processing, San Antonio, TX, 2007, pp. I-21 - I-24, doi:
   10.1109/ICIP.2007.4378881.

4. I. Takayanagi et al., "A 1.25-inch 60-frames/s 8.3-M-pixel digital-output CMOS
   image sensor," in IEEE Journal of Solid-State Circuits, vol. 40, no. 11, pp. 2305-
   2314, Nov. 2005, doi: 10.1109/JSSC.2005.857375.

5. R. Lukac, "Single-Sensor Digital Color Imaging Fundamentals," in Single-Sensor
   Imaging, Taylor & Francis Group, Boca Raton, FL, USA, 2009.

6. Websites:
   a. https://www.wikipedia.org/
   b. https://www.edmundoptics.com/
   c. https://www.freemake.com/
   d. https://medium.com/
   e. https://www.arrow.com/en/research-and-events/articles/
ACRONYMS

TV – Television

HDTV – High-Definition Television

CCD – Charge-Coupled Device

CMOS – Complementary Metal Oxide Semiconductor

CCU – Camera Control Unit

VP – Video Processor

SNR – Signal-to-Noise Ratio

PPI – Pixels per inch

SDTV – Standard-Definition Television

EDTV – Enhanced-Definition Television

UHDTV – Ultra-High Definition Television

PAL – Phase Alternating Line

SHV – Super Hi-Vision
