
Unit III: Television:

Basics of Television: Elements of TV communication system,


Scanning and its need, Need of synchronizing and blanking pulses,
VSB, Composite Video Signal, Colour Television: Primary,
secondary colours, Concept of Mixing, Colour Triangle, Camera
tube, PAL TV Receiver, NTSC, PAL, SECAM
Introduction:
 The word television means viewing from a distance.
 Television is the transmission of picture information over an
electric communication channel.
 It is desired that the picture displayed by the television receiver be a faithful reproduction of the scene televised from the TV studio.
 Any scene being televised has a wide range of specific features
and qualities.
 It may contain a great number of colours, gradations of shade, and coarse and fine details; motion may be present in a variety of forms, and the objects making up the scene are usually three-dimensional.
 ELEMENTS OF A TELEVISION SYSTEM :
 Television is an extension of the science of radio
communications, embodying all of its fundamental principles
and possessing all of its complexities and making use of most
of the known techniques of electronic circuitry.
Sound transmitter and sound receiver:
Fig. : Simplified block diagram of a (a) sound transmitter and (b) a sound receiver

 In the case of the transmission and reproduction of sound, the


fundamental problem is to convert time variations of acoustical
energy into electrical information, translate this into radio
frequency (RF) energy in the form of electromagnetic waves
radiated into space.
 At the receiving point, part of the resultant electromagnetic energy existing at that point is reconverted into acoustical energy.
Picture transmitter and picture receiver:
 In the case of television, there is the parallel problem of
converting space and time variations of luminosity into
electrical information, transmitting and receiving this, as in the
case of sound, and reconverting the electrical information
obtained at a receiving location into an optical image.

Fig: Simplified block diagram of (a) picture transmitter and (b) picture receiver

 When the information to be reproduced is optical in character,


the problem fundamentally is much more complex than it is in
the case of aural information.
 At each instant of time only a single piece of information needs to be handled, since any electrical waveform representing any type of sound is a single-valued function of time, regardless of the complexity of the waveform.
 In the corresponding optical case, at any instant there is an infinite number of pieces of information existing simultaneously, i.e. the brightness which exists at each point of the scene to be reproduced.
 The information is a function of two variables, time and space.
 Because of the practical difficulties of transmitting all this information simultaneously and decoding it at the receiving end, the information is expressed in the form of a single-valued function of time (by the process known as scanning).
The elements of a simple broadcast television system are:
 An image source. This is the electrical signal that represents a visual image, and may be derived from a professional video camera in the case of live television, or from a video tape recorder for playback of recorded images.
 A sound source. This is an electrical signal from a microphone or
from the audio output of a video tape recorder.
 A transmitter, which generates radio signals (radio waves) and
encodes them with picture and sound information.
 A television antenna coupled to the output of
the transmitter for broadcasting the encoded signals.
 A television antenna to receive the broadcast signals.
 A receiver (also called a tuner), which decodes the picture and
sound information from the broadcast signals, and whose input is
coupled to the antenna of the television set.
 A display device, which turns the electrical signals into visual
images.
 An audio amplifier and loudspeaker, which turns electrical
signals into sound waves (speech, music, and other sounds) to
accompany the images.
Television as a system:
 Television operates as a system comprising the station and the numerous receivers within range of its signals.
 The station produces two kinds of signals, video originating
from the camera, and sound from a microphone or other source.
 Each of the two signals is generated, processed and transmitted separately: video as an amplitude-modulated signal, and sound as a frequency-modulated signal.
 Each station is assigned a particular channel to reduce the possibility of inter-channel interference. A station may operate in either of two bands: VHF (very high frequency) channels 2 through 13, extending from 54 to 216 MHz, or UHF (ultra high frequency) channels 14 through 83, extending from 470 to 890 MHz.

Fig. Television as a system (a) transmitter and (b) receiver


SCANNING PROCESS :
 Scanning is defined as the process which permits the
conversion of information expressed in space and time
coordinates into time variations only.
 An optical image of the scene, formed on a photosensitive surface, is scanned by a beam of electrons, i.e. all points on the image are sequentially contacted by this beam.
 As a result of this scanning, through capacitive, resistive, or photoemissive effects at the surface, an electrical signal is obtained that is directly proportional in amplitude to the brightness at the particular point being scanned by the beam.
 Although the picture content of the scene may be changing
with time, if the scanning beam moves at such a rate that any
portion of the scene content does not have time to move
perceptibly in the time required for one complete scan of the
image, the resultant electrical information contains the true
information existing in the picture during the time of the scan.
SCANNING METHODS AND ASPECT RATIO :
ASPECT RATIO:
 In the scanning process the amount of detail actually converted
to useful information depends on the total percentage of picture
area actually contacted by the electron beam.
 The scanning process is such that the picture area is traversed
at repeating intervals, giving in effect a series of single pictures
much in the same manner that motion pictures are presented.
 The repetition rate of these successive pictures (referred to as the frame rate) determines the apparent continuity of a moving scene.
 When no picture signal is present, the beam traces a blank rectangle of light called the raster.
 The width-to-height ratio of the TV screen is called the ASPECT RATIO: Aspect ratio = W/H = 4/3.
SCANNING METHODS:
1) LINEAR SCANNING :
 In linear scanning, the image is scanned from left to right, starting at the top and tracing successive lines until the bottom of the picture is reached; the beam is then returned to the top and the process is repeated.
 The direction of the arrows on the heavy lines indicates the
forward (trace) or useful scanning time. The dashed lines
indicate the retrace time, which is not utilised in converting
picture information into useful electrical information.

Fig. 30.5 Linear Scanning (a) Top to bottom scanning path of the beam
(b) Vertical retrace path of the beam
(c) Deflecting field waveforms
a) Horizontal scanning:
 The linear rise of current in the horizontal deflection coils
deflects the beam across the screen with a continuous, uniform
motion for the trace from left to right.
 At the peak of the rise, the sawtooth wave reverses direction
and decreases rapidly to its initial value. This fast reversal
produces the retrace or flyback.
 The horizontal scanning frequency is 15,625 Hz in the 625 line
system.
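The 15,625 Hz figure follows directly from the line and frame numbers of the 625-line system; a minimal arithmetic check in Python (all values taken from the text above):

```python
# 625-line system: line (horizontal) frequency and line period
lines_per_frame = 625
frames_per_second = 25                                  # frame rate of the 625-line system

line_frequency = lines_per_frame * frames_per_second   # Hz
line_period_us = 1e6 / line_frequency                   # microseconds

print(line_frequency)   # 15625 Hz
print(line_period_us)   # 64.0 us per line
```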
b) Vertical scanning:
 While the electron beam is being deflected horizontally, the sawtooth current supplied to the vertical deflection coils moves it slowly from the top to the bottom of the raster.
 As a result, while travelling from top to bottom, the beam traces successive lines one below the other.
 The beam is deflected to the bottom of the raster by the trace component of the vertical sawtooth wave; after that, it is quickly retraced vertically back to the top.
 The horizontal scanning is ongoing during vertical retrace, and
numerous lines are scanned during this time.
 The vertical scanning frequency is 25 Hz; that is, 25 frames are scanned every second.

FLICKER:
 The television picture's repetition rate of 25 frames per second is not rapid enough for the brightness of one picture or frame to blend smoothly into the next. Because the screen is blanked between successive frames, the eye notices the variation in brightness; this effect is called flicker.
 As the screen alternates between bright and dark, the effect is a distinct
flicker of light.
Interlaced scanning:
 In television pictures an effective rate of 50 vertical scans per
second is utilized to reduce flicker. This is accomplished by
increasing the downward rate of travel of the scanning electron
beam, so that every alternate line gets scanned instead of every
successive line.
 If during one vertical scanning field alternate scanning (odd)
lines are formed and during the second scanning field the
remaining (even) lines are formed, then at the end of one frame period all the lines have been traced. This process is called interlaced scanning.
 In television, the odd-line field and the even-line field make
one frame.

Fig. One complete frame in interlaced scanning
(a) Odd line field in interlaced scanning
(b) Even line field in interlaced scanning
(c) Odd-line field in (a) and even line field in (b) interlaced to form a complete frame

 The spot traces all the odd lines of the picture. When the spot
reaches the bottom of the picture, it has traced all the odd lines
i.e. a total of 312.5 lines in the 625 line system. This is called
the odd-line field. Fig. (a).
 After the spot has traced all the odd lines, the screen is blanked
and the spot is returned quickly to the top of the screen once
again. The time during which the spot travels back to the top of
the screen is called the vertical retrace period.
 The timing of this period is such that the spot reaches the top centre of the screen when it again starts tracing lines in the picture, as shown in Fig. (b). On this second tracing, the spot traces the even lines.
 When the spot completes the tracing of all even lines, i.e. a total of 312.5 lines in the 625 line system, and is at the bottom of the picture, it has traced the even-line field, as depicted in Fig. (c).
 The screen is then blanked once again, and the spot is returned
to the top-left-hand corner of the screen, Fig. (d)


Fig. Interlaced scanning; (a) odd line field (b) transition period (c) even line field and (d) transition period
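The relationship between lines, fields and frames in interlaced scanning can be summarised numerically; a small Python sketch using the 625-line figures quoted in this section:

```python
# Interlaced scanning in the 625-line system
total_lines = 625
fields_per_frame = 2            # odd-line field + even-line field
field_rate = 50                 # vertical scans (fields) per second

lines_per_field = total_lines / fields_per_frame   # 312.5 lines in each field
frame_rate = field_rate / fields_per_frame         # 25 complete pictures per second

print(lines_per_field)  # 312.5
print(frame_rate)       # 25.0 - the picture rate stays 25, but the eye sees 50 flashes/s
```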

Composite Video Signal (CVS):


 The composite video signal consists of a camera signal corresponding to the desired picture information, blanking pulses to make the retrace invisible, and synchronizing pulses to synchronize the transmitter and receiver scanning.
 A horizontal synchronizing (sync) pulse is needed at the end of each active line period, whereas a vertical sync pulse is required after each field is scanned.
 The amplitude of both horizontal and vertical sync pulses is kept the same to obtain higher efficiency of picture signal transmission, but their duration (width) is made different so that they can be separated at the receiver.
 Since sync pulses are needed consecutively and not simultaneously with the picture signal, they are sent on a time-division basis and thus form a part of the composite video signal.

The composite video signal can be represented either with positive polarity or with negative polarity.
Positive polarity:
 In the case of positive polarity, the whiter the scene, the higher is the amplitude of the video signal.
 The blanking level is kept at the zero level; below the zero level is the sync pulse, and the sync top is at the most negative point.
 When the video signal is produced by photoconduction-type camera tubes, bright white light gives a high amplitude of video signal.
 At the receiver, a stronger signal is needed for reproducing white on the fluorescent screen, and zero signal for reproducing black.
Negative polarity:
 In the case of negative polarity, the brighter the scene, the smaller is the amplitude.
 Here the sync pulse is positive, that is, above the blanking level; black is just below the blanking level.
 The brighter the scene, the lower is its level below the blanking level; white is near the bottom.

Composite video signal dimensions:

 The voltage levels for the sync pulses, blanking pulses and video are different, so that these pulses can be separated easily in the receiver.
 The level of the video signal when the picture detail being transmitted corresponds to the maximum whiteness to be handled is referred to as the peak-white level; it is fixed at 10 to 12.5 percent of the maximum value of the signal. The black level corresponds to approximately 72 percent, the sync pulses are added at the 75 percent level called the blanking level, and the sync top level is at 100 percent.
 The difference between the black level and blanking level is
known as the ‘Pedestal’.
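The levels quoted above can be collected in one place and the pedestal and sync regions computed from them; a minimal Python sketch, assuming the 12.5 percent end of the 10 to 12.5 percent peak-white range:

```python
# Composite video signal levels as a percentage of the maximum (sync-tip) amplitude
levels = {
    "peak_white": 12.5,   # 10-12.5 % (whitest picture detail)
    "black":      72.0,   # approximate black level
    "blanking":   75.0,   # blanking (pedestal) level
    "sync_tip":  100.0,   # top of the sync pulses
}

pedestal = levels["blanking"] - levels["black"]         # difference between blanking and black
sync_region = levels["sync_tip"] - levels["blanking"]   # upper 25 % reserved for sync

print(pedestal)      # 3.0  (% of full amplitude)
print(sync_region)   # 25.0 (% of full amplitude)
```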

D.C. component of the video signal:


 Besides the amplitude variations for individual picture elements, the video signal has an average value or dc component corresponding to the average brightness of the scene. In the absence of the dc component the receiver cannot follow changes in brightness.
Pedestal height:
 The pedestal height is the distance between the pedestal level
and the average value (dc level) of the video signal.
 This indicates average brightness since it measures how much
the average value differs from the black level.
Blanking pulses:
 The composite video signal contains horizontal and vertical
blanking pulses to blank the corresponding retrace intervals.
 The repetition rate of horizontal blanking pulses is therefore
equal to the line scanning frequency of 15625 Hz.
 Similarly the frequency of the vertical blanking pulses is equal
to the field-scanning frequency of 50 Hz.
 The level of the blanking pulses is distinctly above the picture
signal information; these are not used as sync pulses.
 Sync pulses are added above the blanking level and occupy
upper 25% of the composite video signal amplitude.
Sync pulse and video signal amplitude ratio:
 The overall arrangement of combining the picture signal and sync pulses is such that about 65 per cent of the carrier amplitude is occupied by the video signal and the upper 25 per cent by the sync pulses.
 The final radiated signal has a picture to sync signal ratio (P/S) equal to 10/4 (the picture signal swing from peak white at about 12.5 per cent to blanking at 75 per cent is roughly 62.5 per cent of the amplitude, against 25 per cent for sync, so P/S ≈ 62.5/25 = 10/4).
Figure : Horizontal and vertical blanking pulses in video signal. Sync pulses are added above the
blanking level and occupy upper 25% of the composite video signal amplitude.

Horizontal Blanking period and Sync pulse details:


 The interval between horizontal scanning lines is indicated by
H.
 Out of a total line period of 64 μs, the line blanking period is
12 μs.
 During this interval a line synchronizing pulse is inserted.
Line Blanking Period:
 The line blanking period is divided into three sections.
 These are the ‘front porch’, the ‘line sync’ pulse and the ‘back
porch’.

Figure 3.4 Horizontal line and sync details compared to the horizontal deflection sawtooth and picture space on the raster
Front porch:
 This is a period of 1.5 µs inserted between the end of the
picture detail for that line and the leading edge of the line sync
pulse.
 This interval allows the receiver video circuit to settle down
from whatever picture voltage level exists at the end of the
picture line to the blanking level before the sync pulse occurs.
 Thus sync circuits at the receiver are isolated from the
influence of end of the line picture details.
Line sync:
 After the front porch of blanking, horizontal retrace is produced when the sync pulse starts.
 The nominal time duration for the line sync pulses is 4.7 µs.
 During this period the beam on the raster almost completes its
back stroke (retrace) and arrives at the extreme left end of the
raster.
Back porch:
 This period of 5.8 µs at the blanking level allows plenty of
time for line flyback to be completed.
 The relative timings are so set that small black bars are formed at both ends of the raster in the horizontal plane.
 These blanked bars at the sides have no effect on the picture
details reproduced during the active line period.
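Putting the horizontal timing figures together shows how the 12 µs blanking interval is made up and how much of each 64 µs line remains for picture detail; a short Python sketch based only on the durations given above:

```python
# Horizontal line timing in the 625-line system (all values in microseconds)
line_period = 64.0
front_porch = 1.5
line_sync   = 4.7
back_porch  = 5.8

line_blanking = front_porch + line_sync + back_porch   # = 12.0 us
active_line   = line_period - line_blanking            # = 52.0 us available for picture detail

print(line_blanking)   # 12.0
print(active_line)     # 52.0
```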
Vertical sync details
 The basic vertical sync pulse is added at the end of both even and odd fields.
 Its width has to be kept much larger than the horizontal sync
pulse, in order to derive a suitable field sync pulse at the
receiver to trigger the field sweep oscillator.
 The standards specify that the vertical sync period should be
2.5 to 3 times the horizontal line period.
 In the 625 line system 2.5 line period (2.5 × 64 = 160 μs) has
been allotted for the vertical sync pulses.
 Thus a vertical sync pulse commences at the end of the first half of the 313th line (end of the first field) and terminates at the end of the 315th line.
 Similarly, after an exact interval of 20 ms (one field period), the next sync pulse occupies line numbers 1, 2 and the first half of 3, just after the second field is over.
 The beginning of these pulses has been aligned in the figure to
signify that these must occur after the end of vertical stroke of
the beam in each field, i.e., after each 1/50th of a second.
 This alignment of vertical sync pulses, one at the end of a half-line period and the other after a full line period, results in a relative misalignment of the horizontal sync pulses: they do not appear one above the other but occur at half-line intervals with respect to each other.
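The field period and the vertical sync duration follow from the same 64 µs line period; a quick Python check using the figures in the text:

```python
# Vertical timing in the 625-line system
line_period_us  = 64.0
lines_per_field = 312.5

field_period_ms  = lines_per_field * line_period_us / 1000   # = 20 ms (1/50 s)
vertical_sync_us = 2.5 * line_period_us                      # = 160 us allotted to vertical sync

print(field_period_ms)    # 20.0
print(vertical_sync_us)   # 160.0
```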
VSB (Vestigial Sideband Modulation):
 Vestigial Sideband (VSB) modulation is a modulation technique in which one sideband is transmitted almost completely, together with a small part (the vestige) of the other sideband.
 The VSB technique was introduced to overcome the drawbacks of SSB modulation.
 The VSB technique is used in TV transmission because the video signal contains components at very low frequencies (down to dc), so one sideband cannot be completely suppressed as is done in SSB.
 The generation and transmission of Vestigial sideband signal:

 VSB signal is generated at the output of the sideband filter employed in


the circuit. However, further amplification of the signal is needed in
order to transmit it to longer distances.
 The balanced modulator used here produces the DSB-SC signal which
is fed to the sideband filter.
 The filter is designed such that it transmits one sideband including a vestige (some part) of the other, thus producing a VSB signal.
 VSB technique allows the transmission of the upper sideband along
with the vestige of lower sideband. However, suppresses the remaining
part.
 Of the 1.25 MHz lower sideband region, a 0.75 MHz vestige is transmitted and the rest is suppressed. This considerably relaxes the filtering requirements.
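The spectrum saving obtained by VSB can be seen with a small calculation; a Python sketch, assuming a 5 MHz video bandwidth for the 625-line system (an assumption not stated above) and the 0.75 MHz vestige from the text:

```python
# Spectrum occupied by the video signal with DSB vs. VSB transmission (MHz)
video_bandwidth = 5.0    # assumed highest video frequency in the 625-line system
vestige         = 0.75   # part of the lower sideband actually transmitted (text above)

dsb_spectrum = 2 * video_bandwidth          # 10.0 MHz if both sidebands were sent in full
vsb_spectrum = video_bandwidth + vestige    # 5.75 MHz with vestigial sideband transmission

print(dsb_spectrum, vsb_spectrum, dsb_spectrum - vsb_spectrum)  # 10.0 5.75 4.25 MHz saved
```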
Colour Television: Primary, secondary colours
PRIMARY AND SECONDARY COLOURS:
Primary colours :
 Colours which cannot be produced by mixing other colours are
called primary colours.
 It is not possible to produce red, blue or green by mixing any two other colours.
 For this reason red, green and blue are called primary colours.
Secondary colours:
 A secondary colour can be produced by mixing other colours.
 Thus yellow colour can be produced by mixing red and green
colours.
 Magenta colour can be produced by mixing red and blue
colours.
 cyan colour can be produced by mixing blue and green colours.
 Two colours which give white light when added together are
called complementary colours.
Red + Green = Yellow
Red + Blue = Magenta
Blue + Green = Cyan
Primary and complementary colours:
Red + Cyan = White
Green + Magenta = White
Blue + Yellow = White
 In colour television, red, green and blue colours are chosen,
and are called primary colours.
 When these colours are combined with each other in various
proportions, a wide range of hues and tints (colour shades) are
produced.
 These hues and tints are sufficient for presenting any colour
picture. Also the range of colours produced by combining red,
green and blue colours is wider than the range produced by
combining any other colours.
 The colours red, green, and blue are called additive primaries
and are used when coloured light sources are blended to
produce the required colour.
 Cyan, Magenta and Yellow are the subtractive primary colors.
 Each one absorbs one of additive primary colors : Cyan
absorbs Red, Magenta absorbs Green and Yellow absorbs
Blue. Adding two subtractive primary colors filters together
will transmit one of the primary additive colors.
 These subtractive primaries are used when a printed picture is viewed by reflected light from a white source.
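The additive relationships listed above (Red + Green = Yellow, and so on) can be illustrated with simple (R, G, B) triplets of the kind used in computer graphics; a Python sketch for illustration only (the 0-255 triplet representation is an assumption, not part of any TV standard):

```python
# Additive mixing of the three primaries represented as (R, G, B) triplets, 0-255 each
RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)

def add(c1, c2):
    """Additive mixing: light from two sources simply adds, channel by channel."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

print(add(RED, GREEN))             # (255, 255, 0)   -> yellow
print(add(RED, BLUE))              # (255, 0, 255)   -> magenta
print(add(BLUE, GREEN))            # (0, 255, 255)   -> cyan
print(add(add(RED, GREEN), BLUE))  # (255, 255, 255) -> white
```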
 COLOUR PERCEPTION:
 All objects that we observe are focused sharply by the lens
system of the eye on its retina.
 The image formed on the retina is retained for about 20 ms
even after optical excitation has ceased. This property of the
eye is called persistence of vision
 The retina which is located at the back side of the eye has light
sensitive organs which measure the visual sensations.
 The retina is connected to the optic nerve, which conveys the light sensations picked up by these organs to the optical centre of the brain.
 According to the theory formulated by Helmholtz the light
sensitive organs are of two types—rods and cones.
 The rods provide brightness sensation and thus perceive
objects only in various shades of grey from black to white.
 The cones that are sensitive to colour are broadly in three
different groups.
 One set of cones detects the presence of blue colour in the
object focused on the retina, the second set perceives red
colour and the third is sensitive to the green range.
 Each set of cones, may be thought of as being ‘tuned’ to only a
small band of frequencies and so absorb energy from a definite
range of electromagnetic radiation to convey the sensation of
corresponding colour or range of colour.
THREE COLOUR THEORY :
 All light sensations to the eye are divided into three main
groups.
 The optic nerve system then integrates the different colour
impressions in accordance with the curve shown in Fig. to
perceive the actual colour of the object being seen.
 This is known as additive mixing and forms the basis of any
colour television system.
 A yellow colour, for example, can be distinctly seen by the eye
when the red and green groups of the cones are excited at the
same time with corresponding intensity ratio.
 Similarly, any colour other than red, green and blue will excite different sets of cones to generate the cumulative sensation of that colour.
 A white colour is then perceived by the additive mixing of the
sensations from all the three sets of cones.

Mixing of Colours :
Mixing of colours can take place in two ways—additive mixing and
subtractive mixing.
Additive mixing:
 In additive mixing which forms the basis of colour television,
light from two or more colours obtained either from
independent sources or through filters can create a combined
sensation of a different colour.
 Thus different colours are created by mixing pure colours and
not by subtracting parts from white.
 The additive mixing of three primary colours—red, green and
blue in adjustable intensities can create most of the colours
encountered in everyday life.
 The impression of white light can also be created by choosing
suitable intensities of these colours.
 Red, green and blue are called primary colours.
 These are used as basic colours in television.
 By pairwise additive mixing of the primary colours the
following complementary colours are produced:
Red + Green = Yellow
Red + Blue = Magenta (purplish red shade)
Blue + Green = Cyan (greenish blue shade)
Subtractive mixing:
 In subtractive mixing, reflecting properties of pigments are
used, which absorb all wavelengths but for their characteristic
colour wavelengths.
 When pigments of two or more colours are mixed, they reflect
wavelengths which are common to both. Since the pigments
are not quite saturated (pure in colour) they reflect a fairly
wide band of wavelengths.
 This type of mixing takes place in painting and colour printing.
Note that as additive mixing of the three primary colours produces
white, their subtractive mixing results in black.
Color triangle :
 The first chromaticity diagram was a circle devised by Newton.
Later, Maxwell used an equilateral triangle .
 In his trichromatic theory, each of the three primary colors—
red, green, and blue—is located at a corner of the triangle.
 The white color is in the middle.
 Other colors are formed by a combination of the r, g, b
components depending on the distances from each of the three
sides of the triangle.
 An additive color space defined by three primaries has a gamut that is the color triangle, provided the amounts of the primaries are non-negative and bounded.
 We can now plot the fraction of red on the horizontal axis and
the fraction of green on the vertical axis of a diagram. This is
the chromaticity diagram or color triangle
 When mixing two colors, we must first identify them on the
color triangle, then draw a line between them and, if they were
mixed in equal amounts, find the resulting color in the mid-
point between the two original colors.
 The coordinates of any mixed color can be found on the color triangle. The color triangle rule is: when mixing two colors, the resulting color always lies on the line joining the two colors.
 C is in the middle point between B and G, M between B and R,
and Y between R and G. The color triangle provides a simple
way of mixing additive primary colors to obtain the desired
color
 Pure colors are at the edges of the triangle, while low-purity ones are at the center.
 The horse-shoe shape in the figure above describes the area of human color vision. This area is delimited by the curved line
along which all spectral colors lie (see corresponding
wavelengths in the diagram), and the line between spectral
violet and red.
 The color triangle includes only the colors that can be obtained
mixing RGB, but the eye can see more colors, including
spectral colors and purples.
 Most of the spectral colors lie outside the color triangle.
 Spectral red, green and blue are at the edges of the triangle,
while yellow, cyan and violet are outside. So are the purples.
 All the magenta and purple hues are likewise non-spectral colors; there is no single (dominant) wavelength associated with them. They are exclusively mixed colors. Magenta is the mixture of spectral red (700 nm) and spectral blue (440 nm), while purple is the mixture of spectral red (700 nm) with spectral violet (400 nm). The mixture of R+B in different proportions generates all the magentas, the reddish-magentas and the bluish-magentas. These hues can be obtained by mixing the RGB primaries (R+B), and can therefore be seen on a TV or computer monitor. The most saturated purples, however, lie outside the color triangle and cannot be matched exactly with the RGB primaries.
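The color-triangle mixing rule described above (an equal mix of two colors lies at the midpoint of the line joining them) can be expressed with the r = R/(R+G+B), g = G/(R+G+B) chromaticity coordinates; a minimal Python sketch:

```python
# Chromaticity coordinates and the color-triangle mixing rule
def chromaticity(R, G, B):
    """Fractions of red and green; blue is implied since r + g + b = 1."""
    total = R + G + B
    return R / total, G / total

def mix_equal(c1, c2):
    """Equal-amount mixing: the result lies at the midpoint of the joining line."""
    return tuple((a + b) / 2 for a, b in zip(c1, c2))

blue  = chromaticity(0, 0, 1)   # (0.0, 0.0)
green = chromaticity(0, 1, 0)   # (0.0, 1.0)
red   = chromaticity(1, 0, 0)   # (1.0, 0.0)

print(mix_equal(blue, green))   # (0.0, 0.5) -> cyan, midway between B and G
print(mix_equal(red, green))    # (0.5, 0.5) -> yellow, midway between R and G
```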
TV camera tube:
A TV camera tube may be called the eye of a TV system.
Some of its more important requirements are:
 Sensitivity to visible light,
 Wide dynamic range with respect to light intensity, and
 ability to resolve details while viewing a multi-element scene.
Most types developed have suffered to a greater or lesser
extent from
 Poor sensitivity,
 Poor resolution,
 High noise level,
 Undesirable spectral response,
 Instability,
 Poor contrast range and
 Difficulties of processing.
Operating Principle of TV Camera Tube:
 The main aim of a camera tube is to detect each element
independently and produce an electrical signal according to the
brightness of each element.
 Light from the scene is focused with a lens system on a photosensitive surface known as the image plate, and the optical image thus formed represents the light intensity variations of the scene.
 The photoelectric properties of the image plate then convert
different light intensities into corresponding electrical
variations.
 The electron beam moves across the image plate line by line,
and field by field to provide signal variations in a successive
order. This scanning process divides the image into its basic
picture elements.
WORKING:
 The two photoelectric effects used for converting variations of
light intensity into electrical variations are
 Photoemission and
 Photoconductivity.
Photoemission: Certain metals emit electrons when light falls on their surface.
 These emitted electrons are called photoelectrons and the
emitting surface a photocathode. Light consists of small
bundles of energy called photons.
 When light is made incident on a photocathode, the photons
give away their energy to the outer valence electrons to allow
them to overcome the potential-energy barrier at the surface.
 The number of electrons which can overcome the potential
barrier and get emitted, depends on the light intensity.
 Cesium-silver or bismuth-silver-cesium oxides are preferred as
photo emissive surfaces .
 Photoemissive camera tubes operate on the principle of photoemission. The image orthicon is a photoemissive type of camera tube.
Photoconductivity :
 The method of producing an electrical image is by
photoconduction, where the conductivity or resistivity of the
photosensitive surface varies in proportion to the intensity of
light focused on it.
 In general the semiconductor metals including selenium,
tellurium and lead with their oxides have this property known
as photoconductivity.
 Photoconductive Camera tubes operate on the principle
of photoconduction, Vidicon and Plumbicon are the two major types of
photoconductive camera tubes.

Fig: Video signal by photoemission and video signal by photoconduction


Image Storage Principle:
 Television cameras developed during the initial stages of development were of the non-storage type, where the signal output from the camera for the light on each picture element is produced only at the instant it is scanned.

 Most of the illumination is wasted. Since the effect of light on


the image plate cannot be stored, any instantaneous pick-up has
low sensitivity.
 High camera sensitivity is necessary to televise scenes at low
light levels and to achieve this, storage type tubes have been
developed.
Electron Scanning Beam:
 As in the case of picture tubes, an electron gun produces a narrow beam of electrons for scanning. In camera tubes, magnetic focusing is normally employed, and the electrons must be focused into a very narrow and thin beam.
 The diameter of the beam determines the size of the smallest picture element and hence the finest detail of the scene that can be resolved.
 Any movement of electric charge is a flow of current and thus the
electron beam constitutes a very small current which leaves the cathode
in the electron gun and scans the target plate.
 The scanning is done by deflecting the beam with the help of magnetic
fields produced by horizontal and vertical coils in the deflection yoke
put around the tubes. The beam scans 312.5 lines per field and 50 such
fields are scanned per second.
Electron Multiplier:
 When the electron beam strikes a metal plate with high velocity, secondary emission of electrons takes place.
 Thus camera tubes incorporate an electron multiplier structure that utilizes these secondary electrons to boost the level of the low photoelectric current which is employed to develop the video signal.
 The electron multiplier is a series of cold anode- cathode electrodes
called dynodes mounted internally, with each at a progressively higher
positive potential .
 The few electrons emitted by the photocathode are accelerated to a more
positive dynode.
 The primary electrons can then force the ejection of secondary emission
electrons when the velocity of the incident electrons is large enough.
The secondary emission ratio is normally three or four, depending on
the surface and the potential applied.
 The number of electrons available is multiplied each time the secondary
electrons strike the emitting surface of the next more positive dynode.
 The current amplification thus obtained is noise free because the
electron multiplier does not have any active device or resistors.
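The overall current gain of the multiplier grows as the secondary-emission ratio raised to the number of dynodes; a rough Python sketch, assuming a hypothetical five-dynode structure (the ratio of 3 to 4 is from the text, the dynode count is illustrative only):

```python
# Current gain of an electron multiplier: each dynode multiplies the electron count
secondary_emission_ratio = 4   # 3-4 secondary electrons per incident electron (text above)
num_dynodes = 5                # assumed number of dynode stages, for illustration only

gain = secondary_emission_ratio ** num_dynodes
print(gain)   # 1024 - a few photoelectrons become a usable signal current
```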
 PAL Receiver
1. VHF and UHF Tuner:
 The TV receiver has a VHF and UHF tuner, which allows the desired channel to be selected.
 The antenna signal is amplified and converted into an IF signal, which is then sent to the video IF amplifier. Tuners also include an automatic frequency tuning (AFT) feature for accurate colour reproduction.
 The AFT circuit measures the intermediate frequency and develops a dc control voltage proportional to the frequency deviation, if any.
2. Video IF Amplifier and Video Detector:
 Since the tuner's output signal isn't strong enough to drive the video
detector, it's amplified to the appropriate level via cascaded IF
amplifiers.
 From the modulated composite video stream, the video detector
recovers the original video signal.
3. Sound Strip:
 The frequency modulated sound IF signal is processed in the usual way
to obtain audio output. The volume and tone controls are associated
with the audio amplifier.
 The FM detector's output is suitably amplified and applied to the
loudspeaker for sound reproduction.
4. Automatic Gain Control Circuit:
 Regardless of the strength of the input signal, the Automatic Gain Control (AGC) circuit keeps the output signal at a consistent amplitude.
 In the deflection section, the sync separator recovers the H and V sync pulses, which are processed by the appropriate oscillators and amplifiers.
 These are finally applied to the V and H deflection coils to deflect the electron beams in the vertical and horizontal directions simultaneously.
5. Luminance Signal:
 From the video detector, the chrominance and luminance signals follow
independent routes before rejoining in the matrix portion.
 From the composite video stream, the luminance signal processing
network recovers the luminance (Y) signal.
 The cathodes of colour picture tubes generally receive a negative-going
Y signal (-Y).
6. Color Signal Processing:
 The signal available at the output of the video detector is properly
amplified before feeding it to the various sections.
1. Chrominance Band Pass Amplifier:
The chrominance bandpass amplifier selects the chrominance signal
while rejecting the composite signal's other undesirable components.
2. Burst Blanking Circuit: During the colour burst periods, this circuit blocks signal flow to the chrominance bandpass amplifier.
3. Burst Amplifier: The burst gate amplifier isolates the colour burst signal from the chrominance signal while also amplifying it to the appropriate level. The burst signal has a frequency of 4.43 MHz.
4. Generation and Control of Subcarriers:
The phase discriminator and variable reactance components combine to serve as an Automatic Phase Control (APC) circuit around the subcarrier oscillator, which is a crystal oscillator used for generating the subcarrier signal with a frequency of 4.43 MHz. The APC circuit compares the locally generated subcarrier with the incoming burst signal.
7.Colour killer:
 If the frequency of the subcarrier oscillator is exactly right, its phase differs by 90° from that of the incoming burst signal.
 The APC (Automatic Phase Control) circuit's output is fed into a 7.8 kHz tuned amplifier.
 A 7.8 kHz ac component is superimposed on the output signal of this circuit.
 The output of this circuit is sent into the colour killer and identification
circuits.
 Before applying the subcarrier output to the V demodulator, the
identification circuit controls an electrical switch that alternately
reverses the phase of the subcarrier output.
 The 7.8 kHz component is available at the APC circuit of the reference
subcarrier oscillator when a colour signal is received.
 The chrominance bandpass amplifier now performs the usual operation.
8. Separation of U and V Modulation Products:
 Here, the PAL delay line circuit, adder, subtractor, V and U sync
demodulators, and difference signal amplifiers with matrix network are
considered.
 The chrominance signal generated by the chroma bandpass amplifier is
sent into one of the adder and subtractor circuits' inputs.
 Using a PAL delay line, the same signal is delayed and applied to the
other inputs of adder and subtractor circuits.
 The adder's output carries the U information and the subtractor's output the V information. Two separate double-sideband, suppressed-carrier RF signals emerge from the adder and subtractor outputs.
 These signals are sent to synchronous demodulators in the U and V
bands, respectively.
 The reference oscillator's colour subcarrier signal is applied directly to the U synchronous demodulator.
 Similarly, the same signal is delivered to the V synchronous demodulator through an electronic switch that shifts its phase by +90° or −90° line by line.
 The original B-Y signal is recovered by the U demodulator. Similarly,
the R-Y signal is recovered by the V demodulator.
 The G-Y signal is produced by combining the two signals in a matrix
network.
 These colour difference signals are applied to the colour picture tube's
matching grids.
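The matrixing step that derives G − Y from the demodulated R − Y and B − Y signals depends on the weighting used to form the luminance signal; a Python sketch, assuming the common Y = 0.30 R + 0.59 G + 0.11 B weighting (these coefficients are not given in the text above):

```python
# Recovering G - Y from the demodulated R - Y and B - Y signals (matrix network)
# Assumes the common luminance weighting Y = 0.30 R + 0.59 G + 0.11 B
def g_minus_y(r_minus_y, b_minus_y):
    return -(0.30 / 0.59) * r_minus_y - (0.11 / 0.59) * b_minus_y

# Example: a pure green scene (R = 0, G = 1, B = 0) gives Y = 0.59,
# so R - Y = -0.59 and B - Y = -0.59; the matrix should return G - Y = 0.41.
print(round(g_minus_y(-0.59, -0.59), 2))   # 0.41
```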
fig: PAL Receiver
NTSC, PAL, SECAM:
THE NTSC SYSTEM:
 The world’s first commercial colour television system was NTSC
(National Television System Committee)
introduced in the USA in 1953, and later adopted by Canada, Mexico
and Japan.
 In the NTSC system the two colour difference signals, formed by subtracting the luminance from the red and blue signals, are transmitted with one a quarter of a cycle behind the other (in quadrature).
 The signals are then added together to form a single chrominance signal.
 When it reaches the receiver, the decoding circuits inside the TV break
down the chrominance signal and separate it from its carrier wave.
 The two signals are then fed into a matrix which then combines them
with the luminance signal to recreate the three original colour signals.
 These then create beams in three electron guns.
 The main drawback in the NTSC system is that even slight errors in the phase between the colour difference signals produce errors at the decoding stage, so that the set applies too much of one colour; hence the mnemonic for remembering the name, “Never Twice the Same Colour”. NTSC receivers therefore have a hue control.
 The NTSC system is a simultaneous system which uses quadrature
modulation and is compatible with
monochrome systems.

THE PAL SYSTEM:


 The PAL system, which is a refinement of the NTSC system, has been
adopted in our country.
 The PAL system aims to improve the colour picture. The signals are transmitted in the same way, but the receiver delays the information on every line, by means of an ultrasonic delay device, for the exact time needed to compare it with the signal for the next line.
 If, for example, a certain line of the picture, as received, contains too
strong a red signal, the system ensures the next line will be too low on
red by reversing the polarity of alternate lines.
 The final information passed to the picture tube is the average of the
delayed first line and the corrected second line, so cancelling out the
error.
 No hue control is necessary.
 The PAL (Phase Alternate Line) colour TV system may be said to stand
midway between the NTSC and SECAM system.
 The PAL system is not a standard in its own right since the name refers
only to the technique used to encode the colour information. PAL
standards exist in both the 625/50 and the 525/60 scanning frequency
standards.
 There is also a PAL-N version used exclusively in Argentina which is a
cross between the 625/50 scanning
and 3.58 MHz subcarrier.
 PAL is a simultaneous compatible TV system using quadrature
modulation. In contrast to SECAM, colour information in the PAL
system is transmitted on every line.

THE SECAM SYSTEM:


 SECAM (Sequential couleur a memoire) is used in the USSR and France.
 The SECAM system has a different mode of transmission.
 The colour difference signals are not arranged a quarter of a cycle apart,
but are kept separate by transmitting them on alternate lines of the
picture.
 Delay lines, inside the receiver then hold up one set of signals so that
they can be recombined to build a picture from alternate lines of signals
from the original scan within the camera.
 The SECAM system does not give a good picture on a black-and-white receiver because it is difficult to separate the signals from their carrier.
 In 1960-1961, the SECAM system was further modified to enable the
colour difference signals to be frequency modulated onto a subcarrier,
which considerably improved the performance of the system.
 The use of frequency modulation, and the sequential transmission of
chrominance signals are the major features of SECAM as distinct from
the NTSC system.
 In a SECAM receiver the chrominance signals are recovered on a time
rather than a phase basis, thus rendering any synchronous detectors (as
required in the NTSC system) unnecessary.
 Frequency modulation has made the SECAM system insensitive to
amplitude, frequency and phase distortions in the transmission circuit.
 SECAM is a compatible colour TV system.
 Its distinguishing feature is the fact that two colour-difference signals
are transmitted sequentially on alternate lines, frequency modulated on a
subcarrier while the luminance signal is transmitted on every line.
Compare NTSC, PAL, SECAM:

 NTSC uses only 525 lines, with only 486 of them visible. The rest are used for synchronization and vertical retrace.
 PAL and SECAM both have a higher resolution by using 100 more lines
per frame. Out of the 625 lines of PAL and SECAM, 576 are visible and
the rest are used for control as well.
 The biggest drawback of NTSC is its inability to correct the colors on-
screen automatically. Thus, it needs a tint control that a user needs to
adjust manually.
 The makers of PAL and SECAM used phase reversal in order to
automatically correct the color and eliminate the need for a tint control.
 NTSC uses a refresh rate of 60 Hz while PAL and SECAM use 50 Hz.
 NTSC and PAL use QAM while SECAM uses FM.
 NTSC and PAL send the red and blue colour information together while SECAM sends them alternately.
