Training Report Final
on
Submitted by:
DIVYA CHOUDHARY PARIDHI SHARMA BABITA CHOUDHARY SHRUTI GUPTA
JULY 2011
ACKNOWLEDGEMENT
It has indeed been my privilege to receive the scintillating supervision of all the members of the organization, who have always been helpful and kind enough to devote time to supervise me during the training and to extend all possible help in spite of their busy schedules.
Contents
1. Doordarshan
1.1 What is television
1.2 About Doordarshan
1.3 History
1.4 Present status
3.11 Color sync pulse generator
3.12 Professional video camera
3.13 Studio cameras
3.14 ENG cameras
3.15 Parts of a camera
3.16 PAL encoder
3.17 Outside broadcasting
3.18 Video switcher
3.19 Video editing
3.20 Graphics
3.21 Electronic news gathering
4. Microphones
4.1 Condenser microphone
4.2 Dynamic microphone
4.3 Ribbon microphone
4.4 Carbon microphone
4.5 Piezoelectric microphone
4.6 Fiber optic microphone
4.7 Laser microphone
4.8 Speakers as microphones
4.9 Application specific designs
6. Satellite Communication
6.1 Direct broadcasting satellite
6.1.1 Geostationary orbit
6.1.2 Footprints
6.1.3 Beam width
6.2 Earth station
6.3 Uplinking and downlinking
6.4 Transponder
7. Conclusion
1. What is TELEVISION?

The word television is derived from Greek and Latin roots and means "to see at a distance". However, the modern definition of television is more specific: the electrical transmission of visual scenes and images by wire or radio, in such rapid succession as to produce the illusion of witnessing the events as they occur at the transmitter end. The images can be reproduced in shades between black and white or in colour, and are accompanied by sound transmitted over an associated electrical sound channel.

About Doordarshan

Doordarshan is the public television broadcaster of India and a division of Prasar Bharati, nominated by the Government of India. It is one of the largest broadcasting organisations in the world in terms of its infrastructure of studios and transmitters. Doordarshan Kendra is a milestone in entertainment and educational media: many cultures and ideas combine here to produce a programme, and the whole process in a DDK is like blood circulation in a body.

History:

Doordarshan had a modest beginning with an experimental telecast starting in Delhi in September 1959, using a small transmitter and a makeshift studio. Regular daily transmission started in 1965 as a part of All India Radio. Until 1975, only seven Indian cities had television service, and Doordarshan remained the only television channel in India.

DDK, JAIPUR:

On 1st June 1987, the Jaipur Doordarshan Kendra was set up at Jhalana Doongri, and transmission started on 6th July 1987. From 2nd October 1993, the LPTs located at Ajmer, Udaipur, Jodhpur and Bikaner and the HPT at Bundi were connected with DDK Jaipur via satellite. The high-power transmitter of DDK Jaipur is situated at Nahargarh.

Present Status:
Doordarshan Jaipur is the only programme production centre in Rajasthan. The studios are housed at Jhalana Doongri, Jaipur, and the transmitter is located at the Nahargarh Fort. As per the census figures of 2001, the channel covers 79% of Rajasthan by population and 72% by area. On 1/5/95 the telecast of the DD-2 programme commenced from Jaipur; DD-2, since converted to DD News, is now telecast from a 10 kW HPT set up in 2000. The reach of the News channel is 11% by area and 32% by population. There are 74.85% TV and 35.83% cable homes in urban Rajasthan, and 25.69% TV and 7% cable homes in rural Rajasthan (NRS 2002). Presently this Kendra originates over four hours of daily programming (25 hours and 30 minutes weekly) in Hindi and Rajasthani. Programmes are also telecast in Sindhi, Urdu, English and Sanskrit. The Kendra originates two news bulletins daily, one in Hindi and one in Rajasthani, and feeds important stories for the national bulletins, including a regular contribution to Rajyon Se Samachar at 1740 hrs daily on DD News. Major sports events are covered for national telecast in live/recorded mode, and programme contributions are also sent for national telecast.
A television studio is an installation in which television or video productions take place, either for live television, for recording live to tape, or for the acquisition of raw footage for postproduction. The design of a studio is similar to, and derived from, movie studios, with a
few amendments for the special requirements of television production. A professional television studio generally has several rooms, which are kept separate for noise and practicality reasons. These rooms are connected via intercom, and personnel will be divided among these workplaces. Generally, a television studio consists of the following rooms:
1. Studio floor

The studio floor contains:
- decoration and/or sets
- cameras on pedestals
- microphones
- lighting rigs and the associated controlling equipment
- several video monitors for visual feedback from the production control room
- a small public address system for communication

A glass window between the PCR and the studio floor, for direct visual contact, is usually desired but not always possible.
While a production is in progress, the following people work on the studio floor:
- The on-screen "talent" themselves, and any guests: the subjects of the show.
- A floor director, who has overall charge of the studio area and who relays timing and other information from the director.
- One or more camera operators, who operate the television cameras.
The production control room (also known as the 'gallery') is the place in a television studio in which the composition of the outgoing program takes place. Facilities in a PCR include:
- a video monitor wall, with monitors for program, preview, videotape machines, cameras, graphics and other video sources
- a switcher, a device where all video sources are controlled and taken to air; also known as a special effects generator
- an audio mixing console and other audio equipment such as effects devices
- a character generator, which creates the majority of the names and full-screen graphics that are inserted into the program
- digital video effects and/or still frame devices (if not integrated in the vision mixer)
- the technical director's station, with waveform monitors, vectorscopes and the camera control units or remote control panels for the camera control units (CCUs)
- VTRs may also be located in the PCR, but are also often found in the central machine room

The central machine room typically houses:
- the actual circuitry and connection boxes of the vision mixer, DVE and character generator devices
- camera control units
- VTRs
- patch panels for reconfiguration of the wiring between the various pieces of equipment
Other areas of a studio complex include:
- one or more make-up and changing rooms
- a reception area for crew, talent, and visitors, commonly called the green room
V.T.R Station

In the first chain we will understand studio programme recording. Camera output from the studio hall is sent to the CCU, the camera control unit; many parameters of the video signal are controlled from the CCU. The output signal of the CCU, after all corrections have been made, is sent to the VM (vision mixer) in PCR-1 (production control room 1). The outputs of 3 to 4 cameras arrive here, and the final signal that we see at home is selected using the VM according to the director's choice. The VM is a computer-based system by PINNACLE used to add transitions and many other effects, such as chroma keying, between two selected camera outputs. The final signal from the VM is sent to the VTR (video tape recorder). The VTR uses both analog and digital tape recording systems.
Audio Chain
Having understood the video chain, the audio chain is also interesting to know, and easier than the video chain. In a studio programme, audio from the studio microphones is fed directly to the audio console placed in PCR-1. The audio console offers a range of multitrack mixing systems with exceptional flexibility; it is used to mix audio from different sources and to maintain its output level. From the audio console, the signal is recorded directly on tape along with the video signal in the VTR.
A vision mixer (also called a video switcher, video mixer or production switcher) is a device used to select between several different video sources and, in some cases, to composite (mix) video sources together and add special effects. This is similar to what a mixing console does for audio.

Explanation

Typically a vision mixer would be found in a professional television production environment such as a television studio, cable broadcast facility, commercial production facility or linear video editing bay. The term can also refer to the person operating the device. Vision mixer and video mixer are almost exclusively European terms for both the equipment and the operators. Software vision mixers are also available.

Capabilities and usage in TV productions

Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations and generate color signals (called mattes in this context). Most vision mixers are targeted at the professional market, with newer analog models having component video connections and digital ones using SDI. They are used in live and videotaped television productions and for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer-based non-linear editing.
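The dissolve transition a mixer generates is, at its core, a weighted average of the two input signals. A minimal sketch in pure Python (function names are illustrative; a real mixer does this per pixel in dedicated hardware), mixing 8-bit luma values:

```python
def dissolve(pixel_a, pixel_b, t):
    """Mix two pixel values (0-255) for a dissolve transition:
    t = 0.0 shows only source A, t = 1.0 only source B."""
    return int((1.0 - t) * pixel_a + t * pixel_b)

def dissolve_frame(frame_a, frame_b, t):
    """Apply the mix to every pixel of two equal-sized frames
    (frames here are simply flat lists of 8-bit luma values)."""
    return [dissolve(a, b, t) for a, b in zip(frame_a, frame_b)]

# Halfway through a dissolve from black (0) to white (255),
# each source contributes 50%, giving mid-grey.
print(dissolve(0, 255, 0.5))  # 127
print(dissolve_frame([0, 0], [255, 255], 0.25))
```

A hard cut is the degenerate case of jumping t straight from 0 to 1; pattern wipes vary t per pixel according to a shape instead of applying one value to the whole frame.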
A character generator (CG for short) is a device or software that produces static or animated text (such as crawls and rolls) for keying into a video stream. Modern character generators are computer-based and can generate graphics as well as text. Character generators are primarily used in the broadcast areas of live sports or news presentations, given that the modern character generator can rapidly (i.e., "on the fly") generate high-resolution, animated graphics when an unforeseen situation in the game or newscast presents an opportunity for broadcast coverage. For example, when, in a football game, a previously unknown player begins to have what looks to become an outstanding day, the character generator operator can rapidly build a new graphic for that player's unanticipated performance, using the "shell" of a similarly designed graphic composed for another player. The character generator, then, is but one of many technologies used in the remarkably diverse and challenging work of live television, where events on the field or in the newsroom dictate the direction of the coverage. In such an environment, the quality of the broadcast is only as good as its weakest link, both in terms of personnel and technology. Hence, character generator development never ends, and the distinction between hardware and software CGs begins to blur as new platforms and operating systems evolve to meet the demands of live television.

Hardware CGs

Hardware CGs are used in television studios and video editing suites. A DTP-like interface can be used to generate static and moving text or graphics, which the device then encodes into a high-quality video signal, such as digital SDI or analog component video, high definition or even RGB video. In addition, they also provide a key signal, which the compositing vision mixer can use as an alpha channel to determine which areas of the CG video are translucent.

Software CGs

Software CGs run on standard off-the-shelf hardware and are often integrated into video editing software such as non-linear video editing applications. Some stand-alone products are available, however, for applications that do not attempt to offer text generation on their own, as high-end video editing software often does, or whose internal CG effects are not flexible and powerful enough. Some software CGs can be used in live production with special software and computer video interface cards. In that case, they are equivalent to hardware CGs.
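The key signal works like an alpha channel: per pixel, it decides how much of the CG fill replaces the background video. A minimal sketch (illustrative names, single 8-bit luma values rather than full frames):

```python
def key_over(background, fill, key):
    """Composite a CG fill pixel over a background pixel using the
    key (alpha) signal: key = 0.0 is transparent, 1.0 fully opaque."""
    return int(key * fill + (1.0 - key) * background)

# Fully keyed areas show the graphic; unkeyed areas pass the video through.
print(key_over(40, 235, 1.0))   # 235: opaque graphic
print(key_over(40, 235, 0.0))   # 40: background video unchanged
print(key_over(40, 235, 0.5))   # a 50/50 mix, as on a soft edge
```

Intermediate key values are what give anti-aliased text its smooth edges when keyed over moving video.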
The camera control unit (CCU) is installed in the production control room (PCR), and allows various aspects of the video camera on the studio floor to be controlled remotely. The most commonly made adjustments are for white balance and aperture, although almost all technical adjustments are made from controls on the CCU rather than on the camera. This frees the camera operator to concentrate on composition and focus, and also allows the technical director of the studio to ensure uniformity between all the cameras. As well as acting as a remote control, the CCU usually provides the external interfaces for the camera to other studio equipment, such as the vision mixer and intercom system, and contains the camera's power supply.
Common professional videotape formats include:
- D1 (Sony and Broadcast Television Systems Inc.)
- D2 (Sony and Ampex)
- Digital Betacam (Sony)
- Betacam IMX (Sony)
- DVCAM (Sony)
- DVCPRO (Panasonic)
A VCR. The videocassette recorder (or VCR, more commonly known in the British Isles as the video recorder), is a type of video tape recorder that uses removable videotape cassettes containing magnetic tape to record audio and video from a television broadcast so it can be
played back later. Many VCRs have their own tuner (for direct TV reception) and a programmable timer (for unattended recording of a certain channel at a particular time).
A video monitor is a device similar to a television, used to monitor the output of a video-generating device such as a video camera, VCR, or DVD player. It may or may not have audio monitoring capability. Unlike a television, a video monitor has no tuner and, as such, is unable to independently tune into an over-the-air broadcast. One common use of video monitors is in television stations and outside broadcast vehicles, where broadcast engineers use them for confidence checking of signals throughout the system. Video monitors are also used extensively in the security industry with closed-circuit television cameras and recording devices.

Common signal inputs for video monitors:
- Serial Digital Interface (SDI, as SD-SDI or HD-SDI)
- Composite video
- Component video
Mixing consoles are used in many applications, including recording studios, public address systems, sound reinforcement systems, broadcasting, television, and film post-production. An example of a simple application would be to enable the signals from two separate microphones (each being used by vocalists singing a duet, perhaps) to be heard through one set of speakers simultaneously. When used for live performances, the signal produced by the mixer will usually be sent directly to an amplifier, unless the mixer is powered or is connected to powered speakers.

Further channel controls affect the equalization of the signal by separately attenuating or boosting a range of frequencies (e.g., bass, midrange, and treble). Most large mixing consoles (24 channels and larger) have sweep equalization in one or more bands of the parametric equalizer on each channel, where the frequency and affected bandwidth of the equalization can be selected. Smaller mixing consoles have few or no equalization controls. Some mixers have a general equalization control (either graphic or parametric).

Each channel on a mixer has an audio taper pot, or potentiometer, controlled by a sliding volume control (fader), that allows adjustment of the level, or amplitude, of that channel in the final mix. A typical mixing console has many rows of these sliding volume controls. Each control adjusts only its respective channel (or one half of a stereo channel); therefore, it affects only the level of the signal from one microphone or other audio device. The signals are summed to create the main mix, or combined on a bus as a submix: a group of channels that are then added to get the final mix (for instance, many drum mics could be grouped into a bus, and the proportion of drums in the final mix can then be controlled with one bus fader). There may also be insert points for a certain bus, or even for the entire mix.

On the right-hand side of the console, there are typically one or two master controls that enable adjustment of the console's main mix output level. Finally, there are usually one or more VU or peak meters to indicate the levels for each channel or for the master outputs, and to indicate whether the console levels are overmodulating or clipping the signal. Most mixers have at least one additional output besides the main mix, either individual bus outputs or auxiliary outputs, used, for instance, to send a different mix to on-stage monitors. The operator can vary the mix (or levels of each channel) for each output.

As audio is heard in a logarithmic fashion (in both amplitude and frequency), mixing console controls and displays are almost always calibrated in decibels, a logarithmic measurement system. This is also why special audio taper pots or circuits are needed. Since the decibel is a relative measurement, and not a unit in itself (like a percentage), the meters must be referenced to a nominal level. The "professional" nominal level is considered to be +4 dBu, while the "consumer grade" level is −10 dBV.
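The decibel arithmetic above can be made concrete with a short sketch. It assumes amplitude decibels, where the linear gain is 10^(dB/20), and the function and variable names are purely illustrative:

```python
import math

def db_to_gain(db):
    """Convert a fader setting in dB to a linear amplitude factor."""
    return 10 ** (db / 20.0)

def mix(samples_by_channel, fader_db):
    """Sum several channels into a mono mix, applying each channel's
    fader gain first -- the summing that builds the main mix or a bus."""
    n = len(next(iter(samples_by_channel.values())))
    out = [0.0] * n
    for ch, samples in samples_by_channel.items():
        g = db_to_gain(fader_db[ch])
        for i, s in enumerate(samples):
            out[i] += g * s
    return out

# Unity gain is 0 dB; -6 dB roughly halves the amplitude.
print(round(db_to_gain(0.0), 3))   # 1.0
print(round(db_to_gain(-6.0), 3))  # 0.501
```

The same relation explains the meters: a signal 6 dB below nominal reads at about half the nominal amplitude, which is why level scales bunch up near the top and spread out at the bottom.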
The required pulse timings at H and V rate are derived from the 2H master oscillator through frequency dividers as shown in the figure. The blanking and sync pulses are derived from the 2H, H and V pulses employing suitable pulse shapers and pulse adders or logic gates.
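For the 625-line, 50-field system used here, the divider chain can be sketched numerically: the 2H master runs at twice the 15 625 Hz line frequency, and dividing 2H by the line count (625) yields the 50 Hz field rate. The snippet below assumes those standard 625/50 figures; the divider ratios, not the code, are the point:

```python
# Deriving H- and V-rate pulses from a 2H master oscillator (625-line system).
MASTER_2H = 31_250  # Hz: twice the line frequency

h_rate = MASTER_2H // 2      # divide by 2   -> line (horizontal) frequency
v_rate = MASTER_2H // 625    # divide by 625 -> field (vertical) frequency

print(h_rate)  # 15625 (Hz)
print(v_rate)  # 50 (Hz)
```

Starting from 2H rather than H is what makes the half-line offset between odd and even fields of the interlaced raster easy to generate.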
3.8 Lighting system of the studio

Basically, the fittings employ incandescent lamps and quartz iodine lamps at appropriate color temperatures. Quartz iodine lamps are also incandescent lamps, with a quartz glass envelope and an iodine vapour atmosphere inside; these lamps are more stable in operation and in color temperature with respect to aging. The lamp fittings generally comprise spotlights of 0.5 kW and 1 kW and broads of 1 kW, 2 kW and 5 kW. Quartz iodine lamps of 1 kW provide flood lighting. A number of these fittings are suspended from the top so that they can be adjusted unseen. The adjustments for raising and lowering can be done by (i) hand operation for smaller suspensions, (ii) winch-motor-operated controls for the greater mechanical loads of batten suspensions carrying a number of light fittings, and (iii) unipole suspensions carrying wells of light fittings, manipulated from a catwalk of steel structure at the top ceiling where the poles carrying them are clamped.

The lighting is controlled by varying the effective current flow through the lamps by means of silicon controlled rectifier (SCR) dimmers. These enable the conduction angle of the current to be continuously varied by suitable gate-triggering signals. The lighting patch panels and SCR dimmer controls for the lights are provided in a separate room. The lighting is energized and controlled by switches and faders on the dimmer console in the PCR, from the technical presentation panel. The lighting has to prevent shadows and produce the desired contrast effects. Following are some of the terms used in lighting.

High key lighting: lighting that gives a picture whose gradations fall between gray shades and white, confining dark gray and black to a few areas, as in news reading, panel discussions, etc.
Low key lighting: lighting that gives a picture whose gradations fall from gray to black, with few areas of light gray or white.
Key light: the principal source of direct illumination, often with hinged panels or shutters to control the spread of the light beam.
Fill light: supplementary soft light to fill in details and reduce the shadow contrast range.
Back light: illumination from behind the subject, in the plane of the camera's optical axis, providing 'accent lighting' to bring out the subject against the background or the scene.

3.9 Audio Pick-up

For sound arrangement, the microphone placement technique depends upon the type of programme. In some cases, e.g. discussions, news and musical programmes, the mikes may be visible to the viewers and can be put on a desk or mounted on floor stands. In other programmes, for instance dramas, the mikes must be out of view. Such programmes require hidden microphones or a boom-mounted mike with a boom operator. A unidirectional microphone mounted on the boom arm, high enough to be out of sight, is desirable here, and the boom operator must manipulate the boom properly. Lavaliere microphones and hidden microphones are also useful in such programmes. In a television studio there is considerable ambient noise resulting from off-camera activity, hence directional mikes are frequently used. The studio walls and ceilings are treated with sound-absorbing material to make them as acoustically dead as possible. Artificial reverberation is then required to achieve proper audio quality.
raster in the picture monitor will shift slightly as the cameras are switched over. System blanking is useful in overcoming this time difference between the two camera signals arriving at the vision mixer unit. The system blanking is much longer in duration and encompasses both the camera blanking periods: the system line blanking is 12 µs, whereas the camera line blanking is only 7 µs. This prevents the shift in the raster from being observed. In recent cameras, the time difference due to differences in camera cable lengths is offset by auto-phasing circuits, which ensure that the video signals arriving from all cameras are time-coincident irrespective of their cable lengths. Once the circuit is adjusted, the cable length has no effect on the timings. Even in such cases, system blanking is necessary to mask off unwanted oscillations or distortions at the end or start of the scanning line.
It is common for professional cameras to split the incoming light into the three primary colors that humans are able to see, feeding each color into a separate pickup tube (in older cameras) or charge-coupled device (CCD). Some high-end consumer cameras also do this, producing a higher-quality image than is normally possible with just a single video pickup.

3.14 ENG Cameras

Often used in independent films, ENG video cameras are similar to consumer camcorders, and indeed the dividing line between them is somewhat blurry, but a few differences are generally notable:
- They are bigger, and usually have a shoulder stock for stabilizing on the camera operator's shoulder.
- They use 3 CCDs instead of one (as is common in digital still cameras and consumer equipment), one for each primary color.
- They have removable/swappable lenses.
- All settings, such as white balance, focus, and iris, can be manually adjusted, and automatics can be completely disabled.
- Where possible, these functions are adjustable mechanically (especially focus and iris), rather than by passing signals to an actuator or digitally dampening the video signal.
- They have professional connectors: BNC for video and XLR for audio.
- A complete timecode section is available, and multiple cameras can be timecode-synchronized with a cable.
- "Bars and tone" are available in-camera (the bars are SMPTE (Society of Motion Picture and Television Engineers) bars, similar to those seen on television when a station goes off the air; the tone is a test audio tone).
3.15 Parts of a Camera

Lens turret: a judicious choice of lens can considerably improve the quality of the image, the depth of field and the impact intended to be created on the viewer. Accordingly, a number of different viewing angles are provided. The focal lengths are slightly adjusted by movement of the front element of the lens, located on the lens assembly.

Zoom lens: a zoom lens has a variable focal length, with a range of 10:1 or more. With this lens the viewing angle and field of view can be varied without loss of focus. This enables dramatic close-up control: a smooth and gradual change of focal length by the camera operator while televising a scene appears to viewers as if the camera is approaching or receding from the scene.

Camera mounting: for a studio camera it is necessary to be able to move the camera up and down and around its centre axis to pick up different sections of the scene.

Viewfinder: to permit the camera operator to frame the scene and maintain proper focus, an electronic viewfinder is provided with most TV cameras. It receives video signals from the control room stabilizing amplifier. The viewfinder has its own deflection circuitry, as in any other monitor, to produce the raster, and also has a built-in DC restorer for maintaining the average brightness of the televised scene.
The first and largest part is the production area, where the director, technical director, assistant director, character generator operator and producers usually sit in front of a wall of monitors. This area is very similar to a production control room. The technical director sits in front of the video switcher. The monitors show all the video feeds from various sources, including computer graphics, cameras, video tapes, and slow-motion replay machines. The wall of monitors also contains a preview monitor, showing what could be the next source on air (it does not have to be, depending on how the video switcher is set up), and a program monitor that shows the feed currently going to air or being recorded. The second part of the van is for the audio engineer; it has a sound mixer (fed with all the various audio feeds: reporters' commentary, on-pitch microphones, etc.). The audio engineer controls which channels are added to the output and follows instructions from the director. The audio engineer normally also has a dirty feed monitor to help with the synchronization of sound and video.
The 3rd part of the van is video tape. The tape area has a collection of video tape machines (VTRs) and may also house additional power supplies or computer equipment.
The 4th part is the video control area where the cameras are controlled by 1 or 2 people to make sure that the iris is at the correct exposure and that all the cameras look the same.
The 5th part is transmission, where the signal is monitored and engineered for quality control purposes and is transmitted or sent to other trucks.
There are two main methods of video editing:
- non-linear editing, using computers with video editing software
- linear video editing, using videotape
Video editing is the process of re-arranging or modifying segments of video to form another piece of video. The goals of video editing are the same as in film editing: the removal of unwanted footage, the isolation of desired footage, and the arrangement of footage in time to synthesize a new piece of footage. Clips are arranged on a timeline, music tracks and titles are added, effects can be created, and the finished program is "rendered" into a finished video.

Non-Linear Editing
The term "nonlinear editing" is also called "real time" editing, "random-access" or "RA" editing, "virtual" editing, "electronic film" editing, and so on.
Non-linear editing for film and television post-production is a modern editing method in which any frame in a video clip can be accessed with the same ease as any other. This method is similar in concept to the "cut and glue" technique used in film editing from the beginning. However, when working with film this is a destructive process, as the actual film negative must be cut. Non-linear, non-destructive methods began to appear with the introduction of digital video technology. Video and audio data are first digitized to hard disks or other digital storage devices; the data is either recorded directly to the storage device or imported from another source. Once imported, the material can be edited on a computer using any of a wide range of software. With the availability of commodity video processing hardware, specialist video editing cards, and computers designed specifically for non-linear video editing, many software packages are now available to work with them.
Linear Editing

Linear editing is done by using VCRs, with a monitor to view the output of the edit.
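The non-destructive character of non-linear editing can be sketched as a timeline of (source, in, out) references: rendering concatenates the referenced ranges while the source material itself is never modified. The names and data layout below are purely illustrative:

```python
def render(timeline, sources):
    """Render a cut by concatenating referenced frame ranges.
    timeline: list of (source_name, in_frame, out_frame) entries.
    The sources are only read, never changed (non-destructive editing)."""
    out = []
    for name, start, end in timeline:
        out.extend(sources[name][start:end])
    return out

# Two source "tapes" represented as lists of frame numbers.
sources = {"a": list(range(100)), "b": list(range(200, 300))}
cut = [("a", 10, 13), ("b", 0, 2)]
print(render(cut, sources))  # [10, 11, 12, 200, 201]
```

Re-ordering or trimming means editing only the small timeline structure, which is why any frame is as easy to reach as any other; linear tape editing, by contrast, must copy material in sequence onto the record tape.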
3.20 Graphics
The paint-box is a professional tool for the graphics designer. Using an electronic cursor or pen and an electronic board, any type of design can be created with the paint-box. An artist can capture any live video frame, retouch it, and subsequently process it, cut or paste it onto another picture, and prepare a stencil out of the grabbed picture. The system consists of:

mainframe electronics, a graphics tablet, a keyboard, a floppy disk drive, and a 385 MB Winchester disk drive.
4. MICROPHONES
A microphone (colloquially called a mic or mike, both pronounced the same) is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal. In 1876, Emile Berliner invented the first microphone, used as a telephone voice transmitter. Microphones are used in many applications such as telephones, tape recorders, karaoke systems, hearing aids, motion picture production, live and recorded audio engineering, FRS radios, megaphones, radio and television broadcasting, and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes such as ultrasonic checking or knock sensors.

Most microphones today use electromagnetic induction (dynamic microphone), capacitance change (condenser microphone), piezoelectric generation, or light modulation to produce an electrical voltage signal from mechanical vibration. The sensitive transducer element of a microphone is called its element or capsule. A complete microphone also includes a housing, some means of bringing the signal from the element to other equipment, and often an electronic circuit to adapt the output of the capsule to the equipment being driven. Microphones are referred to by their transducer principle, such as condenser, dynamic, etc., and by their directional characteristics. Sometimes other characteristics, such as diaphragm size, intended use, or the orientation of the principal sound input to the principal axis (end- or side-address), are used to describe the microphone.
Inside the Oktava 319 condenser microphone

The condenser microphone, invented at Bell Labs in 1916 by E. C. Wente,[2] is also called a capacitor microphone or electrostatic microphone. Here, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates. There are two types, depending on the method of extracting the audio signal from the transducer: DC-biased and radio frequency (RF) or high frequency (HF) condenser microphones. With a DC-biased microphone, the plates are biased with a fixed charge (Q). The voltage maintained across the capacitor plates changes with the vibrations in the air, according to the capacitance equation C = Q / V, where Q is the charge in coulombs, C the capacitance in farads and V the potential difference in volts. The capacitance of the plates is inversely proportional to the distance between them for a parallel-plate capacitor. (See capacitance for details.) The assembly of fixed and movable plates is called an "element" or "capsule."

A nearly constant charge is maintained on the capacitor. As the capacitance changes, the charge across the capacitor does change very slightly, but at audible frequencies it is sensibly constant. The capacitance of the capsule (around 5 to 100 pF) and the value of the bias resistor (100 megohms to tens of gigohms) form a filter that is high-pass for the audio signal and low-pass for the bias voltage. Note that the time constant of an RC circuit equals the product of the resistance and capacitance. Within the time frame of the capacitance change (as much as 50 ms for a 20 Hz audio signal), the charge is practically constant and the voltage across the capacitor changes instantaneously to reflect the change in capacitance. The voltage across the capacitor varies above and below the bias voltage. The voltage difference between the bias and the capacitor is seen across the series resistor, and the voltage across the resistor is amplified for performance or recording.
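Plugging representative values into the relationships above shows why the charge stays practically constant over one audio cycle. The 50 pF capsule and 1 gigohm bias resistor below are assumptions picked from the ranges just quoted, not figures from any particular microphone:

```python
import math

C = 50e-12   # capsule capacitance: 50 pF (within the 5-100 pF range above)
R = 1e9      # bias resistor: 1 gigohm (within the stated range)

tau = R * C                          # RC time constant, in seconds
f_corner = 1 / (2 * math.pi * tau)   # high-pass corner frequency, in Hz

print(round(tau * 1000))   # 50 -> a 50 ms time constant
print(round(f_corner, 2))  # 3.18 -> corner frequency in Hz
```

The 50 ms time constant is as long as the entire period of a 20 Hz tone, so within one vibration the charge cannot leak away appreciably; equivalently, the roughly 3 Hz corner sits well below the audio band, so the capsule-resistor filter passes the whole audible signal.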
the magnetic field generates the electrical signal. Ribbon microphones are similar to moving coil microphones in the sense that both produce sound by means of magnetic induction. Basic ribbon microphones detect sound in a bi-directional (also called figure-eight) pattern because the ribbon, which is open to sound both front and back, responds to the pressure gradient rather than the sound pressure. Though the symmetrical front and rear pickup can be a nuisance in normal stereo recording, the high side rejection can be used to advantage by positioning a ribbon microphone horizontally, for example above cymbals, so that the rear lobe picks up only sound from the cymbals. Crossed figure 8, or Blumlein pair, stereo recording is gaining in popularity, and the figure 8 response of a ribbon microphone is ideal for that application.
resulting in an audible squeal from the old "candlestick" telephone if its earphone was placed near the carbon microphone.
A fiber optic microphone converts acoustic waves into electrical signals by sensing changes in light intensity, instead of sensing changes in capacitance or magnetic fields as conventional microphones do. During operation, light from a laser source travels through an optical fiber to illuminate the surface of a tiny, sound-sensitive reflective diaphragm. Sound causes the diaphragm to vibrate, thereby minutely changing the intensity of the light it reflects. The modulated light is then transmitted over a second optical fiber to a photodetector, which transforms the intensity-modulated light into analog or digital audio for transmission or recording. Fiber optic microphones possess a high dynamic range and wide frequency response, similar to the best high-fidelity conventional microphones. Fiber optic microphones do not react to or influence any electrical, magnetic, electrostatic or radioactive fields (this is called EMI/RFI immunity). The fiber optic microphone design is therefore ideal for use in areas where conventional microphones are ineffective or dangerous, such as inside industrial turbines or in magnetic resonance imaging (MRI) equipment environments. Fiber optic microphones are robust, resistant to environmental changes in heat and moisture, and can be produced for any directionality or impedance matching. The distance between the microphone's light source and its photodetector may be up to several kilometers without need for any preamplifier or other electrical device, making fiber optic microphones suitable for industrial and surveillance acoustic monitoring. Fiber optic microphones are used in very specific application areas such as infrasound monitoring and noise canceling.

They have proven especially useful in medical applications, such as allowing radiologists, staff and patients within the powerful and noisy magnetic field to converse normally, inside the MRI suites as well as in remote control rooms.[10] Other uses include industrial equipment monitoring and sensing, audio calibration and measurement, high-fidelity recording and law enforcement.
Vibrations of this surface displace the returned beam, causing it to trace the sound wave. The vibrating laser spot is then converted back to sound. In a more robust and expensive implementation, the returned light is split and fed to an interferometer, which detects movement of the surface. The former implementation is a tabletop experiment; the latter requires an extremely stable laser and precise optics. A newer type of laser microphone uses a laser beam and smoke or vapor to detect sound vibrations in free air. Sound pressure waves cause disturbances in the smoke that in turn cause variations in the amount of laser light reaching the photodetector.
The composite video signal is formed by the electrical signal corresponding to the picture information in the lines scanned in the TV camera pick-up tube, together with the synchronizing signals introduced into it. It is important to preserve its waveform: any distortion of the video signal will affect the reproduced picture, while a distortion in the sync pulses will affect synchronization, resulting in an unstable picture. The signal is therefore monitored with an oscilloscope at various stages in the transmission path to ensure that it conforms to the standards. In receivers, observation of the video signal waveform can provide valuable clues to circuit faults and malfunctions.
Composite video is the format of an analog television (picture only) signal before it is combined with a sound signal and modulated onto an RF carrier. It is usually in a standard format such as NTSC, PAL, or SECAM. It is a composite of three source signals called Y, U and V (together referred to as YUV) with sync pulses. Y represents the brightness or luminance of the picture and includes synchronizing pulses, so that by itself it could be displayed as a monochrome picture. U and V between them carry the colour information. They are first mixed with two orthogonal phases of a colour carrier signal to form a signal called the chrominance. Y and UV are then added together. Since Y is a baseband signal and UV has been mixed with a carrier, this addition is equivalent to frequency-division multiplexing.
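The quadrature mixing described above can be sketched in a few lines of Python. This is a simplified, hypothetical model (constant colour, ideal oscillators, no filtering or burst synchronization), intended only to show why two colour components can share one subcarrier:

```python
import math

F_SC = 4.43361875e6   # PAL colour subcarrier frequency, Hz
FS = 8 * F_SC         # sampling rate used in this sketch (8x subcarrier)

def composite(y, u, v, n):
    """Sample n of the composite signal: Y plus U and V modulated onto
    two orthogonal phases of the colour subcarrier."""
    ph = 2 * math.pi * F_SC * n / FS
    return y + u * math.sin(ph) + v * math.cos(ph)

def demodulate(samples):
    """Synchronous detection: multiply by each subcarrier phase and
    average over whole subcarrier cycles to recover U and V."""
    total = len(samples)
    u = 2 * sum(s * math.sin(2 * math.pi * F_SC * n / FS)
                for n, s in enumerate(samples)) / total
    v = 2 * sum(s * math.cos(2 * math.pi * F_SC * n / FS)
                for n, s in enumerate(samples)) / total
    return u, v

# A constant colour held for 8 whole subcarrier cycles (64 samples)
sig = [composite(0.5, 0.3, -0.2, n) for n in range(64)]
u, v = demodulate(sig)
print(round(u, 3), round(v, 3))  # 0.3 -0.2
```

Because the two carrier phases are orthogonal, each multiplication rejects the other component (and the baseband Y) when averaged over whole cycles; this is exactly why a receiver needs the phase reference provided by the colorburst, described next.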
5.2 Colorburst
In composite video, colorburst is a signal used to keep the chrominance subcarrier synchronized in a color television signal. By synchronizing an oscillator with the colorburst
at the beginning of each scan line, a television receiver is able to restore the suppressed carrier of the chrominance signals, and in turn decode the color information.
Lower VHF range: Band I, 41-68 MHz
Upper VHF range: Band III, 174-230 MHz
UHF range: Band IV, 470-582 MHz
UHF range: Band V, 606-790 MHz
(Band II, 88-108 MHz, is allotted for FM broadcasting.)
TELEVISION CHANNEL ALLOCATIONS

Channel   Frequency range (MHz)   Picture carrier (MHz)   Sound carrier (MHz)
1         41-47                   not used for TV         -
2         47-54                   48.25                   53.75
3         54-61                   55.25                   60.75
4         61-68                   62.25                   67.75
5         174-181                 175.25                  180.75
6         181-188                 182.25                  187.75
7         188-195                 189.25                  194.75
8         195-202                 196.25                  201.75
9         202-209                 203.25                  208.75
10        209-216                 210.25                  215.75
11        216-223                 217.25                  222.75
The channel allocations in band I and band III are given in the table. There are four channels in band I, of which channel 1 (6 MHz) is no longer used for TV broadcasting, having been assigned to other services.
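Assuming the CCIR-B relationship implied by the table (vision carrier 1.25 MHz above the lower channel edge, sound carrier 5.5 MHz above the vision carrier), the band III entries can be regenerated with a short sketch:

```python
def carriers(lower_edge_mhz):
    """Picture and sound carrier of a 7 MHz CCIR-B channel:
    vision = lower edge + 1.25 MHz, sound = vision + 5.5 MHz."""
    vision = lower_edge_mhz + 1.25
    return vision, vision + 5.5

# Band III channels 5 to 11 start at 174 MHz in 7 MHz steps
for ch, edge in zip(range(5, 12), range(174, 223, 7)):
    vision, sound = carriers(edge)
    print(f"channel {ch}: picture {vision} MHz, sound {sound} MHz")
```

The 5.5 MHz vision-to-sound spacing is the same intercarrier spacing used later in the vestigial sideband discussion.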
UHF bands. By international ruling of the ITU, these ranges are exclusively allocated to television broadcasting. Subdivision into operating channels and their assignment by location are also ruled by international regional agreement. The continental standards are valid as per the CCIR 1961 Stockholm plan. The details of the various system parameters are as follows.

Band               Frequency                        Channels           Bandwidth
I                  (41) 47 to 68 MHz                2 to 4             7 MHz
II                 87.5 (88) to 108 MHz             VHF FM sound       -
III                174 to 223 (230) MHz             5 to 11 (12)       7 MHz
IV                 470 to 582 MHz                   21 to 27           8 MHz
V                  582 to 790 (860) MHz             28 to 60 (69)      8 MHz
VI                 11.7 to 12.5 GHz                 superseded         -
Special channels   68 to 82 (89) MHz                2 (3) S channels   7 MHz
Cable TV           104 to 174 and 230 to 300 MHz    S1 to S20          7 MHz
The saving of frequency band is about 40%. The polarity is negative because of the susceptibility to interference of the synchronizing circuits of early TV receivers (exception: positive modulation); the residual carrier with negative modulation is 10% (exception: 20%).
Sound: F3E; FM for better separation from the vision signal in the receiver (exception: AM).
Sound carrier above the vision carrier within the RF channel, inverted at IF (exception: standards A, E and, in part, L).
In the video signal, very low frequency modulating components exist along with the rest of the signal. These components give rise to sidebands very close to the carrier frequency which are difficult to remove by physically realizable filters. Thus it is not possible to go to the extreme and fully suppress one complete sideband in the case of television signals. The low video frequencies contain the most important information of the picture, and any effort to completely suppress the lower sideband would result in objectionable phase distortion at these frequencies. This distortion would be seen by the eye as 'smear' in the reproduced picture. Therefore, as a compromise, only a part of the lower sideband is suppressed, and the radiated signal then consists of a full upper sideband together with the carrier, and the vestige (remaining part) of the partially suppressed lower sideband. This pattern of transmission of the modulated signal is known as vestigial sideband or A5C transmission. In the 625-line system, frequencies up to 0.75 MHz in the lower sideband are fully radiated. The net result is normal double sideband transmission for the lower video frequencies corresponding to the main body of picture information. As stated earlier, because of filter design difficulties it is not possible to terminate the bandwidth of a signal abruptly at the edges of the sidebands. Therefore, an attenuation slope covering approximately 0.5 MHz is allowed at either end.
Any distortion at the higher frequency end, if attenuation slopes were not allowed, would mean a serious loss in horizontal detail, since the high frequency components of the video modulation determine the amount of horizontal detail in the picture. The figure illustrates the saving of band space which results from vestigial sideband transmission. The picture signal is seen to occupy a bandwidth of 6.75 MHz instead of 11 MHz.
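One way to account for these two figures, assuming a 5 MHz video bandwidth for the 625-line system and the 0.5 MHz attenuation slopes mentioned above, is the following sketch:

```python
VIDEO_BW = 5.0   # MHz, video bandwidth of the 625-line system (assumed)
VESTIGE = 0.75   # MHz, radiated part of the lower sideband
SLOPE = 0.5      # MHz, attenuation slope allowed at each end

# Full double-sideband: two full sidebands plus a slope at each edge
dsb = 2 * (VIDEO_BW + SLOPE)
# Vestigial sideband: full upper sideband, the vestige, and two slopes
vsb = VIDEO_BW + VESTIGE + 2 * SLOPE
print(dsb, vsb)  # 11.0 6.75
```

The accounting here is illustrative rather than taken from the text, but it reproduces both quoted bandwidths and makes the roughly 40% saving evident.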
5.9 Transmission
This high channel capacity can only be achieved with internal studio links via coaxial cables or fiber optics. In the public communications networks of present-day technology, the limit per channel lies at the hierarchical step of 34 Mbit/s for microwave links, later 140 Mbit/s. Therefore great efforts are being made to reduce the bit rate, with the aim of achieving satisfactory picture quality at 34 Mbit/s per channel. Terrestrial TV transmitters and coaxial copper cable networks are unsuitable for digital transmissions; satellites with carrier frequencies of about 20 GHz and above may be used.

The digital coding of sound signals for satellite sound broadcasting and for the digital sound studio is more elaborate with respect to quantizing than for video signals. A quantization q of 16 bits per amplitude value is required to obtain a quantizing signal-to-noise ratio S/Nq of 98 dB [S/Nq = (6q + 2) dB = (96 + 2) dB]. The sampling frequency must follow the sampling theorem, f_sample >= 2 x f_max, where f_max is the maximum frequency of the baseband.

f_sample    Quantization q    Data flow per channel
32 kHz      16 bits           512 kbit/s
48 kHz      16 bits           768 kbit/s
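The per-channel data rates quoted above follow directly from the sampling rate and quantization. A small sketch (the 32 kHz and 48 kHz sampling frequencies are inferred from the quoted 512 and 768 kbit/s rates):

```python
def pcm(f_sample_hz, q_bits):
    """Linear PCM per channel: data rate = f_sample * q bits per second,
    quantizing S/N approximately (6q + 2) dB."""
    return f_sample_hz * q_bits, 6 * q_bits + 2

print(pcm(32_000, 16))  # (512000, 98) -> 512 kbit/s, 98 dB
print(pcm(48_000, 16))  # (768000, 98) -> 768 kbit/s, 98 dB
```

Both sampling rates satisfy the sampling theorem for a 15 kHz (broadcast) or 20 kHz (studio) audio baseband.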
6. SATELLITE COMMUNICATION
Television could not exist in its contemporary form without satellites. Since 10 July 1962, when NASA technicians in Maine transmitted fuzzy images of themselves to engineers at a receiving station in England using the Telstar satellite, orbiting communications satellites have been routinely used to deliver television news and programming between companies and to broadcasters and cable operators. And since the mid-1980s they have been increasingly used to broadcast programming directly to viewers, to distribute advertising, and to provide live news coverage.
6.1.1 Geostationary Orbit

As indicated in Section 7.12, satellites orbiting at a height of about 36,000 km from the earth, at an orbital speed of about 3 km/s (11,000 km/h), act as geostationary satellites when the centrifugal force acting on the satellite just balances the gravitational pull of the earth. If M is the mass of the earth, m the mass of the satellite, r the radius of the orbit and G the gravitational constant, equating the centrifugal force to the gravitational pull gives

    m v^2 / r = G M m / r^2

where v = 2 pi r / T and the orbital period T = 24 x 3600 seconds. Putting M = 5.974 x 10^24 kg and G = 6.672 x 10^-11 N m^2/kg^2 gives the orbital radius of a synchronous satellite as 42,164 km. Deducting the radius of the earth, 6378 km, the height above the earth's surface works out to 35,786 km.
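The calculation can be reproduced in Python. Note that using the sidereal day (86,164 s) rather than the 24-hour solar day of the text gives the quoted radius; the text's 24 h is a close approximation:

```python
import math

G = 6.672e-11     # gravitational constant, N m^2 / kg^2
M = 5.974e24      # mass of the earth, kg
T = 86164.0       # sidereal day in seconds (the text's 24 h is approximate)
R_EARTH = 6378.0  # equatorial radius of the earth, km

# From m*v^2/r = G*M*m/r^2 with v = 2*pi*r/T:
#   r^3 = G * M * T^2 / (4 * pi^2)
r_km = (G * M * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3) / 1000.0
height_km = r_km - R_EARTH
print(round(r_km), round(height_km))  # roughly 42164 and 35786
```

The cube-root dependence means the orbit radius is quite insensitive to small errors in G and M, which is why the simple balance of forces gives such an accurate answer.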
6.1.2 Footprints
As the satellite radio beam is aimed towards the earth, it illuminates on the earth an oval service area, called the 'footprint'. Because of the slant illumination of the earth by the equatorial satellite, this is actually an egg-shaped area with the sharper side pointing towards the pole. The size of the footprint depends on how greatly the beam spreads over the surface of the earth intercepted by it. The footprints for contours of 3 dB, or half-power beamwidth, are usually considered. The beamwidth planning depends on the angle of incidence of the beam on the earth, or the angle of elevation of the satellite. It can be directly controlled by the size of the on-board parabolic antenna. Present-day launchers can carry antennas of around 3 m, giving a minimum beamwidth of about 0.6 degrees. With the difficulties of accurate station-keeping, it is prudent to allow for a margin of around 0.1 degree when planning the footprint to cover a country. Some satellites employ additional antennas to emit spot beams that cover regions beyond the normal oval shape. Computing the slant range of a satellite involves calculating the distances from the boresight point of the beam, covered by the semi-beamwidth angle, considering the geometry of the footprint.
The radiation pattern from a parabolic dish follows a (sin x)/x diffraction pattern, and the width of the main lobe is given by twice the half-lobe angle. The 3 dB beamwidth of the main lobe is given by

    theta_3dB = 58 (lambda/D) degrees

where lambda is the wavelength and D the diameter of the dish.
It may be observed that the antenna gain is inversely proportional to the square of the beamwidth. That is, a decrease of the beamwidth by a factor of 2, obtained by doubling the diameter of the dish, increases the antenna gain by a factor of 4 (6 dB).
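A short sketch of the beamwidth formula, using the 3 m on-board antenna mentioned above; the 11 GHz downlink frequency is an assumption for illustration:

```python
C = 3.0e8  # speed of light, m/s

def beamwidth_deg(freq_hz, dish_diameter_m):
    """3 dB beamwidth of a uniformly illuminated parabolic dish,
    theta = 58 * (wavelength / D) degrees."""
    wavelength = C / freq_hz
    return 58.0 * wavelength / dish_diameter_m

# A 3 m on-board antenna at an assumed 11 GHz downlink frequency
bw = beamwidth_deg(11e9, 3.0)
print(round(bw, 2))  # 0.53

# Doubling the dish diameter halves the beamwidth; since gain varies as
# 1/beamwidth^2, this raises the gain by a factor of 4 (6 dB)
assert abs(beamwidth_deg(11e9, 6.0) - bw / 2) < 1e-12
```

The result is consistent with the minimum beamwidth of about 0.6 degrees quoted for present-day 3 m antennas.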
An earth station or ground station is the surface-based (terrestrial) end of a communications link to an object in outer space. The space end of the link is occasionally referred to as a space station (though that term often implies a human-inhabited complex). The majority of earth stations are used to communicate with communications satellites, and are called satellite earth stations or teleports, but others are used to communicate with space probes, and manned spacecraft. Where the communications link is used mainly to carry
telemetry, or must follow a satellite not in geostationary orbit, the earth station is often referred to as a tracking station. A satellite earth station is a communications facility with a microwave radio transmitting and receiving antenna and the receiving and transmitting equipment required for communicating with satellites (also known as space stations). Many earth station receivers use the double superhet configuration shown in the figure, which has two stages of frequency conversion. The front end of the receiver is mounted behind the antenna feed and converts the incoming RF signals to a first IF in the range 900 to 1400 MHz. This allows the receiver to accept all the signals transmitted from a satellite in a 500 MHz bandwidth at C band or Ku band, for example. The RF amplifier has a high gain, and the mixer is followed by a stage of IF amplification. This section of the receiver is called a low noise block converter (LNB). The 900-1400 MHz signal is sent over a coaxial cable to a set-top receiver that contains another down-converter and a tunable local oscillator. The local oscillator is tuned to convert the incoming signal from a selected transponder to a second IF frequency. The second IF amplifier has a bandwidth matched to the spectrum of the transponder signal. Direct broadcast satellite TV receivers at Ku band use this approach, with a second IF filter bandwidth of 20 MHz.
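The frequency plan of such a double superhet can be sketched with hypothetical but typical local oscillator values; apart from the 900-1400 MHz first IF range, none of these figures are from the text:

```python
def first_if_mhz(rf_ghz, lnb_lo_ghz):
    """The LNB block-converts the whole received band to the first IF."""
    return (rf_ghz - lnb_lo_ghz) * 1000.0

# Hypothetical Ku-band plan: a 10.75 GHz LNB local oscillator places an
# 11.65-12.15 GHz downlink band at the 900-1400 MHz first IF quoted above.
lnb_lo = 10.75
low, high = first_if_mhz(11.65, lnb_lo), first_if_mhz(12.15, lnb_lo)
print(round(low), round(high))  # 900 1400

# The set-top tuner's second local oscillator then selects one transponder,
# e.g. a transponder centred at 1200 MHz first IF mixed down to a
# (hypothetical) 479.5 MHz second IF:
second_if = 1200.0 - 720.5
print(second_if)  # 479.5
```

Splitting the conversion this way lets the fixed, broadband LNB sit at the antenna while all the channel selection happens indoors at the set-top receiver.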
A satellite receives television broadcasts from a ground station. This is termed uplinking because the signals are sent up from the ground to the satellite. These signals are then broadcast down over the footprint area in a process called downlinking.
To ensure that the uplink and downlink signals do not interfere with each other, separate frequencies are used for uplinking and downlinking.
Band         Downlink (GHz)    Uplink (GHz)
S band       2.555 to 2.635    5.855 to 5.935
C band       3.4 to 3.7        5.725 to 5.925
C band       3.7 to 4.2        5.925 to 6.425
C band       4.5 to 4.8        6.425 to 7.075
Ku band      10.7 to 13.25     12.75 to 14.25
Ka band      18.3 to 22.20     27.0 to 31.00
6.4 Transponders
The word transponder is coined from transmitter-responder, and it refers to the equipment channel through the satellite that connects the receive antenna with the transmit antenna. The transponder itself is not a single unit of equipment, but consists of some units that are common to all transponder channels and others that can be identified with a particular channel.
7. CONCLUSION

Doordarshan and its services are available today all over India via highly advanced networking facilities and technology tie-ups with satellite and cable system operators. It has been at the forefront in the use of satellite networking, secure encryption, subscriber management services and call centre technologies. It is one of the oldest and fastest growing industries in communication. To date it has provided its most valuable service to people not only in India but across the globe, and it promises to move ahead and continue its work in the same manner.