UNIT-IV Video & Animation
Video is a combination of image and audio. It consists of a set of still images called frames, displayed one after another at a specific speed known as the frame rate, measured in frames per second (fps). If the frames are displayed fast enough, the eye cannot distinguish the individual frames; persistence of vision merges them into each other, creating an illusion of motion. The frame rate should range between 20 and 30 fps for the viewer to perceive smooth, realistic motion. Audio is added and synchronized with the apparent movement of images. The recording and editing of sound has long been in the domain of the PC, but doing so with motion video has only recently gained acceptance because of the enormous file sizes video requires.
Example: One second of 24-bit, 640 × 480 video and its associated audio requires about 27 MB of space: (640 × 480 pixels) × (24 bits/pixel) × (30 frames/sec) ≈ 27 MB/sec. Thus, a 20-minute clip fills up about 32 GB of disk space.
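The storage arithmetic above can be reproduced with a short sketch (the helper name is hypothetical; it counts uncompressed video only and ignores the audio track):

```python
def raw_video_size(width, height, bits_per_pixel, fps, seconds):
    """Bytes of storage needed for uncompressed video (audio excluded)."""
    bytes_per_frame = width * height * bits_per_pixel // 8
    return bytes_per_frame * fps * seconds

# One second of 24-bit, 640 x 480 video at 30 fps:
print(raw_video_size(640, 480, 24, 30, 1))        # 27648000 bytes, about 27 MB
# A 20-minute clip (1,200 seconds):
print(raw_video_size(640, 480, 24, 30, 20 * 60))  # 33177600000 bytes, roughly 32 GB
```

The figures agree with the 27 MB per second and roughly 32 GB per 20 minutes quoted above.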
Of all the multimedia elements, video places the highest performance demand on your computer and its memory and storage. Consider that a high-quality color still image on a computer screen could require as much as a megabyte of storage. Multiply this by 30 (the number of times per second that the still picture is replaced to provide the appearance of motion) and you would need 30 megabytes of storage to play your video for one second, or 1.8 gigabytes for a minute. Just moving these entire pictures from computer memory to the screen at that rate would challenge the processing capability of a supercomputer. Multimedia technologies and research efforts today deal with compressing digital video image data into manageable streams of information, so that a massive amount of image data can be squeezed into a comparatively small file that still delivers a good viewing experience on the intended platform during playback.
Carefully planned, well-executed video clips can make a dramatic difference in a multimedia
project.
Analog Video
When light reflected from an object passes through a video camera lens, that light is converted into electronic signals by a special sensor called a charge-coupled device (CCD). Top-quality broadcast cameras may have as many as three CCDs (one for each of red, green, and blue) to enhance the resolution of the camera. The output of the CCD is processed by the camera into a signal containing three channels of color information and synchronization pulses (sync). There are several video standards for managing CCD output, each dealing with the amount of separation between the components of the signal. The more separation of the color information in the signal, the higher the quality of the image (and the more expensive the equipment).
Digital Video
Analog video has been used for years in recording / editing studios and television broadcasting.
For the purpose of incorporating video content in multimedia production video needs to be
converted into the digital format.
It has already been mentioned that processing digital video on personal computers was very difficult initially, firstly because of the huge file sizes required, and secondly because of the large bit rate and processing power required. Full-screen video only became a reality after the advent of the Pentium-II processor, together with fast disks capable of delivering the required output. Even with these powerful resources, delivering video files was difficult until the prices of compression hardware and software fell. Compression reduced the size of video files to a great extent, so a lower bit rate was needed to transfer them over communication buses. Nowadays video is rarely viewed in uncompressed form unless there is a specific reason for doing so, e.g. to maintain high quality for medical analysis.
Using Video On PC
Analog video needs to be converted to the digital format before it can be displayed on a PC
screen. The procedure for conversion involves two types of devices- source devices and capture
devices.
The source and source device can be one of the following:
Camcorder with pre-recorded video tape
VCP with pre-recorded video tape
Video camera with live footage
We need a video capture card to convert the analog signal to digital, along with video capture software such as AVI Capture, AVI to MPEG Converter, MPEG Capture, DAT to MPEG Converter or MPEG Editor.
Video Formats
Three analog broadcast video standards are commonly in use around the world: NTSC, PAL, and SECAM. In the United States, the NTSC standard is being phased out, replaced by the ATSC digital television standard. Because these standards and formats are not easily interchangeable, it is important to know where your multimedia project will be used. A video cassette recorded in the USA (which uses NTSC) will not play on a television set in any European country (which uses either PAL or SECAM), even though the recording method and style of the cassette is “VHS”. Likewise, tapes recorded in the European PAL or SECAM formats will not play back on an NTSC video cassette recorder. Each system is based on a different standard that defines the way information is encoded to produce the electronic signal that ultimately creates a television picture. Multi-format VCRs can play back all three standards but typically cannot dub from one standard to another; dubbing between standards still requires high-end specialized equipment.
The United States, Canada, Mexico, Japan, and many other countries use a system for
broadcasting and displaying video that is based upon the specifications set forth by the 1952
National Television Standards Committee. These standards define a method for encoding
information into the electronic signal that ultimately creates a television picture. As specified by
the NTSC standard, a single frame of video is made up of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated picture tube every 1/30th of a second by a fast-moving electron beam. The drawing occurs so fast that your eye perceives the image as stable. The
electron beam actually makes two passes as it draws a single video frame, first laying down all
the odd-numbered lines, then all the even-numbered lines. Each of these passes (which happen at
a rate of 60 per second, or 60 Hz) paints a field, and the two fields are combined to create a
single frame at a rate of 30 frames per second (fps). (Technically, the rate is actually 29.97 fps.) This process of building a single frame from two fields is called interlacing, a technique that helps to prevent flicker on television screens. Computer monitors use a different, progressive-scan technology, drawing the lines of an entire frame in a single pass, without interlacing and without flicker.
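The two-pass field structure can be sketched in a few lines of Python (a toy model: each scan line is just a label, and the function name is hypothetical):

```python
def interlace(frame_lines):
    """Split a frame (a list of scan lines, index 0 = line 1) into the
    two fields an NTSC-style interlaced display draws in turn."""
    odd_field = frame_lines[0::2]   # lines 1, 3, 5, ... drawn on the first pass
    even_field = frame_lines[1::2]  # lines 2, 4, 6, ... drawn on the second pass
    return odd_field, even_field

lines = [f"line {n}" for n in range(1, 526)]  # the 525 NTSC scan lines
odd, even = interlace(lines)
print(len(odd), len(even))  # 263 262: each pass carries about half the frame
```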
The Phase Alternate Line (PAL) system is used in the United Kingdom, Western Europe,
Australia, South Africa, China, and South America. PAL increases the screen resolution to 625
horizontal lines, but slows the scan rate to 25 frames per second. As with NTSC, the even and
odd lines are interlaced, each field taking 1/50th of a second to draw (50 Hz).
The Sequential Color and Memory (SECAM, from the French Séquentiel Couleur à Mémoire) system is used in France, Eastern Europe, the former USSR, and a few other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the NTSC and the PAL color systems in its basic technology and broadcast method. Often, however, TV sets sold in Europe utilize dual components and can handle both PAL and SECAM systems.
What started as the High Definition Television (HDTV) initiative of the Federal
Communications Commission in the 1980s, changed first to the Advanced Television (ATV)
initiative and then finished as the Digital Television (DTV) initiative by the time the FCC
announced the change in 1996. This standard, slightly modified from the Digital Television
Standard and Digital Audio Compression Standard, moves U.S. television from an analog to
digital standard and provides TV stations with sufficient bandwidth to present four or five
Standard Television signals (STV, providing the NTSC resolution of 525 lines with a 4:3 aspect ratio, but in a digital signal) or one HDTV signal (providing 1,080 lines of resolution with a
movie screen's 16:9 aspect ratio). More significantly for multimedia producers, this emerging
standard allows for transmission of data to computers and for new ATV interactive services. As
of May 2003, 1,587 TV stations in the United States (94 percent) had been granted a DTV
construction permit or license. Among those, 1,081 stations were actually broadcasting a DTV
signal, almost all of them simulcasting their regular analog TV signal. According to the schedule at the time, all stations were to cease broadcasting on their analog channels and switch completely to a digital signal by 2006.
HDTV provides high resolution in a 16:9 aspect ratio, which allows the viewing of Cinemascope and Panavision movies. There is contention between the broadcast and computer industries about whether to use interlacing or progressive-scan technologies. The broadcast industry has promulgated an ultra-high-resolution, 1920x1080 interlaced format to become the cornerstone of a new generation of high-end entertainment centers, but the computer industry would like to settle on a 1280x720 progressive-scan system for HDTV. While the 1920x1080 format provides more pixels than the 1280x720 standard, the refresh rates are quite different. The higher-resolution interlaced format delivers only half the picture every 1/60th of a second, and because of the interlacing, highly detailed images show a great deal of screen flicker at 30 Hz. The computer people argue that the picture quality at 1280x720 is superior and steady. Both formats have been included in the HDTV standard by the Advanced Television Systems Committee (ATSC).
Today's multimedia monitors typically use a screen pixel ratio of 4:3 (800x600), but the new
HDTV standard specifies a ratio of 16:9 (1280x720), much wider than tall. There is no easy way
to stretch and shrink existing graphics material to this new aspect ratio, so new multimedia
design and interface principles will need to be developed for HDTV presentations.
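The aspect-ratio problem comes down to simple geometry: fitting a 4:3 picture into a 16:9 frame without distortion leaves bars. A small sketch (the helper name is hypothetical):

```python
def fit_letterbox(src_w, src_h, dst_w, dst_h):
    """Scale a source picture to fit a destination frame without
    distortion; returns the scaled size plus the total bar widths."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    return new_w, new_h, dst_w - new_w, dst_h - new_h

# Fitting 4:3 (800x600) material into a 16:9 (1280x720) HDTV frame:
print(fit_letterbox(800, 600, 1280, 720))  # (960, 720, 320, 0): pillar bars left and right
```

Stretching instead of padding would distort the image, which is why new design principles are needed rather than a mechanical conversion.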
TELEVISION:
Conventional System:
Analog television is the original television technology that uses analog signals to transmit video
and audio. In an analog television broadcast, the brightness, colors and sound are represented
by amplitude, phase and frequency of an analog signal.
Analog signals vary over a continuous range of possible values which means that electronic
noise and interference may be introduced. Thus with analog, a moderately weak signal
becomes snowy and subject to interference. In contrast, picture quality from a digital
television (DTV) signal remains good until the signal level drops below a threshold where
reception is no longer possible or becomes intermittent.
Analog television may be wireless (terrestrial television and satellite television) or can be
distributed over a cable network as cable television.
All broadcast television systems used analog signals before the arrival of DTV. Motivated by the
lower bandwidth requirements of compressed digital signals, since the 2000s a digital television
transition is proceeding in most countries of the world, with different deadlines for the cessation
of analog broadcasts.
In an analog system, the output of the CCD is processed by the camera into three channels of color information and synchronization pulses (sync), and the signals are recorded onto magnetic tape. There are several video standards for managing analog CCD output, each dealing with the amount of separation between the components; the more separation of the color information, the higher the quality of the image (and the more expensive the equipment). If each channel of color information is transmitted as a separate signal on its own conductor, the signal output is called component (separate red, green, and blue channels), which is the preferred method for higher-quality and professional video work. Lower in quality is the signal that makes up Separate Video (S-Video), using two channels that carry luminance and chrominance information. The least separation (and thus the lowest quality for a video signal) is composite, where all the signals are mixed together and carried on a single cable as a composite of the three color channels and the sync signal. The composite signal yields less precise color definition, which cannot be manipulated or color-corrected as much as S-Video or component signals.
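The luminance/chrominance split that S-Video relies on begins with a weighted sum of the three color channels. A minimal sketch using the classic ITU-R BT.601 luma weights used by NTSC/PAL-era video (the function name is illustrative):

```python
def rgb_to_luma(r, g, b):
    """Luminance (Y) from gamma-corrected RGB, ITU-R BT.601 weights.
    The eye is most sensitive to green, least to blue."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(rgb_to_luma(255, 255, 255)))  # 255: white carries full luminance
print(round(rgb_to_luma(0, 255, 0)))      # 150: green alone contributes most
```

The chrominance channels are then derived as color differences from Y, so a luminance-only (black-and-white) display can simply ignore them.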
Enhanced Definition System:
Digital television (DTV) is the transmission of television signals using digital rather than
conventional analog methods.
Advantages of DTV over analog TV include more efficient use of broadcast bandwidth (several standard-definition programs can share a single channel), better picture and sound quality under adequate reception, and support for data and interactive services.
Digital television is not the same thing as HDTV (high-definition television). HDTV describes a
new television format (including a new aspect ratio and pixel density), but not how the format
will be transmitted. Digital television can be either standard or high definition.
In the United States, analog television broadcasts will stop in 2009. Analog television sets will
require digital set-top boxes to convert transmissions. Most satellite and cable subscribers
already have converter boxes. However, people who previously used antennas for "over the air"
transmission and cable customers who now plug the cable directly into their sets will need
converters.
High-definition System
High-definition television, also known as HDTV, is a television system with a resolution significantly higher than that of the traditional formats (NTSC, SECAM, PAL).
HDTV is transmitted digitally, and its implementation therefore generally coincides with the introduction of digital television (DTV), a technology launched during the 1990s.
Although several patterns of high-definition television have been proposed or implemented, the
current HDTV standards are defined by ITU-R BT.709 as 1080i (interlaced), 1080p
(progressive) or 720p using the 16:9 screen format.
The term “high definition” can refer to the resolution specification itself or, more generally, to media capable of such definition, such as the video format or the television set. Of interest in the near future is high-definition video through the successors of the DVD, HD DVD and Blu-ray (the latter is expected to be adopted as the standard), and, consequently, projectors, LCD and plasma television sets, rear projectors, and video recorders with 1080p resolution.
High-definition television (HDTV) yields a better-quality image than standard television does, because it has a greater number of lines of resolution.
The visual information is some 2 to 5 times sharper because the gaps between the scan lines are
narrower or invisible to the naked eye.
The larger the size of the television the HD picture is viewed on, the greater the improvement in
picture quality. On smaller televisions there may be no noticeable improvement in picture
quality.
The lower-case “i” appended to the numbers denotes interlaced; the lower-case “p” denotes progressive. With the interlaced scanning method, the 1,080 lines of resolution are divided into two fields: the 540 odd-numbered lines are painted in one pass and the 540 even-numbered lines in the next. The progressive scanning method displays all 1,080 lines on every frame, requiring greater bandwidth.
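The bandwidth trade-off between interlaced and progressive formats can be made concrete by counting the pixels each one paints per second (a back-of-the-envelope sketch: display pixels only, ignoring blanking intervals and color encoding):

```python
def pixels_per_second(width, lines_per_pass, passes_per_second):
    """Raw display pixels painted each second."""
    return width * lines_per_pass * passes_per_second

# 1080i: two 540-line fields, 60 passes per second
i1080 = pixels_per_second(1920, 540, 60)
# 720p: all 720 lines in every pass, 60 passes per second
p720 = pixels_per_second(1280, 720, 60)
print(i1080, p720)  # 62208000 55296000
```

Because each interlaced pass carries only half the lines, 1080i's much larger frame costs only modestly more raw pixel rate than 720p, which is what makes the two formats comparable competitors.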
The modern entertainment industry, i.e. film and television, has reached new heights because of advances in animation, graphics, and multimedia. Television advertisements, cartoon serials, and presentation and model designs all use animation and multimedia techniques.
Animation Techniques
Animators have invented and used a variety of different animation techniques. Basically, there are six animation techniques, which we discuss one by one in this section.
Traditionally, most animation was done by hand: every frame had to be drawn individually. Since each second of animation requires 24 frames of film, the amount of effort required to create even the shortest of movies can be tremendous.
Keyframing
In this technique, a storyboard is laid out and then the artists draw the major frames of the animation. Major frames are the ones in which prominent changes take place; they are the key points of the animation. Keyframing requires that the animator specify critical, or key, positions for the objects. The computer then automatically fills in the missing frames by smoothly interpolating between those positions.
Procedural
In a procedural animation, the objects are animated by a procedure − a set of rules − not by key
framing. The animator specifies rules and initial conditions and runs simulation. Rules are often
based on physical rules of the real world expressed by mathematical equations.
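A minimal procedural animation, sketched in Python: the motion of a bouncing ball is generated entirely by rules (constant gravity plus a bounce rule) rather than by drawn frames. All names and constants here are illustrative:

```python
GRAVITY = -9.8   # m/s^2, the physical rule driving the motion
DT = 1 / 30      # one frame at 30 fps

def simulate_bounce(height, velocity, frames):
    """Generate one height value per frame from initial conditions."""
    positions = []
    for _ in range(frames):
        velocity += GRAVITY * DT
        height += velocity * DT
        if height < 0:            # rule: bounce off the floor,
            height = -height      # losing some energy each time
            velocity = -velocity * 0.8
        positions.append(height)
    return positions

# The animator supplies only initial conditions; the rules do the rest.
path = simulate_bounce(height=2.0, velocity=0.0, frames=90)
print(min(path) >= 0)  # True: the bounce rule keeps the ball above the floor
```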
Behavioral
In behavioral animation, an autonomous character determines its own actions, at least to a
certain extent. This gives the character some ability to improvise, and frees the animator from
the need to specify each detail of every character's motion.
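A toy sketch of the idea in Python (entirely hypothetical and one-dimensional): the character is given two rules, seek the target and keep away from other characters, and its frame-by-frame motion emerges from those rules rather than from animator-specified positions:

```python
def step(position, others, target):
    """One frame of an autonomous character: seek the target,
    but veer away from any character that gets too close."""
    dx = 1 if target > position else -1 if target < position else 0
    for other in others:
        if abs(other - position) < 2:        # rule: keep your distance
            dx = 1 if position > other else -1
    return position + dx

pos = 0
for _ in range(5):
    pos = step(pos, others=[3], target=10)
print(pos)  # 1: the character holds back near the obstacle instead of colliding
```

The animator never told the character to stop: that behavior emerged from the avoidance rule overriding the seek rule, which is the improvisation the technique allows.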
Key Framing
A keyframe is a frame where we define changes in the animation. Every frame is a keyframe when we create frame-by-frame animation. When someone creates a 3D animation on a computer, they usually don't specify the exact position of every object on every single frame; they create keyframes.
Keyframes are important frames during which an object changes its size, direction, shape, or other properties. The computer then figures out all the in-between frames, saving an enormous amount of the animator's time. The following illustrations depict the frames drawn by the user and the frames generated by the computer.
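The in-betweening described above can be sketched as simple linear interpolation (a hypothetical helper; real animation packages offer richer easing curves):

```python
def tween(keyframes, frame):
    """Interpolate an object's property between animator-set keyframes.
    keyframes: sorted list of (frame_number, value) pairs; the computer
    fills in the value for every frame in between."""
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    raise ValueError("frame outside the keyframed range")

# The animator sets only two keyframes: x = 0 at frame 0, x = 100 at frame 24.
keys = [(0, 0.0), (24, 100.0)]
print([round(tween(keys, f)) for f in (0, 6, 12, 24)])  # [0, 25, 50, 100]
```

Two hand-set values yield all 25 frames of motion, which is where the animator's time saving comes from.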