Chapter 4 Video and Animation
Visual Representation
The main objective of visual representation is to offer the viewer a sense of presence in
the scene and of participation in the events portrayed. To meet the main objective, the
televised image should convey the spatial and temporal content of the scene. Important
measures are:
1. Vertical detail and viewing distance:
The geometry of the field occupied by the television image is based on the ratio of
the picture width W to height H. It is called aspect ratio.
Aspect ratio: ratio of picture width and height (4/3 = 1.33 is the conventional
aspect ratio)
Viewing-distance ratio: viewing distance divided by picture height (D/H); it
determines the angle the picture subtends at the viewer's eye.
4. Perception of depth:
In natural vision, this is determined by angular separation of images
received by the two eyes of the viewer.
In the flat image of TV, the focal length of lenses and changes in depth of
focus in a camera influence depth perception.
7. Continuity of motion:
Motion continuity is achieved at a minimum of 15 frames per second; it is good
at 30 frames/sec; some technologies allow 60 frames/sec.
NTSC standard provides 30 frames/sec - 29.97 Hz repetition rate.
PAL standard provides 25 frames/sec with 25Hz repetition rate.
8. Flicker effect:
Flicker effect is a periodic fluctuation of brightness perception. To avoid this
effect, we need at least 50 refresh cycles per second. Display devices use a
display refresh buffer for this.
Transmission
Video signals are transmitted to receivers through a single television channel. To encode
color, a video signal is a composite of three signals. For transmission purposes, a video
signal consists of one luminance and two chrominance signals.
Video bandwidth is computed as follows:
700/2 pixels per line × 525 lines per picture × 30 pictures per second ≈ 5.5 MHz
The visible number of lines is 480.
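As a quick check of the arithmetic, the figures above can be multiplied out in a short script; the division by two reflects the assumption that one cycle of the analog signal carries two horizontal samples:

```python
# Approximate luminance bandwidth from the figures in the text.
# One cycle of the analog signal can represent two adjacent samples,
# hence the division of pixels per line by two.
pixels_per_line = 700
lines_per_picture = 525
pictures_per_second = 30

bandwidth_hz = (pixels_per_line / 2) * lines_per_picture * pictures_per_second
print(f"Approximate video bandwidth: {bandwidth_hz / 1e6:.2f} MHz")
```

This works out to roughly 5.5 MHz, consistent with the bandwidth of a conventional television channel.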
Color Encoding:
A camera creates three signals
RGB (red, green and blue)
For transmission of the visual signal, we use three signals: 1 luminance
(brightness-basic signal) and 2 chrominance (color signals).
In NTSC, luminance and chrominance are interleaved
Goals at the receiver:
- separate luminance from chrominance components
- avoid interference between them prior to recovery of primary color signals
for display.
RGB signal
for separate signal coding
consists of three separate signals for the red, green, and blue colors. Other colors
are coded as a combination of the primary colors (R + G + B = 1 yields neutral white).
YUV signal
separate brightness (luminance) component Y and
color information (2 chrominance signals U and V)
Y = 0.3R + 0.59G + 0.11B
U = (B-Y) * 0.493
V = (R-Y) * 0.877
Resolution of the luminance component is more important than U,V
Coding ratio of Y, U, V is 4:2:2
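The YUV equations above translate directly into code. This is a minimal sketch, and the function name `rgb_to_yuv` is purely illustrative:

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0..1) to YUV using the weights in the text."""
    y = 0.3 * r + 0.59 * g + 0.11 * b   # luminance
    u = (b - y) * 0.493                 # chrominance (blue difference)
    v = (r - y) * 0.877                 # chrominance (red difference)
    return y, u, v
```

For a neutral white input (R = G = B = 1), Y comes out as 1 and both chrominance components are zero, as expected.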
YIQ signal
similar to YUV - used by the NTSC format
Y = 0.30R + 0.59G + 0.11B
I = 0.60R - 0.28G - 0.32B
Q = 0.21R - 0.52G + 0.31B
Composite signal
All information is composed into one signal
To decode, modulation methods are needed to eliminate interference between the
luminance and chrominance components.
Digitization
Before a picture or motion video can be processed by a computer or transmitted over a
computer network, it needs to be converted from an analog to a digital representation.
Digitization is the representation of an object, image, sound, document or a signal
(usually analog signal) by a discrete set of its points or samples.
Digitization = Sampling + Quantization
Sampling is the reduction of a continuous signal to a discrete signal.
This refers to sampling the gray/color level in the picture at an M×N array of points.
Once points are sampled, they are quantized into pixels
sampled value is mapped into an integer
the quantization level depends on the number of bits used to represent the
resulting integer, e.g., 8 bits per pixel or 24 bits per pixel.
Need to create motion when digitizing video
digitize pictures in time
obtain sequence of digital images per second to approximate analog motion
video.
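A minimal sketch of the quantization step, assuming samples normalized to [0, 1]; the function name `quantize` is illustrative:

```python
def quantize(sample, bits=8):
    """Map a sampled intensity in [0, 1] to an integer quantization level."""
    levels = 2 ** bits                            # e.g. 256 levels for 8 bits
    return min(int(sample * levels), levels - 1)  # clamp the sample == 1.0 edge case
```

With 8 bits per pixel this yields 256 levels; 24 bits per pixel allots 8 bits to each of three color components.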
[Figure: CPU and peripheral devices in a raster display system]
The video controller displays the image stored in the frame buffer, accessing the memory
through a separate access port as often as the raster scan rate dictates. The constant
refresh of the display is its most important task. Because of the disturbing flicker effect,
the video controller cycles through the frame buffer, one scan line at a time, typically 60
times/second.
Some computer video controller standards are given here as examples. Each of these
systems supports a different resolution and color presentation.
Television
Television is the most important application that has driven the development of motion
video. Television is a telecommunication medium for transmitting and receiving moving
images that can be monochrome (black and white) or colored, with or without
accompanying sound. Television may also refer specifically to a television set, television
programming or television transmission.
Conventional Systems
Conventional systems are used in black-and-white and color television. Conventional
television systems employ the following standards:
NTSC (National Television Systems Committee)
• NTSC developed in U.S., is the oldest and most widely used television standard.
• The color subcarrier frequency is approximately 3.58 MHz.
• NTSC uses quadrature amplitude modulation with a suppressed color carrier
and works with a motion frequency of approximately 30 Hz.
• 4:3 aspect ratio.
• 525 lines
• 30 frames per second.
• Scanned in interlaced fields (two fields per frame).
Enhanced Systems
Enhanced Definition Television Systems (EDTV) are conventional systems modified to
offer improved vertical and/or horizontal resolution. EDTV is an intermediate step
toward digital, interactive television systems and their coming standards.
Digital coding is essential in the design and implementation of HDTV. There are two
kinds of possible digital coding: composite coding and component coding.
Composite Coding:
The simplest possibility for digitizing a video signal is to sample the composite analog
video signal. Here, all signal components are converted together into a digital
representation. The composite coding of the color television signal depends on the
television standard. Unlike component coding, the sampling frequency is coupled with
the color carrier frequency.
Component Coding:
The principle of component coding consists of separate digitization of various image
components or planes; for example, coding of luminance and color difference
(chrominance) signals. These digital signals can be transmitted together using
multiplexing. The luminance signal is sampled at 13.5 MHz, as it is more important
than the chrominance signals.
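Assuming the common 4:2:2 digital studio convention of 6.75 MHz sampling for each chrominance signal and 8 bits per sample (values not stated above), the uncompressed data rate can be estimated:

```python
luminance_rate_hz = 13.5e6    # Y sampling frequency, from the text
chrominance_rate_hz = 6.75e6  # per chrominance signal (assumed 4:2:2 convention)
bits_per_sample = 8           # assumed sample depth

total_bit_rate = (luminance_rate_hz + 2 * chrominance_rate_hz) * bits_per_sample
print(f"Uncompressed data rate: {total_bit_rate / 1e6:.0f} Mbit/s")
```

This gives 216 Mbit/s, which is why compression is essential before such a signal can be transmitted or stored economically.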
objects. Choose the animation tool best suited for the job. Then build and tweak your
sequences; experiment with lighting effects. Allow plenty of time for this phase when
you are experimenting and testing. Finally, post-process your animation, doing any
special rendering and adding sound effects.
Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each
frame, which have been replaced today by acetate or plastic. Cels of famous animated
cartoons have become sought-after, suitable-for-framing collector’s items. Cel animation
artwork begins with key frames (the first and last frame of an action). For example, when
an animated figure of a man walks across the screen, he balances the weight of his entire
body on one foot and then the other in a series of falls and recoveries, with the opposite
foot and leg catching up to support the body.
The animation techniques made famous by Disney use a series of progressively different
graphics on each frame of movie film, which plays at 24 frames per second. A minute of
animation may thus require as many as 1,440 separate frames.
Computer Animation
Computer animation programs typically employ the same logic and procedural concepts
as cel animation, using layer, key frame, and tweening techniques, and even borrowing
from the vocabulary of classic animators. On the computer, paint is most often filled or
drawn with tools using features such as gradients and antialiasing. The word inks, in
computer animation terminology, usually means special methods for computing RGB
pixel values, providing edge detection, and layering so that images can blend or
otherwise mix their colors to produce special transparencies, inversions, and effects.
The primary difference between animation software programs is in how much must be
drawn by the animator and how much is generated automatically by the software. In 2D
animation, the animator creates an object and describes a path for the object to follow;
the software then creates the animation on the fly as the program is viewed by the user.
In 3D animation, the animator puts his effort into creating models of individual objects
and designing the characteristics of their shapes and surfaces.
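Tweening between two key frames can be sketched as simple linear interpolation; the function name `tween` and the 2D point tuples are illustrative assumptions:

```python
def tween(start, end, frames):
    """Interpolate (x, y) positions between two key frames, endpoints included."""
    return [
        (start[0] + (end[0] - start[0]) * t / (frames - 1),
         start[1] + (end[1] - start[1]) * t / (frames - 1))
        for t in range(frames)
    ]
```

Real animation packages also offer non-linear easing curves, but the idea of the software filling in the frames between key frames is the same.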
Kinematics
It is the study of the movement and motion of structures that have joints, such as a
walking man. Inverse kinematics, found in high-end 3D programs, is the process by which
you link objects such as hands to arms and define their relationships and limits. Once
those relationships are set, you can drag these parts around and let the computer
calculate the result.
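As a sketch of the underlying arithmetic, forward kinematics for a two-joint limb (shoulder and elbow) computes the hand position from the joint angles; inverse kinematics solves the reverse problem. The function name and parameters here are illustrative:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """Hand position of a two-segment limb from segment lengths and joint angles (radians)."""
    elbow_x = l1 * math.cos(theta1)
    elbow_y = l1 * math.sin(theta1)
    hand_x = elbow_x + l2 * math.cos(theta1 + theta2)  # elbow angle is relative
    hand_y = elbow_y + l2 * math.sin(theta1 + theta2)
    return hand_x, hand_y
```

With both angles at zero the limb is fully extended along the x-axis, so the hand sits at distance l1 + l2 from the shoulder.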
Morphing
Morphing is a popular effect in which one image transforms into another. Morphing
applications and other modeling tools that offer this effect can perform transitions not
only between still images but often between moving images as well. In one example, the
morphed images were built at a rate of 8 frames per second, with each transition taking a
total of 4 seconds.
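A full morph warps image geometry as well as color, but its cross-dissolve component can be sketched as a per-pixel blend over a fixed number of frames (function and parameter names are illustrative):

```python
def cross_dissolve(src, dst, frames):
    """Blend two equal-length pixel lists over `frames` steps (no geometric warp)."""
    sequence = []
    for f in range(frames):
        t = f / (frames - 1)  # 0.0 at the source image, 1.0 at the destination
        sequence.append([(1 - t) * a + t * b for a, b in zip(src, dst)])
    return sequence
```

At 8 frames per second, a 4-second transition like the one described above would use 32 such blended frames.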
operate on objects. The ASAS program fragment below describes an animated sequence
in which an object called my-cube is spun while the camera pans.
Constraint-based Systems
Constraint-based systems support a hierarchy of constraints. Providing motion where
constraints are specified by the dynamics of physical bodies and the structural
characteristics of materials is a subject of active research.
Tracking Live Action
Trajectories of objects in the course of an animation can also be generated by tracking
live action. Traditional animation uses rotoscoping for this. Another live-action
tracking technique is to attach some sort of indicator to key points on a person's body.