CIT371 Introduction to Computer Graphics and Animation
In the early 1960s IBM, Sperry-Rand, Burroughs and a few other computer
companies existed. The computers of the day had a few kilobytes of memory,
no operating systems to speak of and no graphical display monitors. The
peripherals were Hollerith punch cards, line printers, and roll-paper
plotters. The only programming languages supported were assembler,
FORTRAN, and Algol. Function graphs and "Snoopy" calendars were about
the only graphics done. In 1963 Ivan Sutherland presented his paper
Sketchpad at the Summer Joint Computer Conference. Sketchpad allowed
interactive design on a vector graphics display monitor with a light pen
input device. Most people mark this event as the origin of computer
graphics.
The state of the art in computing was an IBM 360 computer with about 64
KB of memory, a Tektronix 4014 storage tube, or a vector display with a
light pen (but these were very expensive).
Today most graphicists want an Intel PC with at least 256 MB of memory and
a 10 GB hard drive. Their display should have a graphics board that supports
real-time texture mapping. A flatbed scanner, color laser printer, digital
video camera, DVD, and MPEG encoder/decoder are the peripherals one
wants. The environment for program development is most likely Windows
and Linux, with Direct 3D and OpenGL, but Java 3D might become more
important. Programs would typically be written in C++ or Java.
What will happen in the near future is difficult to say, but high definition TV
(HDTV) is poised to take off (after years of hype). Ubiquitous, untethered,
wireless computing should become widespread, and audio and gestural
input devices should replace some of the functionality of the keyboard and
mouse.
You should expect 3-D modeling and video editing for the masses, computer
vision for robotic devices and capturing facial expressions, and realistic
rendering of difficult things like a human face, hair, and water. With any
luck C++ will fall out of favor.
1. Medical Imaging
There are few endeavors more noble than the preservation of life. Today, it
can honestly be said that computer graphics plays a significant role in
saving lives. The range of application spans from tools for teaching and
diagnosis, all the way to treatment. Computer graphics is a tool in medical
applications rather than a mere artifact. No cheating or tricks allowed.
5. Games
6. Entertainment
If you can imagine it, it can be done with computer graphics. Obviously,
Hollywood has caught on to this. Each summer, we are amazed by state-of-
the-art special effects. Computer graphics is now as much a part of the
entertainment industry as stunt men and makeup. The entertainment
industry plays many other important roles in the field of computer graphics.
For our purposes today, the models are already generated. The image may be drawn on a
monitor, printed on a laser printer, or written to a raster in memory or a file.
These different possibilities require us to consider device independence.
1. The display screen is coated with "phosphors" which emit light when
excited by an electron beam. (There are three types of phosphor, emitting red,
green, and blue light.) They are arranged in rows, with three phosphor dots
(R, G, and B) for each pixel.
3. An electron gun scans the screen, line by line, mapping out a scan
pattern. On each scan of the screen, each pixel is passed over once. Using
the contents of the frame buffer, the controller controls the intensity of the
beam hitting each pixel, producing a certain color.
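A rough illustrative sketch of the frame-buffer idea (the array size, the set_pixel helper and the colour values are assumptions, not part of the course material): per-pixel RGB intensities are stored in memory and read out one scan line at a time during refresh.

# Illustrative sketch only: a frame buffer as a 2-D array of (R, G, B) triples
# that the display controller reads out scan line by scan line during refresh.
WIDTH, HEIGHT = 640, 480                     # assumed resolution

# one 8-bit (R, G, B) triple per pixel, initialised to black
frame_buffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, colour):
    # the value stored here drives the beam intensity for the three
    # phosphor dots (R, G, B) of pixel (x, y) on the next refresh pass
    frame_buffer[y][x] = colour

set_pixel(100, 50, (255, 0, 0))              # a single red pixel

for y in range(HEIGHT):                      # refresh: every pixel of every
    scan_line = frame_buffer[y]              # scan line is visited once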
Graphics software
Graphics software (that is, the software tool needed to create graphics
applications) has taken the form of subprogram libraries. The libraries
contain functions to do things like draw points, lines, and polygons; apply
transformations; fill areas with color; and handle user interactions. An important
goal has been the development of standard hardware independent libraries
such as:
Hardware
Hardcopy:
Laser printer
Ink-jet printer
Film recorder
Electrostatic printer
Pen plotter
1. Mouse
2. Tablet and stylus
3. Force feedback device
4. Scanner
5. Live video streams
6. Display/output (e.g., screen, paper-based printer, video recorder, non-
linear editor).
1. Medical Imaging
2. Scientific Visualization
3. Computer Aided Design
4. Graphical User Interfaces (GUIs)
BRDFs can be classified into two classes: isotropic and anisotropic. The
two important properties of BRDFs are reciprocity and conservation of
energy.
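A minimal sketch of the simplest isotropic BRDF, the Lambertian model (the function name and albedo value are illustrative assumptions), showing how reciprocity and conservation of energy appear in code:

import math

# Illustrative sketch: the Lambertian model, the simplest isotropic BRDF.
# Its value  albedo / pi  does not depend on the incoming or outgoing
# direction, so reciprocity f(a, b) == f(b, a) holds trivially, and
# conservation of energy requires albedo <= 1.
def lambertian_brdf(w_in, w_out, albedo=0.8):
    assert 0.0 <= albedo <= 1.0              # conservation of energy
    return albedo / math.pi

a = (0.0, 0.0, 1.0)                          # two arbitrary directions
b = (0.7, 0.0, 0.7)
assert lambertian_brdf(a, b) == lambertian_brdf(b, a)   # reciprocity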
An equivalent representation
Require a single matrix to represent general affine transformations
Can be used to represent perspective transformations (later)
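A sketch of the single-matrix idea, assuming the usual homogeneous-coordinate representation of 2-D points (the function names and the example transform are illustrative, not from the course text):

import math

# Illustrative sketch: one 3x3 matrix in homogeneous coordinates represents a
# general 2-D affine transformation (here a rotation followed by a translation).
def affine_matrix(angle, tx, ty):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply(m, point):
    x, y = point
    v = [x, y, 1]                            # the point in homogeneous form
    return tuple(sum(m[r][k] * v[k] for k in range(3)) for r in range(2))

print(apply(affine_matrix(math.pi / 2, 5, 0), (1, 0)))   # roughly (5.0, 1.0)

Because the translation sits in the last column, rotations, scalings and translations all compose by ordinary matrix multiplication, which is what makes the single-matrix representation convenient.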
1. Parallel lines don't remain parallel, rendered object size decreases with
distance from the image plane
2. More realistic, provides a sense of being in the scene; used for
immersive environments
Advantages:
Disadvantages:
Two-part Mapping
Environmental Mapping
Bump Mapping
Elements from the bump map are mapped to a polygon in exactly the same
way as a surface texture, but they are interpreted as a perturbation to the
surface normal, which in turn affects the rendered intensity. The bump map
may contain:
Random patterns
Regular patterns
Surface detail
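A sketch of the perturbation step described above, assuming the bump map is a simple height field stored as a 2-D array (the function name, strength parameter and finite-difference scheme are illustrative assumptions):

# Illustrative sketch: perturb the surface normal by the local gradient of a
# height-field bump map, then use the perturbed normal for diffuse shading.
def shade_with_bump(normal, bump, u, v, light_dir, strength=1.0):
    du = bump[v][u + 1] - bump[v][u - 1]     # finite-difference gradient
    dv = bump[v + 1][u] - bump[v - 1][u]     # of the bump map at (u, v)
    nx, ny, nz = normal
    px, py, pz = nx - strength * du, ny - strength * dv, nz   # perturbed normal
    length = (px * px + py * py + pz * pz) ** 0.5
    px, py, pz = px / length, py / length, pz / length
    lx, ly, lz = light_dir
    return max(0.0, px * lx + py * ly + pz * lz)              # diffuse intensity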
The in-between frames are interpolated from the keyframes. Originally done
by armies of underpaid animators but now done with computers. A key
frame or keyframe is a location on a timeline which marks the beginning or
end of a transition. It holds special information that defines where a
transition should start or stop. The intermediate frames are interpolated
over time between those definitions to create the illusion of motion.
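A minimal sketch of in-betweening, assuming simple linear interpolation between keyframes (real systems usually use splines; the function and the hand-raising example are illustrative):

# Illustrative sketch: linear interpolation of a key-framed value over time;
# production systems typically use splines for smoother in-betweens.
def interpolate(keyframes, t):
    # keyframes is a list of (time, value) pairs sorted by time
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            s = (t - t0) / (t1 - t0)
            return v0 + s * (v1 - v0)
    return keyframes[-1][1]

# a hand raised from 0 to 90 degrees between frames 0 and 24:
keys = [(0, 0.0), (24, 90.0)]
print([round(interpolate(keys, f), 1) for f in range(0, 25, 6)])
# -> [0.0, 22.5, 45.0, 67.5, 90.0]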
Advantages of keyframing
Q. Define Kinematics
Forward Kinematics
Inverse Kinematics
With inverse kinematics, a user specifies the position of the end effector, p,
and the algorithm has to evaluate the required θ given p. That is, θ = f −1(p).
Usually, numerical methods are used to solve this problem, as it is often
nonlinear and either underdetermined or overdetermined. A system is
underdetermined when there is no unique solution, such as when there are
fewer equations than unknowns. A system is overdetermined when it is
inconsistent and has no solutions. Extra constraints are necessary to obtain
unique and stable solutions. For example, constraints may be placed on the
range of joint motion and the solution may be required to minimize the
kinetic energy of the system.
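A sketch of the numerical approach described above for a planar two-link arm (the link lengths, step size and finite-difference gradient descent are illustrative assumptions, not the course's algorithm):

import math

L1, L2 = 1.0, 1.0                            # assumed link lengths

def forward(t1, t2):                         # forward kinematics: theta -> p
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def inverse(target, iters=2000, step=0.01, h=1e-5):
    t1 = t2 = 0.3                            # initial guess for the joint angles
    def err(a, b):                           # squared distance to the target
        x, y = forward(a, b)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2
    for _ in range(iters):
        e = err(t1, t2)
        g1 = (err(t1 + h, t2) - e) / h       # finite-difference gradient
        g2 = (err(t1, t2 + h) - e) / h
        t1, t2 = t1 - step * g1, t2 - step * g2
    return t1, t2

t1, t2 = inverse((1.2, 0.8))
print(forward(t1, t2))                       # should be close to (1.2, 0.8)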
Despite the labor involved, motion capture has become a popular technique
in the movie and game industries, as it allows fairly accurate animations to
be created from the motion of actors. However, this is limited by the density
of markers that can be placed on a single actor. Faces, for example, are still
very difficult to convincingly reconstruct. Motion capture is one of the
primary animation techniques for computer games.
1. Once you have the program, you can get lots of motion
2. It reduces the overall cost of keyframe-based animation in the
entertainment industry.
In optics, a thin lens is a lens with a thickness (distance along the optical
axis between the two surfaces of the lens) that is negligible compared to
the radii of curvature of the lens surfaces. Lenses whose thickness is not
negligible are sometimes called thick lenses.
The thin lens approximation ignores optical effects due to the thickness of
lenses and simplifies ray tracing calculations. It is often combined with
the paraxial approximation in techniques such as ray transfer matrix
analysis.
The focal length, f, of a lens in air is given by the lensmaker's equation:
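1/f = (n − 1) [1/R1 − 1/R2 + (n − 1)d / (n R1 R2)]
where n is the refractive index of the lens material, R1 and R2 are the radii of curvature of its two surfaces, and d is its thickness. For a thin lens, d ≈ 0 and this reduces to 1/f = (n − 1)(1/R1 − 1/R2). As an illustrative example (assumed values), a thin biconvex lens with n = 1.5, R1 = +100 mm and R2 = −100 mm gives 1/f = 0.5 × (0.01 + 0.01) = 0.01 per mm, i.e. f = 100 mm.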
1. Printers
2. Dot-Matrix Printers
3. Daisy Wheel Printers
4. Line Printers
5. Drum Printers
Aliasing
In signal processing and related disciplines, aliasing is an effect that causes
different signals to become indistinguishable (or aliases of one another)
when sampled. It also often refers to the distortion or artefact that results
when a signal reconstructed from samples differs from the original continuous signal.
Antialiasing
Q. What is light?
Reflection of light
(A) Light leaves the light source. (B) Light leaves the light source and is reflected
off the back wall.
Refraction of light
Q. What is a Vector?
The Evans & Sutherland Corporation and General Electric started building
flight simulators with real-time raster graphics. Unix, X and Silicon
Graphics GL were the operating systems, window system and application
programming interface (API) that graphicists used. Shaded raster graphics
were starting to be introduced in motion pictures. PCs started to get decent,
but still they could not support 3-D graphics, so most programmers wrote
software for scan conversion (rasterization), used the painter's algorithm for
hidden surface removal, and developed "tricks" for real-time animation.
Q. Explain the following colour models
The additive colour model used for computer graphics is represented by the
RGB colour cube, where R, G, and B represent the colours produced by red,
green and blue phosphors, respectively.
YIQ is the color space used by the NTSC color TV system, employed mainly
in North and Central America, and Japan. I stands for in-phase,
while Q stands for quadrature, referring to the components used
in quadrature amplitude modulation. Some forms of NTSC now use
the YUV color space, which is also used by other systems such as PAL.
The Y component represents the luma information, and is the only
component used by black-and-white television receivers. I and Q represent
the chrominance information. In YUV, the U and V components can be
thought of as X and Y coordinates within the color space. I and Q can be
thought of as a second pair of axes on the same graph, rotated 33°; therefore
IQ and UV represent different coordinate systems on the same plane.
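A sketch of the RGB-to-YIQ conversion using commonly quoted (approximate) NTSC coefficients; the function name is illustrative:

# Illustrative sketch: converting an RGB triple in [0, 1] to YIQ using
# commonly quoted approximate NTSC coefficients.
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luma (all a B&W set needs)
    i = 0.596 * r - 0.274 * g - 0.322 * b    # in-phase chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b    # quadrature chrominance
    return y, i, q

print(rgb_to_yiq(1.0, 1.0, 1.0))             # white: luma 1.0, chrominance ~0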
To produce blue, one would mix cyan and magenta inks, as they both reflect
blue while each absorbs one of green and red. Unfortunately, inks also
interact in non-linear ways. This makes the process of converting a given
monitor colour to an equivalent printer colour a challenging problem. Black
ink is used to ensure that a high quality black can always be printed, and is
often referred to as K. Printers thus use a CMYK colour model.
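A sketch of the usual simple RGB-to-CMYK conversion (subtractive complements plus undercolour removal); it deliberately ignores the non-linear ink interactions mentioned above, and the function name is illustrative:

# Illustrative sketch: RGB in [0, 1] to CMY, then undercolour removal for K.
def rgb_to_cmyk(r, g, b):
    c, m, y = 1 - r, 1 - g, 1 - b            # subtractive complements
    k = min(c, m, y)                         # print common darkness as black ink
    if k == 1.0:                             # pure black
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(0.0, 0.0, 1.0))            # blue -> cyan + magenta, no yellow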
Models such as HSV (hue, saturation, value) and HLS (hue, luminosity,
saturation) are designed for intuitive understanding. Using these colour
models, the user of a paint program would quickly be able to select a desired
colour. HSL (hue, saturation, lightness) and HSV (hue, saturation, value;
also called HSB, hue, saturation, brightness) are alternative representations
of the RGB color model, designed in the 1970s by computer
graphics researchers to more closely align with the way human vision
perceives color-making attributes. In these models, colors of each hue are
arranged in a radial slice, around a central axis of neutral colors which
ranges from black at the bottom to white at the top.
The HSV representation models the way paints of different colors mix
together, with the saturation dimension resembling various tints of brightly
colored paint, and the value dimension resembling the mixture of those paints
with varying amounts of black or white paint.
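A sketch of the standard hexcone RGB-to-HSV conversion (illustrative function; channel values in [0, 1], hue in degrees):

# Illustrative sketch: RGB in [0, 1] to HSV (hue in degrees).
def rgb_to_hsv(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                   # value: brightness of the colour
    s = 0.0 if mx == 0 else (mx - mn) / mx   # saturation: distance from grey
    if mx == mn:
        h = 0.0                              # neutral axis: hue undefined
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v

print(rgb_to_hsv(1.0, 0.5, 0.0))             # orange: hue ~30 degrees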
Q. The table below summarizes the properties of the four primary types of
printing ink. Fill the missing gap
Traditional Animation
In the early days of animation, it took a lot of effort to make an
animation, even the shortest ones. In film, every second requires 24 picture
frames for the movement to be so smooth that humans cannot recognise
discrete changes between frames. Before the appearance of cameras and
computers, animations were produced by hand. Artists had to draw every
single frame and then combine them into one animation. It is worth
mentioning some of the techniques that were used to produce
animations in the early days that are still being employed in computer-
based animations:
1. Key frames: This technique is used to sub divide the whole animation
into key points between which a lot of actions happen. For example, to
specify an action of raising a hand, at this stage the manager only specifies
the start and finish positions of the hand without having to worry about the
image sequence in between. It is then the artist's job to draw the images in
between. Quantities that can be key-framed include:
polygon vertices
spline control
joint angles
muscle contraction
camera parameters
color
The IBM PC was marketed in 1981. The Apple Macintosh started production
in 1984, and microprocessors began to take off, with the Intel x86 chipset,
but these were still toys. Computers with a mouse, bitmapped (raster)
display, and Ethernet became the standard in academic and science and
engineering settings. In computer graphics, a raster graphics or bitmap
image is a dot-matrix data structure representing a rectangular grid of pixels.
Q. What is animation?
In the early days of animation, it took a lot of effort to make an
animation, even the shortest ones. In film, every second requires 24 picture
frames for the movement to be so smooth that humans cannot recognise
discrete changes between frames. Before the appearance of cameras and
computers, animations were produced by hand. Artists had to draw every
single frame and then combine them into one animation. Old machines, such
as the ZX Spectrum, required more CPU time to iterate through each
location in the frame buffer than it took for the video hardware to refresh the
screen. In an animation, this would cause undesirable flicker due to
partially drawn frames. To compensate, the byte range [0, (W − 1)] in the buffer
was written to the first scan line, as usual.
2. Antialiasing: replace pixels by the average of their own and their nearest
neighbours' colours (see the sketch after this list).
3. Colour balancing: modify colours as they are written into the colour
buffer.
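A sketch of the neighbour-averaging antialiasing mentioned in item 2 above (a 5-point box filter over greyscale values; the function name and test image are illustrative):

# Illustrative sketch of item 2 above: a 5-point box filter that replaces each
# interior pixel by the average of its own and its nearest neighbours' values.
def smooth(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = [image[y][x], image[y - 1][x], image[y + 1][x],
                          image[y][x - 1], image[y][x + 1]]
            out[y][x] = sum(neighbours) / len(neighbours)
    return out

img = [[0, 0, 1, 1] for _ in range(4)]       # a hard vertical edge
print(smooth(img)[1])                        # -> [0, 0.2, 0.8, 1]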
BSP trees (short for binary space partitioning trees) can be viewed as a
generalization of k-d trees. Like k-d trees, BSP trees are binary trees, but
now the orientation and position of a splitting plane can be chosen
arbitrarily. The figure below gives a feeling for how a BSP tree partitions space.
A Binary Space Partition tree (BSP tree) is a very different way to represent a
scene. Nodes hold facets, and the structure of the tree encodes spatial
information about the scene. It is useful for HSR and related applications.
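A sketch of a BSP tree node and a back-to-front traversal for hidden surface removal (the class layout, plane representation and draw callback are illustrative assumptions):

# Illustrative sketch: a BSP tree node and back-to-front traversal for
# hidden-surface removal with the painter's algorithm.
class BSPNode:
    def __init__(self, plane, facets, front=None, back=None):
        self.plane = plane        # splitting plane, e.g. (normal, distance)
        self.facets = facets      # facets lying on the splitting plane
        self.front = front        # subtree in front of the plane
        self.back = back          # subtree behind the plane

def side_of(plane, point):
    normal, d = plane
    return sum(n * p for n, p in zip(normal, point)) - d

def draw_back_to_front(node, eye, draw):
    # visit facets farthest-first so nearer facets overwrite farther ones
    if node is None:
        return
    if side_of(node.plane, eye) > 0:
        near, far = node.front, node.back
    else:
        near, far = node.back, node.front
    draw_back_to_front(far, eye, draw)
    for f in node.facets:
        draw(f)
    draw_back_to_front(near, eye, draw)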
Spectroradiometer
A device to measure the spectral energy distribution. It can therefore also
provide the CIE xyz tristimulus values.
Illuminant C
A standard for white light that approximates sunlight. It is defined by a
colour temperature of 6774 K.
Complementary colours
Colours which can be mixed together to yield white light. For example,
colours on segment CD are complementary to the colours on segment CB.
Dominant wavelength
The spectral colour which can be mixed with white light in order to
reproduce the desired colour. Colour B in the above figure is the dominant
wavelength for colour A.
Non-spectral colours
Colours not having a dominant wavelength. For example, colour E in the
above figure.
Perceptually uniform colour space
A colour space in which the distance between two colours is always
proportional to the perceived distance. The CIE XYZ colour space and the
CIE chromaticity diagram are not perceptually uniform, as the following
figure illustrates. The CIE LUV colour space is designed with perceptual
uniformity in mind.
Colour Gamuts
The chromaticity diagram can be used to compare the "gamuts" of various
possible output devices (i.e., monitors and printers). Note that a colour
printer cannot reproduce all the colours visible on a colour monitor.
Vector Addition
When we add two vectors, we simply sum their elements at corresponding
positions. So for a pair of 2D vectors a = [u, v]T and b = [s, t]T we have:
a + b = [u + s, v + t]T
Vector Subtraction
Vector subtraction is identical to the addition operation with a sign change,
since when we negate a vector we simply flip the sign on its elements.
−b = [−s,−t]T
a − b = a + (−b) = [u − s, v − t]T
Vector Scaling
If we wish to increase or reduce a vector quantity by a scale factor λ then we
multiply each element in the vector by λ.
λa = [λu, λv] T
Vector Magnitude
We write the length or magnitude of a vector s as |s|. We use Pythagoras'
theorem to compute the magnitude: |a| = √(u² + v²)
The figure shows this to be valid, since u and v are distances along the
principal axes (x and y) of the space, and so the distance of a from the origin
is the hypotenuse of a right-angled triangle. If we have an n-dimensional
vector q = [q1, q2, q3, ..., qn] then the definition of vector magnitude
generalises to: |q| = √(q1² + q2² + ... + qn²)
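The operations above, collected into a short illustrative sketch (works for vectors of any dimension; function names are illustrative):

import math

def add(a, b):      return [x + y for x, y in zip(a, b)]
def sub(a, b):      return [x - y for x, y in zip(a, b)]
def scale(lam, a):  return [lam * x for x in a]
def magnitude(a):   return math.sqrt(sum(x * x for x in a))

a, b = [3, 4], [1, 2]
print(add(a, b), sub(a, b), scale(2, a), magnitude(a))
# -> [4, 6] [2, 2] [6, 8] 5.0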
Q. Explain the term ray casting, the basic ideas behind it, and highlight two of
its goals
The goal of ray casting is to determine the color of each pixel in the view
window by considering all of the objects in the scene
What part of the scene affects a single pixel?
For a single pixel, we see a finite volume of the scene
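A sketch of the basic ray-casting loop: one primary ray per pixel, coloured by the nearest object hit. The camera.primary_ray, obj.intersect and obj.shade helpers are assumptions standing in for whatever scene representation is used:

# Illustrative sketch of a ray caster: one primary ray per pixel, coloured by
# the nearest object it hits (helper methods are assumed, not a real API).
def ray_cast(scene, camera, width, height):
    image = [[(0, 0, 0)] * width for _ in range(height)]   # background colour
    for y in range(height):
        for x in range(width):
            ray = camera.primary_ray(x, y)       # assumed camera helper
            nearest, nearest_t = None, float("inf")
            for obj in scene:
                t = obj.intersect(ray)           # assumed: distance or None
                if t is not None and t < nearest_t:
                    nearest, nearest_t = obj, t
            if nearest is not None:
                image[y][x] = nearest.shade(ray, nearest_t)  # assumed shader
    return image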
(iii) spectroradiometer
A device to measure the spectral energy distribution. It can therefore also
provide the CIE xyz tristimulus values.
(iv) rendering
Shading methods were developed by Gouraud and Phong at the
University of Utah; Phong also introduced a reflection model that included
specular highlights. Rendering is the conversion of a scene into an image.
Q. What is a texture?
i. C0
A curve is C0 and G0 continuous if adjacent segments join at a common
endpoint.
ii. C1
A curve is G1 continuous if the geometric first derivative is continuous across
its joints, i.e., the tangent vectors of adjacent segments are collinear (on the
same line) at the shared endpoint. A curve is C1 continuous if the parametric
first derivative is continuous across joints, i.e., the tangent vectors of adjacent
segments are collinear and have the same magnitude at their shared endpoint.
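A sketch of how the two conditions differ in practice, assuming 2-D tangent vectors at the shared endpoint (function names and tolerance are illustrative):

# Illustrative sketch: classify the continuity at a joint from the end tangent
# of the first segment and the start tangent of the second (2-D vectors).
def cross_2d(a, b):
    return a[0] * b[1] - a[1] * b[0]

def dot_2d(a, b):
    return a[0] * b[0] + a[1] * b[1]

def continuity(tangent_out, tangent_in, eps=1e-9):
    same_direction = (abs(cross_2d(tangent_out, tangent_in)) < eps
                      and dot_2d(tangent_out, tangent_in) > 0)
    equal = all(abs(u - v) < eps for u, v in zip(tangent_out, tangent_in))
    if equal:
        return "C1 (and G1)"
    if same_direction:
        return "G1 only"
    return "C0 only (provided the endpoints meet)"

print(continuity((1, 1), (2, 2)))        # collinear, different magnitude -> G1
print(continuity((1, 1), (1, 1)))        # identical tangents -> C1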