Computer Graphics
Answer Key
COURSE CODE: CSE 0502
COURSE TITLE: GRAPHICS AND ANIMATION TECHNIQUES
SEMESTER: IV / VI
BRANCH AND YEAR: CSE II / III
FACULTY NAME: Dr. S. Subbiah
DESIGNATION & DEPT: Professor / CSE
Window: (2.5)
It consists of a visual area containing some of the graphical user interface of the program it belongs to,
and is framed by a window decoration.
A window defines a rectangular area in world coordinates. You define a window with a GWINDOW
statement. The window can be larger than, the same size as, or smaller than the actual range of data
values, depending on whether you want to show all of the data or only part of it.
Viewport: (2.5)
A viewport is a polygonal viewing region in computer graphics. The viewport is an area expressed in
rendering-device-specific coordinates (e.g. pixels for screen coordinates) in which the objects of interest
are going to be rendered.
A viewport defines, in normalized coordinates, a rectangular area on the display device where the image
of the data appears. You define a viewport with the GPORT command. The graph can take up the entire
display device or appear in only a portion of it, say the upper-right part.
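The window-to-viewport mapping implied by these definitions can be sketched in Python. This is an illustrative hand-rolled routine, not the GWINDOW/GPORT API itself; all names are my own:

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a point from world (window) coordinates to viewport coordinates.

    window   = (wxmin, wymin, wxmax, wymax) in world coordinates
    viewport = (vxmin, vymin, vxmax, vymax) in device coordinates
    """
    wxmin, wymin, wxmax, wymax = window
    vxmin, vymin, vxmax, vymax = viewport
    # Scale factors relating the two rectangles
    sx = (vxmax - vxmin) / (wxmax - wxmin)
    sy = (vymax - vymin) / (wymax - wymin)
    # Translate to the window origin, scale, translate to the viewport origin
    xv = vxmin + (xw - wxmin) * sx
    yv = vymin + (yw - wymin) * sy
    return xv, yv
```

For example, the centre (5, 5) of a window (0, 0, 10, 10) maps to the centre (50, 100) of a viewport (0, 0, 100, 200).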
2. What is an output primitive? (2)
Output primitives are the basic geometric structures a graphics package provides for describing a
scene: points, straight-line segments, circles and other conic sections, polygons, character strings, and
so on. Output primitives are combined to form complex structures.
3. Differentiate between raster scan and random scan systems.

Basis              | Raster Scan System                                    | Random Scan System
Electron Beam      | The beam sweeps the whole screen from top to bottom,  | The beam is directed only to the parts of the screen
                   | one row at a time.                                    | where the picture is to be drawn, one line at a time;
                   |                                                       | hence it is also called a Vector Display.
Cost               | Less expensive than a random scan system.             | Costlier than a raster scan system.
Refresh Rate       | 60 to 80 frames per second.                           | Depends on the number of lines to be displayed,
                   |                                                       | i.e. 30 to 60 times per second.
Picture Definition | Stores the picture definition in a refresh buffer,    | Stores the picture definition as a set of line
                   | also called the frame buffer.                         | commands called the refresh display file.
Line Drawing       | Zig-zag lines are produced because the plotted        | Smooth lines are produced because the electron beam
                   | values are discrete.                                  | directly follows the line path.
Realism in Display | Supports shadows, advanced shading and hidden-surface | Does not support shadow and hidden-surface
                   | techniques, so it gives a realistic display of scenes.| techniques, so it cannot give a realistic display.
Image Drawing      | Uses pixels along scan lines to draw an image.        | Designed for line-drawing applications; uses various
                   |                                                       | mathematical functions to draw.
4. Explain the basic concept of Midpoint Ellipse Algorithm. Derive the decision parameters for the
algorithm and write down the algorithm steps. (8)
The midpoint ellipse algorithm is a method for drawing ellipses in computer graphics. It is adapted
from the midpoint circle algorithm (a relative of Bresenham's line algorithm). The advantage of this
method is that only addition operations are required in the program loops, which leads to a simple and
fast implementation on all processors. The algorithm uses the four-way symmetry of the ellipse and the
midpoint criterion to compute one quadrant only. We divide the quadrant into two regions; the boundary
between the two regions is the point at which the curve has a slope of -1. We proceed by taking unit
steps in the x direction up to the point P where the curve has a slope of -1, then taking unit steps in
the y direction, applying the midpoint test at every step. (3)
Take as input the radii rx (along the x axis) and ry (along the y axis) and the centre (xc, yc) of the ellipse.
Initially, assume the ellipse to be centred at the origin, with the first point (x0, y0) = (0, ry).
Obtain the initial decision parameter for region 1 as: p1_0 = ry^2 - rx^2*ry + (1/4)rx^2
At each step in region 1: if p1_k < 0, the next point is (x_k+1, y_k) and p1_k+1 = p1_k + 2ry^2*x_k+1 + ry^2;
otherwise the next point is (x_k+1, y_k - 1) and p1_k+1 = p1_k + 2ry^2*x_k+1 - 2rx^2*y_k+1 + ry^2.
Region 1 ends when 2ry^2*x >= 2rx^2*y (the slope reaches -1).
Obtain the initial value in region 2 using the last point (x0, y0) of region 1 as:
p2_0 = ry^2*(x0 + 1/2)^2 + rx^2*(y0 - 1)^2 - rx^2*ry^2
At each step in region 2: if p2_k > 0, the next point is (x_k, y_k - 1) and p2_k+1 = p2_k - 2rx^2*y_k+1 + rx^2;
otherwise the next point is (x_k + 1, y_k - 1) and p2_k+1 = p2_k + 2ry^2*x_k+1 - 2rx^2*y_k+1 + rx^2.
Stop when y = 0.
Now obtain the symmetric points in the other three quadrants and shift every plotted point to the given
centre: x = x + xc, y = y + yc.
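The steps above can be sketched as a short Python routine. This is an illustrative implementation (names my own) that collects the pixels of one quadrant and reflects them into the other three:

```python
def midpoint_ellipse(rx, ry, xc=0, yc=0):
    """Pixels of an ellipse with radii rx, ry centred at (xc, yc),
    computed for one quadrant and reflected by four-way symmetry."""
    points = set()

    def plot(x, y):
        # Reflect the first-quadrant point into all four quadrants
        for sx in (1, -1):
            for sy in (1, -1):
                points.add((xc + sx * x, yc + sy * y))

    x, y = 0, ry
    # Region 1: curve slope > -1, take unit steps in x
    p1 = ry * ry - rx * rx * ry + 0.25 * rx * rx
    while 2 * ry * ry * x < 2 * rx * rx * y:
        plot(x, y)
        x += 1
        if p1 < 0:
            p1 += 2 * ry * ry * x + ry * ry
        else:
            y -= 1
            p1 += 2 * ry * ry * x - 2 * rx * rx * y + ry * ry
    # Region 2: curve slope < -1, take unit steps in y
    p2 = ry * ry * (x + 0.5) ** 2 + rx * rx * (y - 1) ** 2 - rx * rx * ry * ry
    while y >= 0:
        plot(x, y)
        y -= 1
        if p2 > 0:
            p2 += -2 * rx * rx * y + rx * rx
        else:
            x += 1
            p2 += 2 * ry * ry * x - 2 * rx * rx * y + rx * rx
    return points
```

For rx = 8, ry = 6 the routine plots the four extreme pixels (8, 0), (-8, 0), (0, 6) and (0, -6) along with the rest of the boundary.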
Picture (2)
TEXT CLIPPING 01
In the all-or-none string clipping method, we either keep the entire string or reject the entire string
based on the clipping window. STRING2 is entirely inside the clipping window, so we keep it; STRING1
is only partially inside the window, so we reject it.
CHARACTER CLIPPING 01
This clipping method is based on characters rather than the entire string. If the string is entirely
inside the clipping window, we keep it. If it is partially outside the window, we reject only the portion
of the string that lies outside. If a character lies on the boundary of the clipping window, we discard
that entire character and keep the rest of the string.
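The two policies can be sketched in Python, treating a string as a list of character positions. This is a simplification of my own: a real implementation would test character bounding boxes, not single points.

```python
def clip_string_all_or_none(chars, window):
    """All-or-none string clipping: keep the string only if every
    character position lies inside the clipping window.

    chars  = list of (x, y) character positions
    window = (xmin, ymin, xmax, ymax)
    """
    xmin, ymin, xmax, ymax = window
    inside = all(xmin <= x <= xmax and ymin <= y <= ymax for x, y in chars)
    return chars if inside else []

def clip_string_by_character(chars, window):
    """Character clipping: keep only the characters inside the window."""
    xmin, ymin, xmax, ymax = window
    return [(x, y) for x, y in chars if xmin <= x <= xmax and ymin <= y <= ymax]
```

A string with one character outside the window is rejected entirely by the first policy, while the second policy keeps its in-window characters.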
6. Derive the 3D transformation matrix for rotation about the arbitrary axis. (8)
Rotation of a point in 3-dimensional space by theta about an arbitrary axis, defined by a line between two points
P1 = (x1,y1,z1) and P2 = (x2,y2,z2), can be achieved by the following steps:
(1) translate space so that the rotation axis passes through the origin
(2) rotate space about the x axis so that the rotation axis lies in the xz plane
(3) rotate space about the y axis so that the rotation axis lies along the z axis
(4) perform the desired rotation by theta about the z axis
(5) apply the inverse of step (3)
(6) apply the inverse of step (2)
(7) apply the inverse of step (1)
Note:
If the rotation axis is already aligned with the z axis then steps 2, 3, 5, and 6 need not be
performed.
In all that follows a right hand coordinate system is assumed and rotations are positive when
looking down the rotation axis towards the origin.
Symbols representing matrices will be shown in bold text.
The inverse of the rotation matrices below are particularly straightforward since the determinant
is unity in each case.
Step 1 (4)
Translate space so that the rotation axis passes through the origin. This is accomplished by translating space by
-P1 = (-x1,-y1,-z1). The translation matrix T and its inverse T^-1 (required for step 7) are given below.

T =  | 1  0  0  -x1 |        T^-1 =  | 1  0  0  x1 |
     | 0  1  0  -y1 |                | 0  1  0  y1 |
     | 0  0  1  -z1 |                | 0  0  1  z1 |
     | 0  0  0   1  |                | 0  0  0  1  |
Step 2
Rotate space about the x axis so that the rotation axis lies in the xz plane. Let U = (a,b,c) be the unit vector
along the rotation axis, and define d = sqrt(b^2 + c^2) as the length of its projection onto the yz plane. If d = 0
then the rotation axis is along the x axis and no additional rotation is necessary. Otherwise rotate the rotation
axis so that it lies in the xz plane. The rotation angle needed is the angle between the projection of the rotation
axis onto the yz plane and the z axis. The cosine of this angle, c/d, follows from the dot product of the z axis
with the yz projection of U; the sine, b/d, is determined by considering the cross product.
The rotation matrix Rx and its inverse Rx^-1 (required for step 6) are given below.

Rx =  | 1   0     0    0 |        Rx^-1 =  | 1    0     0    0 |
      | 0  c/d  -b/d   0 |                 | 0   c/d   b/d   0 |
      | 0  b/d   c/d   0 |                 | 0  -b/d   c/d   0 |
      | 0   0     0    1 |                 | 0    0     0    1 |
Step 3
Rotate space about the y axis so that the rotation axis lies along the positive z axis. Using the appropriate dot
and cross product relationships as before, the cosine of the angle is d and the sine of the angle is a. The
rotation matrix about the y axis, Ry, and its inverse Ry^-1 (required for step 5) are given below.
Ry =  |  d  0  -a  0 |        Ry^-1 =  |  d  0  a  0 |
      |  0  1   0  0 |                 |  0  1  0  0 |
      |  a  0   d  0 |                 | -a  0  d  0 |
      |  0  0   0  1 |                 |  0  0  0  1 |
Step 4
Rotation about the z axis by t (theta) is Rz and is simply
Rz =  | cos(t)  -sin(t)  0  0 |
      | sin(t)   cos(t)  0  0 |
      |   0        0     1  0 |
      |   0        0     0  1 |
The complete transformation to rotate a point (x,y,z) about the rotation axis to a new point (x',y',z') is as
follows, the forward transforms followed by the reverse transforms:

| x' |                                     | x |
| y' |  =  T^-1 Rx^-1 Ry^-1 Rz Ry Rx T  .  | y |
| z' |                                     | z |
| 1  |                                     | 1 |
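The seven steps compose into a single matrix. A sketch in pure Python follows (illustrative names of my own; matrices as nested lists, in the forms given above):

```python
import math

def mat_mul(A, B):
    """Product of two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotate_about_axis(P1, P2, theta):
    """4x4 matrix rotating by theta about the line through P1 and P2,
    composed as T^-1 Rx^-1 Ry^-1 Rz Ry Rx T (steps 1-7 above)."""
    x1, y1, z1 = P1
    L = math.dist(P1, P2)
    a, b, c = [(q - p) / L for p, q in zip(P1, P2)]  # unit axis vector
    d = math.sqrt(b * b + c * c)  # length of the yz-plane projection

    T  = [[1, 0, 0, -x1], [0, 1, 0, -y1], [0, 0, 1, -z1], [0, 0, 0, 1]]
    Ti = [[1, 0, 0,  x1], [0, 1, 0,  y1], [0, 0, 1,  z1], [0, 0, 0, 1]]
    if d != 0:
        Rx  = [[1, 0, 0, 0], [0, c/d, -b/d, 0], [0, b/d, c/d, 0], [0, 0, 0, 1]]
        Rxi = [[1, 0, 0, 0], [0, c/d,  b/d, 0], [0, -b/d, c/d, 0], [0, 0, 0, 1]]
    else:
        # Axis already lies along x: step 2 is not needed
        Rx = Rxi = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    Ry  = [[d, 0, -a, 0], [0, 1, 0, 0], [ a, 0, d, 0], [0, 0, 0, 1]]
    Ryi = [[d, 0,  a, 0], [0, 1, 0, 0], [-a, 0, d, 0], [0, 0, 0, 1]]
    ct, st = math.cos(theta), math.sin(theta)
    Rz = [[ct, -st, 0, 0], [st, ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

    M = T
    for R in (Rx, Ry, Rz, Ryi, Rxi, Ti):
        M = mat_mul(R, M)
    return M

def apply_to_point(M, p):
    """Apply a 4x4 homogeneous matrix to a 3D point."""
    v = [p[0], p[1], p[2], 1]
    r = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    return r[0], r[1], r[2]
```

As a sanity check, rotating (1, 0, 0) by 90 degrees about the z axis (the line from (0,0,0) to (0,0,1)) yields (0, 1, 0), matching the right-hand convention stated above.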
7. Prove that rotation followed by translation is not the same as translation followed by rotation in three
dimension. (8)
Split up of marks
2 marks for rotation followed by translation
2 marks for translation followed by rotation
4 marks for justification
We use a right-handed coordinate system. This means that if I put my right hand vertically down, like
in karate, with the fingers along the positive x-axis, and bend the hand towards the y-axis, the thumb
will point up along the positive z-axis.
We use homogeneous coordinates from the beginning. This means that the general transformation matrix is a
4x4 matrix, and that the general vector form is a column vector with four rows.
P2=M·P1
Translation
A translation in space is described by tx, ty and tz, giving the matrix

T =  | 1  0  0  tx |
     | 0  1  0  ty |
     | 0  0  1  tz |
     | 0  0  0  1  |

It is easy to see that this matrix realizes the equations:
x2 = x1 + tx
y2 = y1 + ty
z2 = z1 + tz
Rotation is a bit more complicated. We define three different basic rotations, one around every axis.
Justification: students must show with their own example that the two orders of composition give
different results. Matrix multiplication is not commutative, so in general T·R ≠ R·T.
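One such justification can be checked numerically in Python (hand-rolled 4x4 helpers; names my own). Rotating the point (1, 0, 0) by 90 degrees about z and then translating by (1, 0, 0) gives (1, 1, 0), whereas translating first and then rotating gives (0, 2, 0):

```python
import math

def mat_mul(A, B):
    """Product of two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def transform(M, p):
    v = [p[0], p[1], p[2], 1]
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(3))

T = translation(1, 0, 0)        # translate one unit along x
R = rotation_z(math.pi / 2)     # rotate 90 degrees about z

p = (1, 0, 0)
rotate_then_translate = transform(mat_mul(T, R), p)  # apply R first, then T
translate_then_rotate = transform(mat_mul(R, T), p)  # apply T first, then R
```

Since the two results differ, the two compositions are not the same transformation.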
8. Give in detail about the types of physical input devices in graphics. (2)
Following any four items 4 X 0.5 marks
Keyboard: the most common and very popular input device, which helps to input data to the computer.
Mouse: the most popular pointing device.
Joystick: also a pointing device, used to move the cursor position on a monitor screen.
Light Pen
Track Ball
Scanner
Digitizer
Microphone
• The projection matrix scales and translates each vertex so that those inside the view volume will be
inside a standard cube that extends from -1 to 1 in each dimension (Normalized Device Coordinates).
• This cube is a particularly efficient boundary against which to clip objects.
• The image is distorted, but the viewport transformation will remove the distortion.
The projection matrix also reverses the sense of the z-axis; increasing values of z now represent
increasing values of depth from the eye. (2)
• The viewport matrix maps the standard cube into a 3D viewport whose x and y values extend across the
viewport (in screen coordinates), and whose z-component extends from 0 to 1 (a measure of the depth of
each point).
• This measure of depth makes hidden surface removal (do not draw surfaces hidden by objects closer to
the eye) particularly efficient. (2)
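The viewport mapping just described can be sketched in Python. This is a minimal illustration of my own, assuming the standard cube convention above and ignoring the y-axis flip that some window systems apply:

```python
def viewport_transform(ndc, width, height):
    """Map a point from the standard cube [-1, 1]^3 (NDC) to screen
    coordinates: x and y span a width x height viewport, while z
    (the depth used for hidden surface removal) lands in [0, 1]."""
    x, y, z = ndc
    sx = (x + 1) / 2 * width     # [-1, 1] -> [0, width]
    sy = (y + 1) / 2 * height    # [-1, 1] -> [0, height]
    depth = (z + 1) / 2          # [-1, 1] -> [0, 1]
    return sx, sy, depth
```

The centre of the cube maps to the centre of the viewport at depth 0.5.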
• Pre-processing the model, to remove parts which cannot be seen in the final picture. This involves both
clipping and back-face culling
• Lighting the model. This includes the use of light direction and intensity; but it also includes surface reflection
calculations (different surfaces have different reflective properties).
• Hidden surface removal: only at this stage do we discover which objects obscure parts of other objects.
We will also look at how to incorporate texture detail on surfaces, for greater realism.
In a similar spirit we can throw away any facets which are facing away from the viewer. This is known as back-
face culling or back-face removal. The reason we can do this is that most real objects have a non-negligible
thickness. For example, both sides of a wall have to be modelled to give an accurate shape to a building.
However, if I am standing on one side of a wall, I cannot see the other side of it. So we can throw away the
polygons which are facing away from the viewer, reducing the number of polygons to be considered when
producing the picture. We do this to improve the efficiency of later stages. We arrange the model so that every
facet has an orientation, so we can determine which is its front and which is its back. The simplest way to do
this is to record the edges of the triangle consistently; for example, so that the vectors formed by each edge
proceed clockwise around the triangle when seen from its front. With this arrangement, the vector product of
two successive edges will be a vector in the direction of the normal.
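The culling test can be sketched in Python. Note one assumption of my own here: I use the other common convention, counter-clockwise winding seen from the front, so the cross product of two successive edges gives the outward normal; with the clockwise convention in the text, only the sign of the final test flips:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(p, q):
    return (p[0]-q[0], p[1]-q[1], p[2]-q[2])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def is_back_facing(triangle, view_dir):
    """The facet normal is the cross product of two successive edges;
    the facet is back-facing when the normal points along the viewing
    direction, i.e. away from the camera."""
    a, b, c = triangle
    normal = cross(sub(b, a), sub(c, a))
    return dot(normal, view_dir) > 0
```

A triangle wound counter-clockwise as seen by a camera looking down -z is kept; reversing its vertex order makes it back-facing and it would be culled.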
We now look at the way we apply lighting (described in detail in the ray tracing chapter) to the patches that
form the model. There are two separate issues which come together here. The first is the normal of the surface at
the position being rendered (together with its colour and any other features that affect the way it reflects light).
The second is the model we have for how the incoming light is scattered by a particular set of surface
properties. It is the interaction of these two which determines the local effect. First we assume that the renderer
works on a horizontal scan line, producing the pixels for that line before moving on to the next line. At any
point, therefore, the renderer only has access to a small slice of the model.
One thing to keep in mind: the mesh is a 3D structure, while the screen is 2D. In the above diagram, the triangle
is in 3-space but the pixels are in 2-space.
This is the simplest method. We give each triangle a single intensity value. If we just use the underlying colour
of the surface, there will be no interaction with the lighting and we will not be able to see the shape of the
surface. Instead, we find the surface normal of the facet, use a diffuse lighting computation, and colour the
whole triangle with the resulting colour. Each triangle has its own normal: those facing the light will be brightly
lit, those facing more obliquely will be darker. In this way we will be able to see the shape of the surface. This
shading method is computationally cheap, and is good for a quick look at a model. One problem is that the
individual triangles of the mesh are all visible. There is something worse: the Mach band effect whereby the eye
perceives sharp intensity changes as dark and bright bands. This makes the edges look much worse.
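The per-triangle intensity computation this shading method uses can be sketched in Python. The ambient term and coefficient names below are illustrative additions of my own, not part of the text's formulation:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def flat_shade(normal, light_dir, diffuse_coeff, ambient=0.1):
    """One diffuse (Lambert) intensity for a whole triangle:
    I = ambient + kd * max(0, N . L). Facets facing the light are
    bright, oblique facets darker, and facets facing away from the
    light receive only the ambient term."""
    n = normalize(normal)
    l = normalize(light_dir)
    lambert = max(0.0, sum(a * b for a, b in zip(n, l)))
    return min(1.0, ambient + diffuse_coeff * lambert)
```

Because every pixel of a triangle gets this one value, adjacent triangles with different normals meet in visible intensity steps, which is exactly what triggers the Mach band effect described above.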
Fractals were discovered by the French/American mathematician Dr Benoit Mandelbrot. The word fractal was
derived from the Latin word fractus, which means broken. What are fractals? Fractals are very complex pictures
generated by a computer from a single formula. They are created using iterations: one formula is repeated with
slightly different values over and over again, taking into account the results from the previous iteration. Fractals
are used in many areas, such as:
Astronomy: for analyzing galaxies, the rings of Saturn, etc.
Biology/Chemistry: for depicting bacteria cultures, chemical reactions, human anatomy, molecules, plants.
Others: for depicting clouds, coastlines and borderlines, data compression, diffusion, economy, fractal art,
fractal music, landscapes, special effects, etc.
Generation of Fractals
Fractals can be generated by repeating the same shape over and over again, as shown in the following figure.
Figure (a) shows an equilateral triangle. In figure (b), the triangle is repeated to create a star-like shape. In
figure (c), the star shape of figure (b) is repeated again and again to create a new shape. We can perform an
unlimited number of iterations to create a desired shape. In programming terms, recursion is used to create
such shapes.
Geometric Fractals Geometric fractals deal with shapes found in nature that have non-integer or fractal
dimensions. To geometrically construct a deterministic nonrandom self-similar fractal, we start with a given
geometric shape, called the initiator. Subparts of the initiator are then replaced with a pattern, called the
generator.
As an example, if we use the initiator and generator shown in the above figure (the Koch curve construction),
we can build the pattern by repeating the replacement. Each straight-line segment in the initiator is replaced
with four equal-length line segments at each step. The scaling factor is 1/3, so the fractal dimension is
D = ln 4 / ln 3 ≈ 1.2619. Also, the total length of the curve increases by a factor of 4/3 at each step, so the
length of the fractal curve tends to infinity as more detail is added, as shown in the following figure.
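The recursive construction just described can be sketched in Python. The helper below (names my own) returns the polyline points of the Koch curve, and the total length indeed grows by a factor of 4/3 per iteration:

```python
import math

def koch_curve(p1, p2, depth):
    """Recursively apply the Koch generator: each segment is replaced
    by four segments of one third its length. Returns the polyline
    points from p1 to p2."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)            # one-third point
    b = (x1 + 2 * dx, y1 + 2 * dy)    # two-thirds point
    # Apex of the equilateral bump: rotate the middle third by 60 degrees
    c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
    peak = (a[0] + dx * c - dy * s, a[1] + dx * s + dy * c)
    points = []
    for q1, q2 in ((p1, a), (a, peak), (peak, b), (b, p2)):
        points.extend(koch_curve(q1, q2, depth - 1)[:-1])
    points.append(p2)
    return points
```

After n iterations the curve has 4^n segments of length (1/3)^n each, so a unit segment grows to total length (4/3)^n, consistent with the dimension D = ln 4 / ln 3.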
14. How diffuse scattering can be shown. (2)
Diffuse scattering is the scattering that arises from any departure of the material structure from that of
a perfectly regular lattice. One can think of it as the signal that arises from disordered structures, and it
appears in experimental data as scattering spread over a wide q-range (diffuse).
15. Discuss the various research areas of medical Image processing. (10)
Any five applications Each 2 marks
Until only a few years ago, traditional computer vision techniques provided excellent results for detection
and segmentation tasks. More recently, with the advent of deep learning and neural networks in medical
imaging as well, we obtain markedly better results in all tasks, be it detection, segmentation, classification
and the like.
Brain Tumor Segmentation
Tens of thousands of people (including thousands of children) die every year of primary cancerous tumors of the
brain and spinal cord. Secondary tumors or brain metastases only make these figures more dramatic.
3D segmentation of brain tumor has high clinical relevance for the estimation of the volume and spread of the
tumor. RSIP Vision constructs a probability map to localize the tumor and uses deformable models to obtain
the tumor boundaries with zero level energy.
Prostate Segmentation
Prostate cancer is the second most common cancer among American men, with more than 200,000 new cases
diagnosed every year and about 1 man in 7 diagnosed during his lifetime. Volume is a key indicator of the
health of the prostate, revealing key information about the stage of the cancer, the probable prognosis and
viable treatment. The rich experience of RSIP Vision enables us to recommend an approach based on a semi-
automatic prostate segmentation to give a precise estimate of the prostate volume.
Bone and Skeleton Segmentation
A large proportion of the human skeleton is made of porous bone, which offers only low X-ray attenuation,
resulting in data density equal to or only slightly higher than that of soft tissues. Bones segmentation and
skeleton segmentation using image processing algorithms have become a valuable and indispensable process in
many medical applications and have made possible a fast and reliable 3D observation of fractured bones. It's
another successful medical application in computer vision by RSIP Vision.
Automatic segmentation of tumour cells
Visual examination of tumour cells is highly time-consuming and not readily available in clinical applications,
where rapid intervention is crucial. Manual segmentation of tumour cells by humans is thus an impractical
and non-trivial task even for experts. Therefore we propose a method for automatic tumour cell segmentation
in histological tissue with variable biomarker expression levels, using computer vision algorithms and machine
learning.
Knee Cartilage Measurement
Interpretation of ultrasound images of cartilage is challenging since they display no obvious borders in the
transition between tissues: the boundary between tissues can morph in both density and texture. Our software
can process these problematic ultrasound images and automatically measure the density of cartilage in the knee.
The main benefits of this procedure are its non-invasive nature and the efficient and accurate measurement of
the cartilage it provides.
Kidney Segmentation
The most common kidney diseases are kidney cancer, hitting 50,000 new patients every year in the U.S. alone,
and kidney failure, which leaves the organ unable to remove wastes. Laparoscopic partial nephrectomy
operations remove or reduce kidney tumors and some renal malfunctions. We at RSIP Vision help by providing
a semi-automatic and very accurate kidney segmentation technique, built on the study of 4 CT scans and
designed to create a kidney model specific to each patient.
16. Write short notes on Augmented Reality and Virtual Reality. (8)
Virtual Reality means the creation of a world that is not real but seems real. Put more concretely, Virtual
Reality is the creation of a 3D environment that simulates the real world; users can interact with it by
wearing a VR helmet or goggles such as the Oculus Rift. The virtual environment is immersive and computer
generated. A coding language, the Virtual Reality Modeling Language (VRML), is used to create VR software
that can bring some place else to you.
Microsoft made headlines last year when it unveiled HoloLens that uses AR similar to Mini's driving goggles.
However, HoloLens goes a step further to significantly narrow down the gap between your PC and your living
room. It actually paves the way for you to surround yourself with your Windows 10 apps.
Pranav Mistry, the Global Vice President of Research at Samsung and the head of the Think Tank Team, has
developed an AR device, SixthSense, that enables new interactions between the real world and the world of
data. SixthSense allows a user to use natural hand gestures to interact with digital information.
Augmented reality is a perfect mix of the digital and real worlds, making the existing reality more meaningful.
The creation of Augmented Reality calls for the integration of digital information with the user's environment
in real time. So it is accurate to say that AR makes use of our current reality and overlays new information on
top of it, thereby allowing the user to make more sense of the existing environment.
Blippar (an AR app for iOS and Android) allows you to discover a whole new world by pointing your phone's
camera at various products or images. The Augmented Reality app enables you to blip everyday objects, from
plants and fruits to pets. Moreover, the app allows your smartphone to bring a static print ad to life, or to show
a movie trailer when you simply point the camera at a poster.
The techniques of digital art are used extensively by the mainstream media in advertisements, and by
film-makers to produce visual effects. Desktop publishing has had a huge impact on the publishing
world, although that is more related to graphic design. Both digital and traditional artists use many
sources of electronic information and programs to create their work. Given the parallels between visual
and musical arts, it is possible that general acceptance of the value of digital visual art will progress in
much the same way as the increased acceptance of electronically produced music over the last three
decades.
Digital art can be purely computer-generated (such as fractals and algorithmic art) or taken from other
sources, such as a scanned photograph or an image drawn using vector graphics software using
a mouse or graphics tablet.[8] Though technically the term may be applied to art done using other media
or processes and merely scanned in, it is usually reserved for art that has been non-trivially modified by
a computing process (such as a computer program, microcontroller or any electronic system capable of
interpreting an input to create an output); digitized text data and raw audio and video recordings are not
usually considered digital art in themselves, but can be part of the larger project of computer
art and information art.[9] Artworks are considered digital painting when created in similar fashion to
non-digital paintings but using software on a computer platform and digitally outputting the resulting
image as painted on canvas.
Andy Warhol created digital art using a Commodore Amiga when the computer was publicly introduced
at the Lincoln Center, New York, in July 1985. An image of Debbie Harry was captured in monochrome
from a video camera and digitized into a graphics program called ProPaint.
CAD (4)
Computer-aided design (CAD) is the use of computers (or workstations) to aid in the creation, modification,
analysis, or optimization of a design.[1] CAD software is used to increase the productivity of the designer,
improve the quality of design, improve communications through documentation, and to create a database for
manufacturing. CAD output is often in the form of electronic files for print, machining, or other manufacturing
operations. The term CADD (for Computer Aided Design and Drafting) is also used.
Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design it
is known as mechanical design automation (MDA) or computer-aided drafting (CAD), which includes the
process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional
drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it
involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of
CAD must convey information, such as materials, processes, dimensions, and tolerances, according to
application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids
in three-dimensional (3D) space.
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding,
and aerospace industries, industrial and architectural design, prosthetics, and many more. CAD is also widely
used to produce computer animation for special effects in movies, advertising and technical manuals, often
called DCC digital content creation. The modern ubiquity and power of computers means that even perfume
bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of
its enormous economic importance, CAD has been a major driving force for research in computational
geometry, computer graphics (both hardware and software), and discrete differential geometry.
Overview
Raw scientific data, such as huge collections of numbers, may be meaningless if they cannot be interpreted and
understood easily. With graphical visualisation, in the form of diagrams and drawings, they can be
comprehended at a glance. However, with complex data one image is not enough and animation has to be
employed. It will be explained here where and when animation is needed and how it is used.
Analysis
The animation has to be done in such a way that useful information will result from it, so that what the user
gets is not just a pretty picture. This is done by deciding what factors are going to be modified with time.
An example would be to hold the light source and the observer fixed and rotate the animated object, or to hold
the object still and vary the position of a clipping plane. More than one parameter can be modified with time,
but this requires more computational power and is sometimes more difficult to implement.
Display
A few parameters of the system will not change with time; instead they will be represented on the individual
frames, in two or three dimensions. In order to display these, the system's capabilities must match the human
visual system, i.e. use different textures, colours, shading, lighting etc.
Interaction
The user must be able, to some degree, to control the various aspects of the animation, such as the speed and
direction of the sequence and the position of the observer/camera. It is very often useful to be able to zoom in
and out, rotate the display, etc.
There are endless examples of the applications of animation in scientific visualisation. There are two main
categories where it is used: research and teaching.
Research
British Telecom, for example, uses a sophisticated program that plots on a map of the U.K. the density of
telephone fault reports using different colours. When a storm was plotted on top of this map and the whole
system was animated, it could be seen that the density of faults increased in areas the storm had just passed.
It would have been very difficult to visualise this in any other way.
Astronomers also rely on computers to animate high-speed jets penetrating different gases in order to
determine why a few galaxies flare dramatically. This research has yielded valuable information about why
some galaxies flare into broad plumes and why others remain extremely straight and narrow.
Teaching
One of the most difficult things in teaching is communicating ideas effectively. This can be very hard in
complex situations, and here is where animation might help to convey information. For example, there are many
programs that show the planetary system in action in three dimensions - an idea that would be very difficult to
grasp otherwise.
Astrophysicists at the NCSA (National Center for Supercomputing Applications) work with artists in order to
explain some phenomena that cannot be seen. A typical example is the visualisation of the gravitational field of
a Schwarzschild black hole. The black hole itself cannot be seen, as it absorbs all light that falls onto it, so the
only way of experimenting with it is to run a computer simulation and observe the effects.
Future Developments
A very short animation might be made up of a few thousand frames, which will occupy a lot of disk space and
will require a lot of bandwidth to transmit. Hence these programs currently cannot be used on the average
computer, and only a few people benefit from them. Fortunately, progress in computer science will soon
enable us to move from expensive systems to cheaper and more powerful machines, and so animation will
become easy and might be commonly used for many tasks that were once very time consuming.