Mod 3
Clipping is an essential technique used in computer graphics to ensure that objects or parts of
objects are displayed within the visible region of a computer screen or viewport. When rendering
three-dimensional scenes or objects, the graphics pipeline generates a two-dimensional
representation of the scene for display. However, not all objects or parts of objects may be entirely
visible within the viewing window, and those portions that fall outside need to be clipped or
removed from the final image.
Briefly explain Cohen-Sutherland line clipping without code. Discuss the four cases.
Boundary intersections are computed from the line equation: for a horizontal window edge, x = x1 + (y − y1)/m; for a vertical edge, y = y1 + m(x − x1), where m is the line's slope.
Any lines that are completely contained within the window edges have a region code of 0000 for
both endpoints, and we save these line segments. Any line that has a region-code value of 1 in the
same bit position for each endpoint is completely outside the clipping rectangle, and we eliminate
that line segment. As an example, a line that has a region code of 1001 for one endpoint and a code of 0101 for the other endpoint is completely to the left of the clipping window, as indicated by the value of 1 in the first bit position (the left-boundary bit) of each region code.
When the OR operation between the two endpoint region codes for a line segment is false (0000), the line is completely inside the clipping window and we save it. When the AND operation between the two codes is true (nonzero), the line is completely outside the clipping window and we reject it.
Lines that cannot be identified as completely inside or completely outside a clipping window by these region-code tests are next checked for intersection with the window borders.
Phong model
• An empirical model for calculating the specular-reflection range, developed by Phong Bui Tuong and called the Phong specular-reflection model, or simply the Phong model.
• Angle φ can be assigned values in the range 0◦ to 90◦, so that cos φ varies from 0 to 1.0.
o A very shiny surface is modeled with a large value for ns (say, 100 or more)
• Using the spectral-reflection function W(θ), we can write the Phong specular-reflection model as
Il,spec = W(θ) Il cos^ns φ
where Il is the intensity of the light source, and φ is the viewing angle relative to the specular-reflection direction R.
Because V and R are unit vectors in the viewing and specular-reflection directions, we can calculate
the value of cos φ with the dot product V · R. In addition, no specular effects are generated for the
display of a surface if V and L are on the same side of the normal vector N or if the light source is
behind the surface.
Thus, assuming the specular-reflection coefficient ks is a constant for any material, we can determine the intensity of the specular reflection due to a point light source at a surface position with the calculation
Il,spec = ks Il (V · R)^ns
The direction for R, the reflection vector, can be computed from the directions of vectors L and N. The projection of L onto the direction of the normal vector has a magnitude equal to the dot product N · L, which is also equal to the magnitude of the projection of unit vector R onto the direction of N. Therefore,
R + L = (2N · L)N
R = (2N · L)N − L
If the viewing direction is taken along the positive z axis, we can set V = (0.0, 0.0, 1.0), which is a unit vector in the positive z direction.
A somewhat simplified Phong model is obtained using the halfway vector H between L and V to
calculate the range of specular reflections. If we replace V · R in the Phong model with the dot
product N · H, this simply replaces the empirical cos φ calculation with the empirical cos α
calculation (Figure 18). The halfway vector is obtained as
H = (L + V )/(|L + V|)
6a. Clip the polygon given in Fig. Q.6(a) using the Sutherland-Hodgman polygon clipping algorithm, with neat sketches.
b. Explain the different types of light sources supported by OpenGL.
A light source can be defined with a number of properties. We can specify its position, the color of
the emitted light, the emission direction, and its shape.
The simplest model for an object that is emitting radiant energy is a point light source with a single
color, specified with three RGB components. We define a point source for a scene by giving its
position and the color of the emitted light.
As shown in Figure 1, light rays are generated along radially diverging paths from the single-color
source position.
This light-source model is a reasonable approximation for sources whose dimensions are small
compared to the size of objects in the scene.
We can also simulate larger sources as point emitters if they are not too close to a scene.
We use the position of a point source in an illumination model to determine which objects in the
scene are illuminated by that source and to calculate the light direction to a selected object surface
position
A large light source, such as the sun, that is very far from a scene can also be approximated as a
point emitter, but there is little variation in its directional effects.
In contrast to a light source in the middle of a scene, which illuminates objects on all sides of the
source, a remote source illuminates the scene from only one direction.
The light path from a distant light source to any position in the scene is nearly constant, as illustrated
in Figure 2. We can simulate an infinitely distant light source by assigning it a color value and a fixed
direction for the light rays emanating from the source. The vector for the emission direction and the
light-source color are needed in the illumination calculations.
A local light source can be modified easily to produce a directional, or spotlight, beam of light.
If an object is outside the directional limits of the light source, we exclude it from illumination by
that source.
One way to set up a directional light source is to assign it a vector direction and an angular limit θl
measured from that vector direction, in addition to its position and color.
This defines a conical region of space with the light-source vector direction along the axis of the cone
(Figure 3).
A multicolor point light source could be modeled in this way using multiple direction vectors and a
different emission color for each direction. We can denote Vlight as the unit vector in the light-source direction and Vobj as the unit vector in the direction from the light position to an object position. Then
Vobj · Vlight = cos α
where angle α is the angular distance of the object from the light direction vector. If we restrict the
angular extent of any light cone so that 0◦ < θl ≤ 90◦, then the object is within the spotlight if cos α ≥
cos θl, as shown in Figure 4. If Vobj · Vlight < cos θl, however, the object is outside the light cone.
When we want to include a large light source at a position close to the objects in a scene, such as the
long neon lamp in Figure 5,
we can approximate it as a light-emitting surface. One way to do this is to model the light surface as a grid of directional point emitters.
We can set the direction for the point sources so that objects behind the light-emitting surface are
not illuminated.
We could also include other controls to restrict the direction of the emitted light near the edges of
the source.
The Warn model provides a method for producing studio lighting effects using sets of point emitters
with various parameters to simulate the barn doors, flaps, and spotlighting controls employed by
photographers.
Spotlighting is achieved with the cone of light discussed earlier, and the flaps and barn doors provide
additional directional control. For instance, two flaps can be set up for each of the x, y, and z
directions to further restrict the path of the emitted light rays. This light-source simulation is
implemented in some graphics packages.
According to the tristimulus theory of vision, our eyes perceive color through the stimulation of
three visual pigments in the cones of the retina. One of the pigments is most sensitive to light with a
wavelength of about 630 nm (red), another has its peak sensitivity at about 530 nm (green), and the
third pigment is most receptive to light with a wavelength of about 450 nm (blue). By comparing
intensities in a light source, we perceive the color of the light. This theory of vision is the basis for
displaying color output on a video monitor using the three primaries red, green, and blue, which is
referred to as the RGB color model.
We can represent this model using the unit cube defined on R, G, and B axes, as shown in Figure 11.
The origin represents black and the diagonally opposite
vertex, with coordinates (1, 1, 1), is white. Vertices of the cube on the axes represent the primary
colors, and the remaining vertices are the complementary color points for each of the primary
colors. As with the XYZ color system, the RGB color scheme is an additive model. Each color point within the unit cube can be represented as a weighted vector sum of the primary colors, using unit vectors R, G, and B:
C(λ) = (R, G, B) = R R + G G + B B
where parameters R, G, and B are assigned values in the range from 0 to 1.0. For example, the
magenta vertex is obtained by adding maximum red and blue values to produce the triple (1, 0, 1),
and white at (1, 1, 1) is the sum of the maximum values for red, green, and blue. Shades of gray are
represented along the main diagonal of the cube from the origin (black) to the white vertex. Points
along this diagonal have equal contributions from each primary color, and a gray shade halfway
between black and white is represented as (0.5, 0.5, 0.5). The color graduations along the front and
top planes of the RGB cube are illustrated in Color Plate 22.
Chromaticity coordinates for the National Television System Committee (NTSC) standard RGB
phosphors are listed in Table 1. Also listed are the RGB chromaticity coordinates within the CIE color
model and the approximate values used for phosphors in color monitors. Figure 12 shows the
approximate color gamut for the NTSC standard RGB primaries
[Table 1: chromaticity coordinates (x, y) for each phosphor; the table values are not reproduced in these notes]
A subtractive color model can be formed with the three primary colors cyan, magenta, and yellow.
As we have noted, cyan can be described as a combination of green and blue. Therefore, when white
light is reflected from cyan-colored ink, the reflected light contains only the green and blue
components, and the red component is absorbed, or subtracted, by the ink. Similarly, magenta ink
subtracts the green component from incident light, and yellow subtracts the blue component. A unit
cube representation for the CMY model is illustrated in Figure 13
In the CMY model, the spatial position (1, 1, 1) represents black, because all components of the
incident light are subtracted. The origin represents white light. Equal amounts of each of the primary
colors produce shades of gray along the main diagonal of the cube.
A combination of cyan and magenta ink produces blue light, because the red and green components
of the incident light are absorbed. Similarly, a combination of cyan and yellow ink produces green
light, and a combination of magenta and yellow ink yields red light.
The CMY printing process often uses a collection of four ink dots, which are arranged in a close
pattern somewhat as an RGB monitor uses three phosphor dots. Thus, in practice, the CMY color
model is referred to as the CMYK model, where K is the black color parameter. One ink dot is used
for each of the primary colors (cyan, magenta, and yellow), and one ink dot is black. A black dot is
included because the reflected light from combined cyan, magenta, and yellow inks typically produces only shades of gray. Some plotters produce different color combinations by spraying the ink for the three
primary colors over each other and allowing them to mix before they dry. For black-and-white or
grayscale printing, only the black ink is used.
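Because each CMY component is the complement of the corresponding RGB component, the transformation between the two color spaces is a simple subtraction from the unit column vector: (C, M, Y) = (1, 1, 1) − (R, G, B), and conversely (R, G, B) = (1, 1, 1) − (C, M, Y). A minimal C sketch:

```c
/* RGB <-> CMY: each CMY component is the complement of the matching RGB
   component; all components lie in [0, 1]. */
void rgb_to_cmy(const double rgb[3], double cmy[3])
{
    cmy[0] = 1.0 - rgb[0];   /* C = 1 - R */
    cmy[1] = 1.0 - rgb[1];   /* M = 1 - G */
    cmy[2] = 1.0 - rgb[2];   /* Y = 1 - B */
}

void cmy_to_rgb(const double cmy[3], double rgb[3])
{
    rgb[0] = 1.0 - cmy[0];   /* R = 1 - C */
    rgb[1] = 1.0 - cmy[1];   /* G = 1 - M */
    rgb[2] = 1.0 - cmy[2];   /* B = 1 - Y */
}
```

For example, red (1, 0, 0) maps to cyan's complement triple (0, 1, 1), matching the subtractive description above.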
Aug 2022
5a. What is clipping? Explain Cohen-Sutherland Line Clipping algorithm, with suitable example.
(Repeat)
• Ambient lighting
• Diffuse reflection
• Specular reflection
Ambient lighting
Ambient lighting is a type of global illumination that provides a uniform level of lighting throughout a
scene. It approximates the light that is reflected off of surfaces in the scene and bounces around the
environment, producing a subtle and indirect illumination effect.
Ambient lighting can be thought of as a form of diffuse reflection, as it affects all surfaces in the
scene equally, regardless of their orientation or viewing direction. It is independent of the position
and intensity of light sources in the scene, and it is typically used to provide a basic level of
illumination to objects and surfaces that are not directly lit by a light source.
In OpenGL, the global ambient light level can be set using the `glLightModelfv()` function with the GL_LIGHT_MODEL_AMBIENT parameter, which allows you to specify the ambient color and intensity. By default, OpenGL uses a dim gray global ambient of (0.2, 0.2, 0.2, 1.0), so objects receive a small amount of indirect illumination even when no light sources are enabled. By tuning this ambient level, you can create a more realistic and natural-looking scene.
Diffuse reflection
• The incident light on the surface is scattered with equal intensity in all directions, independent of
the viewing position. Such surfaces are called ideal diffuse reflectors.
• They are also referred to as Lambertian reflectors, because the reflected radiant light energy from
any point on the surface is calculated with Lambert’s cosine law.
• Diffuse (Lambertian) surfaces are rough or grainy, like clay, soil, or fabric.
• The bright spot, or specular reflection, that we can see on a shiny surface is the result of
total, or near total, reflection of the incident light in a concentrated region around the
specular-reflection angle.
• The figure below shows the specular-reflection direction for a position on an illuminated surface.
The specular reflection angle equals the angle of the incident light, with the two angles measured
on opposite sides of the unit normal surface vector N. In this figure,
L is the unit vector directed toward the point light source, and
V is the unit vector pointing to the viewer from the selected surface position.
6a. Explain the Sutherland-Hodgman polygon clipping algorithm. Find the final clipped polygon for the following Fig. Q6(a).
Send the pair of endpoints for each successive polygon line segment through the series of clippers. There are four possible cases:
1. If the first input vertex is outside this clipping-window border and the second vertex is inside,
both the intersection point of the polygon edge with the window border and the second vertex are
sent to the next clipper
2. If both input vertices are inside this clipping-window border, only the second vertex is sent to the
next clipper
3. If the first vertex is inside and the second vertex is outside, only the polygon edge intersection
position with the clipping-window border is sent to the next clipper
4. If both input vertices are outside this clipping-window border, no vertices are sent to the next
clipper
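The four cases above can be sketched as one stage of the clipping pipeline in C; this stage clips against the left window boundary, and the right, bottom, and top stages are analogous (names are illustrative):

```c
typedef struct { double x, y; } Point;

/* Intersection of edge p1-p2 with the left boundary x = xmin. */
static Point intersect_left(Point p1, Point p2, double xmin)
{
    Point r;
    r.x = xmin;
    r.y = p1.y + (p2.y - p1.y) * (xmin - p1.x) / (p2.x - p1.x);
    return r;
}

/* One stage of the Sutherland-Hodgman pipeline: clip polygon in[0..n-1]
   against the left boundary, write the result to out[], and return the
   output vertex count. */
int clip_left(const Point in[], int n, Point out[], double xmin)
{
    int i, count = 0;
    for (i = 0; i < n; i++) {
        Point p1 = in[i], p2 = in[(i + 1) % n];
        int inside1 = p1.x >= xmin, inside2 = p2.x >= xmin;
        if (!inside1 && inside2) {            /* case 1: out -> in  */
            out[count++] = intersect_left(p1, p2, xmin);
            out[count++] = p2;
        } else if (inside1 && inside2) {      /* case 2: in  -> in  */
            out[count++] = p2;
        } else if (inside1 && !inside2) {     /* case 3: in  -> out */
            out[count++] = intersect_left(p1, p2, xmin);
        }                                     /* case 4: out -> out: nothing */
    }
    return count;
}
```

Running a polygon through all four stages in sequence yields the fully clipped polygon.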
b. Write an OpenGL program to rotate a cube in all directions.
#include <stdio.h>   // preprocessor directive to include standard input/output header file
#include <GL/glut.h> // preprocessor directive to include GLUT header file

GLfloat vertices[][3] = { // the eight corner positions of a cube centered at the origin
    {-1.0, -1.0, -1.0}, {1.0, -1.0, -1.0}, {1.0, 1.0, -1.0}, {-1.0, 1.0, -1.0},
    {-1.0, -1.0, 1.0},  {1.0, -1.0, 1.0}, {1.0, 1.0, 1.0},  {-1.0, 1.0, 1.0}
};

GLfloat colors[][3] = { // declare and initialize an array of colors for each vertex
    {1.0, 0.64, 0.0},  // orange
    {0.5, 0.0, 1.0},   // purple
    {1.0, 0.0, 1.0},   // pink
    {0.0, 0.5, 1.0},   // sky blue
    {0.0, 1.0, 0.5},   // seafoam green
    {0.85, 0.58, 0.0}, // gold
    {0.85, 0.0, 0.63}, // magenta
    {0.0, 0.86, 0.64}  // teal
};

void polygon(int a, int b, int c, int d) // draw one four-vertex face of the cube
{
    glBegin(GL_POLYGON);      // begin drawing a polygon
    glColor3fv(colors[a]);    // set the color of the first vertex
    glVertex3fv(vertices[a]); // specify the position of the first vertex
    glColor3fv(colors[b]);    // second vertex
    glVertex3fv(vertices[b]);
    glColor3fv(colors[c]);    // third vertex
    glVertex3fv(vertices[c]);
    glColor3fv(colors[d]);    // fourth vertex
    glVertex3fv(vertices[d]);
    glEnd();                  // end drawing the polygon
}

void colorcube(void) // draw the cube as its six faces
{
    polygon(0, 1, 2, 3); // back face  (z = -1)
    polygon(4, 5, 6, 7); // front face (z = +1)
    polygon(0, 1, 5, 4); // bottom face (y = -1)
    polygon(3, 2, 6, 7); // top face    (y = +1)
    polygon(0, 3, 7, 4); // left face   (x = -1)
    polygon(1, 2, 6, 5); // right face  (x = +1)
}

static GLfloat theta[] = { 0.0, 0.0, 0.0 }; // rotation angles for each axis
static GLint axis = 2; // current rotation axis (0 = x, 1 = y, 2 = z)

void display(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    // Clear the color and depth buffers
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Load the identity matrix onto the modelview matrix stack
    glLoadIdentity();
    // Apply the three rotation transformations specified by theta
    glRotatef(theta[0], 1.0, 0.0, 0.0);
    glRotatef(theta[1], 0.0, 1.0, 0.0);
    glRotatef(theta[2], 0.0, 0.0, 1.0);
    // Draw the color cube using the current transformation matrix
    colorcube();
    // Swap the front and back buffers to display the newly rendered image
    glutSwapBuffers();
}

void spincube(void) // idle callback: advance the rotation
{
    // Increment the rotation angle for the current axis by a small amount
    theta[axis] += 0.005;
    // Wrap the angle around to keep it within the range [0, 360)
    if (theta[axis] > 360.0)
        theta[axis] -= 360.0;
    // Trigger a display update to render the newly rotated cube
    glutPostRedisplay();
}

void mouse(int btn, int state, int x, int y) // select the rotation axis with the mouse
{
    if (state != GLUT_DOWN) return;
    if (btn == GLUT_LEFT_BUTTON)   axis = 0;
    if (btn == GLUT_MIDDLE_BUTTON) axis = 1;
    if (btn == GLUT_RIGHT_BUTTON)  axis = 2;
}

void reshape(int w, int h) // keep the whole cube inside the view volume
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0, -2.0, 2.0, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Rotating Color Cube");
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutIdleFunc(spincube);
    glutMouseFunc(mouse);
    glEnable(GL_DEPTH_TEST); // enable hidden-surface removal
    glutMainLoop();
    return 0;
}
Sep 2020
An object is translated in three dimensions by transforming each of the defining coordinate positions
for the object, then reconstructing the object at the new location. For an object represented as a set
of polygon surfaces, we translate each vertex for each surface (Figure 2) and redisplay the polygon
facets at the translated positions. The following program segment illustrates construction of a
translation matrix, given an input set of translation parameters.
Three-Dimensional Rotation
- Easiest rotation axes to handle are those that are parallel to the Cartesian coordinate axes.
• The matrix expression for the three-dimensional scaling transformation of a position P = (x, y, z) relative to the coordinate origin is a simple extension of two-dimensional scaling. We include the parameter for z-coordinate scaling in the transformation matrix:
S(sx, sy, sz) =
| sx 0  0  0 |
| 0  sy 0  0 |
| 0  0  sz 0 |
| 0  0  0  1 |
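The matrix constructions described above (the translation matrix from a set of translation parameters, and the scaling matrix with the z-coordinate parameter included) can be sketched in C; function names and the row-major layout are illustrative choices:

```c
/* Construction of 4x4 homogeneous translation and scaling matrices
   (row-major), as described in the text; names are illustrative. */
void translate3d(double tx, double ty, double tz, double m[4][4])
{
    int i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            m[i][j] = (i == j) ? 1.0 : 0.0;   /* start from the identity */
    m[0][3] = tx;   /* last column holds the translation parameters */
    m[1][3] = ty;
    m[2][3] = tz;
}

void scale3d(double sx, double sy, double sz, double m[4][4])
{
    int i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            m[i][j] = 0.0;
    m[0][0] = sx;   /* scaling parameters on the diagonal */
    m[1][1] = sy;
    m[2][2] = sz;
    m[3][3] = 1.0;
}
```

Multiplying a homogeneous column vector (x, y, z, 1) by these matrices yields (x + tx, y + ty, z + tz, 1) and (sx·x, sy·y, sz·z, 1), respectively.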
July 2019
In OpenGL programming, the `glutReshapeFunc` function is part of the GLUT (OpenGL Utility Toolkit)
library, which provides a simplified API for handling windowing and input events in OpenGL
applications. The purpose of `glutReshapeFunc` is to register a callback function that is invoked
whenever the window is resized.
The `glutReshapeFunc` function takes a function pointer as its parameter, which should be a
function you define in your code. This function serves as the callback and is executed automatically
by GLUT whenever the window's dimensions change, such as when the user resizes the window or
the application programmatically alters its size.
```c++
void glutReshapeFunc(void (*func)(int width, int height));
```
The callback function you pass to `glutReshapeFunc` should have the following signature:
```c++
void ReshapeFunc(int width, int height);
```
The `width` and `height` parameters represent the new dimensions of the window after the resizing
event.
When a window resize event occurs, GLUT internally calls the registered callback function with the
updated width and height values. Inside your callback function, you can implement the desired logic
to respond to the window size change. For example, you might want to adjust the OpenGL viewport
and projection matrix to accommodate the new aspect ratio, or update the size of rendered objects.
```c++
#include <GL/glut.h>

void ReshapeFunc(int width, int height) {
    glViewport(0, 0, width, height); // cover the entire window
}

void Display(void) { glClear(GL_COLOR_BUFFER_BIT); glutSwapBuffers(); }

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Reshape example");
    glutDisplayFunc(Display);
    glutReshapeFunc(ReshapeFunc);
    glutMainLoop();
    return 0;
}
```
In the example above, the `ReshapeFunc` function is registered as the callback for handling window
resize events using `glutReshapeFunc`. Inside this function, we set the OpenGL viewport to cover the
entire window by using `glViewport`. You can add additional code to handle other aspects of the
resize event according to your application's requirements.
Remember to include the appropriate GLUT header (`<GL/glut.h>`) and link against the GLUT library
when compiling your program.
Overall, the `glutReshapeFunc` function allows you to define a callback function that is triggered
whenever the window is resized, giving you the ability to adapt your OpenGL rendering based on the
new dimensions of the window.
b. With the help of suitable diagram explain basic 3D geometric transformation techniques and
give the transformation matrix. (Repeat)
Multiple point light sources can be included in an OpenGL scene description, and various properties, such as position, type, color, attenuation, and spotlight effects, can be set for each source.
The three OpenGL property constants for radial intensity attenuation are GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION, which correspond to the coefficients a0, a1, and a2. Either a positive integer value or a positive floating-point value can be used to set each attenuation coefficient. For example, we could assign the radial-attenuation coefficient values as
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.5);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.75);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.4);
Once the values for the attenuation coefficients have been set, the radial attenuation function is
applied to all three colors (ambient, diffuse, and specular) of the light source. Default values for the
attenuation coefficients are a0 = 1.0, a1 = 0.0, and a2 = 0.0. Thus, the default is no radial attenuation:
fl,radatten = 1.0. Although radial attenuation can produce more realistic displays, the calculations
are time consuming.
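The radial attenuation function itself can be sketched directly in C; a0, a1, and a2 are the coefficients set through the three attenuation constants named above, and d is the distance from the light source:

```c
#include <math.h>

/* Radial intensity attenuation f(d) = 1 / (a0 + a1*d + a2*d^2), where
   a0, a1, a2 correspond to the GL_CONSTANT_ATTENUATION,
   GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION coefficients. */
double radial_atten(double a0, double a1, double a2, double d)
{
    return 1.0 / (a0 + a1 * d + a2 * d * d);
}
```

With the defaults a0 = 1.0, a1 = a2 = 0.0, the function is constant 1.0 for every distance, i.e., no radial attenuation.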
void init() {
    glEnable(GL_DEPTH_TEST); // Enable depth testing
    glEnable(GL_LIGHTING);   // Enable lighting
    glEnable(GL_LIGHT0);     // Enable light source 0
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen
    glMatrixMode(GL_MODELVIEW); // Switch to the modelview matrix
    glLoadIdentity(); // Reset the matrix
    gluLookAt(5.0, 5.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0); // Set the view matrix
    glutSwapBuffers(); // Display the rendered frame
}
6a. What is clipping? With the help of a suitable example, explain the Cohen-Sutherland line clipping algorithm. (Repeat)
b. Design a transformation matrix to rotate a 3D object about an axis that is parallel to one of the coordinate axes (same as before), e.g., taking the z axis as the viewing direction.
A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a
composite transformation involving combinations of translations and the coordinate-axis rotations.
1.We first move the designated rotation axis onto one of the coordinate axes.
2.Then we apply the appropriate rotation matrix for that coordinate axis.
3.The last step in the transformation sequence is to return the rotation axis to its original position.
This composite matrix is of the same form as the two-dimensional transformation sequence for
rotation about an axis that is parallel to the z axis (a pivot point that is not at the coordinate origin).
When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we
must perform some additional transformations.
We also need rotations to align the rotation axis with a selected coordinate axis and then to bring the rotation axis back to its original orientation. Given the specifications for the rotation axis and the rotation angle, we can accomplish the required rotation in five steps (detailed in the arbitrary-axis rotation answer below).
c. With the help of a suitable diagram, explain basic illumination, RGB and CMY colour models
(Repeat)
Jan 2020
5a. What is clipping? Explain with example the Sutherland-Hodgman polygon clipping algorithm.
(Repeat)
6a. Explain RGB and CMY color models with examples. Explain the transformation between CMY
and RGB color spaces.
b. Obtain the matrix representation for rotation of an object about an arbitrary axis.
For rotation about an arbitrary axis that is not parallel to one of the coordinate axes, we can accomplish the required rotation in five steps. Consider the initial position of the object in the figure below.
We assume that the rotation axis is defined by two points P1 = (x1, y1, z1) and P2 = (x2, y2, z2), as illustrated, and that the direction of rotation is to be counterclockwise when looking along the axis from P2 to P1. The components of the rotation-axis vector are then computed as
V = P2 − P1 = (x2 − x1, y2 − y1, z2 − z1)
and the unit rotation-axis vector is u = V/|V| = (a, b, c).
With these definitions, the five steps are as follows:
1. Translate the object so that the rotation axis passes through the coordinate origin. The translation matrix T(−x1, −y1, −z1) repositions the rotation axis and the object.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes (say, the z axis). If we represent the projection of u in the yz plane as the vector u' = (0, b, c), then the cosine of the rotation angle α about the x axis can be determined from the dot product of u' and the unit vector uz along the z axis:
cos α = (u' · uz)/|u'| = c/d, where d = √(b² + c²)
A second rotation, about the y axis, then swings the axis onto the z axis. Let u'' be the unit vector in the xz plane resulting from the rotation about the x axis. This vector has the value a for its x component, because rotation about the x axis leaves the x component unchanged. Its z component is d (the magnitude of u'), because vector u' has been rotated onto the z axis. Also, the y component of u'' is 0, because it now lies in the xz plane. Again, we can determine the cosine of rotation angle β from the dot product of unit vectors u'' and uz. Thus,
cos β = u'' · uz = d
3. Perform the specified rotation about the selected coordinate axis.
4. Apply the inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original spatial position
The transformation matrix for rotation about an arbitrary axis can then be expressed as the composition of these seven individual transformations:
R(θ) = T⁻¹ · Rx⁻¹(α) · Ry⁻¹(β) · Rz(θ) · Ry(β) · Rx(α) · T
Jan 2023
5a. Develop the Cohen Sutherland Line clipping program using OpenGL.
#include <stdio.h>
#include <GL/glut.h>

// Region-code bit masks for the four clipping-window boundaries
#define LEFT   1
#define RIGHT  2
#define BOTTOM 4
#define TOP    8

// Clipping-window boundaries
double xmin = 200, ymin = 200, xmax = 300, ymax = 300;

// Compute the 4-bit region code for the point (x, y)
int computecode(double x, double y)
{
    int code = 0;
    if (x < xmin) code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin) code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

// Clip the segment (x0,y0)-(x1,y1) against the window and draw the accepted part
void cohenSutherland(double x0, double y0, double x1, double y1)
{
    // Compute the region codes for the two endpoints of the line segment
    int outcode0 = computecode(x0, y0);
    int outcode1 = computecode(x1, y1);
    int accept = 0, done = 0;
    // Loop until the line is either completely inside, completely outside, or clipped
    do {
        if (!(outcode0 | outcode1)) { // bitwise OR: trivially accept the line
            accept = 1;
            done = 1;
        }
        else if (outcode0 & outcode1) // bitwise AND: trivially reject the line
            done = 1;
        else {
            // Compute the intersection point of the line with a window boundary
            double x, y;
            int outcodeout = outcode0 ? outcode0 : outcode1; /* whichever endpoint
                is outside the window; that endpoint gets clipped */
            if (outcodeout & TOP) {
                x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0);
                y = ymax;
            }
            else if (outcodeout & BOTTOM) {
                x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0);
                y = ymin;
            }
            else if (outcodeout & RIGHT) {
                y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0);
                x = xmax;
            }
            else { // LEFT
                y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0);
                x = xmin;
            }
            // Replace the outside endpoint with the intersection and recompute its code
            if (outcodeout == outcode0) {
                x0 = x; y0 = y; outcode0 = computecode(x0, y0);
            } else {
                x1 = x; y1 = y; outcode1 = computecode(x1, y1);
            }
        }
    } while (!done);
    if (accept) { // draw the clipped line segment in red
        glColor3f(1.0, 0.0, 0.0);
        glBegin(GL_LINES);
        glVertex2d(x0, y0);
        glVertex2d(x1, y1);
        glEnd();
    }
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0, 0.0, 1.0); // draw the clipping window in blue
    glBegin(GL_LINE_LOOP);
    glVertex2d(xmin, ymin); glVertex2d(xmax, ymin);
    glVertex2d(xmax, ymax); glVertex2d(xmin, ymax);
    glEnd();
    cohenSutherland(150.0, 150.0, 350.0, 280.0); // a sample line to clip
    glFlush();
}

void myinit(void)
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    gluOrtho2D(0.0, 500.0, 0.0, 500.0);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("Cohen-Sutherland Line Clipping Algorithm");
    myinit();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
b. Discuss the RGB color model and CMY color model. (Repeat)
Jan 2020
b. Write an OpenGL program to draw a color cube and spin it using OpenGL functions(Repeat)
c. What are the basic illumination models? Briefly, explain phong illumination model. (Repeat)
Aug 2021
Clipping: Any procedure that eliminates those portions of a picture that are either inside or outside a specified region of space.
1. Efficiency: Clipping helps improve the efficiency of rendering operations by reducing the
amount of computation required to process and render objects. By discarding or clipping
away the portions of objects that are outside the view, the graphics system can focus only
on rendering the visible parts, saving computational resources.
2. Hidden surface removal: Clipping plays a crucial role in hidden surface removal or occlusion
culling. When objects or primitives extend beyond the view or clipping region, they may
obscure or hide other objects that should be visible. By clipping away the hidden or occluded
portions, the graphics system ensures that only the visible surfaces are rendered, providing a
more accurate representation of the scene.
3. Visual aesthetics: Clipping helps maintain visual aesthetics and prevents objects from
extending beyond the boundaries of the display or viewport. By clipping away the parts that
are outside the visible region, the graphics system ensures that the displayed objects fit
within the screen or viewport, resulting in a more visually pleasing and properly composed
image.
b. Write a note on basic illumination models along with their corresponding OpenGL functions.
(Repeat)
6a. Explain the composite transformation sequence for general 3D rotations of an object about an
axis that is parallel to x-axis. Also give the specifications for the rotation axis and rotation angle.
(Repeat)
To perform a general 3D rotation of an object about an axis parallel to the x-axis, you need to specify the rotation axis and the rotation angle. Here are the specifications:
Rotation Axis: The rotation axis for this specific rotation is parallel to the x-axis. This means the axis lies along a line on which the y and z coordinates are fixed while the x-coordinate can vary. The rotation axis parallel to the x-axis can therefore be represented by the equations y = y0, z = z0.
Rotation Angle: The rotation angle specifies the amount of rotation around the rotation axis. It
determines how far the object will rotate in degrees or radians. You need to specify a value for the
rotation angle, denoted as θ, which can be any real number.
For example, if you want to rotate an object 45 degrees counterclockwise around an axis parallel to the x-axis, the specifications would be:
Rotation Axis: the line y = y0, z = z0 (e.g., y = 0, z = 0, which is the x-axis itself); Rotation Angle: θ = 45°.
These specifications will determine the specific transformation matrix or equations needed to
perform the rotation around the x-axis for the given object.
c. Write the expression for the transformation between CMY and RGB color space(matrix)
(Repeat)
Jan 2021
July 2018
(x0,y0) = (60,20)
Any method for explaining the properties or behavior of color within some particular context is
called a color model. No single model can explain all aspects of color, so we make use of different
models to help describe different color characteristics.
Find the final clipped vertices for the following Fig Q6(a)
b. Explain specular Reflection and phong model. (Repeat)