Chapter 11

Color and Shading Models


(Based on Hearn and Baker and
Schaum Outline Series)
Viewing Models
Need for different models
1. Light and Color Models: properties of light and representation of colour schemes on the computer.
2. Illumination Models: computation of color intensity using the Phong model.
3. Shading Models: rendering the intensity at every point of the surface.

Contents

1. Light and Color (Not in Syllabus)


2. Illumination Models
3. Texture Surface Patterns
4. Displaying Light Intensities
5. Interpolative shading models
Color Models
– There are many color models, related to the subjective description of colors. A model can be
• Additive: start with no light, and add red, green, and blue light to make white and the complementary colours
• Subtractive: start with white light, and subtract red, green, and blue light to achieve the complementary colours and black
– The choice of a particular model depends on the context in
which we want to describe colors. Example color models
are:
• RGB
• YIQ
• CMY and CMYK
• HSI
RGB Color Model
– Based on the tristimulus theory of vision, our eyes perceive color through the stimulation of three visual pigments in the cones of the retina.
– These visual pigments have peak sensitivities at wavelengths of about 630 nm, 530 nm, and 450 nm.
– It is an additive color model, as any color C is expressed as
C = rR + gG + bB
where the r, g, and b values range from 0 to 1 and designate the amounts of the primaries needed to match C.



RGB Color Model

http://en.wikipedia.org/wiki/RGB
RGB Color Model

– Each RGB channel is given a number from 0 to 255, from black (R = 0, G = 0, B = 0) up to white (R = 255, G = 255, B = 255).
Thus the 256 levels of one channel can be represented in one byte.
– Total possible colors from the RGB code are
256 x 256 x 256 = 16,777,216 colours
– This is true colour, but it requires 3 bytes per pixel (24-bit colour).
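As an aside (not from the slides), a minimal Python sketch of packing three 8-bit channels into one 24-bit true-colour value and unpacking it again; the helper names pack_rgb and unpack_rgb are our own:

def pack_rgb(r: int, g: int, b: int) -> int:
    # Pack three 0-255 channel values into a single 24-bit integer.
    return (r << 16) | (g << 8) | b

def unpack_rgb(value: int) -> tuple[int, int, int]:
    # Recover the (r, g, b) channels from a 24-bit colour value.
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

print(hex(pack_rgb(255, 255, 255)))   # 0xffffff (white)
print(unpack_rgb(0xFF8000))           # (255, 128, 0)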



CMY Color Model
– Three secondary colors: Cyan, Magenta, Yellow
– It is a subtractive color model
– Same as RGB except white is at the origin and black is at
the extent of the diagonal
– Instead of mixing colors to add, we put down colors that
subtract light
• Paint, ink, etc.
– Very important for hardcopy devices
• The page is already white, so we can’t add to it!



CMY Color Model



CMY Color Model

$$\begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} C \\ M \\ Y \end{bmatrix}$$

White minus Blue minus Green = Red
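As a small illustration (assuming normalized channel values in [0, 1]; the function names are ours, not from the slides), the matrix relation above in Python:

def rgb_to_cmy(r: float, g: float, b: float) -> tuple[float, float, float]:
    # Subtractive complement: each CMY component is 1 minus the RGB component.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c: float, m: float, y: float) -> tuple[float, float, float]:
    return 1.0 - c, 1.0 - m, 1.0 - y

# Full magenta and yellow ink absorb green and blue, leaving red:
print(cmy_to_rgb(0.0, 1.0, 1.0))   # (1.0, 0.0, 0.0)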



CMYK Color Model
– Cyan-Magenta-Yellow-Black, pronounced as the separate letters C-M-Y-K.
– It is also based on the subtractive synthesis model
– CMYK is a color model in which all colors are described as
a mixture of these four process colors.
– CMYK are commonly used for inks.
– CMYK is the standard color model used in offset printing
for full-color documents. Because such printing uses inks
of these four basic colors, it is often called four-color
printing.



Contents

1. Light and Color (Not in Syllabus)


2. Illumination Models
3. Texture Surface Patterns
4. Displaying Light Intensities
5. Interpolative shading models
Illumination Models
– Illumination models calculate the intensity of light at any given point; surface rendering algorithms use these models. Without lighting effects nothing looks three-dimensional, as RGB values alone are not sufficient.

The Phong Model
– Developed in 1973 by Phong Bui Tuong, it is a widely used and highly effective method to mimic the reflection of light as seen from the viewer’s eye.

– It is an empirical approach based on basic principles of


physics.

– It is a local illumination model that computes the direct impact of the light coming from the light sources present, whereas a global illumination model also includes transparency and refractions.
The Phong Model
– The model computes the direct impact of

1. Diffuse reflection in the presence of ambient light


2. Diffuse reflection in the presence of single point light source
3. Specular reflections in the presence of single point light source

– Can be extended easily for multiple light sources

Example

[Figure: the ambient, diffuse, and specular components combine into the final image]

Example

Ambient Light
– Why are objects that are not directly lit typically still visible?
• e.g., the ceiling in this room, undersides of desks

– They are lit up by reflections from other nearby objects.

– Typical examples are light from


• a cloudy day, moonlight, sunlight scattered by fog or haze

Ambient Light
Properties:
1. Independent of surface position and orientation: all surfaces are equally illuminated. However, the amount reflected depends on surface properties.
2. Independent of the direction of light: it has a source, but the light has bounced around so much that it is effectively directionless.
3. Independent of viewing angle: illumination caused by ambient light is also independent of the viewer position.

Ambient light has no spatial or directional characteristics

Ambient Light

4. The total reflected light from a surface is the sum of the


contributions from light sources and reflected light

5. Too expensive to calculate (in real time), so we use a hack


called an ambient light source and incorporate it by simply
setting a general brightness level for a scene.

We will denote this value as Ia

Diffuse Reflection
– Surfaces that are rough or grainy tend to reflect light in all
directions
– This scattered light is called diffuse reflection

Diffuse Reflection
– Diffuse reflectivity is modeled by a diffuse-reflection coefficient
• A parameter kd is set for each surface that determines the fraction of incident light that is to be scattered as diffuse reflections from that surface
– kd is assigned a value between 0.0 and 1.0
• 0.0: dull surface that absorbs almost all light
• 1.0: shiny surface that reflects almost all light
– kd can be modeled as a function of surface texture and colour

Diffuse Reflection
– In the presence of Ambient Light

– For background lighting effects we can assume that every


surface is fully illuminated by the scene’s ambient light Ia

– Therefore the ambient contribution to the diffuse reflection


is given as:

$I_{amb,diff} = k_a I_a$
Diffuse Reflection
– In the presence of Single Point Light Source
– When a surface is illuminated by a light source, the amount
of incident light depends on the orientation of the surface
relative to the light source direction

Diffuse Reflection
Ideal diffuse reflectors
– Surfaces reflect incident light with equal intensity in all
directions.
– At the microscopic level, such a surface is very rough
• (real-world example: chalk)
– They reflect according to Lambert’s cosine law. These are
often called Lambertian surfaces.
– Reflected intensity is independent of the viewing direction,
but does depend on the surface orientation with regard to
the light source

Lambert’s Cosine Law
Ideal diffuse surfaces reflect according to Lambert’s
cosine law:
– The radiant energy reflected by a small portion of surface area ΔA from a light source in a given direction θ is proportional to the cosine of the angle between that direction and the surface normal.

Diffuse Reflection
– The angle between the incoming light direction and a
surface normal is referred to as the angle of incidence given
as θ

Diffuse Reflection
– So the amount of incident light on a surface is given as:

$I_{l,incident} = I_l \cos\theta$

– Further, the amount of light reflected by a surface depends on its colour and texture. Hence we define a diffuse-reflectivity coefficient $k_d = f(\text{texture}, \text{colour})$ for a given surface and model the diffuse reflections as:

$I_{l,diff} = k_d I_{l,incident} = k_d I_l \cos\theta$
Diffuse Reflection
– Assuming we denote the normal for a surface as N
and the unit direction vector to the light source
as L then:
$N \cdot L = \cos\theta$

and

$I_{l,diff} = \begin{cases} k_d I_l (N \cdot L) & \text{if } N \cdot L > 0 \\ 0 & \text{if } N \cdot L \le 0 \end{cases}$
Diffuse Reflection
– Combining Ambient And Incident Diffuse Reflections
– To combine the diffuse reflections arising from ambient
and incident light most graphics packages use two separate
diffuse-reflection coefficients:
• ka for ambient light
• kd for incident light
– The total diffuse reflection equation for a single point
source can then be given as:
$I_{diff} = \begin{cases} k_a I_a + k_d I_l (N \cdot L) & \text{if } N \cdot L > 0 \\ k_a I_a & \text{if } N \cdot L \le 0 \end{cases}$
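A minimal numpy sketch of this combined diffuse term, assuming N and L are unit vectors (the function name is our own, not from the book):

import numpy as np

def diffuse_intensity(k_a, I_a, k_d, I_l, N, L):
    # k_a*I_a plus the Lambertian term k_d*I_l*(N.L), dropped when N.L <= 0.
    n_dot_l = float(np.dot(N, L))
    if n_dot_l > 0.0:
        return k_a * I_a + k_d * I_l * n_dot_l
    return k_a * I_a

# Surface facing straight up, light 45 degrees off the normal:
N = np.array([0.0, 0.0, 1.0])
L = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
print(diffuse_intensity(0.1, 0.3, 0.7, 1.0, N, L))   # 0.03 + 0.7*cos(45°) ≈ 0.525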

Example
– We need only consider angles from 0° to 90° (Why?)
– A Lambertian sphere seen at several different lighting
angles:

Specular Reflection
– In addition to diffuse reflection, some of the reflected light is concentrated into a highlight or bright spot
– This is called specular reflection

Specular Reflection
– Shiny surfaces exhibit specular reflection
• Polished metal
• Glossy car finish

– These highlights appear as a function of the viewer’s


position, so specular reflection is viewer dependent

Specular Reflection
– The bright spot that we see on a shiny surface is the result of near-total reflection of the incident light in a concentrated region around the specular-reflection angle
– For ideal specular reflection, the angle of reflection equals the angle of the incident light
[Figure: unit normal N to the surface, unit vector L toward the light source, unit vector R in the direction of ideal specular reflection, unit vector V to the viewer]

The Phong Specular Reflection
Model
– The Phong model sets the intensity of specular reflection proportional to a power of the cosine of the angle between the viewing vector and the specular-reflection vector
[Figure: unit normal N to the surface, unit vector L toward the light source, unit vector R in the direction of ideal specular reflection, unit vector V to the viewer]

The Phong Specular Reflection
Model
– The specular-reflection exponent ns is determined by the type of surface we want to display
• Shiny surfaces have a very large value (> 100)
• Rough surfaces have a value near 1

The Phong Specular Reflection
Model
– A perfect mirror reflects light only in the specular-
reflection direction
– Other objects exhibit specular reflections over a finite range
of viewing positions around vector R

The Phong Specular Reflection
Model
– The angle φ can vary between 0° and 90°, so that cos φ varies from 1.0 to 0.0
– So the specular-reflection intensity is proportional to $\cos^{n_s}\phi$

The Phong Specular Reflection
Model
specular reflection coefficient ks
– For some materials the amount of specular reflection
depends heavily on the angle of the incident light.

– Fresnel’s Laws of Reflection describe in great detail how


specular reflections behave.

– However, we don’t need to worry about this and instead


approximate the specular effects with a constant specular
reflection coefficient ks

The Phong Specular Reflection
Model
– So the specular reflection intensity is given as:

$I_{l,spec} = k_s I_l \cos^{n_s}\phi$

– Remembering that $V \cdot R = \cos\phi$, we can say:

$I_{l,spec} = \begin{cases} k_s I_l (V \cdot R)^{n_s} & \text{if } V \cdot R > 0 \text{ and } N \cdot L > 0 \\ 0.0 & \text{if } V \cdot R < 0 \text{ or } N \cdot L \le 0 \end{cases}$

Direction of Ideal Reflection
$R + L = 2(\cos\theta)N = 2(N \cdot L)N$
$R = 2(N \cdot L)N - L$
[Figure: L and R make equal angles with the normal N, i.e. $\theta_L = \theta_R$; the projection of L onto N is $(N \cdot L)N = \cos\theta_i\, N$]
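A quick numpy sketch of this mirror formula, assuming N and L are unit vectors (the function name is ours):

import numpy as np

def reflect(N, L):
    # Ideal specular reflection direction R = 2(N.L)N - L.
    return 2.0 * np.dot(N, L) * N - L

N = np.array([0.0, 1.0, 0.0])                  # surface normal
L = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # unit vector toward the light
print(reflect(N, L))                           # [-0.707, 0.707, 0]: L mirrored about N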
Direction of Ideal Reflection
– Instead of performing this cumbersome computation, we can replace R with the halfway vector H, as used in the simplified Phong model. Moreover, ideal specular reflection may not be observed in all cases.

$H = \frac{L + V}{|L + V|}$

so that

$I_{l,spec} = \begin{cases} k_s I_l (N \cdot H)^{n_s} & \text{if } N \cdot H > 0 \text{ and } N \cdot L > 0 \\ 0.0 & \text{if } N \cdot H < 0 \text{ or } N \cdot L \le 0 \end{cases}$

Example

Combining Diffuse & Specular
Reflections
– For a single light source we can combine the effects of
diffuse and specular reflections simply as follows:
$I = I_{diff} + I_{spec} = I_{amb,diff} + I_{l,diff} + I_{l,spec} = k_a I_a + k_d I_l (N \cdot L) + k_s I_l (V \cdot R)^{n_s}$
[Figure: geometry at the surface point: normal N, light direction L and reflection direction R at equal angles θ, viewer direction V at angle α from R]
Combining Diffuse & Specular
Reflections
– For Color Components
$I_r = k_{a,r} I_a + k_{d,r} I_l (N \cdot L) + k_s I_l (V \cdot R)^{n_s}$
$I_g = k_{a,g} I_a + k_{d,g} I_l (N \cdot L) + k_s I_l (V \cdot R)^{n_s}$
$I_b = k_{a,b} I_a + k_{d,b} I_l (N \cdot L) + k_s I_l (V \cdot R)^{n_s}$

For example, kd = (0.7, 0.7, 0.3) gives a yellowish surface. ks is kept the same for all channels because the specularly reflected color is the same as the incident color.

Combining Diffuse & Specular
Reflections
– For multiple light sources we can place n light sources in a scene (a common exam question):

$I = I_{amb,diff} + \sum_{i=1}^{n} \left[ I_{l_i,diff} + I_{l_i,spec} \right] = k_a I_a + \sum_{i=1}^{n} I_{l_i} \left[ k_d (N \cdot L_i) + k_s (V \cdot R_i)^{n_s} \right]$
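A compact numpy sketch of this multi-source Phong model, assuming all vectors are unit length; the parameter names follow the slides (ka, kd, ks, ns), but the function itself is our own illustration rather than code from the book:

import numpy as np

def phong_intensity(k_a, I_a, k_d, k_s, n_s, N, V, lights):
    # lights is a list of (I_l, L) pairs: source intensity and unit direction to the source.
    I = k_a * I_a
    for I_l, L in lights:
        n_dot_l = float(np.dot(N, L))
        if n_dot_l <= 0.0:
            continue                       # light is behind the surface
        R = 2.0 * n_dot_l * N - L          # ideal reflection direction
        v_dot_r = max(float(np.dot(V, R)), 0.0)
        I += I_l * (k_d * n_dot_l + k_s * v_dot_r ** n_s)
    return I

N = np.array([0.0, 0.0, 1.0])
V = np.array([0.0, 0.0, 1.0])                            # viewer straight above the surface
lights = [(1.0, np.array([0.0, 0.0, 1.0])),
          (0.5, np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0))]
print(phong_intensity(0.1, 0.2, 0.6, 0.4, 50, N, V, lights))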

[Figure panels: wireframe model; with ambient light; with diffuse reflection; with specular reflection]

Try this Demo
• There’s a very nice Java illumination model demo, which may help you understand the effects of different kinds of reflections, available at:

http://www.siggraph.org/education/materials/HyperGraph/illumin/vrml/pellucid.html

• Try playing with the various parameters and see if you can predict what the sphere will look like.
Refraction Model
– Translucent surfaces allow some light to penetrate the surface and emerge from another location on the surface.

– Some light might be reflected at the surface.

– Water and glass are translucent surfaces.

– Translucent properties are set somewhat with the alpha


channel, the fourth component in the color settings.

Refraction Model

Refraction Model
– When light is incident upon a translucent surface, part of it
is reflected and part of it is refracted.
– The speed of light is different in different materials, hence
the path of light changes with the change in medium.

Refraction Model
– The direction of the refracted light is specified by the angle of refraction, which is a function of the index of refraction of each material and the direction of the incident light.

$\theta_r = f(\eta_i, \eta_r, \theta_i)$

θi = angle of incidence
θr = angle of refraction
ηi : index of refraction of material 1
ηr : index of refraction of material 2

Refraction Model
– For thin surfaces, we can ignore the change in direction
– Assume light travels straight through the surface:

$T \approx -L$

Typical values: ηi = 1.0 for air, ηr = 1.5 for crown glass.
[Figure: light L incident at angle θi on a thin surface; the transmitted ray T emerges at angle θr, essentially parallel to the incident direction]
Refraction Model

For solid objects, apply Snell’s law:

$\sin\theta_r = \frac{\eta_i}{\eta_r}\sin\theta_i \qquad\text{i.e.}\qquad \eta_r \sin\theta_r = \eta_i \sin\theta_i$

θi = angle of incidence, θr = angle of refraction
ηi : index of refraction of the incident (light-side) material
ηr : index of refraction of the surface material
[Figure: incident ray L at angle θi to the normal N, transmitted ray T at angle θr inside the surface material]
Refraction Model
• To avoid cumbersome computations, the refraction effect is modeled by simply shifting the path of the incident light by the unit transmission vector T, calculated as:

$T = \left(\frac{\eta_i}{\eta_r}\cos\theta_i - \cos\theta_r\right)N - \frac{\eta_i}{\eta_r}L$

θi = angle of incidence, θr = angle of refraction
ηi : index of refraction of the incident (light-side) material
ηr : index of refraction of the surface material

• T can be used to find the intersection of the refracted light with objects behind the transparent surface.
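A numpy sketch of computing T via Snell's law, assuming N and L are unit vectors; the function name and the total-internal-reflection check are our own additions:

import numpy as np

def transmission_vector(N, L, eta_i, eta_r):
    # Unit refraction direction T for unit normal N and unit light direction L.
    ratio = eta_i / eta_r
    cos_i = float(np.dot(N, L))                  # cos of the angle of incidence
    sin_r_sq = ratio ** 2 * (1.0 - cos_i ** 2)   # sin^2 of the refraction angle
    if sin_r_sq > 1.0:
        return None                              # total internal reflection
    cos_r = np.sqrt(1.0 - sin_r_sq)
    return (ratio * cos_i - cos_r) * N - ratio * L

N = np.array([0.0, 0.0, 1.0])
L = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)     # incidence angle of 45 degrees
T = transmission_vector(N, L, 1.0, 1.5)          # air into crown glass
print(T)                                         # the ray bends toward the normal (about 28 degrees)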

Refraction Model
– Further, if the index of refraction of S1 ≈ the index of refraction of S2, then the angle of refraction ≈ the angle of incidence.
– The effect of transparency can then be modeled by setting a transparency coefficient KT equal to the fraction of light transmitted:
• KT = 1 for a fully translucent object, KT = 0 for an opaque object
• 0 < KT < 1 for a semi-translucent object
• 1 – KT is called the opacity factor
– Total surface intensity can be calculated as
$I = (1 - K_T)\, I_{refl} + K_T\, I_{trans}$
Reflections

Reflection and Refraction

Refraction

Complex Scenes

Complex Scenes

Many Interesting Features

Contents

1. Light and Color


2. Illumination Models
3. Texture Surface Patterns
4. Displaying Light Intensities
5. Interpolative shading models
Texture Surface Patterns
– So far: we have displayed smooth surfaces.

– Most objects do not have smooth surfaces. Examples: the


earth, an orange, a vase, a painting, a carpet, grass, …

– Modeling and rendering an object proceeds through several stages:
• Wireframe model
• Smooth surface model
• Surface details added – either by drawing the details directly, or by
applying an image on the surface.

Texture Surface Patterns
– Applying surface detail can be made through
• Pasting small objects
• Modeling surface details through polygons
• Texture mapping – applying a pattern – a texture – to determine the
color of a surface
• Bump mapping – distorting the shape of an object to create
variations
• Fractal generation
• Reflection or environmental mapping – modeling global reflections
looking like ray-tracing.

Texture Surface Patterns
– Modeling surface detail through polygons

• When the surface detail presents important regularity.

• Example: squares on a checkerboard, tiles on the floor, floral


design on a carpet.

• Irregular surface could be modeled through small, randomly


oriented polygon facets.

• May modify the illumination, since the new surface illumination


will be predominant over the covered surface.

Texture Surface Patterns
– Texture mapping consists of mapping patterns onto the geometric description of an object.

– Texture patterns are supplied
• either as an array of color values
• or as a procedure that modifies existing colors
• or as an image from an image file

Contents

1. Light and Color


2. Illumination Models
3. Texture Surface Patterns
4. Displaying Light Intensities
5. Interpolative shading models
Displaying Light Intensities
– Values of intensity given by the illumination models have
to be converted to some allowable intensity levels for a
particular graphics system in use.

– So we will study
• Assignment of intensity level
• Displaying continuous tones

Assigning Intensity Levels
– Assuming that the intensity value ranges from 0 to 1 in a normalized environment, four non-zero intensity levels could be
• ⅛, ¼, ½ and 1
– If we label them as $I_0, I_1, I_2, \ldots, I_n$, then

$\frac{I_1}{I_0} = \frac{I_2}{I_1} = \cdots = \frac{I_n}{I_{n-1}} = r$

$\Rightarrow I_k = r^k I_0, \qquad r = \left(\frac{1}{I_0}\right)^{1/n}, \qquad\text{or}\qquad I_k = I_0^{(n-k)/n}$
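A small Python sketch of this geometric scale (the helper name is ours):

def intensity_levels(I0, n):
    # Return I_k = r**k * I0 for k = 0..n, where r = (1/I0)**(1/n).
    r = (1.0 / I0) ** (1.0 / n)
    return [I0 * r ** k for k in range(n + 1)]

print([round(v, 4) for v in intensity_levels(0.125, 3)])   # [0.125, 0.25, 0.5, 1.0]
print(round((1.0 / 0.01) ** (1.0 / 255), 4))               # r = 1.0182 for the monitor example below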

Assigning Intensity Levels
– Thus for a black and white monitor with
• 8 bits per pixel (n = 255)
• I0 = 0.01
• the ratio of successive intensities is r = 1.0182

– The 256 intensities are
• 0.0100, 0.0102, 0.0104, 0.0106, 0.0107, 0.0109, …, 0.9821, 1.000

– The intensity values form a geometric progression (G.P.), not an arithmetic progression (A.P.)

Displaying Continuous Tones
– The human eye can see more intensity levels than the screen can offer…

– We may need to map the required intensity levels onto the ones available, because
• intensity resolution differs between I/O devices (256-level, 65,536-level, and true-color systems)
• each device has its own local color space.

– This can be done with quantization. Two such quantization techniques are
• HALFTONING
• DITHERING

Halftoning
– Halftoning - a technique, originally developed for the
printing industry, to reproduce continuous tone images
using printing techniques capable of black and white only
and not shades of gray (Gray Scale).
– A black and white photograph may have hundreds of
shades of gray.

Halftoning
– In a black-and-white photograph, for example, a group of large dots placed closely together appears black.
– A group of smaller dots with larger spaces between them produces a gray shade.
– A group of even smaller dots spaced widely apart appears
almost white.

Halftoning
– A color photograph may have upwards of three million
colors in it, but most printing presses use only these four
process inks:
• Cyan
• Magenta
• Yellow
• Black

Halftoning
– A screen frequency can be represented by a grid like the one shown in the figure.
– Each square in this grid is a halftone cell, capable of
holding one halftone dot.

– Example of levels of gray: [figure of halftone patterns]

Halftone Patterns
– Halftone patterns are defined as clusters of pixels that represent an intensity
• They trade spatial resolution for intensity resolution
– These are also called halftone approximations.

– The number of approximations (patterns) depends on
• Grid size: number of pixels included in the region
• Level: number of intensities/colors/levels supported in the system

Halftone Patterns
– Possible halftone patterns using 2X2 pixel grids with bi-
level system

Halftone Patterns
– Intensity levels 0 through 12 obtained with halftone
approximations using 2X2 pixel grids with four-level
system

Halftone Patterns
– Intensity levels 0 through 9 obtained with halftone
approximations using 3X3 pixel grids with Bi-Level
system

Halftone Patterns
– We can see there may be multiple patterns for a single intensity level
– Consider 5 levels using 2×2 pixel grids with a bi-level system
[Figure: the five 2×2 patterns for levels 0–4, followed by the alternative pixel choices available for level 1 and for level 2]

Halftone Patterns
– Types of halftone patterns
• Symmetric vs. asymmetric
– Asymmetric patterns are preferred as they produce randomness and avoid streaks.
[Figure: symmetric patterns (not preferred) vs. asymmetric patterns (preferred)]
Halftone Patterns

Halftone Patterns
– We can design a 3×3 matrix for producing 10 levels, from 0 to 9, as follows:

$\begin{bmatrix} 8 & 3 & 7 \\ 5 & 1 & 2 \\ 4 & 9 & 6 \end{bmatrix}$

– To display a particular intensity level k we turn on all pixels whose value is <= k
– Such a pattern-generation matrix can be designed once for a k×k grid on a bi-level system and stored ready for use.
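A short Python sketch of generating these patterns from the matrix above; the names are ours:

MASK = [[8, 3, 7],
        [5, 1, 2],
        [4, 9, 6]]

def halftone_pattern(k):
    # 3x3 bi-level pattern for intensity level k (0..9): turn on cells whose value is <= k.
    return [[1 if value <= k else 0 for value in row] for row in MASK]

for row in halftone_pattern(4):   # level 4 switches on the cells numbered 1 to 4
    print(row)                    # [0, 1, 0] / [0, 1, 1] / [1, 0, 0]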
Halftoning
– Halftone approximations have to be constructed to minimize contouring and other undesirable visual effects. Hence asymmetric patterns are preferred over symmetric ones.
– The use of an N×N grid with an L-level system increases the total number of displayable intensities from L to N²(L−1)+1, but it also reduces the resolution by a factor of 1/N in each direction.
• Trading spatial for intensity resolution works well for newspapers
– As the grid size increases, the grid pattern becomes apparent.

Original

Halftoning

Dithering
– In the world of computer graphics Dithering refers to the
process of applying patterns to areas of color in order to
simulate a wider color range.

– The dithering process is used for displaying an image on a


device that doesn't have as many colors.

Dithering
– For our discussion, let's assume that the range of colors (or intensity levels for gray-scale pictures) of the output device is fixed, so there is no quantization problem (choosing between optimal output colors).
– We'll restrict ourselves to gray-scale images. The same can
be extended to handle color images with RGB composition.

Dithering
– Dithering distributes errors among pixels
• Exploit spatial integration in our eye
• Display greater range of perceptible intensities
– Uniform quantization discards all errors
• i.e. all “rounding” errors

Dithering
– Techniques for dithering
• Thresholding.
• Ordered dither
• Random dither; Robert’s algorithm

Dithering – Thresholding
– The simplest and probably the worst for common images.
– There is only one parameter: the threshold.
– Process each point of the input image:
• If ( Intensity > Threshold ) a white point is outputted
• Else a black point is outputted
– Thus if we want to quantize a gray-level image to a binary
color map then:
• Map the upper half of the gray-level scale to white, and the lower
half to black
• This is a simple threshold operation, performed independently at each pixel.

Dithering – Thresholding
[Figure: original 8-bit gray image vs. a simple threshold at 0.5 producing a 1-bit B/W image. The errors are low spatial frequencies.]
Dithering – Ordered Dither
– The main component is a dithering matrix
– An n by n matrix produces (n*n)+1 intensity levels
– The matrix elements play the role of the threshold used in the previous technique
– Once we have a dithering matrix we can process each point of the input image (n is the size of the matrix):
• Get an input pixel (x, y)
• Access the matrix element we'll use: the ((x mod n), (y mod n)) element of the matrix
• Threshold (x, y) with the matrix element

Dithering – Ordered Dither
– For the given dither patterns the dither matrix is

$\begin{bmatrix} 8 & 3 & 7 \\ 5 & 1 & 2 \\ 4 & 9 & 6 \end{bmatrix}$

Dithering – Ordered Dither
Bayer Ordered Dither Patterns

Dithering – Ordered Dither
The dithering matrix M (3x3):
8 3 7
5 1 2
4 9 6

For all x pixels
  For all y pixels
    v = Intensity(x, y)
    i = y mod 3
    j = x mod 3
    if v > M[i, j] then
      Set_Pixel(x, y, WHITE)
    else
      Set_Pixel(x, y, BLACK)
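For reference, a runnable numpy version of the same loop (our own sketch); the comparison convention follows the thresholding rule above and the worked example on the next slide, where a pixel is switched on when its intensity exceeds the tiled mask value:

import numpy as np

MASK = np.array([[8, 3, 7],
                 [5, 1, 2],
                 [4, 9, 6]])

def ordered_dither(image):
    # Return a 0/1 image: 1 (white) where intensity > the tiled mask threshold.
    h, w = image.shape
    rows = np.arange(h) % MASK.shape[0]
    cols = np.arange(w) % MASK.shape[1]
    thresholds = MASK[rows[:, None], cols[None, :]]   # mask tiled over the whole image
    return (image > thresholds).astype(np.uint8)

image = np.array([[8, 7, 5, 1, 6, 2, 4, 5, 3, 2],
                  [6, 8, 3, 7, 2, 2, 5, 4, 3, 2]])
print(ordered_dither(image))   # [[0 1 0 0 1 0 0 1 0 0]
                               #  [1 1 1 1 1 0 0 1 1 0]]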

Dithering – Ordered Dither
Image (6×10 pixel intensities):
8 7 5 1 6 2 4 5 3 2
6 8 3 7 2 2 5 4 3 2
9 4 8 3 7 4 5 2 6 4
3 8 9 7 7 2 2 1 3 2
6 9 2 9 7 4 3 2 2 4
9 4 8 8 8 4 4 8 4 4

Dithering mask (3×3):
8 3 7
5 1 2
4 9 6

Thresholding (threshold = 5):
1 1 1 0 1 0 0 1 0 0
1 1 0 1 0 0 1 0 0 0
1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 0 0 0 0 0
1 1 0 1 1 0 0 0 0 0
1 0 1 1 1 0 0 1 0 0

Ordered dither:
0 1 0 0 1 0 0 1 0 0
1 1 1 1 1 0 0 1 1 0
1 0 1 0 0 0 1 0 0 0
0 1 1 0 1 0 0 0 0 0
1 1 0 1 1 1 0 1 0 0
1 0 1 1 0 0 0 0 0 0
Dithering – Ordered Dither
[Figure: original 8-bit gray image vs. 4×4 ordered dither producing a 1-bit B/W image.]
Dithering – Random Dither
– Randomize quantization errors
• Errors appear as noise

– Modify the image pixels as
• P(x, y) = trunc(I(x, y) + noise(x, y) + 0.5)
– as compared to
• P(x, y) = trunc(I(x, y) + 0.5) in thresholding.

Dithering – Random Dither
[Figure: original 8-bit gray image vs. random dither producing a 1-bit B/W image.]
[Comparison figures: the original 8-bit gray image shown next to the simple threshold (0.5), 4×4 ordered dither, and random dither results.]
Contents

1. Light and Color


2. Illumination Models
3. Texture Surface Patterns
4. Displaying Light Intensities
5. Interpolative shading models
Interpolative Shading Models
– Keep in mind:
• Evaluating the illumination model is a fairly expensive calculation
– There are several possible answers, each with different implications for the visual quality of the result

– Shading models dictate how often the color computation is performed.

SHADING METHODS (POLYGON RENDERING METHODS)
• NO SHADING: all pixels of an object are the same
• CONSTANT SHADING: used for objects defined as polyhedra
• INTERPOLATION SHADING: used for polyhedron-mesh approximations of curved surfaces
– GOURAUD SHADING: interpolate intensity
– PHONG SHADING: interpolate the surface normal
[Figure: effects of different shading techniques — no shading, constant shading, interpolation shading, Gouraud shading, Phong shading]
[Figure: an object model rendered with constant shading, interpolation shading, Gouraud shading, and Phong shading]
Interpolative Shading Models
– We will start to look at shading or rendering methods
commonly used in computer graphics

1. Constant surface shading/rendering


2. Gouraud surface shading/rendering
3. Phong surface shading/rendering

Constant Shading
– Also known as flat shading or fast shading, it is an extension of constant-intensity shading.
– The entire object is divided into a number of polygon surfaces called facets.
– The orientation of each facet is determined and one color computation is done for each polygon (facet).
– It is simple, fast (?) and produces better results than no shading.

Constant Shading
Just add lots and lots of polygons – that makes it SLOW!

[Figure: original model, faceted model, flat-shaded model]


Constant Shading
– Still it is not realistic, because
• for point sources, the direction to the light varies across the facet
• for specular reflectance, the direction to the eye varies across the facet
– To determine the orientation of each facet we have to find its normal, and this approximation affects the result.

Constant Shading
– It is very useful for quickly displaying the general
appearance of surfaces
– It is most applicable when
• the object is a polyhedron and not an approximation of an object with a curved surface,
• all light sources illuminating the object are sufficiently far from the surface so that N·L and the attenuation functions are constant over the surface, and
• the viewing position is sufficiently far from the surface so that V·R is constant over the surface.

Gouraud Shading
– Gouraud surface shading was developed in the 1970s by
Henri Gouraud while working at the University of Utah
along with Ivan Sutherland and David Evans and is also
called intensity-interpolation surface rendering
– This is the most common approach
• Intensity levels are calculated at each vertex
• Linearly interpolate the resulting colors over faces
– Along edges
– Along scanlines

Gouraud Shading
– To render a polygon, Gouraud surface rendering proceeds
as follows:
1. Determine the average unit normal vector at each vertex of the
polygon
2. Apply an illumination model at each polygon vertex to obtain the
light intensity at that position
3. Linearly interpolate the vertex intensities over the projected area
of the polygon

Gouraud Shading
Step 1: Determine the average unit normal vector at each vertex of the
polygon
The average unit normal vector at v is given as:

$N_v = \frac{N_1 + N_2 + N_3 + N_4}{\left| N_1 + N_2 + N_3 + N_4 \right|}$

or more generally:

$N_v = \frac{\sum_{i=1}^{n} N_i}{\left| \sum_{i=1}^{n} N_i \right|}$

[Figure: vertex v shared by four polygons with normals N1, N2, N3, N4]

Gouraud Shading
Step 2: Apply an illumination model at each polygon vertex
– You have to apply the illumination model to take account of
• ambient light
• diffuse reflection
• specular reflection
• transparency (refraction)

$I_v = I_{diff} + I_{spec} = I_{amb,diff} + I_{l,diff} + I_{l,spec} = k_a I_a + k_d I_l (N_v \cdot L) + k_s I_l (V \cdot R_v)^{n_s}$

[Figure: intensities I1, I2, I3, I4 computed at the vertices surrounding v]
Gouraud Shading
Step 3: Linearly interpolate the vertex intensities over the projected area of the
polygon along edges and along scan lines

$I_4 = \frac{y_4 - y_2}{y_1 - y_2} I_1 + \frac{y_1 - y_4}{y_1 - y_2} I_2$

$I_5 = \frac{y_5 - y_2}{y_3 - y_2} I_3 + \frac{y_3 - y_5}{y_3 - y_2} I_2$

$I_p = \frac{x_5 - x_p}{x_5 - x_4} I_4 + \frac{x_p - x_4}{x_5 - x_4} I_5$

[Figure: a scan line crossing the projected polygon with vertices 1, 2, 3; points 4 and 5 are the edge intersections and p is a pixel between them]
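A small Python sketch of these three interpolations for a single scan line; the function and variable names are ours:

def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1].
    return a + t * (b - a)

def gouraud_scanline_intensity(I1, y1, I2, y2, I3, y3, y_scan, x4, x5, x_p):
    # I1..I3 are vertex intensities; edges 1-2 and 3-2 cross the scan line at points 4 and 5.
    I4 = lerp(I2, I1, (y_scan - y2) / (y1 - y2))   # along edge 1-2
    I5 = lerp(I2, I3, (y_scan - y2) / (y3 - y2))   # along edge 3-2
    return lerp(I4, I5, (x_p - x4) / (x5 - x4))    # along the scan line

# Vertices at heights y1=10, y2=0, y3=8 with intensities 200, 50, 150:
print(gouraud_scanline_intensity(200, 10, 50, 0, 150, 8,
                                 y_scan=5, x4=2, x5=12, x_p=7))   # 118.75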
Gouraud Shading
Given the intensities of pixels A, B and C as 200, 150 and 50 respectively, compute the intensity at point P.

Gouraud Shading

[Figure: original flat-shaded model vs. Gouraud-shaded model]


Gouraud Shading
– It can be combined with hidden surface algorithms to fill in
visible surfaces/polygons along each scan line.
– It removes the intensity discontinuities associated with faceted (flat) shading.
– Gouraud shading is made more efficient using an iterative approach along each scan line.

Gouraud Shading
– Objects shaded with Gouraud shading often appear dull and chalky
– It lacks an accurate specular component
• If included, it will be averaged over the entire polygon
[Figure: a highlight falling in the interior of a triangle with vertex colors C1, C2, C3 cannot be shaded by interpolating the vertex colors]

Gouraud Shading
– Gouraud shading can introduce anomalies known as Mach bands, i.e. artifacts at discontinuities in intensity or in the intensity slope
[Figure: vertex colors C1, C2, C3, C4; a discontinuity in the rate of color change occurs along the shared edge]
Mach Bands
A psychological phenomenon whereby we see bright bands where two blocks of solid colour meet.
A good demo is available to experiment with this at:

http://www.nbb.cornell.edu/neurobio/land/OldStudentProjects/cs490-96to97/anson/MachBandingApplet/

Try playing with the various parameters and see if you can predict what the result will look like.
Phong Shading
– A more accurate interpolation based approach for rendering
a polygon was developed by Phong Bui Tuong
– Basically, the Phong surface rendering model interpolates normal vectors instead of intensity values and then computes the actual intensity value at each pixel using the Phong lighting model.
– It is often also called normal-vector interpolation surface rendering.

Phong Shading
– To render a polygon, Phong surface rendering proceeds as
follows:
1. Determine the average unit normal vector at each vertex of the
polygon
2. Linearly interpolate the vertex normals over the projected area of
the polygon
3. Apply an illumination model at positions along scan lines to
calculate pixel intensities using the interpolated normal vectors

Phong Shading
Step 1: Determine the average unit normal vector at each vertex of the
polygon
The average unit normal vector at v is given as:

$N_v = \frac{N_1 + N_2 + N_3 + N_4}{\left| N_1 + N_2 + N_3 + N_4 \right|}$

or more generally:

$N_v = \frac{\sum_{i=1}^{n} N_i}{\left| \sum_{i=1}^{n} N_i \right|}$

[Figure: vertex v shared by four polygons with normals N1, N2, N3, N4]

Phong Shading
Step 2: Linearly interpolate the vertex normals over the projected area of the polygon, along edges and along scan lines:

$N_4 = \frac{y_4 - y_2}{y_1 - y_2} N_1 + \frac{y_1 - y_4}{y_1 - y_2} N_2$

$N_5 = \frac{y_5 - y_2}{y_3 - y_2} N_3 + \frac{y_3 - y_5}{y_3 - y_2} N_2$

$N_p = \frac{x_p - x_5}{x_4 - x_5} N_4 + \frac{x_4 - x_p}{x_4 - x_5} N_5$

[Figure: a scan line crossing the polygon; N4 and N5 are the interpolated normals at the edge intersections and Np is the normal at pixel p]
Phong Shading
Step 3: Apply an illumination model at each pixel p
– You have to apply the illumination model to take account of
• ambient light
• diffuse reflection
• specular reflection
• transparency (refraction)

$I_p = k_a I_a + k_d I_l (N_p \cdot L) + k_s I_l (V \cdot R_p)^{n_s}$

Phong Shading

– Phong shading accepts the same input as Gouraud shading and produces very smooth-looking results.

– It removes the effect of Mach bands.

– It is more realistic, as the phenomenon of specular reflection is maintained.

Phong Shading
– Phong shading is considerably more expensive.

– Phong shading is much slower than Gouraud shading, as the lighting model is re-evaluated so many times.

– However, there are fast Phong surface rendering


approaches that can be implemented iteratively.

– Typically Phong shading is implemented as part of a visible


surface detection technique.

Phong Shading Examples

Phong Shading Examples

Fast Phong Shading

– Fast Phong shading is Gary Bishop’s approximation of the


Phong shading model

– Requires lighting calculations at vertices and edge


midpoints

Fast Phong Shading = quadratic interpolation + exponentiation texture map

Fast Phong Shading
– It approximates the intensity calculations using a Taylor-
series expansion and triangular surface patches.

– Every face has vectors A, B, C computed from the three vertex equations

$N_k = A x_k + B y_k + C, \qquad k = 1, 2, 3$

where $(x_k, y_k)$ represents a vertex position.
– This makes sense because Phong shading interpolates
normal vectors from vertex normals

Fast Phong Shading
– Replace N in intensity computations with Ax+By+C for
each x,y value

– Thus omitting reflectivity and attenuation parameters, we


can write the calculations for light-source diffuse reflection
for a surface point (x, y) as

Fast Phong Shading
$I_{diff} = \frac{L \cdot N}{|L||N|} = \frac{L \cdot (Ax + By + C)}{|L|\,|Ax + By + C|} = \frac{(L \cdot A)x + (L \cdot B)y + (L \cdot C)}{|L|\,|Ax + By + C|} = \frac{ax + by + c}{(dx^2 + exy + fy^2 + gx + hy + i)^{1/2}}$

where $a = \frac{L \cdot A}{|L|}$, $b = \frac{L \cdot B}{|L|}$, and so on.

Fast Phong Shading
– We can express the denominator in the final equation as a Taylor-series expansion and retain terms up to second degree in x and y.

$I_{diff} = \frac{ax + by + c}{(dx^2 + exy + fy^2 + gx + hy + i)^{1/2}} \approx T_5 x^2 + T_4 xy + T_3 y^2 + T_2 x + T_1 y + T_0$

where each $T_k$ is a function of the parameters a, b, c, and so on.

Fast Phong Shading

– It reduces the Phong shading calculations

– But it still takes approximately twice as long to render a surface as with Gouraud shading, and in some cases takes 6-7 times longer than Gouraud!

Comparison
[Figure panels: no surface rendering, flat surface rendering, Gouraud surface rendering, Phong surface rendering]

Instructional sessions are OVER!

Prepare Well for Exam!

