Shading and Texturing: I've Not Come From Nowhere To Be Nothing - Module 2

The document discusses various techniques for adding texture and shading to 3D objects to make them appear more realistic, including: shading, which adds color, depth, and shadows using techniques like Gooch shading; texturing, which maps 2D textures onto surfaces using UV mapping; anti-aliasing techniques like supersampling and prefiltering, which reduce jagged edges by increasing the effective pixel sampling rate; alpha channels, which represent transparency and are used for compositing, masking, transitions, and anti-aliasing by specifying opacity per pixel; and the texturing pipeline, which involves UV mapping, texture creation, texture mapping, shading, and rendering to apply textures to 3D models.


Shading and Texturing
I’ve not come from nowhere to be nothing
- Module 2
Shading and Texturing

• Shading refers to the process of adding color, depth, and shadow to a 3D object to make it appear more realistic.
• Texturing involves adding a 2D image, called a texture, to the surface of a 3D object to give it a more detailed and complex appearance.
Gooch Shading - example
• Idea behind Gooch shading is to compare the surface normal
to the light’s location.
• If the normal points toward the light, a warmer tone is used
to color the surface; if it points away, a cooler tone is used.
• Angles in between interpolate between these tones, which
are based on a user-supplied surface color.
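The warm/cool interpolation described above can be sketched in Python. The base tone colors, blend weights, and function names here are illustrative assumptions, not values fixed by the slide:

```python
def lerp(a, b, t):
    # Component-wise linear interpolation between two RGB tuples.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def gooch_shade(n_dot_l, surface_color,
                cool=(0.0, 0.0, 0.55), warm=(0.3, 0.3, 0.0),
                alpha=0.25, beta=0.5):
    # Blend the user-supplied surface color into the base cool and warm
    # tones (tone values and blend weights are illustrative assumptions).
    c_cool = lerp(cool, surface_color, alpha)
    c_warm = lerp(warm, surface_color, beta)
    # Remap n.l from [-1, 1] to [0, 1]: toward the light -> warm tone,
    # away from the light -> cool tone, in between -> interpolated.
    t = (n_dot_l + 1.0) / 2.0
    return lerp(c_cool, c_warm, t)
```

A surface facing the light shades warmer (more red/yellow), while one facing away shades cooler (more blue).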
UV Mapping texturing - example
• The 3D object to be textured is selected, and a 2D mesh is
created that covers its surface.
• The mesh is then flattened out into a 2D plane, such as a
rectangle or a circle.
• The UV coordinates (x,y) are assigned to each vertex of the
mesh. Each UV coordinate is a set of two values that represent
the horizontal and vertical position of a point on the 2D
texture.
• The 2D texture is then mapped onto the surface of the mesh
using the UV coordinates.
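The last step above, looking up a texel from a UV coordinate, can be sketched as a nearest-texel sampler. The `sample_texture` name and the plain nested-list texture layout are assumptions for illustration; real renderers add filtering and wrap modes:

```python
def sample_texture(texture, u, v):
    # texture: 2D list of RGB tuples, indexed [row][col]; u, v in [0, 1].
    # Nearest-texel lookup with no filtering -- a minimal sketch.
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]
```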
Light Sources
• Unit vectors – surface normal, light direction vector, view
vector
Light Sources
• Point Light – light from a point source like light bulb
• Directional Light – infinite source in a direction like sun light
• Spot Light – cone shaped light like stage spotlight, torch
• Area Light – finite area source like window or fluorescent light panel
• Ambient Light – diffuse light with no discernible source
• Reflected Light – light reflected from other objects

• With its own intensity, colour and in some cases size and shape
Effects of angle and intensity
• The angle between the surface normal and the light direction, together with the light’s intensity, determines how strongly the surface appears lit.
Effects of distance
• The light’s contribution falls off with distance to the source; in practice the inverse square falloff is windowed so that it reaches exactly 0 at a finite range.
Inverse Square Law window function
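One way to combine the inverse square law with a window function can be sketched as follows. The clamping radius `r0` (which avoids division by zero near the source) and the particular window shape are illustrative assumptions:

```python
def falloff(r, r_max, r0=0.01):
    # Inverse-square falloff, clamped near r = 0, multiplied by a
    # window function that smoothly reaches exactly 0 at r = r_max.
    if r >= r_max:
        return 0.0
    inv_sq = 1.0 / max(r * r, r0 * r0)
    window = (1.0 - (r / r_max) ** 4) ** 2
    return inv_sq * window
```

Without the window, the inverse square term never quite reaches zero, so lights would affect objects at any distance.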
Binary shading effects
• Light perpendicular to the surface gives full intensity (1); light grazing along the surface gives the least (0).
• Angles in between interpolate between 0 and 1.
• klight varies linearly with this angle and represents the light’s intensity at the surface.
• funlit(n,v) may be (0,0,0), i.e. black, and the maximum of flit(l,n,v) may be (255,255,255), i.e. white.
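The interpolation above can be sketched as follows. The names mirror the slide’s funlit/flit/klight, but the implementation details (float colors, a clamped dot product as klight) are assumptions:

```python
def shade(n_dot_l, f_unlit=(0.0, 0.0, 0.0), f_lit=(1.0, 1.0, 1.0)):
    # Clamp n.l to [0, 1]: 1 when the light is perpendicular to the
    # surface, 0 when it grazes along it or comes from behind.
    k_light = max(0.0, min(1.0, n_dot_l))
    # Linear interpolation between the unlit and fully lit colors.
    return tuple(u + (l - u) * k_light for u, l in zip(f_unlit, f_lit))
```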
Aliasing
Aliasing effect
• The aliasing effect is the appearance of jagged edges or
“jaggies” in a rasterized image (an image rendered using
pixels). The problem of jagged edges technically occurs due to
distortion of the image when scan conversion is done with
sampling at a low frequency, which is also known as
Undersampling. Aliasing occurs when real-world objects, which are made up of smooth, continuous curves, are rasterized using pixels.
NYQUIST FREQUENCY
• The phenomenon results from sampling the data at a frequency below its NYQUIST FREQUENCY: that is, displaying a line on a screen of too low a resolution, or sampling a sound at too low a frequency. In fact, aliasing is a potential source of distortion when sampling any form of data; another example is the optical illusion, often seen when old movies are shown on television, that wheels are rotating backwards.
• NYQUIST FREQUENCY – sampling should be done at a rate of at least twice the maximum frequency present in the signal.
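A quick numeric illustration of undersampling, assuming a simple cosine signal: sampled below its Nyquist rate, a high frequency becomes indistinguishable from a low one.

```python
import math

def sample_cosine(freq_hz, sample_rate_hz, n=8):
    # Take n samples of cos(2*pi*f*t) at the given sampling rate.
    return [math.cos(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

# A 9 Hz cosine sampled at only 10 Hz (below its 18 Hz Nyquist rate)
# yields the same samples as a 1 Hz cosine -- it aliases down to 1 Hz.
```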
Anti-aliasing
Anti-aliasing methods
• high-resolution display
• Post filtering (Supersampling)
• Pre-filtering (Area Sampling)
• Pixel phasing
Using high-resolution display:
One way to reduce aliasing effect and increase sampling rate is
to simply display objects at a higher resolution.
Using high resolution, the jaggies become so small that they
become indistinguishable by the human eye. Hence, jagged
edges get blurred out and edges appear smooth.
Practical applications:
For example, Retina displays in Apple devices and OLED displays have a high pixel density, so the jaggies formed are so small that they blur and become indistinguishable to our eyes.
Post filtering (Supersampling)
• In this method, we increase the sampling resolution by treating the screen as if it were made of a much finer grid, which reduces the effective pixel size. But the screen
resolution remains the same. Now, intensity from each subpixel is
calculated and average intensity of the pixel is found from the
average of intensities of subpixels. Thus we do sampling at higher
resolution and display the image at lower resolution or resolution
of the screen, hence this technique is called supersampling. This
method is also known as post filtration as this procedure is done
after generating the rasterized image.
• Practical applications: AA, SSAA, FSAA and MSAA in games
• AA = anti-aliasing, SSAA = supersampled AA, FSAA = full-scene AA, MSAA = multisample AA
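The subpixel-averaging idea can be sketched as follows. The `shade_fn` callback and the 4×4 subpixel grid are illustrative assumptions:

```python
def supersample_pixel(shade_fn, px, py, grid=4):
    # Average shade_fn(x, y) over a grid x grid set of subpixel sample
    # points inside the unit pixel whose corner is at (px, py).
    total = 0.0
    for sy in range(grid):
        for sx in range(grid):
            x = px + (sx + 0.5) / grid
            y = py + (sy + 0.5) / grid
            total += shade_fn(x, y)
    return total / (grid * grid)
```

A pixel half-covered by an edge averages to a half-intensity value, which is what smooths the jaggies.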
Pre-filtering (Area Sampling)  
• In area sampling, pixel intensities are calculated proportional
to areas of overlap of each pixel with objects to be displayed.
Here pixel color is computed based on the overlap of scene’s
objects with a pixel area.
Anti-aliasing – Prefiltering

[Figure: a red line rasterized two ways – left, filling only pixels with >50% fill; right, shading each pixel in proportion to its % fill.]
Pixel phasing
• Here pixel positions are shifted to positions that more nearly approximate the object geometry. Some systems allow the size of individual pixels to be adjusted to help distribute intensities, which is useful in pixel phasing.
Transparency
• Acts as an attenuator of the colours behind it.
• Higher levels include frosting, refraction, attenuation due to thickness, and transmission change due to viewing angle.
Screen-door transparency
• A pixel-aligned checkerboard fill pattern: alternate pixels of the transparent surface are rendered, so it appears partially transparent.
• If two overlapping transparent objects use the two complementary checkerboard patterns, a third transparent layer is not possible.
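A minimal sketch of the checkerboard test, assuming integer pixel coordinates:

```python
def screen_door_visible(x, y):
    # Render a "transparent" surface only on alternating pixels of a
    # checkerboard: half the pixels show the surface, the other half
    # show whatever lies behind it.
    return (x + y) % 2 == 0
```

A second transparent object can use the complementary pattern `(x + y) % 2 == 1`, but then every pixel is taken, leaving no room for a third layer.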
Alpha
• a value or channel that represents the transparency or opacity
of an image or object.
• alpha is often represented as a fourth color channel (alongside
red, green, and blue) called the alpha channel.
• A higher alpha is more opaque; a lower alpha is more transparent.
• used to create cutouts or masks of objects in an image.
• Used to composite multiple images or layers together.
• represented as floating-point values between 0 and 1, where 0
represents full transparency and 1 represents full opacity.
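Blending with alpha is commonly done with the Porter–Duff “over” operator; a minimal sketch assuming straight (non-premultiplied) alpha and float colors in [0, 1]:

```python
def over(fg_rgba, bg_rgb):
    # Porter-Duff "over": composite a foreground RGBA color onto an
    # opaque background RGB color. alpha = 1 means fully opaque.
    a = fg_rgba[3]
    return tuple(f * a + b * (1.0 - a) for f, b in zip(fg_rgba[:3], bg_rgb))
```

For example, a half-transparent red over an opaque blue background yields a purple: `over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0))` gives `(0.5, 0.0, 0.5)`.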
Applications
• Compositing: Alpha is used extensively in compositing, where multiple images are
combined to create a single, final image. Alpha allows images to be blended
together seamlessly, creating realistic and believable composite images.
• Masking: Alpha can be used to create masks, which are used to isolate specific
parts of an image. This is useful for applying effects or adjustments to specific areas
of an image, such as blurring the background of a portrait.
• Transitions: Alpha can be used to create transitions between two images, such as a
fade or dissolve. This is useful in video editing and motion graphics.
• Anti-aliasing: Alpha is used in anti-aliasing to smooth out jagged edges in an
image. By specifying the opacity of each pixel, alpha can create a smoother
transition between colors, reducing the appearance of jaggies.
• Reflections and refractions: Alpha can be used to create realistic reflections and
refractions in 3D rendering. By specifying the transparency of each pixel, alpha
can simulate the way light interacts with transparent materials like glass and
water.
• https://www.vectornator.io/blog/alpha-channel/

An RGBA image composited over a checkerboard background layer. Alpha is 0% at the top and 100% at the bottom.
Compositing
• a technique used in digital media production to combine
visual elements from multiple sources into a single image or
sequence.
• can include video, photographs, 3D models, and computer-
generated graphics, among other sources
• can use tools to adjust the lighting, color, and composition of
each layer, and to create seamless transitions between
different elements
• Adobe After Effects, Nuke, Maya or Fusion
Texturing Pipeline
• The steps that are used to apply textures to 3D models:
• UV mapping: This step involves the creation of a 2D representation of the 3D model's surface, known as a UV map. The UV map is used to define how textures will be applied to the model.
• Texture creation: Textures are created using image editing
software or generated procedurally using specialized software.
These textures can include color maps, normal maps,
displacement maps, and more.
• Texture mapping: Textures are applied to the 3D model using
the UV map created in step 1. This step involves specifying how
each texture should be mapped onto the model's surface.
• Shading: This step involves defining how light interacts with the
textured surface of the model, taking into account factors such
as the texture's color, reflectivity, and transparency
• Rendering: The final step in the texturing pipeline is rendering
the scene. The 3D model with its textures and shading is
rendered into a 2D image or animation sequence
Image Texturing
• UV Mapping: The 3D object is unwrapped and flattened onto
a 2D plane, which is called UV mapping. This process enables
the texture artist to create a 2D image that fits the 3D
object properly.
• The texture artist creates a 2D image
• The 2D image is applied to the 3D object using texture
mapping techniques
• The final step is rendering the 3D object with the texture
applied to it.
Image Texture
• An image texture is a set of metrics calculated in image
processing designed to quantify the perceived texture of an
image. Image texture gives us information about the spatial
arrangement of color or intensities in an image or selected
region of an image.
• A statistical approach sees an image texture as a
quantitative measure of the arrangement of intensities in a
region.
Texture Animation
• Add your own series of textures, animate them into an animation sequence (similar to a GIF), and then assign the sequence to a surface.
• Meta Spark has this option too
Material Mapping
• Textures are fit to the surface of an object.
• Material mapping applies different materials or textures to specific parts of a 3D object: it assigns a unique texture or material to different regions of a 3D model, based on properties such as color, reflectivity, transparency, and bumpiness.
Material Mapping
Parallax Mapping
• Parallax mapping is a technique used in computer graphics to
create the illusion of depth in two-dimensional (2D) textures. It
is a form of bump mapping that simulates 3D surface
displacement on a flat surface by adjusting texture coordinates
based on the view angle and the perceived depth of the texture.
• Parallax mapping works by shifting the texture coordinates of a
surface based on the angle from which the surface is viewed. As
the view angle changes, the texture appears to move in 3D
space, creating the illusion of depth and relief.
• This technique can be used to enhance the realism of textures
on surfaces such as walls, floors, and rocks in video games,
virtual reality, and other computer graphics applications.
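A single-step parallax offset can be sketched as follows. The tangent-space convention (z pointing away from the surface) and the `scale` factor are illustrative assumptions; real implementations often iterate (steep parallax mapping) for better accuracy:

```python
def parallax_offset(u, v, height, view_dir, scale=0.05):
    # Shift the texture coordinate toward the viewer in proportion to
    # the sampled height. view_dir is (x, y, z) in tangent space, with
    # z pointing away from the surface. A minimal, single-step sketch.
    vx, vy, vz = view_dir
    return (u + vx / vz * height * scale,
            v + vy / vz * height * scale)
```

At a grazing view angle (small `vz`) the offset grows, which is what makes the texture appear to shift in 3D as the camera moves.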
Parallax Mapping
• Play video Parallax_mapping.ogv
• A height map, stored as a black-and-white image, is used over a brown surface, but this is computationally intensive as each surface has many vertices and height information.
Textured Lights
• Textured lights (projected textures) involve projecting a textured image onto a surface or scene to create a unique lighting effect.
• They add visual interest and complexity to lighting effects.
• Examples: simulating sunlight filtering through trees, creating patterns on a dance floor, or adding texture to a wall.
