
E0 271 Graphics & Visualisation Assignment - 2

Sai Phanindra Korlepara — 24142 — M.Tech CSA Dept.

Problem 1. Scalar field visualization (OpenGL): Slicing

• Use an axis-parallel plane to visualize the data via slicing. Display the bounding box of
the domain to provide context.

• Repeat the above task on the GPU. Use 3D textures within shaders to represent the scalar
field. Use a 1D texture in a shader to implement a colormap.

• Perform a comprehensive comparison of the CPU-based and GPU-based approaches
with respect to image quality and performance, and report your observations.

Solution.

• Since the original VTK file stores the data on a regular grid and the slicing planes are normal to one of the coordinate axes, the coordinate along the chosen normal axis is constant over the cutting plane; only the other two coordinates vary.
Based on the normal axis chosen, the slice geometry is generated at the same resolution as the grid in the VTK file. To determine the position of the cutting plane, a time-based equation was set up that loops back once the plane reaches either extreme of the domain.
For any plane position, the scalar field being visualised is read from the two nearest grid planes along the normal axis and linearly interpolated. This value is packed into the vertex buffer as the fourth component of each vertex.
The shaders do nothing special and simply display the triangles as they are passed from the main program (CPU). Hence the entire calculation is performed on the CPU and only the rendering is handled by the GPU. A minimal sketch of this CPU-side slice construction follows this item.
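As an illustration, the CPU-side slice construction could look roughly like the sketch below, assuming the scalar field is stored in a flat array indexed as i + j*nx + k*nx*ny; the names used here (planePosition, buildSliceZ, field, and so on) are illustrative and not necessarily those in the submitted code.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vertex { float x, y, z, s; };   // position plus the scalar as the 4th component

    // Time-based plane position that sweeps back and forth between 0 and zMax
    // (a triangle wave of the elapsed time; the exact equation used may differ).
    float planePosition(float timeSeconds, float zMax, float speed)
    {
        float u = std::fmod(timeSeconds * speed, 2.0f * zMax);
        return (u <= zMax) ? u : 2.0f * zMax - u;
    }

    // Build one slice normal to the z axis at zPlane, interpolating the scalar
    // field between the two nearest grid planes along z.
    std::vector<Vertex> buildSliceZ(const std::vector<float>& field,
                                    int nx, int ny, int nz,
                                    float sx, float sy, float sz,
                                    float zPlane)
    {
        // locate the two nearest grid planes along z and the interpolation weight
        float kf = zPlane / sz;
        int   k0 = std::max(0, std::min(nz - 2, (int)std::floor(kf)));
        float t  = kf - (float)k0;

        // scalar at grid point (i, j) on the slice, linearly interpolated in z
        auto sample = [&](int i, int j) {
            float a = field[i + j * nx +  k0      * nx * ny];
            float b = field[i + j * nx + (k0 + 1) * nx * ny];
            return (1.0f - t) * a + t * b;
        };

        std::vector<Vertex> verts;
        for (int j = 0; j + 1 < ny; ++j)
            for (int i = 0; i + 1 < nx; ++i) {
                Vertex v00{ i * sx,       j * sy,       zPlane, sample(i, j)         };
                Vertex v10{ (i + 1) * sx, j * sy,       zPlane, sample(i + 1, j)     };
                Vertex v01{ i * sx,       (j + 1) * sy, zPlane, sample(i, j + 1)     };
                Vertex v11{ (i + 1) * sx, (j + 1) * sy, zPlane, sample(i + 1, j + 1) };
                // two triangles per grid cell; the 4th component rides along in the
                // vertex buffer and is used by the shaders for colouring
                verts.insert(verts.end(), { v00, v10, v11, v00, v11, v01 });
            }
        return verts;
    }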

• Compared to the above code, the geometry generation still has to be handled on the CPU, so the time-dependent equation that moves the plane remains unchanged.
This time a new uniform is set up to send the scalar field as a 3D texture, which is accessed inside the fragment shader: we can simply pass normalised vertex coordinates to obtain the scalar value at that point, already interpolated for us by OpenGL's built-in texture filtering.
We also pass two uniforms that hold the minimum and maximum values of the chosen property; with their help the correct colour is looked up from the 1D colormap texture. A sketch of such a fragment shader follows this item.
In both questions an option is added to dynamically switch the normal axis by pressing the letters 'x', 'y' or 'z', and the visualised property can likewise be chosen dynamically by pressing the numbers '1' through the maximum property number.
In addition, I implemented zoom, translate and rotate features using the mouse callbacks: left click and drag to rotate, right click and drag to translate, and the scroll wheel to scale the model.
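A minimal fragment-shader sketch of this lookup is shown below as a GLSL source string in C++; the uniform and varying names (scalarField, colormap, minVal, maxVal, texCoord) are assumptions for illustration, not necessarily the names used in the submitted shader.

    // GLSL fragment shader: sample the 3D scalar texture, then map the value
    // through the 1D colormap using the min/max uniforms.
    const char* sliceFragmentShaderSrc = R"(
    #version 330 core
    in  vec3 texCoord;             // normalised coordinates of the fragment in the volume
    out vec4 fragColor;

    uniform sampler3D scalarField; // the scalar field uploaded as a 3D texture
    uniform sampler1D colormap;    // the 1D colormap texture
    uniform float minVal;          // minimum of the chosen property
    uniform float maxVal;          // maximum of the chosen property

    void main()
    {
        // OpenGL's texture filtering interpolates the scalar value for us
        float s = texture(scalarField, texCoord).r;
        // map the value into [0,1] and look up the colormap
        float t = clamp((s - minVal) / (maxVal - minVal), 0.0, 1.0);
        fragColor = texture(colormap, t);
    }
    )";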

• There is no noticeable difference in image quality between the CPU and GPU implementations; both methods generate the same image for all the properties. A difference can, however, be noticed in the frame rates, as discussed below.

Table 1: Average frame rates (FPS) for slicing

Dataset        CPU    GPU
redseasmall    120    118
redseamod      115     65

As seen from the table above, the frame rates for both files remain more or less similar across the different properties, but the GPU implementation consistently performed worse than the CPU implementation.
I think the main reason for this unexpected result is that in the CPU implementation the entire data set is stored in a contiguous array, so the elements are cheap to access and the per-slice calculation is done quickly, whereas in the GPU implementation the scalar value has to be fetched from the 3D texture for every fragment that is drawn. I attribute the difference between the CPU and GPU implementations to this.

Problem 2. Scalar field visualization (OpenGL): Isosurfaces


• Implement the marching cubes algorithm to extract the isosurface of the scalar field on
the CPU. Color vertices of the isosurface using an appropriate colormap that represents
the order in which the cubes are traversed by the algorithm.
• Use a geometry shader for computing the isosurface.
• Compare the CPU-based and GPU-based approaches in terms of quality, performance,
and pre-processing effort, and report your observations.
Solution.

• To implement the marching cubes algorithm, I used the triangle tables by Paul Bourke. I implemented a new function, in a separate file called marchingcubes.cpp, that extracts the triangles for any given voxel.
Since there are 256 possible in/out combinations of a voxel's corner vertices, I had to use an existing resource for the list of triangles belonging to each combination. With the help of this table, each edge intersected by the isosurface is linearly interpolated to place the triangle vertices.
The newly created triangles are returned to the main file, where they are appended to the vertex buffer before proceeding to the next voxel. Along with the coordinates of each triangle vertex, a number indicating the index of the voxel is also passed to the shader.
With the help of a uniform variable holding the largest voxel index, the colour of each vertex is evaluated as the ratio of its voxel index to the maximum voxel index, which represents the order in which the cubes are traversed. A sketch of the case-index computation and the per-edge interpolation follows this item.
In addition to scaling, rotation and translation, the user can press the middle mouse button and drag to change the isovalue dynamically while inside the GLUT window, for ease of access.
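A minimal sketch of the case-index computation and the per-edge interpolation, in the style of Paul Bourke's reference code, is given below; the helper names cubeIndex and vertexInterp are illustrative, and the actual marchingcubes.cpp may differ.

    #include <array>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Build the 8-bit case index from the in/out state of the eight cube corners;
    // it selects one of the 256 entries in the triangle table.
    int cubeIndex(const std::array<float, 8>& cornerValues, float isoValue)
    {
        int idx = 0;
        for (int i = 0; i < 8; ++i)
            if (cornerValues[i] < isoValue) idx |= (1 << i);
        return idx;
    }

    // Linear interpolation along a cube edge: the point where the scalar field
    // crosses the isovalue between corner p1 (value v1) and corner p2 (value v2).
    Vec3 vertexInterp(float isoValue, const Vec3& p1, const Vec3& p2, float v1, float v2)
    {
        if (std::fabs(v2 - v1) < 1e-6f) return p1;   // degenerate edge
        float t = (isoValue - v1) / (v2 - v1);
        return { p1.x + t * (p2.x - p1.x),
                 p1.y + t * (p2.y - p1.y),
                 p1.z + t * (p2.z - p1.z) };
    }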
• A similar algorithm to the previous method was used, but here a new geometry shader is added to handle the vertices being created. Instead of sending triangles to the vertex shader, one corner vertex of each voxel in the data is passed to the vertex shader as that voxel's index vertex.
The vertex shader passes this vertex untouched to the geometry shader. Inside the geometry shader, a set of uniform variables makes the information about the scalar field domain size, grid spacing, transformation matrix and other details available.
There we estimate the scalar value at each corner of the voxel using the 3D texture, and the function isoextract() is called to extract the triangles for the isovalue passed in as a uniform. The newly generated triangles are emitted to the fragment shader along with the voxel index shared by all triangles generated inside that voxel, which enables the colour determination in the fragment shader. A skeleton of such a geometry shader follows this item.
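A skeleton of such a geometry shader is sketched below, again as a GLSL source string in C++; the uniform names (scalarField, isoValue, gridSpacing, domainSize, mvp) and the isoextract() interface are assumptions about, not copies of, the submitted shader.

    const char* isoGeometryShaderSrc = R"(
    #version 330 core
    layout(points) in;                              // one corner vertex per voxel
    layout(triangle_strip, max_vertices = 15) out;  // marching cubes emits at most 5 triangles

    uniform sampler3D scalarField;  // the scalar field as a 3D texture
    uniform float     isoValue;     // isovalue sent from the CPU as a uniform
    uniform vec3      gridSpacing;  // spacing between grid points
    uniform vec3      domainSize;   // extent of the domain (to normalise texture coordinates)
    uniform mat4      mvp;          // transformation matrix, applied when emitting vertices

    in  float vVoxelIndex[];        // voxel index, forwarded untouched by the vertex shader
    flat out float voxelIndex;      // passed on to the fragment shader for the colormap

    void main()
    {
        vec3 base = gl_in[0].gl_Position.xyz;       // lower corner of the current voxel
        voxelIndex = vVoxelIndex[0];

        // Sample the scalar field at the eight corners of the voxel.
        float corner[8];
        for (int i = 0; i < 8; ++i) {
            vec3 offs = vec3(float(i & 1), float((i >> 1) & 1), float((i >> 2) & 1));
            corner[i] = texture(scalarField, (base + offs * gridSpacing) / domainSize).r;
        }

        // isoextract(...) would look up the triangle table for these corner values,
        // interpolate the edge crossings, and call EmitVertex()/EndPrimitive() for
        // each generated triangle (omitted here; see the CPU version above).
    }
    )";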
• As before, no significant difference in image quality can be observed between the CPU- and GPU-based implementations. The pre-processing effort is also not significantly different, because the same data needs to be captured from the VTK file and the same calculations need to be performed to determine the triangles.
The main structural difference is that the calculation of the triangles has moved significantly further down the visualisation pipeline. One major difference concerns the frame rate: the CPU-based implementation is much faster than the GPU-based one. It is also noted that, from the time the command is run, there is a delay of at least 5 seconds for both methods before the background colour appears in the window, and this initial load time is higher for the CPU version.
This delay was not noticed for the cutting-plane problem. As can be seen below, the CPU implementation outperformed the GPU implementation. I consider the reason to be the number of times the isosurface is calculated: in the CPU implementation, once the isosurface is calculated there is no further computation other than the rendering, which is quite simple, whereas in the GPU implementation the isosurface has to be recomputed in the geometry shader on every frame, which has a significant impact on performance. Initially I considered an animation similar to the previous problem, but the significant delay for each new isovalue led me to abandon the idea.

Table 2: Average frame rates (FPS) for isosurfaces

Dataset        CPU    GPU
redseasmall    120     55
redseamod      117      5

Problem 3. Vector field visualization (Paraview): glyphs, LIC, streamlines

• Use the components provided to compute the vector field.

• Visualize the vector field using glyphs.

– Scale the glyphs using the magnitude of velocity.


– Use uniform scale along with a colormap. Choose seed points appropriately to
avoid clutter. Which of the two leads to less clutter?

• Visualize the vector field using streamlines. Again choose seed points and a region of
interest appropriately to avoid clutter. How do the choice of integration method, number
of seed points, and other parameters affect the output?

• Choose an appropriate slice, visualize the vector field restricted to the slice using LIC,
and analyse the effect of the number of steps and the step size.

Solution.

• The vector field components were combined using the ParaView Calculator expression U*iHat + V*jHat + W*kHat.

• For the scaled glyphs, I used 500,000 (5 lakh) sample points and 40,000 seed points; the 'Line' glyph representation gave the best view. The Gr4L colormap was used for all parts of this question.
For part 2, scaling was disabled and 50,000 sample points with 20,000 seed points were used. The scaled version was more informative and less cluttered than the unscaled version.

• DISCLAIMER: I was not able to get the trace to record all the values I fed into the visualiser, so I am noting the parameters in detail (Table 3) so that the results can be recreated easily during the demo.
Using forward-only or backward-only integration, we integrate in only one direction, hence the streamlines generated are less dense than when integrating in both directions.

Table 3: Parameters for Problem 3.3 (SPH.x/y/z and SPH.r: centre and radius of the spherical seed source)

Int. method   Direction   SPH.x   SPH.y   SPH.z   SPH.r   Sample rate
RK 4-5        Both        40.82   20.05   -8.25   13      8000
RK 4-5        Forward     40.82   20.05   -8.25   13      8000
RK 4-5        Backward    40.82   20.05   -8.25   13      8000
RK 4          Both        40.82   20.05   -8.25   13      8000
RK 2          Both        40.82   20.05   -8.25   13      8000


RK2 is a second-order method and, since its step size is effectively larger, many sharp corners can be seen in the streamlines, indicating lower accuracy, although it is fast.
RK4 is a middle ground between RK 4-5 and RK2, hence it is smoother and more accurate than RK2. Since RK 4-5 adjusts the step size dynamically to keep the error low, it gives the best result of the three.

• For LIC, as the number of steps decreases, the image looks more like the original noise texture it started from rather than resembling the streamlines.
When the number of steps is set to 1, the image looks almost entirely like noise, whereas at 50 steps the image closely follows the streamline behaviour.
Increasing the step size decreases the width of the noise bands, which reduces clarity, while decreasing the step size comes at the cost of higher computation time. Hence a reasonable middle ground must be chosen, depending on the data set, to get the best visualisation.
