
Project Ideas

Scientific Computing II
2024-II

This assignment consists of a set of tasks whose scope could be extended to serve as the final project for the course. Your task is to propose a solution that makes use of parallel programming to solve at least one of these problems. It does not need to be a comprehensive solution; it can be a simple approach to the problem, but it must utilize a parallel algorithm.

1. Matrix Multiplication

Implement a parallel algorithm for multiplying large matrices (they may be small for the moment). You can explore different parallelization strategies, such as block-based parallelism, or parallelize matrix multiplication using CUDA for GPUs (if you have access to one).
You should be able to

• Prove that your implementation is (consistently) faster than sequential implementations.
• Use your implementation to solve an applied problem that requires multiplying large matrices.
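As one possible starting point (a sketch only; the row-block split, worker count, and function names are illustrative choices, not part of the assignment), block-based parallelism can be prototyped with Python's standard multiprocessing module:

```python
import numpy as np
from multiprocessing import Pool

def multiply_block(args):
    """Multiply one horizontal block of A by the full matrix B."""
    a_block, b = args
    return a_block @ b

def parallel_matmul(a, b, n_workers=2):
    """Block-based parallel product: split A into row blocks,
    compute each block's product with B in its own process,
    and stack the partial results."""
    blocks = np.array_split(a, n_workers, axis=0)
    with Pool(n_workers) as pool:
        parts = pool.map(multiply_block, [(blk, b) for blk in blocks])
    return np.vstack(parts)
```

A complete submission would time parallel_matmul against a sequential baseline (a plain `a @ b` or a triple loop) over increasing matrix sizes to demonstrate the speedup.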

References:

• https://www.cs.utexas.edu/users/flame/pubs/2D3DFinal.pdf
• https://www.youtube.com/watch?v=sZxjuT1kUd0
• Interesting: https://www.youtube.com/watch?v=fDAPJ7rvcUw

2. Runge-Kutta Integration

Suppose that you have a system of n differential equations¹

    dy_i/dt = f_i(t, y),   for i = 1, 2, ..., n

¹ Actually, you can show that any system of ordinary differential equations can be transformed into this form.

where y = [y_1, y_2, ..., y_n] represents the vector of dependent variables, and f_i(t, y) are the corresponding functions defining the rate of change of each variable with respect to time t.
Your task is to implement a parallelized version of the Runge-Kutta method
to numerically integrate a system of differential equations over a specified
time interval and under certain initial conditions.
You should be able to

• Prove that your implementation is (consistently) faster than sequential implementations.

• Use your implementation to solve a particularly interesting system of differential equations, and report your results in a form suited to the problem you are solving.
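For reference, a sequential sketch of the classical fourth-order Runge-Kutta method is shown below (Python/NumPy; the function names are illustrative). Each stage evaluates all n right-hand sides at once; that per-stage evaluation is the natural target for parallelization when the f_i are expensive:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step for the system y' = f(t, y),
    where f returns the full vector [f_1(t, y), ..., f_n(t, y)]."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, t1, y0, steps):
    """Integrate from t0 to t1 with a fixed number of RK4 steps."""
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t1 - t0) / steps
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For y' = -y with y(0) = 1, integrating over [0, 1] with 100 steps agrees with e⁻¹ to several digits, which is a convenient correctness check before parallelizing.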

References:

• https://nvlpubs.nist.gov/nistpubs/Legacy/IR/nistir6031.pdf

• https://www.cs.usask.ca/~spiteri/M314/notes/parallelODEs.pdf

• The basics: https://www.youtube.com/watch?v=vNoFdtcPFdk

3. Finite Differences

Consider a partial differential equation (PDE) describing a physical (mathematical or economical) phenomenon, such as heat diffusion or wave propagation, given by:

    ∂u/∂t = F(∂²u/∂x², ∂²u/∂y², ...)

where u(x, y, ..., t) represents the unknown function, and F is some function involving second-order partial derivatives of u with respect to the spatial variables x, y (at least two spatial variables), etc.
Your task is to implement a parallelized version of the finite differences
method to solve a partial differential equation over a specified spatial domain
and time interval, and under certain initial and boundary conditions.
You should be able to

• Prove that your implementation is (consistently) faster than sequential implementations.
• Handle boundary conditions appropriately to ensure the solution remains accurate and meaningful.
• Report your results as a descriptive graph or animation.
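As a concrete special case, the sketch below takes F to be the two-dimensional Laplacian, i.e. the heat equation u_t = α(u_xx + u_yy), and performs one explicit finite-difference step (Python/NumPy; the Dirichlet boundary treatment and all names are illustrative assumptions). The vectorized interior update is exactly what a parallel version would decompose across workers:

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step for u_t = alpha * (u_xx + u_yy)
    on a uniform grid, holding the boundary values fixed (Dirichlet).
    This explicit scheme is stable only for dt <= dx**2 / (4 * alpha)."""
    # Five-point Laplacian stencil on interior points.
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    unew = u.copy()
    unew[1:-1, 1:-1] += alpha * dt * lap
    return unew
```

Iterating heat_step and plotting snapshots of u yields the descriptive graph or animation the bullet above asks for.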

References:

• https://www.ljll.fr/~frey/cours/UdC/ma691/ma691_ch6.pdf
• Wave equation: https://hplgit.github.io/num-methods-for-PDEs/doc/pub/wave/pdf/wave-4print-A4-2up.pdf
• Introduction.

4. Monte Carlo Methods

Monte Carlo methods provide a powerful technique for estimating integrals or volumes of complex functions or regions. In this assignment, you are tasked with implementing Monte Carlo methods in a parallel computing environment to estimate integrals or volumes over specified domains.
You can start by finding the hypervolume of the n-dimensional sphere; you will find the instructions in the second problem of this assignment.
You should be able to

• Prove that your implementation is (consistently) faster than sequential implementations.
• Select your own problem that requires handling integrals or hypervolumes.
• Report your results together with an estimator of the precision of your algorithm.
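A sketch of the hypervolume estimate for the unit n-dimensional sphere, including a standard-error estimate for the precision report (Python/NumPy; names are illustrative, and the parallel part — running independent sample batches on separate workers and pooling the hit counts — is omitted for brevity since the samples are independent):

```python
import numpy as np

def sphere_volume_mc(dim, n_samples, seed=0):
    """Estimate the volume of the unit sphere in `dim` dimensions by
    sampling uniformly in the cube [-1, 1]^dim and counting the points
    that land inside. Returns (estimate, standard error)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    p = ((pts ** 2).sum(axis=1) <= 1.0).mean()   # fraction of hits
    cube = 2.0 ** dim                            # volume of the sampling cube
    return cube * p, cube * np.sqrt(p * (1.0 - p) / n_samples)
```

In two dimensions the estimate should approach π as n_samples grows, which makes an easy sanity check.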

References:

• https://www.ime.usp.br/~jmstern/wp-content/uploads/2020/04/EricVeach2.pdf
• Theoretical Video.

5. Parallel Sorting

Sorting is a fundamental operation in computer science with applications in various domains. Parallelizing sorting algorithms can lead to significant performance improvements, especially for large datasets. In this assignment, you are tasked with implementing parallel sorting algorithms in a distributed computing environment.
You can start by implementing parallel versions of sorting algorithms such
as parallel quicksort, parallel mergesort, or parallel radix sort.
You should be able to

• Demonstrate that your parallel sorting algorithms achieve significant speedup and scalability compared to their sequential counterparts, especially for sorting large datasets or arrays.

• Implement and analyze different parallelization strategies for sorting algorithms, such as task parallelism, data parallelism, or hybrid approaches.

• Apply your implementation to a real-life problem.
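A minimal task-parallel mergesort sketch in Python (the chunking, worker count, and final single-process k-way merge are illustrative simplifications; a full solution would also parallelize the merge phase):

```python
import heapq
from multiprocessing import Pool

def parallel_mergesort(data, n_workers=2):
    """Sort chunks of the input in separate processes, then k-way
    merge the already-sorted chunks with heapq.merge."""
    if not data:
        return []
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(n_workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)
    return list(heapq.merge(*sorted_chunks))
```

Because the merge runs in a single process, speedup here comes only from the chunk-sorting phase; measuring where the time goes is part of the analysis the bullets above ask for.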

References:

• https://www.geeksforgeeks.org/sorting-algorithms/

• https://www.dcc.fc.up.pt/~ricroc/aulas/1516/cp/apontamentos/slides_sorting.pdf

6. Parallel Machine Learning

Machine learning algorithms play a crucial role in extracting insights from large datasets and solving complex tasks such as classification, regression, clustering, and dimensionality reduction. Parallelizing machine learning algorithms can lead to significant performance improvements, especially for training models on big data. In this assignment, you are tasked with implementing parallel machine learning algorithms in a distributed computing environment.
You can start by implementing parallel versions of machine learning algorithms such as parallel gradient descent, parallel k-means clustering, or parallel support vector machines (SVM). You may start by taking a look at the .ipynb files found here.
You should be able to

• Demonstrate that your parallel machine learning algorithms achieve significant speedup and scalability compared to their sequential counterparts, especially for training models on large datasets or performing computationally intensive tasks.

• Implement and analyze different parallelization strategies for machine learning algorithms, such as data parallelism, model parallelism, or asynchronous parallelism.

• Experiment with optimizing parallel machine learning algorithms for specific hardware accelerators, such as GPUs or TPUs, or distributed computing frameworks like TensorFlow. Use the cloud tools you consider appropriate.

• Apply your algorithm to solve a real-life problem.
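As one hedged illustration of data parallelism (using plain Python multiprocessing rather than TensorFlow, and linear regression rather than a deep model; all names are my own), data-parallel batch gradient descent averages per-shard gradients before each weight update:

```python
import numpy as np
from multiprocessing import Pool

def shard_gradient(args):
    """Mean-squared-error gradient computed on one data shard."""
    x, y, w = args
    return 2.0 * x.T @ (x @ w - y) / len(y)

def data_parallel_gd(x, y, lr=0.1, epochs=200, n_workers=2):
    """Data-parallel gradient descent for linear regression: each
    worker computes the gradient on its shard, the shard gradients
    are averaged, and a single shared weight vector is updated."""
    w = np.zeros(x.shape[1])
    x_shards = np.array_split(x, n_workers)
    y_shards = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(epochs):
            grads = pool.map(shard_gradient,
                             [(xs, ys, w) for xs, ys in zip(x_shards, y_shards)])
            w = w - lr * np.mean(grads, axis=0)
    return w
```

The same shard-average pattern is what tf.distribute applies at scale; starting from this toy version makes the later framework behavior easier to reason about.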

References:

• https://www.tensorflow.org/guide/distributed_training

• https://www.tensorflow.org/guide/keras/distributed_training

• https://turingintern2018.github.io/tensorflowother.html

7. Simulating an Epidemic with Cellular Automata

Cellular automata provide a versatile framework for simulating complex systems, including the spread of epidemics within a population. In this assignment, you are tasked with implementing a parallel cellular automaton model to simulate the spread of an epidemic.
You can start by implementing a parallel version of a cellular automaton model such as the SIR (Susceptible-Infectious-Recovered) model or the SEIR (Susceptible-Exposed-Infectious-Recovered) model.
You should be able to

• Demonstrate that your parallel cellular automaton model accurately captures the dynamics of epidemic spread within a simulated population, including factors such as infection rate, recovery rate, and population density.

• Implement and analyze different parallelization strategies for simulating the cellular automaton model, such as domain decomposition, parallel update rules, or asynchronous updating.

• Experiment with optimizing the parallel cellular automaton model for specific epidemic scenarios or population characteristics, such as varying infection rates, mobility patterns, or interventions.
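A sketch of one synchronous update step for a grid-based SIR automaton (Python/NumPy; the von Neumann neighbourhood, the per-neighbour infection probability, and all names are illustrative assumptions, not a prescribed model). The whole-grid vectorized update is what a parallel version would split by domain decomposition:

```python
import numpy as np

# Cell states.
S, I, R = 0, 1, 2

def sir_step(grid, p_infect, p_recover, rng):
    """One synchronous update of a grid SIR cellular automaton.
    A susceptible cell is infected with probability p_infect per
    infected 4-neighbour; an infected cell recovers with probability
    p_recover. All cells update at once from the previous state."""
    infected = (grid == I).astype(int)
    # Count infected von Neumann neighbours (zero padding at the edges).
    n = np.zeros_like(infected)
    n[1:, :] += infected[:-1, :]
    n[:-1, :] += infected[1:, :]
    n[:, 1:] += infected[:, :-1]
    n[:, :-1] += infected[:, 1:]
    # Probability of at least one successful transmission.
    p_any = 1.0 - (1.0 - p_infect) ** n
    new = grid.copy()
    new[(grid == S) & (rng.random(grid.shape) < p_any)] = I
    new[(grid == I) & (rng.random(grid.shape) < p_recover)] = R
    return new
```

Tracking the S/I/R counts per step gives the epidemic curves needed to study infection rate, recovery rate, and density effects.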

References:

• https://www.youtube.com/watch?v=ANAZIEFXKck

• On rule 30.

8. Your Own Problem

Have something else in mind? Discuss it with your teacher first.
