
QNP-VLSI-1303071-UT1-CAPP
07 MAR 13

PART A (2 MARKS)

1. Write Bell's taxonomy of MIMD computers.
2. Differentiate between multiprocessing and multitasking.
3. What are multivector computers?
4. Differentiate between medium-grain and fine-grain multicomputers.
5. What is the use of a dependence graph? State its advantages.

PART B (8 MARKS)

6. Discuss how the instruction set architecture, compiler technology, CPU implementation and control, and cache and memory hierarchy affect CPU performance. Justify their effects in terms of program length, clock rate and effective cycles per instruction (CPI).
(OR)
7. Explain hardware and software parallelism using suitable examples.

PART C (16 MARKS)
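Question 6 above rests on the standard CPU performance equation, CPU time = instruction count × CPI ÷ clock rate, where the ISA and compiler shape the instruction count, the implementation shapes CPI, and the technology shapes the clock rate. A minimal sketch of that relationship, using illustrative numbers that are not taken from the paper:

```python
# Sketch of the CPU performance equation behind Q6 (illustrative values only):
# CPU time = instruction count x effective CPI / clock rate.
def cpu_time(instruction_count, cpi, clock_rate_hz):
    """Seconds needed to execute a program."""
    return instruction_count * cpi / clock_rate_hz

# Example: 10^9 instructions, effective CPI of 1.5, 2 GHz clock.
t = cpu_time(1_000_000_000, 1.5, 2_000_000_000)
print(t)  # 0.75 seconds
```

Halving the effective CPI (say, through a better cache hierarchy) halves the CPU time for the same program and clock rate, which is the kind of justification the question asks for.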

1. (i) The execution times (in seconds) of four programs on three computers are given below:

       Program      Computer A   Computer B   Computer C
       Program 1          1           10           20
       Program 2       1000          100           20
       Program 3        500         1000           50
       Program 4        100          800          100

   Assume that 100,000,000 instructions were executed in each of the four programs. Calculate the MIPS rating of each program on each of the three machines. Based on these ratings, comment on the relative performance of these computers. (12)
   (ii) Explain in detail the architecture of a vector supercomputer. (6)
   (OR)
2. (i) Explain the function and applications of the PRAM and VLSI models. (8)
   (ii) Explain the function of the UMA, NUMA and COMA models. (8)
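For Q1(i), the MIPS rating is instruction count divided by (execution time × 10^6). A minimal sketch (not part of the paper) that computes every rating from the table above:

```python
# Illustrative sketch for Q1(i): MIPS = instructions / (exec_time * 10**6).
INSTRUCTIONS = 100_000_000  # instructions executed by each program

# Execution times in seconds, copied from the table in the question.
times = {
    "Program 1": {"A": 1,    "B": 10,   "C": 20},
    "Program 2": {"A": 1000, "B": 100,  "C": 20},
    "Program 3": {"A": 500,  "B": 1000, "C": 50},
    "Program 4": {"A": 100,  "B": 800,  "C": 100},
}

def mips(seconds):
    """MIPS rating for a program that ran for `seconds`."""
    return INSTRUCTIONS / (seconds * 1e6)

for program, per_machine in times.items():
    ratings = {machine: mips(t) for machine, t in per_machine.items()}
    print(program, ratings)
```

For example, Program 1 on Computer A runs 10^8 instructions in 1 s, i.e. 100 MIPS, while the same program on Computer C achieves only 5 MIPS; the spread across programs is what the "comment on relative performance" part is probing.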

3. Explain the process of program partitioning. Describe how grain packing and scheduling are performed for parallel processing, using a suitable example.
   (OR)
4. (i) Perform data dependence analysis by drawing dependence graphs for the following statements: (5)
       S1 : A = B + D
       S2 : C = A × 3
       S3 : A = A + C
       S4 : E = A / 2
   (ii) List the conditions of parallelism and detect the parallelism in a program using Bernstein's conditions. (11)
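Bernstein's conditions state that two statements Si and Sj, with input sets Ii, Ij and output sets Oi, Oj, can execute in parallel iff Ii ∩ Oj, Ij ∩ Oi and Oi ∩ Oj are all empty. A minimal sketch (not part of the paper) that applies this pairwise to the statements S1–S4 of Q4(i):

```python
# Sketch: pairwise Bernstein's-conditions check for the statements in Q4(i).
# Si || Sj  iff  Ii∩Oj = Ij∩Oi = Oi∩Oj = empty set.
from itertools import combinations

# (input set, output set) for S1: A=B+D, S2: C=A*3, S3: A=A+C, S4: E=A/2
stmts = {
    "S1": ({"B", "D"}, {"A"}),
    "S2": ({"A"}, {"C"}),
    "S3": ({"A", "C"}, {"A"}),
    "S4": ({"A"}, {"E"}),
}

def parallel(a, b):
    """True iff statements a and b satisfy all three Bernstein conditions."""
    (i1, o1), (i2, o2) = stmts[a], stmts[b]
    return not (i1 & o2) and not (i2 & o1) and not (o1 & o2)

for a, b in combinations(stmts, 2):
    print(a, b, "parallel" if parallel(a, b) else "dependent")
```

Only S2 and S4 pass all three tests here (both merely read A and write disjoint variables); every other pair conflicts through A, matching the flow and output dependences the dependence graph of part (i) would show.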
