D. Granularity
2. (Single choice, Medium, 1 mark) Topic: Parallel Algorithms, Task Interconnection
The longest path in a task dependency graph is called the
A. Directed Path
B. Sequence of Tasks
C. Critical Path Length
D. Critical Path
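Illustration (task weights invented for this note): if tasks T1 (weight 10) and T2 (weight 8) both feed task T3 (weight 5), and T3 feeds T4 (weight 3), the longest weighted path is T1 → T3 → T4 with length 10 + 5 + 3 = 18; no parallel schedule can finish in less time than this, however many processes are used.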
7. (Single choice, Medium, 1 mark) Topic: Parallel Algorithms, Decomposition
The Owner-Computes Rule generally states that the process assigned a particular data item is responsible for
A. all computations
B. computations that use the input data
C. computations that use the output data
D. None
8. (Multiple choice, Easy, 1 mark) Topic: Parallel Algorithms, Task characterization
Once a problem has been decomposed into independent tasks, the characteristics of these tasks critically impact the choice and performance of parallel algorithms. Relevant task characteristics include:
A. Task generation
B. Task size
C. Size of data associated with tasks
D. None
9. (Single choice, Easy, 1 mark) Topic: Parallel Algorithms, Decomposition
Once a problem has been decomposed into concurrent tasks, these must be mapped to processes. The mapping must minimize
A. Task interaction
B. Overheads
C. Both A and B
D. None
10. (Single choice, Medium, 1 mark) Topic: Parallel Algorithms, Task Mapping
In mapping techniques for minimum idling, tasks are mapped to processes a priori in
A. Dynamic mapping
B. Static mapping
C. Regular mapping
D. Irregular mapping
11. (Single choice, Easy, 1 mark) Topic: Parallel Programming, Pipelining
Instructions in a program may be related to each other: the results of one instruction may be required by subsequent instructions. This is referred to as
A. True Dependency
B. Resource Dependency
C. Instruction Dependency
D. Procedure Dependency
12. (Single choice, Easy, 1 mark) Topic: Parallel Programming, Memory
The rate at which data can be pumped from the memory to the processor determines the
A. Latency
B. Bandwidth
C. Transfer rate
D. None
13. (Single choice, Easy, 1 mark) Topic: Parallel Programming, Memory
The improvement in performance resulting from the presence of a cache is based on the assumption that there are repeated references to the same data item. This notion of repeated reference to a data item within a small time window is called
A. Spatial Locality
B. Reference Locality
C. Temporal Locality
D. None
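Illustration: a minimal Java sketch contrasting the two notions of locality (names invented for this note):

public class Locality {
    public static void main(String[] args) {
        double[] a = new double[1024];
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            // 'sum' is re-referenced on every iteration within a small
            // time window (temporal locality), while a[i] walks
            // consecutive addresses (spatial locality).
            sum += a[i];
        }
        System.out.println(sum);
    }
}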
14. (Single choice, Difficult, 2 marks) Topic: Parallel Programming, Memory
A computer has a single (off-chip) cache with a 2 ns hit time and a 98% hit rate. Main memory has a 40 ns access time. If we add an on-chip cache with a 0.5 ns hit time and a 94% hit rate, what is the computer's effective access time, and how much of a speedup does the on-chip cache give the computer?
A. 2.4 ns, 4.2
B. 2.6 ns, 4.6
C. 2.8 ns, 4.2
D. None
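Worked sketch, assuming the usual hierarchical model in which a miss at one level pays the full access time of the next level:

EAT (off-chip cache only) = 2 + 0.02 × 40 = 2.8 ns
EAT (with on-chip cache) = 0.5 + 0.06 × (2 + 0.02 × 40) = 0.668 ns
Speedup = 2.8 / 0.668 ≈ 4.2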
15. (Single choice, Easy, 1 mark) Topic: Parallel Programming, Memory
When the data at a location in the cache differs from the data at the same location in main memory, the cache is called
A. Unique
B. Inconsistent
C. Fault
D. Variable
16. (Single choice, Medium, 1 mark) Topic: Parallel Programming, Network
Imagine sitting at your computer browsing the web during peak network traffic hours. We access a whole bunch of pages in one go, amortizing the latency across the various accesses. This approach is called
A. Multithreading
B. Prefetching
C. Spatial Locality
D. Temporal Locality
17. (Single choice, Easy, 1 mark) Topic: Parallel Programming, Architecture
Computers in which each processing element is capable of executing a different program independently of the other processing elements are called
A. SIMD
B. MISD
C. SISD
D. MIMD
18. (Single choice, Medium, 1 mark) Topic: Parallel Programming, Architecture
If the time taken to access certain memory words is longer than for others, the platform is called
A. UMA
B. NUMA
C. SPMD
D. CUDA
19. (Single choice, Medium, 1 mark) Topic: Parallel Programming, Network
Dynamic networks for parallel computers are built using
A. Point-to-point links
B. Switches and communication links
C. Both A and B
D. None of the above
20. (Single choice, Easy, 1 mark) Topic: Parallel Programming, Network
The minimum volume of communication allowed between any two halves of the network is called the
A. Section bandwidth
B. Cross-section bandwidth
C. Bisection bandwidth
D. None
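Illustration: a hypercube with p nodes has a bisection width of p/2 links, so its bisection bandwidth is p/2 times the bandwidth of a single link; a ring of p nodes, by contrast, has a bisection width of only 2 links.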
UNIT-II
UNIT-III
UNIT-IV
64. (Single choice, Medium, 1 mark) Topic: Dense matrix algorithms, matrices
A matrix contains m rows and n columns. The matrix is called a sparse matrix if ________
A. Total number of zero elements > (m*n)/2
B. Total number of zero elements = m + n
C. Total number of zero elements = m/n
D. Total number of zero elements = m - n
65. (Single choice, Medium, 1 mark) Topic: Dense matrix algorithms, matrices
Which of the following is not a method to represent a sparse matrix?
A. Dictionary of Keys
B. Linked List
C. Array
D. Heap
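Illustration: a minimal Java sketch of the dictionary-of-keys representation (class and method names invented for this note); only the nonzero entries are stored, keyed by their position:

import java.util.HashMap;
import java.util.Map;

public class DokMatrix {
    private final Map<Long, Double> entries = new HashMap<>();
    private final int cols;

    DokMatrix(int rows, int cols) { this.cols = cols; }

    void set(int i, int j, double v) {
        long key = (long) i * cols + j;    // encode (i, j) as a single key
        if (v == 0.0) entries.remove(key); // zeros are never stored
        else entries.put(key, v);
    }

    double get(int i, int j) {
        return entries.getOrDefault((long) i * cols + j, 0.0);
    }

    public static void main(String[] args) {
        DokMatrix m = new DokMatrix(1000, 1000);
        m.set(5, 7, 3.14);
        System.out.println(m.get(5, 7) + " " + m.get(0, 0)); // 3.14 0.0
    }
}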
66. (Single choice, Easy, 1 mark) Topic: Dense matrix algorithms, matrices
Is a sparse matrix also known as a dense matrix?
A. True
B. False
67. (Single choice, Easy, 1 mark) Topic: Dense matrix algorithms, matrices
Which one of the following is a special sparse matrix?
A. Band matrix
B. Skew matrix
C. Null matrix
D. Unit matrix
68. (Single choice, Medium, 1 mark) Topic: Dense matrix algorithms, matrices
In what way can a symmetric sparse matrix be stored efficiently?
A. Heap
B. Binary tree
C. Hash table
D. Adjacency list
69. (Single choice, Difficult, 1 mark) Topic: Dense matrix algorithms, array operations
What does the following piece of code do?

int sum = 0;
for (int i = 0; i < row; i++) {
    for (int j = 0; j < column; j++) {
        if (i == j)
            sum = sum + array[i][j];
    }
}
System.out.println(sum);

A. Normal of a matrix
B. Trace of a matrix
C. Square of a matrix
D. Transpose of a matrix
70. (Single choice, Difficult, 1 mark) Topic: Graph algorithms, features
In the given graph, identify the cut vertices.
[graph figure not reproduced here]
A. B and E
B. C and D
C. A and E
D. C and B
71. (Single choice, Difficult, 1 mark) Topic: Graph algorithms, features
For the given graph G, which of the following statements is true?
[graph figure not reproduced here]
A. G is a complete graph
B. G is not a connected graph
C. The vertex connectivity of the graph is 2
D. The edge connectivity of the graph is 1
72. (Single choice, Difficult, 1 mark) Topic: Graph algorithms, features
What is the number of edges present in a complete graph having n vertices?
A. (n*(n+1))/2
B. (n*(n-1))/2
C. n
D. Information given is insufficient
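Derivation: each of the n vertices is adjacent to the remaining n - 1 vertices, and summing over all vertices counts every edge exactly twice, giving n(n - 1)/2 edges in total.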
73. (Single choice, Difficult, 1 mark) Topic: Graph algorithms, features
The given graph is regular.
[graph figure not reproduced here]
A. True
B. False
74. (Single choice, Difficult, 1 mark) Topic: Graph algorithms, Prim's algorithm
Consider the given graph.
[graph figure and the remainder of the question not reproduced here]
UNIT-V
84. (Single choice, Easy, 1 mark) Topic: Parallel DFS, Work splitting
In Parallel Depth-First Search, the work is split by
A. Node splitting
B. Stack splitting
C. Both A and B
D. None
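Illustration: a minimal Java sketch of stack splitting (types and names invented for this note). A busy processor donates roughly half of the unexplored nodes on its depth-first search stack to an idle requester; entries near the bottom of the stack are donated, since they lie closest to the root and tend to represent larger unexplored subtrees:

import java.util.ArrayDeque;
import java.util.Deque;

public class StackSplit {
    static <T> Deque<T> splitStack(Deque<T> donor) {
        Deque<T> donated = new ArrayDeque<>();
        int give = donor.size() / 2;
        for (int i = 0; i < give; i++) {
            donated.push(donor.pollLast()); // pollLast() takes from the stack bottom
        }
        return donated;
    }

    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int node = 1; node <= 6; node++) stack.push(node);
        System.out.println("donated: " + splitStack(stack)); // [3, 2, 1]
        System.out.println("kept: " + stack);                // [6, 5, 4]
    }
}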
85. (Single choice, Difficult, 1 mark) Topic: Parallel Best-First Search, splitting techniques
In the Asynchronous Round Robin scheme, the worst case is
A. V(p) = O(p)
B. V(p) = O(p²)
C. V(p) is unbounded
D. None of the above
86. (Single choice, Medium, 1 mark) Topic: Parallel Best-First Search, splitting techniques
Global Round Robin has poor performance because of
A. a large number of work requests
B. contention at the counter
C. Both A and B
D. None
87. (Single choice, Medium, 1 mark) Topic: Parallel Best-First Search, Termination Detection
In the tree-based termination detection scheme, termination is signaled when the weight at processor P0 becomes
A. 1
B. 0
C. negative
D. None
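Illustration: a small trace of the weight-based protocol. P0 starts with weight w = 1. When P0 sends part of its work to P1, it sends part of its weight with it (say both now hold 0.5). When P1 finishes, it returns its 0.5 to the processor it received the work from; once all weight has flowed back and P0 is idle, the weight at P0 is again 1 and termination is signaled.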
90. (Single choice, Medium, 1 mark) Topic: Parallel BFS, operations
In Parallel Best-First Search, the locking operation is used to
A. extract the best node
B. serialize queue access by the various processors
C. Both A and B
D. None of the above
91. (Single choice, Medium, 1 mark) Topic: Parallel BFS, Execution
In Parallel Best-First Search, the run time will be at least
A. n(t_access + t_exp)
B. (t_access + t_exp)/t_access
C. n * t_access
D. None
92. (Single choice, Medium, 1 mark) Topic: Parallel BFS, Features
Parallel Best-First Search avoids contention by
A. balancing the quality of nodes
B. balancing strategies
C. having multiple open lists
D. Both A and B
93. (Single choice, Medium, 1 mark) Topic: Parallel BFS, Features
Parallel Best-First Search balancing strategies are
A. Ring
B. Blackboard
C. Random communication
D. All of the above
94. (Single choice, Medium, 1 mark) Topic: Parallel BFS, Features
In Parallel Best-First Search, hashing can be parallelized by
A. Two functions
B. One function
C. Three functions
D. None of the above
95. (Single choice, Medium, 1 mark) Topic: Parallel BFS, Features
Executions yielding speedups greater than p when using p processors are referred to as
A. deceleration anomalies
B. acceleration anomalies
C. speedup anomalies
D. None of the above
96. (Single choice, Medium, 1 mark) Topic: Parallel BFS, Features
If the heuristic function is good, the work done in parallel best-first search is typically more than that in its serial counterpart.
A. True
B. False
100. (Single choice, Medium, 1 mark) Topic: Sequential search, depth-first search
DFBB does not explore paths that are guaranteed to lead to solutions worse than the current best solution.
A. True
B. False
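Illustration: a minimal Java sketch of depth-first branch-and-bound pruning (the cost table is invented for this note); a partial path is abandoned as soon as its cost already matches or exceeds the best complete solution found so far:

public class Dfbb {
    static final int[][] COSTS = { {3, 5}, {2, 7}, {4, 1} }; // one choice per level
    static int best = Integer.MAX_VALUE;

    static void search(int level, int costSoFar) {
        // Costs are non-negative, so costSoFar is a lower bound on any completion.
        if (costSoFar >= best) return;  // prune: cannot improve on the current best
        if (level == COSTS.length) {    // a complete solution has been reached
            best = costSoFar;
            return;
        }
        for (int c : COSTS[level]) search(level + 1, costSoFar + c);
    }

    public static void main(String[] args) {
        search(0, 0);
        System.out.println("best cost = " + best); // prints 6 (3 + 2 + 1)
    }
}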
UNIT-I
4. (Subjective, Medium, 3 marks) Topic: Dichotomy of Parallel Computing Platforms, types
List three major problems requiring the use of supercomputing in the following domains:
1. Structural Mechanics
2. Computational Biology
3. Commercial Applications
5. (Subjective, Medium, 3 marks) Topic: Routing mechanisms, E-cube routing in a hypercube network
Why is E-cube routing used in a hypercube network? Explain with the help of an example.
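Illustration: in a three-dimensional hypercube, to route a message from node 010 to node 111, E-cube routing computes 010 XOR 111 = 101 and corrects the differing bits in a fixed lowest-to-highest dimension order: 010 → 011 → 111. Fixing the dimension order makes the route deterministic and helps avoid deadlock.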
6. (Subjective, Easy, 3 marks) Topic: Decomposition techniques, Recursive decomposition
Compare the recursive decomposition and data decomposition techniques with a suitable example.
7. (Subjective, Easy, 3 marks) Topic: Communication techniques, types
Tasks may communicate with each other in various ways. Define any two task communication techniques with examples.
UNIT-III
UNIT-IV
UNIT-V
UNIT-I
UNIT-II
UNIT-III
22. (Subjective, Difficult, 5 marks) Topic: Analysis of parallel DFS, Load balancing
Compute the value of V(p) for the different load-balancing schemes (ARR, GRR, RP).
23. (Subjective, Easy, 5 marks) Topic: Termination detection, tree-based
Define the tree-based termination detection technique.
24. (Subjective, Medium, 5 marks) Topic: Parallel BFS, representation
Draw a general schematic diagram for parallel best-first search using a centralized strategy.
25. (Subjective, Medium, 5 marks) Topic: Parallel BFS, communication strategies
Differentiate between the ring communication and blackboard communication strategies.