Week 10: CH 9 Virtual Memory
Operating Systems
Assignment #6 of Chapter 9
Developed by: Adnan Alam Khan ([email protected]), Department of Computer Science & Information Technology
Q#6 Assume that you are monitoring the rate at which the pointer in the clock algorithm (which indicates the candidate page for replacement) moves. What can you say about the system if you notice the following behavior:
a. the pointer is moving fast
b. the pointer is moving slow
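For reference, the sketch below shows how the pointer (the "hand") of the clock, or second-chance, algorithm advances while a victim frame is being selected. It is a minimal illustration only; the names frame_t, NUM_FRAMES, and clock_select_victim are assumptions, not taken from the text.

/* Minimal sketch of the clock (second-chance) pointer referred to in Q#6.
 * All names here are illustrative. */
#include <stdbool.h>
#include <stddef.h>

#define NUM_FRAMES 64

typedef struct {
    int  page;        /* page currently held in this frame */
    bool referenced;  /* reference bit, set by the hardware on each access */
} frame_t;

static frame_t frames[NUM_FRAMES];
static size_t  hand = 0;          /* the clock pointer */

/* Advance the hand until a frame whose reference bit is clear is found;
 * set bits are cleared along the way (the "second chance"). The hand moves
 * only while a replacement is being searched for, so its speed reflects
 * both the page-fault rate and how many reference bits it finds set. */
size_t clock_select_victim(void)
{
    for (;;) {
        if (!frames[hand].referenced) {
            size_t victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            return victim;                 /* replace this frame */
        }
        frames[hand].referenced = false;   /* give the page a second chance */
        hand = (hand + 1) % NUM_FRAMES;
    }
}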
Q#15 What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

Answer: Thrashing is caused by under-allocating the minimum number of pages a process requires, forcing it to page-fault continuously. The system can detect thrashing by comparing CPU utilization with the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

Q#16 Is it possible for a process to have two working sets, one representing data and another representing code? Explain.

Answer: Yes; in fact, many processors provide two TLBs for this very reason. For example, the code being accessed by a process may retain the same working set for a long period of time, while the data that code accesses may change, reflecting a change in the working set for data accesses.
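As a companion to Q#16, the sketch below counts the distinct pages referenced within the most recent window of accesses; keeping one reference string for code and another for data would yield the two working sets described in the answer. The window size DELTA, the bound MAX_PAGES, and the function name are assumptions for illustration only.

/* Sketch of a working-set window; names are illustrative, not from the text. */
#include <stdio.h>

#define DELTA     4    /* working-set window: number of most recent references */
#define MAX_PAGES 16   /* page numbers assumed to be 0..MAX_PAGES-1 */

/* Count the distinct pages among the last DELTA entries of a reference string. */
static int working_set_size(const int *refs, int n)
{
    int seen[MAX_PAGES] = {0};
    int count = 0;
    int start = (n > DELTA) ? n - DELTA : 0;

    for (int i = start; i < n; i++) {
        if (!seen[refs[i]]) {
            seen[refs[i]] = 1;
            count++;
        }
    }
    return count;
}

int main(void)
{
    /* A data reference string whose working set changes over time, while a
     * separate code reference string could stay stable, as in the answer. */
    int data_refs[] = {1, 2, 1, 2, 3, 4, 5, 6};
    int n = (int)(sizeof data_refs / sizeof data_refs[0]);

    printf("current data working-set size = %d\n", working_set_size(data_refs, n));
    return 0;
}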
Q#18 Assume there is an initial 1,024 KB segment where memory is allocated using the buddy system. Using Figure 9.27 as a guide, draw the tree illustrating how the following memory requests are allocated:
request 240 bytes
request 120 bytes
request 60 bytes
request 130 bytes
Next, modify the tree for the following releases of memory. Perform coalescing whenever possible:
release 240 bytes
release 60 bytes
release 120 bytes

Answer: The buddy system makes the following allocation: the 240-byte request is assigned a 256-byte segment, the 120-byte request a 128-byte segment, the 60-byte request a 64-byte segment, and the 130-byte request a 256-byte segment. After the allocation, the following segment sizes are free: 64 bytes, 256 bytes, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, and 512 KB. After the releases of memory, the only segment still in use is the 256-byte segment containing the 130 bytes of data; the following segments are free: 256 bytes, 512 bytes, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, and 512 KB. (A sketch of this block-size calculation follows Q#20.)

Q#19 The slab allocation algorithm uses a separate cache for each different object type. Assuming there is one cache per object type, explain why this doesn't scale well with multiple CPUs. What could be done to address this scalability issue?

Answer: Poor scalability with multiple CPUs had long been a problem with the slab allocator. The issue comes from having to lock the global cache whenever it is accessed, which has the effect of serializing cache accesses on multiprocessor systems. Solaris has addressed this by introducing a per-CPU cache rather than a single global cache.

Q#20 Consider a system that allocates pages of different sizes to its processes. What are the advantages of such a paging scheme? What modifications to the virtual memory system are needed to provide this functionality?
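To make the sizes quoted in the Q#18 answer easy to check, here is a minimal sketch that rounds each request up to the power-of-two block the buddy system would carve out of the 1,024 KB segment. It is not a full buddy allocator, and the 64-byte minimum block size is an assumption taken from the smallest segment mentioned in the answer.

/* Sketch of the buddy-system block-size rounding used in Q#18. */
#include <stdio.h>

#define SEGMENT_BYTES (1024u * 1024u)   /* the initial 1,024 KB segment */
#define MIN_BLOCK      64u              /* assumed smallest block size */

/* Return the power-of-two block size the buddy system would allocate
 * for a request, found by repeatedly halving the initial segment. */
static unsigned buddy_block_size(unsigned request)
{
    unsigned size = SEGMENT_BYTES;
    while (size / 2 >= request && size / 2 >= MIN_BLOCK)
        size /= 2;
    return size;
}

int main(void)
{
    unsigned requests[] = {240, 120, 60, 130};
    for (int i = 0; i < 4; i++)
        printf("%u-byte request -> %u-byte block\n",
               requests[i], buddy_block_size(requests[i]));
    /* Prints 256, 128, 64, and 256, matching the allocation in the answer. */
    return 0;
}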