Operating System Theory Project Report
A virtual machine abstracts the hardware of a physical computer (CPU, disk drives,
memory, network interface card (NIC), and so on) into multiple execution
environments, so that each environment appears to be a complete computer of its
own. VirtualBox is a familiar example.
When an operating system runs multiple processes, CPU scheduling and
virtual-memory techniques create the illusion that each process runs on its own
processor with its own virtual memory.
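As a small illustration of our own (not part of the report's evaluated systems), the following Python sketch uses fork to show the per-process address-space illusion: parent and child start from the same memory image, yet a write in the child is invisible to the parent, because each process has its own virtual memory.

```python
import os

def address_space_demo():
    """Show that a forked child writes to its own private copy of memory."""
    value = [42]                 # lives in this process's virtual memory
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                 # child: same virtual address, private copy
        value[0] = 99
        os.write(write_fd, str(value[0]).encode())
        os._exit(0)
    os.waitpid(pid, 0)           # parent: wait for the child to finish
    child_value = int(os.read(read_fd, 16))
    os.close(read_fd)
    os.close(write_fd)
    return value[0], child_value

parent_view, child_view = address_space_demo()
print(parent_view, child_view)   # prints: 42 99
```

The child's write to `value[0]` lands in its own copy of the page (copy-on-write), so the parent still sees 42 after the child has stored 99.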
Problem Statement:
VM sprawl wastes valuable computing resources:
Organizations often virtualize a certain number of workloads, then have to buy more
servers down the line to accommodate additional ones. This occurs because
companies usually lack the business policies needed to plan and manage VM
creation.
State-of-the-art technique:
Methodology:
A process VM, sometimes called an application virtual machine or Managed Runtime
Environment (MRE), runs as a normal application inside a host OS and supports a
single process: it is created when that process starts and destroyed when it exits.
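To make the idea concrete, here is a minimal sketch of our own (not one of the systems evaluated below): a tiny stack-machine interpreter that plays the role of a process VM. It exists only inside the host process that runs it, and it disappears when that process exits.

```python
def run(program):
    """Interpret a tiny stack-machine program given as (opcode, operand) pairs."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# The "guest" program computes (2 + 3) * 4.
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
              ("PUSH", 4), ("MUL", None)])
print(result)  # prints: 20
```

Real process VMs such as the JVM work on the same principle, only with a far richer instruction set and runtime services.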
Results:
kernel-build compiles the complete Linux 2.4.18 kernel. SPECweb99 measures web
server performance, using the 2.0.36 Apache web server. We configure SPECweb99
with 15 simultaneous connections spread over two clients connected to a 100 Mb/s
Ethernet switch. kernel-build and SPECweb99 exercise the virtual machine intensively
by making many system calls.
They are similar to the I/O-intensive and kernel-intensive workloads used to evaluate
Cellular Disco. All experiments are run on a computer with an AMD Athlon 1800+
CPU, 256 MB of memory, and a Samsung SV4084 IDE disk. The guest kernel is Linux
2.4.18 ported to UMLinux, and the host kernels for UMLinux are all Linux 2.4.18 with
different degrees of support for VMMs. All virtual machines are configured with 192 MB
of "physical" memory.
The virtual hard disk for UMLinux is stored on a raw disk partition on the host, both to
avoid double buffering the virtual-disk data in the guest and host file caches and to
prevent the virtual machine from benefiting unfairly from the host's file cache. The host
uses the same hardware and software installation as the virtual-machine systems and
has access to the full 256 MB of host memory.
Conclusion:
First, the host OS required a separate host user process to control the main guest-
machine process, and this generated a large number of host context switches. We
eliminated this bottleneck by moving the small amount of code that controlled the guest-
machine process into the host kernel. Second, switching between guest kernel and
guest user space generated a large number of memory protection operations on the
host. We eliminated this bottleneck in two ways.
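The cost of the second bottleneck can be illustrated with a Python sketch of our own (using ctypes to call the host's mprotect, the kind of memory protection operation such systems issue on every guest kernel/user transition); the constants and helper function are illustrative, not the actual UMLinux code.

```python
import ctypes
import mmap

PROT_NONE, PROT_READ, PROT_WRITE = 0, 1, 2   # Linux <sys/mman.h> values

def simulate_switches(n):
    """Toggle a page's protections n times, as a guest kernel/user switch would."""
    libc = ctypes.CDLL(None, use_errno=True)
    page = mmap.PAGESIZE
    region = mmap.mmap(-1, page)              # anonymous, page-aligned mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(region))
    calls = 0
    for i in range(n):
        # Entering the "guest kernel" hides the page from "guest user" code;
        # returning to "guest user" makes it accessible again.
        prot = PROT_NONE if i % 2 == 0 else PROT_READ | PROT_WRITE
        if libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(page), prot) != 0:
            raise OSError(ctypes.get_errno(), "mprotect failed")
        calls += 1
    # Leave the page accessible before returning.
    libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(page),
                  PROT_READ | PROT_WRITE)
    return calls

print(simulate_switches(8))  # prints: 8
```

Each guest kernel/user transition forces at least one such system call per protected region, which is why batching or avoiding these protection changes in the host kernel pays off.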