Class-6 Live Migration of VMs

The document discusses the concept and significance of live migration of virtual machines (VMs) in cloud computing, highlighting its importance for service continuity, load balancing, and resource optimization in data centers. It outlines various strategies for VM migration, including pre-copy and post-copy methods, and details the steps involved in the migration process, such as memory transfer and network adjustments. Additionally, it addresses performance metrics and implementation challenges associated with live VM migration, emphasizing the need for efficient management of resources during the migration process.


Cloud Computing and Cloud Networking
Winter 2025
K. K. Ramakrishnan
[email protected]
SSC 308
Tue-Thur 2:00 pm - 3:20 pm
CS 208: Class 6
Live Migration of Virtual Machines

Slides from a number of papers & presentations:

1) Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, Andrew Warfield (University of Cambridge Computer Laboratory, UK & Department of Computer Science, University of Copenhagen), "Live Migration of Virtual Machines," NSDI 2005
(https://www.usenix.org/legacy/events/nsdi05/tech/full_papers/clark/clark.pdf)

2) Michael Hines, Umesh Deshpande, and Kartik Gopalan (Binghamton University), "Post-copy Live Migration of Virtual Machines," VEE 2009

3) Timothy Wood, Prashant Shenoy, Arun Venkataramani, and Mazin Yousif, "Black-box and Gray-box Strategies for Virtual Machine Migration," NSDI 2007
2
Motivation
Why do we need live migration?
We want to move a VM without interrupting the VM’s service.

Products (some from many years ago):
Xen 2.0: live migration
VMware: vMotion
Container migration: Docker CRIU (‘live’ is still a work in progress)
◦ Checkpoint/Restore In Userspace (CRIU)
Server Consolidation
• Can pack lots of VMs onto a server
• e.g., different websites have workload peaks at different times, so we can share resources in a datacenter’s server(s)

• Placement problem:
• Data center has 10,000 - 1,000,000 servers
• Customers have 100,000s of VMs
• Each with different CPU and RAM needs
• Which VMs should go where?
• How hard is this problem?
• Often use bin packing algorithms (see the sketch below)
• But imbalance may result from imperfect predictions and workload variability
4
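To make the bin-packing remark above concrete, here is a minimal first-fit-decreasing sketch in Python; the VM demands, server capacities, and function name are hypothetical, not values from the slides:

```python
# Minimal first-fit-decreasing bin packing for VM placement (illustrative sketch).
# Each VM and server is described by (cpu, ram) demand/capacity; all values are hypothetical.

def place_vms(vms, servers):
    """Assign each VM to the first server with enough spare CPU and RAM.

    vms:     dict name -> (cpu_cores, ram_gb) demand
    servers: dict name -> (cpu_cores, ram_gb) capacity
    Returns  dict vm_name -> server_name (unplaced VMs map to None).
    """
    free = {s: list(cap) for s, cap in servers.items()}   # remaining capacity per server
    placement = {}
    # Sort VMs by total demand, largest first (first-fit decreasing heuristic).
    for vm, (cpu, ram) in sorted(vms.items(), key=lambda kv: -(kv[1][0] + kv[1][1])):
        placement[vm] = None
        for s, (fcpu, fram) in free.items():
            if cpu <= fcpu and ram <= fram:
                free[s][0] -= cpu
                free[s][1] -= ram
                placement[vm] = s
                break
    return placement

if __name__ == "__main__":
    vms = {"web1": (2, 4), "web2": (4, 8), "db1": (8, 16), "cache1": (1, 2)}
    servers = {"host-a": (16, 32), "host-b": (8, 16)}
    print(place_vms(vms, servers))
```

Imperfect demand predictions or workload variability can still leave some servers overloaded, which is exactly what live migration later corrects.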
VM Checkpointing
• What is a computer?
• What is its "state"?

• How can we save a computer to disk?


• Why would we want to?

5
VM Checkpointing
• A VM is really just (memory + CPU + disk) state
• Save a copy of a VM:
• Pause the VM
• Write out all of its memory pages to disk
• Write out processor (e.g., registers) state
• Make a copy of its disk data

[Figure: a VM and its saved memory state, CPU state, and disk data]

• Can restore it later, at exactly the point where it was paused
• Or create multiple clones based on this saved state
6
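As a rough illustration of "save memory + CPU state to disk", here is a minimal checkpoint/restore sketch in Python; the VMState fields are invented for illustration, and real hypervisors serialize far more device state:

```python
import pickle
from dataclasses import dataclass, field

# Hypothetical, simplified VM state: real checkpoints also capture device,
# timer, and interrupt-controller state.
@dataclass
class VMState:
    registers: dict                                     # CPU state, e.g. {"rip": ..., "rsp": ...}
    memory_pages: dict = field(default_factory=dict)    # page_number -> bytes
    disk_image_path: str = ""                           # disk is usually snapshotted separately

def checkpoint(vm: VMState, path: str) -> None:
    """The VM is assumed to be paused already; write all of its state to disk."""
    with open(path, "wb") as f:
        pickle.dump(vm, f)

def restore(path: str) -> VMState:
    """Recreate the VM exactly at the point where it was paused."""
    with open(path, "rb") as f:
        return pickle.load(f)
```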
VM Migration
• Virtualization abstracts the VM from the underlying hardware...
• So we can change the hardware completely without affecting the VM

• Live VM Migration: transferring a live VM from one computer to another
• VM doesn't realize it is being moved
• Network connections to the VM aren't interrupted
• Users and applications should be able to continue (mostly) unaffected

7
VM Migration
• Assume disk is accessible to both hosts over network
• Only need to migrate memory and CPU state
• Basic idea for Live Migration:
• Copy all memory pages as the VM is running
• Track what memory pages are written to by the VM during transfer
• Resend all dirty pages.
• Repeat until there are very few pages left.
• Pause and send the final set of pages

[Figure: over time, Host 1 copies all of the VM's memory pages to Host 2, then iterates, copying the pages dirtied in the meantime]
8
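A minimal sketch of the iterative pre-copy loop described above, assuming hypothetical helpers (send_page, get_dirty_pages, pause, etc.) rather than any real hypervisor API:

```python
# Iterative pre-copy, as outlined above. All helpers (all_pages, send_page,
# get_dirty_pages, pause, send_cpu_state, resume) are hypothetical placeholders.

DIRTY_THRESHOLD = 64      # "very few pages left"
MAX_ROUNDS = 30           # give up and stop-and-copy after this many rounds

def precopy_migrate(vm, target):
    dirty = set(vm.all_pages())            # round 1: treat every page as dirty
    rounds = 0
    while len(dirty) > DIRTY_THRESHOLD and rounds < MAX_ROUNDS:
        vm.clear_dirty_bitmap()            # start tracking writes for this round
        for page in dirty:
            target.send_page(page, vm.read_page(page))
        dirty = vm.get_dirty_pages()       # pages written while we were copying
        rounds += 1
    # Short stop-and-copy: pause, send the remainder plus CPU state, switch over.
    vm.pause()
    for page in dirty:
        target.send_page(page, vm.read_page(page))
    target.send_cpu_state(vm.cpu_state())
    target.resume()
```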
Significance of Live Migration
 Concept:
 Migration: Move VM between different physical machines
 Live: Without disconnecting client or application (invisible)

 Relation to Cloud Computing and Data Centers:


 Cloud Infrastructures and data centers have to efficiently use their huge
scale of hardware resources.
 Virtualization Technology helps in two approaches:
 Server Consolidation
 Live Migration (e.g., for load balancing)

 Roles in a Data Center:


 Flexibly remap hardware among VMs.
 Balance workload
 Save energy
 Enhance service availability and fault tolerance
Introduction: Original Idea
Fundamental Idea: Live OS
migration
◦ Migrating an entire OS and all of its
applications as a unit
◦ Memory state can be transferred in a
consistent and efficient fashion
◦ Allow a separation of concerns between the
users and operator
◦ Minimize the downtime and total migration
time
◦ Example: Pre-copy approach
11
Related Work (in addition to Xen's)
vMotion
Process Migration
◦ Residual dependency
 an ongoing need for a host to maintain
data structures or provide functionality
for a process even after the process
migrates away from the host

12
Why Live Migrate VMs?
• Load Balancing

• System Maintenance

• In anticipation of hardware failure or natural disaster

• Edge computing
• To move computation closer to the data sources

• Avoiding the residual dependencies at the source host that occur when a process migrates to another node
– E.g., system call redirection, shared memory
Performance metrics of Live
Migration
• Minimizing Downtime

• Reducing total migration time

• Avoiding interference with normal system activity

• Minimizing network activity


Metrics
 Metrics and Measurement:
  Downtime
   Def: how long the VM is suspended
   Measure: ping (see the sketch below)
  Total migration time
   Def: how long a Live Migration (LM) lasts
   Measure: timing the LM command
  Amount of migrated data
   Def: how much data is transferred
   Measure: transferred data on its exclusive TCP port
  Migration overhead
   Def: how much LM impairs performance of the normal workload
   Measure: reduced percentage of the workload's score in a benchmark

 Metrics Sufficiency:
  Cost:
   migration overhead
   amount of migrated data (burden on network)
  QoS:
   downtime
   total migration time

16
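A rough sketch of the "measure downtime with ping" idea: probe the VM's address at a fixed interval during migration and report the longest gap in replies. Host address, interval, and duration are placeholders:

```python
import subprocess, time

def measure_downtime(host: str, probe_interval: float = 0.05, duration: float = 60.0) -> float:
    """Ping `host` every probe_interval seconds for `duration` seconds and return
    the longest stretch (in seconds) with no reply, i.e. the observed downtime."""
    last_reply = time.time()
    worst_gap = 0.0
    deadline = time.time() + duration
    while time.time() < deadline:
        ok = subprocess.run(
            ["ping", "-c", "1", "-W", "1", host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0
        now = time.time()
        if ok:
            worst_gap = max(worst_gap, now - last_reply)
            last_reply = now
        time.sleep(probe_interval)
    return worst_gap
```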
Migrate VM’s memory first or last?
• Pure stop-and-copy
– Freeze VM at source,
– Copy the VM’s pseudo-physical memory contents to target,
– Restart VM at target
– Long downtime.
– Total migration time = downtime

• Pure Demand Paging


– Freeze VM at source,
– Copy minimal execution context to target
• PC, Registers, non-pageable memory
– Restart VM at target,
– Pull memory contents from source as and when needed
– Smaller downtime
– Sloooow warm-up phase at target during page-faults across
network
Design(1) - Migrating Memory
Steps for Pre-Copy and Post-Copy based Live Migration Approaches
Minimize both downtime and total migration time
◦ Downtime – the period during which the service is unavailable
◦ Total Migration Time – the duration between when migration is initiated and when the original VM can be discarded

18
Design(2) - Migrating Memory
 Three phases of memory transfer
◦ Push phase
 Source VM continues running
 Pages are pushed across the network to the destination
◦ Stop-and-copy phase
 The source VM is stopped; pages are copied across to the destination VM
◦ Pull phase
 The new VM executes and faults on missing pages
 "Pulls" those pages from the source
 Pre-copy
◦ A bounded iterative push phase with a very short stop-and-copy phase
◦ Avoids the pull phase
19
Design(3) – Network & Disk
Network
◦ Generate an unsolicited ARP reply from the migrated host, advertising that the IP address has moved to a new location
◦ A small number of in-flight packets may be lost
Disk
◦ Network-attached storage (NAS)

20
Pre-copy Live VM Migration

[Timeline: preparation (live) push phase – the 1st iteration transfers the entire memory, later iterations transfer dirty pages – followed by a short stop-and-copy (downtime) and resume]

• DON'T freeze the VM at the source ➔ let it continue to run
• Copy the VM's memory contents to the target over multiple iterations
– First iteration ➔ copy all pages (original Xen: pause and take a snapshot; later QEMU improved this by marking all pages 'dirty' to begin with)
– Each subsequent iteration ➔ copy the pages that were dirtied by the VM during the previous iteration
• Do a short stop-and-copy when the number of dirty pages is "small enough"
• But what if the number of dirty pages never converges to a small enough number?
– After a fixed number of iterations, give up and do a long stop-and-copy.
Design(4) – Logical Steps

[Figure: the six logical stages of live migration]
Stage 0: Pre-Migration – VM running normally on source host
Stage 1: Reservation
Stage 2: Iterative Pre-copy – overhead due to copying
Stage 3: Stop and copy – downtime (VM out of service)
Stage 4: Commitment
Stage 5: Activation – VM running normally on destination host

22
Design(5) – Logical Steps
Stage 0: Pre-Migration
◦ Preselect target host
Stage 1: Reservation
◦ Confirm the resources are available on
destination host
Stage 2: Iterative Pre-copy
◦ First iteration: all pages are transferred from source to destination
◦ Subsequent iterations: copy the pages dirtied during the previous transfer phase

23
Design(6) – Logical Steps
Stage 3: Stop and copy
◦ Stop the running Guest OS at source host
◦ Redirect the network flow to destination host
◦ CPU state and remaining memory pages are
transferred
Stage 4: Commitment
◦ Destination host indicates to source it has
successfully received a consistent OS image
◦ Source Host acknowledges and VM @ source
now can be discarded
Stage 5: Activation
◦ VM starts on Destination host
24
Pre-copy migration
Challenge in Pre-copy
Every VM has some set of pages which it updates very frequently, and the size of this 'Working Set' influences the downtime of the migration.
⇒ Writable Working Set

The bandwidth of the network also impacts the migration time.
Writable Working Set
(WWS)
WWS
◦ A certain set of pages will be written
often
 Based on the iteration time window
◦ Should be transferred on every
iteration of the ‘stop and copy’
phase (actually the iterative pre-
copy phase)
◦ Use Xen’s shadow page tables to
track

27
28
Implementation
 There are two kinds of implementations for initiating and managing the state transfer:
◦ Managed Migration
 The physical machine has a management VM. (Xen)
 A migration daemon running in the management VM
◦ Self Migration
 Implemented within the migrating (guest) OS
 A small stub required on the destination machine
 There are three conditions for ending the iterative pre-copy phase and switching to stop-and-copy (see the sketch below):
◦ The dirty rate exceeds an upper bound.
◦ The size of the working set is small enough.
◦ A limit on the number of rounds is reached.

◦ Dirty rate = (# pages dirtied in the previous round) / (duration of the previous round)
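A small sketch of that termination test; the thresholds are hypothetical tunables, not values from the paper:

```python
# Hypothetical thresholds; real systems tune these per deployment.
MAX_DIRTY_RATE_PAGES_PER_SEC = 10_000   # upper bound on the dirtying rate
SMALL_WORKING_SET_PAGES = 64            # "small enough" remaining set
MAX_ROUNDS = 30                         # hard limit on pre-copy rounds

def should_stop_and_copy(prev_dirty_pages: int, prev_round_secs: float,
                         remaining_pages: int, rounds_done: int) -> bool:
    dirty_rate = prev_dirty_pages / prev_round_secs   # pages/second, as defined above
    return (dirty_rate > MAX_DIRTY_RATE_PAGES_PER_SEC
            or remaining_pages <= SMALL_WORKING_SET_PAGES
            or rounds_done >= MAX_ROUNDS)
```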
Managed Migration (Xen)
1. 1st round:
◦ Copy all memory pages to the destination machine.
◦ Replace the original page table with a shadow page table, whose entries are all marked read-only.
◦ Create a dirty bitmap for the VM.
2. 2nd–(n-1)th rounds:
◦ During the page transfers, if the VM wants to modify a page, it invokes Xen to set the appropriate bit in the dirty bitmap.
◦ Dirty pages will be resent.
◦ Reset the dirty bitmap for the next round.
3. nth round:
◦ When the dirty rate exceeds the upper bound, begin the stop-and-copy.
Tracking dirty pages

[Figure: the guest page table is shadowed by a Xen-maintained shadow page table]

• Xen-based implementation
• Page tables in Xen are maintained by the guest
• Move to shadow page tables for migration
• Migration is managed by control software in domain0
• Shadow page table constructed on demand for every round in Xen
• Dirty bitmap maintained for every round
• Any page access by the guest -> page fault to Xen, shadow page table updated
• PTEs are marked read-only by default in the shadow
• If a valid write access, the shadow PTE is marked writeable and the page is marked dirty in the bitmap (see the sketch below)
• At the end of a round, dirty pages are marked for transfer in the control software
• Shadow page table and dirty bitmap are reinitialized after every round
• Last set of dirty pages copied in stop-and-copy
• Guest page table on the target host is updated based on the new physical addresses
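To make the write-fault path concrete, here is a simplified, hypothetical model of dirty-page tracking via read-only mappings; real shadow paging lives inside the hypervisor and manipulates hardware PTEs:

```python
# Simplified model of dirty-page tracking via read-only (shadow) mappings.
# PAGE_SIZE and the structures here are illustrative, not Xen data structures.

PAGE_SIZE = 4096

class DirtyTracker:
    def __init__(self, num_pages: int):
        self.writable = [False] * num_pages   # shadow PTEs start read-only
        self.dirty = [False] * num_pages      # per-round dirty bitmap

    def on_write_fault(self, addr: int) -> None:
        """Called when the guest writes to a read-only shadow mapping."""
        page = addr // PAGE_SIZE
        self.writable[page] = True            # allow further writes this round
        self.dirty[page] = True               # remember the page for the next transfer

    def end_of_round(self) -> list[int]:
        """Collect the dirty pages and reset the shadow state for the next round."""
        dirtied = [p for p, d in enumerate(self.dirty) if d]
        self.writable = [False] * len(self.writable)
        self.dirty = [False] * len(self.dirty)
        return dirtied
```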
Self Migration
1. 1st round:
◦ Copy all memory pages to the destination machine.
◦ Mark all virtual addresses as write-protected.
◦ The guest OS creates a dirty bitmap (tracking physical pages) to remember the modified pages.
2. 2nd–(n-1)th rounds:
◦ During the page transfers, if write faults occur, the guest OS sets the appropriate bit in the dirty bitmap.
◦ Dirty pages will be resent.
◦ Reset the dirty bitmap for the next round.
3. nth round:
◦ When the dirty rate exceeds the upper bound, the OS disables all other activity.
4. Copy all dirty pages to a shadow buffer.
5. Transfer the shadow buffer and ignore new page updates.

Implementation Issues(1)
Dynamic Rate-Limiting
◦ The administrator decides a minimum (m) and a maximum (M) bandwidth limit
◦ Transfer speed (v)
◦ Every subsequent round, calculate the dirtying rate (r)
 r = dirty pages / duration of the previous round

33
Implementation Issues(2)
Dynamic Rate-Limiting
◦ First round: v = m (transfer rate = minimum bandwidth)
◦ Each subsequent round: set v from the previous round's dirtying rate r (the NSDI '05 paper adds a constant 50 Mbit/s increment, i.e. v = r + 50 Mbit/s)
◦ Pre-copy terminates when v > M or fewer than 256KB of dirty pages remain (see the sketch below)

34
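A sketch of the rate-adaptation loop under the assumption stated above (bandwidth for the next round = previous dirtying rate plus a fixed increment, capped at M); all constants are tunables:

```python
# Dynamic rate-limiting for pre-copy, expressed in Mbit/s of page traffic.
# m, M, and the 50 Mbit/s increment follow the description above; all are tunables.

PAGE_BITS = 4096 * 8

def next_bandwidth(prev_dirty_pages: int, prev_round_secs: float,
                   m_mbps: float, M_mbps: float, increment_mbps: float = 50.0):
    """Return (bandwidth to use in the next round in Mbit/s, stop_flag)."""
    r_mbps = prev_dirty_pages * PAGE_BITS / prev_round_secs / 1e6   # dirtying rate
    v = max(m_mbps, r_mbps + increment_mbps)
    if v > M_mbps:
        # The VM dirties memory faster than we can push it: cap the rate
        # and fall back to the final stop-and-copy.
        return M_mbps, True
    return v, False
```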
Implementation Issues(3)
 Rapid Page Dirtying
◦ Page dirtying is often physically clustered
◦ "Peek" at the pages dirtied in the previous round
◦ Avoid transferring a page multiple times
• Before transmitting a page, peek into the current round's dirty bitmap
• Skip the transmission if the page has already been dirtied in the ongoing round (see the sketch below)
 Stunning Rogue Processes
◦ Some processes may produce dirty memory at a very fast rate
◦ E.g., a test program that writes one word in every page was able to dirty memory at a rate of 320 Gbit/sec
◦ Fork a monitoring thread within the OS kernel when migration begins
◦ Monitor the writable working set of individual processes
◦ If a process dirties memory too fast, "stun" it
 Move non-interactive processes generating dirty pages to a wait queue
• Execution is paused until migration completes
35
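A tiny sketch of the "peek" optimization: before sending a page dirtied in the previous round, check the current round's bitmap and skip it if it has already been dirtied again:

```python
def pages_to_send(prev_round_dirty: set[int], current_dirty_bitmap: set[int]) -> list[int]:
    """Transfer only pages dirtied last round that are NOT already dirty again.

    A page that is dirty in the current round would be resent next round anyway,
    so sending it now would just transfer it twice.
    """
    return [p for p in prev_round_dirty if p not in current_dirty_bitmap]
```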
Implementation Issues(4)
Freeing Page Cache Pages
◦ The OS can indicate that some or all of its free pages need not be transferred
◦ Page cache: a transparent cache for pages originating from a secondary storage device (disk)
◦ Do not transfer these pages in the first iteration
◦ Reduces transfer time

37
Evaluation(1)
Dell PE-2650 server-class machine
Dual Xeon 2GHz CPUs
2GB memory
Broadcom TG3 network interface
Gigabit Ethernet
NetApp F840 NAS
XenLinux 2.4.27

38
Evaluation(2) - Simple Web Server
Continuously serving a single 512KB file to a set of 100 clients

39
Evaluation(3) - SPECweb99
SPECweb99 – an application-level benchmark for evaluating web servers

40
Evaluation(4)
Quake 3 server – an online game server with 6 players
◦ Downtime: 50ms
Diabolical Workload
◦ A 512MB VM running a simple program that constantly writes to a 256MB region of memory
◦ Downtime: 3.5 sec
◦ Rare in the real world
41
A worst case example
Summary
Minimal impact on running
services
Small downtime with realistic
server

44
Optimization
 Dynamic Rate-Limiting
 Rapid Page Dirtying
 Para-virtualized Optimizations

1. Ballooning mechanism
A technique to dynamically adjust the physical memory of a guest. A driver in the guest OS, called a balloon driver, allocates pages from the guest OS and then hands those pages back to Xen. From the guest OS perspective, it still has all the memory it started with; there is just a device driver acting as a memory hog. From Xen's perspective, the memory the device driver asked for is no longer real memory, just empty space (hence "balloon"). When the administrator wants to give memory back to the VM, the balloon driver asks Xen to fill the empty space with memory again (shrinking or "deflating" the balloon) and then "frees" the resulting pages back to the guest OS (making the memory available for use again).

2. Stunning rogue processes


Postcopy

The catch in Pre-copy:
Dirty page tracking
• Mark the VM’s memory pages as read-only after each
iteration.

• Trap write operations via hypervisor to track dirtied


pages.

• Reset after each iteration

• Pre-copy works well as long as writes are infrequent

• But for write-intensive workloads, copying iterations may


never converge.
Postcopy Live Migration
• Better for write-intensive workloads

[Timeline: short preparation (live) and downtime – during which non-pageable memory, e.g. guest kernel memory, is sent – then resume at the target while the rest of memory arrives via active push + demand paging]

• Advantages
• Lower network overhead
• Each page is sent only once over the network
• Total migration time is lower for write-intensive workloads

• Disadvantage
• Cold-start penalty at the destination until the working set is fetched over the network

48
Stages of Postcopy Migration
1. First, freeze the VM at the source
2. Migrate the CPU state and minimal state to the destination
3. Start the VM at the target
• but without its memory!
4. Fetch memory in one of two ways (see the sketch below)
• Demand paging over the network
• Actively pushing from the source
5. Hopefully pages will be pushed BEFORE they are demand-paged
• Causing few network-bound page faults from the destination
49
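A minimal sketch of the destination side of post-copy, combining demand paging of faulted pages with an active-push stream; the class and helper names are hypothetical (real implementations rely on mechanisms such as userfaultfd or hypervisor paging support):

```python
# Destination-side post-copy sketch: serve page faults on demand while the
# source actively pushes the remaining pages. All helpers are placeholders.

class PostcopyTarget:
    def __init__(self, source, num_pages: int):
        self.source = source
        self.present = [False] * num_pages      # which pages have arrived
        self.memory = {}                        # page number -> bytes

    def handle_page_fault(self, page: int) -> bytes:
        """Network-bound fault: block the vCPU and fetch the page from the source."""
        if not self.present[page]:
            self.memory[page] = self.source.fetch_page(page)   # demand paging
            self.present[page] = True
        return self.memory[page]

    def receive_pushed_page(self, page: int, data: bytes) -> None:
        """Active push from the source; ideally this arrives before the fault."""
        if not self.present[page]:
            self.memory[page] = data
            self.present[page] = True
```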
Prepaging strategies for Post-copy

• Goal: Reduce the number of network-bound demand-paging faults.
– Bubbling: proactively push pages around the faulted page (the "pivot")

[Figure: VM memory pages from 0 to Max, with a bubble expanding around the pivot page]

50
Prepaging strategies for Post-copy

• Goal: Reduce the number of network-bound demand-paging faults.
– Bubbling: proactively push pages from the source around the most recent page fault.
– Design dimensions: direction (uni/bi), number of pivots, bubble size, LRU ordering (see the sketch below)

• 50% reduction in network-bound page faults

51
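A sketch of a single bi-directional bubble around one pivot: given the faulted page, emit nearby pages in expanding order, skipping pages that have already arrived (purely illustrative):

```python
def bubble_order(pivot: int, num_pages: int, present: list[bool], radius: int):
    """Yield pages to push around `pivot`, expanding symmetrically outwards."""
    for distance in range(1, radius + 1):
        for page in (pivot - distance, pivot + distance):   # bi-directional bubble
            if 0 <= page < num_pages and not present[page]:
                yield page

# Usage sketch: after serving a fault on page 1000, push its neighbourhood.
# for page in bubble_order(1000, num_pages=262144, present=present, radius=64):
#     target.receive_pushed_page(page, source.fetch_page(page))
```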
Hybrid Live Migration
• Balance between read-intensive and write-intensive workloads

[Timeline: preparation (live) with a single pre-copy iteration over all memory, a short downtime to send non-pageable memory, then resume with active push + demand paging of the dirtied pages]

• Versus Precopy
• Lower total migration time

• Versus Postcopy
• Smaller cold-start penalty due to fewer network-bound page faults

52
Black-box and Gray-box Strategies for Virtual Machine Migration

Timothy Wood, Prashant Shenoy, Arun Venkataramani, and Mazin Yousif*

University of Massachusetts Amherst
* Intel, Portland

UNIVERSITY OF MASSACHUSETTS, AMHERST • Department of Computer Science


Enterprise Data Centers
Data Centers are composed of:
Large clusters of servers
Network attached storage devices

Multiple applications per server


Shared hosting environment
Multi-tier, may span multiple servers

Allocates resources to meet


Service Level Agreements (SLAs)

Virtualization increasingly common


Benefits of Virtualization
Run multiple applications on one server
Each application runs in its own virtual machine
Maintains isolation
Provides security
Rapidly adjust resource allocations
CPU priority, memory allocation
VM migration
“Transparent” to application
No downtime, but incurs overhead

How can we use virtualization to more


efficiently utilize data center resources?
Data Center Workloads
Web applications see highly dynamic workloads
Multi-time-scale variations
Transient spikes and flash crowds

[Figure: two traces of request rate – arrivals per minute over a few hours, and request rate (req/min) over several days – showing large swings and sudden spikes]

How can we provision resources to meet these changing demands?
Provisioning Methods
Hotspots form if resource demand exceeds
provisioned capacity

Static over-provisioning
Allocate for peak load
Wastes resources
Not suitable for dynamic workloads
Difficult to predict peak resource requirements

Dynamic provisioning
Adjust based on workload
Often done manually
Becoming easier with virtualization
Problem Statement

How can we automatically detect and eliminate


hotspots in data center environments?

Use VM migration and dynamic


resource allocation!
Outline
Introduction & Motivation

System Overview

When? How much? And Where to?

Implementation and Evaluation

Conclusions
Research Challenges
Sandpiper: automatically detect and mitigate
hotspots through virtual machine migration

When to migrate?
Where to move to?
How much of each resource to allocate?
[Image: a sandpiper, a migratory bird]

How much information is needed to make these decisions?
Knowledge: "black-box" uses only high-level resource utilization information;
"gray-box" also has access to application logs or OS-level data
Sandpiper Architecture

[Figure: each physical machine (PM 1 ... PM N) hosts its VMs plus a Nucleus; a centralized Control Plane contains the Hotspot Detector, Profiling Engine, and Migration Manager]

Nucleus
Monitors the resources of the physical machine
Reports to the control plane
One per server

Control Plane
Centralized server
Hotspot Detector: detects when a hotspot occurs
Profiling Engine: decides how much to allocate (e.g., Apache modelling module)
Migration Manager: determines where to migrate

PM = Physical Machine, VM = Virtual Machine
Black-Box and Gray-Box
Black-box: only data from outside the VM
Completely OS- and application-agnostic
Dom-0 used for monitoring network usage; memory usage (indirectly)

[Figure: a black box labelled "???" versus a gray box exposing application logs and OS statistics]

Gray-box: access to OS stats and application logs
Request-level data can improve detection and profiling
Not always feasible – the customer may control the OS

Is black-box sufficient?
What do we gain from gray-box data?
Outline
Introduction & Motivation

System Overview

When? How much? And Where to?

Implementation and Evaluation

Conclusions
Black-box Monitoring
Xen uses a "Driver Domain"
Special VM with network and disk drivers
The Nucleus runs here

[Figure: the Nucleus in the driver domain collects per-VM statistics from the hypervisor and device drivers]

CPU
Scheduler statistics
Network
Linux device information
Memory
Detect swapping from disk I/O to the VM's swap partition
Only know when performance is already poor
Hotspot Detection – When?
Resource Thresholds
Potential hotspot if utilization exceeds a threshold
Only trigger for sustained overload
Must be overloaded for k out of n measurements (see the sketch below)
Autoregressive Time Series Model
Use historical data to predict future values
Minimize the impact of transient spikes

[Figure: three utilization-vs-time traces – not overloaded, a transient spike, and a sustained overload where a hotspot is detected]
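A minimal sketch of the "k out of n" sustained-overload test; the threshold, k, and n are hypothetical tunables:

```python
from collections import deque

class HotspotDetector:
    """Flag a hotspot only if utilization exceeds `threshold` in k of the last n samples."""

    def __init__(self, threshold: float = 0.75, k: int = 3, n: int = 5):
        self.threshold, self.k = threshold, k
        self.window = deque(maxlen=n)

    def observe(self, utilization: float) -> bool:
        self.window.append(utilization > self.threshold)
        return sum(self.window) >= self.k       # sustained overload -> hotspot

# Usage: a single transient spike does not trigger a migration.
det = HotspotDetector()
print([det.observe(u) for u in (0.5, 0.9, 0.6, 0.9, 0.8, 0.95)])
```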
Resource Profiling – How much?
How much of each resource to give a VM
Create a distribution from the time series
Provision to meet the peaks of the recent workload (see the sketch below)

[Figure: a utilization profile – the probability distribution of % utilization built from historical data]

Black box: generally know when resources are not enough
What to do if utilization is at 100%?
Gray-box
Request-level knowledge can help
Can use application models to determine requirements
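A small sketch of "provision to the peak of the recent workload": build a distribution from recent utilization samples and allocate at a high percentile plus headroom. The percentile and headroom values are illustrative choices, not Sandpiper's exact policy:

```python
def provision(recent_utilization: list[float], capacity: float,
              percentile: float = 0.95, headroom: float = 0.10) -> float:
    """Return an allocation covering the chosen percentile of recent demand
    plus some headroom, capped at the physical capacity."""
    samples = sorted(recent_utilization)
    idx = min(len(samples) - 1, int(percentile * len(samples)))
    target = samples[idx] * (1.0 + headroom)
    return min(target, capacity)

# Usage: allocate a CPU share for a VM based on its recent utilization samples.
print(provision([0.2, 0.35, 0.3, 0.6, 0.55, 0.4, 0.7], capacity=1.0))
```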
Determining Placement – Where to?
Migrate VMs from overloaded to underloaded servers

Use Volume to find the most loaded servers:

Volume = 1/(1-cpu) * 1/(1-net) * 1/(1-mem)

Captures load on multiple resource dimensions (cpu, net, mem are the utilizations of each resource)
Highly loaded servers are targeted first

Migrations incur overhead
Migration cost is determined by RAM (the memory footprint of the VM)
Avoid the expense of moving a large RAM footprint if possible.
Migrate the VM with the highest Volume/RAM ratio (see the sketch below)
Maximize the amount of load transferred while minimizing the overhead of migrations
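The Volume metric and the Volume/RAM selection rule above as a small sketch; utilizations are fractions in [0, 1), and the clamp guarding against division by zero is my addition:

```python
def volume(cpu: float, net: float, mem: float) -> float:
    """Sandpiper-style load volume: grows sharply as any resource nears saturation."""
    def term(u: float) -> float:
        return 1.0 / (1.0 - min(u, 0.99))     # clamp to avoid division by zero
    return term(cpu) * term(net) * term(mem)

def pick_vm_to_migrate(vms):
    """vms: list of dicts with 'cpu', 'net', 'mem' utilizations and 'ram_mb' footprint.
    Choose the VM with the highest volume-to-RAM ratio: most load moved per MB copied."""
    return max(vms, key=lambda v: volume(v["cpu"], v["net"], v["mem"]) / v["ram_mb"])

# Usage sketch with hypothetical VMs on an overloaded host:
vms = [{"name": "vm1", "cpu": 0.8, "net": 0.3, "mem": 0.5, "ram_mb": 2048},
       {"name": "vm2", "cpu": 0.6, "net": 0.6, "mem": 0.4, "ram_mb": 512}]
print(pick_vm_to_migrate(vms)["name"])
```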
Placement Algorithm
First try migrations
Displace VMs from high-Volume servers
Use Volume/RAM to minimize overhead
Don't create new hotspots!
What if there is high average load in the system?

Swap if necessary
Swap a high-Volume VM for a low-Volume one
Requires 3 migrations, using spare capacity, since the two VMs can't both be supported at once
Swaps increase the number of hotspots we can resolve

[Figure: PM1 and PM2 – a single migration moves a VM to the other PM; a swap exchanges two VMs via spare capacity]
Outline
Introduction & Motivation

System Overview

When? How much? And Where to?

Implementation and Evaluation

Conclusions
Implementation
Use Xen 3.0.2-3 virtualization software

Testbed of twenty 2.4GHz P4 servers

Apache 2.0.54, PHP 4.3.10, MySQL 4.0.24

Synthetic PHP applications

RUBiS – a multi-tier eBay-like web application
Migration Effectiveness
3 physical servers, 5 virtual machines
VMs serve CPU-intensive PHP scripts
Migration triggered when CPU usage exceeds 75%

Sandpiper detects and responds to 3 hotspots

[Figure: stacked CPU usage over time for PM 1, PM 2, and PM 3]
Memory Hotspots
A virtual machine runs the SPECjbb benchmark
Memory utilization increases over time
Black-box increases the VM memory allocation by 32MB if page-swapping is observed
Gray-box maintains 32MB free
Significantly reduces page-swapping

[Figure: RAM allocation (MB) over time – the gray-box allocation grows earlier and more smoothly than the black-box allocation]

Gray-box can improve application performance by proactively increasing allocation (see the sketch below)
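A rough sketch of the two allocation policies above; the 32MB values come from the slide, while the function names and monitoring hooks are hypothetical:

```python
STEP_MB = 32          # black-box adjustment step quoted on the slide
TARGET_FREE_MB = 32   # gray-box target amount of free guest memory

def black_box_adjust(current_alloc_mb: int, swapping_observed: bool) -> int:
    """Black-box: can only react to visible swap I/O, after performance already suffers."""
    return current_alloc_mb + STEP_MB if swapping_observed else current_alloc_mb

def gray_box_adjust(current_alloc_mb: int, free_mb_inside_guest: int) -> int:
    """Gray-box: uses OS statistics from inside the guest to keep ~32MB free proactively."""
    deficit = TARGET_FREE_MB - free_mb_inside_guest
    return current_alloc_mb + deficit if deficit > 0 else current_alloc_mb
```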
Data Center Prototype
16-server cluster runs realistic data center applications on 35 virtual machines
6 servers (14 VMs) become simultaneously overloaded
4 CPU hotspots and 2 network hotspots
Sandpiper eliminates all hotspots in four minutes
Uses 7 migrations and 2 swaps
Despite migration overhead, VMs see fewer periods of overload

[Figure: Static vs. Sandpiper – number of hotspots over time, and total/sustained time (intervals) spent overloaded]


Stability During Overload
Predict future usage
Will not migrate if the destination could become overloaded

Each set of migrations must eliminate a hotspot

Algorithm only performs a bounded number of migrations

[Figure: measured vs. predicted utilization over time for PM1 and PM2]
Related Work
Menasce and Bennani 2006
Single server resource management

VIOLIN and Virtuoso


Use virtualization for dynamic resource control
in grid computing environments

Shirako
Migration used to meet resource policies determined by
application owners

VMware Distributed Resource Scheduler


Automatically migrates VMs to ensure they receive their
resource quota
Summary
Virtual Machine migration is a viable tool for
dynamic data center provisioning
Sandpiper can rapidly detect and eliminate
hotspots while treating each VM as a black-box
Gray-Box information can improve performance in
some scenarios
Proactive memory allocations

Future work
Improved black-box memory monitoring
Support for replicated services
Virtual Machine Files

77
File format(1)
.XML file
◦ Saves the VM configuration details
◦ Named with the GUID

78
File format(2)
.BIN files
◦ Contain the memory of a virtual machine or snapshot that is in a saved state (running programs, data for those programs, word-processing documents you are viewing, etc.)
.VSV files
◦ Contain the saved state of the devices associated with the virtual machine

79
File format(3)
.VHD files
◦ The virtual hard disk files for the virtual machine (store things such as files, folders, the file system, and disk partitions)
.AVHD files
◦ The differencing disk files used for virtual machine snapshots

80
