EVA Overview
HP StorageWorks Enterprise Virtual Arrays
The EVA family: a technical overview
Powerfully Simple
Remove operational tasks from the administrator:
• Command View EVA
• Command View Tape Library
• Business Copy EVA
• Continuous Access EVA
• Cluster Extension EVA, Metrocluster, ContinentalCluster
[Chart axis: Simplicity]
HP StorageWorks product portfolio
The B-Series SAN switch family
• HP 400 MP-Router (16 FC + 2 IP ports)
• Fabric Manager, enhanced capabilities with Fabric OS 5.x
• SAN Switch 4/64 (32-64 ports)
• 4/48 Port Blade, for the 4/256
• HP MPR Blade, for the 4/256
• SAN Switch 4/32 (16-32 ports)
• Brocade 4Gb SAN Switch for HP c-Class BladeSystem
• SAN Switch 4/8 & 4/16 (8 and 16 ports)
• Brocade 4Gb SAN Switch for HP p-Class BladeSystem
[Chart label: Common Fabric]
The C-Series SAN switch family
For small & medium-sized enterprise & service provider business systems
• MDS 9000 family: MDS 9020*, MDS 9216 and 9216i
• MDS 9000 modules
[Portfolio chart residue: MSA1000, MSA1500, MSA1510i, MSA60, MSA70, MSA500 positioned toward entry-level; EVA positioned toward enterprise availability]
The EVA family specifications

                        EVA4000    EVA6000    EVA8000
Controller              HSV200     HSV200     HSV210
Cache size              4GB        4GB        8GB
RAID levels             VRAID0, VRAID1, VRAID5 (all models)
Supported OS            Windows 2000/2003, HP-UX, Linux, IBM AIX, OpenVMS,
                        Tru64, Sun Solaris, VMware, NetWare (all models)
Supported drives        FC: 72, 146GB/15krpm; 146, 300GB/10krpm
                        FATA: 250, 400, 500GB (all models)
Host ports              4          4          8
Device ports            4          4          8
Mirror ports            4          4          4
Backend loop switches   0          2          4

*Note: Legacy 36GB FC and 250, 400GB FATA disks are still fully supported.
The EVA4000 architecture
[Diagram: servers on Fabric 1 and Fabric 2; management server (Windows); two HSV200 controllers]
• 4Gbps front end
• 1 to 4 disk enclosures
• 8 to 56 FC disks
The EVA6000 architecture
[Diagram: servers on Fabric 1 and Fabric 2; management server (Windows); HSV200 controller 1 and HSV200 controller 2]
• 4Gbps front end
• 2 HSV controllers
• 16 to 112 FC disks
The EVA8000 architecture
[Diagram: heterogeneous servers on Fabric 1 and Fabric 2; management server (Windows); HSV210 controller 1 and HSV210 controller 2]
• 4Gbps front end
• 2 HSV controllers
• 2-18 disk enclosures (12 in the first rack, 6 in the utility cabinet)
• 8 to 240 FC disks
EVA Performance (based on 2GB controllers)
[Performance charts not reproduced]
Traditional Disk Array Approach
RAID levels in separate small Disk Groups, dispersed LUNs, beware of hot-spots
[Diagram: RAID controller with disk groups fixed to one RAID level each (RAID1, RAID5), dedicated spare disks, and presented LUNs 0-7 dispersed across the small groups]
HP Virtual Array Approach
Disk groups, segments, block mapping tables & sparing
[Diagram: per-LUN block mapping table pointing to segments spread across all disks in the disk group(s); spare capacity is distributed rather than tied to dedicated spare disks]
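To make the mapping idea concrete, here is a minimal Python sketch of a block mapping table; the segment size, slot counts and round-robin placement are illustrative assumptions, not EVA internals.

```python
# Conceptual sketch of a block-mapping table: every virtual disk (LUN)
# is a table of pointers to fixed-size segments spread across ALL disks
# in the disk group. Names and sizes are illustrative, not EVA internals.

SEGMENT_MB = 2  # assumed segment granularity


class DiskGroup:
    def __init__(self, disks):
        self.disks = disks                                 # member disk names
        self.free = {d: set(range(1000)) for d in disks}   # free segment slots

    def allocate(self, n_segments):
        """Allocate segments round-robin across all members of the group."""
        mapping = []
        for i in range(n_segments):
            disk = self.disks[i % len(self.disks)]
            slot = self.free[disk].pop()
            mapping.append((disk, slot))
        return mapping


class VirtualDisk:
    def __init__(self, group, size_mb):
        # The block-mapping table: virtual segment index -> (disk, slot)
        self.table = group.allocate(size_mb // SEGMENT_MB)

    def locate(self, virtual_mb):
        """Translate a virtual address to its physical segment."""
        return self.table[virtual_mb // SEGMENT_MB]


group = DiskGroup([f"disk{i}" for i in range(8)])
lun1 = VirtualDisk(group, size_mb=64)
print(lun1.locate(10))   # e.g. ('disk5', ...) - data is striped over all disks
```

Because every LUN draws its segments from all members of the group, I/O is spread over every spindle, which is what removes the per-LUN hot spots of the traditional approach.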
HP Virtual Array Approach
Disk groups
An EVA can have:
• 1 to 16 disk groups
• 8 to 240 disks per disk group
HP Virtual Array Approach
LUN/vdisk allocation
[Diagram: virtual array controller presenting LUN 1 (RAID1) and LUN 2 (RAID5) from the same disk group]
HP Virtual Array Approach
LUNs/vdisks and their allocation
[Diagram: virtual array controller presenting LUNs 1, 2 and 3, each striped across all disks in the group]
HP Virtual Array Approach
Capacity upgrade, disk group growth
[Diagram: disks added to a disk group; existing data is re-leveled across the new capacity]
HP Virtual Array Approach
All RAID levels within a disk group, optimal striping, no hot-spots
[Diagram: presented LUNs 1, 2 and 3 of different VRAID levels sharing one disk group]
HP Virtual Array Approach
Online Volume Growth
[Diagram: virtual array controller growing LUN 1 and LUN 2 online; additional segments are allocated from the disk group without downtime]
The value of the EVA virtualization
Lower management and training costs
• Easy to use, intuitive web interface
• Unifies storage into a common pool
• Effortlessly create virtual RAID volumes (LUNs)
Improved application availability
• Enterprise-class availability
• Dynamic pool and Vdisk (LUN) expansion
• No storage reconfiguration down time
Improved performance
• Vraid striping across all disks in a disk group
• Eliminates I/O hot spots
• Automatic load leveling
Buy less - service more customers
• Significantly increase utilization and reduce stranded capacity
EVA iSCSI Connectivity Option
Transition Slide
EVA iSCSI connectivity option
• An integrated EVA solution
− Mounted in the EVA cabinet
− Provides LUNs to iSCSI hosts
− Managed and controlled by Command View
− Flexible Connectivity
• Fabric and direct attach on EVA 4/6/8000
• Fabric attach on EVA 3/5000
• Single or dual iSCSI option
− AE324A: single router configuration
− AE325A: upgrade to dual router configuration
• High Performance Solution
− 35K target IOPS
• OS Support
− Microsoft Windows 2003, SP1
− Red Hat Enterprise Linux:
• Red Hat™ Enterprise Linux 4 update 3 (kernel 2.6.9-34)
• Red Hat Enterprise Linux 3 update 5
− SUSE® Linux Enterprise Server:
• SUSE Linux Enterprise Server 9, SP3 (2.6.5 kernel)
• SUSE Linux Enterprise Server 8, SP4
EVA iSCSI connectivity option
• An integrated EVA solution
− Mounted in the EVA cabinet
− Provides LUNs to iSCSI hosts
− Managed and controlled by Command View
− Flexible connectivity
[Diagram: one or two HP StorageWorks mpx100 routers (ports MGMT, FC1, FC2, GE1, GE2) connecting the EVA 4/6/8000 controller host ports (FC1-FC4) to the iSCSI IP networks]
LUN masking
[Diagram: LUNs 1, 2, 3 ... n selectively presented to individual iSCSI initiators]
Multipathing for EVA3000/5000
VCS 2.x and 3.x
• These arrays use an active/passive LUN presentation model
[Diagram: hosts with multiple HBAs connected to both controllers]
1) For details see the SAN Design Reference Guide: heterogeneous server rules on www.hp.com/go/sandesign
2) See http://www.sun.com/io_technologies/qlogic_corp_.html
ALUA
Asymmetric Logical Unit Access, defined by INCITS T10 / Adaptive Load Balance
• A LUN can be accessed through multiple target ports.
• Target port groups can be defined to manage target ports with the same attributes.
• The ALUA inquiry string reports one of the following states/attributes:
− Active/optimized
− Active/non-optimized
− Standby
− Unavailable
[Diagram: HBA 1 and HBA 2 reaching the same LUN over target ports 1..n; the path through target port 1 is active/optimized, the others are active/non-optimized]
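The following Python sketch shows how an ALUA-aware multipath driver could use these reported states; the two-tier preference policy and the path names are assumptions for illustration, not the behavior of any specific DSM.

```python
# Hedged sketch of how an ALUA-aware multipath driver might pick a path.
# States follow the T10 ALUA state names quoted above; the selection
# policy shown (optimized first, then non-optimized) is illustrative.

from itertools import cycle

PREFERENCE = ["active/optimized", "active/non-optimized"]  # usable states


def usable_paths(paths):
    """Return the paths in the most preferred usable ALUA state."""
    for state in PREFERENCE:
        group = [p for p in paths if p["state"] == state]
        if group:
            return group
    raise IOError("no usable path (all standby/unavailable)")


paths = [
    {"port": "ctrl1-fp1", "state": "active/optimized"},
    {"port": "ctrl1-fp2", "state": "active/optimized"},
    {"port": "ctrl2-fp1", "state": "active/non-optimized"},
    {"port": "ctrl2-fp2", "state": "standby"},
]

# Round-robin I/O over the optimized group only; the non-optimized
# controller is used solely if every optimized path disappears.
rr = cycle(usable_paths(paths))
for _ in range(4):
    print(next(rr)["port"])   # alternates ctrl1-fp1 / ctrl1-fp2
```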
ALB and Windows MPIO
Adaptive Load Balance
• HP implementation of ALUA in the Windows MPIO DSM (initial release 2.01.00)
• Supported with EVA3/5000 (VCS 4.x) and EVA4/6/8000
• Enabled by the DSM CLI command "hpdsm set device=x alb=y" or the DSM Manager GUI

HP MPIO Full-Featured DSM for EVA Disk Arrays (Windows 2000/2003)
Maximum number of HBAs per host                                          8
Maximum number of paths per LUN for EVA                                  32
Failback                                                                 Yes
Load balancing                                                           Yes
User interface                                                           Yes
Support for Microsoft Cluster                                            Yes
Coexistence with HP MPIO basic failover for EVA arrays on same server    Yes
Coexistence with HP MPIO Full-Featured DSM for EVA3/5000 VCS 4.x and
XP disk arrays on same server                                            Yes
• Side effect:
− Command View EVA no longer relies on the System Management Homepage
− Therefore the port has been changed to:
https://localhost:2372/command_view_eva
Non-migrate disk drive firmware update (new with XCS 6.0)
• Pre-XCS 6 possibilities:
− Massive disk drive code load to update all drives at once
• Single image, applied like an EVA firmware code load
• EVA will be offline for several minutes
− Single ungrouped disk drive code load
• Every drive has to be ungrouped, updated and re-grouped
• Massive time and effort
Command View EVA
• Allows you to easily create Vdisks and disk groups, or add physical disks with just a few mouse clicks
• Uses standards-based SMI-S and APIs
[Diagram: Command View EVA and its SMI-S agents managing HP Enterprise Virtual Arrays]
CV EVA deployment options
• Choice and flexibility to maximize your investment
• Broad Microsoft Windows OS coverage
• Host-based or direct host SAN management

HP Storage Management Appliance (discontinued)
− SMA SW v1.2, or OV SOM v1.2 installs on an existing SMA (includes OV SNM)
− CV EVA ≥5.0 required for EVA4000/6000/8000
− Up to 16 EVAs over the SAN

HP ProLiant Storage Management Server (dedicated) or General Purpose Server (NAS)
− Attached via Gigabit Ethernet (iSCSI) or Fibre Channel
− Existing OV SOM, or OV SOM v1.2 installs (includes OV SNM), or CV EVA on the host alongside the customer application / NAS OS
− CV EVA ≥5.0 required for EVA4000/6000/8000
− Up to 16 EVAs over the SAN
[Diagram: both deployment topologies managing the EVA family over the SAN]
HP Command View EVAperf
EVA performance analysis
• Performance analysis tool for the whole EVA product line
• Shipped with Command View EVA
• Integrates with Windows PerfMon
• Create your own scripts via a command prompt
• Monitor real-time and historical EVA performance metrics to more quickly identify performance bottlenecks
• Easily monitor and display EVA performance metrics:
− host connection data
− port status
− host port statistics
− storage cell data
− physical disk data
− virtual disk data
− CA statistics
See the EVAPerf whitepaper on:
http://h18006.www1.hp.com/storage/arraywhitepapers.html
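As a hypothetical example of scripting around exported performance data, the snippet below scans a CSV export for high-latency virtual disks; the file name, column names and threshold are all assumptions for illustration, not EVAPerf's documented output format.

```python
# Illustrative only: post-processing of exported performance data.
# The CSV layout (columns "vdisk", "read_latency_ms", "write_latency_ms")
# and the file name are assumptions for this example.

import csv

THRESHOLD_MS = 20.0  # arbitrary latency threshold for the example


def slow_vdisks(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            worst = max(float(row["read_latency_ms"]),
                        float(row["write_latency_ms"]))
            if worst > THRESHOLD_MS:
                yield row["vdisk"], worst


for name, latency in slow_vdisks("evaperf_vd_export.csv"):
    print(f"{name}: {latency:.1f} ms - check for hot spots")
```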
EVA Replication Software
Enhancements with XCS 6.0xx
• Replication Solution Manager 2.1
− Tru64 Host Agent
− Single sign-on
• Business Copy 4.0
− MirrorClone feature with Delta Resync and Instant
Restore
− Instant Restore from a Snapshot
• Continuous Access 3.0
− Enhanced asynchronous performance and distance
support by using buffer-to-disk (journaling)
Replication Solutions Manager 2.1
• Familiar browser-based navigation
• Selectable views
• Status monitoring
• Context-sensitive actions and wizards
• Local and remote management
• Interactive topology manager
Business Copy EVA
Point-in-time copy capability for the EVA (local copy)
4 options available:
• space-efficient vSnapShot
• pre-allocated vSnapShot
• vSnapClone
• MirrorClone
Controlled from Command View, RSM or SSSU.
Ideally suited to create point-in-time copies to:
• Keep applications online while backing up data
• Test applications against real data before deploying
• Restore a volume after a corruption
• Mine data to improve business processes or customer marketing
Space efficient snapshots
Virtually capacity free
[Timeline: at t0, "$ create snapshot "A""; the snapshot's contents are identical to volume A and consume almost no space; as volume A receives updates (t1...t4), the original blocks are preserved via copy-on-write and the contents diverge]
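A minimal copy-on-write model, assuming simplified block-level volumes, illustrates why a space-efficient snapshot starts out "virtually capacity free" and only consumes space as the source diverges:

```python
# Minimal copy-on-write model of a space-efficient snapshot: at creation
# the snapshot stores nothing and shares every block with volume A; only
# when A is overwritten is the original block copied into the snapshot.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []

    def create_snapshot(self):
        snap = Snapshot(self)
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        for snap in self.snapshots:
            snap.preserve(index)          # copy-on-write happens here
        self.blocks[index] = data


class Snapshot:
    def __init__(self, source):
        self.source = source
        self.saved = {}                   # only diverged blocks use space

    def preserve(self, index):
        if index not in self.saved:
            self.saved[index] = self.source.blocks[index]

    def read(self, index):
        return self.saved.get(index, self.source.blocks[index])


vol_a = Volume(["a0", "a1", "a2"])
snap = vol_a.create_snapshot()            # t0: contents identical, ~0 space
vol_a.write(1, "new")                     # t1: block 1 copied on write
print(vol_a.blocks[1], snap.read(1))      # new a1
```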
Pre-allocated snapshots
Space reservation
[Timeline: at t0, "$ create snapshot "A"" with the full capacity reserved up front; as volume A receives updates (t1...t4), the original blocks are preserved in the snapshot via copy-on-write; the reservation guarantees the snapshot cannot run out of space]
New: Pre-allocated 3-phase snapshots
Space reservation
[Timeline: the space is reserved in advance at t-x; the snapshot itself is taken at t0 with "$ create snapshot "A""; copy-on-write behavior from t1 onward is as above]
SnapClone of virtual disks
Full copy
[Timeline: "$ create snapclone "A"" at t0 creates B, immediately usable while contents are identical; a background copy runs while A receives updates (t1...); once the full copy completes, the relation is suspended and B is an independent volume]
3-phase SnapClone
Full copy
[Timeline: as above, but the target container is created in advance at t-x, so "$ create snapclone "A"" at t0 completes without waiting for allocation]
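The sketch below models SnapClone semantics under the same simplified block-volume assumptions: the clone is usable immediately while a background task normalizes it (a source write during normalization would first preserve the old block; that ordering is omitted for brevity):

```python
# Sketch of SnapClone semantics: the clone is usable immediately while a
# background task copies ("normalizes") blocks; once every block has been
# copied, the relationship is suspended and the clone is fully independent.

class SnapClone:
    def __init__(self, source):
        self.source = source               # source volume's block list
        self.copied = [None] * len(source)
        self.pending = set(range(len(source)))

    def read(self, index):
        # Uncopied blocks are still satisfied from the source volume.
        if index in self.pending:
            return self.source[index]
        return self.copied[index]

    def copy_next_block(self):
        """One step of the background copy engine."""
        if self.pending:
            i = self.pending.pop()
            self.copied[i] = self.source[i]
        return not self.pending            # True -> relation can be suspended


a = ["b0", "b1", "b2"]
clone = SnapClone(a)
print(clone.read(2))                       # works before the copy finishes
while not clone.copy_next_block():
    pass                                   # background normalization
print(clone.read(2))                       # now served from the clone itself
```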
Business Copy 4.0 MirrorClones (new with XCS 6.0)
• A MirrorClone is a pre-normalized clone
− Full clone of the source
• Requires 100% of the capacity (if same RAID level)
− Synchronous mirror between the source VDisk and the MirrorClone
• Once synchronized, data is always identical (unless fractured)
− A MirrorClone can be in a different disk group / have a different RAID level
• Tiered storage approach
• Can be used to protect against physical failures
MirrorClone creation and synchronization
User...
• Creates an empty container with the same size as the source VDisk
• RAID level and disk group can be different
User...
• Creates the MirrorClone using the container as target
EVA...
• Establishes the MirrorClone relationship
• Starts initial synchronization of the MirrorClone (the volume stays fully accessible to the host; writes behind the copy fence are mirrored)
Synchronized MirrorClone
• Once the MirrorClone is synchronized, data on both volumes is kept identical
• Writes are applied to both volumes
• Reads are satisfied by the source only
[Diagram: host E: reads/writes the source VDisk while the EVA synchronizes the MirrorClone target]
MirrorClone fracture and resync
[Diagram: MirrorClone source (production Vdisk) and MirrorClone target; host E: reads/writes the source, host F: can read/write the fractured target]
User...
• Fractures the MirrorClone
EVA...
• Stops applying writes to the MirrorClone target
• Instead, changes are marked in a delta bitmap
User...
• Can present the fractured MirrorClone for various purposes (read and write)
EVA...
• Changes to the source and target are recorded in a delta bitmap
User...
• Initiates resynchronization of the volumes in either direction
EVA...
• Copies changed blocks only, until source and target are synchronized
Synchronized MirrorClone
• Once the MirrorClone is synchronized, data on both volumes is kept identical
• Writes are applied to both volumes
• Reads are satisfied by the source only
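Here is a small Python model of the fracture/resync bookkeeping described above; keeping the delta bitmap as a set of block indexes is an illustrative simplification:

```python
# Sketch of fracture/resync bookkeeping with a delta bitmap: while the
# MirrorClone is fractured, each side only marks which blocks changed;
# resync then copies the union of dirty blocks in the chosen direction.

class MirrorClone:
    def __init__(self, source):
        self.source = source
        self.target = list(source)        # synchronized at creation
        self.fractured = False
        self.dirty = set()                # delta bitmap (block indexes)

    def write_source(self, index, data):
        self.source[index] = data
        if self.fractured:
            self.dirty.add(index)         # record the change, don't mirror it
        else:
            self.target[index] = data     # synchronous mirror

    def write_target(self, index, data):
        assert self.fractured, "target is read-only while synchronized"
        self.target[index] = data
        self.dirty.add(index)

    def resync(self, direction="source_to_target"):
        src, dst = ((self.source, self.target)
                    if direction == "source_to_target"
                    else (self.target, self.source))
        for i in self.dirty:              # changed blocks only
            dst[i] = src[i]
        self.dirty.clear()
        self.fractured = False


mc = MirrorClone(["x", "y", "z"])
mc.fractured = True
mc.write_source(0, "X")                   # marked in the delta bitmap
mc.resync()                               # copies block 0 only
print(mc.target)                          # ['X', 'y', 'z']
```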
Combining Snapshots and MirrorClones
• MirrorClone and Snapshot can be combined so that you take the Snapshot from the MirrorClone target
[Diagram: source mirrored to MC; snapshots of the MC target taken at t0, t1]
• Advantages:
• Disadvantages:
− No Snapshot restore in the first release
• A workaround could be to detach the MirrorClone, then restore and present it as the original LUN
• Direct restore is planned for end 2006
Continuous access EVA
Remote copy capability for the EVA
Continuous Access EVA delivers array-based remote data replication, protecting your data and ensuring continued operation after a disaster.
Ideally suited to:
• Keep byte for byte copies
of data at a remote site
for instant recovery
• Replicate data from
multiple sites to one for
consolidated backup
• Shift operations to a
recovery site for primary
site upgrades and
maintenance
• Ensure compliance to
government legislation
and business objectives
Continuous Access EVA
Remote copying
• What does it do?
− Replicates LUNs between EVAs
− Provides disaster recovery
− Simplifies workload management
− Allows point-in-time database backup
− Provides restore without latency
• How does it work?
− Creates up to 256 copy sets for all specified logical units in the array
− Replicates over Fibre Channel and FC extensions
− Synchronous and asynchronous modes
[Diagram: source VOL copied to destination VOL as a copy set over time]
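To contrast the two modes, here is a conceptual Python sketch of a DR group write path; the class and method names are invented for illustration, and the async journal is reduced to an in-memory queue standing in for buffer-to-disk journaling:

```python
# Conceptual contrast of the two Continuous Access modes: synchronous
# acknowledges the host only after the remote write; asynchronous
# acknowledges immediately and queues (journals) the write for shipping.

from collections import deque


class DrGroup:
    def __init__(self, mode="synchronous"):
        self.mode = mode
        self.local, self.remote = {}, {}
        self.journal = deque()            # stand-in for buffer-to-disk journal

    def host_write(self, block, data):
        self.local[block] = data
        if self.mode == "synchronous":
            self.remote[block] = data     # remote write before the ack
        else:
            self.journal.append((block, data))  # ack now, ship later
        return "ack"

    def drain_journal(self):
        """Ship queued writes in order - done by the replication link."""
        while self.journal:
            block, data = self.journal.popleft()
            self.remote[block] = data


dr = DrGroup(mode="asynchronous")
dr.host_write(7, "payload")               # low host latency
dr.drain_journal()                        # remote side catches up in order
print(dr.remote[7])
```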
Multiple relationships
• Fan-in of multiple relationships
− The ability of one EVA to act as the destination for different LUNs from more than one source EVA
• Fan-out of multiple relationships
− The ability for different LUNs on one EVA to replicate to different destination EVAs
• Bidirectional
− One array with copy sets acting as both source and destination across the same relationship
[Diagram: EVA3000, EVA4000, EVA5000, EVA6000 and EVA8000 arrays in fan-in, fan-out and bidirectional arrangements]
EVA CA SAN configuration
2 fabric configuration
Shared SAN for host and CA traffic.
[Diagram: Server1 and Server2, each with a management server, attached to two fabrics spanning both sites; EVA1 and EVA2 replicate across the shared fabrics]
EVA CA SAN configurations
Physically separated 6 fabric configuration
CA traffic goes only through the dedicated CA SAN.
No cross-site host IO possible -> CLX EVA
[Diagram: Server1 and Server2 with management servers on separate host fabrics; EVA1 and EVA2 connected through dedicated CA fabrics]
CA configurations: Dedicated CA fabrics
Physically separated & zoned 4 fabric configuration
CA traffic goes only through the CA SAN if the EVA ports are properly zoned off in the host SAN.
Cross-site host IO possible -> stretched cluster
[Diagram: Server1 and Server2 with management servers; EVA1 and EVA2 attached to both the host fabrics and the dedicated CA fabrics]
CA configuration: Dedicated CA zone
Zoned 2 fabric configuration
CA traffic goes only through the CA zone if the EVA ports are properly zoned off in the host zones.
Cross-site host IO possible -> stretched cluster
[Diagram: Server1 and Server2 with management servers; EVA1 and EVA2 share two fabrics with a dedicated CA zone]
EVA Solutions
Transition Slide
Zero downtime backup
Recovering in minutes, not hours
• Description
− Data Protector provides no-impact backup by performing the backup on a copy of the production data, with the option to copy or move it to tape.
− NEW with Data Protector 6.0: incremental ZDB for files
• Usage
− Data that requires:
• Non-disruptive protection
• Application-aware backup
• Zero-impact backup
− SAN protection
• Benefits
− Fully automates the protection process
− All options can be easily configured using simple selections
− The Data Protector GUI permits complete control of the mirror specification
− Administrators can choose the schedule of the backup
[Diagram: HP-UX, Solaris and NT/W2K clients on the client network; Data Protector server attached to the SAN; production volume (P-Vol) mirrored to S-Vol for backup]
Oracle database integration
• What does it do?
− Maps the Oracle DB to Vdisks, DR groups etc.
− Replicates all Vdisks of specified Oracle databases
− Allows creating local or remote replicas
− Easy control via the RSM GUI
− Can quiesce and resume Oracle
− Provides a topology map
Application Recovery Manager (AppRM)
• AppRM is
− Disk-based (VSS) replication and restore only; no tape backup possible, but it can be used as a pre-exec to a 3rd-party backup application
• AppRM is based on Data Protector 6.0 code
− DP will offer the same feature set as AppRM, whereas AppRM offers only a subset of DP functionality (ZDB and IR)
• Target customers:
− Non-DP accounts with the desire for a VSS instant recovery solution, but no need for a "full" backup software product
− Potential up-sell opportunity to migrate an existing backup product to Data Protector
Application Recovery Manager
• AppRM follows the Data Protector licensing scheme
− Capacity-based
− More expensive than FRS, especially for only a few larger systems
− But also more functionality
− TB licenses are based on the source capacity, independent of the number of copies
• Requirements:
− EVA disk arrays
− Cluster Extension EVA
− Continuous Access EVA
− Max 20ms network round-trip delay
− Command View EVA & SMI-S
[Diagram: DataCenter 1 and DataCenter 2, up to 500km apart]

Windows 2003 stretched cluster with CA
[Diagram: two-site stretched cluster running App A and App B with quorum; on failure, Continuous Access fails over DRG A and DRG B, and the applications restart on the surviving servers (rescan)]
Cluster Extension EVA (CLX)
Manual move of App A
[Diagram: quorum or witness server arbitrating two sites; App A is moved from site 1 to site 2; DRG A and DRG B are replicated by Continuous Access EVA]
Cluster Extension EVA
Storage failure
[Diagram: storage failure at one site; App A and App B keep running, with DRG A and DRG B served from the surviving EVA via Continuous Access]
Majority Node Set Quorum
– File Share Witness
• What is it?
− A patch for Windows 2003 SP1 clusters provided by Microsoft (KB921181)
Majority Node Set Quorum
– File Share Witness
[Diagram: both halves of the cluster (App A, App B) get a vote from the file share witness \\arbitrator\share]