Introduction to the EMC VNX Series: VNX5100, VNX5300, VNX5500, VNX5700, & VNX7500
Abstract
This white paper introduces the EMC® VNX™ series unified
platform. It discusses the different models, new and improved
features, and key benefits.
December 2013
Copyright © 2013 EMC Corporation. All rights reserved.
For the most up-to-date listing of EMC product names, see the
EMC Corporation trademarks page at EMC.com.
Audience
This white paper is intended for IT architects, administrators, and others who are
interested in VNX series arrays. You should be familiar with storage array concepts,
general hardware, and the software services provided by the arrays.
Terminology
• Automatic Volume Management (AVM)—Feature of VNX for File that creates and
manages volumes automatically without manual volume management by an
administrator. AVM organizes volumes into storage pools that can be allocated to
file systems.
• Converged Network Adapter (CNA)—A host adapter that allows a host to process
Fibre Channel and Ethernet traffic through a single type of card and connection,
decreasing infrastructure costs over time.
• Disk array enclosure (DAE)—Shelf in VNX that includes an enclosure, either 15 or
25 disk modules, two link control cards (LCCs), and two power
supplies, but does not contain storage processors (SPs).
• Disk-processor enclosure (DPE)—Shelf in VNX that includes an enclosure, disk
modules, storage processors (SPs), two link control cards (LCCs),
two power supplies, and four fan packs. A DPE supports DAEs in addition to its own
disk modules. This 3U form factor is used in the lower-end VNX models
(VNX5100™, VNX5300™, and VNX5500™) and supports a maximum of 75, 125,
and 250 drives, respectively.
• Fibre Channel (FC)—Nominally 1 Gb/s data transfer interface technology, although
the specification allows data transfer rates from 133 Mb/s up to 4.25 Gb/s. Data
can be transmitted and received simultaneously. Common transport protocols,
such as Internet Protocol (IP) and Small Computer Systems Interface (SCSI), run
over Fibre Channel. Consequently, a single connectivity technology can support
high-speed I/O and networking.
Gateway configurations
Each VNX Series Gateway product—the VG2 or VG8—is a dedicated network server optimized
for file access and advanced functionality in a scalable, easy-to-use package. The gateway
systems can connect to, boot from, and work with Symmetrix, the VNX series, and CLARiiON
back-end array technologies, including multiple back-end systems at the same time.
VNX VG2
• One or two 2U Blades (or Data Movers)
• Four-core 2.40 GHz Intel Xeon 5600 processors with 6 GB memory
• 256 TB usable capacity per Blade (256 TB per system)
VNX VG8
• Two to eight 2U Blades (or Data Movers)
• Six-core 2.83 GHz Intel Xeon 5600 processors with 24 GB memory
• 256 TB usable capacity per Blade (1,792 TB per system)
VNX VG2 and VG8 gateway support for VMAXe and FCoE
The VNX series gateway models (VG2 and VG8) now support FCoE I/O modules on the
storage processors, along with VMAXe backend connectivity. VMAXe is the Symmetrix
business-class array that complements the higher-end VMAX.
Figure 5. Block dense configuration example
Figure 5 and Figure 6 show the front and back of the 25-drive DAE, which houses 2.5”
SAS drives.
The 15-drive DAE and DPE support 2.5” SAS, 3.5” NL-SAS, and 3.5” flash drives
The 15-drive DAE, shown in Figure 8 and Figure 9, may be populated with 2.5” drives
in 3.5” carriers. The 15-drive DAE may also be populated with any combination of 3.5”
EFD, SAS, and NL-SAS drives.
These options enable you to configure your system for optimal efficiency and
performance. The flash drives provide extreme performance, the mid-tier SAS drives
provide a good balance of price and performance (reaching speeds of up to 15k rpm),
and the cost-effective NL-SAS drives have a capacity of up to 2 TB.
These 15-drive DAEs, while similar in appearance to previous-generation CX™ and NS
models, are not backward-compatible with CX and NS arrays.¹ Also, because the DAEs
use SAS as the backend bus architecture, the VNX series arrays do not accept
previous-generation DAEs.
¹ To learn how to move data from an NS or CX model to a VNX model, refer to the white papers Migrating Data from an EMC Celerra Array to a VNX Platform using Celerra Replicator and Migrating Data from an EMC CLARiiON Array to a VNX Platform using SAN Copy.
The 60-drive high-capacity DAE supports 2.5” SAS, 3.5” NL-SAS, and 3.5” flash drives
The 60-drive DAE can be populated with 2.5” SAS drives using a 3.5” carrier. The
enclosure holds up to 60 rotating or SSD-type drives in 3.5” (EFD, 7,200 rpm, and 10K
rpm) and 2.5” (EFD, 7,200 rpm, and 10K rpm) form factors. For the 3.5” drive types,
15K rpm drives are not supported in this enclosure.
These options allow you to configure your system for optimal efficiency and
performance. The flash drives provide extreme performance in both small form factor
(SFF, 2.5”) and large form factor (LFF, 3.5”) sizes. The mid-tier SAS drives provide a good
balance of price and performance (reaching speeds of up to 10 K rpm), and the cost-
effective NL-SAS drives have a capacity of up to 3 TB.
The enclosure employs a slide-out drawer with access to the drives from the top. The
drive matrix consists of 5 rows (banks A-E) of 12 disks (slots 0-11). These disks are
addressed and notated using a combination of letters and numbers, such as A1 and
B4, to uniquely identify a single drive in the enclosure. The enclosure itself is labeled
clearly to ensure proper drive identification.
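For scripting or inventory purposes, the bank-letter/slot-number scheme maps naturally to a simple enumeration. The sketch below only illustrates that addressing convention (the helper names are hypothetical; the bank letters and slot range come from the description above):

```python
# Hypothetical helpers for working with 60-drive DAE positions (banks A-E, slots 0-11).
BANKS = "ABCDE"          # 5 rows of the slide-out drawer
SLOTS_PER_BANK = 12      # slots 0-11 in each row

def all_positions():
    """Enumerate every drive position in the enclosure, e.g. 'A0' .. 'E11'."""
    return [f"{bank}{slot}" for bank in BANKS for slot in range(SLOTS_PER_BANK)]

def is_valid_position(label: str) -> bool:
    """Check that a label such as 'B4' refers to a real bank/slot combination."""
    return (
        len(label) >= 2
        and label[0] in BANKS
        and label[1:].isdigit()
        and 0 <= int(label[1:]) < SLOTS_PER_BANK
    )

assert len(all_positions()) == 60                        # 5 banks x 12 slots
assert is_valid_position("B4") and not is_valid_position("F3")
```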
The dimensions of the 60-drive high-capacity enclosure are 7” high (4U) x 35” deep
(chassis only), with a maximum weight of 260 lbs. The design of this industry-leading
enclosure provides far more density per square foot of rack space than the other DAE options.
Figure 15. Cable management arms in the rear of the 60-drive DAE
Figure 16. Power supply, ICM LCC, and SAS expander ports
Unified platform model: VNX5100 | VNX5300 | VNX5500 | VNX5700 | VNX7500
Management software: Unisphere | Unisphere | Unisphere | Unisphere | Unisphere
Drive count: 75 | 125 | 250 | 500 | 1000
Drive types: Flash, SAS, NL-SAS | Flash, SAS, NL-SAS | Flash, SAS, NL-SAS | Flash, SAS, NL-SAS | Flash, SAS, NL-SAS
Standby power supplies: 1/2 | 1/2 | 2 | 2 | 2
Number of Blades (File): none | 1-2 | 1-3 | 2-4 | 2-8
Array enclosure/SP count: DPE/2 SP | DPE/2 SP | DPE/2 SP | SPE/2 SP | SPE/2 SP
Figure 18. Back view of the DPE with SP A (on the right) and SP B (on the left)
As you can see in Figure 18, the System Information tag is located on the back of the SP A
side of the DPE (on the right side above). This tag contains serial number and part
number information and should be handled carefully when racking the system.
Figure 19, Figure 20, and Figure 21 show the back of the DPE-based storage processors.
VNX5500
The VNX5500 is designed for the mid-tier space. This model provides either block and file
services, file only, or block only (without blades and Control Stations).
The block only model uses a 2.13 GHz, four-core Xeon 5600 processor with 12 GB RAM
and a maximum of 250 drives with the following block-based host connectivity options:
FC, iSCSI, and FCoE.
The VNX5500 uses a DPE that is available in 15 x 3.5” drive or 25 x 2.5” drive form
factors. The DPE includes four onboard 8 Gb/s Fibre Channel ports and two 6 Gb/s SAS
ports for backend connectivity on each storage processor. A micro-DB9 port and service
LAN port are available. EMC service personnel use these ports to connect to the VNX.
A LAN connection is provided on each SP for array management. Each SP in the enclosure
also has a power supply module and two UltraFlex I/O module slots. Both I/O module
slots on each SP may be populated.
VNX5700
The VNX5700 is designed for the high-end, mid-capacity space. This model provides
either block and file services, file only, or block only, and uses an SPE form factor.
Each SP in the SPE uses a 2.4 GHz, four-core Xeon 5600 processor with 18 GB RAM and a
maximum of 500 drives with the following host connectivity options: FC, iSCSI, and FCoE.
The VNX5700 has an SPE that uses UltraFlex I/O slots for all connectivity. The first slot
houses the internal network management switch, which includes a mini-serial port and
service LAN port. EMC service personnel use these ports to connect to the VNX.
A LAN connection is provided on each SP for array management. Each SP in the enclosure
also has two power supply modules. Each SP has five I/O module slots. Any slots without
I/O modules will be populated with blanks (to ensure proper airflow).
The fault/status light of the SPE form factor is located on the front of the unit. This form
factor requires an additional DAE to provide a minimum of four drives for the VNX Array
Operating Environment. This model’s form factor favors scalability, rather than density,
requiring slightly more rack space for a minimal configuration.
The back of the SPE is shown in Figure 26. In this figure, you can see the various
components including I/O modules. Figure 27 provides a close-up view of the back of
the SPE.
VNX7500
The VNX7500 is designed for the enterprise space. This model provides either block and
file services, file only, or block only, and uses an SPE form factor.
This model uses a 2.8 GHz, six-core Xeon 5600 processor with 24 or 48 GB RAM per
SP and a maximum of 1,000 drives with the following host connectivity options: FC,
iSCSI, and FCoE. An optional upgrade to 48 GB of RAM per SP is available if you have
purchased, or already have, a system with 24 GB of RAM per SP.
The VNX7500 has an SPE that uses UltraFlex I/O slots for all connectivity. The first slot
houses the internal network management switch, which includes a mini-serial port and
service LAN port. EMC service personnel use these ports to connect to the VNX.
A LAN connection is provided on each SP for array management. Each SP in the enclosure
also has two power supply modules. Each SP has a maximum of five I/O module slots.
Any slots without I/O modules will be populated with blanks (to ensure proper airflow).
The fault/status light of the SPE form factor is located on the front of the unit. This form
factor requires an additional DAE to provide a minimum of four drives for the VNX Array
Operating Environment. This model’s form factor favors scalability, rather than density,
requiring slightly more rack space for a minimal configuration.
The front of the SPE houses the power supplies, cooling fans, and CPU modules, along
with the fault/status LED.
The blades in this model use a 2.8 GHz, six-core Xeon 5600 processor with 24 GB RAM
per blade, offer a maximum storage capacity of 256 TB per blade with a maximum of eight
blades, and support the following NAS protocol options: NFS, CIFS, and pNFS.
I/O modules
I/O modules for the storage processor
VNX series arrays support a variety of UltraFlex I/O modules on the storage processors,
which are discussed in this section.
• Is supported on blades
• Enables optical connectivity to NAS clients
• Operates at 10 Gb/s only
• Provides two optical ports
Control Station
The Control Station is a 1U self-contained server, and is used in unified and file
configurations. The Control Station provides administrative access to the blades; it
also monitors the blades and facilitates failover in the event of a blade runtime issue.
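Conceptually, this supervision can be pictured as a simple health check that promotes the standby blade when the active one stops responding. The sketch below is only an illustration of that idea (the blade names and health values are hypothetical), not the Control Station's actual implementation:

```python
# Toy illustration of blade supervision and failover; not the actual Control Station logic.
def failover_if_needed(active: str, standby: str, healthy: dict) -> tuple:
    """Return the (active, standby) pair after checking the active blade's health."""
    if healthy.get(active, False):
        return active, standby
    print(f"{active} unresponsive; promoting standby blade {standby}")
    return standby, active

# Example: the active blade has a runtime issue, so the standby takes over.
active, standby = failover_if_needed("server_2", "server_3",
                                     {"server_2": False, "server_3": True})
assert active == "server_3"
```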
The Control Station provides network communication to each storage processor. It
uses Proxy ARP to enable communication over the Control Station management
Ethernet port to each storage processor’s own IP address.
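With Proxy ARP, the Control Station answers ARP requests for the storage processors' management addresses on its external management port and relays the traffic internally, so the SPs appear directly reachable on the management LAN. A minimal conceptual model of that behavior (all addresses and MACs below are hypothetical):

```python
# Conceptual model of proxy ARP: the Control Station answers ARP "who-has" queries for the
# SP management addresses with its own MAC, then relays the traffic to the SPs internally.
PROXIED_SP_IPS = {"10.0.0.11": "SP A", "10.0.0.12": "SP B"}   # hypothetical SP management IPs
CONTROL_STATION_MAC = "00:60:16:aa:bb:cc"                     # hypothetical CS MAC address

def answer_arp(requested_ip: str):
    """Reply with the Control Station's MAC if the requested IP is a proxied SP address."""
    if requested_ip in PROXIED_SP_IPS:
        return CONTROL_STATION_MAC    # frames for the SP go to the CS, which forwards them
    return None                       # not a proxied address; no reply

assert answer_arp("10.0.0.11") == CONTROL_STATION_MAC
assert answer_arp("10.0.0.99") is None
```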
An optional secondary Control Station is available that acts as a standby unit to
provide redundancy for the primary Control Station.
Unified management
EMC Unisphere provides a flexible, integrated experience for managing existing VNX
storage systems.
Unisphere provides simplicity, flexibility, and automation—all key requirements for
optimal storage management. Unisphere’s ease of use is reflected in its intuitive task-
based controls, customizable dashboards, and single-click access to real-time support
tools and online customer communities. Unisphere’s wizards help you provision and
manage your storage while automatically implementing best practices for your
configuration.
Unisphere is completely web-enabled for remote management of your storage
environment. Unisphere Management Server runs on the SPs and the Control Station;
it can be launched by pointing the browser to the IP address of either SP or the Control
Station.
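Because management is entirely browser-based, no client software is required; a workstation simply opens the management address. The snippet below is a trivial illustration (the IP address is hypothetical, and HTTPS on the default port is assumed):

```python
import webbrowser

# Hypothetical management IP address of an SP or the Control Station.
UNISPHERE_HOST = "10.0.0.10"

# Launch Unisphere by pointing the default browser at the management address over HTTPS.
webbrowser.open(f"https://{UNISPHERE_HOST}/")
```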
Unisphere has all the existing features and functionality of the previous interfaces,
such as VMware awareness, LDAP integration, Analyzer, and Quality of Service
Manager. Unisphere also adds many new features, such as dashboards, a system
dashboard, task-based navigation, and online support tools. For more information on
Unisphere, refer to the white paper, EMC Unisphere: Unified Storage Management
Solution.
Figure 42 displays the new system dashboard. You can customize the view-blocks
(also referred to as panels) in the dashboard.
FAST Cache
Data deduplication
MirrorView/A
SAN Copy
EMC VNX SAN Copy™ is a block feature that copies data between EMC and qualified
third-party storage systems. SAN Copy copies data directly from a source LUN on one
storage system to destination LUNs on other systems without using any host
resources. SAN Copy may be used to create full and incremental copies of a source
LUN.
Business continuance
VNX Snapshots
VNX Snapshots is a new feature created to improve snapshot capability for VNX
Block. VNX Snapshots are point-in-time views of a LUN, which can be made
accessible to another host, or be held as a copy for possible restoration. VNX
Snapshots use a redirect-on-write algorithm and are limited to pool-based
provisioned LUNs (that is, not RAID group LUNs). VNX Snapshots support 256 writable
snaps per pool LUN. Branching, or taking a snap of a snap, is also supported; there are
no restrictions on the number of branches, as long as the entire snapshot family stays
within 256 members. Consistency Groups are also introduced, meaning that several
pool LUNs can be combined into a Consistency Group and snapped at the same
time. For more information, refer to the VNX Snapshots white paper on EMC Powerlink.
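Redirect-on-write means the array never copies existing data when a write arrives after a snapshot: the old blocks stay in place for the snapshot, and the new write simply lands in a fresh location that the live LUN now points to. The toy model below illustrates the idea only; it is not how the array implements it:

```python
# Toy redirect-on-write model: a "LUN" is a mapping of logical block -> data.
class ToyLun:
    def __init__(self, blocks: dict):
        self.blocks = dict(blocks)     # live view of the LUN
        self.snapshots = []            # frozen point-in-time views

    def snap(self) -> dict:
        """Take a point-in-time view by freezing the current block map (no data copy)."""
        frozen = dict(self.blocks)
        self.snapshots.append(frozen)
        return frozen

    def write(self, block: int, data: str) -> None:
        """New writes replace the pointer in the live map; snapshots keep the old data."""
        self.blocks[block] = data

lun = ToyLun({0: "base0", 1: "base1"})
snap1 = lun.snap()
lun.write(1, "new1")                   # redirected write: snap1 still sees "base1"
assert snap1[1] == "base1" and lun.blocks[1] == "new1"
```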
SnapView Snapshots
EMC VNX SnapView snapshots are point-in-time views of a LUN, which can be made
accessible to another host, or be held as a copy for possible restoration.
SnapView Clones
SnapView clones are fully populated point-in-time copies of LUNs that allow
incremental synchronization between source and destination LUNs. Unlike snapshots
that provide point-in-time views of data, clones provide fully populated point-in-time
copies that maximize the flexibility of the storage environment. These point-in-time
copies allow you to perform additional storage management tasks with minimal
impact to production data. These tasks include backup/recovery, application testing,
warehousing, and data movement.
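Because a clone is a full copy, only the blocks written on the source since the last synchronization need to be re-copied on an incremental update. The sketch below shows that changed-block idea in miniature; it is a conceptual model under stated assumptions, not the SnapView implementation:

```python
# Conceptual model of clone incremental synchronization using changed-block tracking.
source = {0: "a", 1: "b", 2: "c"}
clone = dict(source)          # initial full synchronization copies every block
changed = set()               # blocks written on the source since the last sync

def write_source(block: int, data: str) -> None:
    source[block] = data
    changed.add(block)        # remember the block so the next sync copies only this

def incremental_sync() -> int:
    """Copy only the changed blocks to the clone; return how many were transferred."""
    for block in changed:
        clone[block] = source[block]
    copied = len(changed)
    changed.clear()
    return copied

write_source(1, "b2")
assert incremental_sync() == 1 and clone == source   # only one block re-copied
```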
Security
Anti-virus software
The VNX Event Enabler (VEE) allows VNX to integrate with industry-leading anti-virus
applications, such as Symantec, McAfee, Computer Associates, Sophos, Kaspersky,
and Trend Micro.
The Anti-Virus feature works with an external system that houses one of the supported
anti-virus applications. Once configured, files on a file system are scanned before they
are delivered to the requesting client. The file is either validated and sent to the client
or denied if any issues are identified during the scan. A file is also scanned after it is
modified and closed; if no modifications were made, the file is not rescanned when
closed. The feature can be configured to scan files on write (the default) or on read,
and a separate command can scan all files in a file system.
This feature helps protect files from viruses and malware while still
allowing you to benefit from the advanced features of the VNX array, such as high
availability and performance.
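The triggering rules described above (scan a modified file when it is closed, skip an unmodified close, or optionally scan on read instead) can be summarized as a small policy check. This is a hedged sketch of the described behavior only, not the VEE anti-virus interface:

```python
# Simplified model of the scan-triggering policy described above.
def should_scan(event: str, modified: bool, policy: str = "scan_on_write") -> bool:
    """Decide whether a file access should be handed to the external anti-virus server.

    event    -- "close" (after write access) or "read"
    modified -- True if the file was changed before being closed
    policy   -- "scan_on_write" (default) or "scan_on_read"
    """
    if policy == "scan_on_write":
        return event == "close" and modified     # unmodified closes are not rescanned
    if policy == "scan_on_read":
        return event == "read"
    raise ValueError(f"unknown policy: {policy}")

assert should_scan("close", modified=True)            # modified file closed -> scan
assert not should_scan("close", modified=False)       # unchanged file -> no rescan
assert should_scan("read", modified=False, policy="scan_on_read")
```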
File-Level Retention
The File-Level Retention feature protects files from deletion or modification until a
user-specified retention date elapses. This feature prevents users from deleting or
modifying files that are locked and protected. There are two levels of FLR protection
that are enabled at the file system level upon creation:
• FLR-C File-Level Retention (Compliance level)—Protects data from changes made
by users through CIFS, NFS, and FTP, including administrator actions. This level of
FLR also meets the requirements of SEC Rule 17a-4(f). A file system that contains
files that are locked with FLR-C cannot be deleted.
• FLR-E File-Level Retention (Enterprise level)—Protects data from changes made by
users through CIFS, NFS, and FTP, but not from administrator actions. This
means that only a VNX administrator with the appropriate authorization can delete
an FLR-E file system that contains FLR-E protected files. However, administrators
still cannot target individually locked files for deletion or modification.
Figure 50 illustrates the workflow for File-Level Retention.
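The practical difference between the two levels is easiest to see for file-system deletion. The check below restates the rules above in code form; it is a conceptual sketch only, not how the array enforces retention:

```python
# Conceptual model of the two FLR levels as described above.
def may_delete_file_system(level: str, is_authorized_admin: bool,
                           has_locked_files: bool) -> bool:
    """FLR-C: a file system holding locked files cannot be deleted at all.
    FLR-E: only an authorized VNX administrator may delete such a file system
    (individual locked files remain protected in both cases)."""
    if not has_locked_files:
        return True
    if level == "FLR-C":
        return False
    if level == "FLR-E":
        return is_authorized_admin
    raise ValueError(f"unknown FLR level: {level}")

assert not may_delete_file_system("FLR-C", is_authorized_admin=True, has_locked_files=True)
assert may_delete_file_system("FLR-E", is_authorized_admin=True, has_locked_files=True)
assert not may_delete_file_system("FLR-E", is_authorized_admin=False, has_locked_files=True)
```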
Assumptions
• DART/FLARE-compatible revisions are loaded on the private LUNs as a precondition to the
block-to-unified (B2U) upgrade.
• The upgrade provides uninterrupted block data I/O during the file services upgrade, as
well as uninterrupted management access.
• Block data services, MirrorView, SnapView, and replication via RecoverPoint are not
interrupted.
• Dual Control Station configurations are supported as part of the block-to-file upgrade.
User Experience
Source Platform (Model/Memory), by target platform: VNX5100/4GB | VNX5300/8GB | VNX5500/12GB | VNX5700/18GB | VNX7500/24GB [1] | VNX7500/48GB
VNX5100/4GB: N/A | Yes | Yes | Yes [2] | N/A | Yes [2]
VNX5300/8GB: N/A | N/A | Yes | Yes [2] | N/A | Yes [2]
VNX5500/12GB: N/A | N/A | N/A | Yes [2] | N/A | Yes [2]
Conclusion
The VNX series offers increased scalability, efficiency, and performance in a unified
form factor that you can manage using a single pane of glass, offering an unparalleled
unified experience. The advances in hardware, from the Intel Xeon 5600 processors, to
the 6 Gb/s x 4 lanes SAS backend, to the optimizations made using Flash drives, place
the VNX series in the forefront of EMC’s midrange offerings.
The unification of file and block is another step in the continued evolution of our
product lines. By unifying the hardware and creating a new unified management
experience (through the conceptual merger of file and block inside Unisphere), EMC
has provided a holistic way to view and manage an entire storage domain with the
utmost simplicity.
Components/Configuration: VNX5100 | VNX5300 | VNX5500 | VNX5700 | VNX7500
Max FAST Cache memory (GB) [1]: 100 | 500 | 1000 | 1500 | 2100
Max FAST Cache drives: 2 | 10 | 20 | 30 | 42
Max drives per array: 75 | 125 | 250 | 500 | 1000
Min drives per storage system: 4 | 4 | 4 | 4 | 4
Minimum configuration rack space: 4U | 7U [2] | 7U [2] | 8U [2][3] | 8U [2][3]
3.5”–100 GB/200 GB Flash drives: Yes | Yes | Yes | Yes | Yes
2.5”–100 GB/200 GB Flash drives: Yes | Yes | Yes | Yes | Yes
3.5”–300 GB 15k & 10k drives: Yes | Yes | Yes | Yes | Yes
3.5”–600 GB 15k & 10k drives: Yes | Yes | Yes | Yes | Yes
3.5”–900 GB 10k drives: Yes | Yes | Yes | Yes | Yes
3.5”–1 TB 7.2k drives: Yes | Yes | Yes | Yes | Yes
3.5”–2 TB 7.2k drives: Yes | Yes | Yes | Yes | Yes
3.5”–3 TB 7.2k drives: Yes | Yes | Yes | Yes | Yes
2.5”–300 GB 10k drives: Yes | Yes | Yes | Yes | Yes
2.5”–600 GB 10k drives: Yes | Yes | Yes | Yes | Yes
BLOCK COMPONENTS: VNX5100 | VNX5300 | VNX5500 | VNX5700 | VNX7500
Processor clock speed/number of cores/architecture (per SP): 1.6 GHz, 2-core Xeon 5600 | 1.6 GHz, 4-core Xeon 5600 | 2.13 GHz, 4-core Xeon 5600 | 2.4 GHz, 4-core Xeon 5600 | 2.8 GHz, 6-core Xeon 5600
Physical memory per SP (GB): 4 | 8 | 12 | 18 | 24 or 48
Max FC ports per SP (FE ports)—onboard only: 4 | 4 | 4 | 0 | 0
Max FC ports per SP (FE ports)—I/O module add-on: 0 | 4 | 4 | 12 | 16
Max 1 Gb/s iSCSI ports per SP (FE only): 0 | 4 | 8 | 8 | 8
Max 10 Gb/s iSCSI ports per SP (FE only): 0 | 4 | 4 | 6 | 6
Max FCoE ports per SP (FE only): 0 | 4 | 4 | 6 | 8
Max initiators per FC port: 256 | 512 | 512 | 512 | 1024
Max initiators per 1 Gb/s iSCSI port: 0 | 512 | 512 | 512 | 512
Max initiators per 10 Gb/s iSCSI port: 0 | 512 | 1024 | 2048 | 2048
Max initiators per FCoE port: 0 | 512 | 512 | 512 | 1024
Max VLANs per 1 Gb/s iSCSI port: 0 | 8 | 8 | 8 | 8
Max VLANs per 10 Gb/s iSCSI port: 0 | 8 | 8 | 8 | 8
Max usable UltraFlex I/O module slots (front-end) (per SP): see Table 5, Block configuration
Embedded I/O ports per SP: 4 FC, 2 back-end SAS | 4 FC, 2 back-end SAS | 4 FC, 2 back-end SAS | 0 | 0
Available 6 Gb SAS ports for DAE connectivity: 2 | 2 | 2 or 6 | 4 | 4 or 8
Max LUNs per storage system: 512 | 2048 | 4096 | 4096 | 8192
Max user-visible LUNs [5]: 896 | 2816 | 4864 | 5376 | 10496
LUN ID range: 0-1023 | 0-4095 | 0-8191 | 0-8191 | 0-24575
Max LUNs per RAID group: 256 | 256 | 256 | 256 | 256
Max LUNs per storage group: 256 | 512 | 512 | 1024 | 1024
RAID options: 0, 1, 1/0, 3, 5, 6 | 0, 1, 1/0, 3, 5, 6 | 0, 1, 1/0, 3, 5, 6 | 0, 1, 1/0, 3, 5, 6 | 0, 1, 1/0, 3, 5, 6
Max RAID groups per storage system: 75 | 125 | 250 | 500 | 1000
Max drives per RAID group: 16 | 16 | 16 | 16 | 16
DPE/SPE rack space: 3U (DPE) | 3U (DPE) | 3U (DPE) | 2U (SPE) | 2U (SPE)
Max SnapView sessions (LUNs) per source LUN: 8 | 8 | 8 | 8 | 8
Max SnapView snapshot source LUNs: 128 | 256 | 256 | 512 | 512
FILE COMPONENTS: VNX5100 | VNX5300 | VNX5500 | VNX5700 | VNX7500
Processor clock speed/number of cores/architecture (per blade): N/A | 2.13 GHz, 4-core Xeon 5600 | 2.13 GHz, 4-core Xeon 5600 | 2.4 GHz, 4-core Xeon 5600 | 2.8 GHz, 6-core Xeon 5600
[1] Attainable using 100 GB Flash drives.
[2] Single Control Station configuration.
[3] Using the 2.5” 25-drive DAE.
[4] Each blade requires one I/O module slot for backend (fibre) connectivity. This has been deducted from the numbers shown in the chart.
[5] These numbers include FLUs + pool LUNs + snapshot LUNs (SV and Adv) + MV/A LUNs.
[6] These numbers are without any snaps or replications.
[7] Limited by the number of total drives per array multiplied by the per-drive capacity.
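The Max FAST Cache rows above are consistent with the FAST Cache flash drives being configured as mirrored (RAID 1) pairs of the 100 GB drives mentioned in footnote [1]; a quick arithmetic check, under that assumption:

```python
# Sanity check: usable FAST Cache capacity assuming mirrored (RAID 1) pairs of 100 GB flash drives.
max_cache_drives = {"VNX5100": 2, "VNX5300": 10, "VNX5500": 20, "VNX5700": 30, "VNX7500": 42}
max_cache_gb     = {"VNX5100": 100, "VNX5300": 500, "VNX5500": 1000, "VNX5700": 1500, "VNX7500": 2100}

for model, drives in max_cache_drives.items():
    usable = (drives // 2) * 100          # half the drives hold mirror copies
    assert usable == max_cache_gb[model], model
```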
Number of blades: 1‒2 | 1‒3 | 1‒3 | 2‒4 | 2‒4 | 5‒8 | 2‒4 | 5‒8
Total number of configurable host I/O connectivity slots: 2 | 2 | 1 | 3 | 3 | 2 | 2 | 1
Table 5 and Table 6 pertain to components delivered in an EMC rack. I/O module slots are only reserved
for an upgrade from block-only to unified if the array is delivered in an EMC rack. The VNX5100 does not
support the use of UltraFlex I/O modules, so it has been omitted from these tables.
Table 5. Block configuration
SAS back-end buses: 1‒2 | 1‒2 | 3‒6 | 1‒4 | 1‒4 | 5‒8
Space reserved for upgrade to unified?*: N/Y | N/Y | N/Y | N/Y | N/Y | N/Y
Total number of configurable host I/O connectivity slots: 2 | 2 | 1 | 4 | 3 | 4 | 3 | 3 | 2
SAS back-end buses: 1‒2 | 1‒2 | 3‒6 | 1‒4 | 1‒4 | 5‒8
Number of blades: 1‒2 | 1‒3 | 1‒3 | 2‒4 | 2‒4 | 5‒8 | 2‒4 | 5‒8
Total number of configurable host I/O connectivity slots*: 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
There is only one configurable slot available, to accommodate the RecoverPoint Fibre
Channel option. In a file-only configuration, no other host connectivity is allowed from
the storage processors.