
Emulex Leaders in Fibre Channel Connectivity

Fibre Channel over Ethernet

Confidential
1

Agenda

- Emulex: Who we are
- FCoE Overview
- FCoE Value Proposition
- Emulex FCoE Product Plans
- Q&A

Emulex: Positioned for Next-Gen Data Centers

- Products that are trusted by the most demanding customers
  - Supplier to all top-tier server & storage OEMs
  - 96 of the Fortune 100 use our HBAs
- Product lines: Host Server Products, Embedded Storage Products, Intelligent Network Products
- Enterprise capabilities
  - Large investment in software (drivers, management)
  - Large investment in test labs
  - Long list of product qualifications
- Strategic commitment to 10GbE leadership
  - World-class 10GbE technology

Trends: Data Center Pent-up SAN Demand

- Need for SAN storage beyond the database tier, fueled by:
  - Blade servers
  - Server virtualization
- Growing awareness of SAN economics
- Strong desire for SAN on Ethernet (e.g., iSCSI outside the data center)
- Keep cabling & power costs under control

Fibre Channel over Ethernet

- Enables network convergence by carrying FC traffic over Ethernet infrastructure

[Diagram: a server with the traditional approach uses separate adapters, an HBA on 4/8G FC and a NIC on 10GbE; a server with the converged approach uses a single converged NIC adapter over one 10G Enhanced Ethernet cable]

Benefits:
- Increases SAN adoption in the data center
- Uses fewer switch ports, adapters and cables
- Protects existing investments in Fibre Channel SANs
- Retains the existing storage management framework
- Reduces overall power consumption through network consolidation

Fibre Channel over Ethernet: How It Works

- Direct mapping of Fibre Channel over Ethernet

[Diagram (a) Protocol Layers: FC-4, FC-3 and FC-2 are preserved unchanged; the FCoE mapping replaces FC-1/FC-0 with the Ethernet MAC and PHY]

[Diagram (b) Frame Encapsulation: the FC frame (SOF through EOF, including its CRC) is carried intact as the payload of an Ethernet frame, between the Ethernet header and the Ethernet FCS]
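The encapsulation in diagram (b) can be sketched in a few lines. This is our illustrative sketch, not Emulex code: the FCoE EtherType (0x8906) is from the spec, but the exact reserved-field widths and the default SOF/EOF code values below are recollections of FC-BB-5 and should be treated as illustrative.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a complete FC frame (header + payload + CRC) in an FCoE frame.

    Layout sketch: Ethernet header, version/reserved bytes, SOF code,
    the untouched FC frame, EOF code, reserved bytes. The Ethernet FCS
    would normally be appended by the MAC hardware, so it is omitted.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([sof])   # version nibble + reserved, then SOF
    fcoe_trailer = bytes([eof]) + bytes(3)   # EOF, then reserved
    return eth_header + fcoe_header + fc_frame + fcoe_trailer
```

Note that the FC frame is carried byte-for-byte: this is the "direct mapping" the slide refers to, and it is why gateways can convert FCoE to native FC without rewriting the FC frame itself.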

Requires a Lossless Ethernet Fabric

- Enabled by Priority PAUSE and Congestion Management enhancements to Ethernet

Maintains traffic prioritization within a single Ethernet link

- Enables prioritization of storage traffic over other networking traffic

[Diagram: FCoE traffic and other networking traffic sharing one Ethernet link]
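The Priority PAUSE behavior above (standardized as IEEE 802.1Qbb priority-based flow control) can be illustrated with a toy model, not a driver API: pausing the storage priority stops only that queue, while the other traffic classes keep flowing.

```python
from collections import deque

class PfcLink:
    """Toy model of per-priority flow control on one Ethernet link.

    Eight priority queues, as in 802.1p/802.1Qbb; a PAUSE directed at one
    priority halts that queue without touching the others.
    """
    def __init__(self):
        self.queues = [deque() for _ in range(8)]
        self.paused = [False] * 8

    def send(self, priority: int, frame: str) -> None:
        self.queues[priority].append(frame)

    def pause(self, priority: int, on: bool) -> None:
        self.paused[priority] = on

    def transmit(self):
        """Drain one round: emit the head frame of each unpaused queue."""
        out = []
        for prio, q in enumerate(self.queues):
            if q and not self.paused[prio]:
                out.append((prio, q.popleft()))
        return out
```

In this model, pausing the FCoE priority when a downstream buffer fills gives FC its lossless behavior, while LAN traffic on other priorities is unaffected.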

FCoE-Driven Converged Ethernet Fabric

[Diagram: a conventional datacenter network, with separate LAN (core and edge Ethernet switches), SAN (FC edge and core switches, FC director), WAN router, FC-connected servers and non-FC (DAS, NAS) connected servers, versus a converged datacenter network, in which 10G switches with FC Forwarders form a converged fabric linking new and existing servers to the existing FC SAN, FC storage, LAN and WAN under unified management; the legend distinguishes IP, FCoE and FC traffic, with FC initiators marked]

- The entire datacenter is SAN-enabled, while using fewer adapters and fewer cables
- Increased SAN adoption facilitates server virtualization
- Fully leverages investments in existing FC infrastructure
- Uniform SAN management and SRM tools across the entire datacenter: a simple extension of existing management processes

Agenda

- FCoE Overview
- FCoE Value Proposition
- Emulex FCoE Product Plans
- Emulex Sales & Marketing Plan
- Q&A

TCO-Driven Deployment of Converged Infrastructure

- Industry trend: consolidation via server virtualization

[Diagram, before: Apache1, Apache2, WebLogic1 and WebLogic2 workloads each run on their own DAS server with 1G NICs]

[Diagram, Option 1 (traditional): the workloads move onto VMs (VM1 through VM-n) under ESX, but each virtualized host still carries multiple 1G NICs (VMotion, VMs, backup, Vconsole) plus an FC HBA, cabled to separate Ethernet and FC switches]

[Diagram, Option 2 (converged): the same VMs under ESX share a single Converged Network Adapter carrying the CEE traffic classes (VMotion, VMs, backup, Vconsole, FC) over a single cable to a converged switch]

- Plan to complete TCO white papers

IO Consolidation Using FCoE

Servers without FCoE (nearly twice the cables):
- 4+ cables per server
- 32+ FC server connections
- 32+ Ethernet server connections
- 4 FC uplink cables
- 4 Ethernet uplink cables
- 72+ cables total

FCoE-enabled servers:
- 2 cables per server
- 0 FC server connections
- 32 Ethernet server connections
- 4 FC uplink cables
- 4 Ethernet uplink cables
- 40 cables total
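The cable totals above follow from simple arithmetic, assuming a rack of 16 servers (a count consistent with the 32 server connections quoted):

```python
def cable_count(servers: int, cables_per_server: int,
                fc_uplinks: int = 4, eth_uplinks: int = 4) -> int:
    """Total cables = per-server links plus fabric uplink cables."""
    return servers * cables_per_server + fc_uplinks + eth_uplinks

# Traditional: 2 FC + 2 Ethernet cables per server
print(cable_count(16, 4))   # 72
# FCoE: one pair of converged 10GbE cables per server
print(cable_count(16, 2))   # 40
```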


CEE/FCoE Consolidation for HP C-Class

Pass-through customers:
- Before (LAN + SAN-A + SAN-B): 84 mezz cards, 168 downlink cables, 16 uplink cables
- After: 42 mezz cards, 84 downlink cables, 16 uplink cables

Switch-blade customers:
- Before (LAN + SAN-A + SAN-B): 84 mezz cards, 72 downlink cables, 16 uplink cables
- After: 42 mezz cards, 36 downlink cables, 16 uplink cables

Converged Ethernet Fabric Advantages

- Enables an evolutionary approach toward network consolidation
- Expands SAN economics to mid-tier and front-end servers
- Protects existing investments in FC
- Retains FC management tools, skills and processes
- Leverages proven FC technology
  - Fewer unknowns
  - Enables rapid adoption
- Lightweight, stateless FCoE encapsulation ensures:
  - Less complexity in gateways
  - High performance
- Maintains clear boundaries between storage and networking management domains

Agenda

- FCoE Overview
- FCoE Value Proposition
- Emulex FCoE Product Plans
- Emulex Sales & Marketing Plan
- Q&A

Q&A


Linux Update
October 2007

Emulex and the Linux Supply Chain

[Diagram: Emulex contributes code upstream to the kernel; the same code flows into the distribution kernels and on to server and storage vendors]

100% support of the ELX driver in YOUR storage stack

Emulex and the Linux Supply Chain

What has Emulex contributed to the kernel?
- SCSI mid-layer extensions:
  - SCSI device block/unblock to cover cable pulls
  - Addition of transport objects
  - SCSI netlink event interface
  - Miscellaneous utility functions
- The FC transport (we are the maintainer):
  - First major transport implementation; used as a template for other transports
  - FC remote port: display FC topology in sysfs without custom tools
  - Introduces Host, Rport and Target class objects with attributes and statistics based on HBAAPI: a vendor-agnostic management interface
  - Transport function library for cable pulls, consistent target-ID bindings, and event posting
- Bug fixes:
  - SCSI mid-layer bugs: scanning, object management, error handling, reset handling
  - Miscellaneous kernel bugs
- And our lpfc device driver
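The "FC topology in sysfs without custom tools" item can be seen with a few lines of Python. This is our sketch, not Emulex code; `port_name`, `node_name` and `port_state` are standard attribute files exposed by the Linux FC transport under `/sys/class/fc_host` and `/sys/class/fc_remote_ports`.

```python
from pathlib import Path

def fc_topology(sysfs: str = "/sys/class") -> dict:
    """Walk the FC transport's sysfs classes; map hosts to remote ports.

    Returns an empty dict on machines without FC hardware, since the
    fc_host and fc_remote_ports class directories simply won't exist.
    """
    topo = {}
    for host in sorted(Path(sysfs, "fc_host").glob("host*")):
        topo[host.name] = {
            "port_name": (host / "port_name").read_text().strip(),
            "rports": {},
        }
    for rport in sorted(Path(sysfs, "fc_remote_ports").glob("rport-*")):
        # rport names look like "rport-3:0-0"; the host number leads
        host = "host" + rport.name.split("-")[1].split(":")[0]
        if host in topo:
            topo[host]["rports"][rport.name] = {
                "port_name": (rport / "port_name").read_text().strip(),
                "port_state": (rport / "port_state").read_text().strip(),
            }
    return topo
```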


Emulex and the Linux Supply Chain

Q: How do you succeed where others have failed?
A: Contribute quality vendor- and transport-agnostic code to the kernel, and maintain a thin next-generation driver as a result. And invest heavily, in the best talent, to stay ahead of the dynamic kernel.

[Diagram: yesterday's approach (still in use today by many) is a big, monolithic driver with vendor-unique behavior that bundles error handling, management, device discovery and IO queuing; the modern approach (driven by, and in place today with, Emulex) moves those services into shared code and keeps only a thin hardware driver]

Emulex and the Linux Supply Chain

What has Emulex contributed to the community?
- Education and a knowledge base:
  - We educate users and OEMs on many subjects: the kernel and its directions, the SCSI subsystem, Device Mapper, transports, virtualization, and storage in general
  - We educate the kernel developers on storage: virtualization, authentication, naming, end-to-end data protection, management interfaces, end-user issues
- We help distros track kernel and mid-layer bugs by kernel rev
- We cross-pollinate ideas and status between kernel developers, standards bodies, and vendor offerings
- White papers on kernel and distro behavior: using udev, multipath, and transport settings
- A catalyst for new initiatives and APIs (SAN management; authentication and key management; virtualization) in Linux, standards bodies, and partner products
- Troubleshooting the kernel, the SCSI subsystem, and FC on behalf of users (see the SourceForge list, or linux-scsi responses)
- Respected reviewer of other storage proposals

Emulex and the Linux Supply Chain

- Emulex fully supports the driver in the distribution; this is not just a boot-from-SAN driver
- The Linux distribution fully supports the same driver
- The server OEM fully supports the same driver
- The storage OEM fully supports the same driver
- The application provider fully supports the same driver, as it resides in the distribution (Oracle Unbreakable, for example)
- The end user, as a result, has a fully supported storage stack: no need to swap out critical stack components to get support

Emulex and the Linux Supply Chain

Where is this fully supported driver, and who supports it?
- Distributions: RHEL3, RHEL4, RHEL5, SLES9, SLES10, Asianux, Red Flag, Miracle Linux
- OEMs and partners: EMC, Fujitsu, HP, NEC, IBM, Bull, Dell, Unisys, Oracle, Cray, Sun, Symantec, HDS, Engenio, NTAP

Virtual HBA Roadmap: N_Port_ID Virtualization

October 2007

Emulex Enables the Virtualized Data Center

Virtualization timeline:
- 1998: Emulex first to develop SAN connectivity for virtualized mainframe servers
- 2001: Emulex, IBM & McData begin to define the NPIV standard
- 2003: NPIV standard is adopted by ANSI T11
- 2005: Emulex first to ship NPIV-ready 4Gb/s PCI-X and PCI Express FC HBAs; first to demonstrate NPIV on VMware at SNW
- 2006-2007: Emulex first to demonstrate NPIV with Xen on SUSE and Red Hat; first to demonstrate NPIV support for Microsoft Virtual Server; introduces VMPilot, the first management application for use with Microsoft Virtual Server and the first application to harness the power of NPIV; submits NPIV into the Linux upstream kernel; first to demonstrate integrated SAN connectivity within Microsoft System Center Virtual Machine Manager; Emulex, Cisco & VMware demonstrate an NPIV-enabled SAN with enhanced QoS, data protection and chargeback capabilities

Behind many of the key innovations that are bringing the power of virtualization to the data center.

What Is Virtual HBA Technology?

- Multiple virtual FC connections (pWWNs) on a single physical port (HBA or blade switch)
- N_Port ID Virtualization (NPIV) is an ANSI T11 FC standard

[Diagram: several virtual machines/blades each attach to their own virtual port on a single physical endpoint (HBA or blade switch); through the fabric switch, each virtual port gets its own path to arrays and LUNs]
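The relationship between one physical port and its virtual ports can be sketched as a small data structure. The class and the WWPN values below are purely illustrative; real WWPNs are allocated from vendor-assigned ranges.

```python
import itertools

class NpivPort:
    """Sketch of a physical N_Port that can host multiple virtual ports.

    Each virtual port gets its own WWPN, so the fabric can zone and
    LUN-mask each VM individually.
    """
    _wwpn_seq = itertools.count(1)

    def __init__(self, physical_wwpn: str):
        self.physical_wwpn = physical_wwpn
        self.vports = {}          # VM name -> virtual WWPN

    def create_vport(self, vm_name: str) -> str:
        # NPIV: acquire an additional N_Port ID from the fabric,
        # identified by a new WWPN, over the same physical link
        wwpn = "20:00:00:00:c9:00:00:%02x" % next(self._wwpn_seq)
        self.vports[vm_name] = wwpn
        return wwpn

    def destroy_vport(self, vm_name: str) -> None:
        # Removing a vport logs that WWPN out of the fabric
        del self.vports[vm_name]
```

Because each VM owns its WWPN rather than sharing the HBA's, the zoning and masking that reference it travel with the VM when it migrates.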

Key to Server Virtualization Is Virtual HBA Technology

Emulex Virtual HBA technology:
- Based on the ANSI NPIV spec
- Multiple Fibre Channel IDs (WWNs) for each physical port

[Diagram: with a conventional HBA, a single Fibre Channel ID is shared among all guest OSs behind the hypervisor; with NPIV HBAs, each guest OS gets one Fibre Channel ID of its own]

- Allows SAN best practices to be followed at the VM level
  - LUN mapping/masking and fabric zoning done on a VM-by-VM basis
- No storage/fabric reconfiguration required when migrating VMs

Virtual HBA Benefits

- VM-level fabric zoning: zone each VM separately from the others
- Compatible with VMotion (vWWPN in the VM description file)
- VM-level traffic shaping/prioritization
- VM-level Inter-VSAN Routing
- LUN-to-VM identification
- VM-specific storage customization (cache strategy, RAID level, drive speed, mirroring, frequency of snapshots/backups)
- VM-level native VSAN membership (VSAN trunking)* (* coming soon)

Components of a Virtual HBA Solution

1. Storage: no requirement
2. Fabric: edge switch must support NPIV
3. HBA: must support NPIV and generate new port requests
4. Driver: must provide an API for port request generation
5. Tool: must allow the user request and generate the WWN
6. VM: no requirement

[Diagram: an x86 host boots a hypervisor OS through an Emulex 4G HBA; an admin environment (e.g., Virtual Center) creates a vport for each application VM, which maps to an RDM LUN on the SAN]
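On Linux, the driver API of step 4 surfaces through the FC transport's sysfs interface in later kernels: writing "wwpn:wwnn" to the host's `vport_create` attribute requests a new virtual port. A hedged sketch, not official tooling; the path layout follows the FC transport convention, and the WWN values are illustrative only.

```python
from pathlib import Path

def create_vport(host: str, wwpn: str, wwnn: str,
                 sysfs: str = "/sys/class/fc_host") -> None:
    """Ask the FC transport to instantiate an NPIV virtual port.

    Writes "<wwpn>:<wwnn>" (16 hex digits each) to the host's
    vport_create attribute; requires root, an NPIV-capable HBA,
    and a fabric switch that accepts the extra fabric login.
    """
    attr = Path(sysfs, host, "vport_create")
    attr.write_text(f"{wwpn}:{wwnn}")

# Illustrative WWNs only -- real deployments allocate them from a
# managed range so zoning stays consistent across VM migrations:
# create_vport("host5", "20000000c9000001", "10000000c9000001")
```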

Component Requirements

Switch (VMware / MS Virtual Server / Xen):
- Cisco MDS 91xx/92xx with SAN-OS 3.0(3) firmware
- Brocade with Fabric OS v5.3.0 firmware
- McData switches with E/OS v9.2.0 firmware

HBA: Emulex 4G enterprise/midrange HBAs:
- LP11002 / LPe11002: 200 vports, 2 physical ports
- LP11000 / LPe11000: 100 vports, 1 physical port
- LP1150 / LPe1150: 8 vports, 1 physical port

Server, driver:
- VMware: ESX 3.5 with built-in 7.4 driver (GR December 07); note: RDM only in ESX 3.5
- MS Virtual Server: MS VS 2005 R2, Storport driver 1.30
- Xen: driver version 8.2 in the 2.6.23 kernel, upcoming distros

Management:
- VMware: Virtual Center
- MS Virtual Server: Emulex VMPilot, SCVMM
- Xen: vendor tools being extended; packaged solutions in GR (3leaf)

Cisco, VMware and Emulex Support

Figure 3: Routing Virtual Machines across VSANs using NPIV and IVR. In this figure the targets are in different VSANs.

http://www.emulex.com/white/hba/CiscoEmulexVirtualization.pdf

