
1

Brocade® Product Training
Why Fibre Channel?

© 2003 Brocade Communications Systems, Incorporated. Revision 0.1_FC101_2003

In this presentation we will investigate and exemplify Fibre Channel's purpose and role in today's IT infrastructure.
Objectives 2

After completing this module, attendees should be able to:

• Identify reasons the FC protocol exists
• Identify benefits of FC SANs
• Identify some components related to FC SANs

Topics 3

• Identify reasons and markets for the FC protocol
  - Storage access history and SAN introduction
  - What is Fibre Channel?
  - Why FC SANs?
  - What is a FC SAN?
• Identify some components related to FC SANs
• Embedded throughout this presentation is a discussion of FC standards and the reasons for the success of FC SANs

Storage Access – History 4

• Attaching storage to non-mainframe servers during the 1970s and 1980s was straightforward:
  - Storage was directly attached to the server
  - Networks were avoided, to ensure the best performance and reliability
  - To enhance performance, a parallel interface with a limited number of devices was used
• Result: a high-speed channel from server to storage


The primary external interface in the early days of external storage was the
Small Computer System Interface (SCSI), a bus architecture with dedicated
parallel cabling between servers and storage devices. It is an open standard
that has been enhanced over the years to support increases in device speed
and functionality.
By providing a dedicated physical channel, high levels of reliability could be ensured during data transfers between servers and storage. Storage-server connections must be highly reliable: a glitch in a server-storage transfer can permanently compromise valuable data. For this reason, server-storage connections traditionally avoided networks, which were seen as offering insufficient levels of confidence.
Storage Access – Storage Area Networks (SANs) 5

• Over time, storage-server connection requirements have changed:
  - Network-level flexibility with channel-like performance and reliability
  - Leverage the SCSI command set over emerging serial interfaces
  - Extend over a much larger geographic area
• These are the origins of the Storage Area Network

[Diagram: servers and storage connected through a Storage Area Network]


We can now see that several factors contributed to the rise of Storage Area
Networks:
• For a variety of reasons (business mergers, introducing new technologies,
explosive data growth), the number of servers and storage devices that
intercommunicate has risen rapidly.
• The flexibility required for server-storage access has reached network-like
levels – but with a need for channel-like reliability and performance.
• The various SCSI committees tried to keep up with the exploding storage market, and had success in maintaining a rich set of device commands. The SCSI driver is usually implemented to interact with an operating system more efficiently than the IP stack does, which makes it better suited to handling block data transfers.
• Newer serial-based technologies (e.g. Ethernet) have seen more rapid improvements in performance than SCSI-style parallel buses.
SAN Technology – Fibre Channel 6

• The dominant technology used in modern Storage Area Networks is Fibre Channel
  - An open standard (ANSI/INCITS T11 committee)
  - Designed to emulate a channel's large-block transfer behavior while extending distance and allowing many-to-many connectivity
• Connectivity: thousands of devices per fabric (network)
• Performance:
  - Current speeds: 1 and 2 Gbit/sec (100 and 200 MBytes/sec), with 10 Gbit/sec (1 GByte/sec) coming; our focus is 2 Gbit/sec
  - Initiator arbitrates for access before transmitting (ensures channel-like access to the target)
  - All SCSI commands and user data are carried in Fibre Channel frames with payloads of up to 2,112 bytes


The above is a brief overview of the Fibre Channel protocol. More details about
Fibre Channel are available in Chapter 2 of Building SANs with Brocade Fabric
Switches.
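To make the 2,112-byte payload limit concrete, here is a minimal sketch (illustrative only; the function and frame representation are invented, not Brocade code) of how a block of SCSI data might be segmented into frame payloads:

```python
# Illustrative only: segment a data buffer into FC frame payloads.
# The 2,112-byte maximum payload comes from the FC standard; the
# helper below is invented for clarity.

FC_MAX_PAYLOAD = 2112  # bytes of payload per Fibre Channel frame

def segment_into_frames(data: bytes) -> list[bytes]:
    """Split a SCSI data buffer into payload-sized chunks, one per frame."""
    return [data[i:i + FC_MAX_PAYLOAD] for i in range(0, len(data), FC_MAX_PAYLOAD)]

# Example: a 64 KiB write becomes ceil(65536 / 2112) = 32 frames.
frames = segment_into_frames(bytes(65536))
assert len(frames) == 32 and len(frames[0]) == 2112
```

In a real HBA this segmentation is done in hardware, together with headers and flow control, but the arithmetic is the same.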
What is Fibre Channel? 7

• A standard
• High performance and speed
• Low latency
• Long distance
• Robust data integrity
• Large connectivity


Fibre Channel highlights:

• A standard: An ANSI standard providing flexible serial data transport at long distances for Storage Area and System Area Networks, ratified as an ANSI standard in 1994. Now an ISO/IEC standard.

• High performance and speed: Hardware-based transport mechanism for high performance; 1, 2, 4 and 10 Gb/s speeds.

• Low latency: Less than 2 microseconds of latency from the input port to the output port of an FC switch.

• Long distance: Up to 10 km natively (longer with extenders); can be extended non-natively over ATM up to 3,000 km.

• Robust data integrity: Uses IBM's 8B/10B encoding scheme for robust integrity, and FC has a bit error rate (BER) of 10^-12, meaning that on average only one bit out of every 10^12 (one trillion) bits transmitted is in error. (A short worked calculation follows this list.)

• Large connectivity:
  - Per the standard, Fibre Channel allows a theoretical 16 million devices to be connected to one Fabric
  - Support for multiple physical media types: copper, optical fibre (multi-mode and single-mode) and mixed media
  - Support for multiple protocols: SCSI, IP, VIA, FICON, etc., and mixed protocols
  - Support for multiple topologies: point-to-point, switched, loop and mixed topologies
  - Heterogeneous interconnect scheme for computing and peripheral devices
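As promised above, here is a back-of-the-envelope sketch of what a 10^-12 BER means in practice (the 2 Gbit/s rate and the BER come from the slides; the calculation itself is illustrative):

```python
# Back-of-the-envelope: expected time between single-bit errors on a
# 2 Gbit/s link running flat out at a bit error rate of 1e-12.

line_rate_bits_per_sec = 2e9   # 2 Gbit/s
ber = 1e-12                    # one errored bit per 10^12 bits

expected_errors_per_sec = line_rate_bits_per_sec * ber   # = 0.002
seconds_between_errors = 1 / expected_errors_per_sec     # = 500 s

print(f"~1 bit error every {seconds_between_errors:.0f} s (~{seconds_between_errors/60:.1f} min)")
```

Even a fully saturated 2 Gbit/s link would therefore expect only about one bit error every eight minutes or so, which link-level error detection then catches and recovers.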
Fibre Channel – Hybrid Transport System 8


Fibre Channel combines the best of both worlds. It is a "channel" transport that shares many of the characteristics of an I/O bus (e.g. SCSI), which means that hosts and applications see the disk devices as locally attached storage. It also incorporates the best of the networking world: Fibre Channel allows multiple protocol support (SCSI, IP, FICON and others), manageability of the SAN can be handled by "typical" network management applications (e.g. HP OpenView), and Fibre Channel allows a heterogeneous set of devices to participate.
Fibre Channel = Foundation of SAN Fabric 9

A Networking Model for Storage

[Diagram: Clients – WAN – LAN – Servers – Fabric (Storage Area Network) – Storage Subsystems]

"Fabric" is a well-designed NETWORK of highly intelligent Fibre Channel switches which provides enterprise-class scalability, performance, manageability and availability.


Why give storage its own network—Fabric? Answer: A good LAN does not make a good SAN!
• LANs use different protocols, different tools
• LANs are physically insecure at the desktop and potentially vulnerable at the server
• LANs seldom have spare capacity for storage networking
• LANs are tuned to favor short, “bursty” user transmissions versus large, continuous data transfers

While Local Area Networks (LANs) may do a good job of supporting user access to servers,
they are less than ideal for providing servers with access to storage systems. For one thing,
user workstations and storage systems use different network protocols. LAN hardware and
operating systems are geared toward user traffic—they are designed for a fast user response
to messaging requests. By definition, user networks have to go to where the users are and
often this means that the servers may also be located all over the enterprise. With a SAN,
the storage units can be secured separately from the servers and totally apart from the user
network.
Most enterprises are in an ongoing struggle to maintain adequate LAN performance in the face of rapidly increasing user utilization rates. For them, it would be asking too much to also provide ongoing access to storage systems. Better to move all the storage traffic to the SAN and give the LAN room to breathe. Finally, it should be noted that user networks frequently employ broadcasts to coordinate access activities. If storage devices are attached to the main network, they are needlessly included in such broadcasts, and the intermittent flurries of user broadcasts can be disruptive to bulk data transfers.
(Note: WAN = Wide Area Network, long distance large network)
Here is a formal definition for SAN from the Storage Network Industry Association (SNIA):
“A network whose primary purpose is the transfer of data between computer systems and
storage elements and among storage elements. Abbreviated SAN. A SAN consists of a
communication infrastructure, which provides physical connections, and a management
layer, which organizes the connections, storage elements, and computer systems so that
data transfer is secure and robust.”--SNIA Technical Dictionary, copyright Storage Networking
Industry Association, 2000
Benefits of Fibre Channel 10

• A new multi-purpose network infrastructure for connecting open-systems storage, networks, video and clustered servers
• Provides a general hardware transport vehicle for Upper Level Protocols (e.g. SCSI, IP, etc.) – not Yet-Another-New-Protocol
• High speed: 2 Gigabit/sec rate today (200 MB/sec), full-duplex dedicated connections, moving to 10 Gb/sec and beyond
• Congestion-free transmission
• Up to 10 kilometers natively (plus extensions supporting distances of thousands of kilometers – an excellent deployment for Disaster Recovery)
• Advanced flow control system to guarantee in-order delivery
• Heterogeneous systems support (e.g. AIX, NT, Solaris, Linux, Novell, etc.)
• Accommodates legacy environments and applications


The high-speed, low-delay connections offered by Fibre Channel make it ideal for a variety of data-intensive applications. Please note that Fibre Channel is not a SAN-only technology: it has also been used for networking by movie and TV companies in post-production, moving video imagery between servers and editing stations.

Highlights of some Fibre Channel features:

High Speed
Currently at 2 Gb/sec, moving to 10 Gb/sec. Some Fibre Channel vendors will skip the 4 Gbps speed generation and go directly to 10 Gbps. Note that the Ethernet 10 Gb group and the Fibre Channel 10 Gb group have a joint working group, and both technologies will be released simultaneously. Networking technologies are synergistic, not competing against each other. Furthermore, ATM (Asynchronous Transfer Mode) is also moving toward 10 Gbps speeds.

Long Distance
Fibre Channel reaches 10 kilometers by the standard specification. Today, some Fibre Channel vendors have found solutions for implementing long-distance SANs (up to 3,000 kilometers using ATM as a WAN transport) without breaking the Fibre Channel standard.

Up to 256 Upper Layer Protocols (ULP) supported
Here we will focus on SCSI and TCP/IP protocol support only, but be aware that Fibre Channel has the capability to support many other storage, network, video and clustering protocols as well. This makes Fibre Channel easier for IT professionals to understand and support, as they do not need to learn a new storage or network command set. The last thing an IT professional wants to hear is: "Sorry, you need to throw away what you know and learn a new protocol/language again!"
What is a FC SAN? 11

• Open Systems Model for Networked Storage
• Enhanced Storage Management
  - Flexibility to add or reconfigure storage as needed, without downtime
• Independent Scaling of CPU and Storage Capacity
  - De-couples servers and storage so that either can be scaled separately
• Easy Migration
  - Current applications run without software changes
  - Incremental deployment allows flexible adoption


A Storage Area Network (SAN) is an enabling infrastructure that provides network-class benefits for IT data centers:
• Data can become a unified, "virtual" resource
• Legacy systems can be seamlessly integrated

SANs provide the flexibility for deploying various enterprise IT applications using a single infrastructure:
• SAN Backup
• Storage Consolidation
• Remote Data Replication
• High Availability / Fast Server Failover
What's inside a FC SAN? 12

[Diagram: Node N_Port ↔ F_Port … F_Port ↔ N_Port Node, with E_Port ↔ E_Port links between switches inside the FC SAN]

• Node Port (N_Port): has no knowledge of the path. This relieves the N_Port of having local routing tables, which translates to easy connection and management.
• Fabric Port (F_Port): an N_Port's connection into a FC SAN.
• Network Operations (Fabric Services): Fabric hierarchical structure, logical management, internal switching function, switch communication, routing path selection.

Internal structure and operations are not visible to the N_Ports.



Fibre Channel presents an interesting network scenario in which network clients have very little idea about what is going on inside the network. They do not know or care how connections are routed; they just know what they are connected to across the Fabric. This implementation means we can offload many requirements from the N_Port CPUs, making things simpler.

FC devices use FC protocols to connect and communicate. These protocols often represent Fabric services and, as the name implies, they reside in the Fabric. There is an initialization process, for example, that occurs when a device connects to a Fabric port. The port to which the device attaches will "become" the type of communication portal the attached device needs. In this picture we see connecting E_Ports, or Expansion Ports. When switches connect to each other they exchange link parameters (ELP), letting each know what is attached at the other end. FC has a switch-to-switch protocol, the Switch Internal Link Services (SW_ILS), with a rich set of commands that allow switches to exchange information. We also see an N_Port (Node Port) attached to an F_Port (Fabric Port). Node ports need Fabric ports to communicate; from the Fabric perspective, an F_Port implies an N_Port at the other end of the cable. Fabric access, called Fabric login or FLOGI, and the query methodologies represent Fabric services that are essentially hidden from end ports: the node ports do not need to keep track of all Fabric service information. The node ports absorb only the Fabric service information they need to build device lists and communicate across the Fabric.

In this course, we'll examine the internal operations of a switched Fabric, both in terms of its interaction with the various elements of the Fabric and in terms of the fairly rich set of network capabilities and services it offers to the attached nodes.
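As a rough mental model of the port initialization just described, here is a minimal sketch (a toy model; the dictionary and function are invented for illustration, not Brocade firmware or a standard API) of a switch port taking on the personality its attached peer requires:

```python
# Toy model of the port-initialization behavior described above: a switch
# port "becomes" the port type its attached peer needs. The pairings come
# from the FC standard; the code itself is purely illustrative.

PEER_TO_PORT_TYPE = {
    "N_Port": "F_Port",    # node attaches -> port behaves as a Fabric port
    "NL_Port": "FL_Port",  # loop device attaches -> Fabric Loop port
    "E_Port": "E_Port",    # another switch attaches -> Expansion port (ISL)
}

def port_personality(peer_port_type: str) -> str:
    """Return the port type a switch port assumes for a given peer."""
    return PEER_TO_PORT_TYPE[peer_port_type]

assert port_personality("N_Port") == "F_Port"
assert port_personality("E_Port") == "E_Port"
```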
SAN Fabric 13

• Fabric is a term used to describe a generic switching environment. It can consist of one or more interconnected switches (domains); one Fibre Channel switch = one Fabric domain.
• Maximum of 239 domains in a single Fabric
• Fabric communication is based on partitioning a 24-bit address space

Special Agent: the Principal Switch
Special Delivery: Class F Service

24-bit address space: Domain ID (8 bits) | Area ID (8 bits) | ALPA ID (8 bits)


In Fibre Channel, the Fabric is normally the entity that distributes address identifiers to the N_Ports. In general, the N_Ports do not need to be aware of how the Fabric manages address identifier allocation. A Domain is the highest logical construct in the hierarchy of Port Identifiers, Areas are the intermediate-level logical construct, and ALPAs are the lowest-level logical construct in the hierarchy.

To facilitate Fabric communication and address management, a partitioning scheme has been developed. The 24-bit address is divided into three 8-bit fields:
• The upper 8 bits are the Domain
• The middle 8 bits are the Area
• The lower 8 bits are the ALPA (Arbitrated Loop Physical Address)

The domain is used to identify a Fibre Channel switch. When a frame is received, it is routed to the correct domain (switch). Once the frame reaches the correct domain, it is routed to the correct area and, finally, to the correct port.
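To make the partitioning concrete, here is a minimal sketch (the helper name is invented for illustration, not a standard API) that decodes a 24-bit FC address into its three fields:

```python
# Decode a 24-bit Fibre Channel port identifier (FC_ID) into its
# Domain / Area / ALPA fields, as described above.

def decode_fcid(fcid: int) -> tuple[int, int, int]:
    """Split a 24-bit FC_ID into (domain, area, alpa)."""
    domain = (fcid >> 16) & 0xFF  # upper 8 bits: which switch
    area   = (fcid >> 8)  & 0xFF  # middle 8 bits: which area on the switch
    alpa   = fcid         & 0xFF  # lower 8 bits: which port / loop address
    return domain, area, alpa

# Example: FC_ID 0x011F00 lives on domain 1, area 0x1F, ALPA 0x00.
assert decode_fcid(0x011F00) == (0x01, 0x1F, 0x00)
```

Routing follows the same hierarchy: a switch compares the domain field against its own domain ID first, then resolves the area and ALPA locally.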

Fibre Channel has the concept of a "Principal Switch." The function of the Principal Switch is to simplify the problem of determining precedence between Fibre Channel switches without adding a separate external Fabric management software component. The Principal Switch facilitates the bring-up of the Fabric, acts as the controller of domains (ensuring each switch coming into a Fabric has a unique domain) and handles time services when available.

Class F is the class of service used for communication between switches. It is a special internal communication service within a multi-switch Fabric; its primary purpose is Fabric management and operation.
FC SAN Components 14

[Diagram: Host Bus Adapter in server – Cable – GBIC/SFP – Fibre Channel Switch – Storage Subsystem]

FC SANs can also include other interconnecting devices such as hubs and bridges.
Note: cables & GBICs (SFPs) will be discussed in the next section.
FC SANs usually include some SAN management applications.

A SAN is a mass storage infrastructure that frees up the LAN or WAN while operating faster. FC SAN Fabrics are high-performance networks based on Fibre Channel and dedicated to storage. They provide any-to-any connectivity for the resources in the SAN: any server can potentially talk to any storage device, and the SAN Fabric also enables communication between storage and SAN devices (switches, hubs, routers, bridges). SANs employ fibre-optic and copper connections to create dedicated networks for servers and their storage systems. More on these later.

• Servers/HBAs (Host Bus Adapters) – HBAs are similar to the Network Interface Card (NIC) used to connect devices to a LAN
  - UNIX
  - Windows
  - Linux
• Storage – Disks (RAID/JBOD)
  - RAID stands for Redundant Array of Independent Disks. These arrays look like a single disk volume to the server, and they are fault-tolerant through either mirroring or parity checking. They also typically have their own management software just for that RAID array.
  - JBOD – Just a Bunch of Disks, which usually plug into an enclosure that has the connection to the SAN. These disks have no protection against failure.
• Tape
• Interconnecting Devices
  - Hubs/Switches
  - Bridges/FC Extenders
• Software – SAN Management Applications
  - Telnet
  - Front panels/serial connections – depends on the device being managed
  - Web browser
  - Fabric Manager (FM)
  - SNMP – e.g. HP OpenView, CMNS, CA Unicenter, AdventNet, etc.
  - Application Programming Interface (API)
  - 3rd-party applications that use SNMP and/or the API
Interconnecting Devices – Fibre Channel Switch 15
Intelligent Core of the Network

• Multi-port connectivity (8, 16, 64, 128 or more ports)
• Supports Fabric and legacy loop devices – Fabric port types: F, E, FL
• 1 or 2 Gbit/sec speed (auto-sensing)
• FC header/protocol support
  - Full-duplex performance with cut-through routing
• Embedded services
• Login


Fibre Channel switches (named "Switch Elements" in Fibre Channel terminology) are intelligent devices able to interconnect individual nodes, devices and even other switch elements. At the physical layer, switch intelligence means that a switch is very much plug-and-play: it can detect whatever type of device is plugged in, provided the proper GBIC/SFP is installed. The reason for the small-switch-versus-bigger-switch approach is to enable a pay-as-you-grow implementation model. Most IT organizations can start by deploying a small SAN island using two 8-port switches; once they feel comfortable with the new technology, they can buy bigger switches (for example, 16 ports). The 64-port and 128-port core switches are mainly used for connecting many SAN islands across an organization.

Fabric port types:
• F_Port: for direct connection
• E_Port: for switch connection
• FL_Port: for loop/hub connection

FC header/protocol support:
• Hardware-based cut-through frame routing to keep latency small
• Link-level flow control to prevent loss of frames
• Link-level error detection/recovery for high application performance
• Small to large frame sizes to meet different application throughputs/latencies

Embedded services: Name Service, Alias Service, Management Service and more.

Login:
• Establishment of operating characteristics
• Automatic address assignment (24 bits wide)

Some leading switches in today's market offer the following features:
• Integrated SNMP and MIB-compliant management for remote management (as well as Telnet)
• Configuration management tools and utilization monitoring (web-based graphical user interface)
• Automated port isolation and device fail-over for fault tolerance, along with N+1 hot-swappable components
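To illustrate the automatic 24-bit address assignment mentioned under Login, here is a minimal sketch (a toy model with invented names, not switch firmware) of a switch handing out port IDs built from its own domain ID:

```python
# Toy model of fabric login (FLOGI) address assignment: the switch
# builds each N_Port's 24-bit FC_ID from its own domain ID plus the
# physical port (area) the device logged in on.

class ToySwitch:
    def __init__(self, domain_id: int):
        assert 1 <= domain_id <= 239          # valid fabric domain range
        self.domain_id = domain_id

    def flogi(self, port_number: int) -> int:
        """Assign a 24-bit FC_ID: domain | area (port) | ALPA (0 for F_Port)."""
        return (self.domain_id << 16) | (port_number << 8) | 0x00

switch = ToySwitch(domain_id=1)
print(hex(switch.flogi(port_number=4)))  # -> 0x10400
```

The N_Port simply receives this identifier in the FLOGI response; it never needs to know how the Fabric chose it, which is exactly the offloading described earlier.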
Interconnect Devices – Fibre Channel Switch Port Interfaces 16

[Diagram: NL_Port ↔ FL_Port; N_Port ↔ F_Port]

N → F: 1 node to 1 port
NL → FL: up to 126 AL_PA nodes to 1 port


Switch ports:
• E_Port – Expansion Port; connects two switches to make a Fabric
• F_Port – a Fabric Port to which an N_Port attaches
• FL_Port – a Fabric Loop Port to which a loop attaches

Device ports:
• N_Port – port designator for direct Fabric-attached devices
• NL_Port – port on a device that is attached to the loop (e.g. host, storage)
Interconnect Devices – Hubs 17

[Diagram: Hub]


Hubs utilize the arbitrated loop topology. The hub ties circuits through each port, joining the last port's Tx circuit to the first port's Rx circuit, so all ports share the backplane's throughput. Ports must be able to recover valid clocked information at the rate of 1.0625 gigabaud. There is no addressing scheme between the hub and the connected device. Hubs possess the ability to auto-bypass ports to allow ease of connectivity. Hubs follow the FC-AL and FC-AL-2 Fibre Channel standards. As long as vendors do not supersede the Fibre Channel standards, they can build their hubs with different management functionality, port density, signal processing and/or port types. The Fibre Channel hub is what connects devices together to create an Arbitrated Loop. Loops support a maximum of 126 devices and have a maximum bandwidth of 100 MB per second across the whole loop. Every device on the loop must share that bandwidth and access: only two devices can be communicating on the loop at any point in time. With this in mind, hubs are a way for devices that support FC loop ONLY to become part of the Fabric. As members of the Fabric/SAN, loops are multiple independent networks with limited connectivity.

Unmanaged hubs – These hubs are usually used in small environments since they are simple, low cost, and provide an entry-level interconnection scheme. They generally have bypass technology as long as the signaling thresholds are met, and usually provide simplistic LED functionality.

Managed hubs – These hubs introduce another level of intelligence for manageability. Basic functionality can now be managed via TCP/IP, i.e. Web, Telnet, SNMP. There are two levels to this functionality: first, hardware additions to the hub; second, the software to run the new hardware. Managed hubs also provide the ability to recognize ordered sets, CRC error detection, link errors, invalid transmission words, most-active AL_PAs, loop status, topology mappings and event tracking.
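As a rough illustration of the shared-loop behavior described above (a toy calculation, not vendor data): with a single 100 MB/s loop and only one conversation at a time, average per-device throughput degrades as members are added.

```python
# Toy illustration: an arbitrated loop is a single shared 100 MB/s medium,
# and only one initiator/target pair talks at a time. If n devices take
# equal turns, each gets 1/n of the loop's capacity on average.

LOOP_BANDWIDTH_MBPS = 100  # total bandwidth of the whole loop

def average_share(devices: int) -> float:
    """Average MB/s per device if all take equal turns on the loop."""
    return LOOP_BANDWIDTH_MBPS / devices

for n in (2, 10, 126):  # 126 is the loop's device maximum
    print(f"{n:>3} devices -> ~{average_share(n):.1f} MB/s each")
```

This shared-medium behavior is the key contrast with a switched Fabric, where each port gets a dedicated full-duplex connection.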
Interconnect Devices – Bridges 18

• FC-SCSI Router/Bridge
  - Maps SCSI devices to units of a single Arbitrated Loop Physical Address (AL_PA)
  - Configurable mapping table
  - SNMP management

[Diagram: Bridge with SCSI-only devices on one side]


Fibre Channel-to-SCSI bridges (also known as routers) allow the connection of non-Fibre Channel devices to the SAN. Typically used to connect SCSI tape devices to the SAN, a bridge can also be used to connect servers or workstations to the SAN in "initiator" mode.

Bridges interface Fibre Channel to SCSI, connecting Fibre Channel links to devices without Fibre Channel ports. FC-based tape libraries are not yet prevalent, so bridges are used to bring SCSI-based tape libraries into Fibre Channel SANs. Removing the tape device from an application server and attaching it to the bridge allows for faster, more accurate backups. It also allows the tape device to be shared by other servers across the SAN rather than being a dedicated resource. The need for these systems is declining with the development of SAN-capable devices.
Heterogeneous Attachment 19

Storage
• RAID – Redundant Array of Independent Disks
• JBOD – Just a Bunch of Disks
• Tape – primarily used for backup and recovery


Two terms often heard in discussions of SAN storage subsystems are RAID and JBOD.

RAID, or Redundant Array of Inexpensive Disks, is a disk clustering technology that has been available on larger systems for many years. Depending on how we configure the array, we can have the data mirrored (duplicate copies on separate drives), striped (interleaved across several drives), or parity protected (extra data written to identify errors). These can be used in combination to deliver the balance of performance and reliability that the user requires. Because of the high capacity (and cost) of RAID storage systems, they are good candidates for sharing across a SAN. Although we can certainly have a SAN without RAID, these two technologies are often used hand in hand. (A small sketch of parity protection follows this paragraph.)
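To make "parity protected" concrete, here is a minimal sketch (illustrative only, not any vendor's implementation) of XOR parity, the mechanism that lets a RAID set reconstruct a lost drive's data:

```python
# Minimal XOR-parity illustration: parity = XOR of all data blocks.
# If one block is lost, XOR-ing the parity with the surviving blocks
# reconstructs it. Real RAID stripes and rotates parity across drives.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # three data "drives"
parity = xor_blocks(d0, d1, d2)          # stored on a parity drive

# Drive 1 fails: recover its contents from the survivors plus parity.
recovered = xor_blocks(d0, d2, parity)
assert recovered == d1
```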

JBOD stands for Just a Bunch of Disks and is the counterpart of RAID. It is a collection of disks that share a common connection to the server but don't include the mirroring, striping, or parity facilities that RAID systems do (although these capabilities are available through host-based software). JBOD represents the simplest and least expensive "raw storage" option. The individual disks are arranged in a simple cabinet and are available to the servers as a group of independently accessible disks. They have little or no buffering (cache memory) and no intelligent controller to enable advanced features. JBOD has limited growth capacity, usually scaling to less than 1 terabyte (TB) per cabinet. Since they have no inherent intelligence, parity checking or data striping, there is no protection in the event of a drive failure.

Tape stores data sequentially on a magnetic tape cartridge and is generally used to hold large amounts of data for backup purposes. Tape storage is a common method of preserving information in the event of a drive failure.
Heterogeneous Attachment (cont.) 20

Storage – intelligence in the controllers


Storage devices:
RAID controllers act as the interface between the actual disks and the Fabric. They handle all tasks necessary to present the disks in the RAID array to the Fabric, including LUN masking, caching, port initialization and communication management.
Fibre Channel Host Bus Adapters (HBAs) 21

• Provides an interface between the server's or workstation's internal bus (e.g. PCI or SBus) and the Fibre Channel network
• The HBA software driver provides the storage information required by the Operating System
  - Handles I/O and control requests
  - Copper/optical media support (may be dual-port cards)


Every device connected to a SAN requires a Fibre Channel interface or adapter board. Fibre
Channel Host Bus Adapters (HBAs) are available for a variety of bus types including PCI and
Sbus.

Some leading adapters on the market today provide the following features:
• Plug-and-play flexibility and copper/optical connector support
• SNMP (Simple Network Management Protocol) and MIB (Management Information Base) support
• Support for both Arbitrated Loop and Switched Fabric topologies
Heterogeneous Attachment (cont.) – Intelligence in the Host: HBAs 22

HBA card = Node
Connectors = Ports


Major responsibilities of a Host Bus Adapter (HBA):
• Framing packets
• Providing physical addressing
• Link-level error checking
• Sequence-level error checking
• Managing flow control
• Linking I/O requests to packets
• Handling chaining of multiple requests
• Managing many concurrent I/Os
• Analyzing and managing data
• Running storage virtualization

Note: by comparison, a normal Ethernet network card handles only the first three operations; the rest of the functionality requires server CPU handling.
HBAs – SCSI Adapter and Driver: Yesterday 23

[Stack diagram:]
Application Layer: Netscape, Payroll, Inventory
System Call Interface Layer
File Subsystem Layer: NTFS, FAT, UFS
Operating System Kernel: Disk Driver, Tape Driver, CD-ROM Driver
SCSI Driver (you can load this driver)
SCSI Adapter Driver
INTERNAL I/O BUS
SCSI Adapter card → SCSI bus → SCSI devices

In this slide we can see the different layers that go into servicing a particular SCSI command, from the application layer down to the SCSI adapter card. This view is an illustration of a parallel SCSI system.

How does this work?
• The application layer depicts Netscape, Payroll and Inventory applications.
• The kernel OS system-call interface and file subsystem layers process application-layer communication.
• The disk, tape or CD-ROM device drivers are responsible for handling specific requests (i.e., opens/reads/writes/closes) from one or more applications – these drivers put the necessary device-specific handlers on the communication going down the chain.
• When we look at SCSI communication in this picture, the kernel OS SCSI driver is responsible for accepting generic I/O requests from upper layers in the operating system kernel and converting them to the appropriate device-specific SCSI Command Descriptor Blocks. Note: SCSI Command Descriptor Blocks (CDBs) are a specific unit of work to be acted upon by a SCSI initiator or target.
• The SCSI adapter drivers that you load, or let the server operating system load for you, are the interface between the physical card and the entry point into the kernel OS SCSI drivers. A SCSI adapter takes SCSI commands and packages them into the parallel format needed on the SCSI data bus where the disk and tape devices sit.
HBAs – SCSI/Fibre Channel Adapter and Driver: Today 24

[Stack diagram:]
Application Layer: Netscape, Payroll, Inventory
System Call Interface Layer
File Subsystem Layer: NTFS, FAT, UFS
Operating System Kernel: Disk Driver, Tape Driver, CD-ROM Driver
New and improved SCSI Driver – a new host SCSI driver to support Fibre Channel (you typically load this driver)
FC Adapter Driver
INTERNAL I/O BUS
FC HBA card → Fibre Channel

Here is a view of the different layers that go into servicing a SCSI command. The difference between the previous slide and this one is that the device drivers are using serial SCSI-3 commands, which enhance error recovery and device sharing; in addition, the block-level device the kernel OS is communicating with no longer has to be on the same physical parallel bus for the host to access it.

An FC HBA communication process today is very similar:
• The application layer still handles Netscape, Payroll and Inventory applications. By the way, this level also includes backup, multipathing, clustering and volume-management applications.
• The kernel OS system-call interface and file subsystem layers still process application-layer communication.
• The disk, tape or CD-ROM device drivers are responsible for handling specific requests (i.e., opens/reads/writes/closes) from one or more applications – these drivers put the necessary device-specific handlers on the communication going down the chain.
• When we look at SCSI communication (the most prevalent FC protocol), the kernel OS SCSI driver is still responsible for accepting generic I/O requests from upper layers in the operating system kernel and converting them to the appropriate device-specific SCSI Command Descriptor Blocks. Note: SCSI Command Descriptor Blocks (CDBs) are a specific unit of work to be acted upon by a SCSI initiator or target. These new and improved SCSI drivers also understand how to communicate with FC adapters, via what are called HBA drivers.
• The FC adapter or HBA drivers that you load, or let the server operating system load for you, represent the interface between the physical card and the entry point into the kernel OS SCSI drivers. These adapters take the SCSI commands, package them into sequences of frames, and append FC addressing information and flow-control parameters. They handle what we call FC Layer 2 processes. HBA components also assign counters, register statistical data, and serialize, encode and decode FC frames – all steps necessary to send these frames out the link, through the Fabric, to their destination ports.
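As a simplified picture of that encapsulation step, here is a sketch of a SCSI CDB being wrapped for transport over Fibre Channel (the header layout and helper are deliberately simplified and invented for illustration; they are not a faithful FCP implementation):

```python
# Illustrative sketch: wrapping a SCSI command for FC transport.
# A 24-byte FC-style frame header (carrying source/destination FC_IDs)
# is prepended to the payload holding the CDB. Layout is simplified.

import struct

def build_fc_frame(src_fcid: int, dst_fcid: int, cdb: bytes) -> bytes:
    """Prepend a simplified FC-style header to a SCSI CDB payload."""
    header = struct.pack(">I I 16x",   # 24 bytes: dst, src, padding
                         dst_fcid & 0xFFFFFF,
                         src_fcid & 0xFFFFFF)
    return header + cdb

# SCSI READ(10) CDB for LBA 0, 8 blocks (a standard 10-byte CDB):
# opcode, flags, 4-byte LBA, group, 2-byte transfer length, control.
read10 = struct.pack(">B B I B H B", 0x28, 0, 0, 0, 8, 0)

frame = build_fc_frame(src_fcid=0x010400, dst_fcid=0x020500, cdb=read10)
assert len(frame) == 24 + 10
```

The real HBA adds much more (exchange and sequence IDs, CRC, 8B/10B encoding), but the layering idea is the same: the SCSI command rides unchanged inside the FC frame.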
Summary 25

• FC combines the best of channel and network protocols
• FC SANs are a new multi-protocol network that provides high-speed, congestion-free communication between heterogeneous devices, with legacy support
• SAN components include interconnect devices (switches, hubs, bridges), storage (storage controllers) and hosts (HBAs)

Additional Information 26

On the resources page under Internet Links, find the SAN ED 101 link "SAN Fabric Foundation" and go to Chapter 1 (Introduction to SAN Concepts and Benefits) for additional information related to this presentation.

