
SECTION 1
Introduction to Data Centers

Chapter One:
Overview

What is a Data Center?
A data center, as defined in TIA/EIA-942, Telecommunications Infrastructure Standard for Data Centers, is a building or portion of a building whose primary function is to house a computer room and its support areas. The main functions of a data center are to centralize and consolidate information technology (IT) resources, house network operations, facilitate e-business and provide uninterrupted service to mission-critical data processing operations.

Data centers can be classified as either enterprise (private) data centers or co-location (co-lo)/hosting (public) data centers. Enterprise data centers are privately owned and operated by corporate, institutional or government entities; they support internal data transactions and processing, as well as Web services, and are supported and managed by internal IT staff. Co-lo data centers are owned and operated by telcos or unregulated competitive service providers and offer outsourced IT services. Services that data centers typically provide include Internet access, application or Web hosting, content distribution, file storage and backup, database management, fail-safe power, HVAC controls, security and high-performance cabling infrastructure. As shown in Figure 1.1, the functional areas of the data center can be broken down into:

1. Switching
• Point of Presence (PoP) Zone
• Server Area Zone
2. Storage
• Storage Area Network

PoP Zone
This area of the data center is sometimes referred to as the "meet me" room. It is typically the area where the service provider enables access to their networks. This area contains many routers and core switches.

Server Zone
This area of the data center provides the front-end connection to the database servers. It contains many switches and servers. The protocols used to communicate in this area are 1 Gigabit and 10 Gigabit Ethernet.

Figure 1.1: Functional Areas of the Data Center (Switching: PoP and Server Area zones; Storage: SAN) | Drawing ZA-3580
Storage Zone
This area of the data center provides the back-end connection to data. It contains many types of storage devices. The protocols used to communicate in this area are Fibre Channel, Ethernet and small computer system interface (SCSI).

Regardless of the type of data center to be implemented, there are three fundamental issues that should be addressed when evaluating each area of the data center infrastructure:

1. Manageability
2. Flexibility and Scalability
3. Network Efficiency

Manageability
End users are looking for a higher-performance, low-profile solution for more effective overall operation of the network. Manageability is essential; without it, the cabling infrastructure takes over the data center in a short amount of time. To increase control over the data center infrastructure, structured cabling should be implemented. The key benefit of structured cabling is that the user regains control of the infrastructure rather than living with an unmanageable buildup of patch cords and an abundance of unidentifiable cables.

Flexibility and Scalability
Flexibility and scalability of the cabling infrastructure allow quick and easy changes with little to no impact on the day-to-day operation of the data center, as well as reduced risk that tomorrow's technology will render the infrastructure obsolete. Scalability of the data center is essential for migration to higher data rates and for adding capacity without major disruption of operations. The initial data center must be designed so it can be scaled quickly and efficiently as requirements change. To meet the requirements and demands of the data center, the topology of the data center, as well as the actual components used to implement it, must be explored. Both topology and components, if chosen correctly, create an effective network, save time and money, and bring efficiency, manageability, flexibility and scalability to the data center.

Network Efficiency
Data centers have seen significant growth in size and numbers in the past few years and should continue to see significant growth as networks evolve toward 100 Gigabit Ethernet. Due to this considerable growth, there is a need for simple, efficient cabling solutions that maximize space and reduce installation time and costs. Preterminated solutions are often preferred as they provide higher fiber density, reduced installation time and the ability to easily facilitate moves, adds and changes (MACs).

Corning Cable Systems' preterminated optical fiber cabling solutions streamline the process of deploying an optical network infrastructure in the data center. A modular design guarantees compatibility and flexibility for all optical connectivity and easily scales as demands dictate and requirements change. The preterminated solutions also manage fiber polarity, virtually eliminating it as a concern in network design, installation or reconfiguration.

Corning Cable Systems' newest preterminated solution, Pretium EDGE™ Solutions, provides increased system density when compared to traditional preterminated systems and offers the highest port density in the market. Custom-engineered components enable simple integration into common SAN directors and switches, while the preterminated components allow for reduced installation time and faster MACs.

A well-planned infrastructure can last 15 to 20 years and will have to remain operational through multiple generations of system equipment and data-rate increases. The following chapters address the factors to be considered for a well-designed data center cabling infrastructure.
Chapter Two:
Data Center Networking Protocols

General
Data centers contain many network transmission protocols for communication between electronic equipment. Ethernet and Fibre Channel are the dominant networks, with Ethernet providing a local area network (LAN) between users and computing infrastructure while Fibre Channel provides connections between servers and storage to create a storage area network (SAN). See Figure 2.1. To design a structured cabling system for a data center, the designer should understand the different protocols used in each area of the data center.

LAN Protocols

Ethernet
Ethernet is the most widely installed LAN data transmission technology and is standardized as IEEE 802.3. Ethernet is typically used in data center backbones to transmit data packets from the core router to the access switch to the server network interface card (NIC). Figure 2.2 illustrates the Ethernet frame.

Ethernet originally began as a bus-based application with coaxial cable as the primary bus medium, which was eventually replaced with fiber and copper twisted-pair media. Ethernet is now deployed in data center switch networks with optical connectivity in the backbone and copper connectivity for short-length equipment interconnects.

Data center Ethernet deployments operate at speeds of 1G and 10G utilizing predominantly OM3 and OM4 multimode optical fiber. Multimode fiber installations usually operate at 850 nm with VCSEL transceivers. OM3 and OM4 fibers with 850 nm VCSEL transceivers provide a significant economic value proposition when compared to single-mode fiber and DFB/FP transceivers.

Figure 2.1: Typical Data Center Architecture Today (LAN core switch and Ethernet edge switch connected to servers; SAN switch connected over FC to storage; OM3 fiber and copper cabling) | Drawing ZA-3468
Figure 2.2: Ethernet Frame Format | Drawing ZA-3675
PREAMBLE (7 octets) | SOF (1 octet) | DESTINATION ADDRESS (6 octets) | SOURCE ADDRESS (6 octets) | TYPE (2 octets) | DATA (46-1500 octets) | FCS (4 octets)
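The fixed field widths in Figure 2.2 make the frame easy to model in software. The sketch below is illustrative only: it assembles the MAC-layer portion of an untagged frame in Python, omitting the 7-octet preamble and 1-octet SOF, which the PHY generates, and computing a simplified FCS that real NICs handle in hardware.

```python
import struct
from zlib import crc32

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble the MAC-layer fields of the frame in Figure 2.2."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("MAC addresses are 6 octets")
    if len(payload) > 1500:
        raise ValueError("payload exceeds the 1500-octet maximum")
    payload = payload.ljust(46, b"\x00")             # pad to the 46-octet minimum
    mac_header = dst + src + struct.pack("!H", ethertype)
    # CRC-32 frame check sequence (byte order simplified for illustration)
    fcs = struct.pack("<I", crc32(mac_header + payload))
    return mac_header + payload + fcs

frame = build_frame(b"\xff" * 6, b"\x02" + bytes(5), 0x0800, b"hello")
print(len(frame))  # 64 octets: 14 header + 46 padded payload + 4 FCS
```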
The IEEE 802.3z and 802.3ae task force groups released standards for Gigabit Ethernet and 10 Gigabit Ethernet in 1998 and 2002, respectively. The primary 1G and 10G physical media dependent (PMD) variants being deployed are provided in Table 2.1.

Future industry bandwidth drivers such as video applications, virtualization and I/O convergence are driving the need for network data rates beyond 10G. In response to that need, the IEEE 802.3ba task force was formed to develop guidance for 40G and 100G Ethernet data rates. OM3 and OM4 fibers are the only multimode fibers included in the standard. 40/100G distances for OM3 and OM4 are 100 m and 150 m, respectively. The 40/100G standard does not include guidance for UTP/STP copper media.

Ethernet duplex fiber serial transmission with a directly modulated 850 nm VCSEL has been used for data rates up to 10G. Duplex fiber serial transmission becomes impractical at 40/100G data rates due to reliability concerns when the 850 nm VCSEL is directly modulated across the extreme temperatures in the data center. The Ethernet 40/100G multimode fiber PMDs (40GBASE-SR4 and 100GBASE-SR10) use parallel optics with OM3 and OM4 fibers to mitigate the VCSEL reliability concern.

40G Ethernet uses four 10G channels to transmit and four 10G channels to receive, while 100G Ethernet uses ten 10G channels to transmit and ten 10G channels to receive. See Figures 2.3 and 2.4.

Figure 2.3: Parallel Optics for 100G Ethernet (MTP® connector optical transmitter and receiver; ten Tx and ten Rx lanes across fiber positions 1-12) | Drawing ZA-3300
Figure 2.4: Parallel Optics for 40G Ethernet (MTP connector optical transmitter and receiver; four Tx and four Rx lanes on a 12-fiber array) | Drawing ZA-3299
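As a quick check on the lane arithmetic above, the short sketch below computes the aggregate throughput of the parallel-optics PMDs. The per-lane signaling rate and 64b/66b line coding are standard 10G Ethernet parameters assumed here for illustration.

```python
# Aggregate throughput of parallel-optics Ethernet PMDs.
# Each lane runs at 10.3125 GBaud with 64b/66b encoding -> 10 Gb/s of data.
LANE_BAUD = 10.3125            # GBaud per lane
ENCODING_EFFICIENCY = 64 / 66  # 64b/66b line coding overhead

def aggregate_gbps(lanes: int) -> float:
    """Data rate in Gb/s per direction for a given lane count."""
    return lanes * LANE_BAUD * ENCODING_EFFICIENCY

print(aggregate_gbps(4))   # 40GBASE-SR4:   4 Tx + 4 Rx lanes  -> 40.0 Gb/s each way
print(aggregate_gbps(10))  # 100GBASE-SR10: 10 Tx + 10 Rx lanes -> 100.0 Gb/s each way
```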

SAN Protocols

Fibre Channel
Fibre Channel is a high-performance, low-latency, duplex fiber serial link application with data rates of 1 Gb/s, 2 Gb/s, 4 Gb/s, 8 Gb/s, 10 Gb/s and 16 Gb/s. It provides a very reliable form of communication that guarantees delivery of information. The Fibre Channel T11 technical committees are responsible for developing transmission guidance. Fibre Channel is used in the data center to transmit data from the server host bus adapter (HBA) to the SAN director to the SAN storage. Similar to Ethernet, OM3 and OM4 fibers are the dominant media type used in the SAN network.

TABLE 2.1: Primary 1G and 10G Ethernet PMD Variants
1G Multimode:    1000BASE-SX (OM3: 1000 m, OM4: 1000 m)
1G Single-mode:  1000BASE-LX (SM: 10 km)
10G Multimode:   10GBASE-SR (OM3: 300 m, OM4: 550 m)
10G Single-mode: 10GBASE-LR (SM: 10 km)


TABLE 2.2: T11 Fibre Channel Speed Roadmap
Product Naming   Throughput (MBps)   Line Rate (GBaud)   T11 Spec Technically Completed (Year)   Market Availability (Year)
1GFC             200                 1.0625              1996                                    1997
2GFC             400                 2.125               2000                                    2001
4GFC             800                 4.25                2003                                    2005
8GFC             1600                8.5                 2006                                    2008
16GFC            3200                14.025              2009                                    2011
32GFC            6400                28.05               2012                                    2014
64GFC            12800               57                  2016                                    Market Demand
128GFC           25600               114                 2020                                    Market Demand

Fibre Channel networks to date have exclusively used optical media for the backbone as well as for the interconnect into the electronics. SAN Fibre Channel links are being designed and deployed today to support migration to 16G. Maximum 16G OM3 and OM4 channel distances are 100 m and 125 m, respectively. Fibre Channel single-mode fiber usage is minimal in the data center, used almost exclusively for synchronization between primary and secondary data center sites. T11 activity has recently started to develop 32G guidance; initial objectives are for a duplex fiber serial transmission solution with OM3 and OM4 fibers over 70-100 m distances. Table 2.2 provides the T11 Fibre Channel speed roadmap.

Fibre Channel over Ethernet
Data centers utilize multiple networks, which presents operational and maintenance issues, as each network requires dedicated electronics and cabling infrastructure. As previously discussed, Ethernet (LAN) and Fibre Channel (SAN) are the typical networks in a data center. Fibre Channel's T11 technical committee and the Institute of Electrical and Electronics Engineers' (IEEE's) Data Center Bridging committee are defining standards to converge the two into a unified fabric with Fibre Channel over Ethernet (FCoE).

FCoE is simply a transmission method in which the Fibre Channel frame is encapsulated into an Ethernet frame at the server (Figure 2.5). The server encapsulates Fibre Channel frames into Ethernet frames before sending them over the LAN and de-encapsulates FCoE frames when they are received. Server I/O consolidation combines the NIC and HBA cards into a single converged network adapter (CNA), which reduces server cabling and power/cooling needs. At present, the Ethernet frame is removed at the Ethernet edge switch to access the Fibre Channel frame, which is then transported to the SAN directors. FCoE encapsulation standards activity takes place at the Fibre Channel T11.3 committee.

Figure 2.5: FCoE Frame Encapsulation (Ethernet Header | FCoE Header | Fibre Channel Header | Fibre Channel Payload | CRC | EOF | FCS) | Drawing ZA-3673
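To make the encapsulation order in Figure 2.5 concrete, here is a minimal sketch of what a CNA does conceptually. The FCoE Ethertype (0x8906) is the IEEE-assigned value; the SOF/EOF code points and reserved-field sizes shown are assumptions for illustration, and the Ethernet FCS and minimum-size padding are left to the NIC hardware.

```python
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype for FCoE

@dataclass
class FCFrame:
    header: bytes   # 24-octet Fibre Channel header
    payload: bytes  # Fibre Channel payload
    crc: bytes      # 4-octet Fibre Channel CRC

def encapsulate(fc: FCFrame, dst_mac: bytes, src_mac: bytes,
                sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet frame (field order per Figure 2.5)."""
    eth_header = dst_mac + src_mac + FCOE_ETHERTYPE.to_bytes(2, "big")
    fcoe_header = bytes(13) + bytes([sof])   # version/reserved bits + encoded SOF
    trailer = bytes([eof]) + bytes(3)        # encoded EOF + reserved padding
    return eth_header + fcoe_header + fc.header + fc.payload + fc.crc + trailer
```

De-encapsulation at the edge switch is the inverse: strip the Ethernet and FCoE headers and trailer, then forward the native Fibre Channel frame to the SAN.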
TABLE 2.3: FCIA FCoE Speed Roadmap
Product Naming   Throughput (MBps)   Equivalent Line Rate (GBaud)   T11 Spec Technically Completed (Year)   Market Availability (Year)
10GFCoE          2400                10.3125                        2008                                    2009
40GFCoE          9600                41.25                          TBD                                     Market Demand
100GFCoE         24000               103.125                        TBD                                     Market Demand

Fibre Channel is a deterministic protocol that guarantees delivery of information. Native Ethernet is not deterministic and has relied on the transmission control protocol (TCP) to retransmit dropped frames. With FCoE, the Ethernet transport has had to be updated to ensure that frames/packets are lossless without using the TCP/IP protocol. The new enhanced Ethernet standard is called converged enhanced Ethernet (CEE). CEE standards activity takes place in the IEEE 802.1 Data Center Bridging working groups.

Table 2.3 provides the Fibre Channel Industry Association (FCIA) FCoE speed roadmap. Where 10G FCoE utilizes serial duplex fiber transmission, 40/100G FCoE speeds will require parallel optics. Data centers should install 12-fiber MPO backbone cables with OM3 or OM4 fiber today; these can be used for 10G FCoE and provide an effective migration path to emerging parallel optics that require an MPO interface into the switch electronics and the server (Figure 2.6).

Figure 2.6: First Generation FCoE Architecture (LAN core switch, FCoE edge switch, server CNA over SFP+ twinax, SAN switch and FC storage) | Drawing ZA-3469
First generation FCoE implementations will focus on the edge switch and server. Ethernet OM3 or OM4 fiber optical uplinks will be received into the FCoE-enabled edge switch and then interconnected to the server CNA. Instead of copper UTP interconnects, SFP+ direct-attach twinaxial copper cable is now used as the media, with significantly lower power consumption and latency. The twinax copper cable will be used for distances up to 7-10 m; beyond that distance, low-cost, ultra-short-reach (USR) SFP+ modules and OM3 or OM4 optical fiber will be used. The encapsulated Fibre Channel frame is returned to the edge switch, where the Ethernet frame is removed to access the Fibre Channel frame. The Fibre Channel frame is then transmitted to the SAN network. See Figure 2.6. This architecture reduces the server interconnect cabling and adapter card count by at least 50 percent.

Second generation FCoE deployments are expected to use FCoE-enabled core switches and edge switches. This architecture will continue to use basic Ethernet optical uplinks from the core switch to the edge switch and SFP+ twinax interconnects into the server. The difference occurs when the FCoE frame is transmitted back through the edge switch to the core switch over the same optical fiber previously used as the uplink to the server. At the core switch, the FCoE frame is forwarded to the SAN director, where the Ethernet frame is removed and the Fibre Channel frame is then transmitted to the storage devices. This architecture reduces the server interconnect cabling and adapter card count by at least 50 percent and eliminates the Fibre Channel HBA-to-SAN optical fiber trunk cable. See Figure 2.7.

Figure 2.7: Second Generation FCoE Architecture | Drawing ZA-3470

Third generation FCoE architecture mirrors the second generation, with the exception that the core switch now forwards the FCoE frame directly to storage, where the Fibre Channel frame is accessed. This architecture reduces the server interconnect cabling and adapter card count by at least 50 percent, eliminates the Fibre Channel HBA-to-SAN optical fiber trunk cable and eliminates the core switch-to-SAN director fiber trunk cable. See Figure 2.8.
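The "at least 50 percent" reduction cited for each generation follows directly from the I/O consolidation arithmetic. A minimal sketch, assuming an example server with redundant LAN and SAN attachments (the port counts are illustrative, not from the text):

```python
# Server cabling before and after FCoE I/O consolidation.
nic_ports = 2   # Ethernet NIC ports per server (LAN), assumed redundant pair
hba_ports = 2   # Fibre Channel HBA ports per server (SAN), assumed redundant pair
cna_ports = 2   # converged network adapter ports after consolidation

servers = 100
before = servers * (nic_ports + hba_ports)  # 400 cables, two adapter cards per server
after = servers * cna_ports                 # 200 cables, one adapter card per server
print(f"Cable reduction: {100 * (before - after) / before:.0f}%")  # 50%
```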

The FCIA has adopted specific guidance relative to the cabling physical layer. Optical connectivity shall be in accordance with IEEE 802.3ae (10GBASE-SR) utilizing OM3 or OM4 optical fiber. In addition, new installs are recommended to be ≤ 100 m to be compatible with emerging 40/100G Ethernet and 16/32G Fibre Channel. The SFP+ is the preferred electronic interface for copper and optical cable; this eliminates the use of 10GBASE-T copper UTP/STP cable.

FCoE offers a data center unified fabric solution that simplifies operation and maintenance of the cabling infrastructure. FCoE facilitates utilization of low-cost Ethernet electronics and OM3/OM4 optical connectivity to support 10/40/100G data rates.

Figure 2.8: Third Generation FCoE Architecture | Drawing ZA-3471
Chapter Three:
Fiber Type and Performance

As fiber becomes more widely deployed in the data center, a system designer should evaluate all the various grades of multimode fiber optic cable to ensure the data center will support current and future data rates. As data rates and the physical size of data centers increase, the need for designing a bandwidth- and link-length-scalable network is more important than ever. The purpose of this chapter is to familiarize the reader with OM3 and OM4 fiber types and the performance requirements needed to support the local area network (LAN) and storage area network (SAN) applications commonly used in data centers.

OM3/OM4 Laser-Optimized 50/125 µm Multimode Fiber
Data center LAN and SAN networks should be designed to support legacy applications as well as emerging high-data-rate applications. The emergence of high-data-rate systems such as 10, 40 and 100 Gigabit Ethernet and 8 and 16 Gigabit Fibre Channel has resulted in OM3 and OM4 multimode fibers being the dominant optical fiber types deployed in the data center.

The TIA-492AAAC OM3 detailed fiber standard was released in March 2002, and the TIA-492AAAD OM4 detailed fiber standard was released in August 2009. These fibers are optimized for laser-based 850 nm operation and include a minimum 2000 MHz•km effective modal bandwidth (EMB) for OM3 and 4700 MHz•km EMB for OM4. The OM multimode fiber nomenclature originated in the ISO/IEC 11801, second edition standard and has been adopted into TIA standards such as TIA-568, Rev C.3. In addition to OM3 and OM4, the OM1 and OM2 designations are included for standard 62.5 µm and 50 µm multimode fibers, respectively. See Table 3.1.

Data center high data rates, in conjunction with the desired application distances, support OM3 and OM4 as the default choice of fiber types. The small core size of 50/125 µm fiber yields an inherently higher bandwidth capability than other multimode fibers such as OM1 fiber. Tables 3.2 and 3.3 provide OM3 and OM4 fiber distance capabilities for Ethernet and Fibre Channel data rates.

Corning Cable Systems strongly recommends OM3 and OM4 fibers for the data center. When compared to OM1 and OM2 multimode fibers, OM3/OM4 fibers have the highest 850 nm bandwidth to accommodate longer distances, provide more system budget margin and support migration to higher data rates such as 16/40/100G.

TABLE 3.1: Multimode Fiber Bandwidth Specifications
62.5/125 µm multimode (OM1): TIA-492AAAA-A, IEC 60793-2-10 Type A1b
  850 nm: overfilled modal bandwidth-length product 200 MHz•km; EMB not required
  1300 nm: overfilled modal bandwidth-length product 500 MHz•km; EMB not required
50/125 µm multimode (OM2): TIA-492AAAB, IEC 60793-2-10 Type A1a.1
  850 nm: overfilled modal bandwidth-length product 500 MHz•km; EMB not required
  1300 nm: overfilled modal bandwidth-length product 500 MHz•km; EMB not required
850 nm laser-optimized 50/125 µm (OM3): TIA-492AAAC-A, IEC 60793-2-10 Type A1a.2
  850 nm: overfilled modal bandwidth-length product 1500 MHz•km; EMB 2000 MHz•km
  1300 nm: overfilled modal bandwidth-length product 500 MHz•km; EMB not required
850 nm laser-optimized 50/125 µm (OM4): TIA-492AAAD, IEC 60793-2-10 Type A1a.3
  850 nm: overfilled modal bandwidth-length product 3500 MHz•km; EMB 4700 MHz•km
  1300 nm: overfilled modal bandwidth-length product 500 MHz•km; EMB not required
TABLE 3.2: 850 nm Ethernet Distance (m)
        1G      10G     40G     100G
OM3     1000    300     100     100
OM4     1000    550     150     150

TABLE 3.3: 850 nm Fibre Channel Distance (m)
        4G      8G      16G
OM3     380     150     100
OM4     480     190     125
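As a worked example of applying these tables, the short sketch below (an illustrative helper, not a Corning tool; the reach values are transcribed from Tables 3.2 and 3.3) flags whether a planned channel length is within the standardized 850 nm reach:

```python
# 850 nm multimode reach limits (m), from Tables 3.2 and 3.3.
ETHERNET_REACH = {
    "OM3": {"1G": 1000, "10G": 300, "40G": 100, "100G": 100},
    "OM4": {"1G": 1000, "10G": 550, "40G": 150, "100G": 150},
}
FIBRE_CHANNEL_REACH = {
    "OM3": {"4G": 380, "8G": 150, "16G": 100},
    "OM4": {"4G": 480, "8G": 190, "16G": 125},
}

def supports(table: dict, fiber: str, rate: str, length_m: float) -> bool:
    """True if a channel of length_m is within the standardized reach."""
    return length_m <= table[fiber][rate]

# Example: a 120 m backbone run works at 40G on OM4 but not on OM3.
print(supports(ETHERNET_REACH, "OM3", "40G", 120))  # False
print(supports(ETHERNET_REACH, "OM4", "40G", 120))  # True
```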

The expectation is that implementing an OM3/OM4 physical layer solution should provide a 10-15 year service life without recabling.

Cable, connectors, hardware and electronics are now readily available to support usage of these 50 µm fibers. The technical and commercial community has recognized the benefits of OM3/OM4, as the fibers have been adopted into IEEE 40/100G and Fibre Channel 4/8/16G transmission standards as well as the TIA-568-C.3 structured cabling and connectivity standards. The 850 nm wavelength now offers, and will continue to offer, the most economical solution for data center applications based on electronic costs. The data rate scalability of OM3 and OM4 fibers provides the ultimate media solution for data center managers to ensure their structured wiring systems support legacy as well as future application needs.

Fiber vs. Copper
A well-planned structured cabling system in the data center will support the applications of today as well as those of the future. Corning Cable Systems' data center solutions do just that, allowing today's systems to grow gracefully as requirements change without concern of obsolescence. Fiber is the most attractive medium for structured cabling because of its ability to support the widest range of applications at the fastest speeds over the longest distances. Additionally, fiber has a number of intrinsic advantages beneficial to any application at any speed. Fiber is immune to electromagnetic interference (EMI) and radio frequency interference (RFI), so its signals cannot be corrupted by external interference. Just as it is immune to EMI from outside sources, fiber produces no electronic emissions; it is therefore not a concern of the Federal Communications Commission (FCC) or European emissions regulations. Cross-talk does not occur in fiber systems, and there are no shared-sheath issues as with multipair unshielded twisted-pair (UTP) copper cables. Also, standards activity has shown evidence of alien cross-talk between UTP copper cables that cannot be corrected by electronic digital signal processing (DSP). Because all-dielectric cables, as well as the new dielectric armored cables, can be used, grounding concerns can be eliminated and lightning effects dramatically reduced. Optical fibers are virtually impossible to tap, making fiber the most secure media type. Most importantly, optical bandwidth cannot be adversely affected by installation conditions, in contrast to the copper system impairments that an installer can introduce.

10G Electronics and Cooling: The Optical Advantage
10G optical switch electronics and server adapter cards require less power to operate than 10G UTP copper. The high insertion loss of copper cables at the extended frequency range needed to support 10G, plus the required electronic digital signal processing (DSP) noise-reduction circuitry, means that energy consumption will inevitably be higher than that of low-loss fiber interconnects. 10GBASE-SR SFP+ optical transceivers consume a maximum of 1.0 watt (typically 0.5 watt) per port, compared to 6-8 watts per port for a 10GBASE-T copper switch. SFP+ chassis line cards are intended to support up to 48-64 ports, while 10GBASE-T cards are expected to have 8-16 ports. 10GBASE-SR server adapter cards typically use less than nine watts to service up to 300 m, while announced 10GBASE-T cards use 24 watts to service up to 100 m. Experts have stated that 10GBASE-T over CAT 6A or CAT 7 twisted-pair can extend up to 100 m, but power requirements hinder its cost-effectiveness. A 10G optical system requires far fewer switches and line cards for bandwidth capability equivalent to a 10G copper system: one optical 48-port line card equals three 16-port copper line cards. Fewer switches and line cards translate into less energy consumption for electronics and cooling, minimizing operational expenses and supporting environmental initiatives. See Figure 3.1. As with the 10G copper switches, the 10G copper server adapter card's high power consumption and cooling needs result in a higher operational expense. The industry expectation for 10GBASE-T is that three to four watts per port will be the lowest achievable power consumption.

Figure 3.1: Electronics and Cooling Savings. 10 Gbps example: electricity cost savings of roughly 70-90 percent by using 10G optical instead of 10G copper, plotted against the number of 10G ports (48 to 288). As network speed grows, optical fiber offers significant advantages over copper.

High fiber density, combined with the small diameter of optical cable, maximizes raised-floor pathway and space utilization for routing and cooling. Optical cables also offer superior pathway usage when routed in aerial cable trays. A 0.7-inch diameter optical cable can contain 216 fibers to support 108 10G optical circuits; the 108 copper cables required to provide equivalent capability would have a 5-inch bundle diameter. The 10G twisted-pair copper cable's physical design contributes to major patch panel and electronic cable management problems. The larger CAT 6A outer diameter impacts conduit size and fill ratio, as well as cable management, due to its physical size and increased bend radius. Copper cable congestion in pathways increases the potential for damage to electronics due to air-damming effects and interference with the ability of ventilation systems to remove dust and dirt. Optical cable offers significantly better system density and cable management and minimizes airflow obstructions in the rack and cabinet for better cooling efficiencies. See Figures 3.2 and 3.3.

Figure 3.2: Optical Cable (left) vs. Equivalent Copper Cabling | Photo LAN874
Figure 3.3: Copper Cable Management
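The per-port wattages quoted in the 10G electronics discussion above are enough to reproduce the flavor of Figure 3.1. A minimal sketch, assuming an example electricity rate and the low end of the quoted copper wattage range (both are assumptions; the published curve also folds in switch, line card and cooling counts, which is why it varies with port count):

```python
# Annual electricity cost of 10G ports: optical SFP+ vs. 10GBASE-T copper.
OPTICAL_W_PER_PORT = 1.0   # 10GBASE-SR SFP+ maximum from the text (0.5 W typical)
COPPER_W_PER_PORT = 6.0    # low end of the 6-8 W 10GBASE-T range from the text
USD_PER_KWH = 0.10         # assumed example utility rate

def annual_cost(ports: int, watts_per_port: float) -> float:
    kwh = ports * watts_per_port * 24 * 365 / 1000
    return kwh * USD_PER_KWH

for ports in (48, 96, 144, 192, 240, 288):
    optical = annual_cost(ports, OPTICAL_W_PER_PORT)
    copper = annual_cost(ports, COPPER_W_PER_PORT)
    print(f"{ports} ports: save {100 * (1 - optical / copper):.0f}% "
          f"(${copper - optical:,.0f}/yr) before cooling overhead")
```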
End Equipment and Optical Fiber Distance Capabilities
Span length, application and data rate are the determining factors in the selection of fiber type and end equipment. All must be considered in order to make the best overall selection. OM3 and OM4 fibers are appropriate for the majority of data center applications, as the associated optoelectronic transmission equipment is usually more economical than that for single-mode systems. Analysis of a specific system design will lead to the selection of the most suitable fiber type and end equipment, after which detailed consideration of the optical parameters for both the fiber and the system is necessary. The following is a discussion of the nature and meaning of those optical parameters with which the designer should be familiar.

Transceivers
The transceiver is an electronic device that receives an electrical signal, converts it into a light signal and launches the signal into a fiber; it likewise receives a light signal and converts it back into an electrical signal. For data rates ≥ 1G, a multimode transceiver uses an 850 nm VCSEL, while a single-mode transceiver uses a 1310 nm Fabry-Perot (FP) or distributed feedback (DFB) laser. Transceivers operating at 1G and higher data rates migrated from light-emitting diodes (LEDs) to laser sources due to the LED modulation rate limitation and wide spectral width; for systems operating at data rates greater than 622 Mb/s, lasers must be used. VCSEL fabrication and packaging costs are significantly less than those of a single-mode FP/DFB laser: the relative cost of an FP/DFB transceiver is typically 2-3 times that of an 850 nm transceiver. See Figure 3.4. The 850 nm VCSEL transceiver provides the optimum technical and economic solution for high-bit-rate (≥ 1 Gb/s) operation, which makes OM3/OM4 the most deployed optical fibers in the data center today.

Figure 3.4: Relative Cost of Single-Mode (1300 nm) vs. Multimode (850 nm) 10G Transceivers, 2004-2009

SFP/SFP+ are the dominant transceivers used for data rates from 1G to 16G (see Figure 3.5). Industry-standard multisource agreements (MSAs) have defined the transceiver performance attributes (wavelength, spectral width, Tx power, Rx power, etc.) to ensure interoperability and reliability. The SFP/SFP+ transceiver performance attributes are incorporated into the Ethernet and Fibre Channel standards to specify system requirements and capabilities. Most transceivers interface with LC duplex connectors.

Figure 3.5: SFP/SFP+ Transceiver | Drawing ZA-3674

The QSFP transceiver will be used for 40G OM3/OM4 Ethernet parallel optics; its optical connector interface will be the 12-fiber MPO-style connector. The CXP transceiver will be used for 100G Ethernet parallel optics; its optical connector interface will be the 24-fiber MPO-style connector. Similar to the SFP/SFP+ transceiver, the QSFP and CXP transceiver performance attributes are incorporated into the 40/100G Ethernet standard to specify system requirements and capabilities.
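The transceiver-to-connector pairings above can be captured in a simple lookup. An illustrative sketch (the dictionary and function names are hypothetical conveniences; the pairings themselves come from the text):

```python
# Transceiver form factor and connector interface by application (from the text).
TRANSCEIVER_INTERFACE = {
    "1G-16G serial (Ethernet/Fibre Channel)": ("SFP/SFP+", "LC duplex"),
    "40G Ethernet parallel optics": ("QSFP", "12-fiber MPO"),
    "100G Ethernet parallel optics": ("CXP", "24-fiber MPO"),
}

def connector_for(application: str) -> str:
    form_factor, connector = TRANSCEIVER_INTERFACE[application]
    return f"{form_factor} with {connector} interface"

print(connector_for("40G Ethernet parallel optics"))  # QSFP with 12-fiber MPO interface
```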
OM3/OM4 EMBc
For systems operating at data rates greater than 1 Gb/s, the TIA/EIA-455-220 and IEC 60793-1-49 bandwidth test methods are used to measure the fiber effective modal bandwidth (EMB); these methods include a series of small-spot-size launches (approximately 5 µm) indexed across the fiber core. Measurements are made of the output pulse time delay and mode coupling power of the fiber as a function of radial position. These measurements are referred to as differential mode delay (DMD) measurements. Data from these measurements can be analyzed by two methods to determine whether the fiber meets the EMB requirement of a specific application.

The first method for translating DMD measurements into an EMB prediction is commonly referred to as the DMD mask approach, where the leading and trailing edges of each pulse are recorded and normalized in power relative to each other. This normalization approach reduces the raw DMD data to focus exclusively on time delay, where the overall fiber delay is calculated as the difference between the times for the slowest trailing edge and the fastest leading edge in units of ps/m. For a fiber to be determined as meeting the required minimum value of 2000 MHz•km EMB for OM3 at 850 nm, the DMD data must conform to one of six templates, or masks, and must not show a DMD measurement greater than 0.25 ps/m for any of four specified radial offset intervals. For a fiber to be determined as meeting the required minimum value of 4700 MHz•km EMB for OM4 at 850 nm, the DMD data must conform to one of three templates and must not show a DMD measurement greater than 0.11 ps/m for any of four specified radial offset intervals. It should be noted that this method provides only a pass/fail estimation against the 2000 MHz•km and 4700 MHz•km requirements.

The newer method for predicting EMB from DMD data is called calculated effective modal bandwidth (EMBc). As mentioned, the DMD measurement characterizes a single fiber's modal performance in high detail, including both modal time delay and coupling as a function of radial position. With EMBc, the fiber's performance is then characterized against a series of 10 sources which are chosen to span the range of 10,000 encircled-flux-compliant VCSELs. Conceptually, this is done by weighting the individual DMD launches to approximate the radial power intensity distribution of any desired VCSEL. Those weightings are then combined with the raw DMD data to construct an output pulse for that fiber/laser combination. The resultant output pulse can then be used to calculate EMB in units of MHz•km.

To ensure field performance, EMB is calculated for 10 actual laser sources which have been determined to represent the performance extremes of all encircled-flux-compliant VCSELs. Of these 10 sources, the one yielding the lowest EMBc value is taken to represent the minimum expected performance level of all standards-compliant VCSELs, and the EMBc value associated with this source is therefore referred to as the minimum calculated EMB, or minEMBc.

The primary advantage of the minEMBc method over the DMD mask method is that the minEMBc method guarantees standards-compliant fiber performance under worst-case source/fiber interactions while providing an actual value of bandwidth in the scalable units of MHz•km. The minEMBc value can then be used to calculate bit rates and link lengths for systems requiring EMB values other than the minimum 2000 MHz•km. Corning Cable Systems recommends that multimode fiber intended for current or future use at data rates ≥ 1 Gb/s be specified according to minEMBc values rather than the pass/fail performance indicated by the DMD mask method.
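To make the minEMBc procedure concrete, here is a toy numerical sketch. It assumes synthetic Gaussian DMD pulses and random source weightings in place of real TIA/EIA-455-220 measurement data; only the weight-combine-take-the-minimum structure mirrors the method described above, and the bandwidth proxy is deliberately simplified.

```python
import numpy as np

rng = np.random.default_rng(0)
n_radial, n_time = 25, 512
t = np.linspace(-1.0, 1.0, n_time)              # time axis, ps/m
# Synthetic DMD data: one output pulse per radial launch offset,
# each shifted by a simulated modal delay.
delays = rng.uniform(-0.1, 0.1, n_radial)
dmd_pulses = np.exp(-((t[None, :] - delays[:, None]) / 0.2) ** 2)

def embc(weights: np.ndarray) -> float:
    """Bandwidth proxy: a narrower weighted output pulse means higher EMB."""
    pulse = weights @ dmd_pulses                # weighted sum of DMD launches
    pulse /= pulse.sum()
    rms_width = np.sqrt(np.sum(pulse * t**2) - np.sum(pulse * t) ** 2)
    return 1.0 / rms_width                      # arbitrary units

# Ten synthetic VCSEL radial power distributions standing in for the
# sources that span the encircled-flux-compliant extremes.
sources = rng.dirichlet(np.ones(n_radial), size=10)
min_embc = min(embc(w) for w in sources)        # worst-case source defines minEMBc
print(f"minEMBc (arbitrary units): {min_embc:.2f}")
```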
