Data Centre Solutions
Planning Considerations for Data Centre Facilities Systems
Data Centre Solutions: Facilities Planning
Extensive research by PANDUIT Laboratories continues to address key areas of the data centre. These areas include
both network and facilities infrastructures, and effective server and storage configurations. This research enables
PANDUIT to deliver comprehensive data centre solutions for markets from finance and health care to government and
education. This white paper describes the elements necessary to develop a reliable data centre facilities infrastructure that
can grow with your business.
Introduction
Data centres are at the core of business activity, and the growing transmission speed and density of active
data centre equipment is placing ever-increasing demands on the physical layer. Enterprises are
experiencing enormous growth rates in the volume of data being moved and stored across the network.
The deployment of high-density blade servers and storage devices in the data centre to handle these
workloads has resulted in spiraling rates of power consumption and heat generation.
The implementation of a robust, integrated infrastructure to handle these demands and support future data
centre growth is now more critical than ever. This white paper shows how business priorities can be
balanced with power, cooling, and structured cabling practicalities to develop an integrated comprehensive
data centre support system. This “capacity planning” process optimises network investment by ensuring
reliable performance now and the flexibility to scale up for future business and technology requirements.
• Capacity Planning: Decisions regarding data centre design and future growth increasingly
centre on power, cooling, and space management. The collective awareness of these issues is
defined as “capacity planning”. The effective deployment and management of these core
resources allows the data centre to operate efficiently and scale up as required.
• Budget: The high cost of operating a data centre is a reality in today’s competitive business
world. Facilities managers have responsibility for a substantial portion of the annual data centre
operating costs. Effective deployment of facilities infrastructure resources is directly connected to
annual cost savings and lowest total cost of ownership (TCO).
• Aesthetics: Traditionally the focus of the facilities manager has been, “Is it in place and
functional?” However, the data centre represents a very high financial investment, with value
residing in both functionality and aesthetics. Today's data centres have become showcase areas that present customers with a visually appealing reflection of the company's image. In this sense,
facilities managers are expected to maintain an infrastructure that is highly professional in
appearance.
Business requirements ultimately drive these and all data centre planning decisions. On a practical level,
these requirements directly impact the type of applications and Service Level Agreements (SLAs) adopted
by the organisation.
Data centre availability is commonly classified into four Tiers, as defined in TIA-942. The Tiers progress upward as single points of failure are eliminated. Data centres achieving Tier II contain
at least one set of fully redundant capacity components (i.e., N+1 capacity) such as uninterruptible power
supply (UPS) units, and cooling and chiller units. Tier III data centres arrange redundant capacity into
multiple distribution pathways, including power and cooling functions, and Tier IV systems extend fault
tolerance to any and every system that supports IT operations.
As a consequence, Tier level may impact the amount of data centre floor space required. As single points of
failure are eliminated, more facilities equipment is required to support redundant capacity and distribution.
It also is important to note that the data centre itself is rated only as high as the weakest subsystem that
will impact site operations. For example, an owner of a Tier II data centre could upwardly invest in dual-
powered computer hardware and Tier III electrical pathways to enhance uptime; however, the rating of the
data centre would remain at Tier II.
Power Systems
The typical data centre power distribution system includes generators, uninterruptible power supply (UPS)
systems, batteries, transfer switches, surge suppressors, transformers, circuit breakers, power distribution
units (PDUs), and power outlet units (POUs) (see Figure 2). POUs have been historically referred to as
power strips, plug strips, and PDUs. UPS systems are coupled with batteries for energy storage, to ensure
that active data centre equipment is not subjected to power interruptions or power line disturbances. POUs
are deployed within racks and cabinets to distribute power to active equipment.
Both modular and fixed capacity power systems are available to help facilities managers maximise power
delivery per square metre of data centre space. Modular power systems often are deployed in smaller data
centres. They can easily grow and adapt to changing power requirements, and can be run at lower
capacity to save energy whenever possible. Furthermore, a power system that utilises standardised, hot-
swappable, user-serviceable modules can reduce mean time to repair (MTTR) intervals for improved
reliability. However, modular power systems are deployed in racks alongside active equipment; these
systems can take up considerable rack space and add an unacceptable amount to the cooling load. Fixed
capacity systems typically are best suited for medium to large-size data centres, which benefit from the
economies of scale that can be achieved with larger stand-alone power supply units.
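As a rough illustration of this sizing exercise, the short Python sketch below estimates how many UPS modules an N+1 modular system would require for a given IT load. The 20 kW module rating and the example loads are assumptions chosen for illustration only, not PANDUIT recommendations.

import math

# Illustrative sketch: sizing an N+1 modular UPS configuration for a given IT load.
# The 20 kW module rating and the example loads are assumptions, not vendor figures.
def modules_required(it_load_kw: float, module_kw: float, redundancy: int = 1) -> int:
    """Modules needed to carry the load (N) plus 'redundancy' spare modules (N+1 by default)."""
    n = math.ceil(it_load_kw / module_kw)   # modules required to carry the load
    return n + redundancy                   # add the redundant module(s)

for load_kw in (40, 80, 120):               # example IT loads as the data centre grows
    print(f"{load_kw} kW load -> {modules_required(load_kw, module_kw=20)} x 20 kW modules (N+1)")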
Facilities managers can work with IT managers to implement power-saving techniques in active equipment
areas. Common techniques include deploying energy-efficient servers and high-efficiency power supplies,
practising server consolidation and virtualisation, and decommissioning inactive servers. Also, the U.S. EPA
currently is developing a new product specification for enterprise servers, and has submitted a draft report
to Congress on data centre and server energy efficiency.
A power system with mistake-proofing features (such as modular, intelligent, and/or
pluggable design) and reduced single points of failure (such as bypass functionality) improves system
availability by streamlining maintenance tasks and minimising unplanned downtime.
Many redundancies can be designed into a power system to meet N+1 reliability requirements, including
power conversions, paralleling controls, static transfer switches and bypass connections. PDUs are often
connected to redundant UPS units for higher availability in case one UPS fails or is taken down for
maintenance service or repair. In addition to UPS redundancy, the power supplies in the servers
themselves are often redundant, with two or even three power supplies in each server box capable of
powering the server completely if one or more of them fails.
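The benefit of such redundancy can be approximated with a simple availability model. The sketch below is a simplification that assumes independent failures, and the 99.9% availability figure is an illustrative example; it shows how placing components in parallel reduces the probability that the load loses power.

# Simplified availability model for redundant power components.
# Assumes independent failures; the 99.9% figure is an illustrative example.
def parallel_availability(*availabilities: float) -> float:
    """A parallel set is unavailable only when every redundant component is down at once."""
    unavailability = 1.0
    for a in availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

single_ups = 0.999                              # one UPS at 99.9% availability
dual_ups = parallel_availability(0.999, 0.999)  # two UPS units feeding the same load
print(f"Single UPS:         {single_ups:.4%}")
print(f"Redundant UPS pair: {dual_ups:.6%}")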
Raised Floor
The installation of a raised floor is often used to distribute cooled air and manage the cabling found within
data centres, and to enhance the appearance of the room. The bulk of secondary system components (i.e.,
pipes, cables, power cords) can be deployed safely under the raised floor, in order to maximise the
available space above-floor for cabling and active equipment subject to moves, adds and changes (MACs).
All data centre stakeholders should work together to ensure that the different underfloor infrastructure
components integrate smoothly along pre-designated routes.
When deploying a raised floor, its height, anticipated loading requirements, and seismic requirements all
must be carefully considered. Current trends and best practices indicate that a raised floor height of 609.6-
914.4 mm enables adequate cooling capacity (see Figure 3). Sites with shallow raised floors will struggle to
deliver sufficient air volume, and balanced airflow will be difficult to achieve.
In addition, certified performance data on the mechanical strength of the floor should be provided by the
raised floor manufacturer in accordance with the Ceilings and Interior Systems Construction Association’s
(CISCA’s) “Recommended Test Procedures for Access Floors”, to verify that the floor will meet data centre
design requirements.
Grounding and Bonding
The primary purpose of the grounding and bonding system is to create a robust path for electrical surges
and transient voltages to return either to their source power system or to earth. Lightning, fault currents,
circuit switching (motors on and off), activation of surge protection devices (SPD) and electrostatic
discharge (ESD) are common causes of these electrical surges and transient voltages. An effective
grounding and bonding system can minimise or eliminate the detrimental effects of these events.
According to standards TIA-942, J-STD-607-A-2002, and IEEE 1100 (the Emerald Book), a properly
designed grounding system as shown in Figure 4 has the following characteristics:
1. Is intentional: each connection must be engineered properly, as the grounding system is only as
reliable as its weakest link
2. Is visually verifiable
3. Is adequately sized to handle fault currents
4. Directs damaging currents away from sensitive electronic equipment
5. Has all metallic components in the data centre bonded to the grounding system (e.g., equipment,
racks, cabinets, ladder racks, enclosures, cable trays, water pipes, conduit, building steel, etc.)
6. Ensures electrical continuity throughout the structural members of racks and cabinets
7. Provides grounding path for electrostatic discharge (ESD) protection wrist straps
In addition to meeting these standards, all grounding and bonding components should be listed with a
nationally recognised test lab (such as Underwriters Laboratories, Inc.) and must adhere to all local
electrical codes. The PANDUIT™ STRUCTUREDGROUND™ System for data centre grounding provides robust
connections that have low resistance, are easy to install, and are easily checked during yearly inspections.
Supplemental liquid cooling systems are typically served by a coolant distribution unit (CDU), which is the control interface between building chilled water and the equipment water loop.
Underfloor space should accommodate flexible hoses that transport water from the CDU to the cooling
devices.
Labelling Considerations
A properly identified and documented infrastructure allows managers to quickly reference all
telecommunication and facility elements, reduce maintenance windows, and optimise the time spent on
MACs. TIA-942 recommends that data centre identification start with the floor tile grid system. Each 609 x
609 mm floor tile is assigned an alphanumeric grid identifier, so that a lettered system for rows (AA, AB,
AC, etc.) and a numbered system for columns (01, 02, 03, etc.) can be used to reference any
given component in the data centre by specific location (e.g., a rack located at grid location AB03). Grid
identifiers can range from computer printable adhesive labels to engraved marking plates.
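The grid convention described above is straightforward to automate. The following Python sketch (the helper names are illustrative, not part of TIA-942) generates an identifier such as AB03 from zero-based row and column indices, which can be useful when producing label print runs or populating an asset database.

import string

# Illustrative helpers for the lettered-row / numbered-column grid convention described above.
def row_letters(index: int) -> str:
    """Map a zero-based row index to two letters: 0 -> 'AA', 1 -> 'AB', ..., 26 -> 'BA' (covers AA-ZZ)."""
    first, second = divmod(index, 26)
    return string.ascii_uppercase[first] + string.ascii_uppercase[second]

def grid_id(row_index: int, col_index: int) -> str:
    """Combine row letters with a zero-padded column number, e.g. (1, 2) -> 'AB03'."""
    return f"{row_letters(row_index)}{col_index + 1:02d}"

print(grid_id(1, 2))   # AB03, the example location used in the text above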
A thorough identification strategy will include the following: labels for cabling infrastructure (cables, panels,
racks, cabinets, and pathways); labels for active equipment (switches, servers, storage); labels for cooling
pipe, electrical, and grounding systems; and floor grid markers, voltage markers, firestops, and other safety
signage. TIA/EIA-606-A is the standard for labelling and administration of structured cabling, and TIA-942
Annex B provides supplemental recommendations for data centres. “TIA/EIA-606-A Labelling Compliance”
discusses how to implement standards-based labelling solutions.
Cooling System
Based on data collected by the Uptime Institute, it is estimated that the cost to power the cooling system will be approximately equal to the cost of powering active equipment [2]. Therefore, the cooling system
requires careful design and constant oversight to maintain an acceptable level of performance at a
reasonable cost. A worksheet to help you estimate the cooling load of your data centre is available in the
PANDUIT white paper “Facility Considerations for the Data Centre”.
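For readers who want a feel for the arithmetic involved, the sketch below outlines a very simple cooling-load estimate. The categories and rule-of-thumb factors are assumptions for illustration only; they are not taken from the PANDUIT worksheet.

# Rough cooling-load estimate: all heat released in the room must be removed by the cooling system.
# The UPS efficiency, lighting load, and occupancy figures below are illustrative assumptions.
def cooling_load_kw(it_load_kw: float,
                    ups_efficiency: float = 0.92,
                    lighting_kw: float = 2.0,
                    occupants: int = 4,
                    heat_per_person_kw: float = 0.1) -> float:
    ups_loss_kw = it_load_kw * (1.0 / ups_efficiency - 1.0)  # UPS conversion losses appear as heat
    people_kw = occupants * heat_per_person_kw
    return it_load_kw + ups_loss_kw + lighting_kw + people_kw

print(f"Estimated cooling load for a 100 kW IT load: {cooling_load_kw(100.0):.1f} kW")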
A perforated tile with 25% open area and a cool air throughput rate of 4.2-5.7 cubic metre per minute
(cmm) can disperse about 1 kW of heat. For loads much greater than that, or for heat loads that increase
with active equipment loads over the life of the data centre, several options are available to expand cooling
capacity. These options include increasing tile open area (from 25% to 40-60%), minimising air flow leaks,
and adding computer room air conditioner (CRAC) capacity. Also, supplemental cooling units such as chilled water racks and
ceiling-mounted air conditioners/fans can be deployed as active equipment is added and refreshed.
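The "about 1 kW per tile" figure can be sanity-checked with the sensible-heat relation (heat removed = air density x volumetric flow x specific heat x temperature rise). The short sketch below does this calculation; the air properties and the 12 °C difference between supply and return air are assumptions chosen only to illustrate the order of magnitude.

# Sanity check of the ~1 kW per perforated tile figure using sensible heat:
# heat removed = air density x volumetric flow x specific heat x temperature rise.
# Air properties and the 12 degC supply/return difference are assumptions.
AIR_DENSITY_KG_M3 = 1.2
AIR_SPECIFIC_HEAT_J_KGK = 1005.0

def heat_removed_kw(airflow_cmm: float, delta_t_c: float = 12.0) -> float:
    airflow_m3_s = airflow_cmm / 60.0
    watts = AIR_DENSITY_KG_M3 * airflow_m3_s * AIR_SPECIFIC_HEAT_J_KGK * delta_t_c
    return watts / 1000.0

for cmm in (4.2, 5.7):   # the tile throughput range quoted above
    print(f"{cmm} cmm -> about {heat_removed_kw(cmm):.2f} kW of heat removed")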
One useful tool for analysing airflow through the data centre is computational fluid dynamics (CFD)
software. Basic CFD programs simulate airflow paths and calculate the flow rates and air temperatures
through perforated panels so facility managers can determine whether equipment cabinets are being
supplied with enough cold air. Advanced CFD programs also model temperature distributions in the spaces above the floor.
Strategies for reducing bypass air (and thereby increasing uptime by preventing active equipment from
overheating) include installing blank panels in unused rack spaces and closing cable cutouts in the raised
floor with air sealing devices. These techniques help promote airflow along preferred routes while keeping
underfloor cables accessible. Furthermore, air sealing grommets can provide abrasion resistance for
cables as they pass by the sharp edges of the raised floor tiles and provide a conductive path from the
cable to the floor for electrostatic discharge. Taken together, these thermal management measures can lead to significant cost savings.
Cabling Pathways
Structured cabling is expected to sustain growth and support changing equipment over the 10-15 year life
cycle of the data centre. Effective cable management is considered key to the reliability of the data centre
network infrastructure. However, the relationship between cabling and facilities systems is often
overlooked. This relationship centres on the successful deployment of structured cabling along pathways
that complement facilities systems. Effective cable pathways protect cables to maximise network uptime,
and showcase your data centre investment.
The key capacity planning issue is an accurate estimation of cable count and volume in order to specify
pathway size. For initial deployments, maximum fill should be 25-40% to leave room for future growth. A
calculated fill ratio of 50-60% will physically fill the entire pathway due to spaces between cables and
random placement. The PANDUIT online fill calculator tool can help you determine the size of pathway
needed for a specified cable quantity and diameter.
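The arithmetic behind such a fill estimate is simple, as the sketch below shows. This is not the PANDUIT calculator; the cable diameter and pathway dimensions are example assumptions.

import math

# Simplified pathway fill estimate: total cable cross-section versus pathway cross-section.
# The 6.1 mm cable diameter and 300 mm x 100 mm wire basket are example assumptions.
def fill_ratio(cable_count: int, cable_diameter_mm: float,
               pathway_width_mm: float, pathway_depth_mm: float) -> float:
    cable_area = cable_count * math.pi * (cable_diameter_mm / 2.0) ** 2
    pathway_area = pathway_width_mm * pathway_depth_mm
    return cable_area / pathway_area

ratio = fill_ratio(300, 6.1, 300, 100)
print(f"Calculated fill: {ratio:.0%}")   # target 25-40% for initial deployment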
The TCO of cable routing systems also must be considered before making final purchasing decisions. The
costs associated with installation, maintenance, accessibility, physical protection and security should all be
considered as components of TCO, as should any features of the routing system that promote a low cost of ownership.
One effective pathway strategy is to use overhead fibre optic cable routing systems to route horizontal fibre
cables, and use underfloor wire baskets for horizontal copper and backbone fibre cables. This strategy
offers several benefits:
• The combination of overhead and underfloor ensures physical separation between the copper
and fibre cables, as recommended in TIA-942
• Overhead pathways such as the PANDUIT™ FIBERRUNNER™ System protect fibre optic jumpers,
ribbon interconnect cords, and multi-fibre cables in a solid, enclosed channel that provides bend
radius control, and the location of the pathway is not disruptive to raised floor cooling (see Figure
6)
• Underfloor pathways hide the bulkiest cabling from view; also, copper cables can be loosely
bundled to save installation cost, and each underfloor pathway can serve two rows of equipment
Underfloor cabling pathways should complement the hot aisle/cold aisle layout to help maintain cool airflow
patterns. TIA-942 and TIA/EIA-569-B state that cable trays should be specified for a maximum fill ratio of 50%, up to a maximum inside depth of 150 mm. TIA-942 further recommends that cable trays for data cables should
be suspended from the floor under hot aisles, while power distribution cables should be positioned in the
cold aisles under the raised floor and on the slab.
This arrangement offers several benefits:
• Pathways for power and twisted pair data cables can be spaced as far as possible from each other (e.g., 152-457 mm), to minimise longitudinal coupling (i.e., interference) between cables
• Copper and fibre cable pathways are suspended under hot aisles, the direction toward which
most server ports face
• Cable pathways do not block airflow to the cold aisles through the perforated tiles.
A variety of cable bundling solutions are effective in high-density data centre cabling environments (see
Figure 7). For example, PANDUIT hook & loop cable ties can be used to bundle cables across overhead
areas and inside cabinets and racks. They are adjustable, releasable, reusable, and soft, enabling
installers to deploy bundles quickly in an aesthetically pleasing fashion as well as to address data centre
scalability requirements.
PANDUIT cable rack spacers are used with ladder racks as a stackable cable management accessory that
helps ensure proper cable bend radius and minimise stress on cable bundles. Also, PANDUIT waterfall
accessories provide bend radius control as cables transition from ladder rack or conduit to cabinets and
racks below.
Figure 7. Wire Basket with Hook & Loop Cable Ties; Ladder Rack with Stackable Cable Rack Spacers and Waterfall Accessories.
A combination of passive cooling solutions also helps disperse heat and direct airflow in server areas.
Cabling in these areas is less dense than in switching areas, with more power cables but fewer data
cables. Filler panels can be deployed in horizontal and vertical spaces to prevent cold air bypass and
recirculation of warm air through the cabinet and equipment. Use of wider cabinets and effective cable
management can reduce static pressure at the rear of the cabinet by keeping the server exhaust area free
from obstruction. Also, door perforations should be optimised for maximum cool airflow to equipment
intakes. Deployment of racks and cabinets should follow standards established in TIA/EIA-310-D.
Conclusion
Next-generation active equipment is drawing more power, generating more heat, and moving more bits in
the data centre than ever before. Essential systems like power and cooling must work closely with
structured cabling to achieve an integrated facilities infrastructure that can meet aggressive uptime goals
and survive multiple equipment refreshes.
Successful capacity planning of these systems is a process that requires the combined efforts of all data
centre stakeholders. Facility managers in particular are in a unique position to survey the overall data
centre planning landscape to identify power, cooling, grounding, pathway, and routing strategies that
achieve lowest TCO. Communication between facilities managers, IT managers, and senior executives will
ensure that business requirements are balanced with power, cooling, and cabling practicalities to craft a
robust, reliable, and visually attractive data centre.
References
1. Koomey, Jonathan G. 2007. “Estimating Total Power Consumption by Servers in the U.S. and the
World.” Final report. February 15. Available at:
http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf.
2. Brill, Kenneth G. 2007. “Data Centre Energy Efficiency and Productivity.” The Uptime Institute: Santa
Fe, NM. Available at: http://www.upsite.com/cgi-bin/admin/admin.pl?admin=view_whitepapers.
About PANDUIT
PANDUIT is a leading, world-class developer and provider of innovative networking and electrical
solutions. We engineer and manufacture leading edge products that assist our customers in the
deployment of the latest technologies. Through PANDUIT Laboratories, our cross-functional R&D teams
analyse and resolve complex technical challenges to address specific industry and customer needs. Our
global expertise and strong industry relationships provide businesses unmatched service and support that
enable customers to move forward with their strategic objectives. For more than 50 years, our commitment
to innovation, quality and service has created competitive advantages to earn customer preference.
www.panduit.com · [email protected]