Data Center Design
This special report has been put together with the data center
engineer in mind. Our goal is to give you information to help
you design, specify, and manage data centers, along with other
important aspects of these mission critical facilities.
Thank you to the authors for sharing their expert knowledge in this collection of
articles, and a special thanks to the sponsors for making this Digital Report available.
CONTENTS
10 aspects to consider for data center clients
Of all the data center markets throughout North America, Northern Virginia
(NoVa) has consistently been the most active due in large part to its history.
(NoVa) has consistently been the most active due in large part to its history.
In the early 1990s, the region played a crucial role in the development of the
internet infrastructure, which naturally drew a high concentration of data
center operators who could connect to many networks in one place.
NoVa, and especially Loudoun County, Virginia, was made for data centers. With its
abundant fiber, inexpensive and reliable power, rich water supply in an area that does
not experience droughts, and attractive tax incentive programs, it’s ideal for many data
center clients.
There are more than 40 data centers located in Loudoun County, and the majority are
in “Data Center Alley,” which boasts a high concentration of data centers and supports
about half of the country’s Internet traffic. With more than 4.5 million sq ft of data cen-
ter space available and a projected 10 million sq ft by 2021, Ashburn, Virginia, data
centers continue to lead the pack. As Ashburn becomes the site of some of the indus-
try’s most progressive energy-saving initiatives and connectivity infrastructure develop-
ments, there’s no doubt that the region will continue to be a market to watch.
1. First cost
When businesses turn to a colocation provider (and the fiscal benefits of such strategies
are only increasing), first cost becomes a primary motivation. A recent study explained
that rising competition in the colocation sector is leading to price declines in
leasing and creating an extremely client-friendly environment.
2. Energy efficiency
Because power consumption directly drives operating costs, energy efficiency is a big
concern for many businesses. Choosing a data center that integrates the latest tech-
nologies and architecture can help minimize environmental impacts. Innovations like
highly efficient cooling plants and medium-voltage electrical distribution systems
can help reduce the amount of energy needed to power the building, resulting in
a lower Power Usage Effectiveness (PUE).
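PUE itself is just the ratio of total facility energy to IT equipment energy, so the effect of a more efficient cooling plant or distribution system can be estimated directly. A minimal sketch in Python, using made-up annual figures rather than numbers from this article:

# PUE = total facility energy / IT equipment energy (dimensionless, always >= 1.0)
def pue(it_kwh, cooling_kwh, power_loss_kwh, other_kwh=0.0):
    """Power usage effectiveness for one measurement period."""
    total_kwh = it_kwh + cooling_kwh + power_loss_kwh + other_kwh
    return total_kwh / it_kwh

# Hypothetical annual energy for a small facility (illustrative values only)
print(round(pue(it_kwh=8_760_000, cooling_kwh=2_600_000, power_loss_kwh=700_000), 2))  # 1.38

Anything that shrinks the non-IT terms, such as a more efficient cooling plant or lower distribution losses, lowers the result.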
3. Reliability
If going offline for even a few minutes will have significant financial and business
repercussions, then reliability is a must. Employing MEP solutions that have backup
options available in case of a planned or unplanned outage protects against those
repercussions.
5. Redundancy
Providing continuous operations through all foreseeable circumstances, such as power
outages and equipment failure, is necessary to ensure a data center’s reliability. Redun-
dant systems that are concurrently maintainable provide peace of mind that the client’s
infrastructure is protected.
6. Maintainability
Clients want systems that are easily maintainable to be able to ensure their critical as-
sets are running at full speed. The system sections should be focused on operational
excellence in order to protect customers’ critical power load and cooling resources.
7. Speed to market
Clients’ leases usually hinge on having timely inventory. Clients expect a fast-tracked,
constructible design that is coordinated and installed in a timely manner. Through the
integrated design-build model, long lead items can be pre-purchased in parallel with
designs being completed and coordinated.
8. Scalability
Scalability and speed to market go hand in hand. It’s vital to understand that system
9. Sustainability
Customers benefit from solar power, reclaimed water-based cooling systems, water-
less cooling technologies, and much more. Water is becoming a larger consideration
with mechanical system selections. The enormous volume of water required to cool
high-density server farms with mechanical systems is making water management a
growing priority for data center operators. A 15-megawatt data center can use up to
360,000 gallons of water per day. Clients recognize that sustainability is not only good
for the environment, but is also good for their bottom line.
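One way to put such water figures in context is water usage effectiveness (WUE), expressed in liters of water per kilowatt-hour of IT energy. The sketch below assumes the 15-MW figure is IT load running around the clock, which is an assumption for illustration rather than a detail stated above:

GALLONS_TO_LITERS = 3.785

def wue(gallons_per_day, it_load_mw):
    """Water usage effectiveness: liters of water per kWh of IT energy."""
    liters_per_day = gallons_per_day * GALLONS_TO_LITERS
    it_kwh_per_day = it_load_mw * 1000 * 24  # kW times hours
    return liters_per_day / it_kwh_per_day

print(round(wue(360_000, 15), 2))  # ~3.79 L/kWh at full, constant load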
Mark A. Kosin is vice president, business team leader for the mid-Atlantic division at
Southland Industries. This article originally appeared on the Southland Industries blog.
Southland Industries is a CFE Media content partner.
Respondents
• Robert C. Eichelman, PE, LEED AP, ATD, DCEP, Technical Director, EYP Architecture and Engineering, Albany, N.Y.
• Karl Fenstermaker, PE, Principal Engineer, Southland Engineering, Portland, Ore.
• Bill Kosik, PE, CEM, LEED AP, BEMP, Senior Mechanical Engineer, exp, Chicago
CSE: What’s the No. 1 trend you see today in data center design?
Keith Lane: We’re seeing modularity, increased efficiency, and flexibility. Most data
center end users require all of these in their facilities.
Bill Kosik: There will still be a high demand for data centers. Technology will contin-
ue to evolve, morph, and change. The outlook for new or renovated data centers con-
tinues to be bullish with analysts looking at the industry doubling cloud strategies over
the next 10 years. So, trends will center around lower-cost, higher shareholder-return
data centers that need to address climate change and comply with data-sovereignty
laws.
Kenneth Kutsmeda: A trend that will become more popular in data centers is the
use of lithium batteries. One manufacturer of lithium batteries recently acquired UL
listings (UL 1642: Standard for Lithium Batteries and UL 1973: Standard for Batteries
for Use in Light Electric Rail (LER) Applications and Stationary Applications), and others
will soon follow. Unlike cell phones that use lithium cobalt oxide, which has a high-en-
ergy density and is prone to safety risks when damaged, data center batteries use a
combination of lithium manganese oxide and lithium nickel manganese cobalt oxide,
which has a lower energy density but longer lifecycle and inherent safety features. Ja-
cobs recently completed a project using lithium batteries. The lithium battery has a
more than 15-year lifecycle and requires no maintenance. Lithium batteries provide a
65% space savings and 75% weight reduction as compared with wet-cell batteries.
The lithium battery-management system provides the ability to isolate individual cabinets.
Rener: New metrics on reliability versus the old terms of availability. We are seeing
a move away from prescriptive terms on availability to reliability calculations based on
IEEE methods. Edge-cooling approaches (local to the server) have become more popular,
as has fluid-based cooling at the rack.
Yoon: We haven’t seen much in the way of large modular data centers (a la Microsoft
ITPACs). Those seem to be mostly limited to large cloud providers. Our clients typically
prefer traditional “stick-built” construction—simply because the scale associated with
modular data center deployment doesn’t make much sense for them.
Lane: With all of our modular data center projects, we continue to strive to increase
efficiency, lower cost, and increase flexibility. These challenges can be achieved with
good planning between all members of the design team and innovation with prefabrica-
tion. The more construction that can be completed and is repeatable in the controlled
environment of a prefabrication warehouse, the more money can be saved on the proj-
ect.
CSE: What are the newest trends in data centers in mixed-use build-
ings?
Fenstermaker: One emerging trend is recovering heat from the data center to heat
the rest of the building. This is most commonly employed by using hot-aisle air for the
air side of a dual-duct system or heating air intake at a central AHU. In addition, small-
er data centers are using direct-expansion (DX) fan coils connected to a central vari-
able refrigerant flow (VRF) system with heat-recovery capabilities to transfer heat from
the data center to other zones requiring heating.
Yoon: One of the newest trends is smaller, denser data centers with less redundancy.
Tumber: Large-scale data center deployments are not common in mixed-use build-
ings as they have unique requirements that typically can only be addressed in sin-
gle-use buildings. One of the main issues is with securing the data center. This is
because even the most comprehensive security strategy cannot eliminate non-data
center users from the premises. For small-scale deployments where security is not a
big concern, a common infrastructure that can serve both the needs of the data center
and other building uses is important to ensure cost-effectiveness. Emphasis is being
CSE: Have you designed any such projects using the integrated
project delivery (IPD) method? If so, describe one.
Tumber: I recently worked on a project that involved wholesale upgrades at the flag-
ship data center of a Fortune 500 company. The data center is located in the Midwest,
and IPD was implemented. The project was rife with challenges, as the data center
was live and downtime was not acceptable. In fact, a recent unrelated outage lasting
30 seconds led to stoppage of production worldwide and caused $10 million in losses.
We worked in collaboration with contractors. They helped with pricing, logistical sup-
port, equipment procurement, construction sequencing, and more during the design
phase. The project was a success, and all project goals were met.
CSE: What are the challenges that you face when designing data
centers that you don’t normally face during other building projects?
Robert C. Eichelman: With few exceptions, data centers serve missions that are
much more critical than those served by other building types. The infrastructure design,
therefore, requires a higher degree of care and thoughtfulness in ensuring that systems
support the mission’s reliability and availability requirements. Most data centers have
very little tolerance for disruptions to their IT processes, as interruptions can result in
disturbances to critical business operations, significant loss of revenue and customers,
or risk to public safety. Most often, the supporting mechanical, electrical, and plumb-
ing (MEP) systems need to be concurrently maintainable, meaning that each and every
Rener: Future flexibility and modular growth. IT and computer technologies are rap-
idly changing. Oftentimes during the planning and design of the facility, the owner has
not yet identified the final equipment, so systems need to be adaptable. Also, the own-
er will often have multiyear plans for growth, and the building must grow without dis-
ruption.
Tumber: The project requirements and design attributes of a data center are different
from other uses. The mission is to sustain IT equipment as opposed to humans. They
are graded on criteria including availability, capacity, resiliency, PUE, flexibility, adapt-
ability, time to market, scalability, cost, and more. These criteria are unique to data
centers, and designing a system that meets all the requirements can be challenging.
Lane: The shell in a colocation facility must be built with flexibility in mind. You must
provide all of the components for reliability and concurrent maintainability while allow-
ing the end user to tweak the data center to their own unique needs. Typically, the shell
design will stop either at the UPS output distribution panel or at the power distribution unit (PDU).
Tumber: The design of a colocation data center is influenced by its business mod-
el. Powered shell, wholesale colocation, retail colocation, etc. need to be tackled dif-
ferently. If the tenant requirements are extensive, the entire colocation facility can be
designed to meet their unique needs, i.e., built-to-suit. Market needs and trends typ-
ically dictate the designs of wholesale and retail data centers. These data centers are
designed around the requirements of current and target tenants. They offer varying
degrees of flexibility, and any unique or atypical needs that could push the limits of the
designed infrastructure are reviewed on a case-by-case basis.
Fenstermaker: The most important thing is to work with the colocation providers to
fully understand their rate structures, typical contract size, and the menu of reliability/
resiliency they want to offer to their clients in the marketplace. The optimal design solu-
tion for a retail colocation provider that may lease a few 10-kW racks at a time with Tier
4 systems, located in a high-rise in the downtown area of Southern California, is dras-
tically different than another that leases 1-MW data halls in central Oregon with Tier 2
systems. Engineers need to be fully aware of all aspects of the owner’s business plan
before a design solution can be developed.
Eichelman: For a colocation data center, it’s important to understand the types of
clients that are likely to occupy the space.
The tendency to allow unusual requirements to drive the design, however, should be
carefully considered or avoided, unless the facility is being purpose-built for a spe-
cific tenant. To optimize return on investment, it’s important to develop a design that
is modular and rapidly deployable. This requires the design to be less dependent on
equipment and systems that have long lead times, such as custom paralleling switch-
gear. Designs need to be particularly sensitive to initial and ongoing operational costs
that are consistent with the provider’s business model.
Metro Data Center (MDC) was established in 2011 in Dublin, Ohio. The com-
pany is a full-service hosting and data center that specializes in serving
the data needs of small- to mid-sized businesses (SMB), as well as local
schools and governmental agencies. MDC’s core values center around the
5 C’s: Colocation, Cloud, Connectivity, Consulting and Community.
As part of its commitment to the community, MDC built and maintains the city of Dub-
lin’s Dublink 100GB broadband network, which provides ultra-high-speed connectivity
to local users through 125 miles of fiberoptic lines that run underground throughout
Dublin.
Through several state and regional infrastructure projects, including the Smart Mobili-
ty Ohio Project, the Smart City Initiative, and the Transportation Research Center hub,
MDC has become central to the community.
From cloud hosted services and dedicated servers to the speed and connectivity of the
Dublink 100GB network, MDC offers its clients optimized, secure productivity backed
by on-site systems and 24/7 monitoring and maintenance. Comprehensive resources,
services and expertise assure optimal performance, and the ability to meet and resolve
clients’ needs within minutes. Metro Data Center is a carrier-neutral environment that
also offers a blended internet option to its SMB clients.
MDC also created a pod-based approach to improve energy efficiency and service
larger customers. Each pod utilizes as many as 20 cabinets and features cold-row
containment. Currently, six pods are in use and the facility has the capacity for 12 pod
deployments. This pod approach has achieved significant energy efficiency improvements,
with measurable kilowatt savings recorded quarter over quarter.
Starting at Zero
When MDC took over the space from a global software security development compa-
ny, it contained zero racks and 32,000 square feet of open space. MDC began to make
General Features
Metro Data Center is a top-tier provider with the physical and colocation capacity
to meet the needs of the thriving Dublin-area business and public community. The site
spans nearly 55,000 square feet, including 5,903 square feet of raised data center
floor space and 364 square feet of demarcation space, yielding 200 total rack spaces
available in server cabinets. The center is able to provide the highest-quality
infrastructure services available, including extensive redundancy, stability, and security.
Metro Data Center accomplishes this through data colocation with businesses; fully
redundant (2N) power; carrier-neutral access to most central Ohio carriers; and blended
internet services to others. Finally, MDC offers workgroup recovery services, including
office space, telephones, computers, and conference rooms.
Power Management
One of the strengths of the facility is its state-of-the-art power management plan.
At the rack level, Rittal facilitates power management through dual-corded servers
plugged into color-coded power strips integrated into the enclosure. Single-corded
servers get in-rack automatic transfer switches for redundant power.
The MDC also uses dual transient voltage surge suppressors (TVSS). The purpose of
TVSS is to prevent damage to data processing and other critical equipment by limiting
transient voltages and currents on electrical circuits. Sources of dangerous transient
currents and voltages can be as rare as lightning strikes or as common as the switching
on of elevators, heating, air conditioning, refrigeration, or other inductive load
equipment.
Site technicians perform automated generator run tests weekly and manual generator
load tests quarterly. They offer enhanced quarterly preventive maintenance on all facility
equipment, with full-time, year-round emergency support on all facility equipment,
including the generators, UPS, and battery units.
Cooling
MDC engineers employed a number of overlapping strategies for facility thermal
management, including the use of N+1-rated glycol pumping stations and dry coolers,
among other equipment.
In addition to traditional cold-row containment, MDC also deploys an in-rack
containment system to handle its pod configurations and other high-density thermal
challenges. All thermal management equipment is backed up by generators, including
seven 30-ton glycol-based air handlers.
(Photo: Rittal helps Metro Data Center deliver targeted colocation solutions.)
Security
Data Security
Metro Data Center has passed the Statement on Standards for Attestation Engagements
(SSAE) Service Organization Control 2 (SOC 2) audit from A-lign. The SOC 2 report is
performed in accordance with AT 101 and is based on the Trust Services Principles, with
the ability to test and report on the design (Type I) and operating (Type II) effectiveness
of a service organization’s controls, as with SOC 1/SSAE 16. The SOC 2 report focuses
on a business’s non-financial reporting controls as they relate to security, availability,
processing integrity, confidentiality, and privacy of a system, as opposed to SOC 1/SSAE 16,
which is focused on the financial reporting controls, according to the SSAE website.
Physical Security
Metro Data Center uses Honeywell security systems, including integrated video
surveillance, full-hand scan biometrics, and interior and exterior infrared cameras. The
security system components are configured N+1 for full redundancy.
Cabinet doors are secured using Medeco high-security locking systems that are
uniquely keyed for each customer. This system installs into the Rittal standard handle
and can be modified or changed when the customer using the rack changes. The physical
layer of security also allows for a master-key function for Metro. The locks can be
installed either in the factory or by Metro personnel as needed. Closed-circuit TV
(CCTV) camera systems are used for continuous visual surveillance.
Monitoring systems provide accurate information on the state of all infrastructure
required to maintain efficient power, cooling, and energy usage within the facility.
Through its consulting services, the company has delivered IT strategy and deployment
solutions that enable SMB customers to focus on their core competencies in order to
grow their business. By targeting SMB customers and fostering local economic
development around IT services and infrastructure, MDC is revolutionizing the role of
data centers and creating a blueprint for the Midwest.
Over the past several years, mission critical clients seem to be asking the same series
of questions regarding data center designs. These questions relate to the best distribu-
tion system and best level of redundancy, the correct generator rating to use, whether
solar power can be used in a data center, and more. The answer to these questions is
“It depends,” which really doesn’t help address the root of their questions. For every
one of these topics, an entire white paper can be written to highlight the attributes and
deficiencies, and in many cases, white papers are currently available. However, some-
times a simple and concise overview is what is required rather than an in-depth analy-
sis. The following are the most common questions that this CH2M office has received
along with a concise overview.
• 3M2: This topology aligns the load over more than two independent systems. The
distributed redundant topology is commonly deployed in a “three-to-make-two”
(3M2) configuration, which allows more of the capacity of the equipment to be used
while maintaining sufficient redundancy for the load in the event of a failure (see Fig-
ure 2). The systems are aligned in an “A/B/C” configuration, where if one system fails
(e.g., A), the other two (B and C) will accept and support the critical load. The load is
evenly divided with each system supporting 33.4% of the load or up to 66.7% of the
equipment rating. In the event of a component failure or maintenance in one system,
the overall topology goes to an N level of redundancy. In theory, additional systems
could be supplied, such as 4M3 or 5M4, but deployment can significantly complicate
load management and increase the probability of operator error.
• N+1 (SR): The shared-redundant (SR) topology concept defines critical-load blocks.
Each block is supported 100% by its associated electrical system. In the event of
A commonality among the different topologies presented is the need to transfer load
between systems. No matter the topology, load must be transferred between electrical
systems for planned maintenance activities, expansions, and failure modes. Load
management refers to how the load is managed across multiple systems.
2N topology. The premise behind a 2N system is that there are two occurrences of
each piece of critical electrical equipment to allow the failure or maintenance of any
one piece without impacting the overall operation of the data center IT equipment. This
configuration has a number of impacts:
• Load management: Among the topologies presented here, 2N has a relatively sim-
ple load-management scheme. The system will run independently of other distri-
bution systems and can be sized to accommodate the total demand load of the IT
block and associated HVAC equipment, minimizing the failure zone. The primary con-
sideration for load management is to ensure the total load doesn’t overload a single
substation/UPS system.
• Backup power generation: This topology uses 2N backup generation with the
simplest of schemes: the generator is paired to the distribution block.
• First cost/TCO: The 2N system requires twice the quantity and capacity of electrical
equipment that the load requires, causing the system to run at nominally 50%
of nameplate capacity. Due to the nature of how electrical equipment operates, this
tends to cause the equipment to run at a lower efficiency than can be realized in other
topologies. An additional impact of the 2N system topology is that the first cost
tends to be greater because of the quantity of equipment.
• Spatial considerations: Because it generally has the most equipment, the 2N con-
figuration typically has the largest physical footprint. However, this system is the sim-
plest to construct as a facility is expanded, thereby minimizing extra work and allow-
ing the facility to grow with the IT demands.
• Time to market: As has been discussed, this system will have more equipment to
support the topology; therefore, there may be additional time to construct and com-
mission the equipment. The systems are duplicates of each other, which allows for
construction and commissioning efficiencies when multiple systems are installed,
assuming the installation teams are maintained.
Distributed redundant (3M2) topology. The premise behind a 3M2 system is that
there are three independent paths for power to flow, each path designed to run at ap-
proximately 66.7% of its rated capacity and at 100% during a failure or maintenance
event. This configuration is realized by carefully assigning load such that the failover is
properly distributed among the remaining systems.
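Before walking through the individual impacts, the failover arithmetic can be checked with a short script. This is a simplified sketch with hypothetical system ratings and load groups; the pairing scheme mirrors the A/B/C description above, but the numbers are illustrative:

# Three-to-make-two (3M2): dual-corded load groups, each shared by a pair of systems
RATING_KW = 900  # hypothetical rating of each system A, B, and C
groups = {("A", "B"): 600, ("B", "C"): 600, ("C", "A"): 600}  # hypothetical load groups, kW

def loading(failed=None):
    """kW carried by each surviving system, normally or after one system fails."""
    load = {s: 0.0 for s in "ABC" if s != failed}
    for (x, y), kw in groups.items():
        feeds = [s for s in (x, y) if s != failed]
        for s in feeds:
            load[s] += kw / len(feeds)  # shared when both feeds are healthy, full transfer otherwise
    return load

print(loading())            # {'A': 600.0, 'B': 600.0, 'C': 600.0} -> 66.7% of each rating
print(loading(failed="A"))  # {'B': 900.0, 'C': 900.0} -> 100% of rating, still no overload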
• Load management: The load management for the 3M2 system should be carefully
considered. The load will need to be balanced between the A, B, and C systems to
ensure the critical load is properly supported without overloading any single system.
Load management of a system like this can be aided by a power-monitoring system.
• First cost/TCO: The 3M2 system requires about 1.5 times the capacity of electrical
equipment that the load requires and runs at 66.7% of its rated capacity. Because
the equipment is running at a higher percentage, the 3M2 system tends to be more
energy-efficient than the 2N, but less efficient than either of the shared redundant
systems. An additional impact of the 3M2 system topology is that lower-capacity
equipment can be used to support a similar size IT block, thereby causing the system
to have a higher cost per kilowatt to install. However, if the greater capacity is real-
ized by either sizing the IT blocks large enough to realize the benefits of this topology
or by installing two IT blocks on each distribution system, then there will be a lower
first cost. Essentially, the 2N system needs two substations and associated equipment
for each IT block, while the 3M2 system needs only three substation systems to
support two IT blocks. First-cost savings come in addition to operational savings.
• Spatial considerations: Similar to the first-cost discussion above, the spatial layout
can either be smaller or larger than a 2N system depending on how the topology is
deployed and how many IT blocks each system supports.
• Time to market: The balance between the IT blocks supported by each system and
the quantity of equipment will have an impact on the time to market, though the bal-
ance for this system is unlikely to be significant. The additional equipment should
be balanced against smaller pieces of equipment, allowing faster installation time
per unit. The systems are duplicates of each other, which allows for construction and
commissioning efficiencies when multiple systems are installed, assuming the instal-
lation teams are maintained.
N+1 shared redundant (N+1 SR). The premise behind the N+1 SR system is that
each IT block is supported by one primary path. In the event of maintenance or a fail-
ure, there is a redundant but shared module that provides backup support. The shared
module in this topology has the same equipment capacities and configuration as the
primary power system, minimizing the types of equipment to maintain.
For example, if six IT blocks are to be installed, then seven distribution systems (sub-
stations, generators, and UPS) will need to be installed for an N+1 system. This N+1
system can easily be reconfigured to an N+2 system with minimal impact (procuring
eight systems in lieu of seven). This reconfiguration would allow the system to provide
full reserve capacity even while a system is being maintained.
• Load management: The N+1 SR system has the simplest load management of the
topologies presented. As long as the local UPS and generator are not overloaded, the
system will not be overloaded.
• Backup power generation: This topology follows the normal power flow and uses
an N+1 SR backup generation where the generator is paired to the distribution block.
Each generator is sized for the entire block load, with the SR generator also sized to
carry one block. Parallel generation can be used for block-redundant systems. How-
ever, carefully consider the need for redundancy in the paralleling switchgear. True
N+1 redundancy would require redundant paralleling switchgear. However, this level
of redundancy while on generator power may not be required.
• First cost/TCO: For a large-scale deployment (i.e., exceeding two modules), the
N+1 SR system has the lowest installed cost per kilowatt of the systems explored.
• Spatial considerations: The N+1 SR layout will have the smallest spatial impact.
Additional distribution is required between modules as well as a central location to
house the redundant system.
• Time to market: The balance between the IT-blocks distribution system and the
quantity of equipment will have an impact on the time to market. However, because
the N+1 SR has the smallest quantity of equipment, this configuration potentially has
the shortest time to market of any system explored so far. This timing is further
supported because the systems are duplicates, which should allow for construction and
commissioning efficiencies on the subsequent installations, assuming the teams are
maintained.
N+1 common bus (N+1 CB). The premise behind the N+1 CB system is there is one
primary path that supports each IT block. This path also has an N+1 capacity UPS to
facilitate maintenance and function in the event of a UPS failure. The system is backed
up by a simple transfer switch system with a backup generator.
• Load management: Similar to the N+1 SR system, the load management for the
N+1 CB is simple. As long as the local UPS/generator combination is not overloaded,
the system will not be overloaded.
• First cost/TCO: The N+1 CB system potentially has the lowest installed cost per
kilowatt of any of the systems. This lower cost is due to a combination of lower
quantities of UPS and generators coupled with simpler distribution. Additionally, less
equipment means ongoing operation and maintenance costs should be lower as well.
• Spatial considerations: The N+1 CB layout will have a small spatial impact. Addi-
tional distribution is required between modules as well as a central location to locate
the central bus system (transfer switches and generator).
• Time to market: Similar to the N+1 SR system, the N+1 CB has significantly few-
er pieces of equipment than the 2N or 3M2 systems. This equipment count should
support a faster time to market. However, it is difficult to determine which of the N+1
systems would have a quicker time to market.
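One way to compare the approaches above is to count distribution systems (substation/generator/UPS blocks) for a given number of IT blocks, using the ratios described in the text. The block count in the example and the counting function itself are only a sketch of those ratios:

import math

def systems_required(topology, n_blocks):
    """Distribution systems needed for n IT blocks, per the ratios described above."""
    if topology == "2N":
        return 2 * n_blocks                  # two dedicated systems per block, ~50% utilized
    if topology == "3M2":
        return 3 * math.ceil(n_blocks / 2)   # three systems carry two blocks, ~66.7% utilized
    if topology == "N+1 SR":
        return n_blocks + 1                  # one system per block plus one shared reserve
    raise ValueError(topology)

for topo in ("2N", "3M2", "N+1 SR"):
    print(topo, systems_required(topo, 6))   # six IT blocks: 12, 9, and 7 systems
# The N+1 CB approach trims the count further by sharing a common transfer-switch/generator bus.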
The above topology descriptions only highlight a few systems. There are other topol-
ogies and multiple variations on these topologies. There isn’t a ranking system for to-
pologies; one isn’t better than another. Each topology has pros and cons that must be
evaluated against the specific project’s requirements.
1. Continuous power: designed for a constant load and unlimited operating hours;
provides 100% of the nameplate rating for 100% of the operating hours.
2. Prime power: designed for a variable load and unlimited running hours; provides
100% of nameplate rating for a short period but with a load factor of 70%; 10%
overload is allowed for a maximum of 1 hour in 12 hours and no more than 25
hours/year.
4. Emergency standby power: designed for a variable load with a maximum run time
of 200 hours/year; rated to run at 70% of the nameplate.
The generator industry also has two additional ratings that are not defined by ISO-
8528: mission critical standby and standby. Mission critical standby allows for an 85%
load factor with only 5% of the run time at the nameplate rating. A standby-rated gen-
erator can provide the nameplate rating for the duration of an outage assuming a load
factor of 70% and a maximum run time of 500 hours/year.
Data center designs assume a constant load and worst-case ambient temperatures.
This does not reflect real-world operation and results in overbuilt and excess equip-
ment. Furthermore, it is unrealistic to expect 100% load for 100% of the operating
hours, as the generator typically requires maintenance and oil changes after every 500
hours of run time. Realistically during a long outage, the ambient temperature will fluc-
tuate below the maximum design temperature. Similarly, the load in a data center is
not constant. Based on research performed by Caterpillar, real-world data center appli-
cations show an inherent variability in loads. This variability in both loads and ambient
temperatures allows manufacturers to state that a standby-rated generator will provide
nameplate power for the duration of the outage and it’s appropriate for a data center
application. However, if an end user truly desires an unlimited number of run hours,
then a standby-rated generator is not the appropriate choice.
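A projected outage-load profile can be screened against these rating limits with a few lines of code. The sketch below is only a rough planning check, not manufacturer application guidance; it interprets the 70% load factor as average load relative to nameplate, and the 48-hour profile and 2,000-kW set are hypothetical:

def fits_standby_rating(hourly_kw, nameplate_kw, max_hours=500, max_load_factor=0.70):
    """Screen an outage load profile against the standby-rating limits quoted above."""
    run_hours = len(hourly_kw)
    peak_ok = max(hourly_kw) <= nameplate_kw
    load_factor = (sum(hourly_kw) / run_hours) / nameplate_kw  # average load vs. nameplate
    return peak_ok and run_hours <= max_hours and load_factor <= max_load_factor

# Hypothetical 48-hour outage on a 2,000-kW standby set; load swings between 1,100 and 1,595 kW
profile = [1100 + (h % 12) * 45 for h in range(48)]
print(fits_standby_rating(profile, nameplate_kw=2000))  # True -> within the quoted limits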
4. Encapsulated transformers have open wound windings that are insulated with
epoxy, which makes them highly resistant to short-circuit forces, severe climate con-
ditions, and cycling loads.
For liquid-filled transformers, various types of fluids can be used to insulate and cool
the transformers. These include less-flammable fluids, nonflammable fluids, mineral oil,
and Askarel.
When put into the context of a mission critical environment, two transformers stand
out: the cast-coil transformer due to its exceptional performance and the less-flamma-
ble liquid-immersed transformer due to its dependability and longevity in commercial
and industrial environments. While both transformer types are appropriate for a data
center, each comes with pros and cons that require evaluation for the specific environ-
ment.
Liquid-filled transformers are more efficient than cast coil. Because air is the basic
cooling and insulating system for cast coil transformers, they will be larger than liq-
uid-filled units of the same voltage and capacity. When operating at the same current,
more material and more core and coil imply higher losses for cast coil. Liquid-filled
transformers have the additional cooling and insulating properties associated with the
oil-and-paper systems and tend to have lower losses than corresponding cast coil
units.
Additionally, omitting the oil sampling does not decrease the transformer efficiency.
When a transformer fails, a decision must be made on whether to repair or replace it.
Cast coil transformers typically are not repairable; they must be replaced. However,
there are a few companies who are building recyclable cast coil transformers. On the
other hand, in most cases, liquid-filled transformers can be repaired or rewound.
When a cast-coil transformer fails, the entire winding is rendered useless because it
is encapsulated in epoxy resin. Because of the construction, the materials are difficult
and expensive to recycle. Liquid-filled transformers are easily recycled after they’ve
reached the end of their useful life. The steel, copper, and aluminum can be recycled.
Cast-coil transformers have a higher operating sound level than liquid-filled transform-
ers. Typical cast coil transformers operate in the 64 to 70 dB range while liquid-filled
transformers operate in the 58 to 63 dB range. The decibel scale is logarithmic: sound
power doubles with approximately every 3-dB increase, and sound pressure doubles with
approximately every 6-dB increase.
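Those relationships follow directly from the logarithmic definitions, and a quick check makes the comparison concrete:

import math

print(round(10 * math.log10(2), 1))      # 3.0 dB for a doubling of sound power (10*log10 of the ratio)
print(round(20 * math.log10(2), 1))      # 6.0 dB for a doubling of sound pressure (20*log10 of the ratio)
print(round(10 ** ((70 - 63) / 10), 1))  # ~5x: a 70-dB cast coil unit vs. a 63-dB liquid-filled unit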
Liquid-filled transformers have less material for the core and coil and use a highly
effective liquid cooling and insulating system.
Dry-type transformers have the advantage of being easy to install with fire-resistant
and environmental benefits. Liquid-filled transformers have the distinct disadvantage of
requiring fluid containment. However, advances in insulating fluids, such as Envirotemp
FR3 by Cargill, a natural ester derived from renewable vegetable oil, are reducing the
advantages of dry-type transformers.
For indoor installations of transformers, cast coil must be located in a transformer room
with minimum 1-hour fire-resistant construction in accordance with NFPA 70-2017: Na-
tional Electrical Code (NEC) Article 450.21(B). However, if less-flammable liquid-insu-
lated transformers are installed indoors, they are permitted in an area that is protected
by an automatic fire-extinguishing system and has a liquid-confinement area in accor-
dance with NEC Article 450.23.
• How much power needs to be delivered to each IT cabinet initially, and what does
the power-growth curve look like for the future?
• Can the facilities team decide on the power supplies to be ordered when new IT
equipment is purchased?
Let’s start with the power of a 3-phase circuit. A 208 Y/120 V, 3-phase, 20-amp circuit
can power up to a 5.7-kVA cabinet, because per NEC Article 210.20, branch-circuit
breakers serving continuous loads must be sized at 125% of the load, limiting a 20-amp
circuit to 16 amps of continuous current.
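The 5.7-kVA figure falls straight out of the three-phase power formula once that continuous-load limit is applied. The short sketch below shows the arithmetic; the 415 Y/240 V line is included only for comparison with the distribution discussion that follows:

import math

def circuit_kva(line_to_line_v, breaker_amps, continuous_factor=0.8):
    """Usable three-phase capacity of one branch circuit, derated for continuous load."""
    return math.sqrt(3) * line_to_line_v * breaker_amps * continuous_factor / 1000

print(round(circuit_kva(208, 20), 2))  # 5.76 kVA for a 208 Y/120 V, 20-amp circuit (the 5.7 kVA above)
print(round(circuit_kva(415, 20), 2))  # 11.5 kVA for the same breaker at 415 Y/240 V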
If the specifications for the IT equipment can be tightly controlled, the decision to stan-
dardize on 415 Y/240 V distribution is a pretty simple one. However, if the IT environ-
ment cannot be tightly controlled, the decision is more challenging. Currently, most IT
power supplies have a wide range of operating voltage, from 110 V to 240 V. This al-
lows the equipment to be powered from numerous voltage options while only having to
change the plug configuration to the power supply. However, legacy equipment or spe-
cialized IT equipment may have very precise voltage requirements, thereby not allowing
for operation at the higher 240 V level. To address this problem, both 208 Y/120 V and
415 Y/240 V can be deployed within a data center, but this is rarely done, as it creates
confusion for the deployment of IT equipment.
The follow-on question typically asked is if the entire data center can run at 415 V, rath-
er than bringing in 480 V and having the energy loss associated with the transformation
to 415 V. While technically feasible, the equipment costs are high because standard
HVAC motors operate at 480 V. Use of 415 V for HVAC would require specially wound
motors, thus increasing the cost of the HVAC equipment.
Solar cells are not 100% efficient. Light in the infrared region carries too little energy to
generate electricity, and much of the energy in the ultraviolet region is converted to heat
instead of electricity. The amount of power that can be generated with a PV array also varies
due to the average sunshine (insolation, or the delivery of solar radiation to the earth’s
surface) along with the temperature and wind. Typically, PV arrays are rated at 77˚F,
allowing them to perform better in cold rather than in hot climates. As temperatures
rise above 77˚F, the array output decays (the amount of decay varies by type of sys-
tem). Ultimately, what this means is that the power generation of an array can vary over
the course of a day and year. Added to this are the inefficiencies of the inverter, and if
used, storage batteries.
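The temperature effect is commonly modeled with a linear power-temperature coefficient. The sketch below assumes a typical crystalline-silicon coefficient of about -0.4% per degree Celsius above the 25°C (77°F) rating point; the coefficient and the 500-kW array size are illustrative assumptions, not values from this article:

def pv_output_kw(rated_kw, cell_temp_c, temp_coeff_per_c=-0.004, rating_temp_c=25.0):
    """Approximate PV output after linear temperature derating, irradiance held constant."""
    return rated_kw * (1 + temp_coeff_per_c * (cell_temp_c - rating_temp_c))

print(round(pv_output_kw(500, 25), 1))  # 500.0 kW at the 77 F rating condition
print(round(pv_output_kw(500, 55), 1))  # 440.0 kW on a hot day, before inverter losses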
A PV system may or may not provide power during a utility power failure, depending on
the type of inverter installed. A standard grid-tied inverter will disconnect the PV sys-
tem from the distribution system to prevent islanding. The inverter will reconnect when
utility power is available. An interactive inverter will remain connected to the distribution
system, but it is designed to only produce power when connected to an external pow-
er source of the correct frequency and voltage (i.e., it will come online under genera-
tor power). Typically, interactive inverters include batteries to carry the system through
power outages, therefore the system should be designed such that there is enough
PV-array capacity to supply the load and charge the batteries.
A medium-voltage alternative to low-voltage UPS
Design topology evaluation also should consider the medium-voltage uninterruptible
power supply (UPS).
Debra Vieira is a senior electrical engineer at CH2M, Portland, Ore., with more than
20 years of experience serving industrial, municipal, commercial, educational, and
military clients globally.
Respondents
• Doug Bristol, PE, Electrical Engineer, Spencer Bristol, Peachtree Corners, Ga.
• Terry Cleis, PE, LEED AP, Principal, Peter Basso Associates Inc., Troy, Mich.
• Scott Gatewood, PE, Project Manager/Electrical Engineer/Senior Associate, DLR Group, Omaha, Neb.
• Darren Keyser, Principal, kW Mission Critical Engineering, Troy, N.Y.
• Bill Kosik, PE, CEM, LEED AP, BEMP, Senior Engineer – Mission Critical, exp, Chicago
• Keith Lane, PE, RCDD, NTS, LC, LEED AP BD&C, President, Lane Coburn & Associates LLC, Seattle
• John Peterson, PE, PMP, CEM, LEED AP BD+C, Program Manager, AECOM, Washington, D.C.
• Brandon Sedgwick, PE, Vice President, Commissioning Engineer, Hood Patterson & Dewar Inc., Atlanta
• Daniel S. Voss, Mission Critical Technical Specialist, M.A. Mortenson Co., Chicago
Terry Cleis: Designing overall systems that are focused at the rack level. These de-
signs include targeted rack-level cooling and row containment for hot or cold areas.
Some of these systems can be designed to provide flexible levels of cooling to match
changing needs for individual racks. These designs include rack-mounted monitoring
for temperature and power and associated power and cooling systems designed to
cover a predetermined range of equipment. These systems also often allow for raised
floor elevations to be minimized or even removed. This enables any space that is below
the floor to be used for other systems with less concern on air movement.
Scott Gatewood: Beyond reliability and durability, efficiency and scalability remain
top priorities for our clients’ infrastructures. Although this is not a new revelation, the
means and methods of achieving them through design and information technology (IT)
hardware continue to evolve. Data center energy (with an estimated 90 billion kWh of
data center energy waste this year, according to the Natural Resources Defense Coun-
cil) remains a key operational cost-management goal. The tools, methods, and hard-
ware needed to reduce energy continue advancing. The Internet of Things (IoT) has
entered the data center with data center infrastructure-management (DCIM) software,
sensors, analytics, and architectures that closely couple cooling and energy recovery,
providing energy efficiencies rarely achievable just 6 years ago. With increased auto-
mation, managing the plant is increasingly achievable from remote locations, just as
the IT infrastructure has been. Scalability also remains critical to our clients. How this
Bill Kosik: Over the past 10 years, data center design has evolved tremendous-
ly. During that maturation process, we have seen trends related to reliability, energy
efficiency, security, consolidation, etc. I don’t believe there is a singular trend that is
broadly applicable to data centers like the trends we’ve seen in the past. They are
more subtle and more specific to the desired business outcome; data center-plan-
ning strategies must include the impacts of economical cloud solutions, stricter capital
spending rules, and the ever-changing business needs of the organization. Data cen-
ters are no longer the mammoth one-size-fits-all operation consolidated from multiple
locations. We see that one organization will use different models across divisions, es-
pecially when the divisions have very diverse business goals.
Keith Lane: Some of the new trends we see in the industry include striving for in-
creased efficiency and reliability without increasing the cost. Efficiency can be gained
with better uninterruptible power supply (UPS) systems, proper loading on the UPS,
transformers, and increased cold-aisle temperatures. Additionally, a proper evaluation
of the specific critical loads and the actual required redundancies can allow some of
the loads to be fed at 2N, some at N+1, others at N, and others with straight utility
power. Allowing this type of evaluation to match specific levels of redundancy/reliability
with actual load types can significantly increase efficiency.
John Peterson: We are seeing a continuation of the many trends that have been
Brandon Sedgwick: The biggest trends we see in data centers today are meg-
asites and demand-dependent construction. In this highly competitive market, min-
imizing cost per megawatt of installed capacity is a priority for data center owners,
which is why megasites spanning millions of square feet with hundreds of megawatts
of capacity are becoming more common. Borrowing a page from just-in-time manu-
facturing principles, these megasites (and even smaller facilities) are designed to be
built or expanded in phases in response to precontracted demand to minimize upfront
capital expenditure and expedite time to market. Consequently, these phased projects
often demand compressed construction schedules with unyielding deadlines driven by
financial penalties for the owner. This has led to simpler or modular designs to expedite
construction, maximize capacity, and reduce costs while allowing flexible redundancy
and maintainable configurations to meet individual client demands.
Daniel S. Voss: We’re noticing large colocation providers with faster speed-to-
market construction and implementation. There is a high level of competition between
the major countrywide colocation providers to have the ideal space with all amenities
(watts per square foot, raised access floor, security, appropriate cooling, etc.) ready for
each new client and customer.
Voss: There are really two trends. The first is using existing, heavy industrial buildings
and repurposing them for data centers. To increase the speed to market, many owners
and constructors are eyeing containers and containerization for electrical, mechanical,
and IT disciplines. The second involves building hyperscale data centers with 20 MW
or more of critical IT computing power. Many large colocation providers are construct-
Cleis: We’re seeing targeted cooling with more options including water and refriger-
ant for racks. Better options for the piping distribution associated with these systems
will continue to evolve to make the work associated with ongoing maintenance and
future changes better suited to take place in a data center environment. We have own-
ers asking for more modular designs and designs that will prevent issues like software/
firmware problems that can ultimately shut down entire systems. These include smaller
UPS systems or using multiple UPS manufacturers. Smaller systems can be located
closer to the loads and allow equipment upgrades or replacements associated with
failures without affecting the entire facility. Replacement and repairs to smaller compo-
nents can also help reduce costs associated with ongoing maintenance and repairs.
Sedgwick: One trend we are seeing more frequently is that IT is leveraging methods,
such as virtualization, that can be used to “shift” server processes from one location to
another in the event of a failure, to offset physical power-delivery system redundancy.
This allows engineers to streamline infrastructure design by reducing power transfor-
mations between incoming sources and the load, simplifying switching automation,
and minimizing—or even eliminating—UPS and backup generation. Simpler power-de-
livery systems consume less square footage, are faster to build, and free up more of a
facility’s footprint for white space.
Peterson: Liquid and immersion cooling is likely to grow in the coming years. As
power densities increase and the costs and implementation challenges are solved,
liquid and immersion cooling practices can start to develop, as efficiency is still a prime
Cleis: Engineers and designers should always keep an open mind and spend time
researching and reading to stay informed about evolving design and system innova-
tions. I find that good ideas very often come from owners and end users during the
early programming stages of the design process. Many owners and end users have a
solid technical background and a historic understanding of how data centers operate.
Most of these people spend a lot of time working in data centers, which enables them
to bring an insightful perspective. They are able to inform us what systems are reliable
and have worked well for them in the past, and what systems have given them prob-
lems. They also provide the design team with ideas for how to make the systems func-
tion better.
Voss: With increasing IT power densities, cooling and power can become limiting fac-
tors in optimizing the built environment. Additionally, data center customers use varying
Lane: It is incumbent on engineers who provide design and engineering services for
mission critical facilities to keep up with technology and with the latest data center
trends. Our company has vendors present the latest technology to us; we belong to
several professional organizations, read numerous industry magazines, and conduct
extensive independent research on codes, design standards, and emerging technologies.
Peterson: In most cases of data centers up to 20 years old, revisions to the exist-
ing data center are possible to allow increases in density. Specialized cooling systems
have allowed for increased density, which is often in localized areas of a legacy data
center. With more choices of adaptable air segregation and other means to decrease
bypass air, older data centers can control hot spots and better serve future needs.
For new designs, data center layouts are often being coordinated for specific densities
that work with their common operation within a certain power, space, and cooling ra-
tio. Some new facilities are aiming for the flexibility of more direct liquid (water) cooling
and are willing to invest in the upfront coordination and installations to meet their future
needs.
Gatewood: Emerging technologies are difficult to predict accurately. I recall the 1995
white paper preceding the creation of the Uptime Institute predicting 500 W/sq ft white
space by the early 2000s. Predictably, Moore’s Law did produce exponential perfor-
Darren Keyser: While all projects are presented with unique challenges, a recent
multistory data center on the West Coast presented significant challenges, particularly
for the fuel system design. The client’s goal of maximizing the amount of leasable white
space meant there was little space for the generator plant, which needed 48 hours of
fuel storage. Adding to the challenge was unfavorable soil conditions for underground
fuel tanks. With limited room at grade, the tanks needed to be vertical. In addition, the
facility is in a seismic zone, which added to the complexity of the tank support. The ten
3-MW engines were placed on the roof of the facility, adding to the intricacy of the
fuel-delivery system. Additionally, even though the piping was abovegrade welded steel,
the client wanted to manage the risk of a fuel leak and decided to exceed code by im-
plementing a double-wall piping system.
Gatewood: While new data centers are more straightforward, renovating existing
data center environments is not for the faint of heart. In line with the importance of a
proper structural design, we are wrapping up the reconstruction of a new data center
Cleis: We are in the process of designing a moderately sized data center to fit inside
an existing vacant building. The owner requested that the design include smaller-scale
equipment configured in a modular design to allow for easier maintenance and equip-
ment replacement. This includes smaller UPS units, PDUs, and non-paralleled gener-
ators. Providing levels of redundancy using these smaller pieces of equipment and not
paralleling the generators proved to be a challenge. The current design contains mod-
ules that are based on a predetermined generator size. The overall generator system is
backed up using transfer equipment and an extra generator unit in the event a single
generator fails.
Voss: The QTS Chicago data center fits that description to a tee. QTS leveraged this
former Sun-Times printing facility’s robust base structure and efficient layout to sup-
port its repurposing as an industry-leading data center. The innovative conversion is
a modular design that populates the structure from east to west as more clients and
tenants occupy the data hall space. We are currently constructing a 125-MW substation
for Commonwealth Edison onsite, which will not only provide power to the existing
470,000-sq-ft building but also have sufficient capacity to expand on the same site.
Lane: Most facilities and general buildings do not draw consistent power over a 24-
hour period. Data centers and other mission critical facilities draw power with a high
load factor. Duct banks can overheat when feeding a data center with a high load fac-
tor. Specific to data centers with high load factors, Neher-McGrath duct-bank heating
calculations are required to ensure the conductors feeding the facility are adequately
sized.
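Load factor, average demand divided by peak demand over a period, is the ratio that flags this condition. The sketch below computes only that screening ratio with hypothetical 24-hour profiles; it is not a Neher-McGrath calculation, which also needs soil thermal resistivity, duct-bank geometry, and conductor data:

def load_factor(hourly_kw):
    """Average demand divided by peak demand over the period."""
    return (sum(hourly_kw) / len(hourly_kw)) / max(hourly_kw)

office = [300 if 8 <= h < 18 else 80 for h in range(24)]  # hypothetical office profile
data_center = [950 + (h % 3) * 10 for h in range(24)]     # hypothetical data center profile
print(round(load_factor(office), 2))       # 0.57 -> the duct bank sees long cooling-off periods
print(round(load_factor(data_center), 2))  # 0.99 -> near-constant heating; check the duct bank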
Bristol: Mostly, the challenge is the relentless search for maximum reliability and con-
current maintainability. These high-performance buildings are required to be operating
essentially at full throttle all the time, even during times of maintenance, so multiple
service paths for all utilities (cooling, power, air, etc.) are essential.
Peterson: Data centers are made for silicon-based life, not carbon-based. Once
owners and operators understand that modern IT equipment can withstand high air-in-
let temperatures, they can start to gain monumentally through cooling efficiency.
Peterson: We perform all of our designs using BIM. Through practice, we are able to
incorporate more information in our models to reduce the number of coordination er-
rors that lead to changes in the field. Owners have seen the benefits over time, as new
additions and changes to the designs can be shared with consultants, added to the
BIM, and then returned. Third-party construction-management groups have then taken
the model and added updates as necessary throughout the process, including input
from commissioning and controls changes.
Bristol: Yes, we use BIM with mission critical projects during the design. During con-
struction, we share the model with contractors and subcontractors to fine-tune their
systems, then the record (sometimes referred to as “as-built”) model is turned over to
the owner not only for long-term operations and maintenance but also for use by future
design teams when the inevitable renovations or expansions occur.
Voss: Absolutely, especially for data centers. Our firm uses BIM for all of our projects
throughout the country. This is mandatory for repurposing existing buildings; often-
times, the amount of available space to install the overhead infrastructure is less than
in a data center-designed structure. On a recent data center, our company leveraged
BIM to support the graphics for the building management system (BMS). This not only
saved time and money in creating new BMS graphics, but it also provided the customer
with a far more accurate representation of their facility.
Lane: A majority of the data center projects we are involved in use Revit.
Cleis: One of our jobs as design engineers is to help owners understand the risks,
benefits, and costs associated with different levels of redundancy for the various sys-
tems that make up an overall data center facility. Hybrid designs with varying levels of
redundancy between different systems are not uncommon, particularly for smaller and
midsize systems. Our job is to educate owners and help them understand their op-
tions, but ultimately to design a facility that meets their needs and works within their
budget. It may sometimes appear that we are underdesigning a certain system in a
facility, but in fact, we are establishing a lower overall baseline of design redundancy
for the facility. Then, with the owner’s input, we design some specific systems to higher
levels of reliability to address historic problems or known weaknesses for that particular
client or facility.
Gatewood: The bulk of the data center’s initial costs is the electrical and mechanical
systems needed to provide 100 to 200 times the power demands of an average office
building. Add to this the redundancy and resilience required so that a system failure or service outage, say a fan motor, does not result in an outage of the IT work product.
This is where the high initial costs come from. However, many operations can grow
over time, which permits using scalable infrastructure that allows our client to grow
their plant as their IT needs grow. This results in the best initial cost while allowing them
to grow quickly as their needs change.
Lane: This is the real challenge and the mark of a good engineer. The engineer must
dig deep into the owner’s basis of design and work closely with the owner to under-
stand where some loads need high reliability and where lower reliability and associated
redundancies can be removed. Also, right-sizing the equipment will save money up-
front and increase efficiency. Always design toward constructibility and work hand-in-
hand with the electrical contractor. Using BIM and asking for input from the contrac-
tors will save time and money during construction. We are seeing ever-evolving code
changes with respect to arc flash calculations, labeling, and mitigation. It is critical to
ensure that the available fault current at the rack-mounted PDU is not exceeded. As a
firm, we provide the design of mission critical facilities as well as fault-current and arc
flash calculations and selective-coordination studies. We always design toward reduc-
ing cost and arc and fault-current hazards during the design process.
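As a simplified illustration of the fault-current side of the studies Lane describes, the sketch below uses the conservative "infinite primary bus" approximation to estimate worst-case available fault current at a transformer secondary. The transformer values are illustrative assumptions; a real study also accounts for utility source impedance, conductor impedance, and motor contribution.

```python
# Sketch: first-pass available fault current at a transformer secondary using
# the conservative "infinite primary bus" approximation. A full study also
# includes source impedance, conductor impedance, and motor contribution.
import math

def secondary_fault_current(kva: float, volts_ll: float, z_pct: float) -> float:
    """Return worst-case symmetrical fault current (amps) at the secondary."""
    fla = kva * 1000 / (math.sqrt(3) * volts_ll)  # full-load amps, 3-phase
    return fla / (z_pct / 100.0)                  # infinite-bus approximation

# Illustrative values only: a 750 kVA, 480 V transformer with 5.75% impedance.
print(f"{secondary_fault_current(750, 480, 5.75):,.0f} A available")  # ~15,700 A
```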
Voss: It is a true balancing act to arrive at an optimal solution that meets or exceeds
the needs of the customer within the established budget. We work closely with engi-
neers and architects to perform detailed cost-benefit analysis to ensure features and
requirements are evaluated holistically. Using modular construction techniques and un-
Bristol: Most designs now include a modularity strategy so owners can build (and
spend) as they go. Modularity almost always includes a roadmap to the “end game”
and has to include strategies to minimize the impact to the existing live data center as
the facility is built out. For example, if the data center's design includes 10 MW of generator capacity at N+1, but only the first two units are being installed on day one, then
all exterior yard equipment—pads, conduit rough-ins, etc.—would be included on day
one so that adding generators would be almost plug-and-play. Outdoor cooling equip-
ment, interior gear, UPS, batteries, etc. would work in a similar way.
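As a rough illustration of how such a roadmap translates into unit counts, the sketch below computes the number of generators needed for an N+1 plant at full build-out. The 2.5 MW unit rating is an assumed value for illustration, not a figure from the example above.

```python
# Sketch: unit count for an N+1 generator plant at full build-out.
# The 2.5 MW unit rating is an illustrative assumption.
import math

def n_plus_one_units(design_load_mw: float, unit_mw: float) -> int:
    n = math.ceil(design_load_mw / unit_mw)  # units needed to carry the load (N)
    return n + 1                             # plus one redundant unit

print(n_plus_one_units(10.0, 2.5))  # 5 units at build-out; day one may install only 2
```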
Peterson: They’re doing it by using typical equipment sizes and modularity; vendors
have been able to bring down costs considerably. Contractors also see savings with
typical equipment; installations gain in speed as the project progresses.
Peterson: We have performed more risk assessment studies for data center cli-
ents over the past year than in previous years. While this often starts based on se-
vere-weather outlooks, we examine redundancy on fiber, power, and other utilities.
Clients also have been aiming to consolidate data centers to certain regions to reduce latency, while using multiple sites across that region to avoid loss of connectivity. A higher degree of care is taken with data centers, as they most often serve missions that are more critical than those of other building types.
Cleis: When designing a facility, the team should always address known factors re-
garding potential natural disasters in a geographic region when searching for a site
for a new facility. It’s also common to include similar concerns when selecting spac-
es within existing facilities when the data center will only occupy part of the building.
Avoiding windows, potential roof leaks, and flooding are common requirements. We of-
ten try to select an area in a building that has easy access to MEP systems, while also
avoiding exterior walls, top floors, and basements. Typically, we try to avoid areas that
are too public and select areas that are easy to secure and have limited access. It’s
also important to select an area that has access paths that will allow large equipment
to be moved.
Lane: We typically provide an NFPA 780: Standard for the Installation of Lightning
Protection Systems lightning-protection analysis for mission critical systems. A majority
of data centers are designed with a lightning-protection and/or a lightning-mitigation
system.
Voss: Fortunately for projects in the greater Chicagoland area, the worst weather we
have ranges from subzero temperatures to heavy snows/blizzards to high winds with a
lot of rain. Keeping the building out of flood plains, and for certain clients, constructing
an enclosure that can withstand a tornado rating of EF-4 (207-mph winds) are issues
we’ve faced.
Voss: Absolutely. Many corporations are moving from onsite computing facilities to
cloud-based colocation data centers. The quantity of new enterprise data centers is
decreasing while the quantity of colocation sites is increasing at a rapid pace.
Peterson: We’ve seen a lot of growth from the main cloud providers, and indus-
try analysts are expecting that this growth will continue for at least the next 10 years.
Since most have a typical format for their buildings, the structures themselves haven’t
changed a lot to accommodate the enormous pressure on schedule to meet the cloud
demand. Over time, the trends may shift to lower costs and yield higher returns for
shareholders that are investing now.
Gatewood: Cloud computing’s visible impact on current and future data centers
clearly reveals itself in the enterprise client’s white space. The combination of virtual
machines and the cloud have slowed the growth of rack deployments. Clearly, each
client’s service and application set will affect cloud strategy. In some cases, growth has
stopped as applications move to the cloud.
CSE: How do data center project requirements vary across the U.S.
or globally?
Keyser: The local environment has a huge impact on the mechanical solution. Ques-
tions to ask: Is free cooling an option? What are the local utility costs for water versus
electricity? Questions like these are key elements that will drive the design.
Kosik: There are many variations, primarily due to geo-specific implications includ-
ing climate and weather, impacts on cooling system efficiency, severe weather events,
water and electricity dependability, equipment and parts availability, sophistication and
capability of local operational teams, prevalence and magnitude of external security
threats, local customs, traditional design approaches, and codes/standards. It is im-
portant to be cognizant of these issues before planning a new data center facility.
Voss: Selecting a data center site normally goes through many steps to reach a po-
Lane: We have provided the design and engineering for data centers across the
globe. We have seen many variations in design. Some of these variations include serv-
ing utility voltage, server voltage, lightning protection, grounding requirements, surge
and transient protection, and others. Additionally, the energy cost can significantly drive
the design. In areas of the world where energy costs are higher, efficiency is very crit-
ical. In areas of high lightning strike density, lightning protection and/or mitigation is a
must.
In the design of power and cooling systems for data centers, there must be a known
base load that becomes the starting point from which to work. This is the minimum
capacity that is required. From there, decisions will have to be made on the additional
capacity that must be built in. This capacity could be used for future growth or could
be held in reserve in case of a failure. (Oftentimes, this reserve capacity is already built
into the base load). The strategy to create modularity becomes a little more complex
when engineers build redundancy into each module.
In this article, we will take a closer look at different parameters that assist in establish-
ing the base load, additional capacity, and redundancy in the power and cooling sys-
tems. While the focus of this article is on data center modularity with respect to cooling
systems, the same basic concepts apply to electrical equipment and distribution sys-
tems. Analyzing modularity of both cooling and power systems together (the recommended approach) will often result in a synergistic outcome.
• What is the base load that is used to size the power and cooling central plant equipment (expressed in kilovolt-amps, or kVA, and tons, respectively)? In the initial phase of the building, if one power and cooling module is used, this is considered an “N” system where the capacity of the module is equal to the base load.
Figure 1: This graph shows the three scenarios (one to three chillers) running at loads of 10% to 100% (x-axis) and the corresponding chiller power multiplier (y-axis). The chiller power multiplier for the one-chiller scenario tracks the overall system load very closely, where the two- and three-chiller scenarios have smaller power multipliers (use less energy) due to the ability to run the chiller compressors at lower, more efficient operating points. All graphics courtesy: Bill Kosik, independent consultant
• In the base load scenario, what is the N that the central plant uses as a building block? For example, if the base cooling load is 500 tons and two chillers are used with no redundancy, the N is 250 tons. If a level of concurrent maintainability is required, an “N+1” configuration can be used. In this case, the N is still 250 tons but now there are three chillers. In terms of cooling in this scenario, there would be 250 tons over the design capacity. (A short sketch of this arithmetic follows this list.)
• How do we plan for future modules? If the growth of the power and cooling load is
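A minimal sketch of the N and N+1 arithmetic from the chiller example in the list above; the helper function and values are illustrative only.

```python
# Sketch of the chiller-sizing arithmetic above:
# a 500-ton base load served by 250-ton building blocks.
def plant_summary(base_load_tons: float, unit_tons: float, redundant_units: int):
    n_units = round(base_load_tons / unit_tons)          # units required for the base load
    installed = (n_units + redundant_units) * unit_tons  # total installed capacity
    margin = installed - base_load_tons                  # capacity over the design load
    return n_units + redundant_units, installed, margin

print(plant_summary(500, 250, 0))  # N:   (2, 500, 0)
print(plant_summary(500, 250, 1))  # N+1: (3, 750, 250) -> 250 tons over the design load
```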
Module-in-a-module
Each module will have multiple pieces of power and cooling gear that are sized in
various configurations to, at a minimum, serve the day one load. This could be done
without reserve capacity, all the way to systems that are fault-tolerant, like 2N, 2(N+1),
2(N+2), etc. So the growth of the system has a direct impact on the overall module.
For example, if each module will serve a discrete area within the facility without any
interconnection to the other modules, the modular approach will stay pure and the fa-
cility will be designed and constructed with equal-size building blocks. While this ap-
proach is very clean and understandable, it doesn’t take advantage of an opportunity
that exists: sharing reserve capacity while maintaining the required level of reliability.
If a long-range strategy includes interconnecting the modules as the facility grows, there
will undoubtedly be opportunities to reduce expenditures, both capital expense and on-
going operating costs related to energy use and maintenance. The interconnec-
tion strategy results in a design that looks more like a traditional central plant and less
like a modular approach. While this is true, the modules can be designed to accom-
modate the load if there were some type of catastrophic failure (like a fire) in one of the
modules. This is where the modular approach can become an integral part in achieving
high levels of uptime. Having the modules physically separated will allow for shutting
down a module that is in a failure mode; the other module(s) will take on the capacity
that was shed by the failed module.
Figure 2: This graph demonstrates the same concept as in Figure 1, but the facility cooling load is at 75%. As the cooling load decreases, the energy use of the different scenarios begins to equalize.
Using the interconnected approach can reduce the quantity of power and cooling equipment as more modules are built, simply because there are more modules of N size installed (see Figures 1 through 4). Installing the modules with a common capacity
and reserve capacity will result in a greater power and cooling capacity for the facility.
If uncertainty exists as to the future cooling load in the facility, all of the power and cooling equipment for the ultimate build-out can be installed on day one, but this approach deviates from the basic de-
sign tenets of modular data centers. And while this approach certainly provides a large
“cushion,” the financial outlay is considerable and the equipment will likely operate at
extremely low loads for quite some time.
Going to the other end of the spectrum yields an equipment layout that consists of
many smaller pieces of equipment. Using this approach will certainly result in a highly
modular design, but it comes with a price: All of that equipment must be installed, with
each piece requiring electrical hookups (plus the power distribution, disconnects, start-
ers, etc.), testing, commissioning, and long-term operations and maintenance. This is
where finding a middle ground is important; the key is to build in the required level of
reliability, optimize energy efficiency, and minimize maintenance and operation costs.
• The location of the facility immediately influences the type of module design approach. For example, when facilities are located in sparsely populated areas where skilled piping, sheet metal, and electrical design and construction experts are hard to come by, it will be beneficial to use a factory-built, tested, and commissioned module that is delivered to the site (probably in multiple sections), assembled, and connected.
• The construction schedule of data centers and other critical facilities typically is driv-
en by a customer’s needs, which is often driven by revenue generation or a need
by the customer’s end-user (e.g., the community, business enterprises, government
agencies) to use/occupy the proposed facility as soon as possible. When analyzing
the best approach to the construction of the overall facility, it is advantageous to have
the module built offsite, in parallel with the construction of the facility. The module
can be shipped to the site and installed even if the facility is not complete. Because
all of the equipment, piping, and electrical in the module have been installed, tested,
and commissioned, the overall time to build the facility can be reduced. Additionally,
commissioning and testing of the equipment in a factory setting can be more effective, especially when the people who built the module are onsite with the commissioning authority and all are working together to make sure all of the kinks are worked through.
Figure 3: This graph demonstrates the same concept as in Figure 1, but the facility cooling load is at 50%. As the cooling load decreases, the energy use of the different scenarios begins to equalize, especially the one-chiller scenario.
• In between the two choices of site-built and factory-built is the hybrid approach to constructing a module. As the name implies, the hybrid approach uses a combination of factory-built
and site-erected components. There is not one solution for this approach because the
amount of work done on the site, as compared to within the factory, varies greatly from
project to project. A good example of why a hybrid approach would be used is when
there could be difficulty in shipping large pieces of power and cooling equipment that
will be installed in a module. The balance of the HVAC and electrical work could still
be completed at the factory to take advantage of the schedule savings. And
future expansions can be handled the same way, building in quick expansion capability.
Performance comparisons
An advantage of using a modular design approach is obtaining a higher degree of flex-
ibility and maintainability that comes from having multiple smaller chillers, pumps, fans,
etc. When there are multiple redundant pieces of equipment, maintenance procedures
are less disruptive and, in an equipment-failure scenario, the redundant equipment can
be repaired or replaced without threatening the overall operation.
In data centers, the idea of designing in redundant equipment is one of the corner-
stones of critical facility design, so these tactics are well-worn and readily understood
by data center designers and owners. Layering modularization on top of redundancy
strategies just requires the long-range planning exercises to be more focused on how
the design plays out over the life of the build-out.
To illustrate this concept, a new facility could start out with a chilled-water system that
uses an N+2 redundancy strategy where the N becomes the building block of the cen-
tral plant. A biquadratic algorithm is used to compare the different chiller-compressor
unloading curves. These curves essentially show the difference between the facility air
conditioning load and the capability of the compressors to reduce energy use.
In the analysis, each chiller will share an equal part of the load; as the number of
chillers increases, each chiller will have a smaller loading percentage. In general, com-
pressorized equipment is not able to have a linear energy-use reduction as the air con-
ditioning load decreases. This is an inherent challenge in system design when attempt-
ing to optimize energy use, expandability, and reliability. The following parameters were
used in the analysis:
fPLR&dT indicates that EIR is a function of the part load ratio of the chiller and the lift of the compressor (chilled water supply temperature subtracted from the entering condenser water temperature).
Type of curve: biquadratic in ratio and dT
Coefficients:
• c1 = 0.27969646
• c2 = 0.57375735
• c3 = 0.25690463
• c4 = -0.00580717
• c5 = 0.00014649
• c6 = -0.00353007
Figure 4: This graph demonstrates the same concept as in Figure 1, but the facility cooling load is at 25%. As the cooling load decreases, the energy use of the different scenarios begins to equalize, especially the one-chiller scenario. The two- and three-chiller scenarios are already operating at a very small load, so changes in cooling loads will not have a large impact on the efficiency of the system.
Each of the scenarios (Figures 1 through 4) was developed using this approach, and the results demonstrate how the efficiency of the chiller plants decreases as the overall air conditioning load decreases.
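The sketch below shows how such a comparison could be set up. It assumes the coefficients above are applied in the standard six-coefficient biquadratic curve form and that lift is held at an assumed 30°F with equal load sharing; neither assumption is stated in the article, so treat this as an illustration rather than a reproduction of the figures.

```python
# Sketch of the one-, two-, and three-chiller comparison.
# Assumptions (not stated verbatim in the article): the coefficients follow the
# standard biquadratic form
#   f(PLR, dT) = c1 + c2*PLR + c3*PLR**2 + c4*dT + c5*dT**2 + c6*PLR*dT,
# the chillers share the load equally, and the lift dT is fixed at 30 degF.

C1, C2, C3, C4, C5, C6 = (0.27969646, 0.57375735, 0.25690463,
                          -0.00580717, 0.00014649, -0.00353007)

def power_multiplier(plr: float, dt: float) -> float:
    """Chiller power multiplier as a biquadratic function of part-load ratio and lift."""
    return C1 + C2 * plr + C3 * plr ** 2 + C4 * dt + C5 * dt ** 2 + C6 * plr * dt

LIFT_DT = 30.0  # degF lift (entering condenser water minus chilled-water supply); assumed

for facility_load in (1.00, 0.75, 0.50, 0.25):   # Figures 1 through 4
    for n_chillers in (1, 2, 3):                 # N, N+1, N+2 scenarios
        plr = facility_load / n_chillers         # equal load sharing
        print(f"load {facility_load:>4.0%}  {n_chillers} chiller(s)  "
              f"PLR {plr:.2f}  multiplier {power_multiplier(plr, LIFT_DT):.3f}")
```

Running the sketch reproduces the general trend described next: the single-chiller plant tracks the facility load closely, while multi-chiller plants hold flatter multipliers at part load.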
Summary of analysis:
• The N+2 system (three chillers) has the smallest decrease in energy performance when
the overall facility load is reduced from 100% to 25%. This is due to the fact that the
chillers are already operating at a very small load. So large swings in cooling loads will
not have a large impact on the efficiency of the system.
• The N system (one chiller) shows the greatest susceptibility to changes in facility cool-
ing load. The chiller will run at the highest efficiency levels at peak loading, but will drop
off quickly as the system becomes unloaded.
• The N+1 system (two chillers) is in between the N and N+2 systems in terms of sensi-
tivity to changes in facility loading.
When put into practice, similar types of scenarios (in one form or another) will be a part
of many data center projects. When these scenarios are modeled and analyzed, the re-
sults will make the optimization strategies clearer and enable subsequent technical and
financial exercises. The type of modularity ultimately will be driven by reliability, first cost, and operational cost. Because a range of different parameters and circumstances will
shape the final design, a well-planned, methodical procedure will ultimately allow for an
informed and streamlined decision-making process.
The lack of data center capacity, low efficiency, flexibility and scalability, time
to market, and limited capital are some of the major issues today’s building
owners and clients have to address with their data centers. Modular data
centers (MDCs) are well-suited to address these issues. Owners are also
looking for “plug-and-play” installations and are turning to MDCs for the solution. And
why not? MDCs can be up and running in a very short time frame and with minimal
investment while also meeting corporate criteria for sustainability. They have been used
successfully since 2009 (and earlier) by Internet giants, such as Microsoft and Google,
and other institutions like Purdue University.
With that said, Microsoft has recently indicated it is abandoning the use of its version of the MDC, known as information technology pre-assembled components (IT-PACs), because it couldn't expand the data center's capacity fast enough. So which is it? Are MDCs the modern alternative to traditional brick-and-mortar data centers? This contradictory information may have some owners concerned and confused
as they ask if MDCs are right for their building.
The IT capacity of an MDC can vary significantly. Networking MDCs are typically 50
kW or less, standalone MDCs with power, mechanical, and IT systems can range up
to 750 kW, and blade-packed PODs connected to redundant utilities may be 1 MW or
greater.
According to 451 Research, the MDC market is expected to reach $4 billion by 2018,
up from $1.5 billion in 2014. In addition, 451 Research believes that MDCs are strate-
gically important to the data center industry. MDCs are expected to play a significant
role in the next generation of products and technology, as they offer a flexible data
infrastructure that can be built in larger “chunks” than many scalable brick-and-mortar
data centers. MDC designs are improving, and owners and operators are becoming
better educated about the many available options and variations. Due to the growing
interest in MDCs, there has been a surge in suppliers. An Internet search for MDCs yields more than 20 different organizations with varying types of products, and this list
is expected to grow as vendors use technology, innovation, and geography to gain a
competitive edge. MDCs can be purchased or leased in many different configurations.
Leasing options offer significant flexibility in the physical location of the MDC. Some
lease options allow the MDC to reside onsite, or the MDC infrastructure can be leased
from a co-location provider who is delivering services. Comparing MDCs and available
options can quickly become overwhelming. There are containerized, pre-engineered,
and prefabricated versions. IT vendors have MDC products that are complete data
center solutions and are engineered to work specifically with their hardware, but they
typically work with IT products from multiple vendors. Some providers deliver co-loca-
tion and cloud services using MDC technology.
• Flexibility and scalability: With the ability to provide a quick response comes the
agility to scale data center capacity to actual business needs rather than trying to
predict capacity years in advance. MDCs allow power, cooling, and IT capacity to
be deployed when required. Through the use of MDCs, changes in densities, space
requirements, or IT technologies can be easily incorporated rather than retrofitting a
more traditional facility that was constructed years ago to old requirements. MDCs
• Prefabrication: The risk associated with the manufacturing of the MDC lies not with
the owner, but with the manufacturer who is accountable for the cost and perfor-
mance of the MDC.
• Flexible site selection: Is the intent of the MDC to be mobile or will it remain in
one location? MDCs can be disassembled and transported to another site for quick
assembly to provide for contingency operations during a natural disaster or for tem-
porary cloud computing, similar to the needs of the U.S. Army. In some cases, the
availability of materials and the logistics of building a data center in a remote location
With all the benefits of an MDC, it’s easy to forget about some of the negatives, such
as depending too heavily on the design of the MDC for high availability. The MDC will
be operated and maintained by workers, which makes it difficult to eliminate human
error. Security of an MDC is also a significant issue. For example, with the opening
of just one door, you are in the heart of the data center. Also, a bullet can penetrate
the shell or a vehicle can ram into the MDC. Some of the security issues can be ad-
dressed with bollards or berms around the MDC. Containerized data centers are in
an ISO box, which is not aesthetically pleasing and may not be allowed in some business parks.
Regardless of the type of data center construction (brick-and-mortar or MDC), the pur-
chase or lease price of the property, site development costs, and the impact of local
environmental conditions on the energy required for cooling need to be considered in
the site-selection process. Site location can impact labor costs.
If the data center is built in a location that has low labor costs, then the cost savings
of a premanufactured data center may not be realized. Whereas if the labor costs are
high, then the use of MDCs may offer a cost advantage.
The process of reviewing and approving site plans may be simpler for an MDC that is certified by Underwriters Laboratories (UL) and/or Conformité Européenne (CE). This allows the permitting agency to focus only on the installation of the MDC, rather than its internal subsystems. UL 2755, Modular Data Center Certification Scope and Process, addresses issues such as:
• Transportation hazards
• HVAC
• Installed equipment
• Noise exposure.
Since the MDC may be supplied through existing on-premises wiring systems or
through a separate MDC enclosure, the data, fire alarm, communications, control,
and audio/video circuits from the MDC are typically brought into an existing facili-
ty. If the MDC is UL-certified, then the evaluation of the equipment, installed wiring,
lighting, and work space is conducted as part of the listing; only field-installed wiring
is required to be reviewed and comply with NFPA 70. Nonlisted MDCs may also be
installed under Article 646 of the 2014 edition of the NEC; however, all components
must then be installed in accordance with the code.
Choosing between MDCs and traditional data center construction is not a simple
decision. There are some issues where an MDC may not be the right solution or will
require special construction techniques to mitigate, such as security, maintenance
during inclement weather, and the need for redundant utility connections. However,
MDCs can offer multiple advantages that may be crucial to a business’s strategies
and growth. With their lower initial investment and faster deployment, scalability, and
flexibility, MDCs should be considered as a possible solution. MDCs are highly effi-
cient and offer many of the same options that are available through traditional data
center construction. MDCs can be implemented as a turnkey solution, thereby reduc-
ing risk and making them an attractive choice.
Debra Vieira is a senior electrical engineer at CH2M. She specializes in data center
and mission critical environments.
What do a data center and a laundromat have in common? As far as the
International Building Code (IBC) is concerned, they are both considered
“Group B Business Occupancies.” As per IBC Section 304 Business Group
B, both types of businesses have the same basic set of minimum require-
ments to safeguard the general health and welfare of occupants. Group B Business
Occupancies are generically defined as occupancies that include office, professional,
or service-type transactions including storage of records and accounts. Data centers
and laundromats fall under the listed subset business uses of “electronic data process-
ing” and “dry-cleaning and laundries: pickup and delivery stations and self-service,”
respectively.
Why does this matter? All building codes focus on ensuring the health and safety of a
building’s occupants. The purpose of building codes does not include quantifying the
inherent value of your dirty laundry versus data sitting on a computer server. What is
considered “mission critical” by you and a client may not be shared by the authority
having jurisdiction (AHJ). While there are certain exceptions, such as designated critical
operations areas (DCOA) as defined by Article 708: Critical Operations Power Systems
(COPS) of NFPA 70: National Electrical Code (NEC), code considerations typically don’t
extend beyond the health and safety of a building’s occupants.
While the IBC is a far-reaching code encompassing structural, sanitation, lighting, ventilation, and several other areas, life safety considerations in mission critical environments are primarily addressed by NFPA 101: Life Safety Code, which defines occupancy categories that include:
• Assembly
• Business
• Educational
• Day care
• Detention and correctional
• Health care
• Residential
• Storage.
Figure 1: Central offices (CO) and data centers have similar mechanical, electrical, and plumbing (MEP) infrastructure and associated hazards. This photo shows a large, central DC power supply that provides power to telecommunications equipment within a CO. It is functionally equivalent to an uninterruptible power supply (UPS) in a data center. Image courtesy: McGuire Engineers Inc.
The formal definitions for each of these categories can be found in Chapter 6.1 of
NFPA 101. Each of these categories is characterized by the quantity and type of oc-
cupants, the type of hazards to which they may be exposed, and the factors that af-
fect the ability to safely egress those occupants out the building in the event of a fire.
Interestingly, unlike IBC, NFPA 101 does not define a specific occupancy type for data
centers (or self-serve laundromats, for that matter). This does not mean that NFPA 101
does not apply to data centers. Remember that NFPA 101 is not a prescriptive cookbook and requires a certain amount of interpretation to apply it properly.
The primary question is why would there be a difference in a data center’s occupancy
classification between the IBC and NFPA 101? Without a clear definition, it is debatable
as to what a data center is per NFPA 101. Without such guidance, the primary consid-
eration should be an assessment of what occupancy patterns and characteristic haz-
ards are present in a data center environment. Mission critical data centers are charac-
terized by:
• Unusually high power densities, which can easily be more than 100 W/sq ft in the “white space” where the physical server equipment is located, necessitating top-of-row busduct and other similar electrical distribution equipment
• The need for single-shot, total flooding clean agent fire suppression systems that
require compartmentalization to function properly in lieu of traditional water-based fire
suppression systems
• The need for redundant mechanical, electrical, and plumbing (MEP) infrastructure to
ensure continuity of service
• The need to restrict access to the facility to only authorized personnel for security
reasons.
Again, the purpose of NFPA 101 is to mitigate risks associated with safely evacuating
the occupants of a building in the event of a fire. The primary consideration should be
an analysis of “if” and “how” each of these factors impacts NFPA 101's ability to mitigate those risks and, based on that analysis, which occupancy type provides the best fit.
In some cases, the data center might be incidental to the primary function of the build-
ing (i.e., a small server room in a commercial office building), which would allow it to
be classified as part of the larger business occupancy. In other cases, it might be ex-
actly the opposite (i.e., a network operations center within a large containerized data
center). While incidental uses are discussed under NFPA 101’s “Multiple Occupancies”
section 6.1.14.1.3, there is no prescriptive-area-ratio threshold in NFPA 101 to deter-
mine if a usage is “incidental.”
The AHJ may, in some cases, classify the facility as a multiple-occupancy building (part
business and part industrial occupancy) that necessitates a multiple-occupancy des-
ignation. In these cases, the most restrictive requirements would apply if no physical
separation exists, as described by NFPA’s separated occupancy provisions.
NFPA 101’s vague details on the life safety systems that are regularly specified by en-
gineers can cause some confusion. NFPA 101 only mandates whether a particular type of life safety system should be present within a given occupancy type. Although most of
these systems are not discussed in-depth in NFPA 101, understand that when parts
of other codes and standards are directly referenced in a particular section (for exam-
ple, NFPA 2001: Standard on Clean Agent Fire Extinguishing Systems is referenced in
NFPA 101 Chapter 9.8 “Other Automatic Extinguishing Equipment”), they should be
considered integral to the requirements of that section.
If NFPA 101 identifies the requirement for a specific life safety system, the function of
the referenced code or standard is to provide additional detail as to what is acceptable
for the configuration and installation of that system. As such, any referenced codes or
standards should be considered a legally enforced part of NFPA 101.
While potentially required by other building codes, NFPA 101 does not specifically
mandate many of the engineered, life safety-related systems that are typically specified
in a data center environment. Mission critical facility owners require emergency gener-
ators, clean agent fire suppression, early warning fire detection, and similar systems to
minimize the chance of catastrophic damage or disruption to the normal operation of
a very expensive asset. Although these types of elaborate systems may not be specifi-
cally mandated, when provided, they must meet all applicable provisions of NFPA 101.
While the owner’s primary motivation for investing in these systems may be to ensure
business continuity, the engineer's ultimate responsibility is to properly apply the code to areas such as:
• Means-of-egress components
• Emergency lighting
There must be a balance between maintaining a secure environment and allowing safe
egress during an emergency. Many data centers are equipped with security compo-
nents, such as electromagnetic locking devices on doors, “mantrap” vestibules, and
card-operated revolving doors, which may impede the free-egress requirement. While
engineers are often not included in the initial architectural programming decisions that
establish the need for such security components, the supporting life safety provisions
typically fall under the engineering scope of work and, without proper coordination, can
often fall through the cracks.
NFPA 101 uses specific terminology for egress door components that can’t be easi-
ly opened by turning a door lever or pushing a crash bar. This typically falls under the
category of “special locking arrangements,” and the subcategory is “access-controlled
egress door assembly.” This type of egress door is characterized by electric locking hardware that must meet the following requirements (summarized in a brief sketch after this list):
• A sensor must be provided on the egress side to unlock the door upon detection of
an approaching occupant (typically a passive infrared motion sensor above the door).
• The door must automatically unlock in the direction of egress upon loss of power
(i.e., fail-safe).
• The door must be provided with a manual-release device (“push to exit” button or
similar) within 60 in. of the door, and the door must remain unlocked for at least 30
seconds.
• The activation of the fire-protective signaling system automatically unlocks the door.
• Activation of the building’s fire detection or sprinkler system automatically unlocks the
door.
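The unlock conditions above reduce to simple logic. The sketch below models them for illustration only; it is not a substitute for listed (UL 294) access-control hardware, and the field names are hypothetical.

```python
# Sketch: the unlock conditions listed above expressed as simple logic.
# Illustrative only; actual installations require listed UL 294 hardware.
from dataclasses import dataclass

@dataclass
class DoorInputs:
    egress_motion_detected: bool   # PIR sensor on the egress side
    power_available: bool          # power to the electric locking hardware
    manual_release_pressed: bool   # "push to exit" button within 60 in. of the door
    fire_alarm_active: bool        # fire detection / sprinkler system activation

def door_unlocked(inputs: DoorInputs) -> bool:
    return (
        inputs.egress_motion_detected
        or not inputs.power_available      # fail-safe on loss of power
        or inputs.manual_release_pressed   # must then stay unlocked >= 30 seconds
        or inputs.fire_alarm_active
    )

print(door_unlocked(DoorInputs(False, True, False, True)))  # True: alarm unlocks the door
```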
UL 294: Standard for Access Control System Units is also incorporated as a reference
standard. As such, any approved hardware must be UL 294-compliant. UL 294 also
includes a specific product category, FWAX, which pertains to “special locking arrange-
ments” to prevent unauthorized egress. Activating a manual fire alarm pull station is not
required by NFPA 101 to unlock these doors. However, this does not mean that the
The primary function of emergency lighting is to provide adequate illumination for the
path of egress out of a building for its occupants. But what if the building is usually emp-
ty? While NFPA 101 recognizes that many special-purpose industrial occupancies are
normally unoccupied, the engineer also has to consider the characteristic hazards in
a data center environment and determine if omitting emergency lighting from usually
unoccupied buildings impacts the safety of the occupants who may infrequently work
within the space (security, maintenance staff, etc.). This question becomes more per-
tinent in contained data centers (i.e., a “plug-and-play” data center in an intermodal
shipping container) that have high densities of equipment and supporting infrastructure
in an unusually confined environment. This decision to provide emergency lighting be-
comes moot if any type of access-controlled egress-door assembly is provided, which
• Must be able to support the load without being refueled for at least 1.5 hours
Again, the referenced codes and standards should be considered an integral part of
NFPA 101 in the context of the sections in which they’re mentioned. When any partic-
ular type of system is provided, even if not required by NFPA 101, it has to meet the
applicable reference code. Certain standby systems that are characteristic of large
Use of a data center’s UPS system for emergency lighting should be avoided. Al-
though NFPA 76 does have provisions for using a telecommunication facility’s battery
system to power the emergency lighting, this very broad statement can be mislead-
ing and discounts conflicting requirements in other codes and standards. The first
hurdle is that any UPS used for emergency lighting must be listed for central-lighting
inverter duty in accordance with UL 924. This listing is extremely unusual in larg-
er-capacity UPS systems and nonexistent in multi-module UPS systems. Even if ap-
propriately listed, optional standby loads still have to be segregated from emergency
lighting loads in accordance with NEC Article 700.10. Even the requirements for an
“emergency power off” (EPO) button can add further complications. The requirement
for separation, and the prioritization of life safety loads over optional standby loads,
would compromise the primary function of the UPS system to support data center
equipment.
There are many considerations when evaluating a data center’s generator system to
use as an auxiliary source for life safety systems. Most challenges revolve around the
10-second load-acceptance requirement in NFPA 110. While generator-paralleling
control systems have evolved dramatically over the past few years, there are still con-
cerns regarding the use of large, paralleled generator systems for life safety loads.
Larger prime movers (about 2 MW and greater), which are becoming relatively com-
For example, failure of the generator’s emission system will usually cause an automatic
shutdown of the generator system. The cause for the failure may be relatively benign,
such as depletion of diesel exhaust fluid in the selective catalytic reduction (SCR) por-
tion of the emissions system, and wouldn’t necessarily cause damage to the generator
or otherwise affect its ability to generate power. However benign the cause may be, the
fact remains that the ability to support the life safety load would be compromised. A
However, if, for example, a data center does not meet these reduced minimum thresh-
olds for a fire alarm system, the next question would be what characteristic hazards or
other project owner requirements would necessitate the installation of a fire alarm sys-
tem?
The most common project requirement that would trigger the need for a fire alarm sys-
tem is installing a large UPS system. NFPA 1: Fire Code and the International Fire
Code both require the installation of smoke detection when the volume of electrolytes
stored in the batteries reaches a certain threshold, typically 50 or 100 gal depend-
ing on which of these two codes is being followed. This requirement applies to the
valve-regulated lead-acid (VRLA) batteries that are typically used in UPS systems, not
Regardless, clean agent fire suppression systems that are not required by code are
still commonly used to protect the server equipment within the white space. While not
required, if a clean agent fire system is provided, it has to be furnished and installed in
accordance with the applicable codes and standards. NFPA 2001: Standard on Clean
Agent Fire Extinguishing Systems specifically requires automatic detection and actua-
tion by default and requires a fire alarm system for proper operation and supervision.
John Yoon is a lead electrical engineer at McGuire Engineers Inc. and is a member of
the Consulting-Specifying Engineer editorial advisory board.
Engineering clients typically want to investigate and integrate energy-efficient
and sustainable solutions based on return on investment (ROI) or total cost
of ownership (TCO). Enterprise data center owners and operators tend to be
more willing to look at much longer ROI or TCO periods to see benefits. They
are also looking at public-perception benefits as well as how this works into their over-
all business model. Owners and operators of colocation data centers, which provide
access to multiple clients, generally focus on their bottom lines, resulting in shorter ROI
or TCO periods. Although they do care about public perception and energy efficiency,
their primary concern is to attract customers. This does not mean that they aren’t trying
to be sustainable or energy-efficient; they just have a different set of business priorities.
Andy Baxter, PE, is principal, mission critical at Page. Page is a CFE Media con-
tent partner.
In simple thermodynamic terms, heat transfer is the exchange of thermal energy from a system at a higher temperature to one at a lower temperature. In a data center,
the information technology equipment (ITE) is the system at the higher temperature.
The objective is to maintain the ITE at an acceptable temperature by transferring
thermal energy in the most effective and efficient way, usually by expending the least
amount of mechanical work.
Heat transfer is a complex process, and its rate and effectiveness depend on a multitude of factors. The properties of the cooling medium (i.e., the lower-temperature system) are pivotal, as they directly impact the flow rate, the resultant temperature differential between the two systems, and the mechanical work requirement.
The rate at which thermal energy is generated by the ITE is characteristic of the hard-
ware (central processing units, graphics processing units, etc.) and the software it is
running. During steady-state operation, the thermal energy generated equals the rate
at which it is transferred to the cooling medium flowing through its internal compo-
nents. The flow rate requirement and the temperature envelope of the cooling medium
is driven by the peak rate of thermal energy generated and the acceptable temperature
internal to the ITE.
The flow rate requirement has a direct bearing on the mechanical work expended at
the cooling medium circulation machine (pump or fan). The shaft work for a reversible,
steady-state process with negligible change in kinetic or potential energy is equal to ∫vdP, where v is the specific volume and P is the pressure. While the pump and fan processes are nonideal, they follow the same general trend.
Figure 1: A flow diagram for an open-immersion cooling configuration is shown. The ITE is immersed in a liquid bath open to the atmosphere. Image courtesy: Environmental Systems Design Inc.
For data centers, air-cooling systems have been the de facto standard. From the perspective of ITE,
air cooling refers to the scenario where air must be supplied to the ITE for cooling. As
the airflow requirement increases due to an increase in load, there is a corresponding
increase in fan energy at two levels: the air distribution level (i.e., mechanical infrastruc-
ture such as air handling units, computer room air handlers, etc.) and the equipment
level, because ITE has integral fans for air circulation.
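The fan affinity laws explain why this escalation is steep for a fixed system: flow scales with fan speed, but power scales with roughly the cube of speed. A minimal illustration of that relationship:

```python
# Sketch: fan affinity laws on a fixed system curve.
# Flow scales with speed, pressure with speed^2, and power with speed^3,
# so a modest airflow increase is disproportionately costly in fan energy.
def fan_power_ratio(new_flow: float, base_flow: float) -> float:
    """Relative fan power for a fixed system (affinity laws)."""
    return (new_flow / base_flow) ** 3

for fraction in (1.0, 1.2, 1.5, 2.0):
    print(f"{fraction:.0%} of design airflow -> {fan_power_ratio(fraction, 1.0):.2f}x fan power")
```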
Strategies including aisle containment, cabinet chimneys, and in-row cooling units help
improve effectiveness and satisfactorily cool the equipment. However, the fact remains
For extreme load densities typically in excess of 50 to 75 kW/cabinet, the liquid should
preferably be in direct contact with ITE internal components to transfer thermal energy
effectively and maintain an acceptable internal temperature. This type of deployment is
called liquid-immersion cooling and it is at the extreme end of the liquid cooling spec-
trum. Occasionally referred to as “chip-level cooling,” the commercially available solu-
tions can essentially be categorized into two configurations: open immersion, in which the ITE sits in a liquid bath open to the atmosphere (see Figure 1), and sealed immersion, in which the ITE is enclosed in liquid-tight enclosures (see Figure 2).
For both types of systems, thermal energy can be transferred to the ambient by
means of fluid coolers (dry or evaporative) or a condenser. It can also be transferred to
facility water (chilled water, low-temperature hot water, or condenser water) by means
of a heat exchanger.
A number of proprietary solutions are available for immersion cooling, and most pro-
viders can retrofit off-the-shelf ITE to make them compatible with their technology.
Some technology providers are capable of providing turnkey solutions and require
limited to no involvement of the consulting engineer.
Others provide products as “kit of parts” and rely on the consulting engineer to design
the associated infrastructure. For the latter, collaboration between the design team
and the cooling technology provider is critical to project success. The design respon-
sibilities should be identified and delineated early in the project. Note that a compre-
hensive guide for designing liquid cooling systems is beyond the scope of this article.
Once the total ITE load (in kilowatts) and load density (kilowatt/cabinet) have been
defined by the stakeholders, the criteria can be used in conjunction with the design
liquid-supply temperature and anticipated delta T across the ITE, to determine the
flow rate requirement and the operating-temperature envelope. Recommendations for
a liquid-supply temperature and anticipated delta T are typically provided by the tech-
nology provider, and empirical data is preferred over theoretical assumptions. For example, a flow rate requirement of 1 gpm/kW, liquid-supply temperature of 104°F, and anticipated delta T of 10°F was used as the basis of design when deploying a specific technology. Requirements can vary significantly between different providers.
Figure 2: This flow diagram shows a sealed immersion cooling configuration. The ITE is enclosed in liquid-tight enclosures typically under positive pressure. Image courtesy: Environmental Systems Design Inc.
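As a rough cross-check on figures like the 1 gpm/kW example above, the required flow per kilowatt can be estimated from the energy balance q = ṁ·cp·ΔT. The fluid properties in the sketch below are illustrative assumptions, not any provider's published data, which is why the result will differ from a specific vendor's design value.

```python
# Sketch: required flow per kW of ITE load from q = m_dot * cp * dT.
# Fluid properties for the dielectric liquid are illustrative assumptions;
# actual values come from the cooling technology provider.
def gpm_per_kw(delta_t_f: float, density_lb_per_gal: float, cp_btu_per_lb_f: float) -> float:
    btu_per_hr_per_kw = 3412.14
    return btu_per_hr_per_kw / (60.0 * density_lb_per_gal * cp_btu_per_lb_f * delta_t_f)

print(f"water, 10F dT:      {gpm_per_kw(10, 8.34, 1.00):.2f} gpm/kW")  # ~0.68
print(f"dielectric, 10F dT: {gpm_per_kw(10, 7.0, 0.50):.2f} gpm/kW")   # ~1.6 (assumed properties)
```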
Selecting a liquid
The liquid properties impact major facets of the design and should be reviewed in
detail. The mechanical industry is accustomed to working with typical liquids, such as
water, glycol solutions, and refrigerants; deviations associated with unique liquids can
create challenges. Properties such as kinematic viscosity, dynamic viscosity, specific
heat, density, thermal conductivity, the coefficient of thermal expansion, and heat ca-
System pressure drop calculations can also be challenging. One option is to use the
underlying principles of fluid mechanics. For example, the Darcy-Weisbach equation
can be used to estimate the pressure drop through pipes when circulating Newtonian
fluids. For sealed-immersion applications, the pressure drop through the ITE enclosure
is typically supplied by the technology provider. When selecting pumps for liquid cir-
culation, properties like density and viscosity will impact the brake horsepower, head,
flow capacity, and efficiency of the pump. Major manufacturers can provide pump per-
formance when circulating unique liquids. Another option is to estimate performance by
using correction factors, equations, or charts developed by organizations, such as the
Hydraulic Institute, and applying them to standard pump curves developed for water.
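A minimal sketch of the first option, using the Darcy-Weisbach equation with the Swamee-Jain explicit approximation for the friction factor; the fluid properties and pipe geometry are assumed for illustration only.

```python
# Sketch: pipe friction loss via Darcy-Weisbach, with the Swamee-Jain
# approximation for the turbulent friction factor. Properties are assumed.
import math

def pressure_drop_pa(flow_m3s, length_m, dia_m, density, dyn_viscosity, roughness_m=4.5e-5):
    velocity = flow_m3s / (math.pi * dia_m ** 2 / 4)          # mean velocity in the pipe
    reynolds = density * velocity * dia_m / dyn_viscosity
    if reynolds < 2300:                                       # laminar flow
        friction = 64 / reynolds
    else:                                                     # turbulent: Swamee-Jain
        friction = 0.25 / math.log10(roughness_m / (3.7 * dia_m) + 5.74 / reynolds ** 0.9) ** 2
    return friction * (length_m / dia_m) * density * velocity ** 2 / 2

# Example: 3 L/s of a dielectric liquid (rho ~850 kg/m3, mu ~0.01 Pa-s)
# through 30 m of 50 mm steel pipe.
dp = pressure_drop_pa(0.003, 30, 0.05, 850, 0.01)
print(f"{dp / 1000:.1f} kPa  ({dp / 1000 * 0.145:.1f} psi)")  # ~22 kPa, ~3 psi
```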
When the loop design is being established, the compatibility of materials that are ex-
pected to be in direct contact with the liquid should be reviewed. For example, pump
seals and valve seats are frequently constructed of ethylene propylene diene mono-
mer (EPDM), a type of synthetic rubber. However, EPDM is not compatible with petro-
leum-based liquids.
For sealed immersion configurations, pressure limitations of the ITE enclosures must
be considered. For a particular application, the pressure rating was less than 10 psig.
The requirement impacts the elevation of mechanical infrastructure relative to the ITE,
as the static head imposed on ITE needs to be kept to a minimum. Similarly, the pres-
Despite the mechanical advantages, there are reasons for caution when deploying
liquid-immersion cooling in data centers. The impact on infrastructure, such as struc-
tural, electrical, fire protection, and structured cabling, should be evaluated. In a typ-
ical data center, air-cooling systems are still needed as certain ITE, such as spinning
drives, cannot be liquid-cooled. Immersion cooling is still in its nascent stage, and
long-term statistical data is needed for detailed evaluation of ITE and infrastructure
reliability, serviceability, maintainability, and lifecycle costs.
Rittal North America LLC, the U.S. subsidiary of Rittal GmbH &
Co. KG, manufactures industrial and IT enclosures, racks and
accessories, including climate control and power management
systems.
www.rittalenclosures.com
THANK YOU FOR DOWNLOADING CFE MEDIA’S
DATA CENTER DESIGN DIGITAL REPORT!