
2019 Digital Report

Data Center Design


DATA CENTER DESIGN
Click the video link below for an overview on the Data Center Design Digital Report from Consulting-Specifying Engineer, Content Manager, Amara Rozgus.

Welcome to the Digital Report on Data Center Design. I’m Amara Rozgus, editor in chief and content strategy leader with CFE Media.

This special report has been put together with the data center
engineer in mind. Our goal is to give you information to help
you design, specify, and manage data centers, along with other
important aspects of these mission critical facilities.

Data centers, data closets, edge and cloud computing, colocation facilities, and similar topics are among the fastest-changing in the industry. Data centers have high-level power demands, customized climate and cooling needs, unique enclosure systems—and all of this must operate without interruption.

According to recent research of consulting engineers who specify HVAC systems, 43% of them specify, design, or make product selections for data centers. And among HVAC challenges that survey respondents listed, energy efficiency was their No. 1 challenge, with 59% of respondents indicating it was an issue. Learn more about this and other topics within this special report.

Thank you to the authors for sharing their expert knowledge in this collection of articles. And a special thanks to the sponsors for making this Digital Report available.
CONTENT
10 aspects to consider for data center clients 4
Analyzing data centers 8
Metro Data Center Offers Colocation to Dublin, Ohio, and Metropolitan Area 20
Data center design considerations 27
Designing efficient data centers 54
Designing modular data centers 76
How to choose a modular data center 86
Understanding NFPA 101 for mission critical facilities 98
New! Sustainable strategies for data centers 118
New! Designing with liquid-immersion cooling systems 120


10 ASPECTS TO CONSIDER
FOR DATA CENTER CLIENTS
There are 10 common aspects to consider in the analysis of mechanical, electrical, and plumbing systems.

Of all the data center markets throughout North America, Northern Virginia (NoVa) has consistently been the most active due in large part to its history. In the early 1990s, the region played a crucial role in the development of the internet infrastructure, which naturally drew a high concentration of data center operators who could connect to many networks in one place.

NoVa, and especially Loudoun County, Virginia, was made for data centers. With its
abundant fiber, inexpensive and reliable power, rich water supply in an area that does
not experience droughts, and attractive tax incentive programs, it’s ideal for many data
center clients.

There are more than 40 data centers located in Loudoun County, and the majority are in “Data Center Alley,” which boasts a high concentration of data centers and supports about half of the country’s internet traffic. With more than 4.5 million sq ft of data cen-
ter space available and a projected 10 million sq ft by 2021, Ashburn, Virginia, data
centers continue to lead the pack. As Ashburn becomes the site of some of the indus-
try’s most progressive energy-saving initiatives and connectivity infrastructure develop-
ments, there’s no doubt that the region will continue to be a market to watch.

Recently, an increase in competition has been driving technology and innovations throughout the NoVa data center colocation market. With such a competitive landscape, clients are looking at all aspects of their mechanical, electrical, and plumbing (MEP) designs to differentiate themselves from the competition. By looking holistically at clients’ priorities, the firm evaluates various factors during system comparisons and
allows each client to choose the right mechanical and electrical systems to achieve
their overall goals and optimize success. There are ten common aspects to consider in
the analysis of mechanical, electrical, and plumbing systems.

1. First cost
When businesses turn to a colocation provider, and the fiscal benefits of such strate-
gies are only increasing, first costs become a primary motivation. A recent study ex-
plained that rising competition in the colocation sector is leading to price declines in
leasing and creating an extremely client-friendly environment.

2. Energy efficiency
Because power consumption directly drives operating costs, energy efficiency is a big
concern for many businesses. Choosing a data center that integrates the latest tech-
nologies and architecture can help minimize environmental impacts. Innovations like highly efficient cooling plants and medium-voltage electrical distribution systems can help reduce the amount of energy needed to power the building, resulting in a lower power usage effectiveness (PUE).
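For reference, power usage effectiveness (PUE) is the ratio of total facility energy to the energy delivered to the IT equipment, so a value closer to 1.0 means less overhead going to cooling, power conversion, and lighting. The short sketch below only illustrates that ratio; the load figures in it are assumptions for the example, not measurements from any particular facility.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Assumed example figures (not from the article): a 1,000 kW IT load plus
# 400 kW of cooling, distribution losses, and lighting overhead.
it_kw = 1000.0
overhead_kw = 400.0
print(f"PUE = {pue(it_kw + overhead_kw, it_kw):.2f}")  # PUE = 1.40
```

Shrinking the overhead term, for example with more efficient cooling plants or medium-voltage distribution, is what pulls the PUE down.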

3. Reliability
Reliability is a must. If going offline for even a few minutes will have significant financial and business repercussions, then employing MEP solutions that have backup options available in the case of a planned or unplanned outage is essential.



4. Flexibility
Flexibility with scaling systems has been an attractive strategy, particularly with colocation providers. This means adapting to multiple clients’ phasing and making sure design provisions are in place so that construction of a new phase can occur without downtime in active phases. Flexibility is a key component when it comes to meeting your business objectives because it allows your needs to be accommodated at any given time.

5. Redundancy
Providing continuous operations through all foreseeable circumstances, such as power
outages and equipment failure, is necessary to ensure a data center’s reliability. Redun-
dant systems that are concurrently maintainable provide peace of mind that the client’s
infrastructure is protected.

6. Maintainability
Clients want systems that are easily maintainable to ensure their critical assets are running at full speed. System selections should be focused on operational excellence in order to protect customers’ critical power load and cooling resources.

7. Speed to market
Clients’ leases usually hinge on having timely inventory. Clients expect a fast-tracked,
constructible design that is coordinated and installed in a timely manner. Through the
integrated design-build model, long lead items can be pre-purchased in parallel with
designs being completed and coordinated.

8. Scalability
Scalability and speed to market go hand in hand. It’s vital to understand that system infrastructure choices early in design can affect equipment lead times and installation durations for future phases. Also, in order to provide control and save operational costs
during a period of accelerated MEP growth, systems need to be easily scalable to fast-
track additional growth.

9. Sustainability
Customers benefit from solar power, reclaimed water-based cooling systems, water-
less cooling technologies, and much more. Water is becoming a larger consideration
with mechanical system selections. The enormous volume of water required to cool
high-density server farms with mechanical systems is making water management a
growing priority for data center operators. A 15-megawatt data center can use up to
360,000 gallons of water per day. Clients recognize that sustainability is not only good
for the environment, but is also good for their bottom line.
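As a rough scaling exercise only, the sketch below extrapolates the 360,000-gallons-per-day figure cited above for a 15-megawatt facility to other sizes, assuming water use scales linearly with IT load; actual consumption depends heavily on climate, cooling technology, and water-management strategy.

```python
# Linear scaling from the article's single data point (an assumption;
# real water use varies widely with climate and cooling design).
GALLONS_PER_DAY_AT_15_MW = 360_000

def daily_water_gallons(it_load_mw: float) -> float:
    return GALLONS_PER_DAY_AT_15_MW * (it_load_mw / 15.0)

for mw in (5, 15, 30):
    print(f"{mw:>2} MW -> ~{daily_water_gallons(mw):,.0f} gallons/day")
```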

10. Design tolerances
Since 2011, new temperature and humidity guidelines have helped rethink the design
of data centers. Service level agreements (SLAs) are being designed with different lim-
its. That has resulted in more and more innovations with MEP systems within mission
critical facilities.

Mark A. Kosin is vice president, business team leader for the mid-Atlantic division at Southland Industries. This article originally appeared on the Southland Industries blog. Southland Industries is a CFE Media content partner.



ANALYZING DATA CENTERS
Data is the lifeblood of any business or organization—which makes a data center a facility’s beating heart.
Here, engineers with experience on data center projects show how to succeed on such facilities, and how to
keep your finger on the pulse of data center trends.

Respondents
• Robert C. Eichelman, PE, LEED AP, ATD, DCEP, Technical Director, EYP Architecture and Engineering, Albany, N.Y.
• Karl Fenstermaker, PE, Principal Engineer, Southland Engineering, Portland, Ore.
• Bill Kosik, PE, CEM, LEED AP, BEMP, Senior Mechanical Engineer, exp, Chicago
• Kenneth Kutsmeda, PE, LEED AP, Engineering Manager—Mission Critical, Jacobs, Philadelphia
• Keith Lane, PE, RCDD, NTS, LC, LEED AP BD&C, President, Lane Coburn & Associ-
ates LLC, Bothell, Wash.
• Brian Rener, PE, LEED AP, Senior Electrical Engineer, SmithGroupJJR, Chicago
• Mark Suski, SET, CFPS, Associate Director, JENSEN HUGHES, Lincolnshire, Ill.
• Saahil Tumber, PE, HBDP, LEED AP, Senior Associate, Environmental Systems De-
sign, Chicago
• John Yoon, PE, LEED AP, Lead Electrical Engineer, McGuire Engineers Inc., Chicago

CSE: What’s the No. 1 trend you see today in data center design?

Karl Fenstermaker: ASHRAE’s thermal guidelines for data processing centers are becoming more accepted and implemented in the industry. Operating over a wider
range of temperature and humidity conditions requires more attention to detail during
the design and operation of the data center, so as a result, we are seeing more lever-
aging of advanced technology tools, such as computational fluid dynamics for thermal
modeling and data center infrastructure management (DCIM) systems for more precise
monitoring and control of the data center environment.

Keith Lane: We’re seeing modularity, increased efficiency, and flexibility. Most data
center end users require all of these in their facilities.

Brian Rener: Calculated and measured performance, whether on energy efficiency, reliability, or life cycle costs. Owners are seeking verified value for their investment in
the data center facility.



Saahil Tumber: Colocation providers used to be conservative in their approach and
tended to follow standardized designs. However, they are now open to deploying new
technologies and topologies to increase resiliency, improve power-usage effectiveness
(PUE), reduce time to market, reduce cost, and gain a competitive advantage. They
are coming out of their comfort zones. They are also laying emphasis on strategies that
reduce stranded capacity and space. For enterprise clients, there is more collaboration
between various stakeholders (information technology, operations, security, engineer-
ing, etc.). They are not working in silos anymore, but working toward a common goal.
We are seeing consistency in their needs and requirements.

John Yoon: A trend is reduced infrastructure-redundancy requirements for clients that are migrating services to the cloud. A 2N UPS and N+1 computer room air condi-
tioner (CRAC) redundancy used to be commonplace in our designs for corporate head-
quarters building-type data centers. That type of redundancy is now becoming the ex-
ception. The prevailing information technology (IT) mindset seems to be that if mission
critical services are being moved offsite, why invest extra money in redundant infra-
structure (and manpower) for what’s left behind? One significant experience that would
speak to the contrary involved a client that decommissioned their main data center at
headquarters and replaced it with a much smaller server room. The new server room
was provided with no redundancy for the UPS equipment. That UPS was in service
for more than 4 years without an incident. However, one day during a utility blip, the
UPS dropped the critical load because a single battery cell faulted, causing a full bat-
tery-string failure. Although the power interruption was brief and the generator started,
the inability of the UPS to immediately sync to an unstable bypass voltage took down
everything downstream of the UPS—including the core network switches that allowed
headquarters to communicate with the rest of their facilities around the world. Although power was quickly restored via the UPS manual bypass, the reboot of the core switch-
es did not occur smoothly. Communications back to headquarters were knocked out
for nearly a day. Needless to say, executives were not pleased.

CSE: What other trends should engineers be on the lookout for regarding such projects in the near future (1 to 3 years)?

Bill Kosik: There will still be a high demand for data centers. Technology will contin-
ue to evolve, morph, and change. The outlook for new or renovated data centers con-
tinues to be bullish with analysts looking at the industry doubling cloud strategies over
the next 10 years. So, trends will center around lower-cost, higher shareholder-return
data centers that need to address climate change and comply with data-sovereignty
laws.

Kenneth Kutsmeda: A trend that will become more popular in data centers is the
use of lithium batteries. One manufacturer of lithium batteries recently acquired UL
listings (UL 1642: Standard for Lithium Batteries and UL 1973: Standard for Batteries
for Use in Light Electric Rail (LER) Applications and Stationary Applications), and others
will soon follow. Unlike cell phones that use lithium cobalt oxide, which has a high-en-
ergy density and is prone to safety risks when damaged, data center batteries use a
combination of lithium manganese oxide and lithium nickel manganese cobalt oxide,
which has a lower energy density but longer lifecycle and inherent safety features. Ja-
cobs recently completed a project using lithium batteries. The lithium battery has a
more than 15-year lifecycle and requires no maintenance. Lithium batteries provide a
65% space savings and 75% weight reduction as compared with wet-cell batteries.
The lithium battery-management system provides the ability to isolate individual cabinets without taking down the UPS and eliminates the need for a separate monitoring system.

Rener: New metrics on reliability versus the old terms of availability. We are seeing
a move away from prescriptive terms on availability to calculations on reliability using
IEEE. Edge-cooling approaches (local to the server) have become more popular as well
as fluid-based cooling at the rack.

Yoon: We expect to see further densification of server equipment. As recently as 10 years ago, a 45U high rack full of 1U “pizza-box” servers seemed like absurdly high
density. Now, the highest-density blade server solution that I’m currently aware of has
280 blade servers in a 60U high rack—that’s a six-fold increase in density. With these
dramatically higher equipment densities, traditional environmental design criteria just
won’t cut it anymore. Much higher cold/hot-aisle temperatures are becoming the norm.
In the next year or so, we also expect to see an increase in the use of lithium-ion (Li-
ion) in place of valve-regulated lead-acid batteries for systems 750 kVA and larger. The
value proposition appears to be there—they’re lighter, last longer, and more tolerant of
higher temperatures. The one uncertainty is which Li-ion battery chemistry gains dom-
inance. Some chemistries offer high energy densities but at the expense of increased
volatility. The guiding NFPA safety codes and standards haven’t yet evolved to the
point where any significant distinction can be made between these.

CSE: Please describe a recent data center project you’ve worked on—share details about the project, including location, systems engineered, team involved, etc.



Tumber: I’m currently working on a colocation data center campus in Chicago. The
existing building can support 8 MW of IT load. The new 2-story building incorporates
160,000 sq ft of white space and will be capable of supporting 32 MW of IT load. The
data halls are conditioned using outdoor packaged DX units, which use heat pipe for
indirect airside economization. Each unit has a net-sensible cooling capacity of 400
kW, and each one discharges into a 48-in.-high raised-access floor. The electrical de-
sign is based on block-redundant topology and uses a 97%-efficient UPS system.

CSE: Describe a modular data center you’ve worked on recently, including any unique challenges and their solutions.

Yoon: We haven’t seen much in the way of large modular data centers (a la Microsoft
ITPACs). Those seem to be mostly limited to large cloud providers. Our clients typically
prefer traditional “stick-built” construction—simply because the scale associated with
modular data center deployment doesn’t make much sense for them.

Lane: With all of our modular data center projects, we continue to strive to increase
efficiency, lower cost, and increase flexibility. These challenges can be achieved with
good planning between all members of the design team and innovation with prefabrica-
tion. The more construction that can be completed and is repeatable in the controlled
environment of a prefabrication warehouse, the more money can be saved on the proj-
ect.

CSE: What are the newest trends in data centers in mixed-use build-
ings?



Rener: One of the more exciting projects we’ve worked on in a mixed-use building is
the National Renewable Energy Lab—Energy Systems Integration Facility, which is a
182,500-sq-ft energy research lab with supporting offices and high-performance com-
puting (HPC) data center located in Golden, Colo. The IT cabinets supporting the HPC
research component are direct water-cooled cabinets and the cooling system has the
ability to transfer the waste heat from the data center to preheat laboratory outside air
during the winter months. This ability to use waste energy from the data center in other
parts of the building is sure to become an emerging trend in mixed-use buildings with
data centers.

Fenstermaker: One emerging trend is recovering heat from the data center to heat
the rest of the building. This is most commonly employed by using hot-aisle air for the
air side of a dual-duct system or heating air intake at a central AHU. In addition, small-
er data centers are using direct-expansion (DX) fan coils connected to a central vari-
able refrigerant flow (VRF) system with heat-recovery capabilities to transfer heat from
the data center to other zones requiring heating.

Yoon: One of the newest trends is smaller, denser, and less redundancy.

Tumber: Large-scale data center deployments are not common in mixed-use build-
ings as they have unique requirements that typically can only be addressed in sin-
gle-use buildings. One of the main issues is with securing the data center. This is
because even the most comprehensive security strategy cannot eliminate non-data
center users from the premises. For small-scale deployments where security is not a
big concern, a common infrastructure that can serve both the needs of the data center
and other building uses is important to ensure cost-effectiveness. Emphasis is being placed on designs that recover low-grade heat from the data center and use it for other purposes, such as space heating.

CSE: Have you designed any such projects using the integrated
project delivery (IPD) method? If so, describe one.

Tumber: I recently worked on a project that involved wholesale upgrades at the flag-
ship data center of a Fortune 500 company. The data center is located in the Midwest,
and IPD was implemented. The project was rife with challenges, as the data center
was live and downtime was not acceptable. In fact, a recent unrelated outage lasting
30 seconds led to stoppage of production worldwide and caused $10 million in losses.
We worked in collaboration with contractors. They helped with pricing, logistical sup-
port, equipment procurement, construction sequencing, and more during the design
phase. The project was a success, and all project goals were met.

CSE: What are the challenges that you face when designing data
centers that you don’t normally face during other building projects?

Robert C. Eichelman: With few exceptions, data centers serve missions that are
much more critical than those served by other building types. The infrastructure design,
therefore, requires a higher degree of care and thoughtfulness in ensuring that systems
support the mission’s reliability and availability requirements. Most data centers have
very little tolerance for disruptions to their IT processes, as interruptions can result in
disturbances to critical business operations, significant loss of revenue and customers,
or risk to public safety. Most often, the supporting mechanical, electrical, and plumb-
ing (MEP) systems need to be concurrently maintainable, meaning that each and every component has the ability to be shut down, isolated, repaired/replaced, retested, and
put back into service in a planned manner without affecting the continuous operation of
the critical IT equipment. Systems usually have a high degree of fault tolerance as well.
The infrastructure design needs to be responsive to these requirements and most often
includes redundant major components, alternate distribution paths, and compartmen-
talization, among other strategies. Power-monitoring systems are much more extensive
to give operators a complete understanding of all critical parameters in the power sys-
tem. Systems are also more rigorously tested and commissioned and routinely include
factory witness testing of major equipment including UPS, generators, and paralleling
switchgear. MEP engineers also have a larger role in controlling costs. The MEP infra-
structure for data centers represents a much higher percentage of the total building
construction and ongoing operating costs than for other building types, requiring engi-
neers to be much more sensitive to these costs when designing their systems.

Lane: A data center is a mission critical environment, so power cannot go down. We are always striving to provide the most reliable and maintainable data center as
cost-effectively as possible. These projects are always challenging when considering
new and emerging technologies while maintaining reliability.

Rener: Future flexibility and modular growth. IT and computer technologies are rap-
idly changing. Oftentimes during the planning and design of the facility, the owner has
not yet identified the final equipment, so systems need to be adaptable. Also, the own-
er will often have multiyear plans for growth, and the building must grow without dis-
ruption.

Yoon: Managing people and personalities. Most management information systems/IT (MIS/IT) department staff are highly intelligent, extremely motivated people, but they are
not used to being questioned on technical points. This can make the data center pro-
gramming process extremely challenging—and even confrontational at times—when
you’re trying to lock in MEP infrastructure requirements. The key is to remember that
many CIOs and their MIS/IT departments are accustomed to operating with reasonably
high levels of independence within their companies. Many people within their own or-
ganizations don’t understand exactly what the MIS/IT staff members do, only that they
control the key infrastructure that’s critical to the day-to-day operations. If they haven’t
been involved in the construction of a data center before, the MEP engineer is often
viewed as an external threat. The key is to make sure they understand the complemen-
tary set of skills that you bring to the table.

Tumber: The project requirements and design attributes of a data center are different
from other uses. The mission is to sustain IT equipment as opposed to humans. They
are graded on criteria including availability, capacity, resiliency, PUE, flexibility, adapt-
ability, time to market, scalability, cost, and more. These criteria are unique to data
centers, and designing a system that meets all the requirements can be challenging.

CSE: Describe the system design in a colocation data center. With all the different clients in a colocation facility, how do you meet the unique needs of each client?

Lane: The shell in a colocation facility must be built with flexibility in mind. You must
provide all of the components for reliability and concurrent maintainability while allow-
ing the end user to tweak the data center to their own unique needs. Typically, the shell
design will stop either at the UPS output distribution panel or at the power distribution unit (PDU). The redundancy (N, N+1, or 2N) and the specific topology to the servers
can be unique to the end user. Some larger clients will take a more significant portion
of the data center, if timing allows, and they will be able to select the UPS, generator,
and medium-voltage electrical distribution topology.

Tumber: The design of a colocation data center is influenced by its business mod-
el. Powered shell, wholesale colocation, retail colocation, etc. need to be tackled dif-
ferently. If the tenant requirements are extensive, the entire colocation facility can be
designed to meet their unique needs, i.e., built-to-suit. Market needs and trends typ-
ically dictate the designs of wholesale and retail data centers. These data centers are
designed around the requirements of current and target tenants. They offer varying
degrees of flexibility, and any unique or atypical needs that could push the limits of the
designed infrastructure are reviewed on a case-by-case basis.

Fenstermaker: The most important thing is to work with the colocation providers to
fully understand their rate structures, typical contract size, and the menu of reliability/
resiliency they want to offer to their clients in the marketplace. The optimal design solu-
tion for a retail colocation provider that may lease a few 10-kW racks at a time with Tier
4 systems, located in a high-rise in the downtown area of Southern California, is dras-
tically different than another that leases 1-MW data halls in central Oregon with Tier 2
systems. Engineers need to be fully aware of all aspects of the owner’s business plan
before a design solution can be developed.

Yoon: Colocation facilities seem to be evolving into one-size-fits-all commodities.


Power availability and access to multiple carriers/telecommunication providers with
low-latency connections still seem to be how they try to differentiate themselves. However, simple economies of scale give larger facilities the upper hand in these key metrics.

Eichelman: For a colocation data center, it’s important to understand the types of
clients that are likely to occupy the space:

• Is it retail or wholesale space?
• What power densities are required?
• Any special cooling systems/solutions needed for the IT equipment?
• Are there any special physical or technical security requirements?

The specific design solutions need to be responsive to the likely/typical requirements while also being flexible and practical to accommodate other needs that may arise. A
typical approach could include designing a facility with a pressurized raised floor, which
allows for air-cooled equipment while making provisions for hot-aisle or cold-aisle con-
tainment and underfloor chilled water for water-cooled equipment and in-row coolers.
Power distribution could also be provided via an overhead busway system to allow
flexibility in accommodating a variety of power requirements.

The tendency to allow unusual requirements to drive the design, however, should be
carefully considered or avoided, unless the facility is being purpose-built for a spe-
cific tenant. To optimize return on investment, it’s important to develop a design that
is modular and rapidly deployable. This requires the design to be less dependent on
equipment and systems that have long lead times, such as custom paralleling switch-
gear. Designs need to be particularly sensitive to initial and ongoing operational costs
that are consistent with the provider’s business model.



METRO DATA CENTER OFFERS COLOCATION
TO DUBLIN, OHIO, AND METROPOLITAN AREA
Businesses, Schools, and Governmental Agencies Flourish with Dublink 100GB Broadband Network in
Place.

Metro Data Center (MDC) was established in 2011 in Dublin, Ohio. The company is a full-service hosting and data center that specializes in serving the data needs of small- to mid-sized businesses (SMB), as well as local schools and governmental agencies. MDC’s core values center around the 5 C’s: Colocation, Cloud, Connectivity, Consulting and Community.

As part of its commitment to the community, MDC built and maintains the city of Dub-
lin’s Dublink 100GB broadband network, which provides ultra-high-speed connectivity
to local users through 125 miles of fiberoptic lines that run underground throughout
Dublin.

Through several state and regional infrastructure projects, including the Smart Mobili-
ty Ohio Project, the Smart City Initiative, and the Transportation Research Center hub,
MDC has become central to the community.

From cloud hosted services and dedicated servers to the speed and connectivity of the
Dublink 100GB network, MDC offers its clients optimized, secure productivity backed
by on-site systems and 24/7 monitoring and maintenance. Comprehensive resources,
services and expertise assure optimal performance, and the ability to meet and resolve
clients’ needs within minutes. Metro Data Center is a carrier-neutral environment that
also offers a blended internet option to its SMB clients.



Metro Data Center provides scalable fiberoptic network capacity with points of presence (POPs) via Level3, CenturyLink, Oarnet, Spectrum, and XO. MDC specializes in linking SMB organizations to IT solutions that help to grow their business. Some small- and medium-sized businesses are vulnerable to problems caused by lack of IT staff, expertise and solutions. To meet this need, the city of Dublin established the Dublin Entrepreneurial Center, and the TechDEC, a community of entrepreneurs working together to grow each of their businesses. Today, TechDEC hosts more than 150 established companies and startups that are owned, operated and staffed by local residents.

MDC also created a pod-based approach to improve energy efficiency and service
larger customers. Each pod utilizes as many as 20 cabinets and features cold-row con-
tainment. Currently six pods are in use and the facility has the capacity for 12 pod de-
ployments. This pod approach has achieved significant energy efficiency improvements
with measurable kilowatt savings recorded quarter over quarter.

Starting at Zero
When MDC took over the space from a global software security development compa-
ny, it contained zero racks and 32,000 square feet of open space. MDC began to make necessary infrastructural changes to maximize air flow, optimize the floor plan and organize racks to facilitate the company’s rational growth strategy.

To build out its infrastructure, MDC required cabinets that could withstand the physical
demands of holding and protecting servers, cables and power equipment. Rittal, with
a manufacturing facility in nearby Urbana, Ohio, was chosen because of the quality of
its products. Its TS IT enclosures are known globally for their durability and offer a
variety of options to facilitate cooling, cable management and on-the-fly configuration.
After extensive review, MDC decided Rittal was an excellent fit for their infrastructure
requirements.

General Features
Metro Data Center is a top-tier provider and has the physical and colocation capacity to meet the needs of the thriving Dublin-area business and public community. The site spans nearly 55,000 square feet, which includes 5,903 square feet of raised data center floor space and 364 square feet of demarcation locations, yielding 200 total rack spaces
available in server cabinets. The center is able to provide the highest quality infrastruc-
ture services available, including extensive redundancy, stability and security.

Metro Data Center accomplishes this through data colocation with businesses; fully re-
dundant power (2N); carrier neutral access to most central Ohio carriers, and blended
internet services to others. Finally, MDC offers workgroup recovery services including
office space, telephones, computers and conference rooms.

Power Management
One of the strengths of the facility is its state-of-the-art power management plan. This begins with fully redundant (2N) active power transmitted through dual electric feeds from two substations provided by American Electric Power.

Electrical wiring to the facility is supplied via two unique routes through dual transformers, dual switchgear, and dual 625 kVA UPS systems, all of which are autonomous in supplying the center’s (2N) power at 480 volts.

At the rack level, Rittal facilitates power management through dual-corded servers plugged into color-coded power strips integrated into the enclosure. Single-corded servers get in-rack automatic transfer switches for redundant power.

The MDC also uses dual transient voltage surge suppressors (TVSS). The purpose of
TVSS is to prevent damage to data processing and other critical equipment by limiting
transient voltages and currents on electrical circuits. Sources of dangerous transient
currents and voltages can be as rare as lightning strikes, or as common as the switch-
on of elevators, heating, air conditioning, refrigeration or other inductive load equip-
ment.

Site technicians perform automated generator run tests weekly and manual generator
load tests quarterly. They offer enhanced quarterly preventive maintenance on all facility
equipment with full-time, year-round emergency support on all facility equipment, in-
cluding the generators, and UPS and battery units.

Cooling
MDC engineers employed a number of overlapping strategies for facility thermal man-
agement, including the use of N+1-rated glycol pumping stations, dry coolers and Rittal cold row containment systems. The Rittal solution maximizes per-cabinet cooling and lowers the facility’s power usage effectiveness (PUE).

In addition to traditional cold row containment, MDC also deploys an in-rack containment system to handle their pod configurations and other high-density thermal challenges. All thermal management equipment is backed up by generators, including seven 30-ton glycol-based air handlers.

Image: Rittal helps Metro Data Center deliver targeted colocation solutions.

Security
Data Security
Metro Data Center has passed the Statement on Standards for Attestation Engage-
ments (SSAE) Service Organization Control 2 (SOC2) audit from A-lign. The SSAE SOC2/AT-101 was developed to put requirements in place for independent accounting firms to examine and issue reports on controls over subject matter other than financial reporting.

The Service Organization Control (SOC) 2 report is performed in accordance with the
AT 101 and is based on the Trust Services Principles, with the ability to test and report
on the design (Type I) and operating (Type II) effectiveness of a service organization’s
controls, as with SOC 1/SSAE. The SOC 2 report focuses on a business’s non-financial
reporting controls as they relate to security, availability, processing integrity, confidenti-
ality, and privacy of a system, as opposed to SOC 1/SSAE 16 which is focused on the
financial reporting controls, according to the SSAE website.

Physical Security
Metro Data Center uses Honeywell security systems, including integrated video surveillance, full-hand scan biometrics, and interior and exterior infrared cameras. The security system components provide (N+1) security system features for full redundancy.

Cabinet doors are secured using Medeco highly secured locking systems that are
uniquely keyed for each customer. This system installs into the Rittal standard handle
and can be modified or changed as the customer using the rack is changed. The phys-
ical layer of security also allows for a master key function for Metro. This system can be
installed both in the factory as well as by Metro personnel as needed. Closed-circuit TV
(CCTV) camera systems are used for continuous visual surveillance.

Monitoring systems provide accurate information on the state of all infrastructure that is
required to maintain efficient power, cooling and energy usage within the facility. MDC also monitors physical access to the facility with state-of-the-art, full-palm scan biometric control systems and alarms.

Conclusion
With a pod approach to organizing the data center, MDC has created a model in ener-
gy-efficient design that scales to meet the needs of more and more customers without
significant infrastructure investments. Rittal helps ensure that MDC has the cabinet and
cooling solutions that scale to meet growing customer needs.

Through its consulting services the company has delivered IT strategy and deployment
solutions that give SMB customers the ability to focus on their core competencies in
order to grow their business. By targeting SMB customers and fostering local econom-
ic development around IT services and infrastructure, MDC is revolutionizing the role of
data centers and creating a blueprint for the Midwest.



DATA CENTER DESIGN CONSIDERATIONS
This article provides guidelines on distribution systems’ levels of redundancy, the correct generator rating to
use, and whether solar power can be used in a data center.


Over the past several years, mission critical clients seem to be asking the same series
of questions regarding data center designs. These questions relate to the best distribu-
tion system and best level of redundancy, the correct generator rating to use, whether
solar power can be used in a data center, and more. The answer to these questions is
“It depends,” which really doesn’t help address the root of their questions. For every
one of these topics, an entire white paper can be written to highlight the attributes and
deficiencies, and in many cases, white papers are currently available. However, some-
times a simple and concise overview is what is required rather than an in-depth analy-
sis. The following are the most common questions that this CH2M office has received
along with a concise overview.

What is the best system topology?


There isn’t a single “best” system topology. There is only the best topology for an indi-
vidual data center end user. The electrical distribution system for a data center can be
configured in multiple topologies. While the options and suboptions can be myriad, the
following topologies are commonly deployed (see Figure 1).



Figure 1: Conceptual one-line configurations of electrical topologies highlighting redundancy. Image courtesy: CH2M

• 2N: Simply designing twice as much equipment as needed for the base (i.e., N) load and using static transfer switches (STS), automatic transfer switches (ATS), and the information technology (IT) and HVAC equipment’s dual cording to transfer the load between systems. The systems are aligned in an “A/B” configuration and the load is divided evenly over the two systems. In the event of failure or maintenance of one system, the overall topology goes to an N level of redundancy.

• 3M2: This topology aligns the load over more than two independent systems. The
distributed redundant topology is commonly deployed in a “three-to-make-two”
(3M2) configuration, which allows more of the capacity of the equipment to be used
while maintaining sufficient redundancy for the load in the event of a failure (see Fig-
ure 2). The systems are aligned in an “A/B/C” configuration, where if one system fails
(e.g., A), the other two (B and C) will accept and support the critical load. The load is
evenly divided with each system supporting 33.4% of the load or up to 66.7% of the
equipment rating. In the event of a component failure or maintenance in one system,
the overall topology goes to an N level of redundancy. In theory, additional systems
could be supplied, such as 4M3 or 5M4, but deployment can significantly complicate
the load management and increases the probability of operator error.

• N+1 (SR): The shared-redundant (SR) topology concept defines critical-load blocks.
Each block is supported 100% by its associated electrical system. In the event of maintenance or a failure, the unsupported equipment would be transferred to a backup system that can support one or two blocks depending on the design. This backup system is shared across multiple blocks, with the number of blocks supported being left to the design team but typically in the range of 4:1 up to 6:1.

• N+1 (CB): The common-bus (CB) redundant system is like the shared redundant system in that the IT equipment’s A and B sources are connected to an N+1 uninterruptible power supply (UPS) source, but in the event of a failure or maintenance activities, the load is transferred to a raw power source via STS. The raw power source has the capability of being backed up by generators that are required to be run during maintenance activities to maintain the critical load.

Figure 2: Mission critical electrical room showing overhead conduit routing and complexity for a 3.6 MW, three-to-make-two (3M2) distribution system. Image courtesy: CH2M

The above topologies assume a low-volt-
age UPS installation. However, similar systems can be developed using a medium-voltage UPS. Beyond the redundancy configuration, these low-voltage UPS topologies
also can be evaluated on ease-of-load management, backup power generation, their
ability to deploy and commission initially and when expanding, first costs and total cost
of ownership, physical footprint of the equipment comprising the topology, and time to
construct the initial installation as well as expansion of the system.

A commonality between the different topologies presented is the need to transfer load between systems. No matter the system topology, transferring load between electrical systems—either for planned maintenance activities, expansions, or failure modes—must be accommodated. Load management refers to how the load is managed across multiple systems.
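The load-management arithmetic behind these topologies can be summarized in a small sketch. It treats each option as an “x systems to make x-1” arrangement (2N is the x = 2 case) and computes how heavily each system runs normally versus after one system is lost. This is an illustrative model consistent with the 50% and 66.7% figures used in this article, not a design tool.

```python
def utilization(x: int) -> tuple[float, float]:
    """For an x-to-make-(x-1) topology (2N when x = 2, 3M2 when x = 3), return
    (normal utilization, utilization with one system lost) as fractions of each
    system's rating. Each system must be rated for 1/(x-1) of the total load
    and normally carries 1/x of it."""
    rating_fraction = 1.0 / (x - 1)   # capacity each system must provide
    normal_fraction = 1.0 / x         # load each system carries normally
    return normal_fraction / rating_fraction, 1.0

for x, name in ((2, "2N"), (3, "3M2"), (4, "4M3")):
    normal, failed = utilization(x)
    print(f"{name}: {normal:.1%} of rating normally, {failed:.0%} after losing one system")
```

The same arithmetic shows why 4M3 or 5M4 systems squeeze out more capacity (75% and 80% normal utilization) at the cost of more complicated load management.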

2N topology. The premise behind a 2N system is that there are two occurrences of
each piece of critical electrical equipment to allow the failure or maintenance of any
one piece without impacting the overall operation of the data center IT equipment. This
configuration has a number of impacts:

• Load management: Among the topologies presented here, 2N has a relatively sim-
ple load-management scheme. The system will run independently of other distri-
bution systems and can be sized to accommodate the total demand load of the IT
block and associated HVAC equipment, minimizing the failure zone. The primary con-
sideration for load management is to ensure the total load doesn’t overload a single
substation/UPS system.

• Backup power generation: This topology uses a 2N backup generation with the
simplest of schemes: having the generator paired to the distribution block. Each generator is sized for the entire block load and will carry 50% of the load under normal conditions. For large data centers, the option exists to parallel together multiple generator sets to create an “A” backup source and parallel together an equal number of generators to create a “B” backup source, distributing power via two different sets of paralleling switchgear. Typically, this is more expensive due to the addition of paralleling switchgear and controls. Selection of the voltage class usually depends on the size of load as well as physical space and cost to route cable from the generator to the switchgear. The ability to parallel generators tends to be limited by the paralleling switchgear bus ampacity ratings as well as short-circuit ratings. Beyond 6,000 amps at 480 V, consider using 15-kV-class generators (see the sketch following this list for a rough full-load current check).

Figure 3: Installation of feeders and control cables for a 3.6-MW 3M2 distribution system. Image courtesy: CH2M

• Deployment: Each 2N system can be designed to accommodate a discrete IT block. This allows multiple systems to be deployed independently, facilitating pro-
curement, construction, commissioning, and operations with no impact to existing or
future systems.

• First cost/TCO: The 2N system requires twice the quantity and capacity of electri-
cal equipment than the load requires, causing the system to run at nominally 50%
of nameplate capacity. Due to the nature of how electrical equipment operates, this
tends to cause the equipment to run at a lower efficiency than can be realized in oth-
er topologies. An additional impact of the 2N system topology is that the first cost
tends to be greater because of the quantity of equipment. Also, because there are additional systems in place, the ongoing operational and maintenance costs tend to be greater.
be greater.

• Spatial considerations: Because it generally has the most equipment, the 2N con-
figuration typically has the largest physical footprint. However, this system is the sim-
plest to construct as a facility is expanded, thereby minimizing extra work and allow-
ing the facility to grow with the IT demands.

• Time to market: As has been discussed, this system will have more equipment to support the topology; therefore, there may be additional time to construct and commission the equipment. The systems are duplicates of each other, which allows for
construction and commissioning efficiencies when multiple systems are installed,
assuming the installation teams are maintained.
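As referenced in the backup power generation bullet above, a rough full-load current check explains the 6,000-amp guidance. The sketch uses the standard three-phase formula I = S / (√3 × V) with assumed generator sizes; it is illustrative only and ignores power factor, derating, and fault-current considerations.

```python
import math

def full_load_amps(kva: float, volts_line_to_line: float) -> float:
    """Three-phase full-load current: I = S / (sqrt(3) * V)."""
    return kva * 1000.0 / (math.sqrt(3) * volts_line_to_line)

# Assumed example: four paralleled 1,250 kVA generator sets on one bus.
paralleled_kva = 4 * 1250.0
for volts in (480.0, 13_800.0):
    amps = full_load_amps(paralleled_kva, volts)
    print(f"{paralleled_kva:,.0f} kVA at {volts:,.0f} V -> about {amps:,.0f} A")
```

At 480 V the paralleled block already pushes past 6,000 A of bus ampacity, while the same capacity at 13.8 kV draws only a few hundred amps, which is the practical argument for 15-kV-class generation on large installations.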

Distributed redundant (3M2) topology. The premise behind a 3M2 system is that
there are three independent paths for power to flow, each path designed to run at ap-
proximately 66.7% of its rated capacity and at 100% during a failure or maintenance
event. This configuration is realized by carefully assigning load such that the failover is
properly distributed among the remaining systems.

This configuration has a number of impacts to the distribution:

• Load management: The load management for the 3M2 system should be carefully
considered. The load will need to be balanced between the A, B, and C systems to
ensure the critical load is properly supported without overloading any single system.
Load management of a system like this can be aided by a power-monitoring system.

• Backup power generation: This topology follows the normal power flow and uses a
3M2 backup generation where the generator is paired to the distribution block. Each
generator is sized for the entire block load and will carry 66.7% of its capacity un-
der normal conditions. Parallel generator configurations are rarely used for 3M2 sys-
tems. Like 2N systems, the selection of the voltage class depends on the size of load
as well as physical space and cost to route cable from the generator to switchgear
(see Figure 3).

• Deployment: Each 3M2 system can be designed to accommodate a discrete IT block. Expansion within a deployed 3M2 system is exceptionally challenging and difficult, if not impossible, to commission. Deployment of multiple 3M2 systems is the
best option for addressing expansion and commissioning.

• First cost/TCO: The 3M2 system requires about 1.5 times the capacity of electrical
equipment than the load requires and runs at 66.7% of its rated capacity. Because
the equipment is running at a higher percentage, the 3M2 system tends to be more
energy-efficient than the 2N, but less efficient than either of the shared redundant
systems. An additional impact of the 3M2 system topology is that lower-capacity
equipment can be used to support a similar size IT block, thereby causing the system
to have a higher cost per kilowatt to install. However, if the greater capacity is real-
ized by either sizing the IT blocks large enough to realize the benefits of this topology
or by installing two IT blocks on each distribution system, then there will be a lower
first cost. Essentially, the 2N system needs two substations and associated equip-
ment for each IT block while the 3M2 system would need only three substation sys-
tems to support the IT block. First-cost savings is in addition to operational savings because there are fewer pieces of equipment to maintain, and the energy savings is because the equipment is running at a higher efficiency (a simple substation-count comparison appears in the sketch after this list).

• Spatial considerations: Similar to the first-cost discussion above, the spatial layout
can either be smaller or larger than a 2N system depending on how the topology is
deployed and how many IT blocks each system supports.

• Time to market: The balance between the IT blocks supported by each system and
the quantity of equipment will have an impact on the time to market, though the bal-
ance for this system is unlikely to be significant. The additional equipment should
be balanced against smaller pieces of equipment, allowing faster installation time
per unit. The systems are duplicates of each other, which allow for construction and
commissioning efficiencies when multiple systems are installed, assuming the instal-
lation teams are maintained.
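To make the substation-count comparison from the first cost/TCO discussion concrete (see the pointer in that bullet), the sketch below tallies distribution systems needed for a given number of IT blocks. The ratios are an interpretation of this article’s description: two systems per IT block for 2N, and three systems per two IT blocks for 3M2. It is a counting illustration, not a costing model.

```python
import math

def systems_needed(it_blocks: int, topology: str) -> int:
    """Illustrative count of substation/UPS/generator systems.
    Assumed ratios: 2N pairs two systems with every IT block; 3M2 supports
    two IT blocks with three systems."""
    if topology == "2N":
        return 2 * it_blocks
    if topology == "3M2":
        return 3 * math.ceil(it_blocks / 2)
    raise ValueError(f"unknown topology: {topology}")

for blocks in (2, 4, 6):
    print(f"{blocks} IT blocks: 2N needs {systems_needed(blocks, '2N')} systems, "
          f"3M2 needs {systems_needed(blocks, '3M2')}")
```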

N+1 shared redundant (N+1 SR). The premise behind the N+1 SR system is that
each IT block is supported by one primary path. In the event of maintenance or a fail-
ure, there is a redundant but shared module that provides backup support. The shared
module in this topology has the same equipment capacities and configuration as the
primary power system, minimizing the types of equipment to maintain.

For example, if six IT blocks are to be installed, then seven distribution systems (sub-
stations, generators, and UPS) will need to be installed for an N+1 system. This N+1
system can easily be reconfigured to an N+2 system with minimal impact (procuring
eight systems in lieu of seven). This reconfiguration would allow the system to provide
full reserve capacity even while a system is being maintained.

This configuration has several impacts to consider:

• Load management: The N+1 SR system has the simplest load management of to-
pologies presented. As long as the local UPS and generator are not overloaded, the
system will not be overloaded.

• Backup power generation: This topology follows the normal power flow and uses
an N+1 SR backup generation where the generator is paired to the distribution block.
Each generator is sized for the entire block load, with the SR generator also sized to
carry one block. Parallel generation can be used for block-redundant systems. How-
ever, carefully consider the need for redundancy in the paralleling switchgear. True
N+1 redundancy would require redundant paralleling switchgear. However, this level
of redundancy while on generator power may not be required.

• Deployment/commissioning: The deployment of the N+1 SR system is modular because each system functions independently. However, commissioning a new sys-
tem with an existing redundant system may be challenging if the redundancy needs
to be always available for the critical load. In the event of a multiple-fault scenario
(multiple generators failing to operate or multiple UPS failing to support the load while
generators start), the faults will cascade and overload the redundant system. There
are multiple ways to mitigate this risk (load-management tripping breakers or inhibit-
ing the STS), but the concern is valid. Any of the methods implemented to prevent a
cascading failure will cause some IT loads to go offline.

• First cost/TCO: For a large-scale deployment (i.e., exceeding two modules), the
N+1 SR system has the lowest installed cost per kilowatt of the systems explored here that have full UPS protection for both the normal and redundant power distri-
bution systems, due to the lower quantity of equipment. In addition, less equipment
should also result in lower ongoing operation and maintenance costs.

• Spatial considerations: The N+1 SR layout will have the smallest spatial impact.
Additional distribution is required between modules as well as a central location to
house the redundant system.

• Time to market: The balance between the IT-block distribution systems and the quantity of equipment will have an impact on the time to market. However, because the N+1 SR has the smallest quantity of equipment, this configuration potentially has the shortest time to market of any system explored so far. This timing is further supported because the systems are duplicates of one another, which should allow for construction and commissioning efficiencies on subsequent installations, assuming the teams are maintained.

N+1 common bus (N+1 CB). The premise behind the N+1 CB system is there is one
primary path that supports each IT block. This path also has an N+1 capacity UPS to
facilitate maintenance and function in the event of a UPS failure. The system is backed
up by a simple transfer switch system with a backup generator.

This configuration has a number of impacts on the distribution:

• Load management: Similar to the N+1 SR system, the load management for the
N+1 CB is simple. As long as the local UPS/generator combination is not overloaded,
the system will not be overloaded.



• Backup power generation: Like the previous topology, there is a generator paired
to each distribution block including the redundant block.

• Deployment/commissioning: The deployment of the N+1 CB system is modular because each system functions independently. The only location where
existing work has to be tested with the new equipment is on the common bus sys-
tem.

• First cost/TCO: The N+1 CB system potentially has the lowest installed cost per
kilowatt of any of the systems. This lower cost is due to a combination of lower
quantities of UPS and generators coupled with simpler distribution. Additionally, less
equipment means ongoing operation and maintenance costs should be lower as well.

• Spatial considerations: The N+1 CB layout will have a small spatial impact. Additional distribution is required between modules, as well as a central location to house the common bus system (transfer switches and generator).

• Time to market: Similar to the N+1 SR system, the N+1 CB has significantly few-
er pieces of equipment than the 2N or 3M2 systems. This equipment count should
support a faster time to market. However, it is difficult to determine which of the N+1
systems would have a quicker time to market.

The above topology descriptions only highlight a few systems. There are other topol-
ogies and multiple variations on these topologies. There isn’t a ranking system for to-
pologies; one isn’t better than another. Each topology has pros and cons that must be



weighed against the performance, budget, schedule, and the ultimate function of each
data center.

What generator rating should be used for a data center?


Generators need to be able to deliver backup power for an unknown number of hours
when utility power is unavailable. To help select the appropriate generator, manufactur-
ers have developed ratings for engine-generators to meet load and run time require-
ments under different conditions. The International Standards Organization (ISO) Stan-
dard 8528-2005, Reciprocating Internal Combustion Engine Driven Alternating Current
Generating Sets, tries to provide consistency across manufacturers. However, the ISO
standard only defines the minimum requirements. If the generator is capable of a high-
er performance, then the manufacturer can determine the listed rating. To complicate
generator ratings even more, some industries have their own ratings specific to that
industry and application. These various ratings can make selecting the correct genera-
tor type complicated.

There are four ratings defined by ISO-8528:

1. Continuous power: designed for a constant load and unlimited operating hours;
provides 100% of the nameplate rating for 100% of the operating hours.

2. Prime power: designed for a variable load and unlimited running hours; provides
100% of nameplate rating for a short period but with a load factor of 70%; 10%
overload is allowed for a maximum of 1 hour in 12 hours and no more than 25
hours/year.



3. Limited running: designed for a constant load with a maximum run time of 500
hours annually; same nameplate rating as a prime-rated unit but allows for a load fac-
tor of up to 100%; there is no allowance for a 10% overload.

4. Emergency standby power: designed for a variable load with a maximum run time
of 200 hours/year; rated to run at 70% of the nameplate.

The generator industry also has two additional ratings that are not defined by ISO-
8528: mission critical standby and standby. Mission critical standby allows for an 85%
load factor with only 5% of the run time at the nameplate rating. A standby-rated gen-
erator can provide the nameplate rating for the duration of an outage assuming a load
factor of 70% and a maximum run time of 500 hours/year.
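
To show how these run-hour and load-factor limits might be screened during selection, here is a minimal Python sketch; the limits are simplified from the summary above and should be confirmed against ISO 8528 and the manufacturer's data sheet (the mission critical standby rating is omitted because the text does not state an annual hour limit for it):

    # Sketch: screen generator ratings against an expected duty cycle.
    # Limits are simplified from the text; verify with ISO 8528 and the manufacturer.
    RATINGS = {
        "continuous":        {"max_hours": None, "load_factor": 1.00},
        "prime":             {"max_hours": None, "load_factor": 0.70},
        "limited_running":   {"max_hours": 500,  "load_factor": 1.00},
        "emergency_standby": {"max_hours": 200,  "load_factor": 0.70},
        "standby":           {"max_hours": 500,  "load_factor": 0.70},
    }

    def acceptable_ratings(expected_hours: float, expected_load_factor: float) -> list:
        """Return the ratings whose run-hour and load-factor limits cover the duty."""
        ok = []
        for name, limit in RATINGS.items():
            hours_ok = limit["max_hours"] is None or expected_hours <= limit["max_hours"]
            load_ok = expected_load_factor <= limit["load_factor"]
            if hours_ok and load_ok:
                ok.append(name)
        return ok

    # Example: roughly 300 run hours/year at a 65% average load factor.
    print(acceptable_ratings(300, 0.65))  # excludes emergency_standby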

Data center designs assume a constant load and worst-case ambient temperatures.
This does not reflect real-world operation and results in overbuilt and excess equip-
ment. Furthermore, it is unrealistic to expect 100% load for 100% of the operating
hours, as the generator typically requires maintenance and oil changes after every 500
hours of run time. Realistically during a long outage, the ambient temperature will fluc-
tuate below the maximum design temperature. Similarly, the load in a data center is
not constant. Based on research performed by Caterpillar, real-world data center appli-
cations show an inherent variability in loads. This variability in both loads and ambient
temperatures allows manufacturers to state that a standby-rated generator will provide
nameplate power for the duration of the outage and it’s appropriate for a data center
application. However, if an end user truly desires an unlimited number of run hours,
then a standby-rated generator is not the appropriate choice.



What type of transformer is best?

The type of transformer to be used for a data center is constantly questioned and challenged by end users trying to understand whether they should invest in a high-performance transformer. There are two categories of distribution transformers: dry-type and liquid-filled. Within each category, there are several different types. The dry-type category can be subdivided into five types with the following features:

1. Open-wound transformers apply a layer of varnish on heated conductor coils and bake the coils until the varnish cures.

2. Vacuum-pressure impregnated (VPI) transformers are impregnated with a high-temperature polyester varnish, allowing for better penetration of the varnish into the coils and offering increased mechanical and short-circuit strength.

3. Vacuum-pressure encapsulated (VPE) transformer windings are encapsulated with silicone resin, typically applied in accordance with a military spec, and are used in locations exposed to salt spray, such as shipboard applications with the U.S. Navy. VPE transformers are superior to VPI transformers, with better dielectric, mechanical, and short-circuit strength.

4. Encapsulated transformers have open wound windings that are insulated with
epoxy, which makes them highly resistant to short-circuit forces, severe climate con-
ditions, and cycling loads.

5. Cast-coil-type transformers have windings that are hermetically sealed in epoxy to provide both electrical and mechanical strength for higher levels of performance and environmental protection in high-moisture, dust-laden, and chemical environments.

For liquid-filled transformers, various types of fluids can be used to insulate and cool
the transformers. These include less-flammable fluids, nonflammable fluids, mineral oil,
and Askarel.

When put into the context of a mission critical environment, two transformers stand
out: the cast-coil transformer due to its exceptional performance and the less-flamma-
ble liquid-immersed transformer due to its dependability and longevity in commercial
and industrial environments. While both transformer types are appropriate for a data
center, each comes with pros and cons that require evaluation for the specific environ-
ment.

Liquid-filled transformers are more efficient than cast coil. Because air is the basic
cooling and insulating system for cast coil transformers, they will be larger than liq-
uid-filled units of the same voltage and capacity. When operating at the same current,
more material and more core and coil imply higher losses for cast coil. Liquid-filled
transformers have the additional cooling and insulating properties associated with the
oil-and-paper systems and tend to have lower losses than corresponding cast coil
units.

Liquid-filled transformers have an average lifespan of 25 to 35 years. The average lifespan of a cast coil transformer is 15 to 25 years. Because liquid-filled transformers last longer than cast coil, they save on material, labor to replace, and operational impact due to replacement.



Recommended annual maintenance for a cast coil transformer consists of inspection, infrared examination of bolted connections, and vacuuming of grills and coils to maintain adequate cooling. Most times, cleaning of the grills and coils requires the transformer to be de-energized, which often leads to this maintenance procedure being skipped. The buildup of material on the transformer grills and coils can lead to decreased transformer efficiency due to decreased airflow.

Maintenance for a liquid-filled transformer consists of drawing and analyzing an oil sample. The oil analysis provides an accurate assessment of the transformer condition and allows for a scheduled repair or replacement rather than an unforeseen failure. This kind of assessment is not possible on a cast coil transformer. Additionally, omitting the oil sampling does not decrease the transformer efficiency.

Cast-coil-type transformers have a history of catastrophic failures within data centers due to switching-induced transient voltages when switched by upstream vacuum breakers.
There has been significant research by IEEE committees, which resulted in guidelines
for mitigating techniques (i.e., resistive-capacitive [RC] snubbers) published in IEEE
C57.142-2010: IEEE Guide to Describe the Occurrence and Mitigation of Switching
Transients Induced by Transformers, Switching Device, and System Interaction. Liq-
uid-filled transformers seem less susceptible to this problem, as there is no published
data on their failure. Regardless of the transformer type installed, best industry prac-
tice is to perform a switching transient study and install RC snubbers on the systems if
warranted.

When a transformer fails, a decision must be made on whether to repair or replace it.
Cast coil transformers typically are not repairable; they must be replaced. However,
there are a few companies who are building recyclable cast coil transformers. On the
other hand, in most cases, liquid-filled transformers can be repaired or rewound.

When a cast-coil transformer fails, the entire winding is rendered useless because it
is encapsulated in epoxy resin. Because of the construction, the materials are difficult
and expensive to recycle. Liquid-filled transformers are easily recycled after they’ve
reached the end of their useful life. The steel, copper, and aluminum can be recycled.

Cast-coil transformers have a higher operating sound level than liquid-filled transformers. Typical cast coil transformers operate in the 64 to 70 dB range, while liquid-filled transformers operate in the 58 to 63 dB range. The decibel scale is logarithmic, and sound power doubles for every 3-dB increase.
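
For a rough sense of what that difference means, a small Python sketch can convert a decibel difference into a sound-power ratio using the standard base-10 relationship; the levels used below are simply the midpoints of the ranges quoted above:

    # Sketch: sound-power ratio implied by a decibel difference.
    # A 3-dB increase roughly doubles sound power: 10 ** (3 / 10) is about 2.
    def power_ratio(db_difference: float) -> float:
        return 10 ** (db_difference / 10)

    cast_coil_db = 67.0       # midpoint of the 64 to 70 dB range above
    liquid_filled_db = 60.5   # midpoint of the 58 to 63 dB range above
    print(round(power_ratio(cast_coil_db - liquid_filled_db), 1))  # ~4.5x the sound power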

Liquid-filled transformers have less material for the core and coil and use highly effective oil-and-paper cooling systems, which allow them to have a smaller physical footprint and weigh less than the corresponding cast coil unit. Because cast coil transformers are air-cooled, they are often larger than their liquid counterparts assuming the same voltage and capacity (kVA rating). Cast coil transformers have more core material, which implies higher costs and losses.

Dry-type transformers have the advantage of being easy to install, with fire-resistance and environmental benefits. Liquid-filled transformers have the distinct disadvantage of requiring fluid containment. However, advances in insulating fluids, such as Envirotemp FR3 by Cargill, a natural ester derived from renewable vegetable oil, are reducing the advantages of dry-type transformers.

For indoor installations of transformers, cast coil must be located in a transformer room
with minimum 1-hour fire-resistant construction in accordance with NFPA 70-2017: Na-
tional Electrical Code (NEC) Article 450.21(B). However, if less-flammable liquid-insu-
lated transformers are installed indoors, they are permitted in an area that is protected
by an automatic fire-extinguishing system and has a liquid-confinement area in accor-
dance with NEC Article 450.23.

Traditionally, less-flammable liquid-filled transformers are installed outdoors. However, both types can be installed outdoors. This option of outdoor installation has the additional advantage of reducing data center cooling requirements. In this case, cast coil transformers need to have a weatherproof enclosure and cannot be located within 12 in. of combustible building materials per NEC Article 450.22. The liquid-filled transformer must be physically separated from doors, windows, and similar building openings in accordance with NEC Article 450.27.



The choice between a cast coil and a less-flammable liquid-filled transformer can be a
challenging one to make. A liquid-filled transformer is a solid choice for a data center
application because it is more efficient, physically smaller and lighter, quieter, recycla-
ble, and has a longer lifespan. However, if the demand for high electrical and mechan-
ical performance is of the utmost concern, then cast coil would be the appropriate
choice.

What IT distribution voltage should be used?


By now it’s well understood in the data center industry that 3-phase circuits can pro-
vide more power to the IT cabinet than a single-phase circuit. However, the choice of
distribution voltage between 208 Y/120 V or 415 Y/240 V depends on the answers to
several questions, such as:

• How much power needs to be delivered to each IT cabinet initially, and what does
the power-growth curve look like for the future?

• What are the requirements of the IT equipment power supplies?

• Will legacy equipment be installed in the data center?

• Can the facilities team decide on the power supplies to be ordered when new IT
equipment is purchased?

Let’s start with the power of a 3-phase circuit. A 208 Y/120 V, 3-phase, 20-amp circuit can power up to a 5.7-kVA cabinet. Per NEC Article 210.20, branch-circuit breakers can be loaded to 80% of their rating, assuming it’s not a 100%-rated device. Therefore, a 208 V, 3-phase, 20-amp circuit can power a cabinet up to 5.7 kVA (20 amps x 0.8 x √3 x 208 V). Now, if that same 20-amp circuit were operating at 415 Y/240 V, 3-phase, then that circuit could power a cabinet up to 11.5 kVA (20 amps x 0.8 x √3 x 415 V). That’s more than twice the power from the same circuit for no extra distribution cost.
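
A minimal Python version of the same arithmetic, assuming the 80% continuous-load limit noted above (i.e., not a 100%-rated breaker), looks like this:

    # Sketch: usable 3-phase circuit capacity, per the 80% rule discussed above.
    import math

    def circuit_kva(voltage_ll: float, breaker_amps: float, continuous_factor: float = 0.8) -> float:
        """Line-to-line voltage (V) and breaker rating (A) to usable kVA."""
        return breaker_amps * continuous_factor * math.sqrt(3) * voltage_ll / 1000

    print(round(circuit_kva(208, 20), 1))  # 5.8 kVA (the text rounds to 5.7) on a 208 Y/120 V circuit
    print(round(circuit_kva(415, 20), 1))  # 11.5 kVA on a 415 Y/240 V circuit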

If the specifications for the IT equipment can be tightly controlled, the decision to stan-
dardize on 415 Y/240 V distribution is a pretty simple one. However, if the IT environ-
ment cannot be tightly controlled, the decision is more challenging. Currently, most IT
power supplies have a wide range of operating voltage, from 110 V to 240 V. This al-
lows the equipment to be powered from numerous voltage options while only having to
change the plug configuration to the power supply. However, legacy equipment or spe-
cialized IT equipment may have very precise voltage requirements, thereby not allowing
for operation at the higher 240 V level. To address this problem, both 208 Y/120 V and
415 Y/240 V can be deployed within a data center, but this is rarely done as it creates
confusion for deployment of IT equipment.

The follow-on question typically asked is if the entire data center can run at 415 V, rath-
er than bringing in 480 V and having the energy loss associated with the transformation
to 415 V. While technically feasible, the equipment costs are high because standard
HVAC motors operate at 480 V. Use of 415 V for HVAC would require specially wound
motors, thus increasing the cost of the HVAC equipment.

Must we install an emergency power-off system?


Emergency power-off (EPO) buttons are the fear of every data center operator. With



the push of a button, the entire data center power and cooling can be shut down. Because of the devastation that activation of an EPO can cause, EPOs typically are designed with a two- or three-step activation process, such as lifting a cover and pressing the button or having two EPO buttons that must be activated simultaneously. These multistep options assume that the authority having jurisdiction has provided approval for such a design. However, EPOs are not necessarily required. The need for an EPO is typically triggered by NEC Article 645.10, which allows alternative and significantly relaxed wiring methods in comparison with the requirements of Chapter 3 and Articles 708, 725, and 770. These relaxed wiring methods are allowed in exchange for adding an EPO system and ensuring separation of the IT equipment’s HVAC occupancies from other occupancies. The principal benefit of using Article 645.10 is to allow more flexible wiring methods in the plenum spaces and raised floors. However, if the wiring is compliant with Chapter 3 and Articles 708, 725, and 770, the EPO is not required.

Figure 4: Rendering of a CH2M design of a data center, conference center, and office buildings for Saudi Airlines. Image courtesy: CH2M

Can we use photovoltaic systems to power our data center?


Corporations and data center investors are demanding sustainability be built into the
data center. The positive impact on public relations by showcasing a sustainable data



center shouldn’t be underestimated, especially considering how much of a power hog data
centers can be. Additionally, many utility companies will offer incentives for the use of
energy-efficient and sustainable technologies. An often-questioned item is whether
photovoltaic (PV) systems can be used to meet some of the sustainability requirements
in a data center environment. The answer is yes, but a good understanding of PV sys-
tems and the limitations and impacts on a data center are required prior to making the
investment (see Figure 4).

The power production of PV equipment varies considerably depending on the type and location of the system installed. There are three main types of solar panel technologies.
Crystalline silicon (c-Si) is the most common PV array type, along with thin-film and
concentrating PV. Thin-film is generally less efficient than c-Si, but also less expensive.
Concentrating PV arrays use lenses and mirrors to reflect concentrated solar energy
onto high-efficiency cells. Concentrating PV arrays require direct sunlight and tracking
systems to be most effective and are typically used by utility companies.

Solar cells are not 100% efficient. Light in the infrared region of the spectrum carries too little energy to create electricity, and in the ultraviolet region much of the energy is converted to heat instead of electricity. The amount of power that can be generated with a PV array also varies with the average sunshine (insolation, or the delivery of solar radiation to the earth’s surface) along with the temperature and wind. Typically, PV arrays are rated at 77°F, allowing them to perform better in cold rather than in hot climates. As temperatures rise above 77°F, the array output decays (the amount of decay varies by type of system). Ultimately, what this means is that the power generation of an array can vary over the course of a day and year. Added to this are the inefficiencies of the inverter and, if used, storage batteries.
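
As an illustration of that temperature sensitivity, the hedged sketch below applies a simple linear derating above the 77°F rating point; the coefficient is an assumed placeholder, not a figure from this report, and real values vary by module type:

    # Sketch: linear temperature derating of PV output above the 77 F rating point.
    # The -0.25%/deg F coefficient is an assumed placeholder; check the module data sheet.
    def derated_output_kw(rated_kw: float, cell_temp_f: float,
                          coeff_per_f: float = -0.0025, rating_temp_f: float = 77.0) -> float:
        derate = 1 + coeff_per_f * max(0.0, cell_temp_f - rating_temp_f)
        return rated_kw * derate

    print(round(derated_output_kw(1000, 95), 1))  # a 1,000-kW array at 95 F -> 955.0 kW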



Figure 5: The photo shows the outdoor installation of the power electronic switch of the medium-voltage UPS at Michigan State University. Image courtesy: CH2M

The physical space required to install the PV array can be significant. A simple rule is to assume 100,000 sq ft (about 2.5 acres) for a 1-MW PV-generating plant. However, this does not include the space required for access or other ground-mounted appurtenances. The total land required is better estimated at about 4 acres per MW. This estimate assumes a traditional c-Si PV array (without trackers). Increase this area by roughly 30%, to a total of about 6 acres per MW, if thin-film technology (without trackers) is used, due to the inefficiencies of the technology.
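
A back-of-the-envelope Python version of that land rule of thumb, using only the acres-per-megawatt figures quoted above (planning estimates, not a siting study), might look like this:

    # Sketch: rough PV land requirement from the rules of thumb quoted above.
    ACRES_PER_MW = {
        "c-Si, no trackers": 4.0,       # includes access and ground-mounted appurtenances
        "thin-film, no trackers": 6.0,  # larger footprint due to lower efficiency
    }

    def land_acres(plant_mw: float, technology: str) -> float:
        return plant_mw * ACRES_PER_MW[technology]

    print(land_acres(1.5, "c-Si, no trackers"))       # 6.0 acres
    print(land_acres(1.5, "thin-film, no trackers"))  # 9.0 acres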

A PV system may or may not provide power during a utility power failure, depending on
the type of inverter installed. A standard grid-tied inverter will disconnect the PV sys-
tem from the distribution system to prevent islanding. The inverter will reconnect when
utility power is available. An interactive inverter will remain connected to the distribution
system, but it is designed to only produce power when connected to an external pow-
er source of the correct frequency and voltage (i.e., it will come online under genera-
tor power). Typically, interactive inverters include batteries to carry the system through
power outages, therefore the system should be designed such that there is enough
PV-array capacity to supply the load and charge the batteries.



Most data centers do not have the necessary land to install a PV system that substantially offsets the power demand. Then there is the question of what happens when the PV system is generating low or no power. Interactive inverters and deep-cycle storage batteries can be installed to cover these low-PV production periods, but this introduces new equipment, maintenance, and space requirements into the data center, thus creating more costs and more maintenance than may have been originally envisioned. Generally, data center sustainability is addressed more directly through efficient cooling and electrical distribution systems. Sustainability achieved through solar power, while nice to have, is generally not the focus of data center investments.

Figure 6: The photo shows the outdoor installation of the switchgear in an enclosure used in conjunction with the power electronic switch. Image courtesy: CH2M

The trend is to provide a PV system that offsets some of the noncritical-administration power usage. These systems are typically small (less than 500 kW) and can be located on building rooftops, carports, and on the ground. They use a standard grid-tied inverter connected through the administration electrical distribution system, which ultimately ties into the site-distribution switchgear where the utility meter resides. A grid-tied inverter system will disconnect from the utility if there is a failure or when on generator power.



Because the grid-tied inverter connection is downstream from the utility revenue meter, a billing mechanism known as net metering generally is used. With net metering, owners are credited for any electricity they add to the grid when the PV production is greater than the site usage. In most data centers, however, the critical load dwarfs the noncritical load; therefore, it’s rare that a PV system would feed power back onto the grid. There are differences between states and utility companies regarding the implementation, regulations, and incentives for net metering. Furthermore, some utility companies perceive net metering as lost revenue and will not allow connection to their system.

Figure 7: The photo shows the outdoor installation of the medium-voltage UPS system with batteries in an enclosure equipped with a heating and ventilating system and integrated air conditioning. Image courtesy: CH2M

A great resource for PV and renewable energy in general is the National Renewable Energy Laboratory (NREL). The NREL website provides information on PV research, applications, and publications, as well as a free online tool, PVWatts, which estimates the energy production and cost of energy of grid-tied PV systems throughout the world. The PVWatts tool easily develops estimates of potential PV-installation performance.

A medium-voltage alternative to low-voltage UPS
Design topology evaluation also should
consider the medium-voltage uninter-
ruptible power supply (UPS). Like the



topologies using the low-voltage UPS, the medium-voltage UPS can be deployed in
2N, N+1, and 3N/2 configurations. Regardless of the topology used, medium-voltage
UPS systems offer advantages over low-voltage UPS systems. They generally are in-
stalled outdoors in containers, thereby minimizing the conditioned building footprint.
While not required, medium-voltage UPS topologies are typically used for full-facility
protection rather than using independent information technology and mechanical-cool-
ing UPS systems, further reducing the building footprint. Medium-voltage UPS systems
are large systems, starting at 2.5 MVA and scalable up to 20 MVA per UPS. Different
manufacturers have different voltage offerings, but medium-voltage UPS systems can
range from 5 kV up to 25 kV, with medium-voltage diesel rotary UPS systems going as
high as 34.5 kV.

In early 2018, Michigan State University is expected to complete construction on a new 25,000-sq-ft data center with 10,600 sq ft of server space and initially hosting about
300 server racks. This $46 million facility will use a medium-voltage UPS system, start-
ing with 2.5 MW of critical power and the ability to increase in critical power as needed.
The utility infrastructure is built to support an increase of load up to 10 MW. Figures
5, 6, and 7 highlight the outdoor power electronic switch, switchgear, and the medi-
um-voltage UPS installed by the university.

Debra Vieira is a senior electrical engineer at CH2M, Portland, Ore., with more than
20 years of experience for industrial, municipal, commercial, educational, and military
clients globally.



DESIGNING EFFICIENT DATA CENTERS
In today’s digital age, businesses rely on running an efficient, reliable, and secure operation, especially with
mission critical facilities such as data centers. Here, engineers with experience on such structures share ad-
vice and tips on ensuring project success.

Respondents

• Doug Bristol, PE, Electrical Engineer, Spencer Bristol, Peachtree Corners, Ga.
• Terry Cleis, PE, LEED AP, Principal, Peter Basso Associates Inc., Troy, Mich.
• Scott Gatewood, PE, Project Manager/Electrical Engineer/Senior Associate, DLR Group, Omaha, Neb.
• Darren Keyser, Principal, kW Mission Critical Engineering, Troy, N.Y.
• Bill Kosik, PE, CEM, LEED AP, BEMP, Senior Engineer – Mission Critical, exp, Chicago
• Keith Lane, PE, RCDD, NTS, LC, LEED AP BD&C, President, Lane Coburn & Associates LLC, Seattle
• John Peterson, PE, PMP, CEM, LEED AP BD+C, Program Manager, AECOM, Washington, D.C.
• Brandon Sedgwick, PE, Vice President, Commissioning Engineer, Hood Patterson & Dewar Inc., Atlanta
• Daniel S. Voss, Mission Critical Technical Specialist, M.A. Mortenson Co., Chicago



CSE: What’s the biggest trend you see today in data centers?

Doug Bristol: I’m seeing increasing emphasis on modularity and build-as-you-go to minimize the initial expense.

Terry Cleis: Designing overall systems that are focused at the rack level. These de-
signs include targeted rack-level cooling and row containment for hot or cold areas.
Some of these systems can be designed to provide flexible levels of cooling to match
changing needs for individual racks. These designs include rack-mounted monitoring
for temperature and power and associated power and cooling systems designed to
cover a predetermined range of equipment. These systems also often allow for raised
floor elevations to be minimized or even removed. This enables any space that is below
the floor to be used for other systems with less concern on air movement.

Scott Gatewood: Beyond reliability and durability, efficiency and scalability remain
top priorities for our clients’ infrastructures. Although this is not a new revelation, the
means and methods of achieving them through design and information technology (IT)
hardware continue to evolve. Data center energy (with an estimated 90 billion kWh of
data center energy waste this year, according to the Natural Resources Defense Coun-
cil) remains a key operational cost-management goal. The tools, methods, and hard-
ware needed to reduce energy continue advancing. The Internet of Things (IoT) has
entered the data center with data center infrastructure-management (DCIM) software,
sensors, analytics, and architectures that closely couple cooling and energy recovery,
providing energy efficiencies rarely achievable just 6 years ago. With increased auto-
mation, managing the plant is increasingly achievable from remote locations, just as
the IT infrastructure has been. Scalability also remains critical to our clients. How this



is achieved also continues to evolve. For businesses seeking innovative advantages
through speed to market, modular approaches using pre-engineered scaled solutions
with fast deployment continue to grow. Although not for everyone or every site, more
options exist to scale rapidly than ever before.

Bill Kosik: Over the past 10 years, data center design has evolved tremendous-
ly. During that maturation process, we have seen trends related to reliability, energy
efficiency, security, consolidation, etc. I don’t believe there is a singular trend that is
broadly applicable to data centers like the trends we’ve seen in the past. They are
more subtle and more specific to the desired business outcome; data center-plan-
ning strategies must include the impacts of economical cloud solutions, stricter capital
spending rules, and the ever-changing business needs of the organization. Data cen-
ters are no longer the mammoth one-size-fits-all operation consolidated from multiple
locations. We see that one organization will use different models across divisions, es-
pecially when the divisions have very diverse business goals.

Keith Lane: Some of the new trends we see in the industry include striving for in-
creased efficiency and reliability without increasing the cost. Efficiency can be gained
with better uninterruptible power supply (UPS) systems, proper loading on the UPS,
transformers, and increased cold-aisle temperatures. Additionally, a proper evaluation
of the specific critical loads and the actual required redundancies can allow some of
the loads to be fed at 2N, some at N+1, others at N, and others with straight utility
power. Allowing this type of evaluation to match specific levels of redundancy/reliability
with actual load types can significantly increase efficiency.

John Peterson: We are seeing a continuation of the many trends that have been



happening in the industry over the past few years. For the more cutting-edge, power
density and temperature ranges move higher while infrastructure moves toward be-
coming more automated and software-defined. Modularity for scalability is more popu-
lar. Enterprises are mimicking the more agile IT environments that large cloud providers
have established as the new paradigm. Edge computing continues to grow, and with
that, support will be needed. Clients will be balancing bandwidth and storage to deploy
in quantities that are closer to what they need.

Brandon Sedgwick: The biggest trends we see in data centers today are meg-
asites and demand-dependent construction. In this highly competitive market, min-
imizing cost per megawatt of installed capacity is a priority for data center owners,
which is why megasites spanning millions of square feet with hundreds of megawatts
of capacity are becoming more common. Borrowing a page from just-in-time manu-
facturing principles, these megasites (and even smaller facilities) are designed to be
built or expanded in phases in response to precontracted demand to minimize upfront
capital expenditure and expedite time to market. Consequently, these phased projects
often demand compressed construction schedules with unyielding deadlines driven by
financial penalties for the owner. This has led to simpler or modular designs to expedite
construction, maximize capacity, and reduce costs while allowing flexible redundancy
and maintainable configurations to meet individual client demands.

Daniel S. Voss: We’re noticing large colocation providers with faster speed-to-
market construction and implementation. There is a high level of competition between
the major countrywide colocation providers to have the ideal space with all amenities
(watts per square foot, raised access floor, security, appropriate cooling, etc.) ready for
each new client and customer.



CSE: What trends and technologies do you think are on the horizon
for such projects?

Kosik: Information and communications technology (ICT), particularly high-end enterprise servers, continues to evolve by increasing the computing power while simultane-
ously reducing energy use. The robust workloads that run on these machines are de-
signed to take advantage of the increased productivity, so even though the computing
efficiency has increased, the overall power consumption also increases. This leads to a
greater electrical-demand density (watts per square foot) across the data center and a
greater electrical density at the server-cabinet level (watts per cabinet).

Gatewood: In addition to the plant infrastructure, we tend to watch emerging IT infrastructure trends for their potential effects on the future of the physical environment.
Here, the landscape continues its rapid change. Beyond the megatrends of the cloud/
hybrid, edge computing, and security, we see changes in storage and networking
technologies that will alter the personalities of the white space with more storage equip-
ment. Due to the vastly larger amounts of data production from IoT and video applianc-
es, combined with costs and performance increases, data center and edge storage will
explode and change the IT footprint of the white space.

Voss: There are really two trends. The first is using existing, heavy industrial buildings
and repurposing them for data centers. To increase the speed to market, many owners
and constructors are eyeing containers and containerization for electrical, mechanical,
and IT disciplines. The second involves building hyperscale data centers with 20 MW
or more of critical IT computing power. Many large colocation providers are construct-



ing multibuilding campuses with total campus capacity exceeding 50 MW of critical IT
compute power.

Cleis: We’re seeing targeted cooling with more options including water and refriger-
ant for racks. Better options for the piping distribution associated with these systems
will continue to evolve to make the work associated with ongoing maintenance and
future changes better suited to take place in a data center environment. We have own-
ers asking for more modular designs and designs that will prevent issues like software/
firmware problems that can ultimately shut down entire systems. These include smaller
UPS systems or using multiple UPS manufacturers. Smaller systems can be located
closer to the loads and allow equipment upgrades or replacements associated with
failures without affecting the entire facility. Replacement and repairs to smaller compo-
nents can also help reduce costs associated with ongoing maintenance and repairs.

Sedgwick: One trend we are seeing more frequently is that IT is leveraging methods,
such as virtualization, that can be used to “shift” server processes from one location to
another in the event of a failure, to offset physical power-delivery system redundancy.
This allows engineers to streamline infrastructure design by reducing power transfor-
mations between incoming sources and the load, simplifying switching automation,
and minimizing—or even eliminating—UPS and backup generation. Simpler power-de-
livery systems consume less square footage, are faster to build, and free up more of a
facility’s footprint for white space.

Peterson: Liquid and immersion cooling is likely to grow in the coming years. As
power densities increase and the costs and implementation challenges are solved,
liquid and immersion cooling practices can start to develop, as efficiency is still a prime



factor for operations. Surveys have shown that enterprise businesses will be continuing
or expanding their investment in hybrid or cloud solutions. This indicates that the soft-
ware-defined data center market is still growing and that it won’t matter where the data
centers are or who owns and operates them. As DCIM becomes implemented in more
comprehensive ways, we’ll see improvements that are a step or two away from being
automated.

Bristol: Lithium-ion batteries appear to be ready for prime time.

CSE: What are engineers doing to ensure data centers—new and in existing structures—meet the challenges associated with emerging technologies?

Cleis: Engineers and designers should always keep an open mind and spend time
researching and reading to stay informed about evolving design and system innova-
tions. I find that good ideas very often come from owners and end users during the
early programming stages of the design process. Many owners and end users have a
solid technical background and a historic understanding of how data centers operate.
Most of these people spend a lot of time working in data centers, which enables them
to bring an insightful perspective. They are able to inform us what systems are reliable
and have worked well for them in the past, and what systems have given them prob-
lems. They also provide the design team with ideas for how to make the systems func-
tion better.

Voss: With increasing IT power densities, cooling and power can become limiting fac-
tors in optimizing the built environment. Additionally, data center customers use varying



electrical distribution topologies, and facilities need to be designed to accommodate
these different needs. A goal is to create a flexible design that can evolve with differing
customer requirements and emerging technologies. These designs also need to accept
modular construction with traditional building materials and methods and provide the
necessary landing/connection points for containers.

Lane: It is incumbent for engineers who provide design and engineering service for
mission critical facilities to keep up with technology and with the latest data center
trends. Our company has vendors present the latest technology to us; we belong to
several professional organizations, read numerous industry magazines, and provide ex-
tensive independent research on codes, design standards, and emerging technologies.

Peterson: In most cases of data centers up to 20 years old, revisions to the exist-
ing data center are possible to allow increases in density. Specialized cooling systems
have allowed for increased density, which is often in localized areas of a legacy data
center. With more choices of adaptable air segregation and other means to decrease
bypass air, older data centers can control hot spots and better serve future needs.
For new designs, data center layouts are often being coordinated for specific densities
that work with their common operation within a certain power, space, and cooling ra-
tio. Some new facilities are aiming for the flexibility of more direct liquid (water) cooling
and are willing to invest in the upfront coordination and installations to meet their future
needs.

Gatewood: Emerging technologies are difficult to predict accurately. I recall the 1995
white paper preceding the creation of the Uptime Institute predicting 500 W/sq ft white
space by the early 2000s. Predictably, Moore’s Law did produce exponential perfor-



mance increases, but not exponential energy consumption. But the change too often
overlooked is the ever-increasing weight of the product footprint. An existing struc-
ture can be improved to meet the more than 250-psf loads that today’s white spaces
demand. Future technologies may incorporate liquid cooling and require even denser
liquid-cooling weights to existing and new data center structures.

CSE: Tell us about a recent project you’ve worked on that’s innovative, large-scale, or otherwise noteworthy. In your description, please include significant details—location, systems your team engineered, key players, interesting challenges or obstacles, etc.

Darren Keyser: While all projects are presented with unique challenges, a recent
multistory data center on the West Coast presented significant challenges, particularly
for the fuel system design. The client’s goal of maximizing the amount of leasable white
space meant there was little space for the generator plant, which needed 48 hours of
fuel storage. Adding to the challenge was unfavorable soil conditions for underground
fuel tanks. With limited room at grade, the tanks needed to be vertical. In addition, the
facility is in a seismic zone, which added to the complexity of the tank support. The 10
3-MW engines were placed on the roof of the facility, adding to the intricacy of the fu-
el-delivery system. Additionally, even though the piping was abovegrade welded steel,
the client wanted to manage the risk of a fuel leak and decided to exceed code by im-
plementing a double-wall piping system.

Gatewood: While new data centers are more straightforward, renovating existing
data center environments is not for the faint of heart. In line with the importance of a
proper structural design, we are wrapping up the reconstruction of a new data center



environment within an existing 50,000-sq-ft data center while simultaneously carrying
out a complete retrofit of the existing footings and structural steel beneath an active
data center. Careful sequencing of the work, a well-thought-out method of procedures,
and change-management controls are allowing a space that was designed to handle
90 psf to carry a new modern data center, without affecting the existing operations.

Sedgwick: Iron Mountain’s 200-acre campus in western Pennsylvania is one of the world’s most secure colocation facilities. Located 220 ft underground in an abandoned
limestone mine, it is completely powered by renewable energy and is geothermal-
ly cooled by an underground lake that provides naturally chilled, recycled water. This
“chiller-free” cooling saves millions of gallons of water each year. During two expan-
sions of this 1.8 million-sq-ft Tier IV data center, which included repurposing single
generators to a new parallel-generator plant, we commissioned UPS modules, pow-
er distribution units (PDUs), and the associated electrical power-monitoring system
(EPMS) infrastructure.

Cleis: We are in the process of designing a moderately sized data center to fit inside
an existing vacant building. The owner requested that the design include smaller-scale
equipment configured in a modular design to allow for easier maintenance and equip-
ment replacement. This includes smaller UPS units, PDUs, and non-paralleled gener-
ators. Providing levels of redundancy using these smaller pieces of equipment and not
paralleling the generators proved to be a challenge. The current design contains mod-
ules that are based on a predetermined generator size. The overall generator system is
backed up using transferring equipment and an extra generator unit in the event a sin-
gle generator fails.



Bristol: We recently helped a large corporate enterprise data center operator replace
a legacy UPS and switchgear dating from 1992, using rental units. The data center
facility-management team, corporate IT, data center operators, commissioning agents,
UPS and switchgear vendors, and generator/switchgear-maintenance contractors all
contributed and partnered to successfully implement a no-outage seamless cutover for
the 2N, 5-MW system.

Voss: The QTS Chicago data center fits that description to a tee. QTS leveraged this
former Sun-Times printing facility’s robust base structure and efficient layout to sup-
port its repurposing as an industry-leading data center. The innovative conversion is
a modular design that populates the structure from east to west as more clients and
tenants occupy the data hall space. We are currently constructing a 125-MW substa-
tion for Commonwealth Edison onsite, which will not only provide power to the existing
470,000-sq-ft building but also have sufficient capacity to expand on the same site.

CSE: Each type of project presents unique challenges—what types of challenges do you encounter on projects for data centers that you might not face on other types of structures?

Sedgwick: In data centers, downtime is not an option. Period. As a commissioning firm, this challenge presents itself in different ways depending on whether you are
building a new facility or modifying an existing live site. In a new facility, building a re-
liable system is the primary focus throughout the entire project, and commissioning is
how the owner verifies that once the system goes live, it won’t go down. However, as
project schedules are constantly under pressure to be expedited, or issues cause time
frames to slip, it’s usually the commissioning schedule that is shortened to accom-



modate delays upstream. The stakes are high when equipment needs to be added,
capacity expanded, or controls upgraded in a live facility. Working safely while main-
taining power to critical components requires scrutiny above and beyond that of new
construction to prevent injury, property damage, and service disruptions. The commis-
sioning agent must be knowledgeable enough to anticipate unintended consequences
of planned actions, and the agent must thoroughly understand operational sequences
and system responses to mitigate unnecessary risks to personnel and property. Dis-
cernment is crucial when determining what level of commissioning is required. Com-
missioning specifications for a live site often duplicate those developed for the original
installation. The commissioning authority may suggest specification modifications to
align the commissioning effort and approach with functional verification requirements,
and to minimize operational impact. In some cases, the live-site environment may war-
rant more testing or different methods, or the scope may need to be reduced to miti-
gate risk.

Lane: Most facilities and general buildings do not draw consistent power over a 24-
hour period. Data centers and other mission critical facilities draw power with a high
load factor. Duct banks can overheat when feeding a data center with a high load fac-
tor. Specific to data centers with high load factors, Neher-McGrath duct-bank heating
calculations are required to ensure the conductors feeding the facility are adequately
sized.

Bristol: Mostly, the challenge is the relentless search for maximum reliability and con-
current maintainability. These high-performance buildings are required to be operating
essentially at full throttle all the time, even during times of maintenance, so multiple ser-
vice paths for all utilities (cooling, power, air, etc.) is essential.



Voss: A greater level of commissioning for electrical-redundant paths and mechan-
ical equipment requires a review of each system early in the project, which includes
reviewing what components make up the system and what schedule must be met to
provide the proper turnover date. Understanding each item that is to be commissioned
and how it interacts with other electrical and mechanical equipment is critical, so the
sequence of operations and various levels of commissioning are being actively thought
about during preconstruction and throughout the entire project.

Peterson: Data centers are made for silicon-based life, not carbon-based. Once
owners and operators understand that modern IT equipment can withstand high air-in-
let temperatures, they can start to gain monumentally through cooling efficiency.

CSE: Is your team using BIM in conjunction with the architects, trades, and owners to design a project? Describe an instance in which you’ve turned over the BIM model to the owner for long-term operations and maintenance (O&M) or measurement and verification (M&V).

Peterson: We perform all of our designs using BIM. Through practice, we are able to
incorporate more information in our models to reduce the number of coordination er-
rors that lead to changes in the field. Owners have seen the benefits over time, as new
additions and changes to the designs can be shared with consultants, added to the
BIM, and then returned. Third-party construction-management groups have then taken
the model and added updates as necessary throughout the process, including input
from commissioning and controls changes.



Keyser: Absolutely. This is key. Our firm provides a complete mechanical, electrical,
plumbing, and fire protection (MEP/FP) consulting engineering model. Our goal is to
provide a clash-free model, including a complete fire suppression system layout, when
issued for construction. Because we dedicate so much time and effort working with
the client to meet their needs, we will often “own” the model for the initial phase of
construction coordination. This ensures all those conversations and decisions made
during design—prior to the construction team being brought on board—are main-
tained. This also makes the construction process more efficient. The more efficiently
the entire design-build team works together, the better and quicker the construction
process. Speed to market is a huge driver for our clients.

Bristol: Yes, we use BIM with mission critical projects during the design. During con-
struction, we share the model with contractors and subcontractors to fine-tune their
systems, then the record (sometimes referred to as “as-built”) model is turned over to
the owner not only for long-term operations and maintenance but also for use by future
design teams when the inevitable renovations or expansions occur.

Voss: Absolutely, especially for data centers. Our firm uses BIM for all of our projects
throughout the country. This is mandatory for repurposing existing buildings; often-
times, the amount of available space to install the overhead infrastructure is less than
in a data center-designed structure. On a recent data center, our company leveraged
BIM to support the graphics for the building management system (BMS). This not only
saved time and money in creating new graphics for the BMS, but it also provid-
ed the customer with a far more accurate representation of their facility.

Lane: The majority of data center projects we are involved in use Revit.



CSE: How are engineers designing data centers to keep initial costs down while also
offering appealing features, complying with relevant codes, and meeting client needs?

Cleis: One of our jobs as design engineers is to help owners understand the risks,
benefits, and costs associated with different levels of redundancy for the various sys-
tems that make up an overall data center facility. Hybrid designs with varying levels of
redundancy between different systems are not uncommon, particularly for smaller and
midsize systems. Our job is to educate owners and help them understand their op-
tions, but ultimately to design a facility that meets their needs and works within their
budget. It may sometimes appear that we are underdesigning a certain system in a
facility, but in fact, we are establishing a lower overall baseline of design redundancy
for the facility. Then, with the owner’s input, we design some specific systems to higher
levels of reliability to address historic problems or known weaknesses for that particular
client or facility.

Gatewood: The bulk of the data center’s initial cost is the electrical and mechanical
systems needed to provide 100 to 200 times the power demands of an average office
building. Add to this the redundancy and resilience required so that a system failure or
service outage, say a fan motor, must not result in an outage of the IT work product.
This is where the high initial costs come from. However, many operations can grow
over time, which permits using scalable infrastructure that allows our client to grow
their plant as their IT needs grow. This results in the best initial cost while allowing them
to grow quickly as their needs change.

Keyser: Predicting the future is impossible. Whether it’s colocation or enterprise, the industry needs to plan for it. We create a master plan for a facility, yet only build to



meet the initial needs of our client and future-proof the rest, the best we can. The ini-
tial build consists of space fit for immediate deployment while the balance may be shell
space. Implementing a container solution not only speeds up construction, it also al-
lows the client to defer purchase and installation of expensive infrastructure until the IT
loads require expansion. It’s a tough balance, which is why master planning is so cru-
cial. While there are systems and equipment that can wait, there are certain systems in
future spaces of the facility that must be installed from day one to minimize disruption
of the active data center.

Lane: This is the real challenge and the mark of a good engineer. The engineer must
dig deep into the owner’s basis of design and work closely with the owner to under-
stand where some loads need high reliability and where lower reliability and associated
redundancies can be removed. Also, right-sizing the equipment will save money up-
front and increase efficiency. Always design toward constructibility and work hand-in-
hand with the electrical contractor. Using BIM and asking for input from the contrac-
tors will save time and money during construction. We are seeing ever-evolving code
changes with respect to arc flash calculations, labeling, and mitigation. It is critical to
ensure that the available fault current at the rack-mounted PDU is not exceeded. As a
firm, we provide the design of mission critical facilities as well as fault-current and arc
flash calculations and selective-coordination studies. We always design toward reduc-
ing cost and arc and fault-current hazards during the design process.
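
To put the fault-current check in concrete terms, the short sketch below gives a simplified, infinite-bus estimate of available fault current at a transformer secondary; the transformer rating, voltage, and impedance are hypothetical values chosen only for illustration. A real study of the kind described above would also account for utility source impedance, conductor impedance, and motor contribution before comparing the result against downstream equipment ratings, including rack-mounted PDUs.

import math

def transformer_fault_current(kva, volts_ll, impedance_pct):
    # Infinite-bus estimate of RMS symmetrical fault current (amps) at the
    # secondary of a three-phase transformer.
    full_load_amps = kva * 1000 / (math.sqrt(3) * volts_ll)
    return full_load_amps / (impedance_pct / 100)

# Hypothetical example: 1,500 kVA unit, 480 V secondary, 5.75% nameplate impedance
available = transformer_fault_current(kva=1500, volts_ll=480, impedance_pct=5.75)
print(f"Available fault current: {available:,.0f} A")  # roughly 31,000 A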

Voss: It is a true balancing act to arrive at an optimal solution that meets or exceeds
the needs of the customer within the established budget. We work closely with engi-
neers and architects to perform detailed cost-benefit analysis to ensure features and
requirements are evaluated holistically. Using modular construction techniques and understanding the client in great detail will help the design team come up with innovative ideas and opportunities.

Bristol: Most designs now include a modularity strategy so owners can build (and
spend) as they go. Modularity almost always includes a roadmap to the “end game”
and has to include strategies to minimize the impact to the existing live data center as
the facility is built out. For example, if the data center’s capacity includes 10 MW of
generator capacity at N+1, but only the first two are being installed on day one, then
all exterior yard equipment—pads, conduit rough-ins, etc.—would be included on day
one so that adding generators would be almost plug-and-play. Outdoor cooling equip-
ment, interior gear, UPS, batteries, etc. would work in a similar way.

Peterson: They’re doing it by using typical equipment sizes and modularity; vendors
have been able to bring down costs considerably. Contractors also see savings with
typical equipment; installations gain in speed as the project progresses.

CSE: High-performance design strategies have been shown to have an impact on the performance of the building and its occupants. What value-add items are you adding to data centers to make the buildings perform at a higher level?

Voss: By focusing on energy-efficient enclosure systems and operational infrastructure systems (lighting, office HVAC) we can help reduce the noncritical energy usage of
the data center. This helps reduce the power-usage effectiveness (PUE) value, which
not only saves our customers money but also improves the marketability of their asset.
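
As a back-of-the-envelope illustration of the point above, the sketch below computes PUE for a hypothetical facility before and after trimming the noncritical (lighting and office HVAC) load. All of the load figures are invented for illustration only.

def pue(it_kw, cooling_kw, power_losses_kw, noncritical_kw):
    # Power-usage effectiveness: total facility power divided by IT power.
    total_kw = it_kw + cooling_kw + power_losses_kw + noncritical_kw
    return total_kw / it_kw

# Hypothetical 1 MW IT load with supporting infrastructure
baseline = pue(it_kw=1000, cooling_kw=400, power_losses_kw=80, noncritical_kw=60)
improved = pue(it_kw=1000, cooling_kw=400, power_losses_kw=80, noncritical_kw=30)
print(f"Baseline PUE: {baseline:.2f}")  # 1.54
print(f"Improved PUE: {improved:.2f}")  # 1.51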



Gatewood: A value-added performance item many of our clients appreciate is adiabatic humidification, which saves substantial amounts of energy and water while also improving humidity control.

CSE: We've seen severe weather devastate businesses in many regions in the U.S. How are you working to safeguard a client's information and systems against extreme weather conditions?

Peterson: We have performed more risk assessment studies for data center cli-
ents over the past year than in previous years. While this often starts based on se-
vere-weather outlooks, we examine redundancy on fiber, power, and other utilities.
Clients also have been aiming to consolidate data centers to certain regions to reduce
latency, but using multiple sites across that region to avoid loss of connectivity. A high-
er degree of care is taken with data centers, as they most often serve missions that are
more critical than other building types.

Cleis: When designing a facility, the team should always address known factors re-
garding potential natural disasters in a geographic region when searching for a site
for a new facility. It’s also common to include similar concerns when selecting spac-
es within existing facilities when the data center will only occupy part of the building.
Avoiding windows, potential roof leaks, and flooding are common requirements. We of-
ten try to select an area in a building that has easy access to MEP systems, while also
avoiding exterior walls, top floors, and basements. Typically, we try to avoid areas that
are too public and select areas that are easy to secure and have limited access. It’s
also important to select an area that has access paths that will allow large equipment
to be moved.



Gatewood: Many clients understand the ever-escalating cost of downtime and the
consequences of disaster recovery following a complete loss of equipment and staff.
The Federal Emergency Management Agency’s FEMA 361, Safe Rooms for Tornadoes
and Hurricanes: Guidance for Community and Residential Safe Rooms, FEMA 426,
Reference Manual to Mitigate Potential Terrorist Attacks Against Buildings, and FEMA
P-431, Tornado Protection: Selecting Refuge Area in Buildings, along with FM Global
1-40 offer national reference standards for durability and survivability by design. Many
jurisdictions subject to natural hazards have created their own set of enforceable codes
that draw from some of these standards. It’s critical that the design team understands
the client’s risk tolerances and can communicate the costs of physical durability. Sur-
prisingly, it is typically not as costly as one might think when compared with the project
cost and the value of the assets within.

Lane: We typically provide an NFPA 780: Standard for the Installation of Lightning
Protection Systems lightning-protection analysis for mission critical systems. A majority
of data centers are designed with a lightning-protection and/or a lightning-mitigation
system.

Voss: Fortunately for projects in the greater Chicagoland area, the worst weather we
have ranges from subzero temperatures to heavy snows/blizzards to high winds with a
lot of rain. Keeping the building out of flood plains, and for certain clients, constructing
an enclosure that can withstand a tornado rating of EF-4 (207-mph winds) are issues
we’ve faced.

Bristol: Depending on the risk of a shutdown to a given data center, strategies to "stormproof" the building are popular, such as minimizing the possibilities of projectiles impacting the building by eliminating or reducing the amount of roof-mounted and out-
door-mounted equipment. Another strategy is having not only redundant systems, but
also a completely redundant site for disaster recovery.

CSE: Interest in cloud computing is on the rise—according to your experience and observations, has that had a visible impact on current/future data center projects?

Voss: Absolutely. Many corporations are moving from onsite computing facilities to
cloud-based colocation data centers. The quantity of new enterprise data centers is
decreasing while the quantity of colocation sites is increasing at a rapid pace.

Peterson: We’ve seen a lot of growth from the main cloud providers, and indus-
try analysts are expecting that this growth will continue for at least the next 10 years.
Since most have a typical format for their buildings, the structures themselves haven’t
changed a lot to accommodate the enormous pressure on schedule to meet the cloud
demand. Over time, the trends may shift to lower costs and yield higher returns for
shareholders that are investing now.

Gatewood: Cloud computing’s visible impact on current and future data centers
clearly reveals itself in the enterprise client’s white space. The combination of virtual
machines and the cloud have slowed the growth of rack deployments. Clearly, each
client’s service and application set will affect cloud strategy. In some cases, growth has
stopped as applications move to the cloud.

Sedgwick: We live in an on-demand, instant-gratification world; cloud computing enables users and companies to take advantage of a great deal of computing power
and storage without massive capital outlay for systems and infrastructure. This unprec-
edented access, coupled with current and emerging data-intensive applications (i.e.
streaming entertainment services, ever-present mobile devices, artificial intelligence,
home automation, autonomous vehicles, etc.), is driving demand at an accelerated
pace. As a result, we’ve seen a demonstrable uptick in construction as wholesale and
retail data center providers clamor for market share. Additionally, to remain competi-
tive, data center operators are paying more attention to operational efficiency, resource
utilization, streamlined data processing, and other functional strategies to reduce costs
and improve flexibility and scalability without sacrificing reliability.

CSE: How do data center project requirements vary across the U.S.
or globally?

Keyser: The local environment has a huge impact on the mechanical solution. Ques-
tions to ask: Is free cooling an option? What are the local utility costs for water versus
electricity? Questions like these are key elements that will drive the design.

Kosik: There are many variations, primarily due to geo-specific implications includ-
ing climate and weather, impacts on cooling system efficiency, severe weather events,
water and electricity dependability, equipment and parts availability, sophistication and
capability of local operational teams, prevalence and magnitude of external security
threats, local customs, traditional design approaches, and codes/standards. It is im-
portant to be cognizant of these issues before planning a new data center facility.

Voss: Selecting a data center site normally goes through many steps to reach a potential final location. The climate, geography, an adequately trained workforce, state
and local concessions, and constructability play a large part in the selected location.
The chosen location, in turn, dictates which building codes, electrical codes, and other
applicable codes impact the data center design. The actual owner requirements most
likely will have very few changes, as the project design is created from the owner’s ba-
sis of design.

Lane: We have provided the design and engineering for data centers across the
globe. We have seen many variations in design. Some of these variations include serv-
ing utility voltage, server voltage, lightning protection, grounding requirements, surge
and transient protection, and others. Additionally, the energy cost can significantly drive
the design. In areas of the world where energy costs are higher, efficiency is very crit-
ical. In areas of high lightning strike density, lightning protection and/or mitigation is a
must.



DESIGNING MODULAR DATA CENTERS
When planning for modular data center design, the engineer should focus on attributes such as system effi-
ciency and operational characteristics.

In the design of power and cooling systems for data centers, there must be a known
base load that becomes the starting point from which to work. This is the minimum
capacity that is required. From there, decisions will have to be made on the additional
capacity that must be built in. This capacity could be used for future growth or could
be held in reserve in case of a failure. (Oftentimes, this reserve capacity is already built
into the base load). The strategy to create modularity becomes a little more complex
when engineers build in redundancy into each module.

In this article, we will take a closer look at different parameters that assist in establish-
ing the base load, additional capacity, and redundancy in the power and cooling sys-
tems. While the focus of this article is on data center modularity with respect to cooling
systems, the same basic concepts apply to electrical equipment and distribution sys-
tems. Analyzing modularity of both cooling and power systems together (the recommended approach) will often result in a synergistic outcome.

What is modular planning?


When planning a modular facility, such as a data center, there are three main questions
that need to be answered:

• What is the base load that is used to size the power and cooling central plant equipment (expressed in kilovolt-amperes, or kVA, and tons, respectively)? In the initial phase of the building, if one power and cooling module is used, this is considered an "N" system where the capacity of the module is equal to the base load.

Figure 1: This graph shows the three scenarios (one to three chillers) running at loads of 10% to 100% (x-axis) and the corresponding chiller power multiplier (y-axis). The chiller power multiplier for the one-chiller scenario tracks the overall system load very closely, where the two- and three-chiller scenarios have smaller power multipliers (use less energy) due to the ability to run the chiller compressors at lower, more efficient operating points. All graphics courtesy: Bill Kosik, independent consultant

• In the base load scenario, what is the N that the central plant uses as a building block? For example, if the base cooling load is 500 tons and two chillers are used with no redundancy, the N is 250 tons. If a level of concurrent maintainability is required,
an “N+1” configuration can be used. In this case, the N is still 250 tons but now there
are three chillers. In terms of cooling in this scenario, there would be 250 tons over the design capacity (a short sizing sketch follows this list).

• How do we plan for future modules? If the growth of the power and cooling load is determined to be linear and predictable (which is a rare scenario), the day one mod-
ule will be replicated and used for future growth. However, when the growth is not
predictable or the module design has to be changed due to changing loads or re-
serve-capacity requirements, there has to be a strategy in place to address these
issues. This is where the module-in-a-module approach can be used.
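
The sketch below is a minimal illustration of the sizing arithmetic behind these three questions, using the 500-ton example from the second bullet; the unit counts and redundancy levels are simply inputs chosen for illustration, not recommendations.

import math

def module_sizing(base_load_tons, units_for_n, redundant_units):
    # Size equal modules for a given base load and N + r redundancy.
    unit_capacity = math.ceil(base_load_tons / units_for_n)  # one building block
    installed_units = units_for_n + redundant_units
    installed_capacity = unit_capacity * installed_units
    reserve = installed_capacity - base_load_tons
    return unit_capacity, installed_units, installed_capacity, reserve

# 500-ton base load split across two chillers, then with one redundant unit (N+1)
for redundant in (0, 1):
    unit, count, installed, reserve = module_sizing(500, units_for_n=2, redundant_units=redundant)
    label = "N" if redundant == 0 else f"N+{redundant}"
    print(f"{label}: {count} x {unit}-ton chillers = {installed} tons "
          f"({reserve} tons above the design load)")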

Module-in-a-module
Each module will have multiple pieces of power and cooling gear that are sized in
various configurations to, at a minimum, serve the day one load. This could be done
without reserve capacity, all the way to systems that are fault-tolerant, like 2N, 2(N+1),
2(N+2), etc. So the growth of the system has a direct impact on the overall module.
For example, if each module will serve a discrete area within the facility without any
interconnection to the other modules, the modular approach will stay pure and the fa-
cility will be designed and constructed with equal-size building blocks. While this ap-
proach is very clean and understandable, it doesn’t take advantage of an opportunity
that exists: sharing reserve capacity while maintaining the required level of reliability.

If a long-range strategy includes interconnecting the modules as the facility grows, there
will undoubtedly be opportunities to reduce expenditures, both capital expense and on-
going operating costs related to energy use and maintenance costs. The interconnec-
tion strategy results in a design that looks more like a traditional central plant and less
like a modular approach. While this is true, the modules can be designed to accom-
modate the load if there were some type of catastrophic failure (like a fire) in one of the
modules. This is where the modular approach can become an integral part in achieving
high levels of uptime. Having the modules physically separated will allow for shutting
down a module that is in a failure mode; the other module(s) will take on the capacity that was shed by the failed module.

Figure 2: This graph demonstrates the same concept as in Figure 1, but the facility cooling load is at 75%. As the cooling load decreases, the energy use of the different scenarios begins to equalize.

Using the interconnected approach can
reduce the quantity of power and cool-
ing equipment as more modules are built, simply because there are more modules of N
size installed (see Figures 1 through 4). Installing the modules with a common capacity
and reserve capacity will result in a greater power and cooling capacity for the facility.

If uncertainty exists as to the future cooling load in the facility, the power and cooling
equipment can be installed on day one, but this approach deviates from the basic de-
sign tenets of modular data centers. And while this approach certainly provides a large
“cushion,” the financial outlay is considerable and the equipment will likely operate at
extremely low loads for quite some time.

Equipment capacity, maintenance, and physical size


When analyzing the viability of implementing a modular solution, one of the parameters
to understand is the size of the N and how it will impact long-range costs and flexibility.
To demonstrate this point, consider a facility with a base load of 1,000 tons. The mod-
ule could be designed with the N being 1,000 tons. This approach leaves little reserve
capacity or the ability to maintain the equipment in a way that minimizes out of range
temperature and humidity risks to the IT systems. In this N configuration, taking out a
major piece of HVAC equipment will render the entire cooling system inoperable (unless
temporary chillers, pumps, etc., are activated during testing or maintenance).

Going to the other end of the spectrum yields an equipment layout that consists of
many smaller pieces of equipment. Using this approach will certainly result in a highly
modular design, but it comes with a price: All of that equipment must be installed, with
each piece requiring electrical hookups (plus the power distribution, disconnects, start-
ers, etc.), testing, commissioning, and long-term operations and maintenance. This is
where finding a middle ground is important; the key is to build in the required level of
reliability, optimize energy efficiency, and minimize maintenance and operation costs.

Factory-built versus site-erected modules


When considering how the modules are constructed, there are a few options:
site-erected, hybrid, or factory-built. Each of these options has its own set of advan-
tages and constraints. Consider:

• The location of the facility immediately influences the type of module design ap-
proach. For example, when facilities are located in sparsely populated areas where
skilled piping, sheet metal, and electrical design and construction experts are hard to
come by, it will be beneficial to use a factory-built, tested, and commissioned module
that is delivered to the site (probably in multiple sections), assembled, and connected to the other systems and facilities. It's a bit more complex and detailed, but for this
type of scenario, the factory-built option makes sense.

• Oftentimes, facilities must be built in geographical areas without manufacturer support for start-up, commissioning, and maintenance of new power and cooling equip-
ment. This will require long-distance travel by the manufacturer’s technical teams-not
desirable in cases of operating anomalies or equipment failures. If there is no choice
on the location, upfront planning and special requirements can be written into the
specifications to proactively address these concerns. While there will be an increased
cost from the equipment vendor, purchasing spare parts upfront and stipulating max-
imum response time in case of an operating anomaly will lessen the impact of an
equipment failure.

• The construction schedule of data centers and other critical facilities typically is driv-
en by a customer’s needs, which is often driven by revenue generation or a need
by the customer’s end-user (e.g., the community, business enterprises, government
agencies) to use/occupy the proposed facility as soon as possible. When analyzing
the best approach to the construction of the overall facility, it is advantageous to have
the module built offsite, in parallel with the construction of the facility. The module
can be shipped to the site and installed even if the facility is not complete. Because
all of the equipment, piping, and electrical in the module have been installed, tested,
and commissioned, the overall time to build the facility can be reduced. Additionally,
commissioning and testing of the equipment in a factory setting can be more effec-
tive-especially when the people who built the module are onsite with the commission-
ing authority and all are working together to make sure all of the kinks are worked
through.



Figure 3: This graph demonstrates the same concept as in Figure 1, but the facility cooling load is at 50%. As the cooling load decreases, the energy use of the different scenarios begins to equalize, especially the one-chiller scenario.

• In between the two choices of site-built and factory-built is the hybrid approach to constructing a module. As the name implies, the hybrid approach uses a combination of factory-built
and site-erected components. There is not one solution for this approach because the
amount of work done on the site, as compared to within the factory, varies greatly from
project to project. A good example of why a hybrid approach would be used is when
there could be difficulty in shipping large pieces of power and cooling equipment that
will be installed in a module. The balance of the HVAC and electrical work could still
be completed at the factory and take advantage of reducing the overall schedule. And
future expansions can be handled the same way, building in quick expansion capability.

Performance comparisons
An advantage of using a modular design approach is obtaining a higher degree of flex-
ibility and maintainability that comes from having multiple smaller chillers, pumps, fans,
etc. When there are multiple redundant pieces of equipment, maintenance procedures
are less disruptive and, in an equipment-failure scenario, the redundant equipment can
be repaired or replaced without threatening the overall operation.

In data centers, the idea of designing in redundant equipment is one of the corner-
stones of critical facility design, so these tactics are well-worn and readily understood
by data center designers and owners. Layering modularization on top of redundancy
strategies just requires the long-range planning exercises to be more focused on how
the design plays out over the life of the build-out.

To illustrate this concept, a new facility could start out with a chilled-water system that
uses an N+2 redundancy strategy where the N becomes the building block of the cen-
tral plant. A biquadratic algorithm is used to compare the different chiller-compressor
unloading curves. These curves essentially show the difference between the facility air
conditioning load and the capability of the compressors to reduce energy use.

In the analysis, each chiller will share an equal part of the load; as the number of
chillers increases, each chiller will have a smaller loading percentage. In general, com-
pressorized equipment is not able to have a linear energy-use reduction as the air con-
ditioning load decreases. This is an inherent challenge in system design when attempt-
ing to optimize energy use, expandability, and reliability. The following parameters were
used in the analysis:

Curve designation: CentH2OVSD-EIR-fPLR&dT


(This is energy modeling shorthand for a water-cooled centrifugal chiller with a variable speed compressor. EIR is the energy input ratio, which is what the equations solve for. fPLR&dT indicates that EIR is a function of the part load ratio of the chiller and the lift of the compressor, i.e., the chilled water supply temperature subtracted from the entering condenser water temperature.)

Figure 4: This graph demonstrates the same concept as in Figure 1, but the facility cooling load is at 25%. As the cooling load decreases, the energy use of the different scenarios begins to equalize, especially the one-chiller scenario. The two- and three-chiller scenarios are already operating at a very small load, so changes in cooling loads will not have a large impact on the efficiency of the system.

Type of curve: biquadratic in ratio and dT

Equation: f(r1, dT) = c1 + c2*r1 + c3*r1² + c4*dT + c5*dT² + c6*r1*dT

Coefficients:
• c1 = 0.27969646
• c2 = 0.57375735
• c3 = 0.25690463
• c4 = -0.00580717
• c5 = 0.00014649
• c6 = -0.00353007
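
To show how the published curve and coefficients can be exercised, the minimal sketch below evaluates the biquadratic expression for one-, two-, and three-chiller scenarios with the load shared equally. The assumed lift (dT) of 24°F and the treatment of the curve output as a per-chiller energy-input multiplier are simplifying assumptions made only for this illustration; they are not taken from the model behind Figures 1 through 4.

# Biquadratic chiller curve from the parameters above: f(r1, dT) is the EIR
# multiplier, where r1 is the part load ratio and dT is the compressor lift.
C = (0.27969646, 0.57375735, 0.25690463, -0.00580717, 0.00014649, -0.00353007)

def eir_multiplier(plr, dt):
    c1, c2, c3, c4, c5, c6 = C
    return c1 + c2 * plr + c3 * plr**2 + c4 * dt + c5 * dt**2 + c6 * plr * dt

DT = 24.0  # assumed lift in degrees F (hypothetical)

for facility_load in (1.00, 0.75, 0.50, 0.25):
    results = []
    for n_chillers in (1, 2, 3):
        plr = facility_load / n_chillers  # equal load sharing among chillers
        results.append(f"{n_chillers} chiller(s): {eir_multiplier(plr, DT):.3f}")
    print(f"Facility load {facility_load:.0%} -> " + ", ".join(results))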

Each of the scenarios (Figures 1 through 4) was developed using this approach, and the results demonstrate how the efficiency of the chiller plants decreases as the overall air
conditioning load decreases.

Summary of analysis:
• The N+2 system (three chillers) has the smallest decrease in energy performance when
the overall facility load is reduced from 100% to 25%. This is due to the fact that the
chillers are already operating at a very small load. So large swings in cooling loads will
not have a large impact on the efficiency of the system.

• The N system (one chiller) shows the greatest susceptibility to changes in facility cool-
ing load. The chiller will run at the highest efficiency levels at peak loading, but will drop
off quickly as the system becomes unloaded.

• The N+1 system (two chillers) is in between the N and N+2 systems in terms of sensi-
tivity to changes in facility loading.

When put into practice, similar types of scenarios (in one form or another) will be a part
of many data center projects. When these scenarios are modeled and analyzed, the re-
sults will make the optimization strategies clearer and enable subsequent technical and
financial exercises. The type of modularity ultimately will be driven by reliability, first costs,
and operational costs. Because a range of different parameters and circumstances will
shape the final design, a well-planned, methodical procedure will ultimately allow for an
informed and streamlined decision-making process.

Bill Kosik is a data center energy engineer. He is a member of the Consulting-Specifying Engineer editorial advisory board.



HOW TO CHOOSE A MODULAR DATA CENTER
Modular data centers can be cost-effective, scalable options. There are several variables to consider, how-
ever, when comparing them to brick-and-mortar facilities.

The lack of data center capacity, low efficiency, flexibility and scalability, time
to market, and limited capital are some of the major issues today’s building
owners and clients have to address with their data centers. Modular data
centers (MDCs) are well-suited to address these issues. Owners are also
looking for “plug-and-play” installations and are turning to MDCs for the solution. And
why not? MDCs can be up and running in a very short time frame and with minimal
investment-while also meeting corporate criteria for sustainability. They have been used
successfully since 2009 (and earlier) by Internet giants, such as Microsoft and Google,
and other institutions like Purdue University.

With that said, Microsoft has recently indicated the company is abandoning the use of
their version of the MDC known as information technology pre-assembled components
(IT-PACs) because they couldn’t expand the data center’s capacity fast enough. So
which is it? Are MDCs the modern alternative to traditional brick-and-mortar data cen-
ters? This contradicting information may have some owners concerned and confused
as they ask if MDCs are right for their building.

MDCs versus traditional data centers


There are many terms used to describe MDCs: containerized, self-contained, prefab-
ricated, portable, mobile, skid, performance-optimized data center (POD), and many
others. An MDC is a pre-engineered, factory-built and integrated, tested assembly that
is mounted on a skid or in an enclosure with systems that are traditionally installed onsite by one or more contractors. An MDC uses standard components in a repeatable and scalable design, allowing for rapid deployment. A containerized data center incorporates the necessary power and/or cooling infrastructure, along with the information technology (IT) hardware, in a container that is built in accordance with the International Organization for Standardization (ISO) standard for shipping containers. A modular data center is not the same thing as a containerized data center; however, a containerized data center may be a component of a modular data center.

Figure 1: A rendering of phase one and two of the Barajeels project shows the administration building flanked by four enclosure buildings. Each enclosure building contains premanufactured electrical and mechanical equipment skids on the ground floor with 16 MDC units on the first floor. Image courtesy: CH2M

The IT capacity of an MDC can vary significantly. Networking MDCs are typically 50
kW or less, standalone MDCs with power, mechanical, and IT systems can range up
to 750 kW, and blade-packed PODs connected to redundant utilities may be 1 MW or
greater.



Figure 2: A section through an enclosure building shows premanufactured electrical equipment skids on the ground floor and the MDC units on the first floor, highlighting the IT, electrical, and mechanical distribution pathways. Image courtesy: CH2M

There is a lot of disagreement in the data center industry regarding the performance, cost-effectiveness, time efficiency, and standardization of MDCs, and their ability to outperform traditional
brick-and-mortar data centers that use a standard, repeatable, and scalable design.
An advantage of an MDC is its ability to be shipped anywhere. It can arrive as a single,
stand-alone enclosed unit, be integrated into an existing data center, or be combined
into a system of modules to establish a large-capacity data center.

According to 451 Research, the MDC market is expected to reach $4 billion by 2018,
up from $1.5 billion in 2014. In addition, 451 Research believes that MDCs are strate-
gically important to the data center industry. MDCs are expected to play a significant
role in the next generation of products and technology, as they offer a flexible data
infrastructure that can be built in larger “chunks” than many scalable brick-and-mortar
data centers. MDC designs are improving, and owners and operators are becoming
better educated about the many available options and variations. Due to the growing
interest with MDCs, there has been a surge in suppliers. An Internet search for MDCs
yields more than 20 different organizations with varying types of products-and this list
is expected to grow as vendors use technology, innovation, and geography to gain a
competitive edge. MDCs can be purchased or leased in many different configurations,
such as:

• Only equipped with IT hardware

• Power and/or cooling with space for IT hardware

• Power and/or cooling and equipped with IT hardware

• Only equipped with power or cooling.

Leasing options offer significant flexibility in the physical location of the MDC. Some
lease options allow the MDC to reside onsite, or the MDC infrastructure can be leased
from a co-location provider who is delivering services. Comparing MDCs and available
options can quickly become overwhelming. There are containerized, pre-engineered,
and prefabricated versions. IT vendors have MDC products that are complete data
center solutions and are engineered to work specifically with their hardware, but they
typically work with IT products from multiple vendors. Some providers deliver co-loca-
tion and cloud services using MDC technology.



MDCs can reduce capital
investment, construction,
and schedule, provide
for faster deployment,
and offer flexibility for
changing IT technolo-
gies. MDCs also reduce
the risk associated with
design, such as the tech-
nical risk of the design
not adhering to the re-
quirements, the schedule
risk of the design not be-
ing completed on time,
and the cost risk of the
final product exceeding
the budget. Providers of
MDCs offer turnkey solu-
tions for customers who
do not have the in-house
skill set to design and
construct their own data
center. Here is a look at a few more key advantages of MDCs:

Figure 3: An exploded view of one enclosure building with the concrete structure housing the electrical and mechanical equipment, the 16 MDCs with the core MDC, and the shade structure. Image courtesy: CH2M



• Improved schedule: MDCs are standardized, allowing for fast construction and
module commissioning within a few months (or less). They are manufactured offsite
and transported to the construction site for final placement and utility connection.
This allows for site preparation to occur at the same time as MDC construction,
rather than the traditional linear process of site preparation before construction of
the data center. When evaluating traditional construction versus MDC, consider the
speed needed to deploy and whether the deployment is required at one or multiple
sites. The increased quantity of data center sites results in a more complex deploy-
ment, which may be eased by using MDCs.

• Reduced capital investment and risk: Brick-and-mortar-type data centers are expensive to build and involve significant risk (i.e., going over budget, missing sched-
ules, not meeting requirements). Both MDCs and traditional data centers can be
sized to known criteria with the ability to easily expand; however, MDCs allow the
provisioning of “just-in-time” data center capacity by matching the investment to the
planned growth. The risk associated with the manufacturing of the MDC lies with
the manufacturer who is accountable for the cost, schedule, and performance of the
MDC. Whereas in a traditional brick-and-mortar data center, the risk associated with
construction, cost, and performance is the owner’s responsibility.

• Flexibility and scalability: With the ability to provide a quick response comes the
agility to scale data center capacity to actual business needs rather than trying to
predict capacity years in advance. MDCs allow power, cooling, and IT capacity to
be deployed when required. Through the use of MDCs, changes in densities, space
requirements, or IT technologies can be easily incorporated rather than retrofitting a
more traditional facility that was constructed years ago to old requirements. MDCs can be prefabricated, allowing the use of an industrialized approach for standardiza-
tion in the design, construction, and operation of the modules. In fact, operation of
multiple modules can be identical. While this approach can also be used for tradition-
al data centers, the speed at which MDCs can be deployed may be an advantage.

• Performance, efficiency, and power-usage effectiveness (PUE): MDCs are very efficient and can accommodate densities as high as 20 kW per rack and potentially higher. However, the cost savings associated with higher-density racks diminish quickly around 5 kW per rack, with no savings realized beyond 11 kW per rack. Cool-
ing systems are optimized for local site conditions along with the internal IT systems.
The PUE metric is a good measure of efficiency for MDCs, but a word of caution:
Many MDCs require external power and cooling connections, therefore, the repre-
sented PUE may actually be a partial PUE (pPUE). When evaluating MDC products,
verify that power and cooling requirements are properly represented and compared
with each product by working with the vendor to determine what they have included
and excluded in their PUE number.

• Prefabrication: The risk associated with the manufacturing of the MDC lies not with
the owner, but with the manufacturer who is accountable for the cost and perfor-
mance of the MDC.

• Flexible site selection: Is the intent of the MDC to be mobile or will it remain in
one location? MDCs can be disassembled and transported to another site for quick
assembly to provide for contingency operations during a natural disaster or for tem-
porary cloud computing, similar to the needs of the U.S. Army. In some cases, the
availability of materials and the logistics of building a data center in a remote location make the MDC a good solu-
tion, like in the oil and gas in-
dustry.

• Disaster recovery: Is disaster recovery a major consid-
eration for your company?
Even if a company does have
a disaster recovery center, is
it capable of handling the fail-
ure for an extended period of
time, such as in the event of
a major hurricane or flooding?
An MDC may be an effective
strategy for disaster recovery since it can be quickly deployed to a site.

Figure 4: A section of the MDC unit shows the hot and cold aisles along with electrical and mechanical equipment locations and access paths for maintenance. Image courtesy: CH2M

With all the benefits of an MDC, it’s easy to forget about some of the negatives, such
as depending too heavily on the design of the MDC for high availability. The MDC will
be operated and maintained by workers, which makes it difficult to eliminate human
error. Security of an MDC is also a significant issue. For example, with the opening
of just one door, you are in the heart of the data center. Also, a bullet can penetrate
the shell or a vehicle can ram into the MDC. Some of the security issues can be ad-
dressed with bollards or berms around the MDC. Containerized data centers are in
an ISO box, which is not aesthetically pleasing and may not be allowed in some business parks without additional screening and landscaping. Many MDCs have a single utility-connection point for power and cooling, which can be a point of failure for the MDC. Once built, an MDC is difficult to modify and may require specialized support. Maintenance of the MDC may also be difficult during inclement weather since entry doors are limited and access panels often expose critical systems to the weather.

Figure 5: Computational fluid dynamic airflow modeling was performed to show the distribution of airflow through the unit with no hot spots in the server rack. Image courtesy: CH2M



Site preparation is an important aspect of MDC installation. This includes the need for
civil, mechanical, electrical, and telecommunication site plans that focus on the slabs
and foundations as well as the utility and networking interconnections to the MDC
units. Additional design considerations should be given to MDCs located in seismic
regions for proper anchoring and isolation. These types of issues should be considered
when evaluating an MDC solution.

Regardless of the type of data center construction (brick-and-mortar or MDC) the pur-
chase or lease price of the property, site development costs, and the impact of local
environmental conditions on the energy required for cooling need to be considered in
the site-selection process. Site location can impact labor costs.

If the data center is built in a location that has low labor costs, then the cost savings
of a premanufactured data center may not be realized. Whereas if the labor costs are
high, then the use of MDCs may offer a cost advantage.

The process of reviewing and approving site plans may be improved by an MDC, which
is certified by Underwriters Laboratories (UL) and/or Conformité Européenne (CE). This
allows the permitting agency to focus only on the installation of the MDC, rather than
the internal subsystems of the MDC. UL 2755-Modular Data Center Certification Scope
and Process addresses issues, such as:

• Potential enclosure hazards

• Transportation hazards



• Electrical construction
• Supply and distribution

• Working-space exit routes and signage

• Fire detection and suppression

• HVAC

• Installed equipment

• Noise exposure.

Since the MDC may be supplied through existing on-premise wiring systems or
through a separate MDC enclosure, the data, fire alarm, communications, control,
and audio/video circuits from the MDC are typically brought into an existing facili-
ty. If the MDC is UL-certified, then the evaluation of the equipment, installed wiring,
lighting, and work space is conducted as part of the listing; only field-installed wiring
is required to be reviewed and comply with NFPA 70. Nonlisted MDCs may also be
installed under Article 646 of the 2014 edition of the NEC; however, all components
must then be installed in accordance with the code.

For additional information on the design, efficiency, procurement, and installation of MDCs, Lawrence Berkeley National Laboratory, on behalf of the General Services
Administration, prepared a vendor-neutral MDC procurement guide. Furthermore, the
Data Center Knowledge Guide to Modular Data Centers provides substantial information on different MDC solutions and highlights many practical considerations when
purchasing an MDC.

Choosing between MDCs and traditional data center construction is not a simple
decision. There are some issues where an MDC may not be the right solution or will
require special construction techniques to mitigate, such as security, maintenance
during inclement weather, and the need for redundant utility connections. However,
MDCs can offer multiple advantages that may be crucial to a business’s strategies
and growth. With their lower initial investment and faster deployment, scalability, and
flexibility, MDCs should be considered as a possible solution. MDCs are highly effi-
cient and offer many of the same options that are available through traditional data
center construction. MDCs can be implemented as a turnkey solution, thereby reduc-
ing risk and making them an attractive choice.

Debra Vieira is a senior electrical engineer at CH2M. She specializes in data center
and mission critical environments.



UNDERSTANDING NFPA 101 FOR MISSION
CRITICAL FACILITIES
NFPA 101: Life Safety Code 2015 is a reference used for strategies to protect people based on building
construction, protection, and occupancy features that minimize the effects of fire and other related hazards.
It is the only document that covers life safety for new and existing structures. It is vital to understand the
electrical/power systems in mission critical facilities and best practices.

What do a data center and a laundromat have in common? As far as the
International Building Code (IBC) is concerned, they are both considered
“Group B Business Occupancies.” As per IBC Section 304 Business Group
B, both types of businesses have the same basic set of minimum require-
ments to safeguard the general health and welfare of occupants. Group B Business
Occupancies are generically defined as occupancies that include office, professional,
or service-type transactions including storage of records and accounts. Data centers
and laundromats fall under the listed subset business uses of “electronic data process-
ing” and “dry-cleaning and laundries: pickup and delivery stations and self-service,”
respectively.

Why does this matter? All building codes focus on ensuring the health and safety of a
building’s occupants. The purpose of building codes does not include quantifying the
inherent value of your dirty laundry versus data sitting on a computer server. What is
considered “mission critical” by you and a client may not be shared by the authority
having jurisdiction (AHJ). While there are certain exceptions, such as designated critical
operations areas (DCOA) as defined by Article 708: Critical Operations Power Systems
(COPS) of NFPA 70: National Electrical Code (NEC), code considerations typically don’t
extend beyond the health and safety of a building’s occupants.

While the IBC is a far-reaching code encompassing structural, sanitation, lighting, ven-
tilation, and several other areas, life safety considerations in mission critical environments are an important area of focus. The applicable code is NFPA 101: Life Safety Code, which has a more detailed perspective than the IBC and is limited to life safety. Similar to the IBC, NFPA 101 is an occupancy-based code. NFPA 101 broadly categorizes occupancy types into the following 12 categories:

Figure 1: Central offices (CO) and data centers have similar mechanical, electrical, and plumbing (MEP) infrastructure and associated hazards. This photo shows a large, central DC power supply that provides power to telecommunications equipment within a CO. It is functionally equivalent to an uninterruptible power supply (UPS) in a data center. Image courtesy: McGuire Engineers Inc.

• Ambulatory health care

• Assembly

• Business

• Educational

• Day care

• Detention and correctional

• Health care

• Industrial

• Mercantile

• Residential

• Residential board and care

• Storage.

The formal definitions for each of these categories can be found in Chapter 6.1 of
NFPA 101. Each of these categories is characterized by the quantity and type of oc-
cupants, the type of hazards to which they may be exposed, and the factors that af-
fect the ability to safely egress those occupants out the building in the event of a fire.
Interestingly, unlike IBC, NFPA 101 does not define a specific occupancy type for data
centers (or self-serve laundromats, for that matter). This does not mean that NFPA 101
does not apply to data centers. Remember that NFPA 101 is not a prescriptive cook
book and requires a certain amount of interpretation to apply it properly.

Is a data center a business or an industrial occupancy?


There can be uncertainty regarding the occupancy-type classification for data cen-
ters. NFPA 101 defines an industrial occupancy as “an occupancy in which products
are manufactured or in which processing, assembling, mixing, packaging, finishing,
decorating, or repair operations are conducted.” This broad definition would not seem
to apply to data centers. However, “telephone exchanges,” which also are defined as
a Group B Business Occupancy under IBC, are instead specifically defined as an industrial occupancy type under the Annex Section A.6.1.12.1 of NFPA 101. While this
annex material is intended to be informative and not part of the base requirements of
NFPA 101, it is the most definitive interpretation that most AHJs will have immediate
access to.

Historically, a telephone exchange consisted of numerous human operators manually connecting calls with telephone switchboards, similar to a modern call center. Howev-
er, modern telephone exchanges/central offices are quite different from that historical
definition and do not look much different from a typical data center (see Figure 1). So
while the IBC makes a clear distinction between “telephone exchanges” and “electron-
ic data processing,” the basic functionality, occupancy, and characteristic hazards for
these two different uses would seem to be similar in a modern context. By extension, it
would be reasonable to assume that if NFPA 101 defined modern telephone exchang-
es as an industrial occupancy, that classification should also apply to mission critical
data centers. Ultimately, that determination is at the discretion of the AHJ.

The primary question is why would there be a difference in a data center’s occupancy
classification between the IBC and NFPA 101? Without a clear definition, it is debatable
as to what a data center is per NFPA 101. Without such guidance, the primary consid-
eration should be an assessment of what occupancy patterns and characteristic haz-
ards are present in a data center environment. Mission critical data centers are charac-
terized by:

• Unusually high power densities-can easily be more than 100 W/sq ft in the “white
space” where the physical server equipment is located, necessitating top-of-row bus-
duct and other similar electrical distribution equipment



• Onsite energy storage in the form of lead-acid batteries and diesel fuel, which can be fire hazards in and of themselves when present in sufficient quantity

• Unusually high air movement/cooling requirements-can be more than 400 cfm/server


cabinet in high-density environments

• Concealed/confined spaces (containerized data centers, raised floors, isolation of


hot/cold aisles, etc.) (see Figure 3)

• The need for single-shot, total flooding clean agent fire suppression systems that
require compartmentalization to function properly in lieu of traditional water-based fire
suppression systems

• The need for redundant mechanical, electrical, and plumbing (MEP) infrastructure to
ensure continuity of service

• A relatively low headcount as compared with traditional business occupancies, with


the occupants often clustered in one particular portion of the facility

• The need to restrict access to the facility to only authorized personnel for security
reasons.

Again, the purpose of NFPA 101 is to mitigate risks associated with safely evacuating
the occupants of a building in the event of a fire. The primary consideration should be
an analysis of "if" and "how" each of these factors impacts NFPA 101's ability to
mitigate those risks, and based on that analysis, which occupancy type provides the most appropriate level of safety for the occupants.

While the generic definition of an
industrial occu-
pancy might not
seem to be the
most appropri-
ate description
for a data center,
NFPA 101 also
lists a “special
purpose” indus-
trial-occupancy
subset that is described as an industrial occupancy in which ordinary and low-hazard industrial operations are conducted and characterized by a relatively low density of employee population, with much of the area occupied by machinery or equipment. This particular description might be a better fit for most data center environments where the white space and supporting mechanical, electrical, and similar unoccupied back-of-house rooms dominate the overall composition of a facility.

Figure 2: This is a photo of an uninterruptible power supply (UPS) battery string with over 50 gallons of electrolytes. Special ventilation for this installation is required per the International Fire Code (IFC). Image courtesy: McGuire Engineers Inc.



Although not incorporated as a reference standard in NFPA 101-2015, NFPA 76: Stan-
dard for the Fire Protection of Telecommunication Facilities supports this occupancy
categorization. The special-purpose industrial-occupancy subset does allow a signif-
icant reduction in the egress requirements for a facility, but that ability to reduce life
safety provisions and associated costs should not be the primary consideration when
selecting this particular occupancy type. Before reducing life safety features, a risk
analysis should be performed to confirm that this is the appropriate course of action.

In some cases, the data center might be incidental to the primary function of the build-
ing (i.e., a small server room in a commercial office building), which would allow it to
be classified as part of the larger business occupancy. In other cases, it might be ex-
actly the opposite (i.e., a network operations center within a large containerized data
center). While incidental uses are discussed under NFPA 101’s “Multiple Occupancies”
section 6.1.14.1.3, there is no prescriptive-area-ratio threshold in NFPA 101 to deter-
mine if a usage is “incidental.”

The AHJ may, in some cases, classify the facility as a multiple-occupancy building (part
business and part industrial occupancy) that necessitates a multiple-occupancy des-
ignation. In these cases, the most restrictive requirements would apply if no physical
separation exists, as described by NFPA’s separated occupancy provisions.

Requirements for business, industrial occupancies


When reviewing NFPA 101, most engineers are surprised by how little content is devot-
ed to the MEP systems that are specified. There are inherent limitations for all codes,
but when considered as a group, they can be complementary and not compromise the
overall intent of the code. NFPA 101 incorporates, by reference, numerous other pertinent NFPA codes and standards related to MEP systems.

Although many MEP systems such as sprinklers are only briefly mentioned in
NFPA 101, their presence can directly
impact seemingly unrelated provisions
in NFPA 101. For example, the prima-
ry distinction between business and
industrial occupancies is allowable
travel distance. Many engineers may
consider travel distance to be an ar-
chitectural design issue. While NFPA
101 specifically does not require an
NFPA 13: Standard for the Installation
of Sprinkler Systems-compliant auto-
matic fire sprinkler system in all types
of business and industrial occupan-
cies, it is recognized that sprinklers are
the most effective means of preventing
a fire from spreading. Accordingly, the maximum allowable travel distance is increased when they are present for both occupancy types. This can allow significantly greater flexibility for an architect in laying out a facility. Note that NFPA 101 makes a distinction between automatic fire sprinkler systems and "other automatic extinguishing equipment" such as gaseous fire suppression systems.

Figure 3: Concealed spaces present additional hazards in data center environments. This figure illustrates a sprinkler installation within a 48-in. raised floor. Image courtesy: McGuire Engineers Inc.

NFPA 101’s vague details on the life safety systems that are regularly specified by en-
gineers can cause some confusion. NFPA 101 only mandates if a particular type of
life safety system should be present within a given occupancy type. Although most of
these systems are not discussed in-depth in NFPA 101, understand that when parts
of other codes and standards are directly referenced in a particular section (for exam-
ple, NFPA 2001: Standard on Clean Agent Fire Extinguishing Systems is referenced in
NFPA 101 Chapter 9.8 “Other Automatic Extinguishing Equipment”), they should be
considered integral to the requirements of that section.

If NFPA 101 identifies the requirement for a specific life safety system, the function of
the referenced code or standard is to provide additional detail as to what is acceptable
for the configuration and installation of that system. As such, any referenced codes or
standards should be considered a legally enforced part of NFPA 101.

While potentially required by other building codes, NFPA 101 does not specifically
mandate many of the engineered, life safety-related systems that are typically specified
in a data center environment. Mission critical facility owners require emergency gener-
ators, clean agent fire suppression, early warning fire detection, and similar systems to
minimize the chance of catastrophic damage or disruption to the normal operation of
a very expensive asset. Although these types of elaborate systems may not be specifi-
cally mandated, when provided, they must meet all applicable provisions of NFPA 101.
While the owner’s primary motivation for investing in these systems may be to ensure
business continuity, the engineer's ultimate responsibility is to properly apply the code as it pertains to these systems to ensure the safety of the building's occupants.

The life safety systems that are typically the most important to electrical engineers are:

• Means-of-egress components

• Emergency lighting

• Fire detection and alarm


• Automatic sprinklers

• Other automatic extinguishing equipment.

Figure 4: NFPA 110 requires an expanded set of monitoring and alarm functions. This photo is a genset-mounted annunciator that is compliant with Type 10, Class 1.5, Level 1 emergency power system requirements. Image courtesy: McGuire Engineers Inc.

The following commentary for
each of these will often derive
more from the referenced codes
and standards than from NFPA
101. It also should be noted that
while NFPA 75: Standard for
the Fire Protection of Informa-
tion Technology Equipment and
NFPA 76 would seem to be very
pertinent to any discussion of
life safety within the data center
environment, neither is directly incorporated as a reference standard into NFPA 101.

Means of egress
There are numerous tragic examples of obstructed paths of egress contributing to the
loss of life during a fire. The 1927 Building Exits Code, which eventually evolved into
NFPA 101, was in part developed in response to these types of tragedies. The code
emphasizes the basic concept that the ability to survive a fire depends on the occu-
pants’ ability to safely and quickly get out of the building. NFPA 101 requires a contin-
uous and unobstructed path of egress from any accessible point in the building to the
public way or a suitable exit discharge (Section 7.7.1). As such, doors must be easily
opened from the egress side. All components of the means of egress must be “under
the control” of the occupants.

There must be a balance between maintaining a secure environment and allowing safe
egress during an emergency. Many data centers are equipped with security compo-
nents, such as electromagnetic locking devices on doors, “mantrap” vestibules, and
card-operated revolving doors, which may impede the free-egress requirement. While
engineers are often not included in the initial architectural programming decisions that
establish the need for such security components, the supporting life safety provisions
typically fall under the engineering scope of work and, without proper coordination, can
often fall through the cracks.

NFPA 101 uses specific terminology for egress door components that can’t be easi-
ly opened by turning a door lever or pushing a crash bar. This typically falls under the
category of “special locking arrangements,” and the subcategory is “access-controlled
egress door assembly." This type of egress door is characterized by electric locking hardware and does not have a simple manual lever handle or push bar on the door leaf
to allow for free egress. While acceptable for both industrial and business occupancy
types per NFPA 101, certain requirements must be met (see the sketch after this list):

• A sensor must be provided on the egress side to unlock the door upon detection of
an approaching occupant (typically a passive infrared motion sensor above the door).

• The door must automatically unlock in the direction of egress upon loss of power
(i.e., fail-safe).

• The door must be provided with a manual-release device (“push to exit” button or
similar) within 60 in. of the door, and the door must remain unlocked for at least 30
seconds.

• The activation of the fire-protective signaling system automatically unlocks the door.

• Activation of the building’s fire detection or sprinkler system automatically unlocks the
door.

• Emergency lighting is provided.
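
Taken together, these conditions can be thought of as simple release logic. The following is a minimal, hypothetical sketch of how the unlock conditions listed above might be combined; it is illustrative only, and an actual installation must use UL 294-listed hardware configured to satisfy NFPA 101 and the local AHJ.

def access_controlled_door_unlocks(occupant_approaching: bool,
                                   power_available: bool,
                                   seconds_since_manual_release: float,
                                   fire_alarm_active: bool,
                                   detection_or_sprinkler_active: bool) -> bool:
    """Return True when the electrified lock must release (conceptual sketch only)."""
    return (
        occupant_approaching                      # egress-side sensor detects an approaching occupant
        or not power_available                    # fail-safe: loss of power unlocks the door
        or seconds_since_manual_release <= 30.0   # manual release keeps the door unlocked at least 30 seconds
        or fire_alarm_active                      # fire-protective signaling system activation
        or detection_or_sprinkler_active          # building fire detection or sprinkler activation
    )

# Example: the door stays locked only when none of the release conditions are present
print(access_controlled_door_unlocks(False, True, float("inf"), False, False))  # False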

UL 294: Standard for Access Control System Units is also incorporated as a reference
standard. As such, any approved hardware must be UL 294-compliant. UL 294 also
includes a specific product category, FWAX, which pertains to “special locking arrange-
ments” to prevent unauthorized egress. Activating a manual fire alarm pull station is not
required by NFPA 101 to unlock these doors. However, this does not mean that the local AHJ will not require it. In fact, interpretation of egress-door hardware requirements can vary by jurisdiction, making it critical to confirm local requirements early in the design process.

For example, power-operated revolving doors with card access are often used in large data centers to minimize the chance of "piggybacking," a situation where an unauthorized user could follow an authorized user into a secure facility. NFPA 101 has detailed provisions for the use of revolving door assemblies as a component in a means of egress. The primary requirement is a breakaway leaf that freely and fully collapses into a book-fold position to allow free egress under similar conditions (power failure, sprinkler activation, etc.). However, even when such provisions are included with power-operated revolving doors, the local AHJ may require additional safeguards beyond what is required in the code or even prohibit their use altogether.

Figure 5: While not necessarily required by NFPA 101, sprinkler systems and other types of fire suppression systems are usually specified in data center applications. This picture is of a deluge valve and releasing panel in a pre-action sprinkler system. Installing these types of systems may trigger other requirements, such as the need for a supervised fire alarm system. Image courtesy: McGuire Engineers Inc.

Using two-way security (i.e., card readers used to enter and leave) also is regularly forbidden by AHJs unless additional provisions, such as extra doors with delayed-egress hardware, are included. While local requirements can vary dramatically, any discussion with an AHJ on this subject should be predicated on the core goal of providing the most appropriate degree of safety for the building occupants, not what provides the greatest amount of security for the building's contents.

It may be a controversial statement to say that emergency lighting is not required in special-purpose industrial occupancies without routine human habitation. While NFPA
101 states that emergency lighting must be provided for industrial occupancies in gen-
eral, the first exception in 40.2.9.2(1) clearly states that emergency lighting is not re-
quired in special-purpose industrial occupancies. That is a statement of fact, but not
always the appropriate engineering decision when trying to ensure life safety for the
building’s occupants.

The primary function of emergency lighting is to provide adequate illumination for the
path of egress out of a building for its occupants. But what if the building is usually emp-
ty? While NFPA 101 recognizes that many special-purpose industrial occupancies are
normally unoccupied, the engineer also has to consider the characteristic hazards in
a data center environment and determine if omitting emergency lighting from usually
unoccupied buildings impacts the safety of the occupants who may infrequently work
within the space (security, maintenance staff, etc.). This question becomes more per-
tinent in contained data centers (i.e., a “plug-and-play” data center in an intermodal
shipping container) that have high densities of equipment and supporting infrastructure
in an unusually confined environment. This decision to provide emergency lighting be-
comes moot if any type of access-controlled egress-door assembly is provided, which separately mandates emergency lighting elsewhere in NFPA 101.

Even if not directly required by the AHJ or NFPA 101, the potential hazards often justify the inclusion of emergency lighting, even if the area is rarely occupied. Although not mentioned in NFPA 101, it should be noted that many AHJs will require emergency lighting if, in their determination, the safety of first responders will also be impacted.

Figure 6: NFPA 101 and the International Building Code (IBC) both require a means to manually unlock an electronically locked egress door. When not integrated as a door handle, that device must be located within 5 ft of the secure door. A request-to-exit motion sensor is not enough. This image shows the manual release and pre-action sprinkler system pull station. Image courtesy: McGuire Engineers Inc.

Once the need is established, the actual illumination requirements for emergency lighting in NFPA 101 are relatively straightforward. Emergency lighting must provide initial illumination so that at least an average of 1 fc (10.8 lux) and a minimum of 0.1 fc (1.1 lux) is maintained along the path of egress at floor level. It is allowable for these levels to decline to not less than an average of 0.6 fc (6.5 lux) and a minimum of 0.06 fc (0.65 lux) at the end of 90 minutes. To maintain a reasonable amount of uniformity, the maximum-to-minimum illumination cannot exceed a 40:1 ratio.
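
Because these criteria are purely numerical, a photometric calculation can be screened against them quickly. The following is a minimal sketch in Python, assuming floor-level illuminance values (in footcandles) along the egress path have already been calculated or measured; it is a screening aid, not a substitute for a listed emergency lighting design.

def egress_lighting_ok(initial_fc, at_90_min_fc):
    """Check point-by-point egress illuminance against the NFPA 101 criteria cited above."""
    def stats(values):
        return sum(values) / len(values), min(values), max(values)

    avg0, min0, max0 = stats(initial_fc)
    avg90, min90, _ = stats(at_90_min_fc)

    return all([
        avg0 >= 1.0,           # initial average of at least 1 fc
        min0 >= 0.1,           # initial minimum of at least 0.1 fc
        avg90 >= 0.6,          # average not less than 0.6 fc at the end of 90 minutes
        min90 >= 0.06,         # minimum not less than 0.06 fc at the end of 90 minutes
        max0 / min0 <= 40.0,   # maximum-to-minimum uniformity ratio of 40:1
    ])

# Example: evenly spaced calculation points along an egress path
print(egress_lighting_ok([1.8, 1.2, 0.4, 0.9], [1.1, 0.7, 0.2, 0.5]))  # True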

Approved auxiliary sources


Beyond a statement that emergency illumination must be maintained for 90 minutes
in the event of a failure of normal lighting, NFPA 101 is fairly vague regarding the con-
figuration and installation requirements for auxiliary power sources serving emergency lighting. Most of the details fall to the referenced codes and standards
NFPA 70, NFPA 110: Standard for Emergency and Standby Power Systems, and UL
924: Standard for Emergency Lighting and Power Equipment. NFPA 101 does, how-
ever, specifically mention that emergency power systems need to be compliant with
NFPA 110, Type 10, Class 1.5, Level 1 emergency power system requirements (see
Figure 4). Level 1 is the most stringent and is used “where failure of the equipment to
perform could result in the loss of human life or serious injuries.” These systems have
the following basic requirements:

• Must restore power within 10 seconds of the loss of the primary source

• Must be able to support the load without being refueled for at least 1.5 hours

• Have an enhanced/expanded set of monitoring and alarm functions.

Again, the referenced codes and standards should be considered an integral part of
NFPA 101 in the context of the sections in which they’re mentioned. When any partic-
ular type of system is provided, even if not required by NFPA 101, it has to meet the
applicable reference code. Certain standby systems that are characteristic of large data centers, such as large multimodule uninterruptible power supply (UPS) units
and paralleled generator systems, may seem to meet the fundamental requirements
for emergency lighting auxiliary sources. When considered in the context of the other
codes and standards, there are small details that may make an otherwise highly resil-
ient backup system technically inadequate for emergency lighting use.

Use of a data center’s UPS system for emergency lighting should be avoided. Al-
though NFPA 76 does have provisions for using a telecommunication facility’s battery
system to power the emergency lighting, this very broad statement can be mislead-
ing and discounts conflicting requirements in other codes and standards. The first
hurdle is that any UPS used for emergency lighting must be listed for central-lighting
inverter duty in accordance with UL 924. This listing is extremely unusual in larg-
er-capacity UPS systems and nonexistent in multi-module UPS systems. Even if ap-
propriately listed, optional standby loads still have to be segregated from emergency
lighting loads in accordance with NEC Article 700.10. Even the requirements for an
“emergency power off” (EPO) button can add further complications. The requirement
for separation, and the prioritization of life safety loads over optional standby loads,
would compromise the primary function of the UPS system to support data center
equipment.

There are many considerations when evaluating a data center’s generator system to
use as an auxiliary source for life safety systems. Most challenges revolve around the
10-second load-acceptance requirement in NFPA 110. While generator-paralleling
control systems have evolved dramatically over the past few years, there are still con-
cerns regarding the use of large, paralleled generator systems for life safety loads.
Larger prime movers (about 2 MW and greater), which are becoming relatively common for large data centers,
may not start as quickly as
smaller generators. When
considered in combination
with the associated super-
visory control and data ac-
quisition (SCADA) system
used to parallel generators
and associated signal la-
tency issues in larger con-
trol systems, meeting the
10-second threshold can
sometimes be a challenge.
Where U.S. Environmental
Protection Agency Tier 4
emission packages are required (typically when generators are used for storm avoidance and rate curtailment), they can also add unexpected points of failure.

Figure 7: SCADA control systems are common for large paralleled generator systems. The complexity of such systems may impact their ability to meet NFPA 110 requirements. Image courtesy: McGuire Engineers Inc.

For example, failure of the generator’s emission system will usually cause an automatic
shutdown of the generator system. The cause for the failure may be relatively benign,
such as depletion of diesel exhaust fluid in the selective catalytic reduction (SCR) por-
tion of the emissions system, and wouldn’t necessarily cause damage to the generator
or otherwise affect its ability to generate power. However benign the cause may be, the
fact remains that the ability to support the life safety load would be compromised. A separate, smaller emergency generator dedicated to life safety loads may be the better solution to this challenge. Although inelegant from a design perspective, simple unit bat-
tery lights and exit signs with battery backup may also be compelling as a simple solu-
tion to an otherwise complex problem.

Fire detection and notification


While certain systems may not be mandated by NFPA 101 as part of the basic require-
ments for particular occupancy types, the presence of other seemingly unrelated sys-
tems can trigger installation. Fire alarm systems are no different. If other contributing
factors are dismissed, the basic threshold for fire alarm systems is an occupant load
of 100 people in an industrial occupancy and 1,000 people in a business occupan-
cy. There are other factors that can significantly reduce these numbers depending on
the height of the building or the proximity of the occupants to the primary level of dis-
charge.

However, if, for example, a data center does not meet even these reduced minimum thresholds for a fire alarm system, the next question would be: what characteristic hazards or other project owner requirements would necessitate the installation of a fire alarm system?

The most common project requirement that would trigger the need for a fire alarm sys-
tem is installing a large UPS system. NFPA 1: The Fire Code and the International Fire
Code both require the installation of smoke detection when the volume of electrolytes
stored in the batteries reaches a certain threshold, typically 50 or 100 gal depend-
ing on which of these two codes is being followed. This requirement applies to the
valve-regulated lead-acid (VRLA) batteries that are typically used in UPS systems, not just traditional wet-cell batteries. It is not unusual to exceed this threshold with as little
as 10 minutes of battery capacity for a 160-kVA UPS.
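
Because the trigger is expressed as a total electrolyte volume, a simple tally across the battery installation usually answers the question. The sketch below is generic: the per-unit electrolyte volume must come from the battery manufacturer's data sheet, and the 50 gal threshold shown is only one of the two values cited above.

def smoke_detection_required(units_per_string: int,
                             strings: int,
                             electrolyte_gal_per_unit: float,
                             threshold_gal: float = 50.0) -> bool:
    """Compare total stored electrolyte against the applicable fire code threshold."""
    total_gal = units_per_string * strings * electrolyte_gal_per_unit
    return total_gal > threshold_gal

# Hypothetical example: 40 VRLA blocks per string, 4 strings, 0.5 gal of electrolyte per block
print(smoke_detection_required(40, 4, 0.5))  # True -> 80 gal exceeds a 50 gal threshold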

Fire suppression systems


Although automatic sprinkler and similar fire suppression systems are often required
by the AHJ, they are not mandated by NFPA 101 as part of the basic requirements for
business and industrial occupancies. However, as previously stated, their presence
allows for a certain leniency elsewhere in NFPA 101, such as longer paths of egress.
Other codes and standards that apply to common data center installations, such as diesel-fuel storage and stationary battery rooms, may indirectly necessitate the installation of a sprinkler system.

Regardless, clean agent fire suppression systems that are not required by code are
still commonly used to protect the server equipment within the white space. While not
required, if a clean agent fire system is provided, it has to be furnished and installed in
accordance with the applicable codes and standards. NFPA 2001: Standard on Clean
Agent Fire Extinguishing Systems specifically requires automatic detection and actua-
tion by default and requires a fire alarm system for proper operation and supervision.

John Yoon is a lead electrical engineer at McGuire Engineers Inc. and is a member of
the Consulting-Specifying Engineer editorial advisory board.



SUSTAINABLE STRATEGIES
FOR DATA CENTERS
Andy Baxter, PE, Page principal and director of MEP engineering, discussed three trends related to sustain-
able strategies for the mission critical data center market.

Engineering clients typically want to investigate and integrate energy-efficient
and sustainable solutions based on return on investment (ROI) or total cost
of ownership (TCO). Enterprise data center owners and operators tend to be
more willing to look at much longer ROI or TCO periods to see benefits. They
are also looking at public-perception benefits as well as how this works into their over-
all business model. Owners and operators of colocation data centers, which provide
access to multiple clients, generally focus on their bottom lines, resulting in shorter ROI
or TCO periods. Although they do care about public perception and energy efficiency,
their primary concern is to attract customers. This does not mean that they aren’t trying
to be sustainable or energy-efficient. They just have a different set of business priorities.

Clients are concentrating more on reduced power-usage effectiveness (PUE)—the ratio of total facility energy over information technology (IT) equipment energy—and
overall operating costs. Page is seeing more evaluations and decisions moving toward
efficient and sustainable designs as this field continues to receive public exposure.
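
Because PUE is a simple ratio, it can be computed directly from metered data. A minimal sketch follows, with placeholder meter readings used purely for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power-usage effectiveness: total facility energy divided by IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual meter readings
print(round(pue(total_facility_kwh=14_000_000, it_equipment_kwh=10_000_000), 2))  # 1.4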

Changing attitudes regarding LEED certification


There continues to be a desire to obtain a LEED certification at some level. However,
it’s often more of a marketing decision rather than a requirement to be sustainable.
Enterprise data center owners are much more likely to make sustainable selections
as long as they do not add risk to the facility. On the colocation side, those decisions
are almost always about marketing. Looking at the efficiency of the IT equipment itself would have a much bigger impact on the overall sustainability of a facility.

New strategies for making data centers more sustainable

Ideal services would
include cooling solutions
using direct or indirect
economizers, direct
evaporative cooling, liq-
uid cooling and heat re-
covery using data center
energy to heat domestic
and heating hot-water systems for other parts of a building. In general, it is recommended that all aspects of energy-efficient design—including cost, sustainability, marketing, risk management and how it will impact a project—be evaluated before deciding on a particular solution.

Andy Baxter, PE, is principal, mission critical at Page. Page is a CFE Media con-
tent partner.



DESIGNING WITH LIQUID-IMMERSION
COOLING SYSTEMS
Liquid cooling is an option in some data centers. Consider these best practices when looking at immersion
cooling for your next data center project.

In simple thermodynamic terms, heat transfer is the exchange of thermal energy
from a system at a high temperature to one at lower temperature. In a data center,
the information technology equipment (ITE) is the system at the higher temperature.
The objective is to maintain the ITE at an acceptable temperature by transferring
thermal energy in the most effective and efficient way, usually by expending the least
amount of mechanical work.

Heat transfer is a complex process, and the rate and effectiveness depend on a multi-
tude of factors. The properties of the cooling medium (i.e., the lower-temperature sys-
tem) are pivotal, as they directly impact flow rate, the resultant temperature differential
between the two systems and the mechanical work requirement.

The rate at which thermal energy is generated by the ITE is characteristic of the hard-
ware (central processing units, graphics processing units, etc.) and the software it is
running. During steady-state operation, the thermal energy generated equals the rate
at which it is transferred to the cooling medium flowing through its internal compo-
nents. The flow rate requirement and the temperature envelope of the cooling medium
is driven by the peak rate of thermal energy generated and the acceptable temperature
internal to the ITE.

The flow rate requirement has a direct bearing on the mechanical work expended at
the cooling medium circulation machine (pump or fan). The shaft work for a reversible, steady-state process with negligible change in kinetic or potential energy is equal to ∫vdP, where v is the specific volume and P is the pressure. While the pump and fan processes are nonideal, they follow the same general trend.

Figure 1: A flow diagram for an open-immersion cooling configuration is shown. The ITE is immersed in a liquid bath open to the atmosphere. Image courtesy: Environmental Systems Design Inc.

For data centers, air-cooling systems have been the de facto standard. From the perspective of ITE,
air cooling refers to the scenario where air must be supplied to the ITE for cooling. As
the airflow requirement increases due to an increase in load, there is a corresponding
increase in fan energy at two levels: the air distribution level (i.e., mechanical infrastruc-
ture such as air handling units, computer room air handlers, etc.) and the equipment
level, because ITE has integral fans for air circulation.

Strategies including aisle containment, cabinet chimneys, and in-row cooling units help
improve effectiveness and satisfactorily cool the equipment. However, the fact remains that air has inferior thermal properties and its abilities are getting stretched to the limit as
cabinet loads continue to increase with time. For loads typically exceeding 15 kW/cabinet,
alternative cooling strategies, such as liquid cooling, have become worthy of consideration.

The case for liquid cooling


Liquid cooling refers to a scenario where liquid (or coolant) must be supplied to the ITE.
An IT cabinet is considered to be liquid-cooled if liquid, such as water, dielectric flu-
id, mineral oil, or refrigerant, is circulated to and from the cabinet or cabinet-mounted
equipment for cooling. Several configurations are possible, depending on the boundary
being considered (i.e., external or internal to the cabinet). For the same heat-transfer
rate, the flow rate requirement for a liquid and the energy consumed by the pump are
typically much lower than the flow rate requirement for air and the energy consumed by
the fan system. This is primarily because the specific volume of a liquid is significantly
lower than that of air.
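
The effect of specific volume can be illustrated with a first-order comparison of the volumetric flow needed to remove the same amount of heat with air versus water. The following sketch uses approximate textbook property values near typical data center operating temperatures; it is a rough comparison, not a design calculation.

# Rough comparison of volumetric flow for the same heat-transfer rate, Q = m_dot * cp * dT
Q_kw = 10.0   # heat to be removed, kW
dT_k = 10.0   # temperature rise of the cooling medium, K

# Approximate properties near typical operating temperatures
air = {"cp_kj_per_kg_k": 1.005, "density_kg_per_m3": 1.2}
water = {"cp_kj_per_kg_k": 4.18, "density_kg_per_m3": 997.0}

for name, props in (("air", air), ("water", water)):
    m_dot = Q_kw / (props["cp_kj_per_kg_k"] * dT_k)   # mass flow, kg/s
    vol_flow = m_dot / props["density_kg_per_m3"]     # volumetric flow, m^3/s
    print(f"{name}: {m_dot:.3f} kg/s, {vol_flow * 1000:.2f} L/s")

# air:   roughly 1.0 kg/s and about 830 L/s
# water: roughly 0.24 kg/s and about 0.24 L/s, several orders of magnitude less volume to move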

For extreme load densities typically in excess of 50 to 75 kW/cabinet, the liquid should
preferably be in direct contact with ITE internal components to transfer thermal energy
effectively and maintain an acceptable internal temperature. This type of deployment is
called liquid-immersion cooling and it is at the extreme end of the liquid cooling spec-
trum. Occasionally referred to as “chip-level cooling,” the commercially available solu-
tions can essentially be categorized into two configurations:

1. Open/semi-open immersion. In this type of system, the ITE is immersed in a bath of liquid, such as dielectric fluid or mineral oil. The heat-transfer mechanism is vaporization, natural convection, forced convection, or a combination of vaporization and convection (see Figure 1).

2. Sealed immersion. In this type of system, the ITE is sealed in liquid-tight enclosures and liquid, such as refrigerant, dielectric fluid, or mineral oil, is pumped through the enclosure. The heat-transfer mechanism is vaporization or forced convection, and the enclosure is typically under positive pressure (see Figure 2).

For both types of systems, thermal energy can be transferred to the ambient by
means of fluid coolers (dry or evaporative) or a condenser. It can also be transferred to
facility water (chilled water, low-temperature hot water, or condenser water) by means
of a heat exchanger.

A number of proprietary solutions are available for immersion cooling, and most pro-
viders can retrofit off-the-shelf ITE to make them compatible with their technology.
Some technology providers are capable of providing turnkey solutions and require
limited to no involvement of the consulting engineer.

Others provide products as “kit of parts” and rely on the consulting engineer to design
the associated infrastructure. For the latter, collaboration between the design team
and the cooling technology provider is critical to project success. The design respon-
sibilities should be identified and delineated early in the project. Note that a compre-
hensive guide for designing liquid cooling systems is beyond the scope of this article.

Once the total ITE load (in kilowatts) and load density (kilowatt/cabinet) have been
defined by the stakeholders, the criteria can be used in conjunction with the design
liquid-supply temperature and anticipated delta T across the ITE, to determine the
flow rate requirement and the operating-temperature envelope. Recommendations for
a liquid-supply temperature and anticipated delta T are typically provided by the tech-

nology provider, and empirical data is preferred over theoretical assumptions. For example, a flow rate requirement of 1 gpm/kW, liquid-supply temperature of 104°F, and anticipated delta T of 10°F was used as the basis of design when deploying a specific technology. Requirements can vary significantly between different providers.

Figure 2: This flow diagram shows a sealed immersion cooling configuration. The ITE is enclosed in liquid-tight enclosures typically under positive pressure. Image courtesy: Environmental Systems Design Inc.
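
Once the load, supply temperature, and delta T have been agreed on with the technology provider, the required flow rate follows from a simple energy balance. The sketch below uses hypothetical fluid properties purely to show the arithmetic; actual values must come from the provider, and the result should be reconciled against any empirical guidance (such as the 1 gpm/kW example above).

GPM_PER_M3_S = 15850.3  # unit conversion: 1 m^3/s in U.S. gpm

def required_flow_gpm(load_kw: float, delta_t_f: float,
                      density_kg_per_m3: float, cp_kj_per_kg_k: float) -> float:
    """Volumetric flow needed to absorb load_kw with a given temperature rise."""
    delta_t_k = delta_t_f * 5.0 / 9.0                   # convert delta T from F to K
    m_dot = load_kw / (cp_kj_per_kg_k * delta_t_k)      # mass flow, kg/s
    return (m_dot / density_kg_per_m3) * GPM_PER_M3_S   # volumetric flow, gpm

# Hypothetical dielectric fluid: density 800 kg/m3, cp 2.0 kJ/(kg*K), 10 F delta T
print(round(required_flow_gpm(100.0, 10.0, 800.0, 2.0), 1))  # ~178 gpm for a 100 kW load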

Selecting a liquid
The liquid properties impact major facets of the design and should be reviewed in
detail. The mechanical industry is accustomed to working with typical liquids, such as
water, glycol solutions, and refrigerants; deviations associated with unique liquids can
create challenges. Properties such as kinematic viscosity, dynamic viscosity, specific
heat, density, thermal conductivity, the coefficient of thermal expansion, and heat capacity can influence the design. Because the liquid is typically proprietary, the prop-
erties are not available in standard design guides or catalogs and are provided by the
cooling technology provider.

Liquid properties have a direct bearing on heat-transfer characteristics and greatly


impact the selection of heat-transfer equipment, such as coils, heat exchangers, and
fluid coolers. As discussed, standard catalog data cannot be used for this purpose.
However, major manufacturers are capable of providing estimated performance for
unique liquids.

System pressure drop calculations can also be challenging. One option is to use the
underlying principles of fluid mechanics. For example, the Darcy-Weisbach equation
can be used to estimate the pressure drop through pipes when circulating Newtonian
fluids. For sealed-immersion applications, the pressure drop through the ITE enclosure
is typically supplied by the technology provider. When selecting pumps for liquid cir-
culation, properties like density and viscosity will impact the brake horsepower, head,
flow capacity, and efficiency of the pump. Major manufacturers can provide pump per-
formance when circulating unique liquids. Another option is to estimate performance by
using correction factors, equations, or charts developed by organizations, such as the
Hydraulic Institute, and applying them to standard pump curves developed for water.
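
As an illustration of the first option (estimating pressure drop from first principles), the Darcy-Weisbach relation can be applied once the fluid properties are known. The sketch below assumes fully developed turbulent flow in a straight pipe and uses the Haaland approximation for the friction factor; the property values are placeholders that should be replaced with the technology provider's data.

import math

def darcy_weisbach_dp(length_m, diameter_m, velocity_m_s,
                      density_kg_m3, dyn_viscosity_pa_s, roughness_m=4.5e-5):
    """Estimate straight-pipe pressure drop (Pa) for a Newtonian fluid."""
    re = density_kg_m3 * velocity_m_s * diameter_m / dyn_viscosity_pa_s  # Reynolds number
    if re < 4000:
        raise ValueError("this sketch assumes turbulent flow")
    # Haaland explicit approximation of the Colebrook friction factor
    f = (-1.8 * math.log10((roughness_m / (3.7 * diameter_m)) ** 1.11 + 6.9 / re)) ** -2
    return f * (length_m / diameter_m) * 0.5 * density_kg_m3 * velocity_m_s ** 2

# Hypothetical mineral-oil-like fluid in a 50 mm pipe at 1.5 m/s over 30 m of pipe
dp_pa = darcy_weisbach_dp(30.0, 0.05, 1.5, 850.0, 0.01)
print(f"{dp_pa / 1000:.1f} kPa")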

When the loop design is being established, the compatibility of materials that are ex-
pected to be in direct contact with the liquid should be reviewed. For example, pump
seals and valve seats are frequently constructed of ethylene propylene diene mono-
mer (EPDM), a type of synthetic rubber. However, EPDM is not compatible with petro-
leum-based liquids.

Similarly, the use of inert pipe
materials, such as stainless steel
and copper, and use of me-
chanical joints in lieu of welding,
soldering, or brazing should be
discussed with the technology
provider. Any contamination of
liquid due to incompatibility with
materials of construction can
have serious repercussions and
can lead to catastrophic failure of
the ITE.

Requirements for fluid maintenance should be discussed with the technology provider. Petroleum-based liquids, such as mineral oil, are susceptible to biological and water contamination over time. Liquid degradation can negatively impact the heat-transfer properties and can lead to premature failure of the ITE. The infrastructure should incorporate suitable means for fluid maintenance if deemed necessary.

Figure 3: This rendering shows liquid-immersion cooled IT cabinets (sealed configuration). Image courtesy: Environmental Systems Design Inc.

For sealed immersion configurations, pressure limitations of the ITE enclosures must
be considered. For a particular application, the pressure rating was less than 10 psig.
The requirement impacts the elevation of mechanical infrastructure relative to the ITE,
as the static head imposed on ITE needs to be kept to a minimum. Similarly, the pressure drop through the circulation loop, hence the pump head requirement, should be
minimized. Pressure-relief valves or other means should be incorporated to prevent
accidental overpressurization.
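
A quick static-head check helps confirm that equipment elevations respect an enclosure rating of this kind. The following is a minimal sketch, assuming a hypothetical fluid density and the roughly 10 psig limit described above.

def static_head_psi(fluid_density_kg_m3: float, elevation_above_ite_m: float) -> float:
    """Static pressure (psi) imposed on the ITE by a column of liquid above it."""
    pa = fluid_density_kg_m3 * 9.81 * elevation_above_ite_m  # P = rho * g * h
    return pa / 6894.76                                      # convert Pa to psi

# Hypothetical: dielectric fluid at 800 kg/m3 with piping and cooling gear 6 m above the enclosures
head = static_head_psi(800.0, 6.0)
print(f"{head:.1f} psi of static head")  # ~6.8 psi, leaving little margin below a 10 psig rating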

The right solution?


When dealing with extremely dense cabinets, immersion cooling is worthy of consider-
ation. It is suitable for deployments ranging from a few kilowatts to several megawatts.
Due to improved heat-transfer performance as compared with an air-cooling system,
liquid-supply temperatures higher than 100° F are feasible. Higher liquid temperatures
increase the hours of economization, offer the potential for heat recovery, and in cer-
tain climates can eliminate the need for chillers completely. The elimination of internal
ITE fans reduces energy consumption and noise. In addition, pump energy for circu-
lating liquid is typically lower than fan energy.

Despite the mechanical advantages, there are reasons for caution when deploying
liquid-immersion cooling in data centers. The impact on infrastructure, such as struc-
tural, electrical, fire protection, and structured cabling, should be evaluated. In a typ-
ical data center, air-cooling systems are still needed as certain ITE, such as spinning
drives, cannot be liquid-cooled. Immersion cooling is still in its nascent stage, and
long-term statistical data is needed for detailed evaluation of ITE and infrastructure
reliability, serviceability, maintainability, and lifecycle costs.

Saahil Tumber is a senior associate and lead mechanical engineer at Environmental Systems Design Inc., and is responsible for the overall design of HVAC systems for
data centers, trading areas, and other mission critical facilities requiring high availabili-
ty. His data center experience spans both enterprise and colocation projects.



YOUR SPONSOR

Rittal North America LLC, the U.S. subsidiary of Rittal GmbH &
Co. KG, manufactures industrial and IT enclosures, racks and
accessories, including climate control and power management
systems.

www.rittalenclosures.com
THANK YOU FOR DOWNLOADING CFE MEDIA’S
DATA CENTER DESIGN DIGITAL REPORT!

For feedback on this CFE Media Digital Report please contact

Paul Brouch at [email protected]

We look forward to your feedback!
