How to Design and Build a Data Center
Facility. The facility is the physical building used for the data
center. In simplest terms, a data center is just a big, open
space where infrastructure will be deployed. Although almost
any space can operate some amount of IT infrastructure, a
properly designed facility accounts for a broad array of factors,
including the power and cooling considerations covered below.
[Figure: Large enterprises can require very large data centers, like this Google data center in Douglas County, Ga.]
[Figure: Data center tier standards outline what is required to ensure reliability and performance needs are met.]
There are two primary cooling issues. The first is the amount
of cooling required
(https://www.techtarget.com/searchdatacenter/tip/How-to-
calculate-data-center-cooling-requirements), which ultimately
defines the size, or capacity, of the data center's HVAC
subsystems. Designers must translate the data center's power
demand in watts (W) into cooling capacity gauged in tons (t) --
i.e., the rate of heat removal needed to melt one ton of ice at
32 degrees Fahrenheit over 24 hours, which equals 12,000 BTU
per hour. The typical calculation first converts watts into
British thermal units (BTU) per hour, which can then be
converted into tons:
W x 3.41 = BTU/hour
BTU/hour / 12,000 = t
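As an illustration, here is a minimal Python sketch of that conversion; the function name and the 250 kW sample load are hypothetical:

    def cooling_tons(it_load_watts):
        """Convert an IT power load in watts to required cooling in tons.

        Each watt of IT load dissipates about 3.41 BTU/hour of heat, and
        one ton of cooling capacity removes 12,000 BTU/hour.
        """
        btu_per_hour = it_load_watts * 3.41
        return btu_per_hour / 12_000

    # Hypothetical example: a 250 kW IT load
    print(round(cooling_tons(250_000), 1))  # ~71.0 tons of cooling capacity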
The key is understanding the data center's power demand in
watts and its planned scalability so that the building's cooling
subsystem can be right-sized. If the cooling system is too
small, the data center can't host or scale the expected amount
of IT infrastructure. If it's too large, it becomes a costly and
inefficient utility for the business.
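One way to fold planned scalability into the same calculation is to size against the current load plus a growth margin. A minimal sketch follows; the 20% headroom and the load figure are hypothetical planning assumptions, not recommendations:

    def required_cooling_tons(current_watts, growth_factor=1.2):
        """Size cooling for the current IT load plus a planned growth margin.

        growth_factor is a hypothetical planning margin (1.2 = 20% headroom);
        the conversion itself is the same W -> BTU/hour -> tons math as above.
        """
        planned_watts = current_watts * growth_factor
        return planned_watts * 3.41 / 12_000

    # Hypothetical example: 250 kW of IT load today, sized with 20% headroom
    print(round(required_cooling_tons(250_000)))  # roughly 85 tons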
The second cooling issue for data centers is efficient use and
handling of cooled and heated air. In an ordinary human
space, simply introducing cooled air from one vent and
exhausting warmed air from another vent elsewhere in the
room causes mixing and temperature averaging that yields
adequate comfort. But this common home and office approach
doesn't work well in data centers, where racks of equipment
create extreme heat in concentrated spaces. Racks of hot gear
demand careful delivery of cooled air, followed by deliberate
containment and removal of the heated exhaust. Data center
designers must deliberately prevent the very mixing of hot and
cold air that keeps human air-conditioned spaces so
comfortable.
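To get a feel for how much air a single rack demands, a rough, illustrative estimate can use the standard sensible-heat approximation for air, BTU/hour ≈ 1.08 x CFM x temperature rise in degrees Fahrenheit. This formula and the 10 kW rack below are illustrative assumptions, not figures from the article:

    def rack_airflow_cfm(rack_watts, delta_t_f):
        """Estimate the airflow (in CFM) needed to carry away a rack's heat.

        Uses the standard sensible-heat approximation for air:
        BTU/hour = 1.08 x CFM x temperature rise (degrees F).
        """
        btu_per_hour = rack_watts * 3.41
        return btu_per_hour / (1.08 * delta_t_f)

    # Hypothetical 10 kW rack with a 20-degree F rise from cold aisle to hot aisle
    print(round(rack_airflow_cfm(10_000, 20)))  # roughly 1,580 CFM through one rack

Any cooled air that bypasses the rack or mixes with the hot exhaust contributes nothing to that airflow, which is why containment matters.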
[Figure: In data centers that don't use a hot/cold aisle design, the cooling units aren't always able to cool equipment efficiently.]
[Figure: In data centers designed around hot/cold aisles, the cooling units can cool equipment more efficiently.]
Other approaches to cooling include end-of-row and top-of-
rack air-conditioning systems, which introduce cooled air into
portions of a row of racks and exhaust heated air into hot
aisles.
[Figure: This diagram shows the basic concepts of room-, row- and rack-based cooling architectures. The blue arrows indicate the relation of the primary cooling supply paths to the room.]
[Figure: Power usage effectiveness (PUE) -- total facility energy divided by the energy delivered to IT equipment -- is a metric used to assess the efficiency of a data center.]
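For reference, here is a minimal sketch of the PUE calculation; the sample power figures are hypothetical:

    def pue(total_facility_kw, it_equipment_kw):
        """Power usage effectiveness: total facility power divided by IT power.

        A PUE of 1.0 would mean every watt entering the facility reaches the IT
        gear; higher values reflect overhead for cooling, power delivery and so on.
        """
        return total_facility_kw / it_equipment_kw

    # Hypothetical facility drawing 1,500 kW in total to run a 1,000 kW IT load
    print(pue(1_500, 1_000))  # 1.5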
Raise the temperature. The colder a server room is kept, the
more power-hungry and expensive its cooling becomes. Rather
than keeping the server room colder, evaluate the effect of
actually raising the temperature. For example, rather than
running a cold aisle at 68 to 72 degrees Fahrenheit, consider
running it at 78 to 80 degrees Fahrenheit. Most modern IT gear
can tolerate elevated inlet temperatures in this range.
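As a quick sanity check on a proposed setpoint, the sketch below compares a cold-aisle target against the commonly cited ASHRAE-recommended inlet range of roughly 64 to 81 degrees Fahrenheit (18 to 27 degrees Celsius). The range and the sample setpoints are assumptions for illustration; always confirm against the equipment vendors' own specifications.

    # Commonly cited ASHRAE-recommended inlet range for most IT equipment,
    # roughly 18-27 degrees C; vendor specifications take precedence.
    ASHRAE_RECOMMENDED_F = (64.4, 80.6)

    def setpoint_in_recommended_range(cold_aisle_f):
        """Return True if a proposed cold-aisle temperature falls in the range."""
        low, high = ASHRAE_RECOMMENDED_F
        return low <= cold_aisle_f <= high

    # Hypothetical setpoints: a traditional cold aisle, a raised one and an extreme one
    for proposed_f in (70, 78, 85):
        print(proposed_f, setpoint_in_recommended_range(proposed_f))
    # 70 True, 78 True, 85 False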