Data center
History
Data centers have their roots in the huge computer rooms of the early ages of the computing
industry. Early computer systems were complex to operate and maintain, and required a special
environment in which to operate. Many cables were necessary to connect all the components,
and methods to accommodate and organize these were devised, such as standard racks to
mount equipment, elevated floors, and cable trays (installed overhead or under the elevated
floor). Also, old computers required a great deal of power, and had to be cooled to avoid
overheating. Security was important – computers were expensive, and were often used for
military purposes. Basic design guidelines for controlling access to the computer room were
therefore devised.
During the boom of the microcomputer industry, and especially during the 1980s, computers
started to be deployed everywhere, in many cases with little or no care about operating
requirements. However, as information technology (IT) operations started to grow in complexity,
companies grew aware of the need to control IT resources. With the advent of client-
server computing, during the 1990s, microcomputers (now called "servers") started to find their
places in the old computer rooms. The availability of inexpensive networking equipment,
coupled with new standards for network cabling, made it possible to use a hierarchical design
that put the servers in a specific room inside the company. The use of the term "data center," as
applied to specially designed computer rooms, started to gain popular recognition about this
time.
The boom of data centers came during the dot-com bubble. Companies needed fast Internet
connectivity and nonstop operation to deploy systems and establish a presence on the Internet.
Installing such equipment was not viable for many smaller companies. Many companies started
building very large facilities, called Internet data centers (IDCs), which provided businesses with
a range of solutions for systems deployment and operation. New technologies and practices
were designed to handle the scale and the operational requirements of such large-scale
operations. These practices eventually migrated to private data centers and were adopted
largely because of their practical results.
As of 2007, data center design, construction, and operation is a well-known discipline. Standard
documents from accredited professional groups, such as the Telecommunications Industry
Association, specify the requirements for data center design. Well-known operational metrics for
data center availability can be used to evaluate the business impact of a disruption. There is still
a lot of development being done in operation practice, and also in environmentally-friendly data
center design. Data centers are typically very expensive to build and maintain. For instance,
Amazon.com's new 116,000 sq ft (10,800 m²) data center in Oregon is expected to cost up to
$100 million.[1]
IT operations are a crucial aspect of most organizational operations. One of the main concerns
is business continuity; companies rely on their information systems to run their operations. If a
system becomes unavailable, company operations may be impaired or stopped completely. It is
necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance
of disruption. Information security is also a concern, and for this reason a data center has to
offer a secure environment which minimizes the chances of a security breach. A data center
must therefore keep high standards for assuring the integrity and functionality of its hosted
computer environment. This is accomplished through redundancy of both fiber optic cables and
power, which includes emergency backup power generation.
Effective data center operation requires a balanced investment in both the facility and the
housed equipment. The first step is to establish a baseline facility environment suitable for
equipment installation. Standardization and modularity can yield savings and efficiencies in the
design and construction of telecommunications data centers.
Standardization means integrated building and equipment engineering. Modularity has the
benefits of scalability and easier growth, even when planning forecasts are less than optimal.
For these reasons, telecommunications data centers should be planned in repetitive building
blocks of equipment, and associated power and support (conditioning) equipment when
practical. The use of dedicated centralized systems requires more accurate forecasts of future
needs to prevent expensive over-construction or, perhaps worse, under-construction that fails
to meet future needs.
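
As a rough, purely hypothetical illustration of this trade-off, the sketch below compares a single centralized build sized to a peak forecast against modular blocks added only as demand materializes. The block size, unit costs, and demand figures are all assumptions chosen to show the arithmetic, not industry data.

    # Hypothetical comparison of a centralized build versus modular build-out.
    # All figures are illustrative assumptions, not industry data.
    BLOCK_KW = 500                 # capacity added per modular building block
    COST_PER_KW_MODULAR = 12_000   # assumed cost per kW built modularly
    COST_PER_KW_CENTRAL = 10_000   # assumed cost per kW built all at once
    FORECAST_PEAK_KW = 4_000       # peak load the centralized design is sized for

    actual_demand_kw = [600, 1_100, 1_500, 1_800, 2_000]   # assumed demand per year

    # Centralized: the full forecast capacity is built (and paid for) up front.
    central_cost = FORECAST_PEAK_KW * COST_PER_KW_CENTRAL

    # Modular: blocks are added only when installed capacity is exceeded.
    blocks = 0
    modular_cost = 0
    for demand in actual_demand_kw:
        while blocks * BLOCK_KW < demand:
            blocks += 1
            modular_cost += BLOCK_KW * COST_PER_KW_MODULAR

    print(f"Centralized: {FORECAST_PEAK_KW} kW built for ${central_cost:,}")
    print(f"Modular:     {blocks * BLOCK_KW} kW built for ${modular_cost:,}")

With these assumed numbers the centralized design costs $40 million against $24 million for the modular build, because the forecast overshoots the demand that actually arrives; the penalty reverses if demand outgrows the forecast.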
A data center can occupy one room of a building, one or more floors, or an entire building. Most
of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are
usually placed in single rows forming corridors (so-called aisles) between them. This allows
people access to the front and rear of each cabinet. Servers differ greatly in size from 1U
servers to large freestanding storage silos which occupy many tiles on the floor. Some
equipment, such as mainframe computers and storage devices, is often as big as the racks
themselves and is placed alongside them. Very large data centers may use shipping
containers packed with 1,000 or more servers each;[6] when repairs or upgrades are needed,
whole containers are replaced (rather than repairing individual servers).[7]
[Image: A bank of batteries in a large data center, used to provide power until diesel generators can start.]
Environmental control
Main article: Data center environmental control
Modern data centers try to use economizer cooling, where outside air is used to keep the data
center cool.[10] Washington state now has a few data centers that cool all of their servers using
outside air 11 months out of the year. They do not use chillers or air conditioners, which creates
potential energy savings in the millions.[11]
Telcordia GR-2930, NEBS: Raised Floor Generic Requirements for Network and Data Centers,
presents generic engineering requirements for raised floors that fall within the strict NEBS
guidelines.
There are many types of commercially available floors that offer a wide range of structural
strength and loading capabilities, depending on component construction and the materials used.
The general types of raised floors include stringerless, stringered, and structural platforms, all of
which are discussed in detail in GR-2930 and summarized below.
Stringered Raised Floors - This type of raised floor generally consists of a vertical
array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular
upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the
concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright
and the overall height is adjustable with a leveling nut on the welded stud of the pedestal
head.
To prevent single points of failure, all elements of the electrical systems, including backup
systems, are typically fully duplicated, and critical servers are connected to both the "A-side"
and "B-side" power feeds. This arrangement is often made to achieve N+1 Redundancy in the
systems. Static switches are sometimes used to ensure instantaneous switchover from one
supply to the other in the event of a power failure.
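
A minimal sketch of the N+1 sizing rule, with hypothetical figures: N is the number of UPS modules required to carry the critical load, and one extra module is installed so that any single module can fail or be taken out for maintenance without dropping the load.

    import math

    # Hypothetical figures; real designs depend on the specific facility.
    critical_load_kw = 800    # IT load that must stay up
    ups_module_kw = 250       # capacity of one UPS module

    # N modules are the minimum needed to carry the load; N+1 adds one spare.
    n = math.ceil(critical_load_kw / ups_module_kw)
    installed = n + 1
    print(f"N = {n}, so install {installed} modules for N+1 redundancy")

    # Losing any single module still leaves enough capacity for the full load.
    assert (installed - 1) * ups_module_kw >= critical_load_kw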
Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The
trend is towards an 80–100 cm (31–39 in) void to provide better and more uniform air distribution.
The raised floor provides a plenum for air to circulate below the floor, as part of the air
conditioning system, as well as space for power cabling.
Security
Physical security also plays a large role in data centers. Physical access to the site is usually
restricted to selected personnel, with controls including bollards and mantraps.[14] Video
camera surveillance and permanent security guards are almost always present if the data center
is large or contains sensitive information on any of the systems within. The use of fingerprint-
recognition mantraps is becoming commonplace.
Energy use
Main article: IT energy management
Energy use is a central issue for data centers. Power draw for data centers ranges from a few
kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have
power densities more than 100 times that of a typical office building.[15] For higher power density
facilities, electricity costs are a dominant operating expense and account for over 10% of
the total cost of ownership (TCO) of a data center.[16] By 2012 the cost of power for the data
center is expected to exceed the cost of the original capital investment.[17]
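
As a back-of-the-envelope illustration of that claim, the sketch below estimates annual electricity cost for a hypothetical facility; the IT load, PUE, tariff, and total cost of ownership figures are all assumptions chosen only to show the arithmetic.

    # Hypothetical annual electricity cost for a mid-sized facility.
    it_load_kw = 1_000        # assumed average IT load
    pue = 2.0                 # assumed facility PUE (see "Energy efficiency" below)
    price_per_kwh = 0.10      # assumed electricity price in $/kWh
    hours_per_year = 8_760

    facility_kw = it_load_kw * pue
    annual_kwh = facility_kw * hours_per_year
    annual_electricity_cost = annual_kwh * price_per_kwh

    assumed_annual_tco = 15_000_000   # assumed total cost of ownership per year
    share = annual_electricity_cost / assumed_annual_tco

    print(f"{annual_kwh:,.0f} kWh per year, costing about ${annual_electricity_cost:,.0f}")
    print(f"Roughly {share:.0%} of the assumed total cost of ownership")

With these assumptions the facility draws about 17.5 GWh per year, costing roughly $1.75 million, a little over 10% of the assumed TCO and consistent with the range cited above.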
Siting is one of the factors that affect the energy consumption and environmental effects of a
data center. In areas where the climate favors cooling and plenty of renewable electricity is
available, the environmental effects will be more moderate. Countries with such favorable
conditions, including Finland,[21] Sweden,[22] and Switzerland,[23] are therefore trying to attract
cloud computing data centers.
According to an 18-month investigation by scholars at Rice University's Baker Institute for Public
Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data
center-related emissions will more than triple by 2020.[24]
Energy efficiency
The most commonly used metric to determine the energy efficiency of a data center is power
usage effectiveness, or PUE. This simple ratio is the total power entering the data center
divided by the power used by the IT equipment.
Power used by support equipment, often referred to as overhead load, mainly consists of
cooling systems, power delivery, and other facility infrastructure like lighting. The average
data center in the US has a PUE of 2.0,[19] meaning that the facility uses one watt of
overhead power for every watt delivered to IT equipment. State-of-the-art data center
energy efficiency is estimated to be roughly 1.2.[25] Some large data center operators
like Microsoft and Yahoo! have published projections of PUE for facilities in
development; Google publishes quarterly actual efficiency performance from data centers
in operation.[26]
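
The ratio itself is simple to compute; the sketch below applies it to hypothetical meter readings chosen to correspond to the 2.0 and 1.2 figures cited above.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: total facility power divided by IT power."""
        return total_facility_kw / it_equipment_kw

    # Hypothetical meter readings.
    print(pue(total_facility_kw=1_500, it_equipment_kw=750))   # 2.0, the cited US average
    print(pue(total_facility_kw=900, it_equipment_kw=750))     # 1.2, the cited state of the art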
Network infrastructure
[Image: An example of "rack mounted" servers.]
Some of the servers at the data center are used for running the
basic Internet and intranet services needed by internal users in the organization, e.g., e-mail
servers, proxy servers, and DNS servers.
Applications
The main purpose of a data center is running the applications that handle the core
business and operational data of the organization. Such systems may be proprietary and
developed internally by the organization, or bought from enterprise software vendors. Common
examples of such applications are ERP and CRM systems.
A data center may be concerned with just operations architecture or it may provide other
services as well.
Often these applications will be composed of multiple hosts, each running a single
component. Common components of such applications are databases, file
servers, application servers, middleware, and various others.
Data centers are also used for off-site backups. Companies may subscribe to backup
services provided by a data center, often in conjunction with backup tapes. Backups of
servers can be taken locally onto tapes; however, tapes stored on site pose a security
threat and are also susceptible to fire and flooding. Larger companies may also send their
backups off site for added security, which can be done by backing up to a data center.
Encrypted backups can be sent over the Internet to another data center, where they can
be stored securely.
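
As a sketch of the encrypt-before-it-leaves-the-building step, the example below uses the third-party Python cryptography package's Fernet interface. The file names are hypothetical, and a real deployment would draw the key from a key-management system and send the result over an authenticated channel rather than handling it ad hoc.

    # Sketch: encrypt a backup archive before shipping it to a remote data center.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # Illustrative only: in practice the key comes from a key-management system
    # and must be kept safe, or the backup can never be restored.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("backup.tar", "rb") as src:        # hypothetical local backup archive
        ciphertext = fernet.encrypt(src.read())

    with open("backup.tar.enc", "wb") as dst:    # the file actually sent off site
        dst.write(ciphertext)

    # Only holders of the key can restore the archive at the remote site;
    # fernet.decrypt(ciphertext) reverses the operation.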
For disaster recovery, several large hardware vendors have developed mobile solutions
that can be installed and made operational in a very short time. Vendors such as Cisco
Systems,[28] Sun Microsystems,[29][30] IBM, and HP have developed systems that could be
used for this purpose.[31]