M 4 - Introduction of Server Farms
• Core Layer
• Aggregation Layer
• Access Layer
• Storage Layer
• Data Centre Transport Layer
• Data Centre Services
• IP Infrastructure Services
• Application Services
• Security Services
• Storage Services
Introduction
Data centres are rapidly evolving to match the high expectations for growth, centralisation, and
security. The infrastructure demands that make a data centre design more challenging include
support for dynamic and seamless growth without major interruptions to the live data centre,
addition of new services without major changes to the infrastructure and disruption to current
services, and continuous and predictable uptime with no single point of failure.
The data centre infrastructure and network setup should meet these demands. A typical enterprise data centre environment consists of key building blocks such as the campus network, private WAN, remote access, and server farms.
Application environments, such as Enterprise Resource Planning (ERP) systems and Management Information Systems (MIS), are supported by server farms.
Server farms are crucial for any data centre design; therefore, all data centres support at least one type of server farm.
• Internet Server Farm: These servers are used for users connecting to the internet.
• Intranet Server Farm: These servers are used by enterprise users connecting to the intranet and enterprise applications.
• Extranet Server Farm: These servers are used by external authorised parties connecting
to the enterprise applications.
[Figure: Typical Data Centre Topology — remote access (PSTN, VPN, AAA, RPMS), campus, DMZ Internet server farm, and intranet server farm interconnected through the core switches of the data centre]
The figure depicts how a data centre is organised and how the various kinds of server farms are interconnected with each other. All three types of server farms reside within the data centre. A data centre in which all server farms are deployed within the same facility is referred to as an enterprise data centre.
The server farms may appear similar in their actual engineering; however, the objectives and architectural needs of each are quite different. Based on the functional purpose of each type of server farm, its integration in the topology is defined by the following factors:
• Security considerations
• Redundancy
• Scalability
• Performance
While designing a data centre, the particular set of requirements depends on which types of server farm must be supported. Each server farm requires a specific infrastructure and fulfils different deployment needs, so it is important to take the correct decisions while designing the technical infrastructure of the data centre.
Internet server farms interface with the Internet. This means that the users who access them are mainly located somewhere on the internet and reach the server farms through it. For example, consider an e-commerce site whose online users access it to view information and make purchases.
These server farms typically support the business-to-consumer services. The users can access the
data residing on Internet server farms through web browsers and web interfaces.
Dedicated Internet server farms: The dedicated internet server farm is designed to support
large-scale Internet-facing applications.
[Figure: Dedicated Internet server farm connected through the Internet Edge]
The applications in this type of server farm support the core business function or the enterprise e-business goals. Due to the nature of this server farm and the type of users accessing it, there are two main concerns that internet server farms face:
• Security of data: The large number of users accessing the servers over the internet increases the security risks.
• Scalability of operations: As the use of the internet for business grows, the number of users accessing the servers also increases, causing potential scalability issues.
Dedicated server farms generally provide support for services such as e-commerce and give internet users access to applications; their scalability therefore depends on the extent of the user base.
Demilitarised Zone (DMZ) server farms: Before discussing DMZ server farms, you should know what a DMZ is. A DMZ is a strategy used to provide different levels of security on a network; basically, it separates the trusted network from the untrusted network.
[Figure: DMZ server farm positioned between the Internet and the private WAN and campus network]
The infrastructure that supports these server farms is also used to provide internet access to enterprise users. Typically, these server farms are located in the DMZ, since they are accessible through the internet and are also a part of the enterprise network. DMZ server farms require stringent security policies that not only keep the enterprise's network safe, but also protect the server farms from external attacks.
Intranet Server Farms
Intranet has evolved over the last few decades primarily due to the increased acceptance of the
client/server model and of web-based applications on the Internet.
Intranet server farms are similar to internet server farms in terms of how they are used. However, the applications supported by intranet server farms are only available to the internal users of an enterprise.
[Figure: Intranet server farm connected to the core switches, with the Internet Edge, private WAN, and campus]
The intranet server farms host the enterprise-critical computing resources that support business processes and internal applications. The core switches shown in the figure are a part of the enterprise backbone; they connect the Internet Edge module with the private WAN. The Internet Edge module provides connectivity between the enterprise network and the internet.
The following measures are taken to make the data more secure on the intranet server farm:
• Security policies are applied to the application tiers, especially when the servers on different tiers communicate with each other. The application architecture determines the security level to be applied to each tier.
Since intranet server farms are used by enterprise users, the size and architecture of the server farm are determined by the number of users accessing the applications as well as the type of applications. Rich applications impose a higher load on the server farm, and as the number of users grows, the demand for accessing the intranet applications also increases. This makes scalability a key factor while designing the architecture of the server farm.
With intranet server farms, the internet users would not be able to use the enterprise applications
on the intranet. However, the access can be given to Internet users through the Virtual Private
Network (VPN) technology.
Extranet Server Farms
From a functional perspective, the extranet server farms sit between the internet and intranet server farms. Unlike the internet or intranet server farms, the applications on the extranet server farms are accessed only by a specific user group. This type of setup is mainly used to provide access to business partners, who are external to the enterprise, yet considered a trusted group.
The main function of an extranet server farm is to enable business-to-business communication. It also provides a secure environment for fast exchange of information. High-speed access is important for businesses, as it helps reduce both time to market and cost.
[Figure: Extranet server farm topology — partners connect over VPN through the Internet Edge; the extranet server farm sits alongside the DMZ Internet server farm and the intranet server farm behind the core switches]
The figure depicts the dedicated links that connect the enterprise and its business partners. Many enterprises use VPNs to support this communication due to the following advantages:
• Ease of setup
• Lower implementation costs
• Support for concurrent voice, video, and data traffic over an IP network
This type of architecture requires specific security measures, since access is provided only to trusted external users. These users have access to a part of the enterprise, but are restricted from accessing the rest of the enterprise network. Further, the internal users of the enterprise can also access the extranet server farm.
For implementing a highly secure environment, dedicated firewalls and routers can be added to
the network. However, if the number of users accessing this network is small, then the existing
Internet Edge infrastructure can be leveraged.
Partners can access this network in two ways: through dedicated links or through VPN connections.
A data centre can have any combination of the server farms. The type of server farms that a data
centre will cater to is important, as it defines the technical architecture of the data centre.
Based on the core functions that a data centre provides, it can be of three types: the Internet Data Centre, the corporate data centre, and the Software-Defined Data Centre.
Internet Data Centre
In very simple terms, an Internet Data Centre (IDC) is a data centre connected to the internet and accessible from it. An IDC is traditionally an outsourced solution, built and managed by service providers. There are also a few enterprises whose core business is based on internet commerce; such enterprises operate their own IDCs.
The architectures of enterprise IDCs and service provider IDCs are quite similar. However, in the case of an enterprise IDC, the demand for scalability is generally lower, as the user base and the services provided are comparatively smaller than those of a service provider IDC. A service provider IDC needs to operate on a large scale, as it hosts multiple customers.
If a business would like to operate an enterprise IDC but does not want to build a server farm from scratch, it can use the facilities of a collocated data centre. An enterprise can rent a server or a cabinet, which remains under the control of the enterprise and not the service provider. Such an arrangement helps reduce:
• The additional task of building the data centre internally from the ground up.
• The cost of building a server farm.
• The product's time to market.
Corporate Data Centre
Corporate data centres are built to support the core functions of a business. The business model can be based on internet services, intranet services, or both; hence, the corporate data centre architecture can include any type of server farm. An example of this architecture is depicted in Fig. 4.1.1, in which the data centre is supported by all three types of server farms and is connected to the enterprise network.
Though a corporate data centre can support all three types of server farms, they are typically used
for supporting intranet server farms hosting enterprise applications.
Software-Defined Data Centre
In a Software-Defined Data Centre (SDDC), all infrastructure elements, including networking, storage, and security, are virtualised and delivered as a service. The entire infrastructure is abstracted from the hardware and managed through software: the key tasks related to deployment, provisioning, configuration, and operation of the infrastructure are automated and carried out through software. Virtualisation is at the core of an SDDC. The major building blocks of an SDDC are:
• Network virtualization
• Storage virtualization
• Server virtualization
Network virtualization: In this type of virtualization, the available network resources are
combined by splitting up the available bandwidth into channels. Each channel is independently
secured and can be assigned to a specific server or device in real time. The authorized users can
also have shared access to all the network resources.
Network virtualization helps to reduce the cumbersome and time-consuming task of network
management. Since the tasks are virtualized, it improves the productivity and efficiency of the
network administrator.
Some of the tasks that the administrator does through network virtualization include:
• Centrally managing the files, videos, images, programs, and folders from a single physical
site.
• Assigning and reassigning the storage media such as hard drives and tape drives.
Network virtualization optimizes the network speed and improves the reliability of the network.
This virtualization technique is especially useful in networks that may experience a sudden
increase in usage.
Storage virtualization: In this type of virtualization, the physical storage from multiple
network storage devices is pooled together into a single storage device. This pooled storage
device is then centrally managed from a single console.
This type of virtualisation is typically used in a Storage Area Network (SAN). With storage virtualisation, the storage administrator can perform the tasks of data backup, data archiving, and data recovery more easily, as it essentially disguises the complexity of the SAN environment.
In storage virtualization, a new software and/or hardware layer is created between the storage
system and server. This means that the applications on the server need not know on which storage
subsystems their data is residing.
With this type of virtualisation, the storage administrator can combine the storage systems offered by different vendors into tiered storage. This is possible because the virtualisation layer functions as an intermediary between the storage and the server.
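The pooling described above can be sketched in a few lines. This is a hypothetical illustration, not a real storage product's API: a virtualisation layer pools physical devices (from any vendor) and carves virtual volumes out of them, so the server only ever sees the volume name, never the underlying disks.

```python
# Illustrative sketch of storage virtualisation: a layer between server and
# storage maps virtual volumes onto pooled physical devices. All device and
# volume names are made up for the example.

class StoragePool:
    def __init__(self):
        self.devices = {}        # device name -> free capacity in GB
        self.volumes = {}        # volume name -> list of (device, GB) extents

    def add_device(self, name, capacity_gb):
        """Pool a physical storage device (any vendor) into the virtual layer."""
        self.devices[name] = capacity_gb

    def create_volume(self, name, size_gb):
        """Carve a virtual volume out of whichever devices have free space."""
        extents, needed = [], size_gb
        for dev, free in self.devices.items():
            if needed == 0:
                break
            take = min(free, needed)
            if take > 0:
                extents.append((dev, take))
                self.devices[dev] -= take
                needed -= take
        if needed > 0:
            raise ValueError("not enough pooled capacity")
        self.volumes[name] = extents
        return extents

pool = StoragePool()
pool.add_device("vendorA-array", 100)
pool.add_device("vendorB-array", 50)
# The volume spans two vendors' arrays, but the server only sees "db-vol".
print(pool.create_volume("db-vol", 120))
```

Note how the volume transparently spans arrays from two different vendors, which is exactly the tiered-storage combination the text describes.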
Server virtualization: In this type of virtualization, the server resources are masked from the
server users. The server resources include the identity and number of physical servers, processors,
and operating systems.
Server virtualisation allows for hiding the complicated details of the server from the users. This
helps in increasing resource sharing and managing the server capacity for meeting the scalability
needs.
To implement server virtualisation, the server administrator creates multiple isolated virtual
environments from a single physical server. These virtual environments are known as virtual private
servers or guests, instances, or containers.
In this type of virtualisation, each server runs on a virtual imitation of the hardware layer. So, the
guest OS can run without the need for any modifications.
The guest has no knowledge of the host's operating system because it is not aware that it's not
running on real hardware. It does, however, require real computing resources from the host.
Some of the tasks that a server administrator can do through server virtualisation include:
• Emulate the connectivity between applications and services within a test environment. So,
the administrator need not physically test the software on the possible hardware.
• Consolidate the server resources, thus reducing the number of servers and racks required.
The modular approach of data centre design has several advantages. Especially when the network
topology requires modifications, specific blocks can be moved around, added or deleted without
disrupting the entire network. Hence, modular data centres are scalable.
When a data centre is designed using a modular approach, the overall network is divided into four main layers, with a specific hierarchy that defines the communication between them. These layers, which make up the network topology of a data centre, can be physical or logical. The four main layers are:
• Core layer
• Aggregation layer
• Access layer
• Storage layer
In case of a distributed data centre design, an additional layer of data transport between the data
centres needs to be implemented.
Generic Enterprise Network Topology
As you can see in Fig. 4.2.1 the core connectivity functions include:
• Internet Edge: It ensures the connectivity between the enterprise and internet.
• Campus Core: It physically connects the devices to enable access over the network areas.
• Server Farm: It connects to the various network layers. Server farm includes various
network layers of the enterprise data centre topology as listed below:
▪ Core layer
▪ Aggregation layer
▪ Access layer
▪ Storage layer
▪ Data Centre transport layer
The inclusion of the different layers in the architecture depends on how the multi-layer model is implemented in the data centre; some layers may not be present in a given implementation. Dividing the entire topology into layers helps enterprises build highly scalable and flexible data centres. It allows the data centre transport and storage layers to support high-speed mirroring, a data replication technique in which a complete copy of the data is created, and the access layer to support different server form factors such as blade servers, single rack unit servers, and mainframes.
The data centre aggregation layer brings the core and access layers together. Therefore, the
aggregation layer demands a high level of security and availability. As shown in Fig. 4.2.1, both
the aggregation and core layers are usually installed with redundant components (in pairs) to avoid
single points of failure.
The connection between the layers is done by network switches. There are switches implemented
in all the layers of the topology. In most data centre topologies, the switching is implemented in
the following manner:
Layer 2 switching: This layer forwards data frames between devices on the same network segment. Its key functions include:
• Physical addressing: The switch locates the Media Access Control (MAC) address of the
sending and receiving devices.
• Packet forwarding: The switch builds a MAC address table to selectively forward data
packets.
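The MAC address table described above can be sketched as follows. This is a minimal, hypothetical model of a learning switch, not a real device: the switch learns the source MAC of each incoming frame, then forwards selectively once the destination is known, flooding only unknown destinations.

```python
# Minimal sketch of Layer 2 switching: learn source MACs to build the MAC
# address table, forward known destinations out of a single port, and flood
# unknown destinations out of every port except the one the frame arrived on.

class Layer2Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the sender's location
        if dst_mac in self.mac_table:            # known destination: forward
            return [self.mac_table[dst_mac]]
        # unknown destination: flood out of every port except the ingress one
        return [p for p in self.ports if p != in_port]

sw = Layer2Switch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))   # bb:bb unknown -> flood: [2, 3, 4]
print(sw.receive(2, "bb:bb", "aa:aa"))   # aa:aa already learned on port 1 -> [1]
```

After the second frame, both hosts are in the table, so all further traffic between them is switched point to point instead of flooded.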
Layer 3 switching: This layer includes the switching and routing technologies for transmitting
data between nodes. Its key functions include:
• Routing and forwarding: The switch works as an intelligent and fast router to forward
data packets in a network.
• Addressing: The IP addresses are found in Layer 3 switching and are used for routing.
• Congestion control: This switch makes intelligent decisions when managing traffic within
a network, thus controlling traffic and reducing congestion.
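The routing-and-forwarding function above can be illustrated with a longest-prefix-match lookup, the rule Layer 3 devices use to pick between overlapping routes. The routes and next-hop names here are invented for the example; only the standard-library `ipaddress` module is used.

```python
# Illustrative Layer 3 forwarding: hold a table of IP prefixes and choose the
# longest (most specific) prefix that matches the destination address.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):   "core-uplink",
    ipaddress.ip_network("10.1.0.0/16"):  "aggregation-1",
    ipaddress.ip_network("10.1.2.0/24"):  "access-web",
    ipaddress.ip_network("0.0.0.0/0"):    "internet-edge",   # default route
}

def next_hop(dst):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.7"))     # the /24 is most specific -> access-web
print(next_hop("10.9.9.9"))     # only the /8 and default match -> core-uplink
print(next_hop("192.0.2.1"))    # falls through to the default route
```

Real switches implement this lookup in hardware, but the selection rule is the same.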
Core Layer
The data centre core layer is also known as the backbone of the network topology. This layer connects the campus core to the aggregation layer and uses Layer 3 links for high-speed packet switching. Key characteristics of the core layer include:
• Low-latency switching: It allows the applications to get their tasks done quickly.
• 10 Gigabit Ethernet: It provides data transfer speeds of up to 10 billion bits per second.
Implementing an independent core layer is optional. Having a data centre core layer is highly
recommended for large data centres, which face a high volume of data movement. As the packets
need to be transported at high speed, a separate core layer with few security restrictions is required.
For smaller data centres a collapsed core layer design can be used, in which the core and
aggregation layers are combined. In this type of a model, the packet forwarding as well as packet
aggregation and distribution is performed by the switches. In such a topology, the core and
distribution switching components are integrated within one physical switch.
Aggregation Layer
The aggregation layer is responsible for aggregating the uplinks from the data centre access layer to the core layer. The multilayer switches found in this layer are called aggregation switches, since they perform aggregation functions. These switches support the traditional switching of packets at Layer 2 and Layer 3, as well as the protocols and features for Layer 2 and Layer 3 connectivity. The data centre aggregation layer is also known as the distribution layer due to the packet routing it performs.
Service Devices at Aggregation Layer
The Layer 3 connectivity is implemented between the aggregation and the core layers. This layer
is responsible for providing security and application services. It aggregates the following service
devices that give service to the server farms:
• Multilayer switches: A network device that can operate at higher layers of an Open
Systems Interconnection (OSI) reference model.
• Firewalls: A network security system used to monitor and control all network traffic as
per predetermined security rules.
• Load balancers: A network device that distributes workloads across multiple computing
resources.
• Intrusion Detection Systems: It is used to monitor network for any malicious activities or
violation of policies.
As per the needs of the data centre design, the boundary between Layer 2 and Layer 3 within the
aggregation layer can be set in the multilayer switches, firewalls, or content switching devices.
It is also possible to build multiple aggregation layers within a network, where each of these
aggregation layers would have its own security zone and application services.
Some of the redundancy protocols that are used in the aggregation layer include:
• Hot Standby Router Protocol (HSRP): A redundancy protocol that enables multiple routers to act as a single virtual router. This maintains connectivity even if the first-hop router fails, as other routers are on hot standby, ready to take over.
• Gateway Load Balancing Protocol (GLBP): It protects the data traffic from a failed
router or circuit and allows packet load sharing between a group of redundant routers.
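The HSRP-style failover behaviour described above can be sketched as a simple priority election. This is a hedged conceptual model, not the actual protocol state machine: among the routers that are still up, the one with the highest priority becomes the active gateway, and the standby takes over on failure. Router names and priorities are illustrative.

```python
# Conceptual sketch of first-hop redundancy: several physical routers share
# one virtual gateway; the highest-priority live router is elected active.

def elect_active(routers):
    """Pick the live router with the highest priority as the active gateway."""
    live = [r for r in routers if r["up"]]
    if not live:
        return None
    return max(live, key=lambda r: r["priority"])["name"]

routers = [
    {"name": "agg-router-1", "priority": 110, "up": True},   # preferred active
    {"name": "agg-router-2", "priority": 100, "up": True},   # hot standby
]
print(elect_active(routers))          # agg-router-1

routers[0]["up"] = False              # simulate failure of the active router
print(elect_active(routers))          # standby takes over: agg-router-2
```

Because servers point their default gateway at the shared virtual address, the failover is invisible to them.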
The key functions and benefits of the aggregation layer include:
• Aggregation of traffic from the access layer and connection to the core layer.
• Support for advanced application and security services.
• Support for Layer 4 services such as firewall and server load balancing.
• Large Spanning Tree Protocol (STP) processing load.
• Highly flexible and scalable design.
• Reduced total cost of ownership by simplifying the number of devices that need to be
managed.
Access Layer
The access layer provides Layer 2 connectivity and features to the server farms in the data centre.
This layer has switches that provide high-performance and low-latency switching. For this layer,
both Layer 2 and Layer 3 access is available. However, in most data centre designs, the access
layers are built using the Layer 2 access connectivity.
In the Layer 2 access design, VLAN trunks link upstream so that aggregation layer services can be shared across the same VLAN on multiple switches. A trunk link tags data frames as they exit the port to indicate which VLAN each frame is associated with.
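The frame tagging on a trunk link can be sketched as follows. This is a simplified illustration of 802.1Q tagging (priority bits are left at zero and the frame bytes are invented): a 4-byte tag carrying the VLAN ID is inserted into the Ethernet frame after the destination and source MAC addresses.

```python
# Illustrative sketch of 802.1Q VLAN tagging on a trunk link.
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def tag_frame(frame: bytes, vlan_id: int) -> bytes:
    """Insert the 802.1Q tag after the 12-byte destination+source MAC header."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 1-4094")
    tag = struct.pack("!HH", TPID, vlan_id)   # priority/DEI bits left at zero
    return frame[:12] + tag + frame[12:]

def read_vlan(frame: bytes) -> int:
    """Recover the VLAN ID from a tagged frame (low 12 bits of the TCI field)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID, "frame is not 802.1Q-tagged"
    return tci & 0x0FFF

untagged = bytes(12) + b"\x08\x00" + b"payload"   # dst MAC + src MAC + EtherType
tagged = tag_frame(untagged, vlan_id=20)
print(read_vlan(tagged))                          # 20
```

The switch at the far end of the trunk reads the tag, strips it, and delivers the frame only to ports in that VLAN.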
In case of a multi-tier server farm, the access switches on different segments of the access layer
provide the networking functions. These different segments include:
• Front-end segment
• Application segment
• Back-end segment
As you can see in Fig. 4.2.5 above, the front-end segment is made up of:
• Layer 2 switches
• Security devices (Intrusion Detection System (IDS) and Host IDS)
• Front-end server farms (web and client facing servers)
The access switches in the front-end segment connect to the aggregation switches. The type of
front-end server and their functions determine the network features required in the front-end
segment.
For example, if video streaming over IP is supported in the network, then the multicast feature would be required; if Voice over IP (VoIP) is supported, then Quality of Service (QoS) must be enabled.
The Layer 2 switches at the front-end segment provide intelligent network services and Layer 2 connectivity through the access switches.
Some of the security features that are implemented in the front-end segment include:
• Address Resolution Protocol (ARP) inspection: This is a security measure in which the
ARP packets are validated in a network.
• Broadcast suppression: This is a security measure that prevents LAN interfaces being
disrupted from a broadcast storm (broadcast packets flooding the subnet impacting the
network performance).
• Private VLANs: This is a security measure in which switch ports are restricted so that they can communicate only with a specified uplink.
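The ARP inspection feature listed above can be sketched as a table lookup. This is a hedged conceptual model, not a vendor implementation: the switch keeps a table of trusted IP-to-MAC bindings (in real deployments usually built by DHCP snooping) and drops any ARP packet that contradicts it. The bindings below are invented for the example.

```python
# Conceptual sketch of dynamic ARP inspection: validate ARP packets against a
# trusted binding table and drop spoofed or unknown senders.

trusted_bindings = {
    "10.1.2.10": "aa:bb:cc:00:00:10",   # web server
    "10.1.2.11": "aa:bb:cc:00:00:11",   # app server
}

def inspect_arp(sender_ip, sender_mac):
    """Permit an ARP packet only if it matches the trusted binding table."""
    expected = trusted_bindings.get(sender_ip)
    if expected is None:
        return "drop"     # unknown host: fail closed
    if sender_mac != expected:
        return "drop"     # spoofed reply: attacker claiming another host's IP
    return "permit"

print(inspect_arp("10.1.2.10", "aa:bb:cc:00:00:10"))   # permit
print(inspect_arp("10.1.2.10", "de:ad:be:ef:00:01"))   # drop (ARP spoofing)
```

Dropping the mismatched reply is what prevents an attacker from poisoning other hosts' ARP caches and intercepting their traffic.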
Some of the security devices used to monitor and detect intruders in the front-end segment include:
• Network-based IDS: It monitors the network traffic for any network-based threats.
• Host-based IDS: It monitors the internals of a computing system for any security threats.
The infrastructure components and security features of the application segment are similar to those of the front-end segment. Additionally, this segment supports connectivity to the application servers. The application segment adheres strictly to Layer 2 connectivity. It is important to consider the security measures for this segment carefully, depending on the protection required for the application servers, because the application servers have direct access to the database systems.
Based on the security policies, security features similar to those of the front-end segment, such as ARP inspection, broadcast suppression, and private VLANs, are implemented in the application segment. As in the front-end segment, the Layer 2 switches at the application segment provide intelligent network services.
Application servers provide the business logic functions, defining the communication logic between the front end and the back end. This means that user requests from the front end are translated into commands that the back-end database systems can understand.
Since the application servers communicate with the front end, increased security measures are implemented to avoid security threats such as:
• Trust exploitation: A security attack aimed at compromising a trusted host, in which the attacker uses the privileges granted to a system for malicious purposes.
The infrastructure components and security features of the back-end segment are similar to those of the front-end and application segments. Additionally, this segment supports connectivity to the database servers, as you can see in Fig. 4.2.5.
The security features of the back-end segment are almost identical to those of the application segment. However, the security policies are strictly adhered to and are aimed at protecting the data in the database.
The database system hardware can range from medium-sized servers to high-end servers, and the storage can be configured in ways such as:
• Disk arrays attached to a Storage Area Network (SAN): The storage is on a SAN, and a network connection is required to access the data.
In case the storage is separated using disk arrays, the database server connects to the SAN and
Ethernet switch. The Fibre Channel interface is commonly used to connect to the SAN.
Storage Layer
The storage layer consists of Fibre Channel switches and routers. These switches and routers
support:
• Internet Small Computer System Interface (iSCSI) over IP: This is a networking standard for linking data storage devices. iSCSI transports block-level data between a server and a storage device; it is used for data transfers over intranets and to manage storage involving remote sites.
• Fibre Channel over IP (FCIP): This is used to link Fibre Channel SANs. FCIP enables geographically dispersed SANs to connect using the existing IP infrastructure; as it makes use of that infrastructure, it relies on TCP/IP services to maintain connectivity between remote SANs.
Storage Layer in Primary Data Center and Distributed Data Center
The storage layer devices provide connectivity to servers and storage devices. Examples of storage
devices include disk subsystems and tape subsystems. These devices use the SAN network for
connectivity.
Let us look at the typical setup of a storage layer. The storage layer has storage subsystems (storage servers) and tape subsystems (tape drives) connected to the Fibre Channel switches. The servers connected to these switches are usually critical servers and are dual-homed (they use two or more network interfaces). The major advantage of dual-homing is to safeguard the storage servers from network failures.
Since the data in the database systems always needs to be available, various methods are used to ensure high availability of data in the storage systems:
• Data mirroring: In which the complete data is copied or mirrored onto a different storage
device. The mirrored data is used in case of failure of the device holding the original data.
• Data replication: In which the data is replicated on a different storage system either
synchronously or within fixed intervals.
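The two replication modes above can be sketched side by side. This is a hypothetical illustration, not a real storage array's behaviour: in synchronous mode every write reaches the replica before it is acknowledged, while in asynchronous mode writes are queued and shipped at fixed intervals, trading a small window of potential data loss for lower write latency.

```python
# Illustrative sketch of synchronous vs asynchronous replication.

class ReplicatedStore:
    def __init__(self, synchronous=True):
        self.synchronous = synchronous
        self.primary, self.replica = {}, {}
        self.pending = []                 # writes not yet shipped (async mode)

    def write(self, key, value):
        self.primary[key] = value
        if self.synchronous:
            self.replica[key] = value     # replica updated before the ack
        else:
            self.pending.append((key, value))
        return "ack"

    def flush(self):
        """Async mode: ship queued writes to the replica (runs on an interval)."""
        for key, value in self.pending:
            self.replica[key] = value
        self.pending.clear()

async_store = ReplicatedStore(synchronous=False)
async_store.write("order-42", "paid")
print(async_store.replica)    # {}  -- replica lags until the next interval
async_store.flush()
print(async_store.replica)    # {'order-42': 'paid'}
```

The lag between `write` and `flush` is exactly the data that would be lost if the primary failed mid-interval, which is why critical systems pay the latency cost of the synchronous mode.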
Implementing these methods requires the data to be located in multiple facilities, which creates the need for distributed data centres. With distributed data centres, transport technologies need to be set up to enable the multiple data centres, and the distributed server farms situated within them, to communicate with each other.
Depending on the network technology used (such as Ethernet or Fibre Channel), the transport
technologies should be able to support a wide range of bandwidth requirements depending on the
traffic profiles.
• Asynchronous Transfer Mode (ATM): It transfers data in form of packets of fixed size.
• Metro Ethernet: It uses Ethernet technology to connect Metropolitan Area Networks
(MAN).
• Dark Fibre: It is the unused fibre-optic cable that has been laid out. Companies
usually lease the dark strands or fibres for establishing communication between their
data centres.
• Gigabit Ethernet (GE): Transmits Ethernet frames at a rate of a gigabit per second.
• Fibre Channel (FC): High-speed network technology typically used to connect data
storage devices.
If ATM and GE media types are being used, a metro Ethernet transport infrastructure can use fibre more efficiently. For instance, metro Ethernet transport can carry data concurrently for different media types instead of requiring a dedicated fibre for each of ESCON, GE, and ATM.
Data Centre Services
The data centre services improve the way in which the network operates in each of its
functional service areas (IP infrastructure, security or storage).
The IP and storage infrastructure services are the main building blocks, since both IP and
storage are the key elements of a network. Next, the server farms are built to support the
application environments. The application services optimise the application environment by
using network technologies. The security services leverage the security features on
networking devices. Lastly, the Business Continuity (BC) infrastructure service is used to
enable redundancy levels.
IP Infrastructure Services
The IP infrastructure services provide the core features required by a data centre IP infrastructure to function:
• Layer 2 features
• Layer 3 features
• Intelligent Network Services
The Layer 2 features, such as the virtual LAN (VLAN), support Layer 2 adjacency between the server farms and the service devices. Layer 2 adjacency means that a data packet sent out can directly reach its destination without going through a device that would change the packet.
The two features that enable virtualisation of the physical infrastructure and
consolidation of the server farm segments include:
• VLAN: Similar to LAN, VLANs are configured through software and are used to
divide a network into virtual LANs. With VLAN, you can group segments that are not
physically located on a single LAN.
• Trunking: Provides network access to many clients by sharing a set of lines instead
of providing them individually.
The Layer 3 feature enables a fast-convergence routed network for basic Layer 3 services like
the default gateway support. The purpose of the Layer 3 feature is to provide high availability
of Layer 3 in which the network operation is predictable in case of both normal and failure
conditions.
Intelligent network services: These consist of various features that enable application services at the network level.
Application Services
Application services include features that improve the presence of application environments.
Some of these features include:
• Load balancing
• Caching
• Secure Sockets Layer (SSL) termination
Load balancing: The load-balancing features distribute client requests across the servers in
a farm. They are used to:
• Distribute load to server farms: Load balancers can segment server farms by the
content they serve. For instance, one group of servers is dedicated to serving streaming
video while another group runs application scripts. On receiving a request for an .mp4
(video) file, a load balancer sends it to the first server group; a request for a .cgi
(script) goes to the second server group.
• Monitor server health: The load balancer tracks the health of each server in the farm.
This tracking ensures that traffic is not forwarded to servers that are not functioning.
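The two load-balancer functions above can be sketched together: requests are routed to a server group by content type, and servers that have failed a health check are skipped. The pool names, file extensions and server names are hypothetical.

```python
# Sketch of content-aware load balancing with health tracking.
# Pools, extensions and server names are hypothetical.

POOLS = {
    ".mp4": ["video-1", "video-2"],   # streaming-video server group
    ".cgi": ["app-1", "app-2"],       # application-script server group
}
healthy = {"video-1", "video-2", "app-1"}  # app-2 failed its probe

def pick_server(path: str, rr_state={}):
    """Choose a healthy server from the pool matching the request's
    file extension, rotating round-robin over the survivors."""
    ext = path[path.rfind("."):]
    candidates = [s for s in POOLS[ext] if s in healthy]
    if not candidates:
        raise RuntimeError("no healthy server for " + ext)
    i = rr_state.get(ext, 0)
    rr_state[ext] = i + 1
    return candidates[i % len(candidates)]

print(pick_server("/media/intro.mp4"))   # video-1
print(pick_server("/media/intro.mp4"))   # video-2
print(pick_server("/forms/submit.cgi"))  # app-1 (app-2 is skipped)
```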
Caching: The caching features offered by the application services can benefit server farms,
particularly when the caches operate in Reverse Proxy Caching (RPC) mode.
In RPC mode, the caches are located in close proximity to the server farm and intercept the
requests that are sent to it. This enables offloading (transfer) of static content from the
servers to the caches.
SSL termination: The process, at the server end of an SSL connection, in which traffic
switches between its encrypted and unencrypted forms. An SSL termination device offloads the
SSL encryption/decryption work from the server farm.
Because the SSL device decrypts each packet, the information in the payload (the data carried
within the packet), which would otherwise be encrypted, becomes visible. This allows the load
balancer to make its distribution decision based on the payload before the packets are
re-encrypted and forwarded to the selected server.
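The decrypt, inspect, re-encrypt flow can be sketched as follows. To keep the sketch self-contained, a toy XOR "cipher" stands in for TLS; real deployments use proper TLS with certificates and key exchange, and the key, paths and farm names here are purely illustrative.

```python
# Sketch of the SSL-termination flow: the terminating device decrypts,
# the load balancer inspects the now-visible payload to pick a server
# group, and traffic may be re-encrypted toward the chosen server.
# A toy XOR "cipher" stands in for TLS purely for illustration.

KEY = 0x5A  # illustrative only -- nothing like a real TLS key

def toy_encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

toy_decrypt = toy_encrypt  # XOR is its own inverse

def terminate_and_route(ciphertext: bytes) -> str:
    request = toy_decrypt(ciphertext)      # SSL offload device decrypts
    # The load balancer can now read the payload to choose a group.
    server = "video-farm" if request.endswith(b".mp4") else "app-farm"
    _re_encrypted = toy_encrypt(request)   # optional back-end SSL
    return server

c = toy_encrypt(b"GET /clip.mp4")
print(terminate_and_route(c))  # video-farm
```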
Security Services
Security services are made up of features that are used to secure the data centre infrastructure
and application environments. The security tools and features include:
Access control lists (ACLs): ACLs filter packets to prevent unwanted access to network
infrastructure devices and to protect server-farm application services. ACLs can inspect and
classify packets without causing performance bottlenecks, and can be applied on routed
interfaces as well as on VLANs (as VLAN ACLs).
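ACL evaluation follows a simple pattern worth making explicit: rules are checked in order, the first match wins, and anything that matches no rule is implicitly denied. The sketch below models that logic; the subnets, ports and rules are hypothetical.

```python
# Sketch of ACL evaluation: ordered rules, first match wins, implicit
# deny at the end. Addresses and rules below are hypothetical.
import ipaddress

ACL = [
    ("permit", "10.1.1.0/24", 443),   # web tier may reach HTTPS
    ("deny",   "10.1.1.0/24", 22),    # but not SSH
    ("permit", "10.2.0.0/16", 1433),  # app tier may reach the database
]

def in_subnet(ip: str, cidr: str) -> bool:
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

def acl_allows(src_ip: str, dst_port: int) -> bool:
    for action, subnet, port in ACL:
        if in_subnet(src_ip, subnet) and dst_port == port:
            return action == "permit"
    return False                       # implicit deny

print(acl_allows("10.1.1.5", 443))    # True
print(acl_allows("10.1.1.5", 22))     # False
print(acl_allows("172.16.0.9", 443))  # False (implicit deny)
```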
Firewalls: A firewall monitors and controls incoming and outgoing network traffic according
to predetermined security rules, marking the boundary between more trusted and less trusted
network perimeters.
Firewalls are typically set up at the Internet Edge and the edge of the data centre. They are
also implemented in multi-tier server farm environments for added security between the
different tiers.
Intrusion Detection System (IDS) and host IDSs: To protect data in a data centre, it is
important to detect security intrusions and raise alerts so that the issues can be resolved.
IDSs proactively address such security issues.
The host IDSs enable real-time analysis and response to hacking attempts on all types of
servers (database, application and web). The host IDS has the capability to identify a security
attack and prevent access to server resources even before any unauthorised transactions
occur.
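A common host-IDS technique is signature matching: incoming requests are compared against known attack patterns before the server acts on them. The sketch below shows that idea with two heavily simplified signatures; real IDS rule sets are far richer, and the patterns here are illustrative only.

```python
# Sketch of signature-based intrusion detection: requests are matched
# against known attack patterns before they reach the server.
# Signatures shown are simplified illustrations.

SIGNATURES = [
    ("sql-injection", "' OR 1=1"),
    ("path-traversal", "../"),
]

def inspect(request: str):
    """Return the name of the first matching signature, or None."""
    for name, pattern in SIGNATURES:
        if pattern in request:
            return name            # alert and block before execution
    return None

print(inspect("GET /login?user=admin' OR 1=1 --"))  # sql-injection
print(inspect("GET /../../etc/passwd"))             # path-traversal
print(inspect("GET /index.html"))                   # None
```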
Secure management: Implemented through network protocols for supporting secure
monitoring and access to network devices. Secure management includes the use of the
following:
• Secure Shell (SSH): An encrypted network protocol that allows remote login and
other network services to function securely over an unsecured network.
Storage Services
Storage services can consolidate Direct-Attached Storage (DAS) by using disk arrays that are
connected to the network. The key features provided by the storage services include:
• Effective storage disk utilisation mechanism: This is done through concurrent use
of disk arrays by multiple servers. Concurrent use is possible through various
network-based mechanisms supported by the SAN switch. This switch builds logical
paths from servers to storage arrays.
• Virtual SAN (VSAN) services: The VSAN technology (available on SAN switches) is used
to consolidate isolated SANs onto a single physical SAN. VSANs are similar to VLANs,
the difference being that VSANs are supported by SAN switches rather than Ethernet
switches.
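The isolation property of VSANs can be sketched much like VLAN membership: only hosts and storage arrays that share a VSAN ID can form logical paths, and traffic never crosses VSAN boundaries. The server, array and VSAN names below are hypothetical.

```python
# Sketch of VSAN isolation on a SAN switch: initiators and targets
# that share a VSAN can communicate; traffic never crosses VSAN
# boundaries. Names and VSAN IDs are illustrative.

VSAN_MEMBERS = {
    10: {"db-server", "array-A"},    # e.g. database SAN
    20: {"mail-server", "array-B"},  # e.g. messaging SAN
}

def can_reach(initiator: str, target: str) -> bool:
    """True only if some VSAN contains both endpoints."""
    return any(initiator in members and target in members
               for members in VSAN_MEMBERS.values())

print(can_reach("db-server", "array-A"))  # True  (same VSAN)
print(can_reach("db-server", "array-B"))  # False (isolated VSANs)
```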
• Support for Fibre Channel over IP (FCIP) and Internet Small Computer System
Interface (iSCSI): The storage services support FCIP and iSCSI on the same storage
network infrastructure. The FCIP technology connects SANs that are geographically
distributed. iSCSI is an Internet Protocol (IP) based storage networking standard that is
used to link multiple data storage facilities, and is a lower-cost alternative to Fibre
Channel. FCIP and iSCSI are used both in local SANs and in SANs across distributed
data centres.