
Chapter 6: Network Security

Lesson 1: Network security protocols (SSL/TLS, IPsec) and encryption methods.

Both IPsec and SSL/TLS VPNs can provide enterprise-level secure remote access, but they do so
in fundamentally different ways. These differences directly affect both application and security
services and should drive deployment decisions.
IPsec VPNs protect IP packets exchanged between remote networks or hosts and an IPsec
gateway located at the edge of your private network. SSL/TLS VPN products protect application
traffic streams from remote users to an SSL/TLS gateway. In other words, IPsec VPNs connect
hosts or networks to a protected private network, while SSL/TLS VPNs securely connect a user's
application session to services inside a protected network.
IPsec VPNs can support all IP-based applications. To an application, an IPsec VPN looks just
like any other IP network. SSL/TLS VPNs can only support browser-based applications, absent
custom development to support other kinds.
Before you choose to deploy either or both, you'll want to know how SSL/TLS and IPsec VPNs
stack up in terms of security and what price you have to pay for that security in administrative
overhead. Let's compare how IPsec and SSL/TLS VPNs address authentication and access
control, defense against attack and client security, and then look at what it takes to configure and
administer both IPsec and SSL/TLS VPNs, including client vs. clientless pros and cons and
fitting VPN gateways into your network and your app servers.
Authentication and access control
Accepted security best practice is to only allow access that is expressly permitted, denying
everything else. This encompasses both authentication, making sure the entity communicating
-- be it person, application or device -- is what it claims to be, and access control, mapping an
identity to allowable actions and enforcing those limitations.
Authentication
Both SSL/TLS and IPsec VPNs support a range of user authentication methods. IPsec employs
Internet Key Exchange (IKE) version 1 or version 2, using digital certificates or pre-shared
secrets for two-way authentication. Pre-shared secrets are the single most secure way to handle
secure communications but are also the most management-intensive. SSL/TLS web servers always
authenticate with digital certificates, no matter what method is used to authenticate the user. Both
SSL/TLS and IPsec systems support certificate-based user authentication, though each offers less
expensive options through individual vendor extensions. Most SSL/TLS vendors support
passwords and tokens as extensions.
SSL/TLS is better suited for scenarios where access to systems is tightly controlled or where
installed certificates are infeasible, as with business partner desktops, public kiosk PCs and
personal home computers.
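To make the certificate-based server authentication described above concrete, here is a minimal, hedged client-side sketch using only Python's standard library ssl and socket modules; the host name example.com is a placeholder, and a real VPN gateway or browser performs the equivalent checks internally.

```python
import socket
import ssl

# Build a client-side TLS context that verifies the server's certificate
# against the platform's trusted CA store and checks the host name.
context = ssl.create_default_context()          # CERT_REQUIRED + hostname checking
hostname = "example.com"                        # placeholder host for illustration

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()           # parsed server certificate
        print("Negotiated protocol:", tls_sock.version())
        print("Server certificate subject:", cert.get("subject"))
```

If the certificate is untrusted, expired or issued for a different name, the handshake raises an error before any application data is exchanged, which is the property the gateway relies on.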
Access control
Once past authentication, an IPsec VPN relies on protections in the destination network,
including firewalls and applications for access control, rather than in the VPN itself. IPsec
standards do, however, support selectors -- packet filters that permit, encrypt or block traffic to
individual destinations or applications. As a practical matter, most organizations grant hosts
access to entire subnets, rather than keep up with the headaches of creating and modifying
selectors for each IP address change or new app.
SSL/TLS VPNs tend to be deployed with more granular access controls enforced at the gateway,
which affords another layer of protection but which also means admins spend more time
configuring and maintaining policies there. Because they operate at the session layer, SSL/TLS
VPNs can filter on and make decisions about user or group access to individual applications
(ports), selected URLs, embedded objects, application commands and even content.
If you really need per-user, per-application access control at the gateway, go SSL/TLS. If you
need to give trusted user groups homogenous access to entire private network segments or need
the highest level of security available with shared secret encryption, go IPsec.
Defense against attacks
Both SSL/TLS and IPsec support block encryption algorithms, such as Triple DES, which are
commonly used in VPNs. SSL/TLS VPNs also support stream encryption algorithms that are
often used for web browsing. Given comparable key lengths, block encryption is less vulnerable
to traffic analysis than stream encryption.
If you're implementing an SSL/TLS VPN, choose products that support the current version of
TLS, which is stronger than the older SSL. Among other benefits, TLS eliminates older SSL key
exchange and message integrity options that made it vulnerable to key cracking and forgery.
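As a minimal sketch of how "current TLS only" might be enforced on the server side, the Python standard library's ssl module can pin a minimum protocol version; the certificate and key file paths below are placeholders, and a commercial SSL/TLS VPN gateway exposes the same choice through its configuration interface.

```python
import ssl

# Server-side TLS context that refuses anything older than TLS 1.2.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject legacy SSL/TLS 1.0/1.1 handshakes
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder paths

# A gateway or web server would wrap its listening sockets with this context;
# clients that offer only older protocol versions fail the handshake.
```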
Beyond encryption, there are some important differences between IPsec VPNs and TLS VPNs
that can impact security, performance and operability. They include the following:
 Handling man in the middle (MitM) attacks. Using shared secrets for IPsec
authentication and encryption completely prevents MitM attacks. In addition, IPsec
detects and rejects packet modification, which also thwarts MitM attacks, even when not
using shared secrets. This packet-modification detection can cause problems if there is a Network Address Translation (NAT) system between the endpoints, because a NAT gateway modifies packets by its nature, substituting public IP addresses for private ones and rewriting port numbers.
However, nearly all IPsec products support NAT traversal extensions.

TLS has some protections against lightweight MitM attacks (those not hijacking the
encryption); it carries sequence numbers inside encrypted packets to prevent packet
injection, for example, and uses message authentication to detect payload changes.
 Thwarting message replay. Both IPsec and TLS use sequencing to detect and resist message replay attacks. IPsec is more efficient because it discards out-of-order packets lower in the stack in system code. In SSL/TLS VPNs, out-of-order packets are detected by the TCP session engine or the TLS proxy engine, consuming more resources before they are discarded; a simplified sketch of this sliding-window check appears after this list. This is one reason why IPsec is broadly used for site-to-site VPNs, where raw horsepower is critical to accommodate high-volume, low-latency needs.
 Resisting denial of service (DoS). IPsec is more resistant to DoS attacks because it
works at a lower layer of the network. TLS uses TCP, making it vulnerable to TCP SYN
floods, which fill session tables and cripple many off-the-shelf network stacks. Business-
grade IPsec VPN appliances have been hardened against DoS attacks; some IPsec
vendors even publish DoS test results.
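To make the anti-replay sequencing mentioned above concrete, here is a deliberately simplified Python sketch of a sliding-window replay check in the spirit of what IPsec performs low in the stack; it is illustrative only and not IPsec's actual algorithm or window size.

```python
class ReplayWindow:
    """Simplified sliding-window anti-replay check (illustrative only)."""

    def __init__(self, size=64):
        self.size = size
        self.highest = 0          # highest sequence number accepted so far
        self.seen = set()         # accepted sequence numbers inside the window

    def accept(self, seq):
        if seq + self.size <= self.highest:
            return False          # too old: outside the window, reject
        if seq in self.seen:
            return False          # duplicate: replayed packet, reject
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # forget entries that have fallen out of the window
            self.seen = {s for s in self.seen if s + self.size > self.highest}
        return True

window = ReplayWindow()
print(window.accept(1), window.accept(2), window.accept(1))  # True True False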
Look carefully at individual products and published third-party test results, including
International Computer Security Association certifications for IPsec, IKE and SSL/TLS, to
assess DoS vulnerability in each implementation.
Client security
Your VPN -- IPsec or SSL/TLS -- is only as secure as the laptops, PCs or mobile devices
connected to it. Without precautions, any client device can be used to attack your network.
Therefore, companies implementing any kind of VPN should mandate complementary client
security measures, such as personal firewalls, malware scanning, intrusion prevention, OS
authentication and file encryption.
This is easier with IPsec since IPsec requires a software client. Some IPsec VPN clients include
integrated desktop security products so that only systems that conform to organizational security
policies can use the VPN.
SSL/TLS client devices present more of a challenge on this score because SSL/TLS VPNs can be
reached by computers outside a company's control -- public computers are a particular challenge.
Vendors address this in several ways -- for example:
 An SSL/TLS VPN can attempt to ensure there is no carryover of sensitive information
from session to session on a shared computer by wiping information such as cached
credentials, cached webpages, temporary files and cookies.
 An SSL/TLS VPN can have the browser run an applet locally that looks for open ports and
verifies antimalware presence before the gateway accepts remote access.
 Some SSL/TLS VPNs combine client security with access rules. For example, the
gateway can filter individual application commands -- e.g., FTP GET but not PUT; no
retrieving HTTP objects ending in .exe -- to narrow the scope of activity of those using
unsecured computers.
Session state is a dimension of usability more than security, but it's worth noting that both IPsec
and SSL/TLS VPN products often run configurable keepalives that detect when the tunnel has
gone away. Both kinds of tunnels are disconnected if the client loses network connectivity or the
tunnel times out due to inactivity. Different methodologies are used based on different locations
in the protocol stack, but they have the same net effect on users.

Client vs. clientless


The primary allure of SSL/TLS VPNs is their use of standard browsers as clients for access to
secure systems rather than having to install client software, but there are a number of factors to
consider.
SSL/TLS VPNs do a great job making browser-based apps available to remote devices.
However, generally speaking, the more diverse the application mix, the more attractive IPsec can
become. It boils down to a tradeoff between IPsec client installation and SSL/TLS VPN
customization.
Of course, not all applications are browser-accessible. If key applications aren't, the gateway
would have to push a desktop agent, such as a Java applet, to provide access -- e.g., to a legacy
client or server application. If the environment is rich in such applications, you may spend more
time and effort developing or deploying add-ons than you would have spent supporting an IPsec VPN.
The use of such plugins may conflict with other security policies for desktops. Most
organizations block unsigned Java, for example, since it can be used to install Trojans, retrieve or
delete files and so forth. Some organizations block all active content to be on the safe side. As a
result, you may have to reconfigure some browser clients to use an SSL/TLS VPN, which puts
you back in the business of fiddling with client configurations.
Most client platforms, including Windows, Mac OS X, Android and Apple iOS, have native
support for IPsec. Some gateways may still require third-party client software for advanced
functionality, and older clients may not have the native solution. So, be sure to evaluate potential
VPNs with this in mind. Installing third-party clients is time-consuming and requires access to
the users' devices.
Some vendors offer hardware IPsec VPN clients for organizations that must deal with diverse OS
platforms. These small appliances sit between a worker's home PC and cable or DSL (Digital
Subscriber Line) modem, acting like an IPsec VPN client. The idea is to invest in hardware
upfront to enable administering VPN access via an enterprise-controlled device rather than every
client device behind it. Organizations can instead use IPsec-enabled single office/home office
firewalls to incorporate teleworkers' LANs into their site-to-site VPN topology.
Policy distribution and maintenance are often hamstrung by user mobility and intermittent
connectivity. This is a significant issue for IPsec VPNs. IPsec administrators must create security
policies for each authorized network connection, identifying critical information, such as IKE
identity, Diffie-Hellman group, crypto-algorithms and security association lifetimes. IPsec
vendors provide centralized policy management systems to ease and automate policy
distribution, though not always in a way that integrates cleanly with other network security
policies and policy domains.
For the most part, security policy for SSL/TLS VPNs is implemented and enforced at the gateway -- the SSL/TLS proxy. Thus, there is no client-side policy to distribute and no remote management of user devices.
Integrating VPN gateways
Server-side issues tend to get lost amid the buzz about clientless savings, but understanding
what's involved is essential in VPN product selection, secure system design and cost-effective
deployment.
Whether you choose IPsec or SSL/TLS, your VPN gateway will be where the rubber meets the
road. Server-side VPN administration is required for both. Network configurations are the main
issue for IPsec, and app server management is the problem for SSL/TLS.
IPsec remote hosts become part of your private network, so IT must sort out the following:
 Address assignment. IPsec tunnels have two addresses. Outer addresses come from the
network where the tunnel starts -- e.g., the remote client. Inner addresses, for the
protected network, are assigned at the gateway. IT has to use its Dynamic Host
Configuration Protocol or other IP address management tools to provide ranges for the
gateway to use and has to ensure any internal firewalls or other security systems allow
those addresses access to the desired services.
 Traffic classification. SSL/TLS systems enable granular control of access to services at the gateway. Deciding what to protect and then setting selectors to protect it takes time to configure and time to maintain. For example, "HR clients should be able to reach the HR server" must be mapped into the right set of users and destination subnets, servers, ports and URLs and maintained over time as the services change (see the sketch after this list).
 Routing. Adding an IPsec VPN gateway changes network routes. You'll spend time
deciding how client traffic should be routed to and from the VPN gateway.
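As referenced in the traffic classification point above, here is a minimal sketch of how a human-readable rule such as "HR clients should be able to reach the HR server" might be expanded into the concrete selectors a gateway enforces; the group name, subnet, port and URL prefixes are illustrative placeholders, not a product's actual policy syntax.

```python
import ipaddress

# Illustrative only: one readable rule expanded into enforceable selectors.
policy = {
    "rule": "HR clients may reach the HR server",
    "user_groups": {"hr-staff"},                                  # placeholder directory group
    "destination_subnet": ipaddress.ip_network("10.20.30.0/24"),  # placeholder HR subnet
    "destination_ports": {443},                                   # HTTPS only
    "allowed_url_prefixes": ("/payroll/", "/benefits/"),          # placeholder URL prefixes
}

def is_allowed(user_group, dest_ip, port, url):
    """Rough check of one request against the expanded rule above."""
    return (
        user_group in policy["user_groups"]
        and ipaddress.ip_address(dest_ip) in policy["destination_subnet"]
        and port in policy["destination_ports"]
        and url.startswith(policy["allowed_url_prefixes"])
    )

print(is_allowed("hr-staff", "10.20.30.15", 443, "/payroll/run"))   # True
print(is_allowed("sales", "10.20.30.15", 443, "/payroll/run"))      # False
```

Every change to the HR subnet, server list or URL layout means revisiting this mapping, which is exactly the maintenance overhead described above.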
SSL/TLS VPNs don't require client address assignment or changes to routing inside your
network because they work higher in the network stack. Typically, though, SSL/TLS VPN
gateways are deployed behind a perimeter firewall, which must be configured to deliver
SSL/TLS traffic to the gateway. By contrast, a perimeter firewall is often the IPsec VPN
gateway.
SSL/TLS VPN gateways can have a positive impact on the application servers inside your
private network. Should IT staff need to restrict access at a finer-than-firewall granularity -- e.g.,
user-aware access to a directory on a web server -- they may need to apply OS-level access
controls, such as Windows NTFS, and per-user or per-application authentication on the servers
themselves. This would control access for staff coming in from company endpoints or via an
IPsec or SSL/TLS VPN.
By applying the same granular access controls at SSL/TLS VPN gateways, organizations can
offload that security from the application servers. It also enables an organization to enforce
uniform policy at the gateway and across internal systems, even if the gateway is redirecting
traffic to an external target, like a SaaS service. Citrix NetScaler, for example, can provide a
uniform security policy environment for all sanctioned enterprise applications, whether on
premises or cloud-delivered.
This fine-grained access control comes at a price: More planning, configuration and verification
translates into overhead. And it doesn't eliminate the need for controls on the servers unless all
traffic passes through the gateways, so keeping policies in sync is another ongoing task.
The test of time
Will it always be SSL/TLS VPN vs. IPsec VPN? It's quite likely that IPsec will remain attractive
for groups needing the highest degree of security, requiring broader access to IT systems or to
rich sets of legacy applications, and, of course, for site-to-site connectivity -- now often under the
control of a software-defined WAN rather than a VPN. SSL/TLS will continue to be attractive
for lower-security deployments or those requiring a single place to control a lot of fine-grained
differentiation of access rights for users across multiple systems or those unable to enforce or
control use of IPsec.

IT departments should assess the specific needs of different groups of users to decide whether a
VPN is right for them, as opposed to a newer kind of system, such as a software-defined
perimeter tool; which kind of VPN will best serve their needs; and whether to provide it
themselves or contract a VPN service, such as Palo Alto Prisma or Cisco Umbrella.
Lesson 2: Network security best practices and defense strategies.

Understand the OSI Model


The International Organization for Standardization (ISO) developed the Open Systems Interconnection (OSI) model in the early 1980s. It consists of seven functional layers that provide the basis for communication
among computers over networks, as described in the table below. You can easily remember them
using the mnemonic phrase “All people seem to need data processing.” Understanding this
model will help you build a strong network, troubleshoot problems, develop effective
applications and evaluate third-party products.

Layer 7: Application -- Provides services such as e-mail, file transfers and file servers. Protocols/standards: HTTP, FTP, TFTP, DNS, SMTP, SFTP, SNMP, RLogin, BootP, MIME

Layer 6: Presentation -- Provides encryption, code conversion and data formatting. Protocols/standards: MPEG, JPEG, TIFF

Layer 5: Session -- Negotiates and establishes a connection with another computer. Protocols/standards: SQL, X-Window, ASP, DNA, SCP, NFS, RPC

Layer 4: Transport -- Supports end-to-end delivery of data. Protocols/standards: TCP, UDP, SPX

Layer 3: Network -- Performs packet routing. Protocols/standards: IP, OSPF, ICMP, RIP, ARP, RARP

Layer 2: Data Link -- Provides error checking and transfer of message frames. Protocols/standards: Ethernet, Token Ring, 802.11

Layer 1: Physical -- Physically interfaces with the transmission medium and sends data over the network. Protocols/standards: EIA RS-232, EIA RS-449, IEEE 802

Understand Types of Network Devices


To build a strong network and defend it, you need to understand the devices that comprise it.
Here are the main types of network devices:
 Hubs connect multiple local area network (LAN) devices together. A hub also acts as a
repeater in that it amplifies signals that deteriorate after traveling long distances over
connecting cables. Hubs do not perform packet filtering or addressing functions. Hubs
operate at the Physical layer.
 Switches generally have a more intelligent role than hubs. Segments of a LAN are usually interconnected using switches. Working mainly at the Data Link layer, they read frame headers and process the frames appropriately. Generally, switches read the hardware addresses of incoming frames to transmit them to the appropriate destination.
 Routers help transmit packets to their destinations by charting a path through the sea of interconnected network devices. They extract packets from incoming frames, examine each packet's destination IP address individually and forward it toward that destination. Routers normally work at the Network layer of the OSI model.
 Bridges are used to connect two or more hosts or network segments together. The basic
role of bridges in network architecture is storing and forwarding frames between the
different segments that the bridge connects. They use hardware Media Access Control
(MAC) addresses for transferring frames. Bridges work only at the Physical and Data
Link layers of the OSI model.
 Gateways normally work at the Transport and Session layers of the OSI model. At the
Transport layer and above, there are numerous protocols and standards from different
vendors; gateways are used to deal with them.
Know Network Defenses
Using the proper devices and solutions can help you defend your network. Here are the most
common ones you should know about:
 Firewall — One of the first lines of defense in a network, a firewall isolates one network
from another. Firewalls either can be standalone systems or included in other devices,
such as routers or servers. You can find both hardware and software firewall solutions;
some firewalls are available as appliances that serve as the primary device separating two
networks.
 Intrusion detection system (IDS) — An IDS enhances cybersecurity by spotting a
hacker or malicious software on a network so you can remove it promptly to prevent a
breach or other problems, and use the data logged about the event to better defend against
similar intrusion incidents in the future. Investing in an IDS that enables you to respond to
attacks quickly can be far less costly than rectifying the damage from an attack and
dealing with the subsequent legal issues.
 Intrusion prevention system (IPS) — An IPS is a network security solution that can not
only detect intruders, but also prevent them from successfully launching any known
attack. Intrusion prevention systems combine the abilities of firewalls and intrusion
detection systems. However, implementing an IPS on an effective scale can be costly, so
businesses should carefully assess their IT risks before making the investment. Moreover,
some intrusion prevention systems are not as fast and robust as some firewalls and
intrusion detection systems, so it might not be an appropriate solution when speed is an
absolute requirement.
 Network access control (NAC) involves restricting the availability of network resources
to endpoint devices that comply with your security policy. Some NAC solutions can
automatically fix non-compliant nodes to ensure they are secure before access is allowed.
NAC is most useful when the user environment is fairly static and can be rigidly
controlled, such as enterprises and government agencies. It can be less practical in
settings with a diverse set of users and devices that are frequently changing, which are
common in the education and healthcare sectors.
 Web filters are solutions that prevent users’ browsers from loading certain pages
from particular websites. There are different web filters designed for individual, family,
institutional and enterprise use.
 Proxy servers act as negotiators for requests from client software seeking resources from
other servers. A client connects to the proxy server, requesting some service (for example,
a website); the proxy server evaluates the request and then allows or denies it. In
organizations, proxy servers are usually used for traffic filtering and performance
improvement.
 Anti-DDoS devices detect distributed denial of service (DDoS) attacks in their early
stages, absorb the volume of traffic and identify the source of the attack.
 Load balancers are physical units that direct computers to individual servers in a
network based on factors such as server processor utilization, number of connections to a
server or overall server performance. Organizations use load balancers to minimize the
chance that any particular server will be overwhelmed and to optimize the bandwidth
available to each computer in the network.
 Spam filters detect unwanted email and prevent it from getting to a user's mailbox. Spam
filters judge emails based on policies or patterns designed by an organization or vendor.
More sophisticated filters use a heuristic approach that attempts to identify spam through
suspicious word patterns or word frequency.
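As a toy illustration of the heuristic word-frequency idea in the spam filter item above, a filter might score a message roughly as follows; the word list and threshold are invented for the example, and real filters use far richer statistical models.

```python
# Toy heuristic spam score: count occurrences of suspicious words and
# compare against a threshold. Real filters use far richer statistics.
SUSPICIOUS_WORDS = {"winner", "free", "urgent", "prize", "wire transfer"}
THRESHOLD = 2  # made-up cut-off for this illustration

def spam_score(message: str) -> int:
    text = message.lower()
    return sum(text.count(word) for word in SUSPICIOUS_WORDS)

def is_probably_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD

print(is_probably_spam("URGENT: you are a winner, claim your free prize"))  # True
print(is_probably_spam("Meeting moved to 3 pm, see agenda attached"))       # False
```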
Segregate Your Network
Network segmentation involves segregating the network into logical or functional units called
zones. For example, you might have a zone for sales, a zone for technical support and another
zone for research, each of which has different technical needs. You can separate them using
routers or switches or using virtual local area networks (VLANs), which you create by
configuring a set of ports on a switch to behave like a separate network.
Segmentation limits the potential damage of a compromise to whatever is in that one zone.
Essentially, it divides one target into many, leaving attackers with two choices: Treat each
segment as a separate network, or compromise one and attempt to jump the divide. Neither
choice is appealing. Treating each segment as a separate network creates a great deal of
additional work, since the attacker must compromise each segment individually; this approach
also dramatically increases the attacker’s exposure to being discovered. Attempting to jump from
a compromised zone to other zones is difficult. If the segments are designed well, then the
network traffic between them can be restricted. There are always exceptions that must be allowed
through, such as communication with domain servers for centralized account management, but
this limited traffic is easier to characterize.
Segmentation is also useful in data classification and data protection. Each segment can be
assigned different data classification rules and then set to an appropriate level of security and
monitored accordingly.
An extreme example of segmentation is the air gap — one or more systems are literally not
connected to a network. Obviously, this can reduce the usefulness of many systems, so it is not
the right solution for every situation. In some cases, however, a system can be sensitive enough
that it needs to not be connected to a network; for example, having an air-gapped backup server
is often a good idea. This approach is one certain way of preventing malware infections on a
system.
Virtualization is another way to segment a network. Keep in mind that it is much easier to
segment virtual systems than it is to segment physical systems. As one simple example, consider
a virtual machine on your workstation. You can easily configure it so that the virtual machine is
completely isolated from the workstation — it does not share a clipboard, common folders or
drives, and literally operates as an isolated system.
Types of Network Segments
Network segments can be classified into the following categories:
 Public networks allow accessibility to everyone. The internet is a perfect example of a
public network. There is a huge amount of trivial and unsecured data on public networks.
Security controls on these networks are weak.
 Semi-private networks sit between public networks and private networks. From a
security standpoint, a semi-private network may carry confidential information, but only subject to specific regulations and controls.
 Private networks are organizational networks that handle confidential and proprietary
data. Each organization can own one or more private networks. If the organization is
spread over vast geographical distances, the private networks at each location may be
interconnected through the internet or other public networks.
 Demilitarized zone (DMZ) is a noncritical yet secure region at the periphery of a private
network, separated from the public network by a firewall; it might also be separated from
the private network by a second firewall. Organizations often use a DMZ as an area
where they can place a public server for access by people they might not trust. By
isolating a server in a DMZ, you can hide or remove access to other areas of your
network. You can still access the server using your network, but others aren’t able to
access further network resources.
 Software-defined networking (SDN) is a relatively recent trend that can be useful both
in placing security devices and in segmenting the network. Essentially, in an SDN, the
entire network is virtualized, which enables relatively easy segmentation of the network.
It also allows administrators to place virtualized security devices wherever they want.
Place Your Security Devices Correctly
As you design your network segregation strategy, you need to determine where to place all your
devices. The easiest device to place is the firewall: You should place a firewall at every junction
of a network zone. Each segment of your network should be protected by a firewall. This is
actually easier to do than you might think. All modern switches and routers have firewall
capabilities. These capabilities just need to be turned on and properly configured. Another device
that obviously belongs on the perimeter is an anti-DDoS device so you can stop DDoS attacks
before they affect the entire network. Behind the main firewall that faces the public network, you should have a web filtering proxy.
To determine where to place other devices, you need to consider the rest of your network
configuration. For example, consider load balancers. If we have a cluster of web servers in a
DMZ, then the load balancer needs to be in the DMZ as well. However, if we have a cluster of
database servers in a private network segment, then the load balancer must be placed with that
cluster. Port mirroring will also be placed wherever your network demands it. This is often done on network switches so that traffic from a given network segment is also copied to
another segment. This can be done to ensure that all network traffic is copied to an IDS or IPS; in
that case, there must be collectors or sensors in every network segment, or else the IDS or IPS
will be blind to activity in that segment.
Network aggregation switches are another device for which there is no definitive placement
advice. These switches aggregate multiple streams of bandwidth into one. One example would be
to use an aggregation switch to maximize bandwidth to and from a network cluster.
Use Network Address Translation
Network address translation (NAT) enables organizations to compensate for the address
deficiency of IPv4 networking. NAT translates private addresses (internal to a particular
organization) into routable addresses on public networks such as the internet. In particular, NAT
is a method of connecting multiple computers to the internet (or any other IP network) using one
IP address.
NAT complements firewalls to provide an extra measure of security for an organization’s internal
network. Usually, hosts from inside the protected networks, which have private addresses, are
able to communicate with the outside world, but systems that are located outside the protected
network have to go through the NAT boxes to reach internal networks. Moreover, NAT enables
an organization to use fewer public IP addresses, which also helps confuse attackers about which particular host they are targeting.
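To make the translation idea concrete, here is a minimal sketch, far simpler than a real NAT implementation, of how a NAT device might map internal (private address, port) pairs onto ports of a single public address; the addresses and starting port are placeholders.

```python
# Simplified port-address translation (PAT) table: many private hosts share
# one public IP, distinguished by the public source port the NAT assigns.
PUBLIC_IP = "203.0.113.10"      # documentation/example address

class SimpleNat:
    def __init__(self):
        self.next_port = 40000
        self.outbound = {}      # (private_ip, private_port) -> public_port
        self.inbound = {}       # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.outbound[key]

    def translate_in(self, public_port):
        return self.inbound.get(public_port)    # None if no mapping exists

nat = SimpleNat()
print(nat.translate_out("192.168.1.23", 51000))   # ('203.0.113.10', 40000)
print(nat.translate_in(40000))                    # ('192.168.1.23', 51000)
```

Unsolicited inbound traffic to a public port with no mapping simply has nowhere to go, which is the incidental screening effect described above.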
Don’t Disable Personal Firewalls
Personal firewalls are software-based firewalls installed on each computer in the network. They
work in much the same way as larger border firewalls — they filter out certain packets to prevent
them from leaving or reaching your system. The need for personal firewalls is often questioned,
especially in corporate networks, which have large dedicated firewalls that keep potentially
harmful traffic from reaching internal computers. However, that firewall can’t do anything to
prevent internal attacks, which are quite common and often very different from the ones from the
internet; attacks that originate within a private network are usually carried out by viruses. So,
instead of disabling personal firewalls, simply configure a standard personal firewall according
to your organization’s needs and export those settings to the other personal firewalls.
Use Centralized Logging and Immediate Log Analysis
Record suspicious logins and other computer events and look for anomalies. This best practice
will help you reconstruct what happened during an attack so you can take steps to improve your
threat detection process and quickly block attacks in the future. However, remember that
attackers are clever and will try to avoid detection and logging. They will attack a sacrificial
computer, perform different actions and monitor what happens in order to learn how your
systems work and what thresholds they need to stay below to avoid triggering alerts.
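As one hedged example of shipping events to a central point, the Python standard library's logging module can forward records to a syslog collector; the collector host name and port below are placeholders, and production deployments typically use a SIEM or log-management platform on top of this plumbing.

```python
import logging
import logging.handlers

# Forward security-relevant events to a central syslog collector so they can
# be correlated and analyzed in one place. Address and port are placeholders.
handler = logging.handlers.SysLogHandler(address=("loghost.example.internal", 514))
logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.warning("Suspicious login: user=%s src=%s failures=%d", "jdoe", "198.51.100.7", 5)
```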
Use Web Domain Whitelisting for All Domains
Limiting users to browsing only the websites you’ve explicitly approved helps in two ways.
First, it limits your attack surface. If users cannot go to untrusted websites, they are less
vulnerable. It’s a solid solution for stopping initial access via the web. Second, whitelisting limits
hackers’ options for communication after they compromise a system. The hacker must use a
different protocol, compromise an upstream router, or directly attack the whitelisting mechanism
to communicate. Web domain whitelisting can be implemented using a web filter that can enforce web access policies and perform website monitoring.
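As a minimal sketch of the allowlisting decision itself, a web filter or proxy conceptually performs a check like the one below for every requested host; the domain list is purely illustrative.

```python
from urllib.parse import urlparse

# Illustrative domain allowlist; anything not matching is blocked.
ALLOWED_DOMAINS = {"intranet.example.com", "docs.example.com", "vendor-portal.example.net"}

def is_request_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Allow exact matches and subdomains of approved domains.
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

print(is_request_allowed("https://docs.example.com/handbook"))   # True
print(is_request_allowed("https://unknown-site.example.org/"))   # False
```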
Route Direct Internet Access from Workstations through a Proxy Server
All outbound web access should be routed through an authenticating server where access can be
controlled and monitored. Using a web proxy helps ensure that an actual person, not an unknown
program, is driving the outbound connection. There can be up-front work required to reconfigure
the network into this architecture, but once done, it requires few resources to maintain. It has
practically no impact on the user base and therefore is unlikely to generate any pushback. It
raises the level of operational security since all outbound web traffic passes through a single, easily monitored control point.
Use Honeypots and Honeynets
A honeypot is a separate system that appears to be an attractive target but is in reality a trap for
attackers (internal or external). For example, you might set up a server that appears to be a
financial database but actually has only fake records. Using a honeypot accomplishes two
important goals. First, attackers who believe they have found what they are looking for will leave
your other systems alone, at least for a while. Second, since honeypots are not real production systems, no legitimate users ever access them, so you can turn on extremely detailed monitoring and logging there. When an attacker does access one, you’ll be gathering an impressive amount of
evidence to aid in your investigation.
A honeynet is the next logical extension of a honeypot — it is a fake network segment that
appears to be a very enticing target. Some organizations set up fake wireless access points for
just this purpose.
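As a very small illustration of the detailed logging a honeypot allows, the sketch below listens on an otherwise unused port and records every connection attempt; it is illustrative only, the port number is a placeholder, and a production honeypot is considerably more involved.

```python
import datetime
import socket

# Minimal honeypot listener: accept connections on an otherwise unused port
# and log who connected and when. Illustrative only.
LISTEN_PORT = 2222   # placeholder port chosen to look interesting to scanners

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        print(f"{datetime.datetime.now().isoformat()} connection from {addr[0]}:{addr[1]}")
        conn.close()
```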
Protect Your Network from Insider Threats
To deal with insider threats, you need both prevention and detection strategies. The most
important preventive measure is to establish and enforce the least-privilege principle for access
management and access control. Giving users the least amount of access they need to do their jobs enhances data security, because it limits what they can accidentally or deliberately access and ensures that if their password is compromised, the hacker doesn’t have all the keys to the
kingdom. Other preventative measures include system hardening, anti-sniffing networks and
strong authentication. Detection strategies include monitoring users and networks and using both
network- and host-based intrusion detection systems, which are typically based on signatures,
anomalies, behavior or heuristics.
End users also need to be trained in how to deal with the security threats they face, such as
phishing emails and attachments. The best security in the world can be undermined by end users
who fail to follow security policies. However, they cannot really be expected to follow those
policies without adequate training.
Monitor and Baseline Network Protocols
You should monitor the use of different protocol types on your network to establish baselines at both the organization level and the user level. Protocol baselining includes both wired and wireless
networks. Data for the baseline should be obtained from routers, switches, firewalls, wireless
APs, sniffers and dedicated collectors. Protocol deviations could indicate tunneling information
or the use of unauthorized software to transmit data to unknown destinations.
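As a minimal sketch of the deviation check described above, comparing current per-protocol traffic shares against a stored baseline might look like the following; the counts and tolerance are invented for illustration.

```python
# Compare observed per-protocol traffic shares against a stored baseline and
# flag protocols whose share deviates by more than a tolerance. Numbers are
# invented for illustration.
baseline = {"HTTPS": 0.62, "DNS": 0.10, "SMTP": 0.05, "SSH": 0.03}
observed = {"HTTPS": 0.48, "DNS": 0.11, "SMTP": 0.05, "SSH": 0.03, "UNKNOWN-UDP": 0.15}
TOLERANCE = 0.05

def deviations(baseline, observed, tolerance):
    protocols = set(baseline) | set(observed)
    return {
        p: (baseline.get(p, 0.0), observed.get(p, 0.0))
        for p in protocols
        if abs(observed.get(p, 0.0) - baseline.get(p, 0.0)) > tolerance
    }

print(deviations(baseline, observed, TOLERANCE))
# e.g. {'HTTPS': (0.62, 0.48), 'UNKNOWN-UDP': (0.0, 0.15)} -> worth investigating
```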
Use VPNs
A virtual private network (VPN) is a secure private network connection across a public network.
For example, VPNs can be used to connect LANs together across the internet. With a VPN, the
remote end appears to be connected to the network as if it were connected locally. A VPN
requires either special hardware or VPN software to be installed on servers and workstations.
VPNs typically use a tunneling protocol, such as Layer 2 Tunneling Protocol, IPsec or Point-to-
Point Tunneling Protocol (PPTP). To improve security, VPNs usually encrypt data, which can
make them slower than normal network environments.
Use Multiple Vendors
In addition to diversity of controls, you should strive for diversity of vendors. For example, to
defend against malware, you should have antimalware software on each of your computers, as
well as on the network and at the firewall — and use software from different vendors for each of
these places. Because a vendor typically uses the same malware detection algorithms across its products, if your workstation, network and firewall antimalware solutions all come from vendor
A, then anything missed by one product will be missed by all three. The best approach is to use
vendor A for the firewall antimalware, vendor B for the network solution, and vendor C to
protect individual computers. The probability of all three products, created by different vendors
and using different detection algorithms, missing a specific piece of malware is far lower than
any one of them alone missing it.
Use Your Intrusion Detection System Properly
An IDS can be an important and valuable part of your network security strategy. To get the most
value from your IDS, take advantage of both ways it can detect potentially malicious activities:
 Anomaly detection — Most systems maintain a certain baseline of activity on their networks and sensitive hosts. An IDS can record that baseline and scan for abnormal activity. If something unusual happens, such as a spike in activity that could indicate a ransomware or SQL injection attack, it sends an alert so the administrator can analyze the event and take action as soon as possible (a minimal sketch of this baseline approach follows this list).
 Misuse detection — The IDS will also compare activities with attack signatures, which
are sets of characteristic features common to a specific attack or pattern of attacks. This
helps them spot attacks even if they don’t generate activity that violates your
organization’s baseline.
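As a minimal sketch of the anomaly detection idea in the first bullet, flagging activity that deviates sharply from a recorded baseline might look like the following; the traffic figures and threshold are invented for illustration.

```python
import statistics

# Hourly request counts recorded during normal operation (invented figures).
baseline_counts = [120, 130, 125, 118, 122, 127, 131, 119]

mean = statistics.mean(baseline_counts)
stdev = statistics.stdev(baseline_counts)

def is_anomalous(current_count, threshold_sigmas=3.0):
    """Flag activity more than N standard deviations above the baseline mean."""
    return current_count > mean + threshold_sigmas * stdev

print(is_anomalous(128))   # False: within normal range
print(is_anomalous(450))   # True: spike worth alerting on
```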
Automate Response to Attacks when Appropriate
Many network devices and software solutions can be configured to automatically take action
when an alarm is triggered, which dramatically reduces response time. Here are the actions you
can often configure:
 Block IP address — The IDS or firewall can block the IP address from which the attack
originated. This option is very effective against spam and denial-of-service attacks.
However, some attackers spoof the source IP address during attacks, so the wrong address may end up being blocked.
 Terminate connections — Routers and firewalls can be configured to disrupt the
connections that an intruder maintains with the compromised system by sending TCP RESET packets to the attacker.
 Acquire additional information — Another option is to collect information on intruders
by observing them over a period of time. By analyzing the information you gather, you
can find patterns and make your defense against the attack more robust. In particular, you
can:

o Look for the point of initial access, how the intruders spread and what data was
compromised. Reverse-engineer every piece of malicious software you find and
learn how it works. Then clean up the affected systems and close the vulnerability
that allowed initial access.
o Determine how malicious software was deployed. Were administrative accounts
used? Were they used after hours or in another anomalous manner? Then
determine what awareness systems you could put in place to detect similar incidents in the future.
Physically Secure Your Network Equipment
Physical controls should be established and security personnel should ensure that equipment and
data do not leave the building. Moreover, direct access to network equipment should be
prohibited for unauthorized personnel.

Chapter 7: Cloud Computing and Virtualization


Lesson 1: Introduction to cloud computing and its benefits.

Cloud computing is one of the hottest catchphrases in business today. It has transformed the way
organizations store, access and share information, collaborate and manage computing resources.
With the advent of the internet, cloud computing has provided new ways of conducting business
by allowing companies to rise above the conventional on-premises IT infrastructure.
Cloud computing offers modern businesses flexibility, efficiency, scalability, security, increased
collaboration and reduced costs. While the COVID-19 pandemic has accelerated cloud adoption,
the reliance on cloud technologies is set to continue in 2022, especially with hybrid work taking
center stage. So, whether an organization already uses cloud services or is planning to in the
coming year, it is imperative to understand the basics of cloud computing in order to take full
advantage of cloud-powered solutions.
In this lesson, we will explore what exactly cloud computing is, how it works, its benefits and
disadvantages, and how companies can protect their SaaS data better.

What is cloud computing?


According to ZDNet, “cloud computing is the delivery of on-demand computing services —
from applications to storage and processing power — typically over the internet and on a pay-as-
you-go basis.”
In simplest terms, the cloud refers to the internet. When organizations store data in virtual data
centers or access programs using an internet connection instead of relying on their device’s hard
drive or on-premises IT infrastructure, it means they are operating in the cloud.
Cloud computing can be as simple as “servers in a third-party data center” or entire serverless
workloads that are infinitely scalable and geo-redundant. Cloud servers and services are scalable
and elastic.
How does cloud computing work?
Cloud computing is the delivery of computing resources, such as IT infrastructure or data center capacity, over the internet. This model allows businesses to rent storage space or access software programs
from a cloud service provider, instead of building and maintaining their own IT infrastructure or
data center. One major benefit of using cloud computing services is that companies pay only for
the resources they use.
To better understand its technical aspects, cloud computing processes can be divided into
frontend and backend. The frontend component allows users to access data and programs stored
in the cloud through an internet browser or by using a cloud computing application. The backend
consists of servers, computers and databases that store the data.
History of cloud computing
According to Technology Review, the phrase “cloud computing” was first mentioned in 1996 in
a Compaq internal document.
The year 1999 was a milestone for cloud computing when Salesforce became the first company
to deliver enterprise applications over the internet. This was also the beginning of Software-as-a-
Service (SaaS).
In 2002, Amazon launched Amazon Web Services (AWS), which was another significant
development in cloud computing. Its suite of cloud-based services included storage, computation
and even human intelligence. In 2006, Amazon launched Elastic Compute Cloud (EC2),
allowing businesses as well as individuals to rent virtual computers and run their own computer
applications.
The year 2009 saw yet another giant milestone in cloud computing as Google Apps (now Google Workspace) started to provide browser-based enterprise applications. In the same year,
Microsoft entered the cloud computing arena with Microsoft Azure, and soon companies
like Oracle and HP followed suit.
What are examples of cloud computing?
Cloud computing includes everything from virtual machines to databases to entire serverless
applications. Some examples of cloud computing include:
Salesforce: Salesforce.com is a SaaS provider that specializes in customer relationship
management (CRM). The company provides enterprise applications to help align marketing,
sales, customer services, etc., and allows users to work from anywhere.
DigitalOcean: This company is a New York-based Infrastructure-as-a-Service (IaaS) provider
for software developers. Businesses use DigitalOcean to deploy and scale applications that run
simultaneously across multiple cloud servers.
Microsoft Azure: Microsoft Azure is a fine example of a Platform-as-a-Service (PaaS) that
supports the entire application development lifecycle, right from development to deployment and
beyond. Azure provides a plethora of tools, languages and frameworks to developers.
Dropbox: Dropbox is a cloud-based file hosting service that allows users to store and sync files
to their devices so they can access them from anywhere. It also allows users to share large files,
including images and videos via the internet, facilitating effective collaboration.
What is the importance of cloud computing?
Before cloud computing came into existence, companies were required to download applications
or programs on their physical PCs or on-premises servers to be able to use them. For any
organization, building and managing its own IT infrastructure or data centers is a huge challenge.
Even for those who own their own data centers, allocating a large number of IT administrators
and resources is a struggle.
The introduction of cloud computing and virtualization was a paradigm shift in the history of the
technology industry. Rather than creating and managing their own IT infrastructure and paying
for servers, power and real estate, etc., cloud computing allows businesses to rent computing
resources from cloud service providers. This helps businesses avoid paying heavy upfront costs
and the complexity of managing their own data centers. By renting cloud services, companies
pay only for what they use such as computing resources and disk space. This allows companies
to anticipate costs with greater accuracy.
Since cloud service providers do the heavy lifting of managing and maintaining the IT
infrastructure, it saves a lot of time, effort and money for businesses. The cloud also gives
organizations the ability to seamlessly upscale or downscale their computing infrastructure as
and when needed. Compared to the traditional on-premises data center model, the cloud offers
easy access to data from anywhere and on any device with internet connectivity, thereby
enabling effective collaboration and enhanced productivity.
What are the most common uses of cloud computing?
From startups to large corporations and government agencies, every organization uses the cloud
to access technology services to streamline workflows, improve communication, productivity,
service delivery and more. Listed below are some of the most common uses of cloud computing.
 Storage: One of the most common uses of cloud computing is file storage. While there
are several options to store and access data, such as hard drives on PCs, external hard
drives, USB drives, etc., cloud storage enables businesses to seamlessly access data from
anywhere and on any device with an internet connection. Cloud storage services
like Amazon S3, Dropbox or OneDrive provide secure access to data and also allow businesses to scale storage space up and down based on their requirements.
 Database: A cloud database is another popular business use case. IBM defines a cloud
database as “a database service built and accessed through a cloud platform.” A cloud
database delivers most of the same functionalities as a traditional database, but with
additional benefits such as flexibility, cost savings, failover support, specialized expertise
and more.
 Web applications: Web applications are a must-have tool for businesses today. Powered
by cloud technology, anyone can access web-based apps using a web browser, providing
instant remote access to information. This allows business professionals to communicate
with customers and provide them with required information while they’re on the go, and
helps them collaborate with colleagues from anywhere.
 Collaboration: Due to its easy accessibility, integration, flexibility, security and ease of
use, cloud-based tools, such as Microsoft 365 and Google Workspace, have become the
obvious choice for businesses looking to collaborate both internally across departments
and externally with clients. Gmail, Google Docs, Microsoft Outlook, Microsoft Word,
Teams, etc., are powerful business tools designed to enhance collaboration and
productivity.
 SaaS applications: Software-as-a-Service (SaaS) applications, such as Salesforce, allow
businesses to store, organize and maintain data, as well as automate marketing and
manage clients efficiently. SaaS solutions are highly functional and do not require
software and/or hardware management.
What are the different types of cloud computing?
There are four main types of cloud computing: public, private, hybrid and multicloud.
Public cloud
VMware defines public cloud as “an IT model where on-demand computing services and
infrastructure are managed by a third-party provider and shared with multiple organizations
using the public internet.” Cloud service providers offer various services like Infrastructure-as-a-
Service (IaaS), Platform-as-a-Service (PaaS) and SaaS to individuals and businesses who rent
these services on a monthly or pay-per-use basis. Amazon Web Services (AWS), Microsoft
Azure, Google Cloud, Alibaba Cloud and IBM Cloud are the top five cloud providers.
Private cloud
A private cloud or an internal cloud is where the IT infrastructure (hardware and software
resources) is solely dedicated to a single organization, unlike a public cloud where the computing
resources are shared among multiple tenants. A private cloud environment is ideal for businesses
for whom meeting regulatory requirements, security and control are a priority. Traditionally, a
private cloud is hosted at a company’s data center and uses its own hardware. However, an
organization may outsource hosting to a third-party provider who remotely manages the
computing resources.
Hybrid cloud
A hybrid cloud is a combination of both public cloud and private cloud environments. Businesses
use this model to supplement their compute capacity. When the capacity of a private cloud
reaches its peak, businesses can leverage public cloud to enhance the capabilities of the private
cloud. Hybrid cloud enables businesses to scale compute capacity up or down depending on the
traffic or service demands. This eliminates the need to purchase and maintain new servers,
allowing businesses to save cost, time and effort.
Multicloud
Multicloud is the practice of using a combination of clouds — two or more public or private
clouds, or a combination of both, from several cloud providers. A multicloud approach allows
businesses to select the best services from different cloud vendors based on their budgets,
technical requirements, geographic locations and so on. This model enables businesses to use
different clouds for different purposes. For instance, an organization can use one cloud for
software development and testing, another cloud for data backup and disaster recovery, and yet another for data analytics.
What are the three different types of cloud computing services?
The three types of cloud computing services are Infrastructure-as-a-Service (IaaS), Platform-as-
a-Service (PaaS) and Software-as-a-Service (SaaS).
Infrastructure-as-a-Service (IaaS)
IaaS is a cloud computing service where cloud providers deliver and manage virtualized
computing infrastructure over the internet. Instead of creating an in-house IT infrastructure,
businesses can access essential resources, such as operating systems, networking, storage space,
development tools, etc., on demand. This saves hardware and software costs as well as
minimizes the burden of IT staff.
Platform-as-a-Service (PaaS)
PaaS allows businesses to concentrate on the development, deployment and management of
software applications and services without having to worry about the underlying infrastructure
since cloud providers do the heavy lifting. With PaaS, developers and programmers gain access
to not only IT infrastructure but also application/software platform and solution stack. Some of
the examples of PaaS include AWS Elastic Beanstalk, Google App Engine and Microsoft Azure.
Software-as-a-Service (SaaS)
SaaS provides businesses with ready-to-use software that is delivered to users over the internet.
All of the underlying infrastructure, including hardware, software, data storage, patch
management and hardware/software updates, are managed by SaaS providers. SaaS is a
subscription-based model, which requires businesses to subscribe to the services they want to
use. Users can access SaaS applications directly through web browsers, which eliminates the
need to download or install them. SaaS allows users to access web-based solutions from
anywhere and at any time with an active internet connection. Some popular SaaS solutions
include Microsoft 365, Google Workspace and Salesforce.
What are the benefits of cloud computing?
Cloud computing enables businesses to operate from virtually anywhere and with more
efficiency. Some benefits of cloud computing include:
 Cost savings: One of the greatest benefits of cloud computing is reduced costs. Since
businesses do not need to build their own IT infrastructure or purchase hardware or
equipment, it helps companies reduce capital expenses significantly.
 Flexibility/scalability: Cloud computing offers greater flexibility to businesses of all
sizes. Whether they require extra bandwidth, computing power or storage space, they can
seamlessly scale up or down computing resources depending on their needs and budget.
 Security: Data security is a major concern for businesses today. Cloud vendors provide
advanced security features like authentication, access management, data encryption, etc.,
to ensure sensitive data in the cloud is securely handled and stored.
 Mobility: Cloud computing allows users to access corporate data from any device,
anywhere and at any time, using the internet. With information conveniently available,
employees can remain productive even on the go.
 Increased collaboration: Cloud applications allow businesses to seamlessly
communicate and securely access and share information, making collaboration simple
and hassle-free. Cloud computing empowers multiple users to edit documents or work on
files simultaneously and in a transparent manner.
 Disaster recovery: Data loss and downtime can cause irreparable damage to businesses
of any size. Major cloud vendors are well-equipped to withstand unforeseen disruptive
events, such as hardware/software failure, natural disasters and power outages, to ensure
high application availability and business continuity.
 Automatic updates: Performing manual organization-wide software updates can take up
a lot of valuable IT staff time. However, with cloud computing, service providers
regularly refresh and update systems with the latest technology to provide businesses
with up-to-date software versions, latest servers and upgraded processing power.
What are the disadvantages of cloud computing?
The advantages of operating in the cloud are immense. However, there are certain disadvantages
that companies should be aware of before deciding to transition to the cloud. Listed below are
the top five disadvantages of cloud computing.
1. Downtime: Since cloud computing systems are completely reliant on the internet,
without an active internet connection, businesses cannot access the data or applications
hosted in the cloud. Google suffered three severe outages in 2020 that affected the
majority of its services and users across the globe.
2. Vendor lock-in: Migrating a company’s workloads and services from one cloud provider
to another is a major challenge in cloud computing. Differences between cloud
environments may cause compatibility or integration issues. If the transition isn’t handled
properly, it could expose an organization’s data to unnecessary security vulnerabilities.
3. Limited control: Since the cloud infrastructure is wholly owned and managed by the
cloud vendor, businesses using cloud computing services have limited control over their
data, applications and services. Therefore, it’s important to have a proper end-user license
agreement (EULA) in place to understand what a business can do and can’t do within a
cloud infrastructure.
4. Security: One of the major concerns of storing a company’s sensitive data in the cloud is
security. Although cloud service providers implement advanced security measures,
storing confidential files on remote servers that are entirely owned and operated by a
third party always opens up security risks. When an organization adopts a cloud
computing model, the IT security responsibility is shared between the cloud vendor and
the user. As such, each party is responsible for the assets, processes and functions they
control.
5. Data loss or theft: Storing crucial data in virtual data centers can open the doors to a
variety of risks that could lead to data loss, such as cloud misconfiguration, information
theft, security breach, stolen credentials, etc. Moreover, cloud service providers, such as
Microsoft and Google, follow a shared responsibility model, where the vendor assumes
responsibility for application availability and everything that entails, while the customer
retains responsibility for application data, administration and user management.
Improve SaaS data protection with Spanning Backup
According to Statista, as of 2021, around 50% of all corporate data is stored in the cloud. The
data suggests that businesses globally trust their cloud service providers with their sensitive data.
Regardless of which cloud vendor a company chooses, it must take care that the underlying security
risks do not outweigh the benefits of the cloud.
Businesses using SaaS solutions such as Microsoft 365, Google Workspace and Salesforce lose
data every day. Many companies tend to believe that SaaS vendors are responsible for protecting
their data. However, that’s not the case. While SaaS providers ensure application uptime and
availability, data protection is the customers’ responsibility.
As such, businesses need a reliable SaaS backup solution that can protect their valuable data
against the most common causes of data loss like phishing, ransomware and malware attacks,
human error, malicious behavior, and configuration and sync errors.
Spanning backup and end-to-end protection solutions for Microsoft 365, Google
Workspace and Salesforce fill the gaps in native functionality to protect critical data from loss
caused by these threats, reducing the risk of compromise and enabling end users and
administrators to quickly find and restore data to its original state in just a few clicks.

Lesson 2: Virtualization technologies and virtual machine management.

Virtualization is the process of creating a virtual version of a physical resource, such as a server,
operating system, storage device, or network. This virtual version is called a virtual machine
(VM) and operates independently of the physical resource.
Virtualization enables multiple operating systems to run on a single physical machine, which can
increase efficiency and reduce costs. It also allows for greater flexibility and scalability in
managing IT resources.
There are several types of virtualization, including server virtualization, desktop virtualization,
network virtualization, and storage virtualization. Each type has its own unique benefits and use
cases.
What is Virtualization Software?
Virtualization software is a type of software that enables the creation and management of virtual
machines (VMs) on a host machine. Virtualization software allows multiple operating systems to
run on a single physical machine, each in its own isolated environment, known as a virtual
machine. This allows for more efficient use of hardware resources and can simplify management
and maintenance of IT systems.
The virtualization software typically consists of a hypervisor or virtual machine monitor (VMM),
which is responsible for managing the virtual machines and providing them with access to the
underlying physical resources, such as CPU, memory, storage, and network devices. The
hypervisor creates a layer of abstraction between the physical resources and the virtual machines,
allowing them to operate independently of each other and with their own unique configurations.
There are different types of virtualization software, including desktop virtualization software,
server virtualization software, and cloud virtualization software. Each type of virtualization
software has its own set of features and capabilities, and is designed to meet specific needs and
requirements. Examples of popular virtualization software include VMware, Hyper-V, and our
very own Scale Computing HyperCore.
Pros and Cons of Virtualization
Are there advantages to virtualization? A main benefit of virtualization technology is reduced
upfront hardware costs and lower ongoing operating costs. Let's dive a little into both the
advantages and disadvantages of virtualization to see whether it makes sense for your
organization.
Pros
 Cost Savings: Virtualization can save money on hardware, energy, and maintenance.
With virtualization, companies can consolidate multiple servers into one physical
machine, which reduces the need for hardware, power, and cooling.
 Improved Resource Utilization: Virtualization can maximize resource utilization by
allowing multiple virtual machines (VMs) to share resources such as CPU, memory, and
storage.
 Increased Flexibility: Virtualization allows VMs to be created, cloned, and deleted
quickly and easily, which provides businesses with greater flexibility and agility.
 Easier Management: Virtualization simplifies management by allowing administrators
to manage multiple VMs on a single physical machine.
 Disaster Recovery: Virtualization provides easy disaster recovery options by allowing
VMs to be quickly backed up and restored.
Cons
 Performance Overhead: Virtualization can introduce performance overhead due to the
need to emulate hardware for each VM. We took this into account when we built
SC//HyperCore so that it saves you time and valuable resources because your software,
servers, and storage are in a fully integrated platform. The same innovative software and
simple user interface power your infrastructure regardless of your hardware
configuration.
 Complexity: Virtualization can add complexity to the IT infrastructure, which can make
it more difficult to manage. Again, using patented HyperCore™ technology, the award-
winning self-healing platform identifies, reduces, and corrects problems in real-time.
Achieve results easier and faster, even when local IT resources are scarce.
SC//HyperCore makes ensuring application uptime easier for IT to manage and for
customers to afford.
 Licensing Costs: Virtualization can result in additional licensing costs for operating
systems and applications that are installed on the virtual machines. SC//HyperCore
eliminates the need to combine traditional virtualization software, disaster recovery
software, servers, and shared storage from separate vendors to create a virtualized
environment. SC//HyperCore’s lightweight, all-in-one architecture makes it easy to
deploy fully integrated, highly available virtualization right out of the box and at no
additional costs.
Next you are probably wondering about some examples of virtualization. Let's dive in.
Virtualization Technology Examples
Virtualization technology provides a flexible and efficient approach to optimizing hardware and
software resources, allowing for cost savings, improved scalability, and enhanced manageability.
Three key types of virtualization technology are application virtualization, network
virtualization, and server virtualization, each with its own advantages and use cases.
Application Virtualization
Application virtualization is a technique that isolates software applications from the underlying
operating system and hardware. This isolation allows applications to run in a controlled
environment, reducing compatibility issues and enhancing security. Examples of application
virtualization include Docker and Microsoft's App-V.
Docker, for instance, enables developers to package applications and their dependencies into
containers, which can run consistently across different environments. This flexibility streamlines
development and deployment processes, making it a popular choice for DevOps and
microservices architectures.
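For readers who want to see what this looks like in practice, here is a minimal sketch using the Docker SDK for Python (the third-party docker package). It assumes Docker is installed and running locally; the alpine image and the echo command are arbitrary examples, not anything specific to a particular workflow.

```python
import docker  # third-party Docker SDK for Python: pip install docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Run a throwaway container from a public image; image and command are examples only.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode().strip())
```

The same one-line run() call works for any image, which is what makes containers convenient for packaging an application together with its dependencies.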
Microsoft's App-V, on the other hand, is geared towards simplifying software deployment and
maintenance. It virtualizes Windows applications, making it easier to manage and update them
centrally. This technology is especially beneficial for large organizations with diverse software
requirements.
Network Virtualization
Network virtualization abstracts the network's physical infrastructure, which allows multiple
virtual networks on a shared network infrastructure. A well-known example of network
virtualization is using Virtual LANs (VLANs) and Software-Defined Networking (SDN).
VLANs partition a physical network into multiple logical networks, each with its own configuration
and security policies. This technology aids in traffic segmentation and enhances network
efficiency and security. It is widely used in data centers to isolate different departments or
clients.
SDN, on the other hand, takes network virtualization to the next level by separating the control
plane from the data plane. This decoupling enables centralized control and dynamic network
provisioning, making networks more flexible and responsive to changing demands. SDN is
commonly used in cloud environments to optimize network resources and automate network
management.
Server Virtualization
Server virtualization is the most well-known form of virtualization. It involves partitioning a
physical server into multiple virtual machines (VMs), each running its own operating system and
applications. Leading examples of server virtualization technology include Scale Computing
HyperCore, VMware, and Hyper-V.
SC//HyperCore is based on a 64-bit, hardened, and proven OS kernel and leverages a mixture of
patented proprietary and adapted open-source components for a truly hyperconverged product.
All components—storage, virtualization, software, and hardware—interface directly through the
HyperCore hypervisor and Scale Computing Reliable Independent Block Engine (SCRIBE)
storage layers to create an ideal computing platform that can be deployed anywhere from the
data center to the edge of the network.
The SC//HyperCore software layer is a lightweight, type 1 (bare metal) hypervisor that directly
integrates into the OS kernel and leverages the virtualization offload capabilities provided by
modern CPU architectures.
Specifically, SC//HyperCore is based on components of the Kernel-based Virtual Machine
(KVM) hypervisor, which has been part of the Linux mainline kernel for many years and has
been extensively field-proven in large-scale environments.
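As an illustration of how a KVM-based hypervisor can be managed programmatically, the short sketch below uses the generic libvirt Python bindings to list virtual machines on a local KVM host. This is not the SC//HyperCore management interface; the qemu:///system URI simply targets a standard local libvirt daemon.

```python
import libvirt  # Python bindings for libvirt: pip install libvirt-python

# Connect to the local system-level KVM hypervisor (requires a running libvirt daemon).
conn = libvirt.open("qemu:///system")
try:
    # List every defined VM (domain) and whether it is currently running.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():<20} {state}")
finally:
    conn.close()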
Virtualization technology, whether through application virtualization, network virtualization, or
server virtualization, has become indispensable in modern IT landscapes. These examples
showcase how virtualization streamlines operations, increases efficiency, and reduces costs,
making it a cornerstone of today's computing infrastructure. Its continued evolution promises
even more innovations and efficiencies for the future.
Benefits of Virtualization
We have discussed the benefits above, so by now you are probably interested in how customers
like you have deployed virtualization to streamline and improve their IT operations. Scale
Computing is one of the most highly rated and highly reviewed software companies in the
industry. Read customer reviews to learn why our HCI for edge computing solution is so popular
with end users and partners like you!

Lesson 3: Cloud service models (IaaS, PaaS, SaaS) and deployment models (public, private,
hybrid).
What Is Cloud Computing?
Cloud computing refers to accessing IT resources such as computing power, databases, and data
storage over the Internet on a pay-as-you-go basis.
Instead of buying and maintaining physical servers in data centers, you access technology
services on-demand via subscription from a cloud services provider like Amazon Web Services
(AWS), Microsoft Azure, Google Cloud Platform, Oracle Cloud, or Alibaba Cloud.
The cloud computing approach has many benefits, including the following.
 Equipment, software, and additional middleware don’t require a great deal of capital to
acquire, own, and maintain.
 Since you don’t require a large upfront investment, you can develop your business idea
quickly and get to market.
 You can scale up or down your cloud infrastructure as your needs change.
 You can easily pivot to another area of service in response to changes in the market.
 Since cloud providers continually develop new, more efficient technologies, you can take
advantage of the latest technologies to stay competitive.
 With effort and robust cloud cost optimization strategies, you can optimize your costs to
protect and increase your margins.
 Also, you may want to leverage managed services to free up more time for your
engineers to work on the core tasks that will ultimately grow your business.
What Are The Three Major Types Of Cloud Computing Services?
Cloud computing has three main delivery models: Infrastructure as a Service (IaaS), Platform as
a Service (PaaS), and Software as a Service (SaaS). Here’s what each model offers.
Infrastructure-as-a-Service (IaaS)
The IaaS cloud services delivery model is one where a cloud service provider (CSP), like AWS or
Azure, provides basic compute (CPU and memory), network, and storage resources to a customer
over the internet, on an as-needed, pay-as-you-go basis.
In addition to virtual hardware, IaaS can also deliver software, security solutions, and cost
management services. A CSP owns and leases its cloud infrastructure. You can, however,
configure the infrastructure you lease from them to suit your applications’ requirements.
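As a concrete, if simplified, illustration of the IaaS model, the sketch below uses the AWS SDK for Python (boto3) to launch a single virtual server on demand. It assumes AWS credentials are already configured on the machine, and the AMI ID is a hypothetical placeholder rather than a real image.

```python
import boto3  # AWS SDK for Python: pip install boto3 (assumes credentials are configured)

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small virtual server; the AMI ID below is a placeholder, not a real image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```

A few lines of code stand in for what used to require ordering, racking, and cabling a physical server, which is precisely the point of IaaS.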
Platform-as-a-Service (PaaS)
PaaS gives software developers access to ready-to-use tools for building and maintaining mobile
and web applications without having to maintain the underlying infrastructure. Pricing is also
pay-as-you-go, based on usage.
A CSP hosts middleware and infrastructure at its data center, such as servers, operating systems
software, networks, storage, and databases. You access these tools and services through a web
browser, picking only what you need to build, test, deploy, run, update, upgrade, and scale your
applications.
With PaaS, you gain access to a wide variety of innovative technologies, including AI, big data
analytics, chatbots, databases, blockchains, IoT, and content management systems.
Software as a Service (SaaS)
SaaS cloud computing enables you to subscribe to a complete, end-user-friendly, cloud-based
application that you access through a web browser, API, or dedicated desktop client.
The SaaS model is the most popular cloud computing service because it saves time, money, and
effort. Most organizations prefer to subscribe to SaaS products rather than build, maintain,
update, upgrade, and secure their own software from scratch.
SaaS services include Gmail or Outlook for email, HubSpot for sales and marketing tools, and
ZenDesk for customer service.
However, SaaS, IaaS, and PaaS aren’t the only cloud computing options you should know. Aside
from these three cloud delivery models, there are also four cloud deployment models.
What Are The Four Major Types Of Cloud Computing?
These cloud deployment models are public cloud, private cloud, hybrid cloud, and multi-cloud.
Here’s what each approach offers.
What is a public cloud in cloud computing?
The public cloud is a cloud computing approach that shares IT infrastructure among multiple
customers over the public internet.
This shared approach (multi-tenant) takes advantage of economies of scale to reduce operational
costs for the CSP and subscription prices for each of its public cloud customers.
Other features of a public cloud include:
 Computing infrastructure may be located within the CSP’s premises and delivered over
the Internet to customers.
 Nowadays, it can also be delivered from infrastructure located in the customer’s own data center.
 The public cloud provides users with the flexibility to increase or decrease their resource
usage depending on their application needs.
 To ensure data security, customers’ workloads do not interact with one another when
using the public cloud.
 Cloud service providers own and manage the underlying infrastructure.
 Depending on the service, pricing may be a free, pay-as-you-go, or subscription service.
 Typically, the public cloud provides high-speed network connectivity for quick access to
applications and data, considering the many tenants.
 The IaaS delivery model is synonymous with public clouds. However, the public cloud
also supports PaaS, SaaS, Desktop-as-a-Service (DaaS), and Serverless computing.
You’ll notice that some of these features are common among all four types of cloud computing.
So, why would you want to use a public cloud specifically?
Public cloud pros
 Cost savings – The shared resources approach in a public cloud reduces costs per tenant.
 Ease of deployment – Using public cloud services often requires minimal setup and
configuration for many organizations.
 Flexibility – Public cloud resources can be repurposed for various use cases, including for
IaaS, PaaS, and SaaS applications.
 High scalability – Public clouds must always have extra capacity to accommodate
unanticipated demand spikes among their many customers. For example, tenants can
easily add more computing capacity to handle peak loads during specific times or expand
their service offerings to cater to a specific season.
 Availability – The majority of cloud providers support public cloud services.
 Managed services – In addition to managing the underlying infrastructure, cloud service
providers also offer additional services. For instance, they offer analytics services to help
tenants to better understand their own usage, identify new opportunities, and optimize
operational performance.
However, there are some concerns associated with using the public cloud.
Public cloud cons
 Data security – In a public cloud, a third-party (the CSP) controls the data, software, and
hardware supporting the customer’s workload. For fear of exposure, many organizations
prefer not to have their data pass through another company’s systems like this.
 Latency – With many customers and varying workloads, public clouds can experience
slowdowns during peak times.
 Reduced control – Unlike private clouds, public clouds are largely managed by the CSP,
which means that customers have less control over VM configurations, security patches,
and updates.
Speaking of private clouds, how do they compare?
What is a private cloud in cloud computing?
A private cloud is a cloud computing type built to serve the needs of a particular organization.
This is why private clouds are also known as enterprise clouds or internal clouds. Only this
particular organization can use that private cloud.
Other private cloud features include:
 A private cloud is reserved for a specific client, usually a single organization.
 It is also hosted at the customer’s location or at the cloud service provider’s data center.
 It is common for private cloud services to operate on private networks.
 Infrastructure configuration in a private cloud is similar to the traditional on-premises
approach.
Private cloud pros
Running workloads on a private cloud has several powerful benefits, including:
 Compliance requirements – Many organizations use the private cloud approach to meet
their regulatory compliance requirements for customer data.
 Data protection – Organizations use the private cloud to store confidential documents,
such as business secrets, intellectual property, medical records, personally identifiable
information (PII), financial data, and other sensitive data.
 Hybrid approach – Some businesses combine public and private clouds, say, to run daily
operations in the more cost-effective public cloud and back up their data in the private
cloud to boost resilience.
 More control over infrastructure configuration – A private cloud enables the access
control (security) and infrastructure configuration of an on-premises system.
 Tighter security – Workloads run on a private network and behind the organization’s
firewall.
 Managed private clouds – If you are understaffed or inexperienced in infrastructure
management, you can still have your CSP handle most of the tasks.
Yet, using a private cloud has its fair share of challenges as well.
Private cloud cons
The following are some limitations of using a private cloud:
 Expensive – You’ll need to invest in capable hardware, software, and licenses to support
a robust private cloud, especially when you want it running in your data center. Today,
opting for managed private clouds can alleviate this burden.
 More control; more maintenance work – You’ll need more, and more experienced, cloud
engineers to manage your private cloud environment.
That said, you are right to think that there should be a way to use private and public clouds
together. There is. Hybrid clouds.
What is a hybrid cloud in cloud computing?
A hybrid cloud approach combines one or more public clouds with one or more private clouds
into a single computing environment. You connect the public and private clouds through APIs,
Local Area Networks (LAN), Wide Area Networks (WAN), and/or Virtual Private Networks
(VPNs).
The goal of a hybrid cloud strategy is to take advantage of the benefits of both private and public
clouds.
Here are more features of a hybrid cloud:
 It can comprise at least one public cloud and one private cloud, two or more public
clouds, two or more private clouds, or an on-premises environment (virtual or physical)
that’s connected to one or more private or public clouds.
 Applications move in and out of multiple separate clouds that are interconnected.
 One or more of the multiple separate clouds needs to be able to scale computing
resources on demand.
 All the separate environments need to be managed as a single IT environment.
It might sound complex but using a hybrid cloud has multiple benefits.
Hybrid cloud pros
Some benefits of a hybrid cloud deployment include:
 Flexibility – Private clouds give you more configuration control and data protection,
while public clouds reduce the cost of running some workloads.
 Adaptability – You can also pick the most optimal cloud for each workload or
application. You’ll be able to move workloads freely between your interconnected clouds
as circumstances change.
 Minimize vendor lock-in – By using multiple CSPs, you reduce your dependence on a
single provider, enabling you to choose which services to use more often and from which
provider.
 Tap innovation – Get access to innovative products, services, and technologies from
different cloud providers at the same time.
 Improve system resilience – By using separate systems from different cloud providers,
you can switch to another cloud if one fails.
Yet, hybrid clouds aren’t flawless either.
Hybrid cloud cons
Some limitations of using a hybrid cloud include:
 Complexity – Integrating, orchestrating, and scaling the interconnected clouds in a hybrid
cloud environment can be overwhelming both in the beginning and as your applications
grow. After all, each cloud differs in terms of management methods, data transmission
capabilities, and security protocols.
 Cost visibility challenges – It can be harder to get full visibility into individual cost drivers
in a hybrid cloud environment than in a public or private cloud alone.
 Demands continuous management – A greater amount of effort is required to ensure that
risks or vulnerabilities appearing in one cloud do not spread to other clouds, applications,
and data.
Today, more companies are embracing multicloud computing, which is even more flexible or
complex than hybrid cloud computing depending on who you ask.
What is a multi-cloud in cloud computing?
A multi-cloud approach involves using two or more clouds supplied by two or more cloud
service providers.
At the enterprise level, talk about going multicloud usually refers to using multiple cloud
services such as SaaS, PaaS, and IaaS services from at least two distinct public cloud providers.
Yet, it can be using more than a single service from more than one cloud provider, public or
private.
It might have caught your attention by now that all hybrid clouds are multicloud deployments.
But a multicloud isn’t always a hybrid cloud.
Multicloud approaches also compound hybrid cloud advantages and disadvantages.
In recent years, other types of clouds have emerged and continue to emerge, including big data
analytics clouds and community clouds.
Yet, every model is unique in its own way.
Cloud Computing Models FAQs
Now, perhaps you are wondering what’s the best cloud delivery or deployment type to choose.
Here are some insights to help you select the best option for your application requirements.
What are the similarities between the cloud computing models?
A number of features are common to all four approaches to cloud computing, including:
 They all offer on-demand access to computing resources.
 Some or all of the services are delivered over the internet – from and to anywhere in the
world.
 If your needs change, you can scale part or all of your infrastructure accordingly.
 Pricing is based on your usage of the cloud’s services, usually on a pay-as-you-go basis
with discounts for committed use.
 In terms of SaaS, IaaS, and PaaS, these services facilitate the flow of data from client
applications across the web, through the cloud provider’s systems, and back— although
the features vary from vendor to vendor.
What’s the safest option?
Due to their multi-tenant environment, public clouds tend to be more vulnerable. A private cloud,
on the other hand, is ideal for isolating confidential information for compliance reasons.
However, because your private cloud is more customizable and has more access controls, it is
more your responsibility to keep it safe.
Hybrid clouds and multiclouds offer more flexibility for your resources and workloads, but they
can also be more difficult to manage. For example, you must properly configure each cloud
platform and ensure that you have secure access and data encryption in place. In addition, you
must consider the legal and regulatory requirements for each cloud platform you use.
What’s the most cost-effective option?
A public cloud’s multi-tenant architecture often provides better economies of scale than a private,
hybrid, or multicloud setup. In public clouds, pricing is also pay-as-you-go, meaning that you
(ideally) only pay for what you use. Learn more about cloud cost optimization in our best
practices guide!
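To make the pay-as-you-go trade-off concrete, here is a back-of-the-envelope comparison using entirely hypothetical prices; the hourly rate and the 30% committed-use discount are made-up figures for illustration, not any provider's actual pricing.

```python
# All figures below are hypothetical, chosen only to illustrate the trade-off.
HOURLY_RATE = 0.10          # on-demand price per instance-hour (made-up)
HOURS_PER_MONTH = 730
COMMITTED_DISCOUNT = 0.30   # discount for committing to steady usage (made-up)

on_demand = HOURLY_RATE * HOURS_PER_MONTH
committed = on_demand * (1 - COMMITTED_DISCOUNT)
part_time = HOURLY_RATE * 200   # the same instance run only 200 hours in the month

print(f"Always-on, on-demand:  ${on_demand:.2f}/month")
print(f"Always-on, committed:  ${committed:.2f}/month")
print(f"200 hours, on-demand:  ${part_time:.2f}/month")
```

The arithmetic shows why pay-as-you-go favors intermittent workloads, while committed-use pricing favors workloads that run around the clock.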
Which cloud computing model offers the best resources?
There are many key factors to consider when choosing a cloud computing model for your
organization. Among them are the different types of workloads you have, your budget, your
engineering experience, and the requirements of your customers.
A hybrid cloud deployment, for example, may give you more vendors, tools, and technologies,
but it will also demand more of you in terms of performance, security, and cost management.
And speaking of proper cloud cost management.
How To Understand, Control, And Optimize Your Cloud Costs
In all cloud computing models, there will be components that interfere with full cost visibility.
Plus, on-demand access to computing resources makes it easy to waste lots of them, driving up
costs.
Identifying the specific areas driving your costs will help you reduce unnecessary cloud spending
— without degrading your customers’ experience. Yet, most cost management tools only display
total or average costs. Not CloudZero.
With CloudZero, you can:
 Collect in-depth cost data with context, mapping your usage to costs.
 Uncover how much every tagged, untagged, untaggable, and shared resource costs in a
multi-tenant environment.
 See your costs per unit, per customer, per product, per feature, per project, per team, per
environment, per deployment, etc. This empowers you to track exactly who and what is
driving your cloud costs, and why they are changing.
 Manage hybrid and multicloud costs seamlessly with CloudZero AnyCost. Covers AWS,
Azure, GCP, Kubernetes, Snowflake, MongoDB, Databricks, New Relic, Datadog, and
more.
 Accurately and quickly allocate 100% of your cloud costs so you know where the money
is going.
 Prevent unexpected costs with a real-time cost anomaly detection engine. You’ll get
timely alerts via Slack, email, etc.

Chapter 8: Network Monitoring and Troubleshooting


Lesson 1: Network monitoring tools and techniques.
Network monitoring is an important piece of information security that every organization should
be implementing. Using helpful network monitoring tools, you can track performance issues and
security problems to mitigate potential issues quickly. But, with such a saturated market, it can
be overwhelming to choose a network monitoring tool that best fits your organization. To help
you better track and monitor the security of your network continuously, we’ve pulled together
five network monitoring tools to consider using.
5 Network Monitoring Tools
These network monitoring tools monitor various aspects of your network and include features
such as SNMP, alerts, bandwidth monitoring, uptime/downtime, baseline threshold calculation,
network mapping, network health, customizable reports, wireless infrastructure monitoring, and
network performance. In no particular order, these five tools address some of the top network
security needs.
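Before looking at the individual products, it can help to see the basic pattern most of them implement under the hood: probe a device, compare the result against a baseline threshold, and raise an alert when the threshold is crossed. The sketch below is a deliberately minimal Python illustration of that idea, assuming a Linux or macOS ping command; it is not how any of the tools below actually work internally.

```python
import subprocess
import time

HOST = "8.8.8.8"        # example target; point this at a device on your own network
THRESHOLD_MS = 100.0    # example latency baseline

def ping_once(host):
    """Return an approximate round-trip time in ms, or None if the host is unreachable."""
    start = time.perf_counter()
    # "-c 1" sends a single probe on Linux/macOS; use "-n 1" on Windows.
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
    if result.returncode != 0:
        return None
    # Note: this includes process start-up overhead; real tools parse ping's own output.
    return (time.perf_counter() - start) * 1000

rtt = ping_once(HOST)
if rtt is None:
    print(f"ALERT: {HOST} is unreachable")
elif rtt > THRESHOLD_MS:
    print(f"ALERT: {HOST} latency {rtt:.1f} ms exceeds the {THRESHOLD_MS} ms baseline")
else:
    print(f"OK: {HOST} responded in {rtt:.1f} ms")
```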
ManageEngine OpManager
ManageEngine OpManager is a network monitoring tool that continuously monitors devices
such as routers, switches, firewalls, load balancers, wireless LAN controllers, servers, VMs,
printers, and storage devices. ManageEngine OpManager must be installed on-site, but it comes
with pre-configured network monitor device templates for increased ease-of-use.
Key features include:
 Real-time network monitoring
 Physical and virtual server monitoring
 Multi-level thresholds
 Customizable dashboards
 WAN Link monitoring
 SNMP monitoring
 Email and SMS alerts
 Automatic discovery
Paessler PRTG Network Monitor
Paessler PRTG Network Monitor allows organizations to monitor all their systems, devices,
traffic, and applications in their IT infrastructure without additional plugins. You can choose
between a number of sensors that will monitor areas of your network, such as bandwidth
monitoring sensors, hardware parameters sensors, SNMP sensors, VOIP and QoS sensors, and
others.
Key features include:
 Integrated Technologies (SNMP, WMI, SSH, HTTP requests, SQL, and more)
 Live-status dashboards
 Email, push, or HTTP request alerts
 Threshold-based alert system
 Reports system
 Scan for devices by IP segment
Solarwinds NPM
While SolarWinds Network Performance Monitor has performance in the name, it is still a
valuable network security monitoring tool because it tracks network elements such as servers,
switches, and applications. SolarWinds NPM can jump from SNMP monitoring to packet analysis
to give your organization greater control over monitoring individual segments of your network
and increase network security.
Key features include:
 Critical path visualization
 Intelligent mapping
 WiFi monitoring and heat maps
 Advanced alerting
 SNMP monitoring
 Discovers connected devices automatically
Nagios
Nagios is a monitoring and alerting engine designed to run natively on Linux systems. The open-
source model of Nagios provides the opportunity for organizations to customize and adapt the
system to meet their needs. The tool breaks down statuses into three categories – Current
Network Status, Host Status Totals, and Service Status Totals. Through the use of APIs, you can
integrate other services for true flexibility.
Key features include:
 Performance dashboard
 API integration
 Availability reports
 Alerting
 Extended add-ons
 Upgrade capabilities for Nagios XI
WhatsUp Gold
WhatsUp Gold pulls infrastructure management, application performance management, and
network monitoring into one tool. It’s user-friendly, with feature-based, customizable pricing
packages to fit your organization’s exact structure and network security needs.
Key features include:
 Hybrid cloud monitoring
 Real-time performance monitoring
 Automatic report generation
 Network mapping
 Easy-to-use monitoring dashboard
Things to Consider When Choosing a Network Monitoring Tool
Scalability – Depending on the size of your organization and corresponding network size, you
need to look for a tool that is able to accommodate that scale. Choose a network monitoring tool
that grows in capability as your network grows in size.
Security vs. Performance Tracking – Network monitoring tools vary in the type of monitoring
they perform. Network performance tracking tools focus on performance issues and data such as
network traffic analysis and network delays. If your goal is to decrease security threats by early
detection and prevention tactics, you should consider network security tracking tools.
Cost – The good news about the number of network monitoring tools out in the world is that
there is an option for every organization. Whether you’re looking for a free tool to start with or
ready to invest funds into a quality networking monitoring tool, there are plenty of options for
you.

Lesson 2: Troubleshooting network issues and performance problems.

If you work as an IT engineer or IT administrator and you are responsible for the network in your
organization, it’s only a matter of time before a network problem comes up and everyone’s
calling on you to solve it. The longer it takes to identify the issue, the more emails you’ll get
from staff or clients, asking you why the problem isn’t solved yet.
I’ve written this guide on the most common network troubleshooting techniques and best
practices to give you a starting point and structure for efficiently resolving issues as they arise.
I’ll be using a bit of technical jargon here, so be ready to look a few things up if you’re not sure
of the definitions. If you already know network troubleshooting methodology but are looking for
automated software, read on for more about my favorite tool, SolarWinds Network Performance
Monitor, later in this article.

Network Troubleshooting Steps


 1. Check the hardware.
 2. Use ipconfig.
 3. Use ping and tracert.
 4. Perform a DNS check.
 5. Contact the ISP.
 6. Check on virus and malware protection.
 7. Review database logs.
Network Troubleshooting Best Practices
 1. Collect information.
 2. Customize logs.
 3. Check access and security.
 4. Follow an escalation framework.
 5. Use monitoring tools.
Troubleshooting Network Issues Conclusions
Network Troubleshooting Steps
Issues and problems can arise at numerous points along the network. Before you start trying to
troubleshoot any issue, you want to have a clear understanding of what the problem is, how it has
arisen, who it’s affecting, and how long it has been going on. By gathering the right information
and clarifying the problem, you’ll have a much better chance of resolving the issue quickly,
without wasting time trying unnecessary fixes. You can always start by working through these
simple network troubleshooting steps to diagnose the issue.
1. Check the hardware.
When you’re beginning the troubleshooting process, check all your hardware to make sure it’s
connected properly, turned on, and working. If a cord has come loose or somebody has switched
off an important router, this could be the problem behind your networking issues. There’s no
point in going through the process of troubleshooting network issues if all you need to do is plug
a cord in. Make sure all switches are in the correct positions and haven’t been bumped
accidentally.
Next, turn the hardware off and back on again. This is the mainstay of IT troubleshooting, and
while it might sound simplistic, often it really does solve the problem. Power cycling your
modem, router, and PC can solve simple issues—just be sure to leave each device off for at least
60 seconds before you turn it back on.
2. Use ipconfig.
Open the command prompt and type “ipconfig” (without the quotes) into the terminal. The
Default Gateway (listed last) is your router’s IP address. Your computer’s IP address is the
number next to “IPv4 Address.” If your computer’s IP address starts with 169.254, it is using a
self-assigned (APIPA) address, which means the computer is not receiving a valid IP address. If
it starts with anything else, your computer is being allocated a valid IP address from your router.
Try typing in “ipconfig /release” followed by “ipconfig /renew” to get rid of your current IP
address and request a new one. This will in some cases solve the problem. If you still can’t get a
valid IP from your router, try plugging your computer straight into the modem using an ethernet
cable. If it works, the problem lies with the router.
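If you prefer to script the 169.254.x.x check described above, the following sketch is one way to do it from Python. It assumes the common standard-library trick of opening a UDP socket toward a public address to discover which local address the operating system would use; no traffic is actually sent.

```python
import ipaddress
import socket

def local_ip():
    """Return the local address of the interface the OS would use to reach the internet."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))   # no packets are sent; this only selects a route
        return s.getsockname()[0]
    finally:
        s.close()

ip = ipaddress.ip_address(local_ip())
if ip in ipaddress.ip_network("169.254.0.0/16"):
    print(f"{ip} is a self-assigned (APIPA) address - DHCP is not handing out a lease")
else:
    print(f"{ip} looks like a normally assigned address")
```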
3. Use ping and tracert.
If your router is working fine, and you have an IP address starting with something other than
169, the problem’s most likely located between your router and the internet. At this point, it’s
time to use the ping tool. Try sending a ping to a well-known, large server, such as Google, to see
whether your connection can reach beyond your router.
prompt and typing “ping 8.8.8.8”; you can also add “-t” to the end (ping 8.8.8.8 -t) to get it to
keep pinging the servers while you troubleshoot. If the pings fail to send, the command prompt
will return basic information about the issue.
You can use the tracert command to do the same thing, by typing “tracert 8.8.8.8”; this will show
you each step, or “hop,” between your router and the Google DNS servers. You can see where
along the pathway the error is arising. If the error comes up early along the pathway, the issue is
more likely somewhere in your local network.
4. Perform a DNS check.
Use the command “nslookup” to determine whether there’s a problem with the server you’re
trying to connect to. If you perform a DNS check on, for example, google.com and receive
results such as “Timed Out,” “Server Failure,” “Refused,” “No Response from Server,” or
“Network Is Unreachable,” it may indicate the problem originates in the DNS server for your
destination. (You can also use nslookup to check your own DNS server.)
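If you want to automate this step rather than run nslookup by hand, the short sketch below performs a lookup through the system's configured resolver using Python's standard socket module; google.com is just an example hostname.

```python
import socket

def dns_check(hostname):
    """Resolve a hostname with the system's configured DNS resolver."""
    try:
        infos = socket.getaddrinfo(hostname, None)
        addresses = sorted({info[4][0] for info in infos})
        print(f"{hostname} resolves to: {', '.join(addresses)}")
    except socket.gaierror as err:
        print(f"DNS lookup for {hostname} failed: {err}")

dns_check("google.com")   # example hostname
```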
5. Contact the ISP.
If all of the above turn up no problems, try contacting your internet service provider to see if
they’re having issues. You can also look up outage maps and related information on a
smartphone to see if others in your area are having the same problem.
6. Check on virus and malware protection.
Next, make sure your virus and malware tools are running correctly, and they haven’t flagged
anything that could be affecting part of your network and stopping it from functioning.
7. Review database logs.
Review all your database logs to make sure the databases are functioning as expected. If your
network is working but your database is full or malfunctioning, it could be causing problems that
flow on and affect your network performance.
Network Troubleshooting Best Practices

To make troubleshooting as efficient as possible, it’s very important to have best practices in
place. As you work through the steps to try to solve network issues, following these network
troubleshooting best practices can help streamline the process and avoid unnecessary or
redundant efforts.
1. Collect information.
To best support your end users, you first need to make sure you’re clear on what the problem is.
Collect enough information from both the people who are experiencing network issues and the
network itself, so you can replicate or diagnose the problem. Take care not to mistake symptoms
for the root cause, as what initially looks like the problem could be part of a larger issue.
2. Customize logs.
Make sure your event and security logs are customized to provide you with information to
support your troubleshooting efforts. Each log should have a clear description of which items or
events are being logged, the date and time, and information on the source of the log (MAC or IP
address).
3. Check access and security.
Ensure no access or security issues have come up by checking all access permissions are as they
should be, and nobody has accidentally altered a sensitive part of the network they weren’t
supposed to be able to touch. Check all firewalls, antivirus software, and malware software to
ensure they’re working correctly, and no security issues are affecting your users’ ability to work.
4. Follow an escalation framework.
There’s nothing worse than going to the IT help desk and being directed to another person, who
then directs you to another person, who directs you to yet another. Have a clear escalation
framework of who is responsible for which issues, including the final person in the chain who
can be approached for resolution. All your end users should know who they can go to about a
given issue, so time isn’t wasted talking to five different people who cannot fix the problem.
5. Use monitoring tools.
Troubleshooting can be done manually but can become time-consuming if you go through each
step. When you have a bunch of people knocking on your office door or sending you frantic
emails, it can be overwhelming to try to find the problem, let alone fix it. In business and
enterprise situations, it’s best to use monitoring tools to make sure you’re getting all the relevant
network information and aren’t missing anything vital, not to mention avoiding exposing the
company to unnecessary risk.
My preferred monitoring software is SolarWinds® Network Performance Monitor (NPM). It’s a
well-designed tool with features to support network troubleshooting issues in an efficient and
thorough way. It allows you to clearly baseline your network behavior, so you have good data on
what your network should look like and how it usually performs, and it includes advanced
alerting features so you don’t receive floods of alerts all the time. You can customize the
software to alert you to major issues, choose the timing of alerts, and define the conditions under
which alerts occur.

Other NPM features include NetPath™ network path analysis, which lets you see your network
topology and performance pathways, and PerfStack™, which allows you to compare different
performance metrics against each other, as well as historical data. With these tools, you can see
which performance issues may be interlinked and troubleshoot the root cause faster. NPM also
comes with tools like a Wi-Fi sniffer and software for monitoring load balancers, switches, and
firewalls, as well as wireless issues and coverage, all of which enable you to keep an eye on the
overall health of your network and quickly pinpoint and fix issues as soon as they arise.

Lesson 3: Incident response and disaster recovery in network environments.

What is a network disaster recovery plan?


A network disaster recovery plan is a set of procedures designed to prepare an organization to
respond to an interruption of network services during a natural or humanmade catastrophe.
Voice, data, internet access and other network services often share the same network resources. A
network disaster recovery (DR) plan ensures that all resources and services that rely on the
network are back up and running within a specified time frame in the event of an interruption.
These DR plans usually include procedures to recover an organization's local area networks,
wide area networks and wireless networks. They may also cover network applications and
services, servers, computers and other devices, along with the data at issue.
Get started with a network DR plan
Network services are critical to ensure uninterrupted internal and external communication and
data sharing within an organization. A network infrastructure can be disrupted by any number of
disasters, including fire, flood, earthquake, hurricane, carrier issues, hardware or software
malfunction or failure, human error, and cybersecurity incidents and attacks.
Any interruption of network services can affect an organization's ability to access, collect or use
data and communicate with staff, partners and customers. Interruptions put business continuity
(BC) and data at risk and can result in huge customer service and public relations problems.
A contingency plan for dealing with any sort of network interruption is vital to an organization's
survival.
Some tips to consider with a network disaster recovery plan are the following:
 Use business continuity standards. There are nearly two dozen business continuity and
disaster recovery (BCDR) standards, and they are a useful place to start when creating a
contingency plan.
 Determine recovery objectives. Before starting on a plan, the organization must
determine its recovery time objective (RTO) and recovery point objective (RPO) for each
key service and data type. RTO is the time an organization has to make a function or
service available following an interruption. RPO determines the acceptable age of files
that an organization can recover from its backup storage to successfully resume
operations after a network outage. RPO will vary for each type of data. (A small example
of checking a backup against an RPO appears after this list.)
 Stick to the basics. A network DR plan should reflect the complexity of the network
itself and should include only the information needed to respond to and recover from
specific network-related incidents.
 Test and update regularly. Once complete, a network DR plan should be tested at least
twice a year and more often if the network configuration changes. It should be reviewed
regularly to ensure it reflects changes to the network, staff, potential threats, as well as
the organization's business objectives.
 Stay flexible. No one approach to creating a network disaster recovery plan will work for
every organization. Check out different types of plan templates and consider whether
specialized network DR software or services might be useful.
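As promised in the recovery objectives tip above, here is a small, hypothetical sketch of the kind of check an RPO implies: compare the age of the newest backup at the moment of an outage against the acceptable data-loss window. The timestamps and the four-hour RPO are made-up values for illustration only.

```python
from datetime import datetime, timedelta

RPO = timedelta(hours=4)                    # hypothetical acceptable data-loss window
last_backup = datetime(2023, 6, 1, 9, 30)   # hypothetical timestamp of the newest backup
outage_time = datetime(2023, 6, 1, 15, 0)   # hypothetical moment the outage occurs

data_at_risk = outage_time - last_backup
if data_at_risk > RPO:
    print(f"RPO violated: the newest backup is {data_at_risk} old (limit {RPO})")
else:
    print(f"Within RPO: the newest backup is {data_at_risk} old")
```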

Elements of a network disaster recovery plan


A network disaster recovery plan outlines resources needed to perform network recovery
procedures, such as equipment suppliers and information on data storage. It describes how off-
site backups are maintained, and it identifies key staff members and departments and outlines
their responsibilities in an emergency. The plan spells out responses unique to specific types of
worst-case scenarios, such as a fire, flood, earthquake, terrorist attack or cyberattack.
A network disaster recovery plan also identifies specific issues or threats related to an
organization's network operations. These can include interruptions caused by loss of voice or
data connectivity as a result of network provider problems or disasters caused by nature or
human activities.
Like any other disaster recovery plan, this one should include contact information for key staff
members if an emergency occurs after business hours, such as late at night or on weekends.
Some specific sections that should be included in a network disaster recovery plan include the
following:
 Emergency contacts and actions. List the IT network emergency team members and
their contact information at the front of the plan for fast access. A list of initial emergency
response actions should also be up front.
 Purpose and scope. Outline the purpose of the plan and its scope, along with
assumptions, team descriptions and other background information.
 Instructions for activating the plan. Describe the circumstances under which the
contingency plan will be activated, including outage time frames, who declares a disaster,
who is contacted and all communication procedures to be used.
 Policy information. Include any relevant IT BC/DR policies, such as data backup
policies.
 Emergency management procedures. Provide step-by-step procedures on how
networks will be reconfigured and data accessed, what outside help might be needed and
how staff will be accommodated for each different kind of potential disaster.
 Checklists and diagrams. Include checklists that prioritize hardware and software
restoration and network flow diagrams that make it easy for technical support staff to
quickly access information they may need.
 Data collection. Describe the information that might be needed before officially
declaring a network disruption, including network performance data and staff and first
responder reports.
 Disaster declaration. Identify actions to take once the network emergency team
determines it's necessary to declare a network disaster, including how the decision is
communicated, who is contacted and what additional damage assessments are needed.
 Disaster recovery. Provide instructions on restoring network operations, connectivity,
devices and related activities.
 Appendices. Provide names and contact information of IT and non-IT emergency teams,
as well as information on internet service providers and other key vendors, alternate
network configuration data, forms that emergency response teams will need and other
relevant information.
Creating and implementing a plan
An organization's network administrator works closely with network managers and other IT staff
to create a network disaster recovery plan. Get other IT staff involved early in the process,
including IT operations, data center and data processing managers.
Loop in finance and budget managers to ensure the financial implications of the plan are fully
understood.
Consult business managers to determine any RTO and RPO relevant to their part of the business.
They can also contribute valuable information on how their staffs work and communicate. That
information could become critical in the event of a disaster. The needs of support staff must also
be considered when creating a network disaster recovery plan.
Outside vendors, service providers and suppliers should be consulted to understand how their
operations might be affected by certain types of disasters. Will their local operations be
functional in the event of a local disaster? What sorts of disaster recovery plans do they have in
place? They can provide valuable information on how they can contribute to the organization's
recovery.
Once a plan is drafted, it must be reviewed and approved by senior management. It's critical that
all financial aspects of the plan be discussed at this point to minimize surprises in the middle of a
disaster situation.
Common network DR mistakes
Creating a network disaster recovery plan is a complex, time-consuming effort with lots of
different pieces and people involved and many ways for it to go wrong. Common mistakes
include the following.
Forgoing regular reviews. A network DR plan is not a one-and-done effort. Instead, it's a
living document that must be reviewed and updated regularly to take into account changes in the
organization, including more reliance on data and computers, new products and technologies in
use, and changing processes and business objectives. The threats an organization faces also
change over time and must be regularly reviewed.
Inadequate funding. Cutting budgetary corners in the planning process is a huge mistake.
Taking time to educate senior management on the value of having a plan can help ensure
adequate funds are allocated both for the planning process and for the implementation of the plan
should it ever be needed.
Skipping the drills. Practicing the network DR plan is critical to its success. Staff members
must know where to go and what to do each step of the way before they have to do it in an
emergency. Again, this is another place where it's tempting to save money and time, but that
could turn out to be a costly mistake in the long run.
New technology replaces DR planning. Vendors tout resiliency, high availability and cloud-
based disaster recovery as technologies that cut back on the need for DR planning. However,
they are not the same, don't apply to the full scope of a network infrastructure and don't make
business continuity planning irrelevant. The vendor hype is often just that: hype that won't help
in a disaster situation.
Overlooking the details. The more detailed a network DR plan, the better. Documenting all
network hardware, including model, serial numbers and vendor support contact information will
save time if replacements or repairs are needed. Include configuration settings for all the
networking hardware in your data center as backup in case imported settings don't work with
replacement equipment after a disaster.
Backups key to network recovery
The network disaster recovery plan doesn't exist in a vacuum, but rather is part of an
organization's broader IT disaster recovery plan. Data backup is a key part of both the overall IT
plan and the network plan, and information on an organization's backup policies and procedures
should be included in DR planning.
One data backup option is having dual data centers in different locations, each of which can
handle all of an organization's data processing needs. The data centers run in parallel and
synchronize or mirror data between them. Operations can be shifted from one data center to
the other in an emergency. Dual data centers are not an option open to every organization.
Leased colocation facilities are an alternative.
Other options include backing up data to dedicated backup disk appliances with management
software that's either integrated in the appliance or run on a separate server. The backup software
runs the data copying process and enforces backup policies for an organization. A backup
appliance is an effective option as long as it's located where it won't be hit by the same disasters
as an organization's original data.
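At its core, the copying-and-policy work that backup software performs boils down to moving data somewhere else and proving the copy is intact. The toy sketch below illustrates just that step with placeholder file paths; real products add scheduling, cataloging, retention, deduplication, and much more.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Compute a file's SHA-256 digest in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

source = Path("important_data.db")                # hypothetical file to protect
target = Path("/mnt/backup/important_data.db")    # hypothetical backup appliance mount

shutil.copy2(source, target)                      # copy data and metadata
assert sha256(source) == sha256(target), "backup verification failed"
print("Backup copied and verified")
```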
Cloud backup and cloud-based disaster recovery are other options, either in-house or through a
cloud data backup service. Cloud storage as a service provides low-cost, scalable capacity and
eliminates the need to buy and maintain backup hardware. However, cloud providers' fees vary
depending on the types of services and accessibility required. And cloud services can require
organizations to encrypt data and take other steps to secure the information they're sending to the
cloud.
Cloud-to-cloud data backup is one alternative. It protects data held in software as a service
(SaaS) platforms, such as Salesforce and Microsoft Office 365, data that often exists only in the
cloud. Backed-up SaaS data is copied to another cloud, from which it can be restored in an
emergency.

Chapter 9: Wireless Networking


Lesson 1: Wi-Fi standards and configurations.
1. WHAT IS WIFI?
In everyday parlance, WiFi is generally used to refer to wireless signal or connectivity.
Technically, WiFi refers to a set of networking protocols that allow devices to connect to local
area networks and the Internet using radio waves. The WiFi standards are based on the IEEE
802.11 family of standards.

Wi-Fi is a trademark of the Wi-Fi Alliance, a non-profit organization that certifies the
interoperability of Wi-Fi products through standardized testing and promotes the technology.
The Wi-Fi Alliance controls the "Wi-Fi Certified" logo and permits its use only on equipment
that passes its interoperability and security testing.

WiFi-certified devices can connect to each other as well as to wired network devices and the
Internet through wireless access points. There are different versions of WiFi standards based on
maximum data rate, frequency band, and maximum range. But all the different standards are
designed to work seamlessly with one another and with wired networks.
2. WHAT ARE WIFI STANDARDS?
WiFi standards are networking standards that govern protocols for implementing wireless local
area networks (WLAN). These standards fall under the Institute of Electrical and Electronics
Engineers' (IEEE) 802.11 protocol family. Wi-Fi standards are the most commonly used
networking standards for connecting devices in a wireless network.

The main goal of the WiFi standards is interoperability, which ensures that products from
different vendors are compatible with each other and can interoperate in a variety of
configurations. WiFi-certified devices are also backward compatible, which means that new
equipment can work with existing equipment.

The interoperability and backward compatibility of Wi-Fi equipment have made the continued
use of Wi-Fi equipment possible, enabling businesses to gradually upgrade their networks
without massive upfront investment.
3. WHAT ARE THE DIFFERENT WIFI NETWORKING STANDARDS?
The first version of the 802.11 protocol was released in 1997 and since then WiFi standards have
been constantly evolving to improve the quality of service provided by the network. In the
following sections, we walk you through the development of the WiFi Networking Standards
from 802.11 to the latest, 802.11ax.
1. IEEE 802.11
802.11 was the original WiFi standard released by IEEE in 1997 and specified two bit rates of 1
and 2 Mbps (Megabits per second). It also specified three non-overlapping channels operating in
the 2.4 GHz frequency band.
2. IEEE 802.11A
802.11a standard was released by IEEE in 1999. This upgraded standard operates in the 5 GHz
frequency band, which is more suitable for use in open office spaces and offers a maximum data
rate of 54 Mbps. Consequently, it quickly displaced the legacy 802.11 standard, especially in
business environments.
3. IEEE 802.11B
802.11b standard was also released in 1999. 802.11b operates in the 2.4 GHz frequency band and
offers a maximum data rate of 11 Mbps. 802.11b was more prevalent with home and domestic
users.
4. IEEE 802.11G
802.11g standard was released in 2003. It operates in the 2.4 GHz frequency band and offers a
maximum data rate of 54 Mbps. It uses Orthogonal Frequency-Division Multiplexing (OFDM)
based transmission scheme for achieving higher data rates. 802.11g standard was backward
compatible with 802.11b, so most dual-band 802.11a/b products became dual-band/tri-mode,
supporting a and b/g in a single access point. The availability of these dual-band/tri-mode routers led to
the widespread adoption of the 802.11g standard.
5. IEEE 802.11N
The 802.11n standard, released in 2009, brought a massive increase in data rate compared to its
predecessors. It offered a maximum data rate of 600 Mbps and could operate in both the 2.4 GHz
and 5 GHz frequency bands simultaneously. It provided support for multi-user and multi-channel
transmission, making it a preferred choice for enterprise networks. The 802.11n standard was
later labeled as Wi-Fi 4.

6. IEEE 802.11AC
The 802.11ac standard was released in 2013 and brought another jump in data rates. It offers a
maximum data rate of 1.3 Gbps (Gigabits per second). Due to the higher data rate, it saw
widespread adoption. Additionally, it offered support for MU-MIMO (multi-user, multiple-input and
multiple-output) and additional channels in the 5 GHz frequency band. However, since it operates
only in the 5 GHz band, its range is comparatively shorter. The 802.11ac standard was later labeled
as Wi-Fi 5.
7. IEEE 802.11AX
802.11ax, released in 2019, is the newest and most advanced WiFi standard. It offers a maximum
data rate of up to 10 Gbps. 802.11ax offers better coverage and speed since it operates in both the
2.4 GHz and 5 GHz frequency bands. 802.11ax, also called Wi-Fi 6, improves throughput in
high-density environments, achieves higher efficiency by packing more data into each transmission,
and makes Wi-Fi faster by supporting wider channels.

4. DATA RATE COMPARISON OF DIFFERENT WIFI STANDARDS
Here is a table showing a comparison of the data rates of different WiFi standards.

IEEE Standard | Year of Release | New Name | Data Rate | Frequency Band | Range (Indoors; Outdoors)
802.11a | 1999 | Wi-Fi 1 | 54 Mbps | 5 GHz | 35m; 120m
802.11b | 1999 | Wi-Fi 2 | 11 Mbps | 2.4 GHz | 35m; 120m
802.11g | 2003 | Wi-Fi 3 | 54 Mbps | 2.4 GHz | 38m; 140m
802.11n | 2009 | Wi-Fi 4 | 600 Mbps | 2.4 GHz and 5 GHz | 70m; 250m
802.11ac | 2013 | Wi-Fi 5 | 1.3 Gbps | 5 GHz | 46m; 92m
802.11ax | 2019 | Wi-Fi 6 | Up to 10 Gbps | 2.4 GHz and 5 GHz | 9.1m
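To put these headline data rates in perspective, the rough calculation below estimates how long a 1 GB file transfer would take at each standard's maximum rate. Real-world throughput is always well below these theoretical ceilings, so treat the numbers as best-case illustrations only.

# Rough, theoretical transfer times for a 1 GB file at each standard's headline data rate.
FILE_SIZE_BITS = 8 * 10**9  # 1 gigabyte expressed in bits

headline_rates_mbps = {
    "802.11b": 11,
    "802.11a/g": 54,
    "802.11n": 600,
    "802.11ac": 1_300,
    "802.11ax": 10_000,
}

for standard, mbps in headline_rates_mbps.items():
    seconds = FILE_SIZE_BITS / (mbps * 10**6)
    print(f"{standard:>10}: {seconds:8.1f} s")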

5. WHAT IS WIRELESS NETWORK ENCRYPTION?


Wireless network encryption is the process of encoding data transmitted over wireless networks.
In the simplest form, encryption is the process of scrambling data signals transmitted between
devices to prevent unauthorized devices from intercepting the data. In wireless networks, the
process of encryption includes various tools, techniques, and standards to ensure that the data
transmitted over the WiFi connection are unreadable while in transit. In Wi-Fi, this encryption is
implemented at the data link layer of the Open Systems Interconnection (OSI) model.

A common example of wireless encryption is the use of authentication protocols, which secure
network communications by requiring a password or network key when a device tries to connect to
the secured network.
6. WHAT ARE THE DIFFERENT WIFI ENCRYPTION TYPES?
WiFi networks are usually less secure than wired networks. Therefore, it is critical to choose the
right security protocols that offer the best security for your network. WiFi security protocols use
encryption technology to secure the network and data. The following are the most commonly
used Wi-Fi security protocols:
1. WIRED EQUIVALENT PRIVACY (WEP)
Wired Equivalent Privacy (WEP), established in 1999, is the oldest WiFi security protocol and was
once the most widely deployed. It set technical standards for providing a WLAN with a level of
security comparable to that of a wired local area network (LAN). The primary goal of WEP was to
prevent hackers from snooping on wireless data in transit between clients and access points (APs).

From the beginning, WEP was plagued with security flaws. It uses the RC4 (Rivest Cipher 4)
stream cipher for authentication and encryption, combining a pre-shared encryption key with a
24-bit initialization vector. The small size of the initialization vector made the cipher easier to
crack, especially as computing power increased exponentially over the years.
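A back-of-the-envelope birthday-bound calculation shows why a 24-bit initialization vector is so small: on a busy network, IV values start repeating after only a few thousand frames, and every reuse leaks information about the keystream. The sketch below estimates the probability of at least one IV collision for a given number of transmitted frames.

import math

IV_SPACE = 2 ** 24  # WEP's 24-bit initialization vector space

def collision_probability(frames: int) -> float:
    """Birthday-bound estimate of at least one repeated IV after `frames` packets."""
    return 1.0 - math.exp(-frames * (frames - 1) / (2 * IV_SPACE))

for frames in (1_000, 5_000, 12_000, 40_000):
    print(f"{frames:>6} frames -> {collision_probability(frames):6.1%} chance of an IV reuse")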

Weak encryption, security flaws, and problematic authentication mechanisms make WEP highly
vulnerable. As a result, it was officially retired in 2004 and is not recommended for use anymore.
2. WI-FI PROTECTED ACCESS (WPA)
Wi-Fi Protected Access (WPA) was released in 2003 to replace WEP. The WPA security protocol
addressed the weak encryption of its predecessor by using a 256-bit key for encryption. It also
uses the Temporal Key Integrity Protocol (TKIP) to dynamically generate a new key for each
packet of data. This makes WPA much more secure than WEP, which used fixed-key encryption.

To encourage quick and easy adoption of WPA, the WiFi Alliance designed it to be backward-
compatible with WEP, so WPA could be implemented on WEP-enabled systems after a simple
firmware update. But this meant that WPA still relied on some vulnerable elements of WEP, so
the security provided by WPA still fell short.
3. WI-FI PROTECTED ACCESS 2 (WPA2)
Wi-Fi Protected Access 2 (WPA2) is the successor to WPA and was designed to improve the
security of WiFi networks. One of the key improvements of WPA2 over its predecessor was the
use of the Advanced Encryption Standard (AES), which provides stronger encryption compared to
the more vulnerable TKIP system. WPA2 also allowed devices to seamlessly roam from one
access point to another on the same WiFi network without having to re-authenticate.

WPA2 uses the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) to protect
data confidentiality. It does so by allowing only authorized network users to receive data, and it
uses encryption to ensure message integrity. This makes WPA2 much more secure than its
predecessors.

While WPA2 networks are mostly secure, they can be vulnerable to dictionary attacks if weak
passcodes are used. A simple mitigation strategy against such attacks is the use of long
passwords composed of uppercase and lowercase letters, special characters, and numbers. Such
long passwords are extremely difficult to crack in practice and protect your WiFi network
from dictionary attacks and other brute-force attacks.
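A rough way to see why long, mixed-character passwords resist dictionary and brute-force attacks is to estimate their entropy: each additional character multiplies the number of guesses an attacker must try. The sketch below compares a few hypothetical password lengths and character pools.

import math

def passphrase_entropy_bits(length: int, pool_size: int) -> float:
    """Approximate entropy of a randomly generated passphrase: length * log2(pool size)."""
    return length * math.log2(pool_size)

# Character pools: lowercase only (26), full printable set (~94), letters and digits (62).
for length, pool in [(8, 26), (8, 94), (16, 94), (20, 62)]:
    bits = passphrase_entropy_bits(length, pool)
    guesses = 2 ** bits
    print(f"{length:>2} chars from a pool of {pool:>2}: ~{bits:5.1f} bits (~{guesses:.2e} guesses)")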
4. WI-FI PROTECTED ACCESS 3 (WPA3)
Wi-Fi Protected Access 3 (WPA3) is the latest and most secure WiFi security protocol. It was
released by the WiFi Alliance in 2018 and as of July 2020, all WiFi-certified devices are required
to support WPA3.

WPA3 requires the use of Protected Management Frames, which augments privacy protections
by protecting against eavesdropping and forging. Other security improvements include
standardized use of the 128-bit cryptographic suite and disallowing the use of obsolete security
protocols.
WPA3 automatically encrypts the communication between each device and access point using a
new unique key, making connecting to public Wi-Fi networks a whole lot safer. Additionally,
WPA3 did away with open, unauthenticated communication between access points and devices and
eliminated the reuse of encryption keys. Alongside WPA3, the WiFi Alliance also introduced
Wi-Fi Easy Connect, a protocol that simplifies the process of onboarding IoT devices.
All of these security features make WPA3 the most secure wireless protocol available today.

Lesson 2: Wireless security and authentication methods.


When deploying a wireless LAN, it is very important to deploy secure methods for
authentication and encryption so that the network can only be used by those individuals and
devices that are authorized. This article takes a look at the commonly used methods of wireless
LAN authentication as well as the available encryption methods.
WLAN Authentication Methods
It is important to understand that there is a distinction between being authenticated onto a
wireless network and then having the traffic passed be encrypted. It is possible to be
authenticated onto a network and pass open unencrypted traffic; this section looks at the
commonly used methods of authentication.
There are three main methods of authentication that are used on today's wireless LANs:
 open authentication
 shared authentication
 EAP (Extensible Authentication Protocol) authentication
The open authentication method is the simplest of the methods used and only requires that the
end device be aware of the Service Set Identifier (SSID) used on the network; as long as the
SSID is known, the device is allowed onto the network. The problem with this method
is that the SSID is typically broadcast, and even when it is not, it can be easy to discover with
passive capture techniques.
The shared authentication method is commonly used on individual and small business wireless
LAN implementations; this method uses a shared key (a pre-shared key, or PSK) that is configured
on both sides of the connection; if the keys match, the device is allowed onto the network.
The third method uses the Extensible Authentication Protocol (EAP) and is the most common
method used by enterprises. The EAP method utilizes an authentication server that is queried for
authentication using a variety of credential options.
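For the pre-shared key method described above, WPA and WPA2 Personal derive a 256-bit pairwise master key from the passphrase and the SSID using PBKDF2-HMAC-SHA1 with 4,096 iterations. The sketch below reproduces that derivation with Python's standard library; the SSID and passphrase are made up for illustration.

import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit pairwise master key used by WPA/WPA2-Personal.

    Per IEEE 802.11i: PBKDF2-HMAC-SHA1, with the SSID as salt and 4096 iterations.
    """
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

# Hypothetical network name and passphrase, for illustration only.
print(wpa2_psk("correct horse battery staple", "HomeOffice").hex())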
WLAN Encryption Methods
Along with the method used for authentication, the choice of encryption method is a very
important part of deploying a wireless LAN. Many of the encryption methods that were
implemented in earlier wireless LAN standards have been proven insecure and have been
deprecated in favor of more modern methods. As time goes on, this is sure to happen with all encryption
techniques as they are used more commonly (thus becoming a target for exploitation) and as
processing power continues to increase.
Here are the WLAN encryption methods we'll review today:
 Wired Equivalent Privacy (WEP)
 Wi-Fi Protected Access (WPA)
 Wi-Fi Protected Access 2 (WPA2)
The first widely used standard for wireless LANs was 802.11 (prime); this included the Wired
Equivalent Privacy (WEP) algorithm, which was used for security. WEP utilizes RC4 for
encryption and has been deprecated because of vulnerabilities that can be used to recover the
security keys.
In response to the vulnerabilities found in WEP, Wi-Fi Protected Access (WPA) was defined.
WPA uses the Temporal Key Integrity Protocol (TKIP), which generates dynamic per-packet keys
(something WEP did not support) while still relying on RC4 for encryption. TKIP was used with
WPA until vulnerabilities were found in it as well. These vulnerabilities center on the fact that
TKIP reuses some of the same mechanisms as WEP, which allows similar attacks.
In response to the vulnerabilities in WPA/TKIP, the IEEE 802.11i standard was defined and
implemented; the IEEE 802.11i standard is also referred to as WPA2. WPA2 replaced TKIP with
Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)
which is based on Advanced Encryption Standard (AES); it is common for the WPA2 encryption
method to be referred to as AES. As of this writing, there are no easy methods that have been
found to break AES.
Summary
How secure a wireless LAN is depends greatly on a number of configuration parameters that must
be entered correctly. The problem with many existing wireless LANs is that the people
implementing them simply do not have the security knowledge required to maintain a secure
wireless network.
All existing and future wireless LAN implementers should make the effort to learn about the
most secure methods provided by the chosen equipment (and quite possibly be part of the
equipment selection process). The advantage that most modern equipment has is that the WPA2
standard is supported and not that hard to implement.
Lesson 3: Mobile networking and the role of cellular networks.

What are mobile networks?


A mobile network, also known as a cellular network, enables wireless communication between
many end users, and across vast distances, by transmitting signals using radio waves.
Most portable communication devices – including mobile phone handsets, laptops, tablets, and
so on – are equipped to connect to a mobile network and enable wireless communication through
phone calls, electronic messages and mail, and data.
How do mobile networks work?
Mobile networks are effectively a web of what’s known as base stations. These base stations
each cover a specific geographical land area – called a cell – and are equipped with at least one
fixed-location transceiver antenna that enables the cell to send and receive transmissions between
devices using radio waves.
When people experience poor reception or connection using their mobile devices, this is usually
because they aren’t in close enough range to a base station. This is also why, in order to provide
the best possible network coverage, many network providers and operators will employ as many
base station transceivers as they can, and overlap their cell areas.
How mobile devices connect to mobile networks
In the past, mobile phones – or portable transceivers – used an analog technology called AMPS
(Advanced Mobile Phone System) to connect to cellular networks. Today, however, portable
communication devices such as the Apple iPhone or Samsung Galaxy Android phone use digital
cellular technologies to send and receive transmissions.
These technologies can include:
 global system for mobile communications (GSM).
 code division multiple access (CDMA).
 time division multiple access (TDMA).
What is the difference between GSM and CDMA?
Devices that use the global system for mobile communications (GSM):
 can transmit data and voice at the same time
 historically relied on weaker built-in encryption and are typically considered less secure
 store data on a subscriber identity module (SIM) card that can be transferred between
devices
Devices that use code division multiple access (CDMA), on the other hand:
 cannot send both data types at the same time
 have built-in encryption and more security
 store data on the mobile device itself, rather than a SIM
Another key difference is in terms of usage: GSM is the predominant technology used in Europe
and other parts of the world, while CDMA is used in fewer countries.
What are the different types of mobile networks?
Mobile networks have become progressively faster and more advanced over the past few
decades.
2G
2G dates back to the early 1990s and eventually enabled early SMS and MMS messaging on
mobile phones. It is also noteworthy because it marked the move from the analog 1G to digital
radio signals. Its use has been phased out in some areas of the world, such as Europe and North
America, but 2G is still available in many developing regions.
3G
3G was introduced in the early 2000s, and is based on universal mobile telecommunication
service (UMTS) standards. For the first time, mobile devices could use web browsers and stream
music and videos. 3G is still widely in use around the world today.
4G
4G was first introduced around 2010 and offered a significant step forward for mobile networks.
Speed increases significantly with 4G, enabling advanced streaming capabilities and better
connectivity and performance for mobile games and other smartphone apps even when not
connected to WiFi.
5G
5G is the newest addition to the family of mobile networks, rolling out at the end of the 2010s
and still being introduced in major centres around the world today. Through high-frequency radio
waves, the 5G network offers significantly increased bandwidth and is approximately 100 times
faster than the upper limit of 4G.
Different mobile networks providers in the UK
Mobile networks vary across the United Kingdom, but all are regulated by Ofcom, the regulator and
competition authority for UK communications industries such as fixed-line telecoms, mobile,
and wireless device airwaves. It’s worth noting that mobile networks can also fall under the
jurisdiction of the Financial Conduct Authority when offering services such as phone insurance.
What are the UK’s main mobile networks?
The UK has four main mobile network providers:
1. Vodafone
2. EE
3. O2
4. Three
Between them, these four mobile operators – known as the big four – own and manage the UK's
mobile network infrastructure. They’re also known as host mobile phone networks, supporting
all other mobile service providers – called mobile virtual network operators (MVNOs) – in the
UK.
Examples of mobile virtual network operators in the UK
 ID Mobile, which uses the Three network
 GiffGaff, which uses the O2 network
 Tesco Mobile, which uses the O2 network
 Virgin Mobile from Virgin Media, which uses the Vodafone and O2 networks
 Sky Mobile, which uses the O2 network
 BT Mobile, which uses the EE network
 Plusnet Mobile, which uses the EE network
 Asda Mobile, which uses the Vodafone network
 VOXI, which uses the Vodafone network
 SMARTY, which uses the Three network
 Talkmobile, which uses the Vodafone network
 Lebara, which uses the Vodafone network
Other mobile phone businesses, such as Carphone Warehouse, work with multiple providers to
offer consumers several options in one place when looking for a new phone provider.
Competition between mobile providers
Regardless of which mobile provider UK customers choose, there are just four
networks supporting the provider’s service. This means that having the UK’s fastest or most
reliable network is a huge selling point, and many customers use a dedicated coverage checker to
investigate their preferred option. It also means that providers offer a number of additional perks
and mobile phone deals to help secure mobile phone contracts.
These benefits might include:
 reduced tariffs for customers who sign up for a rolling monthly contract
 data plans such as an unlimited data allowance or data rollover, which allows customers
to roll over any unused data at the end of the month into the next month
 deals and discounts for other services offered by the providers, such as household
broadband deals or mobile broadband services
 access to affiliated entertainment services, such as Netflix, Amazon Prime, or BT Sport
 discounted SIM-only deals and plans such as a reduced one-month rolling SIM or a 12-
month SIM

Chapter 10: Network Management and Administration


Lesson 1: Network administration tasks and responsibilities.
Responsibilities of the Network Administrator
As a network administrator, your tasks generally fall into the following areas:
 Designing and planning the network
 Setting up the network
 Maintaining the network
 Expanding the network
Each task area corresponds to a phase in the continuing life cycle of a network. You might be
responsible for all the phases, or you might ultimately specialize in a particular area, for
example, network maintenance.
Designing the Network
The first phase in the life cycle of a network involves creating its design, a task not usually
performed by new network administrators. Designing a network involves making decisions about
the type of network that best suits the needs of your organization. In larger sites this task is
performed by a senior network architect: an experienced network administrator familiar with
both network software and hardware.
Setting Up the Network
After the new network is designed, the second phase of network administration begins, which
involves setting up and configuring the network. This consists of installing the hardware that
makes up the physical part of the network, and configuring the files or databases, hosts, routers,
and network configuration servers.
The tasks involved in this phase are a major responsibility for network administrators. You
should expect to perform these tasks unless your organization is very large, with an adequate
network structure already in place.
Maintaining the Network
The third phase of network administration consists of ongoing tasks that typically constitute the
bulk of your responsibilities. They might include:
 Adding new host machines to the network
 Administering network security
 Administering network services, such as NFS services, name services, and electronic mail
 Troubleshooting network problems

Expanding the Network


The longer a network is in place and functioning properly, the more your organization
might want to expand its features and services. Initially, you can increase network
population by adding new hosts and expanding network services by providing
additional shared software. But eventually, a single network will expand to the point
where it can no longer operate efficiently. That is when it must enter the fourth phase
of the network administration cycle: expansion.

Several options are available for expanding your network:

 Setting up a new network and connecting it to the existing network using a machine functioning as a router, thus creating an internetwork
 Configuring machines in users' homes or in remote office sites and enabling
these machines to connect over telephone lines to your network
 Connecting your network to the Internet, thus enabling users on your network
to retrieve information from other systems throughout the world
 Configuring UUCP communications, enabling users to exchange files and
electronic mail with remote machines

What is TCP/IP?
A network communications protocol is a set of formal rules that describe how
software and hardware should interact within a network. For the network to function
properly, information must be delivered to the intended destination in an intelligible
form. Because different types of networking software and hardware need to interact to
perform the networking function, designers developed the concept of the
communications protocol.

The Solaris operating environment includes the software needed for network
operations for your organization. This networking software implements the
communications protocol suite, collectively referred to as TCP/IP. TCP/IP is
recognized as a standard by major international standards organizations and is used
throughout the world. Because it is a set of standards, TCP/IP runs on many different
types of computers, making it easy for you to set up a heterogeneous network running
the Solaris operating environment.

TCP/IP provides services to many different types of computers, operating systems, and networks.
Types of networks range from local area networks, such as Ethernet, FDDI, and Token Ring, to
wide-area networks, such as T1 (telephone lines), X.25, and ATM.
You can use TCP/IP to construct a network out of a number of local-area networks.
You can also use TCP/IP to construct a wide-area network by way of virtually any
point-to-point digital circuit.
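To see the protocol suite in action at the application level, here is a minimal sketch that starts a TCP echo server on the loopback interface and has a client exchange a message with it through the standard sockets API. The port number is an arbitrary choice for the example.

import socket
import threading
import time

def echo_server(host: str = "127.0.0.1", port: int = 5050) -> None:
    """Accept one TCP connection and echo back whatever it receives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # crude wait for the server socket to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 5050))
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))  # b'hello over TCP/IP'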

Types of Hardware That Make Up a Solaris Network


The term local-area network (LAN) refers to a single network of computers limited to a
moderate geographical range, such as the floor of a building or two adjacent buildings. A local-
area network has both hardware and software components. From a hardware perspective, a basic
Solaris LAN consists of two or more computers attached to some form of local-area network
media.
Local-Area Network Media
The cabling or wiring used for computer networks is referred to as network media. Figure 2-
1 shows four computers connected by means of Ethernet media. In the Solaris LAN
environment, Ethernet is the most commonly used local-area network media. Other types of
local-area network media used in a Solaris LAN might include FDDI or Token Ring.
Figure 2-1 Solaris Local Area Network

Computers and Their Connectors


Computers on a TCP/IP network use two different kinds of connectors to connect to network
media: serial ports, and the ports on the network interface.
Serial Ports
Each computer has at least two serial ports, the connectors that enable you to plug a printer or
modem into the computer. The serial ports can be attached to the CPU board, or you might have
to purchase them. You use these ports when attaching a modem to the system to establish a PPP
or UUCP connection. PPP and UUCP actually provide wide-area network services, since they
can use telephone lines as their network media.
Network Interfaces
The hardware in a computer that enables you to connect it to a network is known as a network
interface. Many computers come with a preinstalled network interface; others can require you to
purchase the network interface separately.
Each LAN media type has its own associated network interface. For example, if you want to use
Ethernet as your network media, you must have an Ethernet interface installed in each host to be
part of the network. The connectors on the board to which you attach the Ethernet cable are
referred to as Ethernet ports. If you plan to use FDDI, each prospective host must have an
FDDI network interface, and so on.
How Network Software Transfers Information
Setting up network software is an involved task. Therefore, it helps to understand how the
network software you are about to set up will transfer information.
Figure 2-2 shows the basic elements involved in network communication.
Figure 2-2 How Information Is Transferred on a Network
In this figure, a computer sends a packet over the network media to another computer attached to
the same media.
How Information Is Transferred: The Packet
The basic unit of information to be transferred over the network is referred to as a packet. A
packet is organized much like a conventional letter.
Each packet has a header, which corresponds to the envelope. The header contains the addresses
of the recipient and the sender, plus information on how to handle the packet as it travels through
each layer of the protocol suite.
The message part of the packet corresponds to the letter itself. Packets can only contain a finite
number of bytes of data, depending on the network media in use. Therefore, typical
communications such as email messages are sometimes split into packet fragments.
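The toy example below illustrates the envelope analogy: a fixed-size header carrying the source, destination, and payload length, followed by the message itself. The format is invented purely for illustration and is not any real protocol's packet layout.

import struct

HEADER_FMT = "!4s4sH"          # 4-byte source, 4-byte destination, 16-bit payload length
HEADER_LEN = struct.calcsize(HEADER_FMT)

def build_packet(src: bytes, dst: bytes, payload: bytes) -> bytes:
    """Prepend a simple header (the 'envelope') to the payload (the 'letter')."""
    return struct.pack(HEADER_FMT, src, dst, len(payload)) + payload

def parse_packet(packet: bytes):
    """Split a packet back into its source, destination, and payload."""
    src, dst, length = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    return src, dst, packet[HEADER_LEN:HEADER_LEN + length]

pkt = build_packet(b"\xc0\xa8\x01\x0a", b"\xc0\xa8\x01\x14", b"hello, network")
print(parse_packet(pkt))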
Who Sends and Receives Information: The Host
If you are an experienced Solaris user, you are no doubt familiar with the term "host," a word
often used as a synonym for "computer" or "machine." From a TCP/IP perspective, only two
types of entities exist on a network: routers and hosts.
A router is a machine that forwards packets from one network to another. To do this, the router
must have at least two network interfaces. A machine with only one network interface cannot
forward packets; it is considered a host. Most of the machines you set up on a network will be
hosts.
It is possible for a machine to have more than one network interface but not function as a router.
This type of machine is called a multihomed host. A multihomed host is directly connected to
multiple networks through its network interfaces. However, it does not route packets from one
network to another.
When a host initiates communication, it is called a sending host, or the sender. For example, a
host initiates communications when its user types rlogin or sends an email message to another
user. The host that is the target of the communication is called the receiving host, or recipient.
For example, the remote host specified as the argument to rlogin is the recipient of the request to
log in.
Each host has three characteristics that help identify it to its peers on the network. These
characteristics include:
 Host name
 Internet address, or IP address, the form used in this book
 Hardware address
Host Name
The host name is the name of the local machine, combined with the name of your organization.
Many organizations let users choose the host names for their machines. Programs such
as sendmail and rlogin use host names to specify remote machines on a network. System
Administration Guide, Volume 1 contains more information about host names.
The host name of the machine also becomes the name of the primary network interface. This
concept becomes important when you set up the network databases or configure routers.
When setting up a network, you must obtain the host names of all machines to be involved. You
will use this information when setting up network databases, as described in "Naming Entities on
Your Network".
IP Address
The IP address is one of the two types of addresses each machine has on a TCP/IP network that
identifies the machine to its peers on the network. This address also gives peer hosts a notion
of where a particular host is located on the network. If you have installed the Solaris operating
environment on a machine on a network, you might recall specifying the IP address during the
installation process. IP addressing is a significant aspect of TCP/IP and is explained fully
in "Designing Your IPv4 Addressing Scheme".
Hardware Address
Each host on a network has a unique hardware address, which also identifies it to its peers. This
address is physically assigned to the machine's CPU or network interface by the manufacturer.
Each hardware address is unique.
This book uses the term Ethernet address to correspond to the hardware address. Because
Ethernet is the most commonly used network media on Solaris-based networks, the text assumes
that the hardware address of your Solaris host is an Ethernet address. If you are using other
network media, such as FDDI, refer to the documentation that came with your media for
hardware addressing information.
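For a quick look at these three identifiers on any system with Python installed, the standard library can report the host name, an IP address, and a hardware (MAC) address. The results depend entirely on the local resolver and interface configuration: gethostbyname may return a loopback address on some hosts, and uuid.getnode reads the MAC of only one interface (or a random value if none can be found).

import socket
import uuid

host_name = socket.gethostname()
ip_address = socket.gethostbyname(host_name)   # may return 127.0.0.1 on some setups
hardware = uuid.getnode()                      # 48-bit hardware address as an integer
mac = ":".join(f"{(hardware >> shift) & 0xff:02x}" for shift in range(40, -1, -8))

print(f"Host name:        {host_name}")
print(f"IP address:       {ip_address}")
print(f"Hardware address: {mac}")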
Reaching Beyond the Local-Area Network--the Wide-Area Network
As your network continues to function successfully, users might need to access information
available from other companies, institutes of higher learning, and other organizations not on your
LAN. To obtain this information, they might need to communicate over a wide-area
network (WAN), a network that covers a potentially vast geographic area and uses network
media such as leased data or telephone lines, X.25, and ISDN services.
A prime example of a WAN is the Internet, the global public network that is the successor to the
WANs for which TCP/IP was originally developed. Other examples of WANs are enterprise
networks, linking the separate offices of a single corporation into one network spanning an
entire country, or perhaps an entire continent. It is entirely possible for your organization to
construct its own WAN.
As network administrator, you might have to provide access to WANs to the users on your local
net. Within the TCP/IP and UNIX community, the most commonly used public network has been
the Internet. Information about directly connecting to the Internet is outside the scope of this
book. You can find many helpful books on this subject in a computer bookstore.

Lesson 2: Network monitoring and performance optimization.


What is Network Optimization?

In short, Network Optimization refers to the tools, techniques, and best practices used to
monitor and improve network performance. It involves analyzing the network infrastructure,
identifying bottlenecks and other performance issues, and implementing solutions to eliminate or
mitigate them. Network optimization techniques can include network performance
monitoring, network troubleshooting, network assessments, and more.
The goal of network optimization is to ensure that data and other network traffic can flow
smoothly and quickly across the network, without delays, interruptions, or other problems. This
can help businesses to improve their productivity, reduce downtime, and enhance the user
experience for their employees and customers.
Network optimization can involve a range of techniques and technologies, including optimizing
network protocols and settings, upgrading network hardware, and implementing advanced
networking tools such as load balancers, content delivery networks (CDNs), and software-
defined networking (SDN). It can also involve ongoing monitoring and management of the
network, to ensure that it continues to perform optimally over time.
An optimized network is one that should be able to sustain the demands of users, applications,
and your business.
Why is Network Optimization Important?

In today's digital age, a reliable and efficient network is essential for businesses to remain
competitive and successful. Network optimization can help businesses to maximize their network
performance, reduce downtime and costs, and enhance their overall security posture.
Network optimization is important for several reasons, including:
1. Improved Performance: By optimizing a network, businesses can ensure that data and
other network traffic can flow smoothly and quickly across the network. This can help to
reduce latency and other performance issues, improving the user experience for
employees and customers alike. Faster network speeds can also help businesses to be
more productive and responsive, as they can access the data and resources they need
more quickly.
2. Reduced Downtime: Network optimization can help to identify and address potential
sources of downtime, such as hardware failures, network congestion, and security threats.
By proactively addressing these issues, businesses can minimize the risk of unplanned
outages that can disrupt operations and impact their bottom line.
3. Cost Savings: By optimizing their network, businesses can reduce the need for costly
hardware upgrades and other investments. They can also avoid potential fines and other
penalties associated with network downtime or security breaches.
4. Enhanced Security: Network optimization can help to improve the security of a network
by identifying and addressing vulnerabilities and other risks. This can help to protect
sensitive data and other valuable assets, reducing the risk of cyberattacks and other
security incidents.
How to Optimize Network Performance: The Network Performance Monitoring
Technique

Although networks have different requirements depending on the size of the network, the scope
of the business and the number of users and applications, the tips for optimizing network
performance remain the same.
Network optimization is all about:
 Identifying network problems/ areas for improvement
 Improving your network performance with concrete changes
 Comparing performance before and after making changes
For example, implementing a SASE architecture or migrating from an MPLS network to an SD-
WAN network is a way to optimize your network performance by upgrading your network. But it
doesn’t end there. It’s important to monitor your SD-WAN migration to compare performance
before and after the migration, to ensure your network performance is actually being optimized.
That's why a network performance monitoring tool is your perfect network optimization tool!
A network performance monitoring tool is a fundamental component of network optimization.
Using an NPM tool as a network optimization tool empowers network administrators with the
data and insights needed to make informed decisions, troubleshoot issues efficiently, and
implement targeted optimizations that lead to a more reliable and efficient network
infrastructure.
How can you deploy this magical network optimization tool? Let's get into that!
Step 1. Deploy Network Performance Monitoring for An Efficient Network Optimization
Technique

User complaints about network issues are a sure sign that your network may not be performing
optimally. But you can’t let your users be your monitoring tool or your network optimization
tool.
Obkio Network Performance Monitoring software monitors end-to-end network performance so
you can monitor performance from your local network (LAN monitoring, VPN), as well as third-
party networks (WAN, ISP, and Internet Peering) to identify and troubleshoot network issues,
and optimize network performance!
Deploy Network Monitoring Agents in your key network locations (head office, remote offices,
data centers) to monitor end-to-end network performance and:
 Measure core network metrics
 Continuously test your network for performance degradation
 Proactively identify network issues before they affect users
 Simulate user experience with synthetic traffic
 Collect data to help with network troubleshooting
 Compare network changes with historical data

Step 2. Measure Network Metrics: Your Key Network Optimization KPIs

An important step in the network optimization process is to measure a series of key network
metrics, which will help you identify any issues and will become your key network optimization
KPIs.

Once you’ve deployed Obkio Monitoring Agents in key network locations, they will continuously
measure key network metrics like the following (a short computation sketch follows the list):
 Jitter: Jitter is a measure of the variation in the delay of received packets in a network. It is often
caused by congestion, routing changes, or network errors. Jitter is usually expressed as an average
over a certain time period, and a high jitter value can cause problems such as voice or video
distortion, dropped calls, and slow data transfer rates.
 Packet Loss: Packet loss is the percentage of data packets that do not arrive at their destination. It
can be caused by network congestion, routing issues, faulty hardware, or software errors. High
packet loss can lead to slow data transfer rates, poor voice or video quality, and interruptions in
network connectivity.
 Latency: Latency is the time it takes for a data packet to travel from its source to its destination. It
is affected by factors such as network congestion, distance, and routing. High latency can cause
slow data transfer rates, poor voice or video quality, and delays in network responsiveness.
 VoIP Quality: VoIP quality refers to the clarity and reliability of voice calls made over the
internet. It is typically measured using the MOS (Mean Opinion Score) scale, which ranges from
1 (worst) to 5 (best) and is based on user feedback. Factors that can affect VoIP quality include
packet loss, jitter, latency, and network congestion.
 Network Throughput: Throughput is the amount of data that can be transmitted over a network in
a given amount of time. It is affected by factors such as network congestion, packet loss, and
latency. Throughput is usually expressed in bits per second (bps) or bytes per second (Bps).
 And QoE: QoE (Quality of Experience) is a measure of how satisfied users are with their
experience using a particular application or service over a network. It takes into account factors
such as network performance, usability, and user expectations. QoE can be measured using
various metrics such as network response time, error rate, and user feedback.
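As a small illustration of how several of these metrics fall out of raw measurements, the sketch below computes packet loss, average latency, and a simple jitter estimate from a set of hypothetical round-trip-time samples (a lost probe is recorded as None).

import statistics

# Hypothetical round-trip-time samples in milliseconds; None marks a lost probe.
rtt_samples = [22.1, 23.4, None, 21.8, 30.2, 22.5, None, 24.0, 22.9, 23.1]

received = [s for s in rtt_samples if s is not None]
packet_loss = 1 - len(received) / len(rtt_samples)
latency_avg = statistics.mean(received)
# Simple jitter estimate: mean absolute difference between consecutive RTTs.
jitter = statistics.mean(abs(a - b) for a, b in zip(received, received[1:]))

print(f"Packet loss:     {packet_loss:.0%}")
print(f"Average latency: {latency_avg:.1f} ms")
print(f"Jitter:          {jitter:.1f} ms")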
Step 3. Identify Network Problems Affecting Your Network Optimization Strategy

Measuring network metrics in all your network locations will then allow you to easily and
quickly determine what issues, if any, are affecting your network optimization. You can identify:
 What the problem is
 Where the problem is located
 When the problem occurred
 Who is responsible for this network segment
 What actions to take
With this information, you then know where to direct your network optimization efforts, and
what actions to take.
Whether you need to troubleshoot the network problems, contact your MSP or ISP, or upgrade
your network.
Pro-Tip: Obkio allows you to set up automatic alerts for network problems, or when there’s a
sign of network performance degradation so you know exactly when it’s time to start optimizing
your network performance.
What Network Problems Affect Network Optimization?
In your network journey, as in real life, there will be bumps along the road that may deter or slow
down your travels. In your network optimization journey, any network issue will affect your
network's optimal performance, which is why you have a tool like Obkio to help you find and
solve them.
Here are several network problems that can impact network optimization:
1. Bandwidth limitations: Insufficient bandwidth can result in slow network speeds and
poor performance, particularly during peak usage periods.
2. Network congestion: Network congestion can occur when there is too much traffic on a
network, causing delays, packet loss, and other performance issues.
3. Network downtime: Network downtime can be caused by a range of factors, including
hardware failures, software issues, and security breaches. Downtime can be costly for
businesses, resulting in lost productivity and revenue.
4. Security threats: Security threats such as malware, viruses, and hacking attempts can
compromise network performance and compromise sensitive data.
5. Configuration errors: Misconfigured network settings can result in poor performance,
security vulnerabilities, and other issues that impact network optimization.
6. Inadequate hardware: Inadequate hardware can result in slow network speeds and poor
performance, particularly for high-demand applications and services.

Step 4. Compare Network Performance: Before & After Network Optimization Efforts

Analyzing historical data is crucial for network optimization because it provides insights into
network usage patterns and helps identify areas for improvement. By studying data on network
traffic, usage patterns, and performance metrics, network engineers can gain a better
understanding of how the network is being used and where bottlenecks or inefficiencies may be
occurring.

Without a tool like Obkio, you can’t truly understand if the changes you’re making to your
network are actually beneficial unless you hear feedback from your users. That could take a lot
of time and won’t allow you to be proactive if something is going wrong.

Obkio measures and collects historical network performance data, so you can analyze, compare,
and troubleshoot performance from the past and compare performance before you optimize your
network performance and after.

This way you can understand:

 The impact of your network optimization efforts
 If your network performance has improved
 If you need to continue optimizing your network performance
Here are some specific reasons why historical data analysis is important for network
optimization:
 Identifying peak usage periods: Historical data can show when network usage is
highest, such as during certain times of day or in response to specific events or activities.
This information can be used to adjust network capacity or routing to ensure optimal
performance during high usage periods.
 Pinpointing areas of congestion: By analyzing historical data on network traffic,
engineers can identify areas of the network that are frequently congested or experiencing
slow performance. This information can be used to optimize network routing or adjust
network configurations to alleviate congestion and improve performance.
 Monitoring network trends: Historical data can reveal trends in network usage and
performance over time, such as changes in traffic patterns or the impact of network
upgrades or changes. This information can be used to make informed decisions about
future network upgrades or changes.
 Improving capacity planning: By analyzing historical data on network usage and traffic
patterns, engineers can make more accurate predictions about future network capacity
needs. This information can be used to plan network upgrades and expansions that are
better aligned with actual usage patterns, reducing the risk of overprovisioning or
underutilization.
Step 5. Implement Network Optimization Strategies

Now that you've identified the weaknesses in your network, it's time to optimize network
performance!
The network optimization strategies you implement will depend on the network problems you
uncovered, and the information you collected from Obkio's app. We'll talk more in depth about
our "11 Proven Network Optimization Strategies" at the end of the article.
Here is a brief summary of some key network optimization strategies:
1. Troubleshoot Network Issues: By troubleshooting network problems as they arise, you
can quickly resolve issues and prevent them from impacting overall network
performance. This can help to ensure that your network is running smoothly and
delivering the speed, reliability, and security your business needs to succeed.
2. Check Network Connections: Make sure all network connections are properly
configured and working as they should. Check cables, routers, switches, and other
hardware to ensure they are connected and configured correctly.
3. Upgrade Network Hardware: If your network is outdated or underpowered, upgrading
your hardware can be an effective way to improve performance. Consider upgrading to
faster switches, routers, and servers, as well as adding more bandwidth and storage
capacity as needed.
4. Optimize Network Settings: Adjusting network settings such as packet size, buffer
sizes, and Quality of Service (QoS) settings can help to improve network performance.
For example, configuring QoS settings can prioritize important traffic such as voice and
video traffic over less critical traffic, reducing latency and improving user experience.
5. Implement Load Balancing: Load balancing distributes network traffic across multiple
servers, helping to optimize resource utilization and prevent overloading of any one
server. This can improve network performance by reducing congestion and minimizing
downtime.
6. Use Content Delivery Networks (CDNs): CDNs are distributed networks of servers that
cache and deliver web content to users from the server closest to them. This can help to
reduce latency and improve network performance for users accessing content from
different parts of the world.
7. Implement Software-Defined Networking (SDN): SDN allows for centralized
management and control of network traffic, making it easier to optimize network
performance and adjust to changing network demands. This can help businesses to be
more agile and responsive to their network needs.
8. Conduct Regular Network Maintenance: Regular network maintenance, including
updates and patches, can help to prevent security threats and other issues that can impact
network performance. This includes monitoring network traffic and keeping an eye out
for potential issues that could cause problems down the line.
9. Consult With Network Experts: If you're not able to identify the source of the problem
or resolve it on your own, consider consulting with network experts who can help you
diagnose and fix the issue.
10. Bandwidth Optimization: This involves managing network bandwidth to ensure that
critical applications and services have the necessary bandwidth to function properly.
11. Network Segmentation: Dividing the network into smaller sub-networks can help
improve performance by reducing network congestion and improving security.
By implementing these network optimization strategies and regularly monitoring and optimizing
your network, you can ensure that your network is running at peak performance, delivering the
speed, reliability, and security your business needs to thrive.
Step 6. Continuous Network Optimization: It's An Ongoing Journey

No matter how efficiently your network is performing, networks don’t stay perfectly optimized
forever.
Network requirements change as you add new applications and users, upgrade devices, and face
increasing customer demands.
Network optimization needs to be continuous - so you need a dedicated team and solution to
keep putting in the work to optimize your network.
Once you’ve deployed Obkio, keep it on as a permanent part of your team to keep an eye on your
network, help you with network optimization and monitoring, and ensure you’re always
following the steps from this list!
Why is Continuous Network Optimization Important?

At this point you may be thinking, "Do I really need to keep up with this?"
The short answer is: yes.
Continuous network optimization is important for several reasons:
1. Changing network demands: As the needs of your business evolve, your network must
evolve with them. By continuously optimizing your network, you can ensure that it is
able to handle changing demands and support new applications and services as they are
introduced.
2. Improved performance: Continuous network optimization can help to identify and
address performance issues before they become major problems. This can improve
network speed and reliability, minimizing downtime and maximizing productivity.
3. Enhanced security: Network security threats are constantly evolving, and continuous
optimization can help to identify and address vulnerabilities before they can be exploited.
This includes updating security protocols, monitoring for potential threats, and
conducting regular security audits.
4. Cost savings: By continuously optimizing your network, you can identify and address
inefficiencies and unnecessary costs, such as excess bandwidth or underutilized
hardware. This can help to reduce costs and improve your return on investment.
5. Competitive advantage: A well-optimized network can give your business a competitive
advantage by delivering better performance and reliability than your competitors. This
can help you to attract and retain customers, improve employee productivity, and achieve
your business objectives more efficiently.
In summary, continuous network optimization is important for ensuring that your network is able
to meet the changing demands of your business and deliver the speed, reliability, and security
your business needs to succeed. By optimizing your network on an ongoing basis, you can stay
ahead of the curve and remain competitive in a rapidly evolving business environment.
What is the Goal of Network Optimization?

The goal of network optimization is to improve the performance and efficiency of a computer
network. This involves identifying and addressing bottlenecks and other sources of poor
network performance, with the aim of ensuring that data and other network traffic can flow
smoothly and quickly across the network.
The specific objectives of network optimization may vary depending on the needs of the business
or organization. For example, some businesses may focus on improving network speed and
reducing latency to enhance the user experience and improve productivity. Others may prioritize
network security, seeking to identify and address vulnerabilities and other risks to protect
sensitive data and other valuable assets.
Overall, the goal of network optimization is to create a network that is reliable, fast, secure, and
cost-effective, enabling businesses to achieve their goals and objectives in an efficient and
productive manner. Achieving this goal typically involves a combination of hardware and
software optimization, ongoing monitoring and management, and a focus on continuous
improvement and innovation.
10 Proven Network Optimization Strategies You Need to Implement

We have 10 proven network optimization strategies that will take your network performance to
the next level! From bandwidth optimization to network segmentation and load balancing, we've
got all the tricks of the trade to make your network lightning-fast and super-efficient. So buckle
up and get ready to optimize, because your network is about to get an upgrade!
I. Network Monitoring: Network Optimization Strategy #1

The first network optimization strategy won't surprise you, since we've been using it to collect
precious information about your network health. Network monitoring is a critical network
optimization strategy that involves the continuous monitoring and analysis of network
performance data to identify potential issues and make necessary improvements.
P.S. You can use Obkio's Free Trial for all your network monitoring needs!
Here are some ways that network monitoring can help optimize your network:
1. Identifying Network Bottlenecks: Network monitoring tools can help you identify
bottlenecks in your network by analyzing traffic data and pinpointing areas of congestion.
This information can help you make adjustments to your network infrastructure, such as
adding additional bandwidth or optimizing routing paths, to improve performance.
2. Troubleshooting Network Issues: Network monitoring tools can also help you quickly
identify and troubleshoot network issues when they occur. For example, if a server goes
down, network monitoring tools can send an alert to your IT team, allowing them to
quickly investigate and resolve the issue before it affects the entire network.
3. Capacity Planning: Network monitoring tools can help you plan for future network
growth by tracking network usage trends and providing insights into how much
bandwidth and other resources your network will need to accommodate future growth.
II. Bandwidth Optimization: Network Optimization Strategy #2

Bandwidth optimization is a network optimization strategy that involves managing network
bandwidth to ensure that critical applications and services have the necessary bandwidth to
function properly.
Here are some ways that bandwidth optimization can help improve network performance (a small
traffic-shaping sketch follows the list):
1. Prioritizing Network Traffic: One of the most important aspects of bandwidth
optimization is prioritizing network traffic. By assigning different levels of priority to
different types of traffic, such as voice or video traffic, you can ensure that critical
applications receive the necessary bandwidth and resources to function properly.
2. Traffic Shaping: Traffic shaping is another important technique for bandwidth
optimization. This involves controlling the flow of network traffic to prevent congestion
and ensure that critical applications receive the necessary bandwidth. For example, you
could use traffic shaping to limit the amount of bandwidth that is allocated to non-critical
applications like file downloads during times of high network traffic.
3. Compression: Compression can also help optimize network bandwidth by reducing the
size of data packets before they are transmitted over the network. This can help improve
network performance by reducing the amount of data that needs to be transmitted and
reducing network congestion.
4. Caching: Caching is another technique that can help optimize network bandwidth by
storing frequently accessed data on local servers or devices. This can help reduce the
amount of data that needs to be transmitted over the network, improving network
performance and reducing network congestion.
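To make the traffic-shaping idea concrete, here is a minimal token-bucket sketch: traffic is allowed through as long as tokens are available, which caps the average rate while still permitting short bursts. The rate, burst size, and packet sizes are illustrative assumptions, not recommendations.

import time

class TokenBucket:
    """Simple token-bucket shaper: allows bursts up to `capacity`, refills at `rate` tokens/s."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False  # packet would exceed the shaped rate: queue or drop it

# Shape bulk traffic to roughly 1 Mbit/s (125,000 bytes/s) with a 20 KB burst allowance.
shaper = TokenBucket(rate=125_000, capacity=20_000)
for i in range(5):
    print(f"packet {i}: {'sent' if shaper.allow(1_500) else 'delayed'}")
    time.sleep(0.005)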
III. Load Balancing: Network Optimization Strategy #3

Load balancing is a network optimization strategy that involves distributing network traffic
across multiple servers or devices to prevent overloading and ensure optimal performance.
Here are some ways that load balancing can help optimize your network (a minimal round-robin sketch follows the list):
1. Reducing Downtime: Load balancing can help reduce downtime by distributing network
traffic across multiple servers. If one server goes down, the load balancer can
automatically redirect traffic to another server, ensuring that critical applications remain
accessible and minimizing the impact of server failures.
2. Improving Network Performance: Load balancing can also help improve network
performance by distributing network traffic evenly across multiple servers. This can help
prevent overloading and ensure that each server is operating at optimal capacity,
improving overall network performance.
3. Optimizing Resource Utilization: Load balancing can help optimize resource utilization
by distributing network traffic across multiple servers. This can help prevent servers from
being underutilized or overutilized, ensuring that resources are used efficiently and
reducing the need for additional hardware or infrastructure.
4. Providing Redundancy: Load balancing can also provide redundancy by distributing
network traffic across multiple servers or devices. This can help ensure that critical
applications remain accessible in the event of hardware or software failures, improving
overall network reliability and network availability.
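As a minimal sketch of the idea, the snippet below implements plain round-robin selection over a pool of hypothetical backend addresses; production load balancers add health checks, weighting, and session persistence on top of this.

import itertools

class RoundRobinBalancer:
    """Distribute requests across a pool of backend servers in round-robin order."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self) -> str:
        return next(self._cycle)

# Hypothetical backend pool; addresses are for illustration only.
balancer = RoundRobinBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
for request_id in range(6):
    print(f"request {request_id} -> {balancer.pick()}")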
IV. Optimizing Network Settings: Network Optimization Strategy #4

Optimizing network settings is a crucial strategy for improving network performance and
ensuring that your network is running smoothly. It involves adjusting various network settings to
ensure that data can be transmitted efficiently and without delay.
Some of the network settings that can be optimized include the following (a QoS-marking sketch follows the list):
1. Bandwidth allocation: Allocating sufficient bandwidth to each device and application is
important for ensuring that network traffic flows smoothly. By setting priorities for
different applications and devices, you can ensure that critical applications receive the
necessary bandwidth, while less important applications are allocated a lower priority.
2. Quality of Service (QoS): QoS is a mechanism that allows you to prioritize network
traffic based on the type of data being transmitted. By setting QoS policies, like QoS for
VoIP, you can ensure that critical applications such as VoIP or video conferencing receive
a higher priority than less important applications such as email. A short example of
marking VoIP traffic follows this list.
3. Network security: Ensuring that your network is secure is critical for preventing
unauthorized access and protecting sensitive data. By implementing security measures
such as firewalls, intrusion detection systems, and virtual private networks (VPNs), you
can improve the security of your network.
4. Network latency: Network latency refers to the delay that occurs when data is
transmitted over a network. By optimizing network settings such as MTU size and TCP
window size, you can reduce network latency and improve the overall performance of
your network.
5. Network monitoring: Monitoring your network is important for identifying issues and
troubleshooting problems. By implementing network monitoring tools, you can track
network performance metrics such as bandwidth usage, packet loss, and latency, and take
corrective action when necessary.
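To show what the QoS policy in point 2 can look like at the host, here is a short Python sketch that marks a UDP socket's packets with the Expedited Forwarding DSCP value commonly used for VoIP. The destination address is a documentation-range placeholder, and the marking only helps if the switches and routers along the path are configured to trust and act on it.

import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) is commonly used for VoIP.
# The IP TOS byte carries the DSCP value in its upper six bits.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2   # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the operating system to mark outgoing packets from this socket.
# Honoured on most Unix-like systems; some platforms ignore the option.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# 192.0.2.10 is a documentation-range placeholder, not a real endpoint.
sock.sendto(b"voip-payload", ("192.0.2.10", 5004))
sock.close()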
V. Checking Network Connections: Network Optimization Strategy #5

The network optimization strategy of checking network connections involves ensuring that all
components of a network are properly connected and configured to ensure optimal performance.
1. Check physical connections: Start by physically inspecting all network components,
such as cables, routers, switches, and other hardware. Ensure that all cables are securely
plugged into the correct ports, and that all hardware is properly connected and powered
on.
2. Verify IP configurations: Verify that all devices are configured with the correct IP
addresses, subnet masks, and default gateway settings. Incorrect IP configurations can
cause connectivity issues and slow down the network. A quick sketch for confirming a
host's address and DNS settings follows this list.
3. Check network settings: Verify that network settings, such as DNS server addresses and
DHCP settings, are properly configured. Incorrect network settings can cause devices to
be unable to communicate with each other or access the internet.
4. Test network performance: Use network diagnostic tools, such as ping and traceroute,
to test network connectivity and identify any latency or packet loss issues. These tools
can also help you identify any misconfigured network devices that may be causing
problems.
5. Update firmware and software: Ensure that all hardware and software components are
up to date with the latest firmware and software updates. Outdated software can cause
security vulnerabilities and performance issues.
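As a quick way to act on points 2 and 3, the following Python sketch reports which local address the operating system would use for outbound traffic and confirms that name resolution works with the currently configured DNS servers. The probe address and hostname are placeholders.

import socket

# Discover which local IP address the OS would use for outbound traffic by
# opening a UDP socket toward a public address. No packet is actually sent;
# connect() on a UDP socket only selects a route and source address.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    probe.connect(("8.8.8.8", 80))
    print("outbound interface address:", probe.getsockname()[0])
finally:
    probe.close()

# Check that name resolution works with the currently configured DNS servers.
# A failure here points at DNS or DHCP settings rather than cabling or routing.
try:
    print("example.com resolves to", socket.gethostbyname("example.com"))
except socket.gaierror:
    print("DNS lookup failed -- check DNS server and DHCP settings")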
Network connections can be checked at different levels, from the physical layer to the
application layer, and each level requires different techniques and tools to be checked effectively.
Here are some of the ways in which checking network connections can be implemented:
1. Physical layer: The physical layer refers to the actual physical connections between
devices on the network, such as cables and connectors. Checking the physical layer
involves ensuring that all cables and connectors are properly connected, and that there are
no physical obstructions or other issues that could affect network performance.
2. Data link layer: The data link layer is responsible for establishing and maintaining
connections between devices on the network. Checking the data link layer involves
ensuring that all devices are properly configured and that there are no issues with the
communication protocol being used.
3. Network layer: The network layer is responsible for routing data between devices on the
network. Checking the network layer involves ensuring that all routers and switches are
properly configured, and that there are no routing issues that could affect network
performance.
4. Transport layer: The transport layer is responsible for ensuring that data is transmitted
reliably between devices on the network. Checking the transport layer involves ensuring
that all devices are using the correct transport protocol, and that there are no issues with
congestion or packet loss.
5. Application layer: The application layer is responsible for providing services to end
users, such as email or web browsing. Checking the application layer involves ensuring
that all applications are functioning properly, and that there are no issues with
application-specific protocols or configurations.
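The physical and data link layers have to be checked by hand or with dedicated hardware, but the higher layers can be exercised from a script. The Python sketch below works up from name resolution to a TCP handshake to an HTTP request against a placeholder host, so a failure points toward a specific layer.

import http.client
import socket

HOST = "example.com"   # placeholder; use a host your users actually depend on

# Name resolution: can the host be resolved at all?
try:
    print("DNS resolution:", HOST, "->", socket.gethostbyname(HOST))
except socket.gaierror as exc:
    print("DNS resolution failed:", exc)

# Network/transport layers: can a TCP handshake be completed?
try:
    with socket.create_connection((HOST, 443), timeout=3):
        print("TCP connectivity to port 443: OK")
except OSError as exc:
    print("TCP connectivity failed:", exc)

# Application layer: does the service answer an HTTP request?
try:
    conn = http.client.HTTPSConnection(HOST, timeout=5)
    conn.request("HEAD", "/")
    print("HTTP response status:", conn.getresponse().status)
    conn.close()
except OSError as exc:
    print("HTTP check failed:", exc)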
VI. Upgrading Network Hardware: Network Optimization Strategy #6

Upgrading network hardware is a powerful network optimization strategy that can help to
improve network performance and reliability. Network hardware refers to the physical
components of a network, such as routers, switches, and network adapters, which are responsible
for transmitting and receiving data over the network.
Here are some of the ways in which upgrading network hardware can be implemented:
1. Increasing Bandwidth: One of the primary benefits of upgrading network hardware is
the ability to increase bandwidth. By upgrading to faster routers, switches, and network
adapters, you can increase the amount of data that can be transmitted over the network,
which can help to reduce network congestion and improve overall
network performance.
2. Enabling New Network Capabilities: Upgrading network hardware can also enable new
network capabilities that were previously unavailable. For example, upgrading to newer
routers and switches may enable support for newer network protocols or technologies,
such as IPv6 or 5G, which can provide faster and more reliable network performance.
3. Increasing Network Reliability: Upgrading network hardware can also increase network
reliability by reducing the likelihood of hardware failure. Older network hardware may
be more prone to failure or may not be able to handle the demands of modern network
traffic. By upgrading to newer hardware, you can ensure that your network is more
reliable and less prone to downtime or outages.
VII. Using Content Delivery Networks (CDNs): Network Optimization Strategy #7

Using Content Delivery Networks (CDNs) is an effective network optimization strategy that can
help to improve the speed and reliability of website and application delivery. A CDN is a
network of geographically distributed servers that work together to deliver content to end users
based on their location.
Here are some of the ways in which using a CDN can be implemented:
1. Improving Load Times: One of the primary benefits of using a CDN is improved load
times for websites and applications. By distributing content to servers that are located
closer to the end user, CDNs can reduce the time it takes for content to be delivered,
resulting in faster load times and a better user experience.
2. Reducing Server Load: Using a CDN can also help to reduce the load on the origin
server, which is the server that hosts the original content. By distributing content to
multiple servers, CDNs can reduce the amount of traffic that is directed to the origin
server, which can help to improve server performance and reduce the risk of downtime or
outages.
3. Improving Scalability: CDNs can also help to improve the scalability of websites and
applications. By distributing content to multiple servers, CDNs can handle large amounts
of traffic more effectively, allowing websites and applications to handle more concurrent
users without experiencing performance issues.
4. Enhancing Security: CDNs can also enhance security by providing protection against
distributed denial-of-service (DDoS) attacks. CDNs are designed to handle large amounts
of traffic, and can help to absorb the impact of DDoS attacks, preventing them from
overwhelming the origin server.
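A simple way to confirm that a CDN is actually improving load times is to fetch the same object through the CDN and directly from the origin and compare the timings. The Python sketch below uses placeholder URLs; substitute endpoints from your own deployment and repeat the measurement a few times, since caching strongly affects the results.

import time
import urllib.request

# Placeholder URLs: one served through a CDN edge, one hitting the origin
# directly. Substitute real endpoints from your own deployment.
URLS = {
    "cdn edge": "https://cdn.example.com/assets/logo.png",
    "origin":   "https://origin.example.com/assets/logo.png",
}

for label, url in URLS.items():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read()
        elapsed = (time.monotonic() - start) * 1000
        print(f"{label}: {len(body)} bytes in {elapsed:.0f} ms")
    except OSError as exc:
        print(f"{label}: request failed ({exc})")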
VIII. Implementing Software-Defined Networking (SDN): Network Optimization Strategy
#8

Implementing Software-Defined Networking (SDN) is a powerful network optimization strategy
that can help to improve network flexibility, scalability, and performance. SDN is an approach to
networking that separates the control plane from the data plane, allowing network administrators
to centrally manage and configure network devices through software.
Here are some of the ways in which SDN can be implemented:
1. Centralized Network Management: SDN enables centralized network management,
which makes it easier to configure and manage network devices. Rather than having to
configure individual switches and routers manually, network administrators can use
software to configure the entire network from a single location.
2. Improved Network Visibility: SDN provides improved network visibility, which makes
it easier to identify and troubleshoot network issues. By monitoring network traffic and
collecting data about network performance, SDN can help network administrators to
identify bottlenecks, detect anomalies, and optimize network performance.
3. Increased Network Flexibility: SDN provides increased network flexibility by enabling
network administrators to easily reconfigure network devices in response to changing
business needs. By separating the control plane from the data plane, network
administrators can change the behavior of the network without having to physically
reconfigure network devices.
4. Automated Network Management: SDN enables automated network management,
which can help to reduce the workload on network administrators. By using software to
configure and manage network devices, SDN can automate routine tasks such as network
configuration, network security, and network optimization.
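The sketch below is only meant to convey the shape of centralized, software-driven configuration: a script pushes a flow rule to a controller's northbound REST API. The controller URL, endpoint path, and JSON fields are entirely hypothetical; real controllers such as OpenDaylight or ONOS define their own APIs, data models, and authentication, so treat this as an outline rather than a recipe.

import json
import urllib.request

# Hypothetical northbound REST endpoint for an SDN controller.
CONTROLLER = "https://sdn-controller.example.net/api/flows"

# A made-up flow rule: prioritize UDP traffic toward a VoIP subnet.
flow_rule = {
    "name": "prioritize-voip",
    "match": {"dst_subnet": "10.20.0.0/24", "protocol": "udp"},
    "action": {"queue": "high-priority"},
}

request = urllib.request.Request(
    CONTROLLER,
    data=json.dumps(flow_rule).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request, timeout=5) as response:
    print("controller replied with status", response.status)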
IX. Network Troubleshooting: Network Optimization Strategy #9

Network troubleshooting is a critical network optimization strategy that involves identifying and
resolving network issues that are impacting network performance, reliability, and security. It
takes a systematic approach and may draw on a range of tools and techniques to diagnose and fix
problems, like Obkio's Network Performance Monitoring tool.
Here are some of the ways in which network troubleshooting can be implemented:
1. Network Monitoring: Network monitoring is an important aspect of network
troubleshooting, as it involves regularly monitoring network traffic and performance to
identify potential issues. Network monitoring tools can provide valuable information
about network traffic patterns, bandwidth utilization, and network errors, which can be
used to diagnose and resolve issues.
2. Diagnosing Network Issues: When network issues are identified, network administrators
must use a range of diagnostic tools and techniques to identify the root cause of the issue.
This may involve using network diagnostic tools such as ping, traceroute, and netstat to
identify network connectivity issues, as well as network packet capture tools to identify
issues with network traffic.
3. Resolving Network Issues: Once network issues have been diagnosed, network
administrators must take steps to resolve the issue. This may involve configuring network
devices, replacing faulty hardware, or adjusting network settings to improve performance
and reliability.
4. Testing Network Performance: After network issues have been resolved, it is important
to test network performance to ensure that the issue has been fully resolved. This may
involve using network performance testing tools to measure network throughput, latency,
and packet loss, and comparing the results to baseline performance metrics.
5. Continuous Improvement: Network troubleshooting is an ongoing process, and it is
important to continually monitor network performance and identify potential issues
before they become major problems. By implementing a continuous improvement
process, network administrators can identify opportunities to optimize network
performance and improve network reliability and security over time.
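Steps 2 and 4 often come down to measuring and comparing against a baseline. The Python sketch below samples TCP connection latency to a placeholder target, estimates loss from failed samples, and flags results that are well above an assumed baseline value; both the target and the baseline are illustrative.

import socket
import statistics
import time

TARGET = ("example.com", 443)   # placeholder host and port
BASELINE_MS = 40.0              # assumed historical baseline latency
SAMPLES = 10

measurements = []
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with socket.create_connection(TARGET, timeout=2):
            measurements.append((time.monotonic() - start) * 1000)
    except OSError:
        pass   # treat a failed connection as a lost sample
    time.sleep(0.5)

loss_pct = 100 * (SAMPLES - len(measurements)) / SAMPLES
if measurements:
    median_ms = statistics.median(measurements)
    print(f"median latency {median_ms:.1f} ms, loss {loss_pct:.0f}%")
    if median_ms > 2 * BASELINE_MS:
        print("latency is well above baseline -- investigate further")
else:
    print(f"all samples lost ({loss_pct:.0f}% loss) -- target unreachable")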
X. Conducting Regular Network Maintenance: Network Optimization Strategy #10
Conducting regular network maintenance is a critical network optimization strategy that involves
regularly checking and maintaining network devices, software, and infrastructure to ensure
optimal network performance, reliability, and security. Regular network maintenance can help to
prevent network downtime, improve network performance, and reduce the risk of cyber attacks.
Here are some of the ways in which conducting regular network maintenance can be
implemented:
1. Updating Network Software and Firmware: Regularly updating network software and
firmware is essential to ensuring network security and performance. Network
administrators should regularly check for and install software updates and security
patches to ensure that network devices are running the latest versions of software and are
protected against known security vulnerabilities.
2. Cleaning Network Devices: Network devices such as switches, routers, and servers can
accumulate dust and debris over time, which can impact their performance and reliability.
Regularly cleaning network devices can help to prevent overheating, reduce wear and
tear, and improve overall network performance.
3. Checking Network Cabling: Network cabling can become damaged or worn over time,
which can impact network performance and reliability. Network administrators should
regularly check network cabling to ensure that it is properly installed, undamaged, and
functioning correctly.
4. Backing Up Network Data: Regularly backing up network data is essential to ensure
that data is not lost in the event of a network outage or disaster. Network administrators
should regularly back up network data and test backups to ensure that data can be
restored quickly and efficiently in the event of a failure. A small backup-verification
sketch follows this list.
5. Monitoring Network Performance: Regularly monitoring network performance can
help to identify potential issues before they become major problems. Network
administrators should use monitoring tools to track network traffic, bandwidth utilization,
and other performance metrics, and should be alerted to potential issues in real-time.
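For point 4, a backup is only useful if it can be verified. The following Python sketch compares checksums of a source file and its backup copy; the file paths are placeholders, and a real backup routine would also periodically test an actual restore.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder paths -- point these at a real export and its backup copy.
source = "/srv/exports/network-config.tar.gz"
backup = "/mnt/backup/network-config.tar.gz"

if sha256_of(source) == sha256_of(backup):
    print("backup verified: checksums match")
else:
    print("backup mismatch: re-run the backup and investigate")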
How to Choose the Right Network Optimization Technique for Your Business

Choosing the right network optimization technique for your business depends on various factors,
including the specific needs, goals, and constraints of the organization. To give you a head start,
here are some tips to help you make an informed decision about the network optimization
technique that fits your business like a glove!
1. Identify Business Objectives: Start by understanding the business's primary objectives
for network optimization. Are they aiming to improve application performance, reduce
costs, enhance user experience, or ensure better security? Clearly defining the goals will
help in selecting the most appropriate network optimization techniques.
2. Analyze Network Traffic: Conduct a thorough analysis of the network traffic to identify
patterns, peak usage times, and potential bottlenecks. This will provide insights into
where your network optimization efforts should be focused, and which network
optimization techniques can address those areas.
3. Understand the Network Infrastructure: Familiarize yourself with the organization's
network infrastructure, including the types of devices, servers, and links used. Different
network optimization techniques may be required for LANs, WANs, and wireless
networks.
4. Consider Scalability: Choose network optimization techniques that can scale with the
growth of the business. The network needs of a small company might be significantly
different from those of a large enterprise.
5. Evaluate Cost-Effectiveness: Assess the cost of implementing and maintaining each
optimization technique. Some solutions might require significant investments in
hardware, software, or ongoing operational expenses.
6. Prioritize Security: Security should always be a top priority. Ensure that the chosen
network optimization techniques do not compromise the network's integrity or make it
vulnerable to cyber threats.
7. Vendor Support and Compatibility: If you plan to use commercial solutions, evaluate
the reputation and reliability of the vendors. Ensure that the chosen network optimization
techniques integrate well with your existing network infrastructure and systems.
8. Consider User Experience: Consider how the network optimization techniques will
impact end-users. Some techniques might introduce minor delays, which can be
acceptable for non-latency-sensitive applications but detrimental to real-time services.
9. Implement Network Monitoring: Network monitoring tools can help track the
effectiveness of network optimization techniques and identify any new challenges that
arise over time.
10. Stay Updated with Technology: Network optimization is an evolving field, and new
technologies and network optimization techniques emerge regularly. Stay informed about
the latest trends and advancements to ensure your network stays competitive and
efficient.
11. Test in Staging Environment: Before implementing any network optimization technique
in the production environment, perform thorough testing in a controlled staging
environment. This will help identify any potential issues or conflicts.
12. Consider Consulting Experts: If you lack the expertise or resources to handle network
optimization internally, consider consulting with network specialists or hiring managed
service providers who can offer professional advice and support.

12 Types of Network Optimization Tools

There are various types of network optimization tools available, each designed to address
specific aspects of network performance and efficiency.

In this section, let's explore some common types of network optimization tools, highlighting their
key features and use cases. Whether you are looking to improve bandwidth utilization, enhance
application performance, or strengthen network security, understanding the available tools will
empower you to make informed decisions and implement effective optimization strategies.

1. Network Performance Monitoring Tools as Network Optimization Tools:

These tools provide real-time monitoring and analysis of network devices, traffic, and
performance metrics. They offer visibility into bandwidth usage, latency, packet loss, and other
key performance indicators (KPIs) to identify bottlenecks and areas for optimization.

2. Network Traffic Analysis Tools as Network Optimization Tools:

Network traffic analysis tools focus on examining network traffic patterns and usage. They help
administrators understand application usage, identify bandwidth hogs, and optimize traffic flows.
3. Network Packet Analyzers as Network Optimization Tools:

Packet analyzers capture, inspect, and analyze individual data packets flowing through the
network. They are particularly useful for troubleshooting and identifying specific issues affecting
network performance.

4. Network Bandwidth Management Tools as Network Optimization Tools:

These tools allow administrators to allocate and control bandwidth usage for different
applications, services, or users. They help prioritize critical traffic, ensure Quality of Service
(QoS), and prevent bandwidth abuse.

5. Network Optimization Appliances as Network Optimization Tools:

Network optimization appliances optimize data transfer, reduce latency, and compress data to
enhance performance over WAN links. They are commonly used in Wide Area Networks
(WANs) to improve application delivery to remote locations.

6. Load Balancers as Network Optimization Tools:

Load balancers distribute incoming network traffic across multiple servers or resources to ensure
even network utilization, prevent overloads, and improve application availability and response
times.

7. Content Delivery Networks (CDNs) as Network Optimization Tools:

CDNs cache and distribute content across various servers located strategically worldwide. They
reduce latency and server load by delivering content from servers closest to the end-users,
improving the overall user experience.

8. WAN Optimization Controllers as Network Optimization Tools:

WAN optimization controllers employ various techniques such as data compression,
deduplication, and protocol optimization to accelerate data transfers and reduce bandwidth
utilization over wide-area networks.

9. Quality of Service (QoS) Management Tools as Network Optimization Tools:

QoS management tools enable administrators to define and enforce QoS policies, ensuring that
critical applications and services receive the necessary network resources and priority.

10. Network Configuration Management Tools as Network Optimization Tools:

These tools help manage network configurations, track changes, and ensure consistency across
devices. Proper configuration management helps maintain network stability and reduces the risk
of misconfigurations affecting performance.

11. Network Security Monitoring Tools as Network Optimization Tools:

Network security monitoring tools focus on identifying and mitigating security threats. By
maintaining network security, these tools indirectly contribute to network optimization by
preventing performance degradation due to security incidents.

Vulnerability scans and penetration testing can also help organizations ensure their networks
and applications are secure. For a more comprehensive approach, organizations should look to a
dedicated security provider like Evolve Security to ensure their attack surface is properly
managed and threats are identified and remediated quickly.

12. Network Discovery and Mapping Tools as Network Optimization Tools:


Network discovery tools scan and map the network, providing an inventory of devices,
connections, and topologies. This information helps optimize network design and identify
potential inefficiencies.

Advanced Tips: How to Optimize A Network for Speed

When it comes to network optimization, one of the most common use cases is optimizing a
network for speed.
Whether you're running a business, gaming, or simply browsing the web, a fast and reliable
network can make a significant difference. In this section, we'll explore a range of tips and
techniques to optimize your network for speed, from hardware upgrades and configuration
tweaks to smart usage practices. By following these guidelines, you can ensure that your network
operates at its peak performance, delivering the speed you need for your specific applications
and activities.
Here are some tips to help you optimize a network for speed:
1. Use Wired Connections: Whenever possible, use wired Ethernet connections instead of
Wi-Fi. Wired connections offer more stability and higher speeds.
2. Upgrade Your Internet Plan: Make sure you have a high-speed internet plan that suits
your needs. The speed of your network is often limited by your internet service provider.
3. Quality Router: Invest in a high-quality router that supports the latest Wi-Fi standards,
such as Wi-Fi 6 (802.11ax). A good router can significantly improve the speed and range
of your wireless network.
4. Optimize Router Placement: Position your router in a central location and elevate it if
possible. Avoid placing it near walls, large metal objects, or electronic devices that can
interfere with the signal.
5. Firmware Updates: Keep your router's firmware up to date. Manufacturers often release
firmware updates that can improve performance and security.
6. Channel Selection: Use the least congested Wi-Fi channel available. Many routers can
automatically select the best channel, but you can also do this manually.
7. QoS (Quality of Service): Configure Quality of Service settings on your router to
prioritize certain types of network traffic, such as video streaming or gaming, for a
smoother experience.
8. Limit Background Applications: On devices connected to the network, close or restrict
applications and services that consume bandwidth in the background, like cloud backups
and automatic software updates.
9. Use a VPN Sparingly: VPNs can slow down your connection due to encryption and
routing through remote servers. Use a VPN only when necessary.
10. Optimize DNS Settings: Consider using a faster and more reliable DNS server, such as
Google's (8.8.8.8 and 8.8.4.4) or Cloudflare's (1.1.1.1). A small sketch for comparing
resolver response times follows this list.
11. Manage Network Traffic: Prioritize critical network traffic. For example, set video
streaming devices to lower resolution to reduce their impact on other devices' speed.
12. Bandwidth Monitoring: Use network monitoring tools to identify which devices or
applications are consuming the most bandwidth. This can help you pinpoint and address
issues.
13. Upgrade Hardware: If your devices are outdated, consider upgrading them to ones with
faster network capabilities.
14. Wired Backhaul for Mesh Systems: If you're using a mesh Wi-Fi system, connect the
satellite nodes through Ethernet cables to the primary router for maximum speed and
stability.
15. Firewall and Security: Ensure that your network security settings are appropriately
configured to protect against threats without causing unnecessary network slowdowns.
16. Optimize Web Content: If you're managing a website or web application, optimize
content delivery through techniques like content caching, content delivery networks
(CDNs), and image compression.
17. Traffic Shaping: Implement traffic shaping or bandwidth limiting policies if you have
multiple users sharing the network. This can prevent one user or application from
hogging all the bandwidth.
18. Regular Reboot: Occasionally reboot your router and network devices to clear memory
and refresh connections, especially if you notice a slowdown.
19. Regular Speed Tests: Conduct regular speed tests to monitor your network's
performance and identify any issues or changes in speed.
20. Contact Your ISP: If you consistently experience slow speeds, contact your Internet
Service Provider to diagnose and fix any issues with your connection.
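To compare resolvers as suggested in tip 10, you can time how quickly each one answers a simple query. The Python sketch below builds a minimal DNS request for an A record by hand and measures the round trip to Google's and Cloudflare's public resolvers; the hostname is a placeholder and error handling is kept to a minimum. Resolver caching strongly affects these numbers, so repeat the test a few times before drawing conclusions.

import socket
import struct
import time

def dns_query_time_ms(server, hostname, timeout=2.0):
    """Send a minimal DNS A-record query over UDP and return the
    response time in milliseconds, or None on timeout."""
    # Header: fixed ID, recursion desired, one question, no other records.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00" + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.monotonic()
        sock.sendto(header + question, (server, 53))
        sock.recv(512)
        return (time.monotonic() - start) * 1000
    except socket.timeout:
        return None
    finally:
        sock.close()

for resolver in ("8.8.8.8", "1.1.1.1"):
    elapsed = dns_query_time_ms(resolver, "example.com")
    print(resolver, "timed out" if elapsed is None else f"{elapsed:.1f} ms")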
Optimizing a network for speed is an ongoing process that may require adjustments based on
your specific environment and needs. By following these tips and staying vigilant, you can
ensure your network operates at its best possible speed.
