
Network Capacity

Dr. Anchal Thakur


Definition
• Network capacity refers to the maximum amount of data that can be transmitted across a
network over a specific period, typically measured in bits per second (bps). It is a crucial metric
in networking, determining the efficiency, speed, and overall performance of a network
infrastructure.
• Network capacity is often confused with bandwidth, but while bandwidth defines the potential speed of a data transfer, network capacity determines how much data the network can handle concurrently across its communication channels without degradation in performance.
Network Capacity
If any cloud-computing system resource is difficult to plan for, it is network capacity. There are
three aspects to assessing network capacity:
• Network traffic to and from the network interface at the server, be it a physical or virtual
interface or server
• Network traffic from the cloud to the network interface
• Network traffic from the cloud through your ISP to your local network interface (your
computer)
Factor 1
To measure network traffic at a server's network interface, you need to employ what is commonly known as a network monitor, which is a form of packet analyzer. Microsoft includes a utility called the Microsoft Network Monitor as part of its server utilities, and there are many third-party products in this area. The site Sectools.org has a list of packet sniffers at http://sectools.org/sniffers.html. Here are some:
• Wireshark (http://www.wireshark.org/), formerly called Ethereal
• Kismet (http://www.kismetwireless.net/), a WiFi sniffer
• TCPdump (http://www.tcpdump.org/)
• Dsniff (http://www.monkey.org/~dugsong/dsniff/)
• Ntop (http://www.ntop.org/)
• EtherApe (http://etherape.sourceforge.net/)

Regardless of which of these tools you use, their statistics function provides a measurement of network capacity as expressed by throughput. You can analyze the data in a number of ways: by specific application, network protocol, traffic per system or user, and so forth, down to, in some cases, the content of the individual packets crossing the wire.
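
The same throughput figure these monitors report can be approximated without packet capture by sampling the interface's byte counters. Below is a minimal sketch in Python using the third-party psutil library; the interface name eth0 and the 10-second window are assumptions to adjust for your own server.

import time

import psutil  # third-party: pip install psutil

INTERFACE = "eth0"  # assumed name; inspect psutil.net_io_counters(pernic=True) to find yours
WINDOW_S = 10       # sampling window in seconds

before = psutil.net_io_counters(pernic=True)[INTERFACE]
time.sleep(WINDOW_S)
after = psutil.net_io_counters(pernic=True)[INTERFACE]

# Throughput in bits per second, the same units used for network capacity.
rx_bps = (after.bytes_recv - before.bytes_recv) * 8 / WINDOW_S
tx_bps = (after.bytes_sent - before.bytes_sent) * 8 / WINDOW_S
print(f"{INTERFACE}: in {rx_bps:,.0f} bps, out {tx_bps:,.0f} bps")

Unlike a packet analyzer, this counter-based approach tells you nothing about protocols or users, but it needs no special privileges and is cheap enough to run continuously.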
Factor 2
Factor 2 is the cloud's network performance, which is a measurement of WAN traffic. A WAN's capacity is a function of many factors:
• Overall system traffic (competing services)
• Routing and switching protocols
• Traffic types (transfer protocols)
• Network interconnect technologies (wiring)
• The amount of bandwidth the cloud vendor purchased from an Internet backbone provider
Again, factor 2 is highly variable, and unlike factor 1, it isn't easy to measure reliably. Tools are available that can monitor a cloud network's performance at different geographical points and over different third-party ISP connections. This is done by establishing measurement systems at various well-chosen network hubs.
Apparent Networks, a company that makes WAN network monitoring software, has set up a series of these points of presence at various Internet hubs and uses its network monitoring software, called PathView Cloud, to collect data in a display that it calls the Cloud Performance Scorecard (http://www.apparentnetworks.com/CPC/scorecard.aspx). This Web page is populated with statistics for some of the cloud vendors that Apparent Networks monitors.
You can use PathView Cloud as a hosted service to evaluate your own cloud application's network performance at these various points of presence and to create your own scorecard of a cloud network. Current pricing for the service is $5 per network path per month. The company also sells a small appliance that you can insert at locations of your choice and with which you can perform the same kind of monitoring on your own network.
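
A rough, self-serve version of this multi-point idea is to time connections to a handful of well-chosen hosts from each location you care about. The Python sketch below times a TCP handshake as a proxy for path latency; the probe hosts are illustrative assumptions, not recommended measurement points.

import socket
import time

# Illustrative probe endpoints, not recommended measurement points.
PROBES = [("www.wireshark.org", 443), ("www.ntop.org", 443)]

for host, port in PROBES:
    start = time.perf_counter()
    try:
        # Time the TCP handshake as a rough proxy for path latency.
        with socket.create_connection((host, port), timeout=5):
            rtt_ms = (time.perf_counter() - start) * 1000
        print(f"{host}:{port} connected in {rtt_ms:.1f} ms")
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")

Run from several locations, the same probe list lets you compare paths the way a scorecard service does, albeit far more crudely.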
Factor 3
• The last factor, factor 3, is the connection from the backbone through your ISP to your local
system, a.k.a. “The Internet.” The “Internet” is not a big, fat, dumb pipe; nor is it (as former
Senator Ted Stevens of Alaska proclaimed) “a series of tubes.” For most people, their Internet
connection is more like an intelligently managed thin straw that you are desperately trying to
suck information out of. So factor 3 is measurable, even if the result of the measurement isn’t
very encouraging, especially to your wallet.
• Internet connectivity over the last mile (to the home) is the Achilles heel of cloud computing.
The scarcity of high-speed broadband connections (particularly in the United States) and high
pricing are major impediments to the growth of cloud computing. Many organizations and
communities will wait on the sidelines before embracing cloud computing until faster
broadband becomes available. Indeed, this may be the final barrier to cloud computing’s
dominance.
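
Because factor 3 is measurable, even a crude timing of a large download gives a useful estimate of your last-mile throughput. The Python sketch below reports effective bits per second; the URL is a hypothetical placeholder to replace with any large, reliably hosted file.

import time
import urllib.request

# Hypothetical test file; substitute any large, reliably hosted download.
URL = "https://example.com/testfile.bin"

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=30) as resp:
    nbytes = len(resp.read())
elapsed = time.perf_counter() - start

# Effective last-mile throughput in bits per second.
print(f"{nbytes} bytes in {elapsed:.1f} s = {nbytes * 8 / elapsed:,.0f} bps")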
Scaling
• In capacity planning, after you have made the decision that you need more resources, you are faced with a fundamental choice about how to scale your systems. You can either scale vertically (scale up) or scale horizontally (scale out), and each method is broadly suitable for different types of applications.
• To scale vertically, you add resources to a system to make it more powerful. For example, during scaling up, you might replace a node in a cloud-based system that has a dual-processor machine instance equivalence with a quad-processor machine instance equivalence. You also scale up when you add more memory, more network throughput, and other resources to a single node. Scaling up indefinitely eventually leads you to an architecture with a single powerful supercomputer.
• Vertical scaling allows you to use a virtual system to run more virtual machines (operating system instances), run more daemons on the same machine instance, or take advantage of more RAM (memory) and faster compute times. Applications that benefit from vertical scaling include those that are processor-limited, such as rendering, or memory-limited, such as certain database operations (queries against an in-memory index, for example).
• Horizontal scaling, or scale out, adds capacity to a system by adding more individual nodes. In a system where you have a dual-processor machine instance, you would scale out by adding more dual-processor machine instances or some other type of commodity system. Scaling out indefinitely leads you to an architecture with a large number of servers (a server farm), which is the model that many cloud and grid computer networks use.
Horizontal scaling allows you to run distributed applications more efficiently and makes effective use of hardware, because it is easier both to pool resources and to partition them. Although your intuition might lead you to believe otherwise, the world's most powerful computers are currently built as clusters of computers aggregated using high-speed interconnect technologies such as InfiniBand or Myrinet. Scale out is most effective when you have an I/O resource ceiling and you can eliminate the communications bottleneck by adding more channels. Web server connections are a classic example of this situation.
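
As a concrete illustration of why scaled-out systems pool and partition so easily, the Python sketch below round-robins requests across a hypothetical pool of commodity nodes; adding capacity is just appending a node to the pool. The addresses are made up for the example.

import itertools

# Hypothetical pool of dual-processor commodity instances.
nodes = ["10.0.0.1", "10.0.0.2"]
dispatch = itertools.cycle(nodes)  # round-robin over the pool

def handle(request_id: int) -> str:
    # Each request goes to the next node in the pool.
    return f"request {request_id} -> {next(dispatch)}"

for i in range(4):
    print(handle(i))

# Scaling out: add a third commodity node; no single machine gets bigger.
nodes.append("10.0.0.3")
dispatch = itertools.cycle(nodes)  # rebuild the cycle so the new node is seen

Scaling up, by contrast, would keep the pool at one entry and make that single node more powerful, which is why it suits processor- or memory-limited workloads rather than connection-limited ones.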
