Network Capacity
Regardless of which of these tools you use, their statistics functions provide a measurement of
network capacity as expressed by throughput. You can analyze the data in a number of ways: by
specific application, by network protocol, by traffic per system or user, and so forth, all the
way down to, in some cases, the content of the individual packets crossing the wire.
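For a rough sense of what such a measurement looks like in practice, here is a minimal sketch in Python. It assumes the third-party psutil library as its counter source, and it reports only aggregate throughput, not the per-protocol or per-packet breakdown that a dedicated analyzer provides.

    # Minimal throughput sampler (assumes: pip install psutil).
    # Samples the host's network counters over a fixed window and
    # reports average throughput in megabits per second.
    import time
    import psutil

    def sample_throughput(interval_s: float = 5.0) -> tuple[float, float]:
        """Return (recv_mbps, sent_mbps) averaged over interval_s seconds."""
        before = psutil.net_io_counters()
        time.sleep(interval_s)
        after = psutil.net_io_counters()
        recv_mbps = (after.bytes_recv - before.bytes_recv) * 8 / interval_s / 1e6
        sent_mbps = (after.bytes_sent - before.bytes_sent) * 8 / interval_s / 1e6
        return recv_mbps, sent_mbps

    if __name__ == "__main__":
        recv, sent = sample_throughput()
        print(f"recv: {recv:.2f} Mbps, sent: {sent:.2f} Mbps")

Sampling over a window, rather than reading the counters once, is what turns a raw byte count into a capacity figure you can track over time.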
Factor 2
Factor 2 is the cloud’s network performance, which is a measurement of WAN traffic. A WAN’s
capacity is a function of many factors:
• Overall system traffic (competing services)
• Routing and switching protocols
• Traffic types (transfer protocols)
• Network interconnect technologies (wiring)
• The amount of bandwidth that the cloud vendor purchased from an Internet backbone provider
Again, factor 2 is highly variable and, unlike factor 1, it isn't easy to measure reliably. Tools
are available that can monitor a cloud network's performance at geographically dispersed points
and over different third-party ISP connections. This is done by establishing measurement systems
at various well-chosen network hubs.
Apparent Networks, a company that makes WAN monitoring software, has set up a series of these
points of presence at various Internet hubs and uses its network monitoring software, called
PathView Cloud, to collect data in a display that it calls the Cloud Performance Scorecard
(http://www.apparentnetworks.com/CPC/scorecard.aspx). This Web page is populated with statistics
for some of the cloud vendors that Apparent Networks monitors.
You can use PathView Cloud as a hosted service to evaluate your own cloud application's network
performance at these various points of presence and to create your own scorecard of a cloud
network. Current pricing for the service is $5 per network path per month. The company also sells
a small appliance that you can insert at locations of your choice and with which you can perform
the same sort of measurements.
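If you want a crude, do-it-yourself approximation of such a scorecard rather than a commercial service, the sketch below (Python, standard library only) times small HTTP fetches against an endpoint. Run it from several vantage points, such as machine instances in different regions, and compare the numbers. The endpoint URL is a placeholder for your own application's address.

    # Rough multi-point latency probe; run one copy per vantage point.
    import time
    import statistics
    import urllib.request

    ENDPOINT = "http://example.com/"  # hypothetical; use your cloud app's URL

    def probe(url: str, samples: int = 5) -> dict:
        """Time several small fetches and summarize the latency in ms."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read(1024)  # the first KB is enough to time the path
            timings.append((time.perf_counter() - start) * 1000)
        return {
            "min_ms": min(timings),
            "median_ms": statistics.median(timings),
            "max_ms": max(timings),
        }

    if __name__ == "__main__":
        print(probe(ENDPOINT))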
Factor 3
The last factor, factor 3, is the connection from the backbone through your ISP to your local
system, a.k.a. "the Internet." The Internet is not a big, fat, dumb pipe; nor is it (as former
Senator Ted Stevens of Alaska proclaimed) "a series of tubes." For most people, an Internet
connection is more like an intelligently managed thin straw that they are desperately trying to
suck information out of. So factor 3 is measurable, even if the result of the measurement isn't
very encouraging, especially to your wallet.
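Factor 3 is the easiest of the three to measure yourself. One rough approach, sketched below in Python with only the standard library, is to time the download of a sizable file and divide. The figure reflects the slowest link on the path, which for most users is the last mile; the test URL is a placeholder for any large, well-served file.

    # Crude downstream bandwidth estimate: time a download and divide.
    import time
    import urllib.request

    TEST_URL = "http://example.com/100MB.bin"  # hypothetical test file

    def downstream_mbps(url: str) -> float:
        """Download the file in chunks and return average Mbps."""
        start = time.perf_counter()
        total = 0
        with urllib.request.urlopen(url, timeout=30) as resp:
            while chunk := resp.read(64 * 1024):
                total += len(chunk)
        elapsed = time.perf_counter() - start
        return total * 8 / elapsed / 1e6

    if __name__ == "__main__":
        print(f"~{downstream_mbps(TEST_URL):.1f} Mbps downstream")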
Internet connectivity over the last mile (to the home) is the Achilles' heel of cloud computing.
The scarcity of high-speed broadband connections (particularly in the United States) and high
pricing are major impediments to the growth of cloud computing. Many organizations and
communities will wait on the sidelines until faster broadband becomes available before embracing
cloud computing. Indeed, this may be the final barrier to cloud computing's dominance.
Scaling
In capacity planning, after you have made the decision that you need more resources, you are
faced with the fundamental choice of how to scale your systems. You can either scale vertically
(scale up) or scale horizontally (scale out), and each method is broadly suitable for different
types of applications.
To scale vertically, you add resources to a system to make it more powerful. For example, during
scaling up you might replace a node in a cloud-based system that has the equivalent of a
dual-processor machine instance with one that has the equivalent of a quad-processor machine
instance. You also scale up when you add more memory, more network throughput, or other resources
to a single node. Scaling up indefinitely eventually leads you to an architecture with a single
powerful supercomputer.
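As a concrete illustration, here is one way that swap from a smaller to a larger machine instance might look against a cloud API. This is a sketch assuming AWS EC2 and the boto3 Python library; the instance ID and target instance type are hypothetical placeholders, and an EC2 instance must be stopped before its type can be changed.

    # Vertical scaling sketch (assumes: pip install boto3 and AWS credentials).
    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance
    TARGET_TYPE = "m5.2xlarge"           # hypothetical larger instance type

    def scale_up(instance_id: str, new_type: str) -> None:
        """Stop the instance, change its type, and start it again."""
        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
        ec2.modify_instance_attribute(
            InstanceId=instance_id, InstanceType={"Value": new_type}
        )
        ec2.start_instances(InstanceIds=[instance_id])

    if __name__ == "__main__":
        scale_up(INSTANCE_ID, TARGET_TYPE)

Note the downtime implied by the stop/start cycle; that interruption is one practical cost of scaling up a single node.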
Vertical scaling allows you to use a virtual system to run more virtual machines (operating
system instances), run more daemons on the same machine instance, or take advantage of more RAM
(memory) and faster compute times. Applications that benefit from being scaled up vertically
include those that are processor-limited, such as rendering, or memory-limited, such as certain
database operations (queries against an in-memory index, for example).
Horizontal scaling, or scale out, adds capacity to a system by adding more individual nodes. In a
system where you have a dual-processor machine instance, you would scale out by adding more
dual-processor machine instances or some other type of commodity system. Scaling out indefinitely
leads you to an architecture with a large number of servers (a server farm), which is the model
that many cloud and grid computing networks use.
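To make the contrast with scaling up concrete, here is a minimal Python sketch of the idea behind scale out: requests are spread round-robin across a pool of identical commodity nodes, so adding a node adds capacity. The node addresses are hypothetical; in a real deployment, a load balancer plays this role.

    # Round-robin distribution across a pool of commodity nodes.
    import itertools

    NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical node pool

    class RoundRobinPool:
        def __init__(self, nodes):
            self._cycle = itertools.cycle(nodes)

        def pick(self) -> str:
            """Return the next node in rotation for the incoming request."""
            return next(self._cycle)

    pool = RoundRobinPool(NODES)
    for request_id in range(6):
        print(f"request {request_id} -> node {pool.pick()}")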
Horizontal scaling allows you to run distributed applications more efficiently and makes
effective use of hardware, because it is easier both to pool resources and to partition them.
Although your intuition might lead you to believe otherwise, the world's most powerful computers
are currently built as clusters of computers aggregated using high-speed interconnect
technologies such as InfiniBand or Myrinet. Scale out is most effective when you have an I/O
resource ceiling and you can eliminate the communications bottleneck by adding more channels.
Web server connections are a classic example of this situation.
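A small Python sketch illustrates why adding channels helps here: the same set of fetches is issued first serially over one connection, then over a pool of concurrent connections. The URL is a placeholder, and the wall-clock gain appears only when the work is genuinely I/O-bound rather than processor-bound.

    # Serial vs. concurrent fetches of an I/O-bound workload (stdlib only).
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URLS = ["http://example.com/"] * 8  # hypothetical I/O-bound workload

    def fetch(url: str) -> int:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return len(resp.read())

    start = time.perf_counter()
    for url in URLS:  # one channel: requests wait in line
        fetch(url)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:  # eight channels
        list(pool.map(fetch, URLS))
    parallel = time.perf_counter() - start

    print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")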