
Chapter 6 Telecommunications and Networks W31

PEER-TO-PEER COMPUTING (page 165)

Early Efforts
In the early 1970s, when computers were first linked by networks, the idea of harness-
ing unused CPU cycles was born. A few early experiments with distributed comput-
ing—including a pair of programs called Creeper and Reaper—ran on the Internet’s
predecessor, the ARPAnet (Advanced Research Projects Agency net).
In 1973, the Xerox Palo Alto Research Center (PARC) installed the first Ether-
net network and the first full-fledged distributed computing effort was underway. This
first program routinely roamed among about 100 Ethernet-connected computers. Its creators envisioned it moving from machine to machine, using idle resources for beneficial purposes. The program traveled throughout the PARC network, replicating itself in each machine's memory; each copy used idle resources to perform a computation and could reproduce and transmit clones to other nodes of the network. The programs enabled users to distribute graphic images and share computations for rendering realistic computer graphics.
In another effort, Richard Crandall, now a distinguished scientist at Apple Com-
puter, started putting idle, networked NeXT computers to work. Crandall installed
software that allowed the machines, when not in use, to perform computations and to
combine efforts with other machines on the network. His Zilla software first focused
on finding, factoring, and testing huge prime numbers and then moved on to test en-
cryption at NeXT.

The Next Level: Internet Distributed Computing


Distributed computing scaled to a global level with the maturation of the Internet in
the 1990s. Two projects in particular have proven that the concept works extremely
well.
• The first of these revolutionary projects used thousands of independently owned computers across the Internet to crack encryption codes. This project, the first of its kind, was called distributed.net.
• The second, and the most successful and popular distributed computing project in history, is the SETI@Home project.
The seminal Internet distributed computing project, SETI@Home, originated at the University of California at Berkeley. SETI stands for "Search for Extraterrestrial Intelligence," and the project's focus is to search for radio signal fluctuations that may indicate intelligent life in space.
SETI@Home is the largest, most successful Internet distributed computing project to date. Launched in May 1999 to search through signals collected by the Arecibo Radio Telescope in Puerto Rico (the world's largest radio telescope), the project originally received far more data every day (terabytes of it) than its assigned computers could process. So the project directors turned to volunteers, inviting individuals to download the SETI@Home software and donate the idle processing time on their computers to the project.
After working through a backlog of data, SETI@Home volunteers began processing current segments of radio signals captured by the telescope. Currently, about 40 gigabytes of data are pulled down daily by the telescope and sent to computers all over the world to be analyzed. The results are sent back through the Internet, and the program then collects a new segment of radio signals for the PCs to work on.

The SETI@Home global network of three million computers averages about 14 teraflops (14 trillion floating point operations per second) and has produced over 500,000 years of processing time in the past one and one-half years. It would normally cost millions of dollars to achieve that type of power on one or even two supercomputers.
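The scale of these figures can be sanity-checked with some back-of-the-envelope arithmetic. The inputs below are the numbers quoted above; the derived quantities are purely illustrative:

```python
# Back-of-the-envelope check of the SETI@Home figures quoted in the text.
# Inputs are the chapter's numbers; the derived values are illustrative only.

total_cpu_years = 500_000        # processing time produced by volunteers
elapsed_years = 1.5              # wall-clock time over which it accrued

# Average number of machines crunching at any given moment.
avg_concurrent_machines = total_cpu_years / elapsed_years
print(f"Average machines active at once: {avg_concurrent_machines:,.0f}")

# An aggregate 14 teraflops spread over ~3 million volunteers implies
# only a few megaflops contributed per machine on average, which is
# plausible for late-1990s PCs donating idle time.
volunteers = 3_000_000
aggregate_flops = 14e12
per_machine_mflops = aggregate_flops / volunteers / 1e6
print(f"Average per-machine contribution: {per_machine_mflops:.1f} megaflops")
```

Roughly a third of a million machines active at any instant, each contributing a handful of megaflops, is consistent with the totals the project reports.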

Two Versions of Peer-to-Peer Computing


There are two slightly different types of peer-to-peer computing: a collaborative tool and a computing tool. The collaborative tool, actually called peer-to-peer computing, allows users to share files and collaborate by linking their computers over the Internet. The computing tool, which is an element of distributed computing, allows idle computers to share their unused processing capabilities for complex tasks.
As a collaborative tool, peer-to-peer computing brings people together securely
in familiar settings. Employees can gather for online meetings or for short-term
projects, regardless of their location, while bypassing a bottleneck of corporate file
servers. Freelance workers and contractors can join a group online without compro-
mising a company’s security. In business-to-business commerce, companies can use
peer-to-peer computing to order from suppliers and serve customers. And Napster-
like file-sharing allows quick downloads of software and essential documents.
As a computing tool, the technology can break down large tasks into smaller, par-
allel assignments and distribute them across huge numbers of interconnected desk-
tops. These machines then contribute their unused processing power to a particularly
complicated computing task, such as scientific research or data analysis, resulting in
dramatic improvements in processing speed.

A Guide to Peer Applications


The term peer application is used to describe a diverse group of computing functions.
In each, various systems or system components interact with one another on an equal
footing. In broad terms, peer applications fall into one of three general categories: dis-
tributed processing, peer processing, and peer communications. They differ in how
the various systems communicate and whether a central system helps to facilitate the
communication.

Distributed processing. A distributed processing peer application takes a task and parcels it out to multiple systems. In years past, this process meant getting multiple processors in a single computer to cooperate on a computation. Today, it usually means getting multiple computers in disparate locations to work jointly on a particular problem, applying the processing power of many systems to a shared task.
Applications such as SETI@Home use this model to accomplish large-scale data
analysis. A central system breaks large data sets into small chunks, and individual
users’ systems (running a specialized application) perform the requested analysis and
return the results to the server. The server then assembles the various analyses to cre-
ate the whole result that is needed.
However, many analysts do not regard that as true peer computing because the
various user systems do not communicate with one another. They are peers only in
the sense that they are sharing approximately equal portions of a workload.
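The scatter/gather pattern just described can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical model (the function names and the placeholder computation are illustrative; a real project's client/server protocol is far more involved):

```python
# Minimal scatter/gather sketch of the distributed-processing model:
# a central coordinator splits a data set into chunks, "worker" systems
# analyze each chunk independently, and the coordinator assembles results.

def split_into_chunks(data, chunk_size):
    """Central system: break a large data set into small work units."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def analyze(chunk):
    """Worker: stands in for the specialized application each volunteer runs."""
    return sum(x * x for x in chunk)   # placeholder computation

def assemble(partial_results):
    """Central system: combine the partial results into the whole answer."""
    return sum(partial_results)

signal_data = list(range(1000))
work_units = split_into_chunks(signal_data, chunk_size=100)
partial_results = [analyze(unit) for unit in work_units]   # done by many peers
print(assemble(partial_results))
```

Note that the workers never talk to one another, which is exactly why many analysts hesitate to call this model true peer computing.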

Peer processing. The next form is true peer processing. In this arena, the various
computers in the processing mix communicate with each other about what they are
doing as they are doing it. For example, a complex mathematical calculation might be
broken down into several different steps, each of which may need to be repeated nu-
merous times during a particular run.
In a peer setting, each “function” of the overall calculation would be assigned to a
separate system. The various systems can communicate with each other, passing data
and processing requests back and forth as needed. This sort of peer computing may be
either “brokered” or “pure peer.” In a brokered system, a central server manages the
communication between peers. In a pure peer system, the various peers always com-
municate directly with each other, by using either network broadcasts or multiple di-
rect connections. United Technologies’ peer-processing project could best be
described as a pure peer system because it does not rely on a central server to broker
the work.
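The brokered versus pure-peer distinction can be illustrated with a toy model. Everything here, the class names, the message format, is hypothetical and shows only how the routing differs, not any real product's API:

```python
# Toy contrast between "brokered" and "pure peer" message passing.

class Broker:
    """Brokered model: a central server relays all messages between peers."""
    def __init__(self):
        self.peers = {}

    def register(self, peer):
        self.peers[peer.name] = peer

    def relay(self, sender, recipient_name, payload):
        self.peers[recipient_name].receive(sender.name, payload)

class Peer:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, sender_name, payload):
        self.inbox.append((sender_name, payload))

    def send_direct(self, other, payload):
        """Pure peer model: talk to the other peer directly, no middleman."""
        other.receive(self.name, payload)

broker = Broker()
a, b = Peer("a"), Peer("b")
broker.register(a)
broker.register(b)

broker.relay(a, "b", "step-1 result")   # brokered: routed via the server
a.send_direct(b, "step-2 result")       # pure peer: direct connection

print(b.inbox)   # both messages arrive; only the routing path differs
```

In both cases the payload reaches the same destination; the design choice is whether a central system knows about, and mediates, every exchange.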

Peer communication. The most widely used form of peer application is peer commu-
nication. At its simplest level, one computer communicates directly with another, usu-
ally at the immediate request of the user. This communication may involve file
sharing, message transmission, or similar functions.
Many peer communication functions are built into contemporary network operat-
ing systems, while some network operating systems may be viewed as inherently peer-
to-peer. Microsoft Windows and Novell NetWare networks are primarily server based
with peer functions added, while systems such as Linux may be described as more
peer-oriented.
Peer applications often add specific functionality beyond the operating system, designed to operate outside a corporate network or across different platforms. Instant messaging systems, which let users chat with one another using text, and content-sharing services, such as Gnutella and Napster, combine a peer application running on the user's computer with a centralized database running on a server system.

THE GLOBAL POSITIONING SYSTEM (GPS)


EXPLAINED (page 171)

Thanks to a constant stream of radio signals from dozens of satellites circling the
planet, a handheld device the size of a pocket calculator can tell you your position on
Earth to within a few dozen feet. It will also tell you with great precision your speed
and the direction you are traveling. Best of all, the GPS (Global Positioning System)
is free.
GPS was developed and is managed by the U.S. Air Force; the U.S. Naval Observatory maintains the precise time standard on which the system depends. The system was launched in 1978, when Rockwell International began boosting the first of 11 specially built satellites into orbit. These Block I satellites were experimental and are no longer in use. In 1993, however, when a group of 24 Block II satellites was in orbit, the GPS, called NavStar, became operational.
The GPS has many applications, the most vital of which is to help civilian ships
and aircraft locate their position and determine their direction of travel. Handheld
GPS receivers, which sell for $100 and up, offer the same benefits for hikers; some
cars also have built-in GPS receivers, complete with electronic street maps. The sys-
tem also transmits the time of day, accurate to within 340 nanoseconds.
Initially, the Department of Defense built the GPS for military purposes. GPS
satellites actually transmit two sets of direction-finding and time-of-day signals: one
encrypted for military use, and the other for civilian use. Until May 2000, the civilian
signals were intentionally degraded with random errors, reducing accuracy to 100 me-
ters or worse. The government still reserves the right to lower the accuracy of the
civilian signals in the event of a national emergency.

The NavStar system is made up of three parts: radio receivers, satellites, and ground-control systems. A GPS receiver determines its location by listening to transmissions from four NavStar satellites. For the service to work anywhere on the planet, the GPS must always have a minimum of 24 satellites in orbit. This collection of satellites is known as the NavStar constellation. The satellites circle the Earth in six different orbital paths, each 12,550 miles above the Earth's surface, with four satellites spaced evenly in each path. Each satellite circles the Earth in just under 12 hours.
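The quoted altitude and orbital period are mutually consistent, as a quick check with Kepler's third law shows. The constants below are standard published values; the calculation itself is only illustrative:

```python
import math

# Verify that a circular orbit 12,550 miles up takes ~12 hours,
# using Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
MU_EARTH = 398_600.4418        # km^3/s^2, Earth's gravitational parameter
EARTH_RADIUS_KM = 6_371.0      # mean Earth radius
KM_PER_MILE = 1.609344

altitude_km = 12_550 * KM_PER_MILE          # ~20,200 km
a = EARTH_RADIUS_KM + altitude_km           # semi-major axis (circular orbit)
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"Orbital period: {period_s / 3600:.2f} hours")   # just under 12 hours
```

The result comes out at roughly 11.97 hours, matching the "just under 12 hours" figure in the text.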
Each 2,000-pound satellite transmits a powerful radio wave that contains a unique
data-navigation message. The signal serves as a kind of signature, providing the exact
time of day as well as the identity of the transmitting satellite. Every few minutes,
each satellite transmits the constellation’s ephemeris—the position and orbital veloc-
ity of each satellite in the GPS. Each satellite also transmits its signals on two different
frequencies simultaneously (1227.60 MHz and 1575.42 MHz) to lessen potential inter-
ference. The satellites are controlled by five ground stations, with the master control
at Schriever Air Force Base in Colorado Springs, Colorado.
On Earth, a GPS receiver consults its own stored copy of the ephemeris to deter-
mine which satellites should be “visible,” or over the horizon. It listens for the signals
from those satellites, which are quite weak after passing through the atmosphere. The
signals can be easily blocked by electronic noise, sheet metal, or even the water inside
a dense canopy of tree leaves.
Once the receiver has acquired a signal, it decodes the data navigation message.
By using its internal clock, and comparing the time the satellite sent the message to
the time the receiver acquired it, the receiver can compute the time the message spent
in transit, thus determining the distance from the satellite. The GPS receiver also lis-
tens for additional signals. Once it has received data navigation messages from three
satellites, the receiver can determine roughly where it is, but not the exact position.
Computing the exact position by using time delays is actually an equation with four unknowns: latitude, longitude, altitude, and time. A fourth satellite provides the GPS receiver's onboard computer with the data needed to compensate for signal propagation delays as the signals pass through the Earth's atmosphere, as well as for inaccuracies in the receiver's internal clock. With four satellites, the receiver can determine its position, typically to within 72 feet horizontally and 90 feet vertically. That is not precise enough to land an airplane on a runway, but it is more than adequate for making sure the plane is heading in the right direction.
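The per-satellite distance step described above is just the signal's transit time multiplied by the speed of light. A sketch, with made-up timestamps (a real receiver also solves for its own clock error using the fourth satellite, as the text explains):

```python
# Range to one satellite from signal transit time: d = c * (t_rx - t_tx).
# The timestamps below are invented for illustration.
C_KM_PER_S = 299_792.458       # speed of light in km/s

t_transmitted = 0.000000       # satellite timestamp in the navigation message (s)
t_received = 0.070000          # receiver's clock reading on arrival (s)

transit_time = t_received - t_transmitted
pseudorange_km = C_KM_PER_S * transit_time
print(f"Apparent distance to satellite: {pseudorange_km:,.0f} km")
```

A 70-millisecond transit time corresponds to roughly 21,000 km, the right order of magnitude for a satellite 12,550 miles overhead. It is called a *pseudo*range because the receiver's clock error is still folded into it.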
NavStar is undergoing continuous improvement. Even though each satellite has
an estimated life of about 7.5 years, the plan is to maintain 28 fully operational satel-
lites in orbit at all times in order to ensure performance.
In November 2000, the NavStar system entered a new phase, when the govern-
ment awarded a $16 million contract to Boeing and Lockheed Martin Space Systems
to design the next-generation Block III GPS.

Manager's Checklist W6.1: Advantages and Disadvantages of Satellites (page 172)

Advantages
• Transmission cost is the same whatever the distance between the sending and receiving stations within the satellite's footprint.
• Cost remains the same whatever the number of stations receiving that transmission (simultaneous reception).
• They can carry very large amounts of data.
• Extraterrestrial; there is no need to dig trenches.
• Satellite signals easily cross or span political borders, often with minimal government regulation.
• Transmission errors in a digital satellite signal occur almost completely at random. Thus, statistical methods for error detection and correction can be applied efficiently and reliably.
• Users can be highly mobile while sending and receiving signals.

Disadvantages
• Any one-way transmission over a satellite link has an inherent propagation delay of approximately one-quarter of a second, making satellite links inefficient for some data communications needs (voice communication).
• Due to launch weight limitations, satellites carry or generate very little electrical power. The low power of the signal, coupled with distance, can result in extremely weak signals at the receiving earth station.
• Signals are not secure—they are available to all receivers within the footprint, intended or not.
• Some frequencies are susceptible to interference from bad weather or ground-based microwave signals.

MOBILE AND WIRELESS APPLICATIONS (page 173)

Existing and New Kinds of Applications


• Mobile personal communications capabilities, such as personal digital assistants
(PDAs)
• Online transaction processing, for example, where a salesperson enters an order for
goods and charges a customer’s credit card to complete the transaction
• Remote database queries, for example, where a salesperson checks the status of an
order directly from the customer’s site
• Dispatching, such as air traffic control, rental car pickup and return, delivery vehi-
cles, trains, taxis, cars, and trucks
• Front-line IT applications, where data are entered only once as they go through the
value chain

Mobile Computing and Cable Replacement Applications


• Wireless connections for temporary offices
• Wireless connections for permanent offices where wiring is difficult (e.g., in historic
buildings, or in buildings where there is asbestos insulation)
• Campus area network backbones
• Preconfigured LAN installations, a LAN-in-a-box that anyone can install

Applications in Many Industries


• Retail—particularly in department stores where there are frequent changes of layout
• Wholesale/distribution—wireless networking for inventory picking in warehouses
with PCs mounted directly on forklifts and for delivery and order status updates
• Field service/sales—dispatching, online diagnostic support from customer sites, and
parts ordering/inventory queries
• Factories/manufacturing—process control, hostile environments, clean rooms, mo-
bile shop-floor quality control
• Healthcare/hospitals—patient records, comparative diagnosis wherever the patient
or the healthcare worker may be located
• Education—interactive quizzes, additional data and graphics lecture support, and
online handout materials
• Banking/finance—purchasing, selling, inquiry, brokerage, and other financial dealings
Source: Based on advertisement from the Digital Equipment Corporation.

EMERGING WIRELESS APPLICATIONS (page 173)

Ultra-Wideband Wireless Technology


Ultra-wideband (UWB) is a wireless technology being developed at Time Domain
(timedomain.com), a Huntsville, Alabama company. The concept behind UWB dras-
tically differs from other approaches to wireless communications. UWB uses ex-
tremely low-power radio pulses (50 millionths of a watt) that extend across a large
portion of the electromagnetic spectrum, from one gigahertz to four gigahertz. Be-
cause UWB sends the pulses at such low power and across such a broad frequency
range—and because the pulses are so short (half a billionth of a second)—receivers
listening for transmissions at specific frequencies perceive them as background noise.
Time Domain’s system sends out 40 million pulses a second at differing, but pre-
cisely defined, intervals. Delaying or advancing a pulse by a few trillionths of a second
defines it as a digital 1 or 0, creating a short-range data carrier capable of transmitting
up to 10 megabits per second or more at distances up to 150 feet.
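The pulse-timing scheme described above is a form of pulse-position modulation: nudging a pulse slightly early or late relative to its expected slot encodes a bit. A toy sketch (the timing values are illustrative, not Time Domain's actual parameters):

```python
# Toy pulse-position modulation: each bit shifts its pulse slightly earlier
# or later than the nominal slot time. All timings are illustrative only.
PULSES_PER_SECOND = 40_000_000
SLOT_S = 1 / PULSES_PER_SECOND     # 25 ns between nominal pulse positions
OFFSET_S = 5e-12                   # a few trillionths of a second, as in the text

def encode(bits):
    """Return the transmit time of each pulse: late = 1, early = 0."""
    return [i * SLOT_S + (OFFSET_S if bit else -OFFSET_S)
            for i, bit in enumerate(bits)]

def decode(times):
    """Recover bits by comparing each arrival against its nominal slot."""
    return [1 if t - i * SLOT_S > 0 else 0 for i, t in enumerate(times)]

message = [1, 0, 1, 1, 0]
assert decode(encode(message)) == message
print("round-trip OK")
```

The receiver only needs to know the precisely defined pulse schedule; whether a given pulse leads or lags that schedule carries the data.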
While this transmission rate is significantly faster than Bluetooth, which currently
tops out at 721 kilobits per second, UWB’s real advantage will come in dense user en-
vironments. Other uses emerge from the fact that pulses moving at the speed of light
travel about one foot in a billionth of a second. Thus, measuring the delay in the ar-
rival of an expected pulse provides an extremely accurate way to determine distance
from transmitter to receiver, making UWB an ideal position locator. And since the
pulses both travel through and are reflected by objects, measuring the time it takes a
reflected pulse to arrive back at the transmitter allows the system to function as radar
that can detect objects behind walls.
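Because a pulse covers about one foot per nanosecond, the radar-style ranging described above reduces to simple arithmetic (an illustrative calculation with a hypothetical echo delay):

```python
# Range from a reflected pulse: light travels ~0.984 feet per nanosecond,
# and the pulse covers the distance twice (out and back).
C_FEET_PER_NS = 0.983571       # speed of light in feet per nanosecond

round_trip_ns = 61.0           # hypothetical measured echo delay
distance_feet = C_FEET_PER_NS * round_trip_ns / 2
print(f"Object is about {distance_feet:.0f} feet away")
```

A 61-nanosecond echo therefore places the reflecting object about 30 feet from the transmitter.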
Time Domain predicts a market for security-related products, such as home radar
systems to detect intruders outside house walls and similar systems that emergency
agencies can use to detect bodies inside buildings. For example, firemen can use UWB
to locate people inside a smoke-filled building where there is no visibility. Time Do-
main lists these applications for UWB:
• Through-wall motion detection and tracking
• In-building personnel and asset tracking
• High-speed local area networks
• Home networks
• Invisible security domes and fences
• Collision-avoidance sensors
• High-precision positioning/tracking systems

Terrestrial fixed wireless (also called broadband wireless). Aside from fiber optics, the most effective option for businesses may come from the terrestrial wireless sector. Broadband wireless is generally packet based and is most appropriate for business customers who are currently using leased-line service but need more bandwidth than T1 service delivers. DSL and broadband wireless offer customers similar transmission rates. The technology connects fiber to rooftop antennas that relay the data from building to building.
Broadband wireless takes less time to deploy than wireline technologies and does
not affect existing infrastructure. Further, the technology can be less expensive than
wireline technologies because deployment involves little construction beyond mount-
ing the antennas and wiring them into the buildings.
Broadband wireless has disadvantages. The weather can be a major problem. In
regions of the country that are subject to heavy rain or snowstorms, wireless services
can be impeded for several days each year. It is also possible that, as broadband wire-
less gains more customers, businesses may suffer from interference caused by others
using the frequency, thus prohibiting the transfer of data. Another barrier has been
finding cooperative landlords who will agree to have base stations (antennas) on their
rooftops. For these wireless networks to be effective, antennas must be aimed directly
at one another and be within a five-mile range. If tree foliage or some other obstruc-
tion blocks the antennas’ line of sight, data cannot be transferred.

Wireless local loop (WLL). Like cellular telephone systems, WLL systems carry voice and data traffic over radio frequencies between local users and the public switched telephone network. However, the emphasis in WLL is on replacing the local loop that connects the subscriber's premises to the telephone network, rather than on providing mobile services.
Fixed WLL has four potential uses:
• To bring telephony to underserved parts of the world
• To provide advanced services to businesses
• To provide an alternative to wireline services for business and residential areas
• To serve as an alternative local loop technology for new market entrants in deregu-
lated markets
WLL systems are being deployed in South America, Asia, Eastern Europe, and
other developing countries without adequate wireline services where WLL systems
offer the advantage of rapid deployment and avoid the cost of burying wires and ca-
bles. In fact, some developing countries use almost exclusively wireless systems. WLL
is attractive particularly where rocky or soggy terrain makes cabled systems difficult
to install. WLL can also satisfy the need to expand the number of connected cus-
tomers quickly.

Multichannel multipoint distribution service (MMDS). MMDS uses microwave frequencies to distribute video and provide WLL and high-bandwidth data transmission. Transmitters send line-of-sight signals to a home antenna with a range of approximately 35 miles at data transmission rates up to 10 Mbps. Developing countries in the
Middle East, Latin America, and Asia Pacific have deployed MMDS because wireless
avoids the high cost of installing fiber or coaxial cable. In North America, demand for
Internet access generates the most MMDS activity.

Local multipoint distribution service (LMDS). LMDS is a two-way digital wireless communications medium that can carry voice, data, and video. The LMDS frequency is well above that of cell phones and radio broadcasts, so there is little chance of interference. LMDS works best when connecting satellite offices and campus buildings
that can be several miles apart. The advantages of the technology include a high level
of scalability and relatively quick, easy, and inexpensive deployment compared with
placing wire or fiber. However, LMDS does have disadvantages. Throughput is lim-
ited to 4.5 Mbps (megabits per second) and LMDS has a short range of around six
miles because of its high frequency. In heavy rain the signal can fade, which reduces
throughput or can totally break the connection. Also, the LMDS signal requires a
line-of-sight link between sender and receiver.

Free-space laser communication. Free-space laser communication transmits data on a beam of light rather than on radio frequencies. The laser directs a beam through the atmosphere between two buildings or other points. Because a laser beam travels in a straight line, it must have a line-of-sight path between the two endpoints. Free-space laser is susceptible to interference.
Fog, special window coatings, flocks of birds, or other moving objects will sometimes
block, interrupt, or slow transmission.
Terabeam Networks’ (www.terabeam.com) technology provides bandwidth up to
1 Gbps, using laser transmitters and detectors operating through office windows, thus
eliminating roof-mounted equipment. Airfiber (www.airfiber.com) has developed a
free-space laser mesh network with built-in redundancy using low-cost, short-range
laser assemblies. The company targets the urban business market. In densely popu-
lated metropolitan areas, free-space building-to-building links (up to 500 meters) cost
a fraction of the cost to install fiber-optic cable.

Table W6.1 The Seven Layers of the OSI Model (page 183)
Layer 1: Physical layer. Transmits raw bits over a communication channel. Its purpose is to provide a physical connection for the transmission of data among network entities and the means by which to activate and deactivate a physical connection.

Layer 2: Data link layer. Provides a reliable means of transmitting data across a physical link; breaks up the input data into data frames sequentially and processes the acknowledgment frames sent back by the receiver.

Layer 3: Network layer. Routes information from one network computer to another; accepts messages from the source host and sees to it that they are directed toward the destination. Computers may be physically located within the same network or within another network that is interconnected in some fashion.

Layer 4: Transport layer. Provides a network-independent transport service to the session layer; accepts data from the session layer, splits it up into smaller units as required, passes these to the network layer, and ensures all pieces arrive correctly at the other end.

Layer 5: Session layer. Provides the user's interface into the network, where the user must negotiate to establish a connection with a process on another machine. Once the connection is established, the session layer can manage the dialogue in an orderly manner.

Layer 6: Presentation layer. Translates messages to and from the format used in the network to a format used at the application layer.

Layer 7: Application layer. Includes activities related to users, such as supporting file transfer, handling messages, and providing security.

THE BENEFITS OF EDI (page 190)

Speed, volume
• EDI enables companies to send and receive large amounts of routine transaction information quickly around the globe in a paperless environment.
• Sales and other information is delivered to manufacturers, shippers, and warehouses almost in real time.
• Once EDI documents are received, they are automatically forwarded to the appropriate department for processing.

Accuracy
• There are very few errors in the transferred data as a result of computer-to-computer data transfer. Information is also consistent.

Collaboration
• Companies can access partners' databases to retrieve and store standard transactions.

Commitment
• EDI fosters true (and strategic) partnership relationships, since it involves a commitment to a long-term investment and the refinement of the system over time.

Profit
• The time for collecting payments can be shortened by several weeks, benefiting the recipients of payments.

Cost savings
• EDI creates a complete paperless transfer processing environment, saving money and increasing efficiency.
• EDI enables a just-in-time environment, which means lower (or no) inventories for manufacturers.
