Network Security: Defense and Firewalls
Once we've covered this, we'll go back and discuss some of the threats that managers and
administrators of computer networks need to confront, and then some tools that can be
used to reduce the exposure to the risks of network computing.
What is a Network?
A ``network'' has been defined[1] as ``any set of interlinking lines resembling a net, a
network of roads; an interconnected system, a network of alliances.'' This definition
suits our purpose well: a computer network is simply a system of interconnected
computers. How they're connected is irrelevant, and as we'll soon see, there are a number
of ways to do this.
An easy way to look at this is to compare this model with something we use daily: the
telephone. In order for you and I to talk when we're out of earshot, we need a device like
a telephone. (In the ISO/OSI model, this is at the application layer.) The telephones, of
course, are useless unless they have the ability to translate the sound into electronic
pulses that can be transferred over wire and back again. (These functions are provided in
layers below the application layer.) Finally, we get down to the physical connection: both
must be plugged into an outlet that is connected to a switch that's part of the telephone
system's network of switches.
If I place a call to you, I pick up the receiver and dial your number. This number
specifies which central office my request should be sent to, and then which phone at
that central office to ring. Once you answer the phone, we begin talking, and our session
has begun. Conceptually, computer networks function exactly the same way.
It isn't important for you to memorize the ISO/OSI Reference Model's layers, but it's
useful to know that they exist, and that each layer cannot work without the services
provided by the layer below it.
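As a rough aid to intuition, here are the seven layers, top to bottom, alongside a loose
mapping back to the telephone analogy above. The analogy column is my own informal gloss,
not part of any standard.

    # The seven ISO/OSI layers, top to bottom, with a loose (informal)
    # mapping back to the telephone analogy from the text.
    OSI_LAYERS = [
        ("Application",  "the conversation you and I are having"),
        ("Presentation", "the language we agree to speak"),
        ("Session",      "the call itself, from answer to hang-up"),
        ("Transport",    "making sure neither of us misses a sentence"),
        ("Network",      "the switches that route the call"),
        ("Data Link",    "the signaling on each stretch of wire"),
        ("Physical",     "the wire and the jack in the wall"),
    ]

    for name, analogy in OSI_LAYERS:
        print(f"{name:<12} {analogy}")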
UUCP
UUCP (Unix-to-Unix CoPy) was originally developed to connect Unix (surprise!) hosts
together. UUCP has since been ported to many different architectures, including PCs,
Macs, Amigas, Apple IIs, VMS hosts, everything else you can name, and even some
things you can't. Additionally, a number of systems have been developed around the
same principles as UUCP.
Batch-Oriented Processing.
UUCP and similar systems are batch-oriented systems: everything that they have to do is
added to a queue, and then at some specified time, everything in the queue is processed.
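The queue-then-drain pattern can be sketched in a few lines. This is not UUCP itself,
just an illustration of the batch-oriented idea; the job names are invented.

    import queue

    # A minimal sketch of batch-oriented processing: work is queued as
    # it arrives, and nothing happens until the whole queue is drained
    # at the scheduled time. (Illustrative only; not real UUCP code.)
    jobs = queue.Queue()

    def enqueue(job):
        jobs.put(job)            # the job just waits in the spool

    def process_queue():         # run at some specified time
        while not jobs.empty():
            print("transferring:", jobs.get())

    enqueue("mail for B!alice")
    enqueue("netnews batch")
    process_queue()              # everything queued so far goes out at once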
Implementation Environment.
UUCP networks are commonly built using dial-up (modem) connections. This doesn't
have to be the case though: UUCP can be used over any sort of connection between two
computers, including an Internet connection.
Building a UUCP network is a simple matter of configuring two hosts to recognize each
other, and know how to get in touch with each other. Adding on to the network is simple;
if hosts called A and B have a UUCP network between them, and C would like to join the
network, then it must be configured to talk to A and/or B. Naturally, anything that C talks
to must be made aware of C's existence before any connections will work. Now, to
connect D to the network, a connection must be established with at least one of the hosts
on the network, and so on. Figure 2 shows a sample UUCP network.
In a UUCP network, users are identified in the format host!userid. The ``!'' character
(pronounced ``bang'' in networking circles) is used to separate hosts and users. A
bangpath is a string of host(s) and a userid like A!cmcurtin or C!B!A!cmcurtin. If I am
a user on host A and you are a user on host E, I might be known as A!cmcurtin and you
as E!you. Because there is no direct link between your host (E) and mine (A), in order for
us to communicate, we need to do so through a host (or hosts!) that has connectivity to
both E and A. In our sample network, C has the connectivity we need. So, to send me a
file, or piece of email, you would address it to C!A!cmcurtin. Or, if you feel like taking
the long way around, you can address me as C!B!A!cmcurtin.
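A couple of hypothetical helper functions make the addressing format concrete; the host
names and routes are the ones from the sample network above.

    # Hypothetical helpers illustrating bangpath addressing.
    def make_bangpath(route, userid):
        """Join a route of hosts and a userid with ``!'' separators."""
        return "!".join(route + [userid])

    def parse_bangpath(address):
        """Split a bangpath back into (route, userid)."""
        *route, userid = address.split("!")
        return route, userid

    # From E, reach cmcurtin on A by relaying through C:
    print(make_bangpath(["C", "A"], "cmcurtin"))   # C!A!cmcurtin
    # ...or take the long way around:
    print(parse_bangpath("C!B!A!cmcurtin"))        # (['C', 'B', 'A'], 'cmcurtin')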
The ``public'' UUCP network is simply a huge worldwide network of hosts connected to
each other.
Popularity.
The public UUCP network has been shrinking in size over the years as inexpensive
Internet connections have become widely available. Additionally, since UUCP connections
are typically made hourly, daily, or weekly, there is a fair bit of delay in getting data from
one user on a UUCP network to a user on the other end of the network. UUCP isn't very
flexible, as it's used simply for copying files (which can be netnews, email, documents,
etc.). Interactive protocols (that make applications such as the World Wide Web possible)
have become much more the norm, and are preferred in most cases.
However, there are still many people whose needs for email and netnews are served quite
well by UUCP, and its integration into the Internet has greatly reduced the amount of
cumbersome addressing that had to be accomplished in times past.
Security.
UUCP, like any other application, has security tradeoffs. Some strong points for its
security are that it is fairly limited in what it can do, and is therefore more difficult to
trick into doing something it shouldn't; that it's been around a long time, and most of its
bugs have been discovered, analyzed, and fixed; and that because UUCP networks are made up
of occasional connections to other hosts, it isn't possible for someone on host E to
directly make contact with host B and take advantage of that connection to do something
naughty.
On the other hand, UUCP typically works by having a system-wide UUCP user account
and password. Any system that has a UUCP connection with another must know the
appropriate password for the uucp or nuucp account. Identifying a host beyond that point
has traditionally been little more than a matter of trusting that the host is who it claims to
be, and that a connection is allowed at that time. More recently, there has been an
additional layer of authentication, whereby both hosts must have the same sequence
number, that is, a number that is incremented each time a connection is made.
Hence, if I run host B, I know the uucp password on host A. If, though, I want to
impersonate host C, I'll need to connect, identify myself as C, hope that I've done so at a
time that A will allow it, and try to guess the correct sequence number for the session.
While this might not be a trivial attack, it isn't considered very secure.
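The sequence-number idea can be sketched in a few lines. Real UUCP implementations keep
this state in their configuration files and differ in detail; the sketch below is a
simplified assumption-laden illustration.

    # A simplified sketch of UUCP-style sequence numbers: each side
    # remembers the count from the last call, and a caller that can't
    # present it is refused. (Illustrative; real UUCP differs in detail.)
    class SequenceCheck:
        def __init__(self):
            self.expected = {}                 # peer name -> next expected number

        def verify(self, peer, claimed):
            if self.expected.get(peer) != claimed:
                return False                   # wrong count: likely an impostor
            self.expected[peer] = claimed + 1  # advance for the next call
            return True

    a = SequenceCheck()
    a.expected["C"] = 1042
    print(a.verify("C", 1042))   # True: the real C knows the count
    print(a.verify("C", 7))      # False: an impersonator has to guess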
The Internet
Internet: This is a word that I've heard way too often in the last few years. Movies, books,
newspapers, magazines, television programs, and practically every other sort of media
imaginable have dealt with the Internet recently.
The Internet is the world's largest network of networks. When you want to access the
resources offered by the Internet, you don't really connect to the Internet; you connect to
a network that is eventually connected to the Internet backbone, a network of extremely
fast (and incredibly overloaded!) network components. This is an important point: the
Internet is a network of networks -- not a network of hosts.
A simple network can be constructed using the same protocols that the Internet
uses, without actually connecting it to anything else. Such a basic network is shown in
Figure 3.
If you have a connection ``to the Internet'' through a local ISP, you are actually
connecting your computer to one of their networks, which is connected to another, and so
on. To use a service from my host, such as a web server, you would tell your web
browser to connect to my host. Underlying services and protocols would send packets
(small datagrams) with your query to your ISP's network, then to a network it's
connected to, and so on, until they found a path to my employer's backbone, and to the exact
network my host is on. My host would then respond appropriately, and the same would
happen in reverse: packets would traverse all of the connections until they found their
way back to your computer, and you would be looking at my web page.
In Figure 4, the network shown in Figure 3 is designated ``LAN 1'' and shown in the
bottom-right of the picture. This shows how the hosts on that network are provided
connectivity to other hosts on the same LAN, within the same company, outside of the
company but in the same ISP cloud, and then from another ISP somewhere on the
Internet.
Open Design
One of the most important features of TCP/IP isn't a technological one: The protocol is an
``open'' protocol, and anyone who wishes to implement it may do so freely. Engineers
and scientists from all over the world participate in the IETF (Internet Engineering Task
Force) working groups that design the protocols that make the Internet work. Their time
is typically donated by their companies, and the result is work that benefits everyone.
IP
As noted, IP is a ``network layer'' protocol. This is the layer that allows the hosts to
actually ``talk'' to each other. It handles things like carrying datagrams, mapping the
Internet address (such as 10.2.3.4) to a physical network address (such as
08:00:69:0a:ca:8f), and routing, which takes care of making sure that all of the devices
that have Internet connectivity can find their way to each other.
Understanding IP
IP has a number of very important features which make it an extremely robust and
flexible protocol. For our purposes, though, we're going to focus on the security of IP, or
more specifically, the lack thereof.
Attacks Against IP
A number of attacks against IP are possible. Typically, these exploit the fact that IP does
not provide a robust mechanism for authentication, that is, for proving that a packet came
from where it claims it did. A packet simply claims to originate from a given address, and
there isn't a way to be sure that the host that sent the packet is telling the truth. This isn't
necessarily a weakness, per se, but it is an important point, because it means that the
facility of host authentication has to be provided at a higher layer on the ISO/OSI
Reference Model. Today, applications that require strong host authentication (such as
cryptographic applications) do this at the application layer.
IP Spoofing.
This is where one host claims to have the IP address of another. Since many systems
(such as router access control lists) define which packets may and which packets may not
pass based on the sender's IP address, this is a useful technique to an attacker: he can send
packets to a host, perhaps causing it to take some sort of action.
Additionally, some applications allow login based on the IP address of the person making
the request (such as the Berkeley r-commands)[2]. These are both good examples of how
trusting untrustable layers can provide security that is -- at best -- weak.
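To make the weakness concrete, here is a deliberately naive sketch of address-based
access control in the spirit of the r-commands. The ``trusted'' network prefix is
invented for illustration.

    import ipaddress

    # A deliberately naive sketch of address-based access control.
    # The "trusted" prefix is invented for illustration.
    TRUSTED = ipaddress.ip_network("10.2.3.0/24")

    def allow_login(claimed_source):
        # This trusts the packet's source field at face value; nothing
        # here proves the packet actually came from that address, which
        # is exactly what IP spoofing exploits.
        return ipaddress.ip_address(claimed_source) in TRUSTED

    print(allow_login("10.2.3.4"))     # True for the real host -- and
                                       # equally True for a spoofer
    print(allow_login("192.0.2.9"))    # False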
IP Session Hijacking.
This is a relatively sophisticated attack, first described by Steve Bellovin[3]. It is very
dangerous, however, because there are now toolkits available in the underground
community that allow otherwise unskilled bad-guy-wannabes to perpetrate this attack. IP
session hijacking is an attack whereby a user's session is taken over, coming under the
control of the attacker. If the user was in the middle of reading email, the attacker is
looking at the email, and can then execute any commands he wishes as the attacked user.
The attacked user simply sees his session dropped, and may simply log in again, perhaps
not even noticing that the attacker is still logged in and doing things.
To describe the attack, let's return to our large network of networks in Figure 4.
In this attack, a user on host A is carrying on a session with host G. Perhaps this is a
telnet session, where the user is reading his email, or using a Unix shell account from
home. Somewhere in the network between A and G sits host H, which is run by a naughty
person. The naughty person on host H watches the traffic between A and G, and runs a tool
which starts to impersonate A to G, and at the same time tells A to shut up, perhaps trying
to convince it that G is no longer on the net (which might happen in the event of a crash,
or major network outage). After a few seconds of this, if the attack is successful, the
naughty person has ``hijacked'' the session of our user. Anything that the user can do
legitimately can now be done by the attacker, illegitimately. As far as G knows, nothing
has happened.
TCP
TCP is a transport-layer protocol. It needs to sit on top of a network-layer protocol, and
was designed to ride atop IP. (Just as IP was designed to carry, among other things, TCP
packets.) Because TCP and IP were designed together, and wherever you have one you
typically have the other, the entire suite of Internet protocols is known collectively as
``TCP/IP.'' TCP itself has a number of important features that we'll cover briefly.
Probably the most important is guaranteed packet delivery. Host A sending packets to
host B expects to get acknowledgments back for each packet. If B does not send an
acknowledgment within a specified amount of time, A will resend the packet.
Applications on host B will expect a data stream from a TCP session to be complete, and
in order. As noted, if a packet is missing, it will be resent by A, and if packets arrive out
of order, B will arrange them in proper order before passing the data to the requesting
application.
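A toy stop-and-wait loop illustrates the acknowledge-or-resend behavior. Real TCP is far
more sophisticated (sliding windows, adaptive timers); the loss rate below is simulated,
and all the numbers are invented.

    import random

    # A toy stop-and-wait sketch of TCP's retransmission idea: keep
    # resending a packet until it is acknowledged. Loss is simulated.
    def unreliable_send(seq):
        return random.random() < 0.7    # "ACK received" 70% of the time

    def send_reliably(seq, max_tries=10):
        for attempt in range(1, max_tries + 1):
            if unreliable_send(seq):
                print(f"packet {seq} acknowledged on try {attempt}")
                return True
            print(f"packet {seq}: no ACK, resending")
        return False

    for seq in range(3):
        send_reliably(seq)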
This is well suited to a number of applications, such as a telnet session. A user
wants to be sure every keystroke is received by the remote host, and that it gets every
packet sent back, even if this means occasional slight delays in responsiveness while a
lost packet is resent, or while out-of-order packets are rearranged.
It is not well suited to other applications, however, such as streaming audio or video.
In these, it doesn't really matter if a packet is lost (a lost packet in a stream of 100 won't
be distinguishable) but it does matter if they arrive late (i.e., because of a host resending a
packet presumed lost), since the data stream will be paused while the lost packet is being
resent. Once the lost packet is received, it will be put in the proper slot in the data stream,
and then passed up to the application.
UDP
UDP (User Datagram Protocol) is a simple transport-layer protocol. It does not provide
the same features as TCP, and is thus considered ``unreliable.'' Again, although this is
unsuitable for some applications, it does have much more applicability in other
applications than the more reliable and robust TCP.
One of the things that makes UDP nice is its simplicity. Because it doesn't need to keep
track of the sequence of packets, whether they ever made it to their destination, etc., it has
lower overhead than TCP. This is another reason why it's more suited to streaming-data
applications: there's less screwing around that needs to be done with making sure all the
packets are there, in the right order, and that sort of thing.
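The difference shows up even at the socket interface: a UDP sender is a few
fire-and-forget lines. The loopback address and port below are arbitrary stand-ins.

    import socket

    # Fire-and-forget UDP: no handshake, no ACKs, no sequencing.
    # The loopback address and port are arbitrary stand-ins.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"audio frame 1", ("127.0.0.1", 9999))
    sock.sendto(b"audio frame 2", ("127.0.0.1", 9999))
    sock.close()
    # If either datagram is lost, nothing resends it; a streaming
    # application would simply play on without it.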
Risk Management: The Game of Security
Security is a game of tradeoffs between two extremes: total security and total access.
This is no different from our daily lives. We constantly make decisions about what risks
we're willing to accept. When we get in a car and drive to work, there's a certain risk that
we're taking. It's possible that something completely out of control will cause us to
become part of an accident on the highway. When we get on an airplane, we're accepting
the level of risk involved as the price of convenience. However, most people have a
mental picture of what an acceptable risk is, and won't go beyond that in most
circumstances. If I happen to be upstairs at home, and want to leave for work, I'm not
going to jump out the window. Yes, it would be more convenient, but the risk of injury
outweighs the advantage of convenience.
Every organization needs to decide for itself where between the two extremes of total
security and total access they need to be. A policy needs to articulate this, and then define
how that will be enforced with practices and such. Everything that is done in the name of
security, then, must enforce that policy uniformly.
Types And Sources Of Network Threats
Now, we've covered enough background information on networking that we can actually
get into the security aspects of all of this. First of all, we'll get into the types of threats
there are against networked computers, and then some things that can be done to protect
yourself against various threats.
Denial-of-Service
DoS (Denial-of-Service) attacks are probably the nastiest, and most difficult to address.
They're the nastiest because they're very easy to launch, difficult (sometimes
impossible) to track, and it isn't easy to refuse the requests of the attacker without also
refusing legitimate requests for service.
The premise of a DoS attack is simple: send more requests to the machine than it can
handle. There are toolkits available in the underground community that make this a
simple matter of running a program and telling it which host to blast with requests. The
attacker's program simply makes a connection on some service port, perhaps forging the
packet's header information that says where the packet came from, and then dropping the
connection. If the host is able to answer 20 requests per second, and the attacker is
sending 50 per second, obviously the host will be unable to service all of the attacker's
requests, much less any legitimate requests (hits on the web site running there, for
example).
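The arithmetic is easy to check. The legitimate-traffic figure below is an assumption
added for the example; only the 20-versus-50 numbers come from the text.

    # Back-of-the-envelope version of the numbers above. The legitimate
    # request rate is an assumed figure, added for illustration.
    capacity = 20        # requests/second the host can service
    attack = 50          # forged requests/second from the attacker
    legit = 5            # real requests/second (assumed)

    served = min(1.0, capacity / (attack + legit))
    print(f"fraction of all requests served: {served:.0%}")
    print(f"legitimate requests served/sec: {legit * served:.1f}")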
Such attacks were fairly common in late 1996 and early 1997, but are now becoming less
popular.
Some things that can be done to reduce the risk of being stung by a denial of service
attack include making sure that visible-to-the-world servers aren't running too close to
capacity, using packet filtering to prevent obviously forged packets from entering your
network space, and keeping up-to-date on security-related patches for your hosts'
operating systems. Obviously forged packets would include those that claim to come from
your own hosts, from addresses reserved for private networks as defined in RFC 1918[4],
and from the loopback network (127.0.0.0); a small sketch of such a filter follows.
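Here is a minimal sketch of such a filter. The ``our own hosts'' prefix is invented, and
a real deployment would do this in the border router's ACLs rather than on a host.

    import ipaddress

    # Sources that should never appear on packets arriving from outside:
    # our own prefix (invented here), the RFC 1918 private blocks, and
    # the loopback network.
    BOGON_SOURCES = [
        ipaddress.ip_network("203.0.113.0/24"),  # stand-in for "our own" block
        ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918
        ipaddress.ip_network("172.16.0.0/12"),   # RFC 1918
        ipaddress.ip_network("192.168.0.0/16"),  # RFC 1918
        ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ]

    def drop_inbound(source):
        addr = ipaddress.ip_address(source)
        return any(addr in net for net in BOGON_SOURCES)

    print(drop_inbound("10.1.2.3"))       # True: obviously forged, drop it
    print(drop_inbound("198.51.100.7"))   # False: pass it along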
Unauthorized Access
``Unauthorized access'' is a very high-level term that can refer to a number of different
sorts of attacks. The goal of these attacks is to access some resource that your machine
should not provide the attacker. For example, a host might be a web server, and should
provide anyone with requested web pages. However, that host should not provide
command shell access without being sure that the person making such a request is
someone who should get it, such as a local administrator.
It's obviously undesirable for an unknown and untrusted person to be able to execute
commands on your server machines. There are two main classifications of the severity of
this problem: normal user access, and administrator access. A normal user can do a
number of things on a system (such as read files, mail them to other people, etc.) that an
attacker should not be able to do. This might, then, be all the access that an attacker
needs. On the other hand, an attacker might wish to make configuration changes to a host
(perhaps changing its IP address, putting a start-up script in place to cause the machine to
shut down every time it's started, or something similar). In this case, the attacker will
need to gain administrator privileges on the host.
Confidentiality Breaches
We need to examine the threat model: what is it that you're trying to protect yourself
against? There is certain information that could be quite damaging if it fell into the hands
of a competitor, an enemy, or the public. In these cases, it's possible that compromise of a
normal user's account on the machine can be enough to cause damage (perhaps in the
form of bad PR, or of information that can be used against the company, etc.).
While many of the perpetrators of these sorts of break-ins are merely thrill-seekers
interested in nothing more than seeing a shell prompt for your computer on their screen,
there are those who are more malicious, as we'll consider next. (Additionally, keep in
mind that it's possible that someone who is normally interested in nothing more than the
thrill could be persuaded to do more: perhaps an unscrupulous competitor is willing to
hire such a person to hurt you.)
Destructive Behavior
Among the destructive sorts of break-ins and attacks, there are two major categories.
Data Diddling.
The data diddler is likely the worst sort, since the fact of a break-in might not be
immediately obvious. Perhaps he's toying with the numbers in your spreadsheets, or
changing the dates in your projections and plans. Maybe he's changing the account
numbers for the auto-deposit of certain paychecks. In any case, rare is the case when
you'll come in to work one day, and simply know that something is wrong. An
accounting procedure might turn up a discrepancy in the books three or four months after
the fact. Trying to track the problem down will certainly be difficult, and once that
problem is discovered, how can any of your numbers from that time period be trusted?
How far back do you have to go before you think that your data is safe?
Data Destruction.
Some of those who perpetrate attacks are simply twisted jerks who like to delete things. In
these cases, the impact on your computing capability -- and consequently your business --
can be nothing less than if a fire or other disaster caused your computing equipment to be
completely destroyed.
In order to be able to adequately address security, all possible avenues of entry must be
identified and evaluated. The security of that entry point must be consistent with your
stated policy on acceptable risk levels.
Lessons Learned
From looking at the sorts of attacks that are common, we can divine a relatively short list
of high-level practices that can help prevent security disasters, and help control the
damage in the event that preventative measures were unsuccessful in warding off an
attack.
Hope you have backups.
This isn't just a good idea from a security point of view. Operational requirements should
dictate the backup policy, and this should be closely coordinated with a disaster recovery
plan, such that if an airplane crashes into your building one night, you'll be able to carry
on your business from another location. Similarly, these can be useful in recovering your
data in the event of an electronic disaster: a hardware failure, or a breakin that changes or
otherwise damages your data.
Don't put data where it doesn't need to be.
Although this should go without saying, this doesn't occur to lots of folks. As a result,
information that doesn't need to be accessible from the outside world sometimes is, and
this can needlessly increase the severity of a break-in dramatically.
Stay current with relevant operating system patches.
Be sure that someone who knows what you've got is watching the vendors' security
advisories. Exploiting old bugs is still one of the most common (and most effective!)
means of breaking into systems.
Watch for relevant security advisories.
In addition to watching what the vendors are saying, keep a close watch on groups like
CERT and CIAC. Make sure that at least one person (preferably more) is subscribed to
these mailing lists.
Have someone on staff be familiar with security practices.
Having at least one person who is charged with keeping abreast of security developments
is a good idea. This need not be a technical wizard, but could be someone who is simply
able to read advisories issued by various incident response teams, and keep track of
various problems that arise. Such a person would then be a wise one to consult with on
security related issues, as he'll be the one who knows if web server software version
such-and-such has any known problems, etc.
This person should also know the ``dos'' and ``don'ts'' of security, from reading such
things as the ``Site Security Handbook.''[5]
Firewalls
As we've seen in our discussion of the Internet and similar networks, connecting an
organization to the Internet provides a two-way flow of traffic. This is clearly undesirable
in many organizations, as proprietary information is often displayed freely within a
corporate intranet (that is, a TCP/IP network, modeled after the Internet, that only works
within the organization).
In order to provide some level of separation between an organization's intranet and the
Internet, firewalls have been employed. A firewall is simply a group of components that
collectively form a barrier between two networks.
A number of terms specific to firewalls and networking are going to be used throughout
this section, so let's introduce them all together.
Bastion host.
A general-purpose computer used to control access between the internal (private)
network (intranet) and the Internet (or any other untrusted network). Typically,
these are hosts running a flavor of the Unix operating system that has been
customized to reduce its functionality to only what is necessary to support its
functions. Many of the general-purpose features have been turned
off, and in many cases, completely removed, in order to improve the security of
the machine.
Router.
A special-purpose computer for connecting networks together. Routers also
handle certain functions, such as routing, or managing the traffic on the networks
they connect.
Access Control List (ACL).
Many routers now have the ability to selectively perform their duties, based on a
number of facts about a packet that comes to it. This includes things like
origination address, destination address, destination service port, and so on. These
can be employed to limit the sorts of packets that are allowed to come in and go
out of a given network.
Demilitarized Zone (DMZ).
The DMZ is a critical part of a firewall: it is a network that is neither part of the
untrusted network, nor part of the trusted network. But, this is a network that
connects the untrusted to the trusted. The importance of a DMZ is tremendous:
someone who breaks into your network from the Internet should have to get
through several layers in order to successfully do so. Those layers are provided by
various components within the DMZ.
Proxy.
This is the process of having one host act on behalf of another. A host that has the
ability to fetch documents from the Internet might be configured as a proxy server,
and hosts on the intranet might be configured to be proxy clients. In this
situation, when a host on the intranet wishes to fetch the
<http://www.interhack.net/> web page, for example, the browser will make a
connection to the proxy server and request the given URL. The proxy server will
fetch the document and return the result to the client. In this way, all hosts on the
intranet are able to access resources on the Internet without having the ability to
talk directly to the Internet.
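In code, the client side of this arrangement looks something like the sketch below. The
proxy host name and port are invented, and the actual fetch is left commented out since it
needs a live proxy to succeed.

    import urllib.request

    # Sketch of a proxy client: hand every HTTP request to the proxy
    # server instead of talking to the Internet directly. The proxy
    # host and port below are invented for illustration.
    proxy = urllib.request.ProxyHandler(
        {"http": "http://proxy.example.com:3128"})
    opener = urllib.request.build_opener(proxy)

    # With a live proxy configured, the client-side fetch would be:
    # response = opener.open("http://www.interhack.net/")
    # print(response.read()[:200])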
Types of Firewalls
There are three basic types of firewalls, and we'll consider each of them.
Application Gateways
The first firewalls were application gateways, and are sometimes known as proxy
gateways. These are made up of bastion hosts that run special software to act as a proxy
server. This software runs at the Application Layer of our old friend the ISO/OSI
Reference Model, hence the name. Clients behind the firewall must be proxitized (that is,
must know how to use the proxy, and be configured to do so) in order to use Internet
services. Traditionally, these have been the most secure, because they don't allow
anything to pass by default, but need to have the programs written and turned on in order
to begin passing traffic.
These are also typically the slowest, because more processes need to be started in order to
have a request serviced. Figure 6 shows an application gateway.
Packet Filtering
Packet filtering is a technique whereby routers have ACLs (Access Control Lists) turned
on. By default, a router will pass all traffic sent to it, and will do so without any sort of
restrictions. Employing ACLs is a method for enforcing your security policy with regard
to what sorts of access you allow the outside world to have to your internal network, and
vice versa.
There is less overhead in packet filtering than with an application gateway, because the
feature of access control is performed at a lower ISO/OSI layer (typically, the transport or
session layer). Due to the lower overhead and the fact that packet filtering is done with
routers, which are specialized computers optimized for tasks related to networking, a
packet filtering gateway is often much faster than its application layer cousins. Figure 6
shows a packet filtering gateway.
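A first-match rule table captures the flavor of ACL-based filtering. All the addresses,
ports, and rules below are invented; real routers express this in their own configuration
languages, not Python.

    import ipaddress

    # A sketch of first-match ACL filtering on a packet's source,
    # destination, and destination port. All rules here are invented.
    RULES = [
        # (action, source net, destination net, destination port)
        ("permit", "0.0.0.0/0", "203.0.113.80/32", 80),  # public web server
        ("deny",   "0.0.0.0/0", "203.0.113.0/24",  23),  # no telnet from outside
    ]

    def decide(src, dst, dport, default="deny"):
        for action, snet, dnet, port in RULES:
            if (ipaddress.ip_address(src) in ipaddress.ip_network(snet)
                    and ipaddress.ip_address(dst) in ipaddress.ip_network(dnet)
                    and dport == port):
                return action
        return default           # whatever the site's policy dictates

    print(decide("198.51.100.9", "203.0.113.80", 80))  # permit
    print(decide("198.51.100.9", "203.0.113.5", 23))   # deny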
Because we're working at a lower level, supporting new applications either comes
automatically, or is a simple matter of allowing a specific packet type to pass through the
gateway. (Not that the mere possibility of something makes it a good idea;
opening things up this way might very well reduce your level of security below
what your policy allows.)
There are problems with this method, though. Remember, TCP/IP has absolutely no
means of guaranteeing that the source address is really what it claims to be. As a result,
we have to use layers of packet filters in order to localize the traffic. We can't get all the
way down to the actual host, but with two layers of packet filters, we can differentiate
between a packet that came from the Internet and one that came from our internal
network. We can identify which network the packet came from with certainty, but we
can't get more specific than that.
Hybrid Systems
In an attempt to marry the security of the application layer gateways with the flexibility
and speed of packet filtering, some vendors have created systems that use the principles
of both.
In some of these systems, new connections must be authenticated and approved at the
application layer. Once this has been done, the remainder of the connection is passed
down to the session layer, where packet filters watch the connection to ensure that only
packets that are part of an ongoing (already authenticated and approved) conversation are
being passed.
Other possibilities include using both packet filtering and application layer proxies. The
benefits here include providing a measure of protection for your machines that
provide services to the Internet (such as a public web server), as well as providing the
security of an application layer gateway to the internal network. Additionally, using this
method, an attacker, in order to get to services on the internal network, will have to break
through the access router, the bastion host, and the choke router.
So, what's best for me?
Such gauges as market share, certification, and the like are no guarantees of security or
quality. Taking a little bit of time to talk to some knowledgeable folks can go a long way
in providing you a comfortable level of security between your private network and the
big, bad Internet.
Additionally, it's important to note that many consultants these days have become much
less the advocate of their clients, and more of an extension of the vendor. Ask any
consultants you talk to about their vendor affiliations, certifications, and whatnot. Ask
what difference it makes to them whether you choose one product over another, and vice
versa. And then ask yourself if a consultant who is certified in technology XYZ is going
to provide you with competing technology ABC, even if ABC best fits your needs.
Single Points of Failure
Many ``firewalls'' are sold as a single component: a bastion host, or some other black box
that you plug your networks into and get a warm-fuzzy, feeling safe and secure. The term
``firewall'' refers to a number of components that collectively provide the security of the
system. Any time there is only one component paying attention to what's going on
between the internal and external networks, an attacker has only one thing to break (or
fool!) in order to gain complete access to your internal networks.
See the Internet Firewalls FAQ for more details on building and maintaining firewalls.
Secure Modems; Dial-Back Systems
There are some remote access systems that have the feature of a two-part procedure to
establish a connection. The first part is the remote user dialing into the system, and
providing the correct userid and password. The system will then drop the connection, and
call the authenticated user back at a known telephone number. Once the remote user's
system answers that call, the connection is established, and the user is on the network.
This works well for folks working at home, but can be problematic for users wishing to
dial in from hotel rooms and such when on business trips.
Other possibilities include one-time password schemes, where the user enters his userid,
and is presented with a ``challenge,'' a string of between six and eight numbers. He types
this challenge into a small device that he carries with him that looks like a calculator. He
then presses enter, and a ``response'' is displayed on the LCD screen. The user types the
response, and if all is correct, the login will proceed. These are useful devices for solving
the problem of good passwords, without requiring dial-back access. However, these have
their own problems, as they require the user to carry them, and they must be tracked,
much like building and office keys.
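The challenge/response computation can be sketched with a keyed hash. Real tokens use
their own (often proprietary) algorithms, so the HMAC construction, the shared secret,
and the sample challenge below are purely illustrative assumptions.

    import hashlib
    import hmac

    # Illustrative challenge/response: both the token and the server
    # derive a short response from a shared secret and the challenge.
    # Real tokens use their own algorithms; HMAC-SHA1 is a stand-in.
    SECRET = b"shared-token-secret"        # invented for the example

    def token_response(challenge: str) -> str:
        digest = hmac.new(SECRET, challenge.encode(), hashlib.sha1)
        return digest.hexdigest()[:8]      # the string shown on the LCD

    challenge = "40297153"                 # issued at login time
    print(token_response(challenge))       # the user types this back
    # The server computes the same value and compares.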
No doubt many other schemes exist. Take a look at your options, and find out how the
vendors' offerings can help you enforce your security policy effectively.
Crypto-Capable Routers
A feature that is being built into some routers is the ability to use session encryption
between specified routers. Because traffic traveling across the Internet can be seen by
people in the middle who have the resources (and time) to snoop around, these are
advantageous for providing connectivity between two sites in such a way that the routes
between them are secure.
See the Snake Oil FAQ [6] for a description of cryptography, ideas for evaluating
cryptographic products, and how to determine which will most likely meet your needs.
Virtual Private Networks
One tempting way to connect two offices is simply to route their traffic to each other
over the open Internet. The danger in doing this, of course, is that there is no privacy on
that channel, and it's difficult to provide the other office access to ``internal''
resources without providing those resources to everyone on the Internet.
VPNs provide the ability for two offices to communicate with each other in such a way
that it looks like they're directly connected over a private leased line. The session
between them, although going over the Internet, is private (because the link is encrypted),
and the link is convenient, because each can see the other's internal resources without
showing them off to the entire world.
A number of firewall vendors are including the ability to build VPNs in their offerings,
either directly with their base product, or as an add-on. If you have need to connect
several offices together, this might very well be the best way to do it.