In This Issue: September 2008 Volume 11, Number 3
If you are reading the printed version of this journal you will notice a
subtle change in the paper. This issue is printed on an uncoated stock,
specifically Exact® Offset Opaque White 60#, a recycled paper made
by Wausau Paper Corporation. This paper is slightly thinner, and thus
lighter, than the paper we have been using. It is also less reflective and
easier to write notes on. We invite your feedback on this paper as we
experiment with various solutions to reduce our carbon footprint. As
always, send your comments to: [email protected]

GMPLS and the Optical Internet

One of the major concerns in the Internet-based information
society today is the tremendous demand for more and more
bandwidth. Optical communication technology has the po-
tential for meeting the emerging needs of obtaining information at
much faster yet more reliable rates because of its potentially limitless
capabilities—huge bandwidth (nearly 50 terabits per second[1]), low
signal distortion, low power requirement, and low cost. The chal-
lenge is to turn the promise of optical networking into reality to meet
our Internet communication demands for the next decade. With the
deployment of Dense Wavelength Division Multiplexing (DWDM)
technology, a new and very crucial milestone is being reached in net-
work evolution. The speed and capacity of such wavelength switched
networks—with hundreds of channels per fiber strand—seem to be
more than adequate to satisfy the medium- to long-term connectiv-
ity demands. In this scenario, carriers need powerful, commercially
viable and scalable devices and control plane technologies that can
dynamically manage traffic demands and balance the network load
on the various fiber links, wavelengths, and switching nodes so that
none of these components is over- or underused.
This process of adaptively mapping traffic flows onto the physical to-
pology of a network and allocating resources to these flows—usually
referred to as traffic engineering—is one of the most difficult tasks
facing Internet backbone providers today. Generalized Multiprotocol
Label Switching (GMPLS) is the most promising technology for this task. GMPLS
will play a critical role in future IP over pure optical networks by pro-
viding the necessary bridges between the IP and optical layers to
deliver effective traffic-engineering features and allow for interoper-
able and scalable parallel growth in the IP and photonic dimension.
The GMPLS control plane technology, when fully available in next-
generation optical switching devices, will support all the needed
traffic-engineering functions and enable a variety of protection and
restoration capabilities, while simplifying the integration of new pho-
tonic switches and existing label switching routers.
[Figure: Optical cross-connect architecture, showing input ports, WDM demultiplexers, wavelength converters, optical amplifiers, multiplexers, and output ports.]
This architecture is transparent; that is, the optical signal never
needs to be converted to an electrical signal, which means the archi-
tecture can support any protocol and any data rate. Hence, possible
upgrades in the wavelength transport capacity can be accommodated
at no extra cost. Furthermore, this architecture decreases the cost
because it involves the use of fewer devices than the other architec-
tures. In addition, transparent wavelength conversion removes the
wavelength-continuity constraint on switched paths. In this way the effective
switching capacity of the OXC is increased, leading to further cost reduction. First-generation
OXCs require manual configuration. Clearly, an automatic switching
capability allowing optical nodes to dynamically modify the network
topology based on changing traffic demand is highly desirable.
GMPLS Interfaces
GMPLS encompasses control plane signaling for multiple interface
types. The diversity of controlling not only switched packets and
cells but also TDM network traffic and optical network components
makes GMPLS flexible enough to position itself in the direct migra-
tion path from electronic to all-optical network switching. The five
main interface types supported by GMPLS follow:
• Packet Switching Capable (PSC)—These interfaces recognize
packet boundaries and can forward packets based on the IP header
or a standard MPLS “shim” header.
• Layer 2 Switch-Capable (L2SC)—These interfaces recognize frame
and cell headers and can forward data based on the content of the
frame or cell header (for example, an ATM LSR that forwards data
based on its Virtual Path Identifier/Virtual Circuit Identifier (VPI/
VCI) value, or Ethernet bridges that forward the data based on the
MAC header).
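In addition to PSC and L2SC interfaces, the GMPLS architecture also defines Time-Division Multiplex Capable (TDM), Lambda Switch Capable (LSC), and Fiber-Switch Capable (FSC) interfaces. The short Python sketch below summarizes the five classes and the forwarding granularity each one switches on; it is purely illustrative, and the names are standard GMPLS terminology rather than code from any GMPLS implementation.

from enum import Enum

class GmplsInterfaceType(Enum):
    """The five GMPLS interface switching classes."""
    PSC = "Packet Switch Capable"               # forwards on IP or MPLS shim header
    L2SC = "Layer 2 Switch Capable"             # forwards on frame/cell header (VPI/VCI, MAC)
    TDM = "Time-Division Multiplex Capable"     # forwards on SONET/SDH timeslot
    LSC = "Lambda Switch Capable"               # forwards on wavelength (lambda)
    FSC = "Fiber Switch Capable"                # forwards on physical fiber or port

# What a node of each class uses as its "label" when switching traffic:
FORWARDING_GRANULARITY = {
    GmplsInterfaceType.PSC: "packet (shim label)",
    GmplsInterfaceType.L2SC: "frame or cell",
    GmplsInterfaceType.TDM: "timeslot",
    GmplsInterfaceType.LSC: "wavelength",
    GmplsInterfaceType.FSC: "fiber or port",
}

if __name__ == "__main__":
    for iface in GmplsInterfaceType:
        print(f"{iface.name}: {iface.value} -> switches on {FORWARDING_GRANULARITY[iface]}")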
Generalized Label
GMPLS defines several new forms of label—the generalized label
objects. These objects include the generalized label request, the gen-
eralized label, the explicit label control, and the protection flag. The
generalized label can be used to represent timeslots, wavelengths,
wavebands, or space-division multiplexed positions.
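As a rough illustration of this idea, the following sketch models a generalized label request and a few possible label interpretations as simple Python data structures. The field names are invented for illustration and do not reproduce the object encodings defined in the GMPLS signaling specifications.

from dataclasses import dataclass
from typing import Union

@dataclass
class PacketLabel:
    shim_label: int            # conventional 20-bit MPLS label

@dataclass
class TimeslotLabel:
    slot: int                  # TDM timeslot within the multiplex

@dataclass
class LambdaLabel:
    wavelength_nm: float       # wavelength (or waveband) identifier

@dataclass
class PortLabel:
    fiber_port: int            # space-division multiplexed position (port/fiber)

# A generalized label is any one of these interpretations; its meaning is
# fixed by the encoding and switching type carried in the label request.
GeneralizedLabel = Union[PacketLabel, TimeslotLabel, LambdaLabel, PortLabel]

@dataclass
class GeneralizedLabelRequest:
    lsp_encoding: str          # e.g. "packet", "sonet_sdh", "lambda", "fiber"
    switching_type: str        # which interface class must switch this LSP
    payload_id: str            # what the LSP will carry

request = GeneralizedLabelRequest("lambda", "LSC", "IP")
assigned: GeneralizedLabel = LambdaLabel(wavelength_nm=1550.12)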
[Figure: Label/Lambda Switched Engineered Paths, showing multiple G-LSPs (G-LSP 2, G-LSP 3).]
Both RSVP and CR-LDP can be used to reserve a single wavelength for
a light path if the wavelength is known in advance. These protocols
can also be modified to incorporate wavelength selection functions
into the reservation process[7]. In RSVP, signaling takes place between
the source and destination nodes. The signaling messages may con-
tain information such as QoS requirements for the carried traffic and
label requests for assigning labels at intermediate nodes that reserve
the appropriate resources for the path. CR-LDP uses TCP sessions
between nodes in order to provide a hop-by-hop reliable distribution
of control messages, indicating the route and the required traffic pa-
rameters for the route. Each intermediate node reserves the required
resources, allocates a label, and sets up its forwarding table before
backward signaling to the previous node.
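The hop-by-hop behavior described above can be illustrated with a toy model in which each node along an explicit route reserves a free wavelength and installs a forwarding entry before signaling back to the previous hop. This is a simplified sketch of the process, not an implementation of RSVP or CR-LDP.

# Toy model of hop-by-hop resource reservation along an explicit route,
# in the spirit of RSVP/CR-LDP label distribution (greatly simplified).

class OpticalNode:
    def __init__(self, name, wavelengths):
        self.name = name
        self.free = set(wavelengths)   # wavelengths still available on the outbound link
        self.forwarding = {}           # lsp_id -> reserved wavelength ("label")

    def reserve(self, lsp_id, requested):
        """Reserve a wavelength for the LSP and install a forwarding entry."""
        if requested not in self.free:
            raise RuntimeError(f"{self.name}: wavelength {requested} unavailable")
        self.free.remove(requested)
        self.forwarding[lsp_id] = requested
        return requested

def signal_path(route, lsp_id, wavelength):
    """Walk the route destination-to-source, reserving resources hop by hop,
    mimicking the backward label distribution described in the text."""
    for node in reversed(route):
        label = node.reserve(lsp_id, wavelength)
        print(f"{node.name} reserved lambda {label} for {lsp_id}")

route = [OpticalNode(n, wavelengths=[1550.12, 1550.92]) for n in ("A", "B", "C")]
signal_path(route, lsp_id="G-LSP-1", wavelength=1550.12)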
Although LMP assumes the messages are IP encoded, it does not dic-
tate the actual transport mechanism used for the control channel.
However, the control channel must terminate on the same two nodes
that the bearer channels span. Therefore, this protocol can be im-
plemented on any OXC, regardless of the internal switching fabric.
A requirement for LMP is that each link has an associated bidirec-
tional control channel and that free bearer channels must be opaque
(that is, able to be terminated); however, when a bearer channel is
allocated, it may become transparent. Note that this requirement is
trivial for optical cross-connects with electronic switching planes, but
is an added restriction for photonic switches.
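The relationship between the control channel and the bearer channels can be captured in a small data model, sketched below with invented field names; it illustrates the preconditions described above rather than the actual LMP message formats.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BearerChannel:
    channel_id: int
    allocated: bool = False      # carries user traffic once allocated
    terminable: bool = True      # can this node terminate (inspect) the channel?

@dataclass
class TELink:
    local_node: str
    remote_node: str
    control_channel_up: bool     # bidirectional control channel between the same two nodes
    bearers: List[BearerChannel] = field(default_factory=list)

    def satisfies_lmp_preconditions(self) -> bool:
        """A link is manageable if its control channel is up and every *free*
        bearer channel is opaque, that is, able to be terminated."""
        free_bearers_opaque = all(b.terminable for b in self.bearers if not b.allocated)
        return self.control_channel_up and free_bearers_opaque

link = TELink("OXC-1", "OXC-2", control_channel_up=True,
              bearers=[BearerChannel(1), BearerChannel(2, allocated=True, terminable=False)])
assert link.satisfies_lmp_preconditions()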
Conclusion
Innovations in the field of optical components will accelerate
the introduction of all-optical networking in all areas of infor-
mation transport and will offer system designers the opportunity to
create new solutions that will allow smooth evolution of all telecom-
munication networks. A new class of versatile IP-addressable optical
switching devices is emerging, operating according to a common
GMPLS-based control plane to support full-featured traffic engineer-
ing in modern optical transparent infrastructures.
Throughout its relatively brief history, the Internet has con-
tinually challenged our preconceptions about networking and
communications architectures. For example, the concepts that
the network itself has no role in management of its own resources,
and that resource allocation is the result of interaction between com-
peting end-to-end data flows, were certainly novel, and
for many they have been very confrontational. The approach of de-
signing a network that is unaware of services and service provisioning
and is not attuned to any particular service whatsoever—leaving the
role of service support to end-to-end overlays—was again a radical
concept in network design. The Internet has never represented the
conservative option for this industry, and has managed to define a
path that continues to present significant challenges.
The topic examined here is why this situation has arisen, and in ex-
amining this question we analyze the options available to the Internet
to resolve the problem of IPv4 address exhaustion. We examine the
timing of the IPv4 address exhaustion and the nature of the intended
transition to IPv6. We consider the shortfalls in the implementation
of this transition, and identify their underlying causes. And finally, we
consider the options available at this stage and identify some likely
consequences of such options.
When?
This question was first asked on the TCP/IP list in November 1988,
and the responses included foreshadowing a new version of IP with
longer addresses and undertaking an exercise to reclaim unused
addresses[1]. The exercise of measuring the rate of consumption of
IPv4 addresses has been undertaken many times in the past two de-
cades, with estimates of exhaustion ranging from the late 1990s to
beyond 2030. One of the earliest exercises in predicting IPv4 address
exhaustion was undertaken by Frank Solensky and presented at IETF
18 in August 1990. His findings are reproduced in Figure 1.
RIR behaviors are modeled using the current RIR operational prac-
tices and associated address policies, which are used to predict the
times when each RIR will be allocated a further 2 /8s from IANA.
This RIR consumption model, in turn, allows the IANA address pool
to be modeled.
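A greatly simplified version of this two-stage model can be expressed in a few lines of code. The pool sizes and consumption rates below are invented placeholders rather than the parameters of the actual projection; the point is only to show the structure of the model, in which each RIR draws a further 2 /8s from IANA whenever its own holding runs low.

# Minimal sketch of a two-stage exhaustion model: RIRs allocate addresses
# to ISPs and replenish their holdings from the IANA pool in units of 2 /8s.
# All numbers here are illustrative placeholders, not the real model inputs.

def project_iana_exhaustion(iana_pool_slash8s, rir_monthly_demand):
    """Return the month in which the IANA unallocated pool reaches zero."""
    rir_holding = {rir: 2.0 for rir in rir_monthly_demand}   # each RIR starts with 2 /8s
    month = 0
    while iana_pool_slash8s > 0:
        month += 1
        for rir, monthly in rir_monthly_demand.items():
            rir_holding[rir] -= monthly                       # RIR allocates to ISPs
            if rir_holding[rir] < monthly and iana_pool_slash8s > 0:
                grant = min(2, iana_pool_slash8s)             # a further 2 /8s from IANA
                rir_holding[rir] += grant
                iana_pool_slash8s -= grant
    return month

# Hypothetical demand figures (in /8s per month), purely for illustration:
demand = {"APNIC": 0.35, "RIPE NCC": 0.30, "ARIN": 0.25, "LACNIC": 0.05, "AfriNIC": 0.02}
print("IANA pool empty after month",
      project_iana_exhaustion(iana_pool_slash8s=40, rir_monthly_demand=demand))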
First, the Internet Service Provider (ISP) industry and the other
entities that are the direct recipients of RIR address allocations
and assignments are not ignorant of the impending exhaustion
condition, and there is some level of expectation of some form of
last-minute rush or panic on the part of such address applicants when
exhaustion of this address pool is imminent. The predictive model
described here does not include such a last-minute acceleration of
demand.
The third factor is that this model assumes that the policy framework
remains unaltered, and that all unallocated addresses are allocated
or assigned under the current policy framework, rather than under a
policy regime that is substantially different from today’s framework.
The related assumption here is that the cost of obtaining and hold-
ing addresses remains unchanged, and that the perceptions of future
scarcity of addresses do not affect the policy framework of address
distribution of the remaining unallocated IPv4 addresses.
What Next?
Apart from the exact date of exhaustion that is predicted by this
modeling exercise, none of the information relating to exhaustion
of the unallocated IPv4 address pool should be viewed as particu-
larly novel information. The IETF Routing and Addressing (ROAD)
study of 1991 recognized that the IPv4 address space was always
going to be completely consumed at some point in the future of the
Internet[4].
And this reality has been true for the adoption of classless ad-
dress allocations, the adoption of CIDR in BGP, and the extremely
widespread use of NAT. But all of these measures were short-term,
whereas the longer-term measure, that of the transition to IPv6, was
what was intended to come after IPv4. But IPv6 has not been the
subject of widespread adoption so far, while the time of anticipated
exhaustion of IPv4 has been drawing closer. Given almost two de-
cades of advance warning of IPv4 address exhaustion, and a decade
since the first stable implementations of IPv6 were released, we could
reasonably expect that this industry—and each actor within this
industry—is aware of the problem and the need for a stable and scal-
able long-term solution as represented by IPv6. We could reasonably
anticipate that the industry has already planned the actions it will
take with respect to IPv6 transition, and is aware of the triggers that
will invoke such actions, and approximately when they will occur.
Over the 1990s the IETF undertook the exercise of the specification
of a successor IP protocol to Version 4, and the IETF’s view of the
longer-term response was refined to be advocacy of the adoption of
the IPv6 protocol and the use of this protocol as the replacement for
IPv4 across all parts of the network.
CIDR and NAT have been around for more than a decade now, and
the address consumption rates have been held at very conservative
levels in that period, particularly so when considering that the bulk
of the population of the Internet was added well after the advent of
CIDR and NAT.
The longer-term measure—the transition to IPv6—has not proved to
be as effective in terms of adoption in the Internet.
During the transition more and more hosts are configured with
dual stack. The idea is that dual-stack hosts prefer to use IPv6 to
communicate with other dual-stack hosts, and fall back to IPv4 only
when an IPv6-based end-to-end conversation is not possible. As more
and more of the Internet converts to dual stack, it is anticipated that
use of IPv4 will decline, until support for IPv4 is no longer necessary.
In this dual-stack transition scenario, no single flag day is required and
the dual-stack deployment can be undertaken in a piecemeal fashion.
There is no requirement to coordinate hosts with networks, and as
dual-stack capability is supported in networks the attached dual-
stack hosts can use IPv6. This scenario still makes some optimistic
assumptions, particularly relating to the achievement of universal
deployment of dual stack, at which point no IPv4 functions are used,
and support for IPv4 can be terminated. Just when that point will be
reached is unclear, of course, but in principle there is no particular
timetable for the duration of the dual-stack phase of operation.
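The preference rule described above is essentially what a dual-stack host's connection logic implements. The sketch below shows one simple way to express "try IPv6 first, then fall back to IPv4" using the standard Python socket library; real stacks apply the more elaborate address-selection rules of RFC 3484, so this is only an approximation.

import socket

def connect_dual_stack(host, port):
    """Try IPv6 destinations first, then fall back to IPv4 (simplified)."""
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    # Order candidates so that IPv6 addresses are attempted before IPv4 ones.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock                      # first successful connection wins
        except OSError as err:
            last_error = err
    raise last_error or OSError("no usable addresses")

# Example: connect_dual_stack("www.example.com", 80)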
There are always variations, and in this case it is not necessarily that
each host must operate in dual-stack mode for such a transition. A
variant of the NAT approach can perform a rudimentary form of
protocol translation, where a Protocol-Translating NAT (or NAT-
PT[6]) essentially transforms an incoming IPv4 packet to an outgoing
IPv6 packet, and conversely, using algorithmic binding patterns to
map between IPv4 and IPv6 addresses. Although this process relieves
the IPv6-only host of some additional complexity of operation at
the expense of some added complexity in Domain Name System
(DNS) transformations and service fragility, the essential property
still remains that in order to speak to an IPv4-only remote host, the
combination of the local IPv6 host and the NAT-PT have to generate
an equivalent IPv4 packet. In this case the complexity of the dual
stack is now replaced by complexity in a shared state across the IPv6
host and the NAT-PT unit. Of course this solution does not neces-
sarily operate correctly in the context of all potential application
interactions, and concerns with the integrity of operation of NAT-PT
devices are significant, a factor that motivated the IETF to deprecate
the existing NAT-PT specification[7]. On the other hand, the lack of
any practical alternatives has led the IETF to subsequently reopen
this work, and once again look at specifying the standard behavior
of such devices[8].
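The algorithmic binding mentioned above commonly takes the form of embedding the 32-bit IPv4 address in the low-order bits of a translator-specific IPv6 /96 prefix. The sketch below illustrates that mapping; the prefix used is an arbitrary documentation value, not one mandated by the NAT-PT specification.

import ipaddress

# Example translator prefix (documentation space); a real deployment would
# use a /96 prefix that is routed toward the NAT-PT device.
NATPT_PREFIX = ipaddress.IPv6Network("2001:db8:ffff::/96")

def ipv4_to_translated_ipv6(v4_addr: str) -> ipaddress.IPv6Address:
    """Embed a 32-bit IPv4 address in the low-order bits of the /96 prefix."""
    v4 = int(ipaddress.IPv4Address(v4_addr))
    return ipaddress.IPv6Address(int(NATPT_PREFIX.network_address) | v4)

def translated_ipv6_to_ipv4(v6_addr: str) -> ipaddress.IPv4Address:
    """Recover the original IPv4 address from a translated IPv6 address."""
    low32 = int(ipaddress.IPv6Address(v6_addr)) & 0xFFFFFFFF
    return ipaddress.IPv4Address(low32)

v6 = ipv4_to_translated_ipv6("192.0.2.1")
print(v6)                                 # 2001:db8:ffff::c000:201
print(translated_ipv6_to_ipv4(str(v6)))   # 192.0.2.1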
Clearly, waiting for the time of IPv4 unallocated address pool exhaus-
tion to act as the signal to industry to commence the deployment of
IPv6 in a dual-stack transition framework is a totally flawed imple-
mentation of the original dual-stack transition plan.
Why?
At this point it may be useful to consider how and why this situation
has arisen.
Perhaps the only notable difference between the two protocols is
that host scanning, in which probe packets are sent to successive
addresses, is not feasible in IPv6. The address density is extremely low
because the low-order 64-bit interface address of each host is more
or less unique, and within a single network the various interface ad-
dresses are not clustered sequentially in the number space. The only
known use of address probing to date has been in various forms of
hostile attack tools, so the lack of such a capability in IPv6 is gener-
ally seen as a feature rather than an impediment. IPv6 deployment
has been undertaken in a small scale for many years, and although
the size of the deployed IPv6 base remains small, the level of experi-
ence gained with the technology functions has been significant. It
is possible to draw the conclusion that IPv6 is technically capable
and this capability has been broadly tested in almost every scenario
except that of universal use across the Internet.
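The sparseness argument can be made concrete with a little arithmetic: even at an aggressively assumed probing rate, exhaustively sweeping a single /64 subnet is simply not practical.

# Back-of-the-envelope check on the infeasibility of scanning one IPv6 /64.
interface_id_space = 2 ** 64          # addresses in a single /64 subnet
probes_per_second = 1_000_000         # an aggressive, assumed probing rate

seconds = interface_id_space / probes_per_second
years = seconds / (365 * 24 * 3600)
print(f"{years:,.0f} years to sweep one /64")   # roughly 585,000 years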
What of the strident calls for IPv6 deployment? Surely there is sub-
stance to the arguments to deploy IPv6 as a contingency plan for the
established service providers in the face of impending IPv4 address
exhaustion, and if that is the case, why have service providers dis-
counted the value of such contingency motivations? The problem to
date is that IPv4 address exhaustion is no longer a novel message, and,
so far, NAT usage has neutralized the urgency of the message.
The more general observation is that, for the service provider indus-
try currently, IPv6 has all the negative properties of revenue margin
erosion with no immediate positive benefits. This observation lies at
the heart of why the service provider industry has been so resistant to
the call for widespread deployment of IPv6 services to date.
What Next?
Now we consider some questions relating to IPv4 address exhaus-
tion. Will the exhaustion of the current framework that supplies IP
addresses to service providers cause all further demand for addresses
to cease at that point?
The size and value of the installed base of the Internet using IPv4 is
now very much larger than the size and value of incremental growth
of the network. In address terms the routed Internet currently (as of
14 August 2008) spans 1,893,725,831 IPv4 addresses, or the equiv-
alent of 112.9 /8 address blocks. Some 12 months ago the routed
Internet spanned 1,741,837,080 IPv4 addresses, or the equivalent of
103.8 /8 address blocks, representing a net annual growth of 10 per-
cent in terms of advertised address space.
These facts lead to the observation that, even in the hypothetical sce-
nario where all further growth of the Internet is forced to use IPv6
exclusively while the installed base still uses IPv4, it is highly unlikely
that the core value of the Internet will shift away from its predominantly
IPv4 installed base in the short term.
From this observation it appears highly likely that the demand for
IPv4 addresses will continue at rates comparable to current rates
up to the exhaustion of the IPv4 unallocated address pool and beyond.
The exhaustion of the current framework of supply of IPv4 addresses
will not trigger an abrupt cessation of demand for IPv4 addresses,
and this event will not cause the deployment of IPv6-only networks,
at least in the short term of the initial years following IPv4 address
pool exhaustion. It is therefore reasonable to conclude that immediately
following this exhaustion event there will be a continuing market
need for IPv4 addresses for deployment in new networks.
How?
If demand continues, then what is the source of supply in an environ-
ment where the current supply channel, namely the unallocated pool
of addresses, is exhausted? The options for the supply of such IPv4
addresses are limited.
Given a large enough common address pool, this factor may be further
improved by statistical multiplexing by a factor of 2 or 3, allowing
for between 200 and 300 customers per NAT address. Of course
such approximations are very coarse, and the engineering require-
ment to achieve such a high level of NAT usage would be significant.
Variations on this engineering approach are possible in terms of the
internal engineering of the ISP network and the control interface be-
tween the CPE NATs and the ISP equipment, but the maximal ratio
of 200 to 300 customers per public IP address appears to be a reason-
able upper bound without unduly affecting application behaviors.
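The figure of 200 to 300 customers per public address can be reconstructed from some rough assumptions: roughly 65,000 usable ports per IPv4 address, a budget of several hundred concurrent ports per customer at peak, and a statistical multiplexing gain of 2 to 3 because not all customers are at peak simultaneously. The numbers below are assumptions for illustration, not figures taken from the article.

# Rough reconstruction of the "200 to 300 customers per NAT address" estimate.
# All inputs below are assumptions for illustration.
usable_ports_per_address = 65_000      # ports available on one public IPv4 address
peak_ports_per_customer = 650          # worst-case concurrent sessions budgeted per customer

baseline_customers = usable_ports_per_address // peak_ports_per_customer   # ~100

for mux_gain in (2, 3):                # statistical multiplexing factor
    print(f"mux gain {mux_gain}: ~{baseline_customers * mux_gain} customers per address")
# -> roughly 200 and 300 customers per public address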
Of course, no such recovery option exists for new entrants, and in the
absence of any other supply option, this situation will act as an effec-
tive barrier to entry into the ISP market. In cases where the barriers
to entry effectively shut out new entrants, there is a strong trend for
the incumbents to form cartels or monopolies and extract monopoly
rentals from their clients. However, it is unlikely that the lack of sup-
ply will be absolute, and a more likely scenario is that addresses will
change hands in exchange for money. Or, in other words, it is likely
that such a situation will encourage the emergence of markets in ad-
dresses. Existing holders of addresses have the option to monetize all
or part of their held assets, and new entrants, and others, have the
option to bid against each other for the right to use these addresses.
In such an open market, the application with the most efficient address usage would
tend to be able to offer the highest bid; an environment dominated
by scarcity provides strong incentives for deployment scenarios
that offer high levels of address usage efficiency.
References

[1] TCP/IP Mailing List, Message Thread: “Running out of Internet
    Addresses,” November 1988.
    http://www-mice.cs.ucl.ac.uk/multimedia/misc/tcp_ip/8813.mm.www/index.html#121

[11] William Lehr, Tom Vest, Eliot Lear, “Running on Empty: The
     Challenge of Managing Internet Addresses,” to be presented at
     the 36th Research Conference on Communication, Information
     and Internet Policy (TPRC), 27 September 2008.
     http://eyeconomics.com/backstage/References_files/Lehr-Vest-Lear-TPRC2008-080915.pdf

[13] http://icann.org/en/announcements/proposal-ipv4-report-29nov07.htm
(See also “Fragments” on page 46.)
GEOFF HUSTON is the Chief Scientist at APNIC, the Regional Internet Registry
serving the Asia Pacific region. He graduated from the Australian National University
with a B.Sc. and M.Sc. in Computer Science. He has been closely involved with the
development of the Internet for many years, particularly within Australia, where he
was responsible for the initial build of the Internet within the Australian academic
and research sector. He is author of numerous Internet-related books, and was a
member of the Internet Architecture Board from 1999 until 2005; he served on the
Board of Trustees of the Internet Society from 1992 to 2001.
E-mail: [email protected]
The only comment I could make is that though Huston hints about
separating the IP address from the host name, he does not explicitly
mention the Host Identity Protocol (HIP)[1]. Previous issues of the
Journal have this omission as well.
Thanks for the privilege to continue reading the Journal; keep such
papers coming.
—Henry Sinnreich, Adobe Systems, Inc.
[email protected]
Regards,
—Geoff Huston, APNIC
[email protected]
“This week I received the June 2008 issue of IPJ. I have been a sub-
scriber for several years and it has been a great pleasure to find great
contents in IPJ, such as the current issue that brings reviews on
Internet evolution. I would like to send my congratulations to the
IPJ team for 10 years of publication and my best wishes for future
success.”
—Frederico Fari, Belo Horizonte, Brazil
“I think that IPJ is a great journal. I hope you will not be forced to
give up the paper edition because is a beautiful one (and it allows me
to read during the evening hours when all computers and children in
the house are shut down :–)”
—Andrea Montefusco, Rome, Italy
Code and Other Laws of Cyberspace, by Lawrence Lessig, Basic
Books, 1999, ISBN 0-465-03913-8. http://code-is-law.org/

Code 2.0, by Lawrence Lessig, Basic Books, 2006, ISBN-10: 0-465-03914-6,
ISBN-13: 978-0-465-03914-2. http://codev2.cc/
Lessig’s key findings from that previous work are that rules matter—
especially the sort of rules embodied in “constitutions” and other
foundational institutions; that rules are artifacts of contingent human
intent and design; and that rules can be changed. Being a “classical
liberal” on the model of John Stuart Mill, Lessig advocates the sort
of rules that afford maximum liberty for individuals against a trium-
virate of coercive influences, including not only governments but also
market power and oppressive social mores.
Like the canons of law (also known as “East Coast Code”), code is
basically a collection of rules written with human goals and objectives
in mind. However, in its effects code more closely resembles the laws
of nature, because it requires neither the awareness nor the consent
of its subjects in order to be effective. Although this claim sounds
suspiciously like a variant, or perhaps an illustration of Arthur C.
Clarke’s Third Law of Prediction (which states that any sufficiently
advanced technology will be indistinguishable from the supernatu-
ral), there is purpose behind Lessig’s observation. The self-enforcing
character of code is doubly problematic in the case of cyberspace,
he suggests, because unlike the law, code affords no appeal, no re-
course, and no formal, institutional review and interpretation of the
kind that lawyers and judges exercise in legal matters. Without such
expert oversight, code might come to be used as a tool to subvert
individual liberties or public values, for either commercial or political
gain, without anyone’s being the wiser. In fact, he implies, the lack of
transparency of code almost invites such abuses.
However, such a dismissal would indeed be too easy, for Lessig also
expresses misgivings about the professionalization and segregation
of “constitutional thinking” within the legal sector. “Constitutional
thought has been the domain of lawyers and judges for too long,”
Lessig writes, and as a result everyone else has grown less comfort-
able—and also less competent—in engaging in fruitful conversation
about fundamental, “constitutional” values.
Lessig’s specific ideas for achieving this function while preserving the
core are not fully detailed in this context until Code 2.0 (2006), which
Lessig describes as an update rather than a full rewrite, albeit one
with new relevance to match a “radically different time.” The cen-
tral idea involves the introduction of an “identity layer” that permits
authoritative in-band querying and signaling of the jurisdiction(s) to
which every would-be Internet user is subject. The deployment of
this system would be accompanied by the development of a compre-
hensive distributed database of Internet usage restrictions mandated
by every legally recognized jurisdiction around the world. Together,
these components would operate as a kind of “domain interdiction
system” that would automatically black-hole all Internet resource
queries that are legally impermissible to individuals based on their
jurisdiction(s) of origin, regardless of their actual location.
In the end Lessig provides some oblique advice for judges (abandon
formalism), hackers (open source), and voters (educate yourself, and
don’t give up hope), but ultimately concludes with a call for more
lawyerly deliberation: if only our leaders could act more like lawyers,
telling stories that persuade “not by hiding the truth or exciting the
emotion, but by using reason,” and our fellow citizens could act like
juries, resisting the fleeting passions of the mob and making decisions
based on the facts alone, then perhaps we could overcome the archi-
tectural challenges of both cyberspace and physical space.
Code may not be that particular story, but it’s an excellent read, and
an important contribution to a dialogue that must be engaged.
—Tom Vest
[email protected]
Among other features, these Procedures state that the Board will
decide, as and when appropriate, that ICANN staff should follow
the development of a particular global policy, undertaking an “early
awareness” tracking of proposals in the addressing community. To
this end, staff should issue background reports periodically, for-
warded to the Board, to all ICANN Supporting Organizations and
Advisory Committees and posted at the ICANN Web site.
It should be noted that other policy proposals have been put forward
and are being discussed regarding IPv4 address space exhaustion,
although only those mentioned above have been scoped as global
policy proposals in the sense of the ASO MoU, that is, focusing on
address allocation from IANA to the RIRs, and recognized by the
ASO AC as global policy proposals in that meaning.
[1] http://aso.icann.org/docs/aso-mou2004.html
[2] http://icann.org/en/general/review-procedures-pgp.html
[3] http://www.icann.org/en/announcements/proposal-ipv4-report-29nov07.htm
Upcoming Events
The Internet Engineering Task Force (IETF) will meet in Minneapolis,
Minnesota, November 16 – 21, 2008. In 2009, IETF meetings are
scheduled for San Francisco, California (March 22 – 27), Stockholm,
Sweden (July 26 – 31) and Hiroshima, Japan (November 8 – 13). For
more information see http://www.ietf.org/