Chapter 4: Network Layer: Data Plane (V7.01)
Input port functions

[figure: input port - line termination → link layer protocol (receive) → lookup, forwarding, queueing → switch fabric]

physical layer: bit-level reception
data link layer: e.g., Ethernet (see chapter 5)
decentralized switching:
• using header field values, lookup output port using forwarding table in input port memory ("match plus action")
• goal: complete input port processing at 'line speed'
• queuing: if datagrams arrive faster than forwarding rate into switch fabric
• destination-based forwarding: forward based only on destination IP address (traditional)
• generalized forwarding: forward based on any set of header field values (see the sketch below)
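A minimal sketch of the "match plus action" idea, not taken from the slides: each table entry matches on a subset of header fields and maps to an action. The field names, rules, and addresses below are hypothetical; a rule that matches only on the destination address is plain destination-based forwarding, while rules over several fields illustrate generalized forwarding.

```python
import ipaddress

# Toy "match plus action" flow table (hypothetical rules and field names).
# First matching rule wins in this sketch; a real router/switch would use
# priorities and hardware lookup.
WILDCARD = "*"

flow_table = [
    ({"dst": "223.1.1.0/24"},                ("forward", 2)),   # destination-based
    ({"src": "10.0.0.0/8", "dst_port": 22},  ("drop", None)),   # generalized: src prefix + port
    ({},                                     ("forward", 3)),   # default rule (matches anything)
]

def field_matches(rule_val, pkt_val):
    if rule_val == WILDCARD:
        return True
    if isinstance(rule_val, str) and "/" in rule_val:           # prefix match on an address field
        return ipaddress.ip_address(pkt_val) in ipaddress.ip_network(rule_val)
    return rule_val == pkt_val                                  # exact match otherwise

def lookup(packet):
    for match, action in flow_table:
        if all(field_matches(v, packet.get(k)) for k, v in match.items()):
            return action
    return ("drop", None)

print(lookup({"src": "198.51.100.7", "dst": "223.1.1.9", "dst_port": 80}))  # ('forward', 2)
print(lookup({"src": "10.1.2.3",     "dst": "192.0.2.1", "dst_port": 22}))  # ('drop', None)
```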
Destination-based forwarding

forwarding table:

Destination Address Range                                                           Link Interface
11001000 00010111 00010000 00000000 through 11001000 00010111 00010111 11111111    0
11001000 00010111 00011000 00000000 through 11001000 00010111 00011000 11111111    1
11001000 00010111 00011001 00000000 through 11001000 00010111 00011111 11111111    2
otherwise                                                                           3

examples:
DA: 11001000 00010111 00010110 10100001   which interface?
DA: 11001000 00010111 00011000 10101010   which interface?
Longest prefix matching

we'll see why longest prefix matching is used shortly, when we study addressing

longest prefix matching: often performed using ternary content addressable memories (TCAMs); see the software sketch below
• content addressable: present address to TCAM: retrieve address in one clock cycle, regardless of table size
• Cisco Catalyst: can hold up to ~1M routing table entries in TCAM
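A rough software sketch of longest prefix matching (a TCAM performs the same match in hardware in a single lookup). The bit-string prefixes below are written to mirror the address ranges in the forwarding-table example above; treat them as illustrative.

```python
# Longest prefix matching in software over bit-string prefixes.
forwarding_table = [
    ("110010000001011100010",    0),   # 11001000 00010111 00010*
    ("110010000001011100011000", 1),   # 11001000 00010111 00011000*
    ("110010000001011100011",    2),   # 11001000 00010111 00011*
]
DEFAULT_INTERFACE = 3                  # "otherwise"

def longest_prefix_match(dest_bits):
    best_len, best_iface = -1, DEFAULT_INTERFACE
    for prefix, iface in forwarding_table:
        if dest_bits.startswith(prefix) and len(prefix) > best_len:
            best_len, best_iface = len(prefix), iface
    return best_iface

# the two example destination addresses from the slide, spaces removed
da1 = "11001000000101110001011010100001"
da2 = "11001000000101110001100010101010"
print(longest_prefix_match(da1))   # 0  (only the 21-bit prefix for interface 0 matches)
print(longest_prefix_match(da2))   # 1  (the 24-bit prefix wins over the shorter 21-bit one)
```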
[figure: switching via memory - input port (e.g., Ethernet), memory, output port (e.g., Ethernet) connected by the system bus]

[figure: output port - switch fabric → datagram buffer, queueing → link layer protocol (send) → line termination]

[figure: packet queueing abstraction - packet arrivals → queue (waiting area) → link (server) → packet departures]
Scheduling policies: priority

priority scheduling: send highest priority queued packet
• multiple classes, with different priorities
• class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc. (see the sketch below)
• real world example?

[figure: priority scheduling example - packet arrivals, packet in service, packet departures]
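A small sketch of priority scheduling: always transmit from the highest-priority non-empty queue. The two classes and the port-based classifier below are hypothetical stand-ins for "marking or other header info".

```python
from collections import deque

# Priority scheduling sketch: two hypothetical classes, classified by destination port.
HIGH, LOW = 0, 1
queues = {HIGH: deque(), LOW: deque()}

def classify(packet):
    # e.g. treat packets to port 5060 (VoIP signalling) as high priority
    return HIGH if packet["dst_port"] == 5060 else LOW

def enqueue(packet):
    queues[classify(packet)].append(packet)

def dequeue():
    for cls in (HIGH, LOW):            # scan queues in priority order
        if queues[cls]:
            return queues[cls].popleft()
    return None                        # nothing queued

for pkt_id, port in [(1, 80), (2, 5060), (3, 80), (4, 80), (5, 80)]:
    enqueue({"id": pkt_id, "dst_port": port})

print([dequeue()["id"] for _ in range(5)])   # [2, 1, 3, 4, 5] - packet 2 jumps ahead
```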
Scheduling policies: still more

Round Robin (RR) scheduling:
• multiple classes
• cyclically scan class queues, sending one complete packet from each class (if available); see the sketch below
• real world example?

[figure: round robin scheduling example - packet arrivals, packet in service, packet departures]
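A minimal round-robin sketch under the same assumptions: cyclically scan per-class queues, sending one packet from each non-empty class per turn. The class names and queued packets are hypothetical.

```python
from collections import deque
from itertools import cycle

# Round Robin (RR) sketch: cyclically scan the class queues, sending one
# complete packet from each class that has one waiting.
classes = ["red", "green", "blue"]                 # hypothetical traffic classes
queues = {c: deque() for c in classes}

def rr_schedule():
    """Yield packets in round-robin order until all queues are empty."""
    scan = cycle(classes)
    remaining = sum(len(q) for q in queues.values())
    while remaining:
        c = next(scan)
        if queues[c]:                              # skip classes with nothing queued
            yield queues[c].popleft()
            remaining -= 1

queues["red"].extend([1, 4])
queues["green"].extend([2, 5])
queues["blue"].extend([3])

print(list(rr_schedule()))    # [1, 2, 3, 4, 5] - one packet per class per cycle
```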
IP fragmentation, reassembly

network links have MTU (max. transfer size): largest possible link-level frame
• different link types, different MTUs

large IP datagram divided ("fragmented") within net
• one datagram becomes several datagrams
• "reassembled" only at final destination

[figure: fragmentation - in: one large datagram, out: 3 smaller datagrams; reassembly]
IP fragmentation, reassembly

example: 4000 byte datagram, MTU = 1500 bytes

one large datagram becomes several smaller datagrams:

length  ID  fragflag  offset
4000    x   0         0       (original datagram)
1500    x   1         0       (1480 bytes in data field)
1500    x   1         185     (offset = 1480/8)
1040    x   0         370
Subnets

what's a subnet?
• device interfaces that can physically reach each other without an intervening router
• each isolated network is called a subnet

subnet mask: /24 (e.g., 223.1.3.0/24)

[figure: network consisting of 3 subnets]
Subnets

[figure: interconnected routers and subnets with interface addresses 223.1.1.2, 223.1.1.3, 223.1.2.6, 223.1.3.1, 223.1.3.2, 223.1.3.27, 223.1.7.0, 223.1.7.1, 223.1.8.0, 223.1.8.1, 223.1.9.1, 223.1.9.2; subnets include 223.1.2.0/24 and 223.1.3.0/24]
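A quick sketch of what the /24 subnet mask means: two interfaces are on the same subnet when their addresses agree in the first 24 bits. The addresses below are taken from the figure above; the helper function is illustrative.

```python
import ipaddress

# A /24 subnet mask keeps the high-order 24 bits as the subnet part, so two
# interfaces share a subnet iff they fall in the same /24 network.
def same_subnet(addr_a, addr_b, prefix_len=24):
    net_a = ipaddress.ip_network(f"{addr_a}/{prefix_len}", strict=False)
    return ipaddress.ip_address(addr_b) in net_a

print(same_subnet("223.1.3.1", "223.1.3.2"))    # True  - both on 223.1.3.0/24
print(same_subnet("223.1.3.1", "223.1.2.6"))    # False - different subnets
```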
DHCP client-server scenario

DHCP offer
  src: 223.1.2.5, 67
  dest: 255.255.255.255, 68
  yiaddr: 223.1.2.4
  transaction ID: 654
  lifetime: 3600 secs
  Broadcast: "I'm a DHCP server! Here's an IP address you can use"

DHCP request
  src: 0.0.0.0, 68
  dest: 255.255.255.255, 67
  yiaddr: 223.1.2.4
  transaction ID: 655
  lifetime: 3600 secs
  Broadcast: "OK. I'll take that IP address!"

DHCP ACK
  src: 223.1.2.5, 67
  dest: 255.255.255.255, 68
  yiaddr: 223.1.2.4
  transaction ID: 655
  lifetime: 3600 secs
  Broadcast: "OK. You've got that IP address!"
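A sketch of the offer/request/ACK exchange above as plain records (not a real DHCP implementation). Field values are the ones shown in the exchange; the helper function is hypothetical.

```python
# Sketch of the DHCP offer / request / ACK exchange as plain records.
# Ports: server listens on UDP 67, client on UDP 68; yiaddr is "your IP address".
offer = {
    "msg": "DHCP offer", "src": ("223.1.2.5", 67), "dst": ("255.255.255.255", 68),
    "yiaddr": "223.1.2.4", "transaction_id": 654, "lifetime_secs": 3600,
}
request = {
    "msg": "DHCP request", "src": ("0.0.0.0", 68), "dst": ("255.255.255.255", 67),
    "yiaddr": "223.1.2.4", "transaction_id": 655, "lifetime_secs": 3600,
}
ack = {
    "msg": "DHCP ACK", "src": ("223.1.2.5", 67), "dst": ("255.255.255.255", 68),
    "yiaddr": "223.1.2.4", "transaction_id": 655, "lifetime_secs": 3600,
}

# the client may start using the offered address only after a matching ACK arrives
def address_assigned(request_msg, ack_msg):
    return (ack_msg["transaction_id"] == request_msg["transaction_id"]
            and ack_msg["yiaddr"] == request_msg["yiaddr"])

print(address_assigned(request, ack))   # True - client can start using 223.1.2.4
```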
DHCP: example

• DHCP request encapsulated in UDP, encapsulated in IP, encapsulated in 802.1 Ethernet frame
• Ethernet frame broadcast (dest: FFFFFFFFFFFF) on LAN, received at router running DHCP server
• Ethernet demuxed to IP demuxed, UDP demuxed to DHCP

[figure: client protocol stack (DHCP / UDP / IP / Eth / Phy), 168.1.1.1, router with DHCP server built into router]
Hierarchical addressing: route aggregation

Organization 0: 200.23.16.0/23
Organization 1: 200.23.18.0/23
Organization 2: 200.23.20.0/23
...
Organization 7: 200.23.30.0/23

Fly-By-Night-ISP (serving Organizations 0-7) advertises to the Internet: "Send me anything with addresses beginning 200.23.16.0/20"
ISPs-R-Us advertises to the Internet: "Send me anything with addresses beginning 199.31.0.0/16"
Hierarchical addressing: more specific routes

Organization 0: 200.23.16.0/23
Organization 2: 200.23.20.0/23
...
Organization 7: 200.23.30.0/23

Fly-By-Night-ISP advertises to the Internet: "Send me anything with addresses beginning 200.23.16.0/20"

Organization 1: 200.23.18.0/23 (now attached to ISPs-R-Us)
ISPs-R-Us advertises to the Internet: "Send me anything with addresses beginning 199.31.0.0/16 or 200.23.18.0/23" (see the sketch below)
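A sketch checking the aggregation claim above: the eight organization /23 blocks all fall inside the single 200.23.16.0/20 advertisement, and once Organization 1 attaches to ISPs-R-Us, longest prefix matching sends its traffic to the more specific /23. The tiny routing function is illustrative only.

```python
import ipaddress

# Route aggregation: all eight /23 blocks sit inside the /20 that Fly-By-Night-ISP advertises.
aggregate = ipaddress.ip_network("200.23.16.0/20")
org_blocks = [ipaddress.ip_network(f"200.23.{16 + 2*i}.0/23") for i in range(8)]
print(all(block.subnet_of(aggregate) for block in org_blocks))      # True

# More specific routes: the Internet now sees both a /20 (Fly-By-Night-ISP) and a
# /23 (ISPs-R-Us) covering Organization 1; longest prefix wins.
advertisements = [
    (ipaddress.ip_network("200.23.16.0/20"), "Fly-By-Night-ISP"),
    (ipaddress.ip_network("200.23.18.0/23"), "ISPs-R-Us"),
    (ipaddress.ip_network("199.31.0.0/16"),  "ISPs-R-Us"),
]

def route(dest):
    addr = ipaddress.ip_address(dest)
    matches = [(net, isp) for net, isp in advertisements if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]            # longest prefix

print(route("200.23.18.55"))   # ISPs-R-Us   (the /23 beats the covering /20)
print(route("200.23.20.7"))    # Fly-By-Night-ISP
```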
[figure: NAT - hosts 10.0.0.2, 10.0.0.3, 10.0.0.4 on a local network behind a NAT router whose WAN-side address is 138.76.29.7]

[figure: datagram format - header fields drawn 32 bits wide, followed by a data field]
Other changes from IPv4

checksum: removed entirely to reduce processing time at each hop
options: allowed, but outside of header, indicated by "Next Header" field
ICMPv6: new version of ICMP
• additional message types, e.g. "Packet Too Big"
• multicast group management functions
Tunneling

logical view:
A (IPv6) -- B (IPv6) ==== IPv4 tunnel connecting IPv6 routers ==== E (IPv6) -- F (IPv6)

physical view:
A (IPv6) -- B (IPv6) -- C (IPv4) -- D (IPv4) -- E (IPv6) -- F (IPv6)

A-to-B: IPv6
B-to-C: IPv6 inside IPv4
D-to-E: IPv6 inside IPv4
E-to-F: IPv6
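A toy sketch of the tunnel step: at B the whole IPv6 datagram becomes the payload of an IPv4 datagram addressed to E, and E unwraps it. IP protocol number 41 is the real value used for IPv6-in-IPv4 encapsulation; everything else here (dict "packets", router names as addresses) is simplified for illustration.

```python
# Toy sketch of IPv6-in-IPv4 tunneling between routers B and E.
IPPROTO_IPV6 = 41   # IPv4 protocol number marking an encapsulated IPv6 datagram

def encapsulate_at_B(ipv6_dgram, tunnel_dst="E"):
    """Entering the tunnel: wrap the IPv6 datagram inside an IPv4 datagram."""
    return {"ver": 4, "src": "B", "dst": tunnel_dst,
            "proto": IPPROTO_IPV6, "payload": ipv6_dgram}

def decapsulate_at_E(ipv4_dgram):
    """Leaving the tunnel: recover the original IPv6 datagram."""
    assert ipv4_dgram["proto"] == IPPROTO_IPV6
    return ipv4_dgram["payload"]

ipv6_dgram = {"ver": 6, "src": "A", "dst": "F", "data": "hello"}
on_the_tunnel = encapsulate_at_B(ipv6_dgram)           # what C and D forward (plain IPv4)
print(on_the_tunnel)
print(decapsulate_at_E(on_the_tunnel) == ipv6_dgram)   # True - E forwards the IPv6 datagram on to F
```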
IPv6: adoption

• Google: 8% of clients access services via IPv6
• NIST: 1/3 of all US government domains are IPv6 capable