Advanced IP Services - Lecture Notes
(AUTONOMOUS), SULUR.
B.Sc-INFORMATION TECHNOLOGY
Batch 2016-2019
Faculty V.YUVARAJ
Text Books :
CCNA Routing and Switching ICND2 200-105 Official Cert Guide, CCIE NO.1624 | Edition:3
Definition:
A LAN includes all the user devices, servers, switches, routers, cables, and wireless access
points in one location.
A broadcast domain includes the set of all LAN-connected devices, so that when any of
the devices sends a broadcast frame, all the other devices get a copy of the frame. So, from one
perspective, you can think of a LAN and a broadcast domain as being basically the same thing.
Without VLANs, a switch considers all its interfaces to be in the same broadcast domain. That
is, for one switch, when a broadcast frame entered one switch port, the switch forwarded that
broadcast frame out all other ports. With that logic, to create two different LAN broadcast
domains, you had to buy two different Ethernet LAN switches.
VLAN
With VLANs, a single switch can place some interfaces into one broadcast domain and some into
another, creating multiple broadcast domains. Each broadcast domain created by the switch is
called a virtual LAN (VLAN), and the switch that connects end-user devices into the VLANs is
often called an access switch.
Creating Multiswitch VLANs Using Trunking
When using VLANs in networks that have multiple interconnected switches, the switches need
to use VLAN trunking on the links between the switches. VLAN trunking causes the switches
to use a process called VLAN tagging, by which the sending switch adds another header to the
frame before sending it over the trunk. This extra trunking header includes a VLAN identifier
(VLAN ID) field so that the sending switch can associate the frame with a particular VLAN ID,
and the receiving switch can then know in what VLAN each frame belongs.
ISL
ISL is a Cisco proprietary protocol – it can only be used between Cisco switches
supporting ISL.
ISL fully encapsulates each original Ethernet frame in an ISL header and trailer;
the ISL header includes a VLAN ID field. The source and destination addresses in
the ISL header use the MAC addresses of the sending and receiving switches.
• IEEE 802.1Q
802.1q is a standardized protocol that uses VLAN 1 as the native VLAN by default,
which means switches that do not understand 802.1q know to treat those frames as VLAN 1.
802.1q does not encapsulate the original frame like ISL. Instead, it inserts an
extra 4-byte VLAN header into the original frame’s header (after the destination
and source address).
The extra 4-byte VLAN header is called the tag, and it includes Type, Priority, Flag,
and VLAN ID fields.
When a frame is destined for another VLAN, it carries the VLAN ID of its current (source)
VLAN as it crosses the trunk toward the router. After the router routes the packet and sends
it back over the trunk, the frame carries the VLAN ID of the destination host's VLAN (unless
that is the native VLAN, in which case the frame is sent untagged).
802.1q also runs on 10 Mb/s Ethernet interfaces.
ISL and 802.1q Compared
Both trunking protocols support 4094 VLANs.
Of the supported VLANs, note that VLAN IDs 1-1005 are considered normal
range VLANs, and 1006-4094 are considered extended range VLANs.
Both support a separate instance of STP for each VLAN, which means engineers
can tune the STP parameters so that some VLANs’ traffic goes to one set of links
and the other VLANs’ traffic uses other links.
802.1q defines one VLAN on each trunk as the native VLAN (VLAN 1 by default). Frames
in the native VLAN are sent over the trunk without the 802.1q header, so a switch that
receives a frame without the header knows that the frame belongs to the native VLAN.
ISL does not use that concept.
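For reference, the following is a minimal sketch of how an 802.1q trunk might be configured on a Cisco switch; the interface number is hypothetical, and the switchport trunk encapsulation command exists only on switches that support both ISL and 802.1q.

! Hypothetical 802.1q trunk configuration on one switch (repeat on the neighbor switch)
interface GigabitEthernet0/1
 ! use 802.1q tagging (command needed only on switches that also support ISL)
 switchport trunk encapsulation dot1q
 ! always trunk on this link
 switchport mode trunk
 ! untagged frames belong to the native VLAN (default 1)
 switchport trunk native vlan 1

The show interfaces trunk command can then confirm the trunk's operational state, its native VLAN, and the allowed VLANs.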
Forwarding Data between VLANs
Routing Packets between VLANs with a Router
When including VLANs in a campus LAN design, the devices in a VLAN need to be in
the same subnet. Following the same design logic, devices in different VLANs need to be in
different subnets. For example, the two PCs on the left sit in VLAN 10, in subnet 10. The two
PCs on the right sit in a different VLAN (20), with a different subnet (20).
Routing packets using a physical router, even with the VLAN trunk in the router-on-a-stick
model still has one significant problem: performance. The physical link puts an upper limit on
how many bits can be routed, and less expensive routers tend to be less powerful, and might not
be able to route a large enough number of packets per second (pps) to keep up with the traffic
volumes. The ultimate solution moves the routing functions inside the LAN switch hardware.
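As a sketch of the router-on-a-stick model just mentioned, the following hypothetical configuration creates one 802.1q subinterface per VLAN on a single router interface, assuming VLANs 10 and 20 with made-up subnets 10.1.10.0/24 and 10.1.20.0/24:

! Hypothetical router-on-a-stick (ROAS) configuration
interface GigabitEthernet0/0
 no shutdown
!
! subinterface for VLAN 10; acts as the default gateway for 10.1.10.0/24
interface GigabitEthernet0/0.10
 encapsulation dot1q 10
 ip address 10.1.10.1 255.255.255.0
!
! subinterface for VLAN 20; acts as the default gateway for 10.1.20.0/24
interface GigabitEthernet0/0.20
 encapsulation dot1q 20
 ip address 10.1.20.1 255.255.255.0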
Vendors long ago started combining the hardware and software features of their Layer 2 LAN
switches, plus their Layer 3 routers, creating products called Layer 3 switches (also known as
multilayer switches). Layer 3 switches can be configured to act only as a Layer 2 switch, or they
can be configured to do both Layer 2 switching as well as Layer 3 routing. Today, many medium-
to large-sized enterprise campus LANs use Layer 3 switches to route packets between subnets
(VLANs) in a campus. In concept, a Layer 3 switch works a lot like the original two devices on
which the Layer 3 switch is based: a Layer 2 LAN switch and a Layer 3 router. In fact, if you
take the concepts and packet flow with a separate Layer 2 switch and Layer 3 router, and then
imagine all those features happening inside one device, you have the general idea of what a Layer
3 switch does. shows that exact concept, repeating many details of Routing between two VLANs
on two Physical Interfaces but with an overlay that shows the one Layer 3 switch doing the Layer
2 switch functions and the separate Layer 3 routing function.
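In the same spirit, and with the same hypothetical VLANs and subnets, a Layer 3 switch could route between the VLANs using switched virtual interfaces (SVIs); the exact command to enable routing varies by switch model, so treat this as a sketch only:

! Hypothetical Layer 3 switch configuration using SVIs
! enable IPv4 routing on the switch
ip routing
!
! one routed VLAN interface (SVI) per VLAN
interface vlan 10
 ip address 10.1.10.1 255.255.255.0
!
interface vlan 20
 ip address 10.1.20.1 255.255.255.0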
If a tie occurs based on the priority portion of the bridge ID, the switch with the lowest
MAC address portion of the bridge ID becomes the root.
All switches claim to be the root by sending Hello BPDUs listing their own
bridge ID as the root bridge. If a switch hears a Hello with a lower Bridge ID,
called a Superior Hello, the switch stops advertising itself as root and starts
forwarding the Superior Hello.
After the election is complete, only the root switch continues to originate STP
Hello BPDU messages. The other switches receive those Hellos, update the
sender’s bridge ID field (and the cost-to-reach-the-root field), and forward the Hellos out
other interfaces.
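Because the election prefers the lowest bridge ID, an engineer can influence which switch becomes root by lowering the priority portion of its bridge ID. A minimal sketch, assuming per-VLAN spanning tree and a hypothetical VLAN 10:

! Hypothetical root-bridge tuning for VLAN 10
! a lower priority makes this switch more likely to win the root election
spanning-tree vlan 10 priority 4096
! alternatively, let IOS pick a priority lower than the current root's
spanning-tree vlan 10 root primary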
Routing protocols basically exchange information so routers can learn routes. The routers
learn information about subnets, routes to those subnets, and metric information about how good
each route is compared to others. The routing protocol can then choose the currently best route to
each subnet, building the IP routing table.
Link-state protocols like OSPF take a little different approach to the particulars of what
information they exchange and what the routers do with that information once learned. OSPF does
this by exchanging data about the network in data structures called link-state advertisements (LSA).
OSPF Overview
Link-state protocols build IP routes with a couple of major steps. First, the routers together
build a lot of information about the network: routers, links, IP addresses, status information, and
so on. Then the routers flood the information, so all routers know the same information. At that
point, each router can calculate routes to all subnets, but from each router’s own perspective.
Routers using link-state routing protocols need to collectively advertise practically every
detail about the internetwork to all the other routers. At the end of the process of flooding the
information to all routers, every router in the internetwork has the exact same information about
the internetwork. Flooding a lot of detailed information to every router sounds like a lot of work,
and relative to distance vector routing protocols, it is.
Open Shortest Path First (OSPF), the most popular link-state IP routing protocol, organizes
topology information using LSAs and the link-state database (LSDB). Figure 7-4 represents the
ideas. Each LSA is a data structure with some specific information about the network topology;
the LSDB is simply the collection of all the LSAs known to a router. When sitting at the CLI of a
router that uses OSPF, the show ip ospf database command lists information about the LSDB on
that router by listing some of the information in each of the LSAs in the LSDB.
Figure 7-5 shows the general idea of the flooding process, with R8 creating and flooding its router
LSA. The router LSA for Router R8 describes the router itself, including the existence of subnet
172.16.3.0/24, as seen on the right side of the figure. (Note that Figure 7-5 actually shows only a
subset of the information in R8’s router LSA.)
Figure 7-5 shows the rather basic flooding process, with R8 sending the original LSA for itself,
and the other routers flooding the LSA by forwarding it until every router has a copy. The flooding
process has a way to prevent loops so that the LSAs do not get flooded around in circles. Basically,
before sending an LSA to yet another neighbor, routers communicate, asking “Do you already
have this LSA?,” and then they avoid flooding the LSA to neighbors that already have it. Once
flooded, routers do occasionally reflood a particular LSA. Routers reflood an LSA when some
information changes (for example, when a link goes up or comes down). They also reflood each
LSA based on each LSA’s separate aging timer (default 30 minutes).
The link-state flooding process results in every router having an identical copy of the LSDB
in memory, but the flooding process alone does not cause a router to learn what routes to add to the
IP routing table. Although incredibly detailed and useful, the information in the LSDB does not
explicitly state each router’s best route to reach a destination.
To build routes, link-state routers have to do some math. Thankfully, you and I do not have
to know the math! However, all link-state protocols use a type of math algorithm, called the
Dijkstra Shortest Path First (SPF) algorithm, to process the LSDB. That algorithm analyzes (with
math) the LSDB, and builds the routes that the local router should add to the IP routing table—
routes that list a subnet number and mask, an outgoing interface, and a next-hop router IP address.
Now that you have the big ideas down, the next several topics walk through the three main phases
of how OSPF routers accomplish the work of exchanging LSAs and calculating routes. Those three
phases are
Becoming neighbors: A relationship between two routers that connect to the same data link,
created so that the neighboring routers have a means to exchange their LSDBs.
Exchanging databases: The process of sending LSAs to neighbors so that all routers learn the
same LSAs.
Adding the best routes: The process of each router independently running SPF, on their local
copy of the LSDB, calculating the best routes, and adding those to the IPv4 routing table.
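Before walking through those phases, the following sketch shows how little configuration single-area OSPFv2 can require; the process ID, wildcard mask, and network are hypothetical:

! Hypothetical single-area OSPFv2 configuration
router ospf 1
 ! enable OSPF, area 0, on all interfaces whose addresses match 10.0.0.0/8
 network 10.0.0.0 0.255.255.255 area 0

The show ip ospf neighbor and show ip ospf database commands then reveal the neighbor relationships and the LSAs learned through flooding.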
OSPF neighbors are routers that both use OSPF and both sit on the same data link. With
the data link technology discussed so far in this book, that means two routers connected to the
same VLAN become OSPF neighbors, or two routers on the ends of a serial link become OSPF
neighbors.
Two routers need to do more than simply exist on the same link to become OSPF neighbors;
they must send OSPF messages and agree to become neighbors. To do so, the routers send OSPF
Hello messages, introducing themselves to the neighbor. Assuming the two neighbors have
compatible OSPF parameters, the two form a neighbor relationship, and would be displayed in the
output of the show ip ospf neighbor command.
The OSPF neighbor relationship also lets OSPF know when a neighbor might not be a good
option for routing packets right now. Imagine R1 and R2 form a neighbor relationship, learn LSAs,
and calculate routes that send packets through the other router. Months later, R1 notices that the
neighbor relationship with R2 fails. That failed neighbor connection to R2 makes R1 react: R1
refloods LSAs that formerly relied on the link from R1 to R2, and R1 runs SPF to recalculate its
own routes.
Finally, the OSPF neighbor model allows new routers to be dynamically discovered. That
means new routers can be added to a network without requiring every router to be reconfigured.
Instead, the configuration enables OSPF on a router’s interfaces, and then the router reacts to any
Hello messages from new neighbors, whenever those neighbors happen to be installed.
The OSPF Hello process, by which new neighbor relationships are formed, works
somewhat like when you move to a new house and meet your various neighbors. When you see
each other outside, you might walk over, say hello, and learn each other’s name. After talking a
bit, you form a first impression, particularly as to whether you think you’ll enjoy chatting with this
neighbor occasionally, or whether you can just wave and not take the time to talk the next time
you see him outside. Similarly, with OSPF, the process starts with messages called OSPF Hello
messages. The Hellos in turn list each router’s router ID (RID), which serves as each router’s
unique name or identifier for OSPF. Finally, OSPF does several checks of the information in the
Hello messages to ensure that the two routers should become neighbors.
OSPF RIDs are 32-bit numbers. As a result, most command output lists these as dotted-
decimal numbers (DDN). Additionally, by default, IOS chooses its OSPF RID based on an active
interface IPv4 address, because those are some nearby convenient 32-bit numbers as well.
However, the OSPF RID can be directly configured, as covered in a later section.
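A brief sketch of configuring the RID directly follows; the value 1.1.1.1 is only an example, and the new RID normally takes effect the next time the OSPF process starts:

! Hypothetical explicit OSPF router ID
router ospf 1
 router-id 1.1.1.1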
Routers A and B both send Hello messages onto the LAN. They continue to send Hellos at
a regular interval based on their Hello timer settings. The Hello messages themselves have the
following features:
■ The Hello message follows the IP packet header, with IP protocol type 89.
■ Hello packets are sent to multicast IP address 224.0.0.5, a multicast IP address intended
for all OSPF-speaking routers.
■ OSPF routers listen for packets sent to IP multicast address 224.0.0.5, in part hoping to
receive Hello packets and learn about new neighbors.
The process continues at Step 3, with R2 sending back a Hello. This message tells R1 that
R2 exists, and it allows R1 to move through the init state and quickly to a 2-way state. At Step 4,
R2 receives the next Hello from R1, and R2 can also move to a 2-way state. The 2-way state is a
particularly important OSPF state. At that point, the following major facts are true:
■ The router received a Hello from the neighbor, with that router’s own RID listed as being
seen by the neighbor.
■ The router has checked all the parameters in the Hello received from the neighbor, with
no problems. The router is willing to become a neighbor.
■ If both routers reach a 2-way state with each other, it means that both routers meet all OSPF
configuration requirements to become neighbors. Effectively, at that point, they are neighbors, and
ready to exchange their LSDB with each other.
UNIT-III
Introduction to EIGRP
RIP Version 1 (RIPv1) was the first popularly used IP routing protocol, with the Cisco-
proprietary Interior Gateway Routing Protocol (IGRP) being introduced a little later; Cisco later
replaced IGRP with the much-improved Enhanced Interior Gateway Routing Protocol (EIGRP).
Even today, EIGRP and OSPF remain the two primary competitors as the IPv4 routing protocol to
use in a modern enterprise IPv4 internetwork. RIPv2 has fallen away as a serious competitor, in
part due to its less robust hop-count metric, and in part due to its slower (worse) convergence time.
Even today, you can walk into most corporate networks and find either EIGRP or OSPF as the
routing protocol used throughout the network.
■ EIGRP uses a robust metric based on both link bandwidth and link delay, so routers make good
choices about the best route to use (see Figure 9-2).
■ EIGRP converges quickly, meaning that when something changes in the internetwork, EIGRP
quickly finds the currently best loop-free routes to use.
EIGRP and OSPF remain the two best options for IPv4 interior routing protocols. Both converge
quickly. Both use a good metric that considers link speeds when choosing the route. EIGRP can
be much simpler to implement. Many reasonable network engineers have made these comparisons
over the years, with some choosing OSPFv2 and others choosing EIGRP.
EIGRP does not fit cleanly into the category of DV routing protocols or LS routing
protocols. However, it most closely matches DV protocols. The next topic explains the basics of
DV routing protocols as originally implemented with RIP, to give a frame of reference of how DV
protocols work. In particular, the next examples show routes that use RIP’s simple hop-count
metric, which, although a poor option in real networks today, is a much simpler option for learning
than EIGRP’s more complex metric.
The term distance vector describes what a router knows about each route. At the end of the
process, when a router learns about a route to a subnet, all the router knows is some measurement
of distance (the metric) and the next-hop router and outgoing interface to use for that route (a
vector, or direction). Figure 9-3 shows a view of both the vector and the distance as learned with
RIP. The figure shows the flow of RIP messages that cause R1 to learn some IPv4 routes,
specifically three routes to reach subnet X:
Distance: The metric for a possible route
Vector: The direction, based on the next-hop router for a possible route
DV routing protocols have a couple of functions that require messages between neighboring
routers.
First, routers need to send routing information inside some message, so that the sending router can
advertise routing information to neighboring routers. For instance, in Figure 9-3, R1 received RIP
messages to learn routes. As discussed in Chapter 7, OSPF calls those messages Link-State
Updates (LSU). RIP and EIGRP both call their messages an update message.
Full update means that a router advertises all its routes, using one or more RIP update messages,
no matter whether the route has changed or not. Periodic means that the router sends the message
based on a short timed period (30 seconds with RIP).
Split horizon is a DV feature that tells the routing protocol not to advertise some routes in
an update sent out an interface: specifically, the routes that list that same interface as the
outgoing interface. Those routes that are not advertised on an interface usually include the
routes learned in routing updates received on that interface. Split horizon is difficult to
learn by reading words, and much
easier to learn by seeing an example. Figure 9-6 continues the same example as Figure 9-5, but
focuses on R1’s RIP update sent out R1’s S0/0 interface to R2. This figure shows R1’s routing
table with three light-colored routes, all of which list S0/0 as the outgoing interface. When building the RIP
update to send out S0/0, split-horizon rules tell R1 to ignore those light-colored routes, because all
three routes list S0/0 as the outgoing interface. Only the bold route, which does not list S0/0 as an
outgoing interface, can be included in the RIP update sent out S0/0.
Route Poisoning
DV protocols help prevent routing loops by ensuring that every router learns that the route
has failed, through every means possible, as quickly as possible. One of these features, route
poisoning, helps all routers know for sure that a route has failed. Route poisoning refers to the
practice of advertising a failed route, but with a special metric value called infinity. Routers
consider routes advertised with an infinite metric to have failed. Figure 9-7 shows an example of
route poisoning with RIP, with R2’s G0/1 interface failing, meaning that R2’s route for
172.30.22.0/24 has failed. RIP defines infinity as 16.
1. R2’s G0/1 interface fails.
2. R2 removes its connected route for 172.30.22.0/24 from its routing table.
3. R2 advertises 172.30.22.0/24 with an infinite metric (16 for RIP).
4. Depending on other conditions, R1 either immediately removes the route to 172.30.22.0 from
its routing table, or marks the route as unusable (with an infinite metric) for a few minutes before
removing the route. By the end of this process, Router R1 knows for sure that its old route for
subnet 172.30.22.0/24 has failed, which helps R1 not introduce any looping IP routes.
1. Neighbor discovery: EIGRP routers send Hello messages to discover potential neighboring
EIGRP routers and perform basic parameter checks to determine which routers should become
neighbors. Neighbors that pass all parameter checks are added to the EIGRP neighbor table.
2. Topology exchange: Neighbors exchange full topology updates when the neighbor relationship
comes up, and then only partial updates as needed based on changes to the network topology. The
data learned in these updates is added to the router’s EIGRP topology table.
3. Choosing routes: Each router analyzes its respective EIGRP topology tables, choosing the
lowest-metric route to reach each subnet. EIGRP places the route with the best metric for each
destination into the IPv4 routing table.
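A minimal sketch of enabling EIGRP follows; the autonomous system number and classful network statement are hypothetical:

! Hypothetical EIGRP configuration for autonomous system 100
router eigrp 100
 ! enable EIGRP on all interfaces whose addresses fall inside network 10.0.0.0
 network 10.0.0.0

The show ip eigrp neighbors and show ip eigrp topology commands then display the neighbor table and the topology table (including successors and feasible successors).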
EIGRP Neighbors
■ The source IP address used by the neighbor’s Hello must be in the same subnet as the local
router’s interface IP address/mask.
■ The routers' EIGRP K-values must match. (However, Cisco recommends not changing these
values.)
EIGRP uses EIGRP update messages to send topology information to neighbors. These
update messages can be sent to multicast IP address 224.0.0.10 if the sending router needs to
update multiple routers on the same subnet; otherwise, the updates are sent to the unicast IP address
of the particular neighbor. (Hello messages are always sent to the 224.0.0.10 multicast address.)
The use of multicast packets on LANs allows EIGRP to exchange routing information with all
neighbors on the LAN efficiently.
EIGRP sends update messages without UDP or TCP, but it does use a protocol called
Reliable Transport Protocol (RTP). RTP provides a mechanism to resend any EIGRP messages
that are not received by a neighbor. By using RTP, EIGRP can better avoid loops because a router
knows for sure that the neighboring router has received any updated routing information. (The use
of RTP is just another example of a difference between basic DV protocols like RIP, which have
no mechanism to know whether neighbors receive update messages, and the more advanced
EIGRP.)
Calculating the Best Routes for the Routing Table
With OSPF, you can add the OSPF interface costs and calculate the exact OSPF metric (cost)
for each route. EIGRP uses a math formula and a composite metric, making the exact metric
value harder to predict.
The EIGRP composite metric means that EIGRP feeds multiple inputs (called metric
components) into the math formula. By default, EIGRP feeds two metric components into the
calculation: bandwidth and delay. The result of the calculation is an integer value, and that
integer is the composite metric for that route.
EIGRP’s metric calculation formula actually helps describe some of the key points about
the composite metric. (In real life, you seldom if ever need to sit down and calculate what a router
will calculate with this formula.) The formula, assuming the default settings that tell the router
to use just bandwidth and delay, is as follows:

    metric = ( (10^7 / least-bandwidth) + cumulative-delay ) * 256

In this formula, the term least-bandwidth represents the lowest-bandwidth link in the route, using
a unit of kilobits per second. For instance, if the slowest link in a route is a 10-Mbps Ethernet link,
the first part of the formula is 10^7 / 10^4, which equals 1000. You use 10^4 in the formula because
10 Mbps is equal to 10,000 Kbps (10^4 Kbps). The cumulative-delay value used in the formula is
the sum of all the delay values for all outgoing interfaces in the route, with a unit of “tens of
microseconds.” Note that the delay shows up in two different units:
Unit of microseconds: Listed in the output of show commands such as show interfaces and show
ip eigrp topology, and in the EIGRP update messages
Unit of tens-of-microseconds: Used by the interface mode configuration command (delay), with
which to set the delay, and in the EIGRP metric calculation
EIGRP’s robust metric gives it the ability to choose routes that include more router hops but with
faster links. However, to ensure that the right routes are chosen, engineers must take care to
configure meaningful bandwidth and delay settings. In particular, serial links default to a
bandwidth of 1544 and a delay of 20,000 microseconds, as used in the example shown in Figure
9-10. However, IOS cannot automatically change the bandwidth and delay settings based on the
Layer 1 speed of a serial link. So, using default bandwidth and delay settings, particularly the
bandwidth setting on serial links, can lead to problems.
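The following hedged sketch shows how these metric inputs might be set on a serial interface; the values assume a 4-Mbps link and are purely illustrative (note that the delay command is entered in tens of microseconds):

! Hypothetical metric inputs on a serial interface (values are illustrative)
interface Serial0/0/0
 ! kilobits per second; set to match the real link speed (here, 4 Mbps)
 bandwidth 4000
 ! tens of microseconds, so 2000 means 20,000 microseconds
 delay 2000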
EIGRP Convergence
This topic examines EIGRP’s work to converge to a new loop-free route. Loop avoidance poses one of the
most difficult problems with any dynamic routing protocol. DV protocols overcome this problem
with a variety of tools, some of which create a large portion of the minutes-long convergence time
after a link failure. LS protocols overcome this problem by having each router keep a full topology
of the network, so by running a rather involved mathematical model (for example, OSPF’s SPF
algorithm), a router can avoid any loops.
Feasible Distance and Reported Distance
■ Feasible distance (FD): The local router’s composite metric of the best route to reach a subnet,
as calculated on the local router
■ Reported distance (RD): The next-hop router’s best composite metric for that same subnet.
EIGRP calculates the metric for each route to reach each subnet. For a particular subnet, the route
with the best metric is called the successor, with the router filling the IP routing table with this
successor route. Of the other routes to reach that same subnet—routes whose metrics were larger
than the FD for the successor route—EIGRP needs to determine which alternate route can be used
immediately if the currently best route fails, without causing a routing loop. EIGRP runs a simple
check to identify which routes could be used: if an alternate route’s reported distance (RD) is less
than the successor route’s feasible distance (FD), EIGRP treats the alternate route as guaranteed
loop-free, keeps it as a backup route in its topology table, and uses it if the currently best route
fails. These alternative, immediately usable routes are called feasible successor routes because
they can feasibly be used as the new successor route when the previous successor route fails.
Query and Reply Process
When a route fails, and the route has no feasible successor, EIGRP uses a distributed
algorithm called Diffusing Update Algorithm (DUAL) to choose a replacement route. DUAL
sends queries looking for a loop-free route to the subnet in question. When the new route is found,
DUAL adds it to the routing table.
The EIGRP DUAL process simply uses messages to confirm that a route exists, and
would not create a loop, before deciding to replace a failed route with an alternative route. For
instance, in Figure 9-14, imagine that both routers C and D fail. Router E does not have any
remaining feasible successor route for subnet 1, but there is an obvious physically available path
through router B. To use the route, router E sends EIGRP query messages to its working neighbors
(in this case, router B). Router B’s route to subnet 1 is still working fine, so router B replies to
router E with an EIGRP reply message, simply stating the details of the working route to subnet 1
and confirming that it is still viable. Router E can then add a new route to subnet 1 to its routing
table, without fear of a loop.
UNIT-IV
Access Control Lists Basics
IPv4 access control lists (IP ACL) give network engineers a way to identify different types
of packets. To do so, the ACL configuration lists values that the router can see in the IP, TCP,
UDP, and other headers. For example, an ACL can match packets whose source IP address is
1.1.1.1, or packets whose destination IP address is some address in subnet 10.1.1.0/24, or packets
with a destination port of TCP port 23 (Telnet).
Types of IP ACLs
■ Standard numbered ACLs (1–99)
■ Extended numbered ACLs (100–199)
■ Additional ACL numbers (1300–1999 standard, 2000–2699 extended)
■ Named ACLs
■ Improved editing with sequence numbers
Standard Numbered ACLs
A standard numbered ACL is a type of Cisco filter (ACL) that matches only the source IP address
of the packet (standard), identifies the ACL using numbers rather than names (numbered), and
examines IPv4 packets.
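A minimal sketch of a standard numbered ACL and how it might be enabled on an interface; the ACL number, subnet, and interface are hypothetical:

! Hypothetical standard numbered ACL that matches source subnet 10.1.1.0/24
access-list 1 permit 10.1.1.0 0.0.0.255
!
interface GigabitEthernet0/0
 ! filter packets entering this interface using ACL 1
 ip access-group 1 in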
List Logic with IP ACLs
A single ACL is both a single entity and, at the same time, a list of one or more
configuration commands. As a single entity, the configuration enables the entire ACL on an
interface, in a specific direction, as shown earlier in Figure 16-1. As a list of commands, each
command has different matching logic that the router must apply to each packet when filtering
using that ACL. When processing the ACL, the router compares the packet to the ACL entries in
order, using first-match logic: once a packet matches one line in the ACL, the router takes the
action listed in that line of the ACL and stops looking further in the ACL. If a packet matches no
line at all, the router applies the implicit deny any at the end of every ACL and discards the packet.
In some cases, you will want one ACL command to match any and all packets that reach
that point in the ACL. First, you have to know the (simple) way to match all packets using the
any keyword. More importantly, you need to think about when to match any and all packets.
First, to match any and all packets with an ACL command, just use the any keyword for the
address. For example, to permit all packets:
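A sketch using the numbered-ACL style (the ACL number is arbitrary):

! matches (and permits) every packet
access-list 1 permit any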
Using names instead of numbers to identify the ACL, making it easier to remember the
reason for the ACL
Using ACL subcommands, not global commands, to define the action and matching
parameters
Using ACL editing features that allow the CLI user to delete individual lines from the
ACL and insert new lines
New configuration style for numbered: Numbered ACLs use a configuration style like
named ACLs, as well as the traditional style, for the same ACL; the new style is required to
perform advanced ACL editing.
Deleting single lines: An individual ACL permit or deny statement can be deleted with a no
sequence-number subcommand.
Inserting new lines: Newly added permit and deny commands can be configured with a
sequence number before the deny or permit command, dictating the location of the statement
within the ACL.
Automatic sequence numbering: IOS adds sequence numbers to commands as you configure
them, even if you do not include the sequence numbers.
To take advantage of the ability to delete and insert lines in an ACL, both numbered and named
ACLs must use the same overall configuration style and commands used for named ACLs.
The example shows the power of the ACL sequence number for editing. In this example, the
following occurs :
Step 1. Numbered ACL 24 is configured using this new-style configuration, with three permit
commands.
Step 2. The show ip access-lists command shows the three permit commands with sequence
numbers 10, 20, and 30.
Step 3. The engineer deletes only the second permit command using the no 20 ACL
subcommand, which simply refers to sequence number 20.
Step 4. The show ip access-lists command confirms that the ACL now has only two lines
(sequence numbers 10 and 30).
Step 5. The engineer adds a new deny command to the beginning of the ACL, using the 5 deny
10.1.1.1 ACL subcommand.
Step 6. The show ip access-lists command again confirms the changes, this time listing three
commands, sequence numbers 5, 10, and 30.
Numbered ACL Configuration Versus Named ACL Configuration
Step 7. The engineer lists the configuration (show running-config), which lists the old-style
configuration commands—even though the ACL was created with the new-style commands.
Step 8. The engineer adds a new statement to the end of the ACL using the old-style access-
list 24 permit 10.1.4.0 0.0.0.255 global configuration command.
Step 9. The show ip access-lists command confirms that the old-style access-list command
from the previous step followed the rule of being added only to the end of the ACL.
Step 10. The engineer displays the configuration to confirm that the parts of ACL 24
configured with both new-style commands and old-style commands are all listed in the same old-
style ACL (show running-config) .
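The steps above can be retraced with the following sketch; ACL 24, the sequence numbers, host 10.1.1.1, and subnet 10.1.4.0/24 come from the steps themselves, while the first three permitted subnets are hypothetical:

! Steps 1-2: create numbered ACL 24 with the new (named-style) configuration
ip access-list standard 24
 permit 10.1.1.0 0.0.0.255
 permit 10.1.2.0 0.0.0.255
 permit 10.1.3.0 0.0.0.255
! IOS numbers these lines 10, 20, and 30 automatically
!
! Steps 3 and 5: delete the line with sequence number 20, then insert a new first line
ip access-list standard 24
 no 20
 5 deny 10.1.1.1
!
! Step 8: an old-style command is still accepted and lands at the end of ACL 24
access-list 24 permit 10.1.4.0 0.0.0.255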
UNIT-V
Miscellaneous LAN
The 802.1x authentication process works like the flow in Figure 6-2. Once the PC connects and
the port comes up, the switch uses 802.1x messages to ask the PC to supply a username/ password.
The PC user must then supply that information. For that process to work, the end-user device
must be using an 802.1x client called a supplicant; many OSs include an 802.1x supplicant, so it
may just be seen as a part of the OS settings.
AAA Authentication
■ A AAA server must be configured with usernames and passwords.
■ Each LAN switch must be enabled for 802.1x, to enable the switch as an authenticator,
to configure the IP address of the AAA server, and to enable 802.1x on the required ports.
■ Users must know a username/password combination that exists on the AAA server, or
they will not be able to access the network from any device.
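As a rough sketch only (the exact 802.1x commands vary noticeably between IOS versions), the switch-side configuration described in the bullets above might look like the following, with the RADIUS server address, shared key, and interface number hypothetical:

! Hypothetical 802.1x authenticator configuration (older dot1x command syntax)
aaa new-model
! hypothetical RADIUS-based AAA server
radius-server host 10.9.9.9 key MySharedKey
aaa authentication dot1x default group radius
! globally enable 802.1x on the switch
dot1x system-auth-control
!
interface GigabitEthernet0/1
 switchport mode access
 ! require 802.1x authentication before allowing traffic on this port
 dot1x port-control auto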
AAA Login Process
The networking devices would each then need new configuration to tell the device to start
using the AAA server. That configuration would point to the IP address of the AAA server, and
define which AAA protocol to use: either TACACS+ or RADIUS. The configuration includes
details about TCP (TACACS+) or UDP (RADIUS) ports to use.
When using a AAA server for authentication, the switch (or router) simply sends a message
to the AAA server asking whether the username and password are allowed, and the AAA server
replies. Figure 6-4 shows an example, with the user first supplying his username/password, the
switch asking the AAA server, and the server replying to the switch stating that the
username/password is valid.
■ IOS does login authentication for the console, vty, and aux port, by default, based on the setting
of the aaa authentication login default global command.
■ The aaa authentication login default method1 method2… global command lists different
authentication methods, including referencing a AAA group of servers to be used.
■ The methods include: a defined AAA group of AAA servers; local, meaning a locally configured
list of usernames/passwords; or line, meaning to use the password defined by the password line
subcommand.
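A hedged sketch of the configuration described above, pointing a device at a TACACS+ server group for login authentication with locally configured usernames as a fallback; the server name, group name, address, and key are hypothetical:

! Hypothetical AAA login authentication using a TACACS+ server group
aaa new-model
!
tacacs server MY-AAA-1
 address ipv4 10.9.9.9
 key MySharedKey
!
aaa group server tacacs+ MY-AAA-GROUP
 server name MY-AAA-1
!
! try the TACACS+ group first, then fall back to local usernames
aaa authentication login default group MY-AAA-GROUP local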
DHCP Snooping