Handout CS407 - Routing and Switching
The advantages of peer-to-peer networking:
• Easy to set up
• Less complexity
• Lower cost since network devices and dedicated servers may not be required
• Can be used for simple tasks such as transferring files and sharing printers
The disadvantages of peer-to-peer networking:
• No centralized administration
• Not as secure
• Not scalable
• All devices may act as both clients and servers, which can slow their performance
Network Components
Communication across a network is carried on a medium. The medium provides the channel over which
the message travels from source to destination.
Modern networks primarily use three types of media to interconnect devices and to provide the
pathway over which data can be transmitted. As shown in the figure above, these media are:
• Metallic wires within cables
• Glass or plastic fibers (fiber optic cable)
• Wireless transmission
The signal encoding that must occur for the message to be transmitted is different for each media type.
Different types of network media have different features and benefits. Not all network media have the
same characteristics, nor are they appropriate for the same purposes. The criteria for choosing network
media are:
• The distance the media can successfully carry a signal
• The environment in which the media is to be installed
• The amount of data and the speed at which it must be transmitted
• The cost of the media and installation
Network Representation
When conveying complex information such as displaying all the devices and media in a large
internetwork, it is helpful to use visual representations.
Like any other language, the language of networking uses a common set of symbols to represent the
different end devices, network devices, and media, as shown in the figure above. In addition to these
representations, specialized terminology is used when discussing how each of these devices and media
connect to each other. Important terms to remember are:
Network Interface Card - A NIC, or LAN adapter, provides the physical connection to the network at the
PC or other host device. The media connecting the PC to the networking device plugs directly into the
NIC.
Physical Port - A connector or outlet on a networking device where the media is connected to a host or
other networking device.
Interface - Specialized ports on an internetworking device that connect to individual networks. Because
routers are used to interconnect networks, the ports on a router are referred to as network interfaces.
Topology Diagrams
Physical Topology
Topology diagrams are mandatory for anyone working with a network. They provide a visual map of how
the network is connected.
There are two types of topology diagrams:
Physical topology diagrams - Identify the physical location of intermediary devices, configured ports,
and cable installation.
Logical topology diagrams - Identify devices, ports, and IP addressing scheme.
Logical Topology
Topic 3: LANs, WANs, and the Internet
Types of Networks
Network infrastructures can vary greatly in terms of:
• Size of the area covered
• Number of users connected
• Number and types of services available
Local Area Network (LAN) - A network infrastructure that provides access to users and end devices in a
small geographical area.
Wide Area Network (WAN) - A network infrastructure that provides access to other networks over a
wide geographical area.
Other types of networks include:
Metropolitan Area Network (MAN) - A network infrastructure that spans a physical area larger than a
LAN but smaller than a WAN (e.g., a city). MANs are typically operated by a single entity such as a large
organization.
Wireless LAN (WLAN) - Similar to a LAN but wirelessly interconnects users and end points in a small
geographical area.
Storage Area Network (SAN) - A network infrastructure designed to support file servers and provide
data storage, retrieval, and replication. It involves high-end servers, multiple disk arrays (called blocks),
and Fibre Channel interconnection technology.
The Internet - A Network of Networks
Although there are benefits to using a LAN or WAN, most individuals need to communicate with a
resource on another network, outside of the local network within the home, campus, or organization.
This is done using the Internet.
As shown in the figure above, the Internet is a worldwide collection of interconnected networks
(internetworks or internet for short), cooperating with each other to exchange information using
common standards. Through telephone wires, fiber optic cables, wireless transmissions, and satellite
links, Internet users can exchange information in a variety of forms.
The Internet is a conglomerate of networks and is not owned by any individual or group. Ensuring
effective communication across this diverse infrastructure requires the application of consistent and
commonly recognized technologies and standards as well as the cooperation of many network
administration agencies. There are organizations that have been developed for the purpose of helping
to maintain structure and standardization of Internet protocols and processes. These organizations
include the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and
Numbers (ICANN), and the Internet Architecture Board (IAB), plus many others.
Internet Access Technologies
There are many different ways to connect users and organizations to the Internet. The figure above
illustrates common connection options for small office and home office users, which include:
Cable - Typically offered by cable television service providers, the Internet data signal is carried on the
same coaxial cable that delivers cable television. It provides a high bandwidth, always on, connection to
the Internet. A special cable modem separates the Internet data signal from the other signals carried on
the cable and provides an Ethernet connection to a host computer or LAN.
DSL - Provides a high bandwidth, always on, connection to the Internet. It requires a special high-speed
modem that separates the DSL signal from the telephone signal and provides an Ethernet connection to
a host computer or LAN. DSL runs over a telephone line, with the line split into three channels. One
channel is used for voice telephone calls. This channel allows an individual to receive phone calls
without disconnecting from the Internet. A second channel is a faster download channel, used to receive
information from the Internet. The third channel is used for sending or uploading information. This
channel is usually slightly slower than the download channel. The quality and speed of the DSL
connection depends mainly on the quality of the phone line and the distance from your phone
company's central office. The farther you are from the central office, the slower the connection.
Cellular - Cellular Internet access uses a cell phone network to connect. Wherever you can get a cellular
signal, you can get cellular Internet access. Performance will be limited by the capabilities of the phone
and the cell tower to which it is connected. The availability of cellular Internet access is a real benefit in
those areas that would otherwise have no Internet connectivity at all, or for those constantly on the go.
Satellite - Satellite service is a good option for homes or offices that do not have access to DSL or cable.
Satellite dishes require a clear line of sight to the satellite and so might be difficult in heavily wooded
areas or places with other overhead obstructions. Speeds will vary depending on the contract, though
they are generally good. Equipment and installation costs can be high (although check the provider for
special deals), with a moderate monthly fee thereafter. The availability of satellite Internet access is a
real benefit in those areas that would otherwise have no Internet connectivity at all.
Dial-up Telephone - An inexpensive option that uses any phone line and a modem. To connect to the
ISP, a user calls the ISP access phone number. The low bandwidth provided by a dial-up modem
connection is usually not sufficient for large data transfer, although it is useful for mobile access while
traveling. A modem dial-up connection should only be considered when higher speed connection
options are not available.
Many homes and small offices are now more commonly connected directly with fiber optic cables. This
enables an Internet service provider to provide higher bandwidth speeds and support more services
such as Internet, phone, and TV.
The choice of connection varies depending on geographical location and service provider availability.
Topic 4: Packet Tracer Basics - Part I
In this topic, we will go through the basics of the Packet Tracer software. Packet Tracer is a networking
learning tool that supports a wide range of physical and logical simulations. It also provides visualization
tools to help you understand the internal workings of a network.
Topic 5: Packet Tracer Basics - Part II
This is the continuation of the previous topic about Packet Tracer basics.
Topic 6: Rules of Communication
Establishing the Rules
Before communicating with one another, individuals must use established rules or agreements to
govern the conversation.
Message Encoding
One of the first steps to sending a message is encoding it. Encoding is the process of converting
information into another, acceptable form, for transmission. Decoding reverses this process in order to
interpret the information.
Encoding also occurs in computer communication, as shown in Figure above. Encoding between hosts
must be in an appropriate form for the medium. Messages sent across the network are first converted
into bits by the sending host. Each bit is encoded into a pattern of sounds, light waves, or electrical
impulses depending on the network media over which the bits are transmitted. The destination host
receives and decodes the signals in order to interpret the message.
Message Formatting and Encapsulation
When a message is sent from source to destination, it must use a specific format or structure. Message
formats depend on the type of message and the channel that is used to deliver the message.
A frame acts like an envelope; it provides the address of the intended destination and the address of the
source host, as shown in Figure above.
The format and contents of a frame are determined by the type of message being sent and the channel
over which it is communicated. Messages that are not correctly formatted are not successfully delivered
to or processed by the destination host.
Message Size
Another rule of communication is size. When people communicate with each other, the messages that
they send are usually broken into smaller parts or sentences.
Message Timing
Another factor that affects how well a message is received and understood is timing. People use timing
to determine when to speak, how fast or slow to talk, and how long to wait for a response. These are
the rules of engagement.
Access Method
Access method determines when someone is able to send a message.
Flow Control
Timing also affects how much information can be sent and the speed that it can be delivered. If one
person speaks too quickly, it is difficult for the other person to hear and understand the message. The
receiving person must ask the sender to slow down. In network communication, a sending host can
transmit messages at a faster rate than the destination host can receive and process. Source and
destination hosts use flow control to negotiate correct timing for successful communication.
Response Timeout
If a person asks a question and does not hear a response within an acceptable amount of time, the
person assumes that no answer is coming and reacts accordingly.
Message Delivery Options - Unicast
A protocol suite is a set of protocols that work together to provide comprehensive network
communication services. A protocol suite may be specified by a standards organization or developed by
a vendor.
The protocols IP, HTTP, and DHCP are all part of the Internet protocol suite known as Transmission
Control Protocol/Internet Protocol (TCP/IP). The TCP/IP protocol suite is an open standard, meaning these protocols are
freely available to the public, and any vendor is able to implement these protocols on their hardware or
in their software.
The IP suite is a suite of protocols required for transmitting and receiving information using the Internet.
It is commonly known as TCP/IP because the first two networking protocols defined for this standard
were TCP and IP. The open standards-based TCP/IP has replaced other vendor proprietary protocol
suites, such as Apple's AppleTalk and Novell's Internetwork Packet Exchange/Sequenced Packet
Exchange (IPX/SPX).
Today, the suite includes dozens of protocols, as shown in Figure above. They are organized in layers
using the TCP/IP protocol model. TCP/IP protocols span from the internet layer to the application
layer of the TCP/IP model. The lower layer protocols in the data link or network access
layer are responsible for delivering the IP packet over the physical medium. These lower layer protocols
are developed by standards organizations, such as IEEE.
The TCP/IP protocol suite is implemented as a TCP/IP stack on both the sending and receiving hosts to
provide end-to-end delivery of applications over a network. The 802.3 or Ethernet protocols are used to
transmit the IP packet over the physical medium used by the LAN.
The TCP/IP Reference Model
• An alternative reference model to the OSI model.
• The architecture of the TCP/IP protocol suite follows the structure of this model.
• Similar in purpose to the OSI model.
Topic 8: Packet Tracer – Investigating the TCP/IP and OSI Models in Action
This simulation activity is intended to provide a foundation for understanding the TCP/IP protocol suite
and the relationship to the OSI model. Simulation mode allows you to view the data contents being sent
across the network at each layer.
As data moves through the network, it is broken down into smaller pieces and identified so that the
pieces can be put back together when they arrive at the destination. Each piece is assigned a specific
name (protocol data unit [PDU]) and associated with a specific layer of the TCP/IP and OSI models.
Packet Tracer simulation mode enables you to view each of the layers and the associated PDU. The
following steps lead the user through the process of requesting a web page from a web server by using
the web browser application available on a client PC.
Topic 9: Internetwork Operating System (IOS):
Cisco IOS
• All networking equipment depends on an operating system:
- End devices
- Switches
- Routers
- Wireless access points
- Firewalls
Cisco Internetwork Operating System (IOS)
• Collection of network operating systems used on Cisco devices
Operating System
All end devices and network devices connected to the Internet require an operating system (OS) to help
them perform their function.
When a computer is powered on, it loads the OS, normally from a disk drive, into RAM. The portion of
the OS code that interacts directly with the computer hardware is known as the kernel. The portion that
interfaces with the applications and user is known as the shell. The user can interact with the shell using
either the command-line interface (CLI) or graphical user interface (GUI).
When using the CLI, the user interacts directly with the system in a text-based environment by entering
commands on the keyboard at a command prompt. The system executes the command, often providing
textual output. A GUI allows the user to interact with the system in an environment that
uses graphical images, multimedia, and text. Actions are performed by interacting with the images on
screen. GUI is more user friendly and requires less knowledge of the command structure to utilize the
system. For this reason, many individuals rely on the GUI environments. Many operating systems offer
both GUI and CLI.
The operating system on home routers is usually called firmware. The most common method for
configuring a home router is using a web browser to access an easy to use GUI. Most home routers
enable the update of the firmware as new features or security vulnerabilities are discovered.
Infrastructure network devices use a network operating system. The network operating system used on
Cisco devices is called the Cisco Internetwork Operating System (IOS). Cisco IOS is a generic term for the
collection of network operating systems used on Cisco networking devices. Cisco IOS is used for most
Cisco devices regardless of the type or size of the device. The most common method of accessing these
devices is using a CLI.
IOS Functions
Cisco IOS routers and switches perform functions that network professionals depend upon to make their
networks operate as expected. Major functions performed or enabled by Cisco routers and switches
include:
• Providing network security
• IP addressing of virtual and physical interfaces
• Enabling interface-specific configurations to optimize connectivity of the respective media
• Routing
• Enabling quality of service (QoS) technologies
• Supporting network management technologies
Each feature or service has an associated collection of configuration commands that allow a network
technician to implement it. The services provided by the Cisco IOS are generally accessed using a CLI.
Topic 10: Accessing an IOS Device:
There are several ways to access the CLI environment. The most common methods are:
Console
The console port is a management port that provides out-of-band access to a Cisco device. Out-of-band
access refers to access via a dedicated management channel that is used for device maintenance
purposes only. The advantage of using a console port is that the device is accessible even if no
networking services have been configured, such as when performing an initial configuration of the
networking device. When performing an initial configuration, a computer running terminal emulation
software is connected to the console port of the device using a special cable. Configuration commands
for setting up the switch or router can be entered on the connected computer.
The console port can also be used when the networking services have failed and remote access of the
Cisco IOS device is not possible. If this occurs, a connection to the console can enable a computer to
determine the status of the device. By default, the console conveys the device startup, debugging, and
error messages. After the network technician is connected to the device, the network technician can
perform any configuration commands necessary using the console session.
For many IOS devices, console access does not require any form of security, by default. However, the
console should be configured with passwords to prevent unauthorized device access. In the event that a
password is lost, there is a special set of procedures for bypassing the password and accessing the
device. The device should also be located in a locked room or equipment rack to prevent unauthorized
physical access.
Telnet
Telnet is a method for remotely establishing a CLI session of a device, through a virtual interface, over a
network. Unlike the console connection, Telnet sessions require active networking services on the
device. The network device must have at least one active interface configured with an Internet address,
such as an IPv4 address. Cisco IOS devices include a Telnet server process that allows users to enter
configuration commands from a Telnet client. In addition to supporting the Telnet server process, the
Cisco IOS device also contains a Telnet client. This allows a network administrator to telnet from the
Cisco device CLI to any other device that supports a Telnet server process.
SSH
The Secure Shell (SSH) protocol provides a remote login similar to Telnet, except that it uses more
secure network services. SSH provides stronger password authentication than Telnet and uses
encryption when transporting session data. This keeps the user ID, password, and the details of the
management session private. As a best practice, use SSH instead of Telnet whenever possible.
Most versions of Cisco IOS include an SSH server. In some devices, this service is enabled by default.
Other devices require the SSH server to be enabled manually. IOS devices also include an SSH client that
can be used to establish SSH sessions with other devices.
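Although the exact steps vary by platform and IOS version, a minimal sketch of enabling SSH access might look like the following (the domain name, username, and password are placeholders, not values from this handout):
Switch(config)# ip domain-name example.com
Switch(config)# crypto key generate rsa
Switch(config)# username admin secret StrongPass123
Switch(config)# line vty 0 15
Switch(config-line)# transport input ssh
Switch(config-line)# login local
Switch(config-line)# end
The crypto key generate rsa command creates the key pair that SSH uses for encryption, and transport input ssh restricts the vty lines to SSH only, disabling Telnet on those lines.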
AUX
An older way to establish a CLI session remotely is via a telephone dialup connection using a modem
connected to the auxiliary (AUX) port of a router. Similar to the console connection, the AUX method is
also an out-of-band connection and does not require any networking services to be configured or
available on the device. In the event that network services have failed, it may be possible for a remote
administrator to access the switch or router over a telephone line.
The AUX port can also be used locally, like the console port, with a direct connection to a computer
running a terminal emulation program. However, the console port is preferred over the AUX port for
troubleshooting because it displays startup, debugging, and error messages by default.
Terminal Emulation Program
Terminal emulation software is used to connect to a networking device, usually over a serial or console
connection. Common programs include:
• PuTTY
• Tera Term
• HyperTerminal
• OS X Terminal
Topic 11: IOS Modes of Operation:
After a network technician is connected to a device, it is possible to configure it. The network technician
must navigate through various modes of the IOS. The Cisco IOS modes are quite similar for switches and
routers. The CLI uses a hierarchical structure for the modes.
In hierarchical order from most basic to most specialized, the major modes are:
• User EXEC mode
• Privileged EXEC mode
• Global configuration mode
• Other specific configuration modes, such as interface configuration mode
The two primary modes of operation are user EXEC mode and privileged EXEC mode. As a security
feature, the Cisco IOS software separates the EXEC sessions into two levels of access. The privileged
EXEC mode has a higher level of authority in what it allows the user to do with the device.
User EXEC Mode
The user EXEC mode has limited capabilities but is useful for some basic operations. The user EXEC mode
is at the most basic level of the modal hierarchical structure. This mode is the first mode encountered
upon entrance into the CLI of an IOS device.
The user EXEC mode allows only a limited number of basic monitoring commands. This is often referred
to as view-only mode. The user EXEC level does not allow the execution of any commands that might
change the configuration of the device.
By default, there is no authentication required to access the user EXEC mode from the console.
However, it is a good practice to ensure that authentication is configured during the initial configuration.
The user EXEC mode is identified by the CLI prompt that ends with the > symbol. This is an example that
shows the > symbol in the prompt:
Switch>
Privileged EXEC Mode
The execution of configuration and management commands requires that the network administrator
use the privileged EXEC mode or a more specific mode in the hierarchy. This means that a user must
enter user EXEC mode first, and from there, access privileged EXEC mode.
The privileged EXEC mode can be identified by the prompt ending with the # symbol.
Switch#
By default, privileged EXEC mode does not require authentication. It is a good practice to ensure that
authentication is configured.
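As a quick illustration, the enable command moves from user EXEC mode to privileged EXEC mode, and the disable command returns to user EXEC mode (shown here on a switch with the default hostname):
Switch> enable
Switch#
Switch# disable
Switch>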
Global configuration mode and all other more specific configuration modes can only be reached from
the privileged EXEC mode. In a later section of this chapter, we will examine device configuration and
some of the configuration modes.
Global Configuration Mode
The primary configuration mode is called global configuration or global config. From global configuration
mode, CLI configuration changes are made that affect the operation of the device as a whole. The global
configuration mode is accessed before accessing specific configuration modes.
The following CLI command is used to take the device from privileged EXEC mode to the global
configuration mode and to allow entry of configuration commands from a terminal:
Switch# configure terminal
After the command is executed, the prompt changes to show that the switch is in global configuration
mode.
Switch(config)#
Specific Configuration Modes
From the global configuration mode, the user can enter different sub-configuration modes. Each of
these modes allows the configuration of a particular part or function of the IOS device. The list below
shows a few of them:
Interface mode - to configure one of the network interfaces (Fa0/0, S0/0/0)
Line mode - to configure one of the physical or virtual lines (console, AUX, VTY)
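The following sketch illustrates moving from global configuration mode into interface configuration mode and then into line configuration mode, and finally returning to privileged EXEC mode (the interface and line numbers are examples only):
Switch(config)# interface FastEthernet0/1
Switch(config-if)# exit
Switch(config)# line console 0
Switch(config-line)# end
Switch#
The exit command moves back one level to global configuration mode, whereas end (or Ctrl-Z) returns directly to privileged EXEC mode.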
Command Prompts
When using the CLI, the mode is identified by the command-line prompt that is unique to that mode. By
default, every prompt begins with the device name. Following the name, the remainder of the prompt
indicates the mode. For example, the default prompt for the global configuration mode on a switch
would be:
Switch(config)#
Navigating between IOS Modes
A Cisco IOS device supports many commands. Each IOS command has a specific format or syntax and
can only be executed at the appropriate mode. The general syntax for a command is the command
followed by any appropriate keywords and arguments. Some commands include a subset of keywords
and arguments that provide additional functionality. Commands are used to execute an action, and the
keywords are used to identify where or how to execute the command.
As shown in Figure above, the command is the initial word or words entered in the command line
following the prompt. The commands are not case-sensitive. Following the command are one or more
keywords and arguments. After entering each complete command, including any keywords and
arguments, press the Enter key to submit the command to the command interpreter.
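For example, in the hedged sample below, ping is the command and the IPv4 address (a placeholder) is its argument; in the second line, show is the command followed by the keywords ip, interface, and brief:
Switch# ping 192.168.10.5
Switch# show ip interface brief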
Context-Sensitive Help
The context-sensitive help provides a list of commands and the arguments associated with those
commands within the context of the current mode. To access context-sensitive help, enter a question
mark, ?, at any prompt. There is an immediate response without the need to use the Enter key.
One use of context-sensitive help is to get a list of available commands. This can be used when you are
unsure of the name for a command or you want to see if the IOS supports a particular command in a
particular mode.
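For example, entering a partial command followed by a question mark lists the commands beginning with those characters, and a question mark after a command lists the keywords it accepts. A typical exchange might look like this (output abbreviated):
Switch# cl?
clear  clock
Switch# clock ?
set  Set the time and date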
Command Syntax Check
When a command is submitted by pressing the Enter key, the command line interpreter parses the
command from left to right to determine what action is being requested. The IOS generally only
provides negative feedback, as shown in Figure above. If the interpreter understands the command, the
requested action is executed and the CLI returns to the appropriate prompt. However, if the interpreter
cannot understand the command being entered, it will provide feedback describing what is wrong with
the command.
There are three different types of error messages:
Ambiguous command
Incomplete command
Incorrect command
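For illustration, typical responses to each error type might look like the following; for an invalid command, the IOS also places a caret (^) under the point where the error was detected (the commands shown are examples only):
Switch# show con
% Ambiguous command: "show con"
Switch# clock set
% Incomplete command.
Switch# clock set 19:50:00 25 6
% Invalid input detected at '^' marker.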
Hot Keys and Shortcuts
The IOS CLI provides hot keys and shortcuts that make configuring, monitoring, and troubleshooting
easier.
The following are worthy of special note:
Down Arrow - Allows the user to scroll forward through former commands
Up Arrow - Allows the user to scroll backward through former commands
Tab - Completes the remainder of a partially typed command or keyword
Ctrl-A - Moves to the beginning of the line
Ctrl-E - Moves to the end of the line
Ctrl-R - Redisplays a line
Ctrl-Z - Exits the configuration mode and returns to privileged EXEC
Ctrl-C - Exits the configuration mode or aborts the current command
Ctrl-Shift-6 - Allows the user to interrupt an IOS process such as ping or traceroute
Abbreviated commands or keywords
Commands and keywords can be abbreviated to the minimum number of characters that identify a
unique selection. For example, the configure command can be abbreviated to conf because configure is
the only command that begins with conf. An abbreviation of con will not work because more than one
command begins with con.
Keywords can also be abbreviated.
As another example, show interfaces can be abbreviated like this:
Switch# show interfaces
Switch# show int
IOS Examination Commands
One of the most commonly used commands on a switch or router is:
Switch# show version
This command displays information about the currently loaded IOS version, along with hardware and
device information. If you are logged into a router or switch remotely, the show version command is an
excellent means of quickly finding useful summary information about the particular device to which you
are connected.
Topic 13: Packet Tracer - Navigating the IOS:
In this activity, you will practice skills necessary for navigating the Cisco IOS, including different user
access modes, various configuration modes, and common commands you use on a regular basis.
Topic 14: Configuring Hostnames:
Cisco switches and Cisco routers have many similarities. They support a similar modal operating system,
support similar command structures, and support many of the same commands. In addition, both
devices have identical initial configuration steps when implementing them in a network.
However, a Cisco IOS switch is one of the simplest devices that can be configured on a network. This is
because no configuration is required before the device starts functioning. At its most basic, a
switch can be plugged in with no configuration, but it will still switch data between connected devices.
A switch is also one of the fundamental devices used in the creation of a small network. By connecting
two PCs to a switch, those PCs will instantly have connectivity with one another. Initial settings include
setting a name for the switch, limiting access to the device configuration, configuring banner messages,
and saving the configuration.
Device Names
When configuring a networking device, one of the first steps is configuring a unique device name, or
hostname. Hostnames appear in CLI prompts, can be used in various authentication processes between
devices, and should be used on topology diagrams.
Hostnames are configured on the active networking device. If the device name is not explicitly
configured, a factory-assigned default device name is used by Cisco IOS. The default name for a Cisco
IOS switch is "Switch."
Some guidelines for naming conventions are that names should:
• Start with a letter
• Contain no spaces
• End with a letter or digit
• Use only letters, digits, and dashes
• Be less than 64 characters in length
Hostnames allow devices to be identified by network administrators over a network or the Internet.
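For example, following these guidelines, a switch might be given the hostname Sw-Floor-1 (an illustrative name, not one used elsewhere in this handout):
Switch# configure terminal
Switch(config)# hostname Sw-Floor-1
Sw-Floor-1(config)#
Notice that the prompt changes immediately to reflect the new hostname.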
Securing Privileged EXEC Mode
• Use the enable secret command, not the older enable password command
• enable secret provides greater security because the password is encrypted
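A minimal sketch, assuming the placeholder password class is acceptable for a lab environment:
Switch(config)# enable secret class
Switch(config)# exit
After this, the IOS prompts for the password whenever a user attempts to enter privileged EXEC mode.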
Securing User EXEC Mode
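Although the details are not covered in this handout, user EXEC access through the console and the vty (Telnet/SSH) lines is normally secured with line passwords. A minimal sketch, assuming the placeholder password cisco:
Switch(config)# line console 0
Switch(config-line)# password cisco
Switch(config-line)# login
Switch(config-line)# exit
Switch(config)# line vty 0 15
Switch(config-line)# password cisco
Switch(config-line)# login
Switch(config-line)# end
The login command is what actually tells the line to prompt for the configured password.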
The IEEE and telecommunications industry standards for wireless data communications cover both the
data link and physical layers.
Topic 21: Data Link Layer Protocols:
The TCP/IP network access layer is the equivalent of the OSI:
• Data link (Layer 2)
• Physical (Layer 1)
The data link layer is responsible for the exchange of frames between nodes over a physical network
media. It allows the upper layers to access the media and controls how data is placed and received on
the media.
Specifically, the data link layer performs these two basic services:
• It accepts Layer 3 packets and packages them into data units called frames.
• It controls media access and performs error detection.
The data link layer effectively separates the media transitions that occur as the packet is forwarded from
the communication processes of the higher layers. The data link layer receives packets from and directs
packets to an upper layer protocol, in this case IPv4 or IPv6. This upper layer protocol does not need to
be aware of which media the communication will use.
Data Link Sublayers
The data link layer is actually divided into two sublayers:
Logical Link Control (LLC): This upper sublayer defines the software processes that provide services to
the network layer protocols. It places information in the frame that identifies which network layer
protocol is being used for the frame. This information allows multiple Layer 3 protocols, such as IPv4 and
IPv6, to utilize the same network interface and media.
Media Access Control (MAC): This lower sublayer defines the media access processes performed by the
hardware. It provides data link layer addressing and delimiting of data according to the physical signaling
requirements of the medium and the type of data link layer protocol in use.
Separating the data link layer into sublayers allows for one type of frame defined by the upper layer to
access different types of media defined by the lower layer. Such is the case in many LAN technologies,
including Ethernet.
Data Link Frame Fields
The frame header contains the control information specified by the data link layer protocol for the
specific logical topology and media used.
Frame control information is unique to each type of protocol. It is used by the Layer 2 protocol to
provide features demanded by the communication environment.
Start Frame field: Indicates the beginning of the frame.
Source and Destination Address fields: Indicate the source and destination nodes on the media.
Type field: Indicates the upper layer service contained in the frame.
Different data link layer protocols may use different fields from those mentioned. For example, other
Layer 2 frame header fields could include:
Priority/Quality of Service field: Indicates a particular type of communication service for processing.
Logical connection control field: Used to establish a logical connection between nodes.
Physical link control field: Used to establish the media link.
Flow control field: Used to start and stop traffic over the media.
Congestion control field: Indicates congestion in the media.
Data link layer protocols add a trailer to the end of each frame. The trailer is used to determine if the
frame arrived without error. This process is called error detection and is accomplished by placing a
logical or mathematical summary of the bits that comprise the frame in the trailer. Error detection is
added at the data link layer because the signals on the media could be subject to interference,
distortion, or loss that would substantially change the bit values that those signals represent.
A transmitting node creates a logical summary of the contents of the frame. This is known as the cyclic
redundancy check (CRC) value. This value is placed in the Frame Check Sequence (FCS) field of the frame
to represent the contents of the frame.
When the frame arrives at the destination node, the receiving node calculates its own logical summary,
or CRC, of the frame. The receiving node compares the two CRC values. If the two values are the same,
the frame is considered to have arrived as transmitted. If the CRC value in the FCS differs from the CRC
calculated at the receiving node, the frame is discarded.
Therefore, the FCS field is used to determine if errors occurred in the transmission and reception of the
frame. The error detection mechanism provided by the use of the FCS field discovers most errors caused
on the media.
Topic 22: Packet Tracer – Connecting a Wired and Wireless LAN:
When working in Packet Tracer (a lab environment or a corporate setting), you should know how to
select the appropriate cable and how to properly connect devices. This activity will examine device
configurations in Packet Tracer, selecting the proper cable based on the configuration, and connecting
the devices. This activity will also explore the physical view of the network in Packet Tracer.
Topic 23: Network Layer Protocols:
The network layer, or OSI Layer 3, provides services to allow end devices to exchange data across the
network. To accomplish this end-to-end transport, the network layer uses four basic processes:
Addressing end devices - In the same way that a phone has a unique telephone number, end devices
must be configured with a unique IP address for identification on the network. An end device with a
configured IP address is referred to as a host.
Encapsulation - The network layer receives a protocol data unit (PDU) from the transport layer. In a
process called encapsulation, the network layer adds IP header information, such as the IP address of
the source (sending) and destination (receiving) hosts. After header information is added to the PDU,
the PDU is called a packet.
Routing - The network layer provides services to direct packets to a destination host on another
network. To travel to other networks, the packet must be processed by a router. The role of the router is
to select paths for and direct packets toward the destination host in a process known as routing. A
packet may cross many intermediary devices before reaching the destination host. Each route the
packet takes to reach the destination host is called a hop.
De-encapsulation - When the packet arrives at the network layer of the destination host, the host
checks the IP header of the packet. If the destination IP address within the header matches its own IP
address, the IP header is removed from the packet. This process of removing headers from lower layers
is known as de-encapsulation. After the packet is de-encapsulated by the network layer, the resulting
Layer 4 PDU is passed up to the appropriate service at the transport layer.
Unlike the transport layer (OSI Layer 4), which manages the data transport between the processes
running on each host, network layer protocols specify the packet structure and processing used to carry
the data from one host to another host. Operating without regard to the data carried in each packet
allows the network layer to carry packets for multiple types of communications between multiple hosts.
There are several network layer protocols in existence; however, only the following two are commonly
implemented:
• Internet Protocol version 4 (IPv4)
• Internet Protocol version 6 (IPv6)
Other legacy network layer protocols that are not widely used include:
• Novell Internetwork Packet Exchange (IPX)
• AppleTalk
• Connectionless Network Service (CLNS/DECNet)
Characteristics of IP Protocol
IP is the network layer service implemented by the TCP/IP protocol suite.
IP was designed as a protocol with low overhead. It provides only the functions that are necessary to
deliver a packet from a source to a destination over an interconnected system of networks. The protocol
was not designed to track and manage the flow of packets. These functions, if required, are performed
by other protocols in other layers.
The basic characteristics of IP are:
Connectionless - No connection with the destination is established before sending data packets.
Best Effort (unreliable) - Packet delivery is not guaranteed.
Media Independent - Operation is independent of the medium carrying the data.
Topic 24: IPv4 Packet:
Assignment of IP Addresses
For a company or organization to have network hosts, such as web servers, accessible from the Internet,
that organization must have a block of public addresses assigned. Remember that public addresses must
be unique, and use of these public addresses is regulated and allocated to each organization separately.
This is true for IPv4 and IPv6 addresses.
IANA and RIRs
Internet Assigned Numbers Authority (IANA) (http://www.iana.org) manages the allocation of IPv4 and
IPv6 addresses. Until the mid-1990s, all IPv4 address space was managed directly by the IANA. At that
time, the remaining IPv4 address space was allocated to various other registries to manage for particular
purposes or for regional areas. These registration companies are called Regional Internet Registries
(RIRs).
The major registries are:
AfriNIC (African Network Information Centre) - Africa Region http://www.afrinic.net
APNIC (Asia Pacific Network Information Centre) - Asia/Pacific Region http://www.apnic.net
ARIN (American Registry for Internet Numbers) - North America Region http://www.arin.net
LACNIC (Regional Latin-American and Caribbean IP Address Registry) - Latin America and some
Caribbean Islands http://www.lacnic.net
RIPE NCC (Réseaux IP Européens Network Coordination Centre) - Europe, the Middle East, and Central Asia http://www.ripe.net
ISPs
RIRs are responsible for allocating IP addresses to the Internet Service Providers (ISPs). Most companies
or organizations obtain their IPv4 address blocks from an ISP. An ISP will generally supply a small
number of usable IPv4 addresses (6 or 14, corresponding to a /29 or /28 block) to their customers as a
part of their services. Larger blocks of addresses can be obtained based on justification of needs and for
additional service costs.
Topic 29: Using Windows Calculator with Network Addresses:
In this activity, we will use the Windows Calculator application to calculate network addresses.
Topic 30: Converting IPv4 Addresses to Binary:
• Convert IPv4 Addresses from Dotted Decimal to Binary
• Bitwise ANDing
• Network Address Calculation
Topic 31: Network Segmentation:
In early network implementations, it was common for organizations to have all computers and other
networked devices connected to a single IP network. All devices in the organization were assigned an IP
address with a matching network ID. This type of configuration is known as a flat network design. In a
small network, with a limited number of devices, a flat network design is not problematic. However, as
the network grows, this type of configuration can create major issues.
Consider how on an Ethernet LAN, devices use broadcasts to locate needed services and devices. Recall
that a broadcast is sent to all hosts on an IP network. The Dynamic Host Configuration Protocol (DHCP) is
an example of a network service that depends on broadcasts. Devices send broadcasts across the
network to locate the DHCP server. On a large network, this could create a significant amount of traffic
slowing network operations.
The process of segmenting a network, by dividing it into multiple smaller network spaces, is called
subnetting. These sub-networks are called subnets. Network administrators can group devices and
services into subnets that are determined by geographic location (perhaps the 3rd floor of a building),
by organizational unit (perhaps the sales department), by device type (printers, servers, WAN), or any
other division that makes sense for the network. Subnetting can reduce overall network traffic and
improve network performance.
Subnetting
Segmenting networks into subnets creates smaller groups of devices and services in order to:
• Create smaller broadcast domains.
• Limit the amount of traffic on the other network segments.
• Provide low-level security.
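As a simple illustration (not taken from any figure in this handout), the network 192.168.1.0/24 could be divided by borrowing two host bits, creating four subnets of 62 usable host addresses each: 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, and 192.168.1.192/26. A broadcast sent in one of these subnets stays within that subnet instead of reaching every device in the original /24 network.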
A primary function of a router is to forward packets toward their destination. This is accomplished by
using a switching function, which is the process used by a router to accept a packet on one interface and
forward it out of another interface. A key responsibility of the switching function is to encapsulate
packets in the appropriate data link frame type for the outgoing data link.
After the router has determined the exit interface using the path determination function, the router
must encapsulate the packet into the data link frame of the outgoing interface.
What does a router do with a packet received from one network and destined for another network? The
router performs the following three major steps:
Step 1. De-encapsulates the Layer 3 packet by removing the Layer 2 frame header and trailer.
Step 2. Examines the destination IP address of the IP packet to find the best path in the routing table.
Step 3. If the router finds a path to the destination, it encapsulates the Layer 3 packet into a new Layer 2
frame and forwards the frame out the exit interface.
As shown in the figure above, devices have Layer 3 IPv4 addresses and Ethernet interfaces have Layer 2
data link addresses. For example, PC1 is configured with IPv4 address 192.168.1.10 and an example MAC
address of 0A-10. As a packet travels from the source device to the final destination device, the Layer 3
IP addresses do not change. However, the Layer 2 data link addresses change at every hop as the packet
is de-encapsulated and re-encapsulated in a new frame by each router. It is very likely that the packet is
encapsulated in a different type of Layer 2 frame than the one in which it was received. For example, an
Ethernet encapsulated frame might be received by the router on a FastEthernet interface, and then
processed to be forwarded out of a serial interface as a Point-to-Point Protocol (PPP) encapsulated
frame.
Please refer to the slides for a detailed understanding with the help of an example.
Topic 45: Path Determination:
A primary function of a router is to determine the best path to use to send packets. To determine the
best path, the router searches its routing table for a network address that matches the destination IP
address of the packet.
The routing table search results in one of three path determinations:
Directly connected network - If the destination IP address of the packet belongs to a device on a
network that is directly connected to one of the interfaces of the router, that packet is forwarded
directly to the destination device. This means that the destination IP address of the packet is a host
address on the same network as the interface of the router.
Remote network - If the destination IP address of the packet belongs to a remote network, then the
packet is forwarded to another router. Remote networks can only be reached by forwarding packets to
another router.
No route determined - If the destination IP address of the packet does not belong to either a connected
or remote network, the router determines if there is a Gateway of Last Resort available. A Gateway of
Last Resort is set when a default route is configured on a router. If there is a default route, the packet is
forwarded to the Gateway of Last Resort. If the router does not have a default route, then the packet is
discarded. If the packet is discarded, the router sends an ICMP unreachable message to the source IP
address of the packet.
The logic flowchart in the figure above illustrates the router packet forwarding decision process.
Determining the best path involves the evaluation of multiple paths to the same destination network
and selecting the optimum or shortest path to reach that network. Whenever multiple paths to the
same network exist, each path uses a different exit interface on the router to reach that network.
The best path is selected by a routing protocol based on the value or metric it uses to determine the
distance to reach a network. A metric is the quantitative value used to measure the distance to a given
network. The best path to a network is the path with the lowest metric.
Dynamic routing protocols typically use their own rules and metrics to build and update routing tables.
The routing algorithm generates a value, or a metric, for each path through the network. Metrics can be
based on either a single characteristic or several characteristics of a path. Some routing protocols can
base route selection on multiple metrics, combining them into a single metric.
The following lists some dynamic protocols and the metrics they use:
Routing Information Protocol (RIP) - Hop count
Open Shortest Path First (OSPF) - Cost, which in the Cisco implementation is based on cumulative
bandwidth from source to destination
Enhanced Interior Gateway Routing Protocol (EIGRP) - Bandwidth, delay, load, reliability
Load Balancing
What happens if a routing table has two or more paths with identical metrics to the same destination
network?
When a router has two or more paths to a destination with equal cost metrics, then the router forwards
the packets using both paths equally. This is called equal cost load balancing. The routing table contains
the single destination network, but has multiple exit interfaces, one for each equal cost path. The router
forwards packets using the multiple exit interfaces listed in the routing table.
If configured correctly, load balancing can increase the effectiveness and performance of the network.
Equal cost load balancing can be configured to use both dynamic routing protocols and static routes.
Note: Only EIGRP supports unequal cost load balancing.
Topic 46: Analyze the Routing Table:
The routing table of a router stores information about:
Directly connected routes - These routes come from the active router interfaces. Routers add a directly
connected route when an interface is configured with an IP address and is activated.
Remote routes - These are remote networks connected to other routers. Routes to these networks can
either be statically configured or dynamically configured using dynamic routing protocols.
Specifically, a routing table is a data file in RAM that is used to store route information about directly
connected and remote networks. The routing table contains network or next hop associations. These
associations tell a router that a particular destination can be optimally reached by sending the packet to
a specific router that represents the next hop on the way to the final destination. The next hop
association can also be the outgoing or exit interface to the next destination.
The figure below identifies the directly connected networks and remote networks of router R1.
On a Cisco IOS router, the show ip route command can be used to display the IPv4 routing table of a
router. A router provides additional route information, including how the route was learned, how long
the route has been in the table, and which specific interface to use to get to a predefined destination.
Entries in the routing table can be added as:
Local Route interfaces - Added when an interface is configured and active. This entry is only displayed in
IOS 15 or newer for IPv4 routes and all IOS releases for IPv6 routes.
Directly connected interfaces - Added to the routing table when an interface is configured and active.
Static routes - Added when a route is manually configured and the exit interface is active.
Dynamic routing protocol - Added when routing protocols that dynamically learn about the network,
such as EIGRP or OSPF, are implemented and networks are identified.
The sources of the routing table entries are identified by a code. The code identifies how the route was
learned. For instance, common codes include:
L - Identifies the address assigned to a router's interface. This allows the router to efficiently determine
when it receives a packet destined for the interface itself rather than one to be forwarded.
C - Identifies a directly connected network.
S - Identifies a static route created to reach a specific network.
D - Identifies a dynamically learned network from another router using EIGRP.
O - Identifies a dynamically learned network from another router using the OSPF routing protocol.
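As an illustration only (these entries are not taken from the figure referenced below), a few lines of show ip route output might look like this:
R1# show ip route
C    192.168.10.0/24 is directly connected, GigabitEthernet0/0
L    192.168.10.1/32 is directly connected, GigabitEthernet0/0
S    10.1.1.0/24 [1/0] via 209.165.200.226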
The figure below shows the routing table of R1 in a simple network.
Topic 52: Packet Tracer – Building a Switch and Router Network:
• Setup Topology
• Configure Devices
• Verify Connectivity
• Display Device Information
In this activity, you will configure a router with basic settings including IP addressing. You will also
configure a switch for remote management and configure the PCs. After you have successfully verified
connectivity, you will use show commands to gather information about the network.
Topic 53: Packet Tracer – Building a Switch and Router Network - 2:
This is the continuation of the previous topic.
Topic 54: Packet Tracer – Testing Network Connectivity with Ping & Traceroute:
• Build and Configure a Network
• Ping Command
• Tracert/Traceroute Command
Topic 55: Packet Tracer – Testing Network Connectivity with Ping & Traceroute - 2:
This is the continuation of the previous topic.
Topic 56: Packet Tracer – Testing Network Connectivity with Ping & Traceroute - 3:
This is the continuation of the previous topic.
Topic 57: Static Routing:
A router can learn about remote networks in one of two ways:
Manually - Remote networks are manually entered into the routing table using static routes.
Dynamically - Remote routes are automatically learned using a dynamic routing protocol.
A network administrator can manually configure a static route to reach a specific network. Unlike a
dynamic routing protocol, static routes are not automatically updated and must be manually
reconfigured any time the network topology changes. A static route does not change until the
administrator manually reconfigures it.
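A minimal sketch of a standard IPv4 static route, assuming a destination network of 192.168.2.0/24 reachable through a next-hop address of 192.168.1.2 (both values are placeholders):
Router(config)# ip route 192.168.2.0 255.255.255.0 192.168.1.2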
Dynamic vs. Static Routing
Static routing has three primary uses:
• Providing ease of routing table maintenance in smaller networks that are not expected to grow
significantly.
• Routing to and from stub networks. A stub network is a network accessed by a single route, and
the router has only one neighbor.
• Using a single default route to represent a path to any network that does not have a more
specific match with another route in the routing table. Default routes are used to send traffic to
any destination beyond the next upstream router.
Static routes can also be used to:
• Reduce the number of routes advertised by summarizing several contiguous networks as one
static route
• Create a backup route in case a primary route link fails
The following types of IPv4 and IPv6 static routes will be discussed:
• Standard static route
• Default static route
• Summary static route
• Floating static route
A default static route is a route that matches all packets. A default route identifies the gateway IP
address to which the router sends all IP packets for which it does not have a learned or static route. A default
static route is simply a static route with 0.0.0.0/0 as the destination IPv4 address. Configuring a default
static route creates a Gateway of Last Resort.
Default static routes are used:
• When no other routes in the routing table match the packet destination IP address. In other
words, when a more specific match does not exist. A common use is when connecting a
company's edge router to the ISP network.
• When a router has only one other router to which it is connected. This condition is known as a
stub router.
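A sketch of a default static route, assuming the ISP-facing next hop is 209.165.200.226 (a placeholder address):
Router(config)# ip route 0.0.0.0 0.0.0.0 209.165.200.226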
Summary Static Route
To reduce the number of routing table entries, multiple static routes can be summarized into a single
static route if:
• The destination networks are contiguous and can be summarized into a single network address.
• The multiple static routes all use the same exit interface or next-hop IP address.
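For example, assuming the four contiguous networks 172.16.0.0/24 through 172.16.3.0/24 are all reachable through the same placeholder next hop of 10.0.0.2, the four individual static routes could be replaced with a single summary route covering the /22 block:
Router(config)# ip route 172.16.0.0 255.255.252.0 10.0.0.2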
Floating Static Route
Another type of static route is a floating static route. Floating static routes are static routes that are used
to provide a backup path to a primary static or dynamic route, in the event of a link failure. The floating
static route is only used when the primary route is not available.
To accomplish this, the floating static route is configured with a higher administrative distance than the
primary route. Recall that the administrative distance represents the trustworthiness of a route. If
multiple paths to the destination exist, the router will choose the path with the lowest administrative
distance.
For example, assume that an administrator wants to create a floating static route as a backup to an
EIGRP-learned route. The floating static route must be configured with a higher administrative distance
than EIGRP. EIGRP has an administrative distance of 90. If the floating static route is configured with an
administrative distance of 95, the dynamic route learned through EIGRP is preferred to the floating
static route. If the EIGRP-learned route is lost, the floating static route is used in its place.
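Continuing this example, the floating static route might be configured as in the sketch below, where the destination network and next-hop address are placeholders and the trailing 95 sets the administrative distance:
Router(config)# ip route 192.168.20.0 255.255.255.0 10.10.10.2 95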
In the figure below, the Branch router typically forwards all traffic to the HQ router over the private
WAN link. In this example, the routers exchange route information using EIGRP. A floating static route,
with an administrative distance of 91 or higher, could be configured to serve as a backup route. If the
private WAN link fails and the EIGRP route disappears from the routing table, the router selects the
floating static route as the best path to reach the HQ LAN.
Routing protocols are used to facilitate the exchange of routing information between routers. A routing
protocol is a set of processes, algorithms, and messages that are used to exchange routing information
and populate the routing table with the routing protocol's choice of best paths. The purpose of dynamic
routing protocols includes:
• Discovery of remote networks
• Maintaining up-to-date routing information
• Choosing the best path to destination networks
• Ability to find a new best path if the current path is no longer available
The main components of dynamic routing protocols include:
Data structures - Routing protocols typically use tables or databases for their operations. This information
is kept in RAM.
Routing protocol messages - Routing protocols use various types of messages to discover neighboring
routers, exchange routing information, and other tasks to learn and maintain accurate information
about the network.
Algorithm - An algorithm is a finite list of steps used to accomplish a task. Routing protocols use
algorithms for processing routing information and for best path determination.
Topic 70: Routing Protocol Operating Fundamentals:
All routing protocols are designed to learn about remote networks and to quickly adapt whenever there
is a change in the topology. The method that a routing protocol uses to accomplish this depends upon
the algorithm it uses and the operational characteristics of that protocol.
In general, the operations of a dynamic routing protocol can be described as follows:
1. The router sends and receives routing messages on its interfaces.
2. The router shares routing messages and routing information with other routers that are using the
same routing protocol.
3. Routers exchange routing information to learn about remote networks.
4. When a router detects a topology change the routing protocol can advertise this change to other
routers.
All routing protocols follow the same patterns of operation. To help illustrate this, consider the following
scenario in which all three routers are running RIPv2.
When a router powers up, it knows nothing about the network topology. It does not even know that
there are devices on the other end of its links. The only information that a router has is from its own
saved configuration file stored in NVRAM. After a router boots successfully, it applies the saved
configuration. If the IP addressing is configured correctly, then the router initially discovers its own
directly connected networks.
Routers proceed through the boot-up process and then discover any directly connected networks and their
subnet masks. This information is added to their routing tables as follows:
• R1 adds the 10.1.0.0 network available through interface FastEthernet 0/0 and 10.2.0.0 is
available through interface Serial 0/0/0.
• R2 adds the 10.2.0.0 network available through interface Serial 0/0/0 and 10.3.0.0 is available
through interface Serial 0/0/1.
• R3 adds the 10.3.0.0 network available through interface Serial 0/0/1 and 10.4.0.0 is available
through interface FastEthernet 0/0.
With this initial information, the routers then proceed to find additional route sources for their routing
tables.
After initial boot up and discovery, the routing table is updated with all directly connected networks and
the interfaces those networks reside on.
If a routing protocol is configured, the next step is for the router to begin exchanging routing updates to
learn about any remote routes.
The router sends an update packet out all interfaces that are enabled on the router. The update
contains the information in the routing table, which currently consists only of the directly connected networks.
At the same time, the router also receives and processes similar updates from other connected routers.
Upon receiving an update, the router checks it for new network information. Any networks that are not
currently listed in the routing table are added.
Refer to the figure above for a topology setup between three routers, R1, R2, and R3. Based on this
topology, below is a listing of the different updates that R1, R2, and R3 send and receive during initial
convergence.
R1:
• Sends an update about network 10.1.0.0 out the Serial0/0/0 interface
• Sends an update about network 10.2.0.0 out the FastEthernet0/0 interface
• Receives an update from R2 about network 10.3.0.0 and increments the hop count by 1
• Stores network 10.3.0.0 in the routing table with a metric of 1
R2:
• Sends an update about network 10.3.0.0 out the Serial 0/0/0 interface
• Sends an update about network 10.2.0.0 out the Serial 0/0/1 interface
• Receives an update from R1 about network 10.1.0.0 and increments the hop count by 1
• Stores network 10.1.0.0 in the routing table with a metric of 1
• Receives an update from R3 about network 10.4.0.0 and increments the hop count by 1
• Stores network 10.4.0.0 in the routing table with a metric of 1
R3:
• Sends an update about network 10.4.0.0 out the Serial 0/0/1 interface
• Sends an update about network 10.3.0.0 out the FastEthernet0/0 interface
• Receives an update from R2 about network 10.2.0.0 and increments the hop count by 1
• Stores network 10.2.0.0 in the routing table with a metric of 1
After this first round of update exchanges, each router knows about the connected networks of its
directly connected neighbors. However, did you notice that R1 does not yet know about 10.4.0.0 and
that R3 does not yet know about 10.1.0.0? Full knowledge and a converged network do not take place
until there is another exchange of routing information.
At this point the routers have knowledge about their own directly connected networks and about the
connected networks of their immediate neighbors. Continuing the journey toward convergence, the
routers exchange the next round of periodic updates. Each router again checks the updates for new
information.
Refer to the figure above for a topology setup between three routers, R1, R2, and R3. After initial
discovery is complete, each router continues the convergence process by sending and receiving the
following updates.
R1:
• Sends an update about network 10.1.0.0 out the Serial 0/0/0 interface
• Sends an update about networks 10.2.0.0 and 10.3.0.0 out the FastEthernet0/0 interface
• Receives an update from R2 about network 10.4.0.0 and increments the hop count by 1
• Stores network 10.4.0.0 in the routing table with a metric of 2
• The same update from R2 contains information about network 10.3.0.0 with a metric of 1. There is
no change; therefore, the routing information remains the same
R2:
• Sends an update about networks 10.3.0.0 and 10.4.0.0 out of Serial 0/0/0 interface
• Sends an update about networks 10.1.0.0 and 10.2.0.0 out of Serial 0/0/1 interface
• Receives an update from R1 about network 10.1.0.0. There is no change; therefore, the routing
information remains the same
• Receives an update from R3 about network 10.4.0.0. There is no change; therefore, the routing
information remains the same
R3:
• Sends an update about network 10.4.0.0 out the Serial 0/0/1 interface
• Sends an update about networks 10.2.0.0 and 10.3.0.0 out the FastEthernet0/0 interface
• Receives an update from R2 about network 10.1.0.0 and increments the hop count by 1
• Stores network 10.1.0.0 in the routing table with a metric of 2
• The same update from R2 contains information about network 10.2.0.0 with a metric of 1. There is
no change; therefore, the routing information remains the same
Distance vector routing protocols typically implement a routing loop prevention technique known as
split horizon. Split horizon prevents information from being sent out the same interface from which it
was received. For example, R2 does not send an update containing the network 10.1.0.0 out of Serial
0/0/0, because R2 learned about network 10.1.0.0 through Serial 0/0/0.
After routers within a network have converged, the router can then use the information within the
routing table to determine the best path to reach a destination. Different routing protocols have different
ways of calculating the best path.
The network has converged when all routers have complete and accurate information about the entire
network. Convergence time is the time it takes routers to share information, calculate best paths, and
update their routing tables. A network is not completely operable until the network has converged;
therefore, most networks require short convergence times.
Convergence is both collaborative and independent. The routers share information with each other, but
must independently calculate the impact of topology changes on their own routes. Because they reach agreement on the new topology independently, they are said to converge on this consensus.
Convergence properties include the speed of propagation of routing information and the calculation of
optimal paths. The speed of propagation refers to the amount of time it takes for routers within the
network to forward routing information.
Routing protocols can be rated based on their speed of convergence; the faster the convergence, the
better the routing protocol. Generally, older protocols, such as RIP, are slow to converge, whereas
modern protocols, such as EIGRP and OSPF, converge more quickly.
Topic 71: Packet Tracer – Investigating Convergence:
• View the Routing Table of a Converged Network
• Add a New LAN
• Watch Network Converge
Topic 72: Types of Routing Protocols:
Routing protocols can be classified into different groups according to their characteristics. Specifically,
routing protocols can be classified by their:
• Purpose - Interior Gateway Protocol (IGP) or Exterior Gateway Protocol (EGP)
• Operation - Distance vector, link-state protocol, or path-vector protocol
• Behavior - Classful (legacy) or classless protocol
For example, IPv4 routing protocols are classified as follows:
• RIPv1 (legacy) - IGP, distance vector, classful protocol
• IGRP (legacy) - IGP, distance vector, classful protocol developed by Cisco (deprecated beginning with IOS 12.2)
• RIPv2 - IGP, distance vector, classless protocol
• EIGRP - IGP, distance vector, classless protocol developed by Cisco
• OSPF - IGP, link-state, classless protocol
• IS-IS - IGP, link-state, classless protocol
• BGP - EGP, path-vector, classless protocol
The classful routing protocols, RIPv1 and IGRP, are legacy protocols and are only used in older networks.
These routing protocols have evolved into the classless routing protocols, RIPv2 and EIGRP, respectively.
Link-state routing protocols are classless by nature.
Figure below displays a hierarchical view of dynamic routing protocol classification.
Refer to Figure above. In this scenario, R1 is single-homed to a service provider. Therefore, all that is
required for R1 to reach the Internet is a default static route going out of the Serial 0/0/1 interface.
Similar default static routes could be configured on R2 and R3, but it is much more scalable to enter it
one time on the edge router R1 and then have R1 propagate it to all other routers using RIP. To provide
Internet connectivity to all other networks in the RIP routing domain, the default static route needs to
be advertised to all other routers that use the dynamic routing protocol.
To propagate a default route, the edge router must be configured with:
• A default static route using the ip route 0.0.0.0 0.0.0.0 {exit-intf | next-hop-ip} command.
• The default-information originate router configuration command. This instructs R1 to originate default information by propagating the static default route in RIP updates.
The example in Figure below configures a fully specified default static route to the service provider and
then the route is propagated by RIP. Notice that R1 now has a Gateway of Last Resort and default route
installed in its routing table.
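Although the actual addressing comes from the figure, the configuration generally follows this pattern (the next-hop address shown is a placeholder):
R1(config)# ip route 0.0.0.0 0.0.0.0 serial 0/0/1 209.165.200.226
! Fully specified default static route: both the exit interface and a placeholder next hop are given
R1(config)# router rip
R1(config-router)# default-information originate
! R1 now advertises the default route to the other routers in its RIP updates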
Topic 77: Packet Tracer – Configuring RIPv2-1:
• Build the Network
• Configure Device Settings
• Configure RIPv2
• Verify RIPv2
Background/Scenario
In this activity, you will configure static and default routes. A static route is a route that is entered
manually by the network administrator to create a route that is reliable and safe. There are four
different static routes that are used in this activity: a recursive static route, a directly connected static
route, a fully specified static route, and a default route.
Topic 78: Packet Tracer – Configuring RIPv2-2:
This is the continuation of the previous topic.
Topic 79: Packet Tracer – Configuring RIPv2-3:
This is the continuation of the previous topic.
Topic 80: Packet Tracer – Comparing RIP and EIGRP Path Selection:
• Predict the Path
• Trace the Route
Topic 81: Packet Tracer – Configuring RIPv2:
• Configure RIPv2
• Verify Configurations
Topic 82: Link-State Routing Protocol:
Link-state routing protocols are also known as shortest path first protocols and are built around Edsger
Dijkstra's shortest path first (SPF) algorithm.
Link-state routing protocols have the reputation of being much more complex than their distance vector
counterparts. However, the basic functionality and configuration of link-state routing protocols are equally straightforward.
Just like RIP and EIGRP, basic OSPF operations can be configured using the:
• router ospf process-id global configuration command
• network command to advertise networks
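As a minimal sketch (the process ID, router ID, and network/wildcard values below are placeholders rather than values from the course topology):
R1(config)# router ospf 10
R1(config-router)# router-id 1.1.1.1
R1(config-router)# network 172.16.1.0 0.0.0.255 area 0
! Placeholder values; the network command uses a wildcard mask and assigns matching interfaces to an OSPF area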
Dijkstra’s Shortest Path First Algorithm
All link-state routing protocols apply Dijkstra's algorithm to calculate the best path route. The algorithm
is commonly referred to as the shortest path first (SPF) algorithm. This algorithm uses accumulated costs
along each path, from source to destination, to determine the total cost of a route.
In the figure below, each path is labeled with an arbitrary value for cost. The cost of the shortest path
for R2 to send packets to the LAN attached to R3 is 27. Each router determines its own cost to each
destination in the topology. In other words, each router calculates the SPF algorithm and determines the
cost from its own perspective.
The topology displayed in Figure above is used as the reference topology. Notice that in the topology:
R1 is the edge router that connects to the Internet. Therefore, it is propagating a default static route to
R2 and R3.
R1, R2, and R3 contain discontiguous networks separated by another classful network.
R3 is also introducing a 192.168.0.0/16 supernet route.
Figure above displays the IPv4 routing table of R1 with directly connected, static, and dynamic routes.
Note: The routing table hierarchy in Cisco IOS was originally implemented with the classful routing
scheme. Although the routing table incorporates both classful and classless addressing, the overall
structure is still built around this classful scheme.
Remote Networks
The figure above displays an IPv4 routing table entry on R1 for the route to remote network 172.16.4.0
on R3. The entry identifies the following information:
• Route source - Identifies how the route was learned.
• Destination network - Identifies the address of the remote network.
• Administrative distance - Identifies the trustworthiness of the route source.
• Metric - Identifies the value assigned to reach the remote network. Lower values indicate
preferred routes.
• Next hop - Identifies the IPv4 address of the next router to forward the packet to.
• Route timestamp - Identifies when the route was last heard.
• Outgoing interface - Identifies the exit interface to use to forward a packet toward the final
destination.
IPv4 Routing Table – Ultimate Route
A dynamically built routing table provides a great deal of information. Therefore, it is crucial to
understand the output generated by the routing table. Special terms are applied when discussing the
contents of a routing table.
The Cisco IP routing table is not a flat database. The routing table is actually a hierarchical structure that
is used to speed up the lookup process when locating routes and forwarding packets. Within this
structure, the hierarchy includes several levels.
Routes are discussed in terms of:
• Ultimate route
• Level 1 route
• Level 1 parent route
• Level 2 child routes
IPv4 Routing Table – Level 1 Route
A level 1 route is a route with a subnet mask equal to or less than the classful mask of the network
address. Therefore, a level 1 route can be a:
Network route - A network route that has a subnet mask equal to that of the classful mask.
Supernet route - A supernet route is a network address with a mask less than the classful mask, for
example, a summary address.
Default route - A default route is a static route with the address 0.0.0.0/0.
The source of the level 1 route can be a directly connected network, static route, or a dynamic routing
protocol.
IPv4 Routing Table – Level 2 Child Route
A level 2 child route is a route that is a subnet of a classful network address. A level 1 parent route is a
level 1 network route that is subnetted. A level 1 parent route contains level 2 child routes.
Like a level 1 route, the source of a level 2 route can be a directly connected network, a static route, or a
dynamically learned route. Level 2 child routes are also ultimate routes.
Please see the slides for an example.
Topic 87: IPv4 Route Lookup Process:
When a packet arrives on a router interface, the router examines the IPv4 header, identifies the
destination IPv4 address, and proceeds through the router lookup process.
In Figure below, the router examines level 1 network routes for the best match with the destination
address of the IPv4 packet.
1. If the best match is a level 1 ultimate route, then this route is used to forward the packet.
2. If the best match is a level 1 parent route, proceed to the next step.
In Figure above, the router examines child routes (the subnet routes) of the parent route for a best
match.
3. If there is a match with a level 2 child route, that subnet is used to forward the packet.
4. If there is not a match with any of the level 2 child routes, proceed to the next step.
In Figure above, the router continues searching level 1 supernet routes in the routing table for a match,
including the default route, if there is one.
5. If there is a lesser match with a level 1 supernet or default route, the router uses that route to
forward the packet.
6. If there is not a match with any route in the routing table, the router drops the packet.
Note: A route referencing only a next-hop IP address and not an exit interface must be resolved to a
route with an exit interface. A recursive lookup is performed on the next-hop IP address until the route
is resolved to an exit interface.
Best Route = Longest Match
What is meant by the statement that the router must find the best match in the routing table? The best match is the longest match.
For there to be a match between the destination IPv4 address of a packet and a route in the routing
table, a minimum number of far left bits must match between the IPv4 address of the packet and the
route in the routing table. The subnet mask of the route in the routing table is used to determine the
minimum number of far left bits that must match. Remember that an IPv4 packet only contains the IPv4
address and not the subnet mask.
The best match is the route in the routing table that has the greatest number of far left matching bits with
the destination IPv4 address of the packet. The route with the greatest number of equivalent far left
bits, or the longest match, is always the preferred route.
In the figure below, a packet is destined for 172.16.0.10. The router has three possible routes that
match this packet: 172.16.0.0/12, 172.16.0.0/18, and 172.16.0.0/26. Of the three routes, 172.16.0.0/26
has the longest match and is therefore chosen to forward the packet. Remember, for any of these
routes to be considered a match there must be at least the number of matching bits indicated by the
subnet mask of the route.
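Writing the destination address and the three candidate routes in binary makes the comparison explicit:
172.16.0.10 = 10101100.00010000.00000000.00001010
172.16.0.0/12 - the first 12 bits (10101100.0001) match the destination
172.16.0.0/18 - the first 18 bits (10101100.00010000.00) match the destination
172.16.0.0/26 - the first 26 bits (10101100.00010000.00000000.00) match the destination
All three routes are valid matches, but the /26 route matches the greatest number of far left bits, so it is the longest match and is used to forward the packet.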
As shown in Figure above, OSPF version 2 (OSPFv2) is available for IPv4 while OSPF version 3 (OSPFv3) is
available for IPv6.
The initial development of OSPF began in 1987 by the Internet Engineering Task Force (IETF) OSPF
Working Group. At that time, the Internet was largely an academic and research network funded by the
U.S. government.
In 1989, the specification for OSPFv1 was published in RFC 1131. Two implementations were written.
One implementation was developed to run on routers and the other to run on UNIX workstations. The
latter implementation became a widespread UNIX process known as GATED. OSPFv1 was an
experimental routing protocol and was never deployed.
In 1991, OSPFv2 was introduced in RFC 1247 by John Moy. OSPFv2 offered significant technical
improvements over OSPFv1. It is classless by design; therefore, it supports VLSM and CIDR.
At the same time that OSPF was introduced, the ISO was working on a link-state routing protocol of its own, Intermediate System-to-Intermediate System (IS-IS). The IETF chose OSPF as its recommended
Interior Gateway Protocol (IGP).
In 1998, the OSPFv2 specification was updated in RFC 2328, which remains the current RFC for OSPF.
In 2008, OSPFv3 was updated in RFC 5340 as OSPF for IPv6.
Features of OSPF
• Adjacency database - Creates the neighbor table, a list of the neighboring routers with which the router exchanges routing information.
• Link-state database (LSDB) - Creates the topology table.
• Forwarding database - Creates the routing table, the list of routes generated when the SPF algorithm is run on the link-state database. Each router's routing table is unique and contains information on how and where to send packets to other routers; it can be viewed using the show ip route command.
These tables are kept and maintained in RAM.
Routing Protocol Messages
OSPF exchanges messages to convey routing information using five types of packets.
• Hello packet
• Database description packet
• Link-state request packet
• Link-state update packet
• Link-state acknowledgment packet
These packets are used to discover neighboring routers and also to exchange routing information to
maintain accurate information about the network.
Algorithm
The CPU processes the neighbor and topology tables using Dijkstra's SPF algorithm. The SPF algorithm is
based on the cumulative cost to reach a destination.
The SPF algorithm creates an SPF tree by placing each router at the root of the tree and calculating the
shortest path to each node. The SPF tree is then used to calculate the best routes. OSPF places the best
routes into the forwarding database, which is used to make the routing table.
Link State Operation
To maintain routing information, OSPF routers complete the following generic link-state routing process
to reach a state of convergence:
1. Establish Neighbor Adjacencies - OSPF-enabled routers must recognize each other on the network
before they can share information. An OSPF-enabled router sends Hello packets out all OSPF-enabled
interfaces to determine if neighbors are present on those links. If a neighbor is present, the OSPF-
enabled router attempts to establish a neighbor adjacency with that neighbor.
2. Exchange Link-State Advertisements - After adjacencies are established, routers then exchange link-
state advertisements (LSAs). LSAs contain the state and cost of each directly connected link. Routers
flood their LSAs to adjacent neighbors. Adjacent neighbors receiving the LSA immediately flood the LSA
to other directly connected neighbors, until all routers in the area have all LSAs.
3. Build the Topology Table - After LSAs are received, OSPF-enabled routers build the topology table
(LSDB) based on the received LSAs. This database eventually holds all the information about the
topology of the network.
4. Execute the SPF Algorithm - Routers then execute the SPF algorithm. The SPF algorithm creates the
SPF tree.
From the SPF tree, the best paths are inserted into the routing table. Routing decisions are made based
on the entries in the routing table.
Single Area and Multiarea OSPF
To make OSPF more efficient and scalable, OSPF supports hierarchical routing using areas. An OSPF area
is a group of routers that share the same link-state information in their LSDBs.
OSPF can be implemented in one of two ways:
• Single-Area OSPF - All routers are in one area called the backbone area (area 0).
• Multiarea OSPF - OSPF is implemented using multiple areas, in a hierarchal fashion. All areas
must connect to the backbone area (area 0). Routers interconnecting the areas are referred to
as Area Border Routers (ABR).
With multiarea OSPF, OSPF can divide one large autonomous system (AS) into smaller areas, to support
hierarchical routing. With hierarchical routing, routing still occurs between the areas (interarea routing),
while many of the processor intensive routing operations, such as recalculating the database, are kept
within an area.
For instance, any time a router receives new information about a topology change within the area
(including the addition, deletion, or modification of a link) the router must rerun the SPF algorithm,
create a new SPF tree, and update the routing table. The SPF algorithm is CPU-intensive and the time it
takes for calculation depends on the size of the area.
Note: Topology changes are distributed to routers in other areas in a distance vector format. In other
words, these routers only update their routing tables and do not need to rerun the SPF algorithm.
Too many routers in one area would make the LSDBs very large and increase the load on the CPU.
Therefore, arranging routers into areas effectively partitions a potentially large database into smaller
and more manageable databases.
The hierarchical-topology possibilities of multiarea OSPF have these advantages:
• Smaller routing tables - Fewer routing table entries because network addresses can be
summarized between areas. Route summarization is not enabled by default.
• Reduced link-state update overhead - Minimizes processing and memory requirements.
• Reduced frequency of SPF calculations - Localizes the impact of a topology change within an
area. For instance, it minimizes routing update impact because LSA flooding stops at the area
boundary.
Topic 89: OSPF Messages:
OSPF Message Encapsulation
OSPF messages transmitted over an Ethernet link contain the following information:
Data Link Ethernet Frame Header - Identifies the destination multicast MAC addresses 01-00-5E-00-00-
05 or 01-00-5E-00-00-06.
IP Packet Header - Identifies the IPv4 protocol field 89 which indicates that this is an OSPF packet. It also
identifies one of two OSPF multicast addresses, 224.0.0.5 or 224.0.0.6.
OSPF Packet Header - Identifies the OSPF packet type, the router ID and the area ID.
OSPF Packet Type Specific Data - Contains the OSPF packet type information. The content differs depending on the packet type.
• Build Network
• Configure Devices
• Configure OSPF Routing
• Verify OSPF Routing
• Change Router ID Assignments
• Configure OSPF Passive Interfaces
• Change OSPF Metrics
Topic 93: Packet Tracer – Configure Basic Single Area OSPFv2 - 2:
This is the continuation of the previous topic.
Topic 94: Packet Tracer – Configure Basic Single Area OSPFv2 - 3:
This is the continuation of the previous topic.
Topic 95: Packet Tracer – Configure Basic Single Area OSPFv2 - 4:
This is the continuation of the previous topic.
Topic 96: Packet Tracer – Configuring OSPFv2 in a Single Area:
• Configure OSPFv2 Routing
• Verify Configurations
Topic 97: OSPFv2 Cost:
Recall that a routing protocol uses a metric to determine the best path of a packet across a network. A
metric gives indication of the overhead that is required to send packets across a certain interface. OSPF
uses cost as a metric. A lower cost indicates a better path than a higher cost.
The cost of an interface is inversely proportional to the bandwidth of the interface. Therefore, a higher
bandwidth indicates a lower cost. More overhead and time delays equal a higher cost. Therefore, a 10-
Mb/s Ethernet line has a higher cost than a 100-Mb/s Ethernet line.
The formula used to calculate the OSPF cost is:
Cost = reference bandwidth /interface bandwidth
The default reference bandwidth is 10^8 (100,000,000); therefore, the formula is:
Cost = 100,000,000 bps / interface bandwidth in bps
Refer to the table in the figure below for a breakdown of the cost calculation. Notice that FastEthernet,
Gigabit Ethernet, and 10 GigE interfaces share the same cost, because the OSPF cost value must be an
integer. Consequently, because the default reference bandwidth is set to 100 Mb/s, all links that are
faster than Fast Ethernet also have a cost of 1.
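Applying the formula above to a few common interface bandwidths (OSPF truncates the result to an integer):
• 10-Mb/s Ethernet: 100,000,000 / 10,000,000 = 10
• 100-Mb/s Fast Ethernet: 100,000,000 / 100,000,000 = 1
• 1.544-Mb/s serial (T1): 100,000,000 / 1,544,000 = 64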
The cost of an OSPF route is the accumulated value from one router to the destination network.
All interfaces have default bandwidth values assigned to them. As with reference bandwidth, interface
bandwidth values do not actually affect the speed or capacity of the link. Instead, they are used by OSPF
to compute the routing metric. Therefore, it is important that the bandwidth value reflect the actual
speed of the link so that the routing table has accurate best path information.
Although the bandwidth values of Ethernet interfaces usually match the link speed, some other
interfaces may not. For instance, the actual speed of serial interfaces is often different than the default
bandwidth. On Cisco routers, the default bandwidth on most serial interfaces is set to 1.544 Mb/s.
Use the show interfaces command to view the interface bandwidth setting.
As an alternative to setting the default interface bandwidth, the cost can be manually configured on an
interface using the ip ospf cost value interface configuration command.
An advantage of configuring a cost over setting the interface bandwidth is that the router does not have
to calculate the metric when the cost is manually configured. In contrast, when the interface bandwidth
is configured, the router must calculate the OSPF cost based on the bandwidth. The ip ospf
cost command is useful in multi-vendor environments where non-Cisco routers may use a metric other
than bandwidth to calculate the OSPF costs.
Both the bandwidth interface command and the ip ospf cost interface command achieve the same
result, which is to provide an accurate value for use by OSPF in determining the best route.
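For illustration, either of the following approaches could be applied to a serial interface; the interface name and values are placeholders chosen only to show the two options:
R1(config)# interface serial 0/0/0
! Placeholder interface and values
R1(config-if)# bandwidth 128
! Option 1: set the bandwidth (in kb/s) and let OSPF compute the cost: 100,000,000 / 128,000 = 781
R1(config-if)# ip ospf cost 781
! Option 2: set the OSPF cost directly; this value overrides any bandwidth-based calculation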
Topic 98: Verify OSPFv2:
Figure below shows the reference topology.
Use the show ip ospf neighbor command to verify that the router has formed an adjacency with its
neighboring routers. If the router ID of the neighboring router is not displayed, or if it does not show as
being in a state of FULL, the two routers have not formed an OSPF adjacency.
If two routers do not establish adjacency, link-state information is not exchanged. Incomplete LSDBs can
cause inaccurate SPF trees and routing tables. Routes to destination networks may not exist, or may not
be the most optimum path.
Figure below displays the neighbor adjacency of R1. For each neighbor, this command displays the
following output:
Neighbor ID - The router ID of the neighboring router.
Pri - The OSPF priority of the interface. This value is used in the DR and BDR election.
State - The OSPF state of the interface. FULL state means that the router and its neighbor have identical
OSPF LSDBs. On multiaccess networks, such as Ethernet, two routers that are adjacent may have their
states displayed as 2WAY. The dash indicates that no DR or BDR is required because of the network
type.
Dead Time - The amount of time remaining that the router waits to receive an OSPF Hello packet from
the neighbor before declaring the neighbor down. This value is reset when the interface receives a Hello
packet.
Address - The IPv4 address of the neighbor's interface to which this router is directly connected.
Interface - The interface on which this router has formed adjacency with the neighbor.
Two routers may not form an OSPF adjacency if:
• The subnet masks do not match, causing the routers to be on separate networks.
• OSPF Hello or Dead Timers do not match.
• OSPF Network Types do not match.
• There is a missing or incorrect OSPF network command.
Figure above displays the network topology that is used to configure OSPFv3.
In this topology, none of the routers have IPv4 addresses configured. A network with router interfaces
configured with IPv4 and IPv6 addresses is referred to as dual-stacked. A dual-stacked network can have
OSPFv2 and OSPFv3 simultaneously enabled.
The following sections cover the steps to configure basic OSPFv3 in a single area.
Link Local Addresses
The output of the show ipv6 interface brief command confirms that the correct global IPv6 addresses
have been successfully configured and that the interfaces are enabled.
Link-local addresses are automatically created when an IPv6 global unicast address is assigned to the
interface. Global unicast addresses are not required on an interface; however, IPv6 link-local addresses
are.
Unless configured manually, Cisco routers create the link-local address using FE80::/10 prefix and the
EUI-64 process. EUI-64 involves using the 48-bit Ethernet MAC address, inserting FFFE in the middle and
flipping the seventh bit. For serial interfaces, Cisco uses the MAC address of an Ethernet interface.
Configuring Link Local Addresses on R1
Link-local addresses created using the EUI-64 format or, in some cases, random interface IDs are difficult to recognize and remember. Because IPv6 routing protocols use IPv6 link-local
addresses for unicast addressing and next-hop address information in the routing table, it is common
practice to make it an easily recognizable address.
Configuring the link-local address manually provides the ability to create an address that is recognizable
and easier to remember. As well, a router with several interfaces can assign the same link-local address
to each IPv6 interface. This is because the link-local address is only required for local communications.
Link-local addresses can be configured manually using the same interface command used to create IPv6
global unicast addresses, but appending the link-local keyword to the ipv6 address command.
A link-local address has a prefix within the range FE80 to FEBF. When an address begins with this hextet
(16-bit segment) the link-local keyword must follow the address.
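A brief sketch of manually assigning a link-local address on one interface (the interface choice here is arbitrary):
R1(config)# interface serial 0/0/0
! Placeholder interface
R1(config-if)# ipv6 address fe80::1 link-local
! The same link-local address (fe80::1 here) could be reused on R1's other interfaces, since it only needs to be unique on the local link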
Configuring OSPFv3 Router ID
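The actual values appear in the course figures; as a minimal sketch, assuming process ID 10 and router ID 1.1.1.1 for R1, OSPFv3 is typically enabled as follows:
R1(config)# ipv6 unicast-routing
R1(config)# ipv6 router ospf 10
! Process ID 10 is an assumption, not a value from the figures
R1(config-rtr)# router-id 1.1.1.1
R1(config-rtr)# exit
R1(config)# interface gigabitethernet 0/0
! Placeholder interface; OSPFv3 is enabled per interface rather than with network statements
R1(config-if)# ipv6 ospf 10 area 0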
Topic 101: Verify OSPFv3:
Verify OSPFv3 Neighbors & Protocol Settings
Use the show ipv6 ospf neighbor command to verify that the router has formed an adjacency with its
neighboring routers. If the router ID of the neighboring router is not displayed, or if it does not show as
being in a state of FULL, the two routers have not formed an OSPF adjacency.
If two routers do not establish a neighbor adjacency, link-state information is not exchanged.
Incomplete LSDBs can cause inaccurate SPF trees and routing tables. Routes to destination networks
may not exist or may not be the most optimum path.
Figure above displays the neighbor adjacency of R1. For each neighbor, this command displays the
following output:
• Neighbor ID - The router ID of the neighboring router.
• Pri - The OSPF priority of the interface. Value is used in the DR and BDR election.
• State - The OSPF state of the interface. FULL state means that the router and its neighbor have
identical OSPF LSDBs. On multiaccess networks such as Ethernet, two routers that are adjacent
may have their states displayed as 2WAY. The dash indicates that no DR or BDR is required
because of the network type.
• Dead Time - The amount of time remaining that the router waits to receive an OSPF Hello
packet from the neighbor before declaring the neighbor down. This value is reset when the
interface receives a Hello packet.
• Interface ID - The interface ID or link ID.
• Interface - The interface on which this router has formed adjacency with the neighbor.
As shown in Figure above, the show ipv6 protocols command is a quick way to verify vital OSPFv3
configuration information, including the OSPF process ID, the router ID, and the interfaces enabled for
OSPFv3.
Topic 102: Packet Tracer – Configuring Basic OSPFv3:
• Configure OSPFv3 Routing
• Verify Connectivity
Topic 103: Multiarea OSPF:
When a large OSPF area is divided into smaller areas, this is called multiarea OSPF. Multiarea OSPF is
useful in larger network deployments to reduce processing and memory overhead.
For instance, any time a router receives new information about the topology, as with additions,
deletions, or modifications of a link, the router must rerun the SPF algorithm, create a new SPF tree, and
update the routing table. The SPF algorithm is CPU-intensive and the time it takes for calculation
depends on the size of the area. Too many routers in one area make the LSDB larger and increase the
load on the CPU. Therefore, arranging routers into areas effectively partitions one potentially large
database into smaller and more manageable databases.
Multiarea OSPF requires a hierarchical network design. The main area is called the backbone area (area
0) and all other areas must connect to the backbone area. With hierarchical routing, routing still occurs
between the areas (interarea routing); while many of the tedious routing operations, such as
recalculating the database, are kept within an area.
The hierarchical-topology possibilities of multiarea OSPF have these advantages:
Smaller routing tables - There are fewer routing table entries as network addresses can be summarized
between areas. For example, in the figure below, R1 summarizes the routes from area 1 to area 0 and R2
summarizes the routes from area 51 to area 0. R1 and R2 also propagate a default static route to area 1
and area 51.
Reduced link-state update overhead - Minimizes processing and memory requirements, because there
are fewer routers exchanging LSAs.
Reduced frequency of SPF calculations - Localizes impact of a topology change within an area. For
instance, it minimizes routing update impact, because LSA flooding stops at the area boundary.
In Figure below, assume a link fails between two internal routers in area 51. Only the routers in area 51
exchange LSAs and rerun the SPF algorithm for this event. R1 does not receive LSAs from area 51 and
does not recalculate the SPF algorithm.
OSPF Two-Layer Area Hierarchy
Multiarea OSPF is implemented in a two-layer area hierarchy:
Backbone (Transit) area - An OSPF area whose primary function is the fast and efficient movement of IP
packets. Backbone areas interconnect with other OSPF area types. Generally, end users are not found
within a backbone area. The backbone area is also called OSPF area 0. Hierarchical networking defines
area 0 as the core to which all other areas directly connect.
Regular (non-backbone) area - Connects users and resources. Regular areas are usually set up along
functional or geographical groupings. By default, a regular area does not allow traffic from another area
to use its links to reach other areas. All traffic from other areas must cross a transit area.
OSPF enforces this rigid two-layer area hierarchy. The underlying physical connectivity of the network
must map to the two-layer area structure, with all non-backbone areas attaching directly to area 0. All
traffic moving from one area to another area must traverse the backbone area. This traffic is referred to
as interarea traffic.
The optimal number of routers per area varies based on factors such as network stability, but Cisco
recommends the following guidelines:
• An area should have no more than 50 routers.
• A router should not be in more than three areas.
• Any single router should not have more than 60 neighbors.
Types of OSPF Routers
OSPF routers of different types control the traffic that goes in and out of areas. The OSPF routers are
categorized based on the function they perform in the routing domain.
There are four different types of OSPF routers:
• Internal router This is a router that has all of its interfaces in the same area. All internal routers
in an area have identical LSDBs.
• Backbone router This is a router in the backbone area. Generally, the backbone area is set to
area 0.
• Area Border Router (ABR) This is a router that has interfaces attached to multiple areas. It must
maintain separate LSDBs for each area it is connected to, and can route between areas. ABRs
are exit points for the area, which means that routing information destined for another area can
get there only via the ABR of the local area. ABRs can be configured to summarize the routing
information from the LSDBs of their attached areas. ABRs distribute the routing information into
the backbone. The backbone routers then forward the information to the other ABRs. In a
multiarea network, an area can have one or more ABRs.
• Autonomous System Boundary Router (ASBR) This is a router that has at least one interface
attached to an external internetwork (another autonomous system), such as a non-OSPF
network. An ASBR can import non-OSPF network information to the OSPF network, and vice
versa, using a process called route redistribution.
Redistribution in multiarea OSPF occurs when an ASBR connects different routing domains (e.g., EIGRP
and OSPF) and configures them to exchange and advertise routing information between those routing
domains.
A router can be classified as more than one router type. For example, if a router connects to area 0 and
area 1, and in addition, maintains routing information for another, non-OSPF network, it falls under
three different classifications: a backbone router, an ABR, and an ASBR.
Topic 104: Multiarea OSPF LSA Operation:
OSPF LSA Type
LSAs are the building blocks of the OSPF LSDB. Individually, they act as database records and provide
specific OSPF network details. In combination, they describe the entire topology of an OSPF network or
area.
The RFCs for OSPF currently specify up to 11 different LSA types. However, any implementation of
multiarea OSPF must support the first five LSAs: LSA 1 to LSA 5. The focus of this topic is on these first
five LSAs.
Each router link is defined as an LSA type. The LSA includes a link ID field that identifies, by network
number and mask, the object to which the link connects. Depending on the type, the link ID has
different meanings. LSAs differ on how they are generated and propagated within the routing domain.
Note: OSPFv3 includes additional LSA types.
As shown in the figure above, all routers advertise their directly connected OSPF-enabled links in a type
1 LSA and forward their network information to OSPF neighbors. The LSA contains a list of the directly
connected interfaces, link types, and link states.
• Type 1 LSAs are also referred to as the router link entries.
• Type 1 LSAs are flooded only within the area in which they originated. ABRs subsequently
advertise the networks learned from the type 1 LSAs to other areas as type 3 LSAs.
• The type 1 LSA link ID is identified by the router ID of the originating router.
OSPF LSA Type 2
A type 2 LSA only exists for multiaccess and non-broadcast multiaccess (NBMA) networks where there is
a DR elected and at least two routers on the multiaccess segment. The type 2 LSA contains the router ID
and IP address of the DR, along with the router ID of all other routers on the multiaccess segment. A
type 2 LSA is created for every multiaccess network in the area.
The purpose of a type 2 LSA is to give other routers information about multiaccess networks within
the same area.
The DR floods type 2 LSAs only within the area in which they originated. Type 2 LSAs are not forwarded
outside of an area.
Type 2 LSAs are also referred to as the network link entries.
As shown in the figure above, ABR1 is the DR for the Ethernet network in area 1. It generates the type 2
LSA and forwards it into area 1. ABR2 is the DR for the multiaccess network in area 0. There are no
multiaccess networks in area 2 and therefore, no type 2 LSAs are ever propagated in that area.
The link-state ID for a network LSA is the IP interface address of the DR that advertises it.
OSPF LSA Type 3
Type 3 LSAs are used by ABRs to advertise networks from other areas. ABRs collect type 1 LSAs in the
LSDB. After an OSPF area has converged, the ABR creates a type 3 LSA for each of its learned OSPF
networks. Therefore, an ABR with many OSPF routes must create type 3 LSAs for each network.
As shown in the figure above, ABR1 and ABR2 flood type 3 LSAs from one area to other areas. The ABRs
propagate the type 3 LSAs into other areas. In a large OSPF deployment with many networks,
propagating type 3 LSAs can cause significant flooding problems. For this reason, it is strongly
recommended that manual route summarization be configured on the ABR.
The link-state ID is set to the network number and the mask is also advertised.
Receiving a type 3 LSA into its area does not cause a router to run the SPF algorithm. The routes being
advertised in the type 3 LSAs are appropriately added to or deleted from the router's routing table, but a
full SPF calculation is not necessary.
OSPF LSA Type 4
Type 4 and type 5 LSAs are used collectively to identify an ASBR and advertise external networks into an
OSPF routing domain.
A type 4 summary LSA is generated by an ABR only when an ASBR exists within an area. A type 4 LSA
identifies the ASBR and provides a route to it. All traffic destined to an external autonomous system
requires routing table knowledge of the ASBR that originated the external routes.
As shown in the figure above, the ASBR sends a type 1 LSA, identifying itself as an ASBR. The LSA
includes a special bit known as the external bit (e bit) that is used to identify the router as an ASBR.
When ABR1 receives the type 1 LSA, it notices the e bit, it builds a type 4 LSA, and then floods the type 4
LSA to the backbone (area 0). Subsequent ABRs flood the type 4 LSA into other areas.
The link-state ID is set to the ASBR router ID.
OSPF LSA Type 5
Type 5 external LSAs describe routes to networks outside the OSPF autonomous system. Type 5 LSAs are
originated by the ASBR and are flooded to the entire autonomous system.
Type 5 LSAs are also referred to as autonomous system external LSA entries.
In the figure above, the ASBR generates type 5 LSAs for each of its external routes and floods it into the
area. Subsequent ABRs also flood the type 5 LSA into other areas. Routers in other areas use the
information from the type 4 LSA to reach the external routes.
In a large OSPF deployment with many networks, propagating multiple type 5 LSAs can cause significant
flooding problems. For this reason, it is strongly recommended that manual route summarization be
configured on the ASBR.
The link-state ID is the external network number.
Topic 105: OSPF Routing Table and Types of Routes:
OSPF Routing Table Entries
Figure above provides a sample routing table for a multiarea OSPF topology with a link to an external
non-OSPF network. OSPF routes in an IPv4 routing table are identified using the following descriptors:
• O - Router (type 1) and network (type 2) LSAs describe the details within an area. The routing table reflects this link-state information with a designation of O, meaning that the route is intra-area.
• O IA - When an ABR receives summary LSAs, it adds them to its LSDB and regenerates them into the local area. When an ABR receives external LSAs, it adds them to its LSDB and floods them into the area. The internal routers then assimilate the information into their databases. Summary LSAs appear in the routing table as O IA (interarea routes).
• O E1 or O E2 - External LSAs appear in the routing table marked as external type 1 (E1) or external type 2 (E2) routes.
OSPF Route Calculation
Each router uses the SPF algorithm against the LSDB to build the SPF tree. The SPF tree is used to
determine the best paths.
The order in which the best paths are calculated is as follows:
1. All routers calculate the best paths to destinations within their area (intra-area) and add these entries
to the routing table. These are the type 1 and type 2 LSAs, which are noted in the routing table with a
routing designator of O.
2. All routers calculate the best paths to the other areas within the internetwork. These best paths are
the interarea route entries, or type 3 and type 4 LSAs, and are noted with a routing designator of O IA.
3. All routers (except those that are in a form of stub area) calculate the best paths to the external
autonomous system (type 5) destinations. These are noted with either an O E1 or an O E2 route
designator, depending on the configuration.
When converged, a router can communicate with any network within or outside the OSPF autonomous
system.
Topic 106: Configuring Multiarea OSPF:
Implementing Multiarea OSPF
OSPF can be implemented as single-area or multiarea. The type of OSPF implementation chosen
depends on the specific requirements and existing topology.
There are 4 steps to implementing multiarea OSPF.
Steps 1 and 2 are part of the planning process.
Step 1. Gather the network requirements and parameters - This includes determining the number of
host and network devices, the IP addressing scheme (if already implemented), the size of the routing
domain, the size of the routing tables, the risk of topology changes, and other network characteristics.
Step 2. Define the OSPF parameters - Based on information gathered during Step 1, the network
administrator must determine if single-area or multiarea OSPF is the preferred implementation. If
multiarea OSPF is selected, there are several considerations the network administrator must take into
account while determining the OSPF parameters, to include:
• IP addressing plan - This governs how OSPF can be deployed and how well the OSPF
deployment might scale. A detailed IP addressing plan, along with the IP subnetting information,
must be created. A good IP addressing plan should enable the usage of OSPF multiarea design
and summarization. This plan more easily scales the network, as well as optimizes OSPF
behavior and the propagation of LSAs.
• OSPF areas - Dividing an OSPF network into areas decreases the LSDB size and limits the
propagation of link-state updates when the topology changes. The routers that are to be ABRs
and ASBRs must be identified, as well as those that are to perform any summarization or
redistribution.
• Network topology - This consists of links that connect the network equipment and belong to
different OSPF areas in a multiarea OSPF design. Network topology is important to determine
primary and backup links. Primary and backup links are defined by the changing OSPF cost on
interfaces. A detailed network topology plan should also be used to determine the different
OSPF areas, ABR, and ASBR as well as summarization and redistribution points, if multiarea OSPF
is used.
Step 3. Configure the multiarea OSPF implementation based on the parameters.
Step 4. Verify the multiarea OSPF implementation based on the parameters.
Configuring Multiarea OSPF
Figure above displays the reference multiarea OSPF topology. In this example:
• R1 is an ABR because it has interfaces in area 1 and an interface in area 0.
• R2 is an internal backbone router because all of its interfaces are in area 0.
• R3 is an ABR because it has interfaces in area 2 and an interface in area 0.
There are no special commands required to implement this multiarea OSPF network. A router simply
becomes an ABR when it has two network statements in different areas.
R1 is assigned the router ID 1.1.1.1. This example enables OSPF on the two LAN interfaces in area 1. The
serial interface is configured as part of OSPF area 0. Because R1 has interfaces connected to two different areas, it is an ABR.
Upon completion of the R2 and R3 configuration, notice the informational messages informing of the
adjacencies with R1 (1.1.1.1).
Upon completion of the R3 configuration, notice the informational messages informing of an adjacency
with R1 (1.1.1.1) and R2 (2.2.2.2).
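Putting the pieces together for R1, the configuration generally looks like the following sketch. The two area 1 networks come from the summarization example that follows; the process ID and the area 0 serial subnet are assumptions, not values from the figures:
R1(config)# router ospf 10
! Process ID 10 is an assumption
R1(config-router)# router-id 1.1.1.1
R1(config-router)# network 10.1.1.0 0.0.0.255 area 1
R1(config-router)# network 10.1.2.0 0.0.0.255 area 1
R1(config-router)# network 192.168.10.0 0.0.0.3 area 0
! The area 0 subnet above is a placeholder; the course figure defines the actual serial subnet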
The figure above illustrates that summarizing networks into a single address and mask can be done in
three steps:
Step 1. List the networks in binary format. In the example the two area 1 networks 10.1.1.0/24 and
10.1.2.0/24 are listed in binary format.
Step 2. Count the number of far left matching bits to determine the mask for the summary route. As
highlighted, the first 22 far left bits match. This results in the prefix /22 or subnet mask 255.255.252.0.
Step 3. Copy the matching bits and then add zero bits to determine the summarized network address. In
this example, the matching bits with zeros at the end result in a network address of 10.1.0.0/22. This
summary address summarizes four networks: 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24.
In the example the summary address matches four networks although only two networks exist.
Summarizing Area 1 Routes on R1
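On Cisco IOS, interarea summarization is configured on the ABR with the area range command. Assuming the same OSPF process ID 10 used in the earlier sketches:
R1(config)# router ospf 10
! Process ID 10 is an assumption
R1(config-router)# area 1 range 10.1.0.0 255.255.252.0
! R1 now advertises the single 10.1.0.0/22 summary into area 0 instead of the individual area 1 networks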
Packet Filtering
So how does an ACL use the information passed during a TCP/IP conversation to filter traffic?
Packet filtering, sometimes called static packet filtering, controls access to a network by analyzing the
incoming and outgoing packets and passing or dropping them based on given criteria, such as the source
IP address, destination IP addresses, and the protocol carried within the packet.
A router acts as a packet filter when it forwards or denies packets according to filtering rules. When a
packet arrives at the packet-filtering router, the router extracts certain information from the packet
header. Using this information, the router makes decisions, based on configured filter rules, as to
whether the packet can pass through or be discarded.
A packet-filtering router uses rules to determine whether to permit or deny traffic. A router can also
perform packet filtering at Layer 4, the transport layer. The router can filter packets based on the source
port and destination port of the TCP or UDP segment. These rules are defined using ACLs.
An ACL is a sequential list of permit or deny statements, known as access control entries (ACEs). ACEs
are also commonly called ACL statements. ACEs can be created to filter traffic based on certain criteria
such as: the source address, destination address, the protocol, and port numbers. When network traffic
passes through an interface configured with an ACL, the router compares the information within the
packet against each ACE, in sequential order, to determine if the packet matches one of the statements.
If a match is found, the packet is processed accordingly. In this way, ACLs can be configured to control
access to a network or subnet.
To evaluate network traffic, the ACL extracts the following information from the Layer 3 packet header:
• Source IP address
• Destination IP address
• ICMP message type
The ACL can also extract upper layer information from the Layer 4 header, including:
• TCP/UDP source port
• TCP/UDP destination port
ACL Operation
ACLs define the set of rules that give added control for packets that enter inbound interfaces, packets
that relay through the router, and packets that exit outbound interfaces of the router. ACLs do not act
on packets that originate from the router itself.
ACLs are configured to apply to inbound traffic or to apply to outbound traffic.
Inbound ACLs - Incoming packets are processed before they are routed to the outbound interface. An
inbound ACL is efficient because it saves the overhead of routing lookups if the packet is discarded. If
the packet is permitted by the tests, it is then processed for routing. Inbound ACLs are best used to filter
packets when the network attached to an inbound interface is the only source of the packets needed to
be examined.
Outbound ACLs - Incoming packets are routed to the outbound interface, and then they are processed
through the outbound ACL. Outbound ACLs are best used when the same filter will be applied to packets
coming from multiple inbound interfaces before exiting the same outbound interface.
The last statement of an ACL is always an implicit deny. This statement is automatically inserted at the
end of each ACL even though it is not physically present. The implicit deny blocks all traffic. Because of
this implicit deny, an ACL that does not have at least one permit statement will block all traffic.
Topic 113: Standard vs. Extended IPv4 ACL:
The two types of Cisco IPv4 ACLs are standard and extended.
Standard ACLs
Standard ACLs can be used to permit or deny traffic only from source IPv4 addresses. The destination of
the packet and the ports involved are not evaluated.
Extended ACLs
Extended ACLs filter IPv4 packets based on several attributes:
• Protocol type
• Source IPv4 address
• Destination IPv4 address
• Source TCP or UDP ports
• Destination TCP or UDP ports
• Optional protocol type information for finer control
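A short hypothetical example of an extended ACL (the network, port, and interface values are illustrative only): permit web traffic from the 192.168.10.0/24 LAN and rely on the implicit deny for everything else:
R1(config)# access-list 103 permit tcp 192.168.10.0 0.0.0.255 any eq 80
! Placeholder network and port; 103 falls in the extended ACL number range
R1(config)# interface gigabitethernet 0/0
R1(config-if)# ip access-group 103 in
! The ACL is applied inbound on the (placeholder) LAN-facing interface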
Numbered and Named ACLs
Standard and extended ACLs can be created using either a number or a name to identify the ACL and its
list of statements.
Using numbered ACLs is an effective method for determining the ACL type on smaller networks with more homogeneously defined traffic. However, a number does not provide information about the purpose of the ACL. For this reason, a name can be used to identify an ACL and make its purpose clear.
Wildcard Masking
IPv4 ACEs include the use of wildcard masks. A wildcard mask is a string of 32 binary digits used by the
router to determine which bits of the address to examine for a match.
As with subnet masks, the numbers 1 and 0 in the wildcard mask identify how to treat the
corresponding IP address bits. However, in a wildcard mask, these bits are used for different purposes
and follow different rules.
• Subnet masks use binary 1s and 0s to identify the network, subnet, and host portion of an IP
address. Wildcard masks use binary 1s and 0s to filter individual IP addresses or groups of IP
addresses to permit or deny access to resources.
• Wildcard masks and subnet masks differ in the way they match binary 1s and 0s. Wildcard masks
use the following rules to match binary 1s and 0s:
• Wildcard mask bit 0 - Match the corresponding bit value in the address.
• Wildcard mask bit 1 - Ignore the corresponding bit value in the address.
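For example, to match every host in a /24 network, the wildcard mask 0.0.0.255 is used: the first 24 bits (wildcard 0s) must match the network portion, and the last 8 bits (wildcard 1s) are ignored. A common shortcut for calculating a wildcard mask is to subtract the subnet mask from 255.255.255.255:
255.255.255.255
- 255.255.255.0 (subnet mask of a /24 network)
= 0.0.0.255 (resulting wildcard mask)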
Wildcard Bit Mask Keywords
Working with decimal representations of binary wildcard mask bits can be tedious. To simplify this task,
the keywords host and any help identify the most common uses of wildcard masking. These keywords
eliminate entering wildcard masks when identifying a specific host or an entire network. These
keywords also make it easier to read an ACL by providing visual clues as to the source or destination of
the criteria.
The host keyword substitutes for the 0.0.0.0 wildcard mask. This mask requires all IPv4 address bits to
match, so it matches exactly one host.
The any keyword substitutes for the IPv4 address and the 255.255.255.255 wildcard mask. This mask
ignores the entire IPv4 address, so it matches any address.
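For example (the addresses and ACL numbers are illustrative), the following pairs of statements are equivalent:
R1(config)# access-list 10 permit 192.168.10.10 0.0.0.0
R1(config)# access-list 10 permit host 192.168.10.10
R1(config)# access-list 11 permit 0.0.0.0 255.255.255.255
R1(config)# access-list 11 permit any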
Topic 114: ACL Creation and Placement:
Writing ACLs can be a complex task. For every interface there may be multiple policies needed to
manage the type of traffic allowed to enter or exit that interface.
Here are some guidelines for using ACLs:
• Use ACLs in firewall routers positioned between your internal network and an external network
such as the Internet.
• Use ACLs on a router positioned between two parts of your network to control traffic entering
or exiting a specific part of your internal network.
• Configure ACLs on border routers, that is, routers situated at the edges of your networks. This
provides a very basic buffer from the outside network, or between a less controlled area of your
own network and a more sensitive area of your network.
• Configure ACLs for each network protocol configured on the border router interfaces.
The Three Ps
A general rule for applying ACLs on a router can be recalled by remembering the three Ps. You can
configure one ACL per protocol, per direction, per interface:
One ACL per protocol - To control traffic flow on an interface, an ACL must be defined for each protocol
enabled on the interface.
One ACL per direction - ACLs control traffic in one direction at a time on an interface. Two separate ACLs
must be created to control inbound and outbound traffic.
One ACL per interface - ACLs control traffic for an interface, for example, GigabitEthernet 0/0.
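As a brief illustration of the three Ps (the ACL numbers and interface are hypothetical), two IPv4 ACLs can be active on the same interface at the same time as long as they filter different directions:
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group 101 in
R1(config-if)# ip access-group 102 out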
Where to Place ACLs
The proper placement of an ACL can make the network operate more efficiently. An ACL can be placed
to reduce unnecessary traffic. For example, traffic that will be denied at a remote destination should not
be forwarded using network resources along the route to that destination.
Every ACL should be placed where it has the greatest impact on efficiency. The basic rules are:
Extended ACLs - Locate extended ACLs as close as possible to the source of the traffic to be filtered. This
way, undesirable traffic is denied close to the source network without crossing the network
infrastructure.
Standard ACLs - Because standard ACLs do not specify destination addresses, place them as close to the
destination as possible. Placing a standard ACL at the source of the traffic will effectively prevent that
traffic from reaching any other networks through the interface where the ACL is applied.
Placement of the ACL and therefore the type of ACL used may also depend on:
The extent of the network administrator's control - Placement of the ACL can depend on whether or
not the network administrator has control of both the source and destination networks.
Bandwidth of the networks involved - Filtering unwanted traffic at the source prevents transmission of
the traffic before it consumes bandwidth on the path to a destination. This is especially important in low
bandwidth networks.
Ease of configuration - If a network administrator wants to deny traffic coming from several networks,
one option is to use a single standard ACL on the router closest to the destination. The disadvantage is
that traffic from these networks will use bandwidth unnecessarily. An extended ACL could be used on
each router where the traffic originated. This will save bandwidth by filtering the traffic at the source
but requires creating extended ACLs on multiple routers.
Standard ACL Placement
A standard ACL can only filter traffic based on a source address. The basic rule for placement of a
standard ACL is to place the ACL as close as possible to the destination network. This allows the traffic to
reach all other networks except the network where the packets will be filtered.
Extended ACL Placement
Like a standard ACL, an extended ACL can filter traffic based on the source address. However, an
extended ACL can also filter traffic based on the destination address, protocol, and port number. This
allows network administrators more flexibility in the type of traffic that can be filtered and where to
place the ACL. The basic rule for placing an extended ACL is to place it as close to the source as possible.
This prevents unwanted traffic from being sent across multiple networks only to be denied when it
reaches its destination. Network administrators can only place ACLs on devices that they control.
Therefore, placement must be determined in the context of where the control of the network
administrator extends.
Topic 115: Configure Standard IPv4 ACLs:
How are ACLs Created?
In Two Steps
1. Create an ACL definition.
– Enter global configuration mode.
– Define statements of what to filter.
2. Apply the ACL to an interface.
– Enter interface configuration mode.
– Identify the ACL and the direction to filter.
Create a Standard ACL
A standard ACL is first defined with the access-list command in global configuration mode and is then
activated on an interface with the ip access-group command; for example, ip access-group 5 out applies
ACL 5 to traffic leaving an interface.
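As a minimal two-step sketch (the ACL number, network, and interface are illustrative):
R1(config)# access-list 5 permit 192.168.10.0 0.0.0.255
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group 5 out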
Implied Deny
When traffic enters the router, the traffic is compared to all ACEs in the order that the entries occur in
the ACL. The router continues to process the ACEs until it finds a match. The router will process the
packet based on the first match found and no other ACEs will be examined.
If no matches are found when the router reaches the end of the list, the traffic is denied. This is because,
by default, there is an implied deny at the end of all ACLs for traffic that was not matched to a
configured entry. A single-entry ACL with only one deny entry has the effect of denying all traffic. At
least one permit ACE must be configured in an ACL or all traffic is blocked.
Configuring Standard ACLs
To use numbered standard ACLs on a Cisco router, you must first create the standard ACL and then
activate the ACL on an interface.
The access-list global configuration command defines a standard ACL with a number in the range of 1
through 99. Cisco IOS Software Release 12.0.1 extended these numbers by allowing 1300 to 1999 to be
used for standard ACLs. This allows for a maximum of 799 possible standard ACLs. These additional
numbers are referred to as expanded IP ACLs.
The full syntax of the standard ACL command is as follows:
Router(config)# access-list access-list-number { deny | permit | remark } source [ source-wildcard ] [ log ]
ACEs can deny or permit an individual host or a range of host addresses. To create a host statement in
numbered ACL 10 that permits a specific host with the IP address 192.168.10.10, you would enter:
R1(config)# access-list 10 permit host 192.168.10.10
To create a statement in numbered ACL 10 that permits all IPv4 addresses in the network
192.168.10.0/24, you would enter:
R1(config)# access-list 10 permit 192.168.10.0 0.0.0.255
To remove the ACL, the no access-list global configuration command is used. Issuing the show
access-lists command confirms that access list 10 has been removed.
Typically, when an administrator creates an ACL, the purpose of each statement is known and
understood. However, to ensure that the administrator and others recall the purpose of a statement,
remarks should be included. The remark keyword is used for documentation and makes access lists a
great deal easier to understand. Each remark is limited to 100 characters.
Internal Logic – Order Matters
Cisco IOS applies an internal logic when accepting and processing standard ACEs. ACEs are processed
sequentially. Therefore, the order in which ACEs are entered is important.
Topic 116: Configure Standard IPv4 ACLs - 2:
Applying Standard ACLs to Interfaces
Standard ACL Configuration Procedures
After a standard ACL is configured, it is linked to an interface using the ip access-group command in
interface configuration mode:
Router(config-if)# ip access-group { access-list-number |access-list-name } { in | out }
To remove an ACL from an interface, first enter the no ip access-group command on the interface, and
then enter the global no access-list command to remove the entire ACL.
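For example (the ACL number and interface are illustrative), an ACL is applied, later removed from the interface, and finally deleted entirely:
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group 10 in
! Later, to remove the ACL from the interface and then delete it:
R1(config-if)# no ip access-group 10 in
R1(config-if)# exit
R1(config)# no access-list 10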
Creating a Named ACL
Naming an ACL makes it easier to understand its function. For example, an ACL configured to deny FTP
could be called NO_FTP. When you identify your ACL with a name instead of with a number, the
configuration mode and command syntax are slightly different.
The steps required to create a standard named ACL.
Step 1. Starting from the global configuration mode, use the ip access-list command to create a named
ACL. ACL names are alphanumeric, case sensitive, and must be unique. The ip access-list
standard name is used to create a standard named ACL, whereas the command ip access-list
extended name is for an extended access list. After entering the command, the router is in named
standard ACL configuration mode as indicated by the prompt.
Note: Numbered ACLs use the global configuration command access-list whereas named IPv4 ACLs use
the ip access-list command.
Step 2. From the named ACL configuration mode, use permit or deny statements to specify one or more
conditions for determining whether a packet is forwarded or dropped.
Step 3. Apply the ACL to an interface using the ip access-group command. Specify if the ACL should be
applied to packets as they enter into the interface (in) or applied to packets as they exit the interface
(out).
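A named standard ACL sketch following these steps (the name, network, and interface are hypothetical):
R1(config)# ip access-list standard PERMIT-LAN
R1(config-std-nacl)# permit 192.168.10.0 0.0.0.255
R1(config-std-nacl)# exit
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group PERMIT-LAN in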
Commenting ACLs
You can use the remark keyword to include comments (remarks) about entries in any IP standard or
extended ACL. The remarks make the ACL easier for you to understand and scan. Each remark line is
limited to 100 characters.
The remark can go before or after a permit or deny statement. You should be consistent about where
you put the remark so that it is clear which remark describes which permit or deny statement. For
example, it would be confusing to have some remarks before the associated permit or deny statements
and some remarks after the statements.
To include a comment for IPv4 numbered standard or extended ACLs, use the access-list
access-list-number remark remark global configuration command. To remove the remark, use the no
form of this command.
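For example (the ACL identifiers, remark text, and addresses are hypothetical):
R1(config)# access-list 10 remark Permit hosts from the 192.168.10.0/24 LAN
R1(config)# access-list 10 permit 192.168.10.0 0.0.0.255
In a named ACL, the remark is entered in named ACL configuration mode:
R1(config)# ip access-list standard NO_ACCESS
R1(config-std-nacl)# remark Block the lab workstation
R1(config-std-nacl)# deny host 192.168.11.10
R1(config-std-nacl)# permit any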
Topic 117: Packet Tracer – Configuring Standard ACLs:
• Plan an ACL Implementation
• Configure, Apply and Verify a Standard ACL
Topic 118: Modify IPv4 ACLs:
Editing Numbered ACLs
When configuring a standard ACL, the statements are added to the running-config. However, there is no
built-in editing feature that allows you to modify an ACL in place.
There are two ways that a standard numbered ACL can be edited.
Method 1: Using a Text Editor
After someone is familiar with creating and editing ACLs, it may be easier to construct the ACL using a
text editor such as Microsoft Notepad. This allows you to create or edit the ACL and then paste it into
the router. For an existing ACL, you can use the show running-config command to display the ACL, copy
and paste it into the text editor, make the necessary changes, and paste it back in.
Configuration: Here are the steps to edit and correct ACL 1:
Step 1. Display the ACL using the show running-config command.
Step 2. Highlight the ACL, copy it, and then paste it into Microsoft Notepad. Edit the list as required.
After the ACL is correctly displayed in Microsoft Notepad, highlight it and copy it.
Step 3. In global configuration mode, remove the access list using the no access-list 1 command.
Otherwise, the new statements would be appended to the existing ACL. Then paste the new ACL into
the configuration of the router.
Step 4. Using the show running-config command, verify the changes
Method 2: Using the Sequence Number
The initial configuration of ACL 1 included a host statement for host 192.168.10.99. This was in error.
The host should have been configured as 192.168.10.10. To edit the ACL using sequence numbers follow
these steps:
Step 1. Display the current ACL using the show access-lists 1 command. The output from this command
will be discussed in more detail later in this section. The sequence number is displayed at the beginning
of each statement. The sequence number was automatically assigned when the access list statement
was entered. Notice that the misconfigured statement has the sequence number 10.
Step 2. Enter the ip access-list standard command that is used to configure named ACLs. The ACL
number, 1, is used as the name. First the misconfigured statement needs to be deleted using the no
10 command with 10 referring to the sequence number. Next, a new sequence number 10 statement is
added using the command, 10 deny host 192.168.10.10.
Note: Statements cannot be overwritten using the same sequence number as an existing statement. The
current statement must be deleted first, and then the new one can be added.
Step 3. Verify the changes using the show access-lists command.
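Putting the three steps together (the ACL number and host address follow the scenario described above):
R1# show access-lists 1
R1# configure terminal
R1(config)# ip access-list standard 1
R1(config-std-nacl)# no 10
R1(config-std-nacl)# 10 deny host 192.168.10.10
R1(config-std-nacl)# end
R1# show access-lists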
Verifying ACLs
The show ip interface command is used to verify the ACL on the interface. The output from this
command includes the number or name of the access list and the direction in which the ACL was
applied.
Viewing ACLs Statistics
Once the ACL has been applied to an interface and some testing has occurred, the show access-
lists command will show statistics for each statement that has been matched.
During testing of an ACL, the counters can be cleared using the clear access-list counters command. This
command can be used alone or with the number or name of a specific ACL.
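A quick verification sketch (the interface and ACL number are illustrative):
R1# show ip interface GigabitEthernet0/0
R1# show access-lists 1
R1# clear access-list counters 1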
Topic 119: Extended IPv4 ACLs:
Testing Packets with Extended ACLs
For more precise traffic-filtering control, extended IPv4 ACLs can be created. Extended ACLs are
numbered 100 to 199 and 2000 to 2699, providing a total of 800 possible extended numbered ACLs.
Extended ACLs can also be named.
Extended ACLs are used more often than standard ACLs because they provide a greater degree of
control.
Topic 120: Extended IPv4 ACLs - 2:
Deny Telnet and Permit Everything Else
In this example, the network administrator configured an ACL to allow users from the 192.168.10.0/24
network to browse both insecure and secure websites. Even though it has been configured, the ACL will
not filter traffic until it is applied to an interface. To apply an ACL to an interface, first consider whether
the traffic to be filtered is going in or out. When a user on the internal LAN accesses a website on the
Internet, traffic is traffic going out to the Internet. When an internal user receives an email from the
Internet, traffic is coming into the local router. However, when applying an ACL to an interface, in and
out take on different meanings. From an ACL consideration, in and out are in reference to the router
interface.
In the topology in the figure above, R1 has three interfaces. It has a serial interface, S0/0/0, and two
Gigabit Ethernet interfaces, G0/0 and G0/1. Recall that an extended ACL should typically be applied
close to the source. In this topology the interface closest to the source of the target traffic is the G0/0
interface.
Web request traffic from users on the 192.168.10.0/24 LAN is inbound to the G0/0 interface. Return
traffic from established connections to users on the LAN is outbound from the G0/0 interface. The
example applies the ACL to the G0/0 interface in both directions. The inbound ACL, 103, checks for the
type of traffic. The outbound ACL, 104, checks for return traffic from established connections. This will
restrict 192.168.10.0 Internet access to allow only website browsing.
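A sketch consistent with the description above (the ACL numbers, network, and interface follow the text; the individual statements are illustrative). ACL 103 permits HTTP (www, port 80) and HTTPS (port 443) requests from the LAN, and ACL 104 permits only return traffic from established TCP sessions:
R1(config)# access-list 103 permit tcp 192.168.10.0 0.0.0.255 any eq www
R1(config)# access-list 103 permit tcp 192.168.10.0 0.0.0.255 any eq 443
R1(config)# access-list 104 permit tcp any 192.168.10.0 0.0.0.255 established
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group 103 in
R1(config-if)# ip access-group 104 out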
Creating Named Extended ACLs
Named extended ACLs are created in essentially the same way that named standard ACLs are created.
Follow these steps to create an extended ACL, using names:
Step 1. From global configuration mode, use the ip access-list extended name command to define a
name for the extended ACL.
Step 2. In named ACL configuration mode, specify the conditions to permit or deny.
Step 3. Return to privileged EXEC mode and verify the ACL with the show access-lists name command.
Step 4. Save the entries in the configuration file with the copy running-config startup-config command.
To remove a named extended ACL, use the no ip access-list extended name global configuration
command.
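For example, the NO_FTP policy mentioned earlier could be sketched as follows (the network and interface are illustrative):
R1(config)# ip access-list extended NO_FTP
R1(config-ext-nacl)# deny tcp 192.168.10.0 0.0.0.255 any eq ftp
R1(config-ext-nacl)# deny tcp 192.168.10.0 0.0.0.255 any eq ftp-data
R1(config-ext-nacl)# permit ip any any
R1(config-ext-nacl)# exit
R1(config)# interface GigabitEthernet0/1
R1(config-if)# ip access-group NO_FTP in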
Verifying Extended ACLs
After an ACL has been configured and applied to an interface, use Cisco IOS show commands to verify
the configuration.
Unlike standard ACLs, extended ACLs do not implement the same internal logic and hashing function.
The output and sequence numbers displayed in the show access-lists command output reflect the order
in which the statements were entered. Host entries are not automatically listed prior to range entries.
After an ACL configuration has been verified, the next step is to confirm that the ACLs work as planned;
blocking and permitting traffic as expected.
Topic 121: Packet Tracer – Configuring Extended ACLs:
Topic 136: Packet Tracer – Configuring NAT Pool Overload and PAT-1:
• Build a Network
• Configure NAT Pool Overload
• Configure PAT
Topic 137: Packet Tracer – Configuring NAT Pool Overload and PAT-2:
This is the continuation of the previous topic.
Topic 138: Packet Tracer – Configuring NAT Pool Overload and PAT-3:
This is the continuation of the previous topic.
Topic 139: Ethernet Protocol:
Ethernet is the most widely used LAN technology today. Ethernet operates at the data link layer
and the physical layer. It is a family of networking technologies that are defined in the IEEE 802.2 and
802.3 standards. Ethernet supports data bandwidths of:
• 10 Mb/s
• 100 Mb/s
• 1000 Mb/s (1 Gb/s)
• 10,000 Mb/s (10 Gb/s)
• 40,000 Mb/s (40 Gb/s)
• 100,000 Mb/s (100 Gb/s)
As shown in Figure below, Ethernet standards define both the Layer 2 protocols and the Layer 1
technologies. For the Layer 2 protocols, as with all 802 IEEE standards, Ethernet relies on the two
separate sublayers of the data link layer to operate, the Logical Link Control (LLC) and the MAC
sublayers.
LLC sublayer
The Ethernet LLC sublayer handles the communication between the upper layers and the lower layers.
This is typically between the networking software and the device hardware. The LLC sublayer takes the
network protocol data, which is typically an IPv4 packet, and adds control information to help deliver
the packet to the destination node. The LLC is used to communicate with the upper layers of the
application, and transition the packet to the lower layers for delivery.
LLC is implemented in software, and its implementation is independent of the hardware. In a computer,
the LLC can be considered the driver software for the NIC. The NIC driver is a program that interacts
directly with the hardware on the NIC to pass the data between the MAC sublayer and the physical
media.
MAC sublayer
MAC constitutes the lower sublayer of the data link layer. MAC is implemented by hardware, typically in
the computer NIC. The specifics of media access control are defined in the IEEE 802.3 standards. The figure below lists common
IEEE Ethernet standards.
Ethernet MAC sublayer has two primary responsibilities:
• Data encapsulation
• Media access control
Data encapsulation
The data encapsulation process includes frame assembly before transmission, and frame disassembly
upon reception of a frame. In forming the frame, the MAC layer adds a header and trailer to the
network layer PDU.
MAC and IP
There are two primary addresses assigned to a host device:
• Physical address (the MAC address)
• Logical address (the IP address)
Both the MAC address and IP address work together to identify a device on the network. The process of
using the MAC address and the IP address to find a computer is similar to the process of using a name
and address of an individual to send a letter.
A person's name usually does not change. A person's address, on the other hand, relates to where they
live and can change.
Similar to the name of a person, the MAC address on a host does not change; it is physically assigned to
the host NIC and is known as the physical address. The physical address remains the same regardless of
where the host is placed.
The IP address is similar to the address of a person. This address is based on where the host is actually
located. Using this address, it is possible to determine where a packet should be sent. The IP address, or
network address, is known as a logical address because it is assigned logically.
It is assigned to each host by a network administrator based on the local network that the host is
connected to. Both the physical MAC and logical IP addresses are required for a computer to
communicate on a hierarchical network, just like both the name and address of a person are required to
send a letter.
End-to-End Connectivity: MAC and IP
A source device will send a packet based on an IP address. One of the most common ways a source
device determines the IP address of a destination device is through Domain Name Service (DNS), in
which an IP address is associated to a domain name. For example, www.cisco.com is equal to
209.165.200.225. This IP address will get the packet to the network location of the destination device. It
is this IP address that routers will use to determine the best path to reach a destination. So, in short, IP
addressing determines the end-to-end behavior of an IP packet.
However, along each link in a path, an IP packet is encapsulated in a frame specific to the particular data
link technology associated with that link, such as Ethernet. End devices on an Ethernet network do not
accept and process frames based on IP addresses, rather, a frame is accepted and processed based on
MAC addresses.
On Ethernet networks, MAC addresses are used to identify, at a lower level, the source and destination
hosts. When a host on an Ethernet network communicates, it sends frames containing its own MAC
address as the source and the MAC address of the intended recipient as the destination. All hosts that
receive the frame will read the destination MAC address. If the destination MAC address matches the
MAC address configured on the host NIC, only then will the host process the message.
How are the IP addresses of the IP packets in a data flow associated with the MAC addresses on each
link along the path to the destination? This is done through a process called Address Resolution Protocol
(ARP).
Topic 141: Address Resolution Protocol (ARP):
Recall that each node on an IP network has both a MAC address and an IP address. In order to send
data, the node must use both of these addresses. The node must use its own MAC and IP addresses in
the source fields and must provide both a MAC address and an IP address for the destination. While the
IP address of the destination will be provided by a higher OSI layer, the sending node needs a way to
find the MAC address of the destination for a given Ethernet link. This is the purpose of ARP.
ARP relies on certain types of Ethernet broadcast messages and Ethernet unicast messages, called ARP
requests and ARP replies.
The ARP protocol provides two basic functions:
• Resolving IPv4 addresses to MAC addresses
• Maintaining a table of mappings
ARP Functions
ARP Table –
• Used to find data link layer address that is mapped to destination IPv4 address
• As a node receives frames from the media, it records the source IP and MAC address as a
mapping in the ARP table
ARP request –
• Layer 2 broadcast to all devices on the Ethernet LAN
• The node that matches the IP address in the broadcast will reply
• If no device responds, the packet is dropped because a frame cannot be created
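On a Cisco router, the mappings that ARP has learned can be examined and, if necessary, cleared with the following privileged EXEC commands (a brief verification sketch):
R1# show ip arp
R1# clear arp-cache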
Native VLANs
Tagged Frames on the Native VLAN
Some devices that support trunking add a VLAN tag to native VLAN traffic. Control traffic sent on the
native VLAN should not be tagged. If an 802.1Q trunk port receives a tagged frame with the VLAN ID the
same as the native VLAN, it drops the frame. Consequently, when configuring a switch port on a Cisco
switch, configure devices so that they do not send tagged frames on the native VLAN. Devices from
other vendors that support tagged frames on the native VLAN include IP phones, servers, routers, and
non-Cisco switches.
Untagged Frames on the Native VLAN
When a Cisco switch trunk port receives untagged frames (which are unusual in a well-designed
network), it forwards those frames to the native VLAN. If there are no devices associated with the native
VLAN (which is not unusual) and there are no other trunk ports (which is not unusual), then the frame is
dropped. The default native VLAN is VLAN 1. When configuring an 802.1Q trunk port, a default Port
VLAN ID (PVID) is assigned the value of the native VLAN ID. All untagged traffic coming in or out of the
802.1Q port is forwarded based on the PVID value. For example, if VLAN 99 is configured as the native
VLAN, the PVID is 99 and all untagged traffic is forwarded to VLAN 99. If the native VLAN has not been
reconfigured, the PVID value is set to VLAN 1.
Topic 152: VLAN Implementation:
Normal Range VLANs
Different Cisco Catalyst switches support various numbers of VLANs. The number of supported VLANs is
large enough to accommodate the needs of most organizations. For example, the Catalyst 2960 and
3560 Series switches support over 4,000 VLANs. Normal range VLANs on these switches are numbered 1
to 1,005 and extended range VLANs are numbered 1,006 to 4,094.
Normal Range VLANs
• Used in small- and medium-sized business and enterprise networks.
• Identified by a VLAN ID between 1 and 1005.
• IDs 1002 through 1005 are reserved for Token Ring and FDDI VLANs.
• IDs 1 and 1002 to 1005 are automatically created and cannot be removed.
• Configurations are stored within a VLAN database file, called vlan.dat. The vlan.dat file is located
in the flash memory of the switch.
• The VLAN Trunking Protocol (VTP), which helps manage VLAN configurations between switches,
can only learn and store normal range VLANs.
Extended Range VLANs
• Enable service providers to extend their infrastructure to a greater number of customers. Some
global enterprises could be large enough to need extended range VLAN IDs.
• Are identified by a VLAN ID between 1006 and 4094.
• Configurations are not written to the vlan.dat file.
• Support fewer VLAN features than normal range VLANs.
• Are, by default, saved in the running configuration file.
• VTP does not learn extended range VLANs.
Creating a VLAN
When configuring normal range VLANs, the configuration details are stored in flash memory on the
switch in a file called vlan.dat. Flash memory is persistent and does not require the copy running-config
startup-config command. However, because other details are often configured on a Cisco switch at the
same time that VLANs are created, it is good practice to save running configuration changes to the
startup configuration.
In addition to entering a single VLAN ID, a series of VLAN IDs can be entered separated by commas, or a
range of VLAN IDs separated by hyphens using the vlan vlan-id command. For example, use the
following command to create VLANs 100, 102, 105, 106, and 107:
S1(config)# vlan 100,102,105-107
Assigning Ports to VLANs
After creating a VLAN, the next step is to assign ports to the VLAN. An access port can belong to only one
VLAN at a time; one exception to this rule is that of a port connected to an IP phone, in which case,
there are two VLANs associated with the port: one for voice and one for data.
The switchport access vlan command forces the creation of a VLAN if it does not already exist on the
switch.
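A minimal sketch of creating a VLAN and assigning an access port to it (the VLAN ID, VLAN name, and interface are illustrative):
S1(config)# vlan 20
S1(config-vlan)# name Students
S1(config-vlan)# exit
S1(config)# interface FastEthernet0/18
S1(config-if)# switchport mode access
S1(config-if)# switchport access vlan 20
S1(config-if)# end
S1# copy running-config startup-config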
Deleting a VLAN
The no vlan vlan-id global configuration mode command is used to remove a VLAN from the switch; for
example, no vlan 20 removes VLAN 20.
Caution: Before deleting a VLAN, be sure to first reassign all member ports to a different VLAN. Any
ports that are not moved to an active VLAN are unable to communicate with other hosts after the VLAN
is deleted and until they are assigned to an active VLAN.
Alternatively, the entire vlan.dat file can be deleted using the delete flash: vlan.dat privileged EXEC
mode command. The abbreviated command version (delete vlan.dat) can be used if the vlan.dat file has
not been moved from its default location. After issuing this command and reloading the switch, the
previously configured VLANs are no longer present. This effectively places the switch into its factory
default condition concerning VLAN configurations.
Note: For a Catalyst switch, the erase startup-config command must accompany the delete vlan.dat
command prior to reload to restore the switch to its factory default condition.
Verifying VLAN Information
After a VLAN is configured, VLAN configurations can be validated using Cisco IOS show commands.
The show interfaces vlan vlan-id command displays details that are beyond the scope of this course.
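Commonly used verification commands include the following (the VLAN ID and interface are illustrative):
S1# show vlan brief
S1# show vlan id 20
S1# show interfaces FastEthernet0/18 switchport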
Topic 153: VLAN Trunks:
Configuring IEEE 802.1Q Trunk Links
A VLAN trunk is an OSI Layer 2 link between two switches that carries traffic for all VLANs (unless the
allowed VLAN list is restricted manually or dynamically). To enable trunk links, configure the ports on
either end of the physical link with parallel sets of commands.
To configure a switch port on one end of a trunk link, use the switchport mode trunk command. With
this command, the interface changes to permanent trunking mode. The port enters into a Dynamic
Trunking Protocol (DTP) negotiation to convert the link into a trunk link even if the interface connecting
to it does not agree to the change. DTP is described in the next topic. In this course, the switchport
mode trunk command is the only method implemented for trunk configuration.
Use the Cisco IOS switchport trunk allowed vlan vlan-list command to specify the list of VLANs to be
allowed on the trunk link.
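A minimal trunk configuration sketch (the interface and VLAN IDs are illustrative; VLAN 99 is used as the native VLAN, as in the earlier example):
S1(config)# interface GigabitEthernet0/1
S1(config-if)# switchport mode trunk
S1(config-if)# switchport trunk native vlan 99
S1(config-if)# switchport trunk allowed vlan 10,20,30,99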
Bridge ID and Root Bridge Election
The bridge ID (BID) is used to determine the root bridge on a network. The BID field of a BPDU frame
contains three separate fields:
• Bridge priority
• Extended system ID
• MAC address
Each field is used during the root bridge election.
Bridge Priority
The bridge priority is a customizable value that can be used to influence which switch becomes the root
bridge. The switch with the lowest priority, which implies the lowest BID, becomes the root bridge
because a lower priority value takes precedence. For example, to ensure that a specific switch is always
the root bridge, set the priority to a lower value than the rest of the switches on the network. The
default priority value for all Cisco switches is 32768. The range is 0 to 61440 in increments of 4096. Valid
priority values are 0, 4096, 8192, 12288, 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056,
49152, 53248, 57344, and 61440. All other values are rejected. A bridge priority of 0 takes precedence
over all other bridge priorities.
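For example, to make a switch more likely to become the root bridge for VLAN 1, its priority can be lowered (the VLAN and priority value are illustrative):
S1(config)# spanning-tree vlan 1 priority 24576
! Alternatively, let IOS derive a value lower than the current root:
S1(config)# spanning-tree vlan 1 root primary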
Extended System ID
Early implementations of IEEE 802.1D were designed for networks that did not use VLANs. There was a
single common spanning tree across all switches. For this reason, in older Cisco switches, the extended
system ID could be omitted in BPDU frames. As VLANs became common for network infrastructure
segmentation, 802.1D was enhanced to include support for VLANs, requiring the VLAN ID to be included
in the BPDU frame. VLAN information is included in the BPDU frame through the use of the extended
system ID. All newer switches include the use of the extended system ID by default.
The extended system ID value is added to the bridge priority value in the BID to identify the priority and
VLAN of the BPDU frame.
When two switches are configured with the same priority and have the same extended system ID, the
switch having the MAC address with the lowest hexadecimal value will have the lower BID. Initially, all
switches are configured with the same default priority value. The MAC address is then the deciding
factor on which switch is going to become the root bridge. To ensure that the root bridge decision best
meets network requirements, it is recommended that the administrator configure the desired root
bridge switch with a lower priority. This also ensures that the addition of new switches to the network
does not trigger a new spanning tree election, which can disrupt network communication while a new
root bridge is being selected.
In Figure above, S1 has a lower priority than the other switches; therefore, it is preferred as the root
bridge for that spanning tree instance.
When all switches are configured with the same priority, as is the case with all switches kept in the
default configuration with a priority of 32768, the MAC address becomes the deciding factor for which
switch becomes the root bridge.
The MAC address with the lowest hexadecimal value is considered to be the preferred root bridge. In
the example above, S2 has the lowest value for its MAC address and is, therefore, designated as the root
bridge for that spanning tree instance.
Topic 162: Varieties of Spanning Tree Protocols:
Several varieties of spanning tree protocols have emerged after the original IEEE 802.1D.
The varieties of spanning tree protocols include:
STP - This is the original IEEE 802.1D version (802.1D-1998 and earlier) that provides a loop-free
topology in a network with redundant links. Common Spanning Tree (CST) assumes one spanning tree
instance for the entire bridged network, regardless of the number of VLANs.
PVST+ - This is a Cisco enhancement of the original 802.1D standard that provides a separate 802.1D
spanning tree instance for each VLAN configured in the network. The separate instance supports
PortFast, UplinkFast, BackboneFast, BPDU guard, BPDU filter, root guard, and loop guard.
802.1D-2004 - This is an updated version of the STP standard, incorporating IEEE 802.1w.
Rapid Spanning Tree Protocol (RSTP) or IEEE 802.1w - This is an evolution of STP that provides faster
convergence than STP.
Rapid PVST+ - This is a Cisco enhancement of RSTP that uses PVST+. Rapid PVST+ provides a separate
instance of 802.1w per VLAN. The separate instance supports PortFast, BPDU guard, BPDU filter, root
guard, and loop guard.
Multiple Spanning Tree Protocol (MSTP) - This is an IEEE standard inspired by the earlier Cisco
proprietary Multiple Instance STP (MISTP) implementation. MSTP maps multiple VLANs into the same
spanning tree instance. The Cisco implementation of MSTP is MST, which provides up to 16 instances of
RSTP and combines many VLANs with the same physical and logical topology into a common RSTP
instance. Each instance supports PortFast, BPDU guard, BPDU filter, root guard, and loop guard.
A network professional, whose duties include switch administration, may be required to decide which
type of spanning tree protocol to implement.
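On a Cisco Catalyst switch, the spanning tree variety is selected with the spanning-tree mode global configuration command; a brief sketch selecting Rapid PVST+ and verifying the result:
S1(config)# spanning-tree mode rapid-pvst
S1(config)# end
S1# show spanning-tree summary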
To analyze the STP topology, follow these steps:
Step 1. Discover the Layer 2 topology. Use network documentation if it exists or use the show cdp
neighbors command to discover the Layer 2 topology.
Step 2. After discovering the Layer 2 topology, use STP knowledge to determine the expected Layer 2
path. It is necessary to know which switch is the root bridge.
Step 3. Use the show spanning-tree vlan command to determine which switch is the root bridge.
Step 4. Use the show spanning-tree vlan command on all switches to find out which ports are in
blocking or forwarding state and confirm your expected Layer 2 path.
Topic 163: Link Aggregation:
In the figure below, traffic coming from several links (usually 100 or 1000 Mb/s) aggregates on the
access switch and must be sent to distribution switches. Because of the traffic aggregation, links with
higher bandwidth must be available between the access and distribution switches.
It may be possible to use faster links, such as 10 Gb/s, on the aggregated link between the access and
distribution layer switches. However, adding faster links is expensive. Additionally, as the speed
increases on the access links, even the fastest possible port on the aggregated link is no longer fast
enough to aggregate the traffic coming from all access links.
It is also possible to multiply the number of physical links between the switches to increase the overall
speed of switch-to-switch communication. However, by default, STP is enabled on switch devices. STP
will block redundant links to prevent Layer 2 switching loops.
For these reasons, the best solution is to implement an EtherChannel configuration.
EtherChannel
EtherChannel technology was originally developed by Cisco as a LAN switch-to-switch technique of
grouping several Fast Ethernet or Gigabit Ethernet ports into one logical channel. When an EtherChannel
is configured, the resulting virtual interface is called a port channel. The physical interfaces are bundled
together into a port channel interface.
EtherChannel technology has many advantages:
• Most configuration tasks can be done on the EtherChannel interface instead of on each
individual port, ensuring configuration consistency throughout the links.
• EtherChannel relies on existing switch ports. There is no need to upgrade the link to a faster and
more expensive connection to have more bandwidth.
• Load balancing takes place between links that are part of the same EtherChannel. Depending on
the hardware platform, one or more load-balancing methods can be implemented. These
methods include source MAC to destination MAC load balancing, or source IP to destination IP
load balancing, across the physical links.
• EtherChannel creates an aggregation that is seen as one logical link. When several EtherChannel
bundles exist between two switches, STP may block one of the bundles to prevent switching
loops. When STP blocks one of the redundant links, it blocks the entire EtherChannel. This blocks
all the ports belonging to that EtherChannel link. Where there is only one EtherChannel link, all
physical links in the EtherChannel are active because STP sees only one (logical) link.
• EtherChannel provides redundancy because the overall link is seen as one logical connection.
Additionally, the loss of one physical link within the channel does not create a change in the
topology; therefore a spanning tree recalculation is not required. Assuming at least one physical
link is present, the EtherChannel remains functional, even if its overall throughput decreases
because of a lost link within the EtherChannel.
• EtherChannel can be implemented by grouping multiple physical ports into one or more logical
EtherChannel links.
The original purpose of EtherChannel is to increase speed capability on aggregated links between
switches. However, this concept was extended as EtherChannel technology became more popular, and
now many servers also support link aggregation with EtherChannel. EtherChannel creates a one-to-one
relationship; that is, one EtherChannel link connects only two devices. An EtherChannel link can be
created between two switches or an EtherChannel link can be created between an EtherChannel-
enabled server and a switch. However, traffic cannot be sent to two different switches through the same
EtherChannel link.
The individual EtherChannel group member port configuration must be consistent on both devices. If
the physical ports of one side are configured as trunks, the physical ports of the other side must also be
configured as trunks within the same native VLAN. Additionally, all ports in each EtherChannel link must
be configured as Layer 2 ports.
PAgP
PAgP is a Cisco-proprietary protocol that aids in the automatic creation of EtherChannel links. When an
EtherChannel link is configured using PAgP, PAgP packets are sent between EtherChannel-capable ports
to negotiate the forming of a channel.
Link Aggregation Control Protocol (LACP)
LACP is part of an IEEE specification (802.3ad) that allows several physical ports to be bundled to form a
single logical channel. LACP allows a switch to negotiate an automatic bundle by sending LACP packets
to the peer. It performs a function similar to PAgP with Cisco EtherChannel. Because LACP is an IEEE
standard, it can be used to facilitate EtherChannels in multivendor environments. On Cisco devices, both
protocols are supported.
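A minimal EtherChannel sketch using LACP (the interfaces and channel-group number are illustrative; for PAgP, the mode keyword desirable would be used instead of active):
S1(config)# interface range FastEthernet0/1 - 2
S1(config-if-range)# channel-group 1 mode active
S1(config-if-range)# exit
S1(config)# interface Port-channel 1
S1(config-if)# switchport mode trunk
S1(config-if)# end
S1# show etherchannel summary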
Topic 164: Packet Tracer–Switched Network with Redundant Links:
• Build the Network
• Determine the Root Bridge
Topic 165: Inter-VLAN Routing:
VLANs are used to segment switched networks. Layer 2 switches, such as the Catalyst 2960 Series, can
be configured by a network professional with over 4,000 VLANs. However, Layer 2 switches have very
limited IPv4 and IPv6 functionality and cannot perform the routing function of routers. While Layer 2
switches are gaining more IP functionality, such as the ability to perform static routing, these switches
do not support dynamic routing. With the large number of VLANs possible on these switches, static
routing is insufficient.
A VLAN is a broadcast domain, so computers on separate VLANs are unable to communicate without the
intervention of a routing device. Any device that supports Layer 3 routing, such as a router or a
multilayer switch, can be used to perform the necessary routing functionality. Regardless of the device
used, the process of forwarding network traffic from one VLAN to another VLAN using routing is known
as inter-VLAN routing.
Legacy Inter-VLAN Routing
Historically, the first solution for inter-VLAN routing relied on routers with multiple physical interfaces.
Each interface had to be connected to a separate network and configured with a distinct subnet.
In this legacy approach, inter-VLAN routing is performed by connecting different physical router
interfaces to different physical switch ports. The switch ports connected to the router are placed in
access mode and each physical interface is assigned to a different VLAN. Each router interface can then
accept traffic from the VLAN associated with the switch interface that it is connected to, and traffic can
be routed to the other VLANs connected to the other interfaces.
Router-on-a-Stick Inter-VLAN Routing
While legacy inter-VLAN routing requires multiple physical interfaces on both the router and the switch,
a more common, present-day implementation of inter-VLAN routing does not. Instead, some router
software permits configuring a router interface as a trunk link, meaning only one physical interface is
required on the router and the switch to route packets between multiple VLANs.
Router-on-a-stick is a type of router configuration in which a single physical interface routes traffic
between multiple VLANs on a network. The router interface is configured to operate as a trunk link and
is connected to a switch port that is configured in trunk mode. The router performs inter-VLAN routing
by accepting VLAN-tagged traffic on the trunk interface coming from the adjacent switch, and then
internally routing between the VLANs using subinterfaces. The router then forwards the routed traffic,
VLAN-tagged for the destination VLAN, out the same physical interface as it used to receive the traffic.
Subinterfaces are software-based virtual interfaces, associated with a single physical interface.
Subinterfaces are configured in software on a router and each subinterface is independently configured
with an IP address and VLAN assignment. Subinterfaces are configured for different subnets
corresponding to their VLAN assignment to facilitate logical routing. After a routing decision is made
based on the destination VLAN, the data frames are VLAN-tagged and sent back out the physical
interface.
Multi-Layer Switch Inter-VLAN Routing
The router-on-a-stick implementation of inter-VLAN routing requires only one physical interface on a
router and one interface on a switch, simplifying the cabling of the router. However, in other
implementations of inter-VLAN routing, a dedicated router is not required.
Multilayer switches can perform Layer 2 and Layer 3 functions, replacing the need for dedicated routers
to perform basic routing on a network. Multilayer switches support dynamic routing and inter-VLAN
routing.
To enable a multilayer switch to perform routing functions, the multilayer switch must have IP routing
enabled.
Multilayer switching is more scalable than any other inter-VLAN routing implementation. This is because
routers have a limited number of available ports to connect to networks. Additionally, for interfaces that
are configured as a trunk line, limited amounts of traffic can be accommodated on that line at one time.
With a multilayer switch, traffic is routed internal to the switch device, which means packets are not
filtered down a single trunk line to obtain new VLAN-tagging information. A multilayer switch does not,
however, completely replace the functionality of a router. Routers support a significant number of
additional features, such as the ability to implement greater security controls. Rather, a multilayer
switch can be thought of as a Layer 2 device that is upgraded to have some routing capabilities.
Topic 166: Configure Legacy Inter-VLAN Routing:
Legacy inter-VLAN routing requires routers to have multiple physical interfaces. The router accomplishes
the routing by having each of its physical interfaces connected to a unique VLAN. Each interface is also
configured with an IP address for the subnet associated with the particular VLAN to which it is
connected. By configuring the IP addresses on the physical interfaces, network devices connected to
each of the VLANs can communicate with the router using the physical interface connected to the same
VLAN. In this configuration, network devices can use the router as a gateway to access the devices
connected to the other VLANs.
The routing process requires the source device to determine if the destination device is local or remote
to the local subnet. The source device accomplishes this by comparing the source and destination IP
addresses against the subnet mask. When the destination IP address has been determined to be on a
remote network, the source device must identify where it needs to forward the packet to reach the
destination device. The source device examines the local routing table to determine where it needs to
send the data. Devices use their default gateway as the Layer 2 destination for all traffic that must leave
the local subnet. The default gateway is the route that the device uses when it has no other explicitly
defined route to the destination network. The IP address of the router interface on the local subnet acts
as the default gateway for the sending device.
When the source device has determined that the packet must travel through the local router interface
on the connected VLAN, the source device sends out an ARP request to determine the MAC address of
the local router interface. When the router sends its ARP reply back to the source device, the source
device can use the MAC address to finish framing the packet before it sends it out on the network as
unicast traffic.
Because the Ethernet frame has the destination MAC address of the router interface, the switch knows
exactly which switch port to forward the unicast traffic out of to reach the router interface for that
VLAN. When the frame arrives at the router, the router removes the source and destination MAC
address information to examine the destination IP address of the packet. The router compares the
destination address to entries in its routing table to determine where it needs to forward the data to
reach its final destination. If the router determines that the destination network is a locally connected
network, as is the case with inter-VLAN routing, the router sends an ARP request out the interface
physically connected to the destination VLAN. The destination device responds back to the router with
its MAC address, which the router then uses to frame the packet. The router then sends the unicast
traffic to the switch, which forwards it out the port where the destination device is connected.
Legacy Inter-VLAN Routing-Switch Configuration
To configure legacy inter-VLAN routing, start by configuring the switch.
Use the vlan vlan_id global configuration mode command to create VLANs. After the VLANs have been
created, the switch ports are assigned to the appropriate VLANs. The switchport access vlan
vlan_id command is executed from interface configuration mode on the switch for each interface to
which the router connects.
Finally, to protect the configuration so that it is not lost after a reload of the switch, the copy running-
config startup-config command is executed to back up the running configuration to the startup
configuration.
Next, the router can be configured to perform inter-VLAN routing.
Router interfaces are configured in a manner similar to configuring VLAN interfaces on switches. To
configure a specific interface, change to interface configuration mode from global configuration mode.
Router interfaces are disabled by default and must be enabled using the no shutdown command before
they are used. After the no shutdown interface configuration mode command has been issued, a
notification displays, indicating that the interface state has changed to up. This indicates that the
interface is now enabled.
The process is repeated for all router interfaces. Each router interface must be assigned to a unique
subnet for routing to occur.
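A minimal legacy inter-VLAN routing sketch for one VLAN (the VLAN ID, interfaces, and addressing are illustrative; the same pattern is repeated for each additional VLAN and router interface):
S1(config)# vlan 10
S1(config-vlan)# exit
S1(config)# interface FastEthernet0/11
S1(config-if)# switchport mode access
S1(config-if)# switchport access vlan 10
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip address 192.168.10.1 255.255.255.0
R1(config-if)# no shutdown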
Topic 167: Configure Router-on-a-Stick Inter-VLAN Routing:
Legacy inter-VLAN routing using physical interfaces has a significant limitation. Routers have a limited
number of physical interfaces to connect to different VLANs. As the number of VLANs increases on a
network, having one physical router interface per VLAN quickly exhausts the physical interface capacity
of a router. An alternative in larger networks is to use VLAN trunking and subinterfaces. VLAN trunking
allows a single physical router interface to route traffic for multiple VLANs. This technique is termed
router-on-a-stick and uses virtual subinterfaces on the router to overcome the hardware limitations
based on physical router interfaces.
Subinterfaces are software-based virtual interfaces that are assigned to physical interfaces. Each
subinterface is configured independently with its own IP address and subnet mask. This allows a single
physical interface to simultaneously be part of multiple logical networks.
When configuring inter-VLAN routing using the router-on-a-stick model, the physical interface of the
router must be connected to a trunk link on the adjacent switch. On the router, subinterfaces are
created for each unique VLAN on the network. Each subinterface is assigned an IP address specific to its
subnet/VLAN and is also configured to tag frames for that VLAN. This way, the router can keep the
traffic from each subinterface separated as it traverses the trunk link back to the switch.
Functionally, the router-on-a-stick model is the same as using the legacy inter-VLAN routing model, but
instead of using the physical interfaces to perform the routing, subinterfaces of a single physical
interface are used.
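A router-on-a-stick sketch with two VLANs (the VLAN IDs, subnets, and interface are illustrative; the attached switch port must be configured as an 802.1Q trunk):
R1(config)# interface GigabitEthernet0/0.10
R1(config-subif)# encapsulation dot1Q 10
R1(config-subif)# ip address 192.168.10.1 255.255.255.0
R1(config-subif)# exit
R1(config)# interface GigabitEthernet0/0.20
R1(config-subif)# encapsulation dot1Q 20
R1(config-subif)# ip address 192.168.20.1 255.255.255.0
R1(config-subif)# exit
R1(config)# interface GigabitEthernet0/0
R1(config-if)# no shutdown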
Topic 168: Layer 3 Switching:
Router-on-a-stick is simple to implement because routers are usually available in every network. Most
enterprise networks use multilayer switches to achieve high-packet processing rates using hardware-
based switching. Layer 3 switches usually have packet-switching throughputs in the millions of packets
per second (pps), whereas traditional routers provide packet switching in the range of 100,000 pps to
more than 1 million pps.
All Catalyst multilayer switches support the following types of Layer 3 interfaces:
Routed port - A pure Layer 3 interface similar to a physical interface on a Cisco IOS router.
Switch virtual interface (SVI) - A virtual VLAN interface for inter-VLAN routing. In other words, SVIs are
the virtual-routed VLAN interfaces.
High-performance switches, such as the Catalyst 6500 and Catalyst 4500, perform almost every function
involving OSI Layer 3 and higher using hardware-based switching that is based on Cisco Express
Forwarding.
All Layer 3 Cisco Catalyst switches support routing protocols, but several models of Catalyst switches
require enhanced software for specific routing protocol features. Catalyst 2960 Series switches running
IOS Release 12.2(55) or later support static routing.
Catalyst switches use different default settings for interfaces. All members of the Catalyst 3560 and
4500 families of switches use Layer 2 interfaces by default. Members of the Catalyst 6500 family of
switches running Cisco IOS use Layer 3 interfaces by default. Default interface configurations do not
appear in the running or startup configuration. Depending on which Catalyst family of switches is used,
the switchport or no switchport interface configuration mode commands might be present in the
running config or startup configuration files.
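A routed port sketch on a multilayer switch that supports the no switchport command, such as a Catalyst 3560 (the interface and addressing are illustrative):
S1(config)# interface GigabitEthernet0/1
S1(config-if)# no switchport
S1(config-if)# ip address 10.10.10.1 255.255.255.252
S1(config-if)# no shutdown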
Inter-VLAN Routing with Switch Virtual Interfaces
In the early days of switched networks, switching was fast (often at hardware speed, meaning the speed
was equivalent to the time it took to physically receive and forward frames onto other ports) and
routing was slow (routing had to be processed in software). This prompted network designers to extend
the switched portion of the network as much as possible. Access, distribution, and core layers were
often configured to communicate at Layer 2. This topology created loop issues. To solve these issues,
spanning-tree technologies were used to prevent loops while still enabling flexibility and redundancy in
inter-switch connections.
However, as network technologies have evolved, routing has become faster and cheaper. Today, routing
can be performed at wire speed. One consequence of this evolution is that routing can be transferred to
the core and the distribution layers without impacting network performance.
Many users are in separate VLANs, and each VLAN is usually a separate subnet. Therefore, it is logical to
configure the distribution switches as Layer 3 gateways for the users of each access switch VLAN. This
implies that each distribution switch must have IP addresses matching each access switch VLAN.
As shown in the figure below, Layer 3 (routed) ports are normally implemented between the distribution
and the core layer. The network architecture depicted is not dependent on spanning tree because there
are no physical loops in the Layer 2 portion of the topology.
Switch Virtual Interfaces
An SVI is a virtual interface that is configured within a multilayer switch. An SVI can be created for any
VLAN that exists on the switch. An SVI is considered to be virtual because there is no physical port
dedicated to the interface. It can perform the same functions for the VLAN as a router interface would,
and can be configured in much the same way as a router interface (i.e., IP address, inbound/outbound
ACLs, etc.). The SVI for the VLAN provides Layer 3 processing for packets to or from all switch ports
associated with that VLAN.
Whenever an SVI is created, ensure that the corresponding VLAN is present in the VLAN database. In the figure
below, the switch should have VLAN 10 and VLAN 20 present in the VLAN database; otherwise, the SVI
interface stays down.
The following are some of the reasons to configure SVI:
• To provide a gateway for a VLAN so that traffic can be routed into or out of that VLAN
• To provide Layer 3 IP connectivity to the switch
• To support routing protocol and bridging configurations
The following are some of the advantages of SVIs (the only disadvantage is that multilayer switches are
more expensive):
• It is much faster than router-on-a-stick, because everything is hardware switched and routed.
• No need for external links from the switch to the router for routing.
• Not limited to one link. Layer 2 EtherChannels can be used between the switches to get more
bandwidth.
• Latency is much lower, because it does not need to leave the switch.
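To tie this together, a minimal SVI-based inter-VLAN routing sketch on a multilayer switch (VLANs 10 and 20 must already exist in the VLAN database; the addresses are illustrative):
S1(config)# ip routing
S1(config)# interface vlan 10
S1(config-if)# ip address 192.168.10.1 255.255.255.0
S1(config-if)# no shutdown
S1(config-if)# exit
S1(config)# interface vlan 20
S1(config-if)# ip address 192.168.20.1 255.255.255.0
S1(config-if)# no shutdown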