Data Center Fabric Blueprint Architecture Components
9-Nov-20

This section gives an overview of the building blocks used in this blueprint architecture. The implementation of each building block technology is explored in more detail in later sections.

For information about the hardware and software that serve as a foundation for your building blocks, see the Data Center Fabric Reference Design Hardware and Software Summary.

The building blocks are described in the sections that follow.

IP Fabric Underlay Network


The modern IP fabric underlay network building block provides IP connectivity across a Clos-based topology. Juniper Networks supports the following IP fabric underlay models:

A 3-stage IP fabric, which is composed of a tier of spine devices and a tier of leaf devices. See Figure 1.

A 5-stage IP fabric, which typically starts as a single 3-stage IP fabric that grows into two 3-stage IP fabrics. These fabrics are segmented into separate points of delivery (PODs) within a data center. For this use case, we support the addition of a tier of super spine devices that enable communication between the spine and leaf devices in the two PODs. See Figure 2.

Figure 1: Three-Stage IP Fabric Underlay


Figure 2: Five-Stage IP Fabric Underlay


As shown in both figures, the devices are interconnected using high-speed interfaces that are either single links or aggregated Ethernet interfaces. The aggregated Ethernet interfaces are optional (a single link between devices is typically used) but can be deployed to increase bandwidth and provide link-level redundancy. Both options are covered.

We chose EBGP as the routing protocol in the underlay network for its dependability and scalability. Each device is assigned its own autonomous system with a unique autonomous system number to support EBGP. You can use other routing protocols in the underlay network; the usage of those protocols is beyond the scope of this document.
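
The following is a minimal sketch of the underlay EBGP configuration on one leaf, shown as Junos set commands; the AS numbers, interface names, and peer addresses are placeholder assumptions rather than values from this reference design.

  # Leaf: EBGP peering toward two spine devices (placeholder values)
  set routing-options router-id 192.168.1.11
  set protocols bgp group UNDERLAY type external
  set protocols bgp group UNDERLAY local-as 65011
  set protocols bgp group UNDERLAY multipath multiple-as
  set protocols bgp group UNDERLAY export EXPORT-LO0
  set protocols bgp group UNDERLAY neighbor 172.16.1.0 peer-as 65001
  set protocols bgp group UNDERLAY neighbor 172.16.2.0 peer-as 65002
  # Advertise the loopback into the underlay so overlay peering can use it
  set policy-options policy-statement EXPORT-LO0 term 1 from interface lo0.0
  set policy-options policy-statement EXPORT-LO0 term 1 then accept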

Micro Bidirectional Forwarding Detection (BFD)—the ability to run BFD on individual links in an aggregated Ethernet interface—can also be enabled in this building block to quickly detect link failures on any member links in aggregated Ethernet bundles that connect devices.
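
As a rough illustration, micro BFD can be enabled on an aggregated Ethernet bundle with configuration along these lines; the interface name, addresses, and timers are placeholder assumptions.

  # Run BFD over each member link of ae1 (placeholder addresses and timers)
  set interfaces ae1 aggregated-ether-options bfd-liveness-detection minimum-interval 300
  set interfaces ae1 aggregated-ether-options bfd-liveness-detection multiplier 3
  set interfaces ae1 aggregated-ether-options bfd-liveness-detection local-address 172.16.1.1
  set interfaces ae1 aggregated-ether-options bfd-liveness-detection neighbor 172.16.1.0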


For information about implementing spine and leaf devices in 3-stage and 5-stage IP fabric underlays, see IP Fabric Underlay Network Design and Implementation. For information about implementing the additional tier of super spine devices in a 5-stage IP fabric underlay, see Five-Stage IP Fabric Design and Implementation.

IPv4 and IPv6 Support


Because many networks implement a dual stack environment that includes IPv4 and IPv6, this blueprint provides support for both IP protocols. IPv4 and IPv6 are interwoven throughout this guide to allow you to pick one or both of these protocols.

Network Virtualization Overlays


A network virtualization overlay is a virtual network that is transported over an IP underlay network. This building block enables multitenancy in a network, allowing you to share a single physical network across multiple tenants, while keeping each tenant's network traffic isolated from the other tenants.

A tenant is a user community (such as a business unit, department, workgroup, or application) that contains groups of endpoints. Groups may communicate with other groups in the same tenancy, and tenants may communicate with other tenants if permitted by network policies. A group is typically expressed as a subnet (VLAN) that can communicate with other devices in the same subnet, and reach external groups and endpoints by way of a virtual routing and forwarding (VRF) instance.

As seen in the overlay example shown in Figure 3, Ethernet bridging tables (represented by triangles) handle tenant bridged frames and IP routing tables (represented by squares) process routed packets. Inter-VLAN routing happens at the integrated routing and bridging (IRB) interfaces (represented by circles). Ethernet and IP tables are directed into virtual networks (represented by colored lines). To reach end systems attached to other VXLAN Tunnel Endpoint (VTEP) devices, tenant packets are encapsulated and sent over an EVPN-signaled VXLAN tunnel (represented by green tunnel icons) to the associated remote VTEP devices. Tunneled packets are de-encapsulated at the remote VTEP devices and forwarded to the remote end systems by way of the respective bridging or routing tables of the egress VTEP device.

Figure 3: VXLAN Tunnels—Ethernet Bridging, IP Routing, and IRB


The following sections provide more details about overlay networks:

IBGP for Overlays

Bridged Overlay

Centrally Routed Bridging Overlay

Edge-Routed Bridging Overlay

Comparison of Bridged, Centrally Routed Bridging, and Edge-Routed Bridging Overlays

IRB Addressing Models in Bridging Overlays

Routed Overlay using EVPN Type 5 Routes


IBGP for Overlays


Internal BGP (IBGP) is a routing protocol that exchanges reachability information across an IP network. When IBGP is combined with Multiprotocol BGP (MP-IBGP), it provides the foundation for EVPN to exchange reachability information between VTEP devices. This capability is required to establish inter-VTEP VXLAN tunnels and use them for overlay connectivity services.

Figure 4 shows that the spine and leaf devices use their loopback addresses for peering in a single autonomous system. In this design, the spine devices act as a route reflector cluster and the leaf devices are route reflector clients. Use of a route reflector satisfies the IBGP requirement for a full mesh without the need to peer all the VTEP devices directly with one another. As a result, the leaf devices peer only with the spine devices and the spine devices peer with both spine devices and leaf devices. Because the spine devices are connected to all the leaf devices, the spine devices can relay IBGP information between the indirectly peered leaf device neighbors.

Figure 4: IBGP for Overlays

You can place route reflectors almost anywhere in the network. However, you must consider the
following:


Does the selected device have enough memory and processing power to handle the additional workload required by a route reflector?

Is the selected device equidistant and reachable from all EVPN speakers?

Does the selected device have the proper software capabilities?

In this design, the route reflector cluster is placed at the spine layer. The QFX switches that you can use as a spine in this reference design have ample processing speed to handle route reflector client traffic in the network virtualization overlay.

For details about implementing IBGP in an overlay, see Configuring IBGP for the Overlay.
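
A minimal sketch of the overlay IBGP peering, assuming a spine route reflector at loopback 192.168.0.1 and a leaf client at 192.168.1.11 (placeholder addresses and AS number):

  # Spine: route reflector for the EVPN overlay
  set routing-options autonomous-system 65100
  set protocols bgp group OVERLAY type internal
  set protocols bgp group OVERLAY local-address 192.168.0.1
  set protocols bgp group OVERLAY family evpn signaling
  set protocols bgp group OVERLAY cluster 192.168.0.1
  set protocols bgp group OVERLAY neighbor 192.168.1.11

  # Leaf: route reflector client
  set routing-options autonomous-system 65100
  set protocols bgp group OVERLAY type internal
  set protocols bgp group OVERLAY local-address 192.168.1.11
  set protocols bgp group OVERLAY family evpn signaling
  set protocols bgp group OVERLAY neighbor 192.168.0.1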

Bridged Overlay
The first overlay service type described in this guide is a bridged overlay, as shown in Figure 5.

Figure 5: Bridged Overlay

In this overlay model, Ethernet VLANs are extended between leaf devices across VXLAN tunnels. These leaf-to-leaf VXLAN tunnels support data center networks that require Ethernet connectivity between leaf devices but do not need routing between the VLANs. As a result, the spine devices provide only basic underlay and overlay connectivity for the leaf devices, and do not perform routing or gateway services seen with other overlay methods.

Leaf devices originate VTEPs to connect to the other leaf devices. The tunnels enable the leaf devices to send VLAN traffic to other leaf devices and Ethernet-connected end systems in the data center. The simplicity of this overlay service makes it attractive for operators who need an easy way to introduce EVPN/VXLAN into their existing Ethernet-based data center.


NOTE: You can add routing to a bridged overlay by implementing an MX Series router or SRX Series security device external to the EVPN/VXLAN fabric. Otherwise, you can select one of the other overlay types that incorporate routing (such as an edge-routed bridging overlay, a centrally routed bridging overlay, or a routed overlay).

For information on implementing a bridged overlay, see Bridged Overlay Design and Implementation.
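
As an illustration of the leaf-side pieces of a bridged overlay, the sketch below maps a VLAN to a VXLAN VNI and sources the VTEP from the loopback; the VLAN ID, VNI, route distinguisher, and route target are placeholder assumptions.

  # Leaf: map a VLAN to a VXLAN VNI and originate the VTEP from lo0.0
  set vlans BD-100 vlan-id 100
  set vlans BD-100 vxlan vni 10100
  set switch-options vtep-source-interface lo0.0
  set switch-options route-distinguisher 192.168.1.11:1
  set switch-options vrf-target target:65100:1
  set protocols evpn encapsulation vxlan
  set protocols evpn extended-vni-list 10100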

Centrally Routed Bridging Overlay


The second overlay service type is the centrally routed bridging overlay, as shown in Figure 6.

Figure 6: Centrally Routed Bridging Overlay

In a centrally routed bridging overlay, routing occurs at a central gateway of the data center network (the spine layer in this example) rather than at the VTEP device where the end systems are connected (the leaf layer in this example).

You can use this overlay model when you need routed traffic to go through a centralized gateway or when your edge VTEP devices lack the required routing capabilities.

As shown above, traffic that originates at the Ethernet-connected end systems is forwarded to the leaf VTEP devices over a trunk (multiple VLANs) or an access port (single VLAN). The VTEP device forwards the traffic to local end systems or to an end system at a remote VTEP device. An integrated routing and bridging (IRB) interface at each spine device helps route traffic between the Ethernet virtual networks.


EVPN supports two VLAN-aware Ethernet service models in the data center, and Juniper Networks supports both. A VLAN-aware bridging overlay service model allows a collection of VLANs to be easily aggregated into the same overlay virtual network. It provides two options:

1. Default Instance VLAN-Aware—In this option, you implement a single, default switching instance that supports a total of 4094 VLANs. All leaf platforms included in this design (see Data Center Fabric Reference Design Hardware and Software Summary) support the default instance style of VLAN-aware overlay.
To configure this service model, see Configuring a VLAN-Aware Centrally-Routed Bridging Overlay in the Default Instance.

2. Virtual Switch VLAN-Aware—In this option, multiple virtual switch instances support 4094 VLANs per instance. This Ethernet service model is ideal for overlay networks that require scalability beyond a single default instance; a configuration sketch follows this list. Support for this option is available currently on the QFX10000 line of switches.
To implement this scalable service model, see Configuring a VLAN-Aware Centrally-Routed Bridging Overlay with Virtual Switches.
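
The following is a rough sketch of what a virtual switch instance for the VLAN-aware service model can look like; the instance name, VLAN IDs, VNIs, route distinguisher, and route target are placeholder assumptions rather than values from this reference design.

  # A virtual-switch instance carrying two VLAN-aware bridge domains
  set routing-instances VS-1 instance-type virtual-switch
  set routing-instances VS-1 vtep-source-interface lo0.0
  set routing-instances VS-1 route-distinguisher 192.168.0.1:100
  set routing-instances VS-1 vrf-target target:65100:100
  set routing-instances VS-1 protocols evpn encapsulation vxlan
  set routing-instances VS-1 protocols evpn extended-vni-list 10100
  set routing-instances VS-1 protocols evpn extended-vni-list 10200
  set routing-instances VS-1 vlans BD-100 vlan-id 100
  set routing-instances VS-1 vlans BD-100 vxlan vni 10100
  set routing-instances VS-1 vlans BD-200 vlan-id 200
  set routing-instances VS-1 vlans BD-200 vxlan vni 10200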

Edge-Routed Bridging Overlay


The third overlay service option is the edge-routed bridging overlay, as shown in Figure 7.

Figure 7: Edge-Routed Bridging Overlay


In this Ethernet service model, the IRB interfaces are moved to leaf device VTEPs at the edge of the overlay network to bring IP routing closer to the end systems. Because of the special ASIC capabilities required to support bridging, routing, and EVPN/VXLAN in one device, edge-routed bridging overlays are only possible on certain switches. For a list of switches that we support as leaf devices in an edge-routed bridging overlay, see Data Center Fabric Reference Design Hardware and Software Summary.

This model allows for a simpler overall network. The spine devices are configured to handle only IP traffic, which removes the need to extend the bridging overlays to the spine devices.

This option also enables faster server-to-server, intra-data center traffic (also known as east-west traffic) where the end systems are connected to the same leaf device VTEP. As a result, routing happens much closer to the end systems than with centrally routed bridging overlays.

NOTE: When a QFX5110 or QFX5120 switch that functions as a leaf device is configured with IRB interfaces that are included in EVPN Type-5 routing instances, symmetric inter-IRB unicast routing is automatically enabled.

For information on implementing the edge-routed bridging overlay, see Edge-Routed Bridging Overlay Design and Implementation.
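
To illustrate where the IRB interfaces live in this model, the following is a minimal leaf-side sketch that places two IRB interfaces into a tenant VRF; the names, addresses, VNIs, route distinguisher, and route target are placeholder assumptions.

  # Leaf: IRB gateways for two VLANs, placed in a tenant VRF
  set interfaces irb unit 100 family inet address 10.1.100.1/24
  set interfaces irb unit 200 family inet address 10.1.200.1/24
  set vlans BD-100 vlan-id 100
  set vlans BD-100 vxlan vni 10100
  set vlans BD-100 l3-interface irb.100
  set vlans BD-200 vlan-id 200
  set vlans BD-200 vxlan vni 10200
  set vlans BD-200 l3-interface irb.200
  set routing-instances TENANT-1 instance-type vrf
  set routing-instances TENANT-1 interface irb.100
  set routing-instances TENANT-1 interface irb.200
  set routing-instances TENANT-1 route-distinguisher 192.168.1.11:10
  set routing-instances TENANT-1 vrf-target target:65100:10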


Comparison of Bridged, Centrally Routed Bridging, and Edge-Routed Bridging Overlays

To help you decide which overlay type is best suited for your EVPN environment, see Table 1.

Table 1: Comparison of Bridged, Centrally Routed Bridging, and Edge-Routed Bridging Overlays

Comparison Points (checked against the Edge-Routed Bridging Overlay, Centrally Routed Bridging Overlay, and Bridged Overlay columns):

Fully distributed tenant inter-subnet routing ✓
Minimal impact of IP gateway failure ✓
Dynamic routing to third-party nodes at leaf level ✓
Optimized for high volume of east-west traffic ✓
Better integration with raw IP fabrics ✓
IP VRF virtualization closer to the server ✓
Contrail vRouter multihoming required ✓
Easier EVPN interoperability with different vendors ✓
Symmetric inter-subnet routing ✓ ✓
VLAN ID overlapping per rack ✓ ✓ ✓
Simpler manual configuration and troubleshooting ✓ ✓
Service provider- and Enterprise-style interfaces ✓ ✓
Legacy leaf switch support (QFX5100) ✓ ✓
Centralized virtual machine traffic optimization (VMTO) control ✓
IP tenant subnet gateway on the firewall cluster ✓

IRB Addressing Models in Bridging Overlays

The configuration of IRB interfaces in centrally routed bridging and edge-routed bridging overlays requires an understanding of the models for the default gateway IP and MAC address configuration of IRB interfaces as follows:

Unique IRB IP Address—In this model, a unique IP address is configured on each IRB interface in an overlay subnet.
The benefit of having a unique IP address and MAC address on each IRB interface is the ability to monitor and reach each of the IRB interfaces from within the overlay using its unique IP address. This model also allows you to configure a routing protocol on the IRB interface.
The downside of this model is that allocating a unique IP address to each IRB interface may consume many IP addresses of a subnet.

Unique IRB IP Address with Virtual Gateway IP Address—This model adds a virtual gateway IP address to the previous model, and we recommend it for centrally routed bridging overlays. It is similar to VRRP, but without the in-band data plane signaling between the gateway IRB interfaces. The virtual gateway should be the same for all default gateway IRB interfaces in the overlay subnet and is active on all gateway IRB interfaces where it is configured. You should also configure a common IPv4 MAC address for the virtual gateway, which becomes the source MAC address on data packets forwarded over the IRB interface (a configuration sketch follows this list).
In addition to the benefits of the previous model, the virtual gateway simplifies default gateway configuration on end systems. The downside of this model is the same as the previous model.

IRB with Anycast IP Address and MAC Address—In this model, all default gateway IRB interfaces in an overlay subnet are configured with the same IP and MAC address. We recommend this model for edge-routed bridging overlays.
A benefit of this model is that only a single IP address is required per subnet for default gateway IRB interface addressing, which simplifies default gateway configuration on end systems.
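
The two recommended models map to configuration along the following lines; these are minimal sketches with placeholder addresses and MAC values, not the exact values used in this design.

  # Centrally routed bridging: unique IRB address plus virtual gateway (on each spine)
  set interfaces irb unit 100 family inet address 10.1.100.2/24 virtual-gateway-address 10.1.100.1
  set interfaces irb unit 100 virtual-gateway-v4-mac 00:00:5e:00:01:64

  # Edge-routed bridging: anycast IRB, same address and MAC on every leaf
  set interfaces irb unit 100 family inet address 10.1.100.1/24
  set interfaces irb unit 100 mac 00:00:5e:00:53:64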

Routed Overlay using EVPN Type 5 Routes


The final overlay option is a routed overlay, as shown in Figure 8.

Figure 8: Routed Overlay

This option is an IP-routed virtual network service. Unlike an MPLS-based IP VPN, the virtual network in this model is based on EVPN/VXLAN.

Cloud providers prefer this virtual network option because most modern applications are optimized for IP. Because all communication between devices happens at the IP layer, there is no need to use any Ethernet bridging components, such as VLANs and ESIs, in this routed overlay model.

For information on implementing a routed overlay, see Routed Overlay Design and Implementation.
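
A routed overlay is typically realized with EVPN Type 5 (IP prefix) routes in a tenant VRF. The following is a minimal sketch; the interface, addresses, VNI, and targets are placeholder assumptions.

  # Leaf: tenant VRF advertising IP prefixes as EVPN Type 5 routes over VXLAN
  set interfaces xe-0/0/10 unit 0 family inet address 10.30.0.0/31
  set routing-instances TENANT-1 instance-type vrf
  set routing-instances TENANT-1 interface xe-0/0/10.0
  set routing-instances TENANT-1 route-distinguisher 192.168.1.11:500
  set routing-instances TENANT-1 vrf-target target:65100:500
  set routing-instances TENANT-1 protocols evpn ip-prefix-routes advertise direct-nexthop
  set routing-instances TENANT-1 protocols evpn ip-prefix-routes encapsulation vxlan
  set routing-instances TENANT-1 protocols evpn ip-prefix-routes vni 95000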

Multihoming Support for Ethernet-Connected End Systems


Figure 9: Ethernet-Connected End System Multihoming

Ethernet-connected multihoming allows Ethernet-connected end systems to connect into the Ethernet overlay network over a single-homed link to one VTEP device or over multiple links multihomed to different VTEP devices. Ethernet traffic is load-balanced across the fabric between VTEPs on leaf devices that connect to the same end system.

We tested setups where an Ethernet-connected end system was connected to a single leaf device or multihomed to 2 or 3 leaf devices to prove traffic is properly handled in multihomed setups with more than two leaf VTEP devices; in practice, an Ethernet-connected end system can be multihomed to a large number of leaf VTEP devices. All links are active and network traffic can be load balanced over all of the multihomed links.

In this architecture, EVPN is used for Ethernet-connected multihoming. EVPN multihomed LAGs are identified by an Ethernet segment identifier (ESI) in the EVPN bridging overlay, while LACP is used to improve LAG availability.
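
On each leaf that attaches to the same multihomed end system, the LAG is given the same ESI and LACP system ID. A minimal sketch with placeholder interface names and identifiers:

  # Leaf: ESI-LAG toward a multihomed end system (same ESI and LACP system-id on all attached leafs)
  set chassis aggregated-devices ethernet device-count 20
  set interfaces xe-0/0/10 ether-options 802.3ad ae11
  set interfaces ae11 esi 00:11:11:11:11:11:11:11:11:11
  set interfaces ae11 esi all-active
  set interfaces ae11 aggregated-ether-options lacp active
  set interfaces ae11 aggregated-ether-options lacp system-id 00:00:00:11:11:11
  set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
  set interfaces ae11 unit 0 family ethernet-switching vlan members BD-100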

VLAN trunking allows one interface to support multiple VLANs. VLAN trunking ensures that virtual machines (VMs) on non-overlay hypervisors can operate in any overlay networking context.

For more information about Ethernet-connected multihoming support, see Multihoming an Ethernet-Connected End System Design and Implementation.

Multihoming Support for IP-Connected End Systems


Figure 10: IP-Connected End System Multihoming


IP-connected multihoming allows endpoint systems to connect to the IP network over multiple IP-based access interfaces on different leaf devices.

We tested setups where an IP-connected end system was connected to a single leaf or multihomed to 2 or 3 leaf devices. The setup validated that traffic is properly handled when multihomed to multiple leaf devices; in practice, an IP-connected end system can be multihomed to a large number of leaf devices.

In multihomed setups, all links are active and network traffic is forwarded and received over all multihomed links. IP traffic is load balanced across the multihomed links using a simple hashing algorithm.

EBGP is used to exchange routing information between the IP-connected endpoint system and the connected leaf devices to ensure the route or routes to the endpoint systems are shared with all spine and leaf devices.
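
For illustration only, the leaf-to-end-system peering can look like the following, with the end system advertising its routes into the fabric; the addresses and AS numbers are placeholder assumptions.

  # Leaf: EBGP session to an IP-connected end system
  set interfaces xe-0/0/20 unit 0 family inet address 10.40.0.0/31
  set protocols bgp group HOSTS type external
  set protocols bgp group HOSTS family inet unicast
  set protocols bgp group HOSTS neighbor 10.40.0.1 peer-as 65201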

For more information about the IP-connected multihoming building block, see Multihoming an IP-Connected End System Design and Implementation.

Border Devices
Some of our reference designs include border devices that provide connections to the following devices, which are external to the local IP fabric:

A multicast gateway.

A data center gateway for data center interconnect (DCI).

A device such as an SRX router on which multiple services such as firewalls, Network Address Translation (NAT), intrusion detection and prevention (IDP), multicast, and so on are consolidated. The consolidation of multiple services onto one physical device is known as service chaining.

Appliances or servers that act as firewalls, DHCP servers, sFlow collectors, and so on.

NOTE: If your network includes legacy appliances or servers that require a 1-Gbps Ethernet connection to a border device, we recommend using a QFX10008 or a QFX5120 switch as the border device.

To provide the additional functionality described above, Juniper Networks supports deploying a border device in the following ways:

As a device that serves as a border device only. In this dedicated role, you can configure the device to handle one or more of the tasks described above. For this situation, the device is typically deployed as a border leaf, which is connected to a spine device.
For example, in the edge-routed bridging overlay shown in Figure 11, border leafs L5 and L6 provide connectivity to data center gateways for DCI, an sFlow collector, and a DHCP server.

As a device that has two roles—a network underlay device and a border device that can handle one or more of the tasks described above. For this situation, a spine device usually handles the two roles. Therefore, the border device functionality is referred to as a border spine.
For example, in the edge-routed bridging overlay shown in Figure 12, border spines S1 and S2 function as lean spine devices. They also provide connectivity to data center gateways for DCI, an sFlow collector, and a DHCP server.

Figure 11: Sample Edge-Routed Bridging Topology with Border Leafs


Figure 12: Sample Edge-Routed Bridging Topology with Border Spines

Data Center Interconnect (DCI)


The data center interconnect (DCI) building block provides the technology needed to send traffic between data centers. The validated design supports DCI using EVPN Type 5 routes or IPVPN routes.

EVPN Type 5 or IPVPN routes are used in a DCI context to ensure that traffic between data centers that use different IP address subnetting schemes can be exchanged. Routes are exchanged between spine devices in different data centers to allow for the passing of traffic between data centers.

Figure 13: DCI Using EVPN Type 5 Routes Topology Overview


Physical connectivity between the data centers is required before EVPN Type 5 messages or IPVPN routes can be sent between data centers. The physical connectivity is provided by backbone devices in a WAN cloud. A backbone device is connected to all spine devices in a single data center, as well as to the other backbone devices that are connected to the other data centers.

For information about configuring DCI, see:

Data Center Interconnect Design and Implementation Using Type 5 Routes

Data Center Interconnect Design and Implementation Using IPVPN

Service Chaining
In many networks, it is common for traffic to flow through separate hardware devices that each provide a service, such as firewalls, NAT, IDP, multicast, and so on. Each device requires separate operation and management. This method of linking multiple network functions can be thought of as physical service chaining.

A more efficient model for service chaining is to virtualize and consolidate network functions onto a single device. In our blueprint architecture, we are using the SRX Series routers as the device that consolidates network functions and processes and applies services. That device is called a physical network function (PNF).

In this solution, service chaining is supported on both centrally routed bridging overlays and edge-routed bridging overlays. It works only for inter-tenant traffic.

Logical View of Service Chaining

Figure 14 shows a logical view of service chaining. It shows one spine with a right side configuration and a left side configuration. On each side is a VRF routing instance and an IRB interface. The SRX Series router in the center is the PNF, and it performs the service chaining.

Figure 14: Service Chaining Logical View


The flow of traffic in this logical view is:

1. The spine receives a packet on the VTEP that is in the left side VRF.

2. The packet is decapsulated and sent to the left side IRB interface.

3. The IRB interface routes the packet to the SRX Series router, which is acting as the PNF.

4. The SRX Series router performs service chaining on the packet and forwards the packet back to the spine, where it is received on the IRB interface shown on the right side of the spine.

5. The IRB interface routes the packet to the VTEP in the right side VRF.

For information about configuring service chaining, see Service Chaining Design and Implementation.

Multicast Optimizations

Multicast optimizations help to preserve bandwidth and more efficiently route traffic in a multicast scenario in EVPN VXLAN environments. Without any multicast optimizations configured, all multicast replication is done at the ingress of the leaf connected to the multicast source, as shown in Figure 15. Multicast traffic is sent to all leaf devices that are connected to the spine. Each leaf device sends traffic to connected hosts.

Figure 15: Multicast without Optimizations


There are three types of multicast optimizations supported in EVPN VXLAN environments:

IGMP Snooping

Selective Multicast Forwarding

Assisted Replication of Multicast Traffic

For information about multicast support, see Multicast Support in EVPN-VXLAN Overlay Networks.

For information about configuring multicast, see Multicast Optimization Design and Implementation.

IGMP Snooping
IGMP snooping in an EVPN-VXLAN fabric is useful to optimize the distribution of multicast traffic. IGMP snooping preserves bandwidth because multicast traffic is forwarded only on interfaces where there are IGMP listeners. Not all interfaces on a leaf device need to receive multicast traffic.

Without IGMP snooping, end systems receive IP multicast traffic that they have no interest in, which needlessly floods their links with unwanted traffic. In some cases when IP multicast flows are large, flooding unwanted traffic causes denial-of-service issues.

Figure 16 shows how IGMP snooping works in an EVPN-VXLAN fabric. In this sample EVPN-VXLAN fabric, IGMP snooping is configured on all leaf devices, and multicast receiver 2 has previously sent an IGMPv2 join request.

1. Multicast Receiver 2 sends an IGMPv2 leave request.

2. Multicast Receivers 3 and 4 send an IGMPv2 join request.

3. When leaf 1 receives ingress multicast traffic, it replicates it for all leaf devices, and forwards it to the spine.

4. The spine forwards the traffic to all leaf devices.

5. Leaf 2 receives the multicast traffic, but does not forward it to the receiver because the receiver sent an IGMP leave message.

Figure 16: Multicast with IGMP Snooping

In EVPN-VXLAN networks, only IGMP version 2 is supported.

For more information about IGMP snooping, see Overview of Multicast Forwarding with IGMP Snooping in an EVPN-VXLAN Environment.
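
Enabling IGMP snooping on a leaf device is a single statement per VLAN; the VLAN name below is a placeholder.

  # Leaf: enable IGMP snooping for a VXLAN-mapped VLAN
  set protocols igmp-snooping vlan BD-100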

Selective Multicast Forwarding

Selective multicast Ethernet (SMET) forwarding provides greater end-to-end network efficiency and reduces traffic in the EVPN network. It conserves bandwidth usage in the core of the fabric and reduces the load on egress devices that do not have listeners.

Devices with IGMP snooping enabled use selective multicast forwarding to forward multicast traffic in an efficient way. With IGMP snooping enabled, a leaf device sends multicast traffic only to the access interface with an interested receiver. With SMET added, the leaf device selectively sends multicast traffic to only the leaf devices in the core that have expressed an interest in that multicast group.

Figure 17 shows the SMET traffic flow along with IGMP snooping.

1. Multicast Receiver 2 sends an IGMPv2 leave request.

2. Multicast Receivers 3 and 4 send an IGMPv2 join request.

3. When leaf 1 receives ingress multicast traffic, it replicates the traffic only to leaf devices with interested receivers (leaf devices 3 and 4), and forwards it to the spine.

4. The spine forwards the traffic to leaf devices 3 and 4.

Figure 17: Selective Multicast Forwarding with IGMP Snooping

You do not need to enable SMET; it is enabled by default when IGMP snooping is configured on the device.

For more information about SMET, see Overview of Selective Multicast Forwarding.

Assisted Replication of Multicast Traffic

The assisted replication (AR) feature offloads EVPN-VXLAN fabric leaf devices from ingress replication tasks. The ingress leaf does not replicate multicast traffic. It sends one copy of the multicast traffic to a spine that is configured as an AR replicator device. The AR replicator device distributes and controls multicast traffic. In addition to reducing the replication load on the ingress leaf devices, this method conserves bandwidth in the fabric between the leaf and the spine.

Figure 18 shows how AR works along with IGMP snooping and SMET.

1. Leaf 1, which is set up as the AR leaf device, receives multicast traffic and sends one copy to the spine that is set up as the AR replicator device.

2. The spine replicates the multicast traffic. It replicates traffic for leaf devices that are provisioned with the VLAN VNI in which the multicast traffic originated from Leaf 1. Because we have IGMP snooping and SMET configured in the network, the spine sends the multicast traffic only to leaf devices with interested receivers.

Figure 18: Multicast with AR, IGMP Snooping, and SMET

In this document, we are showing multicast optimizations on a small scale. In a full-scale network with many spines and leafs, the benefits of the optimizations are much more apparent.

Ingress Virtual Machine Traffic Optimization for EVPN

When virtual machines and hosts are moved within a data center or from one data center to another, network traffic can become inefficient if the traffic is not routed to the optimal gateway. This can happen when a host is relocated. The ARP table does not always get flushed and data flow to the host is sent to the configured gateway even when there is a more optimal gateway. The traffic is "tromboned" and routed unnecessarily to the configured gateway.


Ingress Virtual Machine Traffic Optimization (VMTO) provides greater network efficiency, optimizes ingress traffic, and can eliminate the trombone effect between VLANs. When you enable ingress VMTO, routes are stored in a Layer 3 virtual routing and forwarding (VRF) table and the device routes inbound traffic directly back to the host that was relocated.

Figure 19 shows tromboned traffic without ingress VMTO and optimized traffic with ingress VMTO enabled.

Without ingress VMTO, Spines 1 and 2 in both DC1 and DC2 advertise the remote IP host route 10.0.0.1 even though the host resides in DC2. Ingress traffic can therefore be directed to Spine 1 or 2 in DC1 and is then routed to Spine 1 or 2 in DC2, where host 10.0.0.1 was moved. This causes the tromboning effect.

With ingress VMTO, we can achieve an optimal forwarding path by configuring a policy for the IP host route (10.0.0.1) to be advertised only by Spines 1 and 2 in DC2, and not from DC1, when the IP host is moved to DC2.

Figure 19: Traffic with and without Ingress VMTO


For information about configuring VMTO, see Configuring VMTO.
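
As a minimal sketch, ingress VMTO is enabled in Junos with the remote-ip-host-routes statement under the EVPN protocol; whether it is applied in the default instance or a specific routing instance, and any accompanying advertisement policy, depends on your design.

  # Gateway device: keep remote IP host routes learned through EVPN (ingress VMTO)
  set protocols evpn remote-ip-host-routes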

DHCP Relay
Figure 20: DHCP Relay in Centrally Routed Bridging Overlay


The Dynamic Host Configuration Protocol (DHCP) relay building block allows the network to pass DHCP messages between a DHCP client and a DHCP server. The DHCP relay implementation in this building block moves DHCP packets through a centrally routed bridging overlay where the gateway is located at the spine layer.

The DHCP server and the DHCP clients connect into the network using access interfaces on leaf devices. The DHCP server and clients can communicate with each other over the existing network without further configuration when the DHCP client and server are in the same VLAN. When a DHCP client and server are in different VLANs, DHCP traffic between the client and server is forwarded between the VLANs via the IRB interfaces on spine devices. You must configure the IRB interfaces on the spine devices to support DHCP relay between VLANs.

For information about implementing the DHCP relay, see DHCP Relay Design and Implementation.
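
On a spine acting as the gateway, DHCP relay is tied to the IRB interfaces and a server group; the names and addresses below are placeholder assumptions.

  # Spine: relay DHCP requests arriving on irb.100 to a DHCP server
  set forwarding-options dhcp-relay server-group DHCP-SERVERS 10.1.200.10
  set forwarding-options dhcp-relay group RELAY-V100 active-server-group DHCP-SERVERS
  set forwarding-options dhcp-relay group RELAY-V100 interface irb.100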

Reducing ARP Traffic with ARP Synchronization and Suppression (Proxy ARP)
The goal of ARP synchronization is to synchronize ARP tables across all the VRFs that serve an overlay subnet to reduce the amount of traffic and optimize processing for both network devices and end systems. When an IP gateway for a subnet learns about an ARP binding, it shares it with other gateways so they do not need to discover the same ARP binding independently.


With ARP suppression, when a leaf device receives an ARP request, it checks its own ARP table that is synchronized with the other VTEP devices and responds to the request locally rather than flooding the ARP request.

Proxy ARP and ARP suppression are enabled by default on all QFX Series switches that can act as leaf devices in an edge-routed bridging overlay. For a list of these switches, see Data Center Fabric Reference Design Hardware and Software Summary.

IRB interfaces on the leaf device deliver ARP requests and NDP requests from both local and remote leaf devices. When a leaf device receives an ARP request or NDP request from another leaf device, the receiving device searches its MAC+IP address bindings database for the requested IP address.

If the device finds the MAC+IP address binding in its database, it responds to the request.

If the device does not find the MAC+IP address binding, it floods the ARP request to all Ethernet links in the VLAN and the associated VTEPs.

Because all participating leaf devices add the ARP entries and synchronize their routing and bridging tables, local leaf devices respond directly to requests from locally connected hosts and remove the need for remote devices to respond to these ARP requests.

For information about implementing ARP synchronization, Proxy ARP, and ARP suppression, see Enabling Proxy ARP and ARP Suppression for the Edge-Routed Bridging Overlay.

Layer 2 Port Security Features on Ethernet-Connected End Systems
Centrally routed bridging overlays and edge-routed bridging overlays support the following security features on Layer 2 Ethernet-connected end systems:

Preventing BUM Traffic Storms With Storm Control

Using MAC Filtering to Enhance Port Security

Analyzing Traffic Using Port Mirroring

For more information about these features, see MAC Filtering, Storm Control, and Port Mirroring Support in an EVPN-VXLAN Environment.

For information about configuring these features, see Configuring Layer 2 Port Security Features on Ethernet-Connected End Systems.


Preventing BUM Traffic Storms With Storm Control

Storm control can prevent excessive traffic from degrading the network. It lessens the impact of BUM traffic storms by monitoring traffic levels on EVPN-VXLAN interfaces, and dropping BUM traffic when a specified traffic level is exceeded.

In an EVPN-VXLAN environment, storm control monitors:

Layer 2 BUM traffic that originates in a VXLAN and is forwarded to interfaces within the same VXLAN.

Layer 3 multicast traffic that is received by an IRB interface in a VXLAN and is forwarded to interfaces in another VXLAN.
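
Storm control is applied by defining a profile and binding it to access interfaces; the bandwidth percentage and interface below are placeholder assumptions.

  # Define a storm control profile and apply it to an access interface
  set forwarding-options storm-control-profiles SC-BUM all bandwidth-percentage 5
  set interfaces xe-0/0/10 unit 0 family ethernet-switching storm-control SC-BUM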

Using MAC Filtering to Enhance Port Security

MAC filtering enhances port security by limiting the number of MAC addresses that can be learned within a VLAN and therefore limiting the traffic in a VXLAN. Limiting the number of MAC addresses protects the switch from flooding the Ethernet switching table. Flooding of the Ethernet switching table occurs when the number of new MAC addresses that are learned causes the table to overflow, and previously learned MAC addresses are flushed from the table. The switch relearns the MAC addresses, which can impact performance and introduce security vulnerabilities.

In this blueprint, MAC filtering limits the number of accepted packets that are sent to ingress-facing access interfaces based on MAC addresses. For more information about how MAC filtering works, see the MAC limiting information in Understanding MAC Limiting and MAC Move Limiting.
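
As a rough illustration, MAC limiting can be applied per access interface; the interface name, limit, and action are placeholder assumptions rather than values from this guide.

  # Limit the number of MAC addresses learned on an access interface
  set switch-options interface xe-0/0/10 interface-mac-limit 100
  set switch-options interface xe-0/0/10 interface-mac-limit packet-action drop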

Analyzing Traffic Using Port Mirroring

With analyzer-based port mirroring, you can analyze traffic down to the packet level in an EVPN-VXLAN environment. You can use this feature to enforce policies related to network usage and file sharing and to identify problem sources by locating abnormal or heavy bandwidth usage by particular stations or applications.

Port mirroring copies packets entering or exiting a port or entering a VLAN and sends the copies to a local interface for local monitoring or to a VLAN for remote monitoring. Use port mirroring to send traffic to applications that analyze traffic for purposes such as monitoring compliance, enforcing policies, detecting intrusions, monitoring and predicting traffic patterns, correlating events, and so on.
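
A local analyzer session that mirrors traffic from an access interface to a monitoring port can be sketched as follows; the analyzer name and interfaces are placeholder assumptions.

  # Mirror traffic entering xe-0/0/10 to a locally attached analyzer port
  set forwarding-options analyzer A-1 input ingress interface xe-0/0/10.0
  set forwarding-options analyzer A-1 output interface xe-0/0/30.0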

RELATED DOCUMENTATION


Infrastructure as a Service: EVPN and VXLAN Solution Guide

Juniper Networks EVPN Implementation for Next-Generation Data Center Architectures

