Data Center Fabric Blueprint Architecture Components
This section gives an overview of the building blocks used in this blueprint architecture. The implementation of each building block technology is explored in more detail in later sections.
For information about the hardware and software that serve as a foundation for your building blocks, see the Data Center Fabric Reference Design Hardware and Software Summary.
A 5-stage IP fabric typically starts as a single 3-stage IP fabric that grows into two 3-stage IP fabrics. These fabrics are segmented into separate points of delivery (PODs) within a data center. For this use case, we support the addition of a tier of super spine devices that enables communication between the spine and leaf devices in the two PODs. See Figure 2.
As shown in both figures, the devices are interconnected using high-speed interfaces that are either single links or aggregated Ethernet interfaces. The aggregated Ethernet interfaces are optional (a single link between devices is typically used) but can be deployed to increase bandwidth and provide link-level redundancy. Both options are covered.
We chose EBGP as the routing protocol in the underlay network for its dependability and scalability. Each device is assigned its own autonomous system with a unique autonomous system number to support EBGP. You can use other routing protocols in the underlay network; the usage of those protocols is beyond the scope of this document.
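A rough sketch of this approach on a leaf device follows; the AS numbers, addresses, and the EXPORT-LO0 policy name are placeholders rather than the validated reference configuration.

    # Placeholder sketch: EBGP underlay peering on a leaf, one session per fabric link
    set routing-options router-id 192.168.1.11
    set policy-options policy-statement EXPORT-LO0 term loopback from interface lo0.0
    set policy-options policy-statement EXPORT-LO0 term loopback then accept
    set protocols bgp group underlay-bgp type external
    set protocols bgp group underlay-bgp export EXPORT-LO0
    set protocols bgp group underlay-bgp local-as 65011
    set protocols bgp group underlay-bgp multipath multiple-as
    set protocols bgp group underlay-bgp neighbor 172.16.1.1 peer-as 65001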
Micro Bidirectional Forwarding Detection (BFD), the ability to run BFD on individual links in an aggregated Ethernet interface, can also be enabled in this building block to quickly detect link failures on any member links in aggregated Ethernet bundles that connect devices.
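A minimal sketch of micro BFD on an aggregated Ethernet bundle is shown below; the timer values and the neighbor and local addresses are illustrative assumptions.

    # Placeholder sketch: run BFD on each member link of ae1
    set interfaces ae1 aggregated-ether-options bfd-liveness-detection minimum-interval 100
    set interfaces ae1 aggregated-ether-options bfd-liveness-detection multiplier 3
    set interfaces ae1 aggregated-ether-options bfd-liveness-detection neighbor 172.16.1.1
    set interfaces ae1 aggregated-ether-options bfd-liveness-detection local-address 172.16.1.0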
For information about implementing spine and leaf devices in 3-stage and 5-stage IP fabric underlays, see IP Fabric Underlay Network Design and Implementation. For information about implementing the additional tier of super spine devices in a 5-stage IP fabric underlay, see Five-Stage IP Fabric Design and Implementation.
A tenant is a user community (such as a business unit, department, workgroup, or application) that contains groups of endpoints. Groups may communicate with other groups in the same tenancy, and tenants may communicate with other tenants if permitted by network policies. A group is typically expressed as a subnet (VLAN) that can communicate with other devices in the same subnet, and reach external groups and endpoints by way of a virtual routing and forwarding (VRF) instance.
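To make the tenant construct concrete, the following hypothetical snippet defines a tenant VRF that ties two tenant subnets (IRB interfaces) together; all names and values are placeholders.

    # Placeholder sketch: a tenant VRF containing two groups (subnets)
    set routing-instances TENANT-1 instance-type vrf
    set routing-instances TENANT-1 interface irb.100
    set routing-instances TENANT-1 interface irb.200
    set routing-instances TENANT-1 route-distinguisher 192.168.1.11:100
    set routing-instances TENANT-1 vrf-target target:65000:100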
As seen in the overlay example shown in Figure 3, Ethernet bridging tables (represented by triangles) handle tenant bridged frames and IP routing tables (represented by squares) process routed packets. Inter-VLAN routing happens at the integrated routing and bridging (IRB) interfaces (represented by circles). Ethernet and IP tables are directed into virtual networks (represented by colored lines). To reach end systems attached to other VXLAN tunnel endpoint (VTEP) devices, tenant packets are encapsulated and sent over an EVPN-signalled VXLAN tunnel (represented by green tunnel icons) to the associated remote VTEP devices. Tunneled packets are de-encapsulated at the remote VTEP devices and forwarded to the remote end systems by way of the respective bridging or routing tables of the egress VTEP device.
The following sections provide more details about overlay networks:
Bridged Overlay
Figure 4 shows that the spine and leaf devices use their loopback addresses for peering in a single autonomous system. In this design, the spine devices act as a route reflector cluster and the leaf devices are route reflector clients. Use of a route reflector satisfies the IBGP requirement for a full mesh without the need to peer all the VTEP devices directly with one another. As a result, the leaf devices peer only with the spine devices, and the spine devices peer with both spine devices and leaf devices. Because the spine devices are connected to all the leaf devices, the spine devices can relay IBGP information between the indirectly peered leaf device neighbors.
You can place route reflectors almost anywhere in the network. However, you must consider the
following:
Does the selected device have enough memory and processing power to handle the additional workload required by a route reflector?
Is the selected device equidistant and reachable from all EVPN speakers?
Does the selected device have the proper software capabilities?
In this design, the route reflector cluster is placed at the spine layer. The QFX switches that you can use as a spine in this reference design have ample processing speed to handle route reflector client traffic in the network virtualization overlay.
For details about implementing IBGP in an overlay, see Configuring IBGP for the Overlay.
Bridged Overlay
The first overlay service type described in this guide is a bridged overlay, as shown in Figure 5.
In this overlay model, Ethernet VLANs are extended between leaf devices across VXLAN tunnels. These leaf-to-leaf VXLAN tunnels support data center networks that require Ethernet connectivity between leaf devices but do not need routing between the VLANs. As a result, the spine devices provide only basic underlay and overlay connectivity for the leaf devices, and do not perform routing or gateway services seen with other overlay methods.
Leaf devices originate VTEPs to connect to the other leaf devices. The tunnels enable the leaf devices to send VLAN traffic to other leaf devices and Ethernet-connected end systems in the data center. The simplicity of this overlay service makes it attractive for operators who need an easy way to introduce EVPN/VXLAN into their existing Ethernet-based data center.
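As an illustrative sketch only, a bridged overlay leaf in the default switching instance could map a VLAN to a VXLAN VNI as follows; the VLAN, VNI, and route target values are placeholders rather than the validated reference configuration.

    # Placeholder sketch: bridged overlay on a leaf, VLAN 100 stretched over VNI 10100
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.1.11:1
    set switch-options vrf-target target:65000:1
    set vlans BD-100 vlan-id 100
    set vlans BD-100 vxlan vni 10100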
For information on implementing a bridged overlay, see Bridged Overlay Design and Implementation.
In a centrally routed bridging overlay, routing occurs at a central gateway of the data center network (the spine layer in this example) rather than at the VTEP device where the end systems are connected (the leaf layer in this example).
You can use this overlay model when you need routed traffic to go through a centralized gateway or when your edge VTEP devices lack the required routing capabilities.
As shown above, traffic that originates at the Ethernet-connected end systems is forwarded to the leaf VTEP devices over a trunk (multiple VLANs) or an access port (single VLAN). The VTEP device forwards the traffic to local end systems or to an end system at a remote VTEP device. An integrated routing and bridging (IRB) interface at each spine device helps route traffic between the Ethernet virtual networks.
EVPN supports two VLAN-aware Ethernet service models in the data center. Juniper Networks supports both models. They are as follows:
1. Default Instance VLAN-Aware—In this option, a single, default switching instance supports up to 4094 VLANs.
2. Virtual Switch VLAN-Aware—In this option, multiple virtual switch instances support 4094 VLANs per instance. This Ethernet service model is ideal for overlay networks that require scalability beyond a single default instance. Support for this option is currently available on the QFX10000 line of switches.
To implement this scalable service model, see Configuring a VLAN-Aware Centrally-
Routed Bridging Overlay with Virtual Switches.
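A hypothetical virtual switch instance of this kind might look like the following; the instance name, VLAN, VNI, and targets are placeholders rather than the validated configuration.

    # Placeholder sketch: one virtual switch instance with a VLAN-aware bridge domain
    set routing-instances VS-1 instance-type virtual-switch
    set routing-instances VS-1 vtep-source-interface lo0.0
    set routing-instances VS-1 route-distinguisher 192.168.0.1:100
    set routing-instances VS-1 vrf-target target:65000:100
    set routing-instances VS-1 protocols evpn encapsulation vxlan
    set routing-instances VS-1 protocols evpn extended-vni-list all
    set routing-instances VS-1 vlans VLAN-100 vlan-id 100
    set routing-instances VS-1 vlans VLAN-100 l3-interface irb.100
    set routing-instances VS-1 vlans VLAN-100 vxlan vni 10100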
In an edge-routed bridging overlay, the IRB interfaces are moved to leaf device VTEPs at the edge of the overlay network to bring IP routing closer to the end systems. Because of the special ASIC capabilities required to support bridging, routing, and EVPN/VXLAN in one device, edge-routed bridging overlays are possible only on certain switches. For a list of switches that we support as leaf devices in an edge-routed bridging overlay, see Data Center Fabric Reference Design Hardware and Software Summary.
This model allows for a simpler overall network. The spine devices are configured to handle only IP traffic, which removes the need to extend the bridging overlays to the spine devices.
This option also enables faster server-to-server, intra-data center traffic (also known as east-west traffic) where the end systems are connected to the same leaf device VTEP. As a result, routing happens much closer to the end systems than with centrally routed bridging overlays.
NOTE: When a QFX5110 or QFX5120 switch that functions as a leaf device is configured with IRB interfaces that are included in EVPN Type-5 routing instances, symmetric inter-IRB unicast routing is automatically enabled.
For information on implementing the edge-routed bridging overlay, see Edge-Routed Bridging Overlay Design and Implementation.
Table 1: Comparison of Bridged, Centrally Routed Bridging, and Edge-Routed Bridging Overlays
Unique IRB IP Address—In this model, a unique IP address is configured on each IRB
interface in an overlay subnet.
The benefit of having a unique IP address and MAC address on each IRB interface is the
ability to monitor and reach each of the IRB interfaces from within the overlay using its
unique IP address. This model also allows you to configure a routing protocol on the IRB interface.
The downside of this model is that allocating a unique IP address to each IRB interface may consume many IP addresses of a subnet.
Unique IRB IP Address with Virtual Gateway IP Address—This model adds a virtual gateway IP address to the previous model, and we recommend it for centrally routed bridging overlays. It is similar to VRRP, but without the in-band data plane signaling between the gateway IRB interfaces. The virtual gateway should be the same for all default gateway IRB interfaces in the overlay subnet and is active on all gateway IRB interfaces where it is configured. You should also configure a common IPv4 MAC address for the virtual gateway, which becomes the source MAC address on data packets forwarded over the IRB interface.
In addition to the benefits of the previous model, the virtual gateway simplifies default gateway configuration on end systems. The downside of this model is the same as the previous model.
IRB with Anycast IP Address and MAC Address—In this model, all default gateway IRB
interfaces in an overlay subnet are configured with the same IP and MAC address. We
recommend this model for edge-routed bridging overlays.
A benefit of this model is that only a single IP address is required per subnet for default
gateway IRB interface addressing, which simplifies default gateway configuration on end
systems.
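The two recommended gateway models can be sketched as follows; the addresses and MAC values are placeholder assumptions, with the virtual gateway form typically used on centrally routed bridging spines and the anycast form on edge-routed bridging leaves.

    # Placeholder sketch: unique IRB address plus virtual gateway (centrally routed bridging)
    set interfaces irb unit 100 family inet address 10.1.100.2/24 virtual-gateway-address 10.1.100.1
    set interfaces irb unit 100 virtual-gateway-v4-mac 00:00:5e:00:01:01

    # Placeholder sketch: anycast IP and MAC, identical on every gateway IRB (edge-routed bridging)
    set interfaces irb unit 200 family inet address 10.1.200.1/24
    set interfaces irb unit 200 mac 00:00:5e:00:02:01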
The routed overlay option is an IP-routed virtual network service. Unlike an MPLS-based IP VPN, the virtual network in this model is based on EVPN/VXLAN.
Cloud providers prefer this virtual network option because most modern applications are optimized for IP. Because all communication between devices happens at the IP layer, there is no need to use any Ethernet bridging components, such as VLANs and ESIs, in this routed overlay model.
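A rough sketch of a routed overlay VRF that advertises tenant prefixes as EVPN Type 5 routes follows; the VRF name, interface, VNI, and route targets are assumptions, not the validated reference configuration.

    # Placeholder sketch: EVPN Type 5 (pure IP) routed overlay VRF on a leaf
    set routing-instances VRF-RT-1 instance-type vrf
    set routing-instances VRF-RT-1 interface xe-0/0/10.0
    set routing-instances VRF-RT-1 route-distinguisher 192.168.1.11:300
    set routing-instances VRF-RT-1 vrf-target target:65000:300
    set routing-instances VRF-RT-1 protocols evpn ip-prefix-routes advertise direct-nexthop
    set routing-instances VRF-RT-1 protocols evpn ip-prefix-routes encapsulation vxlan
    set routing-instances VRF-RT-1 protocols evpn ip-prefix-routes vni 9300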
For information on implementing a routed overlay, see Routed Overlay Design and Implementation.
We tested setups where an Ethernet-connected end system was connected to a single leaf device or multihomed to 2 or 3 leaf devices to prove that traffic is properly handled in multihomed setups with more than two leaf VTEP devices; in practice, an Ethernet-connected end system can be multihomed to a large number of leaf VTEP devices. All links are active, and network traffic can be load-balanced over all of the multihomed links.
In this architecture, EVPN is used for Ethernet-connected multihoming. EVPN multihomed LAGs are identified by an Ethernet segment identifier (ESI) in the EVPN bridging overlay, while LACP is used to improve LAG availability.
VLAN trunking allows one interface to support multiple VLANs. VLAN trunking ensures that virtual machines (VMs) on non-overlay hypervisors can operate in any overlay networking context.
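The following hypothetical snippet shows the general shape of an EVPN ESI-LAG trunk toward a multihomed end system; the ESI value, LACP system ID, and VLAN names are placeholders, and the same ESI and LACP system ID would be configured on every leaf device that shares the Ethernet segment.

    # Placeholder sketch: all-active ESI-LAG trunk on one of the multihoming leaf devices
    set interfaces ae11 esi 00:11:11:11:11:11:11:11:11:11
    set interfaces ae11 esi all-active
    set interfaces ae11 aggregated-ether-options lacp active
    set interfaces ae11 aggregated-ether-options lacp system-id 00:00:00:11:11:11
    set interfaces ae11 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae11 unit 0 family ethernet-switching vlan members [ BD-100 BD-200 ]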
For more information about Ethernet-connected multihoming support, see Multihoming an Ethernet-Connected End System Design and Implementation.
IP-connected multihoming enables endpoint systems to connect to the IP network over multiple IP-based access interfaces on different leaf devices.
We tested setups where an IP-connected end system was connected to a single leaf device or multihomed to 2 or 3 leaf devices. The setup validated that traffic is properly handled when multihomed to multiple leaf devices; in practice, an IP-connected end system can be multihomed to a large number of leaf devices.
In multihomed setups, all links are active and network traffic is forwarded and received over all multihomed links. IP traffic is load-balanced across the multihomed links using a simple hashing algorithm.
EBGP is used to exchange routing information between the IP-connected endpoint system and the connected leaf devices to ensure that the route or routes to the endpoint systems are shared with all spine and leaf devices.
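A minimal sketch of that leaf-to-server EBGP session, inside a tenant VRF, might look like the following; the VRF name, AS numbers, and addresses are assumed placeholder values.

    # Placeholder sketch: EBGP from a leaf to a multihomed, IP-connected end system
    set routing-instances TENANT-1 protocols bgp group servers type external
    set routing-instances TENANT-1 protocols bgp group servers family inet unicast
    set routing-instances TENANT-1 protocols bgp group servers multipath
    set routing-instances TENANT-1 protocols bgp group servers neighbor 10.1.101.2 peer-as 65201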
For more information about the IP-connected multihoming building block, see Multihoming an IP-Connected End System Design and Implementation.
Border Devices
Some of our reference designs include border devices that provide connections to the following devices, which are external to the local IP fabric:
A device such as an SRX router on which multiple services such as firewalls, Network Address Translation (NAT), intrusion detection and prevention (IDP), multicast, and so on are consolidated. The consolidation of multiple services onto one physical device is known as service chaining.
Appliances or servers that act as firewalls, DHCP servers, sFlow collectors, and so on.
To provide the additional functionality described above, Juniper Networks supports deploying a border device in the following ways:
As a device that serves as a border device only. In this dedicated role, you can configure the device to handle one or more of the tasks described above. For this situation, the device is typically deployed as a border leaf, which is connected to a spine device.
For example, in the edge-routed bridging overlay shown in Figure 11, border leafs L5 and L6 provide connectivity to data center gateways for DCI, an sFlow collector, and a DHCP server.
As a device that has two roles: a network underlay device and a border device that can handle one or more of the tasks described above. For this situation, a spine device usually handles the two roles. Therefore, the border device functionality is referred to as a border spine.
For example, in the edge-routed bridging overlay shown in Figure 12, border spines S1 and S2 function as lean spine devices. They also provide connectivity to data center gateways for DCI, an sFlow collector, and a DHCP server.
EVPN Type 5 or IPVPN routes are used in a DCI context to ensure that traffic can be exchanged between data centers that use different IP address subnetting schemes. Routes are exchanged between spine devices in different data centers to allow traffic to pass between data centers.
Physical connectivity between the data centers is required before EVPN Type 5 messages or IPVPN routes can be sent between data centers. The physical connectivity is provided by backbone devices in a WAN cloud. A backbone device is connected to all spine devices in a single data center, as well as to the other backbone devices that are connected to the other data centers.
Service Chaining
In many networks, it is common for traffic to flow through separate hardware devices that each provide a service, such as firewalls, NAT, IDP, multicast, and so on. Each device requires separate operation and management. This method of linking multiple network functions can be thought of as physical service chaining.
A more efficient model for service chaining is to virtualize and consolidate network functions onto a single device. In our blueprint architecture, we use SRX Series routers as the device that consolidates network functions and processes and applies services. That device is called a physical network function (PNF).
In this solution, service chaining is supported on both centrally routed bridging overlays and edge-routed bridging overlays. It works only for inter-tenant traffic.
1. The spine receives a packet on the VTEP that is in the left side VRF.
2. The packet is decapsulated and sent to the IRB interface in the left side VRF.
3. The IRB interface routes the packet to the SRX Series router, which is acting as the PNF.
4. The SRX Series router performs service chaining on the packet and forwards the packet back to the spine, where it is received on the IRB interface shown on the right side of the spine.
5. The IRB interface routes the packet to the VTEP in the right side VRF.
For information about configuring service chaining, see Service Chaining Design and Implementation.
There are three types of multicast optimizations supported in EVPN-VXLAN environments:
IGMP Snooping
Selective Multicast Forwarding (SMET)
Assisted Replication (AR)
For information about multicast support, see Multicast Support in EVPN-VXLAN Overlay Networks.
For information about configuring multicast, see Multicast Optimization Design and Implementation.
IGMP Snooping
IGMP snooping in an EVPN-VXLAN fabric is useful to optimize the distribution of multicast traffic. IGMP snooping preserves bandwidth because multicast traffic is forwarded only on interfaces where there are IGMP listeners. Not all interfaces on a leaf device need to receive multicast traffic.
Without IGMP snooping, end systems receive IP multicast traffic that they have no interest in, which needlessly floods their links with unwanted traffic. In some cases when IP multicast flows are large, flooding unwanted traffic causes denial-of-service issues.
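Enabling IGMP snooping is a one-line statement per VLAN on the leaf devices; the VLAN name below is a placeholder.

    # Placeholder sketch: enable IGMP snooping for one overlay VLAN on a leaf device
    set protocols igmp-snooping vlan BD-100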
Figure 16 shows how IGMP snooping works in an EVPN-VXLAN fabric. In this sample EVPN-VXLAN fabric, IGMP snooping is configured on all leaf devices, and multicast receiver 2 has sent an IGMP leave message to leave the multicast group.
3. When leaf 1 receives ingress multicast traffic, it replicates it for all leaf devices, and forwards it to the spine.
5. Leaf 2 receives the multicast traffic, but does not forward it to the receiver because the receiver sent an IGMP leave message.
For more information about IGMP snooping, see Overview of Multicast Forwarding with IGMP Snooping in an EVPN-VXLAN Environment.
Devices with IGMP snooping enabled use selective multicast forwarding to forward multicast traffic in an efficient way. With IGMP snooping enabled, a leaf device sends multicast traffic only to the access interface with an interested receiver. With SMET added, the leaf device selectively sends multicast traffic to only the leaf devices in the core that have expressed an interest in that multicast group.
Figure 17 shows the SMET traffic flow along with IGMP snooping.
3. When leaf 1 receives ingress multicast traffic, it replicates the traffic only to leaf devices with interested receivers (leaf devices 3 and 4), and forwards it to the spine.
You do not need to enable SMET; it is enabled by default when IGMP snooping is configured on the device.
For more information about SMET, see Overview of Selective Multicast Forwarding.
With assisted replication (AR), the ingress leaf device sends a single copy of the multicast traffic to a spine that acts as the AR replicator, and the spine replicates the traffic on behalf of the ingress leaf devices; this method conserves bandwidth in the fabric between the leaf and the spine.
Figure 18 shows how AR works along with IGMP snooping and SMET.
1. Leaf 1, which is set up as the AR leaf device, receives multicast traffic and sends one copy to the spine that is set up as the AR replicator device.
2. The spine replicates the multicast traffic. It replicates traffic for leaf devices that are provisioned with the VLAN VNI in which the multicast traffic originated from Leaf 1. Because we have IGMP snooping and SMET configured in the network, the spine sends the multicast traffic only to leaf devices with interested receivers.
Figure 18: Multicast with AR, IGMP Snooping, and SMET
In this document, we are showing multicast optimizations on a small scale. In a full-scale network with many spines and leafs, the benefits of the optimizations are much more apparent.
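As a rough sketch of how the AR roles could be assigned (the addresses are placeholders and the exact statements may vary by platform and release), the spine is configured as the AR replicator and each leaf as an AR leaf device:

    # Placeholder sketch: AR replicator role on the spine
    set protocols evpn assisted-replication replicator inet 192.168.102.1
    set protocols evpn assisted-replication replicator vxlan-encapsulation-source-ip ingress-replication-ip

    # Placeholder sketch: AR leaf role on each leaf device
    set protocols evpn assisted-replication leaf replicator-activation-delay 10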
Ingress Virtual Machine Traffic Optimization (VMTO) provides greater network efficiency, optimizes ingress traffic, and can eliminate the trombone effect between VLANs. When you enable ingress VMTO, routes are stored in a Layer 3 virtual routing and forwarding (VRF) table and the device routes inbound traffic directly to the host that was relocated.
Figure 19 shows tromboned traffic without ingress VMTO and optimized traffic with ingress VMTO enabled.
Without ingress VMTO, Spine 1 and Spine 2 in both DC1 and DC2 advertise the remote IP host route 10.0.0.1 even though the route originates in DC2. Ingress traffic can therefore be directed to Spine 1 or Spine 2 in DC1, and is then routed to Spine 1 or Spine 2 in DC2, where host 10.0.0.1 now resides. This causes the tromboning effect.
With ingress VMTO, we can achieve an optimal forwarding path by configuring a policy so that the IP host route (10.0.0.1) is advertised only by Spine 1 and Spine 2 in DC2, and not from DC1, when the IP host is moved to DC2.
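Ingress VMTO is enabled with a single EVPN statement on the gateway devices; a hypothetical sketch is shown below, and the exact placement (default EVPN instance or a routing instance) depends on the overlay design.

    # Placeholder sketch: enable ingress VMTO so remote IP host routes are installed in the L3 VRF
    set protocols evpn remote-ip-host-routes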
DHCP Relay
Figure 20: DHCP Relay in Centrally Routed Bridging Overlay
The Dynamic Host Configuration Protocol (DHCP) relay building block allows the network to pass DHCP messages between a DHCP client and a DHCP server. The DHCP relay implementation in this building block moves DHCP packets through a centrally routed bridging overlay where the gateway is located at the spine layer.
The DHCP server and the DHCP clients connect into the network using access interfaces on leaf devices. The DHCP server and clients can communicate with each other over the existing network without further configuration when the DHCP client and server are in the same VLAN. When a DHCP client and server are in different VLANs, DHCP traffic between the client and server is forwarded between the VLANs via the IRB interfaces on the spine devices. You must configure the IRB interfaces on the spine devices to support DHCP relay between VLANs.
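On the spine devices, the relay configuration ties the server-facing addresses to the client-facing IRB interfaces; the following sketch uses placeholder names and addresses, not the validated reference configuration.

    # Placeholder sketch: DHCP relay on a spine, relaying client requests from irb.100 to the server
    set forwarding-options dhcp-relay server-group DHCP-SERVERS 10.1.200.10
    set forwarding-options dhcp-relay group RELAY-VLAN-100 active-server-group DHCP-SERVERS
    set forwarding-options dhcp-relay group RELAY-VLAN-100 interface irb.100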
For information about implementing DHCP relay, see DHCP Relay Design and Implementation.
With ARP suppression, when a leaf device receives an ARP request, it checks its own ARP table
that is synchronized with the other VTEP devices and responds to the request locally rather than
flooding the ARP request.
Proxy ARP and ARP suppression are enabled by default on all QFX Series switches that can act
as leaf devices in an edge-routed bridging overlay. For a list of these switches, see Data Center
Fabric Reference Design Hardware and Software Summary.
IRB interfaces on the leaf device deliver ARP requests and NDP requests from both local and
remote leaf devices. When a leaf device receives an ARP request or NDP request from another
leaf device, the receiving device searches its MAC+IP address bindings database for the
requested IP address.
If the device finds the MAC+IP address binding in its database, it responds to the request.
If the device does not find the MAC+IP address binding, it floods the ARP request to all
Ethernet links in the VLAN and the associated VTEPs.
Because all participating leaf devices add the ARP entries and synchronize their routing and bridging tables, local leaf devices respond directly to requests from locally connected hosts and remove the need for remote devices to respond to these ARP requests.
For information about implementing ARP synchronization, Proxy ARP, and ARP suppression, see Enabling Proxy ARP and ARP Suppression for the Edge-Routed Bridging Overlay.
For more information about these features, see MAC Filtering, Storm Control, and Port Mirroring Support in an EVPN-VXLAN Environment.
For information about configuring these features, see Configuring Layer 2 Port Security Features on Ethernet-Connected End Systems.
Layer 2 BUM traffic that originates in a VXLAN and is forwarded to interfaces within the
same VXLAN.
Layer 3 multicast traffic that is received by an IRB interface in a VXLAN and is forwarded to interfaces in another VXLAN.
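Storm control rate limits for the BUM traffic types listed above are applied through a profile bound to an access interface; the profile name, percentage, and interface below are placeholder assumptions.

    # Placeholder sketch: limit BUM traffic to 5 percent of interface bandwidth on an access port
    set forwarding-options storm-control-profiles BUM-LIMIT all bandwidth-percentage 5
    set interfaces ae11 unit 0 family ethernet-switching storm-control BUM-LIMIT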
In this blueprint, MAC filtering limits the number of accepted packets that are sent to ingress-facing access interfaces based on MAC addresses. For more information about how MAC filtering works, see the MAC limiting information in Understanding MAC Limiting and MAC Move Limiting.
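MAC limiting is typically applied per access interface; a hypothetical sketch follows, with the interface name, limit, and packet action as assumptions.

    # Placeholder sketch: accept at most 100 learned MAC addresses on the access interface, drop the rest
    set switch-options interface ae11.0 interface-mac-limit 100
    set switch-options interface ae11.0 interface-mac-limit packet-action drop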
Port mirroring copies packets entering or exiting a port or entering a VLAN and sends the copies to a local interface for local monitoring or to a VLAN for remote monitoring. Use port mirroring to send traffic to applications that analyze traffic for purposes such as monitoring compliance, enforcing policies, detecting intrusions, monitoring and predicting traffic patterns, correlating events, and so on.
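A local port-mirroring session of this kind can be sketched with an analyzer; the analyzer name and interfaces are placeholders.

    # Placeholder sketch: mirror traffic entering ae11.0 to a locally attached monitoring port
    set forwarding-options analyzer MIRROR-1 input ingress interface ae11.0
    set forwarding-options analyzer MIRROR-1 output interface xe-0/0/47.0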