MULTICLOUD AS THE NEXT
GENERATION CLOUD
INFRASTRUCTURE
Deepti Chandra
Agenda
The “Application-aware” Cloud Principle
Problem Statement in Multicloud Deployment
SDN in the Multicloud
Building Blocks
Building the Private Cloud – DC Fabric
Building the Private Cloud – DC Interconnect (DCI)
Building the Private Cloud – WAN Integration
Building the Private Cloud – Traffic Optimization
THE APPLICATION-AWARE
CLOUD PRINCIPLE
The Big Picture: Cooperative Clouds
[Diagram] End users (people, vehicles, appliances, devices) USE applications, which are made of software components REQUIRING multiple environments (containers, VMs, BMS) RUNNING in multiple locations: embedded (e.g. in a device or vehicle), in a data center, in telco POPs, in the public cloud (VPCs), in a multi-site DC / private cloud, at a remote branch office, or at home. Cutting across all of it: connectivity, security, and manageability & operations (spanning CPE, firewall, IP fabric, VMs, containers, and BMS).
The Traditional Way
"I need to deliver a service to my users."
Applications and existing systems/apps: build, buy, or lease an execution environment in the private cloud, hosting, or public cloud.
PRIVATE CLOUD DECISIONS DRIVEN BY:
1. Existing assets
2. Skills and know-how
3. Security & confidentiality
4. Costs and TCO control
5. Application-specific requirements (scale, latency, performance, hypervisors, …)
What Has Changed
Today, the new cloud: new applications alongside existing systems/apps, with the same goal ("I need to deliver a service to my users") across private cloud, hosting, and public cloud.
PRIVATE CLOUD DECISIONS DRIVEN BY:
1. User experience
2. Costs and TCO control
3. Agility (time to change)
4. Security and confidentiality
5. Skills and know-how
6. Application-specific requirements (scale, latency, performance, hypervisors, …)

Why Do Data Centers Need Multicloud?
Today most applications leverage cooperation between components deployed across multiple cloud infrastructures, centralized and distributed: racks, containers, VMs, and BMS on premises; SaaS, PaaS, and BMS-as-a-service off premises; with bursting, replication, DR, and resource pooling tying them together.
PROBLEM STATEMENT IN THE
MULTICLOUD DEPLOYMENT
Challenges of the Multicloud
[Diagram] Day 0 configuration, software upgrades, service management, troubleshooting, and visibility/reporting each run through a different team and tool per environment (BMS, virtualized, DC, AWS): Team 1 with Tool A, Team 2 with Tool B, Team N with Tool K, Team J with Tool B.
•  Different skillsets for different clouds
•  Manual operations for daily tasks
•  Long lead times for change management
•  Inconsistent visibility across distinct environments
A set of independent 'fabrics':
•  PUBLIC CLOUD (AWS): CloudFormation REST API; BGP over IPsec or Direct Connect
•  PUBLIC CLOUD (Azure): Azure REST API; BGP over IPsec
•  PRIVATE CLOUD DC: BGP EVPN/VXLAN, NETCONF, gRPC
The result: poor user experience, operational complexity, and a lack of automation.
Clouds are Disconnected Environments
[Diagram] DC ops, cloud ops, and end users (sales, prod, dev) each work against disconnected environments spanning BMS, servers with OVS, and BMS with SR-IOV.
A Day in the Life of DC/Cloud Operations
THE END USER: "I need a two-tier application execution environment with these characteristics." "Can I have my DB cluster up and running by next week and connected to the Web front end?" And when something breaks: "DB cluster can't talk to Web server."
THE OPERATOR:
1. Provision tenant
2. Image servers
3. Create containers
4. Create/select policies
5. Request networking service to public cloud
6. Create EC2 instance
7. ...
...and, to troubleshoot, must correlate and contextualize: "which IP1/MAC1 on VNI X on Switch A can't talk to IP2/MAC2 on VNI Y on Switch B?"
Across provisioning, management, and visibility the result is complexity, inconsistency, long lead times, and revenue loss.
SDN IN THE
MULTICLOUD
WHAT DOES SDN OFFER IN THE MULTICLOUD?
Multicloud networking as a service:
•  Single pane of glass orchestration across clouds
•  Secure service delivery across clouds
•  Visibility and unified management across clouds
•  Building for containerization
•  Federation to unify controllers across clouds
Administration of Fabrics
[Diagram] One controller administers several fabrics (DC1 FABRIC, DC2 FABRIC, DC3 FABRIC, and a DCI FABRIC) built from private clouds with any workload (hypervisors), alongside unmanaged elements.
•  A fabric is an independently administered IP network managed by the controller for configuration, and eventually for the routing control plane and analytics
•  The controller has IP reachability to ALL endpoints (e.g. a device) that belong to the fabric it manages
•  Devices/endpoints can belong to multiple fabrics (see the sketch below)
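To make the fabric model concrete, here is a minimal sketch, assuming nothing beyond this slide: fabrics as named sets of endpoints, with multi-fabric membership allowed. All device names and memberships are invented for illustration.

```python
# Illustrative model of this slide's fabric concept: a fabric is a named,
# independently administered set of endpoints, and an endpoint may belong
# to several fabrics. Names and memberships are made up for the sketch.
fabrics = {
    "DC1-FABRIC": {"leaf1", "leaf2", "spine1", "hv-a"},
    "DC2-FABRIC": {"leaf3", "spine2", "hv-b"},
    "DCI-FABRIC": {"spine1", "spine2"},  # spines sit in two fabrics
}

def fabrics_of(endpoint: str) -> list:
    """Every fabric an endpoint belongs to (multi-membership is allowed)."""
    return [name for name, members in fabrics.items() if endpoint in members]

def controller_manages(fabric: str, endpoint: str) -> bool:
    """The controller only acts on endpoints inside the given fabric."""
    return endpoint in fabrics.get(fabric, set())

print(fabrics_of("spine1"))                      # ['DC1-FABRIC', 'DCI-FABRIC']
print(controller_manages("DC2-FABRIC", "hv-a"))  # False
```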
Multicloud Networking-as-a-Service for Any Workload and Any Cloud
[Diagram] Controllers automate the private cloud / multicloud infrastructure, interconnect fabrics within the private multicloud, and interconnect fabrics from private to public multicloud, delivering one-click application services plus predictive analytics and visibility across private clouds with any workload (hypervisors) and public clouds.
A Unified View Across Cloud and Networking Operations
[Diagram] The controller speaks each element's native protocols: MP-BGP EVPN / IP-VPN and NETCONF toward DC devices; sFlow and gRPC for telemetry; DHCP/TFTP for bootstrapping; Neutron / kubectl and network services APIs toward orchestration; BGP, MP-BGP EVPN / IP-VPN, and REST/HTTPS toward virtual gateways (vGWs) in public clouds. Data planes include EVPN/VXLAN, MPLS-over-GRE, MPLS-over-UDP, IP, and IPsec. Workloads covered: bare-metal servers, BMS with SR-IOV, servers with OVS, containers, and VMs.
MANAGEMENT
•  Underlay and overlay configuration based on role assignments (see the sketch below)
•  Multiple roles and fabrics supported per device (IP Clos, interconnects)
CONTROL
•  EVPN MP-BGP peering to DC devices and BGP to external fabrics (e.g. VPCs)
•  Routing equivalence between physical roles and virtualized elements
TELEMETRY & ANALYTICS
•  Support for the native protocols of each device
•  Aggregation at infrastructure and service elements
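As a rough illustration of "configuration based on role assignments", this sketch derives a device's management/control/telemetry channels from its roles, using the protocol names listed on this slide. The role names and groupings are my assumptions, not a product schema.

```python
# Illustrative role -> protocol mapping built from this slide's labels.
# A controller could derive which channels apply to a device from its
# assigned roles; the role names and groupings here are assumptions.
ROLE_PROTOCOLS = {
    "dc-device":  {"mgmt": ["netconf", "dhcp/tftp"],
                   "control": ["mp-bgp evpn", "ip-vpn"],
                   "telemetry": ["sflow", "grpc"]},
    "public-vgw": {"mgmt": ["rest/https", "netconf"],
                   "control": ["bgp", "mp-bgp evpn", "ip-vpn"],
                   "telemetry": []},
}

def channels(roles):
    """Union of channels across all roles assigned to one device."""
    merged = {"mgmt": [], "control": [], "telemetry": []}
    for role in roles:
        for kind, protos in ROLE_PROTOCOLS[role].items():
            merged[kind] += [p for p in protos if p not in merged[kind]]
    return merged

# A device can hold multiple roles (e.g. IP Clos member and interconnect):
print(channels(["dc-device", "public-vgw"]))
```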
BUILDING BLOCKS
Data Center Requirements
Design Requirement → Technology Attribute
•  Rising EW traffic growth → Easy scale-out ✓
•  Resiliency and low latency → Non-blocking, fast fail-over ✓
•  Agility and speed → Any service anywhere ✓
•  Open architecture → No vendor lock-in ✓
•  Design simplicity → No steep learning curve ✓
•  Architectural flexibility → EW, NS & DCI ✓
Common Building Blocks for Data Centers
•  DATA CENTER FABRIC: a Clos fabric (IP, MPLS) of TORs, spines, and DC edge, with peering routers toward the MPLS/IP backbone and the public internet.
•  DATA CENTER INTERCONNECT: DC edge (collapsed) or spine in Data Center 1 and Data Center 2 connected over WAN or dark fiber; colo-based interconnect through core devices.
•  WAN INTEGRATION: DC edge in Data Center 1 and Data Center 2 connected across a private/public WAN, with high-performance routing at the service edge boundary.
•  HYBRID CLOUD CONNECTIVITY: on-prem DC extension into the public cloud; DC edge to cloud edge across a private/public WAN (Data Center 1 on-prem to Data Center 1 in the public cloud).
BUILDING THE PRIVATE
CLOUD – DC FABRIC
Defining Terminology…
[Diagram] A fabric spanning POD-1 … POD-N, drawn as rows of edge (E), fabric (F), spine (S), and leaf (L) devices:
•  Leaf (DC access layer)
•  Spine (DC aggregation layer)
•  Fabric (DC core/interconnect)
•  Edge (DC edge)
Optional: can be collapsed into one layer.
Building the DC Fabric
Let's start with the smallest unit: the POD.
•  Leverage BGP constructs to carry L2/L3 traffic and provide multi-tenancy
•  L3 gateway placement can be at the leaf or the spine
•  Hierarchical route reflection for reduced control plane state and redundancy
•  Easy integration with L3VPN with no added provisioning
•  Service insertion for EW and/or NS traffic (inter-tenant, inter-subnet)
A minimal underlay sketch for one POD follows.
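The sketch below is a minimal, illustrative plan for an eBGP underlay of one POD, assuming /31 point-to-point links and one private ASN per leaf (a common pattern, not something this deck prescribes). All device names, ASNs, and addresses are made up.

```python
# Illustrative eBGP underlay plan for one POD: /31 point-to-point links
# between every leaf and spine, spines sharing one ASN, one private ASN
# per leaf. All device names, ASNs, and the address pool are assumptions.
import ipaddress

SPINE_ASN = 65000
LEAF_ASNS = {f"leaf{n}": 65100 + n for n in range(1, 5)}   # leaf1..leaf4
p2p_pool = ipaddress.ip_network("10.1.0.0/24").subnets(new_prefix=31)

sessions = []
for leaf, leaf_asn in LEAF_ASNS.items():
    for spine in ("spine1", "spine2"):
        link = next(p2p_pool)                 # one /31 per leaf-spine link
        spine_ip, leaf_ip = link[0], link[1]  # both /31 addresses are usable
        sessions.append((spine, SPINE_ASN, str(spine_ip),
                         leaf, leaf_asn, str(leaf_ip)))

for s_name, s_asn, s_ip, l_name, l_asn, l_ip in sessions:
    print(f"{s_name} (AS{s_asn}, {s_ip}) <-eBGP-> {l_name} (AS{l_asn}, {l_ip})")
```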
Architectural Flexibility
Containerization's influence on network infrastructure.
PROBLEM STATEMENT: communication needs to be enabled between 2000 containers (c1, c2, …, c2000) residing on servers spread across racks of an IP fabric. The connection between the servers and the TORs can be Layer 2 or Layer 3.
Building a Fabric for Containers
•  Layer 2: trunk ports, with each app container identified by a separate VLAN (IP1 on VLAN 1001, IP2 on VLAN 1002, … IP2000 on VLAN 3000) mapped to VNIs on hardware VTEPs, with servers multihomed to the fabric (ESI-A … ESI-E). Benefits: higher scale capabilities and active-active load balancing with open standards (EVPN N-way multihoming). See the VLAN/VNI sketch below.
•  Layer 3: a routing agent can reside on the server hypervisor; container IPs (IP1 … IP2000) are advertised over the BGP/OSPF peering session between servers and TORs (e.g. across a 10.1.1.0/31 link), giving L3 load balancing and redundancy. Benefits: routing table scale, and simpler provisioning by using unnumbered addresses for peering between servers and TORs.
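A minimal sketch of the Layer 2 option: each container gets its own VLAN, and each VLAN maps 1:1 to a VXLAN VNI on the hardware VTEP. The VNI offset is an assumption for illustration; only the VLAN range 1001-3000 for 2000 containers comes from the slide.

```python
# Illustrative container -> VLAN -> VNI mapping for the Layer 2 option.
BASE_VLAN = 1001          # VLANs 1001-3000 carry containers c1..c2000
VNI_OFFSET = 4000         # assumption: VNI = VLAN + 4000 on the VTEP

def vlan_for_container(n: int) -> int:
    """VLAN carrying container c<n> on the server-facing trunk port."""
    assert 1 <= n <= 2000, "the deck sizes this example at 2000 containers"
    return BASE_VLAN + (n - 1)

def vni_for_vlan(vlan: int) -> int:
    """VNI the hardware VTEP stitches this VLAN into."""
    return vlan + VNI_OFFSET

for c in (1, 2, 2000):
    vlan = vlan_for_container(c)
    print(f"c{c}: VLAN {vlan} -> VNI {vni_for_vlan(vlan)}")
```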
Design Flexibility
Centralized or distributed routing: design choices based on requirements.
[Diagram] Option 1: DC-1 with POD-1 … POD-N (RACK-1, RACK-2, … RACK-N), leaf and spine layers, and a fabric layer serving as DCI + DC edge toward DC-2, hosting App 1, App 2, and App 3. Option 2: the same POD structure with border leaves (BL) handling DCI at the fabric, and a separate edge (DC edge) toward the WAN.
BUILDING THE
PRIVATE CLOUD – DC
INTERCONNECT (DCI)
Let's Draw a Picture – DCI
[Diagram] DC-1 and DC-2 each contain POD-1 … POD-N (RACK-1, RACK-2, … RACK-N) with leaf and spine layers, border leaves (BL), a super-spine / fabric layer, and a service block for security insertion. The DCI between them runs over optical/dark fiber, a shared WAN, or a private backbone. App 1 … App 4 are distributed across both DCs.
EVPN – DCI Design Options
•  Over The Top (OTT): extended control plane; interconnect used as transport only (EVPN unaware). Design thought: design simplicity, but a scaling constraint.
•  Segmented approach: clear demarcation; interconnect is EVPN aware; MPLS TE in the core, with L2 stretch. Design thought: larger deployments.
•  Layer 3 DCI: clear demarcation; interconnect is EVPN unaware; MPLS TE in the core, with NO L2 stretch. Design thought: larger deployments, L3 only.
A small decision sketch follows.
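As a toy illustration of the trade-offs above (my reading of the slide, not a prescription from the deck), this sketch encodes the three options as a decision function.

```python
# Toy decision helper encoding this slide's trade-offs; the inputs and
# the function itself are illustrative, not guidance from the deck.
def pick_dci_option(need_l2_stretch: bool, large_deployment: bool) -> str:
    if not need_l2_stretch:
        return "Layer 3 DCI (interconnect EVPN unaware, no L2 stretch)"
    if large_deployment:
        return "Segmented (interconnect EVPN aware, MPLS TE, L2 stretch)"
    return "Over The Top (EVPN-unaware transport; simple, but watch scale)"

print(pick_dci_option(need_l2_stretch=True, large_deployment=False))  # OTT
print(pick_dci_option(need_l2_stretch=False, large_deployment=True))  # L3 DCI
```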
DCI Options
•  OTT DCI: DC-1 (EVPN-VXLAN) ↔ DCI (EVPN unaware) ↔ DC-2 (EVPN-VXLAN). One data-plane domain (VXLAN tunnels end-to-end). Supports L2 and L3 workloads. Control plane either extended (MP-iBGP, same overlay AS across DCs) or segmented (MP-eBGP, different overlay AS across DCs).
•  DCI with data-plane stitching: DC-1 (EVPN-VXLAN) ↔ DCI (EVPN aware) ↔ DC-2 (EVPN-VXLAN). VXLAN tunnels confined to each DC; VXLAN or MPLS confined to the WAN; data-plane stitching or translation at the DC edge (with or without IT interfaces). Supports L2 and L3 workloads.
•  L3 DCI: DC-1 (EVPN-VXLAN) ↔ DCI (EVPN unaware, e.g. L3VPN over MPLS core) ↔ DC-2 (EVPN-VXLAN). VXLAN tunnels confined to each DC; only tenant IP routes advertised into the core. Supports L3 workloads ONLY.
Over the Top (OTT) – DCI
The control plane is extended across sites, with the connecting infrastructure used as transport only (EVPN unaware).
DC-1 (EVPN-VXLAN) ↔ DCI (EVPN unaware) ↔ DC-2 (EVPN-VXLAN)
Data-plane domain: VXLAN tunnels end-to-end. Support for L2 and L3 workloads.
Extended EVPN control-plane domain (MP-iBGP, same overlay AS across DCs) OR segmented EVPN control-plane domain (MP-eBGP, different overlay AS across DCs).
Segmentation of DC & WAN Domains
Clear demarcation of DC and WAN boundaries; the connecting infrastructure is EVPN aware.
DC-1 (EVPN-VXLAN) ↔ DCI (EVPN-MPLS or EVPN-VXLAN) ↔ DC-2 (EVPN-VXLAN)
Data-plane domains: VXLAN tunnels confined to each DC; VXLAN or MPLS confined to the WAN; data-plane stitching or translation at the DC edge.
Support for L2 and L3 workloads.
Layer 3 DCI
Only Layer 3 connectivity is extended across DCs (no Layer 2). Each data-plane domain is confined within its DC and not extended across DCs.
DC-1 (EVPN-VXLAN) ↔ DCI (EVPN unaware, e.g. L3VPN over MPLS core) ↔ DC-2 (EVPN-VXLAN)
Data-plane domains: VXLAN tunnels confined to each DC; L3VPN or EVPN Type 5 across the core, with only tenant IP routes advertised into the core.
Support for L3 workloads ONLY.
BUILDING THE PRIVATE
CLOUD – WAN INTEGRATION
How are host IP prefixes exchanged between the L3 gateway(s) and the DC edge(s) so as to be advertised out of the DC?
[Diagram] Host H1's IP route (100.0.10.100/32) is present in the IP-VRF (IP-VRF.inet.0) of the L3 gateway (leaf or spine). Host routes can be exchanged between the L3 gateway(s) and the super-spine (the DC edge that connects to the WAN) using EVPN Type 5, and then advertised toward the WAN as L3VPN (MPLS core) or EVPN Type 5 NLRI (MPLS or IP core).
A small sketch of this re-origination follows.
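A minimal sketch of that control-plane flow: host /32s sitting in the L3 gateway's IP-VRF are re-originated toward the DC edge as EVPN Type 5 routes. The record layout is invented purely for clarity (real EVPN NLRI encoding is richer); the RD value is borrowed from a later slide.

```python
# Illustrative only: host /32s in the L3 gateway's IP-VRF re-originated
# toward the DC edge as EVPN Type 5 routes. The dict layout is made up
# for clarity; it is not a wire format.
ip_vrf_inet0 = ["100.0.10.100/32"]            # H1's host route (this slide)

def as_type5(prefix: str, rd: str = "10.1.1.30:30"):
    ip, plen = prefix.split("/")
    return {"route_type": 5, "rd": rd, "prefix": ip, "prefix_len": int(plen)}

advertised_to_dc_edge = [as_type5(p) for p in ip_vrf_inet0]
print(advertised_to_dc_edge)
# [{'route_type': 5, 'rd': '10.1.1.30:30', 'prefix': '100.0.10.100', 'prefix_len': 32}]
```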
EVPN Route Type 5 – Classification
Route Type 5 (draft-ietf-bess-evpn-prefix-advertisement) comes in two models, each usable over VXLAN or MPLS:
•  Pure Type 5 model (interface-less IP-VRF to IP-VRF): the Type 5 route provides all necessary forwarding information.
•  Gateway address model (interface-ful IP-VRF to IP-VRF): the Type 5 route needs recursive route resolution for forwarding; the lookup is for an IP prefix, but the forwarding information is extracted from a Type 2 route.
Pure Route Type 5 Model
[Diagram] Two DCs, each with IP-VPN VRFs for Tenant 1 and Tenant 2, connected via gateway PEs (GW PE). Tenant 1 prefixes (100.0.30/24, 102.0.30/24) and Tenant 2 prefixes (101.0.30/24, 103.0.30/24) are exchanged between DCs as Route Type 5, e.g. "Route Type 5, IP: 100.0.30/24" for Tenant 1 and "Route Type 5, IP: 101.0.30/24" for Tenant 2.
Packet Walk – Pure Route Type 5
H1 (VLAN 10, IP1 = 100.0.30.100, MAC1 = 00:00:1e:63:c8:7c) sends to H4 (VLAN 20, IP4 = 102.0.30.100, MAC4 = 00:00:00:93:3c:f4), both in VRF_TENANT_1.
1. H1 → ingress VTEPs (LEAF-1/LEAF-2): D-MAC = VRRP MAC, S-MAC = MAC1, D-IP = IP4, S-IP = IP1.
2. The ingress leaf routes from the MAC-VRF (VNI 5010, irb.5010) into the tenant IP-VRF and encapsulates toward the egress VTEPs over the L3 VNI: outer D-IP = LEAF-3,4, S-IP = LEAF-1,2, VNI = 1020; inner D-MAC = ROUTER-MAC (LEAF-3,4), S-MAC = ROUTER-MAC (LEAF-1,2), D-IP = IP4, S-IP = IP1.
3. The egress VTEPs (LEAF-3/LEAF-4) route from the IP-VRF via irb.5020 into the MAC-VRF (VNI 5020) and deliver: D-MAC = MAC4, S-MAC = IRB MAC, D-IP = IP4, S-IP = IP1.
The VXLAN header used in step 2 is sketched below.
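For concreteness, here is a minimal sketch of the 8-byte VXLAN header (RFC 7348) that carries the L3 VNI in step 2. This is generic VXLAN framing, not anything vendor-specific from the deck.

```python
# Build the 8-byte VXLAN header that precedes the inner Ethernet frame;
# the encapsulated packet travels VTEP-to-VTEP over UDP port 4789.
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # 'I' flag: the 24-bit VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header (RFC 7348): flags, reserved, 24-bit VNI, reserved."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    return (struct.pack("!I", VXLAN_FLAG_VNI_VALID << 24)
            + struct.pack("!I", vni << 8))

hdr = vxlan_header(1020)     # the L3 VNI from this packet walk
print(hdr.hex())             # 08000000 0003fc00 -> flags + VNI 1020
```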
EVPN Route Type 5 vs L3VPN NLRI
•  EVPN pure Route Type 5 NLRI: * 5:10.1.1.30:30::0::100.0.30.0::24/304 (RD 10.1.1.30:30, prefix info 100.0.30.0/24)
•  L3VPN NLRI: 10.1.1.30:30:100.0.30.0/24 (RD 10.1.1.30:30, prefix info 100.0.30.0/24)
Similar information is carried in both NLRI types, as the parsing sketch below shows.
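A minimal sketch, keyed to the display strings on this slide (not to any BGP library's API), showing that both formats reduce to the same RD and prefix:

```python
# Parse the two NLRI display formats from this slide and show they
# carry the same RD + prefix. String layouts follow the slide only.
def parse_evpn_type5(nlri: str):
    # e.g. "5:10.1.1.30:30::0::100.0.30.0::24/304"
    body = nlri.split(":", 1)[1]          # drop the "5" route-type tag
    rd, _eth_tag, prefix, rest = body.split("::")
    plen = rest.split("/")[0]             # "/304" is the NLRI bit length
    return rd, f"{prefix}/{plen}"

def parse_l3vpn(nlri: str):
    # e.g. "10.1.1.30:30:100.0.30.0/24" -> RD is the first two fields
    ip_part, rd_num, prefix = nlri.rsplit(":", 2)
    return f"{ip_part}:{rd_num}", prefix

assert parse_evpn_type5("5:10.1.1.30:30::0::100.0.30.0::24/304") \
    == parse_l3vpn("10.1.1.30:30:100.0.30.0/24") \
    == ("10.1.1.30:30", "100.0.30.0/24")
print("same RD and prefix in both NLRI types")
```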
Benefits of EVPN Type 5
•  A unified end-to-end solution, with one address family inside and outside the DC
•  Data plane flexibility with EVPN: runs over an MPLS or an IP core
•  If you do not have MPLS between DCs for DCI, L3VPN is not an option (it cannot run over VXLAN), so Route Type 5 is the only control plane option
•  Hybrid cloud connectivity (Type 5 with VXLAN over GRE/IPsec)
BUILDING THE PRIVATE CLOUD
– TRAFFIC OPTIMIZATION
What is VMTO?
Virtual Machine Traffic Optimization resolves ingress and egress traffic tromboning, focusing on north-south traffic optimization.
[Diagram] NO Layer 2 stretch: different summary routes are advertised from each data center (100.0.10/24 from DC-1, 100.0.20/24 from DC-2). The L3 gateway for VLAN 100 (H1, 100.0.10.100/32) exists only in DC-1; the L3 gateway for VLAN 200 (H2, 100.0.20.100/32) exists only in DC-2.
Route table on remote host H0 (90.0.9.10/24):
100.0.10/24: NH DC-1
100.0.20/24: NH DC-2
NO traffic tromboning: H0 sends traffic to DC-1 to reach H1 and to DC-2 to reach H2.
Ingress and Egress North-South Traffic Optimization
[Diagram] Layer 2 stretch: host prefix routes are advertised from each data center. VLAN 100 exists in both DCs with the same virtual gateway address (VGA 100.0.10.1); the L3 gateway for H1 (100.0.10.100/32) is in DC-1 and the L3 gateway for H8 (100.0.10.101/32) is in DC-2.
Route table on remote host H0 (90.0.9.10/24):
100.0.10.100/32: NH DC-1
100.0.10.101/32: NH DC-2
NO ingress traffic tromboning: H0 sends traffic to DC-1 to reach H1 and to DC-2 to reach H8 (see the longest-prefix-match sketch below).
NO egress traffic tromboning: to reach H0, H1 sends traffic to the L3 gateway in DC-1 and H8 sends traffic to the L3 gateway in DC-2.
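The sketch below illustrates, in plain Python using only this slide's prefixes, why host routes remove ingress tromboning: longest-prefix match prefers the /32 that names the hosting DC over any /24 summary.

```python
# Longest-prefix match over this slide's routes: the /32 host routes
# steer traffic to the DC that actually hosts each VM.
import ipaddress

rib = {  # prefix -> next-hop DC, as advertised on this slide
    ipaddress.ip_network("100.0.10.100/32"): "DC-1",
    ipaddress.ip_network("100.0.10.101/32"): "DC-2",
    ipaddress.ip_network("100.0.10.0/24"): "DC-1",  # summary, if present
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [p for p in rib if addr in p]
    best = max(matches, key=lambda p: p.prefixlen)  # longest prefix wins
    return rib[best]

print(lookup("100.0.10.100"))  # -> DC-1 (H1)
print(lookup("100.0.10.101"))  # -> DC-2 (H8), not tromboned via DC-1
```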
How to Avoid Egress Tromboning?
[Diagram] DC-1 (SPINE-1/2 over LEAF-1/2) and DC-2 (SPINE-3/4 over LEAF-3/4) both carry VLAN 100, and every leaf is configured with the same virtual gateway address (VGA 100.0.1.1). H1 (100.0.10.100/32) attaches in DC-1; remote host H0 (90.0.9.10/32) sits across the WAN.
Each leaf device prefers the local DC L3 gateways: the distributed Layer 3 anycast gateway function ensures the local DC gateway is preferred, even after host migration (see the sketch below).
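A tiny sketch of the anycast-gateway idea, under the assumption (from this slide) that every leaf answers for the same VGA: a host's default gateway is always resolved on its directly attached leaf, so it stays local even after migration. Leaf names follow the diagram; the lookup itself is illustrative.

```python
# Illustrative distributed anycast gateway: the same VGA lives on every
# leaf, so the gateway a host uses is whichever leaf it is attached to.
VGA = "100.0.1.1"
leaf_of_host = {"H1": "LEAF-1"}           # H1 currently attached in DC-1

def gateway_for(host: str) -> str:
    # ARP for the VGA is answered by the directly attached leaf
    return f"{VGA} on {leaf_of_host[host]}"

print(gateway_for("H1"))                  # -> 100.0.1.1 on LEAF-1
leaf_of_host["H1"] = "LEAF-3"             # host migrates to DC-2
print(gateway_for("H1"))                  # -> 100.0.1.1 on LEAF-3 (still local)
```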
How to Avoid Ingress Tromboning?
[Diagram] Layer 2 is stretched across DCs: DC-1 (EVPN-VXLAN) hosts H1 (100.0.10.100) and DC-2 (EVPN-VXLAN) hosts H2 (100.0.10.101), and both DC edges (DC-1_GW and DC-2_GW; the DC edge can be leaf/spine/super-spine) advertise the same summary 100.0.10/24 toward the WAN edge (PE-1, PE-2, PE-3). H0 (90.90.1.1) sits behind PE-3.
Due to the lack of specific host routes, the summary route from either data center could be preferred. Assuming the BGP path selection algorithm on PE-3 prefers the summary route advertised from DC-1:
100.0.10/24 *[BGP/170] from DC-1 ← active
            [BGP/170] from DC-2 ← inactive
When host H0 needs to reach H2 (in DC-2), traffic from H0 is sub-optimally routed to DC-1, which then forwards it to DC-2 over the Layer 2 stretch (the control and data paths diverge).
No Ingress Tromboning
[Diagram] Same topology, but now DC-1_GW advertises the host route 100.0.10.100/32 (H1) and DC-2_GW advertises 100.0.10.101/32 (H2) toward the WAN edge. On PE-3:
100.0.10.100/32 *[BGP/170] from DC-1 ← active
100.0.10.101/32 *[BGP/170] from DC-2 ← active
No tromboning, thanks to host route availability (exact host location awareness).
Thank You
