b_nexus_dash_orchestrator_ac1_v1
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2021 Cisco Systems, Inc. All rights reserved.
CONTENTS
CHAPTER 1 About 1
This Demonstration 1
Requirements 1
This Solution 1
Terminology 2
Topology 3
Before You Present 4
Get Started 4
CHAPTER 2 Scenarios 7
This Demonstration
Limitations: The demonstration environment is simulated and has no actual data plane, so the fabrics will
not establish OSPF/BGP adjacencies. All configurations are lost after a reboot of the APIC simulators.
Requirements
The table below outlines the requirements for this preconfigured demonstration.
Required: Laptop
Optional: Cisco AnyConnect®
This Solution
As the newest advance in Cisco ACI methods for interconnecting networks, Cisco ACI Multi-Site is an
architectural approach for interconnecting and managing multiple sites, each serving as a single fabric and
availability zone. As shown in the diagram, the Multi-Site architecture has three main functional components:
• Two or more ACI fabrics built with Nexus 9000 switches deployed as leaf and spine nodes.
• One APIC cluster domain in each fabric.
• Nexus Dashboard with the Multi-Site Orchestrator service installed, which is used to manage the different
fabrics and to define inter-site policies.
Terminology
As a complementary product with Cisco ACI, much of the Cisco ACI Multi-Site terminology is shared with
ACI and APIC (for example, they both use the terms fabric, tenant, contract, application profile, EPG, bridge
domain, and L3Out). For definitions of ACI terminology, see Cisco Application Centric Infrastructure
Fundamentals.
Micro-services architecture: In its first implementation, the Cisco ACI Multi-Site Orchestrator (the inter-site
policy manager) is delivered as a service running on a Nexus Dashboard virtual appliance. The Nexus Dashboard
appliance hosting the MSO service does not need to be connected to the ACI leaf nodes; it only requires IP
connectivity between its VMs and the out-of-band (OOB) IP addresses of the different APIC cluster nodes.
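Because only IP reachability to the APIC OOB addresses is required, a quick sanity check from a host on the same management network is a simple TCP probe to each APIC on port 443. The sketch below is illustrative only; the addresses are placeholders, not values from this demonstration.

```python
# Minimal reachability check from a management host to APIC OOB addresses.
# The IP addresses below are placeholders; substitute the OOB addresses of
# your own APIC cluster nodes.
import socket

APIC_OOB_ADDRESSES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # placeholder OOB IPs
HTTPS_PORT = 443

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for apic in APIC_OOB_ADDRESSES:
    state = "reachable" if is_reachable(apic, HTTPS_PORT) else "unreachable"
    print(f"APIC {apic}: {state}")
```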
Namespace: Each fabric maintains separate data in its namespace, including objects such as the TEP pools,
Class-IDs (EPG identifiers), and VNIDs (identifying the different bridge domains and the defined VRFs).
The site-connecting spine switches (EX or later) perform the necessary namespace translation (normalization)
between sites.
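One way to picture this normalization: each site allocates its own VNIDs and Class-IDs for the same logical objects, and the spines rewrite those values as traffic crosses the inter-site network. The sketch below models that translation table in plain Python; every numeric ID is invented purely for illustration.

```python
# Conceptual illustration of inter-site namespace translation. Each site
# assigns its own VNIDs and Class-IDs; the spines translate (normalize)
# remote values into locally significant ones. All IDs below are made up.

# Locally significant IDs per site for the same logical objects.
SITE1_IDS = {"VRF1": 2392068, "Web-EPG": 49154}
SITE2_IDS = {"VRF1": 2719745, "Web-EPG": 16386}

# Translation table a Site 2 spine would conceptually hold for traffic
# arriving from Site 1: remote (Site 1) ID -> local (Site 2) ID.
SITE2_TRANSLATION = {
    SITE1_IDS["VRF1"]: SITE2_IDS["VRF1"],
    SITE1_IDS["Web-EPG"]: SITE2_IDS["Web-EPG"],
}

def normalize(remote_id: int) -> int:
    """Rewrite a remote-site ID into the locally significant equivalent."""
    return SITE2_TRANSLATION[remote_id]

# A frame from Site 1 carrying VRF1's VNID is rewritten on the Site 2 spine.
print(normalize(SITE1_IDS["VRF1"]))  # -> 2719745
```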
Schema: Profile including the site-configuration objects that will be pushed to sites.
Site: APIC cluster domain or single fabric, treated as an ACI region and availability zone. It can be located
in the same metro-area as other sites or spaced world-wide.
Stretched: Objects (tenants, VRFs, EPGs, bridge-domains, subnets, or contracts) are stretched when they are
deployed to multiple sites.
Template: A child of a schema, a template contains configuration objects that are either shared between sites
or site-specific.
Template Conformity: When templates are stretched across sites, their configuration details are shared and
standardized across sites. To maintain template conformity, it is recommended to only make changes in the
templates, using the Multi-Site GUI and not in a local site's APIC GUI.
Topology
This content includes preconfigured users and components to illustrate the scripted scenarios and features of
the solution. Most components are fully configurable with predefined administrative user accounts. You can
see the IP address and user account credentials for accessing a component by clicking the component icon
in the Topology menu of your active session; they also appear in the scenario steps that require them.
Get Started
Follow the steps to schedule a session of the content and configure your presentation environment.
Procedure
Step 2 Connect to the workstation using one of the available connection methods:
• Cisco AnyConnect VPN [Show Me How] and the local RDP client on your laptop [Show Me How]
(Workstation 1: 198.18.133.36, Username: dCloud\demouser, Password: C1sco12345)
Important Name the sites exactly as instructed or you will be unable to complete the demo successfully.
Procedure
Step 1 Sign in to the Cisco Nexus Dashboard using admin/C1sco12345 and then, close the splash screen.
Step 2 In the left panel, click Sites and then, click Actions > Add Site.
Example:
The San Francisco site is listed in the Sites list and the Connectivity Status is Up.
Now you must add a second site to the Cisco Nexus Dashboard.
Step 20 Wait until the software discovers the New York site and then verify that both sites show Up in the Connectivity
Status and that the configuration URLs are correct.
Note It takes about two minutes for the status to update.
The Health Score field can be in different states, including Critical; this is not important at this stage.
Example:
After adding both sites to the Cisco Nexus Dashboard, you must move them into a managed state in the
Multi-Site Orchestrator.
Step 21 After adding the sites, on the desktop, click the Fix my demo shortcut.
Step 23 When the script completes, open MSO then, click Infrastructure > Infrastructure Configuration and then,
verify the site configuration.
You should see the BGP and OSPF settings, IP addresses, and so on, as in the image.
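If you prefer to verify the same information from a script, the Orchestrator also exposes it over a REST API. The sketch below is a rough illustration only: the Nexus Dashboard address is a placeholder, and the endpoint paths and response fields (/mso/api/v1/auth/login, /mso/api/v1/sites) are assumptions based on common MSO API conventions and may differ between releases.

```python
# Hedged sketch: list the sites known to the Multi-Site Orchestrator over its
# REST API. Endpoint paths and response fields are assumptions and may vary
# by release; verify against the API documentation for your version.
import requests
import urllib3

urllib3.disable_warnings()  # the demo environment uses self-signed certificates

ND_HOST = "https://198.18.133.100"      # placeholder Nexus Dashboard address
USERNAME, PASSWORD = "admin", "C1sco12345"

# Assumed login endpoint; expected to return a bearer token on success.
login = requests.post(
    f"{ND_HOST}/mso/api/v1/auth/login",
    json={"username": USERNAME, "password": PASSWORD},
    verify=False,
)
token = login.json()["token"]

# Assumed sites endpoint; each entry should expose the site name and state.
sites = requests.get(
    f"{ND_HOST}/mso/api/v1/sites",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()

for site in sites.get("sites", []):
    print(site.get("name"), site.get("status"))
```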
What to do next
Continue with Create an MSC Tenant, on page 34.
Note In the context of this guide, the terms Multi-Site Policy Manager, Multi-Site Manager, Multi-Site Orchestrator,
and Multi-Site Controller (MSC) are used interchangeably.
Procedure
Step 1 First, create a new user: double-click the Nexus Dashboard icon on the workstation desktop.
Step 2 Sign in using (admin/C1sco12345) and then, close the splash screen.
Example:
Step 9 Check the Write Privilege check boxes for the Site Administrator, Site Manager, and Tenant Manager
roles and then, click Create.
Example:
Step 13 Verify that the only options in the left-menu are Dashboard and Sites.
Example:
Procedure
Step 1 Sign in to the Cisco Nexus Dashboard using admin/C1sco12345 and then, close the splash screen.
Step 2 In the left panel, click Sites and then, click Actions > Add Site.
Example:
The San Francisco site is listed in the Sites list and the Connectivity Status is Up.
Now you must add a second site to the Cisco Nexus Dashboard.
Step 18 (Optional) Drop the pin to locate your site on the map.
Step 19 Click Add.
Example:
Step 20 Wait until the software discovers the New York site and then verify that both sites show Up in the Connectivity
Status and that the configuration URLs are correct.
Note It takes about two minutes for the status to update.
The Health Score field can be in different states, including Critical; this is not important at this stage.
Example:
After adding both sites to the Cisco Nexus Dashboard, you must move them into a managed state in the
Multi-Site Orchestrator.
Move each fabric that is connected to the MSO to the Managed state and assign a unique Site ID value. If you
assign overlapping site IDs to separate fabrics, MSO issues a warning.
Step 23 Expand the State drop-down list for San Francisco then, click Managed then, in the Site ID field, enter 1
and then, click Add.
Example:
The following configuration tasks are managed from the APIC at each site (site local configuration).
• Configuration of the access policies for the External L3 domain (Spine switch profile, interface profile,
interface policy group, attachable entity profile, external L3 domain)
• BGP Route Reflector Policy
The MSC will read in the BGP ASN and External L3 Domains from each site. Add these to each site from
the respective APICs.
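If you want to confirm outside the GUI what the MSC will read from each APIC, both objects are ordinary APIC managed objects and can be queried over the APIC REST API. The sketch below is illustrative: the controller address is a placeholder, the credentials are the demo defaults, and bgpRRNodePEp and l3extDomP are the standard ACI classes for route-reflector spine nodes and external routed domains.

```python
# Sketch: read the BGP route-reflector spines and external routed domains
# from an APIC with the REST API. The controller address is a placeholder;
# adjust it to the APIC of the site you are checking.
import requests
import urllib3

urllib3.disable_warnings()  # the demo APICs use self-signed certificates

APIC = "https://198.18.133.200"          # placeholder APIC address
USERNAME, PASSWORD = "admin", "C1sco12345"

session = requests.Session()
session.verify = False

# Standard APIC login endpoint; the session keeps the returned auth cookie.
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": USERNAME, "pwd": PASSWORD}}},
)

# BGP route-reflector spine nodes (class bgpRRNodePEp).
rr_nodes = session.get(f"{APIC}/api/class/bgpRRNodePEp.json").json()
for obj in rr_nodes["imdata"]:
    print("RR spine node:", obj["bgpRRNodePEp"]["attributes"]["id"])

# External routed (L3) domains (class l3extDomP).
l3_domains = session.get(f"{APIC}/api/class/l3extDomP.json").json()
for obj in l3_domains["imdata"]:
    print("External L3 domain:", obj["l3extDomP"]["attributes"]["name"])
```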
Now you will enter an Autonomous System Number and an External Routed Domain for the San Francisco
site (Site 1). These elements are required for BGP routing.
Procedure
Step 1 Double-click
Example:
Step 4 Enter 65001 in the Autonomous System Number field, select spines 103 and 104, and click Save and
Continue.
Example:
Step 6 Click Fabric > Access Policies in the top menu and expand Physical and External Domains.
Step 7 Click L3 Domains.
Step 8 Expand the Tools menu at the upper right and select Create L3 Domain.
Example:
Step 10 Double-click
Step 13 Enter 65002 in the Autonomous System Number field, select spines 103 and 104, and click Save and
Continue.
Example:
Step 15 Click Fabric > Access Policies in the top menu and expand Physical and External Domains.
Step 16 Click L3 Domains.
Step 17 Expand the Tools menu and select Create L3 Domain.
Example:
Now you will use MSO to configure Infra for San Francisco (Site 1) and New York (Site 2). Perform the same
procedure for each site, noting the difference in IP addresses and other values.
Step 19 Return to the Nexus Dashboard and log in (admin/C1sco12345) then, in the menu, click Services and
open the Multi-Site Orchestrator.
Example:
Step 20 Click Infrastructure > Infra Configuration to display the BGP and OSPF settings page and then, click
Configure Infra.
Example:
Note The default setting for BGP is full mesh and uses standard BGP timer values. The default OSPF
network type is point-to-point.
Step 21 Select San Francisco in the side menu to add Infra settings for Site 1.
Note If configuring Site 2, select New York.
• Site 1: 10.1.100.1
• Site 2: 10.2.100.1
Note Inter-site control plane: Endpoint reachability information is exchanged across sites using a
Multiprotocol-BGP (MP-BGP) Ethernet VPN (EVPN) control plane. This approach allows the
exchange of MAC and IP address information for the endpoints that communicate across sites.
MP-BGP EVPN sessions are established between the spine nodes deployed in separate fabrics.
Example:
Step 31 Leave the default MTU (9216), default OSPF policy, and OSPF authentication and then, click Save.
Example:
Step 37 Select the site box (San Francisco or New York) to bring up the pane to enable Multi-Site.
Step 38 Configure the following fields as shown:
• ACI Multi-Site: On
• OVERLAY MULTICAST TEP: SF: 10.1.100.200 / NY: 10.2.100.200
• BGP Autonomous System Number: SF: 65001 / NY: 65002
• OSPF Area ID: 0
• OSPF Area Type: regular
Example:
Note The External Routed Domain drop-down will display the domain previously configured on the APIC
(Multisite_External_L3_domain).
Step 39 Click Deploy to push the Infra L3out configuration to the APIC.
Step 40 Wait for the success message and close the Fabric Connectivity Infra window.
Step 41 Add the Multipod Data Plane TEP configuration as follows.
a) In the APIC window for the site being configured, click Tenants > infra > Policies > Protocol > Fabric
Ext Connections Policies and click Fabric Ext Connection Policy.
b) Enter extended:as2-nn4:5:16 in the Community field.
c) In the work pane, double-click Pod ID 1.
Example:
Step 46 Click Tenant > infra > Networking > L3Outs and verify that an L3Out called intersite has been configured
under the infra tenant. This indicates that the infra L3Out has been successfully configured on APIC for Site 1; a scripted version of this check follows the example below.
Note The object for the L3Out also includes a small cloud icon. All ACI objects configured by the MSC
will include this icon.
Example:
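The check in Step 46 can also be scripted against the APIC REST API by asking for the l3extOut children of the infra tenant. This is a sketch using the demo credentials; the APIC address is a placeholder.

```python
# Sketch: confirm that the MSO-pushed "intersite" L3Out exists under the
# infra tenant. The APIC address is a placeholder; credentials are the
# demo defaults.
import requests
import urllib3

urllib3.disable_warnings()

APIC = "https://198.18.133.200"          # placeholder APIC address for Site 1
session = requests.Session()
session.verify = False
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "C1sco12345"}}},
)

# DN-scoped query: children of uni/tn-infra, filtered to L3Out objects.
resp = session.get(
    f"{APIC}/api/node/mo/uni/tn-infra.json",
    params={"query-target": "children", "target-subtree-class": "l3extOut"},
).json()

names = [obj["l3extOut"]["attributes"]["name"] for obj in resp["imdata"]]
print("L3Outs under infra tenant:", names)   # expect "intersite" in the list
```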
Step 47 Repeat all the steps for the New York site, using the New York values indicated in the text.
Procedure
Step 1 In the Multi-Site Orchestrator, select Application Management > Tenants in the side menu and click Add
Tenant.
Example:
Step 4 Return to the APIC SF and APIC NY windows. Click Tenants > ALL TENANTS in each window and
verify that the Tesla tenant has been created on both fabrics.
Example:
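The verification in Step 4 can also be scripted: query each APIC for an fvTenant object named Tesla. The controller addresses below are placeholders for the San Francisco and New York APICs.

```python
# Sketch: confirm the Tesla tenant exists on both fabrics. The APIC
# addresses are placeholders; credentials are the demo defaults.
import requests
import urllib3

urllib3.disable_warnings()

APICS = {
    "San Francisco": "https://198.18.133.200",   # placeholder apic1-a address
    "New York": "https://198.18.133.201",        # placeholder apic1-b address
}

for site, apic in APICS.items():
    session = requests.Session()
    session.verify = False
    session.post(
        f"{apic}/api/aaaLogin.json",
        json={"aaaUser": {"attributes": {"name": "admin", "pwd": "C1sco12345"}}},
    )
    # Filtered class query: only tenants whose name is exactly "Tesla".
    resp = session.get(
        f"{apic}/api/class/fvTenant.json",
        params={"query-target-filter": 'eq(fvTenant.name,"Tesla")'},
    ).json()
    found = int(resp.get("totalCount", "0")) > 0
    print(f"{site}: Tesla tenant {'present' if found else 'missing'}")
```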
Step 5 In either APIC SF or APIC NY, double-click Tesla to proceed to the APIC window for Tesla.
Note that the tenant object includes the cloud symbol, indicating that this object has been configured from
the MSC. The APIC GUI will also display a message to this effect.
Example:
Procedure
Step 1 In the Multi-Site Configuration window, select Application Management > Schemas from the vertical
menu and then, click Add Schema.
Example:
Step 2 Enter L3-stretch-schema as the schema name and then, click Add.
Example:
Note Schemas contain templates. The templates are associated to one or more sites and are used to define
the objects that will be stretched between sites or will remain site-local.
Step 3 Click + to add the Template then, in the Select a Template type dialog, select ACI Multi-cloud and then,
click Add.
Example:
Step 4 Because this configuration will stretch the Tenant and VRF, click Template 1, then click the pencil icon and
then, name the template SF and NY Template.
Example:
Also, you can change the template name by clicking Template Settings > Display Name.
Step 5 In Template Settings, in the Select a Tenant drop-down, select Tesla.
Example:
Because the objects in this template will be stretched across both sites, it must be associated with both sites.
Note When configuration is added to the MSO there is a Save button and a Deploy to sites button. Saving
the template configuration saves it to the MSO database but does not make any changes to the
APICs. Only after selecting Deploy to sites is the configuration change pushed to the APICs. At
this point in the configuration, we have added a template and created a VRF but have not saved or
deployed the configuration.
Step 10 Click Save to save the configuration to the MSO without deploying to APIC.
Example:
Step 14 In the Multi-Site Configuration window, select the + next to Templates then, in Template type select ACI
Multi-cloud and then, click Add.
Step 15 Click on Template2, then click the pencil icon so you can rename it.
Step 16 Enter SF Only in the Name field.
Step 17 In Tenant Settings, in the Select a Tenant drop-down, select Tesla to associate the tenant.
Example:
Step 20 Click Add EPG and then, enter Web in the Display Name field.
Example:
Step 21 Select the drop-down for the Bridge Domain to associate the Web EPG to a bridge domain.
Step 22 Enter Web-BD in the Bridge Domain field. Because Web-BD does not yet exist, the drop-down offers an
option to create it: click "Web-BD" was not found. Click to create "Web-BD".
Example:
Step 23 Scroll down and select the Web-BD under Bridge Domain.
The default BD settings will appear on the right-side pane. This BD will not be stretched across sites.
a) Uncheck the L2 STRETCH box.
Step 24 Acknowledge the Warning by clicking Yes.
Note When the L2 STRETCH box is unchecked, the option to add a BD subnet is removed because the BD
becomes a site-local configuration. The site-local configuration will be covered in a few more steps.
Step 25 On the Virtual Routing and Forwarding drop-down, select VRF1 (the VRF created in the SF and NY
Template).
Example:
Step 26 Select + next to Sites and use the drop-down to add the SF Only template to San Francisco.
This associates the template with the San Francisco site only.
Step 32 Select San Francisco SF Only in the vertical menu to view the site-local changes.
Example:
Step 33 Select Web-BD and then, click + Add Subnet to add a subnet.
Example:
Step 34 Add 10.1.1.254/24 in the Gateway IP field and then, click Save.
Example:
Step 35 At the top, click Save, select the SF Only template, click Deploy to Sites to deploy the changes to the
site, and then click Deploy.
Example:
Note EPGs are also associated to domains (physical or VMM domains). The domain association and
static path binding configuration is also done from the MSO. This is always a site-local
configuration task and is configured by selecting the site, just as was done for the BD
subnet. In this lab we will not configure the domain, but be aware that this configuration is
always site local.
Step 36 Repeat the previous section to create a NY Only template and associate it to the Tesla tenant with the following
changes:
a) Create a NY Only template
• Application Profile Name: Webapp
• EPG Name: App
• Bridge Domain Name: App-BD
Step 40 Scroll down to Filters then, click Add Filter and then, in the Display Name field, enter any.
Example:
Now you will add a web-to-app provider contract to the Web EPG in the SF Only template, and a web-to-app
consumer contract to the App EPG in the NY Only template. This will enable the application tiers to
communicate between sites as long as they are deployed in the same tenant.
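Once the web-to-app contract is attached, the relationships appear on each APIC as fvRsProv (provided) and fvRsCons (consumed) objects under the EPGs. The sketch below queries those classes; the APIC address is a placeholder and the contract name comes from this scenario.

```python
# Sketch: list which EPGs provide or consume the web-to-app contract on one
# APIC, to confirm the cross-site relationships. The address is a placeholder.
import requests
import urllib3

urllib3.disable_warnings()

APIC = "https://198.18.133.200"          # placeholder APIC address
session = requests.Session()
session.verify = False
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "C1sco12345"}}},
)

# fvRsProv = contract provided by an EPG, fvRsCons = contract consumed.
for cls, role in (("fvRsProv", "provides"), ("fvRsCons", "consumes")):
    resp = session.get(f"{APIC}/api/class/{cls}.json").json()
    for obj in resp["imdata"]:
        attrs = obj[cls]["attributes"]
        if attrs.get("tnVzBrCPName") == "web-to-app":
            print(f"{attrs['dn']} {role} web-to-app")
```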
Step 65 Open either APIC NY or APIC SF if they are not already open.
Step 66 Click Tenants and double-click Tesla.
Step 67 Expand Tenant Tesla > Application Profiles > Webapp > Application EPGs and verify the presence of
the Web (SF APIC) and App (NY APIC) EPGs.
Example:
Step 68 Click Tenant > Tesla > Networking > Bridge Domains and verify the presence of the Web-BD (SF
APIC) and App-BD (NY APIC) bridge domains.
Example:
Procedure
Step 1 In the Multi-Site Orchestrator window, select Schemas > L3-stretch-schema and then, select the SF Only
template.
Step 2 To create a new VRF, in the VRFs section, click Add VRF and then, in the Display Name field, enter VRF2.
Example:
Step 3 In the Bridge Domains section, click Web-BD and then, under Virtual Routing & Forwarding, select
VRF2.
Example:
Step 4 To create a new VRF, select the NY Only template then, under VRF, in the Display Name field, enter Shared
VRF.
Example:
Step 5 To create a new Bridge Domain, in the Bridge Domain field, click + Add Bridge Domain and enter the
following attributes.
• Display Name: Shared-BD
• Virtual Routing & Forwarding: Shared VRF
Example:
Step 6 Scroll down the page then, select + Add Subnet then, in the Add Subnet dialog, enter the following attributes
and then, click Save.
• Gateway IP: 10.10.1.254/24
• Shared between VRFs: checked
Example:
Step 7 To create a new EPG, scroll up to the Application Profile section, click + Add EPG and then, enter the
following properties.
• Display Name: DNS
• Bridge Domain: Shared-BD
Example:
Important Because the EPG provides services to other EPGs, you must remember to add the IP subnet
information under the EPG at the site level.
Step 8 To create a new subnet, in the DNS EPG work pane, click + Add Subnet then, in the Add Subnet dialog,
enter the following attributes and then, click Save.
• Gateway IP: 10.10.1.254/24
• Shared between VRFs: checked
• No Default SVI Gateway: checked
Example:
Note The sole purpose of defining the IP subnet information under the provider EPG is to enable the
necessary VRF route-leaking functions between the Shared VRF and the other VRFs accessing the
shared services. The IP subnet that is configured at the BD provides the default gateway services,
so it is important to select the No Default SVI Gateway flag.
Step 9 To create the web-to-dns contract and ensure that the DNS EPG provides it, in the menu, under Templates,
select NY Only then, in the Contracts field, click Add Contract and then, enter the following attributes.
• Display Name: web-to-dns
• Scope: tenant
• Filter Chain: click + Add Filter and then, select any
Note By default, the contracts are created with VRF scope. In this case, the Web EPG, which is part of
a different VRF, consumes the contract, so you must modify the scope of the contract to be either
tenant or global.
Example:
Step 10 Under EPG, click DNS and then, click + Add Contract.
Example:
Step 11 In the Add Contract dialog, in the Contract field, select web-to-dns then, in the Type field, select provider
and then, click Save.
Example:
Step 12 To push the configuration to New York, click Deploy to Sites and then, click Deploy.
Now configure the Web EPG to consume the web-to-dns contract.
Step 13 In the menu, under Templates, click SF Only then, under EPG, click Web and under contracts, click + Add
Contract.
Example:
Step 14 In the Add Contract dialog, in the Contract field, select web-to-dns then, in the Type field, select consumer
and then, click Save.
Example:
Step 15 To push out the new configuration, click Deploy to Sites and then, click Deploy.
Step 16 Verify the configuration that is deployed to San Francisco (apic1-a).
a) Log into the APIC and then click Tenants > Tesla > Application Profiles > Webapp.
b) Click Topology.
Example:
A new BD, called DB-BD, will exist on both the San Francisco and New York sites, and will enable this use
case on the Tesla tenant.
Procedure
Step 1 In ACI MSO, click Schemas > L3-stretch-schema > SF and NY Template.
Step 2 In the Bridge Domain field select Add Bridge Domain, and configure with the following:
• Display Name: DB-BD
• Virtual Routing & Forwarding: VRF1
• INTERSITE BUM TRAFFIC ALLOW: Deselected (Click Yes on the Warning dialog)
• Add Subnet: 10.3.1.254/24
Example:
Note At this point, there are no EPGs that are shared across sites, so the Webapp Application profile has
not been configured under the SF and NY template.
Step 5 Click +Add EPG to create a new EPG with the following configuration:
• Display Name: DB
• Bridge Domain: DB-BD
Step 8 Verify the configuration has been deployed to San Francisco (apic1-a).
a) Log into the APIC and click Tenants > Tesla > Application Profiles > Webapp.
Step 9 Select the Topology tab.
Example:
Step 10 Verify the configuration has been deployed to New York (apic1-b).
a) Log into the APIC and click Tenants > Tesla > Application Profiles > Webapp.
Step 11 Select the Topology tab.
Example:
Procedure
Step 1 In the APIC for the San Francisco (apic1-a) screen, click Tenants > Brownfield.
Step 2 Click Tenant Brownfield from the menu.
Note that this tenant has the following configuration:
• VRF: Brownfield-VRF
• Bridge Domain: Brownfield-BD
• Application Profile: AP
• Application EPG: Brownfield-EPG
Example:
Step 3 Return to ACI Multisite Orchestrator Home page and then, click Application Management > Tenants >
Add Tenant.
Step 4 Enter the Display Name Brownfield and select both San Francisco and New York as the associated
sites.
Step 5 Click Save.
Note It is essential that the name of the tenant created on the MSC matches the name of the tenant in the
brownfield fabric from which the configuration will be imported. The newly created tenant should
then be associated with both existing sites, since the configuration will be imported from one and
stretched toward the other.
Example:
Step 8 Click + next to Templates to build your schema and then, select the default type ACI Multi-cloud.
Example:
Step 9 Click here to select a tenant and then, select the Brownfield tenant.
Example:
Step 12 In the resulting window, select the Application Profile and select AP.
The Import Relations toggle automatically switches to ON to import all the objects associated to the
Brownfield-AP application profile.
Step 13 Click Import and then, click Save.
Example:
Step 14 Select the Brownfield-BD bridge domain to verify that the configuration is not stretched (which is expected,
since it was imported from a specific site).
Step 15 Click the L2 Stretch check box to allow the bridge domain to be stretched to the Greenfield site. Click Yes
to acknowledge the warning.
Step 16 Check the Intersite BUM Traffic Allow box to ensure that BUM traffic is allowed.
Note This step is required in a real-life scenario to enable the migration of workloads from Brownfield
to Greenfield, leveraging live migration technologies (for example, vMotion in a vSphere
environment).
Example:
Now that the configuration has been imported into the Multi-Site Orchestrator, you must push the objects
to the Greenfield ACI fabric (the New York site).
Step 17 Rename Template1 to Migration-Template.
Step 18 Now, associate the Migration-Template to the New York site.
a) Under Sites, click +, select New York, and then click Save.
Example:
Step 22 Verify that the configuration is now displayed correctly in the New York APIC controller (apic1-b).
Example: