Lab2 Question Final
Exam Scenario
During this lab exam you are building three new data centers. You will perform common
configuration and troubleshooting tasks in all the data centers. These tasks are related to the
technologies that are outlined in the CCIE DC blueprint.
Data Center-1 (DC1) has conventional networking equipment based on Cisco Nexus 7000
and Nexus 5000 Series Switches. The compute platform used in DC1 is Cisco UCS B-Series
and C-Series servers, with a virtualized and non-virtualized application workload. For
storage services in DC1, a centralized storage device is used. The storage area network
services are provided using Cisco Nexus 5000 switches.
Data Center-2 (DC2) has a network fabric that is based on Cisco ACI technology. This
includes Cisco Nexus 9000 Spine Switches, Cisco Nexus 9000 Leaf Switches, and three
Cisco APICs. Cisco Nexus 7000 Series Switches are also used in DC2 to provide routing to
the WAN. The compute platform used in DC2 is Cisco UCS C-Series.
Data Center-3 (DC3) has a network fabric based on Cisco ACI technology. This includes Cisco
Nexus 9000 Spine Switches, Cisco Nexus 9000 Leaf Switches, and one Cisco APIC. The
compute platform used in DC3 is Cisco UCS C-Series.
All data centers are equipped with an out-of-band (OOB) management network. The management
ports of all the equipment in all the data centers are connected to this network. The console port of all
devices is connected to a Cisco 2811 router acting as a CommServer. You have full access to the
equipment via both the management ports as well as via console ports.
All data centers are managed from a centralized location within the WAN. This centralized
management network hosts management functions such as Cisco UCS Director and vCenter.
This network uses the out-of-band (OOB) management network for administering components
within the data centers.
This CCIE Data Center lab exam is divided into three sections that will test the proficiency of a
CCIE Data Center candidate:
1. Nexus Switch Infrastructure (DC1 and DC2)
2. Compute and Storage (DC1)
3. ACI (DC2 and DC3)
The topology shows the physical connectivity of the devices within DC1, DC2, and DC3. A logical
diagram and a detailed physical topology are provided with each question, where suitable.
Device Access
Note:
The GUI for the APIC and vCenter can be accessed from the Test-PC.
The VDCs for the N7K devices can be accessed from mputty on the Test-PC.
VLANs
VLAN ID NAME
10 VMotion
101 Web
102 App
103 DB
104 ESXI-MGMT
105 ACI-DCI
50 UCS FCOE VLAN for VSAN 50
57 Legacy EPG
Storage Objects
Read the guidelines, documents, and resources before you proceed to the next item.
1.2 vPC and Tagging/Trunking
Modify the existing configuration so that the Cisco UCS Fabric Interconnects see N5K1 and N5K2
as a single device. Make sure to delay vPC leg bring-up on the recovering vPC peer device for 150
seconds. Each vPC peer device must locally replicate the MAC address of the interface VLAN
defined on the other vPC peer device.
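A minimal NX-OS sketch of the kind of configuration involved (the vPC domain ID, peer-keepalive addresses, and port-channel number below are assumptions, not values from the lab):

feature vpc
vpc domain 10                                           ! domain ID is an assumption
  peer-keepalive destination 10.0.0.2 source 10.0.0.1   ! keepalive addresses are assumptions
  delay restore 150                                     ! delay vPC leg bring-up for 150 seconds on the recovering peer
  peer-gateway                                          ! locally replicate the MAC of the peer's interface VLAN
interface port-channel 20
  switchport mode trunk
  vpc 20                                                ! member port channel toward the UCS Fabric Interconnects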
2 Points
1.3 Configure BGP
Configure VXLAN between DC1-N5K-1, DC1-N5K-2, and BGW1 in DC1, and BGW2
and N7K1 in DC2. Ensure that:
The network virtualization edge (NVE) interface 1 is created to forward VXLAN traffic for
VLANs 101-104 with control-plane host learning between devices.
Loopback 0 should be used as the source interface.
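A rough NX-OS sketch of an NVE interface with BGP-based (control-plane) host learning; the VLAN-to-VNI mappings are assumptions:

feature nv overlay
feature vn-segment-vlan-based
vlan 101
  vn-segment 10101                   ! VNI value is an assumption; repeat for VLANs 102-104
interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp     ! control-plane host learning
  member vni 10101
    ingress-replication protocol bgp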
2 Points
1.4 PIM
Modify the existing configuration to support many sources and receivers and make sure PIM
routers do not build the SPT toward the source. Build a shared tree between sources and receivers
of a multicast group on all interfaces. Configure a PIM static RP address for a multicast group
range. Use RP address 100.0.0.1 in DC1 and RP address 200.0.0.1 in DC2.
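For DC1, a hedged sketch (the interface is an assumption, and on some releases the use-shared-tree-only command takes a route-map policy rather than a group prefix):

feature pim
ip pim rp-address 100.0.0.1 group-list 224.0.0.0/4    ! static RP for the ASM group range (200.0.0.1 in DC2)
ip pim use-shared-tree-only group-list 224.0.0.0/4    ! stay on the shared tree; do not build the SPT toward the source
interface Ethernet1/1                                 ! enable PIM on all relevant interfaces
  ip pim sparse-mode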
2 Points
1.5 EVPN
DC1-N5K1 and DC1-N5K2 must learn traffic from hosts on VLANs 101 to 104 through VRF
XANDAR. DC2-N7K-1 must learn traffic from DC2 hosts through VRF XANDAR. Hosts in
DC1 and DC2 must be able to ping each other. Use the information below to achieve reachability.
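The addressing details referenced above are not reproduced here; as a hedged sketch of the VRF/EVPN pieces involved (the VNI, RD, and RT values are assumptions):

nv overlay evpn
vrf context XANDAR
  vni 50000                          ! L3 VNI value is an assumption
  rd auto
  address-family ipv4 unicast
    route-target both auto evpn
evpn
  vni 10101 l2                       ! L2 VNI for VLAN 101; repeat for VLANs 102-104
    rd auto
    route-target import auto
    route-target export auto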
3 Points
1.6 OSPF
Modify the existing configuration to ensure that OSPF neighborship builds across all devices.
Device Router-ID
DC1-N5K-1 10.5.1.1
DC1-N5K-2 10.5.1.2
DC1-N7K-1 10.7.1.1
DC1-N7K-2 10.7.1.2
DC1-N9K-BGW1 10.9.1.1
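For example, on DC1-N5K-1 (the OSPF process tag, area, and interface are assumptions):

feature ospf
router ospf 1
  router-id 10.5.1.1
interface Ethernet1/1
  ip router ospf 1 area 0.0.0.0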
2 Points
1.7 NTP and Traffic Management
Configure N5K1 as an NTP server and N5K2 as an NTP client, and use loopback0 for
connectivity. Set the MTU to its maximum size of 9192 bytes for the whole switch on N5K1 and N5K2,
and configure a policy-map named jumbo that sets MTU 9192 for the default class.
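A sketch of the N5K-side configuration; the ntp master command and its stratum are assumptions (availability varies by release), and the client points at N5K1's loopback0 address from task 1.6:

! On N5K1 (NTP server)
ntp master 8
ntp source-interface loopback0
! On N5K2 (NTP client)
ntp server 10.5.1.1
ntp source-interface loopback0
! Jumbo MTU for the whole switch on both N5Ks, applied through the network-qos policy
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9192
system qos
  service-policy type network-qos jumbo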
3 Points
1.8 DCNM
Save the existing template int_loopback_11_1 as Loopback_n7k and modify it so that it allows you to
create a loopback interface on DC1-N7K-1.
2 Points
1.9 Python with BGW2
Modify the existing Python script bootflash:pythoninitial.py so that it configures the interface and
creates a VLAN on DC2 N9K-2 (BGW2).
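The original script is not reproduced here; a minimal on-box Python sketch of the kind of change involved, using the NX-OS cli module (the interface, VLAN ID, and VLAN name are assumptions):

# Hedged sketch for bootflash:pythoninitial.py - interface and VLAN values are assumptions
from cli import cli

# Create the VLAN
cli('configure terminal ; vlan 101 ; name Web')
# Configure the interface
cli('configure terminal ; interface ethernet 1/1 ; switchport ; switchport mode trunk ; no shutdown')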
2 Points
1.10 Syslog Server
Configure a syslog server on the Nexus 9K devices so that all messages with severity level
5 or lower are forwarded to it using VRF DATACENTER. Which set of configuration commands meets this requirement?
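One possible set, as a sketch (the syslog server address is an assumption):

logging server 10.10.10.10 5 use-vrf DATACENTER    ! forwards messages of severity 0-5 through VRF DATACENTER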
2 Points
2.1 Compute Connectivity
2 Points
2.2 LAN/SAN uplinks from fabric interconnects
LAN Connectivity:
Configure port channels from each fabric interconnect to DC1-N5K-1 and DC1-N5K-2 using
VLANs 101-104.
Configure port Ethernet 1/12 from each fabric interconnect to DC1-N5K-3 and DC1-N5K-4
using the VLANs listed in the table below.
Ensure that configuration is in place to maximize the number of VLANs in the domain and
minimize the CPU utilization caused by VLAN configuration.
SAN Connectivity:
Configure a Fibre Channel uplink from each fabric interconnect to DC1-N5K-3 and DC1-N5K-4
using the VSANs listed in the table.
Configure the fabric interconnects to view the SAN and LAN neighbors.
3 Points
2.3 UCS Resource pool and network configuration
Create new resource pools for UUID, MAC, WWNN, and WWPN. Use the information in this
table.
Create and configure a policy named LINK to detect a unidirectional link in these situations:
One of the interfaces cannot send or receive traffic.
One of the interfaces is down.
One of the fiber strands in the cable is disconnected.
Create vNIC templates, one on each fabric interconnect, named vNIC-A and vNIC-B, with
these configurations:
Use previously created resource pools and policies.
Use 9000 MTU.
No redundancy required.
Any changes in the template should be reflected in any vNICs created using it.
Add the necessary VLANs.
Create vHBA templates, one on each fabric interconnect, named vHBA-A and vHBA-B, with
these configurations:
2 Points
2.4 UCS service profiles
Modify the WEB-SRVR-TEMP template so that it allows service profile Web-SRVR1 to associate
successfully. All policies must be placed into the service profile template.
4 Points
2.5 Boot from SAN
Make changes in service profile ESX-SRVR01 and associate it to blade 1/1 so that it can boot from
storage. The requirements for the service profile are listed below. Use your own naming convention
where one is not provided.
3 Points
2.6 UCS Service Profile
2 Points
2.7 BB Credit and EE Credit
2 Points
2.8 Traffic Monitoring in UCS
Create SPAN session FCMon to analyze Fibre Channel traffic between Cisco UCS and Cisco
Nexus 5000 Series Switches. The requirements for the session are listed here. Use your own naming
convention where one is not provided.
2 Points
3.1 Access Provisioning
Configure a vPC domain to link Leaf-1 and Leaf-4 with each other, and configure a vPC with
DC2-N7K1/SW1 based on the above diagram.
vPC Domain Policies
Configure the interface policies, policy groups, and interface and switch profiles based on
the information below.
Interface Policy
Policy Group
Interface Profiles
2 Points
3.2 Domains, AAEP and vPC
Connect Leaf-1/Leaf-4 as a vPC toward DC2-N7K1/SW1. Configure the VLAN pool, domain,
and AAEP for the vPC based on the parameters below. This switch will be used later for an L2OUT
configuration.
VLAN POOL
AAEP
Name: vPC-AAEP
Domain: EXT-L2
IPG: IPG-SW1-vPC
Configure a port channel 10 on DC2-N7K1/SW1 based on the above diagram. Use LACP as
the port channel protocol. The port channel must be configured as a Layer 2 port channel.
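On the N7K side, a sketch along these lines (the member interfaces are assumptions):

interface Ethernet1/1, Ethernet1/2      ! member ports toward Leaf-1 and Leaf-4 are assumptions
  switchport
  channel-group 10 mode active          ! LACP
  no shutdown
interface port-channel 10
  switchport
  switchport mode trunk                 ! Layer 2 port channel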
Configure the domain and AAEP for servers based on the parameters below.
AAEP
Name: SERVERS-AAEP
Domain: ACI-PORTS
IPG: IPG-SERVERS
Configure the domain and AAEP for routers based on the parameters below.
AAEP
Name: ROUTERS-AAEP
Domain: EXT-L3
IPG: IPG-ROUTERS
2 Points
3.3 Tenant/Application Profile Provisioning
Configure a tenant based on the parameters below.
Tenant Parameters
Configure an application profile named BIZ-APP. Create two EPGs named EPG-WEB and
EPG-DB. Both the EPGs should use BD-2 as the bridge domain. Assign them static ports
based on this information:
Make sure that the endpoints are seen on the leaf switches.
2 Points
3.4 Internal Networking
Configure these filters for tenant Xandar:
Name: PING
Protocol: ICMP
Name: DB-ACCESS
Ports: 1521
Stateful: True
3 Points
3.5 External Networking
Configure and provision a contract to allow ICMP reachability between Legacy1 and SVI 50
on DC2-N7K1/SW1. Name the contract and subject EPG-EXTERNAL-VLAN-50. Use an
existing filter in the common tenant for this contract.
Configure the EPG-EXTERNAL to consume the contract and EPG-VLAN-50 to provide the
contract.
DC2-N7K1/SW1 has been configured with an SVI for VLAN 50 with an IP address of
172.16.1.111/24.
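For reference, that SVI corresponds to configuration like this on DC2-N7K1/SW1 (already in place per the task):

feature interface-vlan
vlan 50
interface Vlan50
  ip address 172.16.1.111/24
  no shutdown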
Make sure you can ping between Legacy1 and the DC2-N7K1/SW1 SVI for VLAN 50.
2 Points
3.6 L3-OUT
Configure an L3OUT to connect the Cisco ACI fabric with SW2 using OSPF. Name the
external network L3OUT. It must be able to communicate with BD-2. Use the appropriate
external domains to link LEAF-1 to DC2-N7K1/SW2 using the appropriate port.
Name the new EPG EPG-L3OUT. It should contain the 172.16.30.0/24 network.
Configure MP-BGP in AS 65001. Use Spine-1 and Spine-2 as route reflectors.
Verify that the neighbor relationship between SW2 and LEAF-1 is coming up.
Verify that the 172.16.30.0/24 route is propagated to LEAF-1.
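On the SW2 side, a rough sketch of the OSPF peering toward LEAF-1 (the uplink interface, peering subnet, and the source of the 172.16.30.0/24 prefix are assumptions):

feature ospf
router ospf 1
interface Ethernet1/10                  ! uplink toward LEAF-1 is an assumption
  no switchport
  ip address 172.16.20.2/30             ! peering subnet is an assumption
  ip router ospf 1 area 0.0.0.0
  no shutdown
interface loopback30                    ! hypothetical source of the 172.16.30.0/24 route
  ip address 172.16.30.1/24
  ip router ospf 1 area 0.0.0.0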
2 Points
3.7 Contract
Configure and provision a contract to allow only ICMP reachability from EPG-WEB toward
EPG-L3OUT. Name the contract and subject L3OUT-CONTRACT. Use an existing filter in
the common tenant for this contract.
Make sure you can ping 172.16.30.1 (SW2) from ACI Web-1 and/or ACI Web-2.
3 Points
3.8 FEX Profiles
Configure FEX profiles based on the above diagram. The FEX profile for N2K-3 must be
named FEX-3 and the FEX profile for N2K-4 should be named FEX-4. The ports should use
the existing IPG for servers.
Configure interface and switch profiles based on this information:
Interface Profiles
Switch Profiles
Verify that the FEX devices are up and the ports are seen on the leaf switches.
3 Points
3.9 Inter-Pod Communication
EPG-APP must be able to initiate access towards EPG-WEB using the APP-2-WEB
contract.
EPG-WEB should only be able to consume the contracts. EPG-APP should only provide
the contracts.
Test by initiating a ping from EPG-APP toward EPG-WEB.
You should be able to initiate a ping from EPG-WEB toward EPG-APP as well.
3 Points
3.10 The End