
PowerEdge MX Scalable Fabric Architecture
(Dell PowerEdge MX Networking Deployment Guide)
May 2023

Table of Contents

PowerEdge MX Scalable Fabric Architecture
    Scalable Fabric Architecture
    Complex Scalable Fabric topologies
    Quad-port Ethernet NICs
    Interfaces and port groups
    Recommended port order for MX7116n FEM connectivity
    Embedded top-of-rack switching
    MX Chassis management wiring


Scalable Fabric Architecture

Overview

A multichassis group enables multiple chassis to be managed as if they were a single chassis. A PowerEdge MX Scalable Fabric enables multiple chassis to behave like a single chassis from a networking perspective.

A Scalable Fabric consists of two main components: the MX9116n FSE and the MX7116n FEM. A typical configuration includes one MX9116n FSE and one MX7116n FEM in each of the first two chassis, and additional pairs of MX7116n FEMs in the remaining chassis. Each MX7116n FEM connects to the MX9116n FSE corresponding to its fabric and slot. This hardware-enabled architecture applies regardless of whether the switch is running in Full Switch or SmartFabric mode.

The following figure shows up to ten MX7000 chassis in a single Scalable Fabric. The first two chassis house MX9116n FSEs, while chassis 3 through 10 only house MX7116n FEMs.

All connections in the following figure use QSFP28-DD cables.

Note: For information on the Scalable Fabric Architecture using the 100 GbE solution with the MX8116n, see PowerEdge MX 100 GbE solution with external Fabric Switching Engine.

Note: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as recommended when in Full Switch mode.

Figure 21. Scalable Fabric example using Fabric A

Note: To expand from single-chassis to dual-chassis configuration, see Expanding from a single-chassis to dual-chassis configuration.

The following table shows the recommended IOM slot placement when creating a Scalable Fabric Architecture.

Table 2. Scalable Fabric Architecture maximum recommended design

MX7000 chassis    Fabric slot    IOM module
Chassis 1         A1             MX9116n FSE
                  A2             MX7116n FEM
Chassis 2         A1             MX7116n FEM
                  A2             MX9116n FSE
Chassis 3–10      A1             MX7116n FEM
                  A2             MX7116n FEM

To provide further redundancy and throughput to each compute sled, Fabric B can be used to create an additional Scalable Fabric Architecture. Utilizing Fabric A and B can provide up to eight 25-Gbps connections to each MX740c or sixteen 25-Gbps connections to each MX840c.

Figure 22. Two Scalable Fabrics spanning two MX7000 chassis

Restrictions and guidelines

The following restrictions and guidelines apply when building a Scalable Fabric:

- All MX7000 chassis in the same Scalable Fabric must be in the same multichassis group.
- Mixing IOM types in the same Scalable Fabric (for example, MX9116n FSE in fabric slot A1 and MX5108n in fabric slot A2) is not supported. See PowerEdge MX IOM slot support matrix for more information about IOM placement.
- All participating MX9116n FSEs and MX7116n FEMs must be in MX7000 chassis that are part of the same MCM group. For more information, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
- When using both Fabric A and B for a Scalable Fabric, the following restrictions apply:
  - IOM placement for each fabric must be the same in each chassis. For instance, if an MX9116n FSE is in chassis 1 fabric slot A1, then the second MX9116n FSE should be in chassis 1 fabric slot B1.
  - Chassis 3 through 10, which contain only MX7116n FEMs, must connect to the MX9116n FSE that is in the same group.

Note: For information about the recommended MX9116n FSE port connectivity order, see the Additional Information section.

Complex Scalable Fabric topologies

Beginning with OME-M 1.20.00 and SmartFabric OS10.5.0.7, additional Scalable Fabric topologies are supported in Full Switch and SmartFabric modes. These topologies are more complex than the ones presented in previous sections. These designs enable physical NIC redundancy using a single pair of switches instead of two pairs, providing a significant cost reduction.

These complex topologies support connections between MX9116n FSEs in Fabric A and MX7116n FEMs in Fabric B across single and multiple chassis, up to a total of five chassis. When connecting the FSE and FEMs, ensure that the numeric slot numbers match. For example, an MX9116n FSE in slot A1 can be connected to an MX7116n FEM in slot B1 (same chassis), slot A1 (second chassis), or slot B1 (second chassis), and so on.

Note: Cabling multiple chassis together with these topologies can become very complex. Care must be taken to correctly connect each component.

The complex scalable fabric topologies in this section apply to dual-port Ethernet NICs.

These complex topologies are described as follows.

Note: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as recommended when in Full Switch mode.

Single chassis:

- MX9116n FSE in slot A1 is connected to the MX7116n FEM in slot B1.
- MX9116n FSE in slot A2 is connected to the MX7116n FEM in slot B2.

Figure 23. Single chassis topology

Dual chassis:

- MX9116n FSE in Chassis 1 slot A1 is connected to MX7116n FEMs in Chassis 1 slot B1, Chassis 2 slot A1, and Chassis 2 slot B1.
- MX9116n FSE in Chassis 2 slot A2 is connected to MX7116n FEMs in Chassis 1 slot A2, Chassis 1 slot B2, and Chassis 2 slot B2.

Figure 24. Dual chassis topology

Multiple chassis:

The topology with multiple chassis is similar to the dual-chassis topology. Make sure to connect the FSE and FEM using the same numeric slot numbers. For example, connecting an FSE in Chassis 1 slot A1 to a FEM in Chassis 2 slot B2 is not supported.

Figure 25. Multiple chassis topology

Quad-port Ethernet NICs

PowerEdge MX 1.20.10 adds support for the Broadcom 57504 quad-port Ethernet adapter. For chassis with MX7116n FEMs, only the first QSFP28-DD port of the FEM is used when attaching dual-port NICs; both the first and second QSFP28-DD ports of the MX7116n FEM are used when attaching quad-port NICs. When both QSFP28-DD ports are connected, a server with a dual-port NIC still uses only the first port on each FEM, while a server with a quad-port NIC uses both ports.

Note: The MX5108n Ethernet switch does not support quad-port adapters.

Note: The Broadcom 57504 quad-port Ethernet adapter is not a converged network adapter and does not support FCoE or iSCSI offload.

The MX9116n FSE has sixteen 25 GbE server-facing ports, ethernet1/1/1 through ethernet1/1/16, which are used when the PowerEdge MX server sleds are in the same chassis as the MX9116n FSE.

With only dual-port NICs in all server sleds, only the odd-numbered server-facing ports are active. If the server has a quad-port NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the NIC ports are connected and show a link up.

The following table shows the MX server sled to MX9116n FSE interface mapping for dual-port NIC servers that are directly connected to the switch.

Table 3. Interface mapping for dual-port NIC servers

Sled number    MX9116n FSE server interface
Sled 1         ethernet 1/1/1
Sled 2         ethernet 1/1/3
Sled 3         ethernet 1/1/5
Sled 4         ethernet 1/1/7
Sled 5         ethernet 1/1/9
Sled 6         ethernet 1/1/11
Sled 7         ethernet 1/1/13
Sled 8         ethernet 1/1/15



With quad-port NICs in all server sleds, both the odd- and even-numbered server-facing ports are active. The following table shows the MX server sled to MX9116n FSE interface mapping for quad-port NIC servers that are directly connected to the switch.

Table 4. Interface mapping for quad-port NIC servers

Sled number    MX9116n FSE server interfaces
Sled 1         ethernet 1/1/1, ethernet 1/1/2
Sled 2         ethernet 1/1/3, ethernet 1/1/4
Sled 3         ethernet 1/1/5, ethernet 1/1/6
Sled 4         ethernet 1/1/7, ethernet 1/1/8
Sled 5         ethernet 1/1/9, ethernet 1/1/10
Sled 6         ethernet 1/1/11, ethernet 1/1/12
Sled 7         ethernet 1/1/13, ethernet 1/1/14
Sled 8         ethernet 1/1/15, ethernet 1/1/16


When using multiple chassis and MX7116n FEMs, virtual slots are used to maintain a continuous mapping between the NIC and the physical port. For more information on virtual slots, see Virtual ports and slots.

In a multiple-chassis Scalable Fabric, the interface numbers for the first two chassis are mixed, because one NIC connection is to the MX9116n in the same chassis as the server and the other NIC connection is to the MX7116n FEM. The following table shows the server interface mapping for Chassis 1 using quad-port adapters.

Table 5. Interface mapping for multiple chassis

Chassis 1      Chassis 1 MX9116n                  Chassis 2 MX9116n
sled number    server interfaces                  server interfaces
Sled 1         ethernet 1/1/1, ethernet 1/1/2     ethernet 1/71/1, ethernet 1/71/9
Sled 2         ethernet 1/1/3, ethernet 1/1/4     ethernet 1/71/2, ethernet 1/71/10
Sled 3         ethernet 1/1/5, ethernet 1/1/6     ethernet 1/71/3, ethernet 1/71/11
Sled 4         ethernet 1/1/7, ethernet 1/1/8     ethernet 1/71/4, ethernet 1/71/12
Sled 5         ethernet 1/1/9, ethernet 1/1/10    ethernet 1/71/5, ethernet 1/71/13
Sled 6         ethernet 1/1/11, ethernet 1/1/12   ethernet 1/71/6, ethernet 1/71/14
Sled 7         ethernet 1/1/13, ethernet 1/1/14   ethernet 1/71/7, ethernet 1/71/15
Sled 8         ethernet 1/1/15, ethernet 1/1/16   ethernet 1/71/8, ethernet 1/71/16



Quad-port NIC restrictions and guidelines

- If the server has a quad-port NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the NIC ports are connected and show a link up.
- Both ports on the MX7116n FEM must be connected to the same MX9116n FSE.
  Note: Do not connect one MX7116n FEM port to one MX9116n FSE and the other MX7116n FEM port to another MX9116n FSE. This configuration is not supported and is shown in the Unsupported configuration for quad-port NICs figure.
- If a Scalable Fabric has some chassis with quad-port NICs and some with only dual-port NICs, only the chassis with quad-port NICs require the second MX7116n FEM port to be connected, as shown in the Multiple chassis topology with quad-port and dual-port NICs – single fabric figure.
- It is supported to have a dual-port NIC in Fabric A and a quad-port NIC in Fabric B (or the inverse), or to have a quad-port NIC in both Fabric A and Fabric B.
- Up to five chassis with quad-port NICs are supported in a single Scalable Fabric.

The following set of figures shows the basic supported topologies when using quad-port Ethernet adapters.

Note: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as recommended when in Full Switch mode.

The following figure shows a single-chassis topology with quad-port NICs. Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE.

Figure 26. Single-chassis topology with quad-port NICs – dual fabric

The following figure shows the two-chassis topology with quad-port NICs in each chassis. Only a single fabric is configured. Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE.

Figure 27. Two-chassis topology with quad-port NICs – single fabric

The following figure shows the two-chassis topology with quad-port NICs. Dual fabrics are configured.

Figure 28. Two-chassis topology with quad-port NICs – dual fabric

The following figure shows the multiple chassis topology with quad-port NICs. Only a single fabric is configured.

Figure 29. Multiple chassis topology with quad-port NICs – single fabric

The following figure shows the multiple chassis topology with quad-port NICs in two chassis and dual-port NICs in one chassis. Only a single fabric is configured. Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE for the chassis with the quad-port cards. Do not connect the second port on the MX7116n FEM when it is configured with a dual-port NIC.

Figure 30. Multiple chassis topology with quad-port and dual-port NICs – single fabric

The following figure shows one example of an unsupported topology. The ports on an MX7116n FEM must never be connected to different MX9116n FSEs.

Figure 31. Unsupported configuration for quad-port NICs

Interfaces and port groups

On the MX9116n FSE and MX5108n, server-facing interfaces are internal and are enabled by default. To view the backplane port connections to servers, use the show inventory media command.

In the output, a server-facing interface displays INTERNAL as its media. A FIXED port does not use external transceivers and always displays Dell EMC-Qualified as true.

OS10# show inventory media
--------------------------------------------------------------------------------
System Inventory Media
--------------------------------------------------------------------------------
Node/Slot/Port  Category    Media                        Serial-Number    Dell EMC-Qualified
--------------------------------------------------------------------------------
1/1/1           FIXED       INTERNAL                                      true
1/1/2           FIXED       INTERNAL                                      true
1/1/3           FIXED       INTERNAL                                      true
1/1/4           FIXED       INTERNAL                                      true
1/1/5           FIXED       INTERNAL                                      true
1/1/6           FIXED       INTERNAL                                      true
1/1/7           FIXED       INTERNAL                                      true
1/1/8           FIXED       INTERNAL                                      true
1/1/9           FIXED       INTERNAL                                      true
1/1/10          FIXED       INTERNAL                                      true
1/1/11          FIXED       INTERNAL                                      true
1/1/12          FIXED       INTERNAL                                      true
1/1/13          FIXED       INTERNAL                                      true
1/1/14          FIXED       INTERNAL                                      true
1/1/15          FIXED       INTERNAL                                      true
1/1/16          FIXED       INTERNAL                                      true
1/1/17          QSFP28-DD   QSFP28-DD 200GBASE 2SR4 AOC  TW04829489D0007  true
1/1/18          QSFP28-DD   QSFP28-DD 200GBASE 2SR4 AOC  TW04829489D0007  true
1/1/19          Not Present
1/1/20          Not Present
1/1/21          Not Present
--------------------- Output Truncated ---------------------------------------
1/1/37          QSFP28-DD   QSFP28-DD 200GBASE 2SR4 AOC  TW04829489J0021  true
1/1/38          QSFP28-DD   QSFP28-DD 200GBASE 2SR4 AOC  TW04829489J0021  true
1/1/39          QSFP28-DD   QSFP28-DD 200GBASE 2SR4 AOC  TW04829489J0024  true
1/1/40          QSFP28-DD   QSFP28-DD 200GBASE 2SR4 AOC  TW04829489J0024  true
1/1/41          QSFP28      QSFP28 100GBASE CR4 2M       CN0APX0084G1F05  true
1/1/42          QSFP28      QSFP28 100GBASE CR4 2M       CN0APX0084G1F49  true
--------------------- Output Truncated ---------------------------------------

To view the server-facing interface port status, use the show interface status command. Server-facing ports are numbered 1/1/1 to 1/1/16.

For the MX9116n FSE, servers that have a dual-port NIC connect only to odd-numbered internal Ethernet interfaces; for example, an MX740c in slot one uses 1/1/1, and an MX840c in slots five and six occupies 1/1/9 and 1/1/11.

Note: Even-numbered Ethernet ports between 1/1/1–1/1/16 are reserved for quad-port NICs.
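
As a quick check in Full Switch mode, a sled's active NIC ports can be confirmed from the switch CLI. The following is a minimal, illustrative sketch only; the status, speed, mode, and VLAN values shown are assumptions and vary by deployment.

OS10# show interface status
Port          Description    Status     Speed    Duplex    Mode    Vlan
Eth 1/1/1                    up         25G      full      A       1
Eth 1/1/2                    down
<output truncated>

In this sketch, sled 1 has a dual-port NIC, so odd-numbered port 1/1/1 shows up while even-numbered port 1/1/2 (reserved for a quad-port NIC) stays down.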

A port group is a logical port that consists of one or more physical ports and provides a single interface. Only the MX9116n FSE supports the following port groups:

- QSFP28-DD – port groups 1 through 12
- QSFP28 – port groups 13 and 14
- QSFP28 unified – port groups 15 and 16

The following figure shows these port groups along the top, and the bottom shows the physical ports in each port group. For instance, QSFP28-DD port group 1 has member ports 1/1/17 and 1/1/18, and unified port group 15 has a single member, port 1/1/43.

Figure 32. MX9116n FSE port groups

QSFP28-DD port groups

On the MX9116n FSE, the QSFP28-DD port groups are 1 through 12, which contain ports 1/1/17 through 1/1/40 and are used to:

- Connect to an MX7116n FEM to extend the Scalable Fabric
- Connect to an Ethernet rack server or storage device
- Connect to another networking device, typically an Ethernet switch

By default, QSFP28-DD port groups 1 through 9 are in fabric-expander-mode and QSFP28-DD port groups 10 through 12 are in 2x 100 GbE breakout mode. Fabric Expander mode is an 8x 25 GbE interface mode that is used only to connect to MX7116n FEMs in additional chassis. The interfaces from the MX7116n FEM appear as standard Ethernet interfaces from the perspective of the MX9116n FSE.

The following figure illustrates how the QSFP28-DD cable provides 8x 25 GbE lanes between the MX9116n FSE and an MX7116n FEM.

Figure 33. QSFP28-DD connection between MX9116n FSE and MX7116n FEM

Note: Compute sleds with dual-port NICs require only MX7116n FEM port 1 to be connected.

In addition to fabric-expander-mode, QSFP28-DD port groups support the following Ethernet breakout configurations:

Using QSFP28-DD optics/cables:
- 2x 100 GbE – Break out a QSFP28-DD port into two 100-GbE interfaces
- 2x 40 GbE – Break out a QSFP28-DD port into two 40-GbE interfaces
- 8x 25 GbE – Break out a QSFP28-DD port into eight 25-GbE interfaces
- 8x 10 GbE – Break out a QSFP28-DD port into eight 10-GbE interfaces

Using QSFP28 optics/cables:
- 1x 100 GbE – Break out a QSFP28-DD port into one 100-GbE interface
- 4x 25 GbE – Break out a QSFP28-DD port into four 25-GbE interfaces

Using QSFP+ optics/cables:
- 1x 40 GbE – Break out a QSFP28-DD port into one 40-GbE interface
- 4x 10 GbE – Break out a QSFP28-DD port into four 10-GbE interfaces

Note: Before changing the port breakout configuration from one setting to another, the port must first be set back to the hardware default setting.

Note: QSFP28-DD ports are backwards compatible with QSFP28 and QSFP+ optics and cables.
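
In Full Switch mode, a breakout is applied at the port-group level from the OS10 CLI. The following is a minimal sketch only, assuming the standard OS10 port-group mode syntax; the choice of port group 1/1/10 and the 8x 25 GbE mode are illustrative, the prompts and exact mode keywords can vary by OS10 release, and lines starting with "!" are annotations.

OS10# configure terminal
OS10(config)# port-group 1/1/10
! Ensure the port group is at its hardware default before selecting a new breakout (see the note above)
OS10(conf-pg-1/1/10)# mode Eth 25g-8x
OS10(conf-pg-1/1/10)# end
! Verify the resulting mode and member interfaces
OS10# show port-group

In SmartFabric mode, the equivalent change is typically made from the OME-M console by editing the port-group breakout type rather than from the CLI.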

Single-density QSFP28 port groups

On the MX9116n FSE, the single-density QSFP28 port groups are 13 and 14, which contain ports 1/1/41 and 1/1/42 respectively and are used to connect to upstream networking devices. By default, both port groups are set to 1x 100 GbE. Port groups 13 and 14 support the following Ethernet breakout configurations:

- 4x 10 GbE – Break out a QSFP28 port into four 10-GbE interfaces
- 1x 40 GbE – Set a QSFP28 port to 40 GbE mode
- 4x 25 GbE – Break out a QSFP28 port into four 25-GbE interfaces
- 2x 50 GbE – Break out a QSFP28 port into two 50-GbE interfaces
- 1x 100 GbE – Reset the QSFP28 port back to the default, 100-GbE mode

QSFP28 unified port groups

Unified port groups operate as either Ethernet or FC. By default, both unified port groups, 15 and 16, are set to 1x 100 GbE. To activate the two port groups as FC interfaces in Full Switch mode, use the mode fc command (a hedged CLI sketch follows the FC breakout list below). Both port groups are enabled as Ethernet or FC together; you cannot have port group 15 as Ethernet and port group 16 as Fibre Channel.

The MX9116n FSE unified port groups support the following Ethernet breakout configurations:

- 4x 10 GbE – Break out a QSFP28 port into four 10-GbE interfaces
- 1x 40 GbE – Set a QSFP28 port to 40 GbE mode
- 4x 25 GbE – Break out a QSFP28 port into four 25-GbE interfaces
- 2x 50 GbE – Break out a QSFP28 port into two 50-GbE interfaces
- 1x 100 GbE – Reset the unified port back to the default, 100-GbE mode

The MX9116n FSE unified port groups support the following FC breakout configurations:

- 4x 8 Gb – Break out a unified port group into four 8-Gb FC interfaces
- 2x 16 Gb – Break out a unified port group into two 16-Gb FC interfaces
- 4x 16 Gb – Break out a unified port group into four 16-Gb FC interfaces
- 1x 32 Gb – Break out a unified port group into one 32-Gb FC interface
- 2x 32 Gb – Break out a unified port group into two 32-Gb FC interfaces
- 4x 32 Gb – Break out a unified port group into four 32-Gb FC interfaces, rate limited

Note: After enabling FC on the unified ports, these ports are set administratively down and must be enabled in order to be used.
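
The following Full Switch mode sketch shows one way this might look, assuming the OS10 port-group FC mode syntax; the 4x 16 Gb mode is chosen arbitrarily, port group 1/1/15 and its member port 1/1/43 follow the mapping described above, the prompts and exact mode keywords may vary by OS10 release, and lines starting with "!" are annotations.

OS10# configure terminal
OS10(config)# port-group 1/1/15
! Break the unified port group out into four 16-Gb FC interfaces
OS10(conf-pg-1/1/15)# mode FC 16g-4x
OS10(conf-pg-1/1/15)# exit
! The new FC interfaces come up administratively down and must be enabled
OS10(config)# interface fibrechannel 1/1/43:1
OS10(conf-if-fc1/1/43:1)# no shutdown
OS10(conf-if-fc1/1/43:1)# end

Repeat the no shutdown step for each FC subport that is cabled. In SmartFabric mode, the unified ports are typically configured through the OME-M console instead.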

Rate limited 32 Gb Fibre Channel

When using 32-Gb FC, the actual data rate is 28 Gbps due to 64b/66b encoding. The following figure shows unified port group 15 set to 4x 32 Gb FC mode. However, each of the four lanes is 25 Gbps, not 28 Gbps. When these lanes are mapped from the Network Processing Unit (NPU) to the FC ASIC for conversion to FC signaling, the four 32 Gb FC interfaces are mapped to four 25 Gbps lanes. With each lane operating at 25 Gbps rather than 28 Gbps, the result is rate limited to 25 Gbps.

Figure 34. 4x 32 Gb FC breakout mode, rate limit of 25 Gbps

While each 32 Gb FC connection provides 25 Gbps, the overall FC bandwidth available is 100 Gbps per unified port group, or 200 Gbps for both port groups. However, if an application requires the maximum 28 Gbps throughput per port, use the 2x 32 Gb breakout mode. This mode configures the connections between the NPU and the FC ASIC as shown in the following figure.

Figure 35. 2x 32 Gb FC breakout mode

In 2x 32 Gb FC breakout mode, the MX9116n FSE binds two 50 Gbps links together to provide a total of 100 Gbps of bandwidth per lane to the FC ASIC. This results in the two FC ports operating at 28 Gbps. The overall FC bandwidth available is 56 Gbps per unified port group, or 112 Gbps for both (compared to the 200 Gbps using 4x 32 Gb FC).

Note: Rate limited ports are not oversubscribed ports. There is no FC frame drop on these ports, and buffer-to-buffer credit exchanges ensure flow consistency.

Virtual ports and slots

A virtual port is a logical interface that connects to a downstream server and has no physical location on the switch. Virtual ports are created when an MX9116n FSE onboards (discovers and configures) an MX7116n FEM.

If an MX7116n FEM is moved and cabled to a different QSFP28-DD port on the MX9116n FSE, all software configurations on the virtual ports are maintained. Only the QSFP28-DD breakout interfaces mapped to the virtual ports change.

A virtual slot contains all provisioned virtual ports across one or both FEM connections. On the MX9116n FSE, virtual slots 71 through 82 are pre-provisioned, and each virtual slot has eight virtual ports. For example, virtual slot 71 contains virtual ports ethernet 1/71/1 through 1/71/8. When a quad-port adapter is used, the virtual slot expands to 16 virtual ports, for example ethernet 1/71/1 through 1/71/16.

If the MX9116n FSE is in SmartFabric mode, the MX7116n FEM is automatically configured with a virtual slot ID and virtual ports that are mapped to the physical interfaces. The following table shows how the physical ports are mapped to the virtual slot and ports.

If the MX9116n FSE is in Full Switch mode, it automatically discovers the MX7116n FEM when the following conditions are met:

- The MX7116n FEM is connected to the MX9116n FSE by attaching a Dell qualified cable between the QSFP28-DD ports on both devices.
- The QSFP28-DD port group on the MX9116n FSE that connects to the MX7116n FEM is in 8x 25 GbE FEM mode.
- At least one blade server is inserted into the MX7000 chassis containing the MX7116n FEM.

The FEM is automatically discovered and provisioned into a virtual slot when operating in SmartFabric mode. In Full Switch mode, this mapping is done with the unit-provision command. For more information, see the show unit-provision command.

To verify that an MX7116n FEM is communicating with the MX9116n FSE, enter the show discovered-expanders command.

MX9116n-FSE# show discovered-expanders
Service    Model          Type   Chassis       Chassis-slot   Port-group   Virtual
tag                               service-tag                               Slot-Id
-----------------------------------------------------------------------------------
D10DXC2    MX7116n FEM    1      SKY002Z       A1             1/1/1        71


Table 6. Virtual Port mapping example 1

MX7116n       MX9116n QSFP28-DD   MX9116n physical   MX7116n virtual   MX7116n
service tag   port group          interface          slot (ID)         virtual ports
12AB3456      portgroup1/1/1      1/1/17:1           71                1/71/1
                                  1/1/17:2                             1/71/2
                                  1/1/17:3                             1/71/3
                                  1/1/17:4                             1/71/4
                                  1/1/18:1                             1/71/5
                                  1/1/18:2                             1/71/6
                                  1/1/18:3                             1/71/7
                                  1/1/18:4                             1/71/8

Use the same command to show the list of MX7116n FEMs in a quad-port NIC configured scenario, in which each MX7116n FEM creates two connections with the MX9116n FSE. In a dual-chassis scenario, MX7116n FEMs are connected on port group 1 and port group 7 of the MX9116n FSE, as shown below. For example, if the quad-port NIC is configured on compute sled 1, then virtual ports 1/71/1 and 1/71/9 will be up.

MX9116N-1# show discovered-expanders
Service    Model          Type   Chassis       Chassis-slot   Port-group   Virtual
tag                               service-tag                               Slot-Id
-----------------------------------------------------------------------------------
D10DXC2    MX7116n FEM    1      SKY002Z       A1             1/1/1        71
D10DXC2    MX7116n FEM    1      SKY002Z       A1             1/1/7        71
D10DXC4    MX7116n FEM    1      SKY003Z       A1             1/1/2        72

Table 7. Virtual Port mapping example 2

MX7116n       MX9116n QSFP28-DD   MX9116n physical   MX7116n virtual   MX7116n
service tag   port group          interface          slot (ID)         virtual ports
12AB3456      portgroup1/1/1      1/1/17:1           71                1/71/1
                                  1/1/17:2                             1/71/2
                                  1/1/17:3                             1/71/3
                                  1/1/17:4                             1/71/4
                                  1/1/18:1                             1/71/5
                                  1/1/18:2                             1/71/6
                                  1/1/18:3                             1/71/7
                                  1/1/18:4                             1/71/8
              portgroup1/1/7      1/1/29:1                             1/71/9
                                  1/1/29:2                             1/71/10
                                  1/1/29:3                             1/71/11
                                  1/1/29:4                             1/71/12
                                  1/1/30:1                             1/71/13
                                  1/1/30:2                             1/71/14
                                  1/1/30:3                             1/71/15
                                  1/1/30:4                             1/71/16


The MX9116n physical interfaces mapped to the MX7116n virtual ports display dormant (instead of up) in the show interface status output until a virtual port starts to transmit server traffic.

MX9116n-FSE# show interface status
Port            Description    Status     Speed    Duplex    Mode    Vlan
Eth 1/1/17:1                   dormant
Eth 1/1/17:2                   dormant
<output truncated>

Recommended port order for MX7116n FEM connectivity

While any QSFP28-DD port can be used for any purpose, the following table and figure outline the recommended, but not required, port order for connecting the chassis with the MX7116n FEM modules to the MX9116n FSE to optimize NPU utilization.

Note: If you are using the connection order shown in the following table, you must change the port group 10 breakout type to FabricExpander, because port groups 10 through 12 default to 2x 100 GbE breakout mode.

Table 8. Recommended PowerEdge MX7000 chassis connection order

Chassis   MX9116n FSE port group   Physical port numbers
1/2       Port group 1             17 and 18
3         Port group 7             29 and 30
4         Port group 2             19 and 20
5         Port group 8             31 and 32
6         Port group 3             21 and 22
7         Port group 9             33 and 34
8         Port group 4             23 and 24
9         Port group 10            35 and 36
10        Port group 5             25 and 26

Figure 36. Recommended MX7000 chassis connection order

Embedded top-of-rack switching

Most environments with blade servers also have rack servers. The following figure shows a typical design in which rack servers connect to their respective top-of-rack (ToR) switches and blade chassis connect to a different set of ToR switches. If the storage array is Ethernet-based, it is typically connected to the core/spine. This design is inefficient and expensive.

Figure 37. Traditional mixed blade/rack networking

Communication between rack and blade servers must traverse the core, increasing latency, and the storage array consumes expensive core switch ports. All of this results in increased operational cost from the larger number of managed switches.

Embedded ToR functionality is built into the MX9116n FSE. Configure any QSFP28-DD port to break out into 8x 10 GbE or 8x 25 GbE and connect the appropriate cables and optics (see the CLI sketch after Figure 38 below). This enables all servers and storage to connect directly to the MX9116n FSE, and communication between all of these devices stays within the switch. This provides a single point of management and network security while reducing cost and improving performance and latency.

The preceding figure shows eight switches in total. In the following figure, using embedded ToR, the switch count is reduced to the two MX9116n FSEs in the two chassis:

Figure 38. MX9116n FSE embedded ToR
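
As with the earlier QSFP28-DD breakout example, enabling embedded ToR in Full Switch mode is a port-group breakout followed by standard interface configuration. The following minimal sketch assumes the OS10 port-group mode syntax and uses port group 1/1/12 with 10 GbE rack servers purely for illustration; lines starting with "!" are annotations.

OS10# configure terminal
OS10(config)# port-group 1/1/12
! Break the QSFP28-DD port group out into eight 10-GbE interfaces for rack servers or storage
OS10(conf-pg-1/1/12)# mode Eth 10g-8x
OS10(conf-pg-1/1/12)# end

The resulting breakout interfaces are then cabled to the rack servers or storage and configured like any other Ethernet port (VLANs, port channels, and so on).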

MX Chassis management wiring

You can use the automatic uplink detection and network loop prevention features in OME-Modular to connect multiple chassis with cables. This cabling or wiring method is called stacking. Stacking saves ports on the data center switches while providing network access for each chassis.

While wiring a chassis, connect one network cable from each management module to the out-of-band (OOB) management switch of the data center. Ensure that both ports on the OOB management switch are enabled and are in the same network and VLAN.

The following image is a representation of the individual chassis wiring:

Figure 39. Individual chassis management wiring

The following image is a representation of the two-chassis wiring:

Figure 40. Two-chassis management wiring

The following image is a representation of the multi-chassis wiring:

Figure 41. Multi-chassis management wiring
