FX2 Storage Networking With ISCSI v1.0-2
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL
INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
Copyright © 2017 Dell Inc. or its subsidiaries. All rights reserved. Dell and the Dell EMC logo are trademarks of Dell Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
This document focuses on three specific elements of the Dell EMC portfolio:
Dell EMC PowerEdge FX2: FX2 is an innovative design with modular IT building blocks that address evolving
workloads precisely. The flexibility of the FX2 chassis and its variety of components suits a wide range of
customers: data centers building private clouds, web service providers, dedicated hosting organizations, and
enterprises addressing growth with easily scalable platforms. Because specific targets are configuration
dependent, customers can tailor infrastructure with the right power, storage, and connectivity to meet
specific workload needs.
Dell EMC Networking: The Dell EMC Networking FN-series of switches runs inside the FX2 chassis.
An integrated networking solution, the FN-series provides Ethernet connectivity as well as LAN/SAN
convergence with iSCSI and FCoE support. The S-series switches are multi-layer Ethernet switches that scale
the network across multiple chassis. The switches run Dell EMC's own operating system or an operating
system from one of Dell EMC's Open Networking ecosystem partners.
Dell EMC Unity: The Unity family delivers high-performance hybrid or All-Flash midrange, unified storage
with NAS and SAN connectivity. Unity simplifies and modernizes today’s data center with a powerful
combination of enterprise capabilities and cloud-like simplicity.
This guide provides step-by-step instructions for deploying iSCSI using Dell EMC PowerEdge FX2s and Dell
EMC Unity 500F. It covers configuration of the physical switches, the ESXi hosts, a Virtual Distributed Switch
(VDS), and Unity 500F storage. The instructions are written for a network administrator or engineer with
traditional networking and VMware ESXi experience.
The PowerEdge FX2s enclosure is a 2-rack unit (RU) computing platform. It has capacity for two FC830 full-
width servers, four FC630 half-width servers, or eight FC430 quarter-width servers. The enclosure is also
available with a combination of servers and storage sleds. The FX2s enclosure used for the example in this
guide contains four FC630 servers as shown in Figure 1:
Dell EMC PowerEdge FX2s (front) with four PowerEdge FC630 servers
The back of the FX2s enclosure includes one Chassis Management Controller (CMC), two FN IO Modules
(FN IOMs), eight Peripheral Component Interconnect Express (PCIe) expansion slots, and redundant power
supplies as shown in Figure 2:
Note: The Dell EMC PowerEdge FX enclosure is currently available in two models: FX2 and FX2s. The FX2
enclosure is similar to the FX2s but does not support storage sleds and PCIe slots.
PowerEdge FC630
All three FN IOM options provide eight 10GbE internal, server-facing ports and four external ports. The FX2s
enclosure for the example deployment in this guide uses two PowerEdge FN410S IOMs.
FN410S - 4-port SFP+ I/O Module: Provides four SFP+ 10GbE ports. Supports optical and Direct Attach
Copper (DAC) cable media.
FN410T - 4-port 10GBASE-T I/O Module: Provides four 10GBASE-T ports. Supports cost-effective copper
media up to 100 meters.
FN2210S - 4-port Combo FC/Ethernet I/O Module: Provides four ports. Up to two ports can be configured for
2, 4, or 8 Gbit/s FC. The remaining ports provide SFP+ 10GbE Ethernet connectivity. Supports optical and
DAC cable media.
PowerEdge FN410S
SAN and LAN connectivity: four FC630 servers with dual-port CNAs in the FX2s chassis connect at 10GbE through FN410S-A1 and FN410S-A2 to the spine and the storage array.
SAN connectivity: four FC630 servers with dual-port CNAs in the FX2s chassis connect at 10GbE through FN410S-A1 and FN410S-A2 to S4048-ON-1 and S4048-ON-2 and on to the storage array.
Note: The use of a leaf-spine network in the datacenter as shown in Figure 10 is considered a best practice.
See Dell EMC NSX Reference Architecture - FC630 Compute Nodes with iSCSI Storage for a detailed
description and configuration instructions for a leaf-spine network.
Ethernet topology: four FC630 servers with dual-port CNAs in the FX2s chassis connect at 10GbE through FN410S-A1 and FN410S-A2 to the S6010-ON spine (LAN).
Topology detail: FC630 nodes n1-n4 in the FX2s chassis connect at 10 GbE to the S4048-ON leaf pair (LEAF A and LEAF B, joined by VLT), which connects through port channel Po 3 to the S6010-ON spine; the storage array (SP A, Ethernet and FC) and a separate management network are also shown.
In this example, port channels connect each FN410S to S4048-ON switches that act as leaf switches. The
leaf switches use VLT for increased bandwidth and resilience. Port channels from each leaf switch connect
to an S6010-ON switch acting as a spine switch. Each of these port channels consists of two 40GbE
members. The comprehensive configuration and validation of the spine switch, including routing, is beyond
the scope of this document; only the Layer 2 configuration of the spine switch for the converged topology
example is covered.
Note: This topology represents the port channels in Full-switch mode. In Standalone mode, the external
ports are members of port channel 128 by default.
Since two VLANs are used for iSCSI, see VLAN/SAN Assignment Planning for further information on VLAN
planning. Zero-touch Standalone mode automatically applies the same iSCSI VLAN ID; this happens by
default and requires no configuration. The process works identically for the dual-port and quad-port
configurations.
Topology detail with dedicated iSCSI switches: FC630 nodes n1-n4 in the FX2s chassis connect at 10 GbE through port channels Po 1 and Po 2 to the S4048-ON LEAF A/LEAF B VLT pair; Po 49 connects the leaf pair to the S6010-ON spine, and Po 48 connects the spine to the S4048-ON iSCSI 1/iSCSI 2 VLT pair, which attaches to the storage array (SP A, Ethernet and FC); a separate management network is also shown.
Management network: an S3048-ON management switch connects the S6010-ON spine, the S4048-ON Leaf A/Leaf B and iSCSI 1/iSCSI 2 switches, the FX2s CMC and PCIe slots 1-8, and the SP A/SP B management ports on the storage array.
4.1 Servers
This section covers basic PowerEdge server preparation. Installation of guest operating systems (Microsoft
Windows Server, Red Hat Linux, and so on) is outside the scope of this document.
Note: Exact iDRAC console steps in this section may vary slightly depending on hardware, software and
browser versions used. See your PowerEdge server documentation for steps to connect to the iDRAC virtual
console.
1. Connect to the iDRAC in a web browser and launch the virtual console.
2. In the virtual console, from the Next Boot menu, select BIOS Setup.
3. Reboot the server.
4. From the System Setup Main Menu, select Device Settings.
5. From the Device Settings page, select the first port of the first NIC in the list.
6. From the Main Configuration Page, click Default followed by Yes to load the default settings. Click
OK.
7. To save changes to the settings, click Finish then Yes. Click OK.
8. Repeat for each NIC and port listed on the Device Settings page.
4.5 VLAN/SAN
To ensure consistency across all switches, it is a best practice to plan all the required VLANs for the network
ahead of time. This deployment uses two VLANs, one for each iSCSI SAN. Table 2 lists these VLANs and the
purpose of each. For additional information on VLANs and their use in a VMware environment, see the
VMware vSphere 6.5 Documentation Center. This deployment uses the following addressing scheme for the
iSCSI networks, which makes the deployment faster and more efficient and helps with troubleshooting later.
Note: The FN410S IOMs need the iSCSI VLANs configured only in Full-switch mode. In Standalone mode,
the FN410S automatically selects the iSCSI VLAN, eliminating the need for configuration. The storage
network S4048-ON switches require VLAN configuration regardless of the IOM mode used.
• FN410S-A1
• S4048-ON: iSCSI 1
• S4048-ON: Leaf A
• S3048-ON: Management
The remaining switches use configurations very similar to the configuration detailed in this section.
Notes: The attachments contain individual switch configurations for the switches described by the examples
in this guide.
These commands restore factory settings and reload the switch. After reload, enter A at the [A/C/L/S] prompt
as shown below to exit Bare Metal Provisioning (BMP) mode.
Dell>
Note: BMP mode does not appear for IOMs after restoring factory settings and reloading the switch.
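For reference, a minimal sketch of the factory-restore step on Dell Networking OS 9 follows (stack-unit 0 is assumed; the exact prompts vary by platform and release, so treat this as an illustration rather than the exact sequence from the attachments):
restore factory-defaults stack-unit 0 clear-all
! Confirm when prompted; the switch reloads. At the [A/C/L/S] prompt, enter A to exit BMP mode.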
Note: The topology used in this example is only feasible in Full-switch mode; therefore, Standalone mode is
not used for the non-converged topology example.
Dell(conf)#exit
Dell#Feb 18 04:16:29: %STKUNIT0-M:CP %SYS-5-CONFIG_I: Configured from
console
reload
System configuration has been modified. Save? [yes/no]: y
!
Feb 18 04:16:33: %STKUNIT0-M:CP %FILEMGR-5-FILESAVED: Copied running-config
to startup-config in flash by default
Initial configuration involves setting the hostname and enabling Link Layer Discovery Protocol (LLDP), which
is useful for troubleshooting. Finally, configure the management interface and default gateway.
enable
configure
hostname FN410S-A1
protocol lldp
advertise management-tlv management-address system-name
no advertise dcbx-tlv ets-reco
no dcb enable
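The management interface and default gateway portion mentioned above can look like this minimal sketch; the addresses shown are placeholders for illustration, not values from the attached configurations:
interface managementethernet 0/0
ip address 192.168.1.11/24
no shutdown
!
management route 0.0.0.0/0 192.168.1.254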
Configure a serial console enable password and change the default SSH/telnet password for root. Replace
password on each of the first two lines with the desired password. Disable telnet.
Note: SSH and Telnet are both enabled by default. It is a best practice to use SSH instead of Telnet for
security. SSH can also be disabled with the command (conf)#no ip ssh server enable
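The password and Telnet commands follow this general pattern (a sketch; replace password with the desired values):
enable password password
username root password password
no ip telnet server enable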
Continue configuring the FN410S by changing the internal interfaces to portmode hybrid and switchport.
Notes:1. MTU - Dell EMC recommends setting the MTU to 9216 for best performance on switches used in
iSCSI SANs.
2. Port channel numbering - LACP port channel numbers can be any number from 1-128.
The example adds Te 0/9 and Te 0/10 to Link Aggregation Control Protocol (LACP)-enabled port channel 100
on the FN410S in slot A1. Similarly, the example uses LACP to add Te 0/9 and Te 0/10 on the FN410S in slot
A2 to port channel 101.
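For reference, a minimal sketch of these steps on FN410S-A1, assuming the standard FN410S numbering of internal ports Te 0/1-8 and external ports Te 0/9-12 (an illustration, not the attached configuration):
interface range tengigabitethernet 0/1 - 8
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface range tengigabitethernet 0/9 - 10
no ip address
mtu 9216
port-channel-protocol lacp
 port-channel 100 mode active
no shutdown
The example's Port-channel 1 configuration for LAN traffic follows.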
interface Port-channel 1
description Port Channel for LAN traffic
no ip address
portmode hybrid
switchport
no shutdown
The example uses two VLAN interfaces, VLAN 100 and VLAN 101, for iSCSI. Since the FN410S is in
Full-switch mode, the example tags ports Te 0/1-8 to VLAN 100 on the FN410S in slot A1 and to VLAN 101
on the FN410S in slot A2. Similarly, the example tags Port Channel 100 to VLAN 100 on the FN410S in slot
A1 and Port Channel 101 to VLAN 101 on the FN410S in slot A2. Although the example configures all internal
interfaces for switchport, portmode hybrid, and jumbo frames, only the internal interfaces mapped to the
dual-port adapters require these settings. See FN IOM Internal Port Mapping for internal port mappings.
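A minimal sketch of the corresponding VLAN configuration on the FN410S in slot A1 (slot A2 is identical except that it uses VLAN 101 and port channel 101):
interface vlan 100
description iSCSI SAN 1
no ip address
tagged tengigabitethernet 0/1-8
tagged port-channel 100
no shutdown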
Note: On the S4048-ON, Telnet is enabled and SSH is disabled by default. Both services require the creation
of a non-root user account to log in. If needed, it is a best practice to use SSH instead of Telnet for security.
SSH can optionally be enabled with the command: (conf)#ip ssh server enable
A user account can be created to access the switch via SSH with the command:
(conf)#username ssh_user sha256-password ssh_password
enable
configure
hostname iSCSI-1
protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc
Use the next set of commands to configure the two downstream interfaces (connected to the FN410S).
Configure the interfaces with the portmode hybrid and switchport commands and set the MTU to 9216 for
performance. Add these two interfaces to port channel 100.
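One way this can look on Dell Networking OS 9, assuming the FN410S-facing links land on Te 1/47 and Te 1/48 (an assumption for illustration; use the ports from your cabling). With LACP, the Layer 2 settings live on the port-channel interface while the member ports carry the LACP commands:
interface Port-channel 100
description Downlink to FN410S-A1
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
!
interface range tengigabitethernet 1/47 - 48
description Port-channel 100 members
no ip address
mtu 9216
port-channel-protocol lacp
 port-channel 100 mode active
no shutdown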
Configure the two upstream interfaces connected to the storage array. Place the interfaces in Layer 2 mode
with the switchport command and set the MTU to 9216 for performance.
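A corresponding sketch for the storage-facing ports, assuming the Unity SP iSCSI ports connect to Te 1/1 and Te 1/2 (an assumption for illustration):
interface tengigabitethernet 1/1
description Unity SP A iSCSI port
no ip address
mtu 9216
switchport
no shutdown
!
interface tengigabitethernet 1/2
description Unity SP B iSCSI port
no ip address
mtu 9216
switchport
no shutdown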
Create VLAN interface 100 and tag all interfaces used in the SAN to VLAN 100. The attached configuration
for iSCSI switch 2 (non-converged) shows the use of VLAN 101 in this example.
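A sketch of the VLAN piece, using the interface numbers assumed above; whether the storage-facing ports are tagged or untagged members depends on whether a VLAN ID is set on the Unity iSCSI interfaces:
interface vlan 100
description iSCSI SAN 1
no ip address
tagged tengigabitethernet 1/1-2
tagged port-channel 100
no shutdown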
end
write
enable
configure
hostname S4048-LF-A-U31
protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc
The two leaf switches are in VLT, using ports 1/53 and 1/54 for the VLT interconnect (VLTi).
Note: To configure VLT, the VLT domain must have the management IP address of the secondary switch in
the pair configured as the back-up destination. In addition, the primary switch's unit-id must be 0 and the
secondary switch's unit-id must be 1.
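A sketch of a VLT configuration on the primary leaf, with assumed values for the VLTi port channel number, domain ID, and peer management address (substitute your own):
interface Port-channel 127
description VLTi to Leaf B
no ip address
channel-member fortyGigE 1/53,1/54
no shutdown
!
vlt domain 1
peer-link port-channel 127
back-up destination 192.168.255.2
unit-id 0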
Port channels 1 and 2 connect to the IOMs. Port channel 3 connects to the S6010-ON spine switch.
interface Port-channel 1
description Port Channel to FN IOM A1
no ip address
portmode hybrid
switchport
vlt-peer-lag port-channel 1
no shutdown
interface Port-channel 2
description Port Channel to FN IOM A2
no ip address
portmode hybrid
switchport
vlt-peer-lag port-channel 2
no shutdown
interface Port-channel 3
description Connection to S6010 Spine
no ip address
portmode hybrid
switchport
vlt-peer-lag port-channel 3
no shutdown
end
write
To verify the VLT configuration, issue the show vlt brief command and ensure that the ICL Link Status,
Heartbeat Status, and VLT Peer Status are Up. The show lacp output below confirms that port channel 1 is
up and operating as a VLT LAG.
S4048-LF-A-U31#show lacp 1
Port-channel 1 admin up, oper up, mode lacp
LACP Fast Switch-Over Disabled
Actor System ID: Priority 32768, Address 1418.777c.c4e8
Partner System ID: Priority 32768, Address f48e.383d.4e86
Actor Admin Key 1, Oper Key 1, Partner Oper Key 1, VLT Peer Oper Key 1
LACP LAG 1 is an aggregatable link
LACP LAG 1 is a VLT LAG
Note: Configure the FNIOM (only for Standalone mode) via the Dell Blade I/O Manager. Access the Dell
Blade I/O Manager through the FX2s CMC. See the attachment, Dell Blade IO Manager V1.0.pdf, for more
information.
enable
configure
hostname FN410S-A1
protocol lldp
advertise management-tlv management-address system-name
no advertise dcbx-tlv ets-reco
This deployment example uses a non-DCB environment. Hence, DCB must be manually disabled using the
command:
no dcb enable
Configure a serial console enable password and change the default SSH/telnet password for root. Replace
password on each of the first two lines with the desired password. Disable telnet.
Note: SSH and Telnet are both enabled by default. It is a best practice to use SSH instead of Telnet for
security. SSH can also be disabled with the command (conf)#no ip ssh server enable.
Configure internal and external ports along with the port channels and VLAN.
Note: Although all internal interfaces include configuration for switchport, portmode hybrid, and jumbo
frames, only internal interfaces that are mapped to the dual-port adapters require switchport, portmode
hybrid, and jumbo frames capability. See FN IOM Internal Port Mapping for internal port mappings.
interface Port-channel 1
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
end
write
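Optionally, spot-check the settings just applied with a few show commands (a sketch; output omitted):
show vlan
show interfaces port-channel brief
show lacp 1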
enable
configure
protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc
The two leaf switches are in VLT, using ports 1/53 and 1/54 for the VLT interconnect (VLTi).
Note: To configure VLT, the VLT domain needs to have the management IP address of the secondary
switch in the pair configured as the back-up destination. In addition, assign unit-id 0 to the primary switch
and the unit-id 1 to the secondary switch.
Add the interfaces as members of the respective port channels using LACP.
Connect port channels 1 and 2 to the IOMs. Connect port channel 49 to the spine switch, S6010-ON.
interface Port-channel 1
description Port Channel to FN IOM A1
no ip address
mtu 9216
portmode hybrid
switchport
vlt-peer-lag port-channel 1
no shutdown
interface Port-channel 2
description Port Channel to FN IOM A2
no ip address
mtu 9216
portmode hybrid
switchport
vlt-peer-lag port-channel 2
no shutdown
interface Port-channel 49
description Spine Connection
no ip address
mtu 9216
portmode hybrid
switchport
vlt-peer-lag port-channel 49
no shutdown
end
write
enable
configure
hostname S6010-SPINE
protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc
Port channel 49 connects to the leaf switches and port channel 48 connects to the iSCSI storage switches.
interface Port-channel 48
description iSCSI Connection
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
interface Port-channel 49
description Leaf Connection
no ip address
mtu 9216
portmode hybrid
switchport
no shutdown
end
write
enable
configure
hostname ISCSI-1
protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc
The two iSCSI switches are in VLT, using ports 1/53 and 1/54 for the VLT interconnect (VLTi).
Interfaces Te 1/1 and Te 1/2 connect to the Ethernet ports on the Storage Processors of the storage array.
Port Channel 48 connects to the spine switch.
interface Port-channel 48
description Spine Connection
no ip address
mtu 9216
portmode hybrid
switchport
vlt-peer-lag port-channel 48
no shutdown
end
write
A standard switch named vSwitch0 is automatically created on each ESXi host during installation to provide
connectivity to the management network.
1. Go to the web client Home page, select Hosts and Clusters, and select a host in the Navigator
pane.
2. In the center pane, select Configure > Networking > Virtual switches.
3. Standard switch vSwitch0 appears in the list. Click on it to view details as shown in Figure 15.
The VDS is created with the uplink port group shown beneath it. When complete, the Navigator pane should
look similar to Figure 16.
Note: Any VDS that already exist in the environment are displayed along with the newly created VDS-iSCSI.
This deployment guide uses only VDS-iSCSI.
7.4 Set the iSCSI VDS MTU to 9000 and enable LLDP
Dell EMC recommends increasing the Maximum Transmission Unit (MTU) of devices handling storage traffic
to 9000 bytes for best performance. LLDP may be configured at the same time.
Repeat the steps above to create a port group named DPG-Production. In step 4, use the default VLAN
settings (no VLAN ID specified). The DPG-Production port group is used for non-iSCSI traffic between VMs.
When complete, the Navigator pane will appear similar to Figure 20.
Repeat steps 2-6 above for DPG-ISCSI-2 port group and DPG-Production port group. For DPG-ISCSI-2
port group, make sure that Uplink 2 is moved to Active and Uplink 1 is moved to Unused. For DPG-
Production, in this example, Uplink 2 is moved to Standby and Uplink 1 is moved to Active.
Note: Before starting this section, be sure you know the vmnic-to-physical adapter mapping for each host.
This can be determined by going to Home > Hosts and Clusters and selecting the host in the Navigator
pane. In the center pane select Configure > Networking > Physical adapters. Adapter MAC addresses
can be determined by connecting to the iDRAC. In this example, vmnics used are numbered vmnic0 and
vmnic1. vmnic numbering will vary depending on adapters installed in the host.
IP addresses can be statically assigned to VMkernel adapters upon creation, or DHCP may be used. Static IP
addresses are used in this guide.
This deployment uses the following addressing scheme for the iSCSI networks:
When complete, the VMkernel adapters page for each ESXi host in the vSphere data center should look
similar to Figure 22. This page is visible by going to Hosts and Clusters, selecting a host in the Navigator
pane, then selecting Configure > Networking > VMkernel adapters in the center pane.
When complete, the Configure > Settings > Topology page for VDS-ISCSI should look similar to Figure 23.
1. Go to Home > Hosts and Clusters and select the first host in the compute cluster.
2. In the center pane, select Configure > Networking > VMkernel adapters.
3. Select the vmk1 VMkernel adapter. Click Edit settings.
4. Select NIC settings and change the MTU to 9000.
5. Click OK.
6. Repeat for the vmk2 VMkernel adapter.
Repeat steps 4 through 7 for the second vmhba (e.g. vmhba34) on the host. In step 6.a., connect it to the
DPG-ISCSI-2 port group.
Repeat the steps above for the remaining hosts in the FX2-FC630-ISCSI cluster.
When complete, each host's Storage Adapters page should look similar to Figure 24.
Repeat steps 4-9 for the host's second connected iSCSI adapter. In this example, the IP address configured
for the iSCSI target is 192.168.101.2 or 192.168.101.4.
Repeat the above for the remaining hosts in the compute cluster.
Note: The following configuration is provided as an example and is specific to the storage device used in
this guide. See your storage configuration guide to configure your storage array for iSCSI.
Figure 25 shows the SAN connectivity from the FN410S IOMs to Storage Processors A and B through the two
S4048-ON switches. Table 6 provides connection details from the S4048-ON switches to Storage
Processors A and B.
iSCSI SAN connectivity: the two FN410S IOMs connect through the iSCSI SAN to the Ethernet ports on Storage Processors SP A and SP B of the storage array; FC ports and the CMC are also shown.
Click the + icon to create a new pool. In the dialog box that opens, provide a Name for the pool and, if
needed, a Description. Click Next.
Select the Storage Tiers for the pool. Choose from the Extreme Performance, Performance, and Capacity
tiers as required. After choosing the tier(s), change the RAID Configuration for the chosen tiers if needed.
Click Next.
In the Disks section, select the Amount of Storage. The total number of disks and the total capacity are
displayed next to the Totals label. Click Next.
Leave the Capability Profile Name section as it is and click Next.
Review your selections in the Summary section and click Finish. The Results section displays the Overall
status of the storage pool being created as a percentage. Once the Overall status shows 100% Completed,
click Close.
The newly created Storage Pool is now visible under the Pools section, as shown in Figure 26.
Note: If you close the Results section before the Overall status shows 100% Completed, the job continues
to run in the background.
9. The list of added ESXi hosts should be displayed under the ESXi Hosts tab, as shown in Figure 28.
ESXi Hosts
LUN created
Click on the + icon to add an iSCSI interface. Provide the IP Address, Subnet Mask / Prefix Length
and VLAN ID. Click OK.
Add a total of 4 iSCSI Interfaces. When completed, it should look similar to Figure 30.
1. On the vSphere Web Client Home screen, select Hosts and Clusters.
2. In the Navigator pane, select the first host in the FX2-FC630-ISCSI cluster.
3. In the center pane, select Configure > Storage > Storage adapters and select the host's first storage
adapter (e.g. vmhba33).
Devices tab
Select the Paths tab. The target name, LUN number and status are shown. The status field is marked
either Active or Active (I/O) as shown in Figure 32.
Repeat steps 3-6 above for the host's second storage adapter (e.g. vmhba34).
Repeat the above for the remaining hosts in the cluster. All hosts should have two active connections to the
shared storage volume.
Leave the Partition configuration at its default settings and click Next > Finish to create the datastore.
To verify the datastore has been mounted by all hosts in the compute cluster:
All hosts in the compute cluster are listed with the datastore status shown as Mounted and Connected as
per Figure 34.
The path status to the LUN is verified from each host by selecting a host (from the list in Figure 34) and
expanding the Paths item near the bottom of the window. In this example, each host has four active paths to
the LUN as shown at the bottom of Figure 35.
The multipathing policy may be changed (e.g. to Round Robin) using the Edit Multipathing button.
FN IOM Internal Port Mapping

Quarter-width sleds (1a-1d and 3a-3d) with dual-port adapters:
Sled    FN IOM A1 (top) internal port    FN IOM A2 (bottom) internal port
1a      1                                1
1b      2                                2
1c      3                                3
1d      4                                4
3a      5                                5
3b      6                                6
3c      7                                7
3d      8                                8

Note: Ports 2, 4, 6 and 8 are not used when using half-width blades with dual-port adapters.

Half-width sleds (Slots 1-4) with dual-port adapters:
Slot    FN IOM A1 (top) internal port    FN IOM A2 (bottom) internal port
1       1                                1
2       3                                3
3       5                                5
4       7                                7

Half-width sleds (Slots 1-4) with quad-port adapters:
Slot    FN IOM A1 (top) internal ports   FN IOM A2 (bottom) internal ports
1       1,2                              1,2
2       3,4                              3,4
3       5,6                              5,6
4       7,8                              7,8

Full-width sleds (Slots 1 and 3, each with SAS/SATA drive bays 0-3) with dual-port adapters:
Slot    FN IOM A1 (top) internal ports   FN IOM A2 (bottom) internal ports
1       1,3                              1,3
3       5,7                              5,7

Full-width sleds (Slots 1 and 3, each with SAS/SATA drive bays 0-3) with quad-port adapters:
Slot    FN IOM A1 (top) internal ports   FN IOM A2 (bottom) internal ports
1       1,2,3,4                          1,2,3,4
3       5,6,7,8                          5,6,7,8
Standalone mode (SMUX): This is the factory default mode for the IOM. SMUX is a fully automated,
zero-touch mode that allows VLAN memberships to be defined on the server-facing ports while all upstream
ports are configured in port channel 128. Administrators cannot modify this setting.
VLT mode: This low-touch mode automates all configurations except VLAN membership. Port 9 is dedicated
to the VLT interconnect in this mode.
Programmable MUX mode (PMUX): PMUX mode provides operational flexibility by allowing the administrator
to create multiple link aggregation groups (LAGs), configure VLANs on uplinks, and configure data center
bridging (DCB) parameters on the server-facing ports.
Stacking mode (FN410S and FN410T only): Stacking mode combines multiple switches to make a single
logical switch, which is managed by a designated master unit in the stack. This mode simplifies management
and redundant configurations.
Full-switch mode: Full-switch mode makes all switch features available. This mode requires the most
configuration but allows for the most flexibility and customization.
C.1 Switches
Qty Item Firmware Version
Qty per chassis  Item  Firmware Version
2    FC630 servers:
• 2 - Intel Xeon E5-2683 v4 2.10 GHz, 16 cores
• 8 - 16 GB DIMMs (128 GB total)
• 2 - 200 GB SATA SSD
• 2 - 16 GB internal SD cards
• 1 - PERC H730P Slim Storage Controller (firmware 25.5.0.0018)
• 2 - QLogic 577xx/578xx 10 Gb Ethernet BCM57810 (firmware 08.07.26)
• FC630 BIOS 2.3.5
• FC630 iDRAC with Lifecycle Controller 2.41.40.40

1    FC630 server:
• 2 - Intel Xeon E5-2680 v3 2.50 GHz, 12 cores
• 8 - 16 GB DIMMs (128 GB total)
• 2 - 300 GB SAS HDD
• 2 - 16 GB internal SD cards
• 1 - PERC H330P Mini Storage Controller (firmware 25.5.0.0019)
• 2 - QLogic 577xx/578xx 10 Gb Ethernet BCM57810 (firmware 08.07.26)
• FC630 BIOS 2.3.5
• FC630 iDRAC with Lifecycle Controller 2.41.40.40
1 FC630 server.
Dell EMC Unity 500F delivers all-flash storage from 7.7 TB to 8.0 PB raw capacity, with concurrent support
for NAS, iSCSI, and FC protocols. The Disk Processor Enclosure (DPE) has a 2-RU form factor, contains two
Storage Processors (SPs), and supports up to twenty-five 2.5-inch drives. Additional 2-RU Disk Array
Enclosures (DAEs) may be added, each providing twenty-five additional drives.
Each SP in the DPE has two on-board 12Gb SAS ports for connecting to DAEs. Additionally, a 4-port 12Gb
SAS I/O module can be provisioned to provide additional back-end buses.
Note: See Dell EMC NSX Reference Architecture - FC630 Compute Nodes with iSCSI Storage for more
information on configuring SC4020.
D.1 Software
Item Version
VMware vCenter Server Appliance VMware vCenter Server 6.5.0 Build 4602587
vSphere Web Client 6.5.0 Build 4602587 (included with VCSA above)
D.2 Licenses
The vCenter Server is licensed by instance. The remaining licenses are allocated based on the number of
CPU sockets in the participating hosts.
Required licenses for the topology built in this guide are as follows:
Dell TechCenter is an online technical community where IT professionals have access to numerous resources
for Dell EMC software, hardware and services.
Dell EMC NSX Reference Architecture - FC630 Compute Nodes with iSCSI Storage
Web: http://Support.Dell.com/
Telephone: USA: 1-800-945-3355
We encourage readers to provide feedback on the quality and usefulness of this publication by sending an
email to [email protected].