MX7000 Chassis
Proof of Concept Using Storage-Only Nodes
July 2021
H18813
Deployment Guide
Abstract
This guide describes how to deploy a Dell EMC PowerEdge MX7000 chassis in a
compute-only configuration within a two-layer Dell EMC PowerFlex integrated rack
deployment. It includes validated design, deployment, and configuration guidance.
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks
of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners.
Published in the USA 07/21 Deployment Guide H18813.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.
Chapter 1 Introduction
Document purpose
This guide describes the configuration steps that are required to deploy PowerFlex using
MX7000 components. It assumes a basic understanding of the PowerEdge MX platform
and PowerFlex rack two-layer architecture.
Audience
This guide is for solutions architects, implementation services and managed services teams,
and anyone who is interested in using the MX7000 chassis in a PowerFlex environment.
Terminology
The following table provides definitions for some of the terms that are used in this guide:
Table 1. Terminology
Term   Definition
CO     Compute-only node
FEM    Fabric Expander Module (MX7116n)
FSE    Fabric Switching Engine (MX9116n)
OOB    Out-of-band (management)
SDC    PowerFlex data client
SDS    PowerFlex data server
ToR    Top-of-rack (switch)
VLT    Virtual Link Trunking
The MX7000 chassis pair contains two pairs of intelligent, programmable MX9116n FSE
modules: one pair for management and production, and one pair for storage traffic. These
modules replace the aggregation switches found in a PowerFlex two-layer design. The
modules provide connectivity to all components, including external management, attached
storage-only nodes, and the MX7116n FEM modules. They also function as the ToR
switches for customer connectivity. Each MX7000 chassis also contains a pair of
MX7116n FEM modules, which are used to extend the switching fabric from one chassis
to another.
The following figure provides a side-by-side comparison of the traditional PowerFlex two-
layer configuration with the MX7000 two-layer configuration:
• All network connectivity enters and exits the Proof of Concept (POC) lab through the
MX9116n FSE modules, including the PowerFlex management node, the PowerFlex
storage-only nodes, and the MX740c compute-only nodes.
Figure 2. Logical network for a PowerFlex two-layer deployment with an MX7000 chassis
Note:
There were two VLAN design changes on the PowerFlex Management Controller from our
standard logical design, which are shown in red:
• FE_dvswitch—Removed flex-stor-mgmt and added flex-vmotion
• BE_dvswitch—Removed flex-vmotion and added flex-stor-mgmt
The MX9116n FSE modules that are located inside the B chassis slots provide storage
connectivity to the PowerFlex storage-only nodes, as shown in the following figure:
The MX7116n FEM modules are used to connect multiple chassis. The following figure
shows the MX7116n FEM module and its associated connectivity ports:
To provide a better representation of a fully cabled system, the following figure shows the
MX9116n FSE and MX7116n FEM connections:
MX740c compute nodes architecture
Each node in the MX740c compute architecture is inserted into an MX7000 chassis and
contains two network mezzanine adapters. Each mezzanine adapter contains two 25 GbE
network links for a total of four 25 GbE links per node. The following figure shows that the
mezzanine adapters are hard-wired to a specific fabric slot.
Note: This detail was considered when we configured the MX7000 chassis and associated nodes
for maximum high availability.
Because each chassis contains a pair of MX9116n FSE modules and a pair of MX7116n
FEM modules, the internal connections alternate when Chassis1 Node1 is compared with
Chassis2 Node1 for high availability and redundancy purposes. Each mezzanine card is
connected to one MX9116n FSE and one MX7116n FEM.
For example:
Chassis 1 Node 1 connects to switch port 1/1/1 on B1 and A2, and switch port
1/71/1 on B2 and A1.
Chassis 2 Node 1 connects to switch port 1/71/1 on B1 and A2, and switch port
1/1/1 on B2 and A1.
Hardware overview
This section briefly describes the hardware that is associated with this POC. For complete
information, see Appendix A - Validated Components.
Configuration requirements
The following configuration requirements for all switches, the management server, and
PowerFlex node components are the same as for a standard PowerFlex build:
• Network subnets
• Hostnames
• IP address
The MX7000 configuration requirements are different:
• Each MX7000 chassis requires an FQDN hostname and IP address used for OOB
management.
• Each MX740c sled is configured similarly to a standard PowerEdge R640 compute
node. Collect the following information:
iDRAC hostname and IP address
ESXi FQDN hostname, IP address, subnet mask, and default gateway
For vMotion: IP address and subnet mask
For flex-data1, flex-data2, flex-data3, and flex-data4: IP addresses and subnet
masks
Rack elevations
The following figure shows the physical placement of the hardware components used in
this POC:
Port maps
The following figure shows the port mapping for the R640 PowerFlex controller node and
the R640 Red Hat Enterprise Linux storage-only node:
Deployment considerations
While technically not part of the logical build process, the physical deployment of the
environment is a critical factor in setting up a functioning environment.
Note: For the physical placement of the hardware components used in this POC, see Figure 8.
The management switch uses a standard configuration and requires no additional or new
configuration features when introducing the MX7000 chassis to PowerFlex. The following
table provides standard configuration examples:
Category Configuration
Switch port configuration Example of a switch port configuration for an access port:
interface Ethernet1/1
switchport access vlan 101
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
no shutdown
The following table provides standard configuration examples of the Cisco Nexus 9332
switches:
Category Configuration
Complete configuration See the Dell EMC PowerFlex Rack Logical Build Guide for the
complete switch configuration for Cisco Nexus aggregation switches.
Introduction
After the northbound switches are configured, configure the MX7000 chassis and
networking components in the following order:
• Perform initial chassis deployment.
• Configure OpenManage Enterprise – Modular Edition.
• Manage OpenManage Enterprise – Modular Edition.
Assign the basic settings required to access each chassis by using the LCD touch screen.
1. Activate the LCD touch screen by tapping the screen lightly. The Select Language
screen is displayed.
2. Select the language, such as English.
3. Select Main Menu, and then Save.
4. Select Settings > Network Settings > Edit > IPv4 > Static IP.
5. Select Yes to change the IP settings from DHCP to Static.
6. Provide the following information:
IP address
Subnet mask
Default gateway
7. Select Save.
8. Repeat steps 1 through 7 for the second chassis.
1. Open a browser to the chassis IP and log in to OME–Modular. The home page is
displayed.
2. Click Configure > Initial Configuration. The Chassis Deployment Wizard is
displayed.
Note: A configuration wizard is displayed when the user logs in to the OME-Modular web
interface for the first time. If the wizard is closed, it can be opened again during subsequent
connections by clicking Configure > Initial Configuration. This option is displayed only if the
chassis is not yet configured.
Note: If you select multiple NTP servers, OME–Modular selects the NTP server
algorithmically.
6. Configure the email, SNMP, and syslog settings, and then click Next.
The iDRAC tab is displayed.
7. Click the Configure iDRAC Quick Deploy Settings check box to configure the
password to access the iDRAC web interface and the management IP addresses,
and then click Next.
You can select the slots to which the iDRAC Quick Deploy settings must be
applied.
The Network IOM tab is displayed.
8. Click the Configure I/O Module Quick Deploy Settings check box to configure
the password to access the IOM console and management IP addresses, and
then click Next.
The Firmware tab is displayed.
9. Click the Configure all devices to use following catalog check box, click the
New Catalog check box, and click Catalog to open the Add Firmware Catalog
window.
10. Type a name for the catalog, and under Catalog Source ensure that Latest
component firmware versions on Dell.com is selected, and then click Finish to
save the changes and return to the Chassis Deployment Wizard.
11. Wait for the job to finish and select the new catalog name in the box, and then
click Next.
12. Click Next to view the Proxy tab. Do not enable a proxy; click Next.
13. Click Next to view the Group Definition tab.
14. Click Create Group to configure the chassis group settings.
15. Click Next to view the Summary tab.
Note: After setting the time in the lead chassis, wait for the lead chassis time and the member
chassis time to synchronize before performing any operation. The time configuration can be
disruptive.
Note: See Appendix A for reference information about the code levels used in this guide.
1. Click Configuration > Firmware Compliance > Create Baseline. The Create
Firmware Baseline window is displayed.
2. Click Catalog type, and from the menu click the catalog that you created in the
previous task (Configuring OpenManage Enterprise – Modular Edition).
3. Type a name and description for the baseline, and then click Next.
4. Click Devices, and then click the Select Devices box.
5. Click All Devices, click All Devices on the right, and then click Finish.
6. To verify the compliance of a firmware baseline, on the Firmware page, click the
baseline and click Check Compliance.
A summary of the compliance check is displayed on the right side of the firmware
page.
7. Click View Report.
The Compliance Report page is displayed. You can view details of the baseline
and status of the compliance.
8. On the Compliance Report page, select the device or component for which you
want to update the firmware.
The Update Firmware window is displayed.
9. Click Update Now to update the firmware immediately.
MX9116N-A1# show version
OS Version: 10.5.0.3P1
Build Version: 10.5.0.3P1.612
Build Time: 2019-12-18T19:19:25+0000
System Type: MX9116N-ON
Architecture: x86_64
6. Create additional required customer VLAN IDs, as shown in the following table:
Table 7. Customer VLAN IDs
VLAN Purpose
106 vMotion
848 MGMT
interface vlan106
description ESXi_vmotion
no shutdown
!
interface vlan848
description ESXi_mgmt
no shutdown
7. Run the following commands to configure bonds by using the required port
channels, as shown in the following table:
Table 8. Required port channels
Port channel   Purpose
11             ESXi management, vMotion, and production for compute-only node 236
12             ESXi management, vMotion, and production for compute-only node 237
13             ESXi management, vMotion, and production for compute-only node 238
14             ESXi management, vMotion, and production for compute-only node 239
21             Customer uplink
31             ESXi management and production for the PowerFlex management node (PFM)
interface port-channel11
description "CO ESXimgmt/vmotion/prod 236"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 106,848
lacp fallback enable
mtu 9216
vlt-port-channel 11
!
interface port-channel12
description "CO ESXimgmt/vmotion/prod 237"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 106,848
lacp fallback enable
mtu 9216
vlt-port-channel 12
!
interface port-channel13
description "CO ESXimgmt/vmotion/prod 238"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 106,848
lacp fallback enable
mtu 9216
vlt-port-channel 13
!
interface port-channel14
description "CO ESXimgmt/vmotion/prod 239"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 106,848
lacp fallback enable
mtu 9216
vlt-port-channel 14
!
interface port-channel21
description "Customer uplink"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 848
lacp fallback enable
mtu 9216
vlt-port-channel 21
!
interface port-channel31
description "PxFM ESXI/PROD"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 106,848
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp fallback enable
mtu 9216
vlt-port-channel 31
Note: At this step, member ports are not displayed as they have not yet been defined.
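As an optional check (an illustrative example, not part of the original build steps), you can
list the new port channels with the OS10 show port-channel summary command. At this
point, each port channel is listed without member ports:
MX9116N-A1# show port-channel summary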
MX9116n-A1
port-group 1/1/1
mode Eth 25g-8x fabric-expander-mode

MX9116n-A2
port-group 1/1/1
mode Eth 25g-8x fabric-expander-mode
12. Configure the peer links and Virtual Link Trunking (VLT) on ports 37 through 40,
using two 200 Gb DAC/AOC cables.
In the following configuration:
backup destination points to the peer switch.
vlt-mac is identical on each switch, as it is a common virtual interface.
Ports 37 through 38 are one physical QSFP28-DD port.
Ports 39 through 40 are one physical QSFP28-DD port.
MX9116n-A1
vlt-domain 1
backup destination 10.234.90.244
discovery-interface ethernet1/1/37-1/1/40
primary-priority 4196
vlt-mac 00:11:22:33:44:5a
!
interface ethernet1/1/37
description "Peer-link 9116-A2"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/38
description "Peer-link 9116-A2"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/39
description "Peer-link 9116-A2"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/40
description "Peer-link 9116-A2"
no shutdown
no switchport
flowcontrol receive off

MX9116n-A2
interface ethernet1/1/37
description "Peer-link 9116-A1"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/38
description "Peer-link 9116-A1"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/39
description "Peer-link 9116-A1"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/40
description "Peer-link 9116-A1"
no shutdown
no switchport
flowcontrol receive off
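After both peers are configured, a quick way to confirm that the VLT domain is healthy is
the OS10 show vlt command (a suggested example, not part of the original procedure):
MX9116N-A1# show vlt 1
MX9116N-A1# show vlt 1 backup-link
The peer link and backup link should both report as up before you continue.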
interface ethernet1/1/42:1
description 9332
shut
channel-group 21 mode active
no switchport
mtu 9216
flowcontrol receive off
Note: STP+ must be enabled on the customer switches before you enable the uplinks.
Note: The asterisk (*) denotes the active switch. If the first uplink switch configured is active, only
one entry is available, and the port value is 2 by default.
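Before you enable the uplinks, you can confirm the physical cabling to the Cisco Nexus
9332 switches with LLDP (an optional example, assuming LLDP is enabled on both ends):
MX9116N-A1# show lldp neighbors
The uplink port (ethernet1/1/42:1 in this POC) should list the Nexus switch as its remote
system.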
Note: The port mapping is different for internal nodes because of the two chassis having
different switches per I/O module. (See MX740c compute nodes architecture.)
MX9116n-A1
interface ethernet1/1/1
description "M740-236 - vmnic0"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/3
description "M740-237 - vmnic0"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/1
description "M740-238 - vmnic0"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/2
description "M740-239 - vmnic0"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off

MX9116n-A2
interface ethernet1/1/1
description "M740-238 - vmnic1"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/3
description "M740-239 - vmnic1"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/1
description "M740-236 - vmnic1"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/2
description "M740-237 - vmnic1"
no shutdown
fec off
switchport access vlan 848
mtu 9216
flowcontrol receive off
Note: On all four MX9116n FSE modules, PowerFlex is connected using 4-port 25 Gb
breakout cables to port 41.
MX9116n-A1
interface ethernet1/1/43:1
description "PFM vmnic4 ESXI/PROD"
no shutdown
channel-group 31 mode active
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp fallback enable
no switchport
flowcontrol receive off

MX9116n-A2
interface ethernet1/1/43:1
description "PFM vmnic6 ESXI/PROD"
no shutdown
channel-group 31
no switchport
flowcontrol receive off
Note: If the PowerFlex appliance contains 10 Gb NICs, the port-group mode must be changed to
mode Eth 10g-4x, and 4-port 10 Gb breakout cables must be used.
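The following sketch shows the form of that change. The port-group number is a
placeholder, because the port group that contains the PowerFlex breakout port is not
identified in this guide:
port-group <port-group-id>
mode Eth 10g-4x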
The following table provides examples of the full switch configurations for the POC lab
switches:
Configuration      Document
A1 configuration   A1 configuration example
A2 configuration   A2 configuration example
Notes:
• These instructions have been modified from the original switch code to allow
PowerFlex to use the correct port.
• Any switch configuration differences between the MX9116n-B1 and MX9116n-B2
modules are noted; otherwise, configure both switches as described in this guide.
• The VLAN IDs in these instructions might differ from the standard.
The following commands set the configuration of the 10 port-groups to the correct
cable types:
port-group 1/1/1
mode Eth 25g-8x fabric-expander-mode
!
port-group 1/1/5
mode Eth 25g-8x
!
port-group 1/1/6
mode Eth 25g-8x
!
port-group 1/1/9
mode Eth 25g-8x
!
port-group 1/1/10
mode Eth 25g-8x
!
port-group 1/1/11
mode Eth 100g-2x
!
port-group 1/1/12
mode Eth 100g-2x
!
port-group 1/1/13
mode Eth 40g-1x
!
port-group 1/1/14
mode Eth 40g-1x
!
port-group 1/1/15
mode Eth 40g-1x
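To confirm that the breakout modes took effect (an optional check, not part of the original
instructions), display the port-group assignments:
MX9116N-B1# show port-group
Each port group should report the Eth mode configured above.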
VLAN   Purpose
152    flex-data1
160    flex-data2
168    flex-data3
176    flex-data4
858    flex-stor-mgmt
interface vlan152
description data1
no shutdown
mtu 9216
!
interface vlan160
description data2
no shutdown
mtu 9216
!
interface vlan168
description data3
no shutdown
mtu 9216
!
interface vlan176
description data4
no shutdown
mtu 9216
!
interface vlan858
description flex_mgmt
no shutdown
6. Run the following commands to configure bonds by using the required port
channels shown in the following table:
Table 16. Required port channels
Port channel   Purpose
1              Data1, Data3, and Flex MGMT for the first storage-only node
2              Data2 and Data4 for the first storage-only node
3              Data1, Data3, and Flex MGMT for the second storage-only node
4              Data2 and Data4 for the second storage-only node
5              Data1, Data3, and Flex MGMT for the third storage-only node
6              Data2 and Data4 for the third storage-only node
7              Data1, Data3, and Flex MGMT for the fourth storage-only node
8              Data2 and Data4 for the fourth storage-only node
9              Data1, Data3, and Flex MGMT for the fifth storage-only node
10             Data2 and Data4 for the fifth storage-only node
15             Data1, Data2, Data3, and Data4 for the first compute-only node
16             Data1, Data2, Data3, and Data4 for the second compute-only node
17             Data1, Data2, Data3, and Data4 for the third compute-only node
18             Data1, Data2, Data3, and Data4 for the fourth compute-only node
22             Customer uplink
30             Data1 through Data4 and storage management for the PowerFlex management node
interface port-channel1
description "231 mgmt d1 d3"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,858
lacp fallback enable
mtu 9216
vlt-port-channel 1
!
interface port-channel2
description "231 d2 d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 168,176
lacp fallback enable
mtu 9216
vlt-port-channel 2
!
interface port-channel3
description "232 mgmt d1 d3"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,858
lacp fallback enable
mtu 9216
vlt-port-channel 3
!
interface port-channel4
description "232 d2 d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 168,176
lacp fallback enable
mtu 9216
vlt-port-channel 4
!
interface port-channel5
description "233 mgmt d1 d3"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,858
lacp fallback enable
mtu 9216
vlt-port-channel 5
!
interface port-channel6
description "233 d2 d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 168,176
lacp fallback enable
mtu 9216
vlt-port-channel 6
!
interface port-channel7
description "234 mgmt d1 d3"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,858
lacp fallback enable
mtu 9216
vlt-port-channel 7
!
interface port-channel8
description "234 d2 d4 "
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 168,176
lacp fallback enable
mtu 9216
vlt-port-channel 8
!
interface port-channel9
description "235 mgmt d1 d3"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,858
lacp fallback enable
mtu 9216
vlt-port-channel 9
!
interface port-channel10
description "235 d2 d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 168,176
lacp fallback enable
mtu 9216
vlt-port-channel 10
!
interface port-channel15
description "CO 236 vmotion d1-d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,168,176
lacp fallback enable
mtu 9216
vlt-port-channel 15
!
interface port-channel16
description "CO 237 vmotion d1-d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,168,176
lacp fallback enable
mtu 9216
vlt-port-channel 16
spanning-tree port type edge
!
interface port-channel17
description "CO 238 vmotion d1-d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,168,176
lacp fallback enable
mtu 9216
vlt-port-channel 17
!
interface port-channel18
description "CO 239 vmotion d1-d4"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,168,176
lacp fallback enable
mtu 9216
vlt-port-channel 18
!
interface port-channel22
description "Customer uplink"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 858
lacp fallback enable
mtu 9216
vlt-port-channel 22
!
interface port-channel30
description "PFM D1-D4/STIOMGMT"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 152,160,168,176,858
lacp fallback enable
mtu 9216
vlt-port-channel 30
9. Validate that the MX9116n-B1 module is connected to the correct MX7116n FEM:
MX9116N-B1# show unit-provision
10. Validate that the MX9116n-B2 module is connected to the correct MX7116n FEM:
MX9116N-B2# show unit-provision
11. Configure the peer links and VLT on ports 37 through 40, using two 200 Gb
DAC/AOC cables.
Table 18. Peer link and VLT configuration
MX9116n-B1
vlt-domain 1
backup destination 10.234.90.245
discovery-interface ethernet1/1/37-1/1/40
primary-priority 4196
vlt-mac 00:11:22:33:44:47
!
interface ethernet1/1/37
description "Peer-link 9116-B2"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/38
description "Peer-link 9116-B2"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/39
description "Peer-link 9116-B2"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/40
description "Peer-link 9116-B2"
no shutdown
no switchport
flowcontrol receive off

MX9116n-B2
vlt-domain 1
backup destination 10.234.90.243
discovery-interface ethernet1/1/37-1/1/40
primary-priority 8192
vlt-mac 00:11:22:33:44:47
!
interface ethernet1/1/37
description "Peer-link 9116-B1"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/38
description "Peer-link 9116-B1"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/39
description "Peer-link 9116-B1"
no shutdown
no switchport
flowcontrol receive off
!
interface ethernet1/1/40
description "Peer-link 9116-B1"
no shutdown
no switchport
flowcontrol receive off
Note: On each storage MX9116n FSE (B1, B2), ports 25 through 28 and ports 33
through 36 are used with 25g-8x breakout cables to connect to the external storage-only
nodes. The connections alternate among the nodes to provide resiliency. See the
following example and table for clarification.
Example: Switch port 1/1/33 was set to allow a 25g-8x breakout cable so that
eight virtual ports in the switch are created in the following manner: 1/1/33:1
through 1/1/33:8.
Note: Not all breakout cable ports are used in this POC.
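As an illustrative check (not from the original procedure), you can list the virtual ports that
a breakout creates. For example, for physical port 1/1/33:
MX9116N-B1# show interface status | grep "1/1/33"
The output should include the eight logical ports 1/1/33:1 through 1/1/33:8.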
Switch physical port Switch logical port Cable breakout number Device/port
The ports are configured with LACP. The following example configures the correct
external ports for the storage-only nodes (applies to both modules):
interface ethernet1/1/25:1
description "231 - p2p1 - mgmt d1 d3 rep1"
no shutdown
channel-group 1 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/25:2
description "232 - p2p1 - mgmt d1 d3 rep1"
no shutdown
channel-group 3 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/25:3
description "233 - p2p1 - mgmt d1 d3 rep1"
no shutdown
channel-group 5 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/27:1
description "231 - p3p2 - d2 d4 rep2"
no shutdown
channel-group 2 mode active
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/27:2
description "232 - p3p2 - d2 d4 rep2"
no shutdown
channel-group 4 mode active
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/27:3
description "233 - p3p2 - d2 d4 rep2"
no shutdown
channel-group 6 mode active
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/33:1
description "234 - p2p1 - mgmt d1 d3 rep1"
no shutdown
channel-group 7 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/33:2
description "235 - p2p1 - mgmt d1 d3 rep1"
no shutdown
channel-group 9 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/35:1
description "234 - p3p2 - d2 d4 rep2"
no shutdown
channel-group 8 mode active
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/35:2
description "235 - p3p2 - d2 d4 rep2"
no shutdown
channel-group 10 mode active
no switchport
mtu 9216
flowcontrol receive off
!
Note: The following example configures the correct internal ports for the compute-only
nodes so that they have access to the storage network. The port mapping is different for
internal nodes because the two chassis have different switches per I/O module.
MX9116n-B1
interface ethernet1/1/1
description "M740-236 - vmnic2"
no shutdown
channel-group 15 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/3
description "M740-237 - vmnic2"
no shutdown
channel-group 16 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/1
description "M740-238 - vmnic2"
no shutdown
channel-group 17 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/2
description "M740-239 - vmnic3"
no shutdown
channel-group 18 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off

MX9116n-B2
interface ethernet1/1/1
description "M740-238 - vmnic4"
no shutdown
channel-group 17 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/3
description "M740-239 - vmnic4"
no shutdown
channel-group 18 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/1
description "M740-236 - vmnic4"
no shutdown
channel-group 15 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/2
description "M740-237 - vmnic0"
no shutdown
channel-group 16 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
Note: On all four MX9116n FSE modules, PowerFlex is connected using 4-port 25 Gb
breakout cables to port 41.
MX9116n-B1
interface ethernet1/1/43:1
description "PFM vmnic5 ESXI/PROD"
no shutdown
channel-group 30
no switchport
flowcontrol receive off

MX9116n-B2
interface ethernet1/1/43:1
description "PFM vmnic7 ESXI/PROD"
no shutdown
channel-group 30
no switchport
flowcontrol receive off
If the PowerFlex appliance contains 10 Gb NICs, the port-group mode must be changed
to mode Eth 10g-4x and 4-port 10 Gb breakout cables must be used.
The following table provides examples of the full switch configurations for the POC lab
switches:
Configuration      Document
B1 configuration   B1 configuration example
B2 configuration   B2 configuration example
These steps take place inside the OpenManage Enterprise – Modular Edition web
interface.
Configure the Dell BOSS card (RAID 1)
Manually configure the data protection (RAID 1) for the Dell BOSS card on each MX740c
compute sled:
1. Launch the virtual console and select Next Boot to BIOS setup to enter the
system BIOS.
2. Power cycle the server and press F2 to open the BIOS setup.
3. From the System Setup main menu, select Device Settings.
4. Select AHCI Controller in Slot1: BOSS-1 Configuration Utility.
5. Select Create RAID Configuration.
6. Select both devices and select Next.
7. Enter VD_R1_1 as the name and leave the default values.
8. Select Yes for Would you like to create this virtual disk?.
9. Select Next, and then select OK.
10. Select Exit, and then select Yes.
11. Select OK, and then select Exit.
12. Select System BIOS > Boot Settings > BIOS Boot Settings, and then select
Hard-Disk Drive Sequence.
13. Select DELLBOSS VD, select + to move, and then select OK.
14. Select Back > Back > Finish > Finish, and then select Yes.
15. Repeat steps 1 through 9 for each additional MX740c compute sled.
Verify the BIOS boot order sequence
Verify that the boot order is set correctly in the BIOS on each MX740c compute sled:
1. Log in to the iDRAC interface.
2. Launch the Virtual Console from the Virtual Console Preview pane.
3. Boot the server into BIOS mode by pressing F2 during the boot process.
4. Select System BIOS.
5. Select BIOS Boot Settings and enter the following settings:
Boot Mode: BIOS
Boot Sequence Retry: Enabled
Hard Disk Failover: Disabled
6. Select BIOS Boot Settings and use the following values:
Table 23. BIOS Boot Settings values
Introduction
After the MX7000 chassis pair has been configured, configure the PowerFlex
management node.
Note: This chapter provides the configuration details from the POC lab. The POC used a single
R640 PowerFlex node to manage the cluster. We recommend moving to a three-node
management cluster to provide high availability for virtual machine workloads.
• Verify that the iDRAC firmware and BIOS are at the correct versions for RCM 3.6.
• Ensure that the iDRAC network is configured.
• Verify that the ESXi RCM ISO file is on the same network as the host.
Mount the VMware vSphere ESXi ISO
1. Connect to the iDRAC and launch a virtual remote console.
2. Use the Virtual Media option to attach the ESXi ISO as the virtual DVD.
3. Click Close.
4. Set Boot to Virtual CD/DVD/ISO.
5. Reboot the server.
6. Click Power > Reset System (warm boot).
Install VMware ESXi
1. When the VMware ESXi installer is loaded and the installation Welcome screen is
displayed, press Enter to continue.
2. Press F11 to accept the end-user license agreement.
3. Select DELLBOSS VD as the install location and press Enter to continue.
If the selected storage contains any other installation of ESXi and VMFS
datastore, you are prompted to select an option about the existing ESXi and
VMFS.
4. Select Install ESXi, overwrite the VMFS datastore to proceed with a fresh
installation, and press Enter to continue.
5. Select the keyboard type for the host and press Enter to continue.
6. When prompted, type the root password and press Enter to start the installation.
7. Press F11 to confirm the installation.
8. When the installation is complete, remove the installation media and press Enter
to reboot the server.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter <PFMC-host-short-name-DAS>.
6. Click Save.
Installing vCSA
Deploy the VMware vCenter Server Appliance (vCSA), and then set up the vCenter
Server:
1. After selecting Continue from stage 1, select Next from the stage 2 introduction
page.
2. Select Synchronize time with NTP Server, enter the NTP server IP addresses,
select Disabled for SSH access, and then click Next.
3. At the SSO configuration page, create a new SSO domain.
4. Enter the Single Sign-On domain name, provide the password for SSO, and click
Next.
5. On the Configure CEIP page, clear Join the VMware’s Customer Experience
Improvement Program and click Next.
6. At the Ready to complete page, verify all settings and click Finish.
7. Click OK if a warning message is displayed.
8. When Stage 2 is completed, click Close and log in to vCenter using the SSO
credentials.
• flex-node-mgmt
• flex-vmotion
• flex-vcsa-ha
• flex-stor-mgmt
• flex-vmotion
• flex-data1
• flex-data2
• flex-data3
• flex-data4
Introduction
After the PowerFlex management node has been configured, configure the MX740c ESXi
compute sleds.
This chapter provides the configuration details from the POC lab. The following
instructions are to be performed in OpenManage Enterprise – Modular Edition.
d. Enter the VLAN value. See the ESXi Management VLAN ID field in the LCS
for the required VLAN value.
e. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY
configuration to the values defined in the LCS.
f. Go to DNS Configuration. See the LCS for the required DNS value.
g. Go to DCUI IPv6 Configuration.
h. Select Disable IPv6.
4. Press ESC to return to the DCUI.
5. Type Y to commit the changes and restart the node.
6. Verify host connectivity by pinging the IP address from the jump server, using the
command prompt.
The following VLANs are used in this POC for the compute cluster:
• VLAN 848 – flex-node-mgmt
• VLAN 858 – flex-stor-mgmt
• VLAN 106 – flex-vmotion
• VLAN 152 – flex-data1
• VLAN 160 – flex-data2
• VLAN 168 – flex-data3
• VLAN 176 – flex-data4
The following table shows how the distributed switches are configured in the compute
cluster:
Dvswitch0
• active/active
• Route based on IP hash
• MLAG with vmnic0/vmnic1
• MTU: 9000
• Distributed port groups (Ephemeral): flex-node-mgmt, hv-vmotion-106

Dvswitch1
• active/active
• Route based on IP hash
• MLAG with vmnic2/vmnic3
• Distributed port groups (Ephemeral): flex-data1-152, flex-data2-160, flex-data3-168,
flex-data4-176
5. Right-click the new port group and click Edit Settings to open the wizard.
6. In the <port group> wizard, type the data values as requested, accept the default
settings, but do the following for the Teaming and Failover field:
a. Change Load balancing to Route based on IP hash.
b. Change the Failover order to be:
Active uplinks: lag-managment (LACP).
Unused uplinks: Uplink 1 and Uplink 2.
c. Click OK.
7. Repeat steps 2 through 6 for hv-vmotion-106.
Migrate compute ESXi hosts
Migrate the physical NICs and ESXi management (vmk0) adapter from vSwitch0 to
DSwitch0-CO. These steps are for migrating the ESXi compute sleds from vSwitch0 to
DSwitch0-CO. They include reconfiguring LACP one side (NIC) at a time during the
migration to the distributed switch.
Note: We recommend that you perform these steps on one ESXi host at a time.
Only vSwitch0 is created with the flex-node-mgmt adapter configured on each compute
ESXi host. This section provides steps to migrate both the physical NICs and ESXi
management adapters to DSwitch0. All MX9116n FSE switch ports for the compute ESXi
hosts are configured as a trunk access port during initial configuration. These ports are
configured later with LACP enabled. The following are the high-level steps:
Note: Do not apply these changes at this time. The following is an example of the switch
configuration after all nodes have been converted into LACP.
MX9116n-A1
interface ethernet1/1/1
description "M740-236 - vmnic0"
no shutdown
channel-group 11 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/3
description "M740-237 - vmnic0"
no shutdown
channel-group 12 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/1
description "M740-238 - vmnic0"
no shutdown
channel-group 13 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/2
description "M740-239 - vmnic0"
no shutdown
channel-group 14 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off

MX9116n-A2
interface ethernet1/1/1
description "M740-238 - vmnic1"
no shutdown
channel-group 13 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/1/3
description "M740-239 - vmnic1"
no shutdown
channel-group 14 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/1
description "M740-236 - vmnic1"
no shutdown
channel-group 11 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
!
interface ethernet1/71/2
description "M740-237 - vmnic1"
no shutdown
channel-group 12 mode active
fec off
no switchport
mtu 9216
flowcontrol receive off
3. Record to which physical switch port vmnic0 (9116-A1) and vmnic1 (9116-A2) are
connected.
4. Click Home, then select Hosts and Clusters and expand the compute cluster.
5. Select the first compute ESXi host in the left pane and then select the Configure
tab in the right pane.
6. Select Virtual switches under Networking.
7. Expand vSwitch0.
8. Under Physical Adapters, expand Uplink 1 (vmnic0), click the ellipsis (…), and
select View Settings.
9. Click the LLDP tab.
10. Record the Port ID (Switch Port) and System Name (Switch) values.
11. Repeat steps 3 through 10 for uplink 2 (vmnic1) and then go to step 12.
12. Remove vmnic1 from vSwitch0 on the compute ESXi host (sled) as follows:
a. Click Home, click Hosts and Clusters, and then expand the compute
cluster.
b. Select the first compute ESXi host in the left pane and then select the
Configure tab in the right pane.
c. Select Virtual switches under Networking.
d. Expand vSwitch0 and select Manage Physical Adapters to open the
wizard.
e. In the Manage Physical Network Adapters wizard, type the data values as
requested, accept the default settings, and do the following:
i Select vmnic1 and click the red X to remove the NIC.
ii Verify that vmnic1 is removed from the list.
iii Click OK.
13. Configure LACP on the 9116-A2 switch port on a compute ESXi sled:
a. Use SSH to connect to the 9116-A2 switch.
b. Type the following switch commands to configure LACP for the ESXi host in
chassis 1/slot 1:
Note: See the port map to obtain port connectivity for each compute ESXi host.
Ensure that you use the correct channel-group number for each ESXi host being
configured.
# Config t
# interface ethernet1/1/1
# channel-group 12 mode active
b. Select the first compute ESXi host in the left pane and then select the Configure
tab in the right pane.
c. Select Virtual switches under Networking.
d. Expand DSwitch0 and select Manage Physical Adapters to open the
wizard.
e. In the Manage Physical Network Adapters wizard, type the data values as
requested, accept the default settings, but do the following:
i Select Mgmt-1 (Mgmt).
ii Click + to open list of NICs.
iii Select vmnic1.
iv Click OK
16. Wait 60 seconds and then select the VMkernel adapter link under Networking to
ensure that the flex-node-mgmt (vmk0) adapter is on DSwitch0.
17. Remove vmnic0 from vSwitch0 on the compute ESXi host (sled):
18. Configure LACP on the 9116-A1 switch port on a compute ESXi sled:
19. Use SSH to connect to the 9116-A1 switch.
20. Type the following example switch commands to configure LACP for ESXi host in
chassis 1/slot 1:
Note: See the port map to obtain port connectivity for each compute ESXi host. Ensure that you
use the correct channel-group number for each ESXi host being configured.
# Config t
# interface ethernet1/1/1
# channel-group 12 mode active
22. Repeat the preceding steps for each compute ESXi host.
Creating the VMkernel adapters on distributed switches
Create the network adapters on the following distributed switches:
• DSwitch0: co-mgmt-848, hv-vmotion-106
• DSwitch1: flex-data1-152, flex-data2-160, flex-data3-168, flex-data4-176
Before you begin, verify that the compute ESXi hosts are configured on the distributed
switches.
1. From Home, click Hosts and Clusters and expand the data center and then
expand the compute cluster.
2. Select the first compute ESXi host in the left pane and then select the Configure
tab in the right pane.
3. Click VMkernel adapters under Networking.
4. Click Add Networking to open the wizard.
5. In the Add Networking wizard, type the data values as requested, accept the
default settings, and use the configuration information in the following table:
Table 27.
6. Wait 60 seconds and then verify that the co-mgmt-848 and hv-vmotion-106 (vmk1)
network adapters are created on DSwitch0.
7. Repeat steps 4 through 6 to create flex-data1-152, flex-data2-160, flex-data3-168,
and flex-data4-176 on DSwitch1. Each port group has a different address than the
one shown in the preceding table.
8. Repeat steps 1 through 7 for each compute ESXi host.
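After the VMkernel adapters are created on a host, a jumbo-frame connectivity test from
the ESXi shell is a useful sanity check (a hedged example; the vmk number and target IP
depend on your LCS values):
vmkping -I vmk2 -d -s 8972 <storage-only-node-data1-ip>
The -d flag sets the do-not-fragment bit, and -s 8972 sends a payload that only succeeds
when an MTU of 9000 is configured end to end.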
Introduction
After the MX740c ESXi compute sleds have been configured, configure the R640 storage-
only nodes and PowerFlex OS. This section provides the configuration details from the
POC lab.
The storage-only PowerFlex integrated rack was built with five Dell EMC PowerEdge
servers. Complete the following steps before configuring the storage-only PowerFlex
integrated rack aggregation switch:
• Connect the storage-only PowerFlex integrated rack to the MX9116n FSE network
through a 200 Gb link.
• Connect the MX9116n FSE modules appropriately and ensure that the sleds are
correctly configured.
• Connect the MX9116n FSE modules to the customer core switches.
• Enable the data VLANs (VLAN 152, 160, 168, and 176) on the MX9116n FSE
modules.
• Obtain the embedded operating system (CentOS 7.7) software.
The logical configuration of the two-layer PowerFlex storage-only nodes is no different
from a standard build, except that VLAN traffic uses different bond interfaces:
• bond0—flex-mgmt, data1, and data3 on a single pair of network interfaces
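For reference, the following is a minimal sketch of what the bond0 definition can look like
on a CentOS 7.x storage-only node. It is an assumption-based illustration only: interface
names, IP values, and MTU come from the LCS, and the PowerFlex deployment tooling
normally generates these files.
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative LACP bond)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none
MTU=9000
# /etc/sysconfig/network-scripts/ifcfg-bond0.858 (flex-mgmt VLAN subinterface, illustrative)
DEVICE=bond0.858
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=<flex-mgmt-ip>
PREFIX=<prefix-length>
The data1 (VLAN 152) and data3 (VLAN 168) subinterfaces follow the same pattern on
bond0.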
Note: You can configure PowerFlex Manager as the DHCP server or PXE server in the Initial
Setup wizard when you first log in to the PowerFlex Manager UI. If you specified that you want to
use an external server in the LCS, it is already configured.
Change the default password
After you accept the license agreement, log in to the Dell EMC Initial Appliance
Configuration UI and change the default password of the PowerFlex Manager virtual
appliance:
1. Log in to the PowerFlex Manager virtual appliance through the VM console using
the following default credentials and then press Enter:
Username: delladmin
Password: delladmin
2. Click Agree to accept the License Agreement and click Submit. After you agree
to the License Agreement, you can go back and review the license agreement at
any time by clicking EULA in the Dell EMC Initial Appliance Configuration UI.
3. In the Dell EMC Initial Appliance Configuration UI, click Change Admin
Password.
4. Enter the required information (Current Password, New Password, Confirm New
Password) and click Change Password.
Configure the networks
Configure the VMware ESXi management, OOB management, and operating system
installation networks through the Dell EMC Initial Appliance Configuration UI, using the
connections shown in the following table:
If you can successfully log in to the PowerFlex Manager UI, PowerFlex Manager is
successfully deployed.
If you cannot log in to PowerFlex Manager, ensure that you are using the correct
<IP_Address> by typing ip address in the command line and searching for the IP
address of the PowerFlex Manager virtual appliance. The <IP_Address> must be the
same <IP_Address> that is displayed in the Dell EMC Initial Appliance Configuration UI.
Configure the date and time
Set the date and time of the PowerFlex Manager virtual appliance in the Dell EMC Initial
Appliance Configuration UI.
If you change the time after the license is uploaded, features might be disabled because
the appliance might fall outside the time range for which the license is valid.
1. In the Dell EMC Initial Appliance Configuration UI, click Date/Time properties.
2. Under the Date and Time tab, choose either of the following:
To automatically set the time, select the Synchronize date and time over the
network check box and complete the following steps depending on your
requirements:
− Add an NTP server that is configured to the NTP Servers menu.
− Edit the default NTP server(s) from the NTP Servers menu.
− Delete the NTP server(s) from the NTP Servers menu.
To manually set the time, select the Hour, Minute, and Second under the Time
page.
3. On the Time Zone tab, select the nearest city to your time zone on the map and
select the System clock uses UTC check box. Click OK.
Change the hostname
Update the hostname from the default setting, using the Dell EMC Initial Appliance
Configuration UI.
1. In the Dell EMC Initial Appliance Configuration UI, click Hostname.
The Update Hostname dialog is displayed.
2. Enter the new hostname.
3. Select Update Hostname.
A dialog box states the hostname is successfully updated.
4. Reboot the PowerFlex Manager virtual appliance for the changes to take effect.
Enable remote access to the PowerFlex Manager VM
Enable SSH and access the PowerFlex Manager VM through the command line to allow
completion of certain tasks, such as administration or system maintenance.
1. Log in to the PowerFlex Manager appliance console using the delladmin
username.
If you do not see the system prompt, log out of the shell and log back in.
2. Enter the following command:
sudo su -
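With SSH enabled, you can also reach the appliance remotely instead of using the VM
console (a generic example; substitute the appliance IP address shown in the Dell EMC
Initial Appliance Configuration UI):
ssh delladmin@<PowerFlex_Manager_IP>
sudo su -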
For detailed information about how to configure PowerFlex Manager, see the Initial setup
and configuration section in the PowerFlex Manager online help.
Before you start, log in to the Dell EMC Download Center and download the PowerFlex
Manager license key file to a local network share.
Note: After uploading the license, you must complete the remaining tasks in the
Setup wizard. If you close the Setup wizard without completing the remaining tasks,
go to Settings > Virtual appliance management to complete the tasks.
b. On the Time Zone and NTP Settings page, configure the time zone and
add NTP server information. Click Save and Continue.
c. On the Proxy Settings page, if using a proxy server, select the check box
and enter configuration details. Click Save and Continue.
d. On the DHCP Settings page, to configure PowerFlex Manager as a DHCP
or PXE server, select the Enable DHCP/PXE server check box. Enter the
DHCP details and click Save and Continue.
Note: Enter the starting IP address in the range, for example 192.168.104.100, and
the ending IP address in the range, for example, 192.168.104.250. The ending IP
range can be increased depending on the number of servers that are being
configured.
e. Do not configure the alert connector during the Setup wizard. You must deploy
the system and create the cluster first. Skip this step in the wizard.
f. On the Summary page, verify the settings. Click Finish to complete the
initial setup.
The Getting Started page is displayed after you complete the initial setup. The
page guides you through the common configuration that is required to prepare a
new PowerFlex Manager environment. A green check mark on a step indicates
that you have completed the step. First, click Configure Compliance under Step
1: Firmware and Software Compliance to provide the RCM location and
authentication information for use within PowerFlex Manager.
A message is displayed when the RCM upgrade starts. You can close this
window. The upgrade runs in the background.
3. Click Define Networks under Step 2: Networks to enter detailed information
about the available networks in the environment. The networks defined in
PowerFlex Manager are used in templates to specify the networks or VLANs that
are configured on nodes and switches for services. The following table outlines
the purpose of each network:
Table 29. Network purpose
Network                           Purpose
General purpose LAN               Optional; used to access network resources for basic
                                  networking activities.
Hypervisor migration              Used to manage the network that you want to use for live
                                  migration. Live migration allows you to move running VMs
                                  from one node of the failover cluster to a different node in
                                  the same cluster.
Operating system installation     Allows static or DHCP network for operating system
                                  imaging on nodes.
PowerFlex data                    Used for traffic between PowerFlex data clients (SDC) and
                                  data servers (SDS). Used for all node types except for
                                  dedicated storage-only nodes.
PowerFlex data (SDC traffic only) Used for traffic between SDC and SDS and between SDS
                                  and SDS. Used for cloning a dedicated storage template.
PowerFlex data (SDS traffic only) Used for traffic between SDS and SDS. Used for cloning a
                                  dedicated storage template.
Discovering resources
Discover and grant PowerFlex Manager access to resources in the environment. Provide
the management IP address and credential for each discoverable resource.
Dell Technologies recommends using separate operating system credentials for SVM and
VMware ESXi. For information about creating or updating credentials in PowerFlex
Manager, click Settings > Credentials Management and access the online help.
During node discovery, you can configure iDRAC nodes to automatically send alerts to
PowerFlex Manager. If the PowerFlex nodes are not configured for alert connector,
Secure Remote Services does not receive critical or error alerts for those resources.
1. Gather the IP addresses and credentials that are associated with the resources.
2. On the PowerFlex Manager Getting Started page, click Discover Resources
under Step 3: Discover.
3. On the Welcome page of the Discovery wizard, read the instructions and click
Next.
4. On the Identify Resources page, click Add Resource Type. From the Resource
Type list, select the resource you want to discover.
5. Enter the management IP address of the resource in the IP Address Range field.
To discover a resource in an IP range, provide a starting and ending IP address.
6. In the Resource State list, select either Managed or Unmanaged. The resource
state must be managed for PowerFlex Manager to send alerts to Secure Remote
Services.
7. To discover resources into a selected node pool instead of the global pool
(default), select the node pool from the Discover into Node Pool list.
8. Select the appropriate credential from the Credentials list.
9. If you want PowerFlex Manager to automatically reconfigure the iDRAC nodes it
finds, select the Reconfigure discovered nodes with new management IP and
credentials checkbox. This option is not selected by default, because it is faster to
discover the nodes if you bypass the reconfiguration.
10. Select the Auto configure nodes to send alerts to PowerFlex Manager
checkbox to have PowerFlex Manager automatically configure iDRAC nodes to
send alerts to PowerFlex Manager.
11. Click Next to start discovery. On the Discovered Resources page, select the
resources from which you want to collect inventory data and click Finish. The
discovered resources are listed on the Resources page.
3. In the Clone Template dialog box, enter a template name under Template
Name.
6. To specify the version to use for compliance, select the version from
the Firmware and Software Compliance list or choose Use PowerFlex
Manager appliance default catalog.
Because the template only includes server firmware updates, you cannot select a
minimum compliance version. The compliance version for a template must include
the full set of compliance update capabilities. PowerFlex Manager does not show
any minimum compliance versions in the Firmware and Software Compliance list.
7. Indicate access rights to the service deployed from this template by selecting
one of the following options:
9. On the Additional Settings page, provide new values for the Network
Settings, OS Settings, and Cluster Settings.
12. After the template is created, click Templates, select the PowerFlex
presentation server template, and then click Edit.
6. On the Additional Settings page, enter new values for the Network
Settings, PowerFlex Gateway Settings, and Cluster Settings.
7. Click Finish.
9. Edit the PowerFlex Gateway and VMware cluster, select the required field, and
click Save.
d. Specify the service permissions for this template under Who should have
access to the service deployed from this template? by doing one of the
following:
To restrict access to administrators, select Only PowerFlex Manager
Administrators.
To grant access to administrators and specific standard users, select
PowerFlex Manager Administrators and Specific Standard and
Operator Users.
To grant access to administrators and all standard users, select
PowerFlex Manager Administrators and All Standard and Operator
Users.
e. Click Next.
4. On the Additional Settings page:
Under Network Settings, provide values for the PowerFlex management and data
network types.
Under OS Settings, select the operating system credential to be used across all
nodes. Then, select the operating system repository to use for each of the
repositories in the original template.
Under PowerFlex Gateway Settings, create or select the PowerFlex Gateway
credential.
Under Node Pool Settings, select the node pool to use for each of the node
pools in the original template. Click Finish.
5. Go to the Templates page, right-click, and click Edit Storage node.
a. Select Storage Node from the list as the component, enter the number of
instances, and then click Continue.
b. Provide or verify the operating system, hardware, BIOS, and network
settings details.
c. Click Validate Settings.
d. Click Save
6. On the Templates page, right-click and click Edit Cluster.
a. Select PowerFlex Cluster from the list as the component.
b. Click Continue.
7. Under PowerFlex Settings, select the Target PowerFlex Gateway and verify other
details.
8. Click Save.
Deploy the PowerFlex storage-only node service
1. On the menu bar, click Storage-Only Node Templates > Deploy Service.
2. On the Deploy Service page, perform the following steps:
a. From the Select Published Template list, select the template to deploy a
service.
b. In the Service Name and Service Description fields, identify the service.
c. To specify the version to use for compliance, select the version from the
Firmware and Software Compliance list or choose Use PowerFlex Manager
appliance default catalog.
PowerFlex Manager checks the VMware vCenter version to determine if it
matches the VMware ESXi version for the selected compliance version. If the
VMware ESXi version is later than the vCenter version, PowerFlex Manager
blocks the service deployment and displays an error. PowerFlex Manager
instructs you to upgrade vCenter first or use a different compliance version
that is compatible with the installed vCenter version.
Note: Changing the firmware repository might update the firmware level on nodes
for this service. The global default firmware repository maintains the firmware on the
shared devices.
d. Indicate who has access to the service deployed from this template by
selecting one of the available options. Select one from the list and click Next.
3. Under PowerFlex Settings on the Deployment Settings page, choose the Auto
Generate option and verify the Protection Domain Name, Protection Domain Name
Template, Storage Pool Name, Number of Storage Pools, and Storage Pool Name
Template values.
You can also click the Specify a new protection domain name now option to
provide the details manually.
To configure the PowerFlex settings, choose one of the following options for
PowerFlex MDM Virtual IP Source:
PowerFlex Manager Selected IP instructs PowerFlex Manager to select the
virtual IP addresses.
User Entered IP enables you to specify the IP address manually for each
PowerFlex data network that is part of the node definition in the service
template.
4. To configure operating system settings, select an IP Source.
To manually enter the IP address, click User Entered IP. From the IP Source list,
click Manual Entry. Then enter the IP address in the Static IP Address field.
5. To configure hardware settings, select the node source from the Node Source
list.
6. Click Next.
7. On the Schedule Deployment page, select one of the following options and click
Next:
Deploy Now—Select this option to deploy the service immediately.
Deploy Later—Select this option and enter the date and time to deploy the
service.
8. Review the Summary page.
The Summary page gives you a preview of what the service will look like after the
deployment.
9. Click Finish when you are ready to begin the deployment. For more information,
see PowerFlex Manager online help.
To manually install an external SDC on an ESXi server using the ESXCLI command-line
interface:
1. Install the SDC on an ESXi 7.0.x server:
a. Copy the SDC file to the tmp directory on the local datastore on the host.
b. Use SSH to connect to the host and run the following command:
esxcli software vib install -d /tmp/sdc-3.5.1100.101-esx7.x.zip --no-sig-check
The SDC is installed on the ESXi server and is connected to PowerFlex. You can now
create and map volumes to the compute nodes.
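As a quick follow-up check (a suggested example; the exact package name can vary by
SDC release), you can confirm from the ESXi shell that the SDC VIB is installed:
esxcli software vib list | grep -i -E "sdc|scini"
A host reboot and additional SDC module configuration may still be required; see the
PowerFlex documentation for your release.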
Appendix A - Validated Components
PowerFlex appliance
Component            Version
PowerFlex hardware   RCM 3.6
PowerFlex Manager    PxFM 3.6.0
PowerFlex            PowerFlex 3.5.1100.104
VMware
Storage-only nodes
Management switch
MX740c blade
Component            Version
BIOS                 2.7.7
iDRAC                4.20.20.20
VMware
References
The following Dell Technologies documentation provides additional and relevant
information. Access to these documents depends on your login credentials. If you do not
have access to a document, contact your Dell Technologies representative.
• PowerFlex InfoHub site
• Dell EMC PowerFlex Rack Architecture Overview
• Dell EMC PowerFlex: Networking Best Practices and Design Considerations
• Dell EMC PowerFlex Rack Using PowerFlex Manager
• Dell EMC PowerEdge MX
• PowerEdge MX7000 documentation
• Dell EMC Networking MX9116n Fabric Switching Engine documentation