HARDWARE CONCEPTS
PARTICIPANT GUIDE
PowerScale Hardware Concepts-SSP
Table of Contents
Platforms Overview
Scenario
PowerScale Nodes Overview
Knowledge Check
Basic Node Components
Knowledge Check
Flash Nodes
Hybrid Nodes
Archive Nodes
Accelerator Nodes
Knowledge Check
Node Positioning
Platform Refresh
PowerScale Features
Knowledge Check
Networking Architecture
Scenario
PowerScale Networking Architecture
General Networking Considerations
Knowledge Check
Node Interconnectivity
Simple Switch Architecture
Leaf-Spine Architecture
Leaf-Spine Network Components
Knowledge Check
Front-End Network Design
Knowledge Check
Platforms Overview
Scenario
The current topic is divided into three sections before moving into the
architecture of nodes.
PowerScale Nodes Overview
The PowerScale family has several types of nodes, all of which can be
incorporated into a single cluster. Different nodes provide varying ratios of
capacity, throughput, or input/output operations per second (IOPS). The
converged architecture of these platforms enables the scaling out of
compute and capacity. The OneFS operating system powers all the
nodes.
The compute and storage of nodes are pooled to form one cluster.
While no longer available for sale, Isilon Gen6 hybrid and archive nodes
will be serviced until May 2028, and flash nodes until May 2029.
• F200
• F600
• F900
• F210
• F710
• F910
Like the other PowerScale all-flash models, a minimum of three nodes
forms a cluster, and nodes can be added individually.
The 2U F910 requires OneFS 9.8 and is node compatible with the F900.
Newest PowerScale all-flash nodes. F210 (top), F710 (middle), and F910 (bottom).
Knowledge Check
2. Can different node types be mixed and matched within the same
cluster?
a. Yes
b. No
Basic Node Components
Administration is done from any node in the cluster as each node provides
network connectivity, storage, memory, nonvolatile RAM (NVDIMM), and
CPU processing power. There are different nodes for compute and
capacity, and these can be mixed and matched to meet specific business
needs.
Disks
• HDDs2
• SSDs3
• NVMe Drives4
• Self-Encrypting Drives5
2 HDDs typically have slower access speeds; however, if the node has
sufficient CPUs, the type of spinning media may not significantly affect
performance for applications with random access patterns. Dell
PowerScale SmartPools can mitigate performance issues by classifying
and directing data to the most appropriate media based on performance
requirements, optimizing data access for distinct types of applications.
3 Except for the all-flash models, adding one or two SSDs to archive and
hybrid nodes provides flash capacity that can be used as L3 cache, for
global namespace acceleration (GNA), or for other SSD strategies.
Processor
Adding nodes always increases processing power and improves the overall
performance of the cluster. Forward Error Correction7, or FEC, uses the
CPUs for calculations in high ingest and update workflows. Read, modify,
and write cycles can consume considerable CPU resources. Protection is
built into the file system structure and can be applied down to the level of
individual files.
5 All the drives in the node must be SEDs. A cluster composed of SED
nodes and non-SED nodes is supported during a transition to an all-SED
cluster.
6 DARE is a requirement for federal and industry regulations ensuring that
data is encrypted at rest.
Cache
Both L1 cache and L2 cache are managed and maintained in RAM. These
two cache layers are present in all Dell PowerScale storage nodes.
However, OneFS is also capable of using SSDs as level 3, or L3 cache.
The table shows each cache level and how the levels work together to
provide performance improvements across the entire cluster.
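As a hedged illustration only: cache activity can also be inspected from the cluster CLI with the isi_cache_stats utility. The utility and its output fields vary by OneFS release, so treat this as a sketch rather than a definitive reference.
# Summary of L1, L2, and L3 cache activity on the local node
isi_cache_stats
# More detailed per-level statistics
isi_cache_stats -v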
Front-End Connectivity
Clients connect to the cluster using Ethernet connections that are available
on all nodes. Because each node provides its own Ethernet ports, the
amount of network bandwidth available to the cluster scales linearly.
Knowledge Check
Flash Nodes
The F-series all-flash nodes sit at the top of the PowerScale family for both
performance and flash capacity, pairing high compute with high capacity.
Even as the cluster scales, latency remains predictable.
F800/810 4U chassis
• F800
• F810
F900 2U node
• F900
F910 2U node
• F910
F200 1U node
• F200
F600 1U node
• F600
F210 1U node
• F210
F710 1U node
• F710
Hybrid Nodes
The hybrid storage platforms are highly flexible and maintain a balance
between large capacity and high-performance storage, providing support
for a broad range of enterprise workloads.
• H400
• H500
• H600
• H5600
• H700
• H7000
• Includes inline compression and deduplication
Archive Nodes
The A-series nodes are designed as highly efficient and resilient active
archive storage or long-term data retention for large-scale data archives.
The archive platforms can be combined with new or existing flash and
hybrid storage systems into a single cluster that provides an efficient
tiered storage solution.
• A200
• A2000
• A300
• A3000
• Includes inline compression and deduplication
Accelerator Nodes
OneFS 9.3 and later includes support for two new PowerScale accelerator
node models. The accelerator nodes do not contain any local storage that
can be used as a part of the cluster and are optimized for CPU or memory
configurations. Based on the 1RU Dell PE R640 platform, accelerator
nodes include:
− Each B100 node delivers Fibre Channel ports that can connect
directly to a tape subsystem or a Storage Area Network (SAN).
Knowledge Check
4. Select the best fit node type for each use case.
An organization implementing a home
☐ Flash Nodes   ☐ Hybrid Nodes   ☐ Archive Nodes
Node Positioning
Consider how the nodes are positioned for sales and upgrades as
organizations work to stay competitive and operate efficiently.
Selling Criteria
The table compares all Gen6 nodes (old and new) with the F200, F600,
F900, F210, F710, and F910 in terms of selling criteria, hardware
upgrades, and compatibility.
Platform Refresh
PowerScale Features
The table shows the common features provided across the PowerScale
platforms.
New nodes are added to the existing cluster, and then SmartFail is used to
remove the old nodes.
16 All the information remains the same, including the cluster name, file
system configuration, and data.
Knowledge Check
Scenario
The current topic is divided into three sections before moving into the
high-level networking architecture.
The graphic shows the rear view and front view of an Isilon Gen6 and
PowerScale Gen6 chassis.
1: The compute module bay of the two nodes makes up one node pair.
Scaling out a cluster with Gen6 nodes is done by adding more node pairs.
You cannot mix node types in the same node pair or chassis.
6: Each node can have 1 or 2 SSDs that are used as L3 cache, global
namespace acceleration (GNA), or other SSD strategies.
7: The chassis comes in two different depths: the normal chassis is about
37 inches deep, and the deep chassis is about 40 inches deep.
8: Each node has five drive sleds. Depending on the length of the chassis
and type of the drive, each node can handle up to 30 drives or as few as
15. A drive sled must always have the same type of disk drive.
9: You cannot mix 2.5" and 3.5" drive sleds in a node. Disks in a sled are
all the same type.
10: The sled can be either a short sled or a long sled.
Gen6 Tour
Gen6 Chassis
All Gen6 chassis come with the front panel and the front panel display
module. The front panel covers the drive sleds while allowing access to
the display.
Movie:
The web version of this content contains a movie.
See the video transcript in the course guide or download from the player.
The hardware tour will take a deeper look inside the node’s compute
module.
Movie:
The web version of this content contains a movie.
See the video transcript in the course guide or download from the player.
Knowledge Check
2. Which statements are true about drives and drive sleds for a Gen6
cluster?
a. A3000 nodes are housed in a deep chassis which can
accommodate four 3.5-inch drives per sled.
b. You can only mix 2.5-inch and 3.5-inch drive sleds in an H7000 deep
chassis.
c. Each F800 node has five drive sleds that can house 2.5-inch
drives.
d. The maximum number of drives that can be housed in any Gen6
node is 15.
F600 with rNDC 1 GbE management ports and PCIe front-end NIC, or F200 with rNDC
front-end NIC.
8: PSU 1 - 1100 W
9: PSU 2 - 1100 W
Every all-flash node has drives in drive bays, instead of drive sleds,
which are used in the Gen6 and PowerScale hybrid and archive platforms.
All nodes have front system indicators, a pull-out service tag, and an LCD
display providing system information, status, and error messages. The
LCD display can be used to configure or view the system iDRAC IP
address.
The F210 uses four 2.5-inch NVMe SSDs in bays 1-4.
The F710 uses ten 2.5-inch NVMe SSDs in bays 1-10.
The F710 and F210 are 1U form factor nodes and are based on the
PowerEdge R660 architecture.
The F910 is a 2U form factor node and is based on the PowerEdge R760
architecture.
F210
F710
F910
PowerScale Journal
Journal - NVDIMM
The F200, F600, and F900 use a 16 GB NVDIMM for the journal.
The vault operation saves contents from DRAM to NVDIMM flash, and a
restore process moves contents from NVDIMM back to DRAM during
BIOS initialization on power-up.
Journal - M.2
Archive and hybrid nodes are partner nodes whose journals are mirrored.
Every journal has a mirror copy on another node.
The F210, F710, and the F910 include a journal hardware upgrade with
the new 32 GB configuration of the Dell Software Defined Persistent
Memory (SDPM) technology.
The vault solution is a single module holding a Battery Backup Unit (BBU)
and the Vault Optimized Storage Solution (VOSS) NVMe M.2 drive. On
power loss, data is copied from DDR5 DIMMs to the VOSS module.
Knowledge Check
Architecture Overview
P100
B100
Knowledge Check
Networking Architecture
Scenario
Context: The technician must understand the two types of networks that
are configured, along with the supported protocols and connection types.
21 Because each node provides its own Ethernet ports, the amount of
network bandwidth available to the cluster scales linearly.
• Minimizing latency
• Available bandwidth
25 Load sharing creates two or more paths to the same endpoint and
shares the network load.
Knowledge Check
Node Interconnectivity
PowerScale nodes can use either an InfiniBand or Ethernet switch for the
back-end. InfiniBand was designed as a high-speed interconnect for
high-performance computing (HPC).
1: Back-end ports int-a and int-b. The int-b port is the upper port. Gen6
back-end ports are identical for InfiniBand and Ethernet and cannot be
identified by looking at the node. If Gen6 nodes are integrated into a Gen5
or earlier cluster, the back-end uses InfiniBand. There is a procedure to
convert an InfiniBand back-end to Ethernet if the cluster no longer has
pre-Gen6 nodes.
3: Some nodes, such as archival nodes, may use only part of a 10 GbE
port's bandwidth, while other workflows use the full 40 GbE port bandwidth.
Ethernet performance is comparable to InfiniBand, so there should be no
performance bottlenecks with mixed-performance nodes in a single
cluster. Administrators should see no performance difference when moving
from InfiniBand to Ethernet.
Simple Switch Architecture
For smaller clusters, a single-hop switching architecture that is paired with
the inherent back-end redundancy should suffice to meet the bandwidth,
latency, and traffic requirements. A single layer of switches can be used
for each interface (int-a and int-b). For example, consider a four-node
cluster. A single switch can be used to network the traffic for each
interface, int-a, and int-b.
Leaf-Spine Architecture
• Dell Z9100-ON 32-port 100 GbE switch (EOL 05/2023, EOSS TBA)
• Dell S4148-ON 48-port 10 GbE switch (EOL 05/2023, EOSS TBA)
• Dell Z9264-ON 64-port 100 GbE switch
• Dell S5232-ON 32-port 100 GbE switch
A conceptual graphic of the leaf-spine topology for a PowerScale cluster using Z9100-ON
switches for the int-A network.
Leaf-Spine Network Components
Spine: Dell Z9264-ON 64-port switch. Back-end network switch with 64
100 GbE ports that connects to the leaf switch.
Knowledge Check
Front-End Network Design
The external network provides connectivity for clients over standard file-
based protocols. It supports link aggregation, and network scalability is
provided through software in OneFS. PowerScale Gen6 nodes (A300,
A3000, H700, H7000) and the all-flash nodes (F200, F600, F900, F210,
F710, F910) have two front-end ports: 10 GbE, 25 GbE, 40 GbE, or 100
GbE.
26 For stateful protocols, such as SMB and NFSv4, this prevents client-
side timeouts and unintended reconnection to another node in the cluster.
Instead, clients maintain their connection to the logical interface and
continue operating normally. OneFS supports Continuous Availability (CA)
for stateful protocols such as SMB and NFSv4.
Knowledge Check
Scenario
• Serial Console27
• Web Administration Interface (WebUI)28
• Command Line Interface (CLI)29
• Platform Application Programming Interface (PAPI)30
• Front Panel Display31
• iDRAC/IPMI32
29 The CLI is used to configure and manage the cluster. Access to the CLI
is through a secure shell (SSH) connection to any node in the cluster.
30 The customer uses the OneFS application programming interface (API),
also called the platform API (PAPI), to automate and script cluster
management tasks.
Movie:
Link:
https://edutube.dell.com/Player.aspx?vno=KjBgi9m8LmZLw58klDHmOA=
=&attachments=true&autoplay=false
Four options are available for managing the cluster: the web
administration interface (WebUI), the command-line interface (CLI), the
serial console, and the platform application programming interface (PAPI),
also called the OneFS API. The first management interface that you may
use is a serial console to node 1. A serial connection using a terminal
emulator, such as PuTTY, is used to initially configure the cluster. The
serial console gives you serial access when you cannot or do not want to
use the network. Other reasons for accessing using a serial connection
The WebUI is a graphical interface that is used to manage the cluster. The
WebUI requires at least one IP address be configured on one external
Ethernet port of each node. To access the WebUI from another system,
an Internet browser is used to connect to port 8080.33
33 The user must log in using the root account, admin account, or an
account with log-on privileges. After opening the web administration
interface, there is a four-hour login timeout. The WebUI supports multiple
browsers including Microsoft Edge, Mozilla Firefox, Google Chrome, and
Apple Safari.
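As a quick sketch (the IP address below is a placeholder, not a value from this course), a workstation can confirm that the WebUI is answering on port 8080 before a browser session is opened:
# -k skips certificate verification because the cluster typically presents a self-signed certificate
curl -k -I https://192.168.1.10:8080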
Knowledge Check
The CLI can be accessed in two ways: Out-of-band34 and In-band35. Both
methods use any SSH client such as OpenSSH or PuTTY. Access to the
interface changes based on the assigned privileges.
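A minimal sketch of the in-band method, assuming an administrative account named admin and a placeholder cluster IP address:
# In-band access: SSH to an external IP address configured on the cluster
ssh [email protected]
# After logging in, run OneFS commands, for example:
isi status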
OneFS commands are built on top of the UNIX environment and are
specific to OneFS management. You can use commands together in
compound command structures, combining UNIX commands with
customer-facing and internal commands.
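For example, a hedged sketch of compound commands that pipe OneFS output through standard UNIX tools (the exact output columns vary by OneFS release):
# Pull the health-related lines out of the cluster status report
isi status | grep -i health
# Count the lines returned by the drive listing
isi devices drive list | wc -l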
The graphic shows the OneFS CLI access using one of the cluster nodes.
34 Accessed using a serial cable that is connected to the serial port on the
back of each node. As many laptops no longer have a serial port, a USB-
serial port adapter may be needed.
35 Accessed using an external IP address that is configured for the cluster.
5: CLI commands can be customized with options, also known as
switches and flags. A single command with multiple options results in
many different permutations, and each combination results in a different
action being performed.
6: The CLI is a scriptable interface. The UNIX shell enables scripting and
execution of many UNIX and OneFS commands.
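A minimal scripting sketch, assuming a POSIX shell on the node and write access to /ifs/data (the path and file name are illustrative only):
#!/bin/sh
# Save a dated copy of the cluster status report for later review
OUTFILE=/ifs/data/status-$(date +%Y%m%d).txt
isi status > "$OUTFILE"
echo "Cluster status saved to $OUTFILE"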
CLI Usage
The example shows the help option used for the isi status command.
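As a hedged illustration of the same idea, the --help option prints the usage and available options for a command:
# Display usage and available options for the isi status command
isi status --help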
isi config
Use the console to change initial configuration settings. When in the isi
config console, other configuration commands are unavailable. The
exit command is used to go back to the default CLI.
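A brief sketch of an isi config session; the subcommands shown are common examples, and the available settings vary by OneFS release:
# Enter the configuration console (the prompt changes while inside it)
isi config
# Example subcommands inside the console:
#   name NewClusterName    change the cluster name
#   commit                 save pending changes
#   exit                   leave the console and return to the default CLI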
Knowledge Check
2. Select all the cluster management capabilities that are available using
the CLI.
Deep Dive: See the PowerScale OneFS Info Hubs and then
find the OneFS API Reference Guide under the proper
OneFS software version.
Access the Dell PowerScale API Developer Portal for full
API documentation.
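As a hedged sketch of a OneFS API call (the endpoint path, credentials, and address are assumptions to verify against the API Reference Guide for your release), the cluster configuration can be read over HTTPS on port 8080:
# Query the cluster configuration through the platform API
# -k skips certificate verification; curl prompts for the admin password
curl -k -u admin https://192.168.1.10:8080/platform/1/cluster/config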
• The Gen6 front panel display is an LCD screen with five buttons that
are used for basic administration tasks37.
• The interface consists of the LCD screen, a round ENTER button for
making selections, and four arrow buttons for navigating menus.
• There are four LEDs across the bottom of the interface that show
which node is being communicated with.
• To change the communication to another node, use the arrow buttons.
Knowledge Check
38 You can join a node to a cluster and the panel displays the node name
after the node has joined the cluster. The LCD display provides system
information, status, and error messages to indicate that the system is
functioning correctly or requires attention.
A2000
The A2000 is an ideal solution for high-density, deep archive storage that
safeguards data efficiently for long-term retention. The A2000 stores up to
1280 TB per chassis and scales to over 80 PB in a single cluster.
A300
An ideal active archive storage solution that combines high performance,
nearline accessibility, value, and ease of use. The A300 provides from
120 TB to 1.2 PB per chassis and scales to 75 PB in a single cluster. The
A300 includes inline compression and deduplication capabilities.
A3000
An ideal solution for high-performance, high-density, deep archive storage
that safeguards data efficiently for long-term retention. The A3000 stores
up to 1.6 PB per chassis and scales to 100 PB in a single cluster. The
A3000 includes inline compression and deduplication capabilities.
F200
Provides the performance of flash storage in a cost-effective form factor to
address the needs of a wide variety of workloads. Each node can scale
raw storage capacity from 3.84 TB to 30.72 TB per node and up to 7.7 PB
of raw capacity per cluster. The F200 includes in-line compression and
deduplication. The minimum number of F200 nodes per cluster is three
while the maximum cluster size is 252 nodes. The F200 is best suited for
remote offices, small M&E workloads, small hospitals, retail outlets, IoT,
factory floor, and other similar deployment scenarios.
F600
With NVMe drives, the F600 provides a larger capacity with performance
in a cost-effective compact form factor to power demanding workloads.
Each node allows you to scale raw storage capacity from 15.36 TB to 240
TB per node and up to 60 PB of raw storage per cluster. The F600
includes inline software data compression and deduplication. The
minimum number of F600 nodes per cluster is three while the maximum
cluster size is 252 nodes. The F600 is best suited for M&E studios,
hospitals, and financials that need performance and capacity for
demanding workloads.
F710
The F710 is a 1U platform with up to 10 all-flash NVMe, TLC or QLC, SSD
drives per node. Raw storage scales from 38TB to 307TB per node and
includes in-line compression and deduplication.
F800
The F800 is suitable for workflows that require extreme performance and
efficiency. It is an all-flash array with ultra-high performance. It delivers up
to 250,000 IOPS and up to 15 GB/s aggregate throughput in a single
chassis configuration. Also, it delivers up to 15.75M IOPS and 945 GB/s of
aggregate throughput in a 252 node cluster. The raw storage capacity
scales from 96 TB to 924 TB in a single 4U chassis and up to 58 PB in a
single cluster.
F900
Provides the maximum performance of all-NVMe storage in a cost-
effective configuration to address the needs of demanding workloads.
Each node is 2U in height and hosts 24 NVMe SSDs. It can scale raw
storage capacity from 46 TB to 720 TB per node and up to 186 PB of raw
capacity per cluster. The F900 includes in-line compression and
deduplication. The minimum number of F900 nodes per cluster is three
while the maximum cluster size is 252 nodes. The F900 is best suited for
Media and Entertainment 8K, genomics, algorithmic trading, artificial
intelligence, machine learning, and HPC workloads.
F910
The F910 is the next-generation version of the F900 and is node
compatible with the F900. It is 2U in height and hosts 24 NVMe SSDs. It
provides up to 720 TB per node and up to 186 PB of raw capacity per
cluster, and includes in-line compression and deduplication.
H400
The H400 provides a balance of performance, capacity, and value to
support a wide range of file workloads. It delivers up to 3 GB/s bandwidth
per chassis and provides capacity options ranging from 120 TB to 960 TB
per chassis. The H400 uses a medium compute performance node with
SATA drives.
H5600
The H5600 combines massive scalability of 1.28 PB (raw) per chassis and
up to 8 GB/s bandwidth in an efficient, highly dense, deep 4U chassis. The
H5600 delivers inline data compression and deduplication. It is designed
to support a wide range of demanding, large-scale file applications and
workloads.
H600
The H600 is designed to provide high performance at value, delivering up
to 120,000 IOPS and up to 12 GB/s bandwidth per chassis. It is ideal for
high-performance computing (HPC) workloads that do not require the
extreme performance of all-flash. These are spinning media nodes with
various levels of available computing power. The H600 combines our
turbo compute performance nodes with 2.5" SAS drives for high-IOPS
workloads.
H700
Provides maximum performance and value to support demanding file
workloads. The H700 provides capacity up to 1.2 PB per chassis and 75
PB per cluster. The H700 includes inline compression and deduplication
capabilities.
H7000
Provides a versatile, high-performance, high-capacity hybrid platform with up to 1.6
PB per chassis and 100.8 PB per cluster. The deep-chassis based H7000 is
ideal to consolidate a range of file workloads on a single platform. The H7000
includes inline compression and deduplication capabilities.