
POWERSCALE

HARDWARE CONCEPTS

PARTICIPANT GUIDE

© Copyright 2024 Dell Inc

Table of Contents

PowerScale Hardware Concepts 5

Platforms Overview 6
Platforms Overview 6
Scenario 6
PowerScale Nodes Overview 7
Knowledge Check 11
Basic Node Components 11
Knowledge Check 16
Flash Nodes 16
Hybrid Nodes 20
Archive Nodes 22
Accelerator Nodes 23
Knowledge Check 24
Node Positioning 25
Platform Refresh 27
PowerScale Features 28
Knowledge Check 32

Components and Architecture 34


Components and Architecture 34
Scenario 34
Gen6 Hardware Components 34
Gen6 Tour 36
Knowledge Check 36
PowerScale F200 and F600 Hardware Components 37
PowerScale F900 Hardware Components 38
All-Flash Nodes - Drive Bays 39
PowerScale F210, F710, and F910 Hardware Components 41
PowerScale Journal 42
Knowledge Check 43

PowerScale Hardware Concepts-SSP

© Copyright 2024 Dell Inc Page 3


Platforms Overview

Accelerator Nodes Hardware Components 44


Knowledge Check 45

Networking Architecture 46
Networking Architecture 46
Scenario 46
PowerScale Networking Architecture 46
General Networking Considerations 48
Knowledge Check 50
Node Interconnectivity 50
Simple Switch Architecture 52
Leaf-Spine Architecture 53
Leaf-Spine Network Components 54
Knowledge Check 55
Front-End Network Design 56
Knowledge Check 57

Cluster Management Tools 58


Cluster Management Tools 58
Scenario 58
Management Interfaces Overview 58
Serial Console Video 60
Web Administration Interface (WebUI) 61
Knowledge Check 62
Command Line Interface (CLI) 63
CLI Usage 65
isi config 65
Knowledge Check 66
Platform Application Programming Interface (PAPI) 67
Front Panel Display 68
Knowledge Check 69
You Have Completed This Content 70

PowerScale Hardware Concepts-SSP

© Copyright 2024 Dell Inc Page 4


Platforms Overview

PowerScale Hardware Concepts

PowerScale Hardware Concepts-SSP

© Copyright 2024 Dell Inc Page 5


Platforms Overview

Platforms Overview

Platforms Overview

Scenario

A new customer, Infinity Solutions, chose PowerScale as their storage solution to implement several workflows. Now, the Platform Engineer (PE) must be trained on how to implement the solution, starting with components, component installations, cluster implementation, and finally maintenance of the solution. The new PE is familiar with the high-level OneFS features and functions, and now requires training on the hardware aspect of the solution. The PE must be familiar with four areas before beginning the installation activities:

• What are the components that make up a node?


• What are the node offerings and their use cases?
• What is the high-level networking implementation of the environment?
• How to connect to the node and cluster?

The current topic is divided into three sections before moving into the
architecture of nodes.

The web version of this content contains an interactive activity.

PowerScale Hardware Concepts-SSP

© Copyright 2024 Dell Inc Page 6


Platforms Overview

PowerScale Nodes Overview

PowerScale has multiple converged compute systems that are called nodes. PowerScale nodes are the building blocks of a PowerScale cluster.1 Each node is a rack-mountable enterprise appliance that combines compute and storage capabilities. PowerScale is designed for large volumes of unstructured data. The design goal for PowerScale nodes is to provide a scale-out solution with minimal operational overhead. The solution is agile to deploy, like using similar services from the cloud.

1 A PowerScale OneFS cluster behaves as a single, central storage system and is designed to eliminate single points of failure through several key mechanisms:
1. Journaled file system. Each node in the cluster has a battery-backed NVRAM card, protecting uncommitted writes to the file system. When a node boots up, it checks its journal and selectively replays transactions to disk, ensuring fast and consistent recoveries after system failures, such as power loss.
2. Quorum for data protection. To accept writes and ensure consistency, a quorum of nodes must be active and responding. A quorum is defined as a simple majority: a cluster with n nodes requires n/2 + 1 nodes online to allow writes. The quorum also determines the number of nodes needed for specific data protection levels.
3. No dedicated parity node or drive. Unlike traditional RAID systems, OneFS does not rely on a dedicated parity node or drive. Instead, all data, metadata, and protection information are distributed across the nodes. This design further eliminates single points of failure.
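To make the simple-majority rule above concrete, here is a minimal Python sketch. The integer (floor) form of n/2 + 1 and the example node counts are illustrative assumptions for teaching, not OneFS internals.

    # Minimal sketch of the simple-majority quorum rule described above.
    # Assumption: "n/2 + 1" is taken in its integer (floor) form, which is how
    # a simple majority is normally computed; illustrative only.
    def quorum_size(total_nodes: int) -> int:
        """Smallest number of online nodes that still forms a simple majority."""
        return total_nodes // 2 + 1

    def cluster_is_writable(total_nodes: int, online_nodes: int) -> bool:
        """A cluster accepts writes only while a quorum of nodes is responding."""
        return online_nodes >= quorum_size(total_nodes)

    if __name__ == "__main__":
        for total, online in [(4, 3), (4, 2), (7, 4), (7, 3)]:
            print(f"{online}/{total} nodes online -> writable: "
                  f"{cluster_is_writable(total, online)}")

For example, a four-node cluster remains writable with three nodes online but not with two, which is why quorum also shapes the protection levels available at small node counts.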


The PowerScale family has several types of nodes, all of which can be
incorporated into a single cluster. Different nodes provide varying ratios of
capacity, throughput, or input/output operations per second (IOPS). The
converged architecture of these platforms enables the scaling out of
compute and capacity. The OneFS operating system powers all the
nodes.

The compute and storage of nodes are pooled to form one cluster.

Isilon Gen6 Platforms

• Flash Nodes: F800, F810


• Hybrid Nodes: H400, H500, H5600, H600
• Archive Nodes: A200, A2000

A minimum of four nodes form a cluster in a single 4U chassis. Additional nodes are added to the cluster as node pairs, which provide peer node redundancy.

While no longer available for sales, Isilon Gen6 hybrid and archive nodes
will be serviced until May 2028, and flash nodes until May 2029.

Isilon Gen6 chassis.


PowerScale Gen6 Platforms

• Hybrid: H700, H7000


• Archive: A300, A3000

The PowerScale Gen6 series also requires a minimum of four nodes to form a cluster in a single 4U chassis. PowerScale Gen6 nodes are also added to the cluster as node pairs.

PowerScale Gen6 A3000 chassis.

PowerScale All-flash Platforms

• F200
• F600
• F900

A minimum of three nodes is required to form a cluster, with additional nodes added one at a time. The F200 and F600 are 1U nodes while the F900 is 2U.


F200 (top), F600 (middle), and F900 (bottom) node.

PowerScale All-Flash Platform - updated nodes

• F210
• F710
• F910

Like the other PowerScale all-flash models, three nodes form a cluster,
and nodes can be added individually.

The F210 and F710 take advantage of significant performance improvements with combined hardware and software innovations and require PowerScale OneFS 9.7.

The 2U F910 requires OneFS 9.8 and is node compatible with the F900.

Newest PowerScale all-flash nodes. F210 (top), F710 (middle), and F910 (bottom).

PowerScale Accelerator Platforms

• Performance Accelerator: P100


• Backup Accelerator: B100


Accelerator nodes can be added to existing storage node clusters. Both P100 and B100 are 1U nodes.

P100 (top) and B100 (bottom).

Knowledge Check

1. What are the benefits of pooling nodes into a cluster?


a. Increase the linear scalability of compute and storage
b. Effective use of resources as one large system
c. Increase data availability by eliminating single points of failure

2. Can different node types be mixed and matched within the same
cluster?
a. Yes
b. No

Basic Node Components

Individual PowerScale nodes provide the data storage capacity and processing power of the PowerScale scale-out NAS platform. All the nodes are peers to each other, and there is no single primary or administrative node.

Administration is done from any node in the cluster as each node provides
network connectivity, storage, memory, nonvolatile RAM (NVDIMM), and
CPU processing power. There are different nodes for compute and
capacity, and these can be mixed and matched to meet specific business
needs.


Disks

It is common to have a two- or three-tiered solution where working data is on high-performance drives and the rest of the data is on lower-performance storage. The working data is set for the proper access pattern and protected at a high level. This way, 60 to 95 percent of all data under management can exist on less expensive disk but is still accessible at reasonable performance rates. Furthermore, this archived data can be promoted to the fastest disk at any point and is protected for long time periods.

• Hard Disk Drives (HDD)2


• Solid State Drives (SSD)3
− SSDs allow archive and hybrid nodes to handle data access more
efficiently, especially for frequently accessed files or metadata.
− Reduced latency. Data retrieval from SSDs is faster, giving faster
response times to applications and users.
− SSDs can enhance the data deduplication and compression
process, making them more efficient, and optimizing storage.

2 HDDs typically have slower access speeds; however, if the node has sufficient CPUs, the type of spinning media may not significantly affect performance for applications with random access patterns. Dell PowerScale SmartPools can mitigate performance issues by classifying and directing data to the most proper media based on performance requirements, optimizing data access for distinct types of applications.

3 Except for the all-flash models, adding one or two SSDs to archive and hybrid nodes significantly improves performance on the vast majority of PowerScale installations.


• NVMe Drives4
• Self-Encrypting Drives5

− Dell recommends that a PowerScale DARE, or Data at Rest Encryption6, cluster be composed of only self-encrypting drive (SED) nodes.

Processor

Adding nodes always increases processing power and improves the overall performance of the cluster. Forward Error Correction7, or FEC, uses the CPUs for calculations in high ingest and update workflows. Read, modify, and write cycles can consume considerable CPU resources, as can the AutoBalance and SmartPools data movements, which involve FEC calculation overhead. FEC is discussed later.

4 NVMe drives capitalize on the low latency and internal parallelism of solid-state storage devices. NVMe-based SSDs are the fastest for data access and are used in high-performance, high-throughput, low-latency workflows. The PowerScale F910, F900, F710, F600, and F210 support an all-NVMe SSD storage architecture.

5 PowerScale supports self-encrypting drives. If there is a node with SEDs, all the drives in the node must be SEDs. A cluster composed of SED nodes and non-SED nodes is supported during a transition to an all-SED cluster.

6 DARE is a requirement for federal and industry regulations ensuring that data is encrypted when it is stored. Dell PowerScale OneFS provides DARE through self-encrypting drives (SEDs) and a key management system. The data on SEDs is encrypted, and the data may not be accessed if the SED is stolen or removed from the cluster.

7 FEC protection in OneFS is modeled on the Reed-Solomon algorithm. Protection is built into the file system structure and can be applied down to the level of individual files.
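As a rough illustration of how protection overhead is usually reasoned about with FEC, the sketch below computes the capacity cost of a generic N+M erasure-coded stripe. The N+M layout and the M/(N+M) overhead formula are general erasure-coding assumptions used here for teaching; they do not describe OneFS's actual on-disk layout or protection policies.

    # Generic N+M erasure-coding overhead sketch (illustrative assumption,
    # not the OneFS implementation): N data units protected by M FEC units
    # can survive the loss of up to M units, at a capacity cost of M / (N + M).
    def protection_overhead(data_units: int, fec_units: int) -> float:
        """Fraction of raw capacity consumed by FEC units in an N+M stripe."""
        return fec_units / (data_units + fec_units)

    if __name__ == "__main__":
        for n, m in [(8, 2), (16, 2), (16, 4)]:
            pct = protection_overhead(n, m) * 100
            print(f"N+M = {n}+{m}: tolerates {m} failures, "
                  f"{pct:.1f}% of raw capacity used for protection")

Wider stripes spread the same protection cost over more data, which is part of why FEC calculations scale with node count and consume CPU during heavy ingest and data movement.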



Random Access Memory (RAM)

Some benefits of added RAM:

• Support higher connection counts and more persistent state.


• Improve L2 hit rate on reads.
• Improve metadata read bandwidth.
• Potentially reduce disk IOPS for read/modify/write cycles on data being
updated.
• Allows tuning options like larger coalescer buffers.
• Decrease impact of job manager tasks on normal operations.

Memory has less impact on:

• Random and synchronous writes.


• File creation and deletion rates.

Cache

The OneFS caching infrastructure design aggregates the cache present on each node in a cluster into one globally accessible pool of memory.8 This allows each node's memory cache to be available to every node in the cluster.

8 OneFS uses an efficient messaging system, similar to non-uniform memory access (NUMA).


OneFS uses up to three levels of read cache, plus an NVRAM-backed write cache, or coalescer.

Both L1 cache and L2 cache are managed and maintained in RAM. These
two cache layers are present in all Dell PowerScale storage nodes.
However, OneFS is also capable of using SSDs as level 3, or L3 cache.

The table shows each cache level, and how each works together to
provide performance improvements across the entire cluster.

Name | Type | Persistence | Description

L1 Cache | RAM | Volatile | Also called front-end cache; holds clean, cluster-coherent copies of file system data and metadata blocks requested by clients over the front-end network.

L2 Cache | RAM | Volatile | Back-end cache, containing clean copies of file system data and metadata on a local node.

SmartCache / Write Coalescer | NVRAM | Non-volatile | Persistent, battery-backed NVRAM journal cache which buffers any pending writes to front-end files that have not been committed to disk.

L3 Cache | SSD | Non-volatile | Contains file data and metadata blocks evicted from L2 cache, effectively increasing L2 cache capacity. L3 is good for random, read-heavy workflows accessing the same datasets. L3 cache has no prefetch.
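The lookup order implied by the table can be pictured with a small toy model. This is a teaching sketch only: the level names come from the table above, but the dictionary-based lookup, the fill-on-miss behavior, and the absence of eviction are simplifying assumptions, not how OneFS actually implements its caches.

    # Toy model of the read path implied by the table: L1 -> L2 -> L3 -> disk.
    # Assumption for illustration: each level is a simple dict keyed by block ID.
    class ToyCacheHierarchy:
        def __init__(self):
            self.levels = {"L1": {}, "L2": {}, "L3": {}}

        def read_block(self, block_id, disk):
            """Return (data, where_it_was_found), filling upper levels on a miss."""
            for name in ("L1", "L2", "L3"):
                if block_id in self.levels[name]:
                    return self.levels[name][block_id], name
            data = disk[block_id]                # fall through to the drives
            self.levels["L2"][block_id] = data   # populate caches for the next read
            self.levels["L1"][block_id] = data
            return data, "disk"

    if __name__ == "__main__":
        disk = {42: b"file-block-42"}
        cache = ToyCacheHierarchy()
        print(cache.read_block(42, disk)[1])  # first read: "disk"
        print(cache.read_block(42, disk)[1])  # second read: "L1"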


Front-End Connectivity

Clients connect to the cluster using Ethernet connections that are available on all nodes. Because each node provides its own Ethernet ports, the amount of network bandwidth available to the cluster scales linearly. For example, a four-node cluster in which each node exposes two 25 GbE ports presents eight 25 GbE front-end ports to clients, and each added node brings two more.

OneFS supports multiprotocol access to the cluster data. Clients can connect to the cluster using SMB, NFS, S3, HDFS, FTP, or HTTP.

Important: Gen6 nodes (old and new), PowerScale all-flash, hybrid, and archive nodes can exist within the same cluster. Every PowerScale node is equal to every other PowerScale node of the same type in a cluster.

Knowledge Check

3. Match the node component with its function.

A. Cache        House the striped data and its copies across nodes.
B. RAM          Calculate the required number of data protection stripes.
C. Processor    Improves the time taken to read data for L2.
D. Disks        Increase the read and write rates, particularly for data accessed frequently and sequentially.

Flash Nodes

The web version of this content contains an interactive activity.


The F-series nodes sit at the top of the range for both performance and flash capacity, with all-flash arrays for ultra-compute and high capacity. Even when the cluster scales, the latency remains predictable.

F800/810 4U chassis

• F800
• F810

F900 2U node

• F900


F910 2U node

• F910

• F800, F810, F900, and F910 General Use Cases9

F200 1U node

• F200

F600 1U node

9 Digital media: 4K, 8K, broadcast, real-time streaming, and post-production.
Electronic Design Automation: design, simulation, verification, and analysis of electronic and mechanical systems design.
Life Sciences: genomics DNA and RNA sequencing.
The F910 is a massive next-gen high-performance computing platform with all-flash NVMe performance, designed for new workload demands such as GenAI.


• F600

F210 1U node

• F210

F710 1U node

• F710

• F200 and F600 General Use Cases10


• F210 and F710 General Use Cases11

10 1) Digital media: small and medium-size studios. 2) Enterprise edge: remote and branch offices along with edge locations that require high-performance local storage. 3) Healthcare, Life Sciences: genomics sequencing, digital pathology, small hospitals, clinics.

11 The F710 is best suited for Generative AI and other high-performing vertical workloads like media and entertainment, healthcare and life sciences, high-frequency trading, and EDA workloads. The F210 is best suited for customers beginning their AI and Analytics journey, and other demanding workloads that require a balance of performance and capacity. Both the F210 and F710 deliver significant performance gains over the previous generation.


Go to: For a detailed list of the flash node specifications, see the PowerScale All-Flash NAS page.

Hybrid Nodes

The hybrid storage platforms are highly flexible and maintain a balance
between large capacity and high-performance storage providing support
for a broad range of enterprise workloads.

H400, H500, H600, H5600 4U chassis

• H400
• H500
• H600
• H5600

− Only H5600 includes inline compression and deduplication


H700, H7000 4U chassis

• H700
• H7000
• Includes inline compression and deduplication

• General Use Cases12

Go to: For a detailed list of the hybrid node specifications, see the PowerScale Hybrid NAS page.

12 1) Digital media: broadcast, real-time streaming, rendering, and post-production. 2) Enterprise file services: home directories, file shares, group and project data. 3) Analytics: big data analytics, Hadoop and Splunk log analytics.


Archive Nodes

The A-series nodes are designed as highly efficient and resilient active
archive storage or long-term data retention for large-scale data archives.
The archive platforms can be combined with new or existing flash and
hybrid storage systems into a single cluster that provides an efficient
tiered storage solution.

A200, A2000 4U chassis

• A200
• A2000

A300, A3000 4U chassis

• A300


• A3000
• Includes inline compression and deduplication

• General Use Cases13

Go to: For a detailed list of the archive node specifications, see the PowerScale Archive NAS page.

Accelerator Nodes

OneFS 9.3 and later includes support for two new PowerScale accelerator
node models. The accelerator nodes do not contain any local storage that
can be used as a part of the cluster and are optimized for CPU or memory
configurations. Based on the 1RU Dell PE R640 platform, accelerator
nodes include:

• The PowerScale Performance Accelerator (PSPA) P100 is promoted as a low-cost, value-added means to add performance to clusters that consist of nodes that are CPU workload heavy. In addition, the PSPA is positioned in various verticals that are targeted at specific workloads such as M&E. There may be use cases for new clusters to incorporate PSPAs as a part of the solution to meet certain cost and performance targets that may not be met with standard node types.

13 1) Deep archives: large-scale archive data storage. 2) Disaster recovery: a disaster recovery target for organizations requiring a large-capacity storage solution. 3) File archives: storage and access to reference data to meet business, regulatory, and legal requirements.


PSPAs are intended to improve cluster performance by accommodating more concurrent active client connections and improving individual session throughput.
− The P100 is not recommended for larger clusters as the
performance impact diminishes and adding more storage nodes
becomes a better solution.
• The PowerScale Backup Accelerator (PSBA) B100 is promoted as the go-to solution for two-way NDMP from a PowerScale cluster. Offloading NDMP workloads from the primary storage nodes provides a seamless way to accelerate tape backups.

− Each B100 node delivers Fibre Channel ports that can connect
directly to a tape subsystem or a Storage Area Network (SAN).

P100 and B100 front view

P100 and B100 front view without bezel

Go to: For the complete list of specifications, see the spec sheet for accelerator nodes.

Knowledge Check

4. Select the best fit node type for each use case.

                                                           Flash    Hybrid   Archive
                                                           Nodes    Nodes    Nodes

An organization implementing a home directory workflow      ☐        ☐        ☐
for several departments with balanced performance and
capacity requirements.

In the case of InfiTube Media implementing a video           ☐        ☐        ☐
streaming service that requires high throughput and low
latency per stream.

An organization implementing a long-term storage             ☐        ☐        ☐
solution to retain data for several years for compliance
reasons.

Node Positioning

The web version of this content contains an interactive activity.

The PowerScale family consists of all nodes in the Isilon Generation 6, PowerScale Generation 6, and PowerScale All-Flash series. The nodes in the PowerScale family are classified into three categories based on the balance between capacity and performance: Flash nodes, Hybrid nodes, and Archive nodes.

Consider how the nodes are positioned for sales and upgrades as organizations work to stay competitive and operate efficiently.

Selling Criteria

Selling Criteria            All Gen6 nodes (old and new)          F200, F600, F900 and F210, F710, F910

Base Sale                   1 Chassis (4 nodes)                   3 Nodes
(minimum cluster)

Incremental                 Added in node-pairs (with or          Add one node at a time
                            without chassis if available)


Use Case Classification

Metric                             Use cases                PowerScale Platforms

High performance, low latency      Extreme Performance      F910, F900, F710, F810, F800, F600

$/IOP                              Performance              F210, F200, H600

$/MB/s                             Balanced                 H7000, H5600, H700, H500, H400

$/TB                               Archive                  A3000, A300, A2000, A200

Hardware Upgrades

Current Nodes                A2000, A200     H400, H5600       H500, H600               F800, F810

Recommended Upgrade Nodes    A3000, A300     A300 or H700,     H700, F210, F200,        F910, F900
                                             H7000             F710, F600

Compatibility

Node class compatibility enables deployment of a newer-generation node to a node pool that has nodes from an earlier generation. The requirements for nodes to be compatible in the same node pool include:

• Same node series


• Same number of HDDs, SEDs, and SSDs
• Same HDD and SSD layouts


• Compatible node generations must have a compatible memory configuration
• Compatible RAM

Go to: The PowerScale Supportability and Compatibility Guide on the OneFS PowerScale Info Hub has more information about node compatibility and upgrades.

Platform Refresh


Most NAS platform technology refreshes involve replacing the hardware completely every three to five years. PowerScale provides a seamless technology refresh and upgrade experience. PowerScale hardware is also refreshed on a three-to-five-year cycle, depending on your business processes and needs.

• Straightforward technology refresh14
• Add new nodes, remove old nodes15
• Nondisruptive, minimal administration16

PowerScale Features

The following summarizes the common features provided across the PowerScale platforms.

14 The refresh to a new storage platform can be straightforward, performed while you are still using the old storage platform.

15 Perform a PowerScale technology refresh by adding new nodes to your existing cluster, and then use SmartFail to remove the old nodes.

16 All the information remains the same, including the cluster name, file system structure, configuration, data, and DNS information.

Attribute: Scalability (Use case: Media and Entertainment17)
Description:
• Distributed, fully symmetric clustered architecture that combines modular storage
• A cluster can scale up to 252 nodes
• Leveraging standard technology to increase focus on scale
Benefits:
• Increased performance per usable terabyte (TB)
• Enable lower latency apps to use scale-out NAS

Attribute: Performance
Description:
• Data reduction for storage efficiency
• Policy-based automated tiering options to optimize storage resources and lower costs
• Optimizing components to maximize performance
Benefits:
• Predictable performance at scale
• Increased storage efficiency

Attribute: Data Protection (Use case: Financial Sectors19)
Description:
• Self-healing design18
• FEC and mirroring data protection schemes
• Policy-based retention and protection against accidental deletion
Benefits:
• Small fault domains
• Eliminating single points of failure
• Predictable failure handling at PB densities

Attribute: Sizing (Use case: Start-up Company20)
Description:
• Dedicated cache drives
• Converged architecture
• Non-disruptive hardware replacement
Benefits:
• Gen6 cluster in a box and simple growth path
• Customizable solution
• Same building blocks irrespective of cluster profile

17 A Media and Entertainment production house needs high single-stream performance at PB scale that is cost-optimized. The organization requires cloud archive in a single namespace, archive-optimized density with a low Total Cost of Ownership (TCO) solution. This environment typically has large capacities and employs new performance technologies at will.

18 Protects against disk or node failure, and includes back-end intra-cluster failover.

19 Financial sectors rely heavily on data protection and availability to operate. Data loss such as customer transactions or system downtime can negatively affect the business.


Knowledge Check

5. Consider an organization hosting their high-performance M&E workflow on an all-F800 cluster. With the newer offerings in PowerScale, the organization can perform a refresh to move their data to faster drives and more computationally intensive systems. What type of cluster should the organization consider for migrating their workflows?
a. A cluster with F900 nodes
b. A cluster with a mix of F200 and F600 nodes
c. A cluster with H7000 nodes
d. A cluster with mix of H700 and F900 nodes

20 A simplicity and agility use case is a small start-up company growing at a rapid pace, which needs to start with limited capacity and then grow on demand for scale and new workloads.


6. Consider an organization with an H600 cluster running OneFS 9.0 for hosting a home directory workflow. After an internal review, the organization has decided to perform regular backups and add A3000 nodes to archive infrequently accessed data. For backups to be offloaded, the organization wants to add a few backup accelerator nodes (B100). To create this new environment, what is the first requirement that the administrator needs to consider?

Use the PowerScale OneFS Supportability and Compatibility Guide.


a. Upgrade the cluster to OneFS 9.3 or later before adding the new
nodes.
b. Upgrade the cluster to OneFS 9.2.1 or later before adding the new
nodes.
c. Create a separate node pool for the A3000 and B100 nodes.
d. Enable the evaluation license for all the required OneFS features
for the A3000 and B100 nodes.


Components and Architecture

Components and Architecture

Scenario

Before undertaking any node installation, replacement, or maintenance activities, it is critical that the technician identifies the major components that make up a node. PowerScale has several node offerings; some have the same architecture while others have a different architecture. The technician must differentiate between the high-level architecture of each category and the fundamental differences.

The current topic is divided into three sections before moving into the high-level networking architecture.

The web version of this content contains an interactive activity.

Gen6 Hardware Components

The Isilon and PowerScale Gen6 platforms are based on a proprietary architecture that is designed by Dell Technologies. Gen6 (old and new) requires a minimum of four nodes to form a cluster. Nodes must be added to the cluster in pairs. The chassis holds four compute nodes and twenty drive sled slots. Both compute modules in a node pair power on when one of the nodes connects to a power source.

The graphic shows the rear view and front view of an Isilon Gen6 and PowerScale Gen6 chassis.


1: The compute module bay of the two nodes makes up one node pair.
Scaling out a cluster with Gen6 nodes is done by adding more node pairs.
You cannot mix node types in the same node pair or chassis.

2: Each PowerScale Gen6 node provides two ports for back-end connectivity and supports 10/25 GbE or 40/100 GbE, and InfiniBand.

3: Power supply unit - Peer node redundancy: When a compute module power supply failure takes place, the power supply from the peer node temporarily provides power to both nodes.

4: Large journals offer flexibility in deciding when data should be moved to the disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors its journal to its peer node. The node writes the journal contents to the vault when a power loss occurs. A backup battery maintains power while data is stored in the vault.

5: Each Gen6 node provides two ports for front-end connectivity. Depending on the model, the connectivity options for clients and applications are 10 GbE, 25 GbE, 40 GbE, and 100 GbE.

6: Each node can have 1 or 2 SSDs that are used as L3 cache, global
namespace acceleration (GNA), or other SSD strategies.

7: The chassis comes in two different depths: the normal chassis is about 37 inches deep and the deep chassis is about 40 inches.

8: Each node has five drive sleds. Depending on the length of the chassis
and type of the drive, each node can handle up to 30 drives or as few as
15. A drive sled must always have the same type of disk drive.


9: You cannot mix 2.5" and 3.5" drive sleds in a node. Disks in a sled are
all the same type.

10: The sled can be either a short sled or a long sled. The types are:

• Long Sled - four drives of size 3.5"


• Short Sled - three drives of size 3.5"
• Short Sled - three or six drives of size 2.5"

Gen6 Tour

Gen6 Chassis

All Gen6 chassis come with the front panel and the front panel display
module. The front panel covers the drive sleds while allowing access to
the display.

Movie:
The web version of this content contains a movie.
See the video transcript in the course guide or download from the player.

Inside Gen6 Node

The hardware tour will take a deeper look inside the node’s compute
module.

Movie:
The web version of this content contains a movie.
See the video transcript in the course guide or download from the player.

Knowledge Check

1. What are the benefits of adding nodes as node-pairs in the Gen6 architecture?
a. Redundancy in power supply unit.
b. Journal mirroring to provide high availability during failures.
c. Active-passive back-end ports to manage intra-cluster traffic.


d. Mix node types in a single pair for different performance and


storage requirements.

2. Which statements are true about drives and drive sleds for a Gen6
cluster?
a. A3000 nodes are housed in a deep chassis which can
accommodate four 3.5-inch drives per sled.
b. You can only mix 2.5 and 3.5-inch drive sleds in a H7000 deep
chassis.
c. Each F800 node has five drive sleds that can house 2.5-inch
drives.
d. The maximum number of drives that can be housed in any Gen6
node is 15.

PowerScale F200 and F600 Hardware Components

The web version of this content contains an interactive activity.

PowerScale All-Flash nodes require a minimum of three nodes to form a cluster. You can add single nodes to the cluster. The F600 and F200 are a 1U form factor and are based on the PowerEdge R640 architecture.

F600 with rNDC 1 GbE management ports and PCIe front-end NIC, or F200 with rNDC
front-end NIC.


Important: In the F600 nodes, the rNDC is not intended for client network access, but it can provide 1 GbE management port functionality. In the F200, the rNDC can provide 10 GbE or 25 GbE connections for front-end networking or 1 GbE management port functionality, depending on which rNDC module is installed.

PowerScale F900 Hardware Components

The F900 is a 2U form factor and is based on the PowerEdge R740xd architecture.

Rear view of an F900 node.

1: Back-end NIC: Two InfiniBand connections or dual port 40/100 GbE NIC connection.

2: PCI Extender - CPU2

3: PCI Extender - CPU1

4: iDRAC RJ45: Enables remote access to iDRAC.

5: VGA port: Enables a display device connection to the system.

6: USB 3.0 connections


7: rNDC: 1 GbE Ethernet ports are for management network access. In the F900 and F600 nodes, the rNDC does not provide front-end client network connectivity.

8: PSU 1 - 1100w

9: PSU 2 - 1100w

10: Front-end NIC. Dual port 10/25 or 40/100 GbE.

All-Flash Nodes - Drive Bays

Every all-flash node has drives in drive bays, instead of the drive sleds which are used in the Gen6 and PowerScale hybrid and archive platforms.

All nodes have front system indicators, a pull-out service tag, and an LCD
display providing system information, status, and error messages. The
LCD display can be used to configure or view the system iDRAC IP
address.

Drive bay locations - F200, F600, F900

The F200 uses four, 2.5-inch SAS SSDs in bays 0-3.

F200 drive bays

The F600 uses eight, 2.5-inch NVMe SSDs in bays 2–9.

• The drive bays on the left, 0 and 1, are not used.

F600 drive bays

The F900 uses 24, 2.5-inch NVMe SSDs in bays 0–23.


F900 drive bays

Drive bay locations - F210, F710, F910

The F210 uses four, 2.5-inch NVMe SSD drives in bays 1-4.

• The F210 is node compatible with the F200

F210 drive bays

The F710 uses ten, 2.5-inch NVMe SSD drives in bays 1-10.

• The F710 is not node compatible with the F600

F710 drive bays

The F910 uses 24, 2.5-inch NVMe SSDs in bays 1-24.

• The F910 is node compatible with the F900

F910 drive bays


PowerScale F210, F710, and F910 Hardware Components

The F710 and F210 are 1U form factor nodes and are based on the
PowerEdge R660 architecture.

The F910 is a 2U form factor node and is based on the PowerEdge R760
architecture.

F210

F210 rear view components

F710

F710 rear view components


F910

F910 rear view components

Go to: The PowerScale Hardware Info Hub for more content related to PowerScale hardware.

PowerScale Journal

The journal, which records file-system changes in persistent memory, recovers the file system after failures, such as a power loss.

• A vault operation saves contents to persistent memory should a node fail.
• When a node restarts, the journal replays file transactions to restore the file system.

Journal - NVDIMM

The F200, F600, and F900 use a 16 GB NVDIMM for the journal.

The vault operation saves contents from DRAM to NVDIMM flash, and a
restore process moves contents from NVDIMM back to DRAM during
BIOS initialization on power-up.


isi_hwmon monitors NVDIMM and the NVDIMM battery. If the NVDIMM battery cannot support a vault operation, the node becomes read-only.

Journal - M.2

In Generation 6 PowerScale Archive and Hybrid nodes, journals are stored in M.2 flash modules in case of node failure.

Archive and hybrid nodes are partner nodes whose journals are mirrored. Every journal has a mirror copy on another node.

Journal - Software Defined Persistent Memory (SDPM)

The F210, F710, and the F910 include a journal hardware upgrade with
the new 32 GB configuration of the Dell Software Defined Persistent
Memory (SDPM) technology.

Previous platforms used NVDIMM-n for persistent memory, which consumed a DIMM slot.

The vault solution is a single module holding a Battery Backup Unit (BBU)
and the Vault Optimized Storage Solution (VOSS) NVMe M.2 drive. On
power loss, data is copied from DDR5 DIMMs to the VOSS module.

Go to: Learn how journaling is part of the Dell PowerScale OneFS high availability approach to ensure that data is consistent and intact.

Knowledge Check

3. Which statements are true about some of the fundamental architectural comparisons between Gen6 nodes and PowerScale F200, F600, and F900 nodes?
a. PowerScale All-flash nodes are based on PowerEdge based
architecture whereas Gen6 nodes are based on a proprietary Dell
architecture.


b. Gen6 nodes house drives in sleds whereas PowerEdge based


solutions house drives in drive bays.
c. The Gen6 nodes use a 16 GB NVDIMM for the journal whereas
PowerScale All-Flash solutions use M.2 vault drives.
d. The minimum number of nodes required to form a Gen6 cluster is
four nodes whereas PowerScale All-Flash cluster requires a
minimum of three nodes.
e. In each Gen6 node or PowerScale All-Flash node, there is a
redundant power supply unit to protect against abrupt failures.

4. The VOSS module is included in what nodes?


a. F210, F710, and F910
b. F200, F600, and F900
c. PowerScale archive and hybrid nodes.
d. Partner nodes that vault data to each other during a power loss.

Accelerator Nodes Hardware Components

The web version of this content contains an interactive activity.

Architecture Overview

• These accelerators are part of the PowerScale generation based on PowerEdge servers.
• Both platforms launch on the Dell 14G R640 OEM XL PowerEdge server.
• The accelerators do not contain any local storage (other than boot drives).
• The accelerators work in Gen6 and later clusters.
• Inline compression/deduplication is supported.


P100

Rear view of the P100 accelerator node.

B100

Rear view of the B100 accelerator node.

Knowledge Check

5. Which statements are true about some of the fundamental architectural comparisons between PowerScale storage nodes and PowerScale accelerator nodes?
a. PowerScale storage nodes have data drives whereas accelerator
nodes only have a boot drive for the operating system.
b. Some PowerScale storage nodes have inline data deduplication
and compression whereas accelerator nodes are limited only to
inline data compression.
c. Since B100 accelerator nodes aid NDMP backups, there are no
front-end ports for client connectivity.
d. P100 accelerator nodes can be added one at a time to a cluster to
improve performance for workflows.


Networking Architecture

Networking Architecture

Scenario

The networking configuration is critical for the organization to ensure efficient traffic management. While the organization intends to have a separate team implement the network, the technician must be acquainted with the high-level networking implementation details. The technician should help with networking best practices, and basic networking architecture design and implementation. The current topic is divided into three sections before moving into the cluster and node access.

The web version of this content contains an interactive activity.

PowerScale Networking Architecture

Context: The technician must understand the two types of networks that
get configured along with the supported protocols and connection types.

Front-end, External Network

Clients connect to the cluster using Ethernet connections21 that are available on all nodes.

21 Because each node provides its own Ethernet ports, the amount of network bandwidth available to the cluster scales linearly.


The complete cluster combines hardware, software, and networks in the following view:

F200 cluster showing supported front-end protocols.

Back-end, Internal Network

OneFS supports a single cluster22 on the internal network. This back-end network, which is configured with redundant switches for high availability, acts as the backplane for the cluster.23 Usually, a simple switch architecture is used for each interface in the back-end. However, for larger deployments, a leaf-spine architecture is implemented for the back-end to provide redundancy and maximize throughput. The connections can be either Ethernet or InfiniBand, depending on the node type.

22 All intra-cluster communication is performed across a dedicated back-end network, comprising either Ethernet or low-latency QDR InfiniBand (IB).

23 This enables each node to act as a contributor in the cluster and isolates node-to-node communication to a private, high-speed, low-latency network. This back-end network utilizes Internet Protocol (IP) for node-to-node communication.



Gen6 chassis connecting to the back-end network.

General Networking Considerations

A network is unique to the requirements of each enterprise data center. When approaching network design, use guiding principles as a leading factor, coupled with the enterprise requirements. The requirements must include current and future application consumption, which provides the guiding factor in major decisions. Some of the guiding principles include:

Single points of failure

Ensure that the network design has layers of redundancy. Dependence on a single device or link relates to a loss of resources or outages. The enterprise requirements consider risk and budget, guiding the level of redundancy. Implement redundancy through backup paths24 and load sharing25.

24 If a primary link fails, traffic uses a backup path.


When designing access to PowerScale nodes, assume that links and hardware will fail, and ensure that access to the nodes survives those failures.

Minimizing latency

Ensuring that latency is minimal from the client endpoints to the PowerScale nodes maximizes performance and efficiency. To minimize latency, measure it accurately between the endpoints. Some techniques to reduce latency include minimizing hops between endpoints, minimizing network congestion, reducing packet routing, and optimizing the firewall.

Application and protocol traffic

Understanding the application data flow from clients to the PowerScale cluster across the network allows for resources to be allocated accordingly while minimizing latency and hops along this flow.

Available bandwidth

As traffic traverses the different layers of the network, the available bandwidth should not be significantly different. Compare this available bandwidth with the workflow requirements.

25 Load sharing creates two or more paths to the same endpoint and shares the network load.


Go to: For more information, see the Dell PowerScale: Network Design Considerations white paper.

Knowledge Check

1. What should be configured on PowerScale for a client to access data from a Windows machine?
a. The PowerScale nodes and the client should exist in the same
environment and be connected over the front-end network.
b. The PowerScale nodes and the client should exist in the same
environment and be connected over an Ethernet based back-end
network.
c. The PowerScale nodes and the client should exist in the same
environment and be connected over an InfiniBand based back-
end network.
d. The clients can remotely access PowerScale over an IPv4 or IPv6
public network without existing in the same environment.

2. In the previous system, clients faced several outages due to switch and link failures from time to time. How can this be avoided when implementing the network for a PowerScale cluster?
a. Implement redundancy through backup paths and load sharing.
b. Minimize hops between endpoints and reduce network
congestion.
c. Understand the application and protocol traffic.
d. Compare this available bandwidth with the workflow requirements.

Node Interconnectivity

The web version of this content contains an interactive activity.

PowerScale nodes can use either an InfiniBand or Ethernet switch for the back-end. InfiniBand was designed as a high-speed interconnect for high-performance computing, and Ethernet provides the flexibility and high speeds that sufficiently support the PowerScale internal communications.

Rear view of back-end connections on Gen6, F200/F600, and F900 nodes

1: Back-end ports int-a and int-b. The int-b port is the upper port. Gen6
back-end ports are identical for InfiniBand and Ethernet and cannot be
identified by looking at the node. If Gen6 nodes are integrated into a Gen5
or earlier cluster, the back-end uses InfiniBand. There is a procedure to
convert an InfiniBand back-end to Ethernet if the cluster no longer has
pre-Gen6 nodes.

2: PowerScale nodes with different back-end speeds can connect to the same back-end switch and not see any performance issues. For example, an environment has a mixed cluster where A300 nodes have 10 GbE back-end ports and H700 nodes have 40 GbE ports. Both node types can connect to a 40 GbE switch without affecting the performance of other nodes on the switch. The 40 GbE switch provides 40 GbE to the H700 nodes and 10 GbE to the A300 nodes.

3: Some nodes, such as archival nodes, may only use part of a 10 GbE
port bandwidth while other workflows use the full 40 GbE port bandwidth.
The Ethernet performance is comparable to InfiniBand so there should be
no performance bottlenecks with mixed performance nodes in a single
cluster. Administrators should see no performance differences if moving
from InfiniBand to Ethernet.

4: F200 and F600 back-end ports use the PCIe slot.

5: F900 back-end ports use the PCIe slot.


Rear view of accelerator nodes

Warning: Do not plug a back-end Ethernet topology into a back-end InfiniBand NIC. If an Ethernet connection is plugged into the InfiniBand NIC, it switches the back-end NIC from one mode to the other and will not come back to the same state.

Simple Switch Architecture

For smaller clusters, a single hop switching architecture that is paired with
the inherent back-end redundancy should suffice to meet the bandwidth,
latency, and traffic requirements. A single layer of switches can be used
for each interface (int-a and int-b). For example, consider a four-node
cluster. A single switch can be used to network the traffic for each
interface, int-a, and int-b.


Leaf-Spine Architecture

Starting with OneFS 8.2, a leaf-spine back-end network architecture provides the performance, scale, and capacity necessary for large deployments of up to 252 nodes. Leaf-spine is a two-level hierarchy where nodes connect to switches called leaf switches, and leaf switches in turn connect to switches called spine switches. The topology uses the maximum internal bandwidth and ports of the supported switches, some of which include:

• Dell Z9100-ON 32-port 100 GbE switch (EOL 05/2023, EOSS TBA)
• Dell S4148-ON 48-port 10 GbE switch (EOL 05/2023, EOSS TBA)
• Dell Z9264-ON 64-port 100 GbE switch
• Dell S5232-ON 32-port 100 GbE switch


A conceptual graphic of the leaf-spine topology for a PowerScale cluster using Z9100-ON
switches for the int-A network.

Important: Leaf-spine is recommended for medium to large cluster implementations, depending on the node type. For small clusters, a simple switch architecture for each interface can be used.

Leaf-Spine Network Components

The Z9264-ON 64-port switch is used as an example to describe the components.

Component        Hardware                        Description

Spine            Dell Z9264-ON 64-port switch    Back-end network with 64, 100 GbE ports that
                                                 connects to the leaf switches.

Leaf             Dell Z9264-ON 64-port switch    • Up to 10 x 100 GbE uplinks to the spine
                                                 • 54 40/100 GbE nodes, or 108 10/25 GbE nodes
                                                   using breakout cables

Uplink           Leaf-to-spine connection        There must be the same number of uplinks on
                                                 every leaf switch. That number should be the
                                                 number of uplinks that are required by the leaf
                                                 switch with the most downlinks.

Downlink         Leaf-to-node connection

Breakout Cable   40 GbE cable that breaks out    This cable connects a leaf switch 40 GbE port to
                 to four 10 GbE cables, or       four 10 GbE nodes, or a leaf switch 100 GbE port
                 100 GbE cable that breaks out   to four 25 GbE nodes.
                 to four 25 GbE cables
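For back-of-the-envelope planning, the port counts in the table can be turned into a rough switch-count estimate. The sketch below is a simplified illustration only: it assumes every node uses one back-end port on the network being sized (int-a or int-b), a fixed number of node-facing ports per leaf, and a uniform uplink count per leaf, and it ignores the detailed sizing rules in the Dell leaf-spine best-practices guide, which should be followed for real designs.

    import math

    # Simplified leaf-spine sizing sketch for ONE back-end network (for example,
    # int-a). Illustrative assumptions only; consult the Dell PowerScale
    # leaf-spine best-practices guide for the actual sizing rules.
    def estimate_switches(nodes: int,
                          node_ports_per_leaf: int = 54,
                          uplinks_per_leaf: int = 10,
                          spine_ports: int = 64) -> dict:
        leaves = math.ceil(nodes / node_ports_per_leaf)
        # Every leaf carries the same number of uplinks, shared across the spines.
        total_uplinks = leaves * uplinks_per_leaf
        spines = max(1, math.ceil(total_uplinks / spine_ports))
        return {"leaf_switches": leaves, "spine_switches": spines}

    if __name__ == "__main__":
        # Example: a 120-node cluster with 100 GbE back-end ports.
        print(estimate_switches(120))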

Go to: The Dell PowerScale: Leaf-Spine Network Best Practices and the Leaf-Spine Cluster Installation Guide on the PowerScale Hardware Info Hub for more information about networking and switch considerations. See the PowerScale Networking Map to learn more about the supported configurations.

Knowledge Check

3. The organization is looking to construct the back-end network for a 62-node F900 cluster using the maximum bandwidth ports. What is the minimum number of Z9264 switches required to implement the back-end for the primary interface?

Use the Dell PowerScale: Leaf-Spine Network Best Practices to find the answer.
a. 2
b. 3
c. 4
d. 6


Front-End Network Design

The web version of this content contains an interactive activity.

The external network provides connectivity for clients over standard file-
based protocols. It supports link aggregation, and network scalability is
provided through software in OneFS. PowerScale Gen6 nodes (A300,
A3000, H700, H7000) and the all-flash nodes (F200, F600, F900, F210,
F710, F910) have two front-end ports: 10 GbE, 25 GbE, 40 GbE, or 100
GbE.

In the event of a Network Interface Controller (NIC) or connection failure, clients do not lose their connection to the cluster26.

26 For stateful protocols, such as SMB and NFSv4, this prevents client-side timeouts and unintended reconnection to another node in the cluster. Instead, clients maintain their connection to the logical interface and continue operating normally. OneFS supports Continuous Availability (CA) for stateful protocols such as SMB and NFSv4.


Tip: Interfaces are named "25gige-N" or "100gige-N."


Interface names may not show the link speed. For example,
the interface name for NICs that are running at slower
speeds such as 10 Gb does not change to "10gige-1." You
can use ifconfig to check the link speed.

Knowledge Check

4. What front-end connectivity feature prevents Windows or Linux clients from experiencing intermittent connection failures?
a. SMB and NFSv4 are stateful and clients maintain their connection
to the logical interface.
b. The int-b redundant network takes an active state during NIC
failures.
c. The 1 GbE port on Gen6 is temporarily used to hold the client
connection until it is restored.
d. The client traffic is redirected using the back-end ports when
connection is lost.


Cluster Management Tools

Cluster Management Tools

Scenario

A PowerScale cluster can be accessed and managed using several methods. It is important for the technician to know the differences between each connection method, the interface, and the functions available when using each interface. For example, when to use the isi command structure.

Management Interfaces Overview


The OneFS management interface is used to perform various administrative and management tasks on the PowerScale cluster and nodes. Management capabilities vary based on which interface is used. The different types of management interfaces in OneFS are:

• Serial Console27
• Web Administration Interface (WebUI)28
• Command Line Interface (CLI)29
• Platform Application Programming Interface (PAPI)30
• Front Panel Display31
• iDRAC/IPMI32

27 The serial console is used for initial cluster configurations by establishing serial access to the node designated as node 1.

28 The browser-based OneFS web administration interface provides secure access with OneFS-supported browsers. This interface is used to view robust graphical monitoring displays and to perform cluster-management tasks.

29 The command-line interface runs "isi" commands to configure, monitor, and manage the cluster. Access to the CLI is through a secure shell (SSH) connection to any node in the cluster.

30 The customer uses the OneFS application programming interface (API) to automate the retrieval of the most detailed network traffic statistics. It is divided into two functional areas: one area enables cluster configuration, management, and monitoring functionality, and the other area enables operations on files and directories on the cluster.
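To make the API description in footnote 30 more concrete, here is a minimal Python sketch of calling the OneFS API over HTTPS. The port 8080 value and example IP are reused from the WebUI section of this guide; the specific /platform endpoint path, the basic-authentication scheme, and the self-signed-certificate handling are assumptions for illustration and should be checked against the OneFS API reference before use.

    # Minimal sketch: querying the OneFS API (PAPI) described in footnote 30.
    # Assumed endpoint path and auth style; verify against the OneFS API reference.
    import base64
    import json
    import ssl
    import urllib.request

    CLUSTER = "https://192.168.3.11:8080"     # any node's external IP (example from the WebUI section)
    ENDPOINT = "/platform/1/cluster/config"  # assumed example: cluster configuration area of the API

    request = urllib.request.Request(CLUSTER + ENDPOINT)
    credentials = base64.b64encode(b"admin:password").decode()  # placeholder credentials
    request.add_header("Authorization", "Basic " + credentials)

    # Lab clusters often use self-signed certificates, so skip verification here.
    context = ssl._create_unverified_context()
    with urllib.request.urlopen(request, context=context) as response:
        print(json.dumps(json.load(response), indent=2))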
31 The Front Panel Display is located on the physical node or chassis. It is used to perform basic administrative tasks onsite.


32 integrated Dell Remote Access Controller. For iDRAC, the RJ45 (1 GbE) port connects to the external network switch. Caution is recommended when using iDRAC, as some iDRAC features and functionality are accessible with the iDRAC interface but are not supported. OneFS only supports the following IPMI commands with the iDRAC interface: Shutdown (power off), Reboot (power cycle), Startup (power on), and Power Status (read-only). Note: iDRAC applies only to F900, F600, and F200 node types. OneFS v9.0 and later provides support for IPMI, the Intelligent Platform Management Interface protocol. IPMI allows out-of-band console access and remote power control across a dedicated Ethernet interface via Serial over LAN (SoL). As such, IPMI provides true lights-out management for PowerScale F-series all-flash nodes and Gen6 H-series and A-series chassis without additional RS-232 serial port concentrators.


Serial Console Video

Movie:

The web version of this content contains a movie.

OneFS Serial Console Overview video.

Link:
https://edutube.dell.com/Player.aspx?vno=KjBgi9m8LmZLw58klDHmOA==&attachments=true&autoplay=false

Four options are available for managing the cluster. The web
administration interface (WebUI), the command-line interface (CLI), the
serial console, or the platform application programming interface (PAPI),
also called the OneFS API. The first management interface that you may
use is a serial console to node 1. A serial connection using a terminal
emulator, such as PuTTY, is used to initially configure the cluster. The
serial console gives you serial access when you cannot or do not want to
use the network. Other reasons for accessing using a serial connection
may be for troubleshooting, site rules, a network outage, and so on.




Shown are the terminal emulator settings.
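
As a reference, a typical PuTTY serial configuration for a PowerScale node is listed below. These values are an assumption for illustration; confirm them against the installation guide for your platform.

   Connection type: Serial
   Serial line:     COM1 (or the port of a USB-serial adapter)
   Speed (baud):    115200
   Data bits:       8
   Stop bits:       1
   Parity:          None
   Flow control:    None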

The configuration Wizard automatically starts when a node is first powered
on or reformatted. If the Wizard starts, the menu and prompt are displayed
as shown. Choosing option 1 steps you through the process of creating a
cluster. Option 2 will exit the Wizard after the node finishes joining the
cluster. After completing the configuration Wizard, running the isi
config command enables you to change the configuration settings.

Web Administration Interface (WebUI)

The WebUI is a graphical interface that is used to manage the cluster. The
WebUI requires at least one IP address be configured on one external
Ethernet port of each node. To access the WebUI from another system,
an Internet browser is used to connect to port 8080.33

Example browser URLs:


• https://192.168.3.11:8080
• https://delledu.lab:8080

33 The user must log in using the root account, admin account, or an
account with log-on privileges. After opening the web administration
interface, there is a four-hour login timeout. The WebUI supports multiple
browsers including Microsoft Edge, Mozilla Firefox, Google Chrome, and
Apple Safari.


The OneFS WebUI dashboard for the root user.

Knowledge Check

1. In which situations would an administrator use the Serial Console over
the WebUI?
a. To implement a new cluster
b. To join a node to an existing cluster
c. To interact with the cluster using a PuTTY interface


d. To perform basic cluster health checks and monitoring

Command Line Interface (CLI)

The CLI can be accessed in two ways: Out-of-band34 and In-band35. Both
methods use any SSH client such as OpenSSH or PuTTY. Access to the
interface changes based on the assigned privileges.

OneFS commands are built on top of the UNIX environment and are
specific to OneFS management. You can use commands together in
compound command structures, combining standard UNIX commands with
customer-facing and internal OneFS commands.
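
For example, an isi command can be piped through standard UNIX utilities (an illustrative sketch; the specific commands and filters are arbitrary choices):

   isi status | head -n 25        # trim the cluster status report with a standard UNIX filter
   isi nfs exports list | wc -l   # count the configured NFS exports with wc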

The graphic shows the OneFS CLI access using one of the cluster nodes.

34 Accessed using a serial cable that is connected to the serial port on the
back of each node. As many laptops no longer have a serial port, a USB-
serial port adapter may be needed.
35 Accessed using an external IP address that is configured for the cluster.


1: The default shell is zsh.

2: OneFS is built upon FreeBSD, enabling use of UNIX-based commands,
such as cat, ls, and chmod. Every node runs OneFS, including the many
FreeBSD kernel and system utilities.

3: Connections use the cluster's external Ethernet (IP) addresses.

4: OneFS provides management isi commands, as not all administrative
functionality is available using the WebUI.

5: CLI commands can be customized with options, also known as
switches and flags. A single command with multiple options results in
many different permutations, and each combination performs a different
action.

6: The CLI is a scriptable interface. The UNIX shell enables scripting and
execution of many UNIX and OneFS commands.


CLI Usage

The example shows the help option used for the isi status command.

The man isi or isi --help command is a good starting point for a
new administrator to get familiar with all CLI commands. These
commands provide an explanation of the available isi commands and
command options. For a basic description of any command and its
available options, type the -h option after the command.
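
For instance, the following commands can be run from any node (illustrative; output is omitted):

   man isi          # manual page for the isi command set
   isi --help       # list the available isi subcommands
   isi status -h    # brief usage and options for a single command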

Deep Dive: For a complete listing of CLI usage and commands, search the
PowerScale OneFS Info Hubs by software version, then find the OneFS
CLI Administration Guide and the CLI Command Reference Guide.

isi config

The isi config command, pronounced "izzy config," opens the
configuration console. The console contains configured settings from the
time the Wizard started running.


Use the console to change initial configuration settings. When in the isi
config console, other configuration commands are unavailable. The
exit command is used to go back to the default CLI.

The isi config subsystem.
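
A brief example session is shown below (the prompt and commands are representational; consult the CLI Administration Guide for the full list of console commands):

   # from the default CLI, open the configuration console
   isi config
   >>> help       # list the commands available inside the console
   >>> name       # display the current cluster name
   >>> exit       # return to the default CLI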

Knowledge Check

2. Select all the cluster management capabilities that are available using
the CLI.

Use the PowerScale CLI Administration guide on the PowerScale
OneFS Info Hub for the list of CLI capabilities.
a. Node and cluster operations such as restart, reformat, and
shutdown.
b. View vector-based monitoring charts for cluster performance and
capacity.
c. Perform OneFS version upgrades and firmware updates.
d. Perform administration of OneFS features and cluster
configurations.


Platform Application Programming Interface (PAPI)

The OneFS Application Programming Interface, or OneFS API, is a secure
and scriptable36 interface for managing the cluster. HTTPS is used to
encrypt API communications. OneFS applies authentication and RBAC
controls to API commands to ensure that only authorized commands are
run.
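
As an illustration, an authenticated API request can be issued with a standard HTTPS client such as curl. The address, account, and resource path below are placeholders; verify the exact resource URIs in the OneFS API Reference Guide.

   # prompt for the admin password and return the cluster configuration as JSON
   curl -k -u admin https://192.168.3.11:8080/platform/1/cluster/config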

Deep Dive: See the PowerScale OneFS Info Hubs and then
find the OneFS API Reference Guide under the proper
OneFS software version.
Access the Dell PowerScale API Developer Portal for full
API documentation.

36 A chief benefit of PAPI is its scripting simplicity, enabling customers to
automate their storage administration.


Front Panel Display

Front Panel Display of a Gen6 chassis.

• The Gen6 front panel display is an LCD screen with five buttons that
are used for basic administration tasks37.
• The interface consists of the LCD screen, a round ENTER button for
making selections, and four arrow buttons for navigating menus.
• There are four LEDs across the bottom of the interface that show
which node is being communicated with.
• To change the communication to another node, use the arrow buttons.


37 Some of them include: adding the node to a cluster, checking node or
drive status, events, cluster details, capacity, IP, and MAC addresses.


Front Panel Display of PowerScale All-Flash nodes.

• The front panel for PowerScale All-Flash nodes has limited
functionality38 compared to the Gen6 nodes.
• The interface consists of an LCD screen, two arrows (left and right) for
navigation, and a selection button between the arrows.

Knowledge Check

3. What administrative tasks can be performed by the front panel of an
F200 node?
a. Adding a new node to a cluster.
b. Check for number of outstanding cluster events and alerts.
c. System information, status, and error messages that are related to
cluster functionality.
d. Basic network-related administration such as IP assignment.

38 You can join a node to a cluster and the panel displays the node name
after the node has joined the cluster. The LCD display provides system
information, status, and error messages to indicate that the system is
functioning correctly or requires attention.


You Have Completed This Content

Click the Save Progress & Exit button in the course menu or below
to record this content as complete.
Go to the next learning or assessment, if applicable.



Glossary
A200
The A200 is an ideal active archive storage solution that combines near-
primary accessibility, value, and ease of use. The A200 provides between
120 TB to 960 TB per chassis and scales to 60 PB in a single cluster.

A2000
The A2000 is an ideal solution for high-density, deep archive storage that
safeguards data efficiently for long-term retention. The A2000 stores up to
1280 TB per chassis and scales to over 80 PB in a single cluster.

A300
An ideal active archive storage solution that combines high performance,
nearline accessibility, value, and ease of use. The A300 provides between
120 TB to 1.2 PB per chassis and scales to 75 PB in a single cluster. The
A300 includes inline compression and deduplication capabilities.

A3000
An ideal solution for high-performance, high-density, deep archive storage
that safeguards data efficiently for long-term retention. The A3000 stores
up to 1.6 PB per chassis and scales to 100 PB in a single cluster. The
A3000 includes inline compression and deduplication capabilities.

F200
Provides the performance of flash storage in a cost-effective form factor to
address the needs of a wide variety of workloads. Each node can scale
raw storage capacity from 3.84 TB to 30.72 TB per node and up to 7.7 PB
of raw capacity per cluster. The F200 includes in-line compression and
deduplication. The minimum number of F200 nodes per cluster is three
while the maximum cluster size is 252 nodes. The F200 is best suited for
remote offices, small M&E workloads, small hospitals, retail outlets, IoT,
factory floor, and other similar deployment scenarios.



F210
The F210 is a 1U platform containing all-flash NVMe TLC drives, and
optionally, a 15TB QLC drive. Raw storage scales from 8 TB to 61 TB per
node and includes in-line compression and deduplication. The F210 is
node compatible with the F200.

F600
With NVMe drives, the F600 provides a larger capacity with performance
in a cost-effective compact form factor to power demanding workloads.
Each node allows you to scale raw storage capacity from 15.36 TB to 240
TB per node and up to 60 PB of raw storage per cluster. The F600
includes inline software data compression and deduplication. The
minimum number of F600 nodes per cluster is three while the maximum
cluster size is 252 nodes. The F600 is best suited for M&E studios,
hospitals, and financials that need performance and capacity for
demanding workloads.

F710
The F710 is a 1U platform with up to 10 all-flash NVMe, TLC or QLC, SSD
drives per node. Raw storage scales from 38TB to 307TB per node and
includes in-line compression and deduplication.

F800
The F800 is suitable for workflows that require extreme performance and
efficiency. It is an all-flash array with ultra-high performance. It delivers up
to 250,000 IOPS and up to 15 GB/s aggregate throughput in a single
chassis configuration. Also, it delivers up to 15.75M IOPS and 945 GB/s of
aggregate throughput in a 252 node cluster. The raw storage capacity
scales from 96 TB to 924 TB in a single 4U chassis and up to 58 PB in a
single cluster.



F810
The F810 is suitable for workflows that require extreme performance and
efficiency. The F810 also provides high-speed inline data deduplication
and in-line data compression. It delivers up to 3:1 efficiency, depending on
your specific dataset and workload. The F810 delivers up to 250,000 IOPS
and up to 15 GB/sec aggregate throughput in a single chassis
configuration. It delivers up to 15.75M IOPS and 945 GB/s of aggregate
throughput in a 252 node cluster. The raw storage capacity scales from
230 TB to 924 TB in a 4U chassis and up to 58 PB in a single cluster.

F900
Provides the maximum performance of all-NVMe storage in a cost-
effective configuration to address the needs of demanding workloads.
Each node is 2U in height and hosts 24 NVMe SSDs. It can scale raw
storage capacity from 46 TB to 720 TB per node and up to 186 PB of raw
capacity per cluster. The F900 includes in-line compression and
deduplication. The minimum number of F900 nodes per cluster is three
while the maximum cluster size is 252 nodes. The F900 is best suited for
Media and Entertainment 8K, genomics, algorithmic trading, artificial
intelligence, machine learning, and HPC workloads.

F910
The F910 is the next-gen version of the F900 and is node compatible with
the F900. Each node is 2U in height and hosts 24 NVMe SSDs. Raw
capacity scales up to 720 TB per node and up to 186 PB per cluster, and
the F910 includes in-line compression and deduplication.

H400
The H400 provides a balance of performance, capacity, and value to
support a wide range of file workloads. It delivers up to 3 GB/s bandwidth
per chassis and provides capacity options ranging from 120 TB to 960 TB
per chassis. The H400 uses a medium compute performance node with
SATA drives.



H500
The H500 is a versatile hybrid platform that delivers up to 5 GB/s
bandwidth per chassis with a capacity ranging from 120 TB to 960 TB per
chassis. It is an ideal choice for organizations looking to consolidate and
support a broad range of file workloads on a single platform. The H500 is
comparable to a top-of-the-line X410, combining a high compute
performance node with SATA drives. The whole Gen6 architecture is
inherently modular and flexible with respect to its specifications.

H5600
The H5600 combines massive scalability – 1.28 PB (raw) per chassis and
up to 8 GB/s bandwidth in an efficient, highly dense, deep 4U chassis. The
H5600 delivers inline data compression and deduplication. It is designed
to support a wide range of demanding, large-scale file applications and
workloads.

H600
The H600 is designed to provide high performance at value, delivers up to
120,000 IOPS and up to 12 GB/s bandwidth per chassis. It is ideal for
high-performance computing (HPC) workloads that don’t require the
extreme performance of all-flash. These are spinning media nodes with
various levels of available computing power - H600 combines our turbo
compute performance nodes with 2.5" SAS drives for high IOPS
workloads.

H700
Provides maximum performance and value to support demanding file
workloads. The H700 provides capacity up to 1.2 PB per chassis and 75
PB per cluster. The H700 includes inline compression and deduplication
capabilities.

H7000
Provides a versatile, high-performance, high-capacity hybrid platform with up to 1.6
PB per chassis and 100.8 PB per cluster. The deep-chassis based H7000 is
ideal to consolidate a range of file workloads on a single platform. The H7000
includes inline compression and deduplication capabilities.

