
Welcome to the Isilon Fundamentals 2017 course!

This course introduces Dell EMC's Big Data storage product, Isilon. Isilon is a scale-out NAS storage solution with an architecture that differs substantially from other Dell EMC storage. This e-Learning introduces the architecture, features, and capabilities of Isilon to audiences who have not encountered it previously.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or
its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The
trademarks, logos, and service marks (collectively "Trademarks") appearing in this publication are the property of DELL EMC
Corporation and other parties. Nothing contained in this publication should be construed as granting any license or right to use any
Trademark without the prior written permission of the party that owns the Trademark.

AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica,
Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva,
Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect,
ClaimPack, ClaimsEditor, Claralert, CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information
Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, CoprHD,
EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge, Data Protection Suite, Data Protection Advisor, DBClassify, DD
Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document
Sciences, Documentum, DR Anywhere, DSSD, ECS, eInput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender, EMC
Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST, FarPoint, FirstPass,
FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase,
Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS, Kazeon, EMC
LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor, Metro, MetroPoint,
MirrorView, Mozy, Multi-Band Deduplication, Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack,
OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven,
EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect,
RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, Silver Trail, EMC Snap, SnapImage,
SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler,
Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal
Data Consistency, Vblock, VCE. Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize
Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET,
VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-
Friction Enterprise Storage.

Revision Date: May 2017

Revision Number: MR-1WP-ISIFUND17, OneFS 8.1, v1.2



After completing this lesson, you will be able to describe the history of computer storage and illustrate how data storage needs have changed over time. You will also be able to differentiate structured from unstructured data, compare block-based data to file-based data, and explain the OneFS architecture.



The first general-purpose computer became operational in 1946. It was called the Electronic Numerical Integrator And Computer (ENIAC). ENIAC used more than 17,000 vacuum tubes and 70,000 resistors to hold ten-digit decimal numbers in its memory. The data was output as punch cards, a format that IBM continued to use well into the early 1960s.

In the 1960s, magnetic tape eclipsed punch cards as the way to store corporate computer data. During the mid-1960s, magnetic tape gave way to the hard disk drive. The first hard drive was the size of two refrigerators, required 50 disks to store less than four megabytes of data, and was called the IBM 350, part of the RAMAC system.

During the 1980s, the personal computer revolution introduced miniaturization, which brought a wide array of storage form factors. Less than 30 years after a unit the size of two refrigerators stored less than four megabytes, the average consumer could store about one-third of that amount on a three-and-a-half-inch plastic disk. The evolution of data storage was one of constant progress and improved efficiency, paired with increased storage density.



As time marched on, so did data storage. Data eventually came to be classified into two types: structured data and unstructured data.

Structured data often resides within a fixed field inside a larger record or file. A large file often requires a data model that defines the type of data, such as numeric or alphanumeric. In addition to defining
the type of data, access to the data and the processing of the data are important. Therefore, in today’s
data storage world, structured data is most often expressed in a relational database. The rigid table
structure makes structured data easy to query. Spreadsheets, library catalogs, inventory sheets, phone
directories, and customer contact information are all examples of structured data that fit neatly into the
rows and columns of a database.

Unstructured data does not fit into neat rows and columns because it has little or no classification data.
Image files, photographs, graphics files, video, and audio files are all examples of unstructured data.
Imagine you have a spreadsheet with information about your pet dog. The spreadsheet might have the
dog's name, birthdate, breed, color, weight, parents' names and information, breeder information, location, etc. This structured data would be very easy to plug into the predefined fields of a database, as the
information deals with classifying individual traits. Now imagine what would happen if you tried to fit a
photograph of your dog into those same fields: it wouldn’t fit. There is no way to classify an image in the
same way that we list our dog's name, birthdate, breed, or color. According to industry analysts, the
creation rate of unstructured data outpaces structured data, with unstructured data comprising 80 to 90%
of all digital data.



With unstructured data driving the large majority of data storage growth, a solution was needed. An International Data Corporation (IDC) study published in 2008 showed that the amount of digital data created, captured, and replicated worldwide grew tenfold in just five years. This finding was based on the proliferation of then-new technologies such as Voice over IP, RFID, smartphones, and consumer use of GPS, and the continuance of data generators such as digital cameras, HD TV broadcasts, digital games, ATMs, email, videoconferencing, medical imaging, and so on. A 2012 study from IDC found that the digital universe is still expanding at a breathtaking pace. To understand the results, it helps to realize that the prefix "exa" means one billion billion, or one quintillion. An exabyte (EB) is one quintillion bytes. Another way to say it is, an exabyte is one billion gigabytes.

In 1986, the entire world had the technical capacity to store merely 2.6 exabytes. By 2020, the world will need to store more than 40,000 exabytes. Much of this growth occurs because a person formerly had to
sit in an office to use a computer, but today, billions of individuals generate data, all day, everywhere they
go, from mobile devices.

Thus, studies document that the world’s data storage needs are not merely growing; they are
mushrooming. Fortunately, Isilon specializes in the storage of unstructured data, but we will come back to
that after we look at a few more ways to classify data storage.



Another way of categorizing data storage systems is to describe them as block-based versus file-based.
Block-based data is usually found in Storage Area Network (SAN) technology such as the Dell EMC
product VNX. In contrast, file-based data is usually associated with Network Attached Storage (NAS)
technology, such as Celerra and Isilon.

A block of data is a sequence of bits or bytes in a fixed length; the length is determined by the file system.
Saving a single piece of data requires the operating system, or OS, to break the file into blocks and each
block is written to a particular sector, or area, of the drive. A single file may require compiling many blocks
together. Block-based data is especially useful when working with small bits of information that need to be
accessed or written frequently. An excellent example of block-based data would be a large database of
postal codes. Someone querying the database probably wants only a small number of the postal codes,
but rarely wants all of them. Block-based data makes it easy to gather information in partial sets and is particularly adept at handling high volumes of small transactions, such as stock trading data. Data associated with stock trading can generate one billion relatively small files in only a few hours. Block format is the go-to format when you need flexibility and high-speed, intensive input and output operations.

File-based data, on the other hand, is created depending upon the application and protocol being used.
Some applications store data as a whole file, which is broken up and sent across the network as packets.
All of the data packets are required to reassemble the file. Unlike block-based data, where you can grab
only one type of postal code, in file-based data storage you need the whole file in order for it to be useful.
For example, a PDF file is generally not readable unless you have all of it accessible; having only part of
the file will generate an error and not allow the file to be opened.

File-based data is organized in chunks too large to work well in a database or in an application that deals
with a high volume of transactions. In IT applications, block-based data usually relates to structured data
while file-based data usually relates to unstructured data. Isilon specializes in handling file-based data.
Can Isilon do block-based storage? Technically, yes, but if you are looking for a block-based solution there
are other Dell EMC products that specialize in block and would best handle that type of workflow.



In the early days of computer data, corporations stored data on hard drives in the server. The company’s intellectual
property depended entirely upon that hard drive continuing to work. Thus, to minimize risk, corporations mirrored the
data on a Redundant Array of Independent Disks (RAID). RAID disks were directly attached to a server so that the
server thought the hard drives were part of it. This technique is called Direct Attached Storage (DAS).

As applications proliferated, soon there were many servers, each with its own DAS. This worked fine, with some
drawbacks. If one server’s DAS was full while another server’s DAS was half empty, the empty DAS couldn’t share its
space with the full DAS. People thought, “What if we took all these individual storage stacks and put them in one big
stack, then used the network to let all the servers access that one big pool of storage? Then our servers could share
capacity!”

Accomplishing that approach required a traffic cop to keep track of what data went with what application. Thus, the
volume manager was invented. Adding a volume manager to the storage system created the Storage Area Network (SAN).

SAN was optimized for block-based data. It worked fine until employers began giving their employees computers.
Employees then needed to get to the stored data, but they couldn’t: SAN was set up for servers, not personal computers
(PCs). PCs worked differently from the storage servers, and network file communications require a file system on each end of the exchange. The answer arrived when corporations put employee computers on the network and
added to the storage a file system to communicate with users. And from this, Network Attached Storage (NAS) was
born.

NAS works pretty well. But it could be improved. For example, now the server is spending as much time servicing
employee requests as it is doing the application work it was meant for. The file system doesn’t know where data is
supposed to go, because that’s the volume manager’s job. The volume manager doesn’t know how the data is
protected; that’s RAID’s job. If high-value data needs more protection than other data, you need to migrate the data to a
different volume that has the protection level that data needs. So there is opportunity to improve NAS.

To alleviate these issues, Isilon combined the file system, the volume manager, and the data protection into one
seamless, self-aware Operating System (OS): OneFS.

Some advantages of this approach include the simplicity of having all data in a single file system and a single volume.
When you have storage capacity without hard architectural limitations, your system is easier to manage and grow.

Isilon was designed to work in a mixed environment. Even if the clients attached to the server are a mix of Windows,
UNIX, and Mac OS X operating systems, Isilon offers a single unified platform for all.



Though we’ve defined DAS, SAN, and NAS, it is important to draw attention to the distinction between two
kinds of NAS architectures, scale-up and scale-out, before wrapping up a thorough overview of data
storage.

Scale-up NAS came first, represented here with a green line. In this architecture, a pair of controllers or
filers manages a stack of disk trays. You can readily add capacity – if you need more storage, you simply
add more drives. But the architecture doesn’t let you pile on more filers. As disk space grows, computing
resources do not.

In contrast, scale-out NAS, represented here with a blue line, uses nodes. Each node contains drives, but
it also contains more processing and more memory. By adding nodes, performance and capacity scale out
in proportion. The green line shows that over time, the filers must work harder and harder to manage the
growing capacity. Result: performance slows.

The blue line shows that as you add nodes, performance improves, because every node can exploit all the
resources of every other node.

DAS, SAN, and scale-up NAS have their places, but they were invented before the Big Data era. Scale-
out NAS systems were designed for Big Data. Thus, in many regards, the Isilon approach using scale-out
NAS architecture makes managing Big Data less challenging.



This lesson explains what pieces make up an Isilon cluster, and how those pieces fit together to
communicate with one another. Upon completion of this lesson, you will be able to identify what makes a
cluster, explain how nodes form a cluster, recognize data distribution within OneFS, and describe Isilon
clients and protocols. Let’s start right in.



Isilon is the industry-leading scale-out NAS storage system, made up of multiple servers called 'nodes'.
The nodes are combined with Isilon software to create a ‘cluster’, which behaves as a single, central
storage system for a company’s unstructured data.

There are two types of Isilon clusters currently in the field. The original Generation 5 nodes require a minimum of three nodes to form a cluster. Additionally, you will find Generation 6 clusters, which require a minimum of four nodes contained in one chassis. Let's
move on and review the different hardware types in a cluster and their positioning.



The basic building block of an Isilon cluster is a node. Our smallest cluster, a Generation 5 cluster, starts
with three individual nodes. When you connect those three nodes together they form a cluster.

The Generation 6 clusters start with a minimum of four nodes. Generation 5 and 6 nodes can also exist
within the same cluster. Having both types of nodes in one cluster is the typical path for a hardware
refresh as our customers move toward the Next Generation Isilon brand. Currently, a cluster can have up
to 144 nodes, regardless of which node types are present.

Architecturally, every Isilon node is equal to every other Isilon node of the same type in a cluster. No one
specific node has to be the controller or filer. Instead, OneFS unites the entire cluster in a globally
coherent pool of memory, CPU, and capacity. OneFS writes files in stripes across the nodes for built-in
high availability. So when a file request is received, any available node will communicate the request to
the nodes assembling the file. Then that node delivers the requested file. Therefore, requests are not processed through one controller node, but rather by whichever node is most accessible based on availability.
With the sixth generation Isilon architecture, a single cluster can theoretically scale to 1000+ petabytes.
The “availability” possibilities become even greater as the next generation Isilon nodes become a larger
percentage of a cluster.



Let’s take a slightly deeper look at Isilon scale-out NAS architecture, viewing it from the different layers or
levels where it is most easily understood.

Starting at the Client/Application layer, the Isilon NAS architecture supports mixed modes. Windows,
UNIX, and OSX operating systems can all connect to an Isilon cluster and access the same files.

At the networking level, the Isilon OneFS operating system supports key industry-standard protocols over
Ethernet, including Network File System (NFS), Server Message Block (SMB), HTTP, FTP, Hadoop Distributed File System (HDFS) for data analytics, SWIFT, and REST for object and cloud computing requirements.
As a file-based storage system, Isilon does not support protocols associated with block-based data.

The next logical level would be the nodes. Nodes are combined into one volume by the OneFS operating
system. All information is shared among nodes, thus allowing a client to connect to any node in the cluster
and access any directory in the file system. And finally on the back end, all the nodes are connected with
either an InfiniBand fabric switch (for Generation 5 and Generation 6 clusters) or a 40Gb Ethernet network (Generation 6 clusters only) for low-latency internal communication with one another.

And there you have a slightly deeper look at the levels, or layers, of the scale-out NAS architecture. Next,
let’s take a look at external networking.



The external networking components of an Isilon cluster provide client access over a variety of protocols. Each storage node connects to one or more external Ethernet networks using 1 Gigabit Ethernet (GbE), 10 GbE, or
40 GbE connections.

The 10 GbE and 40 GbE interfaces support link aggregation. Link aggregation creates a logical interface
that clients connect to. In the event of a Network Interface Controller (NIC) or connection failure, clients do
not lose their connection to the cluster. For stateful protocols, such as SMB and NFSv4, this prevents
client-side timeouts and unintended reconnection to another node in the cluster. Instead, clients maintain
their connection to the logical interface and continue operating normally. Support for Continuous
Availability (CA) for stateful protocols like SMB and NFSv4 is available with OneFS 8.0.



Connectivity is not just a cluster-to-external-network issue. The nodes in the cluster communicate internally using either an InfiniBand or Ethernet switch on the backend. InfiniBand was designed as a high-speed interconnect for high-performance computing, and Ethernet provides the flexibility and high speeds to support the Isilon cluster. The reliability and performance of the interconnect are very important in
creating a true scale-out storage system. The interconnect needs to provide both high throughput and very
low latency. A single front-end operation can generate multiple messages on the back end, since the
nodes coordinate work among themselves when they write or read data. Thus, the dual backend
InfiniBand or Ethernet switches handle all intra-cluster communication and provide redundancy in the
event that one switch fails. The new Ethernet back-end is only supported for the Gen 6 nodes. The
switches are provided by Isilon and have a minimum of 24 ports at launch. Now that we have reviewed
how the clusters connect to each other, let’s look at how the clients connect to the cluster.



Clients can connect to a cluster in various ways. OneFS lets you give the cluster a virtual host name and
clients can access the cluster using DNS. Connectivity is based on standard networking and DNS
principles. Round-robin is included for distributing client connections, but with an advanced license you can have clients connect to a given node based on criteria you define. For example, you can have clients
connect to the IP address with the lowest number of connections, or you can direct client connections to
the interface currently showing the least throughput, or the least CPU usage.
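To make the idea of these connection policies concrete, here is a minimal sketch in Python. It is purely illustrative and not Isilon code; the node attributes and policy names are hypothetical stand-ins for the criteria described above.

```python
# Illustrative sketch of connection-balancing policies; not Isilon code.
# Node attributes and policy names are hypothetical.
import itertools

nodes = [
    {"ip": "10.0.0.1", "connections": 12, "throughput_mbps": 410, "cpu_pct": 35},
    {"ip": "10.0.0.2", "connections": 7,  "throughput_mbps": 620, "cpu_pct": 55},
    {"ip": "10.0.0.3", "connections": 9,  "throughput_mbps": 180, "cpu_pct": 20},
]
_round_robin = itertools.cycle(nodes)

def pick_node(policy):
    """Return the node a new client connection would be directed to."""
    if policy == "round_robin":          # default: rotate through the nodes
        return next(_round_robin)
    if policy == "fewest_connections":   # advanced: lowest number of connections
        return min(nodes, key=lambda n: n["connections"])
    if policy == "least_throughput":     # advanced: least network throughput
        return min(nodes, key=lambda n: n["throughput_mbps"])
    if policy == "least_cpu":            # advanced: lowest CPU usage
        return min(nodes, key=lambda n: n["cpu_pct"])
    raise ValueError(f"unknown policy: {policy}")

print(pick_node("fewest_connections")["ip"])  # -> 10.0.0.2
```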

The enhanced functionality includes continuous availability for SMBv3. This feature allows SMB, Network
File System version three (NFSv3), and NFSv4 clients to dynamically move to another node in the event
the node they are connected to goes down.

This feature applies to Microsoft Windows 8, Windows 10 and Windows Server 2012 R2 clients. This
feature is part of Isilon's non-disruptive operation initiative to give customers more options for continuous
work and less downtime. The CA option allows seamless movement from one node to another with no manual intervention on the client side. This enables a continuous workflow for the client with no apparent disruption to their work. CA supports home directory workflows as well.



In a single cluster, how many hardware failures can the system withstand while offering the customer
100% data availability? That depends; while many storage solutions use Redundant Array of Independent
Disks (RAID), Isilon does not use RAID. Instead, Isilon takes a different approach. Before we go into the
details of Isilon’s approach, it is important to understand the N+M data protection model. N + M comes
from the Reed-Solomon algorithm, an industry standard developed to enhance data integrity when it’s
undesirable to have data retransmitted from another source. Most DVDs and television broadcasts use
Reed-Solomon codes so that you can view video data without interruption.

In the N+M data model, N represents the number of nodes in the cluster, and M represents the number of
simultaneous hardware failures that the cluster can withstand without incurring data loss. “Hardware
failures” refers to drive loss, node loss, or a combination of the simultaneous loss of drives and/or nodes.

As the system writes the data, it also protects the data with parity bits called Forward Error Correction
(FEC). The OneFS operating system spreads the data across numerous drives in multiple nodes so that if
part of the data goes missing, the missing data can be recalculated and restored. This involves complex
mathematics, but to illustrate the concept, we'll use a basic example with three colored boxes. Let's assume "5" and "3" are the contents of the blue and yellow boxes, respectively, on the first row. The sum of these two numbers is "8", which we are calling a parity bit, and is the contents of the green box. From here on, once we know the contents of these three colored boxes, we can determine the value inside a missing box on any of the subsequent rows, can't we? That is, on the third row, knowing the contents of the blue and green boxes as "5" and "8", we can conclude that the value of the missing yellow box is "3". In general, the basic idea of the parity bit is a value that OneFS sets to show what total should result when the binary data is added together.
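To make the colored-box example concrete, here is a minimal sketch in Python. It uses simple integer sums rather than real Reed-Solomon arithmetic, and is meant only to show how a parity value lets one missing value be recomputed.

```python
# Minimal illustration of the parity idea from the colored-box example.
# Real FEC uses Reed-Solomon codes over finite fields; plain sums are used
# here only to show how one missing value can be recomputed from parity.

def add_parity(values):
    """Return the data values plus a parity value (their sum)."""
    return values + [sum(values)]

def recover(stripe_with_parity):
    """Recompute a single missing value (marked None) in a protected stripe."""
    *data, parity = stripe_with_parity
    missing = [i for i, v in enumerate(data) if v is None]
    assert len(missing) == 1, "one parity value can recover only one loss"
    known = sum(v for v in data if v is not None)
    data[missing[0]] = parity - known
    return data

protected = add_parity([5, 3])      # blue=5, yellow=3, green parity=8
print(protected)                    # -> [5, 3, 8]
print(recover([5, None, 8]))        # the lost yellow value is rebuilt -> [5, 3]
```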

In other words, with the parity bit OneFS can easily rebuild the missing data. With the aid of the
parity bit, any one value could vanish, and OneFS could readily recalculate and restore it. Now that we
understand how the N+M data protection model works, let’s look at Isilon’s approach using FEC.



FEC allows the customer to choose how many bits of parity to implement. One bit of parity for many disks
is known as N + 1; two parity points for many disks is known as N + 2, and so on.

With N + 1 protection, data is 100% available even if a single drive or node fails. With N + 2 protection,
two components can fail, but the data will still be 100% available. OneFS supports up to N+4 – users can
organize their cluster so that as many as four drives, or four entire nodes, can fail without loss of data or of
access to the data.

RAID is disk-based, so when you choose a level of protection (that is, how many parity bits), you have chosen it for the entire RAID volume. With Isilon's FEC approach, you can set different levels of protection for different nodes, directories, or even different files. You can also change protection levels on the fly, non-disruptively. This is unlike RAID, where the same protection level applies across all the disks and cannot be changed without reformatting them.

When a client connects through a single node and saves data, the write operation occurs across multiple
nodes in the cluster. This is also true for read operations. When a client connects to a node and requests
a file from the cluster, the node to which the client has connected uses the backend InfiniBand or 40GbE
network to coordinate with other nodes to retrieve, rebuild, and deliver the file back to the client.



During a write operation, when OneFS stripes data across nodes, the system breaks the file-based data into smaller logical sections called stripe units. The smallest element in a stripe unit is 8 KB, and each stripe unit is 128 KB, or sixteen 8 KB blocks. If the data file is larger than 128 KB, the next part of the file is written to a second node. If the file is larger than 256 KB, the third part is written to a third node, and so on. OneFS stripes these 128 KB units across the cluster, using advanced algorithms to determine
data layout for maximum efficiency and performance.
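The sketch below illustrates the arithmetic in this description: it breaks a byte string into 8 KB blocks, groups them into 128 KB stripe units, and assigns the units to nodes round-robin. This is a simplification for illustration only; OneFS's actual layout algorithms (and its FEC placement) are far more sophisticated.

```python
# Simplified illustration of breaking a file into 8 KB blocks, grouping them
# into 128 KB stripe units, and spreading the stripe units across nodes.
# OneFS's real layout and protection logic is far more involved.

BLOCK_SIZE = 8 * 1024            # 8 KB, the smallest element in a stripe unit
STRIPE_UNIT = 16 * BLOCK_SIZE    # 128 KB, i.e. sixteen 8 KB blocks

def stripe_units(data: bytes):
    """Yield successive 128 KB stripe units of the file data."""
    for offset in range(0, len(data), STRIPE_UNIT):
        yield data[offset:offset + STRIPE_UNIT]

def layout(data: bytes, node_count: int):
    """Assign stripe units to nodes round-robin (node index -> list of units)."""
    placement = {n: [] for n in range(node_count)}
    for i, unit in enumerate(stripe_units(data)):
        placement[i % node_count].append(unit)
    return placement

file_data = b"x" * (300 * 1024)              # a 300 KB file
plan = layout(file_data, node_count=3)
print({n: [len(u) for u in units] for n, units in plan.items()})
# -> {0: [131072], 1: [131072], 2: [45056]}  (128 KB, 128 KB, then the remainder)
```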

The process of striping spreads all write operations from a client across the nodes of a cluster. The
example in this animation demonstrates how a file is broken down into chunks, after which it is striped
across disks in the cluster along with parity, also known as FEC.

Though a client connects to only one node, when that client saves data to the cluster, the write operation
occurs in multiple nodes in the cluster. Each node contains between 12 and 59 hard disk drives, or a
combination of SSDs and disk drives. As the system lays out data across the cluster, it distributes stripe
units across nodes, and each node in turn may utilize numerous drives, if the file is large enough. This
method minimizes the role of any specific drive or node. If one piece of hardware stops working, the data it
contains can be reconstructed.



A client connects to a single node at a time. When that client requests a file from the cluster, the node that
the client has connected to will not have the entire file on its local drives.

The node with the requesting client then uses the backend InfiniBand or 40GbE to communicate with other
nodes to retrieve, rebuild, and deliver the file.



In this lesson, we dive deeper into various models of Isilon nodes. You’ll get an overview of the intended
uses for each model, then some of the technical detail supporting the overview. You’ll also see how users
can add nodes to scale their Isilon cluster. The lesson concludes with an explanation of optional
enhancements to nodes, such as data at rest encryption and accelerators.



Individual Isilon nodes provide the data storage capacity and processing power of the Isilon scale-out NAS
platform. All of the nodes are peers to each other and so there is no single 'master' node and no single
'administrative node'. Administration can be done from any node in the cluster as each node provides
network connectivity, storage, memory, non-volatile RAM (NVRAM), and processing power in its Central Processing Units (CPUs). There are also different node configurations for compute and capacity.
These varied configurations can be mixed and matched to meet specific business needs. Let’s begin by
looking at the 6th generation hardware design.



The Generation 6 hardware is designed so that a single chassis contains four Isilon nodes. Each node, or
blade, is a fully functional Isilon node that contains compute, capacity, network, cache - all of the features
enjoyed by the Generation 5 platforms are scaled into nodes that are designed for higher performance and
capacity in a smaller footprint. The chassis are 4U in height and are either 37" or 40" deep depending on
the node type. There are four compute slots per chassis, each holding one node. Each node houses five
drive sleds, totaling 20 drive sleds per chassis. Additionally, from a hardware perspective, it is important to note that with Gen 6 nodes there are four rail sets available, depending on the rack depth and the rack supplier (EMC or a third party).

Because of its drive density, a Generation 6 chassis is notably heavier than the previous Generation 5 hardware. Therefore, extreme care and every safety precaution should be taken when rack mounting or working with
the nodes. Having reviewed the Generation 6 hardware design, let’s take a look at the Generation 5
hardware design.



The Isilon product family consists of two hardware platforms: the Generation 5 platform, which is pictured on the slide, and the Generation 6 platform. The Gen 6 platform is scheduled to launch in 2017.

The Generation 5 portfolio consists of four storage node series and two non-storage nodes. The storage
node series are the S-Series, X-Series, NL-Series, and the HD-Series. Each of these is pictured here. Not pictured are the non-storage nodes: the A100 accelerator and the backup accelerator. The S-Series is for
high-performance primary storage and is designed for high-transactional and IO-intensive tier one
workflows. The X-Series strikes a balance between large capacity and high-performance storage. X-
Series nodes are best for high-throughput and high-concurrency tier two workflows and also for larger files
with fewer users. The NL-Series is designed to provide a cost-effective solution for tier three workflows,
such as nearline storage and data archiving. NL nodes are ideal for nearline archiving and for disk-based
backups. The HD-Series is a high-density, deep archival platform. This platform is used for archival level
data that must be retained for long, if not indefinite, periods of time but is rarely accessed. Moving forward,
the Generation 5 nodes will be joined by the Generation 6 nodes. Let’s review those now.



The Generation 6 Isilon platform consists of six different offerings based on the customer's need for
performance and capacity. Because Gen 6 is a modular architecture you can scale out compute and
capacity separately. The Dell EMC Isilon F800 is the all-flash array with ultra compute and high capacity.
The F800 sits at the top of both the performance and capacity categories when it is paired with 15.4 TB
drives, giving it the distinction of being both the fastest and densest node in the product line. Next in terms
of compute are the H600/H500 nodes. The H-Series are hybrid nodes for both performance and certain
levels of capacity. These are spinning media nodes with variable compute. The H600 combines turbo
compute with 2.5" SAS drives. The H500, with ten processors, is a high compute node with bundles of
SATA/SAS drives. The next offering is a modular design that comes in two flavors, the H400 and the A200. The H400 uses a medium compute bundle with SATA 4Kn drives, and the A200 (A for archive) is an entry-level node that can be purchased partially populated. The A2000 uses a deeper, 40-inch chassis with 80 drives and is intended for cold archiving. For low-latency, high-performance NAS, the F800 offering would be the
most suitable with the highest number of processor cores along with SSDs.



Now that you are familiar with our platforms, let’s look at what makes them effective, unique, and scalable.
A primary purpose of Isilon’s NAS approach is the “scale-out” part. An administrator can expand the
storage at will by adding a new node. In Isilon's case, once the node is racked and cabled, adding it to the cluster takes just a few minutes. That is because OneFS policies will automatically discover the
node, set up addresses for the node, incorporate the node into the cluster, and begin rebalancing capacity
on all nodes to take advantage of the new space. In that brief time, the node fully configures itself, is ready
for new data writes, and begins taking on data from the other nodes to AutoBalance the entire cluster.

The video linked here was shot by an enthused Isilon customer who posted it on YouTube with the
comment that adding a node was “insanely fast”. It is not especially exciting to watch, until you realize that
accomplishing the same tasks with another NAS solution takes 26 steps and multiple hours.

If you look at the free space available in the pie chart, this Isilon customer took his system from 280 terabytes to 403 terabytes, adding over 120 terabytes of storage in about a minute.



In addition to nodes that can be added to a cluster in a few short minutes allowing for fantastic scalability,
Isilon offers some optional add-ons to further enhance our customer’s experience.

The first is Data At Rest Encryption (DARE). The term “data at rest” refers to any data sitting on your
drives. DARE is used for confidential or sensitive information.

Isilon's security protocols are top notch, and though security is not the topic of discussion today, for the sake of this example let's assume the following scenario: somehow a hostile party infiltrated your network, accessed your Isilon cluster, and acquired the various levels of permissions needed to see the data striped on the cluster. With DARE, these hypothetical super hackers still could not read the
data, because it is encrypted. So even assuming this almost-impossible, worst case scenario, no
information would be compromised. DARE is not just an excellent security protocol. DARE is, in fact,
required by many vertical markets. For example, around the globe, federal governments, financial services,
and HIPAA-compliant health care providers all must encrypt stored data. This is where a feature, such as
DARE, is obviously valuable.

A less obvious benefit of DARE occurs when it’s time to upgrade your hardware. If you run a corporation in
one of the previously mentioned regulated industries and you need to dispose of an old drive, you may
have a problem. That is, anytime data is left on a bad, or good, disk drive there is the potential that the
data can be retrieved. No matter how many erase programs are used, or how many times they are run, such
programs do not literally delete the data. They only mark the sector of the drive that holds the data as
available for overwrite. Thus, the data still remains on the drive and it can potentially be recovered with
hacker tools. For that reason, a whole industry has sprung up around physically destroying retired
hardware. DARE provides an easier solution. DARE ensures all data at rest has been encrypted. If the
data has been encrypted with a 256-bit key for all its life, you can recycle the drive as is, without fear that
anyone can read it. Isilon implements DARE by offering optional Self-Encrypting Drives, or SEDs.



In addition to SEDs, another Isilon add-on is an accelerator. Isilon accelerator nodes are made for
Generation 5 customers who don’t need more capacity but need performance increases in specific
workflows. An accelerator contains processors and memory but no storage drives. Accelerators dedicate
performance to a single client or group of clients. In essence, an accelerator acts as a large cache to
increase single-stream performance. The accelerator is based on a single hardware platform, the A-Series, which comes in two variations: a model for pure performance and a model to speed up backups. The main difference between the models is that the performance accelerator has more memory, 256 GB of RAM, which is used for L1 cache. Meanwhile, the model for
accelerating backups is the only Isilon node that offers fibre channel ports for direct backup to tape or VTL
libraries.



In this lesson, we will look deeper into what OneFS is, understand the management options, discuss
authentication, understand how OneFS offers multiprotocol support, and identify the options for getting
remote technical support.



The key to Isilon’s scale-out NAS is the architecture of OneFS. OneFS is a distributed clustered file
system. That means one file system spans across every node in the storage cluster. When you add more
nodes the file system automatically redistributes the content to take advantage of the resources of the
entire cluster. When the system needs to write a file, it breaks data into smaller sections called stripes. For
performance reasons, as well as for data protection, OneFS stripes data across all the nodes and drives in
a cluster.

As the system writes the data, it also protects the data. Previously we reviewed N + M protection and
parity bits. The technical way to describe that kind of fault tolerance protection level is to say that OneFS
uses Reed-Solomon FEC algorithms. OneFS also uses FlexProtect. FlexProtect creates an n-way,
redundant fabric that scales as nodes are added to the cluster, providing 100% data availability even with
four simultaneous node failures. FlexProtect utilizes advanced technology to provide redundancy and
availability capabilities far beyond those of traditional RAID. Unlike scale-up NAS, in scale-out NAS there
is not a singular master node or device that controls the cluster. Each node is a peer that shares the
management workload and acts independently as a point of access for incoming data requests. That way,
bottlenecking does not occur when there is a large influx of simultaneous requests. Thus, there is a copy
of OneFS on every node in the cluster. This approach prevents downtime since every node can take over
activities for another node if it happens to go offline. Within the cluster, OneFS coordinates all the nodes
on the back end, across either the InfiniBand or 40GbE networks. Now that we know how to store data
using OneFS, let’s look at how OneFS benefits this storage approach through optimization.



The OneFS architecture is designed to optimize processes and applications across the cluster. As each
node is added to the cluster, it brings with it computing power, storage, caching, and networking
resources. The shared infrastructure permits access to all these resources on any node in the cluster,
from any other node in the cluster, giving you the performance benefits of parallel processing. The results
are improved utilization of cluster resources such as compute power, disk, memory, and networking.
Because all the nodes work together, the more nodes that are added, the more powerful the cluster gets.
Parallel processing is an extremely beneficial feature, but what about access to the file once it has been
processed? Let’s look at OneFS’ ability to apply multiprotocol file access to answer that question.



OneFS supports access to the same file using different protocols and authentication methods at the same
time. SMB clients that are authenticating using Active Directory (AD), along with NFS clients that are
authenticating using LDAP, can access the same file with their appropriate permissions applied. The
permissions activities are seamless to the client.

To enable multiprotocol file access, Isilon translates Windows Security Identifiers (SIDs) and UNIX User
Identities (UIDs) into a common identity format. OneFS stores these identities on the cluster, tracking all
the user IDs from the various authentication sources. OneFS also stores the appropriate permissions for
each identity or group. We call this common identity format stored on the cluster the “on-disk
representation” of users and groups.

For instance, the SMB protocol exclusively uses SIDs for authorization data. If a user needs to retrieve a
file for a Windows client, as OneFS starts to retrieve the file, it converts the on-disk identity into a
Windows-friendly SID and checks the permissions. Or, if the user is saving a file, OneFS would do the
same kind of translation, from the on-disk representation to SIDs, before saving the file. This works the
same way on the UNIX side, using UIDs and Group Identifiers (GIDs) instead of SIDs. And
that’s how all users can access OneFS files in a mixed-platform client environment.
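As a highly simplified illustration of the on-disk identity idea, the sketch below maps one stored identity to either a SID or a UID/GID pair depending on the protocol. The structures and values are hypothetical and do not reflect OneFS's actual identity-mapping implementation.

```python
# Highly simplified illustration of a common "on-disk" identity that can be
# presented as a Windows SID or a UNIX UID/GID. Structures and values here
# are hypothetical, not OneFS's actual identity-mapping implementation.

on_disk_identities = {
    "jsmith": {"sid": "S-1-5-21-1004-513-1105", "uid": 2001, "gid": 100},
    "adevi":  {"sid": "S-1-5-21-1004-513-1212", "uid": 2002, "gid": 100},
}

def identity_for(user: str, protocol: str):
    """Return the protocol-appropriate identity for a stored user."""
    ident = on_disk_identities[user]
    if protocol == "SMB":          # SMB authorization data uses SIDs
        return ident["sid"]
    if protocol == "NFS":          # UNIX-style protocols use UID/GID pairs
        return (ident["uid"], ident["gid"])
    raise ValueError(f"unsupported protocol: {protocol}")

print(identity_for("jsmith", "SMB"))   # -> S-1-5-21-1004-513-1105
print(identity_for("jsmith", "NFS"))   # -> (2001, 100)
```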



Authentication services offer a layer of security by verifying a user's credentials before allowing them to
access and modify files. Authentication answers the question, “Are you really who you say you are?”

OneFS supports four methods for authenticating users: AD, Lightweight Directory Access Protocol
(LDAP), Network Information Service (NIS), and local/file provider accounts on the cluster.

You are likely already familiar with AD. While Active Directory can serve many functions, the primary
reason for joining the cluster to an AD domain is to let the AD domain controller perform user and group
authentication. Each node in the cluster shares the same AD computer account, making it easy to
administer and manage.

You probably know LDAP, too. A primary advantage of LDAP is the open nature of its directory services
and the ability to use LDAP across many platforms. OneFS can use LDAP to authenticate user and group
access to the cluster.

NIS is Sun Microsystems' directory access protocol. To avoid confusion, please note that NIS differs from
NIS+, which the Isilon cluster does not support.

Isilon supports local user and group authentication using the web administration interface, which falls under the category of local or file provider authentication.

Enabling multiple authentication methods on a cluster is possible with OneFS. However, it is important to test how the different methods interact. A word of caution: work methodically, because it is quite easy to get confused about which authentication source is authorizing whom.



Each and every node contains a copy of the OneFS operating system and the cluster’s configuration files.
To execute its functions, OneFS creates automated policies. Managing by automated policies makes
processes repeatable, which decreases the time you spend manually managing the cluster. You can
change any policy as required. A change to the configuration is a change to the configuration on every
node in the cluster. The cluster executes policies as one cohesive system.

Policies drive every process in OneFS. That includes the way data is distributed across the cluster and on
each node. It also includes how client connections get distributed among the nodes, and when and how maintenance tasks are executed. Policies are very important to the functionality of OneFS because they
enable so many automated activities.



You have three options for managing the cluster. You can use the web administration interface, the
command line interface (CLI), and the Platform Application Programming Interface (PAPI). The web
administration interface is pretty robust, but if you’re willing to dive into the CLI you can achieve a slightly
more advanced configuration. Some management functionality is only available in the web administration
interface. Conversely, sometimes the CLI offers a feature that is not available in the web administration
interface. Finally, PAPI enables customer and third-party applications to programmatically execute
management commands and directly access data on the cluster. All three options are important tools for
cluster management.



So who is allowed to access and make changes using the cluster management tools? Well, in addition to
the built-in root and admin users, OneFS provides role-based access control (RBAC). RBAC means you
can define a set of privileges to customize access to administration features in the OneFS web
administration interface and the CLI, and a smaller set of privileges for PAPI management. There are five
built-in roles, and you also have the option to create custom roles to fit your needs. At this level, it isn't as important to know what those built-in roles are as it is to know that a user can be assigned to more than one role at a time.



If there's an issue with your cluster, there are two types of support available.

First, you can manually upload log files to the Dell EMC support FTP site. The log files provide detailed
information about the cluster activities when a client needs technical support. Dell EMC’s support
personnel request these files at the beginning of a support call.

The second option is EMC Secure Remote Services (ESRS) which monitors your Isilon cluster, and with
your permission, allows remote access to Dell EMC Technical Support personnel to gather cluster data
and troubleshoot issues. ESRS is a secure, IP-based customer service support system. ESRS features
include 24x7 remote monitoring and secure authentication with AES 256-bit encryption and RSA digital
certificates. When configured, ESRS monitors your Isilon cluster on a node-by-node basis and sends
alerts regarding the health of your devices. Dell EMC Technical Support personnel can establish remote
sessions through SSH or the web administration interface. During remote sessions, support personnel can
run scripts that gather diagnostic data about cluster settings and operations, which is sent to a secure FTP
site, and troubleshoot open support cases on your cluster.



After completing this lesson, you should be able to understand connection management, explain data
distribution, I/O optimization, and data protection. Additionally you should be able to configure
management roles, manage the cluster’s capacity, identify data visibility and analytics, examine
deduplication, define data integrity, understand data resiliency, and explain data recovery and data
retention.



Let’s start with the data distribution, or simply put, how OneFS spreads data across the cluster. Various
models of Isilon nodes, or node types, can be present in a cluster. Nodes are assigned to node pools
based on the model type, number of drives, and drive size. The cluster can have multiple node pools
and groups of node pools can be combined to form tiers of storage.

Several policies determine how the data distribution occurs. The default policy is for data to write
anywhere in the cluster. Data distributes among the different node pools based on highest percentage of
available space. So the data target can be a pool or a tier anywhere on the cluster. If the distribution of the
data needs to be more specific, the customer should license SmartPools. If SmartPools is licensed, an
individual node pool can be the target for the data distribution.



You can optimize data input/output to match the workflows for your business. By default, optimization is
managed cluster-wide but you have the option of managing individual directories or even individual files.
The data access pattern can be optimized for random access, sequential access or concurrent access.

Pre-fetch, or read-ahead, is an optimization algorithm that attempts to predict what data will be
needed next, before the request is made. When clients open larger files, especially streaming formats like
video and audio, the cluster assumes that you will generally watch minute four of that video after minute
three. So it proactively loads minutes four, five, and maybe even six into memory ahead of when they are requested. Because of this function, delivering those minutes is faster than if the cluster had to go to
the hard drive repeatedly for each request. With OneFS you can configure the pre-fetch cache
characteristics to work best with the selected access pattern.
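Here is a rough sketch of the read-ahead idea. It simply serves the requested chunk and speculatively warms a cache with the next few sequential chunks; it is not the OneFS prefetch implementation, and the chunk size and depth are arbitrary values chosen for illustration.

```python
# Rough sketch of sequential read-ahead; not the OneFS prefetch implementation.
# Chunk size and prefetch depth are arbitrary values chosen for illustration.

CHUNK = 1024 * 1024     # read the file in 1 MB chunks
PREFETCH_DEPTH = 3      # how many chunks to read ahead

cache = {}

def read_from_disk(f, index):
    """Read one chunk directly from the file."""
    f.seek(index * CHUNK)
    return f.read(CHUNK)

def read_chunk(f, index):
    """Return chunk `index`, then prefetch the following chunks into the cache."""
    data = cache.pop(index, None)
    if data is None:
        data = read_from_disk(f, index)              # cache miss: go to disk
    for ahead in range(index + 1, index + 1 + PREFETCH_DEPTH):
        if ahead not in cache:
            cache[ahead] = read_from_disk(f, ahead)  # warm the cache for later reads
    return data

# Usage with any file:
# with open("video.mp4", "rb") as f:
#     first = read_chunk(f, 0)   # chunks 1-3 are now already in memory
```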



Performance optimization is the first thing a customer notices about their cluster in day-to-day operations.
But what does the average “Joe Admin” notice second? Right after they notice how great it works, they
notice if it doesn’t work. They want it fast and they want it to work. That is why data protection is essential.
Data protection level refers to how many components in a cluster can malfunction without loss of data.
Data protection on OneFS is flexible and configurable.

The system is enabled, by default, for virtual hot spares (VHS). VHS enables you to allocate disk space
that will hold the data as it's rebuilt when a disk drive fails.

We mentioned FEC, earlier in this course, and how OneFS uses FEC for data protection. Well, this is
different than RAID. RAID works at the disk level – once you’ve chosen a RAID type, that whole RAID
volume can only be that type of RAID; and if you want to change the RAID type, you’d have to move all the
data off the RAID disks before you can reformat.

Because OneFS uses FEC for protection, you can set the protection level differently based on tier, node
pool, directory, and even by the individual file. Extra protection creates extra overhead because OneFS
writes more FEC stripes. You decide how to trade off extra capacity (meaning, less protection) with
greater redundancy (meaning, less capacity). Based on the value of the data, you can set the protection
level differently.

For example, a Research and Development department has a node pool dedicated to testing. Because the test data is not production data and therefore not that valuable, they've set minimal N+1 protection. Their customer database, however, is the company's most valuable asset. Customer data is written to a different node pool and set to a higher level of protection, N+4. Protection is flexible with OneFS, so they could even set up to eight mirrors of each file; while this is not space efficient, for very frequently read files it can really speed things up.

The standard functionality is available in the unlicensed version of SmartPools, which is sometimes
referred to as SmartPools Basic. If you have the license for SmartPools, you get enhanced capabilities.



One of the ways you can subdivide capacity is by assigning storage quotas to users or groups. You
manage the quotas by policy. Quotas can be set by user, by group, or by directory or path. You can also
nest quotas, which means you can apply quotas within a quota. For example, you can place a quota on a
whole department, then a smaller quota on each user within that department, and even a different quota
on a file share they all use, and yet another on the sub-directories of that file share. All these are flexible
and can be applied or modified as needed at almost any time.

Quotas let you implement thin provisioning. For example, let us say you advise group A, “you may use up
to one terabyte of storage”. Group A will not instantly utilize the full terabyte of storage you have allocated
to them. They may never use it all. But with quota-based thin provisioning, you can keep showing the
group an available terabyte of storage, even if you don’t have a full terabyte actually available on the
cluster currently.

OneFS has three primary types of quotas: accounting (advisory) quotas, plus two levels of enforcement quotas: soft limit and hard limit. Advisory quotas are informational only. If a user exceeds their advisory
storage quota, OneFS lets them; but the cluster provides a comparison between the quota allocation and
actual usage. In contrast, if a user exceeds a soft limit quota, the system notifies the user with a warning
email. If the user exceeds a hard limit quota, the system will from then on deny the user the ability to write.
It also notifies the user that the quota has been violated.
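A minimal sketch of how the three quota behaviors described above differ is shown below. The thresholds and messages are hypothetical, and this is not SmartQuotas code.

```python
# Minimal illustration of the three quota behaviors described above
# (advisory, soft, hard). Thresholds and messages are hypothetical;
# this is not SmartQuotas code.

def check_quota(usage_gb, limit_gb, quota_type):
    """Return (write_allowed, message) for a proposed usage level."""
    if usage_gb <= limit_gb:
        return True, "within quota"
    if quota_type == "advisory":
        return True, f"informational: {usage_gb} GB used of {limit_gb} GB advised"
    if quota_type == "soft":
        return True, f"warning email sent: soft limit of {limit_gb} GB exceeded"
    if quota_type == "hard":
        return False, f"write denied: hard limit of {limit_gb} GB exceeded"
    raise ValueError(f"unknown quota type: {quota_type}")

print(check_quota(1100, 1000, "advisory"))  # -> (True, 'informational: ...')
print(check_quota(1100, 1000, "hard"))      # -> (False, 'write denied: ...')
```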

You can customize the quota notifications in OneFS so that they meet your requirements.

Quotas are enhanced functionality that requires licensing. To get the feature, you must purchase a
SmartQuotas license for each node in the cluster.



OneFS can implement deduplication. Deduplication provides an automated way to increase storage
efficiency. OneFS achieves deduplication by analyzing data to find duplicate sets of blocks, then storing
only a single copy of any data block that is duplicated. Deduplication runs as a post-process job; in other
words, on data already stored on the cluster.

Deduplication works at the 8K block level on files over 32K in size.

You can run deduplication jobs against a specific directory path or on the entire directory structure.

OneFS also provides a dry-run deduplication assessment tool without a need for a license. This allows you to test-drive deduplication to see how much capacity you would save if you ran the actual deduplication process. To enable the full deduplication functionality, you need a SmartDedupe license for each node in
the cluster.
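Conceptually, post-process deduplication at the block level works roughly like the sketch below: hash fixed-size blocks, keep a single stored copy of each unique block, and record references per file. This is not the SmartDedupe implementation; real deduplication also handles sampling, block alignment, and metadata that this ignores.

```python
# Conceptual post-process deduplication: hash fixed-size blocks, store one
# copy per unique hash, and keep references per file. This is not the
# SmartDedupe implementation; it is a simplified illustration only.

import hashlib

BLOCK = 8 * 1024           # dedup works at the 8 KB block level
MIN_FILE_SIZE = 32 * 1024  # files smaller than 32 KB are skipped

block_store = {}           # hash -> single stored copy of the block

def dedupe(files: dict):
    """Return {filename: [block hashes]} while storing each unique block once."""
    index = {}
    for name, data in files.items():
        if len(data) < MIN_FILE_SIZE:
            continue                                 # too small to deduplicate
        refs = []
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK]
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)    # store the first copy only
            refs.append(digest)
        index[name] = refs
    return index

files = {"a.bin": b"A" * 64 * 1024, "b.bin": b"A" * 64 * 1024}
dedupe(files)
print(len(block_store))    # -> 1 unique 8 KB block backs both 64 KB files
```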



Another enhanced Isilon feature gives you advanced data visibility and analytics. InsightIQ is a powerful
tool that monitors one or more clusters, then presents data in a robust graphical interface with exportable
reports. You can customize the reports, drill down into the data to break out specific information as
desired, and even take advantage of usage growth and prediction features.

InsightIQ monitors many aspects of system performance, such as CPU utilization and interface
throughput. The tool also reports on the file system analytics including quota usage, files per user, files per
directory, average file size, and more.

An external VMware virtual machine or standalone Linux server is required to use InsightIQ. This separate
server runs outside the cluster and collects data from the cluster at scheduled intervals. To enable these
capabilities, you get a free InsightIQ license for each cluster.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 43


We’ve already discussed data protection and that OneFS uses the Reed-Solomon algorithm for FEC,
instead of RAID. In earlier training, you saw how Isilon breaks data into stripes and spreads it across
nodes.

Each stripe is protected separately with FEC blocks, or parity. Stripes are spread across the nodes and
not contained in a single node. Only one or two data or protection stripe units are contained on a single
node for any given data stripe. Protecting at this granular level allows you to vary your protection levels
and set them separately for node pools, directories, or even individual files.

What’s the point of all this? Well, in most popular SANs and in typical scale-up NAS, you have a pair of
heads so that each one can back up the other, and that’s what provides high availability. With OneFS, you
could say that high availability is baked right in to every data transaction, because the data is spread onto
many drives and multiple nodes, all of them ready to pitch in and help reassemble the data if a component
fails. This approach creates an amazingly resilient platform.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 44


Data resiliency refers to the ability to recover past versions of a file that has changed over time. Sooner or
later, every storage admin gets asked to roll back to a previous “known good” version of a file. OneFS
provides this capability using snapshots. Snapshots capture the changed blocks and metadata information
for the file.

OneFS uses either the copy-on-write (COW) or redirect-on-write (ROW) snapshot methodology, depending on
the size of the change; the choice is automated and determined by OneFS. The COW approach keeps
the live version of the data intact while storing the differences in a snapshot, whereas ROW writes changes
to available space and then updates pointers to reference the new blocks. Because the system writes only
the changes, the writes are very fast.
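
The difference between the two methods can be illustrated with a toy block map, where a snapshot and the live file system both reference physical locations. This is a conceptual sketch only; OneFS chooses the method automatically, and these data structures are hypothetical, not OneFS internals.

# Toy illustration of copy-on-write (COW) vs. redirect-on-write (ROW) snapshots.
disk = {}   # physical location -> data
live = {}   # file block -> physical location seen by the live file system
snap = {}   # file block -> physical location seen by the snapshot

def write_initial(block, loc, data):
    disk[loc] = data
    live[block] = loc
    snap[block] = loc            # the snapshot initially shares the live blocks

def cow_update(block, new_data, copy_loc):
    # COW: copy the original data aside for the snapshot, then overwrite in place.
    disk[copy_loc] = disk[live[block]]
    snap[block] = copy_loc
    disk[live[block]] = new_data

def row_update(block, new_data, new_loc):
    # ROW: write the change to available space and redirect the live pointer;
    # the snapshot keeps pointing at the untouched original block.
    disk[new_loc] = new_data
    live[block] = new_loc

write_initial("b0", "loc0", "old-A")
write_initial("b1", "loc1", "old-B")
cow_update("b0", "new-A", "loc2")
row_update("b1", "new-B", "loc3")
print("live:", {b: disk[l] for b, l in live.items()})   # both blocks show new data
print("snap:", {b: disk[l] for b, l in snap.items()})   # both blocks still show old data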

Snapshot policies determine the snapshot schedule, the path to the snapshot location, and
snapshot retention periods. Snapshot deletions happen as part of a scheduled job, or you can delete
snapshots manually. Although you can delete snapshots out of chronological order, doing so is not a
recommended practice.

Some OneFS system processes use snapshots internally. No license is required for system-based
snapshot usage. However, to use snapshots for data resiliency requires a SnapshotIQ license for each
node in the cluster.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 45


When we think of data recovery, we first think of data backup. Isilon supports NDMP for integration with
backup applications from major vendors, such as Symantec, EMC, CommVault, and IBM.

A backup application external to the cluster manages the backup process. You can set this up in one of
two ways:
• For Generation 5 and Generation 6 platforms, send cluster data over your LAN to the backup
device
• For Generation 5 platform only, send data directly from the cluster to the backup device using
Isilon backup accelerators

Depending upon the amount of data and the interfaces selected on the external network, backing up
across a network might not be as efficient as using the backup accelerator. The backup accelerator
provides access to the data across the fast InfiniBand internal network and delivers it to the backup device
over Fibre Channel ports. NDMP support comes standard with OneFS.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 46


While NDMP backup comes standard with OneFS, replication is an enhanced data recovery option.
Replication keeps a copy of one cluster’s data on another cluster. OneFS performs replication during
normal operations, from one Isilon cluster to another. Replication can occur over a LAN or over a WAN.
Replication may be from one cluster to one other cluster, or from one to many Isilon clusters, and it runs in
only one direction, from source to target.

OneFS supports two types of replication – copy and synchronization.

With copy, any new files on the source are copied over to the target, while files that have been deleted on
the source remain unchanged on the target.

With synchronization, both the source and target clusters maintain identical file sets, except that files on
the target are read-only.
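
A simple way to see the difference is to model each mode as an operation over file sets. This is a conceptual sketch of the copy and synchronization semantics described above, not SyncIQ's actual algorithm.

# Conceptual sketch of the two replication modes (not SyncIQ's actual algorithm).
def replicate_copy(source, target):
    # Copy: new and changed source files are copied to the target;
    # files deleted on the source remain unchanged on the target.
    result = dict(target)
    result.update(source)
    return result

def replicate_sync(source, target):
    # Synchronization: the target is made identical to the source
    # (and the files are read-only on the target).
    return dict(source)

source = {"a.txt": "v1", "b.txt": "v2"}                    # c.txt was deleted on the source
target = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}     # older copy on the target
print("copy:", replicate_copy(source, target))             # keeps c.txt, updates b.txt
print("sync:", replicate_sync(source, target))             # drops c.txt, mirrors the source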

You can set replication policies to run synchronization jobs at specified times or you can replicate
automatically if the source data changes. The policies can be set up per directory or for specific data
types, and you can also configure exceptions to include or exclude specific files.

OneFS also empowers you to limit the bandwidth used for replication, in order to optimize the traffic for
more important workflows.

Replication requires a SyncIQ license.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 47


Data retention is the ability to prevent data from being deleted or modified before some future date. In
OneFS, you can configure data retention at the directory level, so different directories can have different
retention policies. You can also use policies to automatically commit certain types of files for retention.

OneFS offers two types of data retention: enterprise and compliance.

Enterprise is more flexible than Compliance, and meets most companies’ retention requirements. It can
allow privileged deletes by an administrator.

The Compliance level of retention is even more secure and is designed to meet SEC regulatory requirements. In
Compliance mode, once data is committed to disk, no one can change or delete the data until the retention
clock expires. A common ploy for defeating retention safeguards is to temporarily set the system
clock to a date far in the future, thus releasing all files. Compliance mode defeats this approach by
relying upon a specialized clock that prohibits clock changes.
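
The role of the specialized clock can be sketched as follows: retention expiry is checked against a dedicated compliance clock that only advances with elapsed real time and cannot be set, so moving the system clock forward releases nothing. This is a conceptual illustration, not the SmartLock implementation.

import time

class ComplianceClock:
    """A clock that only moves forward with elapsed real time; it cannot be set."""
    def __init__(self):
        self._value = time.time()
    def advance(self, seconds):
        self._value += seconds          # only real elapsed time moves it forward
    def now(self):
        return self._value

def can_delete(committed_at, retention_seconds, clock):
    # Deletion is allowed only when the compliance clock passes the retention period,
    # regardless of what the (settable) system clock claims.
    return clock.now() >= committed_at + retention_seconds

clock = ComplianceClock()
committed = clock.now()
print(can_delete(committed, 3600, clock))   # False: retention has not yet expired
clock.advance(4000)                         # real time passes
print(can_delete(committed, 3600, clock))   # True: retention period has elapsed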

You can still use SyncIQ to replicate the files that have retention policies applied.

A SmartLock license is required to use this feature.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 48


In this lesson, we introduce what makes Isilon's OneFS operating system unique. We then delve
into the challenges of Big Data and the uses of multi-protocol access. We'll also explore the Edge-to-Core-
to-Cloud solution and where Isilon fits into this overall storage landscape. Let’s get started!

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 49


The term Big Data is being used across the technology industry but what exactly is Big Data? Big Data is
defined as any collection of data sets so large, diverse, and fast changing that it is difficult for traditional
technology to efficiently process and manage. What exactly makes computer data “Big Data”?

The storage industry says that Big Data is digital data having too much volume, velocity, or variety to be
stored traditionally. To make sure the three V’s of Big Data are perfectly clear, let’s consider some
examples.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 50


What do we mean by volume? Consider any global website that works at scale. YouTube’s press page
says YouTube ingests 100 hours of video every minute. That is one example of Big Data volume.

What’s an example of velocity? Machine-generated workflows produce massive volumes of data. For
example, the longest stage of designing a computer chip is physical verification, where the chip design is
tested in every way to see not only if it works, but also if it works fast enough. Each time researchers fire
up a test on a graphics chip prototype, sensors generate many terabytes of data per second. Storing
terabytes of data in seconds is an example of Big Data velocity.

Perhaps the best example of variety is the world’s migration to social media. On a platform such as
Facebook, people post all kinds of file formats: text, photos, video, polls, and more. According to a CNET
article from June 2012, Facebook was taking in more than 500 terabytes of data per day, including 2.7
billion Likes and 300 million photos. Every day. That many kinds of data at that scale represents Big Data
variety.

The “Three Vs” – volume, velocity, and variety – often arrive together. When they combine, administrators
truly feel the need for high performance, higher capacity storage. The three Vs generate the challenges of
managing Big Data.

Growing data has also forced an evolution in storage architecture over the years, driven by the amount of data
that must be retained, sometimes for many years. Isilon is a Big Data solution because it can
handle the volume, velocity, and variety that define Big Data. These topics will be
addressed as the course continues.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 51


Let’s start with the first of the three Vs, managing Big Data volume.

Challenge: Complex data architecture. SAN and scale-up NAS data storage architectures encounter a
logical limit at 16 terabytes, meaning, no matter what volume of data arrives, a storage administrator has
to subdivide it into partitions smaller than 16 terabytes. This is part of why customers wind up with silos of
data. To simplify this challenge, scale-out NAS such as an Isilon cluster holds everything in one single
volume with one file system; there are no LUNs to manage. Isilon is like one gigantic bucket for your data, and it can scale seamlessly
without architectural hard stops forcing subdivisions on the data.

Challenge: Low utilization of raw capacity. SAN and scale-up NAS architectures must reserve much of
the raw capacity of the system for management and administrative overhead, such as RAID parity disks,
metadata for all those LUNs and mega-LUNs, duplicate copies of the file system, and so on. As a result,
conventional SAN and NAS architectures often use only half of the raw capacity available, because you
have to leave headroom on each separate stack of storage. Suppose you have seven different silos of
data. As soon as you put them in one big volume, you immediately get back the headroom from six of the
seven stacks. In that way, Isilon offers high utilization. Isilon customers routinely use 80% or more of raw
disk capacity.

Challenge: Non-flexible data protection. When you have Big Data volumes of information to store, it
had better be there, dependably. If an organization relies on RAID to protect against data loss or
corruption, the failure of a single disk drive causes disproportionate inconvenience. The most popular
RAID implementation scheme allows the failure of only two drives before data loss. (A sizable Big Data
installation will easily have more than 1000 individual hard drives, so odds are at least one drive is down at
any given time). The simpler answer is to protect data using a different scheme. Shortly you’ll learn about
Isilon’s clustered architecture based on nodes that do not use RAID. Nodes full of hard drives rely less on
any single drive and can recover a failed drive as a non-emergency.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 52


What advantages does scale-out NAS offer for administrators coping with high velocity, the second V of Big Data?
Here are some examples:

Challenge: Difficult to scale performance. Some data storage architectures use two controllers, sometimes
referred to as servers or filers, to run a stack of many hard drives. You can scale capacity by adding more hard
drives, but it’s difficult to scale performance. In a given storage stack, the hard drives offer nothing but capacity - all
the intelligence of the system, including computer processing and RAM, must come from the two filers. If the
horsepower of the two filers becomes insufficient, the architecture does not allow you to pile on more filers. You have
to start over with another stack and two more filers. In contrast, every node in an Isilon cluster contains capacity
PLUS compute power PLUS memory. The nodes work in parallel, so each node you add scales the cluster linearly – in
other words, all aspects of the cluster grow, including capacity and performance.

Challenge: Silos of data. Due to the architectural restrictions we just discussed, SAN and scale-up NAS end up with
several isolated stacks of storage. Many customer sites have a different storage stack for each application or
department. If the R&D stack performs product testing that generates results at Big Data velocity, the company may
establish an HPC stack, which could reach capacity rapidly. Other departments or workflows may have independent
storage stacks that have lots of capacity left, but there’s no automated way for R&D to offload their HPC overflow to,
for example, a backup storage stack. Instead, an administrator has to manually arrange a data migration. In contrast,
an Isilon cluster distributes data across all its nodes to keep them all at equal capacity. You don’t have one node
taking a pounding while other nodes sit idle. There are no hot spots, and thus, no manual data migrations. Automated
balancing makes much more sense if the goal is to keep pace with Big Data velocity.

Challenge: Concurrency. In conventional storage, a file is typically confined to a RAID stripe. That means that the
maximum throughput of reading that file is limited to how fast those drives can deliver the file. But in modern
workflows, you may have a hundred engineers or a thousand digital artists all needing access to a file, and those
RAID drives can’t keep up. Perhaps the two filers on that stack can’t process that many requests efficiently. Isilon’s
answer is that every node has at least a dozen drives, plus more RAM and more computer processing, for more
caching and better concurrent access. When there is heavy demand for a file, several nodes can deliver it.

Challenge: Many manual processes. Besides manual data migrations, conventional storage has many more
manual processes. An administrator over a SAN or a scale-up NAS product spends a significant amount of time
creating and managing LUNs, partitioning storage, establishing mounts, launching jobs, and so on. In contrast, Isilon
is policy-driven. Once you define your policies, the cluster does the rest automatically.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 53


A scale-out Data Lake is a large storage system where enterprises can consolidate vast amounts of their
data from other solutions or locations, into a single store called a Data Lake. This helps address the
variety issue with Big Data. The data can be secured, analysis can be performed, and actions can be
taken based on the insights that surface. Enterprises can then eliminate the cost of having silos or
“islands” of information spread across their enterprises. The scale-out Data Lake further enhances this
paradigm by providing scaling capabilities in terms of capacity, performance, security, and protection.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 54


Isilon scale-out NAS architecture simplifies managing Big Data by letting you scale to more than 50
PB for Generation 5 clusters, or up to 1,000 petabytes of storage for Gen 6 Isilon clusters, all within a single
contiguous volume, with one namespace and one file system.

Isilon was purpose-built to ease the challenges of processing, storing, managing, and delivering data at
scale. Isilon’s positioning is to provide simple yet powerful answers for Big Data storage administrators.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 55


Isilon is the industry leading scale-out clustered storage solution. It provides a single volume of data
storage at a massive scale that is easy to use and manage, offering linear scalability and readiness for a
customer’s performance applications, Hadoop analytics, and other workflows. A Data Lake is a single
central data repository that can store data from a variety of sources, such as file shares, web apps, and
the cloud. It enables businesses to access the same data for a variety of uses and enables the data to be
manipulated using a variety of clients, analyzers, and applications. The data is real-time production data
and does not need to be copied or moved from an external source, like another Hadoop cluster, into the
Data Lake. The Data Lake provides tiers based on data usage, and the ability to instantly increase the
storage capacity when needed. This slide identifies the key characteristics of a scale-out Data Lake.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 56


EMC IsilonSD Edge is a software-defined storage product that expands the Data Lake by bringing in data
from the edge (remote and branch offices), enabling you to consolidate unstructured data, simplify its
management, and protect it. The product lets you consolidate data from edge locations into your core data
center and then leverage multi-protocol capabilities to support a wide range of 2nd and 3rd platform
applications, including Big Data analytics, enabling you to gain value and insight from enterprise edge
data.
Management at the edge is simplified by using the familiar software tools and the automated features
found in OneFS. By leveraging off-the-shelf hardware and virtual server environments located at your
remote locations, you can deploy an economical software-defined storage solution with the power of Isilon
OneFS. You can also increase efficiency and storage utilization at the edge to over 80% by aggregating
unused storage capacity.
Finally, by using IsilonSD Edge, you can increase data protection by automatically replicating data to your
core data center while eliminating the need for manual data backup processes and protect data
consistently at all of your remote locations.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 57


The Isilon CloudPools software provides policy-based, automated tiering that lets your customers
seamlessly use the cloud as an additional storage tier for the Isilon cluster at their data center. Customers
can select from a number of public cloud services or use a private cloud based on EMC Elastic Cloud
Storage (ECS).

CloudPools lets your customers address rapid data growth and optimize data center storage resources by
using the cloud as a highly economical storage tier with massive storage capacity for cold or frozen data
that is rarely used or accessed. This enables more valuable on-premises storage resources to be used for
more active data and applications. To secure data that is archived in the cloud, CloudPools encrypts data
that is transmitted from the Isilon cluster at the core data center to the cloud storage service. This data
remains encrypted in the cloud until it is retrieved and returned to the Isilon cluster at the data center.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 58


You can use EMC Isilon to consolidate file-based, unstructured data into a Data Lake that can eliminate
costly storage silos, simplify management, increase data protection, and acquire more value from your
data assets.

With built-in multi-protocol capabilities, Isilon can support a wide range of traditional and next-generation
applications on a single platform, including powerful Big Data analytics that provide you with better insight
and use of your stored information.

Data at edge locations, such as remote or branch offices, is growing. These edge locations are often
inefficient islands of storage, running with limited IT resources and inconsistent data protection practices.
Data at the edge generally lives outside of the business Data Lake, making it difficult to incorporate into
data analytics projects. The edge-to-core-to-cloud approach can extend your Isilon Data Lake to edge
locations and out into the cloud, thus enabling consolidation, protection, management, and backups of
remote edge location data.

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 59


Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 60
Thank you!

Copyright © 2017 Dell Inc. Isilon Fundamentals 2017 61


POWERSCALE
CONCEPTS-SSP

PARTICIPANT GUIDE



© Copyright 2020 Dell Inc. Page i
Table of Contents

Prerequisite Skills ...................................................................................................... 2


Prerequisite Skills ................................................................................................................ 3

Rebranding - Isilon is now PowerScale ................................................................... 4


Rebranding - Isilon is now PowerScale ................................................................................ 5

PowerScale Solutions - Internal ............................................................................... 6


PowerScale Solutions - Internal ........................................................................................... 7

Course Objectives...................................................................................................... 8
Course Objectives................................................................................................................ 9

Data Storage Overview ............................................................................................ 10


Data Storage Overview ...................................................................................................... 11
Module Objectives ............................................................................................................. 12
Storage Evolution............................................................................................................... 13
Types of Data Storage ....................................................................................................... 14
Block-Based Data and File-Based Data ............................................................................. 15
Digital Transformation ........................................................................................................ 16
Data Storage: Ever Changing and Ever Growing ............................................................... 17
Two Types of NAS: Scale-Up and Scale-Out ..................................................................... 18
Scale-Out NAS .................................................................................................................. 21
From DAS to NAS .............................................................................................................. 23
OneFS Operating System .................................................................................................. 25

PowerScale Physical Architecture ......................................................................... 27


PowerScale Physical Architecture...................................................................................... 28
Module Objectives ............................................................................................................. 29
PowerScale Hardware Overview........................................................................................ 30
PowerScale Networking Architecture ................................................................................. 31
PowerScale Architecture - External Network ...................................................................... 33
PowerScale Architecture - Interconnect ............................................................................. 34

PowerScale Concepts-SSP

Page ii © Copyright 2020 Dell Inc.


Enhanced Connection Management .................................................................................. 36
N + M Data Protection........................................................................................................ 38
FEC Instead of RAID ......................................................................................................... 39
File Striping Example ......................................................................................................... 40

PowerScale Nodes ................................................................................................... 41


PowerScale Nodes ............................................................................................................ 42
Module Objectives ............................................................................................................. 43
PowerScale Nodes Overview ............................................................................................. 44
PowerScale Family ............................................................................................................ 45
Gen 6 Hardware Components............................................................................................ 48
Gen 6.5 Hardware Components......................................................................................... 50
Node Interconnectivity ....................................................................................................... 52
Quick Scalability ................................................................................................................ 54
Additional Features: Self-Encrypting Drives (SEDs) ........................................................... 55

PowerScale OneFS Operating System ................................................................... 56


PowerScale OneFS Operating System .............................................................................. 57
Module Objectives ............................................................................................................. 58
OneFS - Distributed Clustered File System ........................................................................ 59
Benefits of OneFS.............................................................................................................. 61
Multiprotocol File Access ................................................................................................... 62
Authentication .................................................................................................................... 63
Policy-Based Automation ................................................................................................... 64
Management Interfaces ..................................................................................................... 65
Built-In Administration Roles .............................................................................................. 67
Secure Remote Services ................................................................................................... 68

Data Management and Security .............................................................................. 69


Data Management and Security ......................................................................................... 70
Module Objectives ............................................................................................................. 71
Data Distribution Across Cluster ........................................................................................ 72
Data IO Optimization.......................................................................................................... 73

PowerScale Concepts-SSP

Internal Use - Confidential


© Copyright 2020 Dell Inc. Page iii
Data Protection for Simultaneous Failures ......................................................................... 74
User Quotas for Capacity Management ............................................................................. 75
Deduplication for Data Efficiency ....................................................................................... 76
Data Visibility and Analytics ............................................................................................... 77
Data Integrity - FEC Protection .......................................................................................... 78
Data Resiliency - Snapshots .............................................................................................. 79
Data Recovery - Backup .................................................................................................... 80
Data Recovery - Replication .............................................................................................. 81
Data Retention ................................................................................................................... 83

PowerScale and Big Data ........................................................................................ 84


PowerScale and Big Data .................................................................................................. 85
Module Objectives ............................................................................................................. 86
What Is Big Data? .............................................................................................................. 87
Big Data - Volume, Velocity, Variety................................................................................... 88
Big Data Challenges: Volume ............................................................................................ 90
Big Data Challenges: Velocity ............................................................................................ 91
Big Data Challenges: Variety ............................................................................................. 93
Big Data Positioning of PowerScale ................................................................................... 94
PowerScale OneFS: Scale-Out Data Lake ......................................................................... 95
PowerScale CloudPools..................................................................................................... 96
PowerScale and Edge-to-Core-to-Cloud ............................................................................ 98

Course Summary ..................................................................................................... 99


Course Summary ............................................................................................................. 100

Appendix ............................................................................................... 101

PowerScale Concepts-SSP

Page iv © Copyright 2020 Dell Inc.


Prerequisite Skills

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 1
Prerequisite Skills

Prerequisite Skills

PowerScale Concepts-SSP

Internal Use - Confidential


Page 2 © Copyright 2020 Dell Inc.
Prerequisite Skills

Prerequisite Skills

To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Networking fundamentals such as TCP/IP, DNS and routing
• An introduction to storage such as NAS and SAN differences and basic storage
principles and features
• Installation process of a PowerScale cluster

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 3
Rebranding - Isilon is now PowerScale

Rebranding - Isilon is now PowerScale

PowerScale Concepts-SSP

Page 4 © Copyright 2020 Dell Inc.


Rebranding - Isilon is now PowerScale

Rebranding - Isilon is now PowerScale

Important: In mid-2020, Isilon launched a new hardware platform, the F200 and F600, branded as
PowerScale. Over time the Isilon brand will convert to the new PowerScale branding. In the meantime,
you will continue to see Isilon and PowerScale used interchangeably, including within this course and any
lab activities.
OneFS CLI isi commands, command syntax, and man pages may have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other technical documentation, community
pages, blog posts, and others will continue to use the "Isilon" brand.
The rebranding initiative is an iterative process, and rebranding all instances of "Isilon" to "PowerScale"
may take some time.

PowerScale Concepts-SSP

© Copyright 2020 Dell Inc. Page 5


PowerScale Solutions - Internal

PowerScale Solutions - Internal

PowerScale Concepts-SSP

Internal Use - Confidential


Page 6 © Copyright 2020 Dell Inc.
PowerScale Solutions - Internal

PowerScale Solutions - Internal

The graphic shows the PowerScale Solutions Expert certification track. You can
leverage the Dell Technologies Proven Professional program to realize your full
potential: a combination of technology-focused and role-based training and exams
that cover concepts and principles as well as the full range of Dell Technologies
hardware, software, and solutions. You can accelerate your career and your
organization’s capabilities.

[Graphic: PowerScale Solutions Expert certification track. The track builds from Information Storage and
Management and PowerScale Concepts through role-based courses for Implementation Engineer,
Technology Architect, and Platform Engineer (including PowerScale Hardware Concepts, Hardware
Installation, Hardware Maintenance, Implementation, Administration, and Solutions Design), up to
PowerScale Advanced Administration, PowerScale Advanced Disaster Recovery, and the knowledge- and
experience-based PowerScale Solutions exam.]

(C) - Classroom

(VC) - Virtual Classroom

(ODC) - On Demand Course

For more information, visit: http://dell.com/certification

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 7
Course Objectives

Course Objectives

PowerScale Concepts-SSP

Internal Use - Confidential


Page 8 © Copyright 2020 Dell Inc.
Course Objectives

Course Objectives

After completion of this course, you will be able to:


→ Compare structured and unstructured data.
→ Describe the PowerScale physical architecture.
→ Discuss node workflow applications, node model details, and adding new
nodes to the cluster.
→ Describe the PowerScale OneFS operating system.
→ Explain data management and security in PowerScale.
→ Discuss PowerScale with a Big Data solution.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 9
Data Storage Overview

Data Storage Overview

PowerScale Concepts-SSP

Internal Use - Confidential


Page 10 © Copyright 2020 Dell Inc.
Data Storage Overview

Data Storage Overview

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 11
Data Storage Overview

Module Objectives

After completing this lesson, you will be able to:


• Explain data storage evolution.
• Discuss the two types of data storage.
• Explain the journey of DAS to NAS.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 12 © Copyright 2020 Dell Inc.
Data Storage Overview

Storage Evolution

The web version of this content contains an interactive activity.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 13
Data Storage Overview

Types of Data Storage

During the data storage evolution, two types of data developed: structured data and
unstructured data. PowerScale specializes in storing unstructured data.

Structured Data
• Resides in fixed fields of records or files
• Requires defined data types, access, and processes
• Most often found in a relational database
• Examples - records or files, census records, economic catalog(s), phone director(ies), customer
contact records, spreadsheets, etc.

Unstructured Data
• Does not reside in a fixed model
• Does not exist in typical row/column format
• Examples - photos, documents, and presentations

Note: 80 – 90% of digital data is unstructured.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 14 © Copyright 2020 Dell Inc.
Data Storage Overview

Block-Based Data and File-Based Data

Block-based data
• Sequence of bytes at fixed length
• Single piece of file or whole file

• Best for high input/output and low latency

• Associated with structured data

File-based data
• Discrete unit of information defined by application or created by user

• Only useful as complete file


• Too large for database apps and high I/O

• Associated with unstructured data

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 15
Data Storage Overview

Digital Transformation

Digital Transformation (DX) has become a ubiquitous component of nearly every


organization’s strategic plan over the last few years. DX-related emerging
technologies will have profound effects on the means of production and will
transform the way consumers interact with every organization in the future.

IDC projects that through 2022, 75% of successful digital strategies will be built by
a transformed IT organization, with modernized and rationalized infrastructure,
applications, and data architectures.

[...] within the next four years, the global economy will finally reach digital
supremacy, with more than half of Gross Domestic Product (GDP) [...] - IDC FutureScape1
(GDP is a monetary measure of the market value of all the final goods and services
produced in a specific time period.)

At the same time, many organizations still struggle to tactically apply DX learnings
to their own business.

1IDC FutureScape: Worldwide IT Industry 2020 Predictions: October 2019, IDC


#US45599219

PowerScale Concepts-SSP

Internal Use - Confidential


Page 16 © Copyright 2020 Dell Inc.
Data Storage Overview

Data Storage: Ever Changing and Ever Growing

With unstructured data making up the majority of data storage growth, a solution was
needed. An International Data Corporation (IDC) study published in 2018 showed
that the amount of digital data created, captured, and replicated worldwide grew
exponentially. This finding was based on the proliferation of then-new technologies
such as Voice over IP, RFID, smartphones, and consumer use of GPS, as well as the
continued output of data generators such as digital cameras, HD TV broadcasts, digital
games, ATMs, email, videoconferencing, medical imaging, and so on.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 17
Data Storage Overview

Two Types of NAS: Scale-Up and Scale-Out

PowerScale clusters are a NAS solution. There are two types of NAS architectures:
scale-up and scale-out.

Scale-Up

• With a scale-up2 platform, if more storage is needed, another independent NAS


system is added to the network.
• A scale-up solution3 has controllers that connect to trays of disks and provide
the computational throughput.
• Traditional NAS is great for specific types of workflows, especially those
applications that require block-level access.

2Scale-up storage is the traditional architecture that is dominant in the enterprise


space. High performance, high availability single systems that have a fixed capacity
ceiling characterize scale-up.

3The two controllers can run active/active or active-passive. For more capacity,
add another disk array. Each of these components is added individually. As more
systems are added, NAS sprawl becomes an issue.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 18 © Copyright 2020 Dell Inc.
Data Storage Overview

[Figure: scale-up NAS - a controller with disk shelves holding structured or unstructured storage.
Independent systems on the network create separate points of management for clients, and additional
storage is usually restricted to tens or hundreds of TBs.]

Scale-Out

• With a clustered NAS solutions, or scale-out architecture, all the NAS boxes, or
PowerScale nodes, belong to a unified cluster with a single point of
management.
• In a scale-out solution4, the computational throughput, disks, disk protection,
and management are combined and exist for a single cluster.

4Not all clustered NAS solutions are the same. Some vendors overlay a
management interface across multiple independent NAS boxes. This gives a
unified management interface, but does not unify the file system. While this
approach does ease the management overhead of traditional NAS, it still does not
scale well.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 19
Data Storage Overview

[Figure: scale-out NAS - a single cluster of nodes serving clients with unstructured storage that scales to
1000+ PB. Adding storage also adds compute and bandwidth.]

PowerScale Concepts-SSP

Internal Use - Confidential


Page 20 © Copyright 2020 Dell Inc.
Data Storage Overview

Scale-Out NAS

Scale-out NAS5 is now a mainstay in most data center environments. The next
wave of scale-out NAS innovation has enterprises embracing the value6 of NAS
and adopting it as the core of their infrastructure.

5The PowerScale scale-out NAS storage platform combines modular hardware


with unified software to harness unstructured data. Powered by the OneFS
operating system, a PowerScale cluster delivers a scalable pool of storage with a
global namespace.

6 Enterprises want to raise the standard on enterprise-grade resilience, with a no-tolerance attitude
toward data loss and data-unavailable situations, and support for features that simplify management.
Organizations see massive scale and performance with smaller data center rack footprints, driven by
performance-centric workloads.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 21
Data Storage Overview

1: The unified software of the platform provides centralized web-based and


command-line administration to manage the following features:

• A cluster that runs a distributed file system.


• Scale-out nodes that add capacity and performance.
• Storage options that manage files and tiering.
• Flexible data protection and high availability.
• Software modules that control costs and optimize resources.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 22 © Copyright 2020 Dell Inc.
Data Storage Overview

From DAS to NAS

DAS

In the early days of computer data, corporations stored data on hard drives in a server. The intellectual
property of the company depended entirely on the continuous functioning of those hard drives. Thus, to
minimize risk, corporations mirrored the data on a Redundant Array of Independent Disks (RAID). RAID
disks were directly attached to a server so that the server thought the hard drives were part of it. This
technique is called Direct Attached Storage (DAS).

SAN

[Diagram: SAN layers - volume manager and RAID.]

As applications proliferated, soon there were many servers, each with its own DAS.
This worked fine, with some drawbacks. If one server’s DAS was full while another
server’s DAS was half empty, the empty DAS couldn’t share its space with the full
DAS. Due to this limitation with DAS, the SAN was introduced, which effectively combined a volume
manager with RAID.

NAS

[Diagram: NAS layers - file system, volume manager, and RAID.]

SAN was set up for servers, not personal computers (PCs). PCs worked differently
from the storage file server: over the network, PCs only communicate from one file
system to another file system. The breakthrough came
when corporations put employee computers on the network, and added to the

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 23
Data Storage Overview

storage a file system to communicate with users. From this, Network Attached
Storage (NAS) was born.

NAS works pretty well, but there is room for improvement. For example, the server
is spending as much time servicing employee requests as it is doing the application
work it was meant for. The file system doesn’t know where data is supposed to go,
because that’s the volume manager’s job. The volume manager doesn’t know how
the data is protected; that’s RAID’s job. If high-value data needs more protection
than other data, you need to migrate the data to a different volume that has the
protection level that data needs. So there is opportunity to improve NAS.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 24 © Copyright 2020 Dell Inc.
Data Storage Overview

OneFS Operating System

With traditional NAS systems the file system7, volume manager8, and the
implementation of RAID9 are all separate entities.

OneFS is the operating system and the underlying file system that drives and
stores data. OneFS is a single file system that performs the duties of the volume
manager and applies protection.
• Creates a single file system for the cluster.10
• Volume manager and protection.11

7The file system is responsible for the higher-level functions of authentication and
authorization.

8 The volume manager controls the layout of the data.

9 RAID controls the protection of the data.

10As nodes are added, the file system grows dynamically and content is
redistributed.

11The PowerScale scale-out NAS storage platform combines modular hardware


with unified software to harness unstructured data. A PowerScale cluster delivers a
scalable pool of storage with a global namespace.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 25
Data Storage Overview

• Data shared across cluster.12


• Scale resources.13

12Because all information is shared among nodes, the entire file system is
accessible by clients connecting to any node in the cluster.

13Each PowerScale storage node contains globally coherent RAM, meaning that,
as a cluster becomes larger, it also becomes faster. When a node is added, the
performance scales linearly.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 26 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

PowerScale Physical Architecture

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 27
PowerScale Physical Architecture

PowerScale Physical Architecture

PowerScale Concepts-SSP

Internal Use - Confidential


Page 28 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

Module Objectives

After completing this lesson, you will be able to:


• Explain PowerScale data storage solution.
• Discuss PowerScale physical architecture.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 29
PowerScale Physical Architecture

PowerScale Hardware Overview

Nodes combine to create a cluster. Each cluster behaves as a single, central storage system. PowerScale
is designed for large volumes of unstructured data. PowerScale has multiple servers that are called
nodes.

PowerScale includes all-flash, hybrid, and archive storage systems.

[Figure: dual-chassis, 8-node Generation 6 (Gen 6) cluster.]

Gen 6 highlights.14
Gen 6.5 highlights.15

14The Gen 6 platform reduces the data center rack footprints with support for four
nodes in a single 4U chassis. It enables enterprise to take on new and more
demanding unstructured data applications. The Gen 6 can store, manage, and
protect massively large datasets with ease. With the Gen 6, enterprises can gain
new levels of efficiency and achieve faster business outcomes.

15 The ideal use cases for Gen 6.5 (F200 and F600) are remote office/back office,
factory floors, IoT, and retail. Gen 6.5 also targets smaller companies in the core
verticals, and partner solutions, including OEM. The key advantages are low entry
price points and the flexibility to add nodes individually, as opposed to a chassis/2
node minimum for Gen 6.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 30 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

PowerScale Networking Architecture

OneFS supports standard network communication protocols IPv4 and IPv6.


PowerScale nodes include several external Ethernet connection options, providing
flexibility for a wide variety of network configurations16.

Network: There are two types of networks that are associated with a cluster:
internal and external.

Front-end, External Network

[Figure: F200 cluster showing supported front-end protocols. The client/application layer connects to the
PowerScale storage layer over Ethernet using NFS, SMB, S3, HTTP, FTP, HDFS, and SWIFT; a separate
Ethernet back-end layer (PowerScale internal) handles communication between nodes.]

Clients connect to the cluster using Ethernet connections17 that are available on all
nodes.

16In general, keeping the network configuration simple provides the best results
with the lowest amount of administrative overhead. OneFS offers network
provisioning rules to automate the configuration of additional nodes as clusters
grow.

17Because each node provides its own Ethernet ports, the amount of network
bandwidth available to the cluster scales linearly.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 31
PowerScale Physical Architecture

The complete cluster combines hardware, software, and networks in the following view:

Back-end, Internal Network

OneFS supports a single cluster18 on the internal network. This back-end network,
which is configured with redundant switches for high availability, acts as the
backplane for the cluster.19

18 All intra-node communication in a cluster is performed across a dedicated


backend network, comprising either 10 or 40 GbE Ethernet, or low-latency QDR
InfiniBand (IB).

19 This enables each node to act as a contributor in the cluster and isolates node-
to-node communication to a private, high-speed, low-latency network. This back-
end network utilizes Internet Protocol (IP) for node-to-node communication.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 32 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

PowerScale Architecture - External Network

The external network provides connectivity for clients over standard file-based
protocols. It supports link aggregation, and network scalability is provided through
software in OneFS. A Gen 6 node has up to 2 front-end ports - 10 GigE, 25 GigE, or
40 GigE, and one 1 GigE port for management. Gen 6.5 nodes have 2 front-end
ports - 10 GigE, 25 GigE, or 100 GigE. In the event of a Network Interface
Controller (NIC) or connection failure, clients do not lose their connection to the
cluster. For stateful protocols, such as SMB and NFSv4, this prevents client-side
timeouts and unintended reconnection to another node in the cluster. Instead,
clients maintain their connection to the logical interface and continue operating
normally. Support for Continuous Availability (CA) for stateful protocols like SMB
and NFSv4 is available with OneFS 8.0.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 33
PowerScale Physical Architecture

PowerScale Architecture - Interconnect

Back-end Network

• The back-end network is a private PowerScale network that is used for


intercluster communication.
• It provides distributed connectivity.
• The back-end network supports redundancy for high availability.
• With OneFS 8.2 and later versions, the back-end network may use a leaf-spine
topology. Leaf-spine is a two-level hierarchy where nodes connect to leaf
switches, and leaf switches connect to spine switches.

InfiniBand

• InfiniBand is a high-speed unmanaged fabric. It supports both Gen 5 and Gen 6


nodes.
• InfiniBand with Gen 6 nodes is only used when Gen 6 nodes are added to a
cluster that has, or had, older generation nodes.
• The InfiniBand switches are provided with PowerScale and come in a range of
sizes.

Ethernet

• An Ethernet back-end is a high speed managed fabric with limited monitoring


capability.
• Ethernet switches only support Gen 6 nodes.
• The minimum size of the switch is 24 ports.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 34 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

Leaf and Spine

• Two level hierarchy.


• Cluster nodes connect to leaf switches which communicate with each other via
the spine switches.
• Switches are not interconnected - switches of the same type (leaf or spine) do
not connect to one another
• Each leaf switch connects with all spine switch(es)
• All leaf switches have the same number of uplinks to the spine switches.

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 35
PowerScale Physical Architecture

Enhanced Connection Management

Clients can access the cluster using DNS, and the enhanced functionality20 provides the connection
distribution policies shown in the graphic and described below; a brief sketch of two of the policies follows
the descriptions. It also provides continuous availability21 (CA) capabilities.

[Graphic: four connection-balancing policies, numbered 1-4 and described below.]

1: Determines the average throughput on each available network interface and


selects the network interface with the lowest network interface load.

20The enhanced functionality includes continuous availability for SMBv3. Clients using SMBv3 with CA or
NFSv4 with CA can dynamically move to another node in the event that the node they are connected to
goes down.

21The continuous availability feature applies to Microsoft Windows 8, Windows 10,


and Windows Server 2012 R2 clients. This feature is part of nondisruptive
operation initiative of PowerScale to give customers more options for continuous
work and less down time. The CA option enables seamless movement from one
node to another and no manual intervention on the client side.

PowerScale Concepts-SSP

Internal Use - Confidential


Page 36 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

2: Determines the average CPU utilization on each available network interface and
selects the network interface with lightest processor usage.

3: Selects the next available network interface on a rotating basis. This selection is
the default method. Without a SmartConnect license for advanced settings, this is
the only method available for load balancing.

4: Determines the number of open TCP connections on each available network


interface and selects the network interface with the fewest client connections.
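
Two of these policies can be sketched as simple selection functions over the pool of available interfaces. The names and data structures below are hypothetical and purely illustrative; they are not OneFS internals.

import itertools

interfaces = ["node1:ext-1", "node2:ext-1", "node3:ext-1"]
open_connections = {"node1:ext-1": 12, "node2:ext-1": 7, "node3:ext-1": 9}

# Round robin (the default policy): hand out interfaces in rotation.
_rotation = itertools.cycle(interfaces)
def next_round_robin():
    return next(_rotation)

# Connection count: pick the interface with the fewest open TCP connections.
def next_fewest_connections():
    return min(interfaces, key=lambda iface: open_connections[iface])

print([next_round_robin() for _ in range(4)])   # node1, node2, node3, node1
print(next_fewest_connections())                # node2:ext-1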

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 37
PowerScale Physical Architecture

N + M Data Protection

OneFS sets parity bits, also called FEC protection. In the example below, using the
parity bit (green), OneFS determines the missing pieces.

Here, if blue + yellow = green, the missing pieces are identified using the parity
bits.

[Figure: parity example - if blue + yellow = green, a missing data piece (blue or yellow) can be rebuilt from
the remaining piece and the green parity bit.]
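
The idea can be made concrete with exclusive-OR (XOR) parity, the simplest single-parity case: the parity unit is the XOR of the data units, so any one missing unit can be rebuilt from the survivors. OneFS actually uses Reed-Solomon FEC, which generalizes this to multiple parity units; the code below is a generic illustration, not the OneFS implementation.

# Generic single-parity illustration: parity ("green") = XOR of the data units.
def make_parity(blocks):
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

def rebuild(surviving_blocks, parity):
    # XOR the parity with the surviving blocks to recover the missing one.
    missing = parity
    for block in surviving_blocks:
        missing = bytes(a ^ b for a, b in zip(missing, block))
    return missing

blue, yellow = b"BLUE-DATA-", b"YELLOW-DAT"   # two equal-length data units
green = make_parity([blue, yellow])           # the parity unit
print(rebuild([yellow], green) == blue)       # True: "blue" is recovered from "yellow" + parity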

PowerScale Concepts-SSP

Internal Use - Confidential


Page 38 © Copyright 2020 Dell Inc.
PowerScale Physical Architecture

FEC Instead of RAID

FEC enables the customer to choose the number of bits of parity to implement.
One bit of parity for many disks is known as N+1; two parity points for many disks
are known as N+2, and so on.

FEC with N+1 Protection

With the N+1 protection, data is 100% available even if a drive or a node fails.

[Figure: four-node cluster with N+1 protection surviving a single failure.]

FEC with N+2, N+3, and N+4 Protection

With N+2, N+3, and N+4 protection, data is 100% available if multiple drives or
nodes fail.

[Figure: eight-node cluster with N+2/N+3/N+4 protection surviving multiple simultaneous failures.]

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 39
PowerScale Physical Architecture

File Striping Example

During a write operation, OneFS stripes the file from the client across the nodes. The system breaks
the file-based data into smaller logical sections called stripe units. The smallest element in a stripe unit is
8 kilobytes, and each stripe unit is 128 kilobytes, or sixteen 8-kilobyte blocks. If the data file is larger than
128 kilobytes, the next part of the file is written to a second node. If the file is larger than 256 kilobytes,
the third part is written to a third node, and so on. The graphic illustrates a 384-kilobyte file with 3 stripe
units and 1 FEC unit; a short arithmetic check follows the figure.

[Figure: a file divided into three stripe units plus one FEC unit, written across Nodes 1-4, which connect
through leaf switches to a spine switch.]
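
The arithmetic in the example can be checked directly: 384 KB divides into three 128 KB stripe units (each sixteen 8 KB blocks), and with one FEC unit the stripe spans four nodes. The sketch below is illustrative arithmetic only, assuming single-parity protection; it is not the OneFS layout engine.

import math

BLOCK = 8 * 1024
STRIPE_UNIT = 16 * BLOCK   # 128 KB per stripe unit

def stripe_layout(file_size, fec_units=1):
    data_units = math.ceil(file_size / STRIPE_UNIT)
    return data_units, fec_units, data_units + fec_units

data_units, fec_units, nodes_used = stripe_layout(384 * 1024)
print(data_units, "stripe units +", fec_units, "FEC unit across", nodes_used, "nodes")
# -> 3 stripe units + 1 FEC unit across 4 nodes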

PowerScale Concepts-SSP

Internal Use - Confidential


Page 40 © Copyright 2020 Dell Inc.
PowerScale Nodes

PowerScale Nodes

PowerScale Concepts-SSP

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 41
PowerScale Nodes

PowerScale Nodes

PowerScale Concepts-SSP

Internal Use - Confidential


Page 42 © Copyright 2020 Dell Inc.
PowerScale Nodes

Module Objectives

After completing this lesson, you will be able to:


• Explain PowerScale nodes.
• Discuss the Gen 6 and Gen 6.5 hardware design, PowerScale product families,
and additional features.


PowerScale Nodes Overview

Generation 6 (or Gen 6) chassis and Generation 6.5 nodes

The design goal for the PowerScale nodes is to keep the simplicity of NAS, provide the agility of the cloud, and offer the cost of commodity hardware.

Storage nodes are peers.

The Gen 6x family has different offerings that are based on the need for
performance and capacity. As Gen 6 is a modular architecture, you can scale out
compute and capacity separately. All the nodes are powered by OneFS.


PowerScale Family

Click each tab to learn about the different offerings that the Gen 6 family provides.

F-Series

The F-series nodes sit at the top of both performance and capacity, with all-flash arrays for ultra compute and high capacity. The all-flash platforms can accomplish 250,000-300,000 protocol operations per chassis and 15 GB/s aggregate read throughput from the chassis. Even when the cluster scales, the latency remains predictable.

• F80022
• F81023
• F60024
• F20025

22 The F800 is suitable for workflows that require extreme performance and
efficiency.

23 The F810 is suitable for workflows that require extreme performance and
efficiency. The F810 also provides high-speed inline data deduplication and in-line
data compression. It delivers up to 3:1 efficiency, depending on your specific
dataset and workload.

24 The F600 is ideal for small, remote clusters, with exceptional system performance for small office/remote office technical workloads.

25 The F200 is ideal as a low-cost all-flash node pool for existing Gen 6 clusters and for small, remote clusters.


H-Series

After the F-series nodes, the H-series nodes are next in terms of computing power. These are hybrid storage platforms that are highly flexible and strike a balance between large capacity and high-performance storage to support a broad range of enterprise file workloads.

• H40026
• H50027
• H560028
• H60029


26 The H400 provides a balance of performance, capacity, and value to support a wide range of file workloads. It delivers up to 3 GB/s bandwidth per chassis and provides capacity options ranging from 120 TB to 720 TB per chassis.

27The H500 is a versatile hybrid platform that delivers up to 5 GB/s bandwidth per
chassis with a capacity ranging from 120 TB to 720 TB per chassis. It is an ideal
choice for organizations looking to consolidate and support a broad range of file
workloads on a single platform.

28The H5600 combines massive scalability – 960 TB per chassis and up to 8 GB/s
bandwidth in an efficient, highly dense, deep 4U chassis. The H5600 delivers inline
data compression and deduplication. It is designed to support a wide range of
demanding, large-scale file applications and workloads.

29 The H600 is designed to provide high performance at value. It delivers up to 120,000 IOPS and up to 12 GB/s bandwidth per chassis and is ideal for high performance computing (HPC) workloads that do not require the extreme performance of all-flash.


A-Series

The A-series nodes have less compute power than the other nodes and are designed for data archival purposes. The archive platforms can be combined with new or existing all-flash and hybrid storage systems into a single cluster that provides an efficient tiered storage solution.

• A20030
• A200031


30The A200 is an ideal active archive storage solution that combines near-primary
accessibility, value and ease of use.

31The A2000 is an ideal solution for high density, deep archive storage that
safeguards data efficiently for long-term retention.


Gen 6 Hardware Components

Gen 6 requires a minimum of four nodes to form a cluster. You must add nodes to
the cluster in pairs.

The chassis holds four compute nodes and 20 drive sled slots.

Both compute modules in a node pair power on immediately when one of the nodes is connected to a power source.

Gen 6 chassis


1: The compute module bays of two nodes make up one node pair. Scaling out a cluster with Gen 6 nodes is done by adding more node pairs.

2: Each Gen 6 node provides two ports for front-end connectivity. The connectivity
options for clients and applications are 10 GbE and 40 GbE.

3: Each node can have 1 or 2 SSDs that are used as L3 cache, global namespace
acceleration (GNA), or other SSD strategies.

4: Each Gen 6 node provides two ports for back-end connectivity. A Gen 6 node supports 10 GbE, 40 GbE, and InfiniBand.

5: Power supply unit - peer node redundancy: when a compute module power supply fails, the power supply from the peer compute module in the node pair temporarily provides power to both nodes.


6: Each node has five drive sleds. Depending on the length of the chassis and the type of drive, each node can hold as many as 30 drives or as few as 15.

7: Disks in a sled are all the same type.

8: The sled can be either a short sled or a long sled. The types are:

• Long Sled - 4 drives of size 3.5"
• Short Sled - 3 drives of size 3.5"
• Short Sled - 3 or 6 drives of size 2.5"

9: The chassis comes in two different depths, the normal depth is about 37 inches
and the deep chassis is about 40 inches.

10: Large journals offer flexibility in determining when data should be moved to the disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors its journal to its peer node. The node writes the journal contents to the vault when a power loss occurs. A backup battery helps maintain power while data is stored in the vault.


Gen 6.5 Hardware Components

Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single nodes to the cluster. The F600 and F200 have a 1U form factor and are based on the R640 architecture.

Graphic shows F200 or F600 node pool.


1: Scaling out an F200 or an F600 node pool only requires adding one node.

2: For frontend connectivity, the F600 uses the PCIe slot 3.

3: Each F200 node has four SAS SSDs. Each F600 node has 8 NVMe SSDs.

4: Each F200 and F600 node provides two ports for backend connectivity, using PCIe slot 1.

5: Redundant power supply units - When a power supply fails, the secondary
power supply in the node provides power. Power is supplied to the system equally
from both PSUs when the Hot Spare feature is disabled. Hot Spare is configured
using the iDRAC settings.

6: Disks in a node are all the same type.

7: The nodes come in two different 1U models. The graphic shows the F200.

8: The F200 frontend connectivity uses the rack network daughter card (rNDC).


Important: The F600 nodes have a 4-port 1 GbE NIC in the rNDC slot.
The NIC is not allocated to any OneFS function.


Node Interconnectivity

1: Backend ports int-a and int-b. The int-b port is the upper port. Gen 6 backend
ports are identical for InfiniBand and Ethernet, and cannot be identified by looking
at the node. If Gen 6 nodes are integrated in a Gen 5 or earlier cluster, the backend
will use InfiniBand. Note that there is a procedure to convert an InfiniBand backend
to Ethernet if the cluster no longer has pre-Gen 6 nodes.

2: PowerScale nodes with different backend speeds can connect to the same backend switch without any performance issues. For example, an environment has a mixed cluster where A200 nodes have 10 GbE backend ports and H600 nodes have 40 GbE backend ports. Both node types can connect to a 40 GbE switch without affecting the performance of other nodes on the switch. The 40 GbE switch provides 40 GbE to the H600 nodes and 10 GbE to the A200 nodes.

3: Gen 6.5 backend ports use the PCIe slot.

4: There are two speeds for the backend Ethernet switches, 10 GbE and 40 GbE.
Some nodes, such as archival nodes, might not need to use all of a 10 GbE port
bandwidth while other workflows might need the full utilization of the 40 GbE port
bandwidth. The Ethernet performance is comparable to InfiniBand so there should
be no performance bottlenecks with mixed performance nodes in a single cluster.
Administrators should not see any performance differences if moving from
InfiniBand to Ethernet.

Gen 6 nodes can use either an InfiniBand or Ethernet switch on the backend.
InfiniBand was designed as a high-speed interconnect for high-performance computing, and Ethernet provides the flexibility and high speeds that sufficiently support the PowerScale internal communications.

Gen 6.5 only supports Ethernet. All new PowerScale clusters support Ethernet only.

Warning: With Gen 6, do not plug a backend Ethernet topology into a backend InfiniBand NIC. If you plug Ethernet into the InfiniBand NIC, it switches the backend NIC from one mode to the other and does not come back to the original state.


Quick Scalability

A PowerScale cluster expansion takes about 60 seconds. The primary advantage of the scale-out NAS approach is that an administrator can expand the storage simply by adding a new node. In PowerScale, once the node is racked and cabled, adding it to the cluster takes just a few minutes. That is because OneFS policies automatically discover the node, set up addresses for the node, incorporate the node into the cluster, and begin rebalancing capacity on all nodes to take advantage of the new space. The node fully configures itself, is ready for new data writes, and begins taking on data from the other nodes to AutoBalance the entire cluster.

Graphic: available space before and after scaling - the new node joins the cluster, is fully configured, and is ready to write.


Additional Features: Self-Encrypting Drives (SEDs)

• Data At Rest Encryption (DARE)
• The term “data at rest” refers to any data sitting on your drives. DARE is used for confidential or sensitive information.
• Self-encrypting drives securely store confidential data over its lifetime.
• SEDs are used in regulated verticals:
• Federal governments
• Financial services
• Healthcare (HIPAA)
• Self-encrypting drives enable retirement of hardware without data compromise.
• PowerScale implements DARE using SEDs.


PowerScale OneFS Operating System


Module Objectives

After completing this lesson, you will be able to:


• Explain OneFS.
• Discuss authentication and access.


OneFS - Distributed Clustered File System

The key to PowerScale scale-out NAS is the architecture of OneFS. Shown is a Gen 6 cluster that can scale out to 66 nodes with a single spine switch for each backend network.
• Spans nodes and runs on all nodes32.
• Grows dynamically33.
• Supports variable fault tolerance levels.
• Reed-Solomon FEC.34

32 No master node that controls the cluster.

33When nodes are added, OneFS redistributes the content to use the resources of
the entire cluster.

34 As the system writes the data, it also protects the data.


• FlexProtect35.
• Runs on all nodes.
• Each node is a peer36.
• Prevents bottlenecking37.
• A copy of OneFS is on every cluster node.
• 10 GbE and 40 GbE (Gen 6); 10 GbE, 25 GbE, and 100 GbE (Gen 6.5); and InfiniBand handle all intracluster communications.

35 Creates an n-way, redundant fabric that scales as nodes are added to the
cluster, providing 100% data availability even with four simultaneous node failures.

36 Each node shares the management workload and acts independently as a point of access for incoming data requests.

37 When there is a large influx of simultaneous requests.


Benefits of OneFS

The OneFS architecture is designed to optimize processes and applications across the cluster.
• Concurrency38.
• Shared infrastructure
• Access to resources on any node in the cluster from any other node in the
cluster.
• Performance benefits of parallel processing.
• Improved utilization of resources - compute, disk, memory, networking.
• Because all nodes work together, the more nodes, the more powerful the
cluster gets.

38When a node is added to the cluster, it adds computing power, storage, caching,
and networking resources.


Multiprotocol File Access

OneFS supports access to the same file using different protocols and
authentication methods simultaneously. SMB clients that authenticate using Active
Directory (AD), and NFS clients that authenticate using LDAP, can access the
same file with their appropriate permissions applied.
• OneFS translates Windows Security Identifiers (SIDs) and UNIX User Identities (UIDs) into a common identity format (see the sketch after this list).
• Different authentication sources.
• Permissions activities are transparent to the client.
• Users authenticate against the correct source.
• File access behaves as each protocol expects.
• Correct permissions applied - stores the appropriate permissions for each identity or group.

• File and directory permissions


• User and group identities
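The identity translation described in the first bullet is performed internally by OneFS; the sketch below only illustrates the idea, and the SID, UID, and user name used are hypothetical.

```python
# Hypothetical mapping of protocol-specific identities to one common on-disk
# identity, so the same permissions apply regardless of access protocol.
common_identity = {
    "corp\\jsmith": {
        "sid": "S-1-5-21-1004336348-1177238915-1060284298-1234",  # from AD
        "uid": 10004,                                              # from LDAP
    },
}

def resolve(token):
    """Return the common identity for either a Windows SID or a UNIX UID."""
    for name, ids in common_identity.items():
        if token in (ids["sid"], ids["uid"]):
            return name
    raise LookupError("unknown identity")

print(resolve("S-1-5-21-1004336348-1177238915-1060284298-1234"))  # SMB client
print(resolve(10004))                                              # NFS client
```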


Authentication

Authentication services offer a layer of security by verifying user credentials before allowing access to the files. Authentication answers the question, “Are you really who you say you are?”

Ensure that interactions between authentication types are understood before enabling multiple methods on the cluster.


1: Active Directory (AD): The primary reason for joining the cluster to an AD
domain is to let the AD domain controller perform user and group authentication.

2: Lightweight Directory Access Protocol (LDAP): An advantage of LDAP is the


open nature of its directory services and the ability to use LDAP across many
platforms.

3: Network Information Service (NIS): the Sun Microsystems directory access protocol.

4: Local or File Provider: OneFS supports local user and group authentication
using the web administration interface.

5: SSH authentication: SSH multifactor authentication is supported.


Policy-Based Automation

Automatically move data between different tiers of storage.

To run functions, OneFS creates automated policies.


• Repeatable - automated policies make processes repeatable, decreasing the
time spent manually managing the cluster.
• Policies managed throughout the cluster - a change to the configuration is a
change to the configuration on every node in the cluster.
• Executes policies as a cohesive system.
• Policies drive every process.

• Includes the way data is distributed across the cluster and on each node.
• Includes how client connections get distributed among the nodes, when and
how maintenance tasks are run.


Management Interfaces

The OneFS management interface is used to perform various administrative and management tasks on the PowerScale cluster and nodes. Management capabilities vary based on which interface is used. The different types of management interfaces in OneFS are:

• Serial Console39
• Web Administration Interface (WebUI)40
• Command Line Interface (CLI)41

39The serial console is used for initial cluster configurations by establishing serial
access to the node designated as node 1.

40 The browser-based OneFS web administration interface provides secure access with OneFS-supported browsers. This interface is used to view robust graphical monitoring displays and to perform cluster-management tasks.


• Platform Application Programming Interface (PAPI)42


• Front Panel Display43

41 The command-line interface runs isi commands to configure, monitor, and manage the cluster. Access to the command-line interface is through a secure shell (SSH) connection to any node in the cluster.

42The PAPI is divided into two functional areas: one area enables cluster
configuration, management, and monitoring functionality, and the other area
enables operations on files and directories on the cluster.
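As an illustration of the PAPI, the following minimal Python sketch issues a cluster-configuration query over HTTPS. The port (8080) and endpoint path shown are typical defaults and the cluster name and credentials are hypothetical; verify the exact endpoint against the API reference for your OneFS version.

```python
# Minimal sketch of querying the PAPI with the requests library (illustrative only).
import requests

CLUSTER = "cluster.example.com"   # hypothetical cluster name
AUTH = ("admin", "password")      # hypothetical credentials

resp = requests.get(
    f"https://{CLUSTER}:8080/platform/3/cluster/config",
    auth=AUTH,
    verify=False,   # only acceptable for lab clusters with self-signed certificates
)
resp.raise_for_status()
print(resp.json().get("name"))    # cluster name from the configuration document
```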

43The Front Panel Display is located on the physical node or chassis. It is used to
perform basic administrative tasks onsite.


Built-In Administration Roles

So who is allowed to access and make configuration changes using the cluster
management tools? In addition to the integrated root and admin users, OneFS
provides role-based access control (RBAC). With RBAC, you can define privileges
to customize access to administration features in the OneFS WebUI, CLI, and for
PAPI management.
• Grant or deny access to management features.

Graphic: the WebUI as seen by a configured user with restricted privileges (restricted options are not displayed) compared with root user privileges.

• RBAC
• Set of global admin privileges
• Five preconfigured admin roles
• Zone RBAC (ZRBAC)
• Set of admin privileges specific to an access zone
• Two preconfigured admin roles
• Can create custom roles.
• Assign users to one or more roles.
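A minimal sketch, not OneFS code, of the role-based model described above: privileges are grouped into roles, and a user holds whatever privileges their assigned roles grant. The role and privilege names here are hypothetical, not actual OneFS privilege identifiers.

```python
# Illustrative RBAC check with hypothetical roles, privileges, and users.
roles = {
    "AuditAdmin":    {"view_config", "view_logs"},
    "SecurityAdmin": {"manage_auth", "manage_roles"},
}
assignments = {"jsmith": ["AuditAdmin"], "akumar": ["AuditAdmin", "SecurityAdmin"]}

def has_privilege(user, privilege):
    """True if any role assigned to the user grants the privilege."""
    return any(privilege in roles[r] for r in assignments.get(user, []))

print(has_privilege("jsmith", "manage_auth"))   # False: read-only role
print(has_privilege("akumar", "manage_auth"))   # True: also a SecurityAdmin
```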


Secure Remote Services

Graphic: Secure Remote Services connects the cluster location to the Dell Technologies support location.

If there is an issue with your cluster, there are two types of support available. You
can manually upload logfiles to the Dell Technologies support FTP site, or use
Secure Remote Services.
• Manually FTP upload logfiles
• As needed.
• Support requests logfiles.
• Secure Remote Services
• Broader product support.
• Manual logfile uploads.
• 24x7 remote monitoring - node-by-node basis and sends alerts regarding
the health of devices.
• Allows remote cluster access - requires permission.
• Secure authentication with AES 256-bit encryption and RSA digital
certificates.
• Log files provide detailed information about the cluster activities.
• Remote session that is established through SSH or the WebUI - support
personnel can run scripts that gather diagnostic data about cluster settings and
operations. Data is sent to a secure FTP site where service professionals can
open support cases and troubleshoot on the cluster.

Data Management and Security


Module Objectives

After completing this lesson, you will be able to:


• Explain data distribution, I/O optimization, and protection.
• Discuss quotas and deduplication.
• Define data resiliency, data recovery, and retention.


Data Distribution Across Cluster

Node Pool
- Nodes are assigned based on type.
- Functions as a single data target location.

Tiers
- Groups of node pools.
- Data can be written to a tier; if SmartPools is licensed, data can be written to a specific node pool.

Policy Options
- Default policy: write anywhere.
- Data goes to the node pools having the most available space.

Data distribution is how OneFS spreads data across the cluster. Various models of PowerScale nodes, or node types, can be present in a cluster. Nodes are assigned to node pools based on the model type, number of drives, and the size of the drives. The cluster can have multiple node pools, and groups of node pools can be combined to form tiers of storage. Data is distributed among the different node pools based on the highest percentage of available space. This means that the data target can be a pool or a tier anywhere on the cluster.


Data IO Optimization

Optimization is managed cluster-wide by default, or can be managed per directory or per file. Access patterns: random, concurrent, or sequential, with a configurable prefetch cache.

You can optimize data input and output to match the workflows for your business.
By default, optimization is managed cluster-wide, but you can manage individual
directories or individual files. The data access pattern can be optimized for random
access, sequential access, or concurrent access. For example, sequential
optimization has aggressive prefetching. The prefetch, or read ahead, is an
optimization algorithm that attempts to predict what data is needed next, before the
request is made. When clients open larger files, especially streaming formats like
video and audio, OneFS assumes that you will watch minute four of the video after
minute three. Prefetch proactively loads minutes four, five, and sometimes even six
into memory before it is requested. Prefetch delivers those minutes faster than
returning to the hard drive for each request. With OneFS, you can configure the
prefetch cache characteristics to work best with the selected access pattern.
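A minimal sketch, not OneFS code, of the read-ahead idea described above: when the access pattern is sequential, the next chunks are loaded into a cache before they are requested. The chunk size, prefetch depth, and read_chunk helper are hypothetical.

```python
# Toy read-ahead cache illustrating sequential prefetch.
CHUNK = 128 * 1024
PREFETCH_DEPTH = 3
cache = {}

def read_chunk(n):
    return b"\x00" * CHUNK   # stand-in for an actual disk read

def read(n, sequential=True):
    data = cache.pop(n, None) or read_chunk(n)
    if sequential:   # aggressive prefetch for sequential access patterns
        for ahead in range(n + 1, n + 1 + PREFETCH_DEPTH):
            cache.setdefault(ahead, read_chunk(ahead))
    return data

read(0)                 # reads chunk 0 and prefetches chunks 1-3
print(sorted(cache))    # [1, 2, 3]
```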


Data Protection for Simultaneous Failures

Performance optimization is the first thing a customer notices about their cluster in day-to-day operations. But what does the average administrator notice second? They notice when a cluster has issues, after they notice how great it works. They want it fast, and they want it to work. That is a reason why data protection is essential.

Data protection level refers to how many components in a cluster can malfunction
without loss of data.
• Flexible and configurable.
• Virtual hot spare - allocate disk space to hold data as it is rebuilt when a disk
drive fails.
• Select FEC protection by node pool, directory, or file.
• Extra protection creates more FEC stripes, increasing overhead.
• Standard functionality is available in the unlicensed version of SmartPools.

As an example, a research and development department has a node pool that is dedicated to testing. Because the test data is not production data, the minimal N+1 protection is set. The customer database, however, is a valuable asset. Customer data is written to a different node pool and set to a higher level of protection, such as N+4.


User Quotas for Capacity Management

You can subdivide capacity usage by assigning storage quotas to users, groups,
and directories.
• Policy-based quota management.
• Nesting - place a quota on a department, and then a smaller quota on each
department user, and a different quota on the department file share.
• Thin provisioning - shows available storage even if capacity is not available.
• Quota types (see the sketch after this list)
• Accounting - informational only; the quota can be exceeded.
• Enforcement soft limit - a notification is sent when exceeded.
• Enforcement hard limit - writes are denied.
• Customizable quota notifications.
• Requires SmartQuotas license.
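The quota types above can be sketched as follows. This is illustrative only; real SmartQuotas configuration is done through the management interfaces, and the thresholds shown are hypothetical.

```python
# Hypothetical soft/hard thresholds on a directory quota.
HARD = 100 * 2**30   # 100 GiB: writes beyond this are denied
SOFT = 80 * 2**30    # 80 GiB: exceeding this triggers a notification
used = 0

def write(nbytes):
    """Accounting always tracks usage; soft and hard limits add enforcement."""
    global used
    if used + nbytes > HARD:
        raise IOError("hard quota exceeded: write denied")
    used += nbytes
    if used > SOFT:
        print("soft quota exceeded: notification sent")

write(85 * 2**30)        # prints: soft quota exceeded: notification sent
try:
    write(20 * 2**30)
except IOError as err:
    print(err)           # prints: hard quota exceeded: write denied
```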


Deduplication for Data Efficiency

Deduplication provides an automated way to increase storage efficiency. OneFS finds duplicate sets of data blocks, and then stores only a single copy of any data block that is duplicated (a simplified sketch follows the list below).
• Consolidates duplicate data blocks.
• Post process - analyzes data that is already stored.
• Block-level deduplication at the 8 KB block level on files over 32 KB.
• Directory-level granularity.
• Dry-run assessment tool - test drive.
• Requires SmartDedupe license.
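A minimal sketch, assuming fixed 8 KB blocks and content hashing, of the post-process deduplication idea: hash each block and keep only one copy of each unique block. SmartDedupe itself is far more sophisticated; this only illustrates the concept.

```python
import hashlib

BLOCK = 8 * 1024
block_store = {}    # digest -> single stored copy of the block
file_index = {}     # filename -> ordered list of block digests

def ingest(name, data):
    digests = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)   # duplicates stored only once
        digests.append(digest)
    file_index[name] = digests

ingest("a.dat", b"x" * (64 * 1024))
ingest("b.dat", b"x" * (64 * 1024))             # identical content
print(len(block_store))                          # 1 unique block stored
```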


Data Visibility and Analytics

InsightIQ is a powerful tool that monitors one or more clusters and then presents
data in a robust graphical interface with reports you can export. You can examine
the information and break out specific information you want, and even take
advantage of usage growth and prediction features. InsightIQ offers:
• Monitor system usage - performance and file system analytics.
• Requires a server or VMware system external to cluster.
• Free InsightIQ license.

Powerful multicluster monitoring tool. Graphical presentation with reporting data. Elaborate drill-down and breakout capabilities.


Data Integrity - FEC Protection

Each stripe is protected separately with forward error correction (FEC) protection
blocks, or parity. Shown is a 1-megabyte file that is divided into two stripe units with
N+2 protection.
• Protected at data stripe - one or two data or protection stripe units are contained
on a single node for any given data stripe.
• Striped across nodes.
• Variable protection levels - set separately for node pools, directories, or even
individual files.
• Set at node pool, directory, or file.
• High availability is integrated - data is spread onto many drives and multiple
nodes, all ready to help reassemble the data when a component fails.


Data Resiliency - Snapshots

Data resiliency is the ability to recover past versions of a file that has changed over
time. Sooner or later, every storage admin gets asked to roll back to a previous
“known good” version of a file. OneFS provides this capability using snapshots.
• File change rollback technology - called snapshots.
• Copy-on-write (CoW) - writes the original blocks to the snapshot version first, and then writes the new data to the file system; it incurs a double write penalty but causes less fragmentation.
• Redirect-on-write (RoW) - writes changes into available file system space and then updates pointers to reference the new changes; there is no double write penalty but more fragmentation. (Both approaches are compared in the sketch after this list.)
• Policy-based
• Scheduled snapshots
• Policies determine the snapshot schedule, path to the snapshot location,
and snapshot retention periods.
• Deletions happen as part of a scheduled job, or are deleted manually.
• Out of order deletion allowed, but not recommended.
• Some system processes use with no license required.
• Full capability requires SnapshotIQ license.
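The following toy comparison of CoW and RoW is illustrative only; the block numbers and structures are hypothetical and do not represent OneFS on-disk formats.

```python
live = {0: "A0", 1: "B0"}   # logical block -> current contents
snap = {}                    # snapshot view of blocks changed since the snapshot

def cow_write(block, new):
    # Copy the original contents aside for the snapshot, then overwrite in
    # place: a double write penalty, but the live file stays unfragmented.
    snap[block] = live[block]
    live[block] = new

def row_write(block, new):
    # Leave the original contents in place for the snapshot and write the new
    # contents elsewhere, updating the live pointer: no double write penalty,
    # but the live file becomes more fragmented over time.
    snap[block] = live[block]
    live[block] = {"new_location": new}

cow_write(0, "A1")
row_write(1, "B1")
print(live)   # {0: 'A1', 1: {'new_location': 'B1'}}
print(snap)   # {0: 'A0', 1: 'B0'}
```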


Data Recovery - Backup

Gen 6 with the Fibre Channel combo card.

PowerScale supports NDMP for integration with backup applications such as Symantec, EMC, CommVault, and IBM. A backup application external to the cluster manages the backup process.
• Backup is managed over the external network in one of two ways.
• Direct to a backup device over the LAN - slower performance.
• Gen 5 Backup Accelerators or the Gen 6 Fibre Channel combo card.
• NDMP support comes standard.


Data Recovery - Replication

Graphic: replication from a source cluster to a target cluster.

Replication keeps a copy of data from one cluster on another cluster. OneFS
replicates during normal operations, from one PowerScale cluster to another.
Replication may be from one to one, or from one to many PowerScale clusters.

Cluster-to-cluster synchronization

• Scheduled replication over LAN or WAN.


• PowerScale to PowerScale only.
• One-way replication.

Two replication types

The two types of replication are:

• Copy - new files on the source are copied to the target, while files deleted on
the source remain unchanged on the target.
• Synchronization - only works in one direction and both the source and target
clusters maintain identical file sets, except that files on the target are read-only.

Policy-based synchronization jobs

Per directory or for specific types of data and can set exceptions to include or
exclude specific files.

• Manual start


• On schedule
• When changes made

Bandwidth throttling

Bandwidth throttling is used on replication jobs to optimize resources for high-priority workflows.


Data Retention

Data retention is the ability to prevent data from being deleted or modified before
some future date. In OneFS, you can configure data retention at the directory level,
so that different directories can have different retention policies. You can also use
policies to automatically commit certain types of files for retention.
• Two modes of retention
• Enterprise (more flexible) - enable privileged deletes by an administrator.
• Compliance (more secure) - designed to meet SEC regulatory requirements.
Once data is committed to disk, individuals cannot change or delete the data
until the retention clock expires - OneFS prohibits clock changes.
• Compatible with SyncIQ replication.
• Requires SmartLock license.


PowerScale and Big Data


Module Objectives

After completing this lesson, you will be able to:


• Identify PowerScale’s Big Data position.
• Define Edge-to-Core-to-Cloud


What Is Big Data?

• Big Data is a collection of data so large, diverse, and fast-changing that it is difficult for traditional technology to efficiently process and manage.
• Big Data has too much volume, velocity, and variety.
• It is difficult to process and store using traditional means.


Big Data - Volume, Velocity, Variety

The “Three v's”– volume, velocity, and variety – often arrive together. When they
combine, administrators truly feel the need for high performance, higher capacity
storage. The three V's generate the challenges of managing Big Data.

Growing data has also forced an evolution in storage architecture over the years
due to the amount of maintained data. PowerScale is a Big Data solution because
it can handle the volume, velocity, and variety that defines the fundamentals of Big
Data.

1: Challenge: Nonflexible data protection. When you have Big Data volumes of
information to store, it had better be there, dependably. If an organization relies on
RAID to protect against data loss or corruption, the failure of a single disk drive
causes a disproportionate inconvenience. The most popular RAID implementation
scheme allows the failure of only two drives before data loss. (A sizable Big Data
installation easily has more than 1000 individual hard drives, so odds are at least
one drive is down at any time.) The simpler answer is to protect data using a
different scheme.

What is meant by volume? Consider any global website that works at scale. One
example of Big Data volume is the YouTube press page that says YouTube ingests
100 hours of video every minute.

2: What is an example of velocity? Machine-generated workflows produce massive volumes of data. For example, the longest stage of designing a computer chip is physical verification, where the chip design is tested in every way to see not only if it works, but also if it works fast enough. Each time researchers fire up a test on a graphics chip prototype, sensors generate many terabytes of data per second. Storing terabytes of data in seconds is an example of Big Data velocity.

3: Perhaps the best example of variety is the migration of the world to social media.
On a platform such as Facebook, people post all kinds of file formats: text, photos,
video, polls, and more. Many kinds of data at that scale represent Big Data variety.


Big Data Challenges: Volume

Conventional Challenge → PowerScale's Answer
• Complex data architecture44 → Single volume/single LUN
• Low utilization of raw capacity45 → High (80%+) utilization
• Non-flexible data protection → Scalable resiliency

44 Challenge: SAN and scale-up NAS data storage architectures encounter a logical limit at 16 TB. This means that no matter what volume of data arrives, a storage administrator has to subdivide it into partitions smaller than 16 terabytes. The smaller partitions cause silos of data. To simplify this challenge, scale-out NAS such as a PowerScale cluster holds everything in one single volume with one LUN. PowerScale can scale seamlessly without architectural hard stops forcing subdivisions on the data.

45 Challenge: SAN and scale-up NAS architectures must reserve much of the raw
capacity of the system for management and administrative overhead. Overhead
includes RAID parity disks, metadata for all the LUNs and mega LUNs, duplicate
copies of the file system, and so on. As a result, conventional SAN and NAS
architectures often use half of the raw capacity available, because of the headroom
for each separate stack of storage. Suppose that you have seven different silos of
data. When you put them in one large volume, you immediately get back the
headroom from six of the seven stacks. In that way, PowerScale offers high
utilization. PowerScale customers routinely use 80% or more of raw disk capacity.


Big Data Challenges: Velocity

Conventional Challenge → PowerScale's Answer
• Difficult to scale performance46 → Linear scalability
• Silos of data47 → No hot spots

46 Some data storage architectures use two controllers, sometimes called servers
or filers, to run a stack of many hard drives. You can scale capacity by adding more
hard drives, but it is difficult to scale performance. In a given storage stack, the
hard drives offer nothing but capacity. All the intelligence of the system, including
computer processing and RAM, must come from the two filers. If the horsepower of
the two filers becomes insufficient, the architecture does not enable you to pile on
more filers. You start over with another stack and two more filers. In contrast, every
node in an PowerScale cluster contains capacity plus computing power plus
memory. The nodes can work in parallel, so each node you add scales out linearly.
In other words, all aspects of the cluster scale up, including capacity and
performance.

47 Due to the architectural restrictions, SAN and scale-up NAS end up with several isolated stacks of storage. Many sites have a different storage stack for each application or department; a backup storage stack is an example. There is no automated way to share capacity between stacks; instead, an administrator has to manually arrange a data migration. If the R&D stack performs product testing that generates results at Big Data velocity, the company may establish an HPC stack, which could reach capacity rapidly. Other departments or workflows may have independent storage stacks with a lot of capacity remaining, but there is no automated way for R&D to offload their HPC overflow. In contrast, a PowerScale cluster distributes data across all its nodes to keep them all at equal capacity. You do not have one node that is overworked while other nodes sit idle.


• Challenge48 → Parallel processing
• Many manual processes49 → Policy-driven

There are no hot spots, and thus, no manual data migrations. If the goal is to keep
pace with Big Data velocity, automated balancing makes more sense.

48In conventional storage, a file is typically confined to a RAID stripe. That means
that the maximum throughput of reading that file is limited to how fast those drives
can deliver the file. In modern workflows where a hundred engineers or a thousand
digital artists access a file, the RAID drives cannot keep up. Perhaps the two filers
on that stack cannot process that many requests efficiently. With PowerScale,
every node has at least a dozen drives, plus more RAM and more computer
processing, for more caching and better concurrent access. When there is heavy
demand for a file, several nodes can deliver it.

49 Besides manual data migrations, conventional storage has many more manual
processes. A SAN or a scale-up NAS administrator spends a significant amount of
time creating and managing LUNs, partitioning storage, establishing mounts,
launching jobs, and so on. In contrast, PowerScale is policy-driven. Once you
define your policies, the cluster does the rest automatically.


Big Data Challenges: Variety

A scale-out Data Lake is a large storage solution where vast amounts of data from
other solutions or locations are combined into a single store. Elements of a data
lake are:
• Digital repository to store massive data.
• Variety of formats.
• Can do computations and analytics on original data.
• Helps address the variety issue with Big Data.
• Data can be secured, analyzed, and actions taken based on insights.
• Enterprises can eliminate the cost of having silos of information.
• Provides scaling capabilities in terms of capacity, performance, security, and
protection.


Big Data Positioning of PowerScale

PowerScale scale-out NAS architecture simplifies managing Big Data.
• Scale-out NAS – simplifies Big Data management.
• 1,000's of PBs of file-based data – one volume, one namespace, one file system.
• Purpose-built to simplify Big Data challenges.
• Multiprotocol capable.


PowerScale OneFS: Scale-Out Data Lake

Key characteristics: single volume/file system, simplicity and ease of use, high performance, linear scalability, unmatched efficiency, easy growth, cloud tiering ready, and Hadoop enabled.

PowerScale is the industry leading scale-out clustered storage solution. It provides


a single volume of data storage at a massive scale that is easy to use and manage.
It offers linear scalability and readiness for performance applications, Hadoop
analytics, and other workflows.

A Data Lake is a central data repository that stores data from various sources, such
as file shares, web apps, and the cloud. It enables businesses to access the same
data for various uses and enables the manipulation of data using various clients,
analyzers, and applications. The data is real-time production data with no need to
copy or move it from an external source, like another Hadoop cluster, into the Data
Lake. The Data Lake provides tiers that are based on data usage, and the ability to
instantly increase the storage capacity when needed. This slide identifies the key
characteristics of a scale-out Data Lake.


PowerScale CloudPools

The PowerScale CloudPools software enables you to select from various public cloud services or use a private cloud. CloudPools offers the flexibility of another tier of storage that is off-premises and off-cluster. Essentially, CloudPools provides a lower TCO50 for archival-type data.
• Treat cloud storage as another cluster-connected tier.
• Policy-based automated tiering
• Address rapid data growth and optimize data center storage resources - use
valuable on-site storage resources for active data.
• Send rarely used or accessed data to cloud.
• Seamless integration with data – retrieve at any time.

50 CloudPools optimizes primary storage with intelligent data placement. CloudPools eliminates management complexity and enables a flexible choice of cloud providers.


• Data remains encrypted in the cloud until retrieval.
• Connect to ECS, another PowerScale cluster, Amazon S3, Virtustream, Microsoft Azure, Google Cloud, and Alibaba.
• Policies automatically move specified files to the cloud.


PowerScale and Edge-to-Core-to-Cloud

PowerScale can consolidate file-based, unstructured data into a Data Lake. It eliminates costly storage silos, simplifies management, increases data protection, and acquires more value from your data assets. With integrated multiprotocol capabilities, PowerScale can support a wide range of traditional and next-generation applications on a single platform. Support includes powerful Big Data analytics that provide you with better insight and use of your stored information.

Edge locations are often inefficient islands of storage, running with limited IT
resources, and inconsistent data protection practices. Data at the edge generally
lives outside of the Data Lake, making it difficult to incorporate into data analytics
projects. The edge-to-core-to-cloud approach extends the Data Lake to edge
locations and out into the cloud. It enables consolidation, protection, management,
and backups of remote edge location data.

Course Summary

Now that you have completed this course, you can:


→ Compare structured and unstructured data.
→ Identify the PowerScale physical architecture.
→ Discuss nodes workflow application, node model details, and adding new
nodes to the cluster.
→ Describe the PowerScale OneFS operating system.
→ Explain data management and security in PowerScale.
→ Discuss PowerScale with a Big Data solution.


Appendix


PowerScale Nodes
Individual PowerScale nodes provide the data storage capacity and processing
power of the PowerScale scale-out NAS platform. All of the nodes are peers to
each other and so there is no single 'master' node and no single 'administrative
node'.

• No single master
• No single point of administration

Administration can be done from any node in the cluster as each node provides
network connectivity, storage, memory, non-volatile RAM (NVDIMM) and
processing power found in the Central Processing Units (CPUs). There are also
different node configurations, compute, and capacity. These varied configurations
can be mixed and matched to meet specific business needs.

Each node contains:

• Disks

• Processor

• Cache

• Front-end network connectivity


Tip: Gen 5 and Gen 6 nodes can exist within the same cluster. Every
PowerScale node is equal to every other PowerScale node of the
same type in a cluster. No one specific node is a controller or filer.



POWERSCALE
ADMINISTRATION

COURSE GUIDE

(V3)

PARTICIPANT GUIDE
Table of Contents

NAS, PowerScale, and OneFS ...................................................................................... 5


Network Attached Storage ................................................................................................... 6
PowerScale........................................................................................................................ 15
PowerScale Management Interfaces.................................................................................. 35
Common Cluster Operations .............................................................................................. 43
OneFS Directory Structure ................................................................................................. 48

Configuring the Foundation for Access ................................................................. 53


Authentication Providers .................................................................................................... 54
Access Zones .................................................................................................................... 65
Groupnets .......................................................................................................................... 73
Subnet - SmartConnect Zones ........................................................................................... 80
IP Address Pools ............................................................................................................... 87

Configuring Identity Management and Authorization ........................................... 96


Role-Based Access Control ............................................................................................... 97
User Identity Mapping ...................................................................................................... 107
Authorization .................................................................................................................... 117

Configuring Client Access to Data ....................................................................... 131


OneFS Caching ............................................................................................................... 132
SMB Shares..................................................................................................................... 143
NFS Exports .................................................................................................................... 156
S3 Buckets....................................................................................................................... 163
HDFS and Swift ............................................................................................................... 171

Foundations of Data Protection and Data Layout ............................................... 173


File Striping ...................................................................................................................... 174
Data Protection ................................................................................................................ 182
Protection Management ................................................................................................... 200
Data Layout ..................................................................................................................... 211

PowerScale Administration-SSP1

Page ii © Copyright 2020 Dell Inc.


Configuring Storage Pools .................................................................................... 216
Storage Pools .................................................................................................................. 217
File Pools ......................................................................................................................... 226
SmartPools ...................................................................................................................... 239
CloudPools ...................................................................................................................... 248

Configuring Data Services .................................................................................... 261


File Filtering ..................................................................................................................... 262
SmartQuotas.................................................................................................................... 269
SmartDedupe................................................................................................................... 283
SnapshotIQ ...................................................................................................................... 293
SyncIQ ............................................................................................................................. 305
SmartLock........................................................................................................................ 326

Monitoring Tools .................................................................................................... 333


PowerScale HealthCheck ................................................................................................ 334
InsightIQ .......................................................................................................................... 341
DataIQ v1 ........................................................................................................................ 352
isi statistics ........................................................................................................... 381

Appendix ............................................................................................... 389

Glossary ................................................................................................ 425

NAS, PowerScale, and OneFS


Network Attached Storage

Scenario

IT Manager: You are responsible for the administration and management of the PowerScale cluster. We have a newly installed cluster that is powered on. The cluster has initial IP addresses and DNS configured. Now, before you jump in and start exploring its capabilities, I want you to explain a few things.

Your Challenge: The IT manager wants you to compare PowerScale to traditional NAS platforms and describe scale-up and scale-out architecture.

Storage Technologies

DAS

In the early days of system data, corporations1 stored data on hard drives in a server. To minimize risk, corporations mirrored the data using RAID. This technique is called Direct Attached Storage (DAS).

1The intellectual property of the company depended entirely upon that hard drive's
continued functionality.


Graphic: a server with RAID-protected direct attached storage (DAS).

SAN

As applications proliferated, soon there were many servers, each with its own DAS. This worked fine, with some drawbacks2. Due to this limitation of DAS, the SAN was introduced, which effectively used a volume manager and RAID.

Graphic: a SAN combining a volume manager and RAID.

NAS

SAN was set up for servers, not personal computers3 (PCs).

2If one server’s DAS was full while another server’s DAS was half empty, the
empty DAS could not share its space with the full DAS.

3 PCs worked differently from the storage file server; the network communications in PCs only go from one file system to another file system.


The breakthrough came when corporations put employee computers on the network and added a file system to the storage to communicate with users.

From this, Network Attached Storage (NAS) was born.

NAS works pretty well, but there is room for improvement4.

Graphic: NAS combining a file system, a volume manager, and RAID.

CAS

Content Address Storage (CAS) is object-based storage that separates metadata


from its objects. CAS has three properties: Clip Descriptor File (CDF), object,
metadata. The CDF contains addresses that point to object data and metadata.

4 The server is spending as much time servicing employee requests as it is doing the application work it was meant for. The file system does not know where data is supposed to go, because that is the volume manager’s job. The volume manager does not know how the data is protected; that is RAID’s job. If high-value data needs more protection than other data, you need to migrate the data to a different volume that has the protection level that data needs. So there is opportunity to improve NAS.


Cloud

Cloud storage stores data over the Internet to a cloud provider. The cloud provider
manages and protects the data. Typically, cloud storage is delivered on demand
with just-in-time capacity and costs.

NAS Overview

NAS provides the advantages of server consolidation by eliminating the need for
multiple file servers.

• Consolidates the storage that is used by the clients onto a single system,
making it easier to manage the storage.
• Uses network and file-sharing protocols to provide access to the file data5.
• Uses its own operating system6 and integrated hardware and software
components to meet specific file-service needs.

Scale-Up versus Scale-Out Architecture

PowerScale clusters are a NAS solution. There are two types of NAS architectures: scale-up and scale-out.

5NAS enables both UNIX and Microsoft Windows users to share the same data
seamlessly.

6 Its operating system is optimized for file I/O and, therefore, performs file I/O better
than a general-purpose server. As a result, a NAS device can serve more clients
than general-purpose servers and provide the benefit of server consolidation.


Scale-Up

• With a scale-up platform, if more storage is needed, another independent NAS system is added to the network.
• A scale-up solution has controllers that connect to trays of disks and provide the
computational throughput.
• Traditional NAS is great for specific types of workflows, especially those
applications that require block-level access.

Clients connect to independent systems on the network (separate points of management); each system is a controller with disk shelves holding structured or unstructured storage; additional storage is usually restricted to tens or hundreds of TBs.

Graphic highlighting adding controllers for a scale-up solution.

Scale-Out

• With a clustered NAS solutions, or scale-out architecture, all the NAS boxes, or
PowerScale nodes, belong to a unified cluster with a single point of
management.
• In a scale-out solution, the computational throughput, disks, disk protection, and
management are combined and exist for a single cluster.


Clients connect to a PowerScale cluster holding unstructured storage; the cluster scales to 1000+ PBs and up to 252 nodes; adding storage adds compute and bandwidth.

Graphic highlighting adding nodes for a scale-out solution.

Scale-Out NAS

Scale-out NAS7 is now a mainstay in most data center environments. The next
wave of scale-out NAS innovation has enterprises embracing the value8 of NAS
and adopting it as the core of their infrastructure.

7The PowerScale scale-out NAS storage platform combines modular hardware


with unified software to harness unstructured data. Powered by the OneFS
operating system, a PowerScale cluster delivers a scalable pool of storage with a
global namespace.

8 Enterprises want to raise the standard on enterprise-grade resilience, with a no-
tolerance attitude toward data loss and data unavailable situations, and with support
for features that simplify management. Organizations see massive scale and
performance with smaller data center rack footprints, driven by performance-centric
workloads.


1: The unified software of the platform provides centralized web-based and


command-line administration to manage the following features:

• A cluster that runs a distributed file system.


• Scale-out nodes that add capacity and performance.
• Storage options that manage files and tiering.
• Flexible data protection and high availability.
• Software modules that control costs and optimize resources.

OneFS Operating System


With traditional NAS systems the file system9, volume manager10, and the
implementation of RAID11 are all separate entities.

OneFS is the operating system and the underlying file system that drives and
stores data.
OneFS is a single file system that performs the duties of the volume manager and
applies protection.
OneFS is built on FreeBSD.
• Creates a single file system for the cluster.12
• Volume manager and protection.13
• Data shared across cluster.14
• Scale resources.15

9The file system is responsible for the higher-level functions of authentication and
authorization.

10 The volume manager controls the layout of the data.

11 RAID controls the protection of the data.

12As nodes are added, the file system grows dynamically and content is
redistributed.

13 OneFS performs the duties of the volume manager and applies protection to the
cluster as a whole. There is no partitioning, and no need for volume creation. All
data is striped across all nodes.

14Because all information is shared among nodes, the entire file system is
accessible by clients connecting to any node in the cluster.

15Each PowerScale storage node contains globally coherent RAM, meaning that,
as a cluster becomes larger, it also becomes faster. When adding a node, the
performance scales linearly.


Challenge

IT Manager:
Open participation question:
Question: What is the difference between scale-up and scale-out
architecture?


PowerScale

Scenario

IT Manager: Now that you have an understanding of the scale-out and


scale-up storage and the OneFS operating system, we will first focus on
the PowerScale nodes.

Your Challenge: The IT manager wants to ensure you have a good


understanding of the PowerScale hardware. Discuss the benefits of the
PowerScale nodes and identify the type of workflow suited for the
different PowerScale platforms.

PowerScale Hardware Overview

Generation 6 (or Gen 6) chassis and Generation 6.5 nodes

PowerScale includes all-flash, hybrid, and archive storage systems.

Gen 6 highlights.
Gen 6.5 highlights.


Gen 6 Hardware Components

Gen 6 requires a minimum of four nodes to form a cluster. You must add nodes to
the cluster in pairs.

The chassis holds four compute nodes and 20 drive sled slots.

Both compute modules in a node pair power-on immediately when one of the
nodes is connected to a power source.

Gen 6 chassis


1: The compute module bays of two nodes make up one node pair. Scaling out a
cluster with Gen 6 nodes is done by adding more node pairs.

2: Each Gen 6 node provides two ports for front-end connectivity. The connectivity
options for clients and applications are 10 GbE, 25 GbE, and 40 GbE.

3: Each node can have 1 or 2 SSDs that are used as L3 cache, global namespace
acceleration (GNA), or other SSD strategies.

4: Each Gen 6 node provides two ports for back-end connectivity. A Gen 6 node
supports 10 GbE, 40 GbE, and InfiniBand.

5: Power supply unit - Peer node redundancy: When a compute module power
supply failure takes place, the power supply from the peer node temporarily
provides power to both nodes.

6: Each node has five drive sleds. Depending on the length of the chassis and type
of the drive, each node can handle up to 30 drives or as few as 15.


7: Disks in a sled are all the same type.

8: The sled can be either a short sled or a long sled. The types are:

• Long Sled - four drives of size 3.5"


• Short Sled - three drives of size 3.5"
• Short Sled - three or six drives of size 2.5"

9: The chassis comes in two different depths, the normal depth is about 37 inches
and the deep chassis is about 40 inches.

10: Large journals offer flexibility in determining when data should be moved to the
disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors its
journal to its peer node. The node writes the journal contents to the vault when a
power loss occurs. A backup battery helps maintain power while data is stored in
the vault.

Gen 6.5 Hardware Components

Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.

Graphic shows F200 or F600 node pool.


1: Scaling out an F200 or an F600 node pool only requires adding one node.

2: For front-end connectivity, the F600 uses the PCIe slot 3.


3: Each F200 and F600 node provides two ports for backend connectivity. The
PCIe slot 1 is used.

4: Redundant power supply units - When a power supply fails, the secondary
power supply in the node provides power. Power is supplied to the system equally
from both PSUs when the Hot Spare feature is disabled. Hot Spare is configured
using the iDRAC settings.

5: Disks in a node are all the same type. Each F200 node has four SAS SSDs.

6: The nodes come in two different 1U models, the F200 and F600. You need
nodes of the same type to form a cluster.

7: The F200 front-end connectivity uses the rack network daughter card (rNDC).

8: Each F600 node has 8 NVMe SSDs.

Important: The F600 nodes have a 4-port 1 GbE NIC in the rNDC slot. OneFS does
not support this NIC on the F600.

PowerScale Node Specifications

PowerScale offers nodes for different performance and capacity workloads. The
following lists some of the node specifications. For the latest and complete list of
specifications, and to compare the node offerings, browse the product page.

F800
• Drive type: 2.5" SSD
• Drive capacities: 1.6 TB, 3.2 TB, 3.84 TB, 7.68 TB, 15.4 TB
• Drives per chassis: 60
• Infrastructure networking per node: 2 InfiniBand connections supporting QDR links, or 2 x 40 GbE (QSFP+)
• Front-end networking per node: 2 x 10 GbE (SFP+) or 2 x 40 GbE (QSFP+)
• Compute per node: Ultra compute bundle - 16 core, 2.6 GHz Intel® Xeon® Processor E5-2697A v4; 256 GB DDR4 RAM

F810
• Drive type: 2.5" SSD
• Drive capacities: 3.84 TB, 7.68 TB, 15.36 TB
• Drives per chassis: 60
• Infrastructure networking per node: 2 x 40 GbE (QSFP+)
• Front-end networking per node: 2 x 10 GbE (SFP+) or 2 x 40 GbE (QSFP+)
• Compute per node: Ultra compute bundle - 16 core, 2.6 GHz Intel® Xeon® Processor E5-2697A v4; 256 GB DDR4 RAM

F600
• Drive type: 2.5" SSD
• Drive capacities: 15.36 TB, 30.72 TB, 61.4 TB
• Drives per node: 8
• Infrastructure networking per node: 2 x 100 GbE or 2 x 25 GbE
• Front-end networking per node: 2 x 10/25 GbE or 2 x 100 GbE
• Compute per node: R640 base platform - 2nd Generation Intel® Xeon® Scalable Processors; 128 GB, 192 GB, or 384 GB DDR4 RAM

F200
• Drive type: 2.5" SSD
• Drive capacities: 3.8 TB, 7.68 TB, 15.36 TB
• Drives per node: 4
• Infrastructure networking per node: 2 x 10/25 GbE
• Front-end networking per node: 2 x 10/25 GbE
• Compute per node: R640 base platform - 2nd Generation Intel® Xeon® Scalable Processors; 48 GB or 96 GB DDR4 RAM

H400*
• Drive type: 3.5" SATA
• Drive capacities: 2 TB, 4 TB, 8 TB, 12 TB
• Drives per chassis: 60
• Infrastructure networking per node: 2 InfiniBand connections supporting QDR links, or 2 x 10 GbE (SFP+)
• Front-end networking per node: 2 x 10 GbE (SFP+) or 2 x 25 GbE (SFP28)
• Compute per node: Medium compute bundle - 4 core, 2.2 GHz Intel® Xeon® Processor D-1527; 64 GB DDR4 RAM

H500*
• Drive type: 3.5" SATA
• Drive capacities: 2 TB, 4 TB, 8 TB, or 12 TB
• Drives per chassis: 60
• Infrastructure networking per node: 2 x InfiniBand or 2 x 40 GbE (QSFP+)
• Front-end networking per node: 2 x 10 GbE (SFP+), 2 x 40 GbE (QSFP+), or 2 x 25 GbE (SFP28)
• Compute per node: High compute bundle - 10 core, 2.2 GHz Intel® Xeon® Processor E5-2630 v4; 128 GB DDR4 RAM

H5600*
• Drive type: 3.5" SATA
• Drive capacities: 10 TB or 12 TB
• Drives per chassis: 80
• Infrastructure networking per node: 2 InfiniBand connections supporting QDR links, or 2 x 40 GbE (QSFP+)
• Front-end networking per node: 2 x 10 GbE (SFP+), 2 x 40 GbE (QSFP+), or 2 x 25 GbE (SFP28)
• Compute per node: Turbo compute bundle - 14 core, 2.2 GHz Intel® Xeon® Processor E5-2680 v4; 256 GB DDR4 RAM

H600*
• Drive type: 2.5" SAS
• Drive capacities: 600 GB or 1.2 TB
• Drives per chassis: 120
• Infrastructure networking per node: 2 x InfiniBand or 2 x 40 GbE
• Front-end networking per node: 2 x 10 GbE (SFP+), 2 x 40 GbE, or 2 x 25 GbE (SFP28)
• Compute per node: Turbo compute bundle - 14 core, 2.2 GHz Intel® Xeon® Processor E5-2680 v4; 256 GB DDR4 RAM

A200*
• Drive type: 3.5" SATA
• Drive capacities: 2 TB, 4 TB, 8 TB, or 12 TB
• Drives per chassis: 60
• Infrastructure networking per node: 2 x InfiniBand, 2 x 10 GbE, or 2 x 25 GbE (SFP28)
• Front-end networking per node: 2 x 10 GbE (SFP+)
• Compute per node: Low compute bundle - 2 core, 2.2 GHz Intel® Pentium® Processor D1508; 16 GB DDR4 RAM

A2000*
• Drive type: 3.5" SATA
• Drive capacities: 10 TB or 12 TB
• Drives per chassis: 80
• Infrastructure networking per node: 2 x InfiniBand, 2 x 10 GbE, or 2 x 25 GbE (SFP28)
• Front-end networking per node: 2 x 10 GbE (SFP+)
• Compute per node: Low compute bundle - 2 core, 2.2 GHz Intel® Pentium® Processor D1508; 16 GB DDR4 RAM

* Has 1 or 2 SSDs per node for caching.

PowerScale Features

The design goal for the PowerScale nodes is to keep the simple ideology of NAS,
provide the agility of the cloud, and the cost of commodity. Click each tab to learn
more about the features that PowerScale provides. See the student guide for more
information.

Performance and Scale

Performance and Scale

PowerScale clusters optimize performance at petabyte (PB) scale by:

• Optimizing components to maximize performance
• Leveraging standard technology to increase focus on scale

Some of the benefits include:

• Increased performance per usable terabyte (TB)
• Lower-latency applications can leverage scale-out NAS
• Predictable performance at scale

Use Case: Media and Entertainment 16

16A Media and Entertainment production house needs high single stream
performance at PB scale that is cost optimized. The organization requires cloud
archive in a single namespace, archive optimized density with a low Total Cost of
Ownership (TCO) solution. This environment typically has large capacities and
employs new performance technologies at will.


Data Protection

Data Protection and Availability

PowerScale provides enterprise-grade resilience and data protection with:

• Elimination of single points of failure
• Small fault domains
• Predictable failure handling at PB densities

Use Case: Financial sectors17

Sizing

Support evolving needs with lower TCO

17Financial sectors rely heavily on data protection and availability to operate. Data
loss such as customer transactions or system downtime can negatively affect the
business.


The Gen 6x platforms address the challenges of agility and lower TCO with:
• Dedicated cache drives
• Modular architecture
• Non-disruptive upgrades

Some of the benefits include:


• Gen 6 cluster in a box and simple growth path
• Customizable solution
• Same building blocks irrespective of cluster profile

Use Case: Start-up company18

Compute and storage permutations are wrapped into bundles to significantly
increase both performance and deterministic performance projection. The
predefined compute bundles optimize memory, CPU, and cache to simplify
configuration selection based on a customer's performance, capacity, and cost
profile.

To focus on scale, PowerScale leverages standard technologies to eventually
target a capacity of more than 400 nodes. With OneFS 8.2 and higher, the cluster
maximum is 252 nodes. Changes to the back-end infrastructure, such as adopting
Ethernet for back-end communication between nodes, allow PowerScale to push
through the limitations set by older technologies.

PowerScale has no dependency on the flash boot drive. Gen 6 nodes boot from
boot partitions on the data drives. These drives are protected using erasure coding
to remove the dependency on dedicated boot drives. Next, PowerScale uses SSD
drives for the journal to remove the NVRAM dependency present on Gen 5 nodes.
There are now multiple distributed copies of the journal.

18A simplicity and agility use case is a small start-up company growing at rapid
pace, who needs to start with limited capacity and then grow on demand for scale
and new workloads.


Creating smaller failure domains with significantly fewer drives in each node pool
and neighborhood increases the reliability of the system by reducing the spindle-
to-CPU ratio. The increased reliability enables the cluster to use larger capacity
drives without the risk of overburdening the system in the event of a drive failure.
PowerScale enables predictable failure handling at petabyte (PB) densities.

Gen 6 platforms have dedicated cache drives. The caching options offered are 1 or
2 SSD configurations in various capacities to maximize front-end performance. Gen
6 hardware is focused on support and serviceability, based on a modular
architecture with full redundancy. It is possible to increase performance with data in
place, increase cache without disruption, and upgrade speeds and feeds non-
disruptively.

PowerScale Family

The Gen 6x family has different offerings that are based on the need for
performance and capacity. You can scale out compute and capacity separately.
OneFS runs on all nodes. Click each tab to learn more about the different offerings.

F-Series

The F-series nodes sit at the top of both performance and capacity, with all-flash
arrays. The all-flash platforms can accomplish 250,000-300,000 protocol operations
per second per chassis and deliver 15 GB/s aggregate read throughput from the
chassis. Even when the cluster scales, the latency remains predictable.

• F800
• F810


• F600
• F200

H-Series

After F-series nodes, next in terms of computing power are the H-series nodes.
These are hybrid storage platforms that are highly flexible and strike a balance
between large capacity and high-performance storage to provide support for a
broad range of enterprise file workloads.

• H400
• H500
• H5600
• H600

A-Series

The A-series nodes have less compute power compared to other nodes and are
designed for data archival purposes. The archive platforms can be combined with
new or existing all-flash and hybrid storage systems into a single cluster that
provides an efficient tiered storage solution.

• A200
• A2000


Node Interconnectivity

1: Backend ports int-a and int-b. The int-b port is the upper port. Gen 6 backend
ports are identical for InfiniBand and Ethernet, and cannot be identified by looking
at the node. If Gen 6 nodes are integrated in a Gen 5 or earlier cluster, the backend
will use InfiniBand. Note that there is a procedure to convert an InfiniBand backend
to Ethernet if the cluster no longer has pre-Gen 6 nodes.

2: PowerScale nodes with different backend speeds can connect to the same
backend switch and not see any performance issues. For example, an environment
has a mixed cluster where A200 nodes have 10 GbE backend ports and H600
nodes have 40 GbE backend ports. Both node types can connect to a 40 GbE
switch without affecting the performance of other nodes on the switch. The 40 GbE
switch provides 40 GbE to the H600 nodes and 10 GbE to the A200 nodes.

3: Gen 6.5 backend ports use the PCIe slot.

4: There are two speeds for the backend Ethernet switches, 10 GbE and 40 GbE.
Some nodes, such as archival nodes, might not need to use all of a 10 GbE port
bandwidth while other workflows might need the full utilization of the 40 GbE port
bandwidth. The Ethernet performance is comparable to InfiniBand so there should
be no performance bottlenecks with mixed performance nodes in a single cluster.
Administrators should not see any performance differences if moving from
InfiniBand to Ethernet.

Gen 6 nodes can use either an InfiniBand or Ethernet switch on the backend.
InfiniBand was designed as a high-speed interconnect for high-performance
computing, and Ethernet provides the flexibility and high speeds that sufficiently
support the PowerScale internal communications.

Gen 6.5 only supports Ethernet. All new PowerScale clusters support Ethernet
only.

Warning: With Gen 6, do not plug a backend Ethernet topology into


a backend InfiniBand NIC. If you plug Ethernet into the InfiniBand
NIC, it switches the backend NIC from one mode to the other and
will not come back to the same state.

PowerScale Networking Architecture

OneFS supports standard network communication protocols IPv4 and IPv6.


PowerScale nodes include several external Ethernet connection options, providing
flexibility for a wide variety of network configurations19.

Network: There are two types of networks that are associated with a cluster:
internal and external.

19In general, keeping the network configuration simple provides the best results
with the lowest amount of administrative overhead. OneFS offers network
provisioning rules to automate the configuration of additional nodes as clusters
grow.

PowerScale Administration-SSP1

Internal Use - Confidential


Page 30 © Copyright 2020 Dell Inc.
NAS, PowerScale, and OneFS

Front-end, External Network

Graphic: an F200 cluster showing the supported front-end protocols. The
client/application layer connects to the PowerScale storage layer over Ethernet
using NFS, SMB, S3, HTTP, FTP, HDFS, and Swift; backend communication
(PowerScale internal) also runs over Ethernet.

Clients connect to the cluster using Ethernet connections20 that are available on all
nodes.

The complete cluster combines hardware, software, and networks in the following
view:

Back-end, Internal Network


20Because each node provides its own Ethernet ports, the amount of network
bandwidth available to the cluster scales linearly.


OneFS supports a single cluster21 on the internal network. This back-end network,
which is configured with redundant switches for high availability, acts as the
backplane for the cluster.22

Leaf-Spine Backend Network

The Gen 6x back-end topology in OneFS 8.2 and later supports scaling a
PowerScale cluster to 252 nodes. See the participant guide for more details.

Graphic: Leaf-Spine topology for a PowerScale cluster with up to 88 nodes, built
with Dell Z9100 switches. Callouts: 22 downlinks per leaf (40 Gb ports); 10 uplinks
per leaf (100 Gb ports); 27 uplinks per spine switch; 4 leaf switches support a
maximum of 88 nodes; maximum scale-out is 132 nodes with 2 spine switches.

Leaf-Spine is a two-level hierarchy where nodes connect to leaf switches, and leaf
switches connect to spine switches. Leaf switches do not connect to one another,
and spine switches do not connect to one another. Each leaf switch connects with
each spine switch, and all leaf switches have the same number of uplinks to the
spine switches.

21 All intra-node communication in a cluster is performed across a dedicated
backend network, comprising either 10 GbE or 40 GbE Ethernet, or low-latency
QDR InfiniBand (IB).

22 This enables each node to act as a contributor in the cluster and isolates node-
to-node communication to a private, high-speed, low-latency network. This back-
end network utilizes Internet Protocol (IP) for node-to-node communication.

The new topology uses the maximum internal bandwidth and 32-port count of Dell
Z9100 switches. When planning for growth, F800 and H600 nodes should connect
over 40 GbE ports, whereas A200 nodes may connect using 4x1 breakout cables.
Scale planning enables nondisruptive upgrades, meaning that as nodes are added,
no recabling of the backend network is required. Ideally, plan for three years of
growth. The table shows the switch requirements as the cluster scales. In the table,
Max Nodes indicates that each node is connected to a leaf switch using a 40 GbE
port.

Challenge

IT Manager:
Open participation question:
Question: What are the differences between Gen 6 nodes and
Gen 6.5 nodes?


Resources

Link to Gen 6 Info Hub.


PowerScale Management Interfaces

Scenario

IT Manager: Good work. I think you understand the PowerScale building


blocks. Before managing the cluster, it is important to know about the
different management interfaces used to administer the cluster.

Your Challenge: The manager wants you to explain the different


administration interfaces and discuss the isi command structure.

Management Interfaces Overview

The OneFS management interface is used to perform various administrative and


management tasks on the PowerScale cluster and nodes. Management capabilities
vary based on which interface is used. The different types of management
interfaces in OneFS are:

• Serial Console
• Web Administration Interface (WebUI)
• Command Line Interface (CLI)
• Platform Application Programming Interface (PAPI)
• Front Panel Display


Serial Console Video

This video provides an overview on the serial console. See the student guide for a
transcript of the video.

Click to launch video.

https://edutube.emc.com/Player.aspx?vno=jHnaLyBuvlzyrARCLAU/jw==&autoplay
=true

Four options are available for managing the cluster. The web administration
interface (WebUI), the command-line interface (CLI), the serial console, or the
platform application programming interface (PAPI), also called the OneFS API. The
first management interface that you may use is a serial console to node 1. A serial
connection using a terminal emulator, such as PuTTY, is used to initially configure
the cluster. The serial console gives you serial access when you cannot or do not
want to use the network. Other reasons for accessing using a serial connection
may be for troubleshooting, site rules, a network outage, and so on. Shown are the
terminal emulator settings.

The configuration Wizard automatically starts when a node is first powered on or
reformatted. If the Wizard starts, the menu and prompt are displayed as shown.
Choosing option 1 steps you through the process of creating a cluster. Option 2 will
exit the Wizard after the node finishes joining the cluster. After completing the
configuration Wizard, running the isi config command enables you to change
the configuration settings.

isi config

Graphic callouts: the console is used to edit Wizard settings; common commands
include shutdown, status, and name; the prompt changes to >>>; other isi
commands are not available in the configuration console.

The isi config command, pronounced "izzy config," opens the configuration
console. The console contains configured settings from the time the Wizard started
running.

Use the console to change initial configuration settings. When in the isi config
console, other configuration commands are unavailable. The exit command is
used to go back to the default CLI.
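A minimal sketch of a configuration console session, assuming the common sub-commands called out above (status, name, shutdown); confirm the exact syntax with the help command inside the console:

isi config          # open the configuration console; the prompt changes to >>>
>>> status          # display basic node and cluster status
>>> name            # view (or change) the cluster name
>>> exit            # leave the console and return to the default OneFS CLI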

Web Administration Interface (WebUI)

Graphic callouts: the WebUI displays the OneFS version; the user must have logon
privileges; connect to any node in the cluster over HTTPS on port 8080; multiple
browsers are supported.

The WebUI is a graphical interface that is used to manage the cluster.


The WebUI requires at least one IP address configured23 on one of the external
Ethernet ports present in one of the nodes.

Example browser URLs:


• https://192.168.3.11:8080
• https://engineering.dees.lab:8080

To access the web administration interface from another computer, an Internet
browser is used to connect to port 8080. The user must log in using the root
account, the admin account, or an account with log-on privileges. After opening the
web administration interface, there is a four-hour login timeout. In OneFS 8.2.0 and
later, the WebUI uses the HTML5 doctype, meaning it is HTML5 compliant in the
strictest sense, but does not use any HTML5-specific features. Previous versions of
OneFS require Flash.

Command Line Interface (CLI)

The CLI can be accessed in two ways:

• Out-of-band24
• In-band25

Both methods are done using any SSH client such as OpenSSH or PuTTY. Access
to the interface changes based on the assigned privileges.

OneFS commands are code that is built on top of the UNIX environment and are
specific to OneFS management. You can use commands together in compound
command structures, combining UNIX commands with customer-facing and
internal commands.

23 The Ethernet port IP address is either configured manually or by using the
Configuration Wizard.

24 Accessed using a serial cable connected to the serial port on the back of each
node. As many laptops no longer have a serial port, a USB-serial port adapter may
be needed.

25 Accessed using an external IP address that is configured for the cluster.


1: The default shell is zsh.

2: OneFS is built upon FreeBSD, enabling use of UNIX-based commands, such as


cat, ls, and chmod. Every node runs OneFS, including the many FreeBSD kernel
and system utilities.

3: Connections make use of Ethernet addresses.

4: OneFS supports management isi commands. Not all administrative


functionalities are available using the CLI.

5: CLI command use includes the capability to customize the base command with
options, also known as switches and flags. A single command with multiple options
results in many different permutations, and each combination results in different
actions performed.

6: The CLI is a scriptable interface. The UNIX shell enables scripting and execution
of many UNIX and OneFS commands.

Caution: Follow guidelines and procedures to appropriately


implement the scripts to not interfere with regular cluster
operations. Improper use of a command or using the wrong
command can be potentially dangerous to the cluster, the node, or
to customer data.


CLI Usage

Graphic callouts: common UNIX tools can be used; the help output shows the
needed privileges; the output shows command syntax and usage; each option is
explained.

The man isi or isi --help command is an important command for a new
administrator. These commands provide an explanation of the available isi
commands and command options. You can also view a basic description of any
command and its available options by typing the -h option after the command.
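As an illustration of combining OneFS and UNIX commands, the following are hedged examples; isi status and isi --help are standard OneFS commands, but output fields and options can vary by release, so confirm with -h:

# Show overall cluster status, then filter the output with a UNIX tool
isi status | grep -i health

# List the available isi sub-commands and their descriptions
isi --help

# Show syntax, usage, and options for a specific sub-command
isi devices -h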

Platform Application Programming Interface (PAPI)

The Platform Application Programming Interface, or PAPI, is a secure and


scriptable26 interface for managing the cluster.

HTTPS is used in PAPI to encrypt communications.

OneFS applies authentication and RBAC controls to PAPI commands to ensure


that only authorized commands are run.

The example shows a description for https://:8080/platform/quota/quotas1

26A chief benefit of PAPI is its scripting simplicity, enabling customers to automate
their storage administration.


1: Structured like URLs that execute on a browser that supports authentication

2: PAPI conforms to the REST architecture. An understanding of HTTP/1.1 (RFC


2616) is required to use the API.

3: Some commands are not PAPI aware, meaning that RBAC roles do not apply.
These commands are internal, low-level commands that are available to
administrators through the CLI. Commands not PAPI aware: isi config, isi
get, isi set, and isi services

4: The number indicates the PAPI version. If an upgrade introduces a new version
of PAPI, some backward compatibility ensures that there is a grace period for old
scripts to be rewritten.
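As a sketch of what a PAPI call looks like from a client, the request below targets the quota endpoint referenced above; the version number in the path, the credentials, and the cluster address are illustrative assumptions:

# List SmartQuotas quotas through PAPI over HTTPS on port 8080
# -k skips certificate checks in a lab; --user supplies an account that holds
# the required RBAC privileges
curl -k --user admin:password \
    "https://<cluster-ip>:8080/platform/1/quota/quotas"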

Front Panel Display

Front Panel Display of a Gen 6 chassis.


The Gen 6 front panel display is an LCD screen with five buttons used for basic
administration tasks27.

The Gen 6.5 front panel has limited functionality28 compared to the Gen 6.

Challenge

Lab Assignment: Launch the lab image and connect to the cluster
using the WebUI and the CLI.

27Some of them include: adding the node to a cluster, checking node or drive
status, events, cluster details, capacity, IP and MAC addresses.

28You can join a node to a cluster and the panel display node name after the node
has joined the cluster.


Common Cluster Operations

Scenario

IT Manager: I want you to familiarize yourself with the common cluster


tasks before you begin to manage the cluster. Now, we received two
new nodes for the cluster and I was told the nodes have been put in the
rack, cabled up, and are powered on. I want you to examine the
licensing before adding the nodes.

Your Challenge: The new IT manager has given you a task to describe
the OneFS licensing and add the new nodes to the PowerScale cluster.

Licensing

Graphic callouts: evaluation licensing can be enabled from the cluster; there are no
individual per-feature keys; upgrades translate existing keys into the license file;
the old licensing system is not used on new OneFS versions.

WebUI Cluster management > Licensing > Open Activation File Wizard or use the "isi license"
command.

In OneFS 8.1 and later a single license file contains all the licensed feature
information in a single location.

Administrators can enable evaluation licenses directly from their cluster.
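For reference, a hedged example of checking licensing from the CLI; isi license list is a standard command, while the exact options for adding or activating a license file vary by release (see isi license --help):

# Show the licensing status of OneFS features
isi license list

# Display help for adding or activating licenses
isi license --help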


Device ID and Logical Node Number

Graphic callouts: some features use the LNN and others use the device ID; use the
isi config sub menu and then the lnnset command to change an LNN; the device ID
cannot be changed and is unique for each new node (not reused); the LNN can be
changed. The graphic shows changing LNN 3 to LNN 5 to maintain the sequential
numbering of the nodes.

You should have an understanding of the two different numbers that identify a
node. The numbers are the device ID and logical node number or LNN.

The status advanced command from the isi config sub menu shows the
LNNs and device ID.

The lnnset command is used to change an LNN.

When a node joins a cluster, it is assigned a unique node ID number. If you remove
and rejoin a node from the cluster, the node is assigned a new device ID.

You can change an LNN in the configuration console. To change the LNN to
maintain the sequential numbering of the nodes use lnnset <OldNode#>
<NewNode#>.
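A minimal sketch of renumbering a node, based on the lnnset syntax above; the commit step is an assumption about how console changes are saved, so verify with help inside the console:

isi config                # enter the configuration console
>>> status advanced       # list device IDs and current LNNs
>>> lnnset 3 5            # change LNN 3 to LNN 5
>>> commit                # save the change and exit the console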


Adding Nodes to Cluster

Graphic callouts: a new node boots to the configuration Wizard; new nodes add
CPU, memory, and capacity; nodes are automatically assigned a node number and
an IP address from the initial configuration range; a node is imaged to match the
cluster OneFS version. The graphic shows joining a node to the cluster using the
Configuration Wizard.

When adding new nodes to a cluster, the cluster gains more CPU, memory, and
disk space. The methods for adding a node are:
• Front panel
• Configuration Wizard
• WebUI
• CLI

Join the nodes in the order that the nodes should be numbered.

Adding a node not connected to the external network (NANON) increases the
storage and compute capacity of the cluster.

Nodes are automatically assigned node numbers and IP addresses on the internal
and external networks. A node joining the cluster with a newer or older OneFS
version is automatically reimaged to match the OneFS version of the cluster. A
reimage may take up to 5 minutes.

Compatibility

Hardware compatibility is a concern when combining dissimilar Gen 6.5 nodes. For
example, consider adding a single F200 node with 48 GB RAM to an F200 node
pool that has nodes with 96 GB of RAM. Without compatibility, a minimum of three
F200 nodes with 48 GB RAM is required, which creates a separate node pool.

Node series compatibility depends upon the amount of RAM, the SSD size, number
of HDDs, and the OneFS version.

Resource File: The PowerScale Supportability and Compatibility


Guide covers software, protocols, and hardware.

Cluster Shutdown

CLI uses the "isi


config" sub menu

Can shutdown
entire cluster

Can shutdown a node


using the LNN

Using the Configuration Wizard to shutdown node 4


Administrators can restart or shut down the cluster using the WebUI29 or the CLI30.

Caution: Do not shut down nodes using the UNIX shutdown –p


command, halt command, or reboot command. Using the UNIX
command may result in RAM not flushing properly.
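As a sketch (using the isi config sub menu rather than the UNIX commands warned against), shutting down a node or the whole cluster might look like the following; the shutdown arguments shown (an LNN or all) follow the behavior described in this section, so confirm with help inside the console:

isi config                # enter the configuration console
>>> shutdown 4            # shut down the node with LNN 4
>>> shutdown all          # or shut down the entire cluster
>>> exit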

Challenge

Lab Assignment: Launch the lab and add a node using the
Configuration Wizard and add a node using the WebUI.

29The WebUI Hardware page has a tab for Nodes to shut down a specific node, or
the Cluster tab to shut down the cluster.

30
Native UNIX commands do not elegantly interact with OneFS, because the
OneFS file system is built as a separate layer on top of UNIX.


OneFS Directory Structure

Scenario

IT Manager: Good, looks like you know what the different PowerScale
management tools are. Now I want you to focus on the directory
structure that OneFS uses. This is important as it sets up the directory
structure we will use moving forward.

Your Challenge: The IT manager wants to ensure you can configure


the directory structure that conforms to the organization's governance.

Directory Structure Overview

Graphic: the OneFS root directory.

The directory structure is a 2-dimensional construct that organizes files into a


hierarchy of folders.

• The structure should be fixed and scalable.


• One top-level organizational construct can only be subdivided in a limited way.

At the core of OneFS, is the single file system across the cluster (/ifs). The single
file system in practice is a common directory structure.


OneFS Integrated Directories

The graphic shows the OneFS built-in directories.

Using or intervening with the built-in directory paths is not recommended unless
explicitly instructed to do so.

• Using a single file system starting with a newly created directory under /ifs is
recommended.
• For example, in the simplest form, you can create /ifs/engineering where
the engineering department data is the top-level directory for the engineering
organization.

Directory Structure Tiering

The graphic shows the recommended directory structure.

Callouts: OneFS root; cluster root; authentication and segregation root; workflow-
type root, where data is grouped for logical grouping and integration purposes; the
location to situate data and create exports and shares as required.


Warning: Having no directory structure, or a poorly designed structure, up front can
create a disruptive activity later, when the end user is required to fix the structure.

Directory Structure Example 1

The graphic shows an example of a designed directory structure.

Use case:
• A company that is named X-Attire plans to implement a single cluster for their
engineering team.
• After conversations with the customer, you identify that the customer does not
plan to have another cluster for remote disaster recovery.
• The company name or authentication domain name is used as the access zone
name (x-attire).

Access zones are covered in another topic


Directory Structure Example 2

Use case:
• X-Attire plans to implement a disaster recovery solution.
• X-Attire wants to replicate the Boston /homedirs directory to the Seattle data
  center.
• From Seattle, they plan to replicate the /groupdirs directory to Boston.
• Having the directory structure design up front makes the implementation easier.

SyncIQ is covered in another topic.

Directory Structure Permissions

On the /ifs directory, do not set inherited ACLs and do not propagate ACL
values.

Permissions on levels 1 through 5 are customer-specific and you should define the
appropriate permissions and inherited permissions starting at the appropriate level.

ACLs and POSIX mode bits are covered in other topics.

The table shows the recommended permissions at each directory tier.
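A minimal sketch of building a base directory tree with standard UNIX tools from the OneFS CLI; the paths, owners, and mode bits below are illustrative assumptions, not a prescribed standard:

# Create an organization base directory and two second-level directories
mkdir -p /ifs/engineering/homedirs /ifs/engineering/groupdirs

# Apply POSIX mode bits starting at the appropriate level (example values)
chmod 755 /ifs/engineering
chmod 770 /ifs/engineering/groupdirs

# Assign ownership below /ifs rather than changing the OneFS root itself
chown root:wheel /ifs/engineering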


Challenge

Lab Assignment: Go to the lab and build the base directories. The
base directories are used throughout your implementation of the
PowerScale cluster.

Configuring the Foundation for Access

Authentication Providers

Scenario

IT Manager: Now, the next thing to do is get the cluster pointed to the
Active Directory and LDAP servers. Before our clients can access files
that are stored on the cluster, they must be authenticated. Make sure
that you have a good understanding of the authentication providers that
the cluster supports.

Your Challenge: You are tasked to add authentication providers to the


PowerScale cluster. Before adding authentication providers, you need to
know a few things. The manager wants you to explain the supported
authentication providers and configuring the NTP service.

Authentication Provider Overview

Authentication settings for the clusters are managed using an authentication


provider. Click each authentication provider that is mentioned below to know more.


1: Active Directory is a Microsoft implementation of Lightweight Directory Access


Protocol (LDAP), Kerberos, and DNS technologies that can store information about
network resources. Active Directory can serve many functions, but the primary


reason for joining the cluster to an Active Directory domain is to perform user and
group authentication.

2: The Lightweight Directory Access Protocol (LDAP) is a networking protocol that


enables you to define, query, and modify directory services and resources. OneFS
can authenticate users and groups against an LDAP repository to grant them
access to the cluster.

3: The Network Information Service (NIS) provides authentication and identity
uniformity across local area networks. OneFS includes a NIS authentication
provider that enables you to integrate the cluster with the NIS infrastructure. NIS
can authenticate users and groups when they access the cluster.

4: A file provider enables you to supply an authoritative third-party source of user
and group information to a PowerScale cluster. A third-party source is useful in
UNIX and Linux environments that synchronize the /etc/passwd, /etc/group, and
/etc/netgroup files across multiple servers.

5: The local provider provides authentication, and lookup facilities for user accounts
added by an administrator.

6: Kerberos is a network authentication provider that negotiates encryption tickets


for securing a connection. OneFS supports Microsoft Kerberos and MIT Kerberos
authentication providers on a cluster. If you configure an Active Directory provider,
support for Microsoft Kerberos authentication is provided automatically. MIT
Kerberos works independently of Active Directory.

Note: The MIT Kerberos authentication provider is used with NFS,


HTTP, and HDFS.


Authentication Provider Structure

Graphic layers, top to bottom: clients; client access protocols; the local security
daemon; authentication providers; authentication sources and directories.

Access control architectural components that show two configured access zones.

The lsassd daemon, pronounced "L-sass-D," is the OneFS authentication daemon.

lsassd sits between the access protocols and the lower-level service providers.

The lsassd daemon mediates between the authentication protocols that clients
use and the authentication providers in the third row.

The authentication providers check their data repositories, which are shown on the
bottom row. The process determines user identity and subsequent access to files.

Active Directory Overview

Function

Active Directory can serve many functions, but the primary reason for joining the cluster to an AD domain is to enable domain
users to access cluster data.

To join the cluster to AD, specify the fully qualified domain name, which can be
resolved to an IPv4 or an IPv6 address, and a username with join permission.
Areas to consider:
• Creates a single AD machine account
• Establishes trust relationship


• Supports NTLM and Microsoft Kerberos


• Each Active Directory provider must be associated with a groupnet
• Adding to an access zone
• Multiple AD instances

When the cluster joins an AD domain, OneFS creates a single AD machine


account. The machine account establishes a trust relationship with the domain and
enables the cluster to authenticate and authorize users in the Active Directory
forest. OneFS supports NTLM and Microsoft Kerberos for authentication of Active
Directory domain users. You can add an Active Directory provider to an access
zone as an authentication method for clients connecting through the access zone.
The access zone and the Active Directory provider must reference the same
groupnet. OneFS supports multiple instances of Active Directory on a PowerScale
cluster; however, only one Active Directory provider can be assigned per access
zone.

Active Directory Configuration Video

The video provides a demonstration of the configuration tasks for an Active


Directory authentication provider. See the student guide for a transcript of the
video.


Click to launch video.

Link:
https://edutube.emc.com/Player.aspx?vno=Xu/3IyDNSxbuNMOcLHrqBg==&autopl
ay=true

In this demonstration, we’ll go through the steps needed to configure the


PowerScale cluster for Active Directory. Let’s navigate to Access and then to
Authentication providers page. The Active Directory tab is the default selection.
Note that for a multi-mode implementation, connecting to the LDAP server first
establishes the proper relationships between UNIX and AD identities. If AD is
added before joining an LDAP domain, some authentication challenges and
permissions issues may occur, and additional work is needed to remediate these
changes.

Select the Join a domain button. This demonstration shows the barest configuration
to join a domain. Start by entering the provider name. The NetBIOS requires that
computer names be 15 characters or less. Two to four characters are appended to
the cluster name you specify to generate a unique name for each node. If the
cluster name is more than 11 characters, you can specify a shorter name in the
Machine Name field. Enter the user name of the account that has the right to add
computer accounts to the domain, and then enter the account password. The
Enable Secure NFS checkbox enables users to log in using LDAP credentials, but
to do this, Services for NFS must be configured in the AD environment.

OneFS is RFC 2307-compliant. Use Microsoft Active Directory with Windows


Services for UNIX and RFC 2307 attributes to manage Linux, UNIX, and Windows
systems. Integrating UNIX and Linux systems with Active Directory centralizes
identity management and eases interoperability, reducing the need for user-
mapping rules.

Shown is the CLI equivalent command used to join Active Directory. To display a
list of command options, run the isi auth ads create -h command at the
CLI. Now, before connecting to an LDAP server you should decide which optional


customizable parameters you want to use. Refer the Isilon Web Administration
Guide for details on each of the settings.

Click the Join button. While joining the domain, the browser window displays the
status of the process and confirms when the cluster has successfully joined the AD
domain. The join creates a single computer account for the entire cluster.

And that is the most basic configuration. Note that AD and LDAP both use TCP
port 389. Even though both services can be installed on one Microsoft server, the
cluster can only communicate with one of services if they are both installed on the
same server. This concludes the demonstration.
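The transcript above shows the WebUI flow; the following is a hedged CLI equivalent, with the domain name and the joining account used as placeholder assumptions (run isi auth ads create -h, as noted in the video, for the full option list and exact argument order):

# Join the cluster to an Active Directory domain (placeholder domain and user)
isi auth ads create dees.lab --user administrator

# Verify that the provider is listed and online
isi auth ads list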

Network Time Protocol (NTP) Overview

Active Directory and Kerberos depend on accurate time


Graphic callouts: internally, cluster nodes use NTP to coordinate time settings;
cluster time sets the cluster's date and time settings; if multiple NTP servers are
configured, the first on the list is used first; risk: if the cluster drifts from SMB time,
authentication fails. The graphic shows NTP configured on the cluster.

Time synchronization is one of the most frequent problems administrators have


with authentication. Both Active Directory and Kerberos depend upon accurate
timing. If the time on the cluster drifts from the authentication server's time, AD
authentication fails.


• Synchronize to NTP source31


• Cluster time properties32
• Synchronize issues33
• SMB time34
• Node time35

31 The easiest method is to synchronize the cluster and the authentication servers
all to the same NTP source.

32The cluster time property sets the date and time settings, either manually or by
synchronizing with an NTP server. After an NTP server is established, setting the
date or time manually is not allowed.

33After a cluster is joined to an AD domain, adding an NTP server can cause time
synchronization issues. The NTP server takes precedence over the SMB time
synchronization with AD and overrides the domain time settings on the cluster.

34SMB time is enabled by default and is used to maintain time synchronization


between the AD domain time source and the cluster.

35 Nodes use NTP between themselves to maintain cluster time. When the cluster
is joined to an AD domain, the cluster must stay synchronized with the time on the
domain controller. If the time differential is more than five minutes, authentication
may fail.


NTP Configuration

Graphic callouts: an optional key can be configured for an NTP server, with the key
file stored in /ifs; more than one server can be configured; chimer nodes contact
the external NTP servers, and non-chimer nodes use the chimers as their NTP
servers. Use the WebUI General settings > NTP page to configure NTP and chimer
settings.

You can configure specific chimer nodes by excluding other nodes using the
isi_ntp_config {add | exclude} <node#> command. The exclusion list uses node
numbers separated by spaces.
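A hedged sketch of NTP setup from the CLI; isi ntp servers create is the standard OneFS 8.x command for adding servers, the server name is a placeholder, and the isi_ntp_config usage follows the syntax quoted above (confirm both with their help output):

# Add an external NTP server for the cluster to synchronize against
isi ntp servers create pool.ntp.org

# Exclude nodes 3 and 4 from acting as chimers
isi_ntp_config exclude 3 4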

LDAP Overview

Function

OneFS can authenticate users and groups against an LDAP repository in order to grant them access to the cluster. OneFS
supports Kerberos authentication for an LDAP provider.


Each LDAP entry36 has a set of attributes37.

The LDAP service provider supports the following features:


• Uses a simple directory service that authenticates users and groups accessing
cluster.
• Supports netgroups and supports the ldapsam schema, which enables NTLM to
authenticate over SMB.
• Enables users to access resources between disparate directory services or as a
single sign-on resource.
• Each LDAP provider must be associated with a groupnet.38

LDAP Configuration Video

The video provides a demonstration of the configuration tasks for an LDAP


authentication provider. See the student guide for a transcript of the video.

36 Each entry consists of a distinguished name, or DN, which also contains a


relative distinguished name (RDN). The base DN is also known as a search DN
because a given base DN is used as the starting point for any directory search.

37Each attribute has a name and one or more values that are associated with it
that is similar to the directory structure in AD.

38 LDAP provider can be added to an access zone as an authentication method for


clients connecting through the access zone. An access zone may include at most
one LDAP provider. The access zone and the LDAP provider must reference the
same groupnet.


Click to launch video.

Link:
https://edutube.emc.com/Player.aspx?vno=JKBFLVJaUoqGz8DJmH4zqg==&autop
lay=true

In this demonstration, we’ll go through the steps needed to configure LDAP for the
PowerScale cluster. Let us navigate to Access and then to Authentication providers
page. Next, select the LDAP tab. Now click the Add an LDAP provider button.

For this demonstration, I am only showing the barest configuration. Let us give our
LDAP a provider name. Next, I will enter the URI to the LDAP server. You must
configure a base distinguished name. Often issues involve either misconfigured
base DNs or connecting to the LDAP server. The top-level names almost always
mimic DNS names; for example, the top-level Isilon domain would be dc=isilon,
dc=com for Isilon.com. Our environment is DEES and lab.

Shown is the CLI equivalent command used to configure LDAP. To display a list of
these commands, run the isi auth ldap create -h command at the CLI.
And that is the most basic configuration.


Now, before connecting to an LDAP server you should decide which optional
customizable parameters you want to use. If there are any issues while configuring
or running the LDAP service, there are a few commands that can be used to help
troubleshoot. The ldapsearch command runs queries against an LDAP server to
verify whether the configured base DN is correct. The tcpdump command verifies
that the cluster is communicating with the assigned LDAP server.

You have the option to enter a netgroup. A netgroup is a set of systems, residing in
a variety of different locations, that are grouped together and used for permission
checking. For example, a UNIX computer on the 5th floor, six UNIX computers on
the 9th floor, and 12 UNIX computers in the building next door can all be combined
into one netgroup.

Select the Add LDAP Provider button. After the LDAP provider is successfully
added, the LDAP providers page displays a green status. This means that the
cluster can communicate with the LDAP server. Note that AD and LDAP both use
TCP port 389. Even though both services can be installed on one Microsoft server,
the cluster can only communicate with one of services if they are both installed on
the same server. This concludes the demonstration.
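A hedged CLI equivalent of the configuration shown in the transcript; the provider name, server URI, and base DN below are placeholders modeled on the demonstration environment (see isi auth ldap create -h for all options):

# Create an LDAP provider that points at the lab LDAP server
isi auth ldap create lab-ldap \
    --server-uris ldap://ldap.dees.lab \
    --base-dn "dc=dees,dc=lab"

# Confirm that the provider was created
isi auth ldap list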

Challenge

Lab Assignment:
• Join the cluster to Active Directory
• Configure the cluster for LDAP


Access Zones

Scenario

IT Manager: Now that you have configured the cluster for Active
Directory and LDAP, it is time to take the next step in the implementation.
You are configuring access zones for two organizations, finance and
engineering. Finance is a Microsoft Windows environment and
engineering is a Linux environment. Before you configure the cluster, I
want to ensure you understand access zones and what they do.

Your Challenge: The IT manager has tasked you to explain what an


access zone is, what it does, and how to configure access zones.

Access Zone Overview Video

This video provides an overview for access zones. See the student guide for a
transcript of the video.


Click to launch video.

Link: https://edutube.emc.com/Player.aspx?vno=w/pzpXjL6ZCFlcdx0riu5A

Although the default view of a cluster is that of one physical machine, you can
partition a cluster into multiple virtual containers called access zones. Access
zones enable you to isolate data and control who can access data in each zone.
Access zones support configuration settings for authentication and identity
management services on a cluster. Configure authentication providers and
provision protocol directories, such as SMB shares and NFS exports, on a zone-by-
zone basis. Creating an access zone automatically creates a local provider, which
enables you to configure each access zone with a list of local users and groups.
You can also authenticate through a different authentication provider in each
access zone.

Access Control Architectural Components

The OneFS identity management maps users and groups from separate directory
services to provide a single combined identity. It also provides uniform access
control to files and directories, regardless of the incoming protocol.

The table defines the components of access zones.


External Protocols


Clients use the external access protocols to connect to the PowerScale cluster.
The supported protocols are SMB, NFS, S3, HTTP, FTP, HDFS, and SWIFT.

lsassd Daemon


The lsassd (L-sass-d) daemon mediates between the external protocols and the
authentication providers, with the daemon contacting the external providers for user
lookups.


External Providers


Besides external protocols, there are also external authentication providers.


External directories hold lists of users that the internal providers contact to verify
user credentials. Once a user identity has been verified, OneFS generates an
access token. The access token is used to allow or deny a user access to the files
and folders on the cluster.

Internal Providers


Internal providers sit within the cluster operating system and are the Local and File providers.
• File provider - an authoritative third-party source of user and group information.
• Local provider - provides authentication and lookup facilities for user accounts added by an administrator.
• A local provider is automatically created in each access zone.


Access Zone Planning - Base Directory

The graphic shows the access zone base directory layout for the dvt and eng zones; the numbered callouts are described below.

1: Separate authentication from /ifs/eng access zone.

2: Access zone base directories for dvt and eng.

3: The /ifs/eng/hardware directory can be a base directory for another access zone. This is not a good practice.

4: The /ifs/eng base directory partitions data from the /ifs/dvt directory.

5: The base directory of the default System access zone is /ifs and cannot be
modified. Avoid using the OneFS built-in directories as base directories.

A base or root directory defines the tree structure of the access zone.

The access zone cannot grant access to any files outside of the base directory,
essentially creating a unique namespace.

Using access zones is the recommended method of separating data. However, a few workflows can benefit from having one access zone being able to see the dataset of another access zone.

Overlapping example: Creating /ifs/eng/hardware as the base directory for another access zone places that zone inside the eng access zone base directory. Overlapping access zones enable the eng workers to put data on the cluster while enabling the dvt workers to take that data and use it. Set up this way, you maintain the different authentication contexts while enabling the second group access.


Access Zone Configuration - Demonstration

This demonstration provides a look at access zone configuration. See the student
guide for a transcript of the video.


Link: https://edutube.emc.com/Player.aspx?vno=08ieHpVlyvyD+A8mTzHopA

In this demonstration, we will go through the steps to create access zones using
the WebUI and the CLI. First, let’s use the WebUI.

Navigate to Access and then to the Access zones page. Note that the System
access zone is shown in the table. The System zone is created by OneFS. Select
the Create an access zone button. In the window, enter the zone name for the new
access zone. Next enter the zone base directory. This should be unique, and you
should avoid using the OneFS built-in directories such as /ifs/data. Our base
directory is /ifs/sales.

Since we have not created this directory before creating the access zone, select
the checkbox to create the base directory automatically. Notice that we already
configured the authentication providers. This access zone is dedicated for the
Active Directory users. Add the AD provider and then select Create zone.


Next, we will create another access zone using the CLI. We are logged in via SSH
to node 1 and using the isi zone command. The name of this access zone is
engineering. The unique base directory is /ifs/engineering. Since the
/ifs/engineering directory does not exist, use the option to create it. And
finally, we will add the LDAP authentication provider to the zone.

Next verify that the zones are created. Use the list option. Moving back to the
WebUI, check the access zone page to verify the zones display. Instead of waiting
for the refresh, click on another page and then back.

This demonstration showed configuring access zones using the WebUI and the
CLI. This concludes the demonstration.
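Based on the steps in the demonstration, a minimal CLI sketch might look like the following. The zone name, path, and provider name are hypothetical, and the exact auth provider identifier format can vary by provider type and OneFS release:

# Create the engineering access zone with a new base directory
isi zone zones create engineering /ifs/engineering --create-path

# Add an existing LDAP provider to the zone (provider name is an example)
isi zone zones modify engineering --add-auth-providers=ldap:dees-ldap

# Verify the zones
isi zone zones list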

Access Zone Considerations

Listed are areas to consider when configuring and discussing access zones.
• The number of access zones should not exceed 50.
• As a good practice, configure an access zone for a specific protocol if multi-
protocol access is not needed. For example, an implementation with both NFS
and SMB access should have an access zone for the NFS access and another
access zone for the SMB access.
• Access zones and authentication providers must be in only one groupnet.
• Authentication sources are joined to the cluster and "seen" by access zones - multiple instances of the same provider in different access zones are not recommended.
• Authentication providers are not restricted to one specific zone.
• Join multiple AD providers to a zone only if they are in different, untrusted forests.
• Shared UIDs in the same zone can potentially cause UID/GID conflicts.
• You can overlap data between access zones for cases where workflows require
shared data - however, overlapping adds complexity that may lead to issues
with client access.

Access Zone Best Practices

You can avoid configuration problems on the cluster when creating access zones
by following best practices guidelines.


Best Practice: Create a unique base directory.
Detail: Achieves data isolation. Base directories can overlap only if workflows share data.

Best Practice: System zone is for global admin access only.
Detail: Employ ZRBAC for zone administration.

Best Practice: Create zones to isolate data for different clients.
Detail: Do not isolate if the workflow requires shared data.

Best Practice: Avoid overlapping UID/GID ranges for providers in the same zone.
Detail: Overlapping ranges in the same zone can cause UID/GID conflicts.

Challenge

Lab Assignment: You have the authentication providers configured. Now create the access zones for the environment and add the authentication providers to the access zones.


Groupnets

Scenario

IT Manager: You should configure the access zones before you configure networking. Now, you will examine the networking components of OneFS. Ensure you understand groupnets and how groupnets strengthen multitenancy.

Your Challenge: The IT manager has tasked you to explain groupnets.

Network Configuration Planning

The graphic shows the network configuration hierarchy: the tenant and DNS server are configured at the groupnet, the SmartConnect IP address, VLAN, and MTU are configured on the subnet, and a dynamic or static IP address pool is configured for the node external network ports.

Groupnets reside at the top tier of the networking hierarchy and are the
configuration level for managing multiple tenants on your external network.

Groupnets contain one or more subnets.

By default, OneFS builds Groupnet0, Subnet0, and Pool0.


A subnet can also be called the SmartConnect zone and can contain one or more pools.
Pools enable more granular network configuration.

Multi-Tenancy Overview

The graphic shows an example of two tenants, each with a separate groupnet: SmartConnect zone isilon.xattire.com on subnet 192.168.0.0/24 and SmartConnect zone isilon.gearitup.com on subnet 192.168.2.0/24.

Groupnets are the configuration level for managing multiple tenants39 on the
external network of the cluster.

Multi-tenancy is the ability to host multiple organizations in a single cloud, application, or storage device. Each organization in the environment is called a tenant.

In the X-Attire scenario, the solution must treat each business unit as a separate
and unique tenant with access to the same cluster. The graphic shows how each
organization has its own groupnet and access zone.

39 Even with no plans to use multi-tenancy, a good practice is to organize data based on access zones. Organizing is for both security purposes and to enable compartmentalization of failover by, for instance, AD domain.


Multi-tenancy Considerations

Groupnets are an option for those clusters that will host multiple companies,
departments, or clients that require their own DNS settings. Some areas to
consider are:
• DNS settings are per groupnet
• Create another groupnet only if separate DNS settings required.
• Follow proper build order:
1. Create groupnet
2. Configure authentication provider
3. Create access zone, and add authentication provider
4. Configure subnet with SmartConnect
5. Create pool, and add access zone
• In a multiple tenant solution, a share can span access zones. Combining
namespaces and overlapping shares is an administrative decision.

Important: Leave the System zone in Groupnet0.

Groupnets and Access Zones Video

This video provides an overview of the groupnet and access zone relationship. See
the student guide for a transcript of the video.


Click to launch video.

Link: https://edutube.emc.com/Player.aspx?vno=b4A2l5FzF2na/Txqk2AUTA==&autoplay=true

Because groupnets are the top networking configuration object, they have a close
relationship with access zones and the authentication providers. Having multiple
groupnets on the cluster means that you are configuring access to separate and
different networks, which are shown as org1 and org2. Different groupnets enable
portions of the cluster to have different networking properties for name resolution.
Configure another groupnet if separate DNS settings are required. If necessary, but
not required, you can have a different groupnet for every access zone. The
limitation of 50 access zones enables the creation of up to 50 groupnets.

When the cluster joins an Active Directory server, the cluster must know which
network to use for external communication to the external AD domain. Because of
this, if you have a groupnet, both the access zone and authentication provider must exist within the same groupnet. Access zones and authentication providers must exist within only one groupnet. Active Directory provider org2 must exist within the same groupnet as access zone org2.


WebUI for Configuration

The graphic shows the Cluster management > Network configuration > External network > Add a groupnet window. Callouts note that the maximum number of DNS server IP addresses is limited to 3, the maximum number of DNS search domains is limited to 6, Rotate is the only DNS resolver option, DNS caching can be set for the groupnet, and you can enable appending node DNS search lists to client DNS inquiries directed at SmartConnect service IPs.

When creating a groupnet with access zones and providers in the same zone, you
need to create them in the proper order:
1. Create the groupnet.
2. Create the access zone and assign to the groupnet.
3. Create the subnet and pool.
4. Add the authentication providers and associate them with the groupnet.
5. Associate the authentication providers with the access zone.

CLI for Configuration

When creating a groupnet with access zones and providers in the same zone, you
should create them in the proper order.

Order 1 - Create groupnet:
isi network groupnets create <id> --dns-servers=<ip>
Example: isi network groupnets create groupnet1 --dns-servers=192.168.4.10 --dns-search=org1.com

Order 2 - Create authentication providers:
isi auth ads create <name> <user> --groupnet=<groupnet name>

Order 3 - Create access zone:
isi zone zones create <name> <path> --auth-providers=<list of auth providers> --groupnet=<groupnet name>

Order 4 - Create subnet:
isi network subnets create <id> {ipv4 | ipv6} <prefix-len>

Order 5 - Create pool:
isi network pools create <id> --access-zone=<zone name>
Tip: You cannot recreate an already defined subnet. A defined subnet is used only once.
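Putting the commands in the table together, a minimal end-to-end sketch might look like this. All names, addresses, and the auth provider identifier format are hypothetical and follow the syntax shown above:

# 1. Groupnet with its own DNS settings
isi network groupnets create groupnet1 --dns-servers=192.168.4.10 --dns-search=org1.com
# 2. Authentication provider tied to the groupnet
isi auth ads create org1.com administrator --groupnet=groupnet1
# 3. Access zone in the groupnet, using the provider
isi zone zones create zone-org1 /ifs/org1 --create-path --groupnet=groupnet1 --auth-providers=ads:org1.com
# 4. Subnet under the groupnet
isi network subnets create groupnet1.subnet1 ipv4 24
# 5. Pool under the subnet, associated with the access zone
isi network pools create groupnet1.subnet1.pool1 --access-zone=zone-org1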


Challenge

IT Manager:
Because you configure the network components together, you will not
go to the lab until the other topics are discussed. Open participation
question:
Question: When would you create a groupnet?


Subnet - SmartConnect Zones

Scenario

IT Manager: I do not understand what the function of SmartConnect is. I would like you to do some research and set it up to see what it does.

Your Challenge: The IT manager wants you to explain the SmartConnect benefits and configure SmartConnect.

SmartConnect Overview Video

This video provides an overview of SmartConnect. See the student guide for a
transcript of the video.

Click to launch video.


Link: https://edutube.emc.com/Player.aspx?vno=L7mXSvTcNQl8+LLKzNEzkw

SmartConnect enables client connections to the storage cluster using a single hostname or however many host names a company needs. It provides load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling administrators to manage large numbers of clients if a system fails.

SmartConnect provides name resolution for the cluster. The cluster appears as a
single network element to a client system. Both cluster and client performance can
be enhanced when connections are more evenly distributed.

SmartConnect simplifies client connection management. Based on user-configurable policies, SmartConnect Advanced applies intelligent algorithms (for example, CPU utilization, aggregate throughput, connection count, or round robin).
SmartConnect distributes clients across the cluster to optimize client performance.
SmartConnect can be configured into multiple zones that can be used to ensure
different levels of service for different groups of clients. SmartConnect can remove
nodes that have gone offline from the request queue, and prevent new clients from
attempting to connect to an unavailable node. Also, SmartConnect can be
configured so new nodes are automatically added to the connection balancing pool.

In Isilon OneFS 8.2, SmartConnect supports connection service for 252 nodes.

SmartConnect Architecture

The example shows two unique groups using the same cluster, X-Attire and GearItUp: SmartConnect zone isilon.xattire.com with SIPs 192.168.0.100 - 192.168.0.104 on subnet 192.168.0.0/24, and SmartConnect zone isilon.gearitup.com with SIPs 192.168.2.100 - 192.168.2.104 on subnet 192.168.2.0/24.

You can configure SmartConnect into multiple zones to provide different levels of
service for different groups of clients.


For example, SmartConnect directs X-Attire users to F800 nodes for their needed
performance. GearItUp users access the H500 nodes for general-purpose file
sharing. The zones are transparent to the users.

The SmartConnect Service IPs40 (SSIP or SIP) are addresses that are part of the
subnet.

Important: To configure SmartConnect, you must also create records on the customer DNS servers. If the clients use DNS for name resolution, configure the DNS server to forward cluster name resolution requests to the SmartConnect service.

SmartConnect Licensing

The table shows the differences between the SmartConnect basic and
SmartConnect Advanced.

SmartConnect Basic (unlicensed):
• Static IP allocation
• Multiple subnets
• Single pool per subnet
• Single SC DNS zone per subnet
• One balancing option: round robin

SmartConnect Advanced (licensed):
• Dynamic allocation, plus static IP allocation
• NFSv3 failover
• Multiple subnets
• Multiple pools per subnet
• Multiple SC zone names per subnet
• Four balancing options: round robin, connection count, throughput, and CPU usage
• Can set rebalance policy
• Up to 6 SSIPs per subnet

40 Do not put the SIPs in an address pool. The SIPs are virtual IPs within the PowerScale configuration and are not bound to any of the external interfaces.

SmartConnect Configuration Components

The SIPs, SmartConnect zone, and the DNS entries are the configuration
components for SmartConnect.

• SmartConnect service IPs
− IP addresses pulled out of the subnet
− Never used in a pool
− Interfaces with the DNS server
− Minimum of two, maximum of six per subnet
• SmartConnect zone name
− One name per pool
− Friendly name for users (seen as servers on the network)
− sales.isilon.xattire.com - \\sales
− mktg.isilon.xattire.com - \\mktg
• DNS (see the sketch after this list):
− Add an NS delegation record for the SmartConnect zone.
− Add an A or AAAA record for the SmartConnect service IPs.
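As a sketch of what the customer DNS entries might look like, using the X-Attire example (the record names and address are hypothetical):

; In the parent DNS zone isilon.xattire.com (hypothetical)
; A record for the SmartConnect service IP
ssip.isilon.xattire.com.    IN  A   192.168.0.100
; NS delegation so queries for the SmartConnect zone go to the SSIP
sales.isilon.xattire.com.   IN  NS  ssip.isilon.xattire.com.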

SmartConnect Configuration - Create SmartConnect Zone Demonstration

This demonstration shows the initial network configuration for the cluster. See the
student guide for a transcript of the video.

Click to launch demonstration.

Link: https://edutube.emc.com/Player.aspx?vno=4hL0i4iBe2BLqJzlT4dN/Q

In this demonstration, we will go through the steps for an initial configuration of the cluster external network. The demonstration shows configuring SmartConnect and a dedicated pool for an access zone.

First, log in to the WebUI and navigate to the Cluster management, Network configuration page. The External network tab is selected by default. Note that groupnet0 and subnet0 are automatically created by OneFS. On the subnet0 line, select View / Edit. There are no values for SmartConnect. Select Edit. Go to the
SmartConnect service IPs and enter the range of SmartConnect IP addresses.
OneFS versions prior to OneFS 8.2 do not allow you to enter a range of IP
addresses. For this demonstration we will be using a SmartConnect service name.

Select Save changes. The CLI equivalent to add the SmartConnect service address is the isi network subnets modify command. Now that SmartConnect is configured, we will configure the IP address pool for the access zone. On the subnet0 line, click on the More dropdown and select Add pool.

Enter the pool name and then select the access zone. For this implementation the
authentication providers and the access zones are already created.

Next enter the range of IP addresses for this pool. Select the external node interfaces
that will carry the client traffic. The SmartConnect basic fully qualified zone name is
sales.dees.lab. We have the SmartConnect advanced license activated. Here is
where we can configure the advanced functions. For the demonstration, we will
keep the default settings. Select Add pool. The CLI equivalent to create a pool is
the isi network pools create command.

This demonstration showed the initial configuration of the network. This concludes
the demonstration.
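A rough CLI equivalent of the steps in the demonstration might look like this. The subnet, addresses, interface list, and zone name are hypothetical, and flag names can differ slightly between OneFS releases:

# Add SmartConnect service IPs to the subnet (OneFS 8.2 and later accept a range)
isi network subnets modify groupnet0.subnet0 --sc-service-addrs=192.168.3.100-192.168.3.104

# Create a pool for the sales access zone with its own SmartConnect zone name
isi network pools create groupnet0.subnet0.pool1 --ranges=192.168.3.110-192.168.3.120 --access-zone=sales --sc-dns-zone=sales.dees.lab --ifaces=1-4:10gige-1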

SmartConnect Considerations

Listed are some areas to consider when discussing SmartConnect.


• DNS Integration:
− DNS primer
− DNS host record
− DNS Delegation best practices
− Cluster name resolution process example
• Never put SIP address in an IP address pool.
• Start with round-robin balancing then modify for workflow.
• DNS servers (not SmartConnect) handle the client DNS requests.
• Ensure that appropriate firewall ports are open.
• SyncIQ requires static allocation.


• Static pools are best used for stateful clients, and dynamic pools are best for
stateless clients.
• Time-to-live value41.

Challenge

IT Manager:
Because you configure the network components together, you will not
go to the lab until the other topics are discussed. Open participation
question:
Question: What are the SmartConnect Advanced benefits?

41 The SmartConnect DNS delegation server answers DNS queries with a time-to-live of 0 so that the answer is not cached. Not caching the answer distributes the IP addresses successfully. Certain DNS servers, such as Windows Server 2003, 2008, and 2012, fix the value to one second. Many clients requesting an address within the same second causes all of them to receive the same address. In some situations, barriers to deploying SmartConnect happen, in which case other means should be specified in the solution design.


IP Address Pools

Scenario

IT Manager: So that covers networking at the groupnet and subnet levels. Now, examine IP address pools and then configure networking on our cluster.

Your Challenge: The IT manager has tasked you to discuss the IP address pool settings and configure IP address pools.

IP Address Pools

The graphic shows the WebUI Cluster management > Network configuration page. OneFS configures groupnet0, subnet0, and pool0 by default, and pools control connectivity to access zones.

IP address pools are allocated to external network interfaces.

Additional subnets are configured as either IPv4 or IPv6 subnets. Additional IP address pools are created within subnets and associated with a node, a group of nodes, NIC ports, or aggregated ports.

The pools of IP address ranges in a subnet enable you to customize42 how users
connect to your cluster.

42 Customization is vital for clusters that have different node types.


Use case: Say that X-Attire adds 4 F800 nodes for a video media group. X-Attire
wants the video media team to connect directly to the F800 nodes to use various
high I/O applications. The administrators can separate the X-Attire connections.
Access to the home directories connect to the front end of the H500 nodes while
the video media group accesses the F800 nodes. This segmentation keeps the
home directory users from using bandwidth on the F800 nodes.

Link Aggregation

The graphic shows aggregation combining the two physical NICs into a single, logical interface.

Configure link aggregation, or NIC aggregation, on the pool.

Configure aggregation for each node - you cannot configure aggregation across nodes.

Link aggregation provides improved network throughput and redundancy.

The network interfaces are added to an IP address pool one at a time or as an aggregate.

Aggregation modes43 apply to all aggregated network interfaces in the IP address pool.

43The link aggregation mode determines how traffic is balanced and routed among
aggregated network interfaces.


Link Aggregation Modes

OneFS supports dynamic and static aggregation modes.

Click each tab to learn more about the link aggregation modes.

LACP

Link Aggregation Control Protocol, or LACP, is a dynamic aggregation mode that supports the IEEE 802.3ad standard.

Configure LACP at the switch level and on the node. LACP enables the node to negotiate interface aggregation with the switch.

LACP mode is the default aggregation mode.

The graphic shows the node negotiating interface aggregation with the switch and balancing outgoing traffic across the aggregated interfaces.

Round Robin

Round robin is a static aggregation mode that rotates connections through the
nodes in a first-in, first-out sequence, handling all processes without priority.

Round robin balances outbound traffic across all active ports in the aggregated link
and accepts inbound traffic on any port.

Client requests are served one after the other based on their arrival.


The graphic shows connections rotated in a first-in, first-out sequence across the aggregated interfaces: client request 2, client request 3, and so on follow client request 1.

Note: Round robin is not recommended if the cluster is using TCP/IP workloads.

Failover

Active/Passive failover is a static aggregation mode that switches to the next active
interface when the primary interface becomes unavailable. The primary interface
handles traffic until there is an interruption in communication. At that point, one of
the secondary interfaces takes over the work of the primary.

In the graphic, the primary interface serves the incoming client requests. If the primary interface becomes unavailable or is interrupted due to an issue, the next active interface takes over and serves the upcoming client requests.

FEC

Fast EtherChannel, or FEC, is a static aggregation method.

Typically used with older Cisco switches - LACP is preferred on new generation PowerScale nodes.


FEC accepts all incoming traffic and balances outgoing traffic over aggregated interfaces based on hashed protocol header information that includes source and destination addresses.

The graphic shows the node accepting and serving all incoming client requests while balancing outgoing traffic over the aggregated interfaces.

Link Aggregation Mapping

Logical network interface, or LNI, numbering corresponds to the physical positioning of the NIC ports as found on the back of the node.

Network interfaces added to an IP address pool as an aggregated interface are included when viewing a list of network interfaces on a node. Aggregated LNIs are listed in the order in which they are created. NIC names correspond to the network interface names as shown in the command-line interface, such as by ifconfig and netstat.

Gen 5 logical network interfaces (LNI) and aggregated LNIs:
• LNIs ext-1, ext-2, ext-3, ext-4: ext-agg = ext-1 + ext-2; ext-agg2 = ext-3 + ext-4; ext-agg3 = ext-1 + ext-2 + ext-3 + ext-4
• LNIs ext-1, ext-2, 10gige-1, 10gige-2: ext-agg = ext-1 + ext-2; 10gige-agg-1 = 10gige-1 + 10gige-2

Gen 6 logical network interfaces (LNI) and aggregated LNIs:
• LNIs 10gige-1, 10gige-2: 10gige-agg-1 = 10gige-1 + 10gige-2
• LNIs 40gige-1, 40gige-2: 40gige-agg-1 = 40gige-1 + 40gige-2
• mgmt-1: 1 GbE interface for system management

Note: The number of logical network interfaces varies based on the node model.

Allocation Method

An administrator can choose an allocation method of either static pools or dynamic pools when configuring IP address pools on the cluster. Each tab defines the allocation methods.


Static

Output showing the network settings.

A static pool is a range of IP addresses that allocates only one IP address at a time. OneFS allocates a single IP address from the pool to the chosen NIC.

If there are more IP addresses than nodes, new nodes that are added to the pool get the additional IP addresses.

Once an IP address is allocated, the node keeps the address indefinitely unless the member interface is deleted from the pool or the node is removed from the cluster.


Dynamic


Dynamic IP allocation is only available with SmartConnect Advanced.

Dynamic pools are best used for stateless protocols such as NFSv3. Also configure
for NFSv4 with continuous availability (CA).

Dynamic IP allocation ensures that all available IP addresses in the IP address pool are assigned to member interfaces when the pool is created.

Dynamic IP allocation has the following advantages:


• Enables NFS failover, which provides continuous NFS service on a cluster even
if a node becomes unavailable.
• Provides high availability because the IP address is always available to clients.


Static and Dynamic Pools

The graphic shows two SmartConnect zones, each with a different IP allocation method. A static pool assigns one IP to each NIC in the pool, with no address reallocation. A dynamic pool assigns multiple IPs to each NIC in the pool, and IPs are reallocated after a failure.

Static pools are best used for SMB clients because of the stateful nature of the
SMB protocol.

Dynamic pools are best used for stateless protocols such as NFSv3. You can identify a dynamic range by the way the IP addresses are presented on the interface as a range, such as .110 - .112 or .113 - .115, instead of as a single IP address.
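As a sketch, the allocation method can be set when a pool is created or changed later. The pool name is hypothetical, and dynamic allocation assumes a SmartConnect Advanced license:

# Switch an existing pool to dynamic IP allocation (requires SmartConnect Advanced)
isi network pools modify groupnet0.subnet0.pool1 --alloc-method=dynamic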

Challenge

Lab Assignment: You have the authentication providers and access zones configured. Now set up the SmartConnect zones and IP address pools.


Configuring Identity Management and Authorization


Role-Based Access Control

Scenario

IT Manager: The next topic that I want you to understand is administrative access. The organization has several administrators who need management access, but I do not want to give everybody root access.

Your Challenge: The IT manager has tasked you to add management access to the PowerScale cluster. Before you configure management access, ensure that you understand role-based access control, or RBAC, and zone-based RBAC, or ZRBAC. The manager expects you to describe RBAC and ZRBAC, explain built-in roles and privileges, and configure RBAC.


Overview

The graphic compares a configured user with restricted privileges, for whom restricted options are not displayed, with the root user privileges.

RBAC and ZRBAC administration ties the ability to perform specific administrative functions to specific privileges.

A user who is assigned to more than one role has the combined privileges of those
roles.

The root and admin users can assign others to built-in or custom roles that have
login and administrative privileges to perform specific administrative tasks.

The example shows that user Jane is assigned the Backup Administrator role.
Many of the privileges that user Root has are not visible to user Jane.

Role-based access enables you to separate out some administrative privileges and
assign only the privileges that a user needs. Granting privileges makes access to
the configuration of the cluster less restrictive.

Roles

OneFS includes built-in administrator roles with predefined sets of privileges that
you cannot modify. You can also create custom roles and assign privileges. Click
the tabs to learn more about each role.


Built-in Roles

Built-in roles44 are included in OneFS and have been configured with the most
likely privileges necessary to perform common administrative functions.

Click each built-in role to learn more about it.


• SecurityAdmin built-in role45
• SystemAdmin built-in role46
• AuditAdmin built-in role47
• BackupAdmin built-in role48
• VMwareAdmin built-in role49

44You cannot modify the list of privileges that are assigned to each built-in role.
However, you can assign users and groups to built-in roles.

45 The SecurityAdmin built-in role enables security configuration on the cluster, including authentication providers, local users and groups, and role membership.

46 The SystemAdmin built-in role enables administration of all cluster configuration that is not handled by the SecurityAdmin role.

47 The AuditAdmin built-in role enables you to view all system configuration settings.

48 The BackupAdmin built-in role enables backup and restore of files from /ifs.

49 The VMwareAdmin built-in role enables remote administration of storage that is needed by VMware vCenter.


Custom roles

Custom roles supplement built-in roles.

You can create custom roles and assign privileges mapped to administrative areas in your PowerScale cluster environment.

Zone Built-in Roles

OneFS 8.2.0 introduces zone-aware RBAC, or ZRBAC. The ZRBAC feature enhancement provides flexibility for organization administrators to manage resources according to their specific organization. Click each zone built-in role to learn more about it.
• ZoneAdmin51
• ZoneSecurityAdmin52

The following list describes what you can and cannot do through roles:
• You can assign privileges to a role but not directly to users or groups.
• You can create custom roles and assign privileges to those roles.
• You can copy an existing role.
• You can add any user or group of users, to one or more roles as long as the
users can authenticate to the cluster.


51 Enables administration of configuration aspects that are in the current access zone.

52 Enables administration of security configuration aspects that are in the current access zone.


Role Creation Video

The video provides an overview of role creation. See the student guide for a
transcript of the video.

Click to launch video.

Link: https://edutube.emc.com/Player.aspx?vno=tQkWrNubtdORFBHxoRlMAg

This demonstration shows the steps to configure role-based access control, or RBAC, and zone-aware RBAC, or ZRBAC. To frame the demonstration, I will use the scenario of two new members on the IT team. I will assign the users the minimum needed privileges to manage the cluster for their job roles.

Login as admin, a user that can assign privileges. Navigate to Access, Membership
and roles. On the Membership and roles page, note that the access zone selected
is System. Go to the Roles tab. Before moving on to the configuration, note that
OneFS has a number of built-in roles that cover most access needs. There may be
a need to define a custom role. In these instances, you can select the Create a
Role button. I will demonstrate this in a moment. A great place to learn more about
the different privileges is the Isilon OneFS Web Administration Guide.

PowerScale Administration-SSP1

© Copyright
Internal Use - Confidential 2020 Dell Inc. Page 101
Configuring Identity Management and Authorization

Hayden is the administrator I am adding to the AuditAdmin role. Select View/Edit and then Edit role. Next select Add a member to this role. In the Providers choices,
select Active Directory DEES.lab. Then select the domain. Remember, you must
join the cluster to the Active Directory domain to view the users. Hayden is a
member of the dees.lab domain. Select Hayden. Notice you can modify built-in
roles by adding or removing privileges. Save the changes.

The next example is to add a Windows administrator, Sai, to the sales access
zone. Adding Sai to a role specific to the access zone prevents him from
accidentally configuring Windows shares in other zones. In fact, Sai will have no
visibility into other zones. On the Roles tab, select the sales access zone. Note the
two built-in roles really do not provide the level of access for Sai. Create a role. The
role name is WinAdmin and add a short description. Shown is the CLI command to
create a zone role. Remember OneFS version 8.2 introduces zone-aware roles.

Previous version CLI commands do not have the --zone option. boston-2# isi
auth roles create --zone sales WinAdmin. Just as in the previous
example, add a member to this role. Select the provider and then the domain. Next
Search and select Sai. Now add privileges to the role. First, add the ability to log in
to the WebUI. Next, add the privilege to configure SMB. Give Read/write access to
this privilege. Now save the role. boston-2# isi auth roles modify
WinAdmin --zone sales --add-priv ISI_PRIV_LOGIN_PAPI --add-
priv ISI_PRIV_SMB --add-user dees\\sai. Now verify the privileges of
the users.

Logout and then log in as Hayden, the AuditAdmin. The first indication is the
Access menu. Notice the options are missing. Navigating to Protocols, Windows
sharing, notice Hayden cannot create a share, only view. Also, since added to a
System zone role, Hayden can audit information in other zones. System zone
administrators are global.

Log out of the WebUI and log in as Sai. You must log in at an IP address or NetBIOS name associated with the sales access zone. Viewing the Access options, Sai does not
have the privileges. Navigating to Protocols, Windows sharing, notice Sai cannot
switch to another access zone, but can configure SMB shares. This demonstration
stepped through configuring RBAC and ZRBAC. This concludes the demonstration.


Role Management

You can view, add, or remove members of any role. Except for built-in roles, whose
privileges you cannot modify, you can add or remove OneFS privileges on a role-
by-role basis.

View Roles

The table shows the commands that view role information.

Command: isi auth roles list
Description: A basic list of all roles on the cluster

Command: isi auth roles list --verbose
Description: Detailed information about each role on the cluster, including the member and privilege lists

Command: isi auth roles view <role>
Description: Detailed information about a single role, where <role> is the name of the role

View Privileges

Viewing user privileges is performed through the CLI. The table shows the commands that can view a list of your privileges or those of another user.

Command: isi auth privileges --verbose
Description: A list of privileges

Command: isi auth id
Description: A list of your privileges

Command: isi auth mapping token <user>
Description: A list of privileges for another user, where <user> is a placeholder for another user by name
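For example, to inspect the token and privileges OneFS builds for a hypothetical Active Directory user, using the syntax from the table:

# View the access token and privileges for a specific user (user name is an example)
isi auth mapping token dees\\sera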


Create, modify, and delete a custom role

You can create an empty custom role and then add users and privileges to the role.
Deleting a role does not affect the privileges or users that are assigned to it. Built-in
roles cannot be deleted.

The table shows the commands used to create, modify and delete a custom role.

Command: isi auth roles create <name> [--description <string>]
Description: Creates a role, where <name> is the name that you want to assign to the role and <string> specifies an optional description

Command: isi auth roles modify <role> [--add-user <string>]
Description: Adds a user to the role, where <role> is the name of the role and <string> is the name of the user

Command: isi auth roles modify <role> [--add-priv <string>]
Description: Adds a privilege with read/write access to the role, where <role> is the name of the role and <string> is the name of the privilege

Command: isi auth roles modify <role> [--add-priv-ro <string>]
Description: Adds a privilege with read-only access to the role, where <role> is the name of the role and <string> is the name of the privilege

Command: isi auth roles delete <name>
Description: Deletes a custom role, where <name> is the name of the role that you want to delete
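Putting the table together, a minimal sketch of building a custom role might look like this. The role name and user are hypothetical; the privilege names match those used in the demonstration:

# Create an empty custom role
isi auth roles create AuditHelper --description "Read-only audit helper"
# Grant WebUI login and read-only SMB configuration access
isi auth roles modify AuditHelper --add-priv ISI_PRIV_LOGIN_PAPI --add-priv-ro ISI_PRIV_SMB
# Add an Active Directory user to the role
isi auth roles modify AuditHelper --add-user dees\\hayden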

Privileges

Privileges grant access to features with read or read/write access. Administrators cannot modify built-in roles. ZRBAC provides flexibility for organization administrators to manage resources according to their specific organization.


List privileges

The graphic shows built-in roles that have a predefined set of privileges: LOGIN_CONSOLE is needed to SSH to the cluster, and LOGIN_PAPI is needed to use the WebUI. Red outlines mark the only privileges available for ZRBAC.

Note: The WebUI privilege names differ from the names that are seen in the CLI.

ZRBAC - ISI_PRIV_AUTH Privilege

The zone-based ISI_PRIV_AUTH privilege enables non-System zone administrators to create and modify their zone authentication providers.

The graphic shows a local zone administrator, jane, logged in.


1: If a zone is created by the System zone admins, only the System zone admins can modify and delete it. A local zone admin can only view and add access zones.

If a zone is created by a non-System zone admin, both the System zone admin and the non-System zone admin can view, modify, and delete it.

2: ISI_PRIV_AUTH enables Access options

3: A zone administrator is logged in.

4: The IP address in the IP address pool associated with the access zone.

Challenge

Lab Assignment: Go to the lab and create user accounts for RBAC and
ZRBAC.


User Identity Mapping

Scenario

IT Manager: Before you begin to configure the Windows shares, NFS exports, and S3 buckets, you must be familiar with how OneFS manages identity.

Your Challenge: The IT manager has tasked you to determine the on-
disk identity to configure on the cluster. Before configuring, you should
have an understanding of how identity management works. The
manager expects you to describe identity management, user tokens,
and on-disk identity.

Layers of Access

• Protocol: how the cluster is reached - SMB, NFS, S3, FTP, HTTP.
• Authentication: identifies users using NIS, local files, LDAP, or AD.
• Identity assignment: based on authentication or mediated in the cluster.
• Authorization: determines if the user is authorized to access files.

Cluster connectivity has four layers of interaction. The third layer is identity
assignment. The layer is straightforward and based on the results of the
authentication layer.

There are some cases that need identity mediation within the cluster, or where
roles are assigned within the cluster that are based on user identity.


The focus of this topic is identity assignment.

Network and AIMA Hierarchy

Authentication, identity management, and authorization, or AIMA, ties into the network hierarchy at different levels. The graphic shows how the AIMA hierarchy ties into the network hierarchy.

1. The user connects to a SmartConnect zone name, which is tied to a subnet and SSIP.
2. The SmartConnect zone name is mapped to an access zone. The access zone
contains the authentication providers, directory services, user mapping, ID
mapping, and generates user tokens.
3. The access zone has a base directory where file permissions and user identities
on disk are applied.
4. Windows shares, NFS exports, and S3 buckets are created per access zone.

Identity Management

The OneFS identity management maps the users and groups from separate
services. The mapping provides a single unified identity on a cluster and uniform
access control to files and directories, regardless of the incoming protocol. Click on
the "i" icons for a high-level information about the process.


Authentication providers and protocols are covered in other topics.

1: When the cluster receives an authentication request, lsassd searches the configured authentication sources for matches to an incoming identity. If the identity is verified, OneFS generates an access token. This is an internal token that reflects the OneFS identity management system. When a user attempts to access cluster resources, OneFS allows or denies access based on matching the identity, user, and group memberships to this same information on the file or folder.

2: OneFS uses the authentication providers to first verify a user identity, after which users are authorized to access cluster resources. The top layers are access protocols - NFS for UNIX clients, SMB for Windows clients, and FTP and HTTP for all.

3: Between the protocols and the lower-level service providers and their associated data repositories is the OneFS lsassd daemon. lsassd mediates between the authentication protocols that clients use and the authentication providers, which check their data repositories for user identity and file access.

Access Token Overview Video

The video describes the access token generation. See the student guide for a
transcript of the video.


Click to launch video.

URL: https://edutube.emc.com/Player.aspx?vno=MmSHIH1OvcP5nHsi0hd51g==&autoplay=true

When the cluster receives an authentication request, the lsassd daemon searches the configured authentication sources for matches to the incoming identity. If the identity is verified, OneFS generates an access token. Access tokens form the basis of who you are when performing actions on the cluster. Shown is the output of the user's mapping token. The token supplies the primary owner and group identities to use during file creation. For most protocols the access token is generated from the user name or from the authorization data that is received during authentication. Access tokens are also compared against permissions on an object during authorization checks. The access token includes all identity information for the session. OneFS exclusively uses the information in the token when determining if a user has access to a particular resource.

Access Token Generation

Access tokens form the basis of who you are when performing actions on the
cluster. The tokens supply the primary owner and group identities to use during file
creation. When the cluster builds an access token, it must begin by looking up
users in external directory services. By default, the cluster matches users with the
same name in different authentication providers and treats them as the same user.
The ID-mapping service populates the access token with the appropriate identifiers.
Finally, the on-disk identity is determined.

The graphic shows the steps: look up the user in external directory services (over SMB, AD is preferred and LDAP can be appended; over NFS, LDAP or NIS only), perform ID mapping (only accounts with the same name are matched to map IDs from different directory services), perform user mapping (accounts are matched to combine access tokens), and determine the on-disk identity.

Primary Identities

OneFS supports three primary identity types, UIDs, GIDs, and SIDs.

UIDs and GIDs from Local, NIS, LDAP providers range from 1 to 65k.

OneFS automatically allocates UIDs and GIDs from the range 1,000,000-
2,000,000.



1: The user identifier, or UID, is a 32-bit string that uniquely identifies users on the
cluster. UNIX-based systems use UIDs for identity management.

2: The security identifier, or SID, is a unique identifier that begins with the domain
identifier and ends with a 32-bit Relative Identifier (RID). Most SIDs take the form
S-1-5-21-<A>-<B>-<C>-<RID>, where <A>, <B>, and <C> are specific to a domain
or system, and <RID> denotes the object inside the domain. SID is the primary
identifier for users and groups in Active Directory.

3: The group identifier, or GID, for UNIX serves the same purpose for groups that
UID does for users.

Secondary Identities

Secondary identifiers are names, such as usernames. Different systems such as LDAP and Active Directory may not use the same naming convention to create object names. There are many variations to entering or displaying a name. Click on the highlighted icon to learn more.


1: Windows provides a single namespace for all objects that is not case-sensitive,
but specifies a prefix that targets the dees Active Directory domain. UNIX assumes
unique case-sensitive namespaces for users and groups. For example, Sera and
sera can represent different objects.

2: Kerberos and NFSv4 define principals that require all names to have a format similar to an email address. For example, given username sera and the domain dees.lab, dees\sera and sera@dees.lab are valid names for a single object in Active Directory. With OneFS, whenever providing a name as an identifier, the correct primary identifier of UID, GID, or SID is requested.

Multiple Identities

A concern for administrators when working in a multiprotocol environment is making sure that users are treated the same regardless of protocol access.

The graphic shows a user that has both a Windows and Linux account. Multiple
identity, or multiprotocol access, could include configuring mapping to ensure user
IDs correctly map to one another.

OneFS is RFC 2307 compliant. Enable RFC 2307 to simplify user mapping.


See the participant guide for information about mapping challenges and
considerations.

Mapping is done either through an external authentication provider or through user-mapping rules on the cluster. Another factor to consider is merging UIDs together on the cluster from different environments. Do not put UIDs from different environments and their authentication providers in the same access zone. When there are two identifiers for the same user, build the user token with all appropriate IDs. The final challenge in a multiprotocol environment is to appropriately apply the permissions. Verification may require some testing and experimenting on the administrator's part to fully understand what different permission settings mean when applied to a user.
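As a hedged sketch of an on-cluster user-mapping rule that joins a Windows and a UNIX account for the same person, the zone, domain, and user names are hypothetical and the rule syntax should be checked against the OneFS release in use:

# Merge the AD identity into the matching UNIX identity for the sales zone
isi zone zones modify sales --add-user-mapping-rules="dees\\sera &= sera"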

ID Mapper Database

User ID mapping provides a way to control permissions by specifying security identifiers, user identifiers, and group identifiers.

Click on the information icons to learn more.


1: The user mapper feature can apply rules to modify the user identity OneFS
uses, add supplemental user identities, and modify the group membership of a
user. The user mapping service combines user identities from different directory
services into a single access token. The mapping service then modifies it according
to the rules that you create.

2: OneFS uses the identifiers to check file or group ownership.


3: Mappings are stored in a cluster-distributed database that is called the ID mapper. The ID provider builds the ID mapper using the incoming source and target identity type: UID, GID, or SID. Only authoritative sources are used to build the ID mapper.

4: Each mapping is stored as a one-way relationship from source to destination. If a mapping is created, or exists, it has to map both ways. The two-way mappings are presented as two complementary one-way mappings in the database. When receiving an identity request, if a mapping exists between the specified source and the requested type, OneFS returns the mapping.

On-Disk Identity

The graphic shows the token of Windows user Sera with a UID as the on-disk identity. The on-disk identity setting identifies the preferred identity to store on disk and determines the identity stored in ACLs: SID or UID/GID.

OneFS uses an on-disk identity store for a single identity for users and groups.

On-disk identities enable administrators to choose whether to store the UNIX or Windows identity, or to let the system determine the correct identity to store.

Though OneFS creates a user token from information on other management systems, OneFS stores an authoritative version of the identity as the preferred on-disk identity.


On-Disk Identity Use Cases

The available on-disk identity types are Native, UNIX, and SID. The on-disk identity
is a global setting. Because most protocols require some level of mapping to
operate correctly, choose the preferred identity to store on-disk.

• Native is the default: it applies to most environments and assumes a mixed environment.
• UNIX is typical for a UNIX-only environment: it stores UIDs and GIDs. For an incoming SID, the SID is stored if no UID and GID are found.
• SID is typical for Windows-only environments: it stores SIDs.

The use case for the default Native setting is an environment that has NFS and
SMB client and application access. With the Native on-disk identity set, lsassd
attempts to locate the correct identity to store on disk by running through each ID-
mapping method. The preferred object to store is a real UNIX identifier. OneFS
uses a real UNIX identifier when found. If a user or group does not have a real
UNIX identifier (UID or GID), OneFS stores the real SID. Click on the highlighted
icon to learn more.

Troubleshooting Resources

For troubleshooting issues, first see: http://www.emc.com/collateral/TechnicalDocument/docu63138.pdf

For a list of all customer troubleshooting guides, see the OneFS Customer Troubleshooting Guides Info Hub.

Challenge

Your Challenge: It looks like you understand identity management. Now go to the lab and configure the on-disk identity type for the cluster.


Authorization

Scenario

IT Manager: The final topic to understand before creating the Windows shares, NFS exports, and S3 buckets is how OneFS handles permissions to the files and directories.

Your Challenge: The IT manager wants to ensure you can describe POSIX mode bits, Windows ACLs, and how OneFS handles both types of permissions.

Permissions Overview

Like identities, OneFS also stores permissions on disk. However, storing permissions is more complex than storing identities because each data access protocol uses its own permissions model. The individual files and folders that clients access over NFS or SMB can have UNIX permissions and Windows ACLs assigned.

Multi-protocol access is covered in greater detail in the PowerScale Advanced Administration course.



1: OneFS supports NFS and SMB protocols. It accesses the same directories and
files with different clients.

2: OneFS generates synthetic ACLs.

3: Authoritative permissions are stored on disk.

4: Clients have the same file access regardless of access protocol.

5: OneFS approximately maps ACLs and mode bits - no perfect one-to-one


mapping exists.

6: OneFS supports two types of authorization data on a file, access control lists, or
ACLs, and UNIX permissions, or POSIX mode bits.

Cross Protocol Access

ACL authoritative POSIX authoritative

Extensive DACLs - Read-Write-Execute User-


granular access Group-Others

Synthetic ACLs - limited


Approximated POSIX - only to 3 DACLs
for representation for LS on
export

To handle cross-protocol file access, OneFS stores an internal representation of


the permissions of a file system object, such as a directory or a file.


The internal representation, which can contain information from either the POSIX
mode bits or the ACLs, is based on RFC 3530.

Click each item for more information:


• State53
• Synthetic ACLs54
• Authority55

POSIX Overview

In a UNIX environment, you modify permissions for users/owners, groups, and


others to allow or deny file and directory access as needed. Set the permission
flags to grant permissions to each of these classes. Assuming the user is not root,
the class determines access to the requested file.

53A file can only be in one of the states at a time. That state is authoritative. The
actual permissions on the file are the same, regardless of the state.

54 OneFS uses the internal representation to generate a synthetic ACL, which


approximates the mode bits of a UNIX file for an SMB client. Because OneFS
derives the synthetic ACL from mode bits, it can express only as much permission
information as mode bits can and not more.

55 OneFS must store an authoritative version of the original file permissions for the
file sharing protocol and map the authoritative permissions for the other protocol.
OneFS must do so while maintaining the security settings for the file and meeting
user expectations for access. The result of the transformation preserves the
intended security settings on the files. The result also ensures that users and
applications can continue to access the files with the same behavior.



1: User or owner permission

2: Group permissions

3: Others or everyone permissions

4: Configure permission flags to grant read (r), write (w), and execute (x)
permissions to users, groups, and others in the form of permission triplets. The
classes are not cumulative. OneFS uses the first class that matches. Typically,
grant permissions in decreasing order, giving the highest permissions to the file
owner and the lowest to users who are not the owner or the owning group.

5: These permissions are saved in 16 bits, which are called mode bits.

6: The information in the upper 7 bits can also encode what the file can do,
although it has no bearing on file ownership. An example of such a setting would
be the “sticky bit.”

Important: OneFS does not support POSIX ACLs, which are


different from Windows ACLs.


POSIX in the WebUI

Only configurable as root

Triplets

9 mode bits

Triplet classes

Modify UNIX permissions in the WebUI on the File system > File system explorer page. Click
image to enlarge.

The graphic shows root user who is logged in and the /ifs/boston/hr
directory. Only root user can view and edit the owner and group of the object.

To assign read, write, or execute permissions to the specified account owner


(user), group members (group), and anyone (other), select or clear the mode bit
boxes. To apply setting changes, click Save changes.


chmod Command

The read bit adds 4 to the total, the write bit adds 2, and the execute bit adds 1.

Changing the permissions on a directory so that group members and all others can only read the
directory.

OneFS supports the standard UNIX tools for changing permissions: chmod and
chown. The change mode command, chmod, can change permissions of files and
directories. The man page for chmod documents all options.

Changes that are made using chmod can affect Windows ACLs.
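As a brief sketch, the octal notation simply adds those bit values per class; the directory below is the one from the earlier graphic:

# 7 = 4+2+1 (rwx) for the owner, 5 = 4+1 (r-x) for group and others
chmod 755 /ifs/boston/hr
# The same change expressed in symbolic form
chmod u=rwx,g=rx,o=rx /ifs/boston/hr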

chown Command

Only the root user can change the owner.

The output shows that penni is an LDAP user who is responsible for the content of the
/ifs/boston/hr directory.

The chown command is used to change ownership of a file. Changing the owner of
a file requires root user access. The basic syntax for chown is chown [-R]
newowner filenames. Using the -R option changes the ownership on the
subdirectories.

The chgrp command changes the group. View the man pages for command
definitions.
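A short sketch of both commands, using the LDAP user penni from the graphic and a hypothetical finance group:

# As root, change the owner of the directory and everything below it
chown -R penni /ifs/boston/hr
# Change the owning group
chgrp -R finance /ifs/boston/hr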

Windows ACLs Overview

Sales group ACL

Access control
elements

No permissions = no access

List of basic permissions
List of advanced permissions

On Windows host: Properties > Security tab > Advanced > Edit window

In a Windows environment, ACLs define file and directory access rights.

While you can apply permissions for individual users, Windows administrators
usually use groups to organize users, and then assign permissions to groups
instead of individual users.

Group memberships can cause a user to have several permissions to a folder or


file.

Windows includes many rights that you can assign individually or you can assign
rights that are bundled together as permissions. For example, the Read permission
includes the rights to read and execute a file while the Full Control permission
assigns all user rights. Full Control includes the right to change ownership and
change the assigned permissions of a file or folder.

When working with Windows, note the important rules that dictate the behavior of
Windows permissions. First, if a user has no permission assigned in an ACL, then
the user has no access to that file or folder. Second, permissions can be explicitly
assigned to a file or folder, or they can be inherited from the parent folder. By
default, when a file or folder is created, it inherits the permissions of the parent
folder. If a file or folder is moved, it retains the original permissions. On a Windows
client, if the check boxes in the Permissions dialog are not available, the
permissions are inherited. You can explicitly assign permissions, and explicit
permissions override inherited permissions. The last rule to remember is that Deny
permissions take precedence over Allow permissions. However, an explicit Allow
permission overrides an inherited Deny permission.

ACL Permission Policy Settings

OneFS has configurable ACL policies that manage permissions. You can change
the default ACL settings globally or individually, to best support the environment.
The global permissions policies change the behavior of permissions on the system.
For example, selecting UNIX only changes the individual ACL policies to
correspond with the global setting. The permissions settings of the cluster are
handled uniformly across the entire cluster, rather than by each access zone.

The graphic shows the WebUI Access > ACL policy settings page and how the
policy settings translate in the CLI command output. You can also use the isi
auth settings acls modify command to configure the ACL settings.


1: Use case: Permissions operate in a mixed UNIX and Windows environment.

2: Use case: Permissions operate with UNIX semantics - prevents ACL creation.

3: Use case: Permissions operate with Windows semantics - errors for UNIX
chmod.


4: Use case: Configure individual permission-policy settings. If a General ACL


Setting or Advanced ACL Setting needs changing, select the Custom
environment global setting.

Managing ACL Permissions

The output shows OneFS enhancements to the ls command.


1: The ls -le command shows actual permissions stored on disk and ACL from
security descriptor.

2: The ls -len command shows numerical (n) owner and group SID or UID/GID.

3: The ls -lean command also includes hidden (a) files and directories.

4: The long format includes file mode, number of links, owner, group, MAC label,
number of bytes, abbreviated month, day file last modified, hour file last modified,
minute file last modified, and the path name.

OneFS takes advantage of standard UNIX commands and has enhanced some
commands for specific use with OneFS.

The list directory contents, ls, command provides file and directory permissions
information, when using an SSH session to the cluster. PowerScale has added
specific options to enable reporting on ACLs and POSIX mode bits.
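A minimal sketch of the enhanced options, using an illustrative path:

# Show the ACL (synthetic or real) stored for the directory itself
ls -led /ifs/boston/hr
# Show numeric owner and group identifiers (UID/GID or SID)
ls -len /ifs/boston/hr
# Include hidden entries in the long listing
ls -lean /ifs/boston/hr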


Tip: The ls command options are all designed for long notation
format, which is displayed when the -l option is used. The -l
option also displays the actual permissions that are stored on disk.

Synthetic vs Advanced ACLs

Not stored anywhere - dynamically generated as needed and then discarded.

Translated POSIX to an ACL - POSIX authoritative

Advanced ACLs on the file - ACLs authoritative

Running the ls -le command shows the synthetic ACLs for files and directories (the -d flag lists
directory entries).

A Windows client processes only ACLs; it does not process UNIX permissions.
When viewing the permissions of a file from a Windows client, OneFS must
translate the UNIX permissions into an ACL.

Synthetic ACL is the name of the OneFS translation.

If a file has Windows-based ACLs (and not only UNIX permissions), OneFS
considers it to have advanced, or real ACLs56.

56Advanced ACLs display a plus (+) sign when listed using an ls –l, or as shown,
the ls -led command. POSIX mode bits are present when a file has a real ACL,
however these bits are for protocol compatibility and are not used for access
checks.
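A quick way to check for and, if necessary, remove a real ACL from the CLI; the path and mode are illustrative, and chmod -b discards the ACL, so use it with care:

# A "+" after the mode bits indicates a real (advanced) ACL
ls -led /ifs/dvt/win
# Replace the real ACL with plain POSIX mode bits
chmod -b 755 /ifs/dvt/win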


Permission Authority Video

The video discusses authentication and authorization. See the student guide for a
transcript of the video.

Click to launch video.

Link:
https://edutube.emc.com/html5/videoPlayer.htm?vno=EN8uMS3WuRwjY4Q0mIUa
Zw

Let us begin with a look at authentication and authorization. Whereas


authentication is verifying a user identity, authorization grants users or group
permission to access files and directories. Authentication is logging into a system
using credentials. When logged in, authorization is what gives the user different
levels of access. As an analogy, an employee badge with a security access code is
proof as to who the individual is. The badge grants access to the door to the
corporate building, thus the user has permission to enter. Share level permissions
work similarly in that users get access to the share before they can gain access to
any of the share directories. A user that has access to a directory (office) can then
access the files within the directory, providing permission to the file is given.

Access to a folder on an Isilon cluster is determined through two sets of permission


entries: POSIX mode bits and Windows ACLs. The graphic shows the /dvt folder
and two shares that are created underneath it. SMB access depends on both of
these permissions, and when the share permissions combine with file or directory
permissions, OneFS enforces the most restrictive set of permissions. For example,
if a user has no write permission to the /dvt share then the user cannot write to
the /linux and /win directories or files within the directories.

Two options are available when creating a share, Do not change existing
permissions and Apply Windows default ACLs. Understand the Apply
Windows default ACLs settings. This setting can destroy or at a minimum alter
explicitly defined directory permissions that are created on the share. For example,
carefully migrated permissions can change, creating more work and the potential of
causing data unavailability. Files and directories can be either POSIX authoritative
or ACLs authoritative.

A synthetic ACL does not exist on the file system and is not stored anywhere.
Instead, OneFS generates a synthetic ACL as needed, and then discards it. OneFS
creates the synthetic ACL in memory when a client that only understands ACLs,
such as Windows clients, queries the permissions on a file that only has POSIX
permissions.

With synthetic ACLs, POSIX mode bits are authoritative. POSIX mode bits handle
permissions in UNIX environments and govern the synthetic ACLs. Permissions
are applied to users, groups, and everyone, and allow or deny file and directory
access as needed. The read, write, and execute bits form the permissions triplets
for users, groups, and everyone. The mode bits can be modified using the WebUI
or the CLI standard UNIX tools such as chmod and chown. Since POSIX governs
the synthetic ACLs, changes made using chmod change the synthetic ACLs. For
example, running chmod 775 on the /ifs/dvt directory changes the mode bits to
read-write-execute for group, changing the synthetic ACL for the group. The same
behavior happens when making the access more restrictive, for example, running
chmod 755, changes the synthetic ACL to its corresponding permission. The
chmod behavior is different when ACLs are authoritative.


In the example, the directory /ifs/dvt/win has a real ACL. The POSIX mode bits are
775. Running chmod 755 does not change the POSIX mode bits since merging
775 with 755 gives the combined value of 775. Shown is an excerpt from the Isilon
cluster WebUI page that shows the different behaviors.

The first example shows that the share permission is everyone read-only although
the POSIX indicates read-write-execute. Windows users can write to the share
based on the synthetic ACLs. The second example shows POSIX at 755. Although
the ACL is set to a user with full control, the user cannot write to the share—POSIX
is authoritative.

The “+” indicates a real or native ACL that comes directly from Windows and is
applied to the file. Access control entries make up Windows ACLs. An administrator
can remove the real ACL permission using the chmod -b command. ACLs are
more complex than mode bits and can express a richer set of access rules.
However, not all POSIX mode bits can represent Windows ACLs any more than
Windows ACLs can represent POSIX mode bits.

Once a file is given an ACL, its previous POSIX mode bits are no longer
enforced—the ACL is authoritative. The first example shows a real ACL used,
POSIX set for 777, and the share permissions for the user set to read-only.
Although the POSIX show read-write-execute for everyone, the user cannot write
because of the ACL. In contrast, the second example shows the case where the
user can write.

Troubleshooting Resources

For troubleshooting issues, first see:
http://www.emc.com/collateral/TechnicalDocument/docu63137.pdf

For a list of the latest customer troubleshooting guides, see the OneFS Customer
Troubleshooting Guides Info Hub.


Challenge

Lab Assignment:
Log in to the cluster and verify the ACL policy setting.
• Permissions and ownership using the WebUI
• Permissions and ownership using the CLI
• ACL authoritative
• ACL policy setting


Configuring Client Access to Data


OneFS Caching

Scenario

IT Manager: The next thing that I would like to know more about is how
the PowerScale caches data.

Your Challenge: The IT manager wants you to describe caching in


OneFS and illustrate the caching process.

OneFS Caching Overview

Exploded view of a Gen 6 chassis.


1: Cache levels address the immediacy of information.

2: Accelerate access. The immediacy determines how the cache is refreshed, how
long the data is available, and how the data is emptied or flushed from cache.

3: Different cache levels to account for differing data immediacy. The cache levels
provide guidance to the immediacy of information from a client-side transaction
perspective.


4: Cache is temporary. Because cache is a copy of the metadata and user data,
any data that is contained in cache is temporary and can be discarded when no
longer needed.

Caching maintains a copy of the metadata57 and/or the user data blocks in a
location other than primary storage.

Cache in OneFS is divided into levels. Each level serves a specific purpose in read
and write transactions.

Cache Levels

OneFS caching consists of the client-side level 1, or L1, cache and write coalescer,
and the level 2, or L2, node-side storage cache.

Both L1 cache and L2 cache are managed and maintained in RAM. However,
OneFS is also capable of using SSDs as level 3, or L3 cache.

L3 - node-side cache - interacts with L2

L2 - node-side cache

L1 - client-side cache and write coalescer

L3 cache interacts with the L2 cache and L3 is contained on SSDs.

57
The copy is used to accelerate access to the data by placing the copy on a
medium with faster access than the drives.


Each cache has its own specialized purpose and works together to provide
performance improvements across the entire cluster.

L1 Cache

Client-side cache.

1: L1 cache allows all blocks for immediate read requests. Read cache is flushed
after a successful read transaction and write cache is flushed after a successful
write transaction. L1 cache collects the requested data from the L2 cache of the
nodes that contain the data.

L1 cache is the client-side cache. It is the buffer on the node that the client
connects, and is involved in any immediate client data transaction.

The write coalescer collects the write blocks and performs the additional process of
optimizing the write to disk.

Following a successful read transaction, the data in L1 cache is flushed or emptied


to provide space for other transactions.

L2 Cache

L2 cache.


1: L2 cache is also contained in the node RAM. It is fast and available to serve L1
cache read requests and take data handoffs from the write coalescer. L2 cache
interacts with the data that is contained on the specific node. The interactions
between the drive subsystem, the HDDs, and the SSDs on the node go through the
L2 cache for all read and write transactions.

2: Interacts with node drives and L3 cache.

L2 cache is the storage side or node-side buffer. L2 cache stores blocks from
previous read and write transactions.

L2 buffers write transactions, writes to disk, and prefetches anticipated blocks for
read requests.

L2 cache works with the journaling process.

When full, flushes according to the age of the data.

L3 Cache

L3 cache.


1: Extension of L2 cache.

2: SSD access is slower than access to RAM and is relatively slower than L2 cache
but faster than access to data on HDDs. L3 cache is an extension of the L2 read
cache functionality. Because SSDs are larger than RAM, SSDs can store more
cached metadata and user data blocks than RAM. When L3 cache becomes full
and new metadata or user data blocks are loaded into L3 cache, the oldest existing
blocks are flushed from L3 cache. Flushing is based on first in first out, or FIFO. L3
cache should be filled with blocks being rotated as node use requires.

L3 cache provides additional level of storage node-side cache using the SSDs as
read cache.

Good for random, read heavy workflows accessing the same data sets.

L3 cache has no prefetch.

Important: H-Series and A-Series have two SSD slots in each


node. In H-Series nodes you have the option to enable or disable
L3 cache. In A-Series nodes, you cannot disable L3 cache. As all
disks are SSDs in F-Series nodes, the L3 cache option does not
apply.


Caching Big Picture

The graphic shows an eight node cluster that is divided into two node pools with a
detailed view of one of the nodes.


1: Clients connect to L1 cache and the write coalescer. The L1 cache is connected
to the L2 cache on the other nodes and within the same node. The connection to
other nodes occurs over the internal network when data that is contained on those
nodes is required for read or write.

2: The L2 cache on the node connects to the disk storage on the same node. The
L3 cache is connected to the L2 cache and serves as a read-only buffer.

3: L3 extension from L2.

4: L1 talks to L2 on all cluster nodes.

5: Backend network.

Anatomy of a Read

When a client requests a file, the client-connected node uses the isi get
command to determine where the blocks that comprise the file are located.
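An administrator can run the same command from an SSH session to inspect a file's protection and layout; the path below is illustrative:

# Show the requested protection and layout policy for a file
isi get /ifs/data/file.txt
# Add detail, including which nodes and drives hold each stripe unit
isi get -DD /ifs/data/file.txt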


1: The first file inode is loaded, and the file blocks are read from disk on all other
nodes. If the data is not present in the L2 cache, data blocks are copied in the L2.
The blocks are sent from other nodes through the backend network.

2: If the data is already present in L2 cache, it is not loaded from the hard disks.
OneFS waits for the data blocks from the other nodes to arrive. Otherwise, the
node loads the data from the local hard disks, and then the file is reconstructed
in L1 cache and sent to the client.

3: Data blocks are then reconstructed in L1.

Asynchronous Write Anatomy

When a client requests a file write to the cluster, the client-connected node
receives and processes the file.



1: Cache writes until write coalescer is full, time limit is reached, or protocol
requests confirmation of delivery.

2: The client-connected node creates a write plan for the file including calculating
Forward Error Correction, or FEC. Data blocks assigned to the node are written to
the journal of that node. Data blocks assigned to other nodes travel through the
internal network to their L2 cache, and then to their journal.

At the same time, data blocks that are assigned to other nodes go to L2.

3: Once all nodes have all the data and FEC blocks that are journaled, a commit is
returned to the client. Data blocks assigned to client-connected node stay cached
in L2 for future reads, and then data is written onto the HDDs.

4: The Block Allocation Manager, or BAM, on the node that initiated a write
operation makes the layout decisions. The BAM decides on where best to write the
data blocks to ensure that the file is properly protected. Data is copied to journal.
To decide, the BAM Safe Write, or BSW, generates a write plan, which comprises
all the steps that are required to safely write the new data blocks across the
protection group.

5: Once the nodes have the data and FEC journaled, confirmation is sent to the
client-connected node and a commit is sent to the client.

6: Once complete, the BSW runs this write plan and guarantees its successful
completion. OneFS does not write files at less than the desired protection level.
Data is written to disks.


L3 Cache Settings

L3 cache is enabled by default for all new node pools that are added to a cluster.

L3 cache is either on or off and no other visible configuration settings are available.

File system > Storage pools > SmartPools settings. Enabling and disabling L3 at
the global level and at the node pool level.

1: Enabled on new node pool by default

2: L3 cache cannot be enabled if the node pool has no unprovisioned SSDs, and it
cannot coexist with other SSD strategies.

CLI Commands

The following commands are used to disable L3 cache globally and to enable it at
the node pool level.

• Global setting: isi storagepool settings modify --ssd-l3-
cache-default-enabled no
• Node Pool setting: isi storagepool nodepools modify <pool
name> --l3 yes
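To confirm the resulting configuration, the storage pool settings can be viewed from the CLI; the node pool name below is illustrative:

# Show the global default for L3 cache on new node pools
isi storagepool settings view
# Show whether L3 cache is enabled on a specific node pool
isi storagepool nodepools view h500_30tb_1.6tb-ssd_128gb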

L3 Cache Considerations

The following points are the L3 cache considerations:


• L3 cache cannot co-exist with other SSD strategies58 on the same node pool.
• SSDs in an L3 cache enabled node pool cannot participate as space used for
GNA.
• L3 acts as an extension of L2 cache regarding reads and writes59 on a node.
• You cannot enable L3 cache in all-flash nodes60.
• You cannot disable L3 cache in archive-type nodes (A200, A2000, NL410,
HD400).
• If changing the L3 cache behavior, migrating data and metadata from the SSDs
to HDDs can take hours.

CLI Cache Keys

The example shows the command to query historical statistics for cache. The first
command lists the keys that are related to cache.

A use case is running the command to determine the L3 hit and miss stats to
indicate whether the node pool needs more SSDs.

Also, you can use the isi_cache_stats and the isi_cache_stats -v


commands to view caching statistics.
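A minimal sketch of the statistics commands referenced above:

# Summary of L1, L2, and L3 hit and miss counters on the local node
isi_cache_stats
# Verbose, per-cache breakdown
isi_cache_stats -v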

58Such as metadata read acceleration, metadata read/write acceleration, and data


on SSD.

59The process of reading or writing, except for larger available cache, is


substantially unchanged.

60 On Gen 6x nodes all data drives are SSDs in the F800, F810, F600, and F200.


1: The command lists the keys that are related to cache. The number and
granularity of available keys is numerous. The keys give administrators insight to
the caching efficiency and can help isolate caching related issues.

2: The command shows the key to list the L1 metadata read hits for node 2, the
node that is connected over SSH.

Challenge

IT Manager:
Open participation question:
Question: What does L1, L2, and L3 cache provide?


SMB Shares

Scenario

IT Manager: The first thing that I would like you to configure is an SMB
share for the Windows users. I want you to create a single share for
now, and ensure that the Windows users have access.

Your Challenge: The IT manager has tasked you to create a share that
the Windows users can access. Before creating the shares, you must
know a few things. The manager wants to ensure you can describe
SMB Continuous Availability, enable SMB sharing, and create shares
and home directories.

Protocol Overview

Configure and create SMB shares for Windows users - created at the zone
level

Configure and create NFS exports for UNIX-type environments - created at


the zone level

Create a virtual rack for HDFS data-intensive distributed applications -


created at the zone level

Enable and configure FTP services - applies to the system

Enable and configure HTTP services - applies to the system

Configure and create Amazon S3 Buckets - created at the zone level

OneFS WebUI Protocols menu.

In addition to supporting common data-access protocols, such as SMB and NFS,


OneFS supports HDFS, FTP, HTTP, and S3.


Important: Previous versions of OneFS show the Object storage


as Swift.

SMB Failover Overview

Network or node failure.

SMB clients connect to a single node.

Old behavior: If the node goes down or a network interruption occurs, the client
needs to reconnect to the cluster manually.

SMB shares provide Windows clients network access to file system resources on
the cluster.

Too many disconnections prompt the clients to open help desk tickets with their
local IT department to determine the nature of the data unavailability.

Clients using SMB 1.0 and SMB 2.x use a time-out service.


SMB Continuous Availability Overview

SMB 3.0 clients with SWP receive a state change for fast recovery.

OneFS 8.0 and later: with CA enabled, clients reconnect automatically.

OneFS 8.0 and later support Continuous Availability, or CA.

CA61 enables a continuous workflow from the client side with no appearance of
disruption.

SMB 3.0 clients use Service Witness Protocol.

SMB Server-Side Copy

Server-side copy offloads copy operations to the server when the involvement of
the client is unnecessary.

File data no longer traverses the network for copy operations that the server can
perform.

The server-side copy feature is enabled by default. To disable the feature, use the
CLI.

61With CA enabled, the SMB session state is preserved so that an SMB 3.0 client
can transparently reconnect and resume I/O if the node it is connected to becomes
unavailable.


The graphic contrasts the two behaviors: with server-side copy enabled (the
default), the copy is performed on the cluster; with server-side copy disabled, the
copied data traverses the network.

Note: In OneFS, server-side copy is incompatible with the SMB CA.


If CA is enabled for a share and the client opens a persistent file
handle, server-side copy is automatically disabled for that file.

Enabling and Disabling SMB Service

To enable SMB, in the WebUI, go to the Protocols > Windows sharing (SMB) > SMB server
settings tab.


The SMB server settings page contains the global settings that determine how the
SMB file sharing service operates.

These settings include enabling or disabling support for the SMB service.

The SMB service is enabled by default.

A case62 for disabling the SMB service is when testing disaster readiness.
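A sketch of stopping and restoring the service from the CLI during such a test; the isi services syntax is an assumption to verify on your OneFS release:

# Halt SMB traffic on the production cluster
isi services smb disable
# Restore the service after failback
isi services smb enable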

Share Creation Video

This video demonstrates the process of creating an SMB share, mapping the
share, and verifying access. See the student guide for a transcript of the video.

62 The organization fails over the production cluster or directory to a remote site.
When the remote data is available and users write to the remote cluster, all SMB
traffic should be halted on the production site. Preventing writes on the production
site prevents data loss when the remote site is restored back to the production site.


Click to launch video.

Link:
https://edutube.emc.com/html5/videoPlayer.htm?vno=aMwue+nqUbFdOFoqKa98F
g

This demonstration shows the steps to configure SMB shares. Log in to the WebUI
as admin. The dashboard shows all the cluster nodes are healthy. The cluster is
running OneFS 8.2. Navigate to Protocols, Windows sharing. The SMB share will
be in the marketing access zone. Select Create an SMB share. The share I am
creating is called “general purpose”. I will add a description. The path
/ifs/marketing/GeneralPurpose does not exist so I will ensure it is created. This is a
Windows only share that did not previously exist so I will select Apply Windows
default ACLs. In the Members table I will give Everyone full control and then Create
share. The next step is to access the share from a Windows client. From the
Windows client, I will open Windows Explorer and map the share. Good. Now as a
simple test I am creating a text document. I will write some content and save. And
then I will open the document. This demonstration stepped through configuring,
mapping, and accessing an SMB share.


Share Creation

Settings Section

Choose the access zone before creating the share.

Share name - this is the name that the users will map to.

Path of the share - the base directory is /ifs/finance.

In this example, the "regulations" directory was not created before creating the
share. Automatically create directory - the default is unchecked.

The CLI equivalents are the isi smb shares create and isi smb shares modify commands.

Type the full path of the share in the path field, beginning with /ifs.

You can also browse to the share. If the directory does not exist, the Create SMB
share directory if it does not exist creates the required directory.
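A minimal CLI sketch matching the share from the video demonstration; the zone, path, and flag names are assumptions to verify against isi smb shares create --help:

# Create the share and its directory in the marketing access zone
isi smb shares create GeneralPurpose --path=/ifs/marketing/GeneralPurpose --zone=marketing --create-path --description="General purpose share"
# Confirm the share exists in that zone
isi smb shares list --zone=marketing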


Directory ACLs

Creating a new share for Windows users.

Select if adding a new share to an existing directory structure.

Use caution when applying the default ACL settings as it may overwrite existing
permissions in cases where the data has been migrated onto the cluster.

When a cluster is set up, the default permissions on /ifs may or may not be
appropriate for the permissions on your directories.


Summary63

Home Directory Provisioning

Use of variable expansion

Each access zone has a path to


"home" automatically

Home directory automatically created-


/ifs/finance/home/<username>

OneFS supports the automatic creation of SMB home directory paths for users.

631) If adding a share to an existing directory structure, you likely do not want to
change the ACL, so select the Do not change existing permissions. 2) If creating a
share for a new directory, you will likely be changing permissions to the ACL to
grant Windows users rights to perform operations. Set the Apply Windows default
ACLs and then once the share is created, go into the Windows Security tab and
assign permissions to users as needed.


Using variable expansion, user home directories are automatically provisioned.

Variables:
• %L64
• %D65
• %U66
• %Z67

64 %L expands to hostname of the cluster, in lowercase.

65 %D expands to the Netbios domain name.

66 %U expands to user name.

67%Z expands to the access zone name. If multiple zones are activated, this
variable is useful for differentiating users in separate zones.
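A sketch of a home-directory share that uses these expansion variables; the zone and flag names are assumptions to verify with isi smb shares create --help:

# %U expands to the connecting user name; the directory is created on first connection
isi smb shares create home --path=/ifs/finance/home/%U --zone=finance --allow-variable-expansion=yes --auto-create-directory=yes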


Member, File Filter, and Advanced Settings

SMB 3.0 clients automatically failover


to another node when a network or
node fails

File filtering for the share can be enabled


to allow or deny file writes

The graphic shows the permissions that are changed to Full control.

If needed, administrators can apply the Members68 permissions.

Adjustments made to Advanced settings override the default settings for this
share only.

68The default permissions configuration is read-only access for the Everyone


account. Edit or Add member to enable users and groups to write to the share.


You can make access zone global changes to the default values in the Default
share settings tab. Changing the default share settings is not recommended.

In the CLI, you can create shares using the isi smb shares create
command. You can also use the isi smb shares modify to edit a share and
isi smb shares list to view the current Windows shares on a cluster.

The share name can contain up to 80 characters, and can only contain
alphanumeric characters, hyphens, and spaces. The description field contains
basic information about the share. There is a 255-character limit. Description is
optional but is helpful when managing multiple shares.

Example for directory ACLs: Say that /ifs/eng is a new directory that was created
using the CLI, and Windows users need to create and delete files in the directory.
When creating the share, if Do not change existing permissions is set and users
then attempt to save files to the share, an access denied error occurs because
Everyone has read access. Even as an administrator, you cannot modify the
security tab of the directory to add Windows users because the mode bits limit
access to only root. As another example, /ifs/eng is an NFS export and you
explicitly want the /ifs/eng mode bit rights set based on UNIX client application
requirements. Selecting the Apply Windows default ACLs option as shown in the
graphic overwrites the original ACLs, which can break the application. Thus, there
is risk associated with using Apply Windows default ACLs with an existing
directory.

Example for home directories: To create a share that automatically redirects users
to their home directories, select the Allow variable expansion box. To automatically
create a directory for the user, check the Auto-create directories box. You may also
set the appropriate flags by using the isi smb command in the command-line
interface. In the graphic, 1) set up user access to their home directory by mapping
to /ifs/finance/home. Users are automatically redirected to their home directory
/ifs/finance/home/. 2) Expansion variables are used to automatically create a path
where the users store the home directory files. After the creation, users connecting
to this share are automatically redirected to their home directory according to the
used path variables. The access zone is implied, because all access for Active
Directory is done per access zone and each access zone has its own home
directory path.


Challenge

Lab Assignment: Now log in to the cluster and create home directories
and a general purpose share.


NFS Exports

Scenario

IT Manager: Now that you have the Windows users able to access the
cluster, you must configure access for the Linux users. I want you to create an
export that the Linux users can access. Have a good understanding of NFS
exports before implementing them in the lab.

Your Challenge: The IT manager has tasked you to create an NFS


export and verify that clients can access the export. Make sure you can
discuss NFS, create exports, and mount the exports.

NFS Overview


1: NFS relies upon remote procedure call (RPC) for client authentication and port
mapping.

2: NFS is native to UNIX clients. You can configure NFS to enable UNIX clients to
access content stored on PowerScale clusters.

3: OneFS supports NFSv3 and NFSv4.

OneFS supports NFS protocol versions 3, 4, and Kerberos authentication.


Exporting a directory enables accessing the data that is hosted on the cluster.

NFS is enabled by default in the cluster.

Click here to learn more about connectivity.

NFSv4 Continuous Availability

OneFS 8.0 and later: on a node or network issue, NFS clients that use dynamic IP
address pools automatically fail over to another node.

CA is enabled by default.

Clients transparently fail over to another node when a network or node fails.

No manual intervention on the client side.

Enabling and Disabling NFS

To enable and disable NFS using the WebUI, click Protocols > UNIX sharing (NFS)
> Global settings tab.



1: Export settings are on the access zone level.

If changing a value in the Export settings, that value changes for all NFS exports in
the access zone. Modifying the access zone default values is not recommended.
You can change the settings for individual NFS exports as you create them, or edit
the settings for individual exports as needed.

2: Enabling NFSv4 requires entering the domain in the Zone settings page.

3: NFSv3 enabled by default

4: NFSv4 disabled by default

5: The NFS service is enabled by default.

If NFSv4 is enabled, specify the name for the NFSv4 domain in the NFSv4 domain
field on the Zone setting page.

You can customize the user/group mappings, and the security types (UNIX and/or
Kerberos), and other advanced NFS settings.

The NFS global settings determine how the NFS file sharing service operates. The
settings include enabling or disabling support for different versions of NFS.
Enabling NFSv4 is nondisruptive, and it runs concurrently with NFSv3. Enabling
NFSv4 does not impact any existing NFSv3 clients.
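A sketch of the equivalent CLI steps; the flag names are assumptions to verify against isi nfs settings --help:

# Enable NFSv4 alongside NFSv3
isi nfs settings global modify --nfsv4-enabled=yes
# Set the NFSv4 domain for an access zone
isi nfs settings zone modify --nfsv4-domain=dees.lab --zone=finance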


The UNIX sharing (NFS) page also provides the option to reload the cached NFS
exports configuration to ensure that any DNS or NIS changes take effect
immediately.

NFS Export Creation - Looking Closer at Settings

Create and manage NFS exports using either the WebUI or the CLI. For the CLI,
use the isi nfs exports command.

Export per access zone.

Protocols > UNIX sharing (NFS) > NFS exports page, Create an export option.
Highlighted are the paths to export.

1: Add multiple directory paths. A network hostname, an IP address, a subnet, or a


netgroup name can be used for reference.

2: Description - 255 characters limit.

3: Specifying no clients allows all clients on the network access to the export.

4: Rule order of precedence: Root clients, always read/write clients, Always read-
only clients, and then clients.


You can enter a client by host name, IPv4 or IPv6 address, subnet, or netgroup.
Client fields:
• Clients - allowed access to the export
• Always read-write clients - allowed read/write access regardless of export's
access restriction setting
• Always read-only clients - allowed read-only access regardless of export's
access restriction setting
• Root clients - map as root

OneFS can have multiple exports with different rules that apply the same directory.
A network hostname, an IP address, a subnet, or a netgroup name can be used for
reference. The same export settings and rules that are created here apply to all the
listed directory paths. If no clients are listed in any entries, no client restrictions
apply to attempted mounts.

When multiple exports are created for the same path, the more specific rule takes
precedence. For example, the 192.168.3 subnet can have read-only access while
the 192.168.3.3 client has read/write access.
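A minimal sketch of creating and mounting such an export; the paths, zone, addresses, and flag names are illustrative and should be verified with isi nfs exports create --help:

# Subnet gets read-only access, one host gets read/write access
isi nfs exports create /ifs/finance/nfs --zone=finance --read-only-clients=192.168.3.0/24 --read-write-clients=192.168.3.3
# On a Linux client, mount the export through a SmartConnect name
mount -t nfs cluster.dees.lab:/ifs/finance/nfs /mnt/finance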


NFS Export Creation - Looking Closer at Permissions

Restrict and allow permissions

Allow mounting sub directories below


the path to be the mounted path

You can configure customized mapping


Permissions settings can restrict access to read-only and enable mount access to
subdirectories. Other export settings are user mappings.69

The Advanced settings require advanced knowledge.70

NFS Considerations

Following are considerations for NFS:

• NFSv3 and NFSv4 CA clients should use dynamic IP address pools.


• With OneFS 8.0 and later, PowerScale supports up to 40 K exports.

Challenge

Lab Assignment: Now that you have learned how to create an export,
you are ready to create the NFS directory, export the directory, and
mount it to the Centos client.

69The "root user mapping" default is to map root users to nobody, and group is
none. The default Security type is "UNIX (system)". Scrolling down in the "Create
an export" window shows the "Advanced settings".

70 Uninformed changes to these advanced settings could result in operational


failures. Ensure that you understand the consequences of your changes before
saving. Any adjustments made to these settings override the default settings for
this export only. While it is not recommended, any changes made to the default
values are done through the "Export settings" tab. "Advanced settings" are
performance settings, client compatibility settings, and export behavior settings.


S3 Buckets

Scenario

IT Manager: We are considering using the PowerScale cluster to store


and share our S3 content. I want you to see how we can configure S3
on the cluster should we decide to implement the functionality.

Your Challenge: The IT manager has tasked you to create an S3


bucket. The manager wants you to describe the S3 integration with
PowerScale.

S3 Overview

OneFS namespace
Objects stored in buckets

Multi-protocol access to objects

Amazon Simple Storage Service (S3) is an AWS service that provides object
storage through a web interface. OneFS 9.0.x and later support S3 as a tier 1
protocol. OneFS S3 value:


• Multi-protocol access71
• Multi-tenancy - access zone aware
• Latency and IOPS equivalent to other OneFS protocols
• Evolve the PowerScale data lake story:
• Single namespace and multi-protocol access
• Concurrent access72 to objects and files
• Interoperability with OneFS data services such as snapshots, WORM, quotas,
SyncIQ, and others

Implementation - Creating an S3 Bucket

Enable S3 Service

Enable the service. By default the service is cleared and disabled.

CLI command to change the port settings:


isi s3 settings global modify
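A sketch of enabling the service and adjusting the port from the CLI; the service and flag names are assumptions to confirm on your release:

# Enable the S3 service
isi services s3 enable
# Change the HTTPS port used by the S3 listener
isi s3 settings global modify --https-port=9021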

71
Support interoperability between all OneFS supported protocols. File system
mapping: Object to file, object to directory, and bucket to base directory.

72 Supports locking and access control semantics.


Default ports

WebUI Protocols > Object storage (S3) page, Global settings tab. Click the image to enlarge.

Zone Settings

Configure the root path.

CLI command to set the root path:


isi s3 settings zone modify

Use if virtual host style is needed. For instance,


the base domain for bucket3.engineering.dees.lab
is engineering.dees.lab

Click the image to enlarge.

Object storage (S3) Page

You can create buckets using the Object storage (S3) page or using the isi s3
buckets create command.


Can create S3 buckets per access zone

Two buckets created with root as the


owner

WebUI Protocols > Object storage (S3) page. Click the image to enlarge.

Create Bucket

The example shows creating a bucket.

CLI command to create the bucket and add ACL:


isi s3 buckets create bucket3 /ifs/engineering/bucket4 --
create-path --owner root --acls
name=dees\\john,type=user,perm=READ --zone=System


Bucket names can use only the characters a-z, 0-9, and '-'. The bucket name is
not editable after creation. The owner is not editable after creation - you can
choose a user from the configured authentication providers.

S3 ACLs enable you to manage access to buckets and objects.

Click the image to enlarge.

Complete Bucket Create

The graphic shows the Create a Bucket fields completed and the command to view
a created bucket.

AD user with read permissions

Can add more users and ACLs

Click the image to enlarge.

S3 Bucket Table

The Buckets tab shows the created buckets in a list view.


CLI command to list the buckets:


isi s3 buckets list

Click the image to enlarge.

Key Management

A key must be created to authenticate the access. Key management from the
WebUI facilitates generation of secret keys and access IDs. The example shows
key creation using the CLI.

Create a key for the bucket


owner

Access ID and Secret Key are


needed to authenticate

Can view the created keys

Click the image to enlarge.


Accessing the S3 Bucket

The example shows using an Amazon S3 browser to connect to the configured


buckets on the PowerScale cluster.
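Any S3-compatible client works the same way. A sketch using the AWS CLI, where the endpoint name, port, and bucket are illustrative and the credentials come from the key management step:

# List buckets visible to the configured access ID and secret key
aws s3 ls --endpoint-url https://cluster.dees.lab:9021
# Upload an object into bucket3
aws s3 cp report.txt s3://bucket3/report.txt --endpoint-url https://cluster.dees.lab:9021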

Considerations

Listed are areas to consider:


• OneFS S3 is not a full AWS implementation - not a 1:1 with AWS.
• User can have two secret keys for transient period, not forever.
• No sorting or searching buckets in the WebUI
• File system ACLs are checked even if bucket ACL allows
• 16 TB object size limitation
• Use SyncIQ to replicate S3 buckets
• Use SmartPools to tier S3 buckets
• Use SnapshotIQ to version S3 buckets

Services

• /var/log/s3.log for general errors


• /var/log/lwsmd.log for problems with service management, such as service
startup issues
• CELOG - logs service start failure, user identity query failure, SBT bucket ID
Invalid, and SBT full
• SRS data includes buckets, log-level, global settings, zone settings, and
components of the service registry
• You can use isi statistics and isi performance for S3 metrics

Challenge

Lab Assignment: Now log in to the cluster and create an S3 bucket.


Add objects to the bucket and access the data over SMB.


HDFS and Swift

Hadoop Introduction

Software library framework

Allows for distributed processing of large data


sets

Analyzes data across groups of computers


using simple programming models

Tool of choice for big data analytics

Requires license

The Hadoop Distributed File System (HDFS) protocol enables a cluster to work
with Apache Hadoop, a framework for data-intensive distributed applications.

In a typical enterprise environment, Hadoop analyzes existing data to improve


processes and performance depending on the business model.

Click to view the entire HDFS topic.

Swift Overview

OneFS supports Swift, an object storage interface compatible with the OpenStack
Swift 1.0 API. Swift is a hybrid between the two storage types, storing Swift
metadata as an alternative data stream. Through Swift, users can access file-
based data that is stored on the cluster as objects. The Swift API is implemented
as Representational State Transfer, or REST, web services over HTTP or HTTPS.
Since the Swift API is considered a protocol, content and metadata can be ingested
as objects and concurrently accessed through protocols that are configured on the
cluster. The cluster must be licensed to support Swift.

Swift enables storage consolidation for applications regardless of protocol, which


can help eliminate storage silos. In environments with petabytes of unstructured
data, Swift can automate the collection, storage, and management of the data, such as in a
data lake, for later analysis. Swift can be used to automate data-processing
applications to store objects on an Isilon cluster and analyze the data with Hadoop
through the OneFS HDFS. Swift benefits include secure multitenancy for
applications through access zones while protecting the data with capabilities such
as authentication, access control, and identity management. Manage data through
enterprise storage features such as deduplication, replication, tiering, performance
monitoring, snapshots, and NDMP backups. Swift balances the workload across
the cluster nodes through SmartConnect and stores object data more efficiently
with FEC instead of data replication.

Automate data-processing applications

Swift client
access

Clients - SMB, NFS,


HDFS access

Dissimilar protocol storage


consolidation

Click to see the entire Swift topic.


Foundations of Data Protection and Data Layout


File Striping

Scenario

IT Manager: I am not sure how the cluster does striping. I want you to
do some research and let me know how the operating system stripes a
file.

Your Challenge: The IT manager wants you to describe how files are
broken up for file stripes and diagram the high-level file striping steps.

Introduction to File Striping

Four node Gen 6 example (+2d:1n).

OneFS protects files as the data is being written. Striping protects the cluster data
and improves performance. To understand OneFS data protection, the first step is
grasping the concept of data and forward error correction or FEC stripes.


• File Stripes - files are logically segmented into 128 KB stripe units to calculate
protection
• FEC stripe unit - FEC stripe unit is the calculated piece of data protection
• Data stripe units + FEC stripe units = stripe width. In the graphic, the stripe
width is 12 (eight data [1 MB file data] + 4 FEC).
• 16 data stripe units + 4 FEC = Maximum Stripe width of 20.
• 16 data stripe units = 2 MB. Files larger than 2 MB have multiple data stripe
units.

Data and FEC Stripe Units

The data stripe units and protection stripe units are calculated for each file stripe by
the Block Allocation Manager (BAM) process73.

F200 example with +1n protection.


73The BAM process calculates 128-KB FEC stripe units to meet the protection
level for each file stripe. The higher the protection level, the more FEC stripe units
are calculated.


1: A file is divided into 128-KB data stripe units.

2: Each data stripe unit consists of sixteen 8K blocks.

16 X 8K = 128-KB

3: The protection is calculated based on the requested protection level for each file
stripe using the data stripe units that are assigned to that file stripe.

4: The combined 128-KB stripe units are called the Stripe Width. A single file stripe
width can contain up to 16, 128-KB data stripe units for a maximum size of 2 MB as
the files data portion. A large file has thousands of file stripes per file that is
distributed across the node pool.

File Striping Steps

The steps show a simple example of the write process. The client saves a file to
the node it is connected to. The file is divided into data stripe units. The data stripe
units are assembled into the maximum stripe widths for the file. FEC stripe units
are calculated to meet the Requested Protection level. Then the data and FEC
stripe units are striped across nodes.

Step 1

OneFS stripes the data stripe units and FEC stripe units across the node pools.
Some protection schemes74 use more than one drive per node.

74 OneFS uses advanced data layout algorithms to determine data layout for maximum efficiency and performance. Data is evenly distributed across nodes in the node pool as it is written. The system can continuously reallocate where the data is stored and make storage space more usable and efficient. Depending on the file size and the stripe width, as the cluster size increases, the system stores large files more efficiently. Every disk within each node is assigned both a unique GUID (global unique identifier) and logical drive number. The disks are subdivided into 32-MB cylinder groups that are composed of 8-KB blocks. Each cylinder group is responsible for tracking, using a bitmap, whether its blocks are used for data, inodes, or other metadata constructs. The combination of node number, logical drive number, and block offset makes up the block or inode address, which the Block Allocation Manager controls.


The client saves a file to the node it is connected to.


Graphic shows Gen 6 cluster with a simple example of the write process.

Step 2

If the file is greater than 128 KB, then the file is divided into data stripe units.



The node divides the file into data stripe units


Step 3

The node that the client connects to is the node that performs the FEC calculation.

The node calculates the FEC stripe units



Step 4

The data stripe units are assembled into the maximum stripe width for the file. In this example, the configured protection level is N+1n75.

Stripe width is assembled

Step 5

Depending on the write pattern, the data and FEC stripes might be written to one drive per node or two drives per node. The important takeaway is that files are segmented into stripes of data, FEC is calculated, and this data is distributed across the cluster.

75 one disk per node/one FEC


Data and FEC stripe units are striped across nodes

Considerations: File Striping

Listed are areas to consider when discussing file striping.


• The maximum of 16 data stripe units per file stripe means the maximum file
portion in a file stripe is 2 MB (16 x 128 KB).
• If a file does not fill the 128 KB stripe unit, the stripe unit is not padded (the
extra capacity is usable by the cluster).
• Files less than 128 KB are mirrored - not erasure coded. For example, a 100 KB
file with 2d:1n protection has a 3x mirror.
• The file size and protection level determine the capacity efficiency.
• At 80% capacity consumption, the organization should begin the process of
adding more nodes to prevent the cluster from going beyond 90%. Do not
exceed 90% capacity consumption.


Challenge

IT Manager:
Open participation questions:
Question: What does OneFS consider a small file and how are
small files put on disks for protection?


Data Protection

Scenario

IT Manager: The manager gestures to the technical document and then


looks at you, "All this information is giving me a headache. I am going to
need your help. I do not have the time to read and understand all the
intricacies of data protection. Configure the proper data protection and
then let me know how it works and how it is configured."

Your Challenge: The IT manager wants you to describe data protection


levels in OneFS, define stripes and stripe units, and discuss the different
protection levels.

OneFS Data Protection

Data protection is one of the variables that are used to determine how data is laid
out. OneFS is designed to withstand multiple simultaneous component failures
while still affording access to the entire file system and dataset.
• OneFS uses the Reed-Solomon algorithm
• The data can be protected up to an N+4n scheme
• In OneFS, protection is calculated per individual file

Important: Files smaller than 128 KB are treated as small files.


Due to how OneFS applies protection, small files are mirrored.


Data Protection Improvements

In Gen 6 nodes, data protection and efficiency focus on:


• Mirrored Journal
• Smaller neighborhood76

In Gen 6.5 nodes, the journal is stored on an NVDIMM that is battery protected.

Data Protection Terms

N+Mn

N+Mn is the primary protection level in OneFS.

N + Mn: data stripe units (N77) + the number of simultaneous drive or node failures tolerated (M78).

76 Smaller neighborhoods improve reliability: the fewer devices you have within a neighborhood, the less chance that multiple devices will simultaneously fail.

77 The “N” is the number of data stripes.

78The M value represents the number of simultaneous tolerable drive failures on


separate nodes without data loss. It also represents the number of FEC stripe units
per protection stripe.


• Mn79
• N+Mn80
• N=M81
• N>M82

The number of sustainable drive failures is per disk pool. Multiple drive failures on a single node are equivalent to a single node failure. The drive loss protection level is applied per disk pool.

Protection Level   Drive Failures   Node Failures   Minimum Node Pool Size      Maximum Stripe Width
N+1n               1                1               3 nodes (2 data + 1 FEC)    17 (16 data + 1 FEC)
N+2n               2                2               5 nodes (3 data + 2 FEC)    18 (16 data + 2 FEC)
N+3n               3                3               7 nodes (4 data + 3 FEC)    19 (16 data + 3 FEC)
N+4n               4                4               9 nodes (5 data + 4 FEC)    20 (16 data + 4 FEC)

79 The “Mn” is the number of simultaneous drive or node failures that can be
tolerated without data loss.

80 The available N+Mn Requested Protection levels are plus one, two, three, or four
“n” (+1n, +2n, +3n, and +4n). With N+Mn protection, only one stripe unit is written
to a single drive on the node.

81 If N equals M, the protection overhead is 50 percent. For example, with N+2n, a


file size 256 KB has a 50% protection overhead (256 KB = 2 stripe units).

82N must be greater than M to gain efficiency from the data protection. If N is less
than M, the protection results in a level of FEC calculated mirroring.



N+Md:Bn Protection

N + Md : Bn

Data stripes + Number of drives and number of nodes

The “d” is the number of drives and “n” is the number of nodes. So N+3d:1n reads
as N+3 drives or 1 node.

Unlike N+Mn, N+Md:Bn has different values for the number of drive losses and node losses that are tolerated before data loss may occur. When a node loss occurs, multiple stripe units are unavailable from each protection stripe, so a single node loss reaches the tolerable drive loss limit.
• M83
• d84
• Colon (:)85

83In this protection level, M is the number of drives per node onto which a stripe
unit is written.

84 The number of drives.

85 The: (colon) represents an “or” conjunction.


• B86
• n87

With Gen 6x, for better reliability, better efficiency, and simplified protection, using
+2d:1n, +3d:1n1d, or +4d:2n is recommended.

Minimum number of nodes in a node pool.88

Actual Protection Nomenclature

Actual protection is represented differently than requested protection. The table shows the representation for the requested protection and the actual protection.

N is replaced in the actual protection with the number of data stripe units for each
protection stripe. If there is no / in the output, it implies a single drive per node.
Mirrored file protection is represented as 2x to 8x in the output.

86 The B value represents the number of tolerated node losses without data loss.

87 “n” is the number of nodes.

88Remember that Gen 6 requires a minimum of 4 nodes of the same type, so


where the minimum number of nodes of three is indicated, for Gen 6 this is four.
Gen 6.5 requires a minimum of 3 nodes of the same type.


The graphic shows the actual protection on a file in the output of the isi get command, which outputs all files in a directory or information for a single file. The output displays the number of data stripe units per stripe plus the number of FEC stripe units per stripe, divided by the number of drives per node the stripe is written to (for example, N+2/2). If there is no /<#> in the output, one drive per node is implied.

Overhead Protection levels

The protection overhead for each protection level depends on the file size and the
number of nodes in the cluster. The percentage of protection overhead declines as
the cluster gets larger. In general, N+1n protection has a protection overhead equal
to the capacity of one node, N+2n to the capacity of two nodes, N+3n to the
capacity of three nodes, and so on.

Data mirroring requires significant storage overhead and may not always be the
best data-protection method. Example89

89 If you enable 3x mirroring, the specified content is explicitly duplicated three


times on the cluster; depending on the amount of content being mirrored, this can
require a significant amount of capacity.


Legend: Blue - 50% efficient protection; Yellow - Mirrored; Bold - Maximum efficiency reached.

The table shows the relative protection overhead associated with each FEC requested protection
level. Indicators include when the FEC protection would result in mirroring.

MTTDL

MTTDL (mean time to data loss) deals with how long you can go without losing data. MTTDL is used to calculate the OneFS suggested protection.
• Accommodate failures90
• Disk pools91
• MTBF92

90Because there are so many disk drives in a large PowerScale installation, it is


common for a drive to be down at one time or another. Where other systems try to
harden against failures, PowerScale accommodates them. OneFS expects that any
device could fail at any point in time.

91Disk pools improve MTTDL because they create more failure domains, improving
the statistical likelihood of tolerating failures over the lifetime of the equipment.

92 Mean Time Between Failures (MTBF) refers to individual component failure. PowerScale subscribes to the "all devices do fail" philosophy (MTTDL), whereas MTBF is a single-component view of reliability. MTTDL is a better measure of what customers care about.


Quorum

For the cluster to properly function and accept data writes, a quorum of nodes must be active and responding. (A short sketch of the quorum arithmetic follows the list below.)

• Greater than 50% available93


• No quorum - no writes94
• Protection level to minimum number of nodes95
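As a small illustration of the quorum rule (more than half of the nodes must be available), the following Python sketch computes the quorum size; the node counts are taken from the notes on this page, and the sketch is not OneFS code.

# Illustrative sketch (not OneFS code): a quorum is more than half of the nodes.
def quorum_size(node_count):
    """Smallest number of active nodes that is greater than 50% of the cluster."""
    return node_count // 2 + 1

print(quorum_size(8))   # 5 - an eight-node cluster requires a five-node quorum
print(quorum_size(4))   # 3 - losing one node of four still leaves a quorum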

N+2n vs. N+2d:1n Data Protection

There are six data stripe units to write a 768-KB file. The desired protection
includes the ability to sustain the loss of two hard drives.


93For a quorum, more than half the nodes must be available over the internal,
backend network to allow writes. An eight-node Gen 6 cluster, for example,
requires a five-node quorum.

94 If there is no node quorum, reads may occur, depending upon where the data
lies on the cluster but for the safety of new data, no new information will be written
to the cluster. So, if a cluster loses its quorum, the OneFS file system becomes
read-only and will allow clients to access data but not to write to the cluster.

95 Each protection level requires a minimum number of nodes. For example,


N+2d:1n needs a minimum of four Gen 6 nodes. Why? You can lose one node and
still have three nodes up and running, greater than 50%. You must keep quorum to
keep the cluster writeable.



1: Using N+2n protection, the 768-KB file will be placed into three separate data
stripes, each with two protection stripe units. Six protection stripe units are required
to deliver the requested protection level for the six data stripe units. The protection
overhead is 50 percent.

2: Using N+2d:1n protection the same 768-KB file requires one data stripe, two
drives wide per node and only two protection stripe units. The eight stripe units are
written to two different drives per node. The protection overhead is the same as the
eight node cluster at 25 percent.

3: With an eight-node cluster, two FEC stripe units would be calculated on the six data stripe units using an N+2n protection level. The protection overhead in this case is 25 percent. (The sketch below reproduces the arithmetic for all three cases.)
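The arithmetic behind the three cases can be reproduced with a short Python sketch. It assumes 128 KB stripe units, a 16-data-unit cap per stripe, and M FEC stripe units per stripe; the four-node pool in the first two cases is inferred from the stripe counts described above, so treat the node and drive counts as illustrative assumptions rather than configuration guidance. This is not OneFS code.

# Illustrative sketch (not OneFS code): protection overhead for a 768 KB file.
STRIPE_UNIT_KB = 128
MAX_DATA_UNITS = 16

def overhead(file_kb, nodes, drives_per_node, m_fec):
    """Return (data_units, fec_units, overhead_percent) for one file."""
    data_units = -(-file_kb // STRIPE_UNIT_KB)                    # ceiling division
    # Stripe width is limited by the drives available across the pool and the 16-unit cap.
    width = min(MAX_DATA_UNITS + m_fec, nodes * drives_per_node)
    data_per_stripe = width - m_fec
    stripes = -(-data_units // data_per_stripe)                   # ceiling division
    fec_units = stripes * m_fec
    return data_units, fec_units, round(100 * fec_units / (data_units + fec_units))

print(overhead(768, nodes=4, drives_per_node=1, m_fec=2))  # N+2n on 4 nodes  -> (6, 6, 50)
print(overhead(768, nodes=4, drives_per_node=2, m_fec=2))  # N+2d:1n          -> (6, 2, 25)
print(overhead(768, nodes=8, drives_per_node=1, m_fec=2))  # N+2n on 8 nodes  -> (6, 2, 25)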


Mirrored Data Protection

Mirroring is used to protect the file metadata and some system files that exist under
/ifs in hidden directories. Mirroring can be explicitly96 set as the requested
protection level in all available locations.

Use Case97

2x to 8x mirroring: the original file plus 1 to 7 copies.
• The protection blocks are copies of the original set of data blocks.
• 2x to 8x mirror settings; the number indicates the total number of data instances.
• The protection is explicitly set and the required mirroring is selected.
• Mirroring is applied as the actual protection for other Requested Protection levels when conditions require it.

96 Mirroring is set as the actual protection on a file even though another requested
protection level is specified under certain conditions. If the files are small, the FEC
protection for the file results in a mirroring. The loss protection requirements of the
requested protection determine the number of mirrored copies. Mirroring is also
used if the node pool is not large enough to support the requested protection level.
For example, five nodes in a node pool with N+3n Requested Protection, saves the
file at 4X mirror level, the actual protection.

97 One particular use case is where the system is used to only store small files. A
file of 128 KB or less is considered a small file. Some workflows store millions of 1
KB to 4-KB files. Explicitly setting the requested protection to mirroring can save
fractions of a second per file and reduce the write ingest time for the files.


FEC Protection - Single Drive Per Node

Single Drive per Node


Some protection schemes use a single drive per node per protection stripe. The graphic shows that only a single data stripe unit or a single FEC stripe unit is written to each node. These protection levels are N+M or N+Mn.


Example: N+Mn Protection Stripe

The table shows each N+Mn Requested Protection level over the minimum number of required nodes for each level. The data stripe units and protection stripe units98 can be placed on any node in the node pool and in any order.

N+Mn Level   +1n    +2n    +3n    +4n
Node 1       Data   Data   Data   Data
Node 2       Data   Data   Data   Data
Node 3       FEC    Data   Data   Data
Node 4              FEC    Data   Data
Node 5              FEC    FEC    Data
Node 6                     FEC    FEC
Node 7                     FEC    FEC
Node 8                            FEC
Node 9                            FEC


98The number of data stripe units depends on the size of the file and the size of the
node pool up to the maximum stripe width. N+1n has one FEC stripe unit per
protection stripe, N+2n has two, N+3n has three, and N+4n has four. N+2n and
N+3n are the two most widely used Requested Protection levels for larger node
pools, node pools with around 15 nodes or more. The ability to sustain both drive or
node loss drives the use when possible.



FEC Protection - Multiple Drives Per Node

Multiple Drives per Node

N+M:B or N+Md:Bn protection schemes use multiple drives per node. The multiple drives contain parts of the same protection stripe. Multiple data stripe units and FEC stripe units are placed on a separate drive on each node.


The graphic shows an example of a 1 MB file with a Requested Protection of +2d:1n. Four stripe units, either data or protection stripe units, are placed on separate drives in each node. Two drives on different nodes per disk pool, or a single node, can be lost simultaneously without the risk of data loss.

N+Md:Bn Protection Levels

One stripe with multiple stripe units per node.


Protection Level   Drive Failures   Node Failures   Maximum Stripe Width
N+2d:1n            2                1               18 (16 data + 2 FEC)
N+3d:1n            3                1               19 (16 data + 3 FEC)
N+4d:1n            4                1               20 (16 data + 4 FEC)

FEC Protection - Advanced

Advanced Protection

In addition to N+Md:Bn, there are two advanced99 forms of Requested Protection. The benefit of the advanced N+Md:Bn protection levels is that they provide a higher level of node loss protection. Besides the drive loss protection, the node loss protection is increased.

99 The available Requested Protection levels are N+3d:1n1d and N+4d:2n. N+3d:1n1d includes three FEC stripe units per protection stripe, and provides protection for three simultaneous drive losses, or one node and one drive loss. The higher protection provides the extra safety during data rebuilds that are associated with the larger drive sizes of 4 TB and 6 TB. The maximum number of data stripe units is 15 and not 16 when using N+3d:1n1d Requested Protection. N+4d:2n includes four FEC stripe units per stripe, and provides protection for four simultaneous drive losses, or two simultaneous node failures.



Protection Level   Drive Failures   Other Failures      Maximum Stripe Width
N+3d:1n1d          3                1 node + 1 drive    18 (15 data + 3 FEC)
N+4d:2n            4                2 nodes             20 (16 data + 4 FEC)

Example: Advanced N+Md:Bn Protection Stripe

The table shows examples of the advanced N+Md:Bn protection schemes100. Two
drives per node per protection stripe. The number of FEC stripe units does not
equal the number of drives that are used for the protection stripe. Even if one node
is lost, there is still a greater level of protection available.

N+Md:Bn Level           Drive   Node 1   Node 2   Node 3   Node 4   Node 5   Node 6
+3d:1n1d                1       Data     Data     FEC      Data     Data     Data
(3 FEC stripe units,    2       Data     FEC      Data     FEC      Data     Data
2 drives per node)

+4d:2n                  1       Data     Data     FEC      Data     FEC      Data
(4 FEC stripe units,    2       Data     FEC      Data     FEC      Data     Data
2 drives per node)

100 Like other protection levels, the data stripe units and FEC stripe units are placed
on any node in the node pool and on any drive. N+3d:1n1d is the minimum
protection for node pools containing 6-TB drives. The use of N+4d:2n is expected
to increase especially for smaller to middle sized node pools as larger drives are
introduced.



Protection Overhead

The protection overhead for each protection level depends on the file size and the
number of nodes in the cluster. The percentage of protection overhead declines as
the cluster gets larger.

• N+1n101
• N+2n102
• N+3n103
• Data Mirroring104

For better reliability, better efficiency, and simplified protection, use N+2d:1n,
N+3d:1n1d, or N+4d:2n, as indicated with a red box.

101 N+1n protection has a protection overhead equal to the capacity of one node.

102 N+2n protection has a protection overhead equal to the capacity of two nodes.

103N+3n is equal to the capacity of three nodes, and so on. OneFS also supports
optional data mirroring from 2x-8x, enabling from two to eight mirrors of the
specified content.

104 Data mirroring requires significant storage overhead and may not always be the
best data-protection method. For example, if you enable 3x mirroring, the specified
content is explicitly duplicated three times on the cluster. Depending on the amount
of content being mirrored, the mirrors can require a significant amount of capacity.



The table shows the relative protection overhead that is associated with each FEC requested
protection level available in OneFS. Indicators include when the FEC protection would result in
mirroring.

Considerations

As the cluster scales, the default protection may need adjusting. You may not want
to apply a higher protection to the entire cluster. Although you get better protection,
it is less efficient. Listed are areas to consider.

• The suggested protection feature is enabled on new clusters.105


• Higher protection levels impact utilization for small files.
• As protection increases, performance decreases.106

105 On cluster upgrades, the feature is disabled by default.

106Because the system is doing more work to calculate and stripe the protection
data – impact is approximately linear.


• Large107 archive clusters (20+ nodes) often require N+3.


• Set requested protection to mirroring or use SFSE for workflows with small108
files.
• Gen 6 recommends N+2d:1n or N+3d:1n1d protection.
• Protect critical datasets109 with different policies.

Challenge

Lab Assignment: Review the data protection levels:


• Node pool protection levels
• Directory level protection
• File level protection

107 Other clusters work well with N+2 or N+2d:1n.

108 Some workflows store millions of 1 KB to 4 KB files.

109The customer may want to protect some repositories at a higher level than the
cluster default.


Protection Management

Scenario

IT Manager: Okay, I think I get protection levels, but it seems there is


more to data protection than I thought.

Your Challenge: The IT Manager wants to differentiate suggested,


requested, and actual protection. Explain the editing of file pool and
node pool protection, and discuss the editing of file and directory level
protection.

Data Protection Types


1: Requested Protection is what is configured; it determines the amount of redundant data on the cluster.

2: Mirrored protection copies data to multiple locations; it can have 2 to 8 mirrors.

3: Suggested is the protection OneFS recommends and cannot be modified.

4: Actual is the level of protection OneFS applies to data. It can be more than the requested protection but never less.


Requested Protection

Requested Protection configuration is available at multiple levels. Each level is


used to control protection for specific reasons. A requested protection level is
assigned to every node pool. In OneFS, you set the requested protection at the
directory or individual file level. Management of the requested protection levels is
available using the WebUI, CLI, or PAPI.

The graphic shows the configuration levels: cluster-wide default protection, node pool default protection (H600 and A200 node pools in the example), directory path, and individual file.

Requested Protection Settings

Cluster-wide settings

The cluster-wide default data protection setting is made using the default file
pool110 policy.

110The View default policy details window displays the current default file pool
policy settings. The current protection is displayed under requested protection. The
default setting is to use the requested protection setting at the node pool level as
highlighted in the Edit default policy details window.


• Sets the default file pool policy - applies to all files not covered by a higher policy.
• Recommended to use the node pool protection setting.
• Available settings are listed in the Requested protection drop-down menu.

To view or edit the default setting, go to File system > Storage pools > File pool policies, and click View/Edit on the Default policy. The command isi file pool policy modify finance --set-requested-protection +3:1 sets the requested protection for the finance file pool policy to +3d:1n.

Node pool settings

The default file pool policy protection setting uses the node pool or tier setting.
When a node pool is created, the default requested protection111 that is applied to
the node pool is +2d:1n.

The current requested protection for each node pool is displayed in the Tiers and
node pools section.

111The minimum requested protection for an archive-series node pool is +3d:1n1d.


To meet the minimum, modify the archive-series node pool requested protection.


• Sets requested protection per node pool.
• Minimum protection should meet the suggested protection.
• Use the drop-down to expand the requested protection options.
• Click Save changes after selecting the new requested protection level.

To view and edit the requested protection setting for the node pools in the WebUI, go to the File system > Storage pools > SmartPools page. The command isi storagepool nodepools modify v200_25gb_2gb --protection-policy +2n sets the requested protection of a node pool to +2n.

Directory and file settings

OneFS stores the properties for each file. To view the files and the next level
subdirectories, click the specific directory.

Manual settings112

112 Manual settings can be used to modify the protection on specific directories or files. The settings can be changed at the directory, subdirectory, and file level. Best practices recommend against using manual settings, because manual settings can return unexpected results and create management issues as the data and cluster age. Once manually set, reset the settings to default to use automated file pool policy settings, or continue as manually managed settings. Manual settings override file pool policy automated changes. Manually configuring is only recommended for unique use cases. Manual changes are made using the WebUI File system explorer or the CLI isi set command.


Manual settings use case113

• Use the Search button to search for a file, or Browse to directly open a directory or file.
• Uncheck the setting to configure it manually.
• Select the protection level to modify it.

To view directories and files on the cluster, go to File System > File system explorer.


113 A use case for setting a directory requested protection: isi set -p 4x -A on /ifs/finance/data, where the /ifs/finance/data directory requires a 4x mirror whereas all other node pool directories use the +2d:1n node pool setting.


Use Case - Node Pool or Directory Requested Protection

The requested protection level on the H600 node pool is set at +2d:1n, and the requested protection level on the A200 node pool is set at +3d:1n1d.

The graphic shows a workflow that moves data to an archive tier of storage.

SmartPools file pool policies automate data management including applying


requested protection settings to directories and files, the storage pool location, and
the I/O optimization settings.

• Archive tier on an A200 node pool.
• A file pool policy moves data from the production H600 node pool to the archive pool.
• The protection on the archive node pool is higher than the protection on the production node pool.
• You can set requested protection settings at the node pool level or at the directory level.

Suggested Protection

Suggested protection refers to the visual status and CELOG event notification
when node pools are set below the calculated suggested protection level.

Suggested protection is important when monitoring the risk of data loss.


• View of the node pool requested protection drop-down list.
• OneFS calculates and stores the MTTDL for each node pool.
• Data is at risk when the protection is below the MTTDL.
• As the cluster scales, OneFS changes the suggested protection.

Caution: It is recommended that you do not specify a setting below suggested protection. OneFS periodically checks the protection level on the cluster, and alerts you if data falls below the recommended protection.

Not using the suggested protection does not mean that data loss occurs, but it does indicate that the data is at risk. Avoid anything that puts data at risk. What commonly occurs is a node pool starts small and then grows beyond the configured requested protection level. The once adequate +2d:1n requested protection level becomes no longer appropriate, but is never modified to meet the increased protection requirements.


Suggested Protection Status

The Suggested protection feature provides a method to monitor and notify users
when the requested protection setting is different than the suggested protection for
a node pool.

• SmartPools module health status - suggested protection is part of the reporting.
• To modify the settings, click View/Edit in the tab.
• The status indicates that the v200_24gb_2gb node pool has a requested protection level that is different than the suggested.

The notification shows the suggested setting. Node pools that are within suggested protection levels are not displayed.

Actual Protection

The actual protection114 applied to a file depends on the requested protection level,
the size of the file, and the number of node pool nodes.

The rules are:

• Actual protection must meet or exceed the requested protection level.

114 The actual protection level is the protection level OneFS sets. Actual protection
is not necessarily the same as the requested protection level.


• Actual protection may change in the interests of efficiency. Case 1115


• Actual protection depends upon file size. Case 2116
• Both cases117

Legend: Orange - mirroring, low minimum size for requested protection; Blue - minimum for requested protection; Bold - actual requested protection; Gray - actual greater than maximum nodes at requested protection; Red - actual protection changes from requested protection.

The chart indicates the actual protection that is applied to a file according to the number of nodes in the node pool. If the actual protection does not match the requested protection level, it may have been changed to be more efficient given the file size or the number of nodes in the node pool.

115 With a requested protection of +2d:1n, a 2-MB file, and a node pool of at least 18 nodes, the file is laid out as +2n.

116A 128-KB file is protected using 3x mirroring, because at that file size the FEC
calculation results in mirroring.

117 In both cases, the actual protection applied to the file exceeds the minimum
drive loss protection of two drives and node loss protection of one node. The
exception to meeting the minimum requested protection is if the node pool is too
small and unable to support the requested protection minimums. For example, a
node pool with four nodes and set to +4n requested protection. The maximum
supported protection is 4x mirroring in this scenario.


Actual Protection Representation

The actual protection is represented differently than requested protection. The


graphic shows the output showing actual protection on a file from the isi get
command.

• Outputs all files in a directory or single file information.
• Represents the requested protection and the actual protection.
• No / in the output implies a single drive per node.
• Example output N+2/2: data stripe units per stripe + FEC stripe units per stripe, / drives per node.

Tip: COAL in the output shows if write-coalescing is enabled.


Enabled118 is recommended for optimal write performance.

118With asynchronous writes, OneFS buffers writes in memory. However, if you


want to disable this buffering, you should configure the applications to use
synchronous writes. If that is not possible, disable write-coalescing, also known as
SmartCache.


isi get

The isi get command provides detailed file or directory information. The primary
options are –d <path> for directory settings and –DD <path>/<filename> for
individual file settings.

The graphic shows the isi get –DD output. The output has three primary
locations containing file protection. The locations are a summary in the header, line
item detail settings in the body, and detailed per stripe layout per drive at the
bottom.
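As a reading aid for the nomenclature above, the following Python sketch parses actual protection strings of the form "N+M", "N+M/b", or a mirror such as "3x". It is not part of OneFS, and the example values are illustrative.

# Illustrative parser (not OneFS code) for the actual protection nomenclature:
# "N+M" or "N+M/b" for FEC-protected files, "2x" through "8x" for mirrored files.
def parse_actual_protection(text):
    """Return a dict describing an actual protection string such as '8+2/2' or '3x'."""
    if text.endswith("x"):                      # mirrored protection, for example "3x"
        return {"mirror_copies": int(text[:-1])}
    stripe, _, drives = text.partition("/")     # the "/b" part is optional
    data, fec = stripe.split("+")
    return {
        "data_units_per_stripe": int(data),
        "fec_units_per_stripe": int(fec),
        "drives_per_node": int(drives) if drives else 1,   # no "/" implies one drive per node
    }

print(parse_actual_protection("8+2/2"))   # 8 data + 2 FEC per stripe, 2 drives per node
print(parse_actual_protection("3x"))      # 3x mirror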

Challenge

IT Manager:
Open participation questions:
Question: What is a use case for setting requested protection at
the cluster level? At the node pool level? At the directory level?


Data Layout

Scenario

IT Manager: You are doing a great job. Now, examine how OneFS lays
out the data on disks.

Your Challenge: The IT manager wants to understand data layout.


Describe the different data access pattern, illustrate an access pattern
using concurrency and streaming.

Data Layout Overview


1: The number of nodes in a node pool affects the data layout because data
spreads across all nodes in the pool. The number of nodes in a node pool
determines how wide the stripe can be.

2: The nomenclature for the protection level is N+Mn, where N is the number of
data stripe units and Mn is the protection level. The protection level also affects
data layout. You can change the protection level down to the file level, and the
protection level of that file changes how it stripes across the cluster.

3: The file size also affects data layout because the system employs different
layout options for larger files than for smaller files to maximize efficiency and


performance. Files smaller than 128 KB are treated as small files. Due to the way
that OneFS applies protection, small files are triple mirrored.

4: The access pattern modifies both prefetching and data layout settings that are
associated with the node pool. Disk access pattern can be set at a file or directory
level so you are not restricted to using only one pattern for the whole cluster.

There are four variables that combine to determine how OneFS lays out data.

These variables make the possible outcomes almost unlimited when trying to understand how the cluster behaves with varying workflows and differing variables.

You can manually define some aspects of how OneFS determines what is best, but the process is automated.

Data Access Patterns

An administrator can optimize layout decisions that OneFS makes to better suit the
workflow. The data access pattern influences how a file is written to the drives
during the write process.

1: Concurrency is the default data access pattern. It is used to optimize workflows with many concurrent users accessing the same files. The preference is that each protection stripe for a file is placed on the same drive or drives, depending on the requested protection level. For example, for a large file with 20 protection stripes, each stripe unit from each protection stripe would prefer placement on the same drive in each node. Concurrency influences the prefetch caching algorithm to prefetch and cache a reasonable amount of anticipated data during a read access.

2: Use Streaming for large streaming workflow data such as movie or audio files.
Streaming prefers to use as many drives as possible, within the given pool, when
writing multiple protection stripes for a file. Each file is written to the same sub pool
within the node pool. Streaming maximizes the number of active drives per node as
the streaming data is retrieved. Streaming also influences the prefetch caching
algorithm to be highly aggressive and gather as much associated data as possible.
The maximum number of drives for streaming is five drives per node across the
node pool for each file.


3: A random access pattern prefers using a single drive per node for all protection
stripes for a file, like a concurrency access pattern. With random however, the
prefetch caching request is minimal. Most random data does not benefit from
prefetching data into cache.

Access Pattern Example: Streaming with 1 MB File

A 1 MB file is divided into eight data stripe units and three FEC units. The data is laid out in three stripes. With a streaming access pattern, more spindles are preferred: the 1 MB file is split into eight stripe units and three stripes, and streaming uses more spindles.

Streaming example: N+1n protection, 1024 KB file, 8 x 128 KB chunks, 3 stripes and 3 drives wide. Streaming prefers more disks.

The graphic is a representation of a Gen 6 chassis with four nodes. Each node has five drive sleds, and each drive sled has three disks. The orange disks represent a neighborhood. The disks that are used are in the same neighborhood (orange) and do not traverse to disks in the other neighborhoods (gray).

Access Pattern Example: Concurrency with 1-MB File

A 1-MB file is divided into eight data stripe units and three FEC units. The data is
laid out in three stripes, one drive wide.


Concurrency example: N+1n protection, 1024 KB file, 8 x 128 KB chunks, 3 stripes and 1 drive wide. Concurrency prefers one drive per node for the file.

The graphic is a representation of a Gen 6 chassis with four nodes. Each node has five drive sleds.
Each drive sled has three disks. The orange disk represents a neighborhood.


Data Layout Management

Configuring the data access pattern is done on the file pool policy, or manually at
the directory and file level. Set data access patterns using the WebUI or use isi
set for directory and file level or isi file pool policy for file pool policy
level.


• Set on the file pool policy, or manually at the directory or file level.
• Modify either the default policy or an existing file pool policy.
• Concurrency is the default data access pattern.

For WebUI administration, go to File system > Storage pools > File pool policies.

Challenge

IT Manager:
Open participation questions:
Question: What is the preferred file layout with a streaming
access pattern?


Configuring Storage Pools


Storage Pools

Scenario

IT Manager: Before you configure file policies and tiering data, I want
you to explain the components of storage pools.

Your Challenge: The IT manager has tasked you to describe storage


pool components.

Storage Pools Overview

Storage Pools are an abstraction layer that encompasses disk pools,


neighborhoods, node pools, and tiers.

Storage pools monitor the health and status at the node pool level. Using storage
pools, multiple tiers of nodes (node pools) can all co-exist within a single file
system, with a single point of management.

The graphic shows a storage pool with three node pools: an F800 high-performance node pool, an H400 lower-performance node pool, and an A200 archive node pool.


Storage Pool Anatomy

Storage pools differ between Gen 6 nodes and F200/600 nodes.

Gen 6 drive sleds have three, four, or six drives whereas the F200 has 4 drive bays
and the F600 has 8 drive bays.

Drives are segmented into disk pools, creating failure domains.

• Disk pool - failure domain.
• Node pool - identical hardware attributes; created automatically.
• Neighborhood (Gen 6) - spans 4 to 19 nodes.
• Neighborhood (Gen 6.5) - spans 3 to 39 nodes.

The graphic shows a Gen 6 node pool that has two chassis, eight nodes, and each node having five
drive sleds with three disks.

Storage Pool Components

Exploring the building blocks and features of storage pools helps understand the
underlying structure when moving data between tiers. The storage pool
components, SmartPools, File Pools and CloudPools, are covered in detail in other
topics.


Disk Pool

Disk pools are the smallest unit and are a subset of neighborhoods.

Disk pools provide separate failure domains. Each drive within the sled is in a different disk pool, lessening the chance for data unavailability.

Data protection stripes or mirrors do not span119 disk pools.

Neighborhood

Neighborhoods are a group of disk pools and can span from 4 up to 19 nodes for Gen 6 nodes. Nodes have a single neighborhood from 1 to 19 nodes. Neighborhoods are automatically assigned and not configurable.

The graphic shows a 20-node cluster with two neighborhoods.

Subpool/Neighborhood              F200/600    Gen 6
Ideal number of nodes             20 nodes    10 nodes
Maximum number of nodes           39 nodes    19 nodes
Node pool splits at node number   40          20

119Not spanning disk pools the granularity at which files are striped to the cluster.
Disk pool configuration is automatic and cannot be configured manually. Removing
a sled does not cause data unavailability as only one disk per disk pool is
temporarily lost.



Gen 6 Neighborhood

A Gen 6 node pool splits into two neighborhoods when adding the 20th node 120.
One node from each node pair moves into a separate neighborhood.

Though a chassis-wide failure is highly unlikely, OneFS takes precautions against


chassis failure once a cluster is large enough. Nodes sharing a chassis are split
across fault domains, or neighborhoods, to reduce the number of node failures
occurring within one fault domain. The split is done automatically.

• Single neighborhood with 3 disk pools, in a 3-disks-per-drive-sled example.
• Each neighborhood has 3 disk pools.
• At 40 nodes, protection against chassis failure.

Gen 6 neighborhoods - each color represents a disk pool.

120After the 20th node added up to the 39th node, no 2 disks in a given drive sled
slot of a node pair share a neighborhood. The neighborhoods split again when the
node pool reaches 40 nodes.


Gen 6 Chassis Failure

The graphic shows a 40 node cluster used to illustrate a chassis failure. Once the
40th node is added, the cluster splits into four neighborhoods, labeled NH 1
through NH 4.

• At 40 nodes, no disks in a node are in the same disk pool as disks in the other node of the chassis.
• At 40 nodes, the node pool splits into 4 neighborhoods.

The splits place each node in a chassis into a failure domain different from the other three nodes in the chassis, protecting against a very unlikely chassis failure.


Node Pool

A node pool is a group of similar or identical nodes.


A node pool is the lowest granularity of storage
space that users manage.

OneFS can group multiple node pools with similar


performance characteristics into a single tier with
the licensed version of SmartPools.

Creating multiple tiers in a cluster can meet the


business requirements and optimize storage usage.

The maximum number of like nodes in a node pool is 252.

The graphic shows an 8-node cluster with two node pools.

SmartPools

SmartPools is a licensable software module that provides basic features in an


unlicensed state and advanced features when licensed.

SmartPools Basic121

121 The basic version of SmartPools supports virtual hot spares, enabling space reservation in a node pool for reprotection of data. OneFS implements SmartPools basic by default. You can create multiple node pools, but only a single tier and only a single file pool. A single tier has only one file pool policy that applies the same protection level and I/O optimization settings to all files and folders in the cluster.


SmartPools Advanced122

File Pools

File pools are the SmartPools logical layer, at which file pool policies are applied.

File pool policies provide a single point of management to meet performance,


requested protection level, space, cost, and other requirements.

User created, and defined policies are set on the file pools.

CloudPools

CloudPools is an extension of the SmartPools tiering capabilities in the OneFS


operating system. The policy engine seamlessly optimizes data placement that is
transparent to users and applications.

Moving the cold archival data to the cloud, lowers storage cost and optimizes
storage resources.

CloudPools offers the flexibility of another tier of storage that is off-premise and off-
cluster.

CloudPools eliminates management complexity and enables a flexible choice of


cloud providers.


122More advanced features are available in SmartPools with a license. With the
advanced features you can create multiple tiers and file pool policies that direct
specific files and directories to a specific node pool or a specific tier. Advanced
features include the ability to create multiple storage tiers, multiple file pool policy
targets, and multiple file pool policies.


Node Loss: A loss of a node does not automatically start reprotecting data. Many
times a node loss is temporary, such as a reboot. If N+1 data protection is
configured on a cluster, and one node fails, the data is accessible from every other
node in the cluster. If the node comes back online, the node rejoins the cluster
automatically without requiring a rebuild. If the node is physically removed, it must
also be smartfailed. Only smartfail nodes when they need to be removed from the cluster permanently.

Storage Pool CLI

The graphic shows the isi storagepool settings view command with user
configured settings highlighted.

Serviceability

Listed are the CLI options that can help get information about storage pools.

• To view the storage pool status and details: isi storagepool list
• To view the health of storage pools: isi status -p


Challenge

Lab Assignment: Go to the lab and verify the storage pool settings.


File Pools

Scenario

IT Manager: Our media team needs their storage on disks that do not compete with the other disks.

Your Challenge: The IT manager has tasked you to segregate data into
different node pools.

File Pool Policies Overview

• Tier 1 - F600 node pool: file pool policy optimized for random access, +2d:1n protection.
• Tier 2 - H400 node pool: file pool policy optimized for concurrent access, +2d:1n protection.
• Tier 3 - F200 node pool: file pool policy optimized for streaming access, +3d:1n1d protection.

Cluster with 3 node pools each with a file pool policy.


File pool policies automate file movement, enabling users to identify and move
logical groups of files.
• User-defined filters123
• File-based, not hardware-based124
• User-defined or default protection and policy settings125

The example shows that each policy has a different optimization and protection
level. A file that meets the policy criteria for tier 3 is stored in the tier 3 node pool
with +3d:1n1d protection. Also, the file is optimized for streaming access.

Default File Pool Policy

The default file pool policy is defined under the default policy.


123Files and directories are selected using filters and apply actions to files
matching the filter settings. The policies are used to change the storage pool
location, requested protection settings, and I/O optimization settings.

124
Each file is managed independent of the hardware, and is controlled through the
OneFS operating system.

125 Settings are based on the user-defined and default storage pool policies. File
pool policies add the capability to modify the settings at any time, for any file or
directory.



1: The individual settings in the default file pool policy apply to files without settings
that are defined in another file pool policy that you create. You cannot reorder or
remove the default file pool policy.

2: To modify the default file pool policy, click File system, click Storage pools,
and then click the File pool policies tab. On the File pool policies page, next to
the Default policy, click View/Edit.

3: You can choose to have the data that applies to the Default policy target a
specific node pool or tier or go anywhere. Without a license, you cannot change
the anywhere target. If existing file pool policies direct data to a specific storage
pool, do not configure other file pool policies with anywhere.

4: You can define the SSD strategy for the Default policy.

5: You can specify a node pool or tier for snapshots. The snapshots can follow the
data, or go to a different storage location.

6: Assign the default requested protection of the storage pool to the policy, or set a
specified requested protection.


7: Under I/O optimization settings, SmartCache is enabled by default. SmartCache


writes data to a write-back cache instead of immediately writing the data to disk.
OneFS can write the data to disk at a time that is more convenient.

8: In the Data access pattern section, you can choose between Random,
Concurrency, or Streaming.

• Streaming access enables aggressive prefetch (also called read-ahead) on


reads, increases the size of file coalescers in the OneFS write cache, and
changes the layout of files on disk (uses more disks in the FEC stripes).
Streaming is most useful in workloads that do heavy sequential reads and
writes.
• Random essentially disables prefetch for both data and metadata. Random is
most useful when the workload I/O is highly random. Using Random greatly
reduces the cache "pollution" that could result from all the random reads, for
example prefetching blocks into cache that are never read.
• Concurrency, the default access setting, is a compromise between Streaming
and Random. Concurrency enables some prefetch, which helps sequential
workloads, but not so much that the cache gets "polluted" when the workload
becomes more random. Concurrency is for general purpose use cases, good
for most workload types or for mixed workload environments.


File Pool Policies Use Case

• Name the policy.
• Configure the matching criteria. This example shows using a filename match.
• The tier is created on the Storage pools > SmartPools page, or using the "isi storagepool" command.
• The targeted files get a 3x mirror protection level.
• The targeted files use a streaming access pattern.


This example is a use case where a media-oriented business unit wants greater protection and an access pattern that is optimized for streaming.

A tier that is called media_tier with a node pool has been created.

The business unit targets their mp4 marketing segments to the media_tier where
the hosting application can access them.


File Pool Policy Filters

Modify time (mtime)

Access time (atime)

Metadata change time


(ctime)

Create time (birthtime)

Drop-down list of filter choices to build the policy criteria.

Create the filters in the File matching criteria section when creating or editing a
file pool policy.

Filter elements:
• Filter type126

126 File pool policies with path-based policy filters and storage pool location actions
are run during the write of a file matching the path criteria. Path-based policies are
first started when the SmartPools job runs, after that they are started during the
matching file write. File pool policies with storage pool location actions, and filters
that are based on other attributes besides path, write to the node pool with the
highest available capacity. The initial write ensures that write performance is not
sacrificed for initial data placement.


• Operators127
• Multiple criteria128 (see the sketch below)
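A minimal Python sketch of how such criteria could combine, assuming the behavior described in the criteria notes (criteria within a block are ANDed, and a file matches the filter when any block matches). The file attributes, predicates, and paths below are hypothetical examples, and this is an illustration rather than the OneFS implementation.

# Illustrative sketch (not OneFS code): evaluating file pool policy criteria blocks.
# Assumes criteria inside a block are ANDed and the blocks themselves are ORed.
def file_matches(file_attrs, criteria_blocks):
    """criteria_blocks is a list (up to three) of lists of predicate functions."""
    return any(
        all(criterion(file_attrs) for criterion in block)
        for block in criteria_blocks
    )

# Hypothetical example: match .mp4 files larger than 1 MB, or any file under /ifs/finance.
blocks = [
    [lambda f: f["name"].endswith(".mp4"), lambda f: f["size"] > 1024 * 1024],
    [lambda f: f["path"].startswith("/ifs/finance")],
]
print(file_matches({"name": "promo.mp4", "size": 5000000, "path": "/ifs/media"}, blocks))  # True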

SSD Options

With the exception of F-Series nodes, if a node pool has SSDs, by default the L3
cache is enabled on the node pool. To use the SSDs for other strategies, first
disable L3 cache on the node pool. Manually enabling SSD strategies on specific
files and directories is not recommended.


Metadata Read Acceleration

SSDs for Metadata Read Acceleration is the recommended setting. The setting
uses one metadata mirror, other mirrors and data on HDDs.

Pros:
• Do not need numerous SSDs to be effective.
• Benefits random reads by allowing quicker access to metadata.
• Helps the Job Engine - all random lookups and treewalks are faster because one copy of metadata is always on SSD.

Cons:
• Does not help random writes - metadata updates hit the HDDs.
• Usually shows small SSD utilization: clients may ask "where is the value" or complain that it was overconfigured.

127Operators can vary according to the selected filter. You can configure the
comparison value, which also varies according to the selected filter and
operator. The Ignore case box should be selected for files that are saved to the
cluster by a Windows client.

128The policy requires at least one criterion, and allows multiple criteria. You can
add AND or OR statements to a list of criteria. Using AND adds a criterion to the
selected criteria block. Files must satisfy each criterion to match the filter. You can
configure up to three criteria blocks per file pool policy.

PowerScale Administration-SSP1

Page 232 © Copyright 2020 Dell Inc.


Configuring Storage Pools

Helps Job Engine - all random Usually shows small SSD utilization:
lookups and treewalks are faster as clients may ask “Where is the value”
one copy of metadata is always on or complain it was over configured
SSD.

Metadata Read/Write Acceleration

Metadata read/write acceleration requires more SSD space. It writes all metadata mirrors to SSDs and can consume up to six times more SSD space.

Pros:
• Metadata is on SSDs - speeds random lookups and treewalks.
• Metadata updates hit SSDs - speeds up creates, writes, and deletes, including snapshot deletes.

Cons:
• Needs many SSDs to be effective, typically about four times the metadata read amount.
• Hard to size - it is difficult to predict how many files a workload will have.
• Overfilling SSDs can have a significant impact – manage with care.
• Does not show the full utilization until the file system capacity is high.

Data and Metadata

Using SSDs for data and metadata requires the most space. This strategy writes all data and metadata for a file on SSDs.

Pros:
• The only way to guarantee that data pins to SSDs - good for small, intense workloads.
• Can cohabit with metadata acceleration - cannot mix with L3 cache on the same node pool.
• Use file pool policies to designate a specific path for the data on SSDs.

Cons:
• Expensive.
• Must manage path capacity to avoid overfilling SSDs - a directory quota can help.
• Must manage total SSD capacity utilization - can push metadata off SSD, which has a wide impact.
• Heavy workloads may cause queueing to SSD, slowing metadata operations for other workloads.

Avoid SSDs

Using the avoid SSDs option affects performance. This option writes all file data and all metadata mirrors to HDDs. Typically, use this setting when implementing L3 cache and GNA in the same cluster. You create a path-based file pool policy that targets an L3 cache enabled node pool. The data SSD strategy and snapshot SSD strategy for this L3 cache enabled node pool should be set to ‘Avoid SSD’.


File Pool Policies Jobs

The FilePolicy job on the WebUI Cluster management > Job operations > Job types page.

File pool policies are applied to the cluster by a job.


• SetProtectPlus job129 - SmartPools unlicensed
• SmartPools job130 - SmartPools licensed

129 The SetProtectPlus job applies the default file pool policy.

130
When SmartPools is licensed, the SmartPools job processes and applies all file
pool policies. By default, the job runs at 22:00 hours every day at a low priority.


• FilePolicy job131 - find files needing policy changes (OneFS 8.2.0)


• SmartPoolsTree job132 - Selectively apply SmartPools file pool policies

Policy Template

Policy templates on the WebUI File system > Storage pools > File pool policies page.

Template settings are preset with the name of the template along with a brief description. You can change the settings.

Each template has a configured filter to achieve the specified function.

Template considerations:
• Opens a partially populated, new file pool policy.
• You must rename the policy.
• You can modify and add criteria and actions.
• Templates are available in the web administration interface only.

131 Uses a file system index database instead of scanning the file system itself to find files needing policy changes. By default, the job runs at 22:00 hours every day at a low priority. The FilePolicy job was introduced in OneFS 8.2.0.

132 The SmartPoolsTree job is used to apply SmartPools file pool policies selectively. The job runs the "isi filepool apply" command. The Job Engine manages the resources that are assigned to the job. The job enables testing file pool policies before applying them to the entire cluster.


File Pool Policies Order

The order of the policy matters.


• The first matching policy is applied.
• Create external policy list with filter criteria such as path or file name.
• Prioritize match filter criteria order.
• Reorder policies to match prioritization.
• Default policy completes unassigned actions.

File Pool Policy Considerations

Plan to add more node capacity when the cluster reaches 80% so that it does not
reach 90%. The cluster needs the extra capacity for moving around data, and for
the VHS space to rewrite data when a drive fails. Listed are more considerations.
• Avoid overlapping file policies where files may match more than one rule. If data
matches multiple rules, only the first rule is applied.
• File pools should target a tier and not a node pool within a tier.
• You can use the default policy templates as examples.


Serviceability

Example output of the 'isi filepool apply <path/file> -n -v -s' command with truncated output.

Listed here are the CLI options that can help get information about file pools.
• If file pool policy rules are not being applied properly, check the policy order.
• Test file pool policy before applying.

• isi filepool apply


• Syntax: isi filepool apply <path/file> -n -v -s
• Options:
• -n is to test but not apply.
• -v is for verbose output.
• -s prints statistics on processed files.
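For example, to test which actions would be taken on a single file without applying them (the file path is illustrative):

isi filepool apply /ifs/sales/promotions/segment01.mp4 -n -v -s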

Challenge

Lab Assignment: Go to the lab and configure a file pool policy.


SmartPools

Scenario

IT Manager: Before you configure file tiering, I want you to explain to me the OneFS SmartPools settings.

Your Challenge: The IT manager has tasked you to describe the SmartPools settings and then configure SmartPools.

SmartPools Overview

SmartPools enables the grouping of nodes into storage units that include node
pools, CloudPools, and tiers.

With SmartPools, you can segregate data based on its business value, putting data
on the appropriate tier of storage with appropriate levels of performance and
protection.

Different generations133 of PowerScale storage can co-exist within a single storage pool.

Use SmartPools to manage global storage pool settings.

133Node pool membership changes through the addition or removal of nodes to the
cluster. Typically, tiers are formed when adding different node pools on the cluster.


File system > Storage pools page.

SmartPools Licensing

SmartPools is a licensable software module that provides basic features in an unlicensed state and advanced features when licensed.

Because a licensed cluster can have multiple data target locations, some additional target options are enabled in some global settings.

Function (Unlicensed / Licensed):
• Automatic node pool provisioning: Yes / Yes
• Number of tiers: Multiple / Multiple
• Number of file pool policies and file pool policy targets: One (default file pool policy) / Multiple
• File pool policy filters: No / Multiple
• Policy-based protection level, metadata acceleration setting, I/O optimization, and snapshot target: One / Multiple
• Specify spillover target: No / Yes
• VHS and GNA: Yes / Yes

SmartPool Settings

Cache Statistics

The isi_cache_stats command accurately assesses the performance of the various levels of cache at a point in time. Statistics for L1, L2, and L3 cache are displayed for both data and metadata.

Output showing the L3 statistics.
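A quick way to sample the counters from a node, assuming the common verbose flag (a sketch, not a definitive invocation):

isi_cache_stats -v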

GNA

SmartPools can automatically transfer data among tiers with different performance
and capacity characteristics.

Global namespace acceleration, or GNA, enables the use of SSDs for metadata
acceleration across the entire cluster.


• A minimum of 1.5% of all disk capacity must be SSD, and 20% of nodes must contain SSDs.
• Uses SSDs to store metadata mirrors in different node pools.
• Accelerates all namespace reads across the cluster.
• L3 cache enabled node pools are not considered in GNA calculations.

CLI command to enable GNA: isi storagepool settings modify --global-namespace-acceleration-enabled yes
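To confirm the change, the global storage pool settings can be read back (a sketch):

isi storagepool settings view | grep -i namespace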

GNA Aspects

The table highlights the pros and cons of enabling GNA.

Pros:
• Allows metadata read acceleration for non-SSD nodes - needs some nodes with SSDs.
• Helps the Job Engine and random reads.

Cons:
• Difficult to manage and size the disk.
• Hard rules and limits.
• Links expansion of one tier to another tier to adhere to the limits.

L3 Cache

L3 cache is enabled by default for all new node pools that are added to a cluster.

L3 cache is either on or off; no other visible configuration settings are available.

Any node pool with L3 cache enabled is excluded from GNA space calculations and does not participate in GNA enablement.


• You cannot enable L3 cache if the node pool has no unprovisioned SSDs.
• Enabled on new node pools by default.
• L3 cache cannot coexist with other SSD strategies.

The left graphic shows the global setting. The right graphic shows enabling or disabling L3 cache on each node pool separately.
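A hedged sketch of toggling L3 cache on a specific node pool from the CLI; the node pool name is hypothetical, and the --l3 option should be confirmed on your release:

isi storagepool nodepools modify h500_30tb_3.2tb-ssd_128gb --l3 false
isi storagepool nodepools modify h500_30tb_3.2tb-ssd_128gb --l3 true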

VHS

Virtual hot spare, or VHS, allocation enables space to rebuild data when a drive
fails.

When selecting the option to reduce the amount of available space, free-space
calculations exclude the VHS reserved space.

OneFS uses the reserved VHS free space for write operations unless you select
the option to deny new data writes.

• Default - all available free space on a cluster is used to rebuild data.
• Allocate by a percentage of disk space, by a number of virtual drives, or by a combination of both.
• 1 to 4 virtual drives in each node pool.
• 0 to 20 percent of total disk space in each node pool.

Command example that reserves 10% capacity for VHS: isi storagepool settings modify --virtual-hot-spare-limit-percent 10


Spillover

Spillover is node capacity overflow management.

With the licensed SmartPools module, you can direct data to spill over to a specific node pool or tier group.

If spillover is disabled, the file is not moved to another node pool.

• Options configure how OneFS handles a write operation when a node pool is full.
• Direct data to spill over to a specific node pool or tier group.

CLI command to disable spillover: isi storagepool settings modify --spillover-enabled no

Actions

The SmartPools action settings provide a way to enable or disable managing requested protection settings and I/O optimization settings.

If you clear the box (disable), SmartPools does not modify or manage settings on the files.

• Override manually managed requested protection.
• Override manually managed I/O optimization.
• Overrides any manually managed requested protection setting or I/O optimization.
• Useful if manually managed settings were made using File System Explorer or the isi set command.

CLI command for setting Automatically Manage Protection to none: isi storagepool settings modify --automatically-manage-protection none

CLI command to set Automatically Manage I/O Optimization: isi storagepool settings modify --automatically-manage-io-optimization {all | files_at_default | none}

Protection example: If a +2d:1n protection is set and the disk pool suffers three
drive failures, the data that is not lost can still be accessed. Enabling the option
ensures that intact data is still accessible. If the option is disabled, the intact file
data is not accessible.

GNA can be enabled if 20% or more of the nodes in the cluster contain SSDs and 1.5% or more of the total cluster storage is SSD-based. The recommendation is that at least 2.0% of the total cluster storage is SSD-based before enabling GNA. Going below the 1.5% SSD total cluster capacity requirement automatically disables GNA. If you SmartFail a node that has SSDs, the SSD percentage of total cluster storage or the percentage of nodes containing SSDs could drop below the minimum requirement, disabling GNA. Any node pool with L3 cache enabled is excluded from GNA space calculations and does not participate in GNA enablement.

GNA also uses SSDs in one part of the cluster to store metadata for nodes that
have no SSDs. The result is that critical SSD resources are maximized to improve
performance across a wide range of workflows.


VHS example: If specifying two virtual drives or 3%, each node pool reserves virtual drive space that is equivalent to two drives or 3% of its total capacity for VHS, whichever is larger. You can reserve space in node pools across the cluster for this purpose, equivalent to a maximum of four full drives. If using a combination of virtual drives and total disk space, the larger of the two settings determines the space allocation, not the sum of the numbers.
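A sketch of configuring that combination from the CLI, assuming the drive-count flag parallels the percent flag shown earlier (verify with isi storagepool settings modify --help):

isi storagepool settings modify --virtual-hot-spare-limit-drives 2 --virtual-hot-spare-limit-percent 3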

SSD Usage Comparison

Comparison of L3 cache with the other SSD usage strategies:

• Metadata Read - L3: Yes; Metadata Read: Yes; Metadata Read/Write: Yes; GNA: Yes; Data on SSD: No
• Metadata Write - L3: No; Metadata Read: 1 mirror; Metadata Read/Write: All mirrors; GNA: 1 additional mirror; Data on SSD: No
• Data Read - L3: Yes; Metadata Read: No; Metadata Read/Write: No; GNA: No; Data on SSD: Yes
• Data Write - L3: No; Metadata Read: No; Metadata Read/Write: No; GNA: No; Data on SSD: Yes
• Job Engine Performance - L3: Yes; Metadata Read: Yes; Metadata Read/Write: Yes; GNA: Yes; Data on SSD: No
• Granularity - L3: Node pool; Metadata Read: Manual; Metadata Read/Write: Manual; GNA: Global; Data on SSD: Manual
• Ease of Use - L3: High; Metadata Read: Medium; Metadata Read/Write: Medium; GNA: Medium; Data on SSD: Lowest

SmartPools Considerations

Listed are areas to consider when discussing SmartPools.

• SmartPools automatic provisioning divides equivalent node hardware into disk pools. Subdividing the node disks into separately protected disk pools increases resiliency against multiple disk failures.
• Disk pools are not user configurable, and a disk drive is a member of only one disk pool or neighborhood.
• Node pools must have at least four nodes for Gen 6 and at least three nodes for the F200/600. The default is one node pool per node type and configuration.
• The file pool policy default is that all files are written anywhere on the cluster. To target more node pools and tiers, activate the SmartPools license.

Challenge

Lab Assignment: Configure SmartPools.


CloudPools

Scenario

IT Manager: Next, take the file pool policies to the CloudPools level. For some of the long-term archive data, the group is looking at cloud options.

Your Challenge: The IT manager wants you to explain CloudPools and how file pool policies are used with CloudPools.

CloudPools Overview and Example Video

CloudPools offers the flexibility of another tier of storage that is off-premises and off-cluster. Essentially, CloudPools provides a lower TCO134 for archival-type data. Customers who want to run their own internal clouds can use a PowerScale installation as the core of their cloud.

The video provides a CloudPools overview and use case. See the student guide for a transcript of the video.

134 CloudPools optimizes primary storage with intelligent data placement. CloudPools eliminates management complexity and enables a flexible choice of cloud providers.


Link: https://edutube.emc.com/html5/videoPlayer.htm?vno=wx4VTLcN32kSlHGFwGLE1Q

Shown is an Isilon cluster with twelve nodes. A key benefit of CloudPools is the ability to interact with multiple cloud vendors. Shown in the graphic are the platforms and vendors that are supported as of OneFS 8.1.1.

CloudPools is an extension of the SmartPools tiering capabilities in the OneFS operating system. The policy engine seamlessly optimizes data placement that is transparent to users and applications. Moving the cold archival data to the cloud lowers storage cost and optimizes storage resources.

Let us look at an example where each chassis in the cluster represents a tier of storage. The topmost chassis is targeted for the production high-performance workflow and may have nodes such as F800s. When data is no longer in high demand, SmartPools moves the data to the second tier of storage. The example shows the policy moves data that is not accessed and that is over thirty days old. Data on the middle tier may be accessed periodically. When files are no longer accessed for more than 90 days, SmartPools archives the files to the lowest chassis or tier, such as A200 nodes.

The next policy moves the archive data off the cluster and into the cloud when data is not accessed for more than 180 days. Stub files, which are also called SmartLinks, are created. Stub files consume approximately 8 KB of space on the Isilon cluster. Files that are accessed or retrieved from the cloud, or files that are not fully moved to the cloud, have parts that are cached on the cluster and are part of the stub file. The storing of CloudPools data and user access to data that is stored in the cloud is transparent to users.

CloudPools files undergo a compression algorithm and then are broken into their 2 MB cloud data objects, or CDOs, for storage. The CDOs conserve space on the cloud storage resources. Internal performance testing does note a performance penalty for compressing files and for decompressing them on read. Encryption is applied to file data transmitted to the cloud service. Each 128 KB file block is encrypted using AES-256 encryption and then transmitted as an object to the cloud. Internal performance testing notes a little performance penalty for encrypting the data stream.

CloudPools Considerations

• CloudPools is a licensed feature.
• Private and supported providers: PowerScale, ECS, Amazon S3, Microsoft Azure, Virtustream, Google Cloud, and Alibaba Cloud.
• Access performance is based on the cloud connection.
• Writes to the cloud file are cached locally - the cache is flushed at designated intervals.

CloudPools uses the SmartPools framework to move data and state information to
off-cluster storage while retaining the ability to read, modify, and write to data.


Consider the following:


• Compression135
• 2 MB CDO size
• Compliance mode136

Resources: See the CloudPools Administration Guide in the PowerScale Info Hub for information not covered in this topic, such as best practices and troubleshooting.

CloudPools Administration

Configure and manage CloudPools from the WebUI File system, Storage pools
page, CloudPools tab. Managing CloudPools using the CLI is done with the isi
cloud command.

135 In OneFS 8.2, CloudPools compresses data before sending it over the wire.

136 CloudPools in OneFS 8.2 prevents enabling compliance mode on stubs. Archiving a file before it is committed, and moving a stub into a compliance directory, are denied.


CloudPools Tab

• Configure the connection details for a cloud service.
• Configure the CloudPool.

File system > Storage pools page > CloudPools tab.

Once the SmartPools and CloudPools licenses are applied, the WebUI shows the
cloud storage account options.

After a cloud storage account is defined and confirmed, the administrator can
define the cloud pool itself.

The file pool policies enable the definition of a policy to move data out to the cloud.


Cloud Storage Account

• The name must be unique.
• Type of the cloud account.
• The URI must use HTTPS and match the URI used to set up the cloud account.
• The User Name is the name that is provided to the cloud provider.
• The Key is the account password that is provided to (or received from) the cloud provider.

The graphic shows the window for creating a cloud storage account.
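A hypothetical CLI sketch of the same step; the account type keyword and the argument names and order differ between OneFS releases, so treat this as illustrative only and check isi cloud accounts create --help:

isi cloud accounts create my_ecs_account ecs https://ecs.example.com:9021 svc_cloudpools <key>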

Cloud Storage Target

After creating a storage account, create a CloudPool and associate or point it to the
account.


• The Name must be unique to the cluster.
• Type of cloud account - the drop-down list has the supported options.
• The Vendor name and Description are optional fields.
• The storage account selection is activated after the Type is selected; the configured storage accounts are listed in the drop-down list.

The graphic shows the window to Create a CloudPool.

CloudPools SmartLink

Run the isi get -D command to see files archived to the cloud using
CloudPools.

The example checks to see if the local version on the cluster is a SmartLink file.

If the SmartLinked field returns True, the file is archived.

If the output is False, the file is not archived.
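For example (the file path is illustrative):

isi get -D /ifs/sales/archive/report.docx | grep -i SmartLink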


File Pool Policies - CloudPools

Excerpt from the WebUI > Storage pools page.

SmartPools file pool policies are used to move data from the cluster to the selected
CloudPools storage target.

When configuring a file pool policy, you can apply CloudPools actions to the
selected files.

CloudPools Settings

You may want to modify the settings for the file pool policy based on your
requirements. Modifications are not necessary for most workflows. You can elect to
encrypt and compress data.


1: The default CloudPools setting allows you to archive files with snapshot
versions, but you can change the default setting.

2: You can encrypt data prior to archiving it to the cloud. Cloud data is decrypted
when accessed or recalled.

3: You can compress data prior to archiving to the cloud. Cloud data is
decompressed when accessed or recalled.


4: Set how long to retain cloud objects after a recalled file replaces the SmartLink
file. After the retention period, the cloud objects garbage collector job cleans up the
local resources allocated for the SmartLink files, and removes the associated cloud
objects.

5: If a SmartLink file has been backed up and the original SmartLink file is
subsequently deleted, associated cloud objects are deleted only after the retention
time of the backed-up SmartLink file has expired.

6: If a SmartLink file has been backed up and the original SmartLink file is
subsequently deleted, associated cloud objects are deleted only after the original
retention time, or a longer incremental or full backup retention period, has expired.

7: Specifies how often SmartLink files modified on the cluster are written to their
associated cloud data objects.

8: Determines whether cloud data is cached when a file is accessed on the local
cluster.

9: Specifies whether cloud data is fully or partially recalled when you access a
SmartLink file on the cluster.

10: Specifies how long the system retains recalled cloud data that is in the cache of
associated SmartLink files.

The graphic shows various default advanced CloudPool options that are configured.

CLI for CloudPools

The output of the isi cloud command shows the actions that you can take.



1: Use to grant access to CloudPool accounts and file pool policies. You can add
and remove cloud resource, list cluster identifiers, and view cluster details.

2: Used to manage CloudPool accounts. You can create, delete, modify, and view a CloudPool account, and list the CloudPool accounts.

3: Use to archive or recall files from the cloud. Specify files individually, or use a file
matching pattern. Files that are targeted for archive must match the specified file
pool policy, or any file pool policy with a cloud target.

4: Use to manage CloudPools TLS client certificates. You can delete, import,
modify, view, and list certificates.

5: Use to manage CloudPool jobs. Use to cancel, create, pause, resume, list, and view jobs. A CloudPools system job such as cache-writeback cannot be canceled.

6: Use to configure and manage a CloudPool pool. You can create, delete,
modify, list, and view pools. OneFS no longer accesses the associated cloud
storage account when it is deleted. If a file pool policy references the CloudPool,
OneFS does not allow the delete.

7: Use to manage network proxies. You can create, delete, modify, list, and
view proxies. CloudPools prevents deletion of a proxy that is attached to a cloud
storage account.

8: Files that are stored in the cloud can be fully recalled using the isi cloud
recall command. Recall can only be done using the CLI. When recalled, the full
file is restored to its original directory. The file may be subject to the same file pool
policy that originally archived it, and rearchive it to the cloud on the next
SmartPools job run. If re-archiving is unintended, the recalled file should be moved
to a different, unaffected, directory. The recalled file overwrites the stub file. You
can start the command for an individual file or recursively for all files in a directory
path.


9: Use to manage CloudPool top-level settings. You can list and modify
CloudPool settings, and regenerate the CloudPool master encryption key.

10: Use to restore the cloud object index (COI) for a cloud storage account on the
cluster. The isi cloud access add command also restores the COI for a cloud
storage account.
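A hedged sketch of the archive and recall subcommands described in items 3 and 8 above; the file path is illustrative, and the file must match a file pool policy with a cloud target:

isi cloud archive /ifs/sales/archive/report.docx
isi cloud recall /ifs/sales/archive/report.docx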

C2S Cloud Support

Commercial Cloud Services, or C2S, is the federal government private cloud. Federal customers are mandated to use the C2S cloud.

• Support137
• Integration138
• No Internet connection139

137 C2S support delivers full CloudPools functionality for a target endpoint, and supports use with the C2S Access Portal (CAP) and X.509 client certificate authority. C2S also provides support (from AIMA) to securely store certificates, and to validate and refresh them if needed.

CloudPools Limitations

Listed are limitations to CloudPools.


• File recall requires administrative action
• File spillover is not supported

In a standard node pool, file pool policies can move data from high-performance
tiers to storage tiers and back as defined by their access policies. However, data
that moves to the cloud remains stored in the cloud unless an administrator
explicitly requests data recall to local storage. If a file pool policy change is made
that rearranges data on a normal node pool, data is not pulled from the cloud.
Public cloud storage often places the largest fees on data removal, thus file pool
policies avoid removal fees by placing this decision in the hands of the
administrator.

The connection between a cluster and a cloud pool has limited statistical features.
The cluster does not track the data storage that is used in the cloud, therefore file
spillover is not supported. Spillover to the cloud would present the potential for file
recall fees. As spillover is designed as a temporary safety net, once the target pool
capacity issues are resolved, data would be recalled back to the target node pool
and incur an unexpected fee.


138 The CloudPools C2S feature offers an integrated solution with AWS Commercial Cloud Services (C2S), a private instantiation of the AWS commercial cloud.

139 This service is 'air gapped', which means it has no direct connection to the Internet.


Statistic details, such as the number of stub files on a cluster, or how much cached data is stored in stub files and would be written to the cloud on a flush of that cache, are not easily available. No historical data is tracked on the network usage between the cluster and the cloud, either in write traffic or in read requests. These network usage details should be viewed from the cloud service management system.

Challenge

Open participation questions:

Question: What is restored when the recall command is used on a CloudPool?


Configuring Data Services


File Filtering

Scenario

IT Manager: Now that you have a good understanding of storage pools, let's investigate file filtering. There are some types of files we do not want stored in our production directories.

Your Challenge: The IT manager wants you to explain file filtering and configure the shares to filter unnecessary files.

File Filtering Overview

The graphic shows that .avi files are prevented from being written to the finance access zone.


File filtering enables administrators to deny or allow file access on the cluster that is
based on the file extension.
• Denies writes for new files.
• Prevents accessing existing files.
• Explicit deny lists.140
• Explicit allow lists.141
• No limit to extension list.
• Per access zone.142
• Configurable for the SMB defaults143.
• No license is required.

140Explicit deny lists are used to block only the extensions in the list. OneFS
permits all other file types to be written. Administrators can create custom
extension lists based on specific needs and requirements.

141
Explicit allow list permits access to files only with the listed file extensions.
OneFS denies writes for all other file types.

142 The top level of file filtering is set up per access zone. When you enable file
filtering in an access zone, OneFS applies file filtering rules only to files in that
access zone.

143OneFS does not take into consideration which file sharing protocol was used to
connect to the access zone when applying file filtering rules. However, you can
apply additional file filtering at the SMB share level.


Management of Existing Cluster Files

If enabling file filtering on an access zone with existing shares or exports, the file
extensions determine access to the files.
• User denied access.144
• Administrator access.145

144 Users cannot access any file with a denied extension. The extension can be
denied through the denied extensions list, or because the extension was not
included as part of the allowed extensions list.

145 Administrators can still access existing files. Administrators can read the files or
delete the files. Administrators with direct access to the cluster can manipulate the
files.


• No filter = access to files.146


• Applies only to supported protocols.147

File Filtering Use Cases

Listed are common file filtering use cases.

• Enforces organization policies148


• Meet compliance requirements149
• Limit large-size files content to share150

146How the file filtering rule is applied to the file determines where the file filtering
occurs. If a user or administrator accesses the cluster through an access zone or
SMB share without applying file filtering, files are fully available.

147 File filters are applied only when accessed over the supported protocols.

148 A use case to enforce file filtering is to adhere to organizational policies.

149 With the compliance considerations today, organizations struggle to meet many of the requirements. For example, many organizations are required to make all emails available for litigation purposes. To help ensure that email is not stored longer than wanted, deny storing .pst files.

150Another use case is to limit the cost of storage. Organizations may not want
typically large files, such as video files, to be stored on the cluster, so they can
deny .mov or .mp4 file extension.


• Avoid potential copyright infringement issues151


• Isolate access zone or share for only a specific file use152

File Filtering Configuration

When you enable file filtering in an access zone, OneFS applies file filtering rules
only to files in that access zone.

151
An organizational legal issue is copyright infringement. Many users store their
.mp3 files on the cluster and open a potential issue for copyright infringement.

152 Another use case is to limit an access zone for a specific application with its
unique set of file extensions. File filtering with an explicit allow list of extensions
limits the access zone or SMB share for its singular intended purpose.


Configure File Filtering

1. Select the access zone.
2. Enable file filters - unchecked by default.
3. Select to allow or deny writes.
4. Add extensions - wildcards and special characters are not permitted, only the (.) period.

Access zone level: WebUI: Access > File filter > File filter settings.

You can configure file filtering at different levels.

You can configure file filters on the Protocols > Windows sharing (SMB) > Default share settings page153.

Modify File Filtering

File filtering settings can be modified by changing the filtering method or editing the file extensions.

• Browse to Access > File filter, and select the access zone to modify from the Current Access Zone drop-down list.
• Clear the Enable file filters check box to disable file filtering in the access zone.
• Select to deny or allow, enter the extension of the file, and click Submit.
• Click the Remove Filter button next to an extension to remove a file name extension.

CLI: isi smb shares create and isi smb shares modify commands. If using RBAC, the user must have the ISI_PRIV_FILE_FILTER privilege.

153 Configuring file filters on individual SMB shares enables more granular control.
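A hedged sketch of per-share filtering from the CLI; the share name and extensions are examples, and the option names should be confirmed with isi smb shares modify --help:

isi smb shares modify marketing --file-filtering-enabled true --file-filter-type deny --file-filter-extensions .avi,.mp4,.mov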

Challenge

Your Challenge: Log in to the cluster and configure file filtering on an SMB share.


SmartQuotas

Scenario

IT Manager: One of the lessons we learned was that a small percentage of users would consume a large portion of the storage capacity. To fix the problem, we implemented quotas. I want you to do the same on the PowerScale system.

Your Challenge: The IT manager wants you to discuss the types of quotas, explain quota overhead, and configure quotas on the directories.

SmartQuotas Overview Video

This video provides an overview of SmartQuotas. See the student guide for a transcript of the video.

Link: https://edutube.emc.com/Player.aspx?vno=tCIE1bGAUz6k3W1ic8tZfw==&autoplay=true

SmartQuotas is a software module that is used to limit, monitor, thin provision, and
report disk storage usage at the user, group, and directory levels. Administrators
commonly use file system quotas for tracking and limiting the storage capacity that
a user, group, or project can consume. SmartQuotas can send automated
notifications when storage limits are exceeded or approached.

Quotas are a useful way to ensure that a user or department uses only their share
of the available space. SmartQuotas are also useful for enforcing an internal
chargeback system. SmartQuotas contain flexible reporting options that can help
administrators analyze data usage statistics for their Isilon cluster. Both
enforcement and accounting quotas are supported, and various notification
methods are available.

Before OneFS 8.2, SmartQuotas reported the quota free space only on directory quotas with a hard limit. For user and group quotas, SmartQuotas reported the size of the entire cluster capacity or parent directory quota, not the size of the quota. OneFS 8.2.0 includes enhancements to report the quota size for users and groups. The enhancements reflect the true available capacity that is seen by the user.

SmartQuotas Implementation

You can choose to implement accounting quotas or enforcement quotas. The table
below displays the difference between the types.

Accounting quotas: monitor disk usage; used for analysis and planning; threshold subtype - advisory.

Enforcement quotas: monitor and limit disk usage; enable notifications; threshold subtypes - hard and soft.


Enforcement Quotas

Enforcement quotas support three subtypes and are based on administrator-defined thresholds:
• Hard quotas
• Soft quotas
• Advisory quotas

Quota Types

There are six types of quotas that you can configure.

1: Directory and default directory quotas: Directory quotas are placed on a directory, and apply to all directories and files within that directory, regardless of user or group. Directory quotas are useful for shared folders where many users store data, and the concern is that the directory grows unchecked.

2: User and default user quotas: User quotas are applied to individual users, and
track all data that is written to a specific directory. User quotas enable the
administrator to control the capacity any individual user consumes in a particular
directory. Default user quotas are applied to all users, unless a user has an
explicitly defined quota for that directory. Default user quotas enable the
administrator to apply a quota to all users, instead of individual user quotas.

3: Group and default group quotas: Group quotas are applied to groups and limit
the amount of data that the collective users within a group can write to a directory.
Group quotas function in the same way as user quotas, except for a group of people instead of individual users. Default group quotas are applied to all groups, unless a group has an explicitly defined quota for that directory. Default group quotas operate like default user quotas, except on a group basis.

Caution: Configuring any quotas on the root of the file system (/ifs) could result in significant performance degradation.

Default Directory Quotas

With default directory quotas, you can apply a template configuration to another
quota domain.

• Directory default (template) quota created - 10 GB.
• Directories that do not inherit the 10 GB directory quota.
• Quota domains with the inherited and linked directory quota - 10 GB.

The graphic shows a 10-GB default directory quota.

The graphic shows an example of creating a 10-GB hard, default directory quota on the /ifs/sales/promotions directory. The directory default quota is not in and of itself a quota on the promotions directory. Directories below the promotions directory, such as the /Q1 and /Q2 directories, inherit and apply the 10 GB quota. The /Q1 domain and the /Q2 domain are independent of each other. Sub directories such as /storage and /servers do not inherit the 10 GB directory quota. Given this example, if the /Q2 folder reaches 10 GB, that linked quota is independent of the 10 GB default directory quota on the parent directory. Modifications to the default directory quota on promotions are reflected to inherited quotas asynchronously. Inheritance is seen when listing quotas, when querying an inheriting quota record, or when I/O happens in the sub directory tree.


Creating Default Directory Quotas

The default directory quota is created using the CLI154.

You can use the WebUI to view the created quotas and their links. See the student
guide for information about quota links.

Creating and viewing default directory quotas.

The top example shows creating a template on the Features directory. The
directory has a hard limit of 10 GB, an advisory at 6 GB, and a soft limit at 8 GB
with a grace period of 2 days.

The Unlink option makes the quota independent of the parent, meaning
modifications to the default directory quota no longer apply to the sub directory.
This example shows removing the link on the Screen_shots sub directory and then
modifying the default directory quota on the parent, Quota, directory. Remove the
link using the button on the WebUI or isi quota quotas modify --
path=/ifs/training/Features/Quota/Screen_shots --
type=directory --linked=false. Using the --linked=true option re-links
or links to the default directory quota.

154 The 'isi quota' command is used to create the default directory quota.
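A hedged sketch of creating a similar default directory quota from the CLI; the threshold flag names and the default-directory type keyword are assumptions that vary by release, so confirm with isi quota quotas create --help:

isi quota quotas create /ifs/training/Features default-directory --hard-threshold 10G --advisory-threshold 6G --soft-threshold 8G --soft-grace 2D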


Quota Accounting

The figure shows the reporting behavior on a 1 KB file:
• Count all snapshot data in usage limits - the sum of the current directory and any snapshots of that directory.
• Report protection overhead and metadata – 8 KB or more.
• Report physical size without overhead – 8 KB.
• Report the actual size of the file – 1 KB.

The quota accounting options are Include snapshots in the storage quota155 and Enforce the limits for this quota based on:

• File system logical size156 (default)
• Physical size157
• Application logical size158 (OneFS 8.2 and later)

155 Tracks both the user data and any associated snapshots. A single path can have two quotas that are applied to it, one without snapshot usage (default) and one with snapshot usage. If snapshots are in the quota, more files are in the calculation.

156 Enforces the File system logical size quota limits. The default setting is to track only user data, not accounting for metadata, snapshots, or protection.

Overhead Calculations

Most quota configurations do not need to include overhead calculations.

If configuring overhead settings, do so cautiously, because they can affect the amount of disk space that is available to users.

• 40 GB user quota, 10 GB file, Include data protection overhead.
• Example: a 10 GB file enforces 20 GB against the quota.
• 2x data protection.
• Snapshot and protection overhead are typically not used.

The graphic shows an example of quota enforcement. 40 GB of capacity on the /ifs/finance directory restricts the user. The setting for the quota is using the Physical size option. If the directory is configured with a 2x data protection level and the user writes a 10 GB file, the file consumes 20 GB of space. The consumption is 10 GB for the file and 10 GB for the data-protection overhead. The user has reached 50% of the 40 GB quota by writing a 10 GB file to the cluster.

157 Tracks the user data, metadata, and any associated FEC or mirroring overhead. This option can be changed after the quota is defined.

158 Tracks the usage on the application or user view of each file. Application logical size is typically equal to or less than the file system logical size. The view is in terms of how much capacity is available to store logical data regardless of data reduction, tiering technology, or sparse blocks. The option enforces quota limits, and reports the total logical data across different tiers, such as CloudPools.

Quotas and Thin Provisioning

• The total quota amount can exceed the cluster capacity - for example, 200 TB of quotas on 150 TB of capacity.
• The cluster can reach maximum capacity before the quotas are reached - for example, with a 75 TB quota.

SmartQuotas supports thin provisioning, also known as over provisioning, which enables administrators to assign quotas above the physical cluster capacity.

• Capacity reached, but quota can be under limit.159
• Adding nodes.160
• Management reduction.161
• Careful monitoring.162

159 With thin provisioning, the cluster can be full even while some users or directories are well under their quota limit. Configuring quotas that exceed the cluster capacity enables a smaller initial purchase of capacity/nodes.

160 Thin provisioning lets you add more nodes as needed, promoting a capacity on-demand model.

161 Setting larger quotas initially reduces administrative management as more users access the cluster.

162 Thin provisioning requires monitoring cluster capacity usage carefully. If a quota exceeds the cluster capacity, nothing prevents users from consuming all available space, which results in service outages for all users and cluster services.

Quota Nesting

Nesting quotas is having multiple quotas within the same directory structure.

Nesting - multiple quotas within the same directory structure. The example shows that all quotas are hard enforced:
• Directory quota = 1 TB: the directory structure cannot exceed 1 TB.
• User quota = 25 GB: the directory can be any size up to 1 TB, but each user can only store 25 GB.
• Directory quota = 800 GB: the directory structure cannot exceed 800 GB.
• No quota: the directory structure cannot exceed the 1 TB parent quota.

At the top of the hierarchy, the /ifs/sales folder has a directory quota of 1 TB. Any user can write data into this directory, or the /ifs/sales/proposals directory, up to a combined total of 1 TB. The /ifs/sales/promotions directory has a user quota assigned that restricts the total amount that any single user can write into this directory to 25 GB. Even though the parent directory (sales) is below its quota restriction, a user is restricted within the promotions directory. The /ifs/sales/customers directory has a directory quota of 800 GB that restricts the capacity of this directory to 800 GB. However, if users place 500 GB of data in the /ifs/sales/proposals directory, users can only place 500 GB in the other directories. The parent directory cannot exceed 1 TB.

Percent-Based Advisory and Soft Limits

The figures show create, modify, and view examples.

In OneFS 8.2.0 and later, you can view advisory and soft quota limits as a percent of the hard quota limit.

Only advisory and soft quota limits can be defined as a percent.

A hard limit must exist to set the advisory and soft percentage.

Administrators cannot set both an absolute and a percent-based limit on a directory.


Quota Notifications

• Rules.
• Email integrates with the AD or LDAP email configuration.
• Configure multiple email recipients.
• Templates are located in /etc/ifs.

PowerScale WebUI notification configuration window.

Administrators can configure notifications to send alerts when the provisioned storage approaches storage maximums, enabling more storage to be purchased as needed.

Quota events can generate notifications.

Send notifications by email or through a cluster event. See the student guide for
more information.

The email option sends messages using the default cluster settings. You can send
the email to the owner of the event, or to an alternate contact, or both the owner
and an alternate. You can also use a customized email message template. Use a
distribution list to send the email to multiple users.

If using LDAP or Active Directory to authenticate users, the cluster uses the user
email setting that is stored within the directory. If no email information is stored in
the directory, or if a Local or NIS provider authenticates, you must configure a
mapping rule.


Quota Notification Template

The graphic shows one of the available quota templates that are located in the
/etc/ifs directory.

• PAPI support163.
• OneFS 8.2 enhancements164.

163 PAPI supports an email ID list in the action_email_address property: {"action_email_address": ["[email protected]","[email protected]"]}.

164 In OneFS 8.2.0, administrators can configure quota notification for multiple users. The maximum size of the comma-separated email ID list is 1024 characters. The isi quota command option --action-email-address accepts multiple comma-separated values.


Template Variables

An email template contains variables. You can use any of the SmartQuotas
variables in your templates.

Considerations

Listed are best practices to consider when discussing SmartQuotas.


• Too many nested quotas can limit performance.
− A single directory with overlapping quotas can also degrade performance.
• Thin provisioning can exceed cluster capacity.
• Most customers do not include overhead and snapshots in quota limits.
• If quota limits include overhead and snapshots, you may need to set larger
quota limits.
− Cloned and deduplicated files are treated as ordinary files by SmartQuotas.
• Test notifications to avoid surprises (for example, an incorrectly configured mail relay).
• OneFS 8.2:

• Increased from 20,000 quota limits per cluster to 500,000 quota limits per
cluster.
• Quota notification daemon optimized to handle about 20 email alerts per
second.


• Support for the rpc.quotad service in the NFS container with some
statistics.

Best Practice:
• Do not enforce quotas on file system root (/ifs).
• Do not configure quotas on SyncIQ target directories.

Challenge

Lab Assignment: The next OneFS feature to implement is SmartQuotas. Go to the lab and configure user, group, and directory quotas.


SmartDedupe

Scenario

IT Manager: The cluster is hosting home directories for the users. Much
of the data is shared and has multiple copies. Deduplication should help
address the inefficient use of space.

Your Challenge: The IT manager wants you to describe the benefits of deduplication, explain how deduplication works, and schedule deduplication on a directory.

SmartDedupe Overview

Information technology managers are challenged with managing explosive data growth.

Business data is often filled with significant amounts of redundant information.

SmartDedupe is an asynchronous batch job that identifies identical storage blocks across the pool. The job is transparent to the user.

OneFS deduplicates at the block level: multiple instances of identical data are reduced to a single instance, and one copy of the deduplicated blocks is saved. File metadata is not deduplicated.


An example of redundant information is when multiple employees store the same email attachments: multiple copies of the same files are saved or replicated. This action leads to multiple copies of the same data, which take up valuable disk capacity. Data deduplication is a specialized data reduction technique that allows for the reduction of duplicate copies of data.

SmartDedupe Architecture

The SmartDedupe architecture consists of five principal modules: Deduplication Control Path, Deduplication Job, Deduplication Engine, Shadow Store, and Deduplication Infrastructure.

1: The SmartDedupe control path consists of PowerScale OneFS WebUI, CLI and
RESTful PAPI, and is responsible for managing the configuration, scheduling, and
control of the deduplication job.

2: One of the most fundamental components of SmartDedupe, and deduplication in general, is 'fingerprinting'. In this part of the deduplication process, unique digital signatures, or fingerprints, are calculated using the SHA-1 hashing algorithm, one for each 8 KB data block in the sampled set.

When SmartDedupe runs for the first time, it scans the data set and selectively samples blocks from it, creating the fingerprint index. This index contains a sorted list of the digital fingerprints, or hashes, and their associated blocks. Then, if blocks are determined to be identical, the block's pointer is updated to the already existing data block and the new, duplicate data block is released.

3: Shadow stores are similar to regular files but are hidden from the file system namespace, so they cannot be accessed via a path name. A shadow store typically grows to a maximum size of 2 GB, with each block referenced by up to 32,000 files. If the reference count limit is reached, a new block is allocated, which may or may not be in the same shadow store. Shadow stores do not reference other shadow stores, and snapshots of shadow stores are not permitted because the data that is stored in shadow stores cannot be overwritten.

4: The primary user-facing component of PowerScale SmartDedupe is the deduplication job. This job performs a file system tree-walk of the configured directory, or multiple directories, hierarchy. The Job Engine performs the control, impact management, monitoring, and reporting of the deduplication job in a similar manner to other storage management and maintenance jobs on the cluster.

5: Architecturally, the deduplication job, and the supporting dedupe infrastructure, consist of the following four phases: Sampling, Duplicate Detection, Block Sharing, and Index Update.

SmartDedupe Considerations

Following are areas to consider for SmartDedupe:

• SmartDedupe License165

165 SmartDedupe is included as a core component of PowerScale OneFS but requires a valid product license key in order to activate. This license key can be purchased through the PowerScale account team.


• Best for static files and directories166


• Post process167 - not immediate - eventual
• F810 and H5600 In-line data deduplication168
• Asynchronous169 - does not block writes.
• Per disk pool170
• File metadata is not deduplicated.171

166 Deduplication is most effective for static or archived files and directories - less modified files equals less negative effect.

167 To avoid increasing write latency, deduplication is done on data-at-rest. The data starts out at the full literal size on the drives, and might get deduplicated hours or days later.

168 In-line data deduplication and in-line data compression are supported on the F810 and H5600 platforms in OneFS 8.2.1.

169Deduplication does not occur across the length and breadth of the entire cluster,
but only on each disk pool individually.

170 Data that is moved between node pools may change what level of deduplication
is available. An example would be a file pool policy that moves data from a high-
performance node pool to nearline storage. The data would no longer be available
for deduplication for the other data on the high-performance node pool, but would
be newly available for deduplication on nearline storage.

171 Metadata is changed more frequently, sometimes in trivial ways, leading to poor deduplication.


• Encrypted, compressed, and files less than 32 KB172


• Shadow store173 – 2 GB default size – up to 256,000 blocks storable
• Replication and backup behavior174
• Snapshots175

172SmartDedupe does not deduplicate encrypted or compressed files. Also, files


that are 32 KB or smaller are not deduplicated, because doing so would consume
more cluster resources than the storage savings are worth.

173The default size of a shadow store is 2 GB, and each shadow store can contain
up to 256,000 blocks. Each block in a shadow store can be referenced up to
32,000 times.

174 When deduplicated files are replicated to another PowerScale cluster or backed
up to a tape device, the deduplicated files no longer share blocks on the target
cluster or backup device. Although you can deduplicate data on a target
PowerScale cluster, you cannot deduplicate data on an NDMP backup device.
Shadow stores are not transferred to target clusters or backup devices. Because of
this, deduplicated files do not consume less space than non deduplicated files
when they are replicated or backed up. To avoid running out of space, ensure that
target clusters and tape devices have free space to store deduplicated data.

175SmartDedupe will not deduplicate the data stored in a snapshot. However, you
can create snapshots of deduplicated data. If deduplication is enabled on a cluster
that already has a significant amount of data stored in snapshots, it will take time
before the snapshot data is affected by deduplication. Newly created snapshots will
contain deduplicated data, but older snapshots will not.


• One deduplication job runs at a time176


− Schedule deduplication to run during the cluster's low usage hours; by
default, the SmartDedupe job runs automatically.
− After the initial dedupe job, schedule incremental dedupe jobs to run about
every two weeks, depending on the size and rate of change of the dataset.
− Run SmartDedupe with the default "low" impact Job Engine policy.
• Rehydrates files from shadow store177

SmartDedupe Function

A job in the OneFS Job Engine178 runs through blocks that are saved in every disk
pool, and compares the block hash values.179

176 Only one deduplication job can run at a time. The job uses CPU and memory
resources, so run it at non-peak or off-hour times.

177 Once a file is undeduplicated, it cannot be re-deduplicated. Before rehydrating,
ensure that sufficient cluster capacity exists to hold the undeduplicated directory.

178 The job first builds an index of blocks, against which comparisons are done in a
later phase, and ultimately confirmations and copies take place. The deduplication
job can be time consuming, but because it runs as a job, the Job Engine throttles
the system load and limits the impact. Administrators find that their cluster space
usage has dropped once the job completes.

179If a match is found, and confirmed as a true copy, the block is moved to the
shadow store, and the file block references are updated in the metadata.


The graphic shows the SmartDedupe function as a sequence of numbered steps:

1: Files greater than 32 KB

2: Compare 8 KB blocks.

3: Find matching blocks.

4: Matching blocks moved to shadow store

5: Free blocks

6: Save block references in metadata.

SmartDedupe Use Cases

Data in an enterprise typically contains substantial quantities of redundant
information.

SmartDedupe is typically used in the following ways:

Use Cases                                Considerations

Home directories180                      Compressed versus uncompressed data

Archival files181                        Unique versus replica files

Uncompressed virtual machine images182   Rapid change versus near-static

Note: Rapid changes in the file system tend to undo deduplication,


so that the net savings achieved at any one time is low. If in doubt,
or attempting to establish the viability of deduplication, perform a
dry run.

180A home directory scenario where many users save copies of the same file can
offer excellent opportunities for deduplication.

181 Static, archival data is seldom changing, therefore the storage that is saved may
far outweigh the load dedupe places on a cluster. Deduplication is more justifiable
when the data is relatively static.

182 Workflows that create many copies of uncompressed virtual machine images
can benefit from deduplication. Deduplication does not work well with compressed
data; the compression process tends to rearrange data to the point that identical
files in separate archives are not identified as such. In environments with many
unique files, the files do not duplicate each other, so the chances of finding identical
blocks are low.


SmartDedupe Jobs

Because the block sharing phase is the slowest deduplication phase, a dry run, or
DedupeAssessment, skips it and returns an estimate of capacity savings.

Editing the Dedupe or DedupeAssessment jobs enables the administrator to


change the Default priority, Default impact policy, and Schedule.

The jobs associated with deduplication are Dedupe and DedupeAssessment. The
administrator can start the dry run and edit the job type.

1: The assessment enables a customer to decide if the savings that are offered by
deduplication are worth the effort, load, and cost.

2: Dedupe works on datasets which are configured at the directory level, targeting
all files and directories under each specified root directory. Multiple directory paths
can be specified as part of the overall deduplication job configuration and
scheduling.
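As a minimal sketch, both jobs can also be started from the CLI through the Job
Engine, and the results reviewed with the isi dedupe commands (exact output
columns vary by OneFS release):

Start a dry run to estimate savings without sharing any blocks:
isi job jobs start DedupeAssessment

Start the deduplication job manually:
isi job jobs start Dedupe

List deduplication reports and view cluster-wide savings:
isi dedupe reports list
isi dedupe stats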


SmartDedupe Administration

The WebUI SmartDedupe management is under the File system menu options.
Enter the paths for deduplication183 from the Settings tab.

From the Deduplication window, you can start a deduplication job and view any generated reports.
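A minimal CLI sketch of the same configuration, assuming a directory of
/ifs/data/projects (the path is illustrative, and the exact flag name for adding paths
should be confirmed with isi dedupe settings modify --help on your release):

Add a directory to the deduplication path list:
isi dedupe settings modify --paths /ifs/data/projects

Review the current deduplication settings:
isi dedupe settings view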

Challenge

Lab Assignment: Run deduplication assessment, run deduplication,


and view reports.

183 Selecting a specific directory gives the administrator granular control to avoid
attempting to deduplicate data where no duplicate blocks are expected, like large
collections of compressed data. Deduplicating an entire cluster without considering
the nature of the data is likely to be inefficient.


SnapshotIQ

Scenario

IT Manager: I think we need to use snapshots to give our users the


ability to recover files.

Your Challenge: The IT manager wants you to describe snapshot


behavior, identify snapshot types, and configure and manage snapshot
functionality.

SnapshotIQ Overview

SnapshotIQ snapshots are logical pointers to data stored on a cluster at a specific


point in time.

If you modify a file and determine that the changes are unwanted, you can copy or
restore the file from the earlier file version.

You can use snapshots to stage content to export, and ensure that a consistent
point-in-time copy of the data is replicated or backed up.


The graphic represents the blocks for production data and the snapshot of that production data. The
snapshot is preserving the original blocks B and E after they have changed (B' and E').

Important: A SnapshotIQ license184 is not required for all snapshot


functions.

184 Some OneFS operations generate snapshots for internal system use without
requiring a SnapshotIQ license. If an application generates a snapshot, and a
SnapshotIQ license is not configured, the snapshot can be still accessed. However,
all snapshots that OneFS operations generate are automatically deleted when no
longer needed. You can disable or enable SnapshotIQ at any time. Note that you
can create clones on the cluster using the "cp" command, which does not require a
SnapshotIQ license.


Snapshot Operations

The graphic shows file system usage and snapshot usage: the file consists of blocks
A through D, block D is modified to D', and the original block D is copied to the
snapshot, which the snapshot file references.

Snapshot create: Snapshots are created almost instantaneously, regardless of the
size185 of the file or directory.

Snapshot growth: Snapshots grow as the data is modified, and only the changed
data blocks are contained186 in snapshots.

185A snapshot is not a copy of the original data, but only an extra set of pointers to
the original data. At the time it is created, a snapshot consumes a negligible
amount of storage space on the cluster. The original file references the snapshots.

186 If data is modified on the cluster (Block D' in the graphic), only one copy of the
changed data is made. With CoW the original block (Block D) is copied to the
snapshot. The snapshot maintains a pointer to the data that existed at the time that
the snapshot was created.

Snapshot consumption: A snapshot consumes187 only the necessary space to


restore the files contained in the snapshot.

Copy on Write and Redirect on Write

OneFS uses both Copy on Write, or, CoW and Redirect on Write, or, RoW.

CoW is typically used for user-generated snapshots, and RoW for system-generated
snapshots.

Both methods have pros and cons, and OneFS dynamically picks the snapshot
method to use to maximize performance and keep overhead to a minimum.

The graphic compares CoW and RoW for a change made to block D. With CoW, the
original block is copied to the snapshot before the new data (D') is written; this incurs
a double write penalty, but there is less fragmentation of the HEAD file, which is
better for cache prefetch and related file reading functions. With RoW, the new data
is instead redirected to a new block, avoiding the double write at the cost of more
fragmentation.

187Snapshots do not consume a set amount of storage space, there is no


requirement to pre-allocate space for creating a snapshot. If the files that a
snapshot contains have not been modified, the snapshot consumes no additional
storage space on the cluster. The amount of disk space that a snapshot consumes
depends on the amount of data that is stored by the snapshot and the amount of
data the snapshot references from other snapshots.


Ordered and Unordered Deletions

An ordered deletion is the deletion of the oldest snapshot of a directory. Ordered


deletion is recommended for datasets with a lower rate of change.

An unordered deletion is the deletion of a snapshot that is not the oldest snapshot
of a directory. For more active data, the configuration and monitoring overhead is
slightly higher, but fewer snapshots are retained.

Ordered - same duration period and delete oldest first

Unordered - multiple schedules, different retentions - deletions not oldest first

The benefits of unordered deletions that are compared with ordered deletions
depend on how often the snapshots that reference the data are modified. If the
data is modified frequently, unordered deletions save space. However, if data
remains unmodified, unordered deletions are not likely to save space, and it is
recommended that you perform ordered deletions to free cluster resources.

In the graphic, /ifs/org/dir2 has two snapshot schedules. If the retention period
on schedule 1 is longer than the retention period on schedule 2, the snapshots for
the directory are deleted out of order. Unordered deletions can take twice as long
to complete and consume more cluster resources than ordered deletions. However,
unordered deletions can save space by retaining a smaller total number of blocks
in snapshots.


Creating Snapshots

You can create snapshots by configuring a snapshot schedule or manually


generating an individual snapshot.

• Creating more than one snapshot per directory is advantageous.


• Use shorter expiration periods188.
• Use the isi snapshot snapshots list | wc -l command to check the number of
available snapshots.

188Use shorter expiration periods for snapshots that are generated more
frequently, and longer expiration periods for snapshots that are generated less
frequently.
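As a minimal sketch, a schedule and a manual snapshot might be created from the
CLI as follows (the names, path, pattern, expiration, and schedule string are
illustrative; confirm the argument order and schedule syntax with --help on your
OneFS release):

Create a snapshot schedule with a seven-day expiration:
isi snapshot schedules create hourly_media /ifs/eng/media HourlyMedia_%Y-%m-%d_%H-%M "every day every 1 hours" --duration 7D

Create a one-off snapshot that expires in seven days:
isi snapshot snapshots create /ifs/eng/media --name media_manual --expires 7D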


Accessing Snapshot Files

OneFS tracks snapshots in the .snapshot directory. The following describes snapshot
structure and access.

Snapshot location

Snapshot files are in two places.

• Snapshots are within the path that is snapped189.


• You can view the .snapshot files at the root of the /ifs190 directory.
• With SmartPools, snapshots can physically reside on a different storage tier
than the original data.

Accessing snapshots

There are two paths through which to access snapshots.

189For example, if snapping a directory located at /ifs/edu/students/name1, view


the hidden .snapshot directory using the CLI or Windows Explorer. The path would
look like /ifs/edu/students/name1/.snapshot.

190From /ifs all the .snapshots on the system can be accessed, but users can only
open the .snapshot directories for which they already have permissions. Without
access rights users cannot open or view any .snapshot file for any directory.


• Access through the /ifs/.snapshot191 directory.


• Access the .snapshot directory in the path192 where the snapshot was taken.

Preserving Permissions

Snapshots can be taken at any point in the directory tree. Each department or user
can have their own snapshot schedule.

The snapshot preserves193 the file and directory permissions at that point in time of
the snapshot.

191This is a virtual directory where all the snaps listed for the entire cluster are
stored.

192 To view the snapshots on /ifs/eng/media, user can change directory (cd) to
/ifs/eng/media and access /.snapshot

193The snapshot owns the changed blocks and the file system owns the new
blocks. If the permissions or owner of the current file is changed, it does not affect
the permissions or owner of the snapshot version.


The snapshot of /ifs/sales/forecast/dave can be accessed from /ifs/.snapshot or


/ifs/sales/forecast/dave/.snapshot. Permissions for ../dave are maintained, and the ability to
traverse the .snapshot directory matches those permissions.

Restoring Snapshots

If data is accidentally erased, lost, corrupted, or compromised, clients can restore


the data from the snapshot.

Restore Theory

The graphic shows a simple example of CoW: a file system with blocks A through E,
snapshots taken at Time 1 and Time 2, and a client restoring data from a snapshot to
a restore target.


For example, a directory has writes and snapshots at different times:


• Time 1: A,B,C,D are preserved in Snapshot Time 1.
• Time 2: A,B,C,D’ are preserved in Snapshot Time 2.

More data is written into the directory:


• Time 3: A’,B,C,D’
• Time 4: A’,B,C,D’, E

Since no snapshot is taken after Time 2, data corruption to A’ or E is not restorable


from a snapshot.

QUESTION: What happens when the user wants to recover block A data that was
overwritten in Time 3 with A’?

A backup snapshot is automatically created before copying A back to the directory.

Restore from Windows

• Mapped share from the PowerScale cluster
• Right-click and select Properties
• List associated snapshots with modification time
• Snapshot options

Clients with Windows Shadow Copy Client can restore the data from the snapshot.


Restore from NFS Host

• cd to the .snapshot directory
• List snapshots at the directory level
• List point-in-time copies of the files in the directory
• To recover a file, use the "mv" or "cp" command

Clients accessing the export over NFS can navigate using the .snapshot directory.
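A minimal sketch of those steps from an NFS client, assuming the /ifs/eng/media
example used earlier (the snapshot and file names are illustrative):

cd /ifs/eng/media/.snapshot
ls
ls HourlyMedia_2020-04-27_23-00
cp HourlyMedia_2020-04-27_23-00/report.txt /ifs/eng/media/report.txt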

From a Windows client, to recover a deleted file, right-click the folder that previously
contained the file, click Restore Previous Version, and select the required file. To
restore a corrupted or overwritten file, right-click the file itself, instead of the folder
that contains the file, and then click Restore Previous Version.

No additional storage is consumed and the restore is instant when restoring the
production file from a snap using RoW. Snapshot Time 2 has preserved A. A
backup snapshot is automatically created before copying A back to the file system.
The backup is a failback or safety mechanism should the restore from the snap be
unacceptable and the user wants to revert to A’.

SnapshotIQ Considerations

Listed are areas to consider when discussing snapshots.

• Always set expiration to prevent snaps filling cluster to capacity.


• Total cluster snap limit: 20,000 - best practice is 1000 limit per directory.
• Run concurrent schedules with different frequencies/expiration.
• SnapshotDelete job must run to completion.
• Manual snapshot deletion is not recommended – set up to expire when created.
• Deleting snapshots out of order may cause newer snapshots that depend on the
data being removed to copy those blocks before the deletion completes.


• Backup, SyncIQ, Clones, File System Analytics use snapshots – no license


required.
• Use aliases - an alias name points to the most recent version of a snapshot and
eases readability for application restores.

Challenge

Lab Assignment: Create a snapshot schedule, create snapshots, and


use a snapshot to restore data.


SyncIQ

Scenario

IT Manager: One of the things I am interested in is SyncIQ. I would like
to investigate the feature and see if it can help make our environment
more efficient.

Your Challenge: The IT manager wants you to describe SyncIQ and


configure a SyncIQ policy.

SyncIQ Overview Video

SyncIQ delivers unique, highly parallel replication performance that scales with the
dataset to provide disaster recovery. The video provides an overview of SyncIQ.
See the student guide for a transcript of the video.

The SyncIQ topic covers a foundation for SyncIQ. The PowerScale


Advanced Administration course provides a more in-depth examination
of SyncIQ.



Link:
https://edutube.emc.com/Player.aspx?vno=OZC9t92nwmWVLWNjfT/+5w==&autop
lay=true

Shown is a cluster with the source directory using SyncIQ to replicate data to a
remote target directory. OneFS SyncIQ uses asynchronous replication, enabling
you to maintain a consistent backup copy of your data on another Isilon cluster.
Asynchronous replication is similar to an asynchronous file write.

The target system passively acknowledges receipt of the data and returns an ACK
once the target receives the entire file or update. Then the data is passively written
to the target. SyncIQ enables you to replicate data from one PowerScale cluster to
another. Activate a SyncIQ license on both the primary and the secondary Isilon
clusters before replicating data between them. You can replicate data at the
directory level while optionally excluding specific files and sub-directories from
being replicated.

SyncIQ creates and references snapshots to replicate a consistent point-in-time


image of a SyncIQ domain. The SyncIQ domain is the root of the replication, such
as /ifs/finance. Metadata, such as ACLs and alternate data streams are replicated
along with data. SyncIQ offers automated failover and failback capabilities. If a
primary cluster becomes unavailable, failover and failback enable continued
operations on another Isilon cluster. In SyncIQ, an administrator creates and then
starts the replication policy. A policy is like an invoice list of what should get
replicated and how. A SyncIQ job does the work of replicating the data. OneFS
8.2.0 and later supports over-the-wire encryption to protect against man-in-the-
middle attacks, making data transfer between OneFS clusters secure.

SyncIQ Deployment Topology

Meeting and exceeding the data replication governance requirements of an
organization is critical for IT administration. SyncIQ exceeds these requirements by
providing an array of configuration options, ensuring administrators have flexible
options to satisfy all workflows with simplicity.

Under each deployment, the configuration could be for the entire cluster or a
specified source directory. Also, the deployment could have a single policy that is
configured between the clusters or several policies, each with different options
aligning to RPO and RTO requirements.

The following describes each type of deployment topology.

One-to-one

In the most common deployment scenario of SyncIQ, data replication is configured


between a single source and single target cluster as illustrated in the graphic
below.

One-to-many

SyncIQ supports data replication from a single source cluster to many target
clusters, allowing the same dataset to exist in multiple locations, as illustrated in the
graphic below. A one-to-many deployment could also be referenced as a hub-and-
spoke deployment, with a central source cluster as the hub and each remote
location representing a spoke.


Many-to-one

The many-to-one deployment topology is essentially the flipped version of the one-
to-many explained in the previous section. Several source clusters replicate to a
single target cluster as illustrated in the graphic below. The many-to-one topology
may also be referred to as a hub-and-spoke configuration. However, in this case,
the target cluster is the hub, and the spokes are source clusters.

Local Target

A local target deployment allows a single Isilon cluster to replicate within itself,
providing the powerful SyncIQ configuration options inside a local cluster, as
illustrated in the graphic below. If a local target deployment is used for disaster
readiness or archiving options, the cluster protection scheme and storage pools must
be considered.

Cascaded

A cascaded deployment combines the previous deployments. It allows a primary
cluster to replicate to a secondary location, then to a tertiary location, and so on, as
illustrated in the graphic below. Essentially, each cluster replicates to the next in the
chain.


SyncIQ Considerations and Limits

Considerations

Listed are areas to consider when configuring SyncIQ:


• Do not configure the /ifs directory as a SyncIQ domain.
• SyncIQ runs as jobs under its own Job Engine194.
• Can perform semi-automated195 failovers and failbacks.

Capabilities

The various capabilities of SyncIQ are:

194The SyncIQ Job Engine is separate from the cluster maintenance activity Job
Engine in OneFS. SyncIQ runs based on SyncIQ policies that you can schedule or
run as required manually.

195Semi-automated failovers from source to target, and semi-automated failback


from target to original source. Failover and failback only include the cluster
preparation activities and do not include DNS changes, client redirection or any
required networking changes.


• Stop a failover in progress and revert196.


• Source and target snapshots197.
• Maximum transmission units.198

196 The semi-automated failover process preserves the synchronization


relationships between the source and target clusters. SyncIQ is RBAC ready,
enabling you to configure administration roles. For organizations automating
processes, PAPI integration is available.

197 The SyncIQ process uses snapshots on both the source and target snapshots.
No SnapshotIQ license is required for basic SyncIQ snapshots on either the source
or target clusters. These snapshots are only used for SyncIQ jobs. SyncIQ
snapshots are single-instance snapshots and OneFS only retains the latest or last-
known good version.

198
SyncIQ can support larger maximum transmission units or MTU over the LAN or
WAN. SyncIQ supports auto-negotiation of MTU sizes over WAN connections. The
MTU across the network is negotiated by the network.


• Import snapshots199.
• OneFS 8.2 and above provides over-the-wire encryption200 and bandwidth
reservation201 at a policy level.

199SyncIQ has the capability to import manually taken snapshots to use as the
point-in-time reference for synchronization consistency. You can add new nodes
while a sync job runs. There is no requirement to stop the sync job before adding
new nodes. Functionality enables the ability to create a point-in-time report showing
the SyncIQ worker activity.

200In-flight encryption makes data transfer between OneFS clusters secure. The
function benefits customers who undergo regular security audits and/or
government regulations.

201The SyncIQ bandwidth setting at the global level splits the bandwidth
reservation evenly among all policies. Using the CLI, you can make bandwidth
reservations for individual policies.


Limitations

The graphic shows the SyncIQ policy scheduling options.

The limitations of SyncIQ are:


• SyncIQ does not offer high availability (HA)202.
• Discourage a complete failover and failback test203.

202The target cluster contains a copy of the source data synchronized on a


schedule. The implementation is active on the source cluster with a read-only copy
on the secondary cluster. It is used for disaster recovery or to maintain a second
copy of the data only.

203 Performing a complete failover and failback test on a monthly or quarterly basis
is discouraged. Perform failover testing only after quiescing writes to the source
(preventing changes to the data) and running all SyncIQ policies a final time to
assure complete synchronization between source and target. Failing to perform a
final synchronization can lead to data loss.


• Failover not needed for data retrieval204.


• Scheduling options205.

Compatibility

The table shows the versions of OneFS you can synchronize using SyncIQ. Target
clusters running OneFS 7.1.x are no longer supported. For information about the
support and service life-cycle dates for hardware and software products, see the
Isilon Product Availability Guide.

Source Cluster    Target cluster running OneFS
                  7.2.x    8.0.x    8.1.x    8.2.x

OneFS 7.1         Yes      Yes      Yes      Yes

OneFS 7.2         Yes      Yes      Yes      Yes

OneFS 8.0.x       Yes      Yes      Yes      Yes

OneFS 8.1.x       Yes      Yes      Yes      Yes

OneFS 8.2.x       Yes      Yes      Yes      Yes

204Retrieving a copy of the data from the target cluster does not require a failover.
The target is a read-only copy of the data. Perform a copy operation to make a
copy of the read-only data on the target cluster to a location outside of the SyncIQ
domain on the target, or to a location on the source cluster, or to the client.

205The 'Whenever the source is modified' option is not for continuous replication.
OneFS does not offer a continuous replication option. This option is for specific
workflows that have infrequent updates and require distribution of the information
as soon as possible.


CloudPools

SyncIQ can synchronize CloudPools data from the CloudPools aware source
cluster to a PowerScale target cluster.

SyncIQ provides data protection for CloudPools data and provides failover and
failback capabilities.

SyncIQ uses the CloudPools API tools to enable support.

The processes and capabilities of SyncIQ are based on the OneFS version
relationship between the source cluster and the target cluster. This relationship
determines the capabilities and behaviors available for SyncIQ policy replication.

Important: Shares, exports, cluster configuration, networking info,


metadata, licenses, etc. are not replicated. Employing tools such as
isi backup, application such as Superna Eyeglass, or a PS
engagement are often required to implement a complex solution.

SyncIQ Administrative Functions

The following provides an overview of each SyncIQ function.

Failover

Failover is the process of changing the role of the target replication directories into
the role of the source directories for assuming client read, write, and modify data
activities.


The example shows a failover where the client accesses data on the target cluster;
the target directories change from read-only to read-write.
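As a minimal sketch, a failover for a single policy is typically performed on the target
cluster by allowing writes to the policy's target directories (the policy name
finance_dr is illustrative; confirm the subcommand with isi sync recovery --help):

isi sync recovery allow-write finance_dr

DNS or client redirection changes are still required separately, as noted earlier.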

Failback

A failback206 is the process of restoring the source-to-target cluster relationship to


the original operations where client activity is again on the source cluster.

Like failover, you must select failback for each policy. You must make the same
network changes to restore access to direct clients to the source cluster.

206 A failback can happen when the primary cluster is available once again for client
activities. This could be for any number of reasons, including that a natural disaster
is no longer impacting operations, or that site communication or power outages have
been restored. You must fail back each SyncIQ policy.


The example shows a failback where the client again accesses data on the source
cluster; changes made on the target are replicated back to the source, and the
source changes to read-write once the failback completes.

Failback Preparation

Source cluster

To initiate a failback, the Resync-prep option is used. Resync-prep creates a mirror


policy for the replication policy on the primary cluster and secondary cluster.

Resync-prep prepares the source cluster to receive the changes made to the data
on the target cluster.

The mirror policy is placed under Data Protection > SyncIQ > Local Targets on the
primary cluster. On the secondary cluster, the mirror policy is placed under Data
Protection > SyncIQ > Policies.
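A minimal CLI sketch of the failback preparation, assuming the illustrative policy
name finance_dr (run on the original source cluster; confirm the exact subcommand
with isi sync recovery --help on your release):

isi sync recovery resync-prep finance_dr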


Failover Revert

A failover revert undoes a failover job in process207. Use revert before writes
occur208 on the target.

The graphic shows a failover revert: the failover job is stopped, replication from the
source to the target resumes, and changes that were made on the target are not
preserved.

207 Failover revert stops the failover job and restores the cluster to a sync-ready
state. Failover revert enables replication to the target cluster to continue without
performing a failback.

208Use revert if the primary cluster once again becomes available before any writes
happen to the target. A temporary communications outage or if doing a failover test
scenario are typical use cases for a revert.


SyncIQ Replication Policies

The graphic summarizes SyncIQ: a policy governs replication between the source
and target and defines what data is replicated, the source and destination, and how
often; SyncIQ jobs move the data.

SyncIQ policies209 govern data replication.

A SyncIQ policy specifies the clusters210 that are replicating.

SyncIQ jobs do the work211.

209You create and start replication policies on the primary cluster. A policy
specifies what data is replicated, where the data is replicated to, and how often the
data is replicated.

210The primary cluster holds the source root directory, and the secondary cluster
holds the target directory. There are some management capabilities for the policy
on both the primary and secondary clusters, though most of the options are on the
primary.

211SyncIQ jobs are the operations that do the work of moving the data from one
PowerScale cluster to another. SyncIQ generates these jobs according to
replication policies.
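As a minimal sketch, a policy's job can be started manually and monitored from the
CLI (the policy name finance_dr is illustrative):

isi sync jobs start finance_dr
isi sync jobs list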


Creating the SyncIQ Policy

The panels describe the fields for creating the SyncIQ policy. Refer to the student
guide for more information.

Settings

Creating a SyncIQ policy is done on the Data protection > SyncIQ > Policies page
or using the isi sync policies create command.
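A minimal CLI sketch, assuming a synchronize policy named finance_dr that
replicates /ifs/finance to /ifs/finance-dr on a target reachable at 192.168.0.100 (all
names, paths, addresses, and the schedule string are illustrative):

isi sync policies create finance_dr sync /ifs/finance 192.168.0.100 /ifs/finance-dr --schedule "every day at 11:00 PM"
isi sync policies list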

The graphic shows the SyncIQ policy Settings fields: a unique policy name, the
choice between a one-time copy and an incrementally updated synchronization, and
the run job options (manually, on a schedule, whenever the source is modified, or
whenever a user-generated snapshot of the source directory is taken).

Source Cluster - Directories

The Source root directory is the SyncIQ domain.


The graphic shows the Source cluster Directories fields: the SyncIQ domain root (do
not use /ifs as the root), Included directories (replicates only the listed paths and
ignores unlisted paths - use with caution), Excluded directories (replicates all paths
except those listed), and File matching criteria filters for granular control over the
data replicated (* indicates filters available for Copy policies).

Target Cluster

The target cluster identification is required for each policy.

The graphic shows the Target cluster fields: the target host (a SmartConnect IP
address or FQDN, or localhost for same-cluster replication), the option to restrict
target nodes, the SyncIQ domain root path on the target, and the target snapshot
options (the default snapshot alias name is modifiable, and setting an expiration is
recommended).

Advanced

The final segment of the policy creation is the advanced fields.


The graphic shows the Advanced fields: policy priority, the log level (data details are
written to /var/log/isi_migrate.log), an option to perform a checksum on each file data
packet, accelerated failback preparation (runs DomainMark in advance), report
retention (reports are automatically deleted after expiration), recording of deletions
on synchronization (applicable to synchronization policies only), and Deep copy for
CloudPools (Deny, Allow to let the policy determine if a deep copy is needed, or
Force a deep copy; primary and secondary must be OneFS 8.0 or higher).

Settings: In the Settings section, assign a unique name to the policy. Optionally you
can add a description of the policy. The Enable this policy box is checked by
default. If you cleared the box, it would disable the policy and stop the policy from
running. Next designate whether a Copy policy or a Synchronize policy. The
replication policy can be started using one of four different run job options:
Manually, On a Schedule, Whenever the source is modified, or Whenever a
snapshot of the source directory is taken.

Source cluster directories: In the Source Cluster criteria, the Source root directory
is the SyncIQ domain. The path has the data that you want to protect by replicating
it to the target directory on the secondary cluster. Unless otherwise filtered,
everything in the directory structure from the source root directory and below
replicates to the target directory on the secondary cluster.

Includes and excludes: The Included directories field permits adding one or more
directory paths below the root to include in the replication. Once an include path is
listed that means that only paths listed in the include path replicate to the target.
Without include paths all directories below the root are included. The Excluded
directories field lists directories below the root you want explicitly excluded from the
replication process. You cannot fail back replication policies that specify includes or
exclude settings. The DomainMark job does not work for policies with subdirectories
mentioned in Include or Exclude. Using includes or excludes for directory paths
does not affect performance.

File matching criteria: The File matching criteria enables the creation of one or
more rules to filter which files do and do not get replicated. When creating multiple
rules, connect them together with Boolean AND or OR statements. When adding a
new filter rule, click either the Add an “And” condition or Add an “Or” condition link.
File matching criteria state that if a file matches the rules, it is replicated; if the file
does not match the rules, it is not replicated.

Target: Snapshots are used on the target directory to retain one or more consistent
recover points for the replication data. You can specify if and how these snapshots
generate. To retain the snapshots SyncIQ takes, select Enable capture of
snapshots on the target cluster. SyncIQ always retains one snapshot of the most
recently replicated delta set on the secondary cluster to facilitate failover,
regardless of this setting. Enabling capture snapshots retains snapshots beyond
the time period that is needed for SyncIQ. The snapshots provide more recover
points on the secondary cluster.

Advanced: The Priority field in the Advanced settings section enables policies to be
prioritized. If more than 50 concurrent SyncIQ policies are running at a time,
policies with a higher priority take precedence over normal policies. If the SyncIQ
replication is intended for failover and failback disaster recovery scenarios,
selecting Prepare policy for accelerated failback performance prepares the
DomainMark for the failback performance. The original source SyncIQ domain
requires a DomainMark. Running a DomainMark during the failback process can
take a long time to complete. You can retain SyncIQ job reports for a specified
time. With an increased number of SyncIQ jobs in OneFS 8.0, the report retention
period could be an important consideration. If tracking file and directory deletions
that are performed during synchronization on the target, you can select to Record
deletions on synchronization.

Deep copy: The Deep copy for CloudPools setting applies to those policies that
have files in a CloudPools target. Deny is the default. Deny enables only stub file
replication. The source and target clusters must be at least OneFS 8.0 to support
Deny. Allow lets the SyncIQ policy determine if a deep copy should be performed.
Force automatically enforces a deep copy for all CloudPools data that are
contained within the SyncIQ domain. Allow or Force are required for target clusters
that are not CloudPools aware.


Copy vs Synchronize Policies

A SyncIQ policy can copy or synchronize source data to meet organizational goals.
When creating a SyncIQ policy, choose a replication type of either sync 212 or
copy213.

Copy Policy
• Goal - retain deleted data.
• Makes a one time full copy of the source directory to the target directory.
• Runs manually.
• Copy retains deleted source data on target.
• Files that are deleted from source are not deleted from target.
• Not secure file retention - for that, use SmartLock.

Synchronize Policy
• Goal - source cluster protection.
• Makes a one time full copy of the source directory to the target directory.
• Continues to make incremental copies of the changes in the source directory to the target directory.
• Removes deleted source data on target.
• Files that are deleted from source are deleted from target.
• No file deletion protection.

212 If a mirrored copy of the source is the goal, create a sync policy.

213If the goal is to have all source data that is copied and to retain deleted file
copies, then create a copy policy.


Tip: You can always license SnapshotIQ on the target cluster and
retain historic SyncIQ associated snapshots to aid in file deletion
and change protection.

SyncIQ Configuration Video

The video details a basic SyncIQ use case, configuring replication between two
clusters. See the student guide for a transcript of the video.


Link:
https://edutube.emc.com/Player.aspx?vno=6cyyA4XvBqkyHJwXs6ltdg==&autoplay
=true


Challenge

Lab Assignment: Configure a SyncIQ policy.


SmartLock

Scenario

IT Manager: I need a directory that has WORM protection. It does not


need to follow SEC 17a-4 rules. How can you set this up?

Your Challenge: The IT manager wants you to describe SmartLock, the


types of SmartLock operations, and configure SmartLock.

SmartLock Overview

SmartLock is a licensed software application that enables cost-effective and


efficient protection against accidental, premature, or malicious deletion or
modification of data.
• WORM


• SyncIQ integration214
• OneFS data services integration215

SmartLock Concepts

Before configuring SmartLock on a cluster, you must familiarize yourself with a few
concepts to fully understand the SmartLock requirements and capabilities.

• Retention Period
• Compliance
• WORM

SmartLock Operating Modes

There are two SmartLock operation modes available to the cluster: SmartLock
compliance mode216 and SmartLock enterprise mode217.

Before creating SmartLock directories, you must activate a SmartLock license on


the cluster.

The table compares the Compliance and Enterprise operating modes:

214SmartLock integrates with SyncIQ to provide failover capabilities and retention


on the SyncIQ source and target.

215SmartLock seamlessly integrates with OneFS core capabilities and add-on


software for snapshots, replication, provisioning, backup and restore, virtual
environments and other key functions.

216
You can create compliance directories only if the cluster has been upgraded to
SmartLock compliance mode.

217 SmartLock enterprise mode is the default SmartLock operation mode.


Compliance
• Only use if SEC 17a-4 must be followed.
• Configured during initial cluster install.
• Root is disabled - must use the compadmin account.
• Admin tasks use the sudo command.
• References the non-changeable Compliance Mode clock.
• No option for privilege deletes.

Enterprise
• Does not restrict the cluster to follow SEC 17a-4 rules.
• Data is not modified until retention dates have passed.
• References the system clock.
• *Privilege deletes can be enabled.

* If you own a file and have the ISI_PRIV_IFS_WORM_DELETE privilege or are
logged in through the root user account, you can delete the file before the retention
period passes by using the privileged delete feature. The privileged delete feature is
not available for compliance directories.

SmartLock Directory Types

1: OneFS supports standard non-WORM directories on the same cluster with


SmartLock directories.


2: Enterprise SmartLock directories are data retention directories that do not meet
SEC regulatory compliance requirements. Enterprise directories are the most
commonly used directories in a SmartLock configuration. Enterprise SmartLock
directories enable administrators or RBAC enabled users the ability to delete files,
which are known as privileged deletes. You can enable or turn on, temporarily
disable or turn off, or permanently disable privileged deletes. The Enterprise
directory may be fully populated with data or empty when creating or modifying.

3: Compliance SmartLock directories are data retention directories that meet SEC
regulatory compliance requirements. Set up the cluster in Compliance mode to
support Compliance SmartLock directories.

When using SmartLock, there are two types of directories: enterprise and
compliance. A third type of directory is a standard or non-WORM218 directory.

You can upgrade219 an empty Enterprise SmartLock directory to a Compliance


SmartLock directory.

If using the compliance clock, you must copy data into the Compliance SmartLock
directory structure before committing the data to a WORM state.

SmartLock Configuration

In this use case the administrator wants to create a WORM directory where files
are locked down for a month. Once moved into the folder, the files are committed to
WORM.

Create a WORM domain from the WebUI File system > SmartLock page by selecting
Create domain, or use the CLI isi worm domains create command.

218
OneFS supports standard non-WORM directories on the same cluster with
SmartLock directories.

219When you upgrade, privileged deletes are disabled permanently and cannot be
changed back.


The numbered callouts describe the SmartLock domain settings:

1: Setting to "On" enables the root user to delete files that are currently committed
to a WORM state.

2: Setting the SmartLock domain.

3: The default retention period is assigned when committing a file to a WORM state
without specifying a day to release the file from the WORM state.

4: The minimum retention period ensures that files are retained in a WORM state
for at least the specified period of time. The maximum retention period ensures that
files are not retained in a WORM state for more than the specified period of time.

5: After a specified period, a file that has not been modified is committed to a
WORM state.

6: Files committed to a WORM state are not released from a WORM state until
after the specified date, regardless of the retention period.

SmartLock CLI Example

Use case:
• The administrator requires a WORM directory where files are in a WORM state
for at least 30 days and are removed from the WORM state after 60 days.


• The default retention is 60 days.


• Set minimum and maximum retention dates.

CLI:

# isi worm domains create /ifs/finance/freeze_file -d use_max -m 30D -x 60D --mkdir
o -d use_max uses the maximum retention as the default retention
o --mkdir creates the directory since it does not exist
o Duration syntax is in the format YMWDhms
Use the isi worm domains view command to verify the settings.

Committing Files to WORM

For a file to have a file retention date applied, and set to a read-only state, you
must commit the file to WORM.

Until the files are committed to WORM, files that are in a SmartLock directory act
as standard files that you can move, modify, or delete.

You can commit files manually or by using autocommit.

Manual Commit
• First set the retention date on the file, then commit the file to WORM.
• Commit files to a WORM state using Windows controls or UNIX commands.
  Example: # chmod ugo-w /ifs/finance/worm/JulyPayroll.xls

Autocommit Period
• Set per SmartLock domain.
• Sets a time period from when the file was last modified in a directory.
• After the time period expires, the file is automatically committed to WORM.
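As a minimal sketch of a manual commit from a UNIX client, assuming the retention
date is specified by setting the file's access time (the path and date are illustrative;
verify the expected date format for your client OS):

Set the retention date by setting the access time to 1 January 2025:
touch -at 202501010000 /ifs/finance/worm/JulyPayroll.xls

Commit the file to a WORM state by removing write permissions:
chmod ugo-w /ifs/finance/worm/JulyPayroll.xls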


SmartLock Considerations

Listed are areas to consider when discussing SmartLock.


• Retention settings apply to enterprise and compliance - explicit, default,
minimum, and maximum, retention date override.
• The system clock is the standard cluster time clock that is used for non-WORM
directories and Enterprise SmartLock directories.
• The compliance clock is used for Compliance SmartLock directories only. Set it
one time. The clock slowly drifts towards system clock (can drift up to 14 days
per year).
• Use compliance mode clusters only to meet the needs for regulatory
requirements.
• Root user is disabled on Compliance Mode cluster - use compadmin to
manage cluster.
• No auto delete of files – files past retention period must be identified.
• Limited search capability for expired files – individually test each file.
• You can use the isi worm files view command to verify the retention
status for any file.
• Do not use rm -rf. The r option deletes all files and directories recursively, and
the f option avoids prompting before deleting.
• In OneFS versions later than OneFS 8.0.1, SyncIQ failback is supported on
SmartLock directories.

Challenge

Lab Assignment: Configure WORM on a directory.

Monitoring Tools


PowerScale HealthCheck

Scenario

IT Manager: To understand the health of the cluster, I want you to run
periodic checks on different OneFS services. It might be a good idea to
create periodic health reports for our weekly meetings.

Your Challenge: The IT manager has tasked you to create a


HealthCheck evaluation and schedule.


HealthCheck Overview

The WebUI Cluster management > HealthCheck page.

The OneFS HealthCheck tool is a service that helps evaluate the cluster health
status and provides alerts to potential issues.

You can use HealthCheck to verify the cluster configuration and operation,
proactively manage risk, reduce support cycles and resolution times, and improve
uptime.

CLI command: isi healthcheck

CLI example to view the checklist items: isi healthcheck checklists list


Checklists and Checklist Items

For the CLI equivalent output, use the "isi healthcheck checklists view
cluster_capacity" command.

The graphic shows the checklist items for the cluster_capacity checklist. The
HealthCheck terms and their definitions are:

• Checklist - a list of one or more items to evaluate


• Checklist item - an evaluated article such as node capacity

Checklist Item Parameters

You can use the CLI to view the parameters of a checklist item. The example
shows viewing the node_capacity item parameters.


The node_capacity item is an item in the cluster_capacity check.

Running a HealthCheck

Running an evaluation on the cluster_capacity checklist.

By default, a HealthCheck evaluation runs once a day at 11:00 AM. You can run a
HealthCheck using the WebUI.

The example shows selecting the Run option for the cluster_capacity checklist. The
HealthCheck table shows the status of the checklist.

CLI example of an evaluation:


isi healthcheck evaluation run cluster_capacity


HealthCheck Schedule

The WebUI HealthCheck scheduler.

You can manage the HealthCheck schedules of the checklists. By default, the
basic checklist is scheduled.

CLI example of creating a schedule called "capacity" for the cluster_capacity


checklist:
isi healthcheck schedules create capacity "Every day at 10
PM" cluster_capacity


Viewing an Evaluation

The graphic shows viewing an evaluation with failures from the WebUI HealthChecks
tab.

You can view the evaluation from the HealthChecks tab or the Evaluations tab. For
a failed evaluation, the file will show the checklist items that failed.

CLI example of viewing a failed evaluation:


isi healthcheck evaluation view basic20200427T0400
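As a minimal sketch, and assuming a list subcommand parallel to view (confirm with
isi healthcheck evaluation --help), completed evaluations can be listed and then
inspected by ID:

isi healthcheck evaluation list
isi healthcheck evaluation view basic20200427T0400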


HealthCheck Resources

Link to the Info Hub

Challenge

Lab Assignment: Login to the cluster and create a HealthCheck


schedule and run a HealthCheck evaluation.


InsightIQ

Scenario

IT Manager: We have InsightIQ installed in the lab. I want you to


explore the application for monitoring purposes.

Your Challenge: The IT manager wants you to describe the InsightIQ


functions and configure InsightIQ.

InsightIQ Overview

The graphic shows the InsightIQ components: the isi_stat_d service on the cluster
collects statistics, the InsightIQ host retrieves the data over HTTP, clients access
InsightIQ over HTTP, and the collected data is kept in the InsightIQ datastore, with
File System Analytics (FSA) data kept in the FSA datastore.

InsightIQ focuses on PowerScale data and performance. Listed are key benefits for
using InsightIQ. Refer to the student guide for more information.
• Determine whether a storage cluster is performing optimally.
• Compare changes in performance across multiple metrics, such as CPU usage,
network traffic, protocol operations, and client activity.
• Correlate critical storage cluster events with performance changes.
• Determine the effect of workflows, software, and systems on storage cluster
performance over time.


• View and compare properties of the data on the file system.


• Identify users who are using the most system resources and identify their
activity.

InsightIQ is available for no charge and provides advanced analytics to optimize


applications, correlate workflow and network events. It provides tools to monitor
and analyze cluster performance and file systems. Cluster monitoring includes
performance, capacity, activity, trending, and analysis. InsightIQ runs on separate
hardware from the clusters that it monitors, and provides a graphical output for
trend observation and analysis. It does not take cluster resources beyond the data
collection process. InsightIQ retains a configurable amount of historic information
about the statistics it collects. To prevent collection of a large backlog of data,
InsightIQ retains datasets to provide trending information over a year, but these
settings are configurable.

InsightIQ has a straightforward layout of independent components. Inside the


PowerScale cluster, the isi_stat_d generates and collects monitoring and
statistical data. The isi_api_d presents the data, which also handles PAPI calls,
over HTTP. The InsightIQ datastore can be local to the host or external using an
NFS mount from the PowerScale cluster, or any NFS-mounted server. The
datastore must have at least 70 GB of free disk space. File System Analytics (FSA)
data is kept in a database on the cluster. InsightIQ accesses the cluster through
PAPI rather than an NFS mount.

InsightIQ is accessed through any modern web browser. If loading InsightIQ on a


Red Hat or CentOS Linux system, Dell Technologies provides it in the form of an
RPM package.

InsightIQ Dashboard

The DASHBOARD provides an aggregated cluster overview and a cluster-by-


cluster overview.


The InsightIQ dashboard shows three monitored clusters, with an aggregated view,
key metrics, cluster health, and a cluster-by-cluster breakout.

You can modify the view to represent any time period where InsightIQ has
collected data. Also, breakouts and filters can be applied to the data. In the
Aggregated Cluster Overview section, you can view the status of all monitored
clusters as a whole. There is a list of all the clusters and nodes that are monitored.
Total capacity, data usage, and remaining capacity are shown. Overall health of the
clusters is displayed. There are graphical and numeral indicators for connected
clients, active clients, network throughput, file system throughput, and average
CPU usage. Depending on the chart type, preset filters enable you to view specific
data. For example, In/Out displays data by inbound traffic compared with outbound
traffic.

You can also view data by file access protocol, individual node, disk, network
interface, and individual file or directory name. If displaying the data by the client
only, the most active clients are represented in the displayed data. Displaying data
by event can include an individual file system event, such as read, write, or lookup.
Filtering by operation class displays data by the type of operation being performed.


Capacity Analysis

The capacity analysis pie chart is an estimate of usable capacity that is based on the existing ratio of user data to overhead220.

Click the image to enlarge.

220 There is an assumption that data usage factors remain constant as usage grows. If a customer uses the cluster for many small files and then adds some large files, the result is not precisely what the system predicts.


Default Reports

- Performance reports: cluster activity and capacity; determine cluster performance; investigate issues.
- File system reports: data about files; identify types of data and storage location; uses File System Analytics.

Click the image to enlarge.

You can monitor clusters through customizable reports that display detailed cluster
data over specific periods of time.

• Performance reports
• File system reports
• Live reporting

Capacity Reporting and Forecasting

Click the image to enlarge.


You can drill down to file system reporting to get a capacity reporting interface that
displays more detail about usage, overhead and anticipated capacity.

• Get usage profile


• Forecasting

The graphic shows the Capacity Forecast, displaying the amount of data that can be added to the cluster before the cluster reaches capacity.

The administrator can select cluster information and use that as a typical usage
profile to estimate when the cluster reaches 90% full. The information is useful for
planning node/cluster expansion ahead of time to avoid delays around procurement
and order fulfillment.

The Plot data shows the granularity of the reporting available. The Forecast data
shows the breakout of information that is shown in the forecast chart. Depending
on the frequency and amount of variation, outliers can have a major impact on the
accuracy of the forecast usage data.

Create Performance Report

Create custom live performance reports by clicking Performance Reporting > Create a New
Performance Report. Click the image to enlarge.

There are three types of reports on the Create a New Performance Report page.

• Live performance report from a template.


• Live performance report that is based on a saved performance report.


• Live performance report that is based on one of the template reports.

Click for configuration steps221.

221In the Create a New Performance Report area, in the Performance Report
Name field, type a name for the live performance report. Select the Live
Performance Reporting checkbox. In the Select the Data You Want to See area,
specify the performance modules that you want to view in the report. You can add
a performance module or modify an existing one. Repeat this step for each
performance module that you want to include. Save the report.


File System Analytics

Click the image to enlarge.

FSA provides detailed information about files and directories on a PowerScale


cluster.

InsightIQ collects the FSA data from the cluster for display to the administrator.


• FSA result sets location - /ifs/.ifsvar/modules/fsa.222


• Result sets routinely deleted to save storage.
• You can set the maximum number of result sets to retain.
• FSAnalyze job runs daily.223

Enable FSA

Monitored Clusters page, Settings > Monitored Clusters. Click the image to enlarge.

Before you can view and analyze data usage and properties through InsightIQ, you
must enable the FSA feature.

222 Unlike InsightIQ datasets, which are stored in the InsightIQ datastore, FSA
result sets are stored on the monitored cluster in the /ifs/.ifsvar/modules/fsa
directory.

223The job collects information across the cluster, such as the number of files per
location or path, the file sizes, and the directory activity tracking.


Important: FSAnalyze runs by default in snapshot-based mode (OneFS 8.0 and later). The snapshots can consume large amounts of cluster capacity.

To enable FSA, open the Monitored Clusters page by clicking Settings > Monitored Clusters. In the Actions column for the cluster on which you want to enable or disable FSA, click Configure. The Configuration page displays. Click the Enable FSA tab. To enable the FSA job, select Generate FSA reports on the monitored cluster. To enable viewing FSA reports in InsightIQ, select View FSA reports in InsightIQ.

If there are long time periods between the FSAnalyze job runs, the snapshot can grow very large, possibly consuming much of the cluster's space. To avoid large snapshots, you can disable the use of snapshots for FSAnalyze. Disabling snapshot use means that the jobs may take longer to run.
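As a minimal illustration (assuming default job settings, and noting that the job walks the file system, so a quiet period is preferable), the FSAnalyze job can also be started manually from the CLI and its progress checked:

isi job jobs start FSAnalyze
isi job jobs list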

Considerations

Listed are areas to consider for InsightIQ:


• InsightIQ 4.x supports OneFS 7.0 and later.
• By default, web browsers connect to InsightIQ over HTTPS or HTTP, using port 443 for HTTPS and port 80 for HTTP.
• Reverting to a snapshot or modifying the InsightIQ datastore can cause datastore corruption.
• The maximum number of clusters that you can simultaneously monitor is based on the system resources available to the Linux computer or virtual machine.
• It is recommended that you monitor no more than 8 storage clusters or 150 nodes with a single instance of InsightIQ.
• In large clusters (16+ nodes) with nodes that have limited CPU, such as the A200, the CPU usage of the FSAnalyze job can be significant.


Challenge

Lab Assignment: Now go to the lab and use InsightIQ to get a


performance baseline.


DataIQ v1

Scenario

IT Manager: How is monitoring and analyzing a cluster's performance and file systems performed?

Your Challenge: The IT manager has asked you to explain DataIQ and
its available monitoring capabilities.

DataIQ Overview

The features DataIQ provides are discussed below.


1: DataIQ eliminates the problem of data silos by providing a holistic view into
heterogeneous storage platforms on-premises and in the cloud. A single pane of
glass view gives users a file-centric insight into data and enables intuitive
navigation.

2: DataIQ's optimized near real-time scan and high-speed file indexing deliver
immediate project and user information. Powerful search capabilities across
heterogeneous storage can locate data in seconds, no matter where it resides.
High-speed search and indexing scans and organizes files in "look aside" mode.


3: DataIQ can ‘tag’ an attribute and use that tag to query millions of files across any
storage system. Tags enable business users, and IT, to view data in a true
business context. Tags give organizations the ability to see their data in the right
context, and to optimize their storage environment costs.

4: DataIQ enables data mobility with bi-directional movement between file and
object storage. The use of self-service archive capabilities to move files to the most
appropriate storage tier, such as archive or the cloud, empowers business owners.
Self-service enables content owners to move data from high-performance file
storage to an object archive.

5: With DataIQ, IT and storage admins gain understanding of their environment to


efficiently manage storage costs. They can report on the true cost of dormant and
redundant data and generate chargeback/showback views or cost recovery reports.
IT can also report on storage usage by project, and determine what files must be
cleaned up (such as duplicates or dark data).

6: DataIQ quickly scans file and object storage of all types. It can classify data
according to customer specification and provide instant rollup information. For
example, total tree size, average age of subtree data, 'last modified' date at any
point of folder structure. DataIQ generates fast and granular reports with business-
specific views and metrics, enabling rapid issue isolation. DataIQ integrates with IT
infrastructures to provide rights for AD and LDAP for users and groups, as well as
APIs to enhance and extract business data. DataIQ plug-ins enable users to gain
additional insights. Plug-ins extend the GUI and launch internal scripts such as
Data Mover, Previewer, Audited Delete, Send to QA, and other custom scripts.

7: DataIQ monitors cluster health independent of the cluster status. DataIQ


monitors multiple clusters with massive node counts. Alerts can also be configured and received based on limits and issues.

DataIQ Implementation

DataIQ employs a traditional client/server model.

The DataIQ server scans the managed storage, saves the results in an index, and
provides access to the index.

Access is available from one or more GUI clients, CLI clients, and through the API
for application integration.


Graphic: the DataIQ server, accessed by Windows, Linux, and Mac clients.

DataIQ Landing Page

After logging in to the DataIQ WebUI, the landing page is the Data Management
page.

Data Management and Settings are the key functional pages.

The example shows the landing page - Data Management, with a control to return to the landing page.

Settings - Pages

Use the left and right arrows to view the Settings pages.


Local settings

The examples show the two themes. Click to enlarge image.

The Local settings page allows you to personalize the theme of the DataIQ WebUI.

• Client maps224
• Viewable files and folders225

General management

You can configure email alerts and SRS on the General management page.

If a volume has the minimum free space threshold configured, an email is sent
when the threshold is triggered.

224 Client maps enable you to map the DataIQ path to the path that the client sees.

225 You can view or hide the hidden-type files and folders. You can also set how the
files and folders are viewed in the tables and lists.


Click image to enlarge.

Access and permissions

Click image to enlarge.

The Access and permissions page is where you can configure groups, add roles to
the groups, set authentication providers, and add users.


Data management configuration

Click image to enlarge.

The Data management configuration page has four panels: Volumes, S3 endpoints, Plugins, and Other settings.

The Other settings include file type class and configuration files.

Licensing

Click image to enlarge.

From the Licensing page, you can manage and upload licenses generated from the
Software Licensing Central online portal.


Settings - Data Management Actions

Shown is an overview of the actions a user with the role of data owner can
perform. The actions are performed from the Data management settings page.
Use the left and right arrows to view the panels.

Volumes Panel

Callouts from the Volumes panel: volume defaults (edits apply globally; settings at the volume level have precedence), adding a volume (configure volume type, scan management, and hard link handling), scan groups (volumes added to the scan group adopt the scan group settings; scan group settings have precedence over volume settings), and editing a volume (change the scan management or delete the volume).

Click to enlarge.

From the Data management configuration page, Volumes panel, you can set
volume defaults, add and edit volumes, and create scan groups.


S3 Endpoints Panel

Click to enlarge.

From the Settings, Data management configuration page, S3 endpoints panel226, you can add an Amazon S3 instance.

226DataIQ enables you to set up the endpoint as a volume for scanning. To delete
an endpoint, go to the view breakout for the endpoint.


Other settings - File Type Classes


Click to enlarge.

From the Data management settings page, Other settings panel, you can configure
file type classes.227

Other settings - Configuration Files

The example shows the configuration files and a breakout of the clientmap file. Click to enlarge.

The Data management settings page has four configuration files that you can edit. The files are listed in the Other settings panel:
• The Clientmap configuration file allows you to view file paths as they are seen by the user.
• The Data management configuration file allows you to change DataIQ settings.
• The Viewfilter configuration file allows you to restrict the view of folders by group.
• The Autotagging configuration file allows you to set up and define tags.

227File type classes allow you to scan the volumes by a file type class. For
example, you can make a class called images and then add .jpeg, .png, and .gif
extensions to the class.


Volume Management Options

Scroll through the carousel to view each of the volume management areas. You
can double click the images to enlarge.

Volume defaults

Callouts from the Volume defaults panel:
• Set a capacity threshold; when triggered, it flags the volume.
• Set a dollar value on the volume for reporting purposes (only $/month).
• Prevents the scan from descending into folders and indexing files that match the pattern.
• Provides more accurate reports on volumes with hard links.

The volume defaults are applied to new volumes and volumes without configured
settings.

The settings on volumes that are configured take precedence over the default
values.

Add Volume

Callouts from the Add new volume window:
• Typically unused; most use cases use VFS.
• Uses a sleep period before issuing another file system command.
• Does not need to correlate to the mountpoint.
• Scan threads used for the volume; a high number can impact performance.
• Mounted path on the DataIQ server.
• Set a value on the volume for reporting purposes.
• Set a capacity threshold; when triggered, it flags the volume.
• If a member of a scan group, scan configuration is done at the scan group level.
• Typically employed when hard links are used extensively.
• Used to prevent an endless descent into a broken file system.
• Prevents the scan from descending into folders and indexing files that match the pattern.

The Add new volume window consists of three panels, the general settings, scan
configuration, and advanced settings.


Scan Groups

Field definitions are the same as those discussed in the previous window.

You can create scan groups and add volumes with the same scan, TCO, and
minimum free space trigger to the group.

Settings in the scan group have precedence over the settings on the volume.

Editing Volumes

Callouts: settings discussed in the Add Volume window; the Edit window enables you to delete volumes.

If the volume belongs to a scan group and the scan group settings no longer apply,
you can remove the volume from the scan group and edit the volume settings.

Managing Configuration Files

The configuration files are on the Settings, Data management configuration


page. Consult the DataIQ Admin Guide for an in-depth description of the fields and


settings for each configuration file. Select each page for an overview and use case
for the configurations.

File Type Class

Configuring file type classes enables reporting statistics on classes of file types that are based on the file extensions228.

Enabling File type classes consumes


additional memory and increases CPU
usage during scans.

228For example, a class that is called Video and a class that is called Image are
configured. The IT manager requests a report on the cost of video-type files and
the cost of image-type files. You can use the DataIQ Analyze feature to view the
storage consumption and cost of each class.


Clientmap Configuration

Graphic: the clientmap configuration file, showing the format, supported path formats, and example mappings.

Use the clientmap file to map virtual DataIQ paths to valid paths on a client.

Conversion229 works from virtual path to client path and from client path to virtual path.

By default, no client map is selected.

229Conversion from virtual paths to client paths occurs when copying paths to the
system clipboard. Conversion from client paths to DataIQ virtual paths occurs when
a client path is entered into a field such as a search field.


Data Management

Graphic: the Data Management configuration file, showing the format, each option's definition, and the current option setting.

The Data Management configuration file230 controls many functional aspects of


DataIQ.

Make changes to the file only when directed by Dell Support.

Administrators can also modify the file directly at


/usr/local/dataiq/etc/clarity.cfg.

230Modifying settings can impact DataIQ functionality. The defaults are typically
used. The file has a description of each setting.


Viewfilter Configuration

Graphic: the Viewfilter configuration file - use the Stanford analyzer to verify and validate REs; the format uses regular expressions; example filters are shown.

The Viewfilter configuration file231 enables you to create rules to restrict groups
from viewing folders.

You cannot filter the DataIQ Administrators group.

The configuration file is read:


• When DataIQ starts.

231 Viewfilter uses regular expressions (RE). If a volume or folder matches the RE
for the user's group, then that volume and folder are viewable for the user. If a user
is a member of more than one group, the user is only restricted from folders that
are restricted in all their groups.


• When the Group Config or User Config dialogs are opened.


• When the folder tree is updated.

Autotagging Configuration

Graphic: the Autotagging configuration file - the format uses REs; scroll down for notes and example tags.

Use auto-tagging232 to tag and track items. A use case is applying a tag to project
paths for use when determining a work order for a customer.

Auto-tagging occurs when a scan is done.

Administrators can also modify the file directly at


/usr/local/dataiq/etc/autotag.cfg.

Data Management Pages

Use the left and right arrows to view the Data Management pages.

232Auto-tagging attaches business context to the data DataIQ collects. Auto-tagging enables you to generate reports within a business context.


Browse

Callouts: configure limits and actions on the selected item; flags for multiple selections; manage table columns.

Click image to enlarge.

The main functions of the Browse page are searching233, a panel that shows the
volumes in a tree view, a directory breakdown panel, a table that shows the files
within the selected folder, and an article details panel.

Items flagged in the table are reflected in the other data management components.

233The search bar uses characters similar to Java regular expression (regex) such
as ^ for the beginning of filenames and $ for the ending of filenames.


Browse Details

Callouts: customize the table layout; limit the number of bytes in the folder (applies to the entire directory structure; blank is unlimited); initiate a scan on the selected volume or path.

Shown are configurable areas of the Browse page.

DataIQ performs regular scans234 on volumes.

234However, if data changes, updated files may not appear in file searches. Go to
the Actions panel and perform a scan on a volume or path to make sure you are
getting the latest information.


Analyze

Callouts: hide or show options; analyze based on tags; hide or show the legend.

The Analyze page235 allows you to analyze volumes from a business context.

Flagged items

The Flagged items page lists the items the user marks as flagged.

235 The page enables you to view multi-dimensional, project-oriented data by cost and size.


Tag management

Business rules configuration, also called auto-tagging, is used to tag tracked items
during a scan.

The Tag management page shows the results of a scan when auto-tagging is configured.

Jobs

The Jobs page shows a table of the jobs and their status as well as a details panel
for the selected job.


Logs - Scan

The Logs page has two tabs, the Scan logs and the Error logs. The Scan logs table shows the generated logs from completed scan jobs.

DataIQ has two types of scans: Full236 and optimized237.

Logs - Error

The Error logs table shows the errors generated.

236A full scan is done the first time a storage file system is indexed. DataIQ walks the entire file system, indexing every folder. This initial baseline scan ensures that everything about the file system is known.

237An optimized scan is an incremental scan that only scans the folders where
there have been changes since the last full scan.


Auto-Tagging Example

The installer does not create the autotagging configuration file, but you can use the
sample file /usr/local/dataiq/etc/autotag.cfg.sample as a starting
point. Auto-tagging generally occurs when DataIQ scans a file system.

1. Backup File Content

Click the image to enlarge.

First make a copy of the existing Autotagging configuration file as a backup. The
graphic shows the location of the Autotagging configuration file on the Settings,
Data management configuration page.


2. Reference Path

Click the image to enlarge.

Enter the path examples on their own line, preceded by a comment character (#).

3. Auto-Tagging Rule

Click the image to enlarge.


Enter the corresponding rule below each reference path. Having the commented
path makes it easier to understand the rule later and provides a reference for other
administrators.

Tags are automatically removed if the rule that created them no longer matches and the tag has not been altered.

4. Simulate

Click the image to enlarge.

Once the auto-tagging rules are configured, click Simulate and report, and then view the results. The results panel lists each rule and the number of times it matched. If the results look reasonable, click Save and run the new rules.

Simulate and report also indicates rules that are invalid.


5. Analyze

Click the image to enlarge.

Go to the Data Management page and watch the auto-tag job details to see when it completes. View the counts in the details window. Go to the Analyze page to verify the generated tag sets and view the report.

Tip: Reference the DataIQ Administration Guide for in-depth


coverage of auto-tagging.


Regular Expression Resources

The graphic is an example of using the Stanford Analyzer.

• DataIQ Administration Guide


• Regular expression verification checks and troubleshooting:

• Put the RE from an existing rule or rule fragment in the Stanford Analyzer to
understand it (select Java). Modify the RE in the analyzer until it meets your
needs.
• Test in an RE tester (search for "Java regular expression tester"), and then
put into DataIQ and run in the simulator.

Plug-In Overview

Plug-ins extend DataIQ capabilities.

Plugins provide functions such as data transfer and audited delete to enable
administrators to manage data resources across storage platforms such as
PowerScale and ECS.


The plug-ins DataIQ supports are listed. Click each plug-in for a brief description.
• Data Mover
• Audited Deletes
• Duplicate Finder
• Previewer

Tip: See the DataIQ Administrative Guide for details on plug-in


installation and settings.

Plug-in Examples

The graphics show WebUI excerpts of the plug-ins that are installed on a DataIQ
instance.

Callouts: installed and enabled plug-ins on the Settings, Data management configuration page; the Previewer plug-in seen in the Details panel, Metadata tab; the Data Mover plug-in in the navigation panel on the Data Management page.

Double-click image to enlarge.


Challenge

Lab Assignment: Go to the lab and add the PowerScale cluster to the
DataIQ application.


isi statistics

Scenario

IT Manager: You have a good understanding of HealthChecks, DataIQ,


and InsightIQ, but now I want you to know what CLI commands are
available for monitoring.

Your Challenge: The IT manager wants you to discuss the different


monitoring commands, explain the isi statistics functions, and
describe the difference between isi statistics and InsightIQ.

Statistics and Status Commands

The three main commands that enable you to view the cluster from the command
line are isi status, isi devices, and isi statistics.

isi statistics

The isi statistics command has approximately 1,500 combinations of data


you can display as statistical output of cluster operations. The statistics that are
collected are stored in an sqlite3 database that is under the /ifs folder on the
cluster.

The isi statistics command provides protocol, drive, hardware, and node
statistics238.

238Other services such as InsightIQ, the WebUI, and SNMP gather information
using the "isi statistics" command.


The output shows the operations by protocol. The example shows that NFS clients
are connected to node 6 with a 278.5k bytes-per-second input rate.

Output for the general cluster statistics in a top-style display where data is continuously overwritten
in a single table.

isi devices

The isi devices command displays information about devices in the cluster and
changes their status. There are multiple actions available including adding drives
and nodes to the cluster. Use the isi devices command for drive states,
hardware condition, node management, and drive replacement management.
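A hedged example of viewing devices from the CLI (subcommand names follow OneFS 8.x and may differ slightly in other releases):

isi devices drive list
isi devices node list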

isi status

The isi status command displays information about the current status of the
cluster, alerts, and jobs. The example of the isi status output gives a general node
status, performance metrics, critical alerts, and Job Engine status.


The --quiet option omits the alerts and Job Engine status output.

Tip: See the CLI Reference guide for a complete list of the
command options and output definitions.

Basic isi statistics Functions

The isi statistics command dumps all collected stats, and you can run the
"query" subcommand on a specific statistic.

Some of the functions are listed below:

• You can build a custom isi statistics query that is not in the provided subcommands (see the example after this list)
• Cluster and node statistics from kernel counters
• isi_stats_d

• Most data collection


• Works with InsightIQ
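For example, a minimal custom query might first find a key and then read its current value (a sketch only; the set of key names varies by OneFS release):

isi statistics list keys | grep uptime
isi statistics query current --keys node.uptime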


InsightIQ vs isi statistics

The table lists differences between isi statistics and InsightIQ.

In situations where InsightIQ is unavailable or malfunctioning, isi statistics is


a powerful and flexible way of gathering cluster data.

The isi statistics command within a cron job239 gathers raw statistics over a
specified time period.
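For instance, a sketch of a root crontab entry on a node that appends protocol statistics to a CSV file every five minutes (the path and interval are hypothetical, and the output format flag may vary by release):

*/5 * * * * isi statistics protocol --format csv >> /ifs/data/stats/protocol.csv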

InsightIQ                  isi statistics
Not licensed               Not licensed
Graphical output           Produces raw CSV output on demand
Not easily scripted        Easily scripted
Not easily automated       Automate with cron
Use from remote host       Use from any node
Web user interface         Produces data similar to other UNIX utilities, such as top
Fixed interval             Flexible sampling interval

239 A cron job can run on UNIX-based systems to schedule periodic jobs.

Example: Statistics for Drive Activity

The example output shows the isi statistics drive command for the SSD
drives on node 6.
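A command along these lines produces that view (illustrative only; check the CLI Reference Guide for the exact flags in your release):

isi statistics drive --nodes 6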

Some column definitions:


• TimeInQ: Time in queue indicates how long an operation is queued on a drive. It is a key metric for spindle-bound clusters. A time-in-queue value of 10 to 50 milliseconds falls in the yellow zone, and a value of 50 to 100 milliseconds falls in the red zone.
• Queued: Queue depth indicates how many operations are queued on drives. A
queue depth of 5 to 10 is considered heavy queuing.
• Busy: Disk percent busy can be helpful to determine that the drive is 100%
busy, but it does not indicate how much extra work might be in the queue.

Example: Statistics by Most Active Files and Directories

The example shows isi statistics heat, which uses --long to include more columns.

The head -10 option displays the 10 most active (most accessed) files and directories.

The example node 6 output shows the Timestamp in Epoch timestamp format,
Ops as protocol operations, the Event type and Class (getattr is a namespace
read), and LIN for the file or directory associated with the event.
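Putting the pieces together, the command as described here looks like this sketch:

isi statistics heat --long | head -10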


Practical Skills

Combining large sets of collected data with log analysis can help identify long-term
trends and sources of trouble.

1: Sometimes it is not possible to use InsightIQ to troubleshoot as customers may


not allow new software and may have time or facilities constraints.

2: isi statistics can fill the gaps. Skillful use of isi statistics can produce information equivalent to what InsightIQ offers and includes many performance-related options.

3: The isi statistics and isi_stats_d commands can help isolate or identify issues where InsightIQ may not have visibility. Using isi statistics keys can show specific metrics; for example, isi statistics query current --keys node.uptime displays the node uptime.

4: isi_cache_stats is used to examine the state of data that is in cache.


Challenge

Lab Assignment: Now that you know which CLI commands are
available for monitoring, go to the lab and run the isi statistics
command.

Appendix


Course Materials
• Participant Guide
• Instructor laptop
• Projector and Speakers
• Internet access
• Whiteboard and markers


Course Agenda

Day 1 - AM: Course Introduction; NAS, PowerScale, and OneFS; Labs. PM: Foundations for Access; Labs.
Day 2 - AM: Identity Management and Authorization; Labs. PM: Client Access; Labs.
Day 3 - AM: Data Protection and Data Layout; Labs. PM: Storage Pools; Labs.
Day 4 - AM: Data Services. PM: Labs.
Day 5 - AM: Monitoring; Labs. PM: Labs.
Lunch is taken between the AM and PM sessions.

Depending on course pace and student knowledge, module and lab exercise
schedule may be altered


Introductions
• Name
• Company
• Job Role
• Experience
• Expectations


DNS Primer

When discussing Domain Name System, or DNS, on a PowerScale cluster, there are two facets to differentiate: DNS client and DNS server.

DNS is a hierarchical distributed database. The names in a DNS hierarchy form a


tree, which is called the DNS namespace. A set of protocols specific to DNS allows
for name resolution, more specifically, a Fully Qualified Domain Name, or FQDN, to
IP Address resolution.

Click the green "i" buttons to learn more.

1: A FQDN is the DNS name of an object in the DNS hierarchy. A DNS resolver
query must resolve an FQDN to its IP address so that a connection can be made
across the network or the Internet. If a computer cannot resolve a name or FQDN
to an IP address, the computer cannot make a connection, establish a session or
exchange information. An example of an FQDN looks like sales.isilon.xattire.com.

2: A single period (.) represents the root domain, and is the top level of the DNS
architecture.

3: Below the root domain are the top-level domains. Top-level domains represent
companies, educational facilities, nonprofits, and country codes such as *.com,
*.edu, *.org, *.us, *.uk, *.ca, and so on. A name registration authority manages the
top-level domains.

4: The secondary domain represents the unique name of the company or entity,
such as EMC, Isilon, Harvard, MIT.


5: The last record in the tree is the hosts record, which indicates an individual
computer or server.


DNS Host Record: A or AAAA Record

Callouts: NS records as a subdomain or SmartConnect zone; Host (A) record - easier to remember "centos" than 192.168.3.3; Host (A) record - SmartConnect service IP.

The SmartConnect service IP on a PowerScale cluster must be created in DNS as


an address (A) record, also called a host entry.

What is an A record?240

For example, a server that is named centos would have an A record that maps the hostname centos to the IP address assigned to it: centos.dees.lab A 192.168.3.3, where centos is the hostname, dees.lab is the domain name, and centos.dees.lab is the FQDN.

240
An A-record maps the hostname to a specific IP address to which the user
would be sent for each domain or subdomain. It is simple name-to-IP resolution.


The Name Server Record, or NS records, indicate which name servers are
authoritative for the zone or domain.

More about NS records.241

Tip: In an IPv6 environment, use the AAAA record in DNS, and


consult with the network administrator to ensure that you are
representing the IPv6 addresses correctly.

241Companies that want to divide their domain into sub domains use NS records.
Sub domains indicate a delegation of a portion of the domain name to a different
group of name servers. You create NS records to point the name of this delegated
sub domain to different name servers.


DNS Delegation Best Practices

Use one name server record for each SmartConnect zone name or alias. Delegate to address (A) records, not to IP addresses.

You must create an address (A) record in DNS for the SmartConnect service IP.
Delegating to an A record means that if you failover the entire cluster, you can do
so by changing one DNS A record. All other name server delegations can be left
alone. In many enterprises, it is easier to update an A record than a name server
record, because of the perceived complexity of the process.

Delegation recommendation.242

Important: PowerScale does not recommend creating a single delegation for each cluster and then creating the SmartConnect zones as sub records of that delegation. More243.

242 The recommendation is to create one delegation for each SmartConnect zone name or for each SmartConnect zone alias on a cluster. This method permits failover of only a portion of the workflow (one SmartConnect zone) without affecting any other zones. This method is useful for scenarios such as testing disaster recovery failover and moving workflows between data centers.

243Using this method would enable the PowerScale administrators to change,


create, or modify the SmartConnect zones and zone names as needed without
involving a DNS team, but causes failover operations to involve the entire cluster
and affects the entire workflow, not just the affected SmartConnect zone.


SmartConnect Example - Cluster Name Resolution Process

The graphic shows how SmartConnect uses the X-Attire DNS server to provide a
layer of intelligence within the OneFS software application.


1: An NS record that delegates the subdomain isilon.xattire.com to the name server


with a hostname of SIP (sip.xattire.com). The isilon.xattire.com NS sip.xattire.com
states that clients looking to resolve isilon.xattire.com should query the NS
sip.xattire.com.

2: The A record maps the hostname sip.xattire.com to the IP address


192.168.0.100. Clients looking for isilon.xattire.com are forwarded to sip.xattire.com
and sip.xattire.com is found at 192.168.0.100.

3: All clients are configured to make requests from the resident DNS server using a
single DNS hostname. Because all clients reference a single hostname,
isilon.xattire.com, it simplifies the management for large numbers of clients.

4: The resident DNS server forwards the delegated zone lookup request to the
delegated zone server of authority, here the SIP address of the cluster.

5: SmartConnect evaluates the environment and determines which node (single IP


address) the client should connect to, based on the configured policies.

6: SmartConnect then returns this information to the DNS server, which, in turn,
returns it to the client.

7: The client then connects to the appropriate cluster node using the desired protocol.
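As a sketch, the two DNS records described in callouts 1 and 2 might appear in a standard zone file like this (names and addresses taken from the example above):

isilon.xattire.com.    IN    NS    sip.xattire.com.
sip.xattire.com.       IN    A     192.168.0.100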


NFS Connectivity

Click to play media.

Remote Procedure Call (RPC)

NFS relies upon remote procedure call (RPC) for client authentication and port
mapping. RPC is the NFS method that is used for communication between a client
and server over a network. RPC is on Layer 5 of the OSI model. Because RPC
deals with the authentication functions, it serves as gatekeeper to the cluster.

NFS connectivity

The procedure of NFS connectivity is:


• The procedure always starts with a CALL from a client.244


• A server can reject a client CALL for one of two reasons.245
• Portmapper provides the client RPC process with service ports.246
• RPC services cannot run unless they register with portmapper.247

Let us look at the flow of a request by a client. When an RPC service starts up on the cluster, it registers with portmapper. The service tells portmapper what port number it is listening on and what RPC program numbers it is prepared to serve.

244 When the server receives the CALL, it performs the service that is requested
and sends back the REPLY to the client. During a CALL and REPLY, RPC looks for
client credentials, that is, identity and permissions.

245
If the server is not running a compatible version of the RPC protocol, it sends an
RPC_MISMATCH. If the server rejects the identity of the caller, it sends an
AUTH_ERROR.

246It acts as a gatekeeper by mapping RPC ports to IP ports on the cluster so that
the right service is offered.

247Clients calling for an RPC service need two pieces of information: the number of the RPC program they want to call and the IP port number.


HDFS Topic
• Data Lakes and Analytics
• HDFS Overview Video
• OneFS with Hadoop
• OneFS vs. Hadoop
• HDFS Administration
• Best Practices Resources
• Troubleshooting Resources


Swift Topic
• File and Object Storage Differences
• Accounts, Containers, and Objects
• Configuring Isilon Swift Accounts
• Storage URL
• Isilon Swift Considerations and Limitations


Journal Behavior for Node Pairs

When a node boots, it first checks its own vault resources before querying its
paired node. This way if the node can recover its journal from its own resources,
there is no need to query the paired node. But, if the journal is bad, the node can
identify the journal condition from its node state block data, and recovery should be
possible. There is a consequence to the nodes running in pairs. If a node runs
unpaired, it is under-protected.

Graphic: node pairs, where each node has a battery-backed journal and a mirror of its partner node's journal.


Concurrency Examples
The process of striping spreads all write operations from a client248 across the
nodes of a cluster. Each tab illustrates a file that is broken down into chunks, after
which it is striped across disks249 in the cluster along with the FEC.

Concurrency 256 KB File

The graphic illustrates concurrency with a 256 KB file.

Graphic: concurrency at N+1n protection - a 256 KB file is written as two 128 KB chunks plus one 128 KB FEC unit.

248 A client is connected to only one node at a time. However when that client
requests a file from the cluster, the client connected node does not have the entire
file locally on its drives. The client-connected node retrieves and rebuilds the file
using the back-end network.

249 Even though a client is connected to only one node, when that client saves data
to the cluster, the write operation occurs in multiple nodes. The scheme is true for
read operations also.


Concurrency 128 KB File

All files 128 KB or less are mirrored. For a protection strategy of N+1 the 128 KB
file has 2 instances, the original data and one mirrored copy.

Graphic: concurrency at N+1n protection - a 128 KB file and its 128 KB FEC unit. Any file ≤ 128 KB is still FEC calculated, but the result is a copy.

Concurrency 192 KB File

The example shows a file that is not evenly distributed in 128 KB chunks. Blocks in
the chunk that are not used are free for use in the next stripe unit. Unused blocks in
a chunk are not wasted.

Graphic: concurrency at N+1n protection with a 192 KB file on an F200 three-node cluster - one full 128 KB chunk plus a chunk with only 64 KB used. The portion smaller than 128 KB receives mirrored protection, and the 64 KB of unused blocks are not wasted; they are used in the next stripe unit. FEC blocks and mirror blocks are shown.

Concurrency 1 MB with +2d:1n

The example shows +2d:1n protection of a 1 MB file. The file is divided into eight
data stripe units and three FEC units. The data is laid out in two stripes over two
drives per node to achieve the protection.


Graphic: concurrency at N+2d:1n protection - a 1 MB file as 8 x 128 KB chunks. The stripe depth is doubled, and blocks within the same stripe (stripe 1) are written to separate drives on each node.
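To put numbers on the example above: the 1 MB file is written as 8 data stripe units plus 3 FEC units, so the protection overhead is 3 / (8 + 3), roughly 27 percent of the blocks written. By comparison, a mirrored 128 KB file (one data block plus one copy) carries 50 percent overhead.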


Data Lakes and Analytics

A Data Lake is a central data repository that enables organizations to access and
manipulate the data using various clients and protocols. The flexibility keeps IT
from managing and maintaining a separate storage solution (silo) for each type of
data such as SMB, NFS, Hadoop, SQL, and others.

The inclusion of platform-as-a-service, or PaaS, makes building 3rd platform


applications simple and efficient.

Click the i buttons in the graphic for information about ingest and OneFS storage.


1: A Data Lake-based ingest captures a wider range of datatypes than was possible in the past. Data is stored in raw, unprocessed forms to ensure that no
information is lost. Massively parallel processing and in memory technologies
enable data transformation in real time as data is analyzed. Because the Data Lake
has a single, shared repository, more tools can be made available on demand,
enabling data scientists and analysts to find insights. The Data Lake makes it
simple to surface the insights in a consistent way to executives and managers so
that decisions are made quickly.

2: Utilizing Isilon to hold the Hadoop data gives you all of the protection benefits of the OneFS operating system. You can select any of the data protection levels that OneFS offers, giving you both disk and node fault tolerance.


Resource: For more information, go to the PowerScale and Isilon technical documents and videos page.


HDFS Overview Video


The video provides an overview of a typical Hadoop topology and how the
PowerScale fits into a Hadoop solution. See the student guide for a transcript of the
video.

Click to launch video.

URL:
https://edutube.emc.com/Player.aspx?vno=wZCty171ec2RjiMSRZZe9g==&autopla
y=true

Shown is an Isilon cluster with twelve nodes. A key benefit of CloudPools is the ability to interact with multiple cloud vendors. Shown in the graphic are the platforms and vendors that are supported as of OneFS 8.1.1.

CloudPools is an extension of the SmartPools tiering capabilities in the OneFS


operating system. The policy engine seamlessly optimizes data placement that is
transparent to users and applications. Moving the cold archival data to the cloud,
lowers storage cost and optimizes storage resources.


Let us look at an example, each chassis in the cluster represents a tier of storage.
The topmost chassis is targeted for the production high-performance workflow and
may have nodes such as F800s. When data is no longer in high demand,
SmartPools moves the data to the second tier of storage. The example shows the
policy moves data that is not accessed and that is over thirty days old. Data on the
middle tier may be accessed periodically. When files are no longer accessed for
more than 90 days, SmartPools archives the files to the lowest chassis or tier, such as A200 nodes.

The next policy moves the archive data off the cluster and into the cloud when data
is not accessed for more than 180 days. Stub files that are also called SmartLinks
are created. Stub files consume approximately 8 KB space on the Isilon cluster.
Files that are accessed or retrieved from the cloud, or files that are not fully moved
to the cloud, have parts that are cached on the cluster and are part of the stub file.
The storing of CloudPools data and user access to data that is stored in the cloud
is transparent to users.

CloudPools files undergo a compression algorithm and then are broken into their 2 MB cloud data objects, or CDOs, for storage. The CDOs conserve space on the cloud storage resources. Internal performance testing does note a performance penalty for applying compression and for decompressing files on read. Encryption is applied to file data transmitted to the cloud service. Each 128 KB file block is encrypted using AES-256 encryption and then transmitted as an object to the cloud. Internal performance testing notes a small performance penalty for encrypting the data stream.


OneFS with Hadoop


Graphic: Hadoop compute nodes (Ambari, Cloudera, GreenPlum, Hortonworks, IBM, Hadoop, Pivotal) connect to the cluster, which provides the access protocols and serves as NameNode and DataNodes. MapReduce computation stays on Hadoop, while data from all sources resides on the cluster.

To recap the overview, all production data resides on PowerScale. This removes
the task of exporting it from your production applications and importing it as with a
traditional Hadoop environment. The MapReduce continues to run on dedicated
Hadoop compute nodes. PowerScale requires this Hadoop front end to do the data
analysis. PowerScale holds the data so that Hadoop, applications, or clients can
manipulate it.

Resource: For supported platforms, see the Hadoop Distributions


and Products Supported by OneFS web page.


OneFS vs. Hadoop


The table showcases the benefits of OneFS compared with Hadoop. For details, click the underlined functions for more information.

Function - Hadoop vs. OneFS:
• Data protection - Hadoop: 3x mirror, no replication; OneFS: snapshots, clones, SyncIQ.
• Data migration250 - Hadoop: needs a landing zone; OneFS: data on cluster.
• Security251 - Hadoop: Kerberos authentication unsupported; OneFS: AD, LDAP, and Kerberos.
• Deduplication - Hadoop: 3x mirror = 33% efficiency; OneFS: 80% storage efficiency.
• Compliance and security - Hadoop: no native encryption; OneFS: SEDs, ACLs, POSIX, access zones, RBAC, SEC compliant.
• Multi distribution support252 - Hadoop: 1 physical HDFS = 1 distribution of Hadoop; OneFS: co-mingle physical and virtual versions.
• Scaling253 - Hadoop: compute and storage are paired; OneFS: scales compute or storage as needed.

250 Hadoop requires a landing zone to stage data before using tools to ingest data to the Hadoop cluster. PowerScale enables cluster data analysis by Hadoop. Consider the time that it takes to push 100 TB across the WAN and wait for it to migrate before any analysis can start. PowerScale does in-place analytics, so no data moves around the network.

251Hadoop assumes that all members of the domain are trusted. PowerScale supports integrating with AD or LDAP, and gives you the ability to safely segment access.

252Each physical HDFS cluster can only support one distribution of Hadoop.
PowerScale can co-mingle physical and virtual versions of any Apache standards-
based distributions.

253Hadoop pairs the storage with the compute, so adding more space may require
you to pay for more CPU that may go unused. If you need more compute, you end
up with a lot of overhead space. With PowerScale you scale compute as needed or
storage as needed, aligning your costs with your requirements.


HDFS Administration

The graphic shows the WebUI Protocols, Hadoop (HDFS), Settings page, and
the corresponding isi hdfs settings command output.
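The same settings can be read or changed per access zone from the CLI, for example (a hedged sketch; the zone name is hypothetical and the flag and value syntax may differ by release):

isi hdfs settings view --zone=zone1
isi hdfs settings modify --default-block-size=128M --zone=zone1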

Click the i buttons for details in each area.


1: The Default block size determines how the HDFS service returns data upon read requests from a Hadoop compute client. The server-side block size determines how the OneFS HDFS daemon returns data to read requests. Leave the default block size at 128 MB. If the customer runs an older version of HDFS, consider a 64 MB block size. If the block size is set too high, many read/write errors and performance problems occur. Tune on setup.

2: Default checksum type is used for old HDFS workflows. Because OneFS uses forward error correction, checksums for every transaction are not used, as they can cause a performance issue.

3: The HDFS Authentication type is on a per-access zone basis. The


authentication method can be Simple, Kerberos, or both.

4: The Ambari client/server framework is a third-party tool that enables you to


configure, manage, and monitor a Hadoop cluster through a browser-based
interface.


5: Odp version - on updates, the Hortonworks version must match the version that is seen in Ambari. A version conflict is common when a customer upgrades Hortonworks and can cause jobs not to run. Installation also fails when the Odp version does not match.

6: Proxy users for secure impersonation can be created on the Proxy users tab.
For example, create an Apache Oozie proxy user to securely impersonate a user
called HadoopAdmin. Enable the Oozie user to request that the HadoopAdmin user
perform Hadoop jobs. Apache Oozie is an application that can automatically
schedule, manage, and run Hadoop jobs.

7: On the Virtual racks tabs, nodes can be preferred along with an associated
group of Hadoop compute clients to optimize access to HDFS data.

Resource: An HDFS implementation is more involved than


discussed in this topic. See the HDFS Reference Guide for
complete configuration details.


Best Practices Resources

• Visit the Using Hadoop with Isilon - Isilon Info Hub web page for documentation.
• Use the Isilon Hadoop tools to create users and groups in the local provider.


Troubleshooting Resources
There are several guides that are dedicated to troubleshooting an HDFS solution.

Resource: Using Hadoop with OneFS Info Hub.


File and Object Storage Differences


File storage deals with a specific set of users who require shared access to a
specific set of files. Shared access led to file access permissions and locking
mechanisms, enabling users to share and modify files without affecting each
other’s changes. A file system stores data in a hierarchy of directories,
subdirectories, folders, and files. The file system manages the location of the data
within the hierarchy. If you want to access a specific file, you need to know where
to look for the file. Queries to a file system are limited. You can search for a specific
file type such as *.doc, or file names such as serverfile12*.*, but you cannot parse
through the files to find the content contained within them. Determining the context
of a file is also difficult. For example, should you store the file in an archival tier or
will you access the information regularly? It is difficult to determine the content of
the data from the limited metadata provided. A document might contain the minutes
of a weekly team meeting, or contain confidential personal performance evaluation
data.

Object storage combines the data with richly populated metadata to enable
searching for information by file content. Instead of a file that tells you the create or
modified date, file type, and owner, you can have metadata that tells you the
project name, formula results, personnel assigned, location of test and next run
date. The rich metadata of an object store enables applications to run analytics
against the data.

Object storage has a flat hierarchy and stores its data within containers as
individual objects. An object storage platform can store billions of objects within its
containers, and you can access each object with a URL. The URL associated with
a file enables the file to be located within the container. Hence, the path to the
physical location of the file on the disk is not required. Object storage is well suited
for workflows with static file data or cloud storage.


File Storage: hierarchical structure; manages the location of data; limited metadata. Example metadata - File Name: Formula 5Xa; File Type: .doc; Created by: M.Smith; Created on: 9/9/14; File type: Word.

Object Storage: flat hierarchy; data stored in containers; not concerned with data location; rich metadata. Example metadata - Object ID: 98765; Level: xxx; Test date: xxx; Lab facility: Atlanta; Patient trial: xxx; Building: 7; Patent: xxx; Lead Scientist: M. Smith; Approval ID: xxx; Description: xxx; Risk Assessment: xxx.


Accounts, Containers, and Objects


Shown is the Swift logical data layout. Accounts are the administrative control point
for containers and objects, containers organize objects, and objects contain user
data. For users to access objects, they must have an account on the system. An
account is the top of the hierarchy.

Swift hierarchy: Account (the administrative control point) > Containers (organize objects) > Objects (contain user data).


Configuring Isilon Swift Accounts


Administrators can create, delete, modify, or view Swift accounts. Administrators
can also define users who can access the accounts. The Swift account
management tasks are performed only through the CLI.

Administrators must provision the accounts before users can use the service. The
general steps are: enable the Swift license, decide upon file system user or group
ownership, create accounts using the isi swift command, and then assign
users access to the account. Make any necessary file system permission changes if
you are relocating data into the account.

The example shows creating a Swift account in the sales access zone and using
an Active Directory user and group. The isi swift accounts list shows the
accounts that are created in the access zone. The isi swift accounts view
shows the account details.

Syntax: isi swift accounts create <account name> <swiftuser> <swiftgroup> --zone <string> --users <string>
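For example, following the syntax above, an account in the sales access zone might be created and then verified as shown below. The account, user, and group names are placeholders, and the list/view usage is a sketch that should be confirmed against your OneFS release.

isi swift accounts create sales_acct 'DEES\swiftuser' 'DEES\swiftgroup' --zone sales --users 'DEES\swiftuser'
isi swift accounts list
isi swift accounts view sales_acct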


Storage URL
Shown is what a Swift Storage URL looks like. URIs identify objects in the form
http://<cluster>/v1/account/container/object. In the example shown,
192.168.0.1 identifies the cluster. HTTP requests are sent to an internal web
service listening on port 28080. This port is not configurable. HTTPS requests are
proxied through the Apache web server listening on port 8083. This port is not
configurable. OpenStack defines the protocol version /v1. The reseller prefix is
/AUTH_bob, where /AUTH is a vestige of the OpenStack implementation's internal
details and the _bob portion of the URL is the account name used. The container /c1
is the container in which an object is stored, and /obj1 is the object itself.
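Assembled from the pieces described above, the example HTTP request would target a URL of this form:

http://192.168.0.1:28080/v1/AUTH_bob/c1/obj1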

URL components, in order: cluster, web service listening port, protocol version, reseller prefix, account, container, object.


Swift Considerations and Limitations


Swift supports up to 150 concurrent active connections per cluster node. When
uploading objects or listing containers, the Swift service can become memory-
constrained and cause a service outage. To avoid an outage, maintain the Swift
Service memory load within 384 MB. Account and container listing requests initiate
a full file system walk from the requested entity. Workloads can expect longer
response times during the listing operations as the number of containers or objects
increases. To prevent response time issues, redistribute or reduce the objects and
containers until the response times are within the acceptable limits. You cannot
submit a PUT request to create a zero-length object because the request is incorrectly
interpreted as a pseudo-hierarchical object. If the container is not empty, you
cannot submit a DELETE request to delete a container. As a best practice, delete
all the objects from the container before deleting the container. When
authenticating with Active Directory and Isilon Swift, the user name in the X-Auth-
User header must include the fully qualified AD domain name, in the form
user@domain, unless the domain has been configured as the default
through the assume-default-domain configuration parameter in the AD provider
configuration.
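As an illustration only, a client following the OpenStack Swift v1 conventions would typically pass that user name in the X-Auth-User header when requesting a token. The endpoint path, credentials, and the exact request shown here are assumptions and should be verified against the Isilon Swift documentation:

# Hypothetical authentication request; the user, password, path, and cluster address are placeholders.
curl -ik -H "X-Auth-User: user@domain.com" -H "X-Auth-Key: MyPassword" https://<cluster>:8083/auth/v1.0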

Pre-OneFS 8.0 Swift accounts are deactivated when upgrading to OneFS 8.0 and
later. After the upgrade, Swift no longer uses home directories for accounts. The
upgrade plan should determine which users are using Swift. Create new accounts
under the new Swift path, and then move the data from the old accounts into the
newly provisioned accounts. Swift is not compatible with the auditing feature.

• 150 concurrent active connections per node


• Cannot submit PUT request for 0 length object
• Container must be empty to DELETE
• User name must include FQDN of AD domain
• Upgrade from OneFS 7.2 requires new account provisioning
• Not compatible with auditing


Glossary
Cache - L1
Client-side cache. L1 cache refers to read transaction requests, or when a client
requests data from the cluster. L1 cache is stored in a segmented area of the node
RAM and as a result is fast. Related to L1 cache is the write cache or the write
coalescer that buffers write transactions from the client. The write cache is flushed
after successful write transactions. In OneFS, the two similar caches are
distinguished based on their read or write functionality. Client-side caching includes
both the in and out client transaction buffers.

Cache - L2
Storage side or node-side buffer. Buffers write transactions and L2 writes to disk
and prefetches anticipated blocks for read requests, sometimes called read ahead
caching. For write transactions, L2 cache works with the journaling process to
ensure protected committed writes. As L2 cache becomes full, it flushes according
to the age of the data. L2 flushes the least recently used, or LRU, data.

Chimer Nodes
By default, if the cluster has more than three nodes, three of the nodes are
selected as chimers. If the cluster has four nodes or less, only one node is selected
as a chimer. If no external NTP server is set, nodes use the local clock. Chimer
nodes are selected by the lowest node number that is not excluded from chimer
duty.

DataIQ Audited Deletes Plug-in


The Audited Deletes plug-in enables administrators to delete folders and files from
the DataIQ custom context menu and logs the actions. The plug-in asks for
confirmation before deleting anything, and logs all details of the delete operation.
The plug-in does not work with object stores such as S3, GCP, or ECS.

DataIQ Data Mover Plug-in


The Data Mover plug-in helps transfer files and folders more efficiently between file
systems. Data Mover is compatible with object storage such as Amazon S3,
Google Cloud Platform, and Dell ECS.

DataIQ Duplicate Finder Plug-in


The Duplicate Finder plug-in finds duplicate files across volumes and folders. The
plug-in does not work with object stores such as S3, GCP, or ECS.

DataIQ Previewer Plug-in
The Preview plug-in shows a preview image of the file in the WebUI for common
file types. The supported graphic file extensions are: ".jpg", ".jpeg", ".tiff", ".tif",
".dpx", ".bmp", ".png", ".gif", ".tga", ".targa", ".exr", ".pcx", ".pict", ".ico". The
supported video file extensions are: ".mov", ".mp4", ".mpeg", ".mpg", ".ts", ".avi",
".mkv", ".wmf", ".wmv", ".mxf", ".ogv". The plug-in does not work with object stores
such as S3, GCP, or ECS.

Dynamic Aggregation Mode


A dynamic aggregation mode enables nodes with aggregated interfaces to
communicate with the switch so that the switch can use a comparable mode.

File Pool Policy


File pool policies enable you to filter files and directories and store them on specific
node pools or tiers according to criteria that you specify. You can change the
storage pool tier, change the optimization, and change the protection level if the file
or directory no longer requires greater protection. You can trigger the changes at
any time and on any directory or file.

File Provider
A file provider enables you to supply an authoritative third-party source of user and
group information to a cluster. A third-party source is useful in UNIX and Linux
environments that synchronize /etc/passwd, /etc/group, and /etc/netgroup
files across multiple servers.

Front Panel Display


The Front Panel Display is located on the physical node or chassis. It is used to
perform basic administrative tasks onsite.

Generation 6 Hardware
The Gen 6 platforms reduce the data center rack footprints with support for four
nodes in a single 4U chassis. They enable enterprises to take on new and more
demanding unstructured data applications. The Gen 6 can store, manage, and
protect massively large datasets with ease. With the Gen 6, enterprises can gain
new levels of efficiency and achieve faster business outcomes.

Generation 6.5 Hardware


The ideal use cases for Gen 6.5 (F200 and F600) are remote office/back office,
factory floors, IoT, and retail. Gen 6.5 also targets smaller companies in the core
verticals, and partner solutions, including OEM. The key advantages are low entry
price points and the flexibility to add nodes individually, as opposed to a chassis/2
node minimum for Gen 6.

Global Namespace Acceleration (GNA)


GNA enables the use of SSDs for metadata acceleration across the entire cluster.
GNA also uses SSDs in one part of the cluster to store metadata for nodes that
have no SSDs. The result is that critical SSD resources are maximized to improve
performance across a wide range of workflows.

Groupnet
The groupnet is a top-level networking container that manages hostname resolution
against DNS nameservers and contains subnets and IP address pools. Every
subnet is assigned to a single groupnet. Each cluster has a default groupnet
named groupnet0. Groupnet0 contains an initial subnet, subnet0, an initial IP
address pool, pool0, and an initial provisioning rule, rule0. Groupnets are how the
cluster communicates with the world. DNS client settings, such as name servers
and a DNS search list, are properties of the groupnet. If the cluster communicates
to another authentication domain, it must find that domain. To find another
authentication domain, you need a DNS setting to route to that domain. With
OneFS 8.0 and later releases, groupnets can contain individual DNS settings,
whereas prior OneFS versions had a single global entry.
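A minimal CLI sketch of inspecting these settings, using the default groupnet described above (the subcommand spelling should be confirmed for your OneFS version):

# List all groupnets, then view the DNS servers and search list for groupnet0.
isi network groupnets list
isi network groupnets view groupnet0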

Hadoop
Hadoop is designed to scale up from a single server to thousands of servers.
Hadoop clusters dynamically scale up and down based on the available resources
and the required services levels. Performance varies widely for processing, and
queries can take a few minutes to multiple days depending on how many nodes
and the amount of data requested.

Home Directory
Home directory provisioning creates a single home share that redirects users to
their SMB home directories. If one does not exist, a directory is automatically
created.

InsightIQ File System Reports

File system reports include data about the files that are stored on a cluster. The
reports have use if, for example, you want to identify the types of data being stored
and where that data is stored. Before applying a file system report, enable InsightIQ
File System Analytics for that cluster.

InsightIQ Live Reporting


InsightIQ supports live versions of reports that are available through the InsightIQ
web application. You can create live versions of both performance and file system
reports. You can modify certain attributes as you view the reports, including the
time period, breakouts, and filters.

InsightIQ Performance Reports


Performance reports have information about cluster activity and capacity. For
example, to determine whether clusters are performing as expected, or if you want
to investigate the cause of a performance issue, the reports are useful.

isi get
The isi get command, run without options, displays the protection settings for an
entire directory path or a specific file. The POLICY (requested protection policy),
LEVEL (actual protection), and PERFORMANCE (data access pattern) are displayed
for each file. Running the command against a directory path displays the properties
for every file and subdirectory under the specified directory path. The output can
show files where protection is set manually. Mirrored file protection is represented
as 2x to 8x in the output.
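A brief usage sketch with a hypothetical path:

# Show protection settings for one file, then for everything under a directory.
isi get /ifs/data/media/file1.txt
isi get /ifs/data/media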

Job - Default Impact Policy


The default impact policy is the amount of system resources that the job uses
compared to other system maintenance jobs running simultaneously.

Job - Default Priority


The Default priority gives the job priority as compared to other system maintenance
jobs running simultaneously. You can modify the job priority, but it is not
recommended.

Job - Schedule
With the Schedule options, you can start the job manually or set to run on a
regularly scheduled basis.

LACP Aggregation Mode


LACP uses hashed protocol header information that includes the source and
destination address, and the VLAN tag, if available. LACP enables a network
device to negotiate and identify any LACP enabled devices and create a link. LACP
monitors the link status and if a link fails, fails traffic over. LACP accepts incoming
traffic from any active port. PowerScale is passive in the LACP conversation and
listens to the switch to dictate the conversation parameters.

Layers of Access
• Protocol Layer - The first layer is the protocol layer. Protocols may be Server
Message Block, or SMB, Network File System, or NFS, File Transfer Protocol,
or FTP, or some other protocol.
• Authentication Layer - The authentication layer identifies a user using a system
such as NIS, local files, or Active Directory.
• Identity Assignment Layer - The third layer is identity assignment. This layer is
straightforward and based on the results of the authentication layer, but there
are some cases that need identity mediation within the cluster, or where roles
are assigned within the cluster that are based on user identity.
• Authorization Layer - Finally, based on the established connection and
authenticated user identity, the file and directory permissions are evaluated. The
evaluation determines whether the user is entitled to perform the requested data
activities.

Leaf-Spine
Leaf-Spine is a two-level hierarchy where nodes connect to leaf switches, and leaf
switches connect to spine switches. Leaf switches do not connect to one another,
and spine switches do not connect to one another. Each leaf switch connects with
each spine switch and all leaf switches have the same number of uplinks to the
spine switches.

Local Provider
Local authentication is useful when Active Directory, LDAP, or NIS directory
services are not configured or when a specific user or application needs access to
the cluster. Local groups can include built-in groups and Active Directory groups as
members.

MTTDL
MTTDL is a statistical calculation that estimates the likelihood of a hardware failure
resulting in data loss. MTTDL is a system view of reliability and asks the question
“What happens when hardware does fail, and will I lose any data when it does?”

NAS
NAS is an IP-based, dedicated, high-performance file sharing and storage device.

NFS
Network File System, or NFS, is an open standard that UNIX clients use. The NFS
protocol enables a client computer to access files over a network. NFS clients
mount the OneFS export that is accessible under a client mountpoint. The
mountpoint is the directory that displays files from the server. The NFS service
enables you to create as many NFS exports as needed.
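For example, a Linux client might mount an export as follows; the cluster name, export path, and mount point are placeholders:

# Mount a OneFS NFS export on a Linux client at /mnt/data.
sudo mount -t nfs cluster.example.com:/ifs/data /mnt/data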

NFSv4 Continuous Availability


NFSv4 enables clients to transparently fail over to another node when a network or
node fails. The CA option enables movement from one node to another and no
manual intervention on the client side. Movement to another node enables a
continuous workflow from the client side with no appearance or disruption to their
working time. CA supports home directory workflows.

OneFS CLI
The command-line interface runs "isi" commands to configure, monitor, and
manage the cluster. Access to the command-line interface is through a secure shell
(SSH) connection to any node in the cluster.


OneFS Multi-Tenancy
With OneFS, multi-tenancy enables the PowerScale cluster to simultaneously
handle more than one set of networking configurations. Multi-Tenant Resolver, or
MTDNS is a subset of multi-tenancy that pertains to hostname resolution against
DNS name servers. Each tenant on the cluster can have its own network settings.
Before OneFS 8.0, you could only define one set of DNS servers on the cluster.

PaaS
PaaS combined with approaches like continuous integration and deployment can
measure application development cycles in the days and weeks rather than months
or years. The combinations can dramatically reduce the time it takes from having
an idea to identifying insight, to action, and creating value.

PAPI
The PAPI is divided into two functional areas: one area enables cluster
configuration, management, and monitoring functionality, and the other area
enables operations on files and directories on the cluster. A chief benefit of PAPI is
its scripting simplicity, enabling customers to automate their storage administration.

PowerScale A200
The A200 is an ideal active archive storage solution that combines near-primary
accessibility, value and ease of use.

PowerScale A2000
The A2000 is an ideal solution for high density, deep archive storage that
safeguards data efficiently for long-term retention.

PowerScale F200
Ideal for low-cost all-flash node pool for existing Gen6 clusters. Ideal for small,
remote clusters.

PowerScale F600
Ideal for small, remote clusters with exceptional system performance for small
office and remote office technical workloads.

PowerScale F800
Use the F800 for workflows that require extreme performance and efficiency.

PowerScale F810

Use the F810 for workflows that require extreme performance and efficiency. The
F810 also provides high-speed inline data deduplication and in-line data
compression. It delivers up to 3:1 efficiency, depending on your specific dataset
and workload.

PowerScale H400
The H400 provides a balance of performance, capacity and value to support a wide
range of file workloads. It delivers up to 3 GB/s bandwidth per chassis and provides
capacity options ranging from 120 TB to 720 TB per chassis.

PowerScale H500
The H500 is a versatile hybrid platform that delivers up to 5 GB/s bandwidth per
chassis with a capacity ranging from 120 TB to 720 TB per chassis. It is an ideal
choice for organizations looking to consolidate and support a broad range of file
workloads on a single platform.

PowerScale H5600
The H5600 combines massive scalability – 960 TB per chassis and up to 8 GB/s
bandwidth in an efficient, highly dense, deep 4U chassis. The H5600 delivers inline
data compression and deduplication. It is designed to support a wide range of
demanding, large-scale file applications and workloads.

PowerScale H600
The H600 is designed to provide high performance at value, delivering up to 120,000
IOPS and up to 12 GB/s bandwidth per chassis. It is ideal for high performance
computing (HPC) workloads that don’t require the extreme performance of all-flash.

Quotas - Accounting
Accounting quotas monitor, but do not limit, disk storage. With accounting quotas,
you can review and analyze reports to help identify storage usage patterns.
Accounting quotas assist administrators to plan for capacity expansions and future
storage requirements. Accounting quotas can track the amount of disk space that
various users or groups use.

Quotas - Advisory
Advisory quotas do not deny writes to the disk, but they can trigger alerts and
notifications after the threshold is reached.

Quotas - Default Directory Quota


Versions previous to OneFS 8.2.0 have default quotas for users and groups, but
not for directory quotas. Common directory quota workflows such as home
directories and project management folders, can have a default directory quota that
simplifies quota management.

Quotas - Enforcement
Enforcement quotas include the functionality of accounting quotas and enable the
sending of notifications and the limiting of disk storage.

Quotas - Hard Quota


Hard quotas limit disk usage to a specified amount. Writes are denied after
reaching the hard quota threshold and are only permitted when the used capacity
falls below the threshold.

Quotas - Soft Quota


Soft quotas enable an administrator to configure a grace period that starts after the
threshold is exceeded. After the grace period expires, the boundary becomes a
hard quota, and writes are denied. If the usage drops below the threshold, writes
are again permitted.
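As a rough sketch of how these quota types map to the CLI, the commands below create an advisory quota and a hard quota on hypothetical directories. The flag names reflect the isi quota syntax as best recalled and should be verified for your OneFS release:

# Advisory quota: report and alert at 80 GB, but never deny writes.
isi quota quotas create /ifs/data/marketing directory --advisory-threshold=80G
# Hard quota: deny writes once 100 GB is consumed.
isi quota quotas create /ifs/data/engineering directory --hard-threshold=100G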

Reed-Solomon
OneFS uses the Reed-Solomon algorithm, which is an industry standard method to
create error-correcting codes, or ECC, at the file level.


RFC 2307 Compliant


Use Microsoft Active Directory with Windows Services for UNIX and RFC 2307
attributes to manage Linux, UNIX, and Windows systems. Integrating UNIX and
Linux systems with Active Directory centralizes identity management and eases
interoperability, reducing the need for user-mapping rules. Make sure your domain
controllers are running Windows Server 2003 or later.

Scale-out Solution

Not all clustered NAS solutions are the same. Some vendors overlay a
management interface across multiple independent NAS boxes. This gives a
unified management interface, but does not unify the file system. While this
approach does ease the management overhead of traditional NAS, it still does not
scale well.
With scale-out, a single component (node) of a system or cluster contains the
performance, compute, and capacity. As the need for capacity or compute power
increases, you add more nodes to the cluster. The node is not equivalent to a
scale-up controller as disk capacity is not added to a node. The cluster scales out
as you add nodes, making it a much more scalable solution than a scale-up
implementation.

Scale-up Solution
The two controllers can run active/active or active-passive. For more capacity, add
another disk array. Each of these components is added individually. As more
systems are added, NAS sprawl becomes an issue.

Scale-up Storage
Scale-up storage is the traditional architecture that is dominant in the enterprise
space. High performance, high availability single systems that have a fixed capacity
ceiling characterize scale-up.

Serial Console
The serial console is used for initial cluster configurations by establishing serial
access to the node designated as node 1.

SmartConnect as a DNS Server


SmartConnect serves DNS information to inbound queries and as such acts as a
DNS server.

SmartDedupe
OneFS deduplication saves a single instance of data when multiple identical
instances of that data exist, in effect, reducing storage consumption. Deduplication
can be done at various levels: duplicate files, duplicate blocks in files, or identical
extents of data within files. Stored data on the cluster is inspected, block by block,
and one copy of duplicate blocks is saved, thus reducing storage expenses by
reducing storage consumption. File records point to the shared blocks, but file
metadata is not deduplicated.

SmartLock Compliance


Compliance is a regulatory requirement that carries certain restrictions as to how
retention must be implemented. The simple Securities and Exchange Commission
(SEC) Rule 17a-4(f) definition states that: “the requirement in paragraph (f)(2)(ii)(A)
of the rule permits use of an electronic storage system that prevents the
overwriting, erasing, or otherwise altering of a record during its required retention
period through the use of integrated hardware and software control codes.” This rule
is often seen as the regulatory standard that must be met for data retention by
other regulatory agencies. OneFS uses a specific compliance clock for SmartLock
Compliance retention. System integrity is one of the required elements to
guarantee that the retention of the file meets the compliance requirements. The
system must be secure and protect against modifications which could allow data to
be modified or deleted. Retention date integrity is another requirement that refers to
how the retention date is stored and accessed so that retention time requirements
are met.

SmartLock Retention Period


Retention is a time period where files are set to a read-only state and may not be
moved, modified, or deleted until a future date. When reaching the retention date,
you can once again modify or delete the file. Files from the PowerScale cluster are
never automatically deleted, and OneFS provides no automated means to delete
files with expired retention. The date varies by the internal and regulatory
requirements of the organization. A retention clock manages the date and time that
is associated with the retention date.

SmartLock WORM
SmartLock provides WORM (write-once/read-many) status on files. In a WORM
state, files can be read but not modified. "Committing" a file is changing a file from
a read/write state to a WORM state that has a retention expiration date. Files are
committed to a WORM state when using SmartLock.

SmartPools
SmartPools is a software module that enables administrators to define and control
file management policies within a cluster.

SmartPools Advanced License


The advanced feature, disk pool spillover management, enables the choice
whether write operations are redirected to another node pool when the target node
pool is full. If SmartPools is unlicensed, spillover is automatically enabled.

SmartPools Basic License

A single tier has only one file pool policy that applies the same protection level and
I/O optimization settings to all files and folders in the cluster. The basic version of
SmartPools supports virtual hot spares, enabling space reservation in a node pool
for reprotection of data. OneFS implements SmartPools basic by default.

SMB Continuous Availability (CA)


CA enables SMB clients to transparently and automatically failover to another node
if a network or node fails. CA is supported with Microsoft Windows 8, Windows 10,
and Windows 2012 R2 clients.

SMB Server-Side Copy


Clients using server-side copy can experience considerable performance
improvements for file copy operations, like CopyFileEx or "copy-paste" when using
Windows Explorer. Server-side copy only affects file copy or partial copy operations
in which the source and destination file handles are open on the same share and
does not work for cross-share operations.

SMB Service Witness Protocol (SWP)


Microsoft introduced an RPC-based-mechanism, called SWP. SWP provides a
faster recovery mechanism for SMB 3.0 clients to fail-over should their server go
down. SWP requires continuously available file shares and is aware of cluster or
scale-out storage. SWP observes the servers in use and if one is unavailable,
notifies the SMB client to release its file handle. The exchange happens within five
seconds, dramatically decreasing the time from the 30 seconds to 45 seconds
previously used with time-outs. SWP uses advanced algorithms to determine the
metadata and user data blocks that are cached in L3. L3 cached data is durable
and survives a node reboot without requiring repopulating.

SMB Time Out Service


The time-out services must wait for a specific period before notifying the client of a
server down. The time-outs can take 30 seconds to 45 seconds, which creates a
high latency that is disruptive to enterprise applications.

Snapshot - Redirect on Write (RoW)


RoW are system defined snapshots. RoW avoids the double write penalty by
writing changes to a snapshot protected file directly to another free area of the file
system. However, RoW has increased file fragmentation. RoW in OneFS is used
for more substantial changes such as deletes and large sequential writes.

Snapshot Manual Create


Manual snapshots are useful to create a snapshot immediately, or at a time that is
not specified in a snapshot schedule. For example, if planning to change the file
system, but are unsure of the consequences, capture the current file system state
using a snapshot before making changes.

Snapshot Schedule
The most common method is to use schedules to generate the snapshots. A
snapshot schedule generates snapshots of a directory according to a schedule. A
benefit of scheduled snapshots is not having to manually create a snapshot every
time wanted. An expiration period should be assigned to the snapshots that are
generated, automating the deletion of snapshots after the expiration period.

SnapshotIQ
OneFS snapshots are used to protect data against accidental deletion and
modification. Because snapshots are available locally, users can restore their data
without administrative intervention.

Stateless Connection
With a stateless connection, the session, or “state,” information is maintained on the
client side. If a node goes down, the IP address that the client is connected to fails over
to another node in the cluster. The client would not know that their original node
had failed.

Static Aggregation Mode


Static modes do not facilitate communication between nodes and the switch.

Storage Pool Global Settings


Global settings include L3 cache enablement status, global namespace
acceleration (GNA) enablement, virtual hot spare (VHS) management, global
spillover settings, and more. You can use the "isi storagepool" command to
manage the SmartPools settings.
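For example, a quick way to inspect the global settings mentioned above from the CLI (a minimal sketch):

# View cluster-wide SmartPools settings such as GNA, VHS, and spillover.
isi storagepool settings view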

Virtual Hot Spare (VHS)


VHS is available with the licensed and unlicensed SmartPools module. By default,
all available free space on a cluster is used to rebuild data. The virtual hot spare
option reserves free space for this purpose. VHS provides a mechanism to assure
there is always space available and to protect data integrity when the cluster space
is overused.

WebUI

The browser-based OneFS web administration interface provides secure access
with OneFS-supported browsers. This interface is used to view robust graphical
monitoring displays and to perform cluster-management tasks.

Windows ACL
A Windows ACL is a list of access control entries, or ACEs. Each entry contains a
user or group and a permission that allows or denies access to a file or folder.

LAB GUIDE - POWERSCALE ADMINISTRATION
Version [1]

INSTRUCTOR GUIDE

Copyright © 2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.

Table of Contents

Exploring the Lab Environment
Introduction to PowerScale
Adding Nodes to the Cluster
Using CLI commands
Designing a OneFS Directory Structure
NTP
Authentication Providers
Configuring Access Zones
Configuring SmartConnect
RBAC and ZRBAC
On-disk Identity
Permissions
SMB Shares
NFS Exports
S3
HTTP - Optional
Data Protection and File Layout
SmartPools
File Pool Policies
File Filtering
SmartQuotas
SmartDedupe
SnapshotIQ
SyncIQ
SmartLock
HealthCheck
Monitoring

Exploring the Lab Environment

Connect to the Lab Environment

In this lab, you perform the following tasks:


• Review the lab scenario and topology
• Review the lab guide structure
• Login to the lab environment

Login to the Lab Environment


To log in to the Virtual Learning Platform (VLP) lab environment, follow these steps.
Use the VLP to complete the lab exercises. If you are in a classroom, use your
personal computer or a student computer. If you are taking the course online, you
may also use your home computer.

1. The On-Demand version of the PowerScale Administration course is associated
with an On-Demand Lab to reinforce the content. Register for the On-Demand Lab
at https://education.dellemc.com/content/emc/en-us/csw.html?id=933227881.

NOTE: This step is only applicable to the On-Demand PowerScale


Administration course.

2. Launch the Chrome browser, and then go to


http://edulab.emc.com/emcedu/catalogs/

Note: Chrome is the preferred browser and delivers the best experience. If
you do not have Chrome, you can use the browser of your choice.

3. If you are in an Instructor-led in-class or an online course, login using the


credentials that your instructor provided.


4. Under the enrollments tab, click on Start This Lab.

5. A Welcome video opens, providing a lab orientation. If this is the first time you
are using the VLP, view the video. If the video does not automatically play, in
the upper right corner of your browser, click Help > Tips to review the EduLab
Orientation Video. The orientation video shows you how to deploy the lab,
access the console, and view the lab guide.

6. To learn more about the VLP, watch the EduLab Orientation video mentioned
in the previous step.


Instructor Note: Ensure that all the students are able to access the VLP; assist them if they cannot.


Lab Scenario and Topology


Boston Media Corporation (BMC) is an organization that provides newspapers,
periodicals, and online streaming news. BMC has acquired a new PowerScale
cluster to help handle their storage growth while maintaining current staffing levels.
The name of the cluster is boston. There are currently seven IT employees in the
organization.

You are John and have worked at BMC for just over a month and have been
assigned as the storage administrator for the cluster. Based on your storage
experiences, you acknowledge that the configuration and day-to-day maintenance
of storage infrastructure is complex and may require several administrators to
manage it effectively.

The IT manager has told you the previous NAS platform had challenges with
scaling the storage. Those systems were limited by their single file system
(namespace) size (16TB typical). BMC will need namespace capabilities of a much
larger scale and the ability to quickly adjust to unexpected growth as their customer
base expands.

The cluster has been racked, stacked, powered, and added to DNS. The IT
manager along with senior storage administrators have put together the necessary
documentation and steps required to configure the cluster. As a part of the team,
configure the cluster based on the documentation and record your progress using
the lab exercises. The initial implementation is targeted for three business units in
the BMC organization: Sales, Engineering and Marketing.

Your goal is to implement the PowerScale cluster so that the users from each
business unit can access the data. Once done, you will configure services
such as snapshots and quotas on the data.

1. In this course, you have a dedicated lab environment which includes:

• Jumpstation: Windows 2012 client

• Centos: CentOS client

• DC: Windows 2012 - DC/DNS

• InsightIQ

• DataIQ v1


• Six node cluster

Note that for usability purposes, the password for ALL systems is
Dees12345!

Note that the URL for the PowerScale WebUI used throughout this lab is
https://192.168.3.11:8080, the IP address for node-1. You can manage the
cluster through any active cluster node by using the IP address of that node.


Lab Guide Structure


The lab guide and lab exercises are presented in a skills-based structure. You can
perform the typical lab exercise in one of three ways: PEQ-based, WebUI step-by-
step instructions, and CLI step-by-step instructions.

Although most lab exercises are broken down into the three methods, not all lab
exercises can be strictly accomplished using only the WebUI or only the CLI. Some
of the lab exercises will not have a skills-based structure.

• PEQ-based: This method is intended for individuals with some PowerScale


administration experience or for those who want a challenging lab experience.
PEQ stands for Pre Engagement Qualifier and contains only the information
needed to configure the system. For the PEQ-based implementation, you are
only given the information needed to complete the task, not the configuration
steps needed.
• WebUI step-by-step: This method leads you through the configuration by
detailing each step using the PowerScale WebUI. The exercises show
screenshots of expected field entries and outcomes. Typically, there is very
little, if any, command line used in this method.
• CLI step-by-step: This method leads you through the configuration by detailing
each step using the OneFS CLI. The exercise shows screenshots of commands
and the expected outputs. Typically, there is very little interaction with the GUI.
Some task such as configuring Windows DNS entries and mapping shares will
use the GUI.


Lab Guide Layout


1. You can use the lab to complete the exercises in one of three methods: the
PEQ, WebUI, or CLI. Each exercise begins with a short scenario and a few
objectives.

a. The first page of the lab exercise is the PEQ. If you are experienced
or if you prefer a more challenging experience, try using the PEQ
method.


b. Following the PEQ pages are the step-by-step instructions for the
WebUI. Remember, you do not have to use this method if you are
configuring using the PEQ or CLI method.

c. Lastly, you can choose to configure using the CLI method.

Introduction to PowerScale

Scenario:

With the Implementation Engineer onsite, watch the Implementation Engineer run
the Configuration Wizard.

Your 4-node cluster is ready. Each node is named boston-<#>. Next, you will
verify that you can connect to the cluster via the OneFS WebUI and the CLI. Once
connected, check the basic health of the cluster and the available storage space.

For the introduction to PowerScale, there are no skills-based exercises. You will
perform each task in a step-by-step manner.

In this lab, you perform the following tasks:


• Observe initial implementation
• Access the cluster using the WebUI
• Access the cluster using the CLI


Initial Configuration Video


1. Link to the video:
https://edutube.emc.com/Player.aspx?vno=zyK3csP1f1h6F0cke3Ivog==&autoplay=true


Access the Cluster using the WebUI


1. In the VLP, launch the Jumpstation and login as user DEES\John and
password Dees12345!

a. From the Jumpstation, launch a Chrome browser and navigate to


https://192.168.3.11:8080. This is the external IP address and port
for node-1 of the cluster. If you get the prompt about the connection
privacy, click Advanced and then click Proceed to <IP> (unsafe).


2. Log in to the WebUI for your new PowerScale cluster. Click on the Log in
button after entering the following credentials:

a. User name: root

b. Password: Dees12345!

3. On the Dashboard menu, the Cluster status tab is shown as default. You
may see failed events in the New event groups table. You can ignore the
events for the purposes of this lab.

4. Scroll down to the Status table. In the Status table, mouse over the icon in the
Status column for nodes with ID 1,2,3 and 4 to see their status description.


Note: A node may have a status other than healthy (green) because of the
number of installed drives. Healthy status is not required for the purposes of
these lab exercises.

5. For your documentation, write down the IP address of each node.

TIP: You can click the ID number of each node to go to the Node status page.

a. Node 1 IP:

b. Node 2 IP:

c. Node 3 IP:

d. Node 4 IP:

Instructor Note: Prompt the students to find and view the IP address
for each node.

6. Examine the Cluster size chart presented in the Cluster Status tab and note
the available hard disk space. Check the chart to see the growth of available
space as you add additional nodes to your cluster.


Note: The lab image used does not reflect capacities you will see on a
production system.

Question: What is the current HDD capacity available?

7. Minimize the browser.


Access the Cluster using the CLI


1. Launch the mRemoteNG application. mRemoteNG is the application that you
will use to SSH into the cluster.

Tip: Click the shortcut on the desktop or select from the task bar.

2. Start an SSH session to node 1 by double clicking Bos-node1. If you get a


PuTTY Security Alert, click Yes.


3. Login as root, press ENTER, then type the password Dees12345! and press
ENTER. Note the PowerScale OneFS version for your documentation.

4. According to the Implementation Engineer, the first thing you should always do
when getting to the CLI is to check the health of the cluster. To display
information about the cluster’s status, at the boston-1 prompt, press ENTER
after typing the isi status command.

Tip: Use pipe to more (| more) or to less (| less) to prevent from needing
to scroll to view the output. Use the spacebar to forward the output. The less
command is a more powerful version of more. When reading through large
text files, the less command loads each page into memory a page at a time,
making it more efficient.

Tip: You can also use the isi stat command, which has identical output.
Remember, all commands are case sensitive.
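For example, to page through the status output one screen at a time:

isi status | less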


5. Notice that the health of the cluster and the nodes is OK. Also, note the initial
available storage size with just the first four nodes in the cluster. The numbers
on your virtual cluster may not match the screenshots exactly but the location
of the information, and where you should look, will match the highlighted fields
in the screenshots throughout this document.

Question: What is the available capacity for the cluster? How does this compare
to what the WebUI reported?

Instructor Note: The WebUI and CLI report the same storage
capacity.

Adding Nodes to the Cluster

Scenario:

Now that you have validated the initial cluster build, you need to add the two
remaining nodes to the cluster. You have been given the procedures for how to do
this using the CLI and the WebUI.

The objective of the lab is to add two nodes to the cluster. If needed, use the
WebUI or the CLI step-by-step walk through portion of the exercise for guidance.

In this lab, you perform the following tasks:


• Add a cluster node using the Configuration Wizard.
• Add a cluster node using the WebUI.


Add Nodes to the Cluster


Add the nodes to the cluster using the Configuration Wizard and the WebUI. Verify
the nodes that are added.

1. Use the following information to join nodes:

a. Add one of the two remaining nodes to the cluster using the
Configuration Wizard. Open the node that you want to add from the
console.

i. Join the node to the existing boston cluster.

b. Add the remaining node to the cluster using the WebUI.

i. WebUI page: Cluster Management, Hardware


Configuration.

ii. Add the remaining node to the existing boston cluster.

Hint: In the WebUI, the node you want to add is identified by


its serial number.

c. Verify that the nodes are added to the cluster.

Hint: Browse the Dashboard menu options in the WebUI.


CLI Step-by-Step Walk Through: Add Nodes


1. From the VLP environment, open the Console for the fifth node in the cluster
by clicking Node 5 in console and then click inside the pane to activate the CLI
of the node. The Console tab is located at the upper left corner of the VLP.

2. Since this node is not part of the cluster, you will see the Configuration Wizard
as shown:

3. To join Node 5 to the boston cluster using the Configuration Wizard, perform
the following actions:

a. At the Wizard prompt, type 2 and press Enter to join the node to an
existing cluster.

b. In Join cluster prompt, type 1 and press Enter to join Node 5 to the
boston cluster.

Note: This process may take a few minutes due to the nature of the
virtual environment.

4. Verify that the node has been added.


a. On the Jumpstation, login as DEES\John (password: Dees12345!).


Open mRemoteNG from the taskbar. Select Bos-node1, and SSH
using root/Dees12345!

b. At the boston-1 prompt, run the isi status command to verify that
node 5 has joined the cluster. Node 5 appears at the bottom of the list
with ID set to 5.


WebUI Step-by-Step Walk Through: Add Nodes


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Log in to
the cluster using root/Dees12345!

2. Navigate to Cluster management, Hardware Configuration. Click Add a


node.

3. The Add node to cluster window shows the nodes available to add. There
will be one node displayed. When multiple nodes are available, there is no way
to distinguish node order prior to adding them to the cluster. In a production,
real-world implementation, when adding multiple nodes to a cluster, be sure
to get the serial number off the back of each node. You can join nodes to the
cluster in the order you want to join them, or the order they appear in the rack.

a. Select the node and then click Add node. The node joins the cluster.


Note: Do not add another node to the cluster until the first node is joined because
the nodes may join out of order and break your sequential IP numbering.

4. To verify that the node is added, go back to the Dashboard, Cluster
overview page of the WebUI. On the Cluster status tab, confirm that the
node is added to the cluster. You may need to refresh the screen or click on
another option if the virtual environment is not responding.

a. On the Dashboard, verify Node 6 appears at the bottom of the list


with ID set to 6. Wait until node shows green before going to the next
step.


b. View the Node status of Node 6 by clicking the node ID in the cluster
status tab. Notice that the node is assigned the next IP address (ext-
1) from the configuration range so that the IP addresses are
sequential. The configuration range was created when the
Implementation Engineer initially set up the cluster.

Note: If the Status column is red, wait a minute or two for the node to
finish joining. The status turns green and the values populate once
the node fully joins the cluster.


5. On the Cluster management menu, click Events and alerts.

a. Under Event Groups, look at the list of event notifications along with
the level in the Severity. If there are issues with the cluster or events
that you should be aware of, the Severity will be Warning or Critical.
You may not have any events such as those shown in the graphic.
The graphic shows events you may see because of the virtual
environment. Check the box at the top of the ID column to select all
the events.


b. From the Select an action drop-down list, select Mark resolved. On


the confirmation dialog box, click Mark resolved. This resolves the
event, and you will not be notified again unless a new event occurs.

Using CLI commands

Scenario:

Now you have a 6 node cluster. Next, you need to validate the configuration of the
newly built cluster. The IT manager has asked you to use specific isi commands
to become familiar with the cluster, understand the hardware layout, and use the
built-in help options. OneFS relies on the backend network to communicate across
the nodes, so take a moment to see how this works.

This lab explores the OneFS isi command structure and syntax. You will perform
each task in a step-by-step manner.

In this lab, you perform the following tasks:


• Run isi commands
• Examine cluster hardware
• Use the Configuration console


isi commands
In this task, use the built-in man isi and isi --help, to view the commands
and command structure.

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open


mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. To view the isi command structure with all of its subcommands and options,
at the boston-1 prompt, press ENTER after typing the isi --help or isi -
h command.

Tip: You must use the vertical scroll bar to view the entire content.
Alternatively, typing a pipe | and the word more (| more) is a common
addition to commands with a lot of output. | more enables you to view the
output one page at a time. Press ENTER to view more of the output.

3. Pick one of the commands from the output and type it at the prompt. For
example: isi job or isi sync and see what type of output or assistance
you get. Shown in the graphic is the output for running the isi ntp
command.


Tip: To view a basic description of any command and its available options,
type -h or --help option after the command and press ENTER. For
example: isi ntp --help

4. To view a detailed description of the isi command, subcommands and


options, at the boston-1 prompt, press ENTER after typing the man isi
command.


Tip: Type q to exit from the man command and return to the boston-1 prompt.

5. Tab completion is a useful command-line feature that can auto-fill partially
typed commands, file names, options, paths, subcommands, and more. To use
tab completion, press TAB while typing a command and notice that the shell
either completes the command you are typing or suggests options.

Instructor Note: Demonstrate the use of the tab complete feature for
autocomplete and option suggestions.
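
A quick sketch of what tab completion looks like (purely illustrative; the completions offered depend on the OneFS version):

boston-1# isi sta<TAB>              (completes to: isi statistics)
boston-1# isi statistics <TAB><TAB> (lists the available subcommands)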


Examine Cluster Hardware


In this task, you use isi commands to examine the cluster hardware.

1. To view the hard drives for node 1, at the boston-1 prompt, press ENTER
after typing the isi devices drive list | grep dev command.

Note the use of | grep in the command. This addition enables you to view only
the output lines that contain the word "dev". Running the command without grep
outputs many lines that show an EMPTY bay. Try running the command without
| grep.

The output shows HEALTHY drives in node 1. Also, a JOURNAL drive occupies
the first drive bay because this virtual cluster does not have an internal SSD to
store the journal.

a. To view the journal drive for all nodes, run the isi_for_array isi
devices drive list | grep -w da1 command. The grep -w
switch is used to find an exact match. Similarly, you can view all the
drives for all nodes by running the isi_for_array isi devices
drive list | grep dev command.

2. To get the list of subcommands for statistics, at the boston-1 prompt, press
ENTER after typing the isi statistics --help command.


3. To view all the drives on this virtual node, at the boston-1 prompt, press
ENTER after typing the isi statistics drive | more command. Notice
in the output that there is a single SSD and the rest are SAS and UNKNOWN
drives. You can use grep to filter only what you want.

Note: grep'ing the Drive string in the command displays the header row. The -e
option is used to match multiple strings.
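
The command behind the graphic likely takes roughly the following form; this is a sketch only, and the strings used in the actual graphic may differ:

boston-1# isi statistics drive | grep -e Drive -e SSD | more

Here Drive matches the header row and SSD matches only the SSD entries; add or change the -e strings to filter for the drive types you want.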

4. To view the hardware in every bay, you can run isi devices for all six
nodes in the cluster. At the boston-1 prompt, press ENTER after typing the
isi_for_array 'isi devices drive list' | grep -ie Lnn -e
dev | more command.


Note the use of | grep and | more in the command. The string Lnn is used
to output the header. Try the isi_for_array 'isi devices drive
list' command without piping.


The Configuration Console


Now, use the Configuration console (also known as config subsystem) command to
view the internal int-a, int-b and failover networks. You know you are in the
Configuration console because the command prompt changes to boston >>>.

1. In the mRemoteNG window, open the Configuration console. Press ENTER


after typing the isi config command.

Note: You can only run the console commands while in the Configuration
console. You must leave the Configuration console to run any other cluster
command.

2. To list the documented commands available in the console, press ENTER


after typing the help command.

Tip: You do not need to type isi before commands while in the Configuration
console.

3. To view options for the interface command, press ENTER after typing the
help interface command.

Question: What are the two valid interfaces shown in the output?

Instructor Note: int-a and int-b

4. To view the internal networks on the cluster, press ENTER after typing the
interface command.


5. To view the configuration table with the network information for int-a and int-b,
press ENTER after typing the status command.

Question: What is the number of allocated IP addresses and the number of free
IP addresses available on the cluster?

Instructor Note: 6 and 3

6. Exit the Configuration console. Press ENTER after typing the exit command.
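
Put together, the Configuration console portion of this task looks roughly like this (a sketch; command output is omitted):

boston-1# isi config
boston >>> help
boston >>> help interface
boston >>> interface
boston >>> status
boston >>> exit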


Designing a OneFS Directory Structure

Scenario:

The first thing to configure is the directory structure. The three BMC business units
that will access the boston cluster are sales, engineering, and marketing. In this
exercise, you will create a unique base directory structure for the engineering and
marketing business units. You will create the sales base directory as part of
another exercise.

Given in the lab exercise is the information needed to create the base directories. If
needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Create the OneFS base directories for the engineering and marketing business
units

Instructor Note: Instruct students to use either the objective


information, the WebUI step-by-step, or the CLI step-by-step method.


Base Directories
In this task, you will create two of the three required base directories. Note that you
will create the sales directory in a later lab exercise.

1. Use the following information to configure the base directories:

a. To configure the base directories, use the WebUI, File system


explorer page or the standard mkdir UNIX command.

i. Base directory name: engineering

ii. Base directory path: /ifs/engineering

iii. Base directory name: marketing

iv. Base directory path: /ifs/marketing

b. Verify that the base directories have been created by either looking
into the File system explorer page or by using the ls command.

Question: Is there a difference in default permissions when creating the base


directory using the WebUI and the CLI?

Instructor Note: Yes. The WebUI default is 777 whereas the CLI is
755.
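
If you want to check this from the CLI yourself, a minimal sketch using standard UNIX commands (the chmod value is only an example of explicitly setting a mode; it is not required for the lab):

boston-1# ls -ld /ifs/engineering
boston-1# chmod 755 /ifs/engineering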


WebUI Step-by-Step Walk Through: Create Base Directories


In this task, you will create two of the three required base directories. Note that you
will create the sales directory in a later lab exercise.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to the File System, File system explorer page.

a. Engineering base directory: Select Create a directory. Enter


Directory name as engineering, and keep the default user, group,
and permissions. Click Create directory.

b. Verify that the /ifs/engineering base directory is shown in the


File system explorer page.


c. Marketing base directory: Select Create a directory. Enter Directory


name as marketing, and keep the default user, group, and
permissions. Click Create directory.

d. Verify that the /ifs/marketing base directory is shown in the File


system explorer page.


CLI Step-by-Step Walk Through: Create Base Directories


In this task, you will create two of the three required base directories. Note that you
will create the sales directory in a later lab exercise.

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open


mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

a. Run the pwd command to view the present working directory.

b. Run the mkdir /ifs/engineering command.

c. Run the mkdir /ifs/marketing command.

d. Run the ls -la /ifs command and verify the base directory is
present.


NTP

Scenario:

Your goal is to provide user access to the cluster. Now that you have a 6-node
cluster, you need to synchronize all the cluster nodes to the same time source.
Time synchronization is critical for cluster functionality.

The IT manager has instructed you to synchronize the cluster’s Network Time
Protocol (NTP) services with the dees.lab Active Directory server. Because time
synchronization is your first step toward access management, you need to
configure the date and time reported by the cluster.

Given in the lab exercise is the information needed to configure NTP. If needed,
use the WebUI or the CLI step-by-step walk through portion of the exercise for
guidance.

In this lab, you perform the following tasks:


• Configure the NTP services

Instructor Note: Instruct students to use either the objective


information, the WebUI step-by-step, or the CLI step-by-step method.


NTP
The Active Directory domain controller is on the Windows Server 2012R2 VM. This
VM is designated as the time server that the cluster will sync its time to.
Authentication will fail if the cluster time is more than 5 minutes off from the AD
domain controller.

1. Use the following information to configure NTP on the cluster:

a. NTP can be configured using the WebUI General settings page or


the isi ntp servers command.

i. NTP server IP Address: 192.168.3.2

b. Verify that the date, time, and timezone on the nodes are the same as
those on the domain controller. Change the timezone if they do not
match. To change the time and timezone on the nodes, navigate to
the Date & Time tab on the General settings page of the WebUI.

i. Timezone: America/New_York - EDT


WebUI Step-by-Step Walk Through: NTP


The Active Directory domain controller is on the Windows Server 2012R2 VM, and
has an IP address of 192.168.3.2. You will designate this as the time server that
the cluster will sync its time to because authentication will fail if the cluster time is
more than 5 minutes off from the AD domain controller.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to the Cluster management, General settings page, and then click
the NTP tab.

a. In the NTP Servers field, enter 192.168.3.2, and then click Save
Changes. The server with IP address 192.168.3.2 is the time server
that you want to synchronize the cluster to. The IP address of the
NTP server that you have added is displayed on the NTP Servers
field.

3. Select the Date and time tab. Compare the node’s Date and Time and Time
Zone to the system time on the DC server and the Jumpstation. You will
need to launch the DC console and login as administrator/Dees12345!.
Notice that the node time is in 24-hour format.

a. Change the cluster time if the DC time zone is different from the time
zone set on the nodes. Use the Time zone drop-down list to
configure the cluster time and time zone to match the DC server
using 24-hour format. No matter where you are in the world, set the
time zone to America/New_York - EDT.


b. Click Save changes if you have made any changes.


CLI Step-by-Step Walk Through: NTP


The Active Directory domain controller is on the Windows Server 2012R2 VM, and
has an IP address of 192.168.3.2. You will designate this as the time server that
the cluster will sync its time to because authentication will fail if the cluster time is
more than 5 minutes off from the AD domain controller.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open


mRemoteNG, and SSH to any node using root/Dees12345!

2. At the boston-1 prompt, run the isi ntp servers create 192.168.3.2
command to add the NTP server. To verify that the server has been added,
run the isi ntp servers list command, and you will see the server
listed as shown below:

3. Verify that the date, time, and timezone on the nodes are the same as on
the DC server. You will need to launch the DC console and login as
administrator/Dees12345! Notice that the node time is in 24-hour format.

a. To check the date, time and timezone on the nodes, open the
Configuration console by running the isi config command at the
boston-1 prompt. Notice that the prompt changes to boston>>>

i. To check the current date and time, run the date at the
boston>>> prompt. Similarly, run the timezone command to
check the timezone set on the nodes. Verify that the timezone
is set to Eastern Time Zone as shown below.


ii. If the timezones on the DC server and the nodes do not match,
change the timezone on the nodes by running the timezone
Eastern Time Zone command at the boston>>> prompt.
Run the commit command to save changes.

b. To exit the Configuration console, run the exit command.
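
Put together, the console portion of this task looks roughly like this (the timezone value is the one used in this lab; command output is omitted):

boston-1# isi config
boston >>> date
boston >>> timezone
boston >>> timezone Eastern Time Zone
boston >>> commit
boston >>> exit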


Authentication Providers

Scenario:

BMC uses a mix of NFS and SMB clients. The next step for achieving client access
is to configure your cluster to use both LDAP and Active Directory services to
authenticate clients. OneFS best practice is to always connect LDAP first before
joining the cluster to an AD domain. To complete the cluster authentication setup,
join the cluster to Active Directory (AD).

Given in the lab exercise is the information needed to configure the Authentication
Providers. If needed, use the WebUI or the CLI step-by-step walk through portion
of the exercise for guidance.

In this lab, you perform the following tasks:


• Connect to the LDAP server
• Join Active Directory

Instructor Note: Instruct students to use either the objective


information, the WebUI step-by-step, or the CLI step-by-step method.


Authentication Providers
Configure LDAP and Active Directory.

1. Use the following information to configure your cluster to use LDAP for
authentication:

a. To configure the LDAP provider you can use the WebUI


Authentication providers page or the isi auth ldap command.

i. LDAP Provider Name: Isilon-LDAP

ii. Server URIs: ldap://centos

iii. Base distinguished name (DN): dc=dees and dc=lab

Hint: Separate the two DN components using a comma. It does not matter
whether there are spaces after the comma.

iv. Groupnet: groupnet0

2. Use the following information to configure Active Directory for the cluster:

a. Before configuring Active Directory, ensure that the time on the nodes
and time on the Jumpstation has a time difference of less than five
minutes. You can configure Active Directory using the WebUI,
Authentication providers page or the isi auth ads command.

i. Domain name: dees.lab

ii. User: administrator

iii. Password: Dees12345!

iv. Groupnet: groupnet0

3. Verify that LDAP and Active Directory have been configured.

a. WebUI: View the properties of the System access zone.

b. CLI: Use the isi auth status command.


WebUI Step-by-Step Walk Through: Connecting to LDAP


Choose to configure a base Distinguished Name (DN) using the WebUI. The base
DN identifies the entry in the directory where LDAP clients begin their search. Base
DNs can include the parameters common name (cn), locality (l), organizational unit
(ou), domain component (dc), and others. An example of a Distinguished Name is:
cn=lab1, dc=isilon, dc=com. It does not matter if there are spaces or no spaces
after each comma.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to Access, Authentication providers, and then click the LDAP tab.
Click Add an LDAP provider.

3. In the Add an LDAP provider window, complete the fields as highlighted in


the graphic and then click Add LDAP provider.


4. To verify that the LDAP provider has been added, go to the LDAP tab, in the
LDAP providers table, ensure the new LDAP provider is listed and that it has
a green dot in the Status column as shown below.


CLI Step-by-Step Walk Through: Connecting to LDAP


Choose to configure a base Distinguished Name (DN) using the CLI. The base DN
identifies the entry in the directory where LDAP clients begin their search. Base
DNs can include the parameters common name (cn), locality (l), organizational unit
(ou), domain component (dc), and others. An example of a Distinguished Name is:
cn=lab1, dc=isilon, dc=com. It does not matter if there are spaces or no spaces
after each comma.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open


mRemoteNG, and SSH to any node using root/Dees12345!

2. To configure LDAP, at the boston-1 prompt, run the isi auth ldap
create Isilon-LDAP --base-dn="dc=dees,dc=lab" --server-
uris="ldap://centos" --groupnet=groupnet0 command.

3. Verify that the LDAP provider has been added.

a. At the boston-1 prompt, run the isi auth status command. This
command shows which providers are currently active on the cluster.
Verify that the LDAP provider created is listed and online.

b. At the boston-1 prompt, run isi auth ldap list to view all the
LDAP providers configured for the cluster.

c. At the boston-1 prompt, run isi auth ldap view Isilon-LDAP


to view the provider details.

4. Query the LDAP server and output the various attributes for users. At the
boston-1 prompt, run the ldapsearch -x -h centos -b
dc=dees,dc=lab command.
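
To narrow the query to a single account, you can append a standard LDAP filter to the same command. This is a hedged example; the attribute name assumes the default posixAccount schema used by the lab LDAP server:

boston-1# ldapsearch -x -h centos -b dc=dees,dc=lab "(uid=sera)"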


WebUI Step-by-Step Walk Through: Joining Active Directory


Choose to configure Active Directory for the cluster using the WebUI. BMC uses a
Windows 2012R2 domain.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Verify the time on the nodes by navigating to Cluster management, General


settings, Date and time. If there is more than a five-minute difference, enter
the current time to match the Jumpstation time using 24-hour format, and
then click Save changes.

3. Navigate to the Access, Authentication providers page, Active Directory


tab. Click Join a domain.

4. On the Join a Domain page, enter the fields highlighted in the graphic and
then click Join.


a. Joining the domain may take several minutes. After the cluster
successfully joins the domain, you will get a success notification and
the status should be green in the Active Directory providers table.


CLI Step-by-Step Walk Through: Joining Active Directory


Choose to configure Active Directory for the cluster using the CLI. BMC uses a
Windows 2012R2 domain.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open


mRemoteNG, and SSH to any node using root/Dees12345!

2. To verify the date and time on the nodes, enter the Configuration console by
running the isi config command at the boston-1 prompt. Run the date
command at the boston>>> prompt. Ensure that the time on the nodes and
the Jumpstation have a time difference of less than five minutes. If not, change
the time at the nodes by running the date YYYY/MM/DD HH:MM:SS
command at the boston>>> prompt. Replace the values for the date and time
in the command. Exit the Configuration console by running the exit
command.

3. To configure Active Directory for the cluster, at the boston-1 prompt, run the
isi auth ads create dees.lab administrator --
password=Dees12345! --groupnet=groupnet0 command.

4. Verify that Active Directory has been configured.

a. At the boston-1 prompt, run the isi auth status command. This
command shows which providers are currently active on the cluster.
Verify that the Active Directory provider that was just created is listed
and online.

b. At the boston-1 prompt, run isi auth ads list to view all the
Active Directory providers configured for the cluster.

c. At the boston-1 prompt, run isi auth ads view dees.lab to


verify that the dees.lab provider is online.


5. At the boston-1 prompt, run the isi auth users list --domain dees
command. The output shows the Active Directory users.


Configuring Access Zones

Scenario:

Good progress. You have:

✓ Configured NTP
✓ Configured authentication
✓ Configured two of three base directories

The base directories are the root of each access zone you create. Now you will
create the access zones.

Given in the lab exercise is the information needed to configure your access zones.
If needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this exercise, you will perform the following tasks:


• Create access zones for all three of the BMC business units

Instructor Note: Instruct students to use either the objective


information, the WebUI step-by-step, or the CLI step-by-step method.


Access Zones
The intent is to have three BMC organizations in unique authentication zones.
Configure three access zones, one for each BMC business unit that will access the
cluster.

1. Use the following information to configure the access zones on the cluster:

a. To configure an access zone, use the WebUI, Access zones page or


the isi zone zones command.

b. Configure access zone for the Sales business unit.

i. Access zone name: sales

ii. Base directory: /ifs/sales

Hint: For the sales access zone, you need to create the base
directory while creating the access zone.

Question: What are the default permissions for the sales base
directory?

Instructor Note: 755

iii. Authentication provider: Active Directory

iv. Groupnet: groupnet0

c. Configure access zone for the Engineering business unit.

i. Access zone name: engineering

ii. Base directory: /ifs/engineering

iii. Authentication provider: LDAP

iv. Groupnet: groupnet0


d. Configure access zone for the Marketing business unit.

i. Access zone name: marketing

ii. Base directory: /ifs/marketing

iii. Authentication provider: Active Directory and LDAP

iv. Groupnet: groupnet0

e. Question: What is the default access zone configured for /ifs?

Instructor Note: The System access zone is the default access zone
configured for IFS.


WebUI Step-by-Step Walk Through: Configure Access Zones


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Configure access zone for the Sales business unit.

a. Navigate to the Access, Access zones page. Click on Create an


access zone.

b. Enter the fields as shown in the graphic and then click Create zone.

Note: Because the base directory for sales has not been created, the
checkbox for Create zone base directory if it does not exist is
checked.
3. Configure access zone for the Engineering business unit.

a. Click on Create an access zone. Enter the fields as shown in the


graphic and then click Create zone.


4. Configure access zone for the Marketing business unit.

a. Click on Create an access zone. Enter the fields as shown in the


graphic and then click Create zone.


CLI Step-by-Step Walk Through: Configure Access Zones


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

a. Access zone: sales

i. isi zone zones create sales /ifs/sales --auth-


providers ads:dees.lab --create-path

Note: If the base directory is not created, use the --create-


path option in the command.

b. Access zone: engineering

i. isi zone zones create engineering


/ifs/engineering --auth-providers ldap:Isilon-
ldap

c. Access zone: marketing

i. isi zone zones create marketing /ifs/marketing


--auth-providers ads:dees.lab --auth-providers
ldap:Isilon-ldap

d. Verify: isi zone zones list
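
As an optional extra check, you can inspect a single zone in detail; for example (the view subcommand displays the zone's base path and authentication providers):

boston-1# isi zone zones view sales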


Configuring SmartConnect

Scenario:

You have created the base directory structure, connected to the authentication
providers, and created the access zones. The next step is to configure
SmartConnect. You have enlisted the help of the DNS Administrator to get
SmartConnect configured and name resolution working for the cluster.

Given in the lab exercise is the information needed to configure SmartConnect. If


needed, use the step-by-step walk through portion of the exercise for guidance.

In this exercise, you will perform the following tasks:


• Create the SmartConnect service IP addresses for the subnet
• Create a SmartConnect zone for each access zone
• Create the DNS delegation records

Instructor Note: Instruct students to use either the objective


information, the WebUI step-by-step, or the CLI step-by-step method.


SmartConnect
1. Use the following information to configure the SmartConnect service IP (SSIP):

a. You can use the WebUI, Network configuration page or the isi
network subnets modify command to configure the SSIP.

Tip: If you need to configure a single SSIP for the cluster, enter the
same IP for the start and end IP values in the range.

i. Subnet: subnet0

ii. Designated SmartConnect service IP address range:


192.168.3.9 - 192.168.3.10

Hint: To verify that the SSIP is configured successfully, use the


isi network subnets view command.

2. Configure the IP address pools for the three SmartConnect zones. You can
use the WebUI, Network configuration page or the isi network pools
create command.

Hint: Create the three IP address pools under subnet0 of groupnet0

a. IP address pool for sales access zone

i. Pool name: sales

ii. IP range: 192.168.3.21 - 192.168.3.29

iii. Interfaces: ext-1 of each node

Tip: In a hybrid cluster environment (mix of different node


types), you may only want to choose the interfaces of nodes
that the business unit will access. For example, in an
environment with a mix of F800 and H400 nodes, the
Engineering business unit may need access only to the F800
nodes. In that case, choose only the interfaces of the F800
nodes while creating the IP address pool for the Engineering
unit.


iv. Zone name / balance policy / allocation method:


sales.dees.lab / round-robin / static

Tip: To modify the values for balance policy and allocation


method, you need to activate the SmartConnect Advanced
license. For the purposes of the lab, you may activate a trial
version of the license by navigating to the Licensing page of
the WebUI or using the isi license command.

b. IP address pool for engineering access zone

i. Pool name: engineering

ii. IP range: 192.168.3.31 - 192.168.3.39

iii. Interfaces: ext-1 of each node

iv. Zone name / balance policy / allocation method:


engineering.dees.lab / round-robin / dynamic

c. IP address pool for marketing access zone

i. Pool name: marketing

ii. IP range: 192.168.3.41 - 192.168.3.49

iii. Interfaces: ext-1 of each node

iv. Zone name / balance policy / allocation method:


marketing.dees.lab / round-robin / static

3. Once the IP address pools are configured for each access zone, you need to
create the delegation records in DNS. Use the following information to create a
delegation record for each SmartConnect zone.

a. The DC VM is the DNS server. Login credentials for the DC console:


administrator/Dees12345!.

b. Create two new hosts in the forward lookup zone dees.lab named
sip9 and sip10. sip9 IP address is 192.168.3.9 and sip10 IP address
is 192.168.3.10.

c. Create the following new delegations:


i. Delegated domain: sales. FQDN: sip9.dees.lab.

ii. Delegated domain: engineering. FQDN: sip10.dees.lab.

iii. Delegated domain: marketing. FQDN: sip9.dees.lab and


sip10.dees.lab.

d. Verify the DNS settings. Ping sales. Ping engineering. Ping


marketing. Ensure you get a reply from each delegation with an IP
address that is in the range defined in the SmartConnect zone.

Question: With the balance policy set to round-robin, when you perform a
second ping on an access zone, what should the reply IP address be?

Instructor Note: The next IP address in the IP range of that access zone. For
sales, a second ping would return the IP address 192.168.3.22.


WebUI Step-by-Step Walk Through: Configure SmartConnect Service IP

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to the Cluster management, Network configuration page.

3. Click on the View/Edit for subnet0.

a. Click Edit and update the fields as shown in the graphic and click on
Save changes.


CLI Step-by-Step Walk Through: SmartConnect Service IP


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

a. Configure: isi network subnets modify subnet0 --sc-


service-addr 192.168.3.9-192.168.3.10

b. Verify: isi network subnets view subnet0


WebUI Step-by-Step Walk Through: IP Address Pools


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. On the Cluster management, Network configuration page, click More on


subnet0 and then Add pool. Enter the fields as shown in the graphics for
sales IP address pool and click on Add pool.


Tip: In a hybrid cluster environment (mix of different node types), you may
only want to choose the interfaces of nodes that the business unit will access.
For example, in an environment with a mix of F800 and H400 nodes, the Sales
unit may need access only to the F800 nodes. In that case, choose only the
interfaces of the F800 nodes while creating the IP address pool for the Sales
unit.

3. Navigate to the Cluster management, Licensing page. Scroll down and click
on Manage trials. Select SmartConnect Advanced and click Start trial.


4. Navigate back to Cluster management, Network configuration page.

a. Click More on subnet0 and select Add pool. Enter the following to
create the engineering IP address pool:

• Pool name: engineering

• IP address range: 192.168.3.31 to 192.168.3.39

• Add all the external interfaces of ext-1 to manage the incoming traffic

• SmartConnect Basic: engineering.dees.lab

• SmartConnect Advanced: Set the Allocation method to Dynamic and


leave the rest with the default values.


5. Click More on subnet0 and select Add pool. Enter the following to create the
marketing IP address pool:

• Pool name: marketing

• IP address range: 192.168.3.41 to 192.168.3.49

• Add all the external interfaces of ext-1 to manage the incoming traffic

• SmartConnect Basic: marketing.dees.lab

• SmartConnect Advanced: Default

6. Verify that the IP address pools are successfully created and listed under
subnet0 as shown in the graphic.


CLI Step-by-Step Walk Through: IP Address Pools


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

a. Create sales SmartConnect zone: isi network pools create


groupnet0.subnet0.sales --access-zone sales --
aggregation-mode roundrobin --alloc-method static -
-ifaces 1-6:ext-1 --ranges 192.168.3.21-
192.168.3.29 --sc-dns-zone sales.dees.lab

Tip: In a hybrid cluster environment (mix of different node types), you


may only want to choose the interfaces of nodes that the business
unit will access. For example, in an environment with a mix of F800
and H400 nodes, the Sales unit may need access only to the F800
nodes. In that case, choose only the interfaces of the F800 nodes
while creating the IP address pool for the Sales unit.

b. Activate the SmartConnect Advanced license. At the boston-1


prompt, run the isi license add --evaluation
SmartConnect_Advanced command. At the : prompt, type q. To
agree to the EULA, type yes and press ENTER.

c. Create engineering SmartConnect zone: isi network pools


create groupnet0.subnet0.engineering --access-zone
engineering --aggregation-mode roundrobin --alloc-
method dynamic --ifaces 1-6:ext-1 --ranges
192.168.3.31-192.168.3.39 --sc-dns-zone
engineering.dees.lab

d. Create marketing SmartConnect zone: isi network pools


create groupnet0.subnet0.marketing --access-zone
marketing --aggregation-mode roundrobin --alloc-
method static --ifaces 1-6:ext-1 --ranges
192.168.3.41-192.168.3.49 --sc-dns-zone
marketing.dees.lab

e. Verify: isi network subnets list
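
As an additional, optional check on the pools themselves (this is the list counterpart of the isi network pools create command used above):

boston-1# isi network pools list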


Step-by-Step Walk Through: DNS Delegation


Once the IP address pools are configured for each access zone, you need to
create the delegation records in DNS.

1. From the VLP, open the DC console. The DC virtual machine is the domain
controller and the DNS server. Login using administrator/Dees12345!

2. Open the DNS Manager.

Tip: Click the shortcut on the taskbar.

a. In the navigation panel, open Forward Lookup Zones, and select


dees.lab.

b. Right-click on dees.lab and select New Host (A or AAAA).

3. In the New Host window, enter the fields as shown and then select Add Host.


4. Add a second New Host using the name sip10 and the IP address
192.168.3.10

5. In the navigation panel, right-click on dees.lab and then select New


Delegation. The New Delegation Wizard starts. A delegation domain on the
DNS acts as a server name on the network to host shares, exports, and buckets
for the access zone.

a. On the welcome screen, click Next.

b. For the Delegated domain, type sales, click Next, and then click
Add on the Name Servers window.

c. For the Server fully qualified domain name (FQDN) enter


sip9.dees.lab and select OK and then Next.

d. On the Name Servers page, click Next, and then click Finish.


6. Add two more delegations with the following settings:

a. Delegated domain: engineering. FQDN: sip10.dees.lab.

b. Delegated domain: marketing. FQDN: sip9.dees.lab and


sip10.dees.lab.


c. In the navigation pane, right-click dees.lab and select Reload. Log off
the DC virtual machine.

7. Verify DNS settings. On the Jumpstation, open a command prompt. Ping


sales. Ping engineering. Ping marketing. Ensure you get a reply from each
delegation with an IP address that is in the range defined in the access zone
IP address pool.
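
A minimal sketch of the verification from the Jumpstation command prompt (the replies will rotate through the pool ranges; nslookup is an optional extra check):

C:\> ping sales
C:\> ping engineering
C:\> ping marketing
C:\> nslookup sales.dees.lab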


RBAC and ZRBAC

Scenario:

Before continuing with configuring user access, the IT manager has asked you to
grant administrative access control to other members of the team who will perform
various administrative activities on the cluster. For each member, create a user
account and assign the user to a suitable role. Create a user with auditing
privileges for the cluster and a user with administrative privileges for the
Engineering business unit.

Given in the lab exercise is the information needed to create and assign users to
roles. If needed, use the step-by-step walk through portion of the exercise for
guidance.

In this lab, you perform the following tasks:


• View the existing users, built-in roles, and privileges
• Create user accounts for System zone and Non-System zone users
• Assign the users to built-in roles


RBAC and ZRBAC


1. Use the following information to view the default users, roles, and privileges.

a. View the default engineering and sales users. You can use the
WebUI, Users tab in the Membership and roles page or you can
use the isi auth users command.

Hint: Ensure that you choose the access zone.

Question: When selecting the providers, why is there a difference


between the sales zone users and engineering zone users?

Instructor Note: The engineering access zone is associated with


LDAP and lists the LDAP users and the sales access zone loads the
Domain Users from Active Directory.

b. View the default administrative roles. Use the WebUI, Roles tab in
the Membership and roles page or the isi auth roles
command.

Question: Why is there a difference between the System zone built-in


roles and non-System zone built-in roles?

Instructor Note: The access zones have their own distinct roles.

c. View the privileges for a role in the WebUI. Use the View/Edit button
next to the role name or use the isi auth roles view command.

Question: Is the list of the privileges different for each role? What
privilege enables login to the WebUI? What privilege enables SSH
login?


Instructor Note: Answers: 1) Yes 2) ISI_PRIV_LOGIN_PAPI 3) ISI_PRIV_LOGIN_SSH

2. Create a Local user. You can use the WebUI, Users tab in the Membership
and roles page or the isi auth users create command.

a. System Zone User

i. User Name: Dante

ii. Password: Dees12345!

iii. Primary Group: Isilon Users

iv. Home Directory: /ifs/home/dante

v. Enable Account: Yes

Question: How would you add an Active Directory user to the


System zone without creating the user on the cluster?

Instructor Note: Add the AD provider to the System zone and choose
the user from the AD users.

b. Engineering Zone User

i. User Name: engAdmin

ii. Password: Dees12345!

iii. Primary Group: Isilon Users

iv. Home Directory: /ifs/engineering/home/engAdmin

v. Enable Account: Yes


Hint: To select the zone, use the --zone option in the CLI
command. For the WebUI, select the zone from the dropdown
menu.

3. Add users to roles. You can use the WebUI, Roles tab in the Membership
and roles page or the isi auth roles modify command.

a. User: Dante

i. Role: AuditAdmin

ii. Verify WebUI access: https://192.168.3.12:8080 (credentials:


user Dante, password Dees12345!)

Question: Can Dante create other users and assign them to


roles?

Instructor Note: No, because the AuditAdmin role does not have the
privileges.

b. User: engAdmin

i. Role: ZoneAdmin and ZoneSecurityAdmin

ii. Verify access using the WebUI URL:


https://engineering:8080 (credentials:
engAdmin/Dees12345!)

Question: Can engAdmin SSH into the cluster? Why or Why


not?

Instructor Note: No, because the ZoneAdmin and


ZoneSecurityAdmin roles do not have the ISI_PRIV_LOGIN_SSH
privilege that is required to SSH into the cluster.


WebUI Step-by-Step Walk Through: View Users, Roles, and Privileges

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. To view the OneFS users that are created by default, navigate to the Access,
Membership and roles page. Click on the Users tab.

a. To view the users in the System zone, set the Current access zone
to System from the dropdown menu, and select the Providers from
the dropdown menu. The users under FILE: System are the core
accounts created by OneFS.

b. To view the users in the engineering zone, set the Current access
zone to engineering, and change the providers to LDAP: Isilon-
LDAP

c. Similarly view the users for different access zones and providers by
changing the Current access zone and Providers field.


Note: When you choose an Active Directory provider, you need to


select the Active Directory Domain to view the users of that domain.

3. To view the built-in administrative roles, navigate to the Access, Membership


and roles page. Click on the Roles tab.

a. View the built-in roles for any zone by selecting that zone in the
Current access zone dropdown menu. Shown below is an example
of the built-in roles in the engineering zone. The engineering zone
has 2 built-in roles: ZoneAdmin and ZoneSecurityAdmin

b. To view the privileges for the ZoneAdmin role, click the View/Edit
button. In the View role details window, note the privileges.


i. You may view the privileges of any role in any zone in a similar
manner. Also, view the privileges of a role in the CLI and
notice the difference in the naming from that shown in the
WebUI.


CLI Step-by-Step Walk Through: View Users, Roles, and Privileges


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. To list the OneFS users that are created by default, at the boston-1 prompt,
run the following commands:

a. System zone: isi auth users list

b. Non-System zone:

i. engineering: isi auth users list --zone


engineering

ii. sales zone local users: isi auth users list --zone
sales

iii. Active Directory users: isi auth users list --zone


sales --domain "dees.lab"

iv. Similarly, view the users for other access zones by replacing
the zone name.

3. To list the built-in administrator roles, at the boston-1 prompt, run the following
commands:

a. System zone built-in roles: isi auth roles list


b. Non-system zone built-in roles:

i. engineering: isi auth roles list --zone


engineering

ii. sales: isi auth roles list --zone sales

iii. Similarly, view the roles for other access zones by replacing
the zone name.

4. To view the privileges for the built-in roles, at the boston-1 prompt, run the
following commands:

a. For the SystemAdmin role in the System zone: isi auth roles
view SystemAdmin

b. For the ZoneAdmin role in the engineering zone: isi auth roles
view ZoneAdmin --zone engineering


c. Similarly, view the privileges for any role in any zone by replacing the
role and zone names.

Tip: Run the isi auth roles list --format table -v


command to view all privileges for all roles simultaneously.


WebUI Step-by-Step Walk Through: Create User Account


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Create a user account for user Dante in the System access zone.

a. Navigate to the Access, Membership and roles page. Click on the


Users tab. The Current access zone should be set to System. In
the Providers field, select LOCAL: System. Only the default Guest
account is listed. On the right side of the page, click Create user.

b. On the Create a user dialog box, enter the new user information as
shown in the graphic. After entering the user information, click the
Create user button.


c. Verify that the user account for Dante is created successfully and
shown below the user Guest in the Users tab.

3. Create a user account for user engAdmin in the engineering access zone.

a. Navigate to the Access, Membership and roles page. Click on the


Users tab. The Current access zone should be set to engineering.
In the Providers field, select LOCAL: engineering. Click Create
user and enter the user information as shown in the graphic.


b. Verify that the user account for engAdmin is created successfully


and shown below the user Guest in the Users tab.


CLI Step-by-Step Walk Through: Create User Account


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create a user account for user Dante in the System access zone.

a. At the boston-1 prompt, run the isi auth users create Dante
--enabled yes --set-password --password-expires no
command.

i. At the password prompt, type Dees12345! and press ENTER.

ii. At the confirm prompt, type Dees12345! and press ENTER.

Note: The text you type at the password and confirm prompt will not be visible.

b. Verify that the user account for Dante has been created by running
the isi auth users list command. Note that without using the
--provider option, the output shows users from all the configured
providers.

3. Create a user account for user engAdmin in the engineering access zone.

a. At the boston-1 prompt, run the isi auth users create


engAdmin --zone engineering --enabled yes --set-
password --password-expires no command.

i. At the password prompt, type Dees12345! and press ENTER.

ii. At the confirm prompt, type Dees12345! and press ENTER.

b. Verify the user is created by running the isi auth users list -
-zone engineering command.


c. View the user default setting by running the isi auth users view
engAdmin --zone engineering command.


WebUI Step-by-Step Walk Through: Add User to Role


In this task, you will add Dante to the AuditAdmin role. You have the option of
doing this through either the WebUI or the CLI. Instructions for the WebUI are
shown.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Add user Dante to the AuditAdmin role.

a. Navigate to the Access, Membership and roles page. Click on the


Roles tab. Set the Current access zone field to System. Click
View/Edit on the AuditAdmin role.

b. In the View role details window, click on Edit role. In the Edit role
details window, click on Add a member to this role.


c. Select Dante as shown in the graphic. Once done, click Save


changes and then Close.


3. Access the WebUI as Dante

a. Log out of the WebUI and login as Dante using the credentials
Dante/Dees12345!


b. Notice the difference in menu options as compared to the user root.

4. Add user engAdmin to the ZoneAdmin and ZoneSecurityAdmin roles.

a. Navigate to the Access, Membership and roles page. Click on the


Roles tab. Set the Current access zone field to engineering.

b. Click View/Edit on the ZoneAdmin role. In the View role details


window, click on Edit role. In the Edit role details window, click on
Add a member to this role. Select engAdmin as shown in the
graphic. Once done, click Save changes and then Close.


c. Add engAdmin to the ZoneSecurityAdmin role in a similar manner.

5. Access the WebUI as engAdmin.

a. Log out of the WebUI. In the Chrome browser, go to URL


https://engineering:8080. On the Your connection is not private
message, click Advanced and then click Proceed to engineering
(unsafe). Login using engAdmin/Dees12345!


b. Notice the difference in the homepage and menu options as


compared to the user root.


CLI Step-by-Step Walk Through: Add User to Role


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Add user Dante to the AuditAdmin role.

a. At the boston-1 prompt, run the isi auth roles modify


AuditAdmin --add-user Dante command.

b. Verify by running isi auth roles view AuditAdmin | grep


Members

c. Start an SSH session to node 2 by double clicking Bos-node2. If you


get a PuTTY Security Alert, click Yes. Login using Dante/Dees12345!

i. At the boston-2 prompt, run the isi auth roles list


command and note the response.

3. Add user engAdmin to the ZoneAdmin and ZoneSecurityAdmin roles.

a. To add to the ZoneAdmin role, at the boston-1 prompt, run the isi
auth roles modify ZoneAdmin --add-user engAdmin --
zone engineering command.

b. Verify by running the isi auth roles view ZoneAdmin --


zone engineering | grep Members command.

c. To add to the ZoneSecurityAdmin role, at the boston-1 prompt, run


the isi auth roles modify ZoneSecurityAdmin --add-
user engAdmin --zone engineering command.

d. Verify by running the isi auth roles view


ZoneSecurityAdmin --zone engineering | grep Members
command.

e. You will be unable to SSH into the cluster as engAdmin because the
ZoneAdmin and ZoneSecurityAdmin roles are not assigned the
ISI_PRIV_LOGIN_SSH privilege, which is required for SSH access.


On-disk Identity

Scenario:

The On-disk identity stores an authoritative version of the identity for users and
groups. This becomes important when accessing restored files replicated to a
remote system. The IT manager has asked you to use the OneFS CLI to identify
the UID, GID, SID, and the On-disk identity for user accounts.

Given in the lab exercise is the information required to identify identities. If needed,
use the CLI step-by-step walk through portion of the exercise for guidance.

In this lab, you perform the following tasks:


• View UID, GID, SID and On-disk identity


On-disk Identity
1. Use the following information to help identify the UID, GID, SID, and the on-
disk identity for user accounts. This step can only be performed using the
OneFS CLI.

a. Ensure that the global setting for the On Disk Identity is set as
native. Use the WebUI Access > Settings page or the isi auth
settings global command.

b. View the access token for the marketing zone user sera. To do this,
use the isi auth mapping token command.

Question: Which authentication source is used to generate the user


sera access token? Which is the On-disk identity and why?

Instructor Note: 1) LDAP 2) UID, because an object that has an


explicit UNIX identity maintains that identity as the On Disk identity.

c. View the access token for the marketing zone user sera by
referencing the domain as dees\\sera or sera@dees.lab

Question: Is the SID the same for user sera when referenced as
sera and dees\\sera? Why or Why not?

Instructor Note: Yes, because the user sera account exists in both
Active Directory and LDAP and the account has a real SID as well as
a real UID.

d. View the access token for the sales zone user john.

Hint: John belongs to the DEES domain.

Question: Which is the On-disk identity for user john? Why?


Instructor Note: The On Disk identity is the SID. User john only
exists in Active Directory. If an object lacks a real UID or GID, then its
SID is set as the On Disk identity. OneFS only uses a SID for the on
disk identity when no configured authentication sources return a real
UID or GID.

e. View the mapping information for user with UID 10. To do this, use
the isi auth mapping view command.


CLI Step-by-Step Walk Through: On-disk Identity


In this task, use the isi auth mapping command to locate on-disk identities for
users and groups, as well as complete reverse lookup of accounts. On-disk identity
output can help you resolve issues related to permissions and access.

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open


mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Verify that the global setting for the On Disk Identity is set as native. To do
this, at the boston-1 prompt, run the isi auth settings global view
command and note the setting.

a. If needed, change the setting: at the boston-1 prompt, run the isi
auth settings global modify --on-disk-identity
native command.

3. View the access token for the marketing zone user sera. At the boston-1
prompt, run the isi auth mapping token sera --zone marketing
command.

a. Locate the UID and SID for user sera. Notice that the SID is an
algorithmic SID or 'UNIX SID', where the common S-1-22 is
appended with the UID to create a SID for an account that requires

one. Locate the GID and SID for the Primary Group. Notice that the
SID is a UNIX SID.

b. Notice that the On Disk identity is the same as the UID. User sera is
an LDAP user. An object that has an explicit UNIX identity maintains
that identity as the On Disk identity.

4. At the boston-1 prompt, run the isi auth mapping token dees\\sera
--zone marketing command.

Note: The format to reference a user in Active Directory is domain\\user or
username@domain. The domain is required.


a. The On Disk identity setting is the UID. As the user account sera
exists in both Active Directory and LDAP, the account has a real SID
as well as a real UID.

5. View the access token for the sales zone user john. At the boston-1 prompt,
run the isi auth mapping token dees\\john --zone sales
command.

a. Note the On Disk identity for user john. The On Disk identity is the
SID. User john only exists in Active Directory. If an object lacks a real
UID or GID, then its SID is set as the On Disk identity. OneFS only
uses a SID for the on-disk identity when no configured authentication
sources return a real UID or GID.

6. To display the mapping information for a user with UID 10, at the boston-1
prompt, run the isi auth mapping view UID:10 command. You can
also view the mapping information for a user by using the SID or GID in the
command.


Permissions

Scenario:

Before configuring the shares and exporting the file systems, the IT manager wants
you to become familiar with the way OneFS depicts and enforces permissions. For
this lab, view and modify the permissions on the cluster, and demonstrate how to
manage a file or directory using advanced ACLs.

Given in the lab exercise is the information needed to view, change and understand
file system permissions in OneFS. If needed, use the WebUI or the CLI step-by-
step walk through portion of the exercise for guidance.

In this lab, you perform the following tasks:


• View and change permissions for a directory
• Set a directory to ACL authoritative
• Add and remove an advanced ACL
• Verify the ACL policy settings


Permissions
1. Use the following information to view and change the permissions and
ownership of the sales access zone base directory.

a. View the permissions and ownership of the sales base directory. You
can use the WebUI, File system explorer page or the ls command.

b. Change the owner of the sales base directory to the Active Directory
administrator. You can use the file system explorer of the WebUI or
the chown command.

c. Change the group owner of the sales base directory to the domain
administrators group. You can use the file system explorer of the
WebUI or the chgrp command.

d. Ensure that all individuals have read, write, and execute access to the
sales base directory. You can use the file system explorer of the
WebUI or the chmod command.

2. Use the following information to change the authoritative permission type and
add and remove ACLs for the sales base directory. This step can be
performed only using the CLI.

a. Change the authoritative permission type for the sales base directory
to ACL. Use the chown command and its options.

Question: What is the indication that the directory is now ACL


authoritative?

Instructor Note: The Synthetic ACL wording is absent and the "+" symbol is
present.

b. Add an advanced ACL to the sales base directory for a user. Use the
chmod command and its options.

i. User: nobody


ii. Permissions: read, write, execute

Hint: The permission is generic_all

c. Add an advanced ACL to the sales base directory for a group. Use
the chmod command and its options.

i. Group: domain users

ii. Permissions: read, write, execute

Hint: The permission is generic_all

d. Remove the everyone ACL entry for the sales base directory. Use
the chmod command and its options.

e. Change the authoritative permission type of the sales base directory


to POSIX. Use the chmod command and its options.

Question: If a Windows user accesses the directory, which


permissions are used to authorize the user, POSIX or ACLs?

Instructor Note: Synthetic ACLs

3. Use the following information to understand the impact of changing the ACL
policy settings. This step can be performed only using the WebUI.

a. View the default ACL policy settings. Use the Access menu in the
WebUI.

b. Change the Environment setting between the different options and note
the changes to the other settings.

Question: For each option, what is the impact of using chmod on files
that have existing ACLs?


Instructor Note: Balanced: Merge the permissions, UNIX: Remove


ACLs and set UNIX permissions, Windows: Deny permission to
modify ACL

Note: Change the Environment back to Balanced when you


complete the lab exercise.
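If you want to preview the CLI before the walk through sections, a minimal sketch of the ACL operations used in this exercise (the path, names, and ACL index follow this lab's setup):

   chown -s administrator@dees /ifs/sales              (make the directory ACL authoritative)
   chmod +a user nobody allow generic_all /ifs/sales
   chmod +ai group 'dees\domain users' allow generic_all /ifs/sales
   chmod -a# 3 /ifs/sales                               (remove the ACL entry at position 3)
   chmod -b 777 /ifs/sales                              (return to POSIX authoritative)
   ls -led /ifs/sales                                   (verify after each change)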


WebUI Step-by-Step Walk Through: Permissions and Ownership


In this task, you will view permissions, change permissions and change the
ownership of the sales access zone base directory using the WebUI.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Examine the permissions and ownership of the sales access zone base
directory.

a. Navigate to the File system, File system explorer page. Set the
Current access zone to sales. On the sales directory, click View
details.

b. In the View directory property details window, you can see the
permissions and ownership set for the directory.


3. Change the permissions and ownership for the sales access zone base
directory.

a. In the View directory property details window, click the Edit


properties button.

b. Change the User (owner) of the directory to the Active Directory


administrator. In the User field, click on Browse. Select the user as
shown in the graphic.


c. Next, change the Group (owning group) of the directory to the


DEES\domain admins group. In the Group field, click on Browse.
Select the group as shown in the graphic.


d. The IT manager wants the base directory to be writable by all
individuals. Ensure all the Permissions check boxes are
checked. Click Save changes, and then Close.


CLI Step-by-Step Walk Through: Permissions and Ownership


In this task, you will view permissions, change permissions and change the
ownership of the sales access zone base directory using the CLI.

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open


mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Examine the permissions and ownership of the sales access zone base
directory. At the boston-1 prompt, run the ls -led /ifs/sales command.
Note the permissions and ownership for the directory.

Note: The ls command lists the directory contents. The -l option lists files
in the long format. The -e option prints the Access Control List (ACL). The -d
option lists the directory itself and not its contents.

3. Change the permissions and ownership for the sales access zone base
directory.

a. Change the owner of the sales base directory to the Active Directory
administrator. At the boston-1 prompt, run the chown
[email protected] /ifs/sales command.

b. Change the owning group of the sales base directory to the


DEES\domain admins group. At the boston-1 prompt, run the
chgrp "DEES\\domain admins" /ifs/sales command.

c. The IT Manager wants you to make sure that any individual


accessing the directory has read, write, and execute permissions. To
change permissions, at the boston-1 prompt, run the chmod 777
/ifs/sales command.

d. Verify the changes to the permissions and ownership. At the boston-


1 prompt, run the ls -led /ifs/sales command.


Step-by-Step Walk Through: ACL Authoritative


In this task, you change the permission authority to ACLs, add an ACL, remove an
ACL, and change the permission authority back to POSIX. This exercise can be
performed only using the CLI.

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open


mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. View the authoritative permission type for the sales access zone base
directory. At the boston-1 prompt, run the ls -led /ifs/sales command.
Note that the permission type is set to POSIX as the ACL for the sales
directory is the Synthetic ACL.

3. Change the permission authority of the /ifs/sales directory to ACL.

a. At the boston-1 prompt, run the chown -s administrator@dees


/ifs/sales command.

Note: Although the owner of the directory was the Active Directory
administrator, the -s option uses an external authentication provider
and in this example, makes the directory ACL authoritative.

b. Run the ls -led /ifs/sales command. Note that the Synthetic


ACL wording is absent and a "+" symbol is added next to the POSIX
bits.

4. Explicitly add an advanced ACL for a user.

a. At the boston-1 prompt, run the chmod +a user nobody allow


generic_all /ifs/sales command. The chmod +a adds a new
ACL entry and inserts it into the canonical location in the ACL.

b. Run the ls -led /ifs/sales command and note the entry of the
added user.


Note: The nobody account is used for mapping root users over NFS to a different
UID so they are not treated as root on the cluster. This is known as root squash.
The generic_all permission represents Full Control permissions in Windows.

5. Explicitly add an advanced ACL for a group.

a. At the boston-1 prompt, run the chmod +ai group


'dees\domain users' allow generic_all /ifs/sales
command. Use chmod +ai to add inheritance flags to the ACL
entries. Inheritance flags can also be added specifically as part of the
ACE.

b. Run the ls -led /ifs/sales command and note the entry of the
added group.

6. Remove an ACL.

a. At the boston-1 prompt, run the chmod -a# 3 /ifs/sales


command. This deletes ACL position 3, the everyone ACL.

b. Run the ls -led /ifs/sales command to verify.

7. Switch the /ifs/sales from ACL authoritative to POSIX authoritative.


a. Use the chmod -b command. At the boston-1 prompt, run the


chmod -b 777 /ifs/sales command. The -b option removes
the ACL and replaces it with the specified UNIX permissions.

b. Run the ls -led /ifs/sales command. Note that POSIX is the


authoritative permission type and that the synthetic ACLs have
changed to correspond with the POSIX mode bits.

Note: With POSIX authoritative, changing the mode bits also changes
the synthetic ACLs.


Step-by-Step Walk Through: ACL Policy Setting


In this task, the IT manager wants you to understand the impact of changing the
ACL policy settings. This exercise can be performed only using the WebUI.

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a


Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to the Access, ACL policy settings page.

a. The default setting for the Environment is set to Balanced. To


compare between the different options, note the changes to the Use
the chmod command on files with existing ACLs setting as you
switch between the options.

b. Change the Environment to UNIX only. Note the changes.

c. Change the Environment to Windows only. Note the changes.


d. Change the Environment back to Balanced.

Note: In the given scenario, you have created access zone sales that
authenticates using Active Directory, engineering that authenticates
using LDAP, and marketing that uses both authentication providers.
The ideal setting for such a setup is Balanced.

SMB Shares

Scenario:

The Marketing business unit has requested two Windows shares for their
environment. The first share is to accommodate home directories for all the
marketing users and the second share is for general purpose file sharing. Once the
shares are created, verify accessibility from an SMB client.

Given in the lab exercise is the information needed to configure SMB shares. If
needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Create home directories
• Create a general purpose share


SMB Shares
1. Create a share to accommodate home directories for all marketing users.

a. Enable SMB service. Use the WebUI, Windows sharing (SMB) page
or the isi services command.

b. Create a share. Use the WebUI, Windows sharing (SMB) page or


the isi smb shares command.

Hint: Be sure to set the access zone before creating the share.

i. Share name: home

ii. Path: /ifs/marketing/home/%U

iii. Allow variable expansion: yes

iv. Auto create directory: yes

v. Permissions: Set permissions so Everyone has full access

Hint: Use the --wellknown option when using the isi smb
shares permission command. In the WebUI, you may
change the permissions by clicking the View/Edit button.

c. Map the user home directory using the File Explorer on the
Jumpstation.

i. Folder: \\marketing\home

d. Verify that the home directory for user John is created. Use the File
system explorer page of the WebUI or the ls command.

2. Create a general share for all the marketing users.

a. Create a share. You can use the WebUI, Windows sharing (SMB)
page or the isi smb shares create command.

Hint: Be sure to set the access zone before creating the share.

i. Share name: mktg-share


ii. Path: /ifs/marketing/mktg-share

iii. Create SMB share directory: yes

iv. Directory ACLs: Do not change existing permissions

b. Map the mktg-share share using the File Explorer on the
Jumpstation.

i. Folder: \\marketing\mktg-share

c. Verify access to the share.

i. Create a new folder in the mktg-share.

Question: Why do you get an access denied message when


attempting to create the new folder?

Instructor Note: Permission issue

ii. Change the permissions to Full Control for member


Everyone. Use the Windows sharing (SMB) page or the isi
smb shares permission command. Try to create a new
folder in the mktg-share.

Question: Why do you still get an access denied message when attempting
to create the new folder?

Instructor Note: The share is POSIX authoritative.

iii. Change the permissions for the mktg-share directory. You can
use the File system explorer page of the WebUI or the


chmod command. Try to create a new folder in the mktg-share.
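If you prefer to see the CLI up front, a minimal sketch of the share commands used in this exercise (names and paths follow this lab's setup; the walk through sections below show the same commands step by step):

   isi smb shares create home --zone=marketing --path=/ifs/marketing/home/%U --allow-variable-expansion=yes --auto-create-directory=yes
   isi smb shares permission modify home --zone=marketing --wellknown Everyone --permission-type=allow --permission=full
   isi smb shares create mktg-share --zone=marketing --path=/ifs/marketing/mktg-share --create-path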


WebUI Step-by-Step Walk Through: Home Directories


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Enable SMB service.

a. Navigate to the Protocols, Windows sharing (SMB) page. Select


the SMB server settings tab. Check Enable SMB service, scroll
down and click Save changes.

3. Navigate to the Protocols, Windows sharing (SMB) page. Select the SMB
shares tab. Set the Current access zone to marketing. Click on Create an
SMB share.

4. In the Create an SMB share window, enter the fields as shown and click
Create share.


Note: When joining the Active Directory domain, the Domain Users group is
automatically added to the cluster's user group by default. This can be seen by
selecting Add member and choosing an AD user.

5. Map the user home directory.

a. Minimize the WebUI.


b. Open the File Explorer on the Jumpstation by clicking the icon on


the task bar. Right-click on This PC and click on Map network drive.

c. Enter the field as shown and select Finish.


d. In the left navigation panel, click This PC to view the Folders,


Devices and drives, and Network locations panel. Notice that the
home directory is mapped. Double-click on the mapped share to
ensure you have access.

6. Go back to the WebUI and navigate to the File system, File system explorer
page. Set the Current access zone to marketing. Click on home. Verify that
the john home directory has been created.


CLI Step-by-Step Walk Through: Home Directories


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Enable SMB service.

a. At the boston-1 prompt, run the isi services smb enable


command.

3. To create an SMB share to accommodate home directories for all marketing


users, at the boston-1 prompt, run the following commands:

a. Create share: isi smb shares create home --


zone=marketing --path=/ifs/marketing/home/%U --
allow-variable-expansion=yes --auto-create-
directory=yes

b. Modify permissions: isi smb shares permission modify


home --zone=marketing --wellknown Everyone --
permission-type=allow --permission=full

4. Map the user home directory.

a. Minimize mRemoteNG.

b. Open the File Explorer on the Jumpstation by clicking the icon on the
task bar. Right-click on This PC and click on Map network drive.


c. Enter the field as shown and select Finish.

d. In the left navigation panel, click This PC to view the Folders,


Devices and drives, and Network locations panel. Notice that the


home directory is mapped. Double-click on the mapped share to


ensure you have access.

5. Verify that the john home directory has been created. Go back to
mRemoteNG, at the boston-1 prompt, run ls /ifs/marketing/home and
ensure that john is listed.


WebUI Step-by-Step Walk Through: General Purpose Share


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Create a general share for all the marketing users.

a. Navigate to the Protocols, Windows sharing (SMB) page. Set the


Current access zone to marketing. Click on Create an SMB share.

b. In the Create an SMB share window, enter the fields as shown and
click Create share.

3. Minimize the WebUI. Go to the File Explorer on the Jumpstation to map the
share. Map the mktg-share share. Refer to the previous task if required.


4. Verify access to the share.

a. In the mapped mktg-share, right click in the right panel, select New
and then select Folder. Why do you get an access denied message?
This is because you do not have the needed permissions.


b. Go back to the WebUI. Navigate to Protocols, Windows sharing


(SMB) page, marketing access zone. On the mktg-share, select
View / Edit and then click Edit SMB share. Change the permissions
as shown. Save changes and Close.

c. Go back to the File Explorer on the Jumpstation and try to create a


new folder in mktg-share. Why do you still get access denied? This is
because when creating an SMB share with the ACL policy settings
set to Balanced, and the Directory ACLs setting is Do not change
existing permissions, the share is POSIX authoritative.


5. Change the permissions on the mktg-share folder.

a. Go back to the WebUI. Navigate to the File system, File system


explorer page, marketing access zone. On the mktg-share folder,
select View / Edit and then select Edit properties. Change the
permissions as shown. Save changes and Close.

b. Go back to the File Explorer on the Jumpstation and try to create a


new folder in mktg-share. You will be able to create the folder
successfully.

Note: When a share is created with the ACL policy settings set to
Balanced and the Directory ACLs setting set to Apply Windows
default ACLs, the share is ACL authoritative.


CLI Step-by-Step Walk Through: General Purpose Share


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create a general share for all the marketing users. At the boston-1 prompt,
run the isi smb shares create mktg-share --zone=marketing --
path=/ifs/marketing/mktg-share --create-path command.

3. Minimize mRemoteNG. Go to the File Explorer on the Jumpstation to map the


share. Map the mktg-share share. Refer to the previous task if required.

4. Verify access to the share.

a. In the mapped mktg-share, right click in the right panel, select New
and then select Folder. Why do you get an access denied message?
This is because you do not have the needed permissions.


b. At the boston-1 prompt, run the isi smb shares view mktg-
share --zone marketing | grep -ie Permission -e
Everyone command. Notice that the permission is set as allow read
for Everyone.

c. At the boston-1 prompt, run the isi smb shares permission


modify mktg-share --zone=marketing --wellknown
Everyone --permission-type=allow --permission=full
command. The permission for Everyone is now set to Full Control.

d. Go back to the File Explorer on the Jumpstation and try to create a


new folder in mktg-share. Why do you still get access denied? This is
because when creating an SMB share with the ACL policy settings


set to Balanced, and the Directory ACLs setting is Do not change


existing permissions, the share is POSIX authoritative.

5. Change the permissions on the mktg-share folder.

a. At the boston-1 prompt, run the chmod 777


/ifs/marketing/mktg-share command.

b. Go back to the File Explorer on the Jumpstation and try to create a


new folder in mktg-share. You will be able to create the folder
successfully.

Note: When a share is created with the ACL policy settings set to
Balanced and the Directory ACLs setting set to Apply Windows
default ACLs, the share is ACL authoritative.

NFS Exports

Scenario:

Now that you have configured the Windows shares, export the directories to the
Linux users. The IT manager has asked that you create, configure, export, and
mount a general purpose directory for use by the marketing members that use
Linux clients. Although the dominant access to the files is using Linux, some
Windows users may also access the files. The IT manager wants you to
understand root squash and the permission behavior.

Given in the lab exercise is the information needed to configure NFS exports. If
needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Create an export
• Mount an export


NFS Exports
1. Create a general purpose NFS export for the marketing members using Linux
clients.

a. Create a general purpose directory in the marketing access zone.
You can use the File system explorer page of the WebUI or the
mkdir and chmod commands.

i. Directory Name: mktg-export

ii. Permissions: Read, Write, Execute for User, Group, and


Others

b. Enable NFS service. You can use the UNIX sharing (NFS) page of
the WebUI or the isi services command.

c. Create the export for the mktg-export directory in the marketing


access zone. You can use the UNIX sharing (NFS) page of the
WebUI or the isi nfs exports command.

i. Directory Path: /ifs/marketing/mktg-export

ii. Read-Write Clients: 192.168.3.3

Question: When a client is not specified, what is the default


behavior for all the clients connecting to the export?

Instructor Note: Any client not restricted by rules can access the
export.

d. Create an NFS alias for the export in the marketing access zone.
You can use the NFS aliases tab of the UNIX sharing (NFS) page in
the WebUI or the isi nfs aliases command.

i. Alias Name: gen-purpose

ii. Path: /ifs/marketing/mktg-export


2. Mount the NFS directory exported from the cluster to the Centos machine.
This step can be performed only with the CLI.

a. Open an SSH connection to the Centos machine in the lab
environment. Login using root/Dees12345!

b. Mount the exported mktg-export directory on the Centos client. Use


the mount command.

i. NFS export: marketing:/gen-purpose

Hint: If the DNS and alias are not configured, use the IP
address for the marketing access zone and the directory path
for the export.

ii. Mount point: /mnt/gen-purpose

Hint: You need to create the mount point directory on the


Centos client using the mkdir command.

c. Create an empty file named test using the touch command. Verify the
owner and group for the test file using the ls command.

Question: Why is the user and group not root on the file test that
user root on the Centos client created?

Instructor Note: root is squashed to nobody

d. On the cluster, revoke the Write permission for Others in the mktg-
export directory and note that the owner for the directory is user root.
You can use the File system explorer page of the WebUI or the
chmod and ls commands.

e. In the Centos client, try to create another empty file named test1.

Question: Why do you get a permission denied response?


Instructor Note: Because root on the cluster and root on the client
are not the same user.

f. Add the Centos root (192.168.3.3) to the Root clients rule of the NFS
export for the mktg-export directory. You can use the UNIX sharing
(NFS) page of the WebUI or the isi nfs exports command.

g. Try to create file test1 again on the Centos client. Verify the owner
and group for file test1.

h. Remove all the empty files created using the rm command.
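If you prefer to see the CLI up front, a minimal sketch of the export commands used in this exercise (paths, IP address, and zone follow this lab's setup; replace <ID> with the export ID returned by the list command):

   isi nfs exports create /ifs/marketing/mktg-export --zone marketing --read-write-clients 192.168.3.3
   isi nfs aliases create /gen-purpose /ifs/marketing/mktg-export --zone marketing
   isi nfs exports list --zone marketing
   isi nfs exports modify <ID> --zone marketing --root-clients 192.168.3.3

On the Centos client, the export is then mounted with:

   mkdir /mnt/gen-purpose
   mount marketing:/gen-purpose /mnt/gen-purpose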


WebUI Step-by-Step Walk Through: Create an Export


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Create the mktg-export directory.

a. Navigate to the File System, File system explorer page. Set the
Current access zone field to marketing. Click on Create directory.

b. Enter the fields as shown and click Create directory.


3. Enable NFS service.

a. Navigate to the Protocols, UNIX sharing (NFS) page. Select the


Global settings tab. Check Enable NFS export service and click
Save changes.


4. Create the export for the mktg-export directory.

a. Navigate to the Protocols, UNIX sharing (NFS) page. Set the


Current access zone field to marketing. Click Create export.

b. Enter the fields as shown and click Create export.

Note: IP address 192.168.3.3 is the NFS client (centos machine) that


you are giving read-write permissions to.


5. Create an NFS alias for the export so the user does not need to know the full
directory path of the export. An alias creates a shortcut for the directory path
name.

a. Navigate to the Protocols, UNIX sharing (NFS) page. Select the


NFS aliases tab. Set the Current access zone field to marketing.
Click Create alias.

b. Enter the fields as shown and click Create alias.


CLI Step-by-Step Walk Through: Create an Export


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create the mktg-export directory. At the boston-1 prompt, run the following
commands:

a. Create: mkdir /ifs/marketing/mktg-export

b. Change Permissions: chmod 777 /ifs/marketing/mktg-


export

3. Enable NFS service.

a. At the boston-1 prompt, run the isi services nfs enable


command.

4. Create the export for the mktg-export directory. At the boston-1 prompt, run
the following commands:

a. Create Export: isi nfs exports create


/ifs/marketing/mktg-export --zone marketing --read-
write-clients 192.168.3.3

Note: IP address 192.168.3.3 is the NFS client (centos machine) that


you are giving read-write permissions to.

b. View export settings:

i. Get the export ID: isi nfs exports list --zone


marketing

ii. View settings: isi nfs exports view <ID> --zone


marketing


Replace the <ID> field in the command with the export ID.

5. Create an NFS alias for the export so the user does not need to know the full
directory path of the export. An alias creates a shortcut for the directory path
name. At the boston-1 prompt, run the following commands:

a. Create: isi nfs aliases create /gen-purpose


/ifs/marketing/mktg-export --zone marketing

b. Verify: isi nfs aliases list --zone marketing


Step-by-Step Walk Through: Mount the Export


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. In the Connections panel, double-click
Centos. Login using root/Dees12345!

2. Mount the exported mktg-export directory to the Centos client. At the


root@centos prompt, perform the following steps:

a. Create a mount point on the Centos machine to mount the mktg-


export directory export on the cluster. Run the mkdir /mnt/gen-
purpose command. The /mnt/gen-purpose is the directory on the
Centos machine that you are mounting the cluster export to.

b. Mount the export: Run the mount marketing:/gen-purpose


/mnt/gen-purpose command. You will use the NFS alias you
created earlier (gen-purpose).

Note: If DNS were not configured, you can use an IP address that is
in the marketing IP address pool configured on the cluster. Without
using DNS and without using an NFS alias, the command may look
like mount 192.168.3.33:/ifs/marketing/mktg-export
/mnt/gen-purpose

c. Verify the mount: Run the mount | grep marketing


command.

3. On the Centos SSH connection, run the whoami command to verify you are
user root.

a. Create an empty file named test by running the touch /mnt/gen-


purpose/test command.

b. Run the ls -l /mnt/gen-purpose command. The user and
group are not root as the root of the Centos client is squashed to user
nfsnobody.


c. Return to the cluster CLI. At the boston-1 prompt, run the ls -l
/ifs/marketing/mktg-export command and note that root is squashed as
nobody. Alternatively, you may navigate to the File system, File
system explorer page of the WebUI, marketing access zone, mktg-
export directory, and view the details on the test file.

4. Revoke the Write permission for Others. You may perform this step either
using the WebUI or CLI.

a. WebUI option: On the Jumpstation, open a Chrome browser and


launch the WebUI (https://192.168.3.11:8080). Login to the cluster
using root/Dees12345!

i. Navigate to the File system, File system explorer page,


marketing access zone, and select View / Edit for mktg-
export.

ii. Select Edit properties. Note that the user is root. Uncheck the
write permission for Others. Click on Save changes and
Close.

b. CLI option: On the Jumpstation, open mRemoteNG and double-


click on Bos-node1, and SSH using root/Dees12345!


i. At the boston-1 prompt, run the chmod 775


/ifs/marketing/mktg-export command.

ii. Verify by running the ls -led /ifs/marketing/mktg-


export command. Note that the user is root.

5. Return to the Centos CLI and run the touch /mnt/gen-purpose/test1


command. You will get a Permission denied response. The root on the Centos
machine is not the same user as the root on the cluster. The root on the
cluster is the owner of the mktg-export directory. The Centos root is
squashed as user nfsnobody who belongs to Others and has no write
permission on the mktg-export directory.

6. The root user of the Centos client is a trusted root for the cluster. Change the
root squash for the client. You may perform this step either using the WebUI or
CLI.

a. WebUI option: On the Jumpstation, open a Chrome browser and


launch the WebUI (https://192.168.3.11:8080). Login to the cluster
using root/Dees12345!

i. Navigate to the Protocols, UNIX sharing (NFS) page, NFS


exports tab, access zone marketing. Click on View/Edit for
/ifs/marketing/mktg-export.

ii. Click Edit export. In the Edit NFS export details window,
enter 192.168.3.3 in the Root clients field. This is the Centos
client where user root is trusted. Click Save changes and then
Close.


b. CLI option: On the Jumpstation, open mRemoteNG and double-


click on Bos-node1, and SSH using root/Dees12345!

i. Get ID: Run the isi nfs exports list --zone


marketing command to get the export ID.

ii. Add root client: Run the isi nfs exports modify <ID>
--zone marketing --root-clients 192.168.3.3
command. Replace the <ID> field with the export ID.

iii. Verify: Run the isi nfs exports view <ID> --zone
marketing command. Replace the <ID> field with the export
ID. Notice that 192.168.3.3 has been added as a root client.
This is the Centos client where user root is trusted.

7. Return to the Centos CLI. Run the touch /mnt/gen-purpose/test1


command to create an empty file. Notice that there is no permission denied
response.

a. Run the ls -l /mnt/gen-purpose command. By adding the


client to the Root clients on the cluster, root is not squashed for the
Centos client.

Note: The 192.168.3.3 client is configured under the Root clients rule as well as
the Always read/write clients rule. The Root clients rule takes precedence.

b. Return to the cluster CLI. At the boston-1 prompt, run the ls -l
/ifs/marketing/mktg-export command and note that root is the owner of


the test1 file. Alternatively, you may navigate to the File system, File
system explorer page of the WebUI, marketing access zone, mktg-
export directory, and view the details on the test1 file.

8. Run the rm /mnt/gen-purpose/test* command to clean up the directory.


Enter Y when prompted to confirm the directory cleanup.

S3

Scenario:

The IT manager wants the Marketing users to access data from an S3 application.
Demonstrate the S3 configuration steps to create a bucket and access the bucket
using other protocols.

Given in the lab exercise is the information needed to configure S3. If needed, use
the WebUI or the CLI step-by-step walk through portion of the exercise for
guidance.

In this lab, you perform the following tasks:


• Create an S3 bucket
• Access the S3 bucket


S3 Buckets
1. Configure S3 for the marketing access zone and verify multi-protocol access.

a. Enable the S3 service. Use the Object storage (S3) page of the
WebUI or the isi services command.

b. Set the base domain for the marketing access zone settings. Use the
Object storage (S3) page of the WebUI or the isi s3 settings
command.

i. Base domain: marketing.dees.lab

c. Create a bucket in the marketing access zone. Use the Object


storage (S3) page of the WebUI or the isi s3 buckets command.

i. Bucket name: mktg-bucket

ii. Path: /ifs/marketing/mktg-bucket

iii. Owner: DEES\john

iv. Create path: Yes

d. Create a secret key for the Active Directory user John. Note the
Access id and the Secret Key. Use the Object storage (S3) > Key
management page of the WebUI or the isi s3 keys command.

e. Create a new account in the S3 Browser. Open the browser from the
desktop or taskbar. Use the Access id and the Secret Key obtained
from the previous task.

i. Account Name: PowerScale

ii. Account Type: S3 Compatible Storage

iii. REST Endpoint: marketing.dees.lab:9021

f. Using the S3 Browser, upload the Animation directory located in the


File-Repository directory to mktg-bucket.


g. Create a new SMB share in the marketing access zone. Use the
Windows Sharing (SMB) page of the WebUI or the isi smb
shares command.

i. Name: mktg-bucket

ii. Path: /ifs/marketing/mktg-bucket

iii. Directory ACLs: Apply Windows default ACLs

iv. Permissions: Set permissions so Everyone has full access

h. Map the mktg-bucket share using Windows Explorer. Verify that you
are able to access the Animation folder.

i. Folder: \\marketing\mktg-bucket
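If you prefer to see the CLI up front, a minimal sketch of the S3 commands used in this exercise (bucket name, owner, and zone follow this lab's setup; the walk through sections below show the same commands step by step):

   isi services s3 enable
   isi s3 settings global modify --https-only=no
   isi s3 settings zone modify --base-domain marketing.dees.lab --zone marketing
   isi s3 buckets create mktg-bucket /ifs/marketing/mktg-bucket --create-path --owner DEES\\john --zone marketing
   isi s3 keys create DEES\\john --zone marketing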


WebUI Step-by-Step Walk Through: Configure S3


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Create an S3 bucket in the marketing access zone.

a. First, we need to enable S3 and the S3 HTTP port. Navigate to the


Protocols, Object storage (S3) page. Select the Global settings
tab. Check the box to enable S3 and HTTP and click Save changes.
We will leave the port number with the default value.

b. Configure the zone settings. Select the Zone settings tab and
change the current access zone to marketing. Enter the base
domain as shown and click Save changes.

c. To create a bucket, select the Buckets tab and set the current
access zone to marketing. Click Create Bucket and enter the fields


as shown. Select the Owner as DEES\john as shown in the second


graphic. Once done, click Create Bucket.


d. Once done, you will find the bucket listed in the Buckets table.

3. Add contents to the S3 bucket and verify multi-protocol access.

a. Create a secret key for the Active Directory user John. On the Object
storage (S3) page, select the Key management tab. Set the current
access zone to marketing. Select user John and click Create new
key. A new secret key will be listed in the Secret key Details
table. Note the Access id and the Secret key for user John.


b. Open the S3 Browser using the taskbar or the desktop. Enter the
details as shown in the graphic to create a new account. You need to
copy the Access id and the Secret Key from the WebUI and paste
them in the new account form. Once done, click Add new account.


c. The mktg-bucket will be listed on the left. To add contents, select the
bucket and then click Upload > Upload Folder. Choose the
Animation directory from the File-Repository directory and click
Select Folder. Once uploaded, the folder will appear on the right.


d. Now, try to access the bucket using SMB. Create a SMB share in the
marketing access zone as shown. Refer to the SMB Shares
exercise of the lab guide for the detailed procedure.


e. Map the mktg-bucket share on the Jumpstation. Verify that you are
able to access the Animation folder.


CLI Step-by-Step Walk Through: Configure S3


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create an S3 bucket in the marketing access zone.

a. First, we need to enable S3 and the S3 HTTP port. At the


boston-1 prompt, run the isi services s3 enable command.
Then, run the isi s3 settings global modify --https-
only=no command.

b. Verify by running the isi s3 settings global view command.

c. Configure the zone settings. At the boston-1 prompt, run the isi s3
settings zone modify --base-domain
marketing.dees.lab --zone marketing command.

d. Verify by running the isi s3 settings zone view --zone


marketing command.

e. To create a bucket, run the isi s3 buckets create mktg-


bucket /ifs/marketing/mktg-bucket --create-path --
owner DEES\\john --zone marketing command.

f. Verify by running the isi s3 buckets view mktg-bucket --


zone marketing command.


3. Add contents to the S3 bucket and verify multi-protocol access.

a. Create a secret key for the Active Directory user John. At the
boston-1 prompt, run the isi s3 keys create DEES\\john --
zone marketing command. Note the Access id and the Secret
key for user John.

b. Open the S3 Browser using the taskbar or the desktop. Enter the
details as shown in the graphic to create a new account. You need to
copy the Access ID and the Secret Key from the CLI and paste them
in the new account form. Once done, click Add new account.


c. The mktg-bucket will be listed on the left. To add contents, select the
bucket and then click Upload > Upload Folder. Choose the
Animation directory from the File-Repository directory and click
Select Folder. Once uploaded, the folder will appear on the right.


d. Now, try to access the bucket using SMB. At the boston-1 prompt,
run the isi smb shares create mktg-bucket --
zone=marketing --path=/ifs/marketing/mktg-bucket -
-i=yes command. Change the permissions by running the isi smb
shares permission modify mktg-bucket --
zone=marketing --wellknown Everyone --permission-
type=allow --permission=full command.

e. Verify by running the isi smb shares view mktg-bucket --


zone=marketing command.

f. Map the mktg-bucket share on the Jumpstation. Verify that you are
able to access the Animation folder.


HTTP - Optional

Scenario:

The Marketing business unit has requested the use of HTTP for their clients. The IT
manager has asked you to configure HTTP on the cluster for the Marketing
business unit and test the connection.

Given in the lab exercise is the information needed to configure HTTP. If needed,
use the WebUI or the CLI step-by-step walk through portion of the exercise for
guidance. This is an optional lab exercise for the course. You may choose to either
perform or skip this exercise.

In this lab, you perform the following tasks:


• Configure and enable HTTP
• Connect and verify HTTP access


HTTP
1. Use the following information to configure HTTP for the cluster.

a. Create the HTTP root directory in the marketing access zone. You
can use the File system explorer page of the WebUI or the mkdir
command.

i. Directory name: streaming

ii. Permissions: Read, Write, Execute for User, Group and


Others.

iii. Path: /ifs/marketing

b. Configure HTTP for the cluster. You can use the HTTP settings page
of the WebUI or the isi http settings command.

i. HTTP Service Enabled: Yes

ii. HTTP Authentication: Basic Authentication Only

iii. Document Root Directory: /ifs/marketing/streaming

2. Use the following information to access the cluster using HTTP.

a. Add contents to the HTTP root directory.

i. Change the HTTP root directory (/ifs/marketing/streaming) to


ACL authoritative. Use the chown -s command.

ii. Copy the main_config_changes.log file from the .ifsvar


directory to the HTTP base directory. Use the cp command.

iii. Give full permission to the copied log file in the HTTP base
directory. Use the chmod command.

b. Access the cluster using HTTP.

i. Go to URL 192.168.3.11 and login as John (username:


dees\john, password: Dees12345!)

ii. Open the log file to verify access.
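If you prefer to see the CLI up front, a minimal sketch of the HTTP configuration used in this exercise (the directory path and settings follow this lab's setup):

   mkdir /ifs/marketing/streaming
   chmod 777 /ifs/marketing/streaming
   isi http settings modify --service enabled --basic-authentication yes --server-root /ifs/marketing/streaming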


WebUI Step-by-Step Walk Through: Configure HTTP


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Create HTTP root directory.

a. Navigate to the File System, File system explorer page. Set the
Current access zone field to marketing. Click on Create directory.

b. Enter the fields as shown and click Create directory.

3. Configure HTTP for the cluster.

a. Navigate to Protocols, HTTP settings. Enter the fields as shown and


click Save changes.


CLI Step-by-Step Walk Through: Configure HTTP


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create HTTP root directory. At the boston-1 prompt, run the following
commands:

a. Create directory: mkdir /ifs/marketing/streaming

b. Change permissions: chmod 777 /ifs/marketing/streaming

3. Configure HTTP for the cluster.

a. At the boston-1 prompt, run the isi http settings modify --


service enabled --basic-authentication yes --
server-root /ifs/marketing/streaming command.


Step-by-Step Walk Through: Accessing Using HTTP


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Add contents to the HTTP root directory. At the boston-1 prompt, perform the
following steps:

a. Change directory permissions to ACL authoritative by running the


chown -s root /ifs/marketing/streaming command. Since
the directory will be accessed by a Windows host, we will make the
directory ACL authoritative.

b. Copy the main_config_changes log file from the .ifsvar directory to


the HTTP base directory (/ifs/marketing/streaming): cp
/ifs/.ifsvar/main_config_changes.log
/ifs/marketing/streaming

c. Give full permissions to the log file by running the chmod 777
/ifs/marketing/streaming/main_config_changes.log
command. You will get an access denied message if the permissions
are not set.

3. Access the cluster using HTTP.

a. On the Jumpstation, open a Chrome browser or open a new tab on


an open browser. In the address bar, type 192.168.3.11, and then
press ENTER.

b. Enter the credentials for user John (username: dees\john, password:


Dees12345!) and then click Sign in.


c. To verify access, on the Index page, click on the log file to ensure the
file opens.


Data Protection and File Layout

Scenario:

As part of the documentation process, the IT manager has asked you to analyze
how OneFS uses data protection levels to protect data, and then lead a small
workshop for the team to ensure everyone understands OneFS data protection.
The IT manager has created exercises to identify the protection levels of the file
layouts for various protection stripes, as well as to determine the overhead for
cluster sizing. You will review and change the data protection settings at the node
pool, directory, and file levels.

Given in the lab exercise is the information needed to configure Data Protection. If
needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Answer the data protection questions
• Configure node pool level data protection
• Configure directory and file level data protection


Data Protection and File Layout


1. Before configuring data protection levels, reinforce your understanding of
protection overhead and file layout.

a. Data protection chart used for the exercises.

b. Answer the following questions.

i. Question: Given that you are using an existing Gen 6 cluster


running OneFS 9.0 that is set for N+2d:1n, what is the stated
protection overhead for an 8 node cluster?

Instructor Note: Answer: 13%

ii. Question: What is the percentage of protection overhead for


the cluster when adding another full chassis to the node pool?


Instructor Note: Answer: 12 nodes = 11%

iii. Question: Given 4 full chassis in a cluster with N+4n


protection, what is the protection overhead?

Instructor Note: Answer: 16 nodes = 25%

iv. Question: With an 18-node cluster, going from N+2n to N+4n,


what can be said about the overhead needed for protection?

Instructor Note: Answer: Doubles from 11% to 22%

v. Question: Reference the image below. Given a 1 MB file with
two data stripes, what is the protection level for this
file?


Instructor Note: Answer: N+2d:1n
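One way to sanity-check the overhead answers above (a sketch, assuming overhead is simply the FEC stripe units divided by the total stripe units in the widest stripe, with data stripe units capped at 16): an 8-node pool at N+2d:1n stripes across 2 drives per node, so 2 FEC of 16 units is 12.5%, which the chart rounds to 13%; at 12 nodes the stripe is capped at 16 data plus 2 FEC units, so 2 of 18 is about 11%; 16 nodes at N+4n gives 4 FEC of 16 units, or 25%; and an 18-node cluster moving from N+2n (2 of 18, about 11%) to N+4n (4 of 18, about 22%) doubles its overhead.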

2. Configure Data Protection at a Node Pool level.

a. Change the Requested protection for the vmware_100gb_6gb


node pool to +1n. Use the Storage pools page of the WebUI or the
isi storagepool nodepools command.

Hint: Be sure to get the node pool ID if you are performing this step
using the CLI.

Question: A message in the WebUI indicates that the L3 cache


cannot be enabled. What action would you need to take to enable L3
cache?


Instructor Note: Add SSDs to the nodes

Question: Check the SmartPools module status in the Summary tab.


Does a message inform you of a change?

Instructor Note: Yes. The message indicates the requested


protection is below the suggested protection.

b. Change the Requested protection in the default file pool policy to


+2d:1n. Use the Storage pools page of the WebUI or the isi
filepool command.

Question: Note the Storage Pools > Summary tab has an Info
message stating vmware_100gb_6gb requested protection is
different than the suggested protection. What does this mean?

Instructor Note: OneFS suggests a different protection level because


data is at risk.

c. Change the Requested protection for the vmware_100gb_6gb


node pool to +2d:1n. Use the Storage pools page of the WebUI or
the isi storagepool nodepools command.

3. Configure Data Protection at a Directory and File level.

a. Create directory mktg-DP in the marketing access zone. Ensure that


everyone has full access. Use the File system explorer page of the
WebUI or the mkdir and chmod commands.


b. Change the protection settings for the mktg-DP directory. Use the
File system explorer page of the WebUI or the isi set command.

i. Data storage target: vmware_100gb_6gb

ii. Requested protection: +1n

Hint: To edit these fields in the WebUI, be sure to uncheck the Managed by SmartPools option.

c. Create an SMB share for the mktg-DP directory. Use the Windows
sharing (SMB) page of the WebUI or the isi smb shares
command.

i. Share Name: mktg-DP

ii. Path: /ifs/marketing/mktg-DP

iii. Directory ACLs: Apply Windows Default ACLs

iv. Permissions: Set permissions so Everyone has full access

d. Map \\marketing\mktg-DP using the Windows File Explorer on the


Jumpstation. Copy the Animation folder from the File Repository to
the mapped share.

e. Verify the protection level on the /ifs/marketing/mktg-


DP/Animation/ANIMATION.wmv file. Use the File system explorer
page of the WebUI or the isi get command.

Note: Although the file pool policy is set at +2d:1n protection, the explicit protection setting on the mktg-DP directory takes precedence.

Question: In the output of the isi get /ifs/marketing/mktg-DP/Animation/ANIMATION.wmv command, what is the data layout of LEVEL 5+1?


Instructor Note: 5 data stripe units and 1 FEC stripe unit spread
across 1 drive per node, which means N+1n protection.

f. Change the Requested protection for the /ifs/marketing/mktg-


DP/Animation/ANIMATION.wmv file to +3d:1n. Use the File system
explorer page of the WebUI or the isi set command.

Note: Although the file pool policy is set at +2d:1n protection and the explicit protection setting on the mktg-DP directory is set at +1n, the explicit protection setting at the file level takes precedence.

g. Verify the protection level on the /ifs/marketing/mktg-


DP/Animation/ANIMATION.wmv file once again.

Question: What is the data layout of LEVEL 15+3/3?

Instructor Note: 15 data stripe units and 3 FEC stripe units spread across 3 drives per node.


WebUI Step-by-Step Walk Through: Configuring Node Pool Level Protection

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to the File System, Storage pools page, Summary tab. The
Summary tab displays storage pool status and local storage usage statistics.
The SmartPools info message indicates that the requested protection is
different than the protection suggested by OneFS. The Info message is not an
error.

3. Configure the +1n protection level for the vmware_100gb_6gb node pool.

a. Navigate to the SmartPools tab. Notice the protection setting of the


vmware_100gb_6gb node pool. Click on the Pool:
vmware_100gb_6gb. Change the protection level to +1n and click
Save changes.


b. Verify that the protection level has changed for vmware_100gb_6gb


in the SmartPools tab.

i. Go to the Summary tab and note the message for the


SmartPools module.


4. Change the protection level for the default file pool policy. Navigate to the File
Pool Policies tab and click View details for Default policy. Click on Edit
policy. Change the Requested protection to +2d:1n and click Save
changes.


5. Change the protection level for vmware_100gb_6gb back to +2d:1n and


verify. Use the same procedure mentioned in Step 3.


CLI Step-by-Step Walk Through: Configuring Node Pool Protection Levels

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Configure the +1n protection level for the vmware_100gb_6gb node pool.

a. At the boston-1 prompt, run the isi storagepool nodepools


list command. Note the ID and the Protection Policy for the
vmware_100gb_6gb node pool.

b. To change the Protection Policy, you need the ID of the node pool that you want to change. Run the isi storagepool nodepools modify <node pool ID> -p +1n command. The -p option specifies the protection level to set. Replace the <node pool ID> in the command with the ID for the vmware_100gb_6gb node pool.

c. Run the isi storagepool nodepools list command again and


verify that the protection level has changed.
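
A condensed sketch of this sequence in one SSH session; the node pool ID shown is hypothetical, so substitute the ID reported by the list command on your cluster:

isi storagepool nodepools list              # note the ID of vmware_100gb_6gb
isi storagepool nodepools modify 1 -p +1n   # "1" is a placeholder ID
isi storagepool nodepools list              # confirm the Protection Policy column changed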


3. Change the protection level for the default file pool policy to +2d:1n.

a. At the boston-1 prompt, run the isi filepool default-policy


modify --set-requested-protection +2:1 command.

b. Verify by running the isi filepool default-policy view


command.

4. Change the protection level for vmware_100gb_6gb back to +2d:1n. At the


boston-1 prompt, run the isi storagepool nodepools modify <node
pool ID> -p +2d:1n command. Replace the <node pool ID> in the
command with the ID for the vmware_100gb_6gb node pool.


WebUI Step-by-Step Walk Through: Configuring Directory and File Level Protection

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Configure the +1n protection level for the mktg-DP directory.

a. Navigate to the File system, File system explorer page, System


access zone. Click on View/Edit for the marketing directory and
verify the directory is managed by SmartPools. Close when done.

b. Change the Current access zone to marketing and create a new


directory named mktg-DP as shown.


c. Click on View/Edit for mktg-DP and click on Edit properties. Under


Protection settings, uncheck the Managed by SmartPools setting
and set the fields as shown in the graphic. Click Save changes and
close the window when done.


3. Add contents to the mktg-DP directory.

a. Create an SMB share named mktg-DP in the marketing access zone


as shown. Refer to the SMB shares exercise for the detailed
procedure.


b. Map the mktg-DP share and then copy the Animation folder from the
File-Repository directory to the mapped share.


4. Verify the protection level for the files located in mktg-DP

a. Navigate to the File system, File system explorer page, System


access zone. Navigate to the mktg-DP/Animation directory. Click on
View/Edit for the file ANIMATION.wmv. Verify that the protection
level for the file is set as +1n and close the window.


b. Although the file pool policy is set at +2d:1n protection, the explicit protection setting on the mktg-DP directory takes precedence.

5. Configure +3d:1n protection level for the ANIMATION.wmv file.

a. Navigate to the File system, File system explorer page, marketing


access zone. Navigate to the mktg-DP/Animation directory. Click on
View/Edit for the ANIMATION.wmv file. Click on Edit properties and
change the fields as shown. Once done, click on Save changes and
close.


b. View the file properties once again to verify the changes and close
the window.


c. Although the file pool policy is set at +2d:1n protection and the explicit protection setting on the mktg-DP directory is set at +1n, the explicit protection setting at the file level takes precedence.


CLI Step-by-Step Walk Through: Configuring Directory and File Level Protection

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Configure the +1n protection level for the mktg-DP directory. At the boston-1
prompt, run the following commands:

a. Create Directory: mkdir /ifs/marketing/mktg-DP

b. Change Permissions: chmod 777 /ifs/marketing/mktg-DP

c. Configure Protection Level: isi set -p +1n --diskpool


<node pool ID> /ifs/marketing/mktg-DP

Note: Replace the <node pool ID> in the command with the ID for
the vmware_100gb_6gb node pool. Run the isi storagepool
nodepools list command to get the ID.

d. Verify: isi get /ifs/marketing
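
The same steps condensed into a single sketch; the node pool ID (1) is a placeholder for the ID that isi storagepool nodepools list reports on your cluster:

mkdir /ifs/marketing/mktg-DP
chmod 777 /ifs/marketing/mktg-DP
isi storagepool nodepools list                       # find the vmware_100gb_6gb ID
isi set -p +1n --diskpool 1 /ifs/marketing/mktg-DP   # "1" is a placeholder ID
isi get /ifs/marketing                               # verify the requested protection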

3. Add contents to the mktg-DP directory.

a. Create an SMB share named mktg-DP in the marketing access


zone. At the boston-1 prompt, run the following commands:

i. Create Share: isi smb shares create mktg-DP --zone=marketing --path=/ifs/marketing/mktg-DP --i=yes


ii. Change Permissions: isi smb shares permission modify mktg-DP --zone=marketing --wellknown Everyone --permission-type=allow --permission=full

b. Map the mktg-DP share and then copy the Animation folder from the
File-Repository directory to the mapped share.


4. Verify the protection level for the files located in mktg-DP

a. Check the protection level for the ANIMATION.wmv file. At the


boston-1 prompt, run the isi get /ifs/marketing/mktg-
DP/Animation/ANIMATION.wmv command.

b. Although the file pool policy is set at +2d:1n protection, the explicit protection setting on the mktg-DP directory takes precedence.

5. Configure +3d:1n protection level for the ANIMATION.wmv file.

a. Configure: Run the isi set -p +3d:1n


/ifs/marketing/mktg-DP/Animation/ANIMATION.wmv
command.

b. Verify: Run the isi get /ifs/marketing/mktg-


DP/Animation/ANIMATION.wmv command.


Note: You may see a different protection level initially as it takes


some time for the requested protection level to change for the file.

c. Although the file pool policy is set at +2d:1n protection and the explicit protection setting on the mktg-DP directory is set at +1n, the explicit protection setting at the file level takes precedence.


SmartPools

Scenario:

The IT manager has asked you to tier cluster data on two different types of storage.
Use SmartPools to create node pools and tiers. Use file pool policies to specify target files to move and store on a different tier with different protection settings. The protection levels you choose are unique to your workflows, your clusters, and your organization's requirements. What is an acceptable failure rate for one
organization may be unacceptable to another.

Given in the lab exercise is the information needed to configure SmartPools. If


needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Create node pool and tier


SmartPools
1. Configure SmartPools.

a. Activate SmartPools license. For the lab environment, you will use
the trial license functionality. Use the Licensing page of the WebUI
or the isi license command.

b. Create a manual node pool named media_pool by splitting the existing six-node node pool into two three-node node pools. Note that this operation can only be performed using the OneFS CLI. Use the isi storagepool nodepools command.

i. Name: media_pool

ii. Nodes: 4,5, and 6

c. Change the protection level for media_pool to +3d:1n. Use the


Storage pools page of the WebUI or the isi storagepool
nodepools command.

d. Build a tier to encompass the newly created node pool. Use the
Storage pools page of the WebUI or the isi storagepool
tiers command.

i. Name: media_tier

ii. Node pool: media_pool


WebUI Step-by-Step Walk Through: Configure SmartPools


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Navigate to the File System, Storage pools page, Summary tab. The
Summary tab displays storage pool status and local storage usage statistics.
Note that in the Status section, under the Module column, you will notice that
Policies indicates a Status of Caution because SmartPools is not yet
licensed. The SmartPools and SmartPool settings module should both show
a status of Info or Good. The CloudPools module should show a status of
Caution because CloudPools is also not yet licensed.

3. Activate SmartPools license. For the lab environment, you will use the trial
license functionality.

a. Navigate to the Cluster management, Licensing page. At the


bottom of the page, select the Manage trials button.


b. Check SmartPools and click on Start trial. Once done, close the
window.

c. Navigate to the Licensing page, Software licenses overview table.


Ensure that the Status of SmartPools is set as Evaluation.

4. Create a manual node pool named media_pool by splitting the existing six-node node pool into two three-node node pools. Note that you can only perform this operation using the OneFS CLI.

a. On the Jumpstation, open mRemoteNG from the taskbar. Select


Bos-node1, and SSH using root/Dees12345!

b. At the boston-1 prompt, run the isi storagepool nodepools


create media_pool -n 4,5,6 command.

c. To verify, return to the WebUI. Navigate to the File system, Storage


pools page, SmartPools tab. Verify that the newly created
media_pool node pool shows as consisting of nodes 4 – 6. The
original node pool vmware_100gb_6gb now shows as only
encompassing nodes 1 – 3 because you have manually removed
Nodes 4 – 6 from that node pool.


d. The blue "X" in the Actions column indicates that the newly created media_pool was manually created and can be deleted. The gray "X" on the vmware_100gb_6gb node pool indicates that it was automatically created by OneFS.

5. Change the protection level for media_pool to +3d:1n.

a. In the SmartPools tab, click the Pool: media_pool link. Set the
requested protection for the node pool to +3d:1n, click Save
changes.

6. Create a tier to encompass the newly created node pool. You will configure a
file pool policy shortly to use the tier.

a. Ensure you are still on the Storage pools page, SmartPools tab. To
create a tier, click the Create a tier button and name the tier as
media_tier. Add the media_pool to the tier. Click Create tier.


b. Verify media_tier displays in the Tiers and node pools list.

The media_pool node pool displays under the media_tier. In this


example, the tier has only one node pool in it, but additional node
pools can be added to media_tier.


CLI Step-by-Step Walk Through: Configure SmartPools


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Activate SmartPools license. For the lab environment, you will use the trial
license functionality.

a. At the boston-1 prompt, run the isi license add --evaluation SmartPools command. At the : prompt, type q. To agree to the EULA, type yes and press ENTER.

b. Verify by running the isi license list command.

3. Create a manual node pool named media_pool by splitting the existing six-node node pool into two three-node node pools. Note that this operation can only be performed using the OneFS CLI.

a. At the boston-1 prompt, run the isi storagepool nodepools


create media_pool -n 4 -n 5 -n 6 command.

b. To verify, run the isi storagepool nodepools list command.


Verify that the newly created media_pool node pool shows as
consisting of nodes 4 – 6. The original node pool
vmware_100gb_6gb now shows as only encompassing nodes 1 – 3


because you have manually removed nodes 4 – 6 from that node


pool.

c. The Manual column indicates that media_pool is a manually created node pool, whereas the original vmware_100gb_6gb was automatically created by OneFS when the nodes joined the cluster.

4. Change the protection level for media_pool to +3d:1n.

a. At the boston-1 prompt, run the isi storagepool nodepools


modify media_pool -p +3d:1n command.

b. To verify, run the isi storagepool nodepools list command.

5. Create a tier to encompass the newly created node pool. You will configure a
file pool policy shortly to use the tier.

a. At the boston-1 prompt, run the isi storagepool tiers


create media_tier --children media_pool command.

b. Verify by running the isi storagepool tiers view


media_tier command.
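
The whole SmartPools CLI sequence, condensed into one sketch; the commands are the ones used in the steps above, and the final tiers list command is an optional extra check (use isi storagepool tiers view media_tier as in step 5b if you prefer):

isi license add --evaluation SmartPools
isi storagepool nodepools create media_pool -n 4 -n 5 -n 6
isi storagepool nodepools modify media_pool -p +3d:1n
isi storagepool tiers create media_tier --children media_pool
isi storagepool tiers list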


File Pool Policies

Scenario:

The IT manager has asked you to use file pool policies to specify target files to move and store on a different tier with different protection settings.

Given in the lab exercise is the information needed to configure File Pool Policies.
If needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Configure Default File pool policy
• Create and verify a File pool policy


File Pool Policies


1. Configure the default File Pool Policy.

a. Review the cluster default file pool policy. Use the Storage pools
page of the WebUI or the isi filepool command.

Question: Without SmartPools licensed, what is the effect of the


default file pool policy on the cluster data?

Instructor Note: The default policy will be applied to all the data on
the cluster. The only exception is data for which there is some manual
configuration specified.

b. For this scenario, ensure that files only go to the Default node pool
(vmware_100gb_6gb). Change the default file policy in the Storage
pools page of the WebUI or use the isi filepool default-
policy command.

i. Data Storage Target: vmware_100gb_6gb

ii. Data SSD Strategy: Metadata read acceleration

iii. Snapshot Storage Target: vmware_100gb_6gb

iv. Snapshot SSD Strategy: Metadata read acceleration

Question: Where will the data on the cluster reside if the


storage target in the default file pool policy is set as
anywhere?

Instructor Note: The data will be randomly distributed and balanced


between vmware_100gb_6gb or media_pool

2. Create a file pool policy. A file pool policy consists of two components, Filters


and Actions. The filters are used to select the files and the actions modify the
setting of the files selected by the filter.

a. Copy the mydocuments folder from the File-Repository directory in


the Jumpstation into the mapped share mktg-share.

b. Review the location and protection level of data residing in the


mydocuments folder. Use the isi get command.

Tip: To get information on the location of data, be sure to use the -D option in the command.

Question: With the default storage target set as


vmware_100gb_6gb node pool, why do some files show that they
are located on the media_pool node pool?

Instructor Note: Initially when data is written, the files may reside in
other node pools as well. In time, the data will eventually be moved to
the target node pool configured in the default policy. This operation
can also be manually triggered by running the SmartPoolsTree job.

c. Create a file pool policy to change the protection and location of .mp4 files present in the mydocuments directory. Use the Storage pools page of the WebUI or the isi filepool policies command.

i. Policy name: media_policy

ii. Filter: The Filename must end with .mp4

iii. Data Storage Target: media_tier

iv. Data SSD Strategy: Metadata read acceleration

v. Protection Level: +3d:1n1d

Question: Since the policy is designated for .mp4 files, what


data access pattern should you set to optimize the content?


Instructor Note: Streaming

d. Test the effect the media_policy has on the files. Run the
SmartPools Tree job. Note that the job may take a few minutes to
complete. Use the Job operations page of the WebUI or the isi
job jobs command.

i. Path: /ifs

e. Verify that the policy and disk pool is media_pool for all the .mp4
files. Use the isi get command.


WebUI Step-by-Step Walk Through: Configure Default File Pool Policy

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Review the cluster default file pool policy.

a. Navigate to the File system, Storage pools page. Click the File pool
policies tab. Only the Default policy is displayed. At this stage the
Default policy applies to all data written to the cluster for which some
manual configuration has not been specified.

Note: You will not see the Create a file pool policy button if
SmartPools is not licensed. Without the license, you cannot create
file pool policies.

b. In the File pool policies table, in the Action column, click View
Details for the Default policy. In the View default policy details
window, review the different settings and close the window.


3. For this scenario, ensure that files only go to the default node pool
(vmware_100gb_6gb). With the Storage target and Snapshot storage
target set to anywhere, files will be placed across all node pools.

a. Click the View/Edit button for the Default policy. Click Edit policy.
Make the changes as shown and then click Save changes and
Close.


CLI Step-by-Step Walk Through: Configure Default File Pool Policy


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Review the cluster default file pool policy.

a. At this stage, only the default policy file pool policy is configured. The
default policy applies to all data written to the cluster for which some
manual configuration has not been specified.

Note: You will not be able to create a new file pool policy if
SmartPools is not licensed.

b. At the boston-1 prompt, run the isi filepool default-policy


view command and review the different settings.

3. For this scenario, ensure that files only go to the default node pool
(vmware_100gb_6gb). With the Storage target and Snapshot storage


target set to anywhere, files will be placed across all node pools.

a. At the boston-1 prompt, run the isi filepool default-policy modify --data-storage-target vmware_100gb_6gb --data-ssd-strategy metadata --snapshot-storage-target vmware_100gb_6gb --snapshot-ssd-strategy metadata command to change the storage target and snapshot target to the vmware_100gb_6gb node pool.

b. Verify by running the isi filepool default-policy view


command.


Step-by-Step Walk Through: Create File Pool Policy


A file pool policy consists of two components, Filters and Actions. The filters are
used to select the files and the actions modify the setting of the files selected by the
filter.

1. On the Jumpstation, login as DEES\John (password: Dees12345!).

2. Open the Windows File Explorer on the Jumpstation and map the mktg-
share, if it is not mapped. Copy the mydocuments folder from the File-
Repository directory into the mapped share mktg-share. If needed, you can
review the SMB Shares lab for the steps to map a share.

3. Review the location and protection level of data residing in the mydocuments
folder.

a. Open mRemoteNG from the taskbar. Select Bos-node1, and SSH


using root/Dees12345!

b. At the boston-1 prompt, run the isi get /ifs/marketing/mktg-share/mydocuments/Manual-battery-test-gen5.mp4 command to get the file attributes for the Manual-battery-test-gen5.mp4 file.

c. At the boston-1 prompt, run the isi get -D /ifs/marketing/mktg-share/mydocuments/Manual-battery-test-gen5.mp4 | grep "Disk pools" command to view the disk pools setting for the file. Since the target for the Default policy is node pool vmware_100gb_6gb, the files will be put into node pool vmware_100gb_6gb. If the output from the command shows files in the media_pool, eventually all the files will move to the vmware_100gb_6gb node pool.

d. If you do not change the target node pools for the Default policy, the files can be in either of the two node pools because the Default policy writes to anywhere and randomly balances which node pool a file is written to. Running the isi get -D /ifs/marketing/mktg-share/mydocuments/* | grep "Disk pools" command shows the distribution of the files across the two pools.

4. Create a file pool policy to change the protection and location of .mp4 files
present in the mydocuments directory. You may perform this step either
using the WebUI or the CLI.

a. WebUI Option: Open a Chrome browser and launch the WebUI


(https://192.168.3.11:8080). Login to the cluster using
root/Dees12345!

i. Navigate to the File system, Storage pools page, File pool policies tab. Click Create a file pool policy. Use the settings shown and click Create policy.


ii. Verify that the media_policy is listed in the File pool policies
tab.

b. CLI Option: Open mRemoteNG from the taskbar. Select Bos-node1,


and SSH using root/Dees12345!

i. At the boston-1 prompt, run the isi filepool policies create media_policy --begin-filter --name='*.mp4' --operator=eq --end-filter --set-requested-protection=+3d:1n1d --data-storage-target=media_tier --data-ssd-strategy metadata command.

ii. Verify by running the isi filepool policies view


media_policy command.
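
To also apply the streaming access pattern called out in the exercise question, a hedged sketch follows; the --data-access-pattern flag name is an assumption for illustration, so confirm the exact option with isi filepool policies modify --help before using it:

isi filepool policies modify media_policy --data-access-pattern streaming   # flag name assumed
isi filepool policies view media_policy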

5. Test the effect the media_policy has on the files. Run the SmartPools Tree
job. Note that the job may take a few minutes to complete. You may perform
this step either using the WebUI or the CLI.

a. WebUI option: Open a Chrome browser and launch the WebUI


(https://192.168.3.11:8080). Login to the cluster using
root/Dees12345!

i. To run the job, go to Cluster management, Job operations,


and the Job types tab. On the SmartPoolsTree job, select
Start job and Start job again.


ii. On the Job summary tab, monitor the job. The indication that
the job is completed is when the job is no longer present in the
Active jobs table.

b. CLI option: Open mRemoteNG from the taskbar. Select Bos-node1,


and SSH using root/Dees12345!

i. At the boston-1 prompt, run the isi job jobs start


SmartPoolsTree --path /ifs command.

ii. Run the isi job jobs list command. This displays a list
of all the active jobs. The indication that the job is completed is
when the job is no longer present in the list. Run the command
once every minute to find out if it has been completed.
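
If you prefer not to re-run the list command by hand, a minimal shell sketch that polls until the job disappears from the active list (the 60-second interval is arbitrary):

while isi job jobs list | grep -q SmartPoolsTree; do
    sleep 60    # wait before checking the active jobs list again
done
echo "SmartPoolsTree job is no longer active"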


6. When the job finishes, go to the SSH session and run the isi get -D
/ifs/marketing/mktg-share/mydocuments/*.mp4 | grep "Disk
pools" command. Verify that the policy and disk pool is media_pool for all
the .mp4 files.


File Filtering

Scenario:

A problem with the last storage system was the large amount of capacity
consumption due to individuals storing personal pictures on the shares. The IT
manager wants to ensure that users cannot use storage for specific file types within specific shares. Use the file filtering feature to prevent certain files from
consuming storage in a share.

Given in the lab exercise is the information needed to configure file filtering. If
needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, perform the following tasks:


• Configure file filtering for an SMB share


File Filtering
1. Use the information given below to add and verify a file filter on an SMB share.

a. Add a file filter to prevent users from writing JPEG image files on the marketing general purpose share (mktg-share). Edit the share using the Windows sharing (SMB) page of the WebUI or the isi smb shares command.

i. File Filter Enabled: yes

ii. File Extensions: .jpg

iii. File Filter Type: deny

b. Verify the file filter on mktg-share.

i. Copy the M & E folder from the File-Repository directory to


the mapped mktg-share. Skip the items that are restricted by
the copy operation.

Question: Why is there an access denied message during the


copy operation?

Instructor Note: As some files in the M & E folder are .jpg files, the filter added on the share prevents them from being copied.

ii. Compare the File-Repository > M & E folder with the mktg-
share > M & E folder.


WebUI Step-by-Step Walk Through: Configure File Filtering


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Add a file filter to prevent users from writing JPEG image files on the
marketing general purpose share.

a. Navigate to Protocols, Windows sharing (SMB) page, set access


zone to marketing. On mktg-share, click View/Edit and click Edit
SMB share.

b. In the Edit SMB share details window, go to the File filter section
and check Enable file filters. Add the file extension .jpg as shown
and then save the changes.

3. Verify the file filter on mktg-share.


a. Map the mktg-share, if not already mapped. Use Windows Explorer


to copy the M & E folder from the File-Repository directory to the
mapped mktg-share.

b. The user is notified with an access denied message when trying to


store JPEG files to the share. Select the checkbox and click Skip.

c. Compare the File-Repository > M & E folder with the mktg-share >
M & E folder. Notice that the JPEG files did not copy over from the
Sat Images folders, but other files such as the Thumbs.db are
copied successfully.

4. Similarly, you can add file filters to an access zone in the Access, File filter page of the WebUI. The file filter on an SMB share takes precedence over the file filter on the access zone where the share resides.


CLI Step-by-Step Walk Through: Configure File Filtering


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Add a file filter to prevent users from writing JPEG image files on the
marketing general purpose share.

a. At the boston-1 prompt, run the isi smb shares modify mktg-share --zone=marketing --file-filtering-enabled=yes --file-filter-extensions=.jpg --file-filter-type=deny command.

b. Verify by running the isi smb shares view mktg-share --zone marketing | grep "File Filter" command.

3. Verify the file filter on mktg-share.

a. Map the mktg-share, if not already mapped. Use Windows Explorer


to copy the M & E folder from the File-Repository directory to the
mapped mktg-share.

b. The user is notified with an access denied message when trying to


store JPEG files to the share. Select the checkbox and click Skip.


c. Compare the File-Repository > M & E folder with the mktg-share >
M & E folder. Notice that the JPEG files did not copy over from the
Sat Images folders, but other files such as the Thumbs.db are
copied successfully.

4. Similarly, you can add file filters to an access zone using the isi file-filter settings modify command. The file filter on an SMB share takes precedence over the file filter on the access zone where the share resides.
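
A hedged sketch of the zone-level command; the flag names are assumed to mirror the SMB share options used above, so verify them with isi file-filter settings modify --help before relying on them:

isi file-filter settings modify --zone=marketing --file-filtering-enabled=yes --file-filter-extensions=.jpg --file-filter-type=deny   # flag names assumed
isi file-filter settings view --zone=marketing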


SmartQuotas

Scenario:

A portion of the storage capacity of the sales general purpose directory will be
allocated to each user of the directory. The IT manager has asked you to use
quotas to enforce storage limits. You will start by setting up report generation and
event notifications. Next, create a directory type quota and apply user quotas to the
general purpose directory. Ensure to test the settings.

Given in the lab exercise is the information needed to configure SmartQuotas. If


needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Report generation and event notification
• Add a directory type quota to a directory
• Add default user quota to a directory
• Test quota directory
• Edit default user quotas


SmartQuotas
1. Use the following information to change the global quota reporting frequency
and notification rules.

a. Activate SmartQuotas license. For the lab environment, you will use
the trial license functionality.

b. Change the frequency of the scheduled reports. Use the


SmartQuotas page of the WebUI or the isi quota settings
reports command.

i. Schedule: Run multiple policies every 2 minutes, every day.

c. Add the global notification rules for quotas. Use the SmartQuotas
page of the WebUI or the isi quota settings notifications
command.

i. Notification rule types: soft limit exceeded, hard limit


exceeded, hard - write access denied.

ii. Notify owner: yes

iii. Email: [email protected]

iv. Notification delay (where applicable): No delay

v. Schedule (where applicable): Run multiple policies every 5 minutes, every day.

2. Use the following information to configure directory type quotas and default-
user quotas on the general purpose sales directory.

a. Create a general purpose share for the sales access zone. Use the
Windows sharing (SMB) page of the WebUI or the isi smb
shares command.

i. Share Name: sales-gen

ii. Path: /ifs/sales/sales-gen

iii. Create SMB share directory: Yes


iv. Directory ACLs: Apply Windows Default ACLs

v. Permissions: Set permissions so Everyone has full access

b. Using Windows Explorer, map the \\sales\sales-gen share.

c. Add a directory type quota on the /ifs/sales/sales-gen directory. Use


the SmartQuotas page of the WebUI or the isi quota quotas
command.

i. Quota Type: Directory

ii. Target Directory Path: /ifs/sales/sales-gen

iii. Soft Limit: 300 MB

iv. Soft Limit Grace Period: 30 minutes

v. Hard Limit: 400 MB

vi. Show available space: Soft or Hard limit threshold

d. Add the default-user quota on the /ifs/sales/sales-gen directory. Use


the SmartQuotas page of the WebUI or the isi quota quotas
command.

i. Quota Type: Default-user

ii. Target Directory Path: /ifs/sales/sales-gen

iii. Hard Limit: 10 MB

e. Test the quotas on the /ifs/sales/sales-gen directory.

i. Using Windows Explorer, copy the File-Repository > M & E >


Sat Images folder to the mapped sales-gen share.

Question: Is the copy operation successful? Why or Why not?


Instructor Note: No, because the folder size is larger than the hard
limit enforced on the user.

ii. Observe the reports generated. Use the SmartQuotas page of


the WebUI or the isi quota reports command.

Question: Why are reports generated every 2 minutes?

Instructor Note: The quotas are configured with the global reporting settings, which schedule reports every 2 minutes, every day.

iii. View the list of all quotas. Use the SmartQuotas page of the
WebUI or the isi quota quotas command.

Question: Why is a quota listed for user John?

Instructor Note: The default-user quota automatically creates a


quota for all users of the target directory.

f. Edit the quotas for default-user and for a specific user.

i. Change the hard limit enforced on user John to 200 MB. Use
the SmartQuotas page of the WebUI or the isi quota
quotas command.

Hint: To edit a quota on a specific user, be sure to first remove its existing link with the default-user quota.

ii. Using Windows Explorer, copy the File-Repository > M & E >
Sat Images folder to the mapped sales-gen share.


Question: Is the copy operation successful? Why or Why not?

Instructor Note: Yes, because the folder size is smaller than the hard
limit enforced on user John.

iii. Change the hard limit enforced on all users to 250 MB by


changing the setting on the default-user quota. Use the
SmartQuotas page of the WebUI or the isi quota quotas
command.

Question: Is the hard limit updated for user John? Why or Why
not?

Instructor Note: No, because user john is no longer linked to the


default-user quota and has a specific quota setting. Any specific user
quotas that are configured take precedence over a default user quota.


WebUI Step-by-Step Walk Through: Report Generation


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Activate the SmartQuotas license.

a. Navigate to Cluster Management, Licensing page. Check the


SmartQuotas license status. If the status is inactive, scroll down and
click Manage trials. Add the trial version for the purpose of this lab
exercise.

b. Navigate to the Licensing page, Software licenses overview table.


Ensure that the Status of SmartQuotas is set as Evaluation.


3. Change the frequency of the quota reports.

a. Navigate to the File System, SmartQuotas page. Click the Settings


tab and review the scheduled reporting options.

b. Enter the schedule as shown for Scheduled reporting.


c. Leave the Manual reporting section and the Email mapping section
with its default settings.


4. Add the global notification rules for quotas.

a. To add each notification rule given below, under the Notification


rules section, click the Add a notification rule button.

b. Add notification rule for Soft limit exceeded as shown.

c. Add notification rule for Hard limit exceeded as shown.


d. Add notification rule for Hard limit denied as shown.

5. Once done, scroll to the bottom of the page and click Save changes.


CLI Step-by-Step Walk Through: Report Generation


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Activate the SmartQuotas license.

a. At the boston-1 prompt, run the isi license add --evaluation SmartQuotas command. Press ENTER and at the : prompt, type q. To agree to the EULA, type yes and press ENTER.

b. Verify by running the isi license list command.

3. Change the frequency of the quota reports. At the boston-1 prompt, run the
following commands:

a. Modify Scheduled Reporting: isi quota settings reports


modify --schedule "Every 1 day every 2 minutes
between 12:00 AM and 11:59 PM"

b. Verify: isi quota settings reports view

4. Add the global notification rules for quotas. At the boston-1 prompt, run the
following commands:


a. Add notification rule for Soft limit exceeded: isi quota settings notifications create --threshold soft --condition exceeded --action-email-owner yes --action-email-address "[email protected]" --holdoff 0s

b. Add notification rule for Hard limit exceeded: isi quota settings notifications create --threshold hard --condition exceeded --action-email-owner yes --action-email-address "[email protected]" --holdoff 0s

c. Add notification rule for Hard limit denied: isi quota settings notifications create --threshold hard --condition denied --action-email-owner yes --action-email-address "[email protected]" --holdoff 0s

d. Verify: isi quota settings notifications list


WebUI Step-by-Step Walk Through: Configure Directory Quota and Default-User Quota

1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Create a general purpose share for the sales access zone.

a. Navigate to Protocols, Windows sharing (SMB), sales access


zone. Click on Create an SMB share.

i. Enter the fields as shown. This is the share you will use for the quota exercises. Once all the fields are entered, click Create Share.


b. Go to Windows Explorer and map the \\sales\sales-gen share and


then click Finish. Refer to the SMB Shares lab if you need the
detailed steps to map a Network Drive.

3. Add a directory type quota on the /ifs/sales/sales-gen directory.

a. Navigate to the File System, SmartQuotas page. Go to the Quota


and usage tab, and then click Create a quota.

b. Select the quota type as Directory quota and fill the other fields as
shown. Once done, click Create quota. A success message is
displayed on the Quota and usage tab.


c. In the Quotas and usage table, click View/Edit for the directory and
review all of your settings. If you require a custom notification for this
quota, you can change the configuration in the Quotas & Usage
table.

d. In Windows Explorer, right-click the mapped sales-gen share, and


then click Properties. Note that the capacity is the quota soft limit you
configured. Close the Properties window.

4. Add the default-user quota on the /ifs/sales/sales-gen directory.

a. Navigate to the File System, SmartQuotas page. Go to the Quota


and usage tab, and then click Create a quota.

b. Select the quota type as User quota and fill the other fields as
shown. Once done, click Create quota. A success message is
displayed on the Quota and usage tab.


c. Verify that the default-user has been added in the Quotas and usage
table.

5. Test the quotas on the /ifs/sales/sales-gen directory.

a. Open Windows Explorer and navigate to the File-Repository folder.


Copy the M & E > Sat Images folder to the mapped sales-gen share.
A Copy Item dialog box opens indicating there is not enough space
because the folder size is larger than the hard limit enforced on the
user. Close the dialog box.

b. View the generated reports. Navigate to File System, SmartQuotas,


Generated reports archive tab. Note the Report Generations listed
as per the schedule set in the global Scheduled reporting settings.
For more details on each report, click on View details.


c. Navigate to the Quotas and Usage tab and note all the quotas listed.
Observe that a quota for user john is listed. The default-user quota
automatically creates a quota for user john.

6. Edit the quotas for default-user and for a specific user.

a. Edit the quota for user john. Navigate to the File System,
SmartQuotas, Quotas and usage tab. Find the user: Dees\john
linked entry. In order to edit this quota, you need to first remove its
link with the default-user quota. For the entry, click the Unlink
button.

b. Click the View/Edit button and then click Edit. Change the Hard limit
to 200 MB and click Save changes and close the window. You
should see a success message.


c. Using Windows Explorer, copy the File-Repository > M & E > Sat Images folder to the mapped sales-gen share. The copy operation will be successful because the folder size is less than the quota hard limit.

d. Next, update the quota for all users to 250 MB. On the Quotas and
usage tab, click on View/Edit for the default-user quota entry.
Update the hard limit to 250 MB and save the changes.

e. Notice the quotas on the Quotas and usage tab. The hard limit for
user john has not been updated because user john is not linked to the
default-user quota and has a specific quota setting. Any specific user
quotas that you configure take precedence over a default user quota.


CLI Step-by-Step Walk Through: Configure Directory Quota and Default-User Quota

1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create a general purpose share for the sales access zone.

a. Create Share: At the boston-1 prompt, run the isi smb shares create sales-gen --zone=sales --path=/ifs/sales/sales-gen --i=yes --create-path command.

i. Change Permissions: At the boston-1 prompt, run the isi smb shares permission modify sales-gen --zone=sales --wellknown Everyone --permission-type=allow --permission=full command.

3. Go to Windows Explorer and map the \\sales\sales-gen share and then click
Finish. Refer to the SMB Shares lab if you need the detailed steps to map a
Network Drive.

4. Add a directory type quota on the /ifs/sales/sales-gen directory.

a. To create the quota, at the boston-1 prompt, run the isi quota quotas create /ifs/sales/sales-gen directory --hard-threshold 400M --soft-threshold 300M --soft-grace 30m --container yes command.

b. Verify by running the isi quota quotas list command. The


quota has a hard limit of 400MB and soft limit of 300MB. The soft
grace period is set to 30 minutes. The quota will use the global
notification rules.


c. In Windows Explorer, right-click the mapped sales-gen share, and


then click Properties. Note that the capacity is the quota soft limit you
configured. Close the Properties window.

5. Add the default-user quota on the /ifs/sales/sales-gen directory.

a. To create the quota, at the boston-1 prompt, run the isi quota quotas create /ifs/sales/sales-gen default-user --hard-threshold 10M command.

b. Verify by running the isi quota quotas list command. The


quota has a hard limit of 10MB. The quota will use the global
notification rules.

6. Test the quotas on the /ifs/sales/sales-gen directory.


a. Open Windows Explorer and navigate to the File-Repository folder.


Copy the M & E > Sat Images folder to the mapped sales-gen share.
A Copy Item dialog box opens indicating there is not enough space
because the folder size is larger than the hard limit enforced on the
user. Close the dialog box.

b. View the generated reports by running the isi quota reports


list command. The reports are generated as per the schedule set in
the global Scheduled reporting settings.


c. View all the quotas enforced by running the isi quota quotas
list command. Observe that a quota for user john is listed. The
default-user quota automatically creates a quota for user john.
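
To inspect a single quota rather than the full list, a sketch that reuses the path and type arguments shown in the modify commands of the next step (the view subcommand is assumed to accept them in the same form):

isi quota quotas view /ifs/sales/sales-gen directory
isi quota quotas view /ifs/sales/sales-gen user --user "DEES\john"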

7. Edit the quotas for default-user and for a specific user.

a. Edit the quota for user john. In order to edit this quota, you need to first remove its link with the default-user quota. At the boston-1 prompt, run the isi quota quotas modify /ifs/sales/sales-gen user --user "DEES\john" --linked no command.


b. Change the hard limit to 200 MB by running the isi quota quotas
modify /ifs/sales/sales-gen user --user "DEES\john"
--hard-threshold 200M command.

c. Verify by running the isi quota quotas list command.

d. Using Windows Explorer, copy the File-Repository > M & E > Sat Images folder to the mapped sales-gen share. The copy operation will be successful because the folder size is less than the quota hard limit.

e. Next, update the quota for all users to 250 MB. Run the isi quota quotas modify /ifs/sales/sales-gen default-user --hard-threshold 250M command.

f. Verify by running the isi quota quotas list command. The hard limit for user john has not been updated because user john is no longer linked to the default-user quota and has a specific quota setting. Any specific user quotas that you configure take precedence over a default user quota.


SmartDedupe

Scenario:

Another problem with the last storage system was the large amount of storage
consumed by duplicate data. The IT manager wants you to run a deduplication
assessment to verify you can assess space savings, specify deduplication settings,
and view space savings. Also, view the deduplication report.

Given in the lab exercise is the information needed to configure SmartDedupe. If


needed, use the WebUI or the CLI step-by-step walk through portion of the
exercise for guidance.

In this lab, you perform the following tasks:


• Specify deduplication settings
• Assess space savings
• Run deduplication


SmartDedupe
1. Use the following information to configure SmartDedupe and deduplicate the
marketing general purpose share.

a. Activate the SmartDedupe license. For the lab environment, you will
use the trial license functionality.

b. Ensure that the Animation, M & E, and mydocuments directories


are present in the mapped mktg-share directory.

i. Using Windows Explorer, copy mydocuments from File-


Repository to the /ifs/marketing/mktg-share/Animation
directory.

c. Configure SmartDedupe settings. Use the Deduplication page of the WebUI or the isi dedupe settings command (a hedged CLI sketch follows this exercise list).

i. Deduplication Directory: /ifs/marketing/mktg-share

ii. Deduplication Assessment Directory: /ifs/marketing/mktg-


share

d. Run deduplication assessment on the mktg-share directory and


check the report.

i. Start the DedupeAssessment job with its default values. Use


the Job operations page of the WebUI or the isi job jobs
command.

ii. View the report and note the space savings. Use the
Deduplication page of the WebUI or the isi dedupe
reports command.

Question: If multiple directories are assessed, are disk


savings differentiated by directory in the deduplication report?


Instructor Note: No, the space savings are reported as one


combined value.

e. Run deduplication on the mktg-share directory.

i. Start the Dedupe job with its default values. Use the Job
operations page of the WebUI or the isi job jobs
command.

ii. Check the deduplication stats and note the amount of disk
space saved. Use the Deduplication page of the WebUI or the
isi dedupe command.

iii. View the report and note the space savings. Use the
Deduplication page of the WebUI or the isi dedupe
reports command.

Question: Under what conditions may two or more identical


files in a directory not be deduplicated?

Instructor Note: Identical files are deduplicated only when the files or
identical 8K blocks are stored in the same node pool and have the
same protection policy.
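
A hedged CLI sketch for this exercise; the license, job, and reports commands follow the patterns used earlier in this guide, while the settings flags and the stats subcommand are assumptions, so confirm them with isi dedupe settings modify --help and isi dedupe --help:

isi license add --evaluation SmartDedupe
isi dedupe settings modify --paths /ifs/marketing/mktg-share --assess-paths /ifs/marketing/mktg-share   # flag names assumed
isi job jobs start DedupeAssessment
isi dedupe reports list
isi job jobs start Dedupe
isi dedupe stats          # subcommand name assumed
isi dedupe reports list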


WebUI Step-by-Step Walk Through: Configure Deduplication


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Activate the SmartDedupe license.

a. Navigate to Cluster Management, Licensing page. Check the


SmartDedupe license status. If the status is inactive, scroll down and
click Manage trials. Add the trial version for the purpose of this lab
exercise.

b. Ensure that the SmartDedupe license is set to Evaluation in the


Software licenses overview table.

3. Add contents to the mktg-share directory.

a. Map the mktg-share directory, if not already mapped. Ensure that the
Animation, M & E, and mydocuments directories are present as
shown. If not, copy the directories from the File-Repository directory
using Windows Explorer.


b. For the purpose of this demonstration, make another copy of the


mydocuments directory from File-Repository and paste it in the
Animation directory as shown.

4. Configure SmartDedupe settings. In this demonstration, we will run


deduplication on the /ifs/marketing/mktg-share directory.

a. Navigate to the File System, SmartDedupe page. Select the


Settings tab.


b. Configure the mktg-share directory for deduplication and


deduplication assessment as shown. Click Save changes.

Note: If you assess multiple directories, disk savings are not


differentiated by directory in the deduplication report.

5. Run deduplication assessment on the mktg-share directory and check the


report. The assessment operation does not perform deduplication but provides
an estimate of the space savings when deduplication is performed.

a. Navigate to Cluster Management, Job operations page. Select the Job types tab. In the Job types table, find the DedupeAssessment job, and then click Start job.


b. On the Start a job dialog box, click Start job. A Job Start
Succeeded message displays briefly.

c. Click the Job summary tab. When the DedupeAssessment job


completes, the job is removed from the Active jobs table. This may
take a few minutes to complete.

d. Once the job completes, navigate to File system, SmartDedupe,


Summary tab.

e. Scroll down to the Deduplication assessment reports table. Click


View report for the most recent report. Note that it may take as much
as five minutes for the report to display in the table.


Note: Sometimes the savings reported are low in environments where


multiple node pools and protection policies are configured. Identical
files are deduplicated only when the files or identical 8K blocks are
stored in the same node pool and have the same protection policy.

6. Run deduplication on the mktg-share directory.

a. Navigate to Cluster Management, Job operations page. Select the


Job types tab. In the Job types table, find the Dedupe job, and then
click Start job.

b. On the Start a job dialog box, click Start job. A Job Start
Succeeded message displays briefly.


c. Check the status of the job in the Summary tab. Once the job
completes, it will be removed from the Active jobs table. This may
take a few minutes to complete.

d. Once the job completes, navigate to File system, SmartDedupe,


Summary tab. View the amount of disk space saved in the
Deduplication savings section. Scroll down to the Deduplication
reports section and view the latest report.

Note: Deduplication is not efficient for all file types. For example, media, PDFs, and zip files are not good candidates for deduplication unless the files are heavily replicated in the same node pool. Typical user files such as Microsoft Office files that are shared, modified, and copied among users are good candidates for deduplication. A good use case for deduplication is user home directories where hundreds of users may typically keep multiple versions of the same files.


CLI Step-by-Step Walk Through: Configure Deduplication


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Activate the SmartDedupe license.

a. At the boston-1 prompt, run the isi license add --


evaluation SmartDedupe command. Type q to skip through the
license. To agree to the EULA, type yes and press Enter.

b. Verify by running the isi license list command.

3. Add contents to the mktg-share directory.

a. Map the mktg-share directory, if not already mapped. Ensure that the
Animation, M & E, and mydocuments directories are present as
shown. If not, copy the directories from the File-Repository directory
using Windows Explorer.


b. For the purpose of this demonstration, make another copy of the


mydocuments directory from File-Repository and paste it in the
Animation directory as shown.

4. Configure SmartDedupe settings. In this demonstration, we will run deduplication on the /ifs/marketing/mktg-share directory.

a. Configure the deduplication path. At the boston-1 prompt, run the


isi dedupe settings modify --paths
/ifs/marketing/mktg-share command.


b. Configure the deduplication assessment path. At the boston-1


prompt, run the isi dedupe settings modify --assess-
paths /ifs/marketing/mktg-share command.

Note: If you assess multiple directories, disk savings are not


differentiated by directory in the deduplication report.

c. Verify by running the isi dedupe settings view command.
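Note: Assuming both options can be supplied in a single invocation (combining them is not shown in this exercise, so treat this as a sketch rather than the documented procedure), steps a and b could also be run together along these lines:

isi dedupe settings modify --paths /ifs/marketing/mktg-share --assess-paths /ifs/marketing/mktg-share
isi dedupe settings view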

5. Run deduplication assessment on the mktg-share directory and check the


report. The assessment operation does not perform deduplication but provides
an estimate of the space savings when deduplication is performed.

a. To run the assessment, at the boston-1 prompt, run the isi job
jobs start DedupeAssessment command.

b. To check the job status, run the isi job jobs list command to
display the active jobs list. Once the job completes, it will no longer be
present in the list. Run the command every few seconds to check if
the job is finished. This may take a few minutes to complete.

c. Once the job finishes, view the report generated. We need the job ID
to view the report. Run the isi dedupe reports list command
to get the job ID. Replace the ID in the isi dedupe reports
view <ID> command to view the report. Note that it may take as
much as five minutes for the report to display in the list.
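Note: As a minimal illustration of the two-step report lookup described above (the job ID 42 shown here is hypothetical; substitute the ID that the list command returns on your cluster):

isi dedupe reports list
isi dedupe reports view 42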


Note: Sometimes the savings reported are low in environments where


multiple node pools and protection policies are configured. Identical
files are deduplicated only when the files or identical 8K blocks are
stored in the same node pool and have the same protection policy.

6. Run deduplication on the mktg-share directory.

a. To run deduplication, at the boston-1 prompt, run the isi job


jobs start Dedupe command.

b. Check the job status by running the isi job jobs list
command. This may take a few minutes to complete.


c. Once the job completes, view the amount of disk space saved by
running the isi dedupe stats command.

d. View the report generated. Run the isi dedupe reports list
command to get the job ID. Replace the ID in the isi dedupe
reports view <ID> command to view the report.
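Note: Taken together, the deduplication run and verification in step 6 amount to a command sequence along these lines (a sketch only; the report ID 43 is hypothetical, so use the ID that the list command returns):

isi job jobs start Dedupe
isi job jobs list
isi dedupe stats
isi dedupe reports list
isi dedupe reports view 43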

Note: Deduplication is not efficient for all file types. For example,
media, PDFs, and zip files are not good candidates for deduplication
unless the files are heavily replicated in the same node pool. Typical
user files such as Microsoft Office files that are shared, modified, and
copied among users are good candidates for deduplication. A good
use case for deduplication is user home directories where hundreds
of users may typically keep multiple versions of the same files.


SnapshotIQ

Scenario:

The IT manager wants to ensure you can recover files from snapshots. Use
SnapshotIQ to access specific point in time copies of files and restore the data from
the snapshots.

This lab exercise provides the information needed to configure SnapshotIQ. If needed, use the WebUI or CLI step-by-step walk-through portion of the exercise for guidance.

Many of the steps need to be done in quick succession in order to see the
snapshot behavior. For this reason, it is suggested that you read through all the
steps for tasks 1 and 2 before you perform the steps.

In this lab, you perform the following tasks:


• Create snapshot schedule
• View Snapshots
• Access and restore previous versions of files


SnapshotIQ
1. Create Snapshot Schedule.

a. Activate the trial SnapshotIQ license.

b. Create a Snapshot target directory. Use the Windows Explorer to


create a new directory named MySales in the mapped sales-gen
share. Ensure that the share is empty before creating the directory.

c. Create a snapshot schedule for the MySales directory. Use the


SnapshotIQ page of the WebUI or the isi snapshot schedules
command.

i. Schedule Name: MySnaps

ii. Snapshot Naming Pattern: MySales_%H:%M

iii. Path: /ifs/sales/sales-gen/MySales

iv. Snapshot Duration: 2 Hours

v. Schedule: Run multiple policies every 2 minutes of every day

Question: When a schedule policy is deleted manually, are the


snapshots deleted as well? Why or why not?

Instructor Note: No, because a service running in the background monitors the snapshots and does not allow them to be deleted or removed until the snapshot reaches the expiration time that was set in the policy.

d. View the snapshots created. Use the SnapshotIQ page of the WebUI
or the isi snapshot snapshots command. In accordance with
the configured schedule, it could take up to 2-3 minutes for snapshots
to appear.

Question: What is the size of the snapshot compared to the target


directory when no data in the directory has been modified or deleted?


Instructor Note: Snapshots are pointer-based and therefore will use


a fraction of the space that the directory data uses.

2. Create Snapshots. Perform this step using the OneFS CLI and Windows
Explorer.

a. Create multiple point-in-time versions for a file using CLI. Use the
date > filename command.

i. Create a file named snapfile which contains the current


timestamp.

ii. Append the current timestamp to snapfile every 2 minutes for


a minimum of 4-5 times.

iii. Delete snapfile using the rm command.

iv. After 2-3 minutes, create snapfile once again with the current timestamp.

b. View the amount of space that the snapshot uses after data is
modified. Use the SnapshotIQ page of the WebUI or the isi
snapshot snapshots command.

3. Restore files from Snapshots. This step can be performed only using the
OneFS CLI and Windows Explorer.

a. Restore snapshots from the .snapshot directory. Use the cp command (a command sketch follows this task list).

i. Navigate to the /ifs/sales/sales-gen/MySales/.snapshot


directory in the CLI.

ii. The subdirectories in the .snapshot folder indicate the


different point in time versions of the target MySales directory.
Navigate into the subdirectory you wish to restore the file from
using the CLI.


iii. Copy snapfile in the subdirectory into the present directory


with the name old-snapfile.

iv. Verify the contents in old-snapfile.

b. Recover an earlier version of a file using Windows Explorer.

i. Navigate to the /ifs/sales/sales-gen/MySales directory using


Windows Explorer.

ii. Open the Properties window for snapfile.

iii. Use the Open, Copy and Restore operations to restore earlier
versions of snapfile.

iv. Verify the contents of the restored files.
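Note: A sketch of the CLI restore sequence from task 3a (the snapshot subdirectory name MySales_07:10 is only an example; use one of the names listed in your .snapshot directory):

cd /ifs/sales/sales-gen/MySales/.snapshot
ls -alF
cd MySales_07:10
cp snapfile /ifs/sales/sales-gen/MySales/old-snapfile
cat /ifs/sales/sales-gen/MySales/old-snapfile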


WebUI Step-by-Step Walk Through: Create Snapshot Schedule


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Activate the trial SnapshotIQ license.

a. Navigate to Cluster management, Licensing page. At the bottom of


the page, select Manage trials. Next, check SnapshotIQ and click
Start trial.

b. Verify the status of the license in the overview table.

3. Create a Snapshot target directory.

a. Map the sales-gen share, if not already mapped. Open the mapped
sales-gen share. Delete all contents in the directory, if any.

b. Right-click in the window and create a new folder named MySales.


4. Create a snapshot schedule for the MySales directory.

a. Navigate to Data protection, SnapshotIQ page. Click each tab on


the page and review SnapshotIQ capabilities.

b. Go to the Snapshot Schedules tab and click the Create a schedule


button. Fill in the details as shown. For demonstration purposes, a 2
minute policy is used. This may not be realistic for many workflows.
Once done, click Create schedule.


c. After the snapshot schedule is created, in the Schedules section,


click View/Edit to review the settings for the new schedule.

Tip: When creating a snapshot policy, keep in mind the length of time that you want
to set for expiration of the snapshots. If you are taking a snapshot every two
minutes as you do in the lab, you may not want to keep each snapshot for more
than two hours. If you use a setting that takes a snapshot every few days, then you
may want to set the expiration time so that you have at least one saved snapshot
when the next snapshot is taken. If you manually delete the policy, the snapshots
are not deleted because a service running in the background monitors the
snapshots and does not allow them to be deleted or removed until the expiration
time that you set in the policy.

5. Go to the Snapshots tab. You should begin to see snapshots. The snapshots do not consume any space until data has been changed or deleted. In accordance with the configured schedule, it could take up to 2-3 minutes for snapshots to appear.

Tip: Click on another tab and then back to the Snapshots tab to refresh the
Snapshots table.


CLI Step-by-Step Walk Through: Create Snapshot Schedule


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Activate the trial SnapshotIQ license. At the boston-1 prompt, run the following commands:

a. License: isi license add --evaluation SnapshotIQ

b. Verify: isi license list

3. Create a Snapshot target directory.

a. Map the sales-gen share, if not already mapped. Open the mapped
sales-gen share. Delete all contents in the directory, if any.

b. Right-click in the window and create a new folder named MySales.


4. Create a snapshot schedule for the MySales directory.

a. At the boston-1 prompt, run the isi snapshot schedules create MySnaps /ifs/sales/sales-gen/MySales MySales_%H:%M "Every 1 day every 2 minutes" --duration 2H command. For demonstration purposes, a 2 minute policy is used. This may not be realistic for many workflows.

b. Verify by running the isi snapshot schedules view MySnaps


command.


Tip: When creating a snapshot policy, keep in mind the length of time that you want
to set for expiration of the snapshots. If you are taking a snapshot every two
minutes as you do in the lab, you may not want to keep each snapshot for more
than two hours. If you use a setting that takes a snapshot every few days, then you
may want to set the expiration time so that you have at least one saved snapshot
when the next snapshot is taken. If you manually delete the policy, the snapshots
are not deleted because a service running in the background monitors the
snapshots and does not allow them to be deleted or removed until the expiration
time that you set in the policy.

5. Run the isi snapshot snapshots list -v command. You should


begin to see snapshots. The snapshots do not consume any space until data
has been changed or deleted. In accordance with the configured schedule, it
could take up to 2-3 minutes for new snapshots to appear.


Step-by-Step Walk Through: Create Snapshots


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Create multiple point-in-time versions for a file using CLI.

a. Navigate to the MySales directory by executing the cd


/ifs/sales/sales-gen/MySales command.

b. Create a file named snapfile which contains the current time. At the
boston-1 prompt, run the date > snapfile command. This
command writes the current timestamp to the file.

c. Navigate to the share folder that was mapped and verify that snapfile
is created. Open the file with Notepad and view its contents.

d. Again, in the CLI, wait for 2 minutes and run the date >>
snapfile command to append the latest timestamp to the end of the
existing file.

TIP: At the boston-1 prompt, use the inline command while true; do date >> snapfile; sleep 120; done to append to the file automatically. Use CTRL-C to stop the loop.


e. In the share, open the file and view its contents after the edits. For
demonstration purposes, append the file 4-5 times. Note the time
each time you append to the file.

f. Remove the snapshot file. Run the rm snapfile command and


enter y to delete the file. Take note of the time when you delete the
file. Later you will be able to access earlier versions of the file using
snapshots.

g. After 2-3 minutes, run the date > snapfile to recreate the file.
The contents will only include the current time at which you created
the file.

3. View the amount of space that the snapshot uses after data is modified. You
may perform this step either using the WebUI or the CLI.


a. WebUI option: Open a Chrome browser and launch the WebUI


(https://192.168.3.11:8080). Login to the cluster using
root/Dees12345!

i. Navigate to Data protection, SnapshotIQ, Snapshots and


review the snapshots created.

b. CLI option: At the boston-1 prompt, run the isi snapshot


snapshots list -v command.

c. The amount of space consumed by the edits made in this lab is


minimal.


Step-by-Step Walk Through: Restore files from Snapshots


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Restore snapshots from the .snapshot directory.

a. Ensure that the working directory is /ifs/sales/sales-gen/MySales by


running the pwd command. If not, run the cd /ifs/sales/sales-
gen/MySales command.

b. At the boston-1 prompt, run the ls -al command. Notice that the
.snapshot directory is not seen. It is a hidden directory but can be
accessed.

c. Access the .snapshot directory of the /ifs/sales/sales-gen/MySales


by running the cd .snapshot command.

d. Run the pwd command and notice that the present working directory
is .snapshot directory.

e. List the contents of the .snapshot directory by running the ls -alF command.


f. You should see a subdirectory here for each snapshot which has
been taken (one every two minutes). In accordance with the snapshot
schedule you created earlier in this lab, the directory names should
be MySales_<timestamp>. Take note of the timestamps in each
directory name. The contents of these snapshot directories are in a
read only state. Earlier versions of files can be copied from the
snapshot directories, but files in the directories are in a read only
state. They cannot be edited or modified.

g. Copy an earlier version of a file back from one of the snapshot


directories via the CLI. For the demonstration, the snapfile from
snapshot MySales_07:10 is copied. Use the cd MySales_07:10
command to navigate into the snapshot subdirectory.

h. Copy the earlier version of the file from the snapshot into the present
directory. Rename the file to old-snapfile by running the cp
snapfile /ifs/sales/sales-gen/MySales/old-snapfile
command.

i. Go back to the share and verify if the file is recovered. Open the file
and review the contents of the file.


3. Recover an earlier version of a file using Windows Explorer.

a. Using Windows Explorer, open the mapped sales-gen share and


navigate to the MySales directory. Right-click on snapfile and click
Properties.

b. In the Properties window, select the Previous Versions tab. Each


version of the file represents a point in time view of the file created by
the snapshot. You have the option to Open the file, Copy this point in
time view of the file, or Restore the file in the present directory to the
point in time view.


c. Select one of the earlier versions. For the demonstration, the snapfile
from time 07:06 is selected. Click on Open. The earlier version of the
file will be displayed. Review the date information placed into this
point in time view of the file and close the file.


d. To copy the file, select the file and click Copy. For the
demonstration, the snapfile from time 07:06 is selected. Select your
desktop as the save location.

e. Navigate to the desktop and verify that the earlier version of the file is
present.

f. To restore the original file, open the Previous Versions tab in the
Properties window. Select the version of the file to restore. For the
demonstration, the snapfile from time 07:08 is selected. Click
Restore. Click Restore again to restore the file. Click OK in the
success window. Close the Properties window.


g. In the share, open snapfile and verify that it has been restored to the
previous version. Close the file.

h. Click OK to close the Properties window.
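Note: Once you have finished all of the SnapshotIQ tasks, the 2-minute schedule can be removed so that it stops creating snapshots, for example with the commands below (a sketch; as described in the tip earlier, the snapshots already taken are not deleted and remain until they expire):

isi snapshot schedules delete MySnaps
isi snapshot snapshots list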


SyncIQ

Scenario:

The Engineering business unit has requested a general use data set to test against
their application development effort. The IT manager has asked you to configure
SyncIQ to copy "real-type" data to the engineering group. Use this document as
part of your presentation to the team who are not familiar with the way SyncIQ
works.

The task covers how to create a SyncIQ policy, run the policy, and edit the policy. You then re-run the policy and view the updated results to complete the SyncIQ test.

This lab exercise provides the information needed to configure SyncIQ. If needed, use the WebUI or CLI step-by-step walk-through portion of the exercise for guidance.

In this lab, perform the following tasks:


• Create SyncIQ policy
• Run SyncIQ policy and view results
• Edit SyncIQ policy
• Run modified SyncIQ policy and view results


SyncIQ
1. Use the following information to create and run a SyncIQ policy to copy data from the marketing general purpose share to a target directory in the engineering access zone.

a. Activate SyncIQ license. For the lab environment, you will use the
trial license functionality.

b. Create a target directory for the replication in the engineering access zone. Use the File system explorer page of the WebUI or the mkdir command.

i. Directory name: targetCopy

ii. Path: /ifs/engineering

iii. Permissions: Ensure everyone has full control to the


directory

c. Ensure that the mapped mktg-share directory is not empty. If empty,


copy the M & E folder from the File-Repository directory using
Windows Explorer.

d. Create the SyncIQ policy. Use the SyncIQ page of the WebUI or the
isi sync policies command.

i. Policy name: MktgToEng

ii. Action: Copy

iii. Enabled: Yes

iv. Run job: Manually

v. Source root directory: /ifs/marketing/mktg-share

vi. Target host: engineering.dees.lab

vii. Target directory: /ifs/engineering/targetCopy

e. Run a policy assessment for the MktgToEng policy. Use the SyncIQ
page of the WebUI or the isi sync jobs command.


f. View the report for the policy assessment. Use the SyncIQ page of
the WebUI or the isi sync reports command.

g. Run the MktgToEng policy. Use the SyncIQ page of the WebUI or
the isi sync jobs command.

h. Compare the contents of the /ifs/engineering/targetCopy directory


to the /ifs/marketing/mktg-share directory and verify that the copy
was successful.

2. Use the following information to modify and test the SyncIQ policy created in
the previous step.

a. Modify the MktgToEng policy to run whenever the source is modified.


Use the SyncIQ page of the WebUI or the isi sync policies
command.

b. Test the updated MktgToEng policy.

i. Copy the legal discovery directory present in the File-


Repository directory to the mapped mktg-share directory
using Windows Explorer.

ii. Verify that the policy runs automatically after the copy
operation and view the report. Use the SyncIQ page of the
WebUI or the isi sync reports command.

iii. View the contents of the /ifs/engineering/targetCopy


directory.

c. Revoke the Write permission for Group on the


/ifs/engineering/targetCopy directory. Use the File system explorer
page of the WebUI or the chmod command.

Question: Is the permission change operation successful? Why or


Why not?


Instructor Note: No, because the replicated directory is read-only


and can only be accessed by the SyncIQ process. In order to write to
the replicated directory, we need to make it independent from the
source.


WebUI Step-by-Step Walk Through: Create and Run SyncIQ policy


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Activate the SyncIQ license.

a. Navigate to Cluster Management, Licensing page. Check the


SyncIQ license status. If the status is inactive, scroll down and click
Manage trials. Add the trial version for the purpose of this lab
exercise.

b. Ensure that the status for SyncIQ changes to Evaluation in the


Software licenses overview table.

3. Create target directory targetCopy for the replication. In this exercise, we will
replicate the mktg-share directory to the targetCopy directory in the
engineering access zone.


a. Map the mktg-share directory, if not already mapped. Ensure that the
directory is not empty. If empty, copy the M & E folder from the File-
Repository directory using Windows Explorer.

b. On the WebUI, navigate to the File system, File system explorer


page. Set the access zone to engineering.

c. Click Create directory and enter the details as shown.

4. Create the SyncIQ policy.

a. Navigate to the Data protection, SyncIQ page. Select the Policies


tab and click Create a SyncIQ policy.


b. Enter the information as shown in the graphic. Keep all other fields
with the default setting.


c. Navigate to the Settings tab and review the default settings. Do not
modify these settings without assistance from PowerScale Technical
Support.

5. Run a policy assessment for the MktgToEng policy.

a. On the Policies tab, click More in the Actions column for the
MktgToEng policy, and then select Assess sync.

b. The Assess sync should take less than a minute. Navigate to the
Summary tab and notice the Running status in the Active jobs list.
Once the job completes, it will no longer be listed in the Active jobs
list.

c. Navigate to the Reports tab and view the report, including the
duration, total files, and total data transferred. The policy assessment
did not actually transfer the files, but it provides an estimate of
duration and data to be copied before the first run of the policy.


6. Run the MktgToEng policy and test the results.

a. On the Policies tab, click More in the Actions column for the
MktgToEng policy, and then select Start job. The SyncIQ job runs in
the background.

b. You may view the job status in the Active jobs table in the Summary
tab. Once the job completes, navigate to the Reports tab and view
the new report.

c. Navigate to File system, File system explorer. Set the access zone
to engineering. Expand the targetCopy directory. Compare the
contents with the /ifs/marketing/mktg-share directory and verify that
the copy was successful.


CLI Step-by-Step Walk Through: Create and Run SyncIQ policy


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Activate the SyncIQ license.

a. At the boston-1 prompt, run the isi license add --


evaluation SyncIQ command. Type q to skip through the license.
To agree to the EULA, type yes and press ENTER.

b. Verify by running the isi license list command.

3. Create target directory targetCopy for the replication. In this exercise, we will
replicate the mktg-share directory to the targetCopy directory in the
engineering access zone.

a. Map the mktg-share directory, if not already mapped. Ensure that the
directory is not empty. If empty, copy the M & E folder from the File-
Repository directory using Windows Explorer.

b. At the boston-1 prompt, run mkdir


/ifs/engineering/targetCopy to create the target directory.

c. Run the chmod 777 /ifs/engineering/targetCopy command


to ensure the directory has all permissions enabled.

4. Create the SyncIQ policy.


a. At the boston-1 prompt, run the isi sync policies create


MktgToEng copy /ifs/marketing/mktg-share
engineering.dees.lab /ifs/engineering/targetCopy --
enabled yes command.

b. Verify by running the isi sync policies list command.
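Note: For reference, this is the create command from step a on a single line, followed by a detailed view of the new policy (a sketch; the view command is also used later in this lab):

isi sync policies create MktgToEng copy /ifs/marketing/mktg-share engineering.dees.lab /ifs/engineering/targetCopy --enabled yes
isi sync policies view MktgToEng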

5. Run a policy assessment for the MktgToEng policy.

a. At the boston-1 prompt, run the isi sync jobs start


MktgToEng --test command to start the assessment job.

b. The job should take less than a minute to run. Run the isi sync jobs view MktgToEng command to view the job status. Run the command every 20 seconds until the job is no longer listed. This indicates that the job has completed.

c. To view the report, we first need the job ID. Run the isi sync
reports list command. Note the Job ID.

d. Replace the Job ID in the isi sync reports view MktgToEng <Job ID> command to view the report, including the duration, total files, and total data transferred. The policy assessment did not actually transfer the files, but it provides an estimate of duration and data to be copied before the first run of the policy.
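Note: As a minimal illustration of steps c and d (the job ID 1 shown here is hypothetical; substitute the ID that the list command returns):

isi sync reports list
isi sync reports view MktgToEng 1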

6. Run the MktgToEng policy and test the results.

a. To run the policy, at the boston-1 prompt, run the isi sync jobs
start MktgToEng command.

b. Run the isi sync jobs view MktgToEng command to view the
job status. Once done, check the report.

c. Get the job ID by running the isi sync reports list command.
Replace the Job ID in the isi sync reports view MktgToEng
<Job ID> command.

d. Run the ls /ifs/engineering/targetCopy command.


Compare the contents with the /ifs/marketing/mktg-share directory
and verify that the copy was successful.
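Note: A quick CLI check that the copy succeeded is a comparison along these lines (a sketch; the recursive diff prints any differences it finds and can take a while on large data sets):

ls /ifs/marketing/mktg-share
ls /ifs/engineering/targetCopy
diff -r /ifs/marketing/mktg-share /ifs/engineering/targetCopy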


WebUI Step-by-Step Walk Through: Modify SyncIQ Policy


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Modify the MktgToEng policy.

a. Navigate to the Data protection, SyncIQ page. Select the Policies


tab. Click on View/Edit for the MktgToEng policy. Click Edit.

b. Change the policy to run whenever the source is modified. Leave the
rest of the fields unchanged. Click Save changes.

c. Note the success message and click Close.

3. Test the updated MktgToEng policy.

a. Add contents to the mktg-share directory. Open Windows Explorer


on the Jumpstation. Copy the legal discovery directory present in
the File-Repository directory to the mapped mktg-share directory.


b. On the WebUI, select the Summary tab on the SyncIQ page. You
may either see the job running in the Active jobs section or a report
generated in the SyncIQ recent reports section. View the report
once the job completes. The MktgToEng policy runs every time the
source directory is modified.

c. Navigate to the File system, File system explorer page. Set the
access zone to engineering and expand the targetCopy directory.
Verify that the legal discovery directory is present.

4. On the File system explorer page, click View/Edit for the targetCopy
directory. Click Edit and remove the Write permission for Group. Save the
changes. You will see an error indicating that you can't make changes to the
file system. This is because the replicated directory is read-only and only the
SyncIQ process can modify it.


CLI Step-by-Step Walk Through: Modify SyncIQ Policy


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Modify the MktgToEng policy.

a. Change the policy to run whenever the source is modified. At the


boston-1 prompt, run the isi sync policies modify
MktgToEng --schedule when-source-modified command.

b. Verify by running the isi sync policies view MktgToEng


command.

3. Test the updated MktgToEng policy.

a. Add contents to the mktg-share directory. Open Windows Explorer


on the Jumpstation. Copy the legal discovery directory present in
the File-Repository directory to the mapped mktg-share directory.

b. Run the isi sync jobs view MktgToEng command to view the job status. Once done, check the report using the isi sync reports view MktgToEng <Job ID> command. The MktgToEng policy runs every time the source directory is modified.

c. Run the ls /ifs/engineering/targetCopy command. Verify


that the legal discovery directory is present.

4. Run the chmod 757 /ifs/engineering/targetCopy command. Notice


that there is an error indicating that the directory is read-only. The replicated
directory can only be modified by the SyncIQ process.


SmartLock

Scenario:

The IT manager wants to ensure that sensitive documents containing customer information are not accidentally deleted or tampered with during file operations. Use SmartLock to commit files to a write once read many (WORM) state. Create a WORM domain on a directory to test the SmartLock functionality.

This lab exercise provides the information needed to configure SmartLock. If needed, use the WebUI or CLI step-by-step walk-through portion of the exercise for guidance.

In this lab, you perform the following tasks:


• Create an enterprise SmartLock domain
• Test the SmartLock domain


SmartLock
1. Use the following information to create a SmartLock domain on a directory in the marketing general purpose share.

a. Activate the SmartLock license. For the lab environment, you will use
the trial license functionality.

b. Create an empty target directory for SmartLock. Use the File system explorer page of the WebUI or the mkdir command.

i. Directory name: SmartLock_Target

ii. Path: /ifs/marketing/mktg-share

iii. Permissions: Ensure everyone has full control to the


directory

c. Create an enterprise WORM domain for the SmartLock_Target directory. Use the SmartLock page of the WebUI or the isi worm domains command.

i. Domain Type: Enterprise

ii. Privileged Delete: Off

iii. Path: /ifs/marketing/mktg-share/SmartLock_Target

iv. Default Retention: 5 minutes

v. Auto-commit Offset: 1 second

d. Test the SmartLock domain on the /ifs/marketing/mktg-


share/SmartLock_Target directory.

i. Using Windows Explorer, create a new text document file in


the SmartLock_Target directory.

ii. Enter text into the file. Save the file and close. Reopen the file
and note the contents. Once done, try to delete the file.

Question: Were changes saved successfully? Did the file


delete successfully? Why or why not?


Instructor Note: No, because the file is committed to a WORM state


1 second after it is saved in the directory. The file is released only
after the retention period is complete. During the retention period, the
file is read-only.

iii. Wait for 5 minutes and try to delete the file.

Question: Under what conditions can you delete a file in an


enterprise WORM domain during the retention period?

Instructor Note: The file can be deleted during its retention period if
the privileged delete feature is enabled. To access the privilege delete
functionality, you must be assigned the
ISI_PRIV_IFS_WORM_DELETE privilege and own the file you are
deleting. You can also access the privilege delete functionality for any
file if you are logged in through the root or compadmin user account.


WebUI Step-by-Step Walk Through: Create a SmartLock Domain


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Activate the SmartLock license. Add the trial version for the purpose of this
lab exercise.

a. Navigate to the Cluster Management, Licensing page. Check the


SmartLock license status. If the status is inactive, scroll down and
click Manage trials.

b. Ensure that the SmartLock license is set to Evaluation in the


Software licenses overview table.

3. Create an empty target directory for SmartLock.

a. Navigate to the File system, File system explorer page. Set the
access zone to marketing. Open the mktg-share directory. Click
Create directory and enter the fields as shown.


4. Create an enterprise WORM domain for the SmartLock_Target directory.

a. Navigate to the File system, SmartLock page. Click Create domain.


To confirm, click Create. For the purpose of this demonstration, let us
set the retention period to 5 minutes. Enter the fields as shown and
click Create domain.


b. Verify that the domain is listed in the WORM domains table. Click
View/Edit for the created domain to verify the settings.

5. Test the SmartLock domain.

a. Map the mktg-share directory, if not already mapped. Open the


mktg-share using Windows Explorer. Open the SmartLock_Target
directory. Inside the window, right-click and select New > Text
Document. Leave the filename as default. Note the time when you
create the file.


b. Open the file using Notepad. Enter some text. Save the file and
close. Reopen the file and notice that the text you entered is not
saved.

c. Now, try to delete the file. You will get an error message saying that
the file cannot be deleted. This is because the file is committed to a
WORM state and will only be released after the retention period is
complete. During the retention period, the file is read-only.


d. Try to delete the file, 5 minutes after it was created. The file will
successfully be deleted as the retention period since the last
modification has expired.


CLI Step-by-Step Walk Through: Create a SmartLock Domain


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Activate the SmartLock license. Add the trial version for the purpose of this
lab exercise.

a. At the boston-1 prompt, run the isi license add --


evaluation SmartLock command. Type q to skip through the
license. To agree to the EULA, type yes and press Enter.

b. Verify by running the isi license list command.

3. Create an empty target directory for SmartLock. At the boston-1 prompt, run the following commands:

a. Create Directory: mkdir /ifs/marketing/mktg-


share/SmartLock_Target

b. Change Permissions: chmod 777 /ifs/marketing/mktg-


share/SmartLock_Target

4. Create an enterprise WORM domain for the SmartLock_Target directory.

a. At the boston-1 prompt, run the isi worm domains create /ifs/marketing/mktg-share/SmartLock_Target --autocommit-offset 1s --privileged-delete false --default-retention 5m command. For the purpose of this demonstration, let us set the retention period to 5 minutes.

b. Verify by running the isi worm domains view


/ifs/marketing/mktg-share/SmartLock_Target command.

5. Test the SmartLock domain.

a. Map the mktg-share directory, if not already mapped. Open the


mktg-share using Windows Explorer. Open the SmartLock_Target
directory. Inside the window, right-click and select New > Text
Document. Leave the filename as default. Note the time when you
create the file.

b. Open the file using Notepad. Enter some text. Save the file and
close. Reopen the file and notice that the text you entered is not
saved.


c. Now, try to delete the file. You will get an error message saying that
the file cannot be deleted. This is because the file is committed to a
WORM state and will only be released after the retention period is
complete. During the retention period, the file is read-only.

d. Try to delete the file, 5 minutes after it was created. The file will
successfully be deleted as the retention period since the last
modification has expired.
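Note: If you want to inspect the WORM state of the test file while it is still under retention, OneFS also provides an isi worm files view command. It is not used in this exercise, so treat the following as a sketch and confirm availability with isi worm files --help on your release; the filename shown assumes the default Windows name New Text Document.txt:

isi worm files view '/ifs/marketing/mktg-share/SmartLock_Target/New Text Document.txt'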


HealthCheck

Scenario:

The IT manager has emphasized the importance to proactively detect potential


issues in the cluster. As an administrator, familiarize yourself with the HealthCheck
framework to schedule and evaluate different cluster parameters.

This lab exercise provides the information needed to configure HealthCheck. If needed, use the WebUI or CLI step-by-step walk-through portion of the exercise for guidance.

In this lab, you perform the following tasks:


• Run a health check evaluation
• Create an evaluation schedule


HealthCheck
1. Use the following information to perform health checks on different cluster
items.

a. View the different checklists. Use the HealthCheck page of the


WebUI or the isi healthcheck checklists command.

b. Run a health check to evaluate cluster capacity. Use the


HealthCheck page of the WebUI or the isi healthcheck
command.

c. Once the evaluation is complete, view the details to check for


warnings and failures. Use the HealthCheck page of the WebUI or
the isi healthcheck command.

d. Schedule a health check for multiple checklists. Use the


HealthCheck page of the WebUI or the isi healthcheck
schedules command.

i. Schedule name: MySchedule

ii. Checklists: nfs, ntp, smartconnect, smartpools, snapshot,


smb, synciq

iii. Schedule: Every 2 minutes everyday

e. Wait for 2 minutes and check the evaluation details for the scheduled
checklists. Use the HealthCheck page of the WebUI or the isi
healthcheck command.


WebUI Step-by-Step Walk Through: Configure HealthCheck


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch the WebUI (https://192.168.3.11:8080). Login to
the cluster using root/Dees12345!

2. Run a health check to evaluate cluster capacity.

a. Navigate to the Cluster management, HealthCheck page. Select


the HealthChecks tab and view by HealthCheck lists. Note the
different checklists and their description.

b. Identify the cluster_capacity checklist from the table and click View
details. Note the list of items that will be evaluated in the checklist.
Once done, scroll to the bottom and click Close.


c. For the cluster_capacity checklist, click More and select Run. Once
done, you will notice that the checklist has been queued for
evaluation. The evaluation may take a minute to complete.


d. Once complete, the Last run start time column for the checklist will
change to PASS or FAIL with a timestamp.

e. To view the details of the evaluation, select the Evaluations tab. Click View details for the most recent evaluation of the cluster_capacity checklist. You may see a list of warnings and failures. In this example, there are no warnings or failures as all items on the checklist passed the evaluation.


3. Schedule a health check for multiple checklists.

a. Navigate to the Cluster management, HealthCheck page. Select the Scheduler tab. Notice that there is a default schedule set every day for the basic checklist. This checklist is a combination of evaluation items from all the other checklists to evaluate the overall health of the cluster.

b. Click on Create a new schedule. For the purpose of this


demonstration, we will run the schedule every 2 minutes daily. Enter
the fields as shown and click Save changes.


c. Wait for 2 minutes and check the Evaluations tab. Notice that the
checklists from the newly created schedule are either queued or
evaluated. Click View details to view the evaluation details for any of
the checklists.

Note: Some failures may be listed because the lab runs in a VMware environment.


CLI Step-by-Step Walk Through: Configure HealthCheck


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Run a health check to evaluate cluster capacity.

a. At the boston-1 prompt, run the isi healthcheck checklists


list command. Note the different checklists and their description.

b. View the items that will be evaluated in the cluster_capacity


checklist by running the isi healthcheck checklists view
cluster_capacity command.

c. Run the isi healthcheck run cluster_capacity command


to run the evaluation. The evaluation may take a minute to complete.


d. Once done, view the evaluation details. To get the ID of the


evaluation, run the isi healthcheck list --format list |
grep 'cluster_capacity' command.

e. Replace the ID in the isi healthcheck view <ID> command. You may see a list of warnings and failures. In this example, there are no warnings or failures as all items on the checklist passed the evaluation.
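Note: As a consolidated sketch of steps d and e, keeping the <ID> placeholder used above (substitute the evaluation ID that the list command returns):

isi healthcheck list --format list | grep 'cluster_capacity'
isi healthcheck view <ID>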

3. Schedule a health check for multiple checklists.

a. At the boston-1 prompt, run the isi healthcheck schedules list command. Notice that there is a default schedule set every day for the basic checklist. This checklist is a combination of evaluation items from all the other checklists to evaluate the overall health of the cluster.

b. Create a schedule by running the isi healthcheck schedule


create MySchedule "Every 1 day every 2 minutes
between 12:00 AM and 11:59 PM" --checklist
ntp,smartconnect,smartpools,snapshot,smb,synciq,nfs
command. For the purpose of this demonstration, we will run the
schedule every 2 minutes daily.

c. Verify by running the isi healthcheck schedules view


MySchedule command.


d. Wait for 2 minutes and run the isi healthcheck list command.
Notice that the checklists from the newly created schedule are either
queued or evaluated. Replace the ID in the isi healthcheck
view <ID> command to view the evaluation details.

Note: Some failures may be listed because the lab runs in a VMware environment.


Monitoring

Scenario:

The IT manager wants to ensure you understand the monitoring tools. Both InsightIQ and DataIQ are installed on stand-alone servers, and each application provides a single point of monitoring for multiple PowerScale clusters in the data center. DataIQ can also monitor non-PowerScale volumes, whereas InsightIQ can only monitor PowerScale volumes.

This lab exercise provides the information needed to monitor the cluster using DataIQ, InsightIQ, and isi statistics. If needed, use the step-by-step walk-through portion of the exercise for guidance.

In this lab, you perform the following tasks:


• Perform the initial configuration for InsightIQ
• Create a performance baseline with InsightIQ
• Add the PowerScale cluster to the DataIQ application
• Gather metrics using isi statistics


Monitoring
1. Use the following information to perform the initial configuration for InsightIQ.

a. Enable the insightiq user account. Use the Membership and roles
page of the WebUI or the isi auth users command.

i. Enable account: yes

ii. Password: Dees12345!

b. Connect to InsightIQ and add the boston cluster.

i. URL: http://192.168.3.30

ii. Credentials: administrator/Dees12345!

iii. Local datastore: /datastore

iv. Cluster Address: 192.168.3.11

v. Username: insightiq

vi. Password: Dees12345!

c. Enable and configure the File System Analytics (FSA). Use the
SETTINGS page of the InsightIQ interface.

i. Generate FSA reports on the monitored cluster: yes

ii. Generate FSA reports in InsightIQ: yes

iii. Result Set Options settings: Default

iv. Take Snapshot: no

v. Result Set Retention settings: Default

2. Use to following information to view and create baseline reports on the boston
cluster.

a. Get baseline reports on the cluster. Use the Live Performance


Reporting page on the InsightIQ interface.


i. View the report on Node Performance for a given period (start


of the course to the current date).

ii. View the report on Node Performance with a zoom level of 6


hours.

iii. Add a filter to report the Node Performance only for node 1.

Question: What is a common use case and value proposition


for administrators to use filters?

Instructor Note: Filters allows you to view or monitor a specific item.


If Node 1 were having performance issues, this live report would allow
you to filter through the metrics and just present the results from Node
1, thereby making it easier and faster to perform troubleshooting or
performance analytics.

iv. Note the different options available.

Question: Which button enables administrators to share their


report view with other administrators?

Instructor Note: PERMALINK

b. Create a custom performance report. Use the PERFORMANCE


REPORTING page on the InsightIQ interface.

i. Performance Report Name: Lab Performance Report Test

ii. Live Performance Reporting: yes

iii. Schedule Performance Report: yes, default values

iv. Performance Module Entity: Active clients


v. Entity Breakout: Protocol

vi. Performance Module Entity: Deduplication Summary


(Physical)

c. Generate cluster capacity reports. Use the Capacity Reporting page


of the InsightIQ interface.

i. View the capacity forecast. Plot the total usage.

ii. View the forecast data.

d. Create a data usage report showing the largest physical files. Use the
File System Analytics page of the InsightIQ interface.

i. View the data usage for any available FSA report.

ii. View the largest cluster file.

iii. View the data properties for any available FSA report.

iv. View the file type with the most files.

3. Use the following information to add the boston cluster to the DataIQ
application.

a. Create an NFS export to mount /ifs and add the DataIQ server as a root client. Use the UNIX sharing (NFS) page of the WebUI or the isi nfs exports command.

i. Root client: DataIQ server (192.168.3.60)

b. Launch the DataIQ application (https://192.168.3.60). Login using


administrator/Dees12345!

c. Add the boston cluster to the DataIQ application. Use the Settings > Data management configuration page.

i. FS type: NFS file system

ii. Volume name: PowerScale

iii. NFS share: 192.168.3.11:/ifs


iv. Scan Frequency: Every day, every 1 minute

d. View the directory details for /ifs. Use the Data Management page.

Question: How many directories are listed under /ifs?

e. View the directory details for /ifs/marketing. Use the Data
Management page.

f. Apply a rule to notify the DataIQ administrator when the
/ifs/marketing directory reaches 250 MB. Use the Data
Management page.
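For reference, here is a minimal CLI sketch for step 3a. It is based on the isi nfs exports command used in the DataIQ walkthrough later in this guide; the IP address is the lab DataIQ server and is an assumption for any other environment.

    # Run as root on any boston node; exports /ifs in the System access zone
    # and grants the DataIQ server root access to the export.
    isi nfs exports create /ifs --root-clients 192.168.3.60

    # List the exports to confirm that the new export was created.
    isi nfs exports list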

4. View the following cluster information using the isi statistics command (example commands follow this list).

a. View the most active clients accessing the cluster.

b. View the average client throughput.

c. View the clients connected to each node.

d. View the overall performance of the cluster and the performance of
each node.

e. View the performance per drive for the node you are logged into.

f. Review the drive information for all nodes.

g. View the reads and writes by class.

h. View the performance for the top active results per node.

i. Show the keys options.

j. View the node uptimes.

k. View the detailed breakdown of cache reads and prefetches.
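If you want to try the tasks before reading the walkthrough, the following sketch lists some of the isi statistics commands used later in this guide. The flags are taken from the walkthrough; output columns and defaults can vary by OneFS release.

    # Most active clients (a) and average client throughput (b).
    isi statistics client
    isi statistics client --sort InAvg

    # Clients connected to each node (c) and cluster/node performance (d).
    isi statistics client --totalby Node
    isi statistics system --nodes all

    # Per-drive performance for the local node (e) and for all nodes (f).
    isi statistics drive --type sas
    isi statistics drive --type sas --nodes all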


Step-by-Step Walk Through: InsightIQ Initial Configuration


1. On the Jumpstation, login as DEES\John (password: Dees12345!).

2. Enable the insightiq user account. The insightiq user is one of the accounts
automatically created by OneFS. By default the account is disabled. You may
either use the WebUI or the CLI to perform this step.

a. WebUI option: Open a Chrome browser and launch the WebUI
(https://192.168.3.11:8080). Login to the cluster using
root/Dees12345!

i. Navigate to the Access, Membership and roles page. Select
the System access zone. On the Users tab, choose FILE:
System from the Providers drop down menu.

ii. Locate the insightiq user and click View/Edit. Click Edit user.
Change the password to Dees12345! and enable the account
as shown. Once done, save the changes and close.


b. CLI option: Open mRemoteNG from the taskbar. Select Bos-node1,
and SSH using root/Dees12345!

i. At the boston-1 prompt, run the isi auth users modify
insightiq --enabled yes --password Dees12345! command.

ii. Verify by running the isi auth users view insightiq command.

3. Connect to InsightIQ and add the boston cluster.

a. Open the Google Chrome web browser or open a new tab in the
current Chrome browser.


b. Connect to the InsightIQ application at http://192.168.3.30, where
192.168.3.30 is the IP address of the InsightIQ virtual appliance.

Note: To connect to the InsightIQ application by host name, you must
first configure a host name for the IP address in the DNS.

c. On the InsightIQ application login page, login using
administrator/Dees12345!

d. The one-time InsightIQ EULA page appears. Review the EULA, click I
have read and agree to..., and then click Submit.


e. The first time you log in, you are prompted to configure the datastore.
For this demonstration, we use a local datastore (as opposed to a
remote datastore) in this implementation of InsightIQ. The local
datastore option is included by default with the InsightIQ virtual
appliance. If you want to place a datastore on a PowerScale storage
cluster or on a NFS-mounted server, you can specify the cluster or
server in this task.

f. Specify the path for the Local Datastore as /datastore and click
Submit.

g. Next, the Add Cluster dialog box appears. Set InsightIQ to monitor
the boston cluster. In this lab, you identify the cluster by an IP
address or a host name; alternatively, you could specify the name
of a SmartConnect zone.

h. Enter the fields as listed and click OK. It will take a few minutes to
complete adding the cluster. Once done, the monitored cluster is
displayed in the SETTINGS page.

• Cluster Address: 192.168.3.11

• Username: insightiq

• Password: Dees12345!

4. Enable and configure the File System Analytics (FSA).

a. On the SETTINGS page, click Configure for the boston cluster.


b. On the Configuration for boston page, select the Enable FSA tab.
Check the Generate FSA reports and View FSA reports in
InsightIQ features in order to view chart information in the Data
Usage and Data Properties views. Click Submit.

c. Next, select the FSA Configuration tab. Under the Job Settings
fields, review the Result Set Options settings:

• Directory chart maximum depth: This number represents the maximum
depth of the directories that you want InsightIQ to analyze. To specify
unlimited depth, type -1.

• File/directory list size: This number represents the maximum number of
top-contributing files and directories to list in the InsightIQ Data Usage
view.

• Path squash depth: This number represents the maximum number of
directory levels to include in a path.

d. Uncheck the Take snapshot field.

e. Review the Result Set Retention settings:


• Maximum result set age in days: This number represents, in days, how
long to retain each data set before automatically deleting it. To retain data
sets indefinitely, type 0. You can manually delete a data set at any time.

• Maximum result set count: This number represents the maximum number
of data sets to retain. To retain an unlimited number of data sets, type 0.

f. Verify that the fields are set as shown and click Submit.

g. Click the DASHBOARD tab to review your monitored cluster.


Step-by-Step Walk Through: Performance Baseline


1. On the Jumpstation, login as DEES\John (password: Dees12345!), open a
Chrome browser and launch InsightIQ (https://192.168.3.30). Login using
administrator/Dees12345!

2. Get baseline reports on the cluster.

a. Navigate to the PERFORMANCE REPORTING page, Live
Performance Reporting tab.

b. Ensure that Node Performance is selected in the Report Type
dropdown list. Note the other options in the list.

c. Select the date range from the course start date to the current day
(For example, if day 1 of the course was Monday 4/20/2020 then the
Date Range is Mon, Apr 20, 2020, 8:00 am through the current date
and time). Click View Report.

d. Scroll down for the chart view of reports on different metrics such as
External Network Throughput Rate, Protocol Operations Rate,
CPU, Disk Throughput, etc. Any of the report views created here
can be scheduled, generated as a PDF, and emailed. The report can
be customized allowing administrators to choose exactly which charts
to include in the report.

e. You may also change the zoom level, move forward and move
backward in time. Set the Zoom Level to 6hrs and scroll down to a
chart. Note the start time. Mouse-over the chart data to note the fine
granularity of information.


f. Next, let us add filters to the reports. Scroll to the top of the page and
click Create/manage data filters. Note that the rules can be used to
create filters that allow administrators to quickly pinpoint and view
data.

g. Click Add Rule and then click the drop-down arrow that appears.
Select Node from the list. In the Match column, select 1 to view Node
1. Click Apply.

h. Scroll through the reports and notice that the reports only show
information pertaining to Node 1 as per our filter selection in the
previous step. This filter allows you to view or monitor a specific item.
If Node 1 were having performance issues, this live report would
allow you to filter through the metrics and just present the results from
Node 1, thereby making it easier and faster to perform
troubleshooting or performance analytics.


i. On the right side of each chart there is an option to download that
particular chart as a CSV file by clicking its link. The charts are all
interactive and allow data detail to be shown by simply hovering over
that part of the chart. Administrators can also click-and-drag to move
the timeline, double-click to zoom in, and double-click while holding
the shift key to zoom out.

j. To share a view with other administrators, scroll back to the top of the
page, and in the top right corner of the page, click PERMALINK. The
URL can be sent to an individual. When that person clicks on it, they
can login to the InsightIQ server and see the exact same view.

3. Create a custom performance report.

a. On the PERFORMANCE REPORTING page, click Create a New
Performance Report.

b. To start with a blank report, click Create from Blank Report. There are
many standard templates in the lower half of this page that you can
use as a starting point for a custom report.


c. Name the report Lab Performance Report Test. Check the Live
Performance Reporting box. This makes the report show up as an
option in Live Performance Reporting.

d. Check the Scheduled Performance Report box. This generates a
report at a specific frequency on a specific cluster. When this is
selected, more scheduling options appear. Use the default Schedule
and Delivery.


e. In the Select a Module for this Position field, select Active Clients.
Note the other options in the field. Select the Protocol radio button.
Click Add another performance module, then select Deduplication
Summary (Physical). Once done, click Finish.

f. This generates reports that show active clients broken down by the
protocols they are using. Confirmation of successfully saving the
report is displayed.


g. To view the report, navigate to the Live Performance Reporting tab.
In the Report Type dropdown list, you can select the report to view it.

4. Generate cluster capacity reports.

a. Navigate to the FILE SYSTEM REPORTING page. Select the
Capacity Reporting tab.

b. Scroll down to the Capacity Forecast section. Under Plot data, keep
the default option of Total Usage checked.

c. Click View Report. There is a lot of data, but it helps
administrators understand what storage is currently consumed and
how the estimated value of additional capacity is calculated. This is
very useful; because the adaptive nature of OneFS is so different from
other storage systems, it can bring a lot of clarity to administrators.

d. Select different options under the Plot Data section, and note how
the presentation of the chart changes.


e. To view Forecast data, hold down the left mouse button and drag it
across the chart for a particular range. This will highlight that area of
the chart. Select the Click to view button to view the forecast data.

5. Create a data usage report showing the largest physical files.

a. On the FILE SYSTEM REPORTING page, select the File System
Analytics tab.

b. In the Report Type drop-down list, select Data Usage. On the FSA
Report menu, select one of the available days. Click View Report.

c. Mouse over the pie chart. Note that any of the slices can be selected
and the detail of the directory is shown on the right. All the columns in
the right section can be selected to change how the data is sorted.


d. To view the largest cluster file, scroll down the page to the Top 1000
Files by table and select Largest Physical Size from the drop-down.

e. Scroll to the top of the page. To explore data properties, from the
Report Type menu, select Data Properties. Then from the FSA
Report menu, select an available day. Click View Report.

f. View the file type that contains the most files. Scroll down to the File
Count by Physical Size chart. Next to Breakout by: select the File
Extension option and note the files by extension are listed.


g. With charts like this, you can see files that haven't been used in a
long time and what sizes they are. Continue to explore the graphs
and data views of your cluster as time allows.


Step-by-Step Walk Through: DataIQ


1. On the Jumpstation, login as DEES\John (password: Dees12345!).

2. Create an NFS export to mount /ifs and add the DataIQ server as a root
client. You may either use the WebUI or CLI to perform this step.

a. WebUI option: Open a Chrome browser and launch the WebUI
(https://192.168.3.11:8080). Login to the cluster using
root/Dees12345!

i. Navigate to the UNIX sharing (NFS) page, NFS exports tab.
Set the current access zone to System. Create a new export
as shown in the graphic.

b. CLI option: Open mRemoteNG, and SSH to any node using
root/Dees12345!


i. At the boston-1 prompt, run the isi nfs exports create
/ifs --root-clients 192.168.3.60 command.
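Optionally, confirm the export from the same SSH session. The isi nfs exports list and view commands are standard OneFS commands; the export ID shown below is hypothetical and will differ in your cluster.

    # List all NFS exports in the current access zone and note the new export ID.
    isi nfs exports list

    # View the details of the export, substituting the ID returned above.
    isi nfs exports view 2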

3. Add the boston cluster to the DataIQ application.

a. Open a new Chrome window or a new tab in an existing Chrome
browser. Launch the DataIQ application (https://192.168.3.60). Login
using administrator/Dees12345!

b. Once logged in, click on Settings and select Data management
configuration. Click Add new volume.


c. Enter the fields as shown to add the PowerScale boston cluster.
Once done, scroll to the bottom and click Create. You will be able to
see PowerScale listed in the Volumes section of the page.


4. View monitoring data on the boston cluster.

a. Navigate to the Data Management page, select Browse. You can
view the directory breakdown chart and details of the subdirectories
for the boston cluster.


b. To get a breakdown on a specific directory, select the directory and
the details only for that directory are reloaded. In this example, the
marketing directory is selected.

c. You can also specify a size limit on a directory so that you are
notified when the directory exceeds that size. In this example, the
administrator is notified when the marketing directory reaches 250 MB.


Step-by-Step Walk Through: isi statistics Command


1. On the Jumpstation, login as DEES\John (password: Dees12345!). Open
mRemoteNG from the taskbar. Select Bos-node1, and SSH using
root/Dees12345!

2. Review the isi statistics options.

a. At the boston-1 prompt, run the isi statistics -h | more command.

b. Identify the options for client, drive, heat, list, protocol, pstat,
query, system, and workload. Run the isi statistics with each
option to view the output and type of data. For example, run the isi
statistics client command.

c. List the valid arguments to the subcommand isi statistics
client by running isi statistics client -h.

3. Use isi statistics to view client statistics: average throughput, client
connections by node, and overall cluster, node, and drive performance.

a. Show the average for client throughput by running the isi
statistics client --sort InAvg command.

b. Show which clients are connected to each node by running
the isi statistics client --totalby Node command.


c. View the overall performance of the cluster and the performance of
each node by running the isi statistics system --nodes
all command.

d. View the performance per drive for the node that your SSH session
is logged into by running the isi statistics drive --type sas
command.

e. Use the isi statistics drive --type sas --nodes all
command to review the drive information for all nodes.

4. Use the isi commands to manipulate additional information about the cluster.

a. To constantly monitor nodes, run the isi statistics system
--format top --nodes all --interval 2 command. This
refreshes the command every 2 seconds. Press CTRL+C to exit out
of system monitoring.

b. Show the read and writes by class using the isi statistics
heat --classes read,write command.

c. Show performance for the top active results per node by running the
isi_for_array "isi statistics drive | head -5"
command.


d. Show the available keys by running the isi statistics list
keys | more command. Press q to exit from the command.

e. Show the node uptimes by running the isi statistics query
current --keys node.uptime --nodes all command.

f. Show the basic consolidated cache stats by running the
isi_cache_stats command.

g. Get the detailed breakdown of cache reads and prefetches by running
the isi_cache_stats -v command.

POWERSCALE HARDWARE CONCEPTS

PARTICIPANT GUIDE
Table of Contents

PowerScale Hardware Concepts .............................................................................. 2


Rebranding - Isilon is now PowerScale ................................................................................ 3
PowerScale Solutions Certification Journey Map ................................................................. 4
Prerequisite Skills ................................................................................................................ 5
Course Objectives................................................................................................................ 6

Installation Engagement............................................................................................ 7
Module Objectives ............................................................................................................... 8
Customer Engagement Responsibility ................................................................................. 9
Physical Tools Required .................................................................................................... 10
Installation and Implementation Phases ............................................................................. 12
SolVe ................................................................................................................................. 13
Safety Precautions and Considerations ............................................................................. 16
Onsite Do's and Don'ts....................................................................................................... 18

Introduction to PowerScale Nodes ......................................................................... 19


Module Objectives ............................................................................................................. 20
PowerScale Node Specifications ....................................................................................... 21
PowerScale Hardware Overview........................................................................................ 23
PowerScale Nodes Overview ............................................................................................. 24
PowerScale Node Types.................................................................................................... 25
Gen 6 Hardware Components............................................................................................ 27
Gen 6.5 Hardware Components......................................................................................... 29
PowerScale Node Tour - Generation 6 .............................................................................. 31
Advantages and Terminologies .......................................................................................... 35

Pre-Engagement Questionnaire ............................................................................. 36


Module Objectives ............................................................................................................. 37
Job Roles ........................................................................................................................... 38
Pre-Engagement Questionnaire ......................................................................................... 40
PEQ Tour ........................................................................................................................... 41

PowerScale Hardware Concepts

Page ii © Copyright 2020 Dell Inc.


Internal and External Networking ........................................................................... 48
Module Objectives ............................................................................................................. 49
PowerScale Networking Architecture ................................................................................. 50
Leaf-Spine Backend Network ............................................................................................. 52
Legacy Connectivity ........................................................................................................... 54
Node Interconnectivity ....................................................................................................... 55
F200 and F600 Network Connectivity ................................................................................ 57
PowerScale Architecture - External Network ...................................................................... 58
Breakout Cables ................................................................................................................ 59
Cabling Considerations ...................................................................................................... 60

Cluster Management Tools ..................................................................................... 61


Module Objectives ............................................................................................................. 62
OneFS Management Tools ................................................................................................ 63
Serial Console Video ......................................................................................................... 64
Configuration Manager ...................................................................................................... 65
isi config ..................................................................................................................... 67
Web Administration Interface (WebUI) ............................................................................... 68
Command Line Interface (CLI) ........................................................................................... 70
CLI Usage .......................................................................................................................... 72
OneFS Application Programming Interface (API) ............................................................... 73
Front Panel Display............................................................................................................ 75

Course Summary ..................................................................................................... 76


Course Summary ............................................................................................................... 77

Appendix ................................................................................................. 79

Glossary .................................................................................................. 89

PowerScale Hardware Concepts

Rebranding - Isilon is now PowerScale

Important: In mid-2020, Isilon launched a new hardware platform, the
F200 and F600, branded as Dell EMC PowerScale. Over time, the
Isilon brand will convert to the new PowerScale branding. In
the meantime, you will continue to see Isilon and PowerScale used
interchangeably, including within this course and any lab activities.
OneFS CLI isi commands, command syntax, and man pages may
have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other
technical documentation, community pages, blog posts, and others
will continue to use the "Isilon" brand.
The rebranding initiative is an iterative process and rebranding all
instances of "Isilon" to "PowerScale" may take some time.


PowerScale Solutions Certification Journey Map

The graphic shows the PowerScale Solutions Expert certification track. You can
leverage the Dell Technologies Proven Professional program to realize your full
potential. It combines technology-focused and role-based training and exams that
cover concepts and principles as well as the full range of Dell Technologies
hardware, software, and solutions. You can accelerate your career and your
organization's capabilities.

PowerScale Solutions (Knowledge and Experience based Exam)

• A. PowerScale Advanced Administration (C, VC)
• B. PowerScale Advanced Disaster Recovery (C, VC)

Specialist tracks: Implementation Specialist, PowerScale; Technology Architect Specialist, PowerScale; Platform Engineer, PowerScale

• Implementation Specialist: A. PowerScale Concepts (ODC), B. PowerScale Administration (C, VC, ODC)
• Technology Architect Specialist: A. PowerScale Concepts (ODC), B. PowerScale Solution Design (ODC)
• Platform Engineer: A. PowerScale Concepts (ODC), B. PowerScale Hardware Concepts (ODC), C. PowerScale Hardware Installation (ODC), D. PowerScale Hardware Maintenance (ODC), E. PowerScale Implementation (ODC)

Information Storage and Management (C, VC, ODC)

(C) - Classroom, (VC) - Virtual Classroom, (ODC) - On Demand Course

For more information, visit: http://dell.com/certification


Prerequisite Skills

To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Current PowerScale hardware portfolio and the OneFS operating system
• PowerScale Concepts
• Isilon InfiniBand to Ethernet Backend Conversion


Course Objectives

After completion of this course, you will be able to:


→ Discuss installation engagement actions.
→ Explain the use of PEQ in implementation.
→ Describe PowerScale nodes.
→ Identify the PowerScale node internal and external networking components.
→ Explain the PowerScale cluster management tools.

Installation Engagement

Module Objectives

After completing this lesson, you will be able to:


• Describe the Customer Engineer and Implementation Specialist roles and
responsibilities.
• Explain the customer engagement procedures.


Customer Engagement Responsibility

There are five steps or phases for acquiring a PowerScale cluster. Each phase has
a separate team that engages with the customer. In the design phase, a Solution
Architect (SA) works with the customer to determine their specific needs and
documents what the solution looks like. After the product purchase, shipment, and
delivery to the customer site, the install and implementation phase of a PowerScale
cluster begins. The result of the SA engagement is the PowerScale Pre-Engagement
Questionnaire (PEQ) that the Customer Engineer (CE) and Implementation
Specialist (IS) use to install and configure the cluster. Before the install phase, all
design decisions have been made.

Note: The Pre-Engagement Questionnaire (PEQ) is now the replacement
for the PowerScale Configuration Guide.


Physical Tools Required

Shown in the graphic are suggested tools for a typical installation.


1: The cables that are required are a single CAT5/CAT6 network patch cord, to
directly connect your laptop to the node, and a USB-to-serial adapter, preferably
one that uses the Prolific 2303 chipset.

2: DB9-to-DB9 Null modem cable (female/female).

3: The software that is required or recommended is:

• Latest recommended OneFS release


• Latest cluster firmware
• Latest drive firmware package
• SolVe Online
• WinSCP - copies files to and from the cluster
• PuTTY - serial console and SSH access to the cluster

4: Basic hand tools: screwdrivers (flat-head and Phillips), wire cutters, anti-static
wrist strap.

5: Cable ties/Velcro strips for cable management and routing.


Resources: Links to download the WinSCP and PuTTY software.
Other software can be downloaded at support.emc.com.


Installation and Implementation Phases

There are three distinct steps in the install and implementation phase: Install, Build,
and Implement.

1: During the install, the components are unpacked and racked, and the switches
are rack mounted. Nodes are connected to the back-end switches, power is
added, and front-end network cables are connected between the cluster and the
customer network. The Customer Engineer or CE performs these tasks.

2: Depending on the role, the CE may perform the cluster build also. The cluster
build is achieved when the system is powered on, the PowerScale Configuration
Wizard has been launched and the information added.

3: In some regions, running the Configuration Wizard may be the sole responsibility
of the IS. After the cluster is built, the IS configures the features of OneFS as
written in the PEQ.


SolVe

Before you arrive at a client site, remember to read the call notes and follow the
processes that are detailed in them. Check if there are any special instructions from
PowerScale Technical Support that you must follow.

SolVe Online is a revised and updated version of SolVe Desktop. It is a knowledge
management-led standard procedure for Dell EMC field, service partners, and
customers.

1: Download the SolVe Desktop application on the system. Go to the Tools and Sites
section, choose SolVe, and select SolVe Desktop Executable. Depending on the
browser used, you may be presented with security dialog boxes. Take the
needed actions to launch the executable.

2:

Click through the Setup wizard and then select Install. Clicking Finish launches
the SolVe Desktop. SolVe must be authorized for use. Select OK. A few general
items1.

1 Notice the dialog in the lower left showing the version. This area also shows the
progress when upgrading and downloading content. Also notice the service topics
in the lower right. Once connected, many of the articles that are shown may not be
relevant to PowerScale. There is a filtering option in the menu to receive the
articles that pertain to a specific product.

3: From the menu, select Authorize and download the list of available products.
Adhere to the instructions shown: leave SolVe open, enter credentials (this uses
SSO), and open the keychain file. Select OK, and then go to Downloads and open
the keychain file.

4: Next are the Release Notes. Review and then close this window. Return to
SolVe. Notice the dialog2 in the lower left indicating that the keychain is loaded,
which means you are authorized and content is updated. Now, scroll down, and click
PowerScale to gather the PowerScale content.

5: Click OK. Again, note the progress in the lower left. Once the download is
complete, you see that the PowerScale image has changed. Tools that are
downloaded appear in the upper left corner of the screen without the green arrow
present.

6: Now you can click PowerScale and view the available procedures. If updates are
available for download, you see an information icon, click the icon, and approve the
updated content download.


2The icons with a green arrow indicate that the user must click the icon in order to
download the tool.


Resources: Partners3 can search through the Dell EMC partner
portal. SolVe Online can be downloaded from the EMC support portal.
Access SolVe Online through the SolVe Online portal. Click here for an
overview on SolVe Desktop/Online.

3The view is dependent upon Partner Type. A service partner sees what an
employee sees, a direct sales partner sees what a customer sees, and an
ASP/ASN partner sees products depending upon credentials.


Safety Precautions and Considerations

When working with PowerScale equipment, it is critical to ensure you adhere to the
following precautions.


1: The AC supply circuit for PowerScale nodes must supply the total current that is
specified on the label of the node. All AC power supply connections must be
properly grounded. Connections that are not directly connected to the branch
circuit, such as nodes that are connected to a power strip, must also be properly
grounded. Do not overload the branch circuit of the AC supply that provides power
to the rack holding PowerScale nodes. The total rack load should not exceed 80%
of the branch circuit rating. For high availability, the left and right sides of any rack
must receive power from separate branch feed circuits. To help protect the system
from sudden increases or decreases in electrical power, use a surge suppressor,
line conditioner, or uninterruptible power supply or UPS.

2: To avoid personal injury or damage to the hardware, always use two people to
lift or move a node or chassis. A Gen 6 chassis can weigh more than 200 lbs. It is
recommended to use a lift to install the components into the rack. If a lift is not
available, you must remove all drive sleds and compute modules from the chassis
before lifting. Even when lifting an empty chassis, never attempt to lift and install
with fewer than two people.

3:

Electrostatic Discharge


4: If you install PowerScale nodes in a rack that is not bolted to the floor, use both
front and side stabilizers. Installing PowerScale nodes in an unbolted rack without
these stabilizers could cause the rack to tip over, potentially resulting in bodily
injury. Use only approved replacement parts and equipment.

5: Beyond precautions of working with electricity, it is also critical to ensure proper
cooling. Proper airflow must be provided to all PowerScale equipment. Gen 6
nodes have an ASHRAE (American Society of Heating, Refrigerating and Air-
Conditioning Engineers) designation of A3. The nodes can operate in environments
with ambient temperatures from five degrees, up to 40° Celsius for limited periods
of time.

6: You can install racks in raised or nonraised floor data centers capable of
supporting that system. It is your responsibility to ensure that data center floor can
support the weight of the system. A fully populated rack with A2000 chassis’
weighs about 3,500 lbs (1,590 kg). If the floor is rated at less than 3,500 lbs, then
additional care and planning must be taken. Some data center floors have different
static load vs. dynamic (rolling) load specifications, and sectional weight and load
point limits. This becomes important while moving preracked solutions around the
data center.

Caution: Failure to adhere to the safety precautions may result in
electric shock, bodily injury, fire, damage to PowerScale systems
equipment, or loss of data. Review the safety precautions and
considerations4 before the installation.

4Failure to heed these warnings may also void the product warranty. Only trained
and qualified personnel should install or replace equipment. Select the button
options for specific information. Always refer to the current Site Preparation and
Planning Guide for proper procedures and environmental information.


Onsite Do's and Don'ts

When onsite, remember to represent Dell EMC and yourself in the best possible
light. Do not change the PEQ without the approval of the design team. Any
approved changes should be meticulously tracked and any appropriate change
control processes should be followed. Remember to bring your documentation and
copies to provide to the customer.

Before you leave a client site, ensure you:

• Test the device function and connectivity by following documented test
procedures in the training material and support guides.
• Escalate any client satisfaction issues or severity level 1 situations to the next
level of support.
• Follow up on any outstanding commitments that are made to the client.
• Contact PowerScale support to report the call status.
• Ensure that the product is registered and that the Install Base Record is
updated.

Tip: To make an Install Base entry, use the IB Status Change page
link.

Introduction to PowerScale Nodes

Module Objectives

After completing this module, you will be able to:


• Describe node naming conventions.
• Identify each PowerScale node series.
• Identify PowerScale node components.


PowerScale Node Specifications

Rack Units
• F200, F600: 1U
• F800, F810; H600; H5600; H500, H400; A200; A2000: 4 nodes in 4U

Nodes per Chassis
• F200, F600: N/A
• F800, F810; H600; H5600; H500, H400; A200; A2000: 4

Per Node Capacity
• F200: 3.84 TB–15.36 TB; F600: 15.36 TB–61.4 TB
• F800: 24 TB–231 TB; F810: 57.5 TB–231 TB
• H600: 18 TB–36 TB
• H5600: 200 TB–240 TB
• H500, H400: 30 TB–180 TB
• A200: 30 TB–180 TB
• A2000: 200 TB–240 TB

Storage Media per Node
• F200: 4 SSDs; F600: 8 NVMe SSDs
• F800, F810: 15 SSDs
• H600: 30 SAS drives
• H5600: 20 SATA drives
• H500, H400: 15 SATA drives
• A200: 15 SATA drives
• A2000: 15 SATA drives

Storage Media Capacity Options
• F200: 960 GB, 1.92 TB, or 3.84 TB SSDs; F600: 1.92 TB, 3.84 TB, or 7.68 TB NVMe SSDs
• F800, F810: 3.84 TB, 7.68 TB, or 15.36 TB SSDs
• H600: 600 GB or 1.2 TB SAS drives
• H5600: 10 TB or 12 TB SATA drives
• H500, H400: 2 TB, 4 TB, 8 TB, or 12 TB SATA drives
• A200: 2 TB, 4 TB, 8 TB, or 12 TB SATA drives
• A2000: 10 TB or 12 TB SATA drives

ECC Memory per Node
• F200: 48 GB or 96 GB; F600: 128 GB, 192 GB, or 384 GB
• F800, F810: 256 GB
• H600: 256 GB
• H5600: 256 GB
• H400: 64 GB; H500: 128 GB
• A200: 16 GB
• A2000: 16 GB

OneFS Compatibility
• F200, F600: 9.0.x
• F800: 8.1.x, 8.2.x, 9.0.x; F810: 8.1.3, 8.2.1, 9.0.x
• H600: 8.1.x, 8.2.x, 9.0.x
• H5600: 8.2.x, 9.0.x
• H500, H400: 8.1.x, 8.2.x, 9.0.x
• A200: 8.1.x, 8.2.x, 9.0.x
• A2000: 8.1.x, 8.2.x, 9.0.x


PowerScale Hardware Overview

Nodes combine to create a cluster. Each cluster behaves as a single, central
storage system. PowerScale is designed for large volumes of unstructured data.
PowerScale has multiple servers that are called nodes.

PowerScale includes all-flash, hybrid, and archive storage systems.

The graphic shows a dual chassis, eight-node Generation 6 (or Gen 6) cluster.

Gen 6 highlights5

Gen 6.5 highlights6

5The Gen 6 platform reduces the data center rack footprints with support for four
nodes in a single 4U chassis. It enables enterprise to take on new and more
demanding unstructured data applications. The Gen 6 can store, manage, and
protect massively large datasets with ease. With the Gen 6, enterprises can gain
new levels of efficiency and achieve faster business outcomes.

6 The ideal use cases for Gen 6.5 (F200 and F600) are remote office/back office,
factory floors, IoT, and retail. Gen 6.5 also targets smaller companies in the core
verticals, and partner solutions, including OEM. The key advantages are low entry
price points and the flexibility to add nodes individually, as opposed to a chassis/2
node minimum for Gen 6.


PowerScale Nodes Overview

Generation 6 (or Gen 6) chassis and Generation 6.5 nodes

The design goal for the PowerScale nodes is to keep the simple ideology of NAS,
provide the agility of the cloud, and the cost of commodity.

Storage nodes are peers.

The Gen 6x family has different offerings that are based on the need for
performance and capacity. As Gen 6 is a modular architecture, you can scale out
compute and capacity separately. OneFS powers all the nodes.


PowerScale Node Types

Click each generation node type to learn more.

Gen 6

The Gen 6 platform provides the following offerings. Previous generations of
PowerScale nodes come in 1U, 2U, and 4U form factors. Gen 6 has a modular
architecture, with four nodes fitting into a single 4U chassis.

• F-Series
• H-Series
• A-series



Gen 6.5

Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.

• F6007
• F2008


7 Mid-level All-Flash Array 1U PE server with 10 (8 usable) x 2.5” drive bays,
enterprise NVMe SSDs (RI, 1DWPD), data reduction standard. Front-end
networking options of 10/25 GbE or 40/100 GbE, and a 100 GbE back end. Also
called Cobalt nodes.

8 Entry-level All-Flash Array 1U PE server with 4 x 3.5” drive bays (with 2.5” drive
trays), enterprise SAS SSDs (RI, 1DWPD), data reduction standard. 10/25 GbE
front-end and back-end networking. Also called Sonic nodes.


Gen 6 Hardware Components

Gen 6 requires a minimum of four nodes to form a cluster. You must add nodes to
the cluster in pairs.

The chassis holds four compute nodes and 20 drive sled slots.

Both compute modules in a node pair power-on immediately when one of the
nodes is connected to a power source.

Gen 6 chassis


1: The compute module bays of two adjacent nodes make up one node pair. Scaling out a
cluster with Gen 6 nodes is done by adding more node pairs.

2: Each Gen 6 node provides two ports for front-end connectivity. The connectivity
options for clients and applications are 10 GbE, 25 GbE, and 40 GbE.

3: Each node can have 1 or 2 SSDs that are used as L3 cache, global namespace
acceleration (GNA), or other SSD strategies.

4: Each Gen 6 node provides two ports for back-end connectivity. A Gen 6 node
supports 10 GbE, 40 GbE, and InfiniBand.

5: Power supply unit - Peer node redundancy: When a compute module power
supply failure takes place, the power supply from the peer node temporarily
provides power to both nodes.


6: Each node has five drive sleds. Depending on the length of the chassis and type
of the drive, each node can handle up to 30 drives or as few as 15.

7: Disks in a sled are all the same type.

8: The sled can be either a short sled or a long sled. The types are:

• Long Sled - four drives of size 3.5"


• Short Sled - three drives of size 3.5"
• Short Sled - three or six drives of size 2.5"

9: The chassis comes in two different depths, the normal depth is about 37 inches
and the deep chassis is about 40 inches.

10: Large journals offer flexibility in determining when data should be moved to the
disk. Each node has a dedicated M.2 vault drive for the journal. A node mirrors
its journal to its peer node. The node writes the journal contents to the vault when
a power loss occurs. A backup battery helps maintain power while data is stored in
the vault.


Gen 6.5 Hardware Components

Gen 6.5 requires a minimum of three nodes to form a cluster. You can add single
nodes to the cluster. The F600 and F200 are a 1U form factor and based on the
R640 architecture.

Graphic shows F200 or F600 node pool.


1: Scaling out an F200 or an F600 node pool only requires adding one node.

2: For front-end connectivity, the F600 uses the PCIe slot 3.

3: Each F200 and F600 node provides two ports for backend connectivity. The
PCIe slot 1 is used.

4: Redundant power supply units - When a power supply fails, the secondary
power supply in the node provides power. Power is supplied to the system equally
from both PSUs when the Hot Spare feature is disabled. Hot Spare is configured
using the iDRAC settings.

5: Disks in a node are all the same type. Each F200 node has four SAS SSDs.

6: The nodes come in two different 1U models, the F200 and F600. You need
nodes of the same type to form a cluster.

7: The F200 front-end connectivity uses the rack network daughter card (rNDC).

8: Each F600 node has 8 NVMe SSDs.


Important: The F600 nodes have a 4-port 1 GbE NIC in the rNDC slot.
OneFS does not support this NIC on the F600.


PowerScale Node Tour - Generation 6

Gen 6 Chassis

All Gen 6 chassis come with the front panel and the front panel display module.
The front panel covers the drive sleds while allowing access to the display.

Movie:
The web version of this content contains a movie.

Script: This demonstration takes a tour of the Gen 6 front panel display, drive
sleds, and an outside look at the node’s compute modules. We’ll focus on
identifying components and indicator function.

Front Panel Display

We’ll start the tour on the front panel display. This allows various administrative
tasks and provides alerts. There are 5 navigation buttons that let the administrator
select each node to administer. There are 4 node status indicators. If a node’s
status light indicator is yellow, it indicates a fault with the corresponding node. The
product badges indicate the types of nodes installed in the chassis. Only two
badges are necessary because nodes can only be installed in matched adjacent
node pairs. The front panel display is hinged to allow access to the drive sleds it
covers and contains LEDs to help the administrator see the status of each node.

Sleds

Now, taking the front bezel off the chassis and you will see the drive sleds for the
nodes. The Gen 6 chassis has 20 total drive sled slots that can be individually
serviced, but only one sled per node can be safely removed at a time. The graphic
shows that each node is paired with 5 drive sleds. The status lights on the face of
the sled indicate whether the sled is currently in service, and whether the sled
contains a failing drive. The service request button informs the node that the sled
needs to be removed, allowing the node to prepare it for removal by moving key
boot information away from drives in that sled. This temporarily suspends the
drives in the sled from the cluster file system, and then spins them down. This is
done to maximize survivability in the event of further failures and protect the cluster
file system from the effect of having several drives temporarily go missing. The do-
not-remove light blinks while the sled is being prepared for removal, and then turns
off when it is ready. We’ll see this here. The sleds come in different types. First,
when configured for nodes that support 3.5" drives, there are 3 drives per sled, as
shown here, equaling 15 drives per node, making 60 drives per chassis. The
second type is a longer sled that holds four 3.5” drives. This is used in the deep
archive, deep rack chassis for A2000 nodes. The long sleds have 20 drives per
node, for up to 80 3.5" drives per chassis. In the 3.5" drive sleds, the yellow LED
drive fault lights are on the paddle cards attached to the drives, and they are also
visible through the cover of the drive sled as indicated here. The long sled has 4
LED viewing locations. The third type of sled applies to nodes supporting 2.5"
drives. The 2.5” drive sleds can have 3 or 6 drives per sled (as shown), 15 or 30
drives per node, making 60 or 120 drives per fully populated chassis. Internally to
the 2.5" sled, there are individual fault lights for each drive. The yellow LED
associated with each drive is visible through holes in the top cover of the sled so
that you can see which drive needs replacement. The LED will stay on for about 10
minutes while the sled is out of the chassis.

Compute

When we look at the back, we see the four nodes’ compute modules in the chassis’
compute bays. We also see the terra cotta colored release lever on each compute
module, secured by a thumb screw. As shown, compute module bay 1 and 2 make
up one node pair and bay 3 and 4 make up the other node pair. In the event of a
compute module power supply failure, the power supply from the peer compute
module in the node pair will temporarily provide power to both nodes. Let’s move to
the upper right of a compute module. The top light is a blue power LED and below
that is an amber, fault LED. Each compute module has a ‘DO NOT REMOVE’
indicator light which is shaped like a raised hand with a line through it. To service
the compute module in question, shut down the affected node and wait until the
‘DO NOT REMOVE’ light goes out. Then it is safe to remove and service the unit in
question. The uHDMI port is used for factory debugging. The PCIE card on the
right is for external network connectivity and the left PCIE card is for internal
network connectivity. The compute module has a 1GbE management port, and the
DB9 serial console port. Each compute module has either a 1100W dual-voltage
(low and medium compute) or a 1450W high-line (240V) only (high and ultra-
compute) power supply unit. If high-line only nodes are being installed in a low-line
(120V) only environment, two 1U rack-mountable step-up transformers are required
for each Gen 6 chassis. Always keep in mind that Gen 6 nodes do not have power
buttons - both compute modules in a node pair will power on immediately when one
is connected to a live power source. There are also status indicator lights such as
the PSU fault light. All nodes have an ASHRAE (American Society of Heating,
Refrigerating and Air-conditioning Engineers) designation of A3, which enables the


nodes to operate in environments with ambient temperatures from 5 up to 40
degrees Celsius for limited periods of time. In closing, there are also 2 SSD bays on
each compute module, one or both of which are populated with SSDs (depending
on node configuration) that are used as L3 cache. This concludes the tour of the
Isilon Gen 6 front panel display, drive sleds, and an outside look at the node’s
compute modules.

Inside Gen 6 Node

This hardware tour will take a deeper look inside the node’s compute module.

Movie:
The web version of this content contains a movie.

Script: This demonstration takes a tour of the inside of the Gen 6 compute module.

First, let’s take a look at the back of the chassis. The chassis can have two or four
compute modules. Remember that a node is a ¼ of the chassis and consists of a
compute module and five drive sleds. Each node pairs with a peer node to form a
node pair. Shown here, nodes three and four form a node pair. Let’s start by
removing the node’s compute module to get a look inside. This demonstration does
not use a powered system. This tour does not highlight the steps for removing
components. Remember to always follow the proper removal and install
procedures from the SolVe Desktop.

WARNING: Only qualified Dell EMC personnel are allowed to open compute
nodes.

Let’s remove the node’s lid. This can be a bit tricky on the first time. Pull the blue
release handle without pressing down on the lid. Pressing down on the lid while
trying to open will keep the node lid from popping up. The lid portion of the compute
module holds the motherboard, CPU and RAM. There are two different
motherboard designs to accommodate different CPU types; the performance-based
Broadwell-EP or the cost optimized Broadwell-DE. Shown here is the Broadwell-DE
based board that the H400, A200, and A2000 use. Note the position of the four
DIMMs and their slot numbering. Here is the Broadwell-EP based board that the
F800, H600 and H500 use. Note the position of the four DIMMs and their slot
numbering. The DIMMs are field replaceable units. The CPU is not. Due to the
density and positioning of motherboard components around the DIMM slots,
damage to the motherboard is possible if care is not taken while removing and
installing DIMM modules.

Let’s turn to the lower portion of the compute module. First, we see the fan module.
This is a replaceable unit. Shown is the release lever for the fans.

The riser card, on the right side of the compute module, contains the PCIE card
slots, the NVRAM vault battery, and the M.2 card containing the NVRAM vault.
Let’s remove this to get a closer look. Removing the riser card can be tricky the first
time. Note the two blue tabs for removing the HBA riser, a sliding tab at the back
and a fixed tab at the front. At the same time, push the sliding tab in the direction of
the arrow on the tab and free the front end by pulling the riser away from the
locking pin on the side of the chassis with the fixed tab. Lift the tabs to unseat the
riser and pull it straight up. Try this at least once before going onsite to replace a
component. Here are the two PCIe slots and the ‘Pelican’ slot. They are x4 or x8
depending on the performance level of the node. The internal NIC for
communication between nodes is the PCI card shown on the left, the external PCI
card is on the right. The external NIC is used for client and application access.
Depending on the performance level of the node, the external NIC may either be a
full-size PCIe card facing left, or a ‘Pelican’ card connected to the smaller
proprietary slot between the two PCIe slots and facing right.

Next is the battery. The backup battery maintains power to the compute node while
journal data is being stored in the M.2 vault during an unexpected power loss
event. Note that because the riser card and the battery are paired, if the battery
needs to be replaced, it is replaced together with the riser card. Lastly, as seen
here, the M.2 vault disk is located under the battery. The M.2 vault disk is also a
field replaceable unit. This concludes the inside tour. Remember to review the
documentation on the SolVe Desktop for proper removal and replacement of the
node’s compute module components.

PowerScale Hardware Concepts

Page 34 © Copyright 2020 Dell Inc.


Introduction to PowerScale Nodes

Advantages and Terminologies

Generation 6 4U Node

Gen 6 provides flexibility. From a customer perspective, it allows for easier planning. Each chassis requires 4U in the rack, with the same cabling and a higher storage density in a smaller data center footprint. Note that this also means there is four times as much cabling across a Gen 6 4U chassis populated with four nodes. Customers can select the ideal storage-to-compute ratio for their workflow.

New PowerScale F600 nodes with full NVMe support deliver massive performance
in a compact form factor. OneFS delivers up to 80% storage utilization for
maximum storage efficiency. Data deduplication can further reduce storage
requirements by up to 30% and inline data compression on the F200, F600, F810
all-flash platforms, and the H5600 hybrid platform can reduce the space that is
consumed.
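As a purely illustrative calculation based on the figures above (actual results vary with the data set and workload):

100 TB raw capacity x 80% storage utilization = 80 TB of usable capacity
100 TB of written data reduced by up to 30% through deduplication = as little as 70 TB consumed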

Generation 6 Terminologies

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 35


Pre-Engagement Questionnaire

Pre-Engagement Questionnaire

PowerScale Hardware Concepts

Page 36 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Module Objectives

After completing this module, you will be able to:


• Identify the job roles of people involved in the implementation.
• Explain the use of PEQ in implementation.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 37


Pre-Engagement Questionnaire

Job Roles

There are four job roles that are associated with PowerScale hardware installation
and implementation process.

1: Customer Engineer (CE):

• Performs hardware installation and hardware upgrade services


• Creates PowerScale cluster
• Verifies that hardware installation is successful

2: Implementation Specialist (IS):

• Has knowledge of storage system


• Implements cluster

3: Project Manager (PM):

• First point of contact for customers in a service engagement


• Builds delivery schedule
• Coordinates services delivery with customer and service personnel
• Monitors progress of service delivery

4: Solutions Architect (SA):

PowerScale Hardware Concepts

Page 38 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

• Develops implementation plan


• Designs configuration

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 39


Pre-Engagement Questionnaire

Pre-Engagement Questionnaire

The PowerScale PEQ is the replacement for the Configuration Guide. The stated
purpose of the PEQ is to document the Professional Services project installation
parameters and to facilitate the communication between the responsible resources.
The PEQ incorporates the process workflow and eases hand-off from Pre-Sales to
Delivery. It is a delivery document that also benefits other roles and helps define roles and responsibilities; it is not the same as the Qualifier.


PowerScale Hardware Concepts

Page 40 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

PEQ Tour

The PEQ is an Excel spreadsheet consisting of eight tabs: Cover, Engagement Details (SE), Solution Diagram (SE), Checklist (PM), Project Details (PM), Hardware, Cluster, and Reference.


Cover

To start the application, open the PEQ spreadsheet tool. The first tab that is
displayed is the Cover tab. The Cover tab contains the creation date and the
customer name.

Engagement Details (SE)

Begin filling out the document from upper left to bottom right. The SE shares the customer contact information and describes at a high level what the project team is expected to do at each site, using the provided drop-down menus. The SE also provides general customer environment information, such as operating systems in use, backup applications and protocols, and any specialty licenses sold. Accurate and complete customer information is important to a smooth and efficient planning process.

Solution Diagram (SE)

On the Solution Diagram tab, the SE provides the solution diagrams or topologies
that are used during the presales cycle.

PowerScale Hardware Concepts

Page 42 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Checklist (PM)

The Project Manager begins with the Engagement Checklist tab to help them plan
project tasks with a great deal of granularity.

Project Details (PM)

It is also the responsibility of the Project Manager to maintain the Data Center readiness information on the Project Details tab. Here the PM focuses on verifying that each site has met the power, cooling, networking, and other prerequisites before scheduling resources. The PM should also complete the Administrative Details section with team member information, project ID details, and an optional timeline.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 43


Pre-Engagement Questionnaire

Hardware

The Hardware tab shows the physical connection parameters and some basic
logical parameters necessary to “stand up” the cluster. When multiple node types
are selected and defined on the Engagement Details tab, the Cluster Details
section includes a complete listing of the extended Node Details and Front-End
Switch details.

PowerScale Hardware Concepts

Page 44 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire

Cluster

The Cluster tab represents a single cluster and its logical configuration. Each
section on the Cluster Tab has a designated number (Yellow Chevron). The
numbers represent the listed priority of that section and should be completed in
order starting with number one. This tab is split into sections that describe different
features. These sections are enabled through the questions in the Licensing \ Features
section.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 45


Pre-Engagement Questionnaire

Reference

The Reference tab provides frequently used content, cross-references, checklists, and other items that assist the delivery resources throughout the delivery engagement. It is intended as a quick reference, not as the authoritative source of that information.

PowerScale Hardware Concepts

Page 46 © Copyright 2020 Dell Inc.


Pre-Engagement Questionnaire


Note: The Solution Architect (SA) typically fills out the PEQ.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 47


Internal and External Networking

Internal and External Networking

PowerScale Hardware Concepts

Page 48 © Copyright 2020 Dell Inc.


Internal and External Networking

Module Objectives

After completing this module, you will be able to:


• Explain the significance of internal and external networks in clusters.
• Describe InfiniBand switches and cables and identify Ethernet cabling.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 49


Internal and External Networking

PowerScale Networking Architecture

OneFS supports standard network communication protocols IPv4 and IPv6.


PowerScale nodes include several external Ethernet connection options, providing
flexibility for a wide variety of network configurations9.

Network: There are two types of networks that are associated with a cluster:
internal and external.

Front-end, External Network

F200 cluster showing supported front-end protocols: the client/application layer connects to the PowerScale storage layer over Ethernet using NFS, SMB, S3, HTTP, FTP, HDFS, and SWIFT, while a separate Ethernet layer carries the backend (PowerScale internal) communication.

Clients connect to the cluster using Ethernet connections10 that are available on all
nodes.

9 In general, keeping the network configuration simple provides the best results with
the lowest amount of administrative overhead. OneFS offers network provisioning
rules to automate the configuration of additional nodes as clusters grow.

10Because each node provides its own Ethernet ports, the amount of network
bandwidth available to the cluster scales linearly.

PowerScale Hardware Concepts

Page 50 © Copyright 2020 Dell Inc.


Internal and External Networking

The complete cluster combines hardware, software, and networks, as shown in the following view:

Back-end, Internal Network


OneFS supports a single cluster11 on the internal network. This back-end network,
which is configured with redundant switches for high availability, acts as the
backplane for the cluster.12

11 All intra-node communication in a cluster is performed across a dedicated


backend network, comprising either 10 or 40 GbE Ethernet, or low-latency QDR
InfiniBand (IB).

12 This enables each node to act as a contributor in the cluster and isolates node-to-node communication to a private, high-speed, low-latency network. This back-end network utilizes Internet Protocol (IP) for node-to-node communication.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 51


Internal and External Networking

Leaf-Spine Backend Network

The Gen 6x back-end topology in OneFS 8.2 and later supports scaling a
PowerScale cluster to 252 nodes. See the participant guide for more details.

Leaf-Spine topology for a PowerScale cluster with up to 88 nodes: Dell Z9100 switches are used, with 22 downlinks per leaf (40 Gb ports), 10 uplinks per leaf (100 Gb ports), and 27 uplinks per spine switch. Four leaf switches support a maximum of 88 nodes, and the topology scales out to a maximum of 132 nodes with 2 spine switches.

Leaf-Spine is a two-level hierarchy where nodes connect to leaf switches, and leaf switches connect to spine switches. Leaf switches do not connect to one another, and spine switches do not connect to one another. Each leaf switch connects with each spine switch, and all leaf switches have the same number of uplinks to the spine switches.

The new topology uses the maximum internal bandwidth and 32-port count of Dell Z9100 switches. When planning for growth, F800 and H600 nodes should connect over 40 GbE ports, whereas A200 nodes may connect using 4x1 breakout cables. Scale planning enables nondisruptive upgrades, meaning that as nodes are added, no recabling of the backend network is required. Ideally, plan for three years of growth. The table shows the switch requirements as the cluster scales. In the table, Max Nodes indicates that each node is connected to a leaf switch using a 40 GbE port.
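A back-of-the-envelope reading of the figures above (always confirm the supported counts against the current Leaf-Spine planning documentation):

22 node downlinks per leaf x 4 leaf switches = 88 nodes maximum
22 node downlinks per leaf x 6 leaf switches = 132 nodes maximum (the 2-spine-switch configuration)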

PowerScale Hardware Concepts

Page 52 © Copyright 2020 Dell Inc.


Internal and External Networking

Installing a New Leaf-Spine Cluster

If you install a new cluster or scale a cluster to include 32 performance nodes


(F800, H600, and H500 models) with 40 GbE back-end ports, or more than 96
archive nodes (H400, A200, A2000 models) with 10 GbE back-end ports, use the
Leaf-Spine topology to configure the back-end network.

To install a new Leaf-Spine cluster, follow this workflow.


1. Install the switch rails.
2. Install the Spine switches followed by the Leaf switches.
3. Cable the leaf switches to the spine switches and then to the nodes for both the
networks.
4. Make sure the switch operating system version is 10.4.1.4P4 or later.

Important: Do not connect Leaf to Leaf or Spine to Spine switches.

5. Create a cluster by using any four nodes on the first Leaf switch.
6. Confirm that OneFS 8.2 or later is installed on the cluster.
7. Add the remaining nodes to the cluster that was created in step 5.
8. Confirm the cluster installation by checking the CELOG events (example commands follow the note below).

Important: The events that are reported can be related to the links between Leaf switches and nodes (downlinks) or between Leaf and Spine switches (uplinks). Incorrect cabling is also reported in events.
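For steps 6 and 8, the checks can be run from the OneFS CLI on any node. This is a minimal sketch; the exact command output varies by OneFS release:

isi version              # confirm that OneFS 8.2 or later is installed (step 6)
isi event events list    # review CELOG events for cabling or link issues (step 8)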

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 53


Internal and External Networking

Legacy Connectivity

Three types of InfiniBand cable are used with currently deployed clusters. Older nodes and switches, which run at DDR or SDR speeds, use the legacy CX4 connector. In mixed environments (QDR nodes and a DDR switch, or conversely), a hybrid IB cable is used. This cable has a CX4 connector on one end and a QSFP connector on the other. However, QDR nodes are incompatible with SDR switches. The connector types on each cable identify the cable. As the graphic shows, the combination of the node type and the InfiniBand switch port type determines the correct cable type.

PowerScale Hardware Concepts

Page 54 © Copyright 2020 Dell Inc.


Internal and External Networking

Node Interconnectivity

1: Backend ports int-a and int-b. The int-b port is the upper port. Gen 6 backend
ports are identical for InfiniBand and Ethernet and cannot be identified by looking at
the node. If Gen 6 nodes are integrated in a Gen 5 or earlier cluster, the backend
will use InfiniBand. Note that there is a procedure to convert an InfiniBand backend
to Ethernet if the cluster no longer has pre-Gen 6 nodes.

2: PowerScale nodes with different backend speeds can connect to the same
backend switch and not see any performance issues. For example, an environment
has a mixed cluster where A200 nodes have 10 GbE backend ports and H600
nodes have 40 GbE backend ports. Both node types can connect to a 40 GbE
switch without affecting the performance of other nodes on the switch. The 40 GbE
switch provides 40 GbE to the H600 nodes and 10 GbE to the A200 nodes.

3: There are two speeds for the backend Ethernet switches, 10 GbE and 40 GbE.
Some nodes, such as archival nodes, might not need to use all of a 10 GbE port
bandwidth while other workflows might need the full utilization of the 40 GbE port
bandwidth. The Ethernet performance is comparable to InfiniBand so there should
be no performance bottlenecks with mixed performance nodes in a single cluster.
Administrators should not see any performance differences if moving from
InfiniBand to Ethernet.

4: Gen 6.5 backend ports use the PCIe slot.

Gen 6 nodes can use either an InfiniBand or Ethernet switch on the backend.
InfiniBand was designed as a high-speed interconnect for high-performance
computing, and Ethernet provides the flexibility and high speeds that sufficiently
support the PowerScale internal communications.

Gen 6.5 only supports Ethernet. All new, PowerScale clusters support Ethernet
only.

Warning: With Gen 6, do not plug a backend Ethernet connection into a backend InfiniBand NIC. If you plug Ethernet into the InfiniBand NIC, it switches the backend NIC from one mode to the other, and the NIC will not come back to the same state.

PowerScale Hardware Concepts

Page 56 © Copyright 2020 Dell Inc.


Internal and External Networking

F200 and F600 Network Connectivity

The graphic shows a closer look at the external and internal connectivity. Slot 1 is
used for backend communication on both the F200 and F600. Slot 3 is used for the
F600 2x 25 GbE or 2x 100 GbE front-end network connections. The rack network
daughter card (rNDC) is used for the F200 2x 25 GbE front-end network
connections.

The F200 and F600 have no dedicated management port.

PCIe slot 1 is used for all backend (BE) communication, PCIe slot 3 is used for the F600 front end (FE), and the rNDC is used for the F200 front end.

Note: The graphic shows the R640 and does not represent the F200 and F600 PCIe and rNDC configuration.

Tip: Interfaces are named "25gige-N" or "100gige-N." Interface names may not indicate the link speed. For example, the interface name for a NIC that is running at a lower speed, such as 10 Gb, does not change to "10gige-1." You can use ifconfig to check the link speed.
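For example, a quick check from the OneFS CLI (a sketch only; the interface name and exact output format depend on the node type and OneFS release):

ifconfig 25gige-1 | grep media    # the media line reports the negotiated link speed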

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 57


Internal and External Networking

PowerScale Architecture - External Network

Eight node Gen 6 cluster showing supported protocols.

The external network provides connectivity for clients over standard file-based
protocols. It supports link aggregation, and network scalability is provided through
software in OneFS. A Gen 6 node has 2 front-end ports - 10 GigE, 25 GigE, or 40 GigE - and one 1 GigE port for management. Gen 6.5 nodes have 2 front-end ports - 10 GigE, 25 GigE, or 100 GigE.

In the event of a Network Interface Controller (NIC) or connection failure, clients do not lose their connection to the cluster. For stateful protocols, such as SMB and NFSv4, this prevents client-side timeouts and unintended reconnection to another node in the cluster. Instead, clients maintain their connection to the logical interface and continue operating normally. OneFS supports Continuous Availability (CA) for stateful protocols such as SMB and NFSv4.

PowerScale Hardware Concepts

Page 58 © Copyright 2020 Dell Inc.


Internal and External Networking

Breakout Cables

Backend breakout cables

The 40 GbE and 100 GbE connections are 4 individual lines of 10 GbE and 25 GbE, respectively. Most switches support breaking out a QSFP port into four SFP ports using a 1:4 breakout cable. On the backend, the breakout is configured automatically when the switch detects the cable type as a breakout cable. The front end is often configured manually on a per-port basis.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 59


Internal and External Networking

Cabling Considerations

Listed here are some general cabling considerations.

• On a Gen 6 chassis, ensure that each member of a node pair is connected to a different power source13.
• Before creating the cluster, do a quick cable inspection.
• The front-end, client-facing network connections should be evenly distributed across patch panels in the server room. Distributing the connections may avoid single points of failure.
• Use care when handling and looping copper InfiniBand cables, and any type of optical network cables. Bending or mishandling cables can result in damaged and unusable cables.
• To prevent damage, do not coil cables to less than 10 inches in diameter. Never bend cables beyond their recommended bend radius.

13 The use of Y cables is not recommended because the power supply of a node is no longer redundant if all power is supplied by the same cable. Verify that all cables are firmly seated and that the wire bails are firmly in place to keep the power cables seated.

PowerScale Hardware Concepts

Page 60 © Copyright 2020 Dell Inc.


Cluster Management Tools

Cluster Management Tools

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 61


Cluster Management Tools

Module Objectives

After completing this module, you will be able to:


• Identify tools used to manage PowerScale.

PowerScale Hardware Concepts

Page 62 © Copyright 2020 Dell Inc.


Cluster Management Tools

OneFS Management Tools

The OneFS management interface is used to perform various administrative and


management tasks on the PowerScale cluster and nodes. Management capabilities
vary based on which interface is used. The different types of management
interfaces in OneFS are:

• Serial Console
• Web Administration Interface (WebUI)
• Command Line Interface (CLI).
• OneFS Application Programming Interface (API)
• Front Panel Display

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 63


Cluster Management Tools

Serial Console Video

Movie:

The web version of this content contains a movie.

Link:
https://edutube.emc.com/Player.aspx?vno=KjBgi9m8LmZLw58klDHmOA==&autoplay=true

Script: Four options are available for managing the cluster. The web
administration interface (WebUI), the command-line interface (CLI), the serial
console, or the platform application programming interface (PAPI), also called the
OneFS API. The first management interface that you may use is a serial console to
node 1. A serial connection using a terminal emulator, such as PuTTY, is used to
initially configure the cluster. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be for troubleshooting, site rules, a network outage, and so
on. Shown are the terminal emulator settings.

The configuration Wizard automatically starts when a node is first powered on or


reformatted. If the Wizard starts, the menu and prompt are displayed as shown.
Choosing option 1 steps you through the process of creating a cluster. Option 2 will
exit the Wizard after the node finishes joining the cluster. After completing the
configuration Wizard, running the isi config command enables you to change
the configuration settings.

PowerScale Hardware Concepts

Page 64 © Copyright 2020 Dell Inc.


Cluster Management Tools

Configuration Manager

For initial configuration, access the CLI by establishing a serial connection to the
node designated as node 1. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be for troubleshooting, site rules, a network outage, and so on.

Serial Port14

Configure the terminal emulator utility to use the following settings:

• Transfer rate = 115,200 bps
• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware

14 The serial port is usually a male DB9 connector. This port is called the service port. Connect a serial null modem cable between a serial port of a local client, such as a laptop, and the node service port. Connect to the node designated as node 1. As most laptops today no longer have serial ports, you might need to use a USB-to-serial converter. On the local client, launch a serial terminal emulator.
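As an illustration only, a laptop running Linux or macOS can apply the same settings with a terminal emulator such as screen; the device path /dev/ttyUSB0 is an assumption and depends on the USB-to-serial adapter in use (PuTTY users enter the same values under its Serial settings):

screen /dev/ttyUSB0 115200    # 115,200 bps; set 8 data bits, no parity, 1 stop bit, hardware flow control in the emulator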

More Information on Command Prompt15

15Either a command prompt or a Configuration Wizard prompt appears. The


command prompt displays the cluster name, a dash (-), a node number, and either
a hash (#) symbol or a percent (%) sign. If you log in as the root user, a hash (#)
prompt appears. If you log in as another user, a % symbol appears. For example,
Cluster-1# or Cluster-1%. The prompt is the typical prompt that is found on most
UNIX and Linux systems. When a node first powers on or reformats, the
Configuration Wizard automatically starts. If the Configuration Wizard starts, the
prompt displays as shown. There are four options: Create a new cluster, join an
existing cluster, exit wizard and configure manually, and Reboot into SmartLock
Compliance mode. Choosing option 1 creates a cluster, while option 2 joins the
node to an existing cluster. If you choose option 1, the Configuration Wizard steps
you through the process of creating a cluster. If you choose option 2, the
Configuration Wizard ends after the node finishes joining the cluster. You can then
configure the cluster using the WebUI or the CLI.

PowerScale Hardware Concepts

Page 66 © Copyright 2020 Dell Inc.


Cluster Management Tools

isi config

In the isi config console, you can edit Wizard settings. Common commands include shutdown, status, and name. The prompt changes to >>>, and other "isi" commands are not available while in the configuration console.

The isi config command, pronounced "izzy config," opens the configuration
console. The console contains configured settings from the time the Wizard started
running.

Use the console to change initial configuration settings. When in the isi config
console, other configuration commands are unavailable. The exit command is
used to go back to the default CLI.
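A short, hedged example of an isi config session (the cluster name and prompt are placeholders; only commands that the console documents, such as status, name, shutdown, and exit, are shown):

Cluster-1# isi config    # open the configuration console; the prompt changes to >>>
>>> status               # display the current configuration status
>>> name                 # view (or change) the cluster name
>>> exit                 # leave the console and return to the default CLI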

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 67


Cluster Management Tools

Web Administration Interface (WebUI)

The WebUI login page shows the OneFS version. The user must have logon privileges, and you can connect to any node in the cluster over HTTPS on port 8080. Multiple browsers are supported.

The WebUI is a graphical interface that is used to manage the cluster.

The WebUI requires at least one IP address that is configured16 on one of the
external Ethernet ports present in one of the nodes.

16Either a command prompt or a Configuration Wizard prompt appears. The


command prompt displays the cluster name, a dash (-), a node number, and either
a hash (#) symbol or a percent (%) sign. If you log in as the root user, it will be a #
symbol. If you log in as another user, it will be a % symbol. For example, Cluster-1#
or Cluster-1%. This prompt is the typical prompt that is found on most UNIX and
Linux systems. When a node first powers on or reformats, the Configuration Wizard
automatically starts. If the Configuration Wizard starts, the prompt display is shown.
There are four options: Create a cluster, join an existing cluster, exit wizard and
configure manually, and Reboot into SmartLock Compliance mode. Choosing
option 1 creates a cluster, while option 2 joins the node to an existing cluster. If you choose option 1, the Configuration Wizard steps you through the process of creating a cluster. If you choose option 2, the Configuration Wizard ends after the node finishes joining the cluster. You can then configure the cluster using the WebUI or the CLI.

PowerScale Hardware Concepts

Page 68 © Copyright 2020 Dell Inc.


Cluster Management Tools

Example browser URLs:


• https://192.168.3.11:8080
• https://engineering.dees.lab:8080

To access the web administration interface from another computer, an Internet


browser is used to connect to port 8080. The user must log in using the root
account, admin account, or an account with log-on privileges. After opening the
web administration interface, there is a four-hour login timeout. In OneFS 8.2.0 and
later, the WebUI uses the HTML5 doc type, meaning it is HTML5 compliant in the
strictest sense, but does not use any HTML specific features. Previous versions of
OneFS require Flash.


PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 69


Cluster Management Tools

Command Line Interface (CLI)

The CLI can be accessed in two ways:

• Out-of-band17
• In-band18

Both methods are done using any SSH client such as OpenSSH or PuTTY. Access
to the interface changes based on the assigned privileges.

OneFS commands are code that is built on top of the UNIX environment and are specific to OneFS management. You can use commands together in compound command structures, combining UNIX commands with customer-facing and internal commands.
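For example, a OneFS command can be piped through standard UNIX utilities. This is a sketch only; the exact fields in the output differ between OneFS releases:

isi status | grep -i health    # show cluster status and filter for the health summary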


17Accessed using a serial cable that is connected to the serial port on the back of
each node. As many laptops no longer have a serial port, a USB-serial port adapter
may be needed.

18 Accessed using external IP address that is configured for the cluster.

PowerScale Hardware Concepts

Page 70 © Copyright 2020 Dell Inc.


Cluster Management Tools

1: The default shell is zsh.

2: OneFS is built upon FreeBSD, enabling use of UNIX-based commands, such as


cat, ls, and chmod. Every node runs OneFS, including the many FreeBSD kernel
and system utilities.

3: Connections make use of Ethernet addresses.

4: OneFS supports management isi commands. Not all administrative


functionalities are available using the CLI.

5: The CLI command use includes the capability to customize the base command with options, also known as switches and flags. A single command with multiple options results in many different permutations, and each combination results in different actions being performed.

6: The CLI is a scriptable interface. The UNIX shell enables scripting and execution
of many UNIX and OneFS commands.

Caution: Follow guidelines and procedures to implement scripts appropriately so that they do not interfere with regular cluster operations. Improper use of a command or using the wrong command can be potentially dangerous to the cluster, the node, or customer data.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 71


Cluster Management Tools

CLI Usage

The help output shows that you can use common UNIX tools, lists the privileges that are needed, shows the command syntax and usage, and explains each option.

The man isi or isi --help command is an important command for a new
administrator. These commands provide an explanation of the available isi
commands and command options. You can also view a basic description of any
command and its available options by typing the -h option after the command.
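For example (output not shown; the available commands and options depend on the OneFS version and on your assigned privileges):

man isi          # manual page for the isi command set
isi --help       # list the available isi commands
isi status -h    # syntax, usage, and options for a single command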

PowerScale Hardware Concepts

Page 72 © Copyright 2020 Dell Inc.


Cluster Management Tools

OneFS Application Programming Interface (API)

The OneFS Application Programming Interface, or OneFS API, is a secure and


scriptable19 interface for managing the cluster.

HTTPS is used by the API to encrypt communications.

OneFS applies authentication and RBAC controls to API commands to ensure that
only authorized commands are run.

The example shows a description for https://:8080/platform/quota/quotas1.

1: PAPI conforms to the REST architecture. An understanding of HTTP/1.1 (RFC


2616) is required to use the API.

2: Structured like URLs that execute on a browser that supports authentication

19A chief benefit of PAPI is its scripting simplicity, enabling customers to automate
their storage administration.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 73


Cluster Management Tools

3: Some commands are not PAPI aware, meaning that RBAC roles do not apply.
These commands are internal, low-level commands that are available to
administrators through the CLI. Commands not PAPI aware: isi config, isi
get, isi set, and isi services

4: The number indicates the PAPI version. If an upgrade introduces a new version
of PAPI, some backward compatibility ensures that there is a grace period for old
scripts to be rewritten.
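As a hedged illustration of the URL structure only (the cluster address and credentials are placeholders, and the namespace version number depends on the OneFS release):

# List SmartQuotas quotas through PAPI over HTTPS on port 8080
curl -k -u admin https://cluster.example.com:8080/platform/1/quota/quotas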

PowerScale Hardware Concepts

Page 74 © Copyright 2020 Dell Inc.


Cluster Management Tools

Front Panel Display

Front Panel Display of a Gen 6 chassis.

The Gen 6 front panel display is an LCD screen with five buttons that are used for
basic administration tasks20.

The Gen 6.5 front panel has limited functionality21 compared to the Gen 6.

20Some of them include adding the node to a cluster, checking node or drive
status, events, cluster details, capacity, IP and MAC addresses.

21 You can join a node to a cluster, and the panel displays the node name after the node has joined the cluster.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 75


Course Summary

Course Summary

PowerScale Hardware Concepts

Page 76 © Copyright 2020 Dell Inc.


Course Summary

Course Summary

Now that you have completed this course, you should be able to:
→ Discuss installation engagement actions.
→ Explain the use of PEQ in implementation.
→ Describe PowerScale nodes.
→ Identify the PowerScale node internal and external networking components.
→ Explain the PowerScale cluster management tools.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 77


Appendix

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 79


Appendix

Electrostatic Discharge
Electrostatic Discharge is a major cause of damage to electronic components and
potentially dangerous to the installer. To avoid ESD damage, review ESD
procedures before arriving at the customer site and adhere to the precautions when
onsite.

Clean Work Area: Clear the work area of items that naturally build up electrostatic discharge.

Antistatic Packaging: Leave components in antistatic packaging until it is time to install them.

PowerScale Hardware Concepts

Page 80 © Copyright 2020 Dell Inc.


Appendix

No ESD Kit Available:


• Before touching
component, put one
hand firmly on bare
metal surface.
• After removing
component from
antistatic bag, do
NOT move around
room or touch
furnishings,
personnel, or
surfaces.
• If you must move
around or touch
something, first put
component back in
antistatic bag

ESD Kit: Always use an ESD kit when handling components.

Don't Move: Minimize movement to avoid buildup of electrostatic discharge.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 81


Appendix

PowerScale Nodes
Individual PowerScale nodes provide the data storage capacity and processing
power of the PowerScale scale-out NAS platform. All of the nodes are peers to
each other and so there is no single 'master' node and no single 'administrative
node'.

• No single master
• No single point of administration

Administration can be done from any node in the cluster as each node provides
network connectivity, storage, memory, non-volatile RAM (NVDIMM) and
processing power found in the Central Processing Units (CPUs). There are also
different node configurations for compute and capacity. These varied configurations can be mixed and matched to meet specific business needs.

Each node contains:

• Disks

• Processor

• Cache

• Front-end network connectivity

PowerScale Hardware Concepts

Page 82 © Copyright 2020 Dell Inc.


Appendix

Tip: Gen 6 nodes can exist within the same cluster. Every
PowerScale node is equal to every other PowerScale node of the
same type in a cluster. No one specific node is a controller or filer.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 83


Appendix

F-Series
The F-series nodes sit at the top of both performance and capacity with all-flash
arrays for ultra-compute and high capacity. The all flash platforms can accomplish
250-300k protocol operations per chassis and get 15 GB/s aggregate read
throughput from the chassis. Even when the cluster scales, the latency remains
predictable.

• F80022
• F81023

22 The F800 is suitable for workflows that require extreme performance and
efficiency. It is an all-flash array with ultra-high performance. The F800 sits at the
top of both the performance and capacity platform offerings when implementing the
15.4TB model, giving it the distinction of being both the fastest and densest Gen 6
node.

23 The F810 is suitable for workflows that require extreme performance and
efficiency. The F810 also provides high-speed inline data deduplication and in-line
data compression. It delivers up to 3:1 efficiency, depending on your specific
dataset and workload.

PowerScale Hardware Concepts

Page 84 © Copyright 2020 Dell Inc.


Appendix

H-Series
After F-series nodes, next in terms of computing power are the H-series nodes.
These are hybrid storage platforms that are highly flexible and strike a balance
between large capacity and high-performance storage to provide support for a
broad range of enterprise file workloads.

• H40024
• H50025
• H560026
• H60027

24The H400 provides a balance of performance, capacity and value to support a


wide range of file workloads. It delivers up to 3 GB/s bandwidth per chassis and
provides capacity options ranging from 120 TB to 720 TB per chassis. The H400
uses a medium compute performance node with SATA drives.

25The H500 is a versatile hybrid platform that delivers up to 5 GB/s bandwidth per
chassis with a capacity ranging from 120 TB to 720 TB per chassis. It is an ideal
choice for organizations looking to consolidate and support a broad range of file
workloads on a single platform. H500 is comparable to a top of the line X410,
combining a high compute performance node with SATA drives. The whole Gen 6
architecture is inherently modular and flexible with respect to its specifications.

26The H5600 combines massive scalability – 960 TB per chassis and up to 8 GB/s
bandwidth in an efficient, highly dense, deep 4U chassis. The H5600 delivers inline
data compression and deduplication. It is designed to support a wide range of
demanding, large-scale file applications and workloads.

27 The H600 is designed to provide high performance at value, delivering up to 120,000 IOPS and up to 12 GB/s bandwidth per chassis. It is ideal for high
performance computing (HPC) workloads that don’t require the extreme


performance of all-flash. These are spinning media nodes with various levels of
available computing power - H600 combines our turbo compute performance nodes
with 2.5" SAS drives for high IOPS workloads.

PowerScale Hardware Concepts

Page 86 © Copyright 2020 Dell Inc.


Appendix

A-Series
The A-series nodes have less compute power compared to other nodes
and are designed for data archival purposes. The archive platforms can be
combined with new or existing all-flash and hybrid storage systems into a single
cluster that provides an efficient tiered storage solution.

• A20028
• A200029

28The A200 is an ideal active archive storage solution that combines near-primary
accessibility, value and ease of use.

29The A2000 is an ideal solution for high density, deep archive storage that
safeguards data efficiently for long-term retention. The A2000 is capable of
containing 80 10 TB drives for 800 TB of storage by using a deeper chassis with
longer drive sleds containing more drives in each sled.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 87


Glossary
Front Panel Display
The Front Panel Display is located on the physical node or chassis. It is used to
perform basic administrative tasks onsite.

OneFS CLI
The command-line interface runs "isi" commands to configure, monitor, and
manage the cluster. Access to the command-line interface is through a secure shell
(SSH) connection to any node in the cluster.

PAPI
The customer uses OneFS application programming interface (API) to automate
the retrieval of the most detailed network traffic statistics. It is divided into two
functional areas: One area enables cluster configuration, management, and
monitoring functionality, and the other area enables operations on files and
directories on the cluster.

Serial Console
The serial console is used for initial cluster configurations by establishing serial
access to the node designated as node 1.

WebUI
The browser-based OneFS web administration interface provides secure access
with OneFS-supported browsers. This interface is used to view robust graphical
monitoring displays and to perform cluster-management tasks.

PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 89


PowerScale Hardware Concepts

© Copyright 2020 Dell Inc. Page 90


POWERSCALE
HARDWARE
INSTALLATION

PARTICIPANT GUIDE

Table of Contents

Rebranding - Isilon is now PowerScale ................................................................................ 2


PowerScale Solutions Certification Journey Map ................................................................. 3
Prerequisite Skills ................................................................................................................ 4
Course Objectives................................................................................................................ 5

Planning the Installation............................................................................................ 6


Module Objectives ............................................................................................................... 7
Installation References ........................................................................................................ 8
Safety Precautions and Considerations ............................................................................... 9
SolVe ................................................................................................................................. 11
SolVe Online ...................................................................................................................... 14
Exercise: SolVe Desktop................................................................................................... 20
Installation Planning Considerations .................................................................................. 26
Serial Number .................................................................................................................... 32
Serial Number .................................................................................................................... 34

Prepare Site .............................................................................................................. 36


Module Objectives ............................................................................................................. 37
Building the Rack Cabinet .................................................................................................. 38
Powering the Rack ............................................................................................................. 40
Bolting Rack to the Floor .................................................................................................... 42
Unpacking the Components ............................................................................................... 43
Repackaging the Shipping Material .................................................................................... 45

Installing Components in the Rack ........................................................................ 46


Module Objectives ............................................................................................................. 47
Sliding Rails ....................................................................................................................... 48
Installing 1U Nodes ............................................................................................................ 50
Installing 2U Nodes ............................................................................................................ 51
Installing 4U Nodes ............................................................................................................ 53
Install Gen 6 Chassis ......................................................................................................... 55
Installing Back-end Switches ............................................................................................. 56

Backend Switch Support .................................................................................................... 59

Installing Components ............................................................................................ 60


Module Objectives ............................................................................................................. 61
Installation Guides ............................................................................................................. 62
Installing Gen 6 Sub-components ...................................................................................... 63
Gen 6.5 Drive Bays ............................................................................................................ 65
Gen 6.5 Internal Components ............................................................................................ 66
Installing Front Panel or Bezel ........................................................................................... 72

Cabling the Node to a Network ............................................................................... 73


Module Objectives ............................................................................................................. 74
Connecting to the Network ................................................................................................. 75
Breakout Cables ................................................................................................................ 78
Cable and Cabling Considerations ..................................................................................... 79
Connecting to the External Network ................................................................................... 80
Gen 6.5 Network Connectivity ............................................................................................ 81
Connecting Power.............................................................................................................. 82
Configuration Manager ...................................................................................................... 85

Course Summary ..................................................................................................... 87


Course Summary ............................................................................................................... 88

Appendix ................................................................................................. 89

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page iii


PowerScale Hardware Installation


PowerScale Hardware Installation

Rebranding - Isilon is now PowerScale

Important: In mid-2020 Isilon launched a new hardware platform, the


F200 and F600 branded as Dell EMC PowerScale. Over time the
Isilon brand will convert to the new platforms PowerScale branding. In
the meantime, you will continue to see Isilon and PowerScale used
interchangeably, including within this course and any lab activities.
OneFS CLI isi commands, command syntax, and man pages may
have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other
technical documentation, community pages, blog posts, and others
will continue to use the "Isilon" brand.
The rebranding initiative is an iterative process and rebranding all
instances of "Isilon" to "PowerScale" may take some time.

PowerScale Hardware Installation

Page 2 © Copyright 2020 Dell Inc.


PowerScale Hardware Installation

PowerScale Solutions Certification Journey Map

The graphic shows the PowerScale Solutions Expert certification track. You can leverage the Dell Technologies Proven Professional program to realize your full potential: a combination of technology-focused and role-based training and exams covers concepts and principles as well as the full range of Dell Technologies hardware, software, and solutions. You can accelerate your career and your organization’s capabilities.

PowerScale Solutions (Knowledge and Experience based Exam)
• A. PowerScale Advanced Administration (C, VC)
• B. PowerScale Advanced Disaster Recovery (C, VC)

Implementation Specialist, PowerScale
• A. PowerScale Concepts (ODC)
• B. PowerScale Administration (C, VC, ODC)

Technology Architect Specialist, PowerScale
• A. PowerScale Concepts (ODC)
• B. PowerScale Solution Design (ODC)

Platform Engineer, PowerScale
• A. PowerScale Concepts (ODC)
• B. PowerScale Hardware Concepts (ODC)
• C. PowerScale Hardware Installation (ODC)
• D. PowerScale Hardware Maintenance (ODC)
• E. PowerScale Implementation (ODC)

Information Storage and Management (C, VC, ODC)

(C) - Classroom, (VC) - Virtual Classroom, (ODC) - On Demand Course

For more information, visit: http://dell.com/certification

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 3


PowerScale Hardware Installation

Prerequisite Skills

To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Current PowerScale hardware portfolio and the OneFS operating system
• PowerScale Concepts
• Isilon InfiniBand to Ethernet Backend Conversion

PowerScale Hardware Installation

Page 4 © Copyright 2020 Dell Inc.


PowerScale Hardware Installation

Course Objectives

After completion of this course, you will be able to:


→ Plan an installation.
→ Prepare the site.
→ Install the node in the rack.
→ Install components.
→ Cable the node to a network.

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 5


Planning the Installation

Planning the Installation

PowerScale Hardware Installation

Page 6 © Copyright 2020 Dell Inc.


Planning the Installation

Module Objectives

After completing this module, you will be able to:


• Identify the safety precautions and electrostatic discharge (ESD) procedures.
• Use the SolVe Desktop and SolVe Online.
• Identify installation planning considerations.

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 7


Planning the Installation

Installation References

The Isilon Info Hubs page has links to specific OneFS software, hardware, and troubleshooting pages. The OneFS info hubs have documents for installation-related areas.

OneFS 9.0 PowerScale Info Hub.

Tip: Link to the Isilon Info Hubs page.

PowerScale Hardware Installation

Page 8 © Copyright 2020 Dell Inc.


Planning the Installation

Safety Precautions and Considerations

When working with PowerScale equipment, it is critical to ensure you adhere to the
following precautions.


1: The AC supply circuit for PowerScale nodes must supply the total current that is
specified on the label of the node. All AC power supply connections must be
properly grounded. Connections that are not directly connected to the branch
circuit, such as nodes that are connected to a power strip, must also be properly
grounded. Do not overload the branch circuit of the AC supply that provides power
to the rack holding PowerScale nodes. The total rack load should not exceed 80%
of the branch circuit rating. For high availability, the left and right sides of any rack
must receive power from separate branch feed circuits. To help protect the system
from sudden increases or decreases in electrical power, use a surge suppressor,
line conditioner, or uninterruptible power supply or UPS.

2: To avoid personal injury or damage to the hardware, always use two people to
lift or move a node or chassis. A Gen 6 chassis can weigh more than 200 lbs. It is
recommended to use a lift to install the components into the rack. If a lift is not
available, you must remove all drive sleds and compute modules from the chassis
before lifting. Even when lifting an empty chassis, never attempt to lift and install
with fewer than two people.

3:

Electrostatic Discharge

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 9


Planning the Installation

4: If you install PowerScale nodes in a rack that is not bolted to the floor, use both
front and side stabilizers. Installing PowerScale nodes in an unbolted rack without
these stabilizers could cause the rack to tip over, potentially resulting in bodily
injury. Use only approved replacement parts and equipment.

5: Beyond precautions of working with electricity, it is also critical to ensure proper


cooling. Proper airflow must be provided to all PowerScale equipment. Gen 6
nodes have an ASHRAE (American Society of Heating, Refrigerating and Air-
Conditioning Engineers) designation of A3. The nodes can operate in environments
with ambient temperatures from 5°C up to 40°C for limited periods of time.

6: You can install racks in raised or nonraised floor data centers capable of
supporting that system. It is your responsibility to ensure that data center floor can
support the weight of the system. A fully populated rack with A2000 chassis’
weighs about 3,500 lbs (1,590 kg). If the floor is rated at less than 3,500 lbs, then
additional care and planning must be taken. Some data center floors have different
static load vs. dynamic (rolling) load specifications, and sectional weight and load
point limits. This becomes important while moving preracked solutions around the
data center.

Caution: Failure to adhere to the safety precautions may result in


electric shock, bodily injury, fire, damage to PowerScale systems
equipment, or loss of data. Review the safety precautions and
considerations1 before the installation.

1Failure to heed these warnings may also void the product warranty. Only trained
and qualified personnel should install or replace equipment. Select the button
options for specific information. Always refer to the current Site Preparation and
Planning Guide for proper procedures and environmental information.

PowerScale Hardware Installation

Page 10 © Copyright 2020 Dell Inc.


Planning the Installation

SolVe

Before you arrive at a client site, remember to read the call notes and follow the
processes that are detailed in them. Check if there are any special instructions from
PowerScale Technical Support that you must follow.

SolVe Online is a revised and updated version of SolVe Desktop. It is a knowledge management-led source of standard procedures for Dell EMC field personnel, service partners, and customers.

1: Download the SolVe Desktop application on the system. Go to the Tools and Sites section, choose SolVe, and select SolVe Desktop Executable. Depending on the browser used, you may be presented with security dialog boxes. Take the needed actions to launch the executable.

2:

Click through the Setup wizard and then select Install. Clicking Finish launches
the SolVe Desktop. SolVe must be authorized for use. Select OK. A few general
items2.

2 Notice the dialog in the lower left showing the version. This area also shows the
progress when upgrading and downloading content. Also notice in the lower right
the service topics. Once connected, many of the articles that are shown may not be relevant to PowerScale. There is a filtering option in the menu to receive the articles that pertain to a specific product.

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 11


Planning the Installation

3: From the menu, select Authorize and download the list of available products.
Adhere to the instructions shown: leave SolVe open, enter credentials (this uses SSO), and open the keychain file. Select OK. Then go to downloads and open the keychain file.

4:

Next are the Release Notes. Review and then close this window. Return to SolVe. Notice the dialog3 in the lower left indicating that the keychain is loaded, which means you are authorized and content is updated. Now scroll down and click PowerScale to gather the PowerScale content.

5: Click OK. Again, note the progress in the lower left. Once the download is
complete, you see that the PowerScale image has changed. Tools that are
downloaded appear in the upper left corner of the screen without the green arrow
present.

6: Now you can click PowerScale and view the available procedures. If updates are
available for download, you see an information icon, click the icon, and approve the
updated content download.


3The icons with a green arrow indicate that the user must click the icon in order to
download the tool.

PowerScale Hardware Installation

Page 12 © Copyright 2020 Dell Inc.


Planning the Installation

Resources: Partners4 can search through the Dell EMC partner


portal. SolVe Online can be downloaded from EMC support portal.
Access SolVe Online through SolVe Online portal. Click here for an
overview on SolVe Desktop/Online.

4The view is dependent upon Partner Type. A service partner sees what an
employee sees, a direct sales partner sees what a customer sees, and an
ASP/ASN partner sees products depending upon credentials.

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 13


Planning the Installation

SolVe Online

Select each panel for a description of using SolVe Online and generating a procedure.

Home

The graphic shows the SolVe Online home page. The home page lists the latest advisories and shows the products. The product folders contain any procedure routes you accessed.

SolVe Online home page.

Procedure Routes

SolVe Online retains a history of the procedure routes you accessed. The graphic shows a route. You can download the document from the route.

The example shows the routes to replacing an A200 node and setting up a 40U-P
rack.

PowerScale Hardware Installation

Page 14 © Copyright 2020 Dell Inc.


Planning the Installation

Procedure routes.

New Route

When you click on the top level product on the home page, SolVe Online starts the
menu for a new route.

As an example, you want the procedure to replace a drive in an H400 node. Here,
you select Replacement Procedures.

PowerScale Hardware Installation

© Copyright 2020 Dell Inc. Page 15


Planning the Installation

New route for the Isilon platform.

Product Topics

From the Isilon platform page, open the Replacement Procedures topic. The
example shows both the PowerScale and the Gen 6 topics expanded. Select the
H400 topic.

PowerScale Hardware Installation

Page 16 © Copyright 2020 Dell Inc.


Planning the Installation

Isilon platform route page with Replacement Procedures expanded.

Procedure Generator

The generator launches. Choose the Replace Drive procedure, and then select Next.


Step 1 of the procedure generator.

Generator Steps

Step 2 asks for the usage information and step 3 shows the procedure to generate.

Steps 2 and 3 to generate the procedure.


Procedure Document

Generating the document may take a few minutes. Once complete, the PDF is
downloaded, and you return to the home page. On the home page, the new route is
listed.

Home page with the new route.


Exercise: SolVe Desktop

The exercise has audio.

The web version of this content contains an interactive activity.

Script:

Let us see how easy it is to generate a document. For this exercise, the SolVe
Desktop is installed, and you step through the process to get the Isilon Site
Preparation and Planning Guide. Shown is the shortcut on the desktop. Click the
shortcut to launch the tool.

License Agreement:


The license agreement comes up. Click Accept.

SolVe:


The SolVe Desktop launches. Now click the Isilon image. Here we get the
procedures. Expand Node Procedures.

Node Procedures:


Select X210. Select Site Planning and Preparation Guide, then select Next.

Selection:


We keep the destination as shown. Select Generate. Note the information and then
click OK.

Generated:


A Word document is shown that gives some general information. Scroll down. The image
of the document that is shown here is a link to that document. Click the image. The
PDF is opened.

This completes the exercise of stepping through the process of generating a document using the SolVe Desktop.


Installation Planning Considerations

The installation process starts before you arrive at the customer site. When
planning an installation, there are a few items that you must know before you
arrive.


IP Address Verification

CE information extracted from PEQ.

The Customer Engineer (CE) typically works with the Implementation Specialist
(IS), who completes any custom configurations after the initial configuration is
done. CEs must verify that all internal networks configured for a PowerScale cluster
use IP version 4 addresses.


The external network can use either IPv4 or IPv6 addresses.

Use the pre-engagement questionnaire (PEQ) throughout the installation and initial
configuration process.

Note: The PowerScale PEQ replaces the Configuration Guide.

Rack Space Planning

Make sure you discuss with the customer the plan for the spacing of the Gen 5
nodes or Gen 6 chassis5.

5 PowerScale hardware can take from 1U to 4U. All Gen 6 chassis are 4U and the
recommended maximum number that can be installed in a standard 19" rack is 12,
with the additional space needed for switches or cabling. Up to 10 chassis can fit in
a 42U rack. Keep in mind that Gen 6 nodes based on the High and Ultra compute
modules will contain high line only (240V) power supplies. When used in an
environment where only low-line power (120V) is available, two rack-mounted step-
up transformers will be required per Gen6 chassis, each of which will consume 2U
additional rack space per chassis and thus reduce the amount of space available
for nodes. Racks should be populated from bottom to top to improve rack stability
and decrease the risk of tipping.


In mixed environments, nodes, chassis, and switches should generally be positioned in the rack as shown in the graphic.

 Heavier nodes6
 Pairs of step-up transformers7
 Ethernet backend8
 1U accelerator nodes and SRS gateway servers9

6 Heavier nodes, such as 4U components like the Gen 6 chassis, should go at the bottom.

7If needed, should be placed adjacent to their associated chassis, one above the
chassis and the other below (to prevent cabling issues).

8 A new Gen 6 or Gen 6.5 cluster will have Ethernet switches for the BE network.


 Gaps between nodes or switches10
 Cable management accessories11

Initial Node Selection

- Node 1 sets OneFS version for the cluster

- Node 1 determined by OneFS version and processor speed

- Newest node has most recent OneFS version

Node processor hierarchy (in descending order of processor speed):
- Gen 5: S-Series, X-Series, NL-Series, HD-Series
- Gen 6: F800 and F810; H500, H5600, H600; A200 and H400; A2000
- Gen 6.5: F600, F200

The node on which you initially run the Configuration Wizard is designated with a node ID of 1 (note 12). If selecting between two node types with equal OneFS versions, such as an H500 and an F800, choose the node type with the greater processor power as node 1.

9 1U accelerator nodes and SRS gateway servers should go at the top of the rack.

10 Any gaps between nodes or switches require a filler panel to ensure proper
airflow.

11The cable management tray option requires 1U of rack space per node, so you
will need to plan for the additional space.

12 Node 1 should be selected based on suitability criteria, then reimaged to the desired OneFS version, and then configured as node 1. The desired OneFS version should be determined in collaboration with the customer and the implementation team, and the choice should be based on the customer's needs vs. the features available in each version.



The processor speeds of each Gen 5, Gen 6, and Gen 6.5 node type are shown in
descending order.


Serial Number

Before beginning the installation, it is essential to document the serial number of each node in order to register the product and update the install base (IB) records. The serial number is on the physical node itself. For Gen 6 Series nodes, the serial number is printed on a sticker or dog tag, which is attached to the back of the node. A dog tag is a plastic card that is fixed to the node or compute module. The graphic shows a Gen 6 dog tag.

Gen 6 serial number on dog tag.

Example: Access A100

The video demonstrates how to access the A100 serial number.

Movie:
The web version of this content contains a movie.

Script:


For A100 accelerator nodes, you must remove the face plate from the front of the
node. Press both latches in simultaneously until they click. Then remove the face
plate. Locate the black plastic tab on the upper left of the node and slide it out. The
serial number is printed on the blue label. When you are done, slide the tab back in
and replace the face plate.


Serial Number

Use the /etc/isilon_serial_number file or the isi_hw_status command to display the node serial number.
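A minimal sketch of checking the serial number from the OneFS command line follows; the exact isi_hw_status output fields vary by node model and OneFS release.

    # Print the node serial number stored in the flat file
    cat /etc/isilon_serial_number

    # Display the hardware status summary, which includes the serial number
    isi_hw_status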

The serial number should match the service tag.

The service tag is on the front of the system or may be on a sticker on the chassis of the system. The mini Enterprise Service Tag (EST) is found on the back of the system. Dell uses the information to route support calls to the appropriate personnel.

Performing a chassis replacement13 should retain the service tag information.

F200 with the System Service Tag on the front.


13 An F200/F600 chassis replacement is done when replacing a system board (motherboard). The replacement retains the service tag data. iDRAC and OneFS retain the Identity Module (IDM) settings. The replacement retains other critical data and resources for the hardware. When the replacement is done, EasyRestore runs to enable you to retain critical information that is not stored on the system board.


1: Pull out the information tag in front of the system to view the Express Service
Code and Service Tag.

2: Top view of the service tag.

3: Back view of the service tag.

4: OpenManage Mobile (OMM) label.

5: Service Tag.

6: iDRAC MAC address and iDRAC secure password label.

Prepare Site


Module Objectives

After completing this module, you will be able to:


• Build the rack cabinet.
• Unpack hardware.
• Repackage shipping materials for reuse.


Building the Rack Cabinet

SolVe Online to generate rack unpacking and setup guide.

You can use a standard ANSI/EIA RS310D 19-inch rack system14 in a PowerScale cluster installation. The rack cabinet must have a full earth ground to provide reliable grounding. The rack should have dual power feeds with a power distribution that allows load balancing and switchover between the two if there is an outage. Use a rack cabinet that has dual power distribution units, one on each side.

14The default PDU configuration includes 4 PDUs. You receive power cables to
support the maximum configurations and are likely to have extra cords as part of
your shipment.


Generate the appropriate guide using SolVe. The guide explains the tools that are
required, unpacking, and building the rack.


Powering the Rack

The next step is to attach power cables to the power distribution units on each side
of the cabinet.

 Connect Power Cord15
 Cabinet requirements16
 PDU configuration17

15You must connect power cords to unpopulated P1 and P2 connectors on the four
power distribution units within the cabinet.

16 Depending on the cabinet components and configuration, the 40U-P cabinet requires two, four, six, or eight independent 200 to 240 V power sources.

17The default PDU configuration includes 4 PDUs. You receive power cords to
support the maximum configurations and are likely to have extra cords as part of
your shipment.


40U-P cabinet PDUs do not include a power on/off switch. Ensure that the circuit breakers on each PDU are in the up (OFF) position until you are ready to supply AC power18 to the unit.

Use grounding lugs if the site requires external grounding19.

18Equipment that is correctly installed within the rack cabinet is grounded through
the AC power cables and connectors. In general, supplemental grounding is not
required.

19If your site requires external grounding (for example, to a common grounding
network beneath the site floor), you can use the grounding lugs that are provided
on each of the lower power distribution panels of the cabinet.


Bolting Rack to the Floor

The customer is responsible for bolting the rack to the floor. If a third-party rack is
used, be sure to secure the rack to the subfloor with a stabilizer bracket or some
other method. Use both front and side stabilizers when installing nodes in a rack
that is not bolted to the floor. Installing in an unbolted rack without these stabilizers
could cause the rack to tip over, potentially resulting in bodily injury. Use only
approved replacement parts and equipment.


1: If the rack is not bolted to the floor, it is recommended that you secure your rack
cabinet to the floor with stabilizing brackets or use anti-tip devices. The stabilizing
brackets are also important if you are installing or removing a server in the upper
half of the cabinet when the lower half is empty.

2: Sometimes the trim strips are unattached from the cabinet or were removed to detach the shipping brackets. Once the cabinet is secured to the floor stabilization brackets, install the lower trim strip on the bottom front of the cabinet.

3: The seismic bracket is installed on the front, back, and sides of the cabinet.

4: The anti-tip bracket is installed on the front of the cabinet.

5: The anti-move bracket is installed on the front and back of the cabinet.


Unpacking the Components

Once the rack is built, you are ready to unpack the components. First, check the
ShockWatch meter to ensure it has not been activated. If the ShockWatch meter
has been activated, be sure to contact support before continuing. Ensure that you
have sufficient clearance around the rack cabinet. If components are still packed
onto a pallet, you need approximately eight feet of clearance to position the pallet.
Open the shipping container and remove the PowerScale components.


More information20.

20To avoid personal injury or component damage, use two people to lift and move
a Gen 6 chassis. Inspect each component for any sign of damage. If the
components appear damaged in any way, notify PowerScale technical support. Do
not use a damaged component.


Repackaging the Shipping Material

After you have built the rack, you will want to repack and return the shipping
material as it is reused for cost and environmental reasons. Leave the job site of
the customer as clean as or cleaner than it was when you arrived.

1: The first step is to lay each of the outer sleeves flat, then fold it lengthwise
before collapsing the three sections.

2: The second step is to stack the shipping brackets on the pallet.

3: The third step is to place ramp section B upside-down on the pallet, stack section A right-side up, and then stack the two folded outer sleeves above the ramp sections. Next, apply the two collar sections to either side of the pallet and hold them in place with the package hood.

4: Finally band the unit with shipping straps and return it to Dell EMC.

Installing Components in the Rack


Module Objectives

After completing this module, you will be able to:


• Install slide rails.
• Install nodes and chassis in rack.
• Install back-end switches.


Sliding Rails

Once you have built your rack cabinet, unpacked the node and verified its contents, and repacked the shipping material, you can start installing the hardware.
Click each tab to learn more.

Sliding Rails

The first step is to attach the sliding rails. A sliding rail system is used to attach the
node or chassis to the rack or cabinet and allow you easy access to the
components. The sliding rail kit is compatible with rack cabinets with the following
hole types:

• 3/8-inch square holes


• 9/32-inch round holes
• 10-32, 12-24, M5X.8, or M6X1 prethreaded holes

A number 2 Phillips screwdriver is required to complete the installation.

Adjustable Rails

1U, 2U, and 4U rails for nodes and chassis adjust in length from 24 inches to 36
inches to accommodate a variety of cabinet depths. The rails are not left-specific or
right-specific and can be installed on either side of the rack. The video
demonstrates installing the rails for a Gen 6 chassis.

Movie:
The web version of this content contains a movie.

Pre-Threaded Holes

Refer to the student guide for the video transcript.

Movie:
The web version of this content contains a movie.


Considerations

Listed are some things to consider:


• The step alignment pins on the rails are used for round or square-hole rack
types.
• Using a rail kit other than the Dell EMC-approved rail kit might cause injury or
damage to the node.
• If you fail to attach the rails correctly as described in the Rail Kit Installation
Guide, it can lead to severe injury when the node is pulled for future service, or
the rails may fail when installing a node, resulting in a whole chassis's worth of
equipment being damaged.

Pre-Threaded Holes Script: To install the rail kit in a rack with prethreaded holes,
replace the preassembled step alignment pins in the ends of the outer rails with the
proper shoulder alignment pins. Adjust the outer slide rail rear slide bracket to fit
the depth of the rack cabinet, ensuring that the alignment pins protrude through the
rack mounting rails. Secure each outer rail to the rack cabinet using two 8-32 x 0.75-inch knurled thumbscrews in the center holes of the slide rail.


Installing 1U Nodes

Before you begin, verify that the rail kit contains all the necessary components.
Each 1U slide rail consists of a slide bracket, an outer rail, an intermediate rail, and
an inner rail. Change the rail alignment posts if needed. Download and view the
SolVe Desktop procedure for the most recent instruction details.

1: Remove the inner rails from the rail assemblies on both rails.

2: Attach an inner rail to each side of the node.

3: Install both the slide rails in the rack. Each assembly contains a right and a left
slide rail.

4: You are now ready to install the node on the rails in the rack, pushing in until the
slide rails are locked. Remember, the enclosure is heavy and should be installed
into or removed from a rack by two people.

5: To further secure the rail assembly and server in the cabinet, insert and tighten a
small stabilizer screw directly behind each bezel latch.


Installing 2U Nodes

See the student guide for the video transcript.

Movie:

The web version of this content contains a movie.

Installing 2U Nodes Transcript: First, verify that the rail kit contains all the
necessary components. The 2U node rail kit includes:

• Two slide rails, used to mount a node in a rack or cabinet


• Four 8-32 x 0.75-inch knurled slide rail mounting screws for securing the slide
rails to the rack or cabinet
• Eight 8-32 x 0.25-inch shoulder alignment pins for use only in a prethreaded
rack or cabinet
• Two 6-32 x 0.25-inch knurled chassis retaining screws for securing the node to
the rack or cabinet
• One node retaining bracket, which provides a mounting surface for the chassis
retaining screws

Each slide rail consists of a slide bracket, an outer rail, an intermediate rail, and an
inner rail. To remove the inner rails from the rail assemblies, fully extend the slide
rails to the open and locked positions. Press the release button on the inner slide
rail and pull it from the intermediate slide rail until they separate.

Next, you attach the inner rails to the sides of the node. Align the large end of the
keyhole slots of the inner rail over the shoulder screws on the side of the node
chassis. It may be necessary to loosen the shoulder screws to slide the rails on.
Push the inner slide rail until it locks into place. Tighten the shoulder screws. Then,
attach the second slide rail on the other side the same way. To attach the rail to the
rack, first determine the rack cabinet that you are using. See the Rail Kit Installation
Guide for specific instructions for different types of rack cabinets. Here, we are
using a rack cabinet with 3/8” square holes. (Note: If using a rack with prethreaded
holes, replace the preassembled step alignment pins in the ends of the outer rails
with the proper shoulder alignment pins before proceeding to the next step.)


First, place the rear slide bracket into the back of the rack. Adjust the outer slide rail
to fit the depth of the rack cabinet, ensuring that the alignment pins protrude
through holes in the rack mounting rails and that the outer and rear slide brackets
are level. Attach to the same holes on both ends of the rack. Place the node
retaining bracket on the front of the right-side rail so that the middle hole covers the
top alignment pin. Then secure the bracket with one of the 8-32 x 0.75-inch knurled
thumbscrews. Secure all other ends of the rail to the rack by inserting the 8-32 x
0.75-inch knurled thumbscrews. Tighten the screws. Repeat the process for the
second rail.

Next, install the node in the rack. However, Do Not Continue until you confirm that
both rails are secured to the rack and all mounting screws are in place and
tightened. Failing to do so could result in injury or damage to the node, and to avoid
injury, use two people to lift and move the node.

Fully extend each intermediate slide rail until the rail is fully open and locked into
place. With a person on each side of the node, keep the node level with the slide
rails and align the ends of the inner slide rails with the ends of the intermediate
slide rails. Slide the inner slide rails on each side of the node into the intermediate
slide rails, and then retract the slide rails until the node is fully inserted in the rack.
There is an audible click when the node is secure. Confirm that the inner slide rails
attached to the node are inserted correctly and firmly secured to the intermediate
slide rails that are attached to the rack.

Secure the node to the rack cabinet using either the 6-32 x 0.25-inch knurled
chassis retaining screws or chassis retaining screws and retainer bracket
depending on the node type. Tighten the screws.


Installing 4U Nodes

Before you begin, unpack and verify that the rail kit contains all the necessary
components. Each slide rail consists of an inner rail, an outer rail, and an
intermediate rail. To attach the rail to the rack, first determine the rack cabinet that
you are using. See the Rail Kit Installation Guide for specific instructions for
different types of rack cabinets. The graphic shows the installation for a rack
cabinet with 3/8” square holes.

1: Place the rear slide bracket into the back of the rack. Adjust the outer slide rail to
fit the depth of the rack cabinet, ensuring that the alignment pins protrude through
holes in the rack mounting rails and that the outer and rear slide brackets are level
and attached to the same holes on both ends of the rack. Secure all other ends of
the rail to the rack by inserting the 8-32 x 0.75-inch knurled thumbscrews. Tighten
the screws. Repeat the process for the second rail.

2: Attach a clip-on nut retainer to the third hole above the slide rail on each side of
the rack. The next step is to install the node in the rack. However, do not continue
until you confirm that both rails are secured to the rack and all mounting screws are
in place and tightened. Failing to do so could result in injury or damage to the node.
Remember to always use two people to lift a node. If a 4U node is shipped with
hard drives already installed, it can weigh over 200 lbs. Use of a mechanical lift is
recommended. To use the cable management tray, be sure to leave 1U space
under the node for the cable management tray. For ease of installation, install the
Cable Management Tray before installing the main chassis.

3: Once the node is installed in the rack, insert the drives into the node.

4: Once the drives are inserted into the node, insert the front panel.


Important: 4U nodes ship with the inner slide rails pre-installed on the
node. If using a rack with prethreaded holes, replace the
preassembled step alignment pins in the ends of the outer rails with
the proper shoulder alignment pins before proceeding to the next
step.


Install Gen 6 Chassis

A chassis containing drives and nodes can weigh more than 200 pounds. You
should attach the chassis to a lift to install it in a rack. If a lift is not available, you
must remove all drive sleds and compute modules from the chassis before
attempting to lift it. Even when the chassis is empty, never attempt to lift and install
the chassis with fewer than two people.

1. The first step in installing the chassis is to align the chassis with the rails that
are attached to the rack.
2. Slide the first few inches of the back of the chassis onto the supporting ledge of
the rails.
3. Release the lift casters and carefully slide the chassis into the cabinet as far as
the lift allows.
4. Secure the lift casters on the floor.
5. Carefully push the chassis off the lift arms and into the rack. Be sure to leave the
lift under the chassis until the chassis is safely balanced and secured within the
cabinet.
6. Install two mounting screws at the top and bottom of each rail to secure the
chassis to the rack.


Installing Back-end Switches

Back-end switches are PowerScale procured, supported, and managed.

1: Back-end switches

• Provided by PowerScale.
• Switch ports face rear21.
• Connect each switch to separate power circuit.
• For multiple power connectors22, connect each cable to a different power circuit.
• Each pair of switches supports a single cluster23.

21 Install switches into the rack with the ports facing to the rear of the rack. In a
redundant switch deployment, each switch should connect to a separate power
circuit for redundancy.

22If the switch has multiple power connectors, connect each of the switch’s power
cables to a different power circuit. Some switches have additional power
connectors to support optional redundant power supplies that may not be installed.
Ensure that each power cable is plugged into a connector that actually provides
power to the switch.


2: InfiniBand (Gen 5 and Gen 6)

• Switches24:
• QLogic 72-port
• Mellanox 36-port
• High-speed, unmanaged fabric.
• Greater range of sizes.
• Identical functionality for Gen5 and Gen 6 nodes.

3: Ethernet (Gen 6 and Gen 6.5)

• Switches25:
• Celestica
• Arista
• High-speed, managed fabric.
• Limited monitoring.

23 Each pair of back-end switches supports a single cluster. Only one cluster is
allowed to be connected to the same pair of back-end switches. Connecting cables
to the switch is covered later.

24
InfiniBand switches are still supported for Gen 5 nodes and for mixed Gen 5 and
Gen 6 node clusters.

25 A pure Gen 6 cluster uses Ethernet switches (Celestica and Arista initially) for its
back-end network. When adding Gen 6 nodes to an existing Gen 5 cluster, the Gen
6 nodes connect to the InfiniBand switches. Follow the manufacturer’s instructions
for mounting the switches in the rack. If the customer is building a new cluster of all
Gen 6 nodes, the back-end switches must be Ethernet.


• No small switches:
• 24-port minimum26
• 252 max27 (at launch).

26 The new Ethernet back-end is only supported on the Gen 6 nodes. The minimum
allowable switch size is a 24-port switch.

27 An all Gen 6 cluster can scale up to 252 nodes.


Backend Switch Support

The table shows the supported switches and the considerations.

Legend:
• S - 1:1 supported
• LC - Line card required
• BO - Breakout cable (1:4)
• AN - Autonegotiate to lower speed
• NS - Not supported
• XS - Supported, but not recommended

Switch            Gen 6:    Gen 6:    F200:     F200:     F600:     F600:
                  10 GbE    40 GbE    10 GbE    25 GbE    40 GbE    100 GbE

Dell Z9100        BO        S         XS        BO        XS        S
Dell Z9264        BO        S         XS        BO        XS        S
Dell S4148        S         NS        S         AN        NS        NS
Dell S4112        S         NS        S         AN        NS        NS
Arista 7308       LC        S         LC        LC/AN     AN        NS
Arista 7304       S         LC        S         AN        LC/AN     NS
Celestica D4040   BO        S         BO        BO/AN     AN        NS
Celestica D2060   S         NS        S         AN        NS        NS

Installing Components


Module Objectives

After completing this module, you will be able to:


• Install drives and drive sleds.
• Install internal node components.
• Attach front panel or bezel.


Installation Guides

You can generate installation guides using SolVe. The example shows the first step
in generating an F200 node install guide.

Generate install guides using SolVe.


Installing Gen 6 Sub-components

You may need to install chassis components after the chassis is racked. For
instance, if no lift is available to install the chassis into the rack, then remove the
compute modules and drive sleds of the node. Once removed, install the chassis,
using two people, and then reinstall the compute modules and drive sleds of the
node. Select each tab to view a short demonstration on installing the drives, drive
sleds, and compute modules.

Drive and Sled

This short video demonstrates installing drives into a drive sled and installing the
sled into a chassis. See the student guide for the video transcript.

Movie:
The web version of this content contains a movie.

Compute Module

This video demonstrates how to install the compute module of a node into a
chassis. Refer to the student guide for the video transcript.

Movie:
The web version of this content contains a movie.

Install Drive and Sled Script: For 3.5" drives, push the drive and paddle card
straight down into the empty connector. Insert the tabs on the drive sled cover into
the slots in the drive sled. For 2.5” drives, with the top cover at an angle, insert the
cover's three tabs into the slots on the sled. Next lower the open side of the cover
until it is flat against the drive sled. Slide the cover forward to lock it onto the sled.
Swing the blue sled cover latches back into place and press until you feel the
latches click. Ensure that the drive sled handle is open. With two hands, slide the
drive sled into the same bay you removed it from. Push the drive sled handle back
into the face of the sled to secure the drive sled in the bay. Swing the display
assembly back against the chassis until you feel it click into place. Align the front
bezel with the front of the chassis, then push until you feel the bezel snap into
place.


Install Compute Module Script: Keep the lever in the open position until the
compute module is pushed all the way into the bay. Support the node with both
hands and slide it into the node bay. Push the release lever in against the compute
module back panel. You can feel the lever pull the module into place in the bay. If
the lever does not pull the compute module into the bay, pull the lever back into the
open position. Next, ensure that the module is pushed all the way into the bay, then
push the lever in against the compute module again. Tighten the thumbscrew on
the release lever to secure the lever in place.


Gen 6.5 Drive Bays

The F200 has four SAS SSDs numbered from 0 to 3 and uses no SAS expander.

The F600 drive bays 0 and 1 are empty due to the internal cabling layout of the
node. The F600 has eight NVMe SSDs that use bays 2 to 9.

F200 front view: Bay 0, Bay 1, Bay 2, Bay 3.

F600 front view: Bay 0 and bay 1 - NOT USED; Bay 2 and bay 3; Bay 4 and bay 5; Bay 6 and bay 7; Bay 8 and bay 9.


Gen 6.5 Internal Components

The graphic shows the F600 with the cover removed.


1: CPU2 is available on the F600. On the F200, this area is the location of the
NVDIMM battery.

2: CPU1 is populated on the F200 and F600.

3: High-performance fans. To maintain proper cooling while the system is on, replace only one fan at a time. Fans are labeled 1 to 8. The graphic shows the topmost fan 1.

4: NVDIMM battery. The NVDIMM battery is not hot-swappable. To prevent data loss and potential damage to the node, ensure that the node, the LEDs on the node, the LEDs on the NVDIMM, and the LEDs on the NVDIMM battery are off before removing the NVDIMM battery.


5: Disk drives. The F200 uses SAS SSDs, and the F600 uses NVMe SSDs. Hard
drives are supplied in hot swappable hard drive carriers that fit in the hard drive
slots. You can check the status indicator on the carrier to identify a faulted hard
drive.

6: DIMM. The F200 uses 48 GB (6x8GB Single Rank DDR4 RDIMMs) or 96 GB (6x16GB Dual Rank DDR4 RDIMMs). The F600 uses 128 GB (8x16GB Dual Rank DDR4 RDIMMs), 192 GB (12x16GB Dual Rank DDR4 RDIMMs), or 384 GB (12x32GB Dual Rank DDR4 RDIMMs). Both the F200 and F600 journal use a 16GB NVDIMM (1x16GB Single Rank DDR4 NVDIMM-N).


7: Backplane. Label each hard drive before removing from the node to ensure that
the drives are returned to the same drive bay.

8: Power supply units (PSU). The node needs one PSU for normal operation.
Remove and replace only one PSU at a time in a node that is powered on. PSUs
must be of the same type and have the same maximum output power.


9: Internal Dual SD Module (IDSDM). The F200 and F600 nodes use one of the
two micro SD slots. The micro SD shows up in OneFS as a USB device (da0).
isi_hwmon monitors IDSDM. IDSDM in OneFS is used for secondary backup for
PSI information and temporary storage for FW upgrade binaries.

10: Frontend NIC. The F600 uses PCIe slot 3 for front-end connectivity, and the
F200 uses the rack network daughter card (rNDC).


F200 rNDC.


F600 PCIe card in the expansion card riser.

11: Backend. The backend NIC is installed in slot 1 for both the F200 and F600.
The F200 has a 25 GbE NIC while the F600 uses a 100 GbE NIC.

Backend NIC on the riser.


Installing Front Panel or Bezel

Bezels on A-Series devices are accelerator-node specific and may not appear as shown. Bezels may include a keylock. All bezels include tabs on either side that you press in to release the bezel and its latches. Pushing on the ends, not the middle, of the bezel, press it onto the latch brackets until it snaps into place. The video demonstrates installing the bezel on a Gen 6 chassis.

Movie:

The web version of this content contains a movie.

Cabling the Node to a Network


Module Objectives

After completing this module, you will be able to:


• Connect the node to internal and external networks.
• Connect power to the node.
• Connect to initially configure the cluster.


Connecting to the Network

Once the system is racked and stacked, move on to connecting the node to the
internal and external networks.

Network

A PowerScale cluster uses separate internal and external networks for backend
and front-end connectivity.

 Internal Network28
 External Network29

28The internal network is dedicated for intracluster data and messages, providing
point-to-point connectivity with microsecond latency. Nodes connect to the internal
network of a cluster to communicate with other cluster nodes. Nodes connect to the
external client network to communicate with the rest of the world.

29 Gen 6 nodes can use either InfiniBand or Ethernet switches for back-end
connectivity. You cannot mix Ethernet and InfiniBand back-ends within one cluster.
If Gen 6 nodes are added to a cluster with Gen 5 nodes, the backend uses
InfiniBand switches only.


 InfiniBand30
 Hybrid Cables31

Gen 6 Internal Network

The graphic shows a Gen 6, four node cluster with an InfiniBand backend. There
are no special considerations when connecting to an Ethernet back-end. The two
steps are:

30All current Gen 5 and Gen 6 nodes and InfiniBand switches use 40 Gb/sec Quad
Data Rate (QDR) InfiniBand, which uses QSFP type cables. Legacy nodes and
switches use 10 Gb/sec Single Data Rate (SDR) or 20 Gb/sec Double Data Rate
(DDR) InfiniBand which uses CX4-type cables.

31 Hybrid cables are available to connect Legacy nodes to current IB switches and
the opposite way. Reference the PowerScale Supportability and Compatibility
Guide if working with legacy switches and nodes to determine the cable type
needed. Nodes with QDR InfiniBand adapters are incompatible with SDR
InfiniBand switches.


1. Connect an InfiniBand or Ethernet cable between the int-a port and the switch
(switch 1) for the Internal A network.
2. If the network topology supports a second internal network, connect the int-b
port to a separate network switch (switch 2) for the int-b network.

Gen 6.5 Internal Network

Gen 6.5 only supports Ethernet. Each F200 and F600 node provides two ports for backend connectivity. PCIe slot 1 is used. F600 nodes use 100GbE or
25GbE ports. F200 nodes use 10GbE or 25GbE ports.

Best Practice: Though two switches are not required, it is a best practice to have a redundant internal network.


Breakout Cables

Example 1:4 breakout cables.

The 40 GbE and 100 GbE connections contain four individual lanes of 10 GbE and 25 GbE, respectively.

Most switches support breaking out a QSFP port into four SFP ports using a 1:4
breakout cable.

The backend is managed automatically when the switch detects the cable type as a
breakout cable.

The front-end is often configured manually on a per port basis.


Cable and Cabling Considerations

Each cable and the cabling should adhere to the cabling recommendations listed:
• Color code the cables to depict their type and function within the environment.
• Both ends of the cables should be clearly labeled depicting cable usage and
connection points in accordance to the labeling convention.
• Labeling should be done with correct type of label to sustain environmental
stress such as heat and humidity.
• Attached labels should not constrict the cable.
• The minimum bend radius of an InfiniBand cable is 20 cm.
• Avoid placing cables along sharp edges.
• Do not zip and constrict cables.
• Avoid using zip ties for the cable bundling. Instead use Velcro ties.
• Bundle different cable types separately and keep a good distance between
power and InfiniBand cables.
• Ensure that your cabling support is well anchored and able to withstand cable
weight.
• Restrict cables from dangling between rack spaces.
• Retain a sufficient loop on the cable in order to facilitate maintenance work, for
example, when sliding out a node.
• Align cable placements in order to facilitate future addition of cables along the
path.

Warning: Never bend or twist the InfiniBand cables during installation or while the cluster is in production use.


Connecting to the External Network

External networks connect the cluster to the outside world. Subnets are used in
external networks to manage connections more efficiently. Specifying external
network subnets depends on the topology of the network32. The Ethernet cable
connects the node to the external network of cluster so the node can communicate
with external clients. Use an Ethernet cable to connect the ext-1 port on the node to
the switch for the external network.

Gen 6 nodes33 have two ports, ext-1 and ext-2, as the graphic depicts.

The graphic shows the client/application layer connecting through an Ethernet layer to the PowerScale storage layer. Protocols: NFS, SMB, S3, HTTP, FTP, HDFS, SWIFT. Gen 6.5 connectivity options: 2x10 GbE, 2x25 GbE, 2x40 GbE, 2x100 GbE.

32In a basic network topology, one in which all nodes reside on a single subnet,
only one external subnet is required.

33
The Gen 6 node can connect to one or more external Ethernet networks using 10
GbE or 40 GbE connections.


Gen 6.5 Network Connectivity

The graphic shows a closer look at the external and internal connectivity.

Slot 1 is used for backend communication on both the F200 and F600.

Slot 3 is used for the F600 2x 25 GbE or 2x 100 GbE front-end network
connections.

The rack network daughter card (rNDC) is used for the F200 2x 25 GbE front-end
network connections.

The F200 and F600 have no dedicated management port.

Figure labels: PCIe slot 1 - used for all BE communication; PCIe slot 3 - used for F600 FE; rNDC - used for F200 FE.

The graphic shows the R640 and does not represent the F200 and F600 PCIe and rNDC
configuration.

Tip: Interfaces are named "25gige-N" or "100gige-N." Interface names may not indicate the link speed. For example, the interface name for a NIC that is running at a lower speed such as 10 Gb does not change to "10gige-1." You can use ifconfig to check the link speed.
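As an illustration of the tip above, the commands below could be used to check the negotiated speed. The interface name 25gige-1 is only an example of the naming convention described here; the media line wording varies by NIC and OneFS release.

    # Show one interface and look at its negotiated media/speed line
    ifconfig 25gige-1

    # Or scan all interfaces for their media settings
    ifconfig -a | grep -i media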


Connecting Power

Node Power Supply

Nodes and chassis contain redundant power supplies to ensure that the node
remains powered in case a power supply fails. Connect the power supply cables to
the node power connectors, and then connect each power supply cable to a
separate power source.


• Gen 6
  o Compute has a single power supply.34
  o Node-pair provides redundant power.35
  o Nodes automatically power on.36
• Gen 6.5
  o Nodes have dual redundant 750 W, 100 V – 240 V, 50/60 Hz power supplies.

Connecting Power Cord

The video demonstrates connecting the power cord to a Gen 6 node. An LED shows the current state of each power supply.

Movie:
The web version of this content contains a movie.

34 With Gen 6, each compute module has a single power supply.

35If the power supply fails, the node receives power from the power supply of the
adjacent node.

36Gen 6 nodes automatically power on when they are connected to power. When
setting up new nodes, if you are not yet ready for them to be powered on, do not
connect the power cables.


Connecting Power Cord Transcript: Connect the power cable to the power
supply. Rotate the metal bail down over the power cable to hold the cord in place.


Configuration Manager

For initial configuration, access the CLI by establishing a serial connection to the
node designated as node 1. The serial console gives you serial access when you
cannot or do not want to use the network. Other reasons for accessing using a
serial connection may be for troubleshooting, site rules, a network outage, and so on.

Serial Port37

Configure the terminal emulator utility to use the following settings:

• Transfer rate = 115,200 bps

37 The serial port is usually a male DB9 connector. This port is called the service
port. Connect a serial null modem cable between a serial port of a local client, such
as a laptop, and the node service port. Connect to the node designated as node 1.
As most laptops today no longer have serial ports, you might need to use a USB-
to-serial converter. On the local client, launch a serial terminal emulator.


• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware
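As an illustration only, a laptop running Linux or macOS could open the serial console with the screen utility using the settings above; the device name /dev/ttyUSB0 is an assumption for a typical USB-to-serial converter. On Windows, enter the same settings (115200, 8, N, 1, hardware flow control) into PuTTY or a similar terminal emulator.

    # 115200 bps, 8 data bits, no parity, 1 stop bit, hardware (RTS/CTS) flow control
    screen /dev/ttyUSB0 115200,cs8,-parenb,-cstopb,crtscts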

More Information on Command Prompt38

38 Either a command prompt or a Configuration Wizard prompt appears. The command prompt displays the cluster name, a dash (-), a node number, and either a hash (#) symbol or a percent (%) sign. If you log in as the root user, a hash (#) prompt appears. If you log in as another user, a % symbol appears. For example, Cluster-1# or Cluster-1%. The prompt is the typical prompt that is found on most UNIX and Linux systems. When a node first powers on or reformats, the Configuration Wizard automatically starts and displays its options. There are four options: Create a new cluster, Join an existing cluster, Exit wizard and configure manually, and Reboot into SmartLock Compliance mode. Choosing option 1 creates a cluster, while option 2 joins the node to an existing cluster. If you choose option 1, the Configuration Wizard steps you through the process of creating a cluster. If you choose option 2, the Configuration Wizard ends after the node finishes joining the cluster. You can then configure the cluster using the WebUI or the CLI.
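To make the footnote above concrete, the sketch below paraphrases the choices; it is not verbatim output, and the exact wizard text differs between OneFS releases.

    # On an unconfigured node, the Configuration Wizard offers four options:
    #   1. Create a new cluster
    #   2. Join an existing cluster
    #   3. Exit wizard and configure manually
    #   4. Reboot into SmartLock Compliance mode
    #
    # On an already configured node, you land at a shell prompt instead, for example:
    #   Cluster-1#   (logged in as root)
    #   Cluster-1%   (logged in as a non-root user)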

Course Summary


Course Summary

Now that you have completed this course, you should be able to:
→ Plan an installation.
→ Prepare the site.
→ Install the node in the rack.
→ Install components.
→ Cable the node to a network.

Appendix

Electrostatic Discharge
Electrostatic Discharge is a major cause of damage to electronic components and
potentially dangerous to the installer. To avoid ESD damage, review ESD
procedures before arriving at the customer site and adhere to the precautions when
onsite.

Clean Work Area: Clear the work area of items that naturally build up electrostatic discharge.

Antistatic Packaging: Leave components in antistatic packaging until time to install.


No ESD Kit Available:

• Before touching a component, put one hand firmly on a bare metal surface.
• After removing a component from its antistatic bag, do NOT move around the room or touch furnishings, personnel, or surfaces.
• If you must move around or touch something, first put the component back in the antistatic bag.

ESD Kit: Always use an ESD kit when handling components.

Don't Move: Minimize movement to avoid buildup of electrostatic discharge.



POWERSCALE IMPLEMENTATION

PARTICIPANT GUIDE
Table of Contents

PowerScale Implementation ..................................................................................... 1


Rebranding - Isilon is now PowerScale ................................................................................ 2
PowerScale Solutions Certification Journey Map ................................................................. 3
Prerequisite Skills ................................................................................................................ 4
Course Objectives................................................................................................................ 5

Creating a New Cluster .............................................................................................. 6


Module Objectives ............................................................................................................... 7
Implementation Phases........................................................................................................ 8
Pre Engagement Questionnaire ........................................................................................... 9
Connecting Via Serial Port ................................................................................................. 10
Creating a Cluster .............................................................................................................. 11

Joining Nodes to Existing Cluster .......................................................................... 12


Module Objectives ............................................................................................................. 13
Verifying Version and Adding a Node................................................................................. 14
SmartFail: Remove a Node from Cluster ............................................................................ 16
Reformat a Node................................................................................................................ 18
Reimage a Node ................................................................................................................ 20
Reimage Use Case ............................................................................................................ 21

Completing Initial Configuration............................................................................. 22


Module Objectives ............................................................................................................. 23
Update Install Database ..................................................................................................... 24
Connectivity Tests.............................................................................................................. 25

Other Configuration Options .................................................................................. 26


Module Objectives ............................................................................................................. 27
SmartLock Compliance Mode ............................................................................................ 28
Secure Remote Services (SRS) ......................................................................................... 30
Hadoop Overview .............................................................................................................. 33



Hadoop with PowerScale ................................................................................................... 35
Configuring Hadoop ........................................................................................................... 36
Upgrades and Updates ...................................................................................................... 38
OneFS Version Numbering ................................................................................................ 39
Upgrades ........................................................................................................................... 40
Upgrade Pause and Resume ............................................................................................. 41
Support Upgrade Paths to OneFS 8.1 (OneFS 8.2) ........................................................... 42
Patches and Firmware ....................................................................................................... 44
Automatic Replacement Recognition ................................................................................. 47
isi_upgrade_helper .................................................................................................... 48
isi_upgrade_helper Default ........................................................................................ 49
Release Notes ................................................................................................................... 50

Course Summary ..................................................................................................... 52


Course Summary ............................................................................................................... 53



PowerScale Implementation

Rebranding - Isilon is now PowerScale

Important: In mid-2020 Isilon launched a new hardware platform, the


F200 and F600 branded as Dell EMC PowerScale. Over time the
Isilon brand will convert to the new platform's PowerScale branding. In
the meantime, you will continue to see Isilon and PowerScale used
interchangeably, including within this course and any lab activities.
OneFS CLI isi commands, command syntax, and man pages may
have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other
technical documentation, community pages, blog posts, and others
will continue to use the "Isilon" brand.
The rebranding initiative is an iterative process and rebranding all
instances of "Isilon" to "PowerScale" may take some time.


PowerScale Solutions Certification Journey Map

The graphic shows the PowerScale Solutions Expert certification track. You can
leverage the Dell Technologies Proven Professional program to realize your full
potential. A combination of technology-focused and role-based training and exams
to cover concepts and principles as well as the full range of Dell Technologies'
hardware, software, and solutions. You can accelerate your career and your
organization’s capabilities.

PowerScale Solutions (Expert level):
• A. PowerScale Advanced Administration (C, VC)
• B. PowerScale Advanced Disaster Recovery (C, VC)
(Knowledge and Experience based Exam)

Specialist tracks:
• Implementation Specialist, PowerScale: A. PowerScale Concepts (ODC); B. PowerScale Administration (C, VC, ODC)
• Technology Architect Specialist, PowerScale: A. PowerScale Concepts (ODC); B. PowerScale Solution Design (ODC)
• Platform Engineer, PowerScale: A. PowerScale Concepts (ODC); B. PowerScale Hardware Concepts (ODC); C. PowerScale Hardware Installation (ODC); D. PowerScale Hardware Maintenance (ODC); E. PowerScale Implementation (ODC)

Foundation: Information Storage and Management (C, VC, ODC)

(C) - Classroom

(VC) - Virtual Classroom

(ODC) - On Demand Course

For more information, visit: http://dell.com/certification


Prerequisite Skills

To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Current PowerScale hardware portfolio and the OneFS operating system
• PowerScale Concepts


Course Objectives

After completion of this course, you will be able to:


→ Create a new cluster.
→ Join a node to an existing cluster.
→ Complete the initial configuration.
→ Complete other configuration options.



Creating a New Cluster


Module Objectives

After completing this module, you will be able to:


• Describe how to connect to Node 1.
• Run the Configuration Wizard.
• Create the cluster.


Implementation Phases

The typical PowerScale installation consists of three phases: Planning;
Implementation and testing; and Documentation, functional overview, and acceptance.
Each phase is divided into multiple tasks that the various members of the project
team handle.

1: The first phase is planning. During this phase, gather all required information for
the installation. A Pre-Engagement Questionnaire (PEQ) document outlines all the
information to gather. The customer completes the document and returns it to Dell
EMC before the installation date. The document ensures that the project team has
the required information for the design, installation, and configuration of the
PowerScale cluster.

2: The second phase of the project is the implementation and testing phase. During
this phase, the physical installation and initial configuration of the PowerScale
cluster is carried out, and remote support configuration is done. Then, the testing
and verification of the cluster is performed to ensure that the cluster is ready for
use. In addition, the cluster is ready for advanced software installation and
configuration.

3: Phase three of the project consists of documenting the installation, providing a


functional overview of the cluster to the customer, and closing out the
implementation.


Pre-Engagement Questionnaire (PEQ)

The PowerScale PEQ is the replacement for the Configuration Guide. The PEQ
defines the Professional Services project installation parameters and facilitates the
communication between the responsible resources. For this scenario, the required
information is in the Hardware tab.


Connecting Via Serial Port

To configure a PowerScale cluster for the first time, access the CLI by establishing
a serial connection to the node designated as node 1. The serial console gives
serial access when you cannot use the network. Other reasons for accessing using
a serial connection may be for troubleshooting, site rules, a network outage, and so
on. The serial port is usually a male DB9 connector. This port is called the service
port1.

Configure the terminal emulator utility to use the following settings (the same settings are
listed in the Powering down a Node module later in this guide):

• Transfer rate = 115,200 bps
• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware (Gen 6 nodes do not support serial flow control; use none)

1 Connect a serial null modem cable between a local system, such as a laptop, and
the service port on the node designated as node 1. As most laptops today no
longer have serial ports, you must use a USB-to-serial converter, available at many
online retailers. On the local system, launch a serial terminal emulator such as
mRemoteNG.


Creating a Cluster

This video demonstrates creating a new cluster.

Movie:

The web version of this content contains a movie.

Link:
https://edutube.emc.com/Player.aspx?vno=zyK3csP1f1h6F0cke3Ivog==&autoplay
=true

Joining Nodes to Existing Cluster

Module Objectives

After completing this module, you will be able to:


• Verify the OneFS version.
• Add a node to an existing cluster.
• Remove a node from a cluster.
• Reformat and reimage a node.


Verifying Version and Adding a Node

Movie:

The web version of this content contains a movie.

Script:

The first step before adding a node to a cluster is to verify which OneFS version is
running. Methods to check that the version include the front panel display, CLI, and
WebUI. Let us start with the CLI. As shown here, I am logged into the cluster. Run
the isi version command. Next, we can use the WebUI. Here I am logging in. The
version is shown in the upper right side of the browser. If the new node is
compatible with the OneFS version on the existing cluster you are joining, there is
no need to reimage the new node or nodes. The node downloads the correct
version of OneFS from the cluster during the join process and reimages
automatically. In this demonstration, a new cluster was created.

There are several ways to add a node to a cluster. You can use the node front
panel display, the Configuration Wizard, or the WebUI. We show adding a node
using the Configuration Wizard and then another node using the WebUI.

Here I have established a serial connection the node I want to add. At the
Configuration Wizard prompt, type 2 and press ENTER to join the node to an
existing cluster. Type the number of the cluster you want to add the node to, and
press ENTER. In this example, I have only one cluster that is indexed as 1. The
node obtains the proper version of OneFS from the cluster if its version of OneFS
differs from that of the cluster. If a reimage is needed, allow the operation to
complete before attempting to add additional nodes.

If the addition is successful, the node boots to a login prompt. Log in as root and
run isi status to verify cluster membership and health. If there are additional nodes
to add, wait 5 minutes between node additions to allow the cluster time to stabilize
and incorporate the newly joined nodes.

Next let us add the third node to the cluster using the WebUI. Here I am logging in
as root. On the Dashboard page, we can see that there are only two nodes in this
cluster. Remember that Gen 5 must have a minimum of three nodes and Gen 6
needs four nodes to form a cluster. Navigate to Cluster Management, Hardware
Configuration. Here we see one available node. Select the Add node in the Actions
column. Once it is finished working, let us navigate back to Dashboard, Cluster
Overview. The table indicates that the third node is recognized. After a minute, the
node is booted and part of the cluster.

We saw how to check the OneFS version and how to add nodes to a cluster using
the Configuration Wizard and the WebUI. This concludes the demo.

The isi devices node add <serial-number> command is used to join an
available node to the cluster through the CLI.
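A minimal CLI sketch of the sequence described above, run from a node that is already in the cluster; the serial number is a placeholder to be replaced with the new node's serial number:

isi version                           # confirm the OneFS version running on the cluster
isi devices node add <serial-number>  # join the available node to the cluster
isi status                            # verify cluster membership and health after the join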


SmartFail: Remove a Node from Cluster

Movie:

The web version of this content contains a movie.

The video shows how to remove a node from the cluster using the WebUI.

The SmartFail function removes a node from the cluster.

• SmartFail manually removes2 one node at a time from the cluster.


• The nodes are removed from the cluster node list when the process completes3.
• Once the node has been reformatted, it can be added back4 into the cluster or
into another cluster if wanted.

2Before smart failing multiple nodes, consider whether the cluster has enough free
space to maintain protection on the data in the cluster. Removing too many nodes
causes the cluster to lose quorum. Data is restriped to other nodes in the cluster as
part of the process.

3If the node is still connected to the back-end network when the process
completes, it automatically initiates a reformat - otherwise, it can be reformatted
manually.

4Use cases include removing older nodes during a life-cycle technology refresh,
moving a node from one cluster to another cluster, and removing a node for which
PowerScale Support has identified a critical hardware failure.


• The isi devices node stopfail command is used to discontinue the smartfail
process on a node.
• To modify the logical node number (LNN) of a node, run the isi config command
and then use the lnnset command (see the sketch below).
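A hedged sketch of the node-removal commands mentioned above. The LNN values are examples, and the exact smartfail and stopfail options are assumptions - verify the syntax with the command help (for example, isi devices node smartfail --help) before use:

isi devices node smartfail --node-lnn 3   # begin smartfailing node 3 (option name assumed)
isi devices node stopfail --node-lnn 3    # discontinue the smartfail if required (option name assumed)
isi config                                # open the configuration console, then at the >>> prompt:
lnnset 3 4                                # change a node's LNN from 3 to 4 (example values)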

Script:

We start by logging into the WebUI. For this demonstration, I am smartfailing node
ID 3 shown on the dashboard. Navigate to Cluster Management, Hardware
Configuration. Select the Remove Nodes tab. Select Node 3 and then Submit.
Next, I will Confirm the smartfail. Back on the Dashboard, we can see the node
status in the process of smartfailing. After several minutes, the Dashboard updates
showing the node is no longer a member of the cluster.


Reformat a Node

Reformatting a node is the quickest way to return a node to an unconfigured state.


Reformatting does not remove the node from the cluster completely unless the
node has first been smartfailed. Reformatting retains the current OneFS version
that is installed on the node.

Movie:

The web version of this content contains a movie.

The video shows the reboot option.

All data and configuration information is destroyed when reformatting. Do not


reformat unless you are certain it should be run, the node contains no data, or the
data can be destroyed without issues - for example, because the node has been
successfully smartfailed.

Use case5.

Script:

5 The use case for reformatting a node is when resetting or repurposing nodes, for
example, the case of testing cluster nodes. This performs the reset quickly so the
nodes can be used for other testing. Another use case example is when destroying
a cluster to reset it to prepare a SyncIQ failback for SmartLock or WORM
directories. WORM directories cannot be overwritten, they can only be removed by
reformatting the cluster. This also applies to enterprise SmartLock directories and
compliance SmartLock directories. If SyncIQ has not been configured, these
considerations do not apply.


I am connected to the node using the console port. You can also connect using an
SSH session to the node. I begin by logging in as root. Here I run the
isi_reformat_node --reboot command, typing yes to confirm that I am sure I
want to reformat, and then typing yes again. The node reboots and, as seen here, the
Configuration Wizard displays.


Reimage a Node

SolVe Online is used to view the latest procedures.

Reimaging a node performs the same node reset functionality as reformatting the
node. The difference is that the OneFS operating system is reinstalled during the
process. Connect to the reimaging node over the console port or an SSH session.

Reimaging a node can be done using a USB flash drive or by copying the OneFS
install file locally on the node. If reimaging using a USB flash drive containing the
OneFS image, a limited number of USB flash drives are supported.

Copy the OneFS install file to the node and run the isi_reimage command to
reimage a node without using a flash drive.
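A hedged sketch, assuming the OneFS install file has already been copied to the node. The file path is hypothetical and the exact isi_reimage argument form is an assumption - confirm the current procedure in SolVe Online before running it:

isi_reimage /ifs/data/OneFS_install_image.tar.gz   # reimage the node from a local install file (path and argument form assumed)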

Caution: Only perform this procedure on a node that is not a member


of a cluster. Candidates for reimaging include a node requiring a new installation of
OneFS or a node that has been completely smartfailed and
removed from an active cluster. Like when reformatting, reimaging a
node erases all data.


Reimage Use Case

The use case is to install a new OneFS image on the cluster. In certain
circumstances, the OneFS version that is installed on a node from the factory may
differ from the OneFS version on the cluster, and nodes do not always upgrade or
downgrade automatically when added to the cluster. In this case, the nodes must
be reimaged to match the cluster OneFS version.

Supported USB flash drives:

Company               Model Name              Model Number    Size
SanDisk               Cruzer                  SDCZ36-08G      8 GB
SanDisk               Cruzer                  SDCZ36-16G      16 GB
SanDisk               Cruzer                  SDCZ36-32G      32 GB
Kingston Technology   DataTraveler 101 (G2)   DT101G2/8GBZ    8 GB
Kingston Technology   DataTraveler 101 (G2)   DT101G2/16GBZ   16 GB
Kingston Technology   DataTraveler 101 (G2)   DT101G2/32GBZ   32 GB

Completing Initial Configuration

Module Objectives

After completing this module, you will be able to:


• Update the install database.
• Perform connectivity tests.


Update Install Database

After completing the installation, update the install database. Browse to the Dell
EMC Product Registration and Install Base Maintenance service portal at the URL
shown in the browser. Select the Product Registration and Install Base
Maintenance option and then select the IB Status Change option to open the form.
Next, complete the form and submit it.

The graphic shows sources for PowerScale technical support.


Connectivity Tests

The final step of the initial configuration process is to perform connectivity tests.
The four tests - WebUI, HTTP, DNS, and client - are performed from a client that is
connected over Ethernet to the cluster.

• The first test, the PowerScale WebUI test, can be done by launching the browser6
to the cluster from the client.
• The other tests - HTTP, DNS, and client - are performed post implementation7.

6Depending on the browser, you may be prompted with security information. Log in
over the administration interface to complete the test.

7Once the cluster is configured for DNS, you can test by pinging the cluster by
name. This is also true for the HTTP test, use a browser and the URL of the Fully
Qualified Domain Name and path. For example,
https://clustername.company.com/IFS. The client test can check access and
authorization. From a Windows client, map a drive to a share on the cluster. Then
drag and drop a file from the client to the mapped drive. Ensure that the transfer
completes successfully. From an NFS client, mount the export directory that is
presented from the cluster. Then copy a file from the client to the NFS mount point
and ensure that the transfer completes successfully.
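A minimal sketch of these post-implementation checks run from a client, using the example names from this section; the export path and mount point are illustrative assumptions:

ping clustername.company.com                             # DNS test: resolve and reach the cluster by name
curl -k https://clustername.company.com/IFS              # HTTP test against the FQDN and path (curl is one option)
mount -t nfs clustername.company.com:/ifs /mnt/cluster   # NFS client test: mount an export (path assumed)
cp testfile /mnt/cluster/                                # copy a file and confirm the transfer completes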

Other Configuration Options

Module Objectives

After completing this module, you will be able to:


• Configure a node or cluster in SmartLock Compliance mode.
• Configure Secure Remote Services (SRS).
• Identify how PowerScale works with Hadoop.
• Explain upgrades, updates, and release notes.


SmartLock Compliance Mode

Compliance mode enables cost-effective and efficient protection against accidental,


premature, or malicious deletion or modification of data. SmartLock provides
Write Once, Read Many (WORM) status on files. In a WORM committed state,
files can be read but not modified or deleted. This is in compliance with the regulations that
the U.S. Securities and Exchange Commission rule 17a-4(f) defines.

Movie:

The web version of this content contains a movie.

Script:

This demonstration shows how to create a compliance mode cluster. You can
configure a node or cluster in Compliance mode by selecting the SmartLock
Compliance mode option in the Configuration Wizard. Here I am connected to the
node designated as node 1 with the node that is booted to the Configuration
Wizard. A compliance mode cluster must be created during the initial cluster
configuration process. Once you set a cluster to Compliance mode, you have to
reformat the node in order to return it to a standard cluster mode. If there is data on
the node, returning to standard cluster mode without data loss requires a
Professional Services engagement.

The first step is to select option 4, to reboot the node into SmartLock Compliance
mode. Next type yes to verify the option and understand that root access to the
node will be disabled. I press enter, and the node reboots.

The node boots back to the Configuration Wizard. Note the new wording on option
4, Disable SmartLock Compliance mode, indicating the node is in compliance
mode. This is the last opportunity to back out of compliance mode configuration
without reformatting the node.

Here I enter 1, to create a cluster. We are asked for a SmartLock license. I enter
that.

Accept the EULA, and then we set the password for compadmin and admin. The
rest of the configuration is no different than the standard cluster mode installs.
Remember to see the Configuration Guide for the information to enter. For this
demonstration, I am not going to step through each step. Here I am going to
commit the changes, and the node reboots.

At the login prompt, let us go ahead and log in as compadmin. I run the isi status
command.

As part of this demonstration, let us see how to add a node to a compliance cluster.
I am connected serially to the node I want to join. Once again, select option 4 to
reboot in SmartLock Compliance mode. Once the boot completes, select option 2
in the Configuration Wizard.

This demonstration covered creating a SmartLock compliance cluster and
adding a node to the cluster. This concludes the demonstration.


Secure Remote Services (SRS)

The web version of this content contains an interactive activity.

Note: Secure Remote Services (SRS) is the replacement for EMC


Secure Remote Support (ESRS).

The exercise has audio.

EMC Secure Remote Support (or ESRS) is a two-way, secure, IP-based customer
service remote support system that allows 24x7 remote monitoring of EMC
products. ESRS replaced SupportIQ as the PowerScale remote support system.

So, let us see what the ESRS environment looks like. I start with the customer side.
Here I show a heterogeneous Dell EMC environment as ESRS is the tool that
many Dell EMC platforms use. I add the virtual edition of ESRS. This is the
gateway and at least one ESRS gateway server must be installed and configured.
The gateway server acts as the single point of entry and exit for IP-based remote
support activities and monitoring notifications. ESRS can be configured for
redundancy with more than one ESRS instance installed, allowing reports through
ESRS if hardware or partial data environment failure.

The environment has some firewalls. A few notes about the optional policy
manager. You need the Policy Manager if fulfilling requirements for authentication,
authorization, and auditing. By implementing the Policy Manager, you can enable
monitoring on a node-by-node basis, allow or deny remote support sessions, and
review remote customer service activities. The Policy Manager enables
permissions to be set for ESRS-managed devices.

Next I draw out the other half of the ESRS, the Dell EMC environment. Now, ESRS
functions as a communications broker between the managed devices, the
Policy Manager, and Dell EMC. Dell EMC does not establish inbound network
communications to the systems. This is a security measure, benefiting customers
in secure sites while permitting limited, controlled outbound communications.

When the ESRS server retrieves a remote access request from the EMC
Enterprise, the policies that are configured on the Policy Manager are checked against the
request, and the ESRS server enforces them.

Communications between the customer site and Dell EMC support flow over an
encrypted HTTPS connection, which means that sensitive information does not
traverse the Internet unprotected.

On the Dell EMC support side, only authorized support representatives have
access to the customer systems or their information at all. All communication with
Dell EMC initiates from ESRS on port 443 or 8443 outbound from the customer site
to Dell EMC support services.

Enabling ESRS:


Before enabling and configuring ESRS, you must ensure the OneFS software has
a signed license and an ESRS gateway server is installed and configured. The IP
address pools that handle gateway connections must exist in the system and must
belong to a subnet under groupnet0, which is the default system groupnet. Shown
is the OneFS 8.0 WebUI, Cluster Management, General Settings, ESRS page.
Later OneFS versions may prompt to update the license. Select Enable ESRS
Gateway Support, complete the fields, and Save Changes.

ESRS Documentation:

The ESRS Planning Guide has a complete description of gateway server


requirements, installation, and configuration. This guide provides component
requirements, configurations, and preparation for site installation. The EMC Secure
Remote Services Installation and Operations Guide provides a complete
description of the ESRS tool. Use this guide when implementing ESRS. Most
notably, the guide covers the installation and provisioning of ESRS. The EMC
Secure Remote Services Technical Description document provides a complete
description of ESRS features and functionality. This document includes Policy
Manager specifications and the Virtual Edition application.

These documents and others about ESRS can be found on the EMC Online
Support site (support.emc.com) and you can use the SolVe Desktop to download
documents. Remember to always check for the latest updates and Release Notes.


Hadoop Overview

Hadoop enables the distributed processing of large datasets across clusters of


commodity servers. It is designed to scale up from a single server to thousands of
servers dynamically, scaling up or down based on resources and required service
levels8.

Holistic view of a Hadoop solution.


1: In a traditional Hadoop cluster, the data exists in silos. Production data is


maintained on production servers and then copied in some way to a Landing Zone
Server. Then it imports or ingests the data into Hadoop/HDFS (Hadoop Distributed
File System), making three copies of all data. Note that the data on HDFS is not
production data. It is copied from another source, and a process must be in place
to update the HDFS data periodically with the production data information.

8 Though Hadoop has emerged as a tool of choice for big data analytics, there are
also reasons to use it in typical enterprise environments, such as analyzing existing
data to improve processes and performance, depending on your business model.


2: Hadoop has two core components: HDFS and MapReduce. HDFS is a scalable
file system that is used in the Hadoop cluster. MapReduce is the compute
algorithm that analyzes the data and collects the answers from the query.

3: NameNode:

• Holds the location information for every file in the cluster - the file system
metadata.

Secondary NameNode:

• Backup NameNode: a passive node that requires the administrator to intervene
to bring it up as the primary NameNode.

4: Where the data resides, spread across cluster nodes, 3x mirrored.

5: Map Step:

• The primary node takes the input, divides it into smaller subproblems, and
distributes them to worker nodes. The worker node processes the smaller
problem and passes the answer back to its primary node.

Reduce Step:

• The primary node then collects the answers to all the subproblems and
combines them in some way to form the output. The answer to the problem it
was originally trying to solve.

6: Task Tracker:

• A node in the cluster that accepts tasks - Map, Reduce, and Shuffle operations
from a Job Tracker


Hadoop with PowerScale

The PowerScale scale-out NAS platform provides native support for the HDFS
protocol.

1: The NameNode on the PowerScale cluster gives the solution a complete and
automated failover process. If the node running as the NameNode fails, another
PowerScale node immediately picks up the function of the NameNode. No data or
metadata would be lost since the distributed nature of PowerScale spreads the
metadata across the cluster. There is no downtime if a failure occurs and most
importantly there is no need for administrative intervention to failover the
NameNode.

2: With PowerScale in a Hadoop solution, all production data resides on the


PowerScale cluster. It is unnecessary to export it out of the production applications
and import it elsewhere, the way a traditional Hadoop environment does. The
MapReduce algorithm runs on dedicated Hadoop compute nodes, which are
required to do the data analysis. The Hadoop data is held on PowerScale so that it
can be manipulated, whether by Hadoop or by using various protocols,
applications, or clients.


Configuring Hadoop

There are the five major steps to configure HDFS for a Hadoop solution.

1: First, you have to validate the OneFS version and license activation. This can be
verified from the WebUI by going to the Cluster Management, Licensing page.
Consult the PEQ for the license key.

2: Next you prepare the cluster which includes configuring DNS, SmartConnect,
and Access Zones to allow for the Hadoop cluster to connect to the PowerScale
OneFS cluster.

3: When preparing your Hadoop distribution (that is, Cloudera), do NOT select
HDFS service. PowerScale is used as storage and the native HDFS service is no
longer needed.

4: When configuring your Hadoop distribution (that is, Cloudera), refer to the
PowerScale OneFS installation guide to configure the Hadoop cluster elements, to
use the PowerScale cluster as its HDFS clients, and then tweak the HDFS
configuration of OneFS. Leave the PowerScale role as default during the role
assignment process.

5: Finally, verify your PowerScale-Hadoop implementation with native Hadoop
commands (that is, hadoop fs -ls /). To enable HDFS, from the WebUI, go to the
Protocols, Hadoop (HDFS), Settings page.
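A short verification sketch run from a Hadoop compute node once the distribution is configured; the test directory is an example only:

hadoop fs -ls /                                   # list the root of the HDFS namespace served by the PowerScale cluster
hadoop fs -mkdir /tmp/onefs_smoketest             # confirm write access (example directory)
hadoop fs -put /etc/hosts /tmp/onefs_smoketest/   # write a small file
hadoop fs -cat /tmp/onefs_smoketest/hosts         # read it back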


Upgrades and Updates

The terms “upgrade” and “update” are often used interchangeably. They describe
the process of changing to a newer version of the OneFS operating system (OS) or
changing to newer node or disk firmware versions. However, the terms “upgrade”
and “update” do not mean the same thing.

Upgrade is used when the OneFS operating system version changes. Upgrades
apply code fixes, new operating system features, and functionality. Oftentimes an
upgrade changes the underlying structure or operations of the OneFS file system. It
marks a strategic step forward for the product and for the functionality that it offers
to production clusters.

Update is used when new firmware versions are installed on the nodes or on the
disk drives. Firmware updates are applied to fix (patch) firmware code issues and
to standardize all the nodes or disks onto the same firmware9 code level.

9 Often when receiving a new node or a new drive, the firmware that is shipped on
the new hardware is a different version than the firmware running on the production
cluster. Newer firmware versions add or change cluster functionality, sometimes
they are used to apply code fixes and add support for new hardware.


OneFS Version Numbering

Customers may ask about the OneFS version and an understanding of the version
identifiers. OneFS version numbers are broken down by the significance of the changes
that are made in the release, as illustrated in the example after the list below. A change
to the first and second numbers of the release
code identifies the major upgrades.

• First digit changes indicate major changes and potential core modifications to
the underlying operating system and data structures.
• The second digit indicates significant changes or enhancements, but usually does
not involve the core structure. Typically, significant feature changes or
additions are in these releases.
• The last two numbers indicate minor releases.

− The third digit indicates smaller or less impactful feature enhancements. The
support of new nodes can be in the third digit release.
− The fourth digit denotes minor functionality and is used for issue fixes.
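A hedged illustration of the numbering scheme; 8.2.1.0 is an example version string, and the exact isi version output format may vary:

isi version   # reports the installed OneFS release
# Reading an example version, 8.2.1.0:
#   8 - first digit: major change, potential core operating system and data structure modifications
#   2 - second digit: significant changes or enhancements
#   1 - third digit: smaller feature enhancements, such as support for new nodes
#   0 - fourth digit: minor functionality and issue fixes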


Upgrades

There are four phases to an upgrade:

• Phase 1 - Planning for an Upgrade


• Phase 2 - Pre-upgrade tasks
• Phase 3 - Performing the upgrade process
• Phase 4 - Post-upgrade tasks

Rolling Upgrade

A rolling upgrade is a per-node operation, upgrading and rebooting cluster nodes


one at a time. Only one node is offline at a time. Rolling upgrades are nondisruptive
to clients that can seamlessly fail over their connections between nodes, such as
NFSv3 clients and SMB 3.0 continuous availability clients. Stateful protocols, such as
SMB 2.0, do not support transparent failover of their connections and have a brief
disruption when a node is rebooted into new code.

Simultaneous Upgrade

Simultaneous upgrades are faster than rolling upgrades, but reboot all nodes at the
same time, thus incurring an interruption in data access.

Resource: See the OneFS Upgrades - PowerScale Info page to get


documents and information regarding upgrades.


Upgrade Pause and Resume

OneFS 8.2.0 enables the pausing and resuming of a OneFS upgrade. Pause and
resume are useful when the maintenance window ends. The upgrade can be
paused, and then resumed in a later window. The commands are isi upgrade
pause and isi upgrade resume.

Pausing is not immediate. The upgrade is in a pausing state until the node currently
upgrading completes. Other nodes do not upgrade until the upgrade is resumed.
The pausing state can be viewed with isi upgrade view, with
isi_upgrade_status, or by viewing the pause file data.
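A minimal sketch using only the commands named in this section:

isi upgrade pause     # pause the upgrade once the node currently upgrading completes
isi upgrade view      # check the upgrade state, including the pausing state
isi_upgrade_status    # alternative status view
isi upgrade resume    # resume the upgrade in the next maintenance window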


Supported Upgrade Paths to OneFS 8.1 (OneFS 8.2)

Below are the supported upgrade paths to OneFS 8.1 and OneFS 8.2. Upgrades
from versions earlier than OneFS 8.0 are simultaneous. Upgrading from OneFS 8.0
to OneFS 8.1 can be a rolling or simultaneous upgrade.

Supported Upgrade Paths to OneFS 8.1 (OneFS 8.2)

OneFS 7.2.0.4 - OneFS 8.1 (OneFS 8.2)

OneFS 7.2.0.2 - OneFS 8.1 (OneFS 8.2)


OneFS 8.0 - OneFS 8.1 (OneFS 8.2)

Resource: PowerScale Support and Compatibility Guide


Patches and Firmware

A standard part of each node installation or change activity is to bring the node up
to a minimum code level. If required or requested to correct any significant issues,
OneFS patches can be applied. For the nodes, this includes bringing the node
firmware and drive firmware up to the latest version.

More information10.

Patches

A patch is a software update that addresses known issues before those updates
are incorporated into a maintenance release (MR)11.

10 For OneFS versions 7.1.1 and above, this may include installing the drive
support package (DSP) to automate and simplify drive firmware updates so that all
drives are on the same firmware version within each node. If possible, all nodes
should use the same firmware versions. When nodes run the latest version and all
drives are on the same version, customers experience fewer issues and there is a
reduction in future downtime or node outages.

11 To emphasize the importance, let us take an environment running many NFSv2


clients. If you failed to read the release notes stating NFSv2 was no longer
supported in OneFS 7.2, after upgrading to the new version, many NFSv2 clients
would no longer function. You must immediately determine and implement a
resolution to get those clients up and running again. The information in the release
notes would have helped you prepare for this change in advance.


The graphic shows the isi upgrade patches command syntax.
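A hedged sketch of patch handling; the subcommand names and the patch file path are assumptions - confirm the exact syntax with isi upgrade patches --help and the patch installation instructions:

isi upgrade patches list                                  # show patches currently installed (subcommand assumed)
isi upgrade patches install /ifs/data/patch-example.pkg   # install a downloaded patch (hypothetical file name)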

There are two types of patches, standard patch and rollup patch.

• A standard patch addresses one or more known issues for a major, minor, or
MR release of OneFS. Some patches contain minor enhancements or
additional logging functionality that can help Dell EMC Technical Support
troubleshoot issues with the cluster.
• A rollup patch addresses multiple issues that are related to one component of
OneFS functionality, such as SMB. It might also contain fixes from previous,
standard patches that addressed issues that are related to that component.

Firmware

Firmware is the software code that makes hardware components12 contained within
the node work. Updates are used to manage the code versions13 applied to each of

12Included are components such as the drives, the motherboard bios, the disk
controller, the InfiniBand adapter and the network adapters. The firmware is stored
on, and runs inside, the individual hardware components.


the hardware components. Firmware updates are performed across the cluster to
standardize all node components on the same version to eliminate potential
incompatibility issues and simplify support.

Use the isi devices drive firmware command to view and update firmware.
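A hedged sketch; the list and update subcommands are assumptions about the exact syntax - verify with isi devices drive firmware --help:

isi devices drive firmware list          # report the firmware version of each drive in the node (subcommand assumed)
isi devices drive firmware update start  # begin a drive firmware update on the node (subcommand assumed)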

Firmware updates should be performed as part of a regular cluster maintenance


process. This applies equally to the node firmware and to the firmware of the
various node components.

More Information14.

13Code versions are updated to reflect code fixes or code enhancements for
additional stability.

14 Firmware updates are recommended as part of the process for new cluster
installations. The commands and process used to perform firmware updates are
different from the commands and process used for upgrades. When upgrading
firmware, each node is updated individually and is automatically restarted if needed
when the update is complete. Node firmware updates will require a reboot, but
drive firmware packages do not require a reboot.

Automatic Replacement Recognition

The graphic shows the WebUI Cluster Management, Automatic Replacement Recognition page. Automatic Replacement Recognition is enabled by default; disable it to add drives manually.

When replacing a drive in a node, OneFS automatically formats the drive, adds it to
the cluster, and updates the drive firmware. The new drive firmware matches the
current drive support package that is installed on the cluster. The drive firmware is
not updated for the entire cluster, only for the new drive. If you prefer to format and
add drives manually, disable Automatic Replacement Recognition.



isi_upgrade_helper

The isi_upgrade_helper has two functions. First, it analyzes cluster
configuration and recommends a set of nodes for a simultaneous firmware
upgrade. Second, it gathers cluster configuration to assist with the identification of
features that are affected by upgrading firmware on a subset of nodes.

The isi_upgrade_helper command accepts three display options: default,
--summary, and --verbose. The --include and --exclude options serve an
identical purpose to the upgrade framework arguments of the same name.
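The display options named above, shown as a minimal sketch:

isi_upgrade_helper             # default output: recommended node lists for a firmware upgrade
isi_upgrade_helper --summary   # adds a cluster configuration table associated with each node
isi_upgrade_helper --verbose   # extends --summary with configuration categories that may affect node selection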


isi_upgrade_helper Default

The graphic shows the output of the isi_upgrade_helper command with no
options. The output shows three upgrade options. Use option 2 for clusters with
customer data. Use option 3 for clusters without customer data. Gen 6 nodes
without redundant power are not part of the recommendation.

The isi_upgrade_helper --summary command extends the default helper


output by adding a cluster configuration table that is associated with each node.
Using isi_upgrade_helper --verbose extends the --summary option,
displaying several categories of cluster configuration that may affect node selection
for simultaneous firmware upgrade.

If a node fails after the initial isi_upgrade_helper, rerun


isi_upgrade_helper to get a revised recommended upgrade list. Failed nodes
are not in the upgrade list.

isi_upgrade_helper upgrade options:

1 - Rolling

2 - Simultaneous - minimize DU - node lists avoid node peers and a maximum of 1 node per disk pool

3 - Simultaneous - avoid DL - no node pairs


Release Notes

Release notes contain installation instructions for drive and node firmware
upgrades and patches. Reading them carefully is an essential step in determining
whether your business benefits from the new features and functionality 15 of a new
OneFS version.

The release notes help you determine if the upgrade has an impact and if so,
provide a solution if needed. The contents of the release notes are similar across
releases - they provide release and upgrade information, new features,
modifications and enhancements, resolved issues, known issues and installation
instructions.

15 To emphasize the importance, let us take an environment running many NFSv2


clients. If you failed to read the release notes indicating that NFSv2 was no longer
supported in OneFS 7.2, after the upgrade to the new version, many NFSv2 clients
would no longer function. You must immediately determine and implement a
resolution to get those clients up and running again. The information in the release
notes would have helped you prepare for this change in advance.


Shown are the OneFS 8.2.0 Release Notes.

Resource: Isilon OneFS 8.2.0 Release Notes

Course Summary

Now that you have completed this course, you should be able to:
→ Create a new cluster.
→ Join a node to an existing cluster.
→ Complete the initial configuration.
→ Complete other configuration options.



POWERSCALE HARDWARE MAINTENANCE

PARTICIPANT GUIDE

Table of Contents

PowerScale Hardware Maintenance ......................................................................... 2


Rebranding - Isilon is now PowerScale ................................................................................ 3
PowerScale Solutions Certification Journey Map ................................................................. 4
Prerequisite Skills ................................................................................................................ 5
Course Objectives................................................................................................................ 6

Before You Begin ....................................................................................................... 7


Module Objectives ............................................................................................................... 8
Hardware Maintenance Basics............................................................................................. 9
Electrostatic Discharge ...................................................................................................... 12
Minimize Tool Use ............................................................................................................. 15

Preparing a Node ..................................................................................................... 16


Module Objectives ............................................................................................................. 17
Powering down a Node ...................................................................................................... 18

Field Replaceable Units (FRUs) .............................................................................. 22


Module Objectives ............................................................................................................. 23
FRU ................................................................................................................................... 24
Gen 6 ................................................................................................................................. 25
Node: Gen 6 ...................................................................................................................... 26
Gen 6.5 .............................................................................................................................. 31

Customer Replaceable Units (CRUs) ..................................................................... 32


Module Objectives ............................................................................................................. 33
Node Compatibility ............................................................................................................. 34
SSD: Gen 6........................................................................................................................ 35
Drive and Sled: Gen 6 ........................................................................................................ 37
Drives: Gen 6 ..................................................................................................................... 38
Drive Sled: Gen 6............................................................................................................... 39
Front Bezel: Gen 6 ............................................................................................................. 41

Power Supply: Gen 6 ......................................................................................................... 42
Gen 6.5 .............................................................................................................................. 43
SSD Replacement Considerations ..................................................................................... 44

Replacing a Failed Chassis ..................................................................................... 45


Module Objectives ............................................................................................................. 46
Chassis Replacement Procedure ....................................................................................... 47

Course Summary ..................................................................................................... 50


Course Summary ............................................................................................................... 51

PowerScale Hardware Maintenance

Rebranding - Isilon is now PowerScale

Important: In mid-2020, Isilon launched a new hardware platform, the
F200 and F600, branded as Dell EMC PowerScale. Over time the
Isilon brand will convert to the new platform's PowerScale branding. In
the meantime, you will continue to see Isilon and PowerScale used
interchangeably, including within this course and any lab activities.
OneFS CLI isi commands, command syntax, and man pages may
have instances of "Isilon".
Videos associated with the course may still use the "Isilon" brand.
Resources such as white papers, troubleshooting guides, other
technical documentation, community pages, blog posts, and others
will continue to use the "Isilon" brand.
The rebranding initiative is an iterative process and rebranding all
instances of "Isilon" to "PowerScale" may take some time.


PowerScale Solutions Certification Journey Map

The graphic shows the PowerScale Solutions Expert certification track. You can
leverage the Dell Technologies Proven Professional program to realize your full
potential. The program is a combination of technology-focused and role-based training and exams
that cover concepts and principles as well as the full range of Dell Technologies'
hardware, software, and solutions. You can accelerate your career and your
organization’s capabilities.

PowerScale Solutions
• PowerScale Advanced Administration (VC, C)
• PowerScale Advanced Disaster Recovery (VC, C)
• (Knowledge and Experience based Exam)

Implementation Engineer
• PowerScale Concepts (ODC)
• PowerScale Administration (ODC, VC, C)

Technology Architect
• PowerScale Concepts (ODC)
• PowerScale Solutions Design (ODC)

Platform Engineer
• PowerScale Concepts (ODC)
• PowerScale Hardware Concepts (ODC)
• PowerScale Hardware Installation (ODC)
• PowerScale Hardware Maintenance (ODC)
• PowerScale Implementation (ODC)

Information Storage and Management (ODC, VC, C)

(C) - Classroom

(VC) - Virtual Classroom

(ODC) - On Demand Course

For more information, visit: http://dell.com/certification


Prerequisite Skills

To understand the content and successfully complete this course, a student must
have a suitable knowledge base or skill set. The student must have an
understanding of:
• Current PowerScale hardware portfolio and the OneFS operating system
• PowerScale Concepts


Course Objectives

After completion of this course, you will be able to:


→ Explain hardware maintenance procedures.
→ Prepare a node.
→ Replace Field Replaceable Units (FRUs).
→ Replace Customer Replaceable Units (CRUs).

Before You Begin

Module Objectives

After completing this module, you will be able to:


• Identify basic hardware maintenance procedures.
• Implement electrostatic discharge (ESD) procedures.
• Download FRU packages.


Hardware Maintenance Basics

The graphic shows a few basic reminders that are common to all hardware
maintenance procedures.

If you encounter any difficulties while performing this task, immediately contact Dell
EMC Technical Support.


1: Customer Replaceable Units (CRUs) are removed without shutting down the
node. This means you can accomplish CRU replacements and CRU hardware
upgrades while the node is powered up as long as the correct procedure is
followed. Field Replaceable Units (FRUs), on the other hand, require the node to
be powered off. If you must power off a node, always shut it down properly as
described in the replacement guide.

2: On Generation 6 nodes (Gen 6), before disconnecting any cables, ensure that
the compute module's Do Not Remove LED is off. When the LED is white or On,
this indicates the node’s journal is still active. The Do Not Remove LED is on the
right side of the compute module and looks like a symbol of a hand with a slash
through it. Do not disconnect any cables until this LED is off.

PowerScale Hardware Maintenance

© Copyright 2020 Dell Inc. Page 9


Before You Begin

3: On Generation 5 nodes (Gen 5), never power off a node by pressing the power
button or removing both power cables unless you are expressly directed to do so
by Dell EMC Technical Support.

4:

Use the SolVe Desktop tool to get the most recent, full instructions for the
procedure. These instructions are frequently updated based on feedback from the
field, so be sure to consult the instruction documents prior to every engagement,
even if you have already performed the requested service previously.

SolVe Desktop has been revised and updated to SolVe Online. It is a knowledge
management-led standard procedure for Dell EMC field, service partners, and
customers. An overview of SolVe Desktop and SolVe Online is available on the
Dell EMC support site.

5: When performing part replacements on multiple nodes, always work on one


node at a time to help prevent cluster outages and risk of data loss.

6: Save the packaging from the replacement part. Use this packaging to return the
failed part to Dell EMC. A return label is included with the replacement part.

7: If the customer and/or Dell EMC technical support request Failure Analysis on
the replaced part, be sure to attach a filled out FA label to the return box, and
complete an FA request ticket in the WWFA system. Provide the FA ticket number
to your Support contact and/or add it to the SR in a comment.

8: After all work is complete, partner personnel should submit the Partner
Notification Form (PNF) to allow Dell EMC Technical Support to update the install
database. Dell EMC personnel should update the install database directly by going
to the Dell EMC Business Services website. In the Post Sales area, click Install
Base Group, complete, and submit the form.


Electrostatic Discharge

Electrostatic Discharge (ESD) is a major cause of damage to electronic
components and potentially dangerous to the installer. To avoid ESD damage,
review ESD procedures before arriving at the customer site and adhere to the
precautions when onsite.

• Clean Work Area: Clear the work area of items that naturally build up
electrostatic discharge.

• Anti-static Packaging: Leave components in anti-static packaging until it is
time to install them.

• ESD Kit: Always use an ESD kit when handling components.

• Do Not Move: Minimize movement to avoid buildup of electrostatic discharge.

• No ESD Kit Available:
− Before touching a component, put one hand firmly on a bare metal surface.
− After removing a component from its anti-static bag, do NOT move around
the room or touch furnishings, personnel, or surfaces.
− If you must move around or touch something, first put the component back
in its anti-static bag.


Note: Always follow ESD procedures when handling components.


Minimize Tool Use

A design goal for maintenance of the hardware is to make the hardware as
accessible as possible without tools. Demounting handles are color-coded to
indicate hot or cold serviceability. Terracotta handles indicate that the component in
question may be removed without first taking the node in question down. Blue
handles indicate that the node should be shut down for the maintenance
procedure.

• Blue handles - cold serviceable
• Terracotta handles - hot serviceable

Preparing a Node

Module Objectives

After completing this module, you will be able to:


• Power down a node.


Powering down a Node

The web version of this content contains an interactive activity.

We’ll start off by covering how to safely power down a node. Remember, on a Gen
5 node, never power down a node by pressing the power button unless explicitly
instructed to do so by Dell EMC Technical Support. To power down a node, first
connect to the cluster. This can be done using SSH or the serial port. If using a
terminal emulator utility with a serial port connection, here are the settings to use:

• Transfer rate = 115,200 bps


• Data bits = 8
• Parity = none
• Stop bits = 1
• Flow control = hardware

In this exercise, we will shut down node 3. The next step after connecting to the
cluster is to get the IP address of the node to shut down. Here we are connected
and logged in to the cluster on node 1.

Click in the box and type isi status -q to get the node’s IP address.


Notice the IP address of node 3.

Log on to the node that you want to shut down and type the command to shut the
node down: shutdown -p now.


To check that the node is powered down, we are connected to another node in the
cluster. Click in the box and type isi status -q again.

The shutdown node has a status of D--- (Down) in the command output.
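The sequence from this exercise, collected as a minimal sketch; node 3 and the placeholder address are examples:

isi status -q            # from any node, note the IP address of the node to shut down
ssh root@<node-3-ip>     # log on to the node that is to be shut down (placeholder address)
isi_flush                # recommended: flush the cache before any shutdown
shutdown -p now          # power the node off
isi status -q            # from another node, confirm the node shows a status of D--- (Down)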


Note:
• Gen 6 nodes do not support serial flow control - the flow control
setting should be set to 'None' when connecting to Gen 6
hardware.
• isi network interfaces list -v command can also
be used to get the node's IP address.
• isi config command can also be used to shut down a
node.
• It is recommended to run the isi_flush command prior to
performing any shutdown to flush the cache.

Field Replaceable Units (FRUs)

Module Objectives

After completing this module, you will be able to replace:


• Gen 6 components: Front Panel, Fan, Battery, DIMMs, PCI cards, M.2 Vault,
Node, Chassis
• F200: Fan, DIMM, NVDIMM Battery, System Battery, Backend NIC, Frontend
NIC rNDC
• F600: Fan, DIMM, NVDIMM Battery, System Battery, Backend NIC, Frontend
NIC


FRU

Replacing Field Replaceable Units requires opening a node. Replacements are


performed by Dell EMC partner and field personnel only. Customers should never
replace a FRU themselves.

You can watch the videos of replacement procedures in the next few slides.


Gen 6

Movie:

The web version of this content contains a movie.

Link:

https://edutube.emc.com/vlearning/launch/cPBJf1qfSHOzdb2Q|@$@|pA1wg==/videodetails=false,comments=false,launch=yes


Node: Gen 6

Part 1

When facing the back of the chassis, the compute modules are labeled one to four, from right to left, as shown. Because compute modules are installed in pairs called node-pairs, the minimum cluster size has increased from three to four nodes, and additional nodes must be added in node-pairs. The graphic shows that node-pairs occupy either the left half or the right half of the chassis.

Part 2

Watch the video to learn how to replace a node.

Movie: The web version of this content contains a movie.

Link:

https://edutube.emc.com/Player.aspx?vno=S3kFK4UC82Qmnc2PUec1iA==&autoplay=true

Script: If you’re performing replacements in multiple nodes, perform them on one node at a time. This ensures that you don’t endanger the data on your cluster. Gather logs by following the instructions in the replacement guide. To replace the compute module, you'll complete the following tasks:

• Unpack the replacement compute module

• Remove the failed compute module from the chassis


• Transfer the internal components from the failed compute module to the
replacement unit, except for the battery

• Install the replacement compute module in the chassis

• Transfer the cache SSDs

• Transfer the power supply

• Transfer the serial number tag

Remove the replacement node from the shipping package and inspect it for any
sign of damage. Notify Dell EMC Isilon Technical Support if the node appears
damaged in any way. Do not install a damaged node. Do not discard the shipping
container and packaging. You'll use them to return the failed node to Dell EMC.

Power down the node by following the instructions in the replacement guide.

Label the network cables connected to the back of the node to ensure that you can
reconnect them correctly. Before you disconnect any cables from a node, make sure the Do Not Remove LED is not lit. When the Do Not Remove LED is off, disconnect all cables from the back of the node. If there are transceivers connected
to your network cables, remove them from the node. You might see LEDs on inside
the node, even after you have removed the power cord. That is because the node
next to it is supplying redundant power. On the back of the chassis, loosen the
orange thumbscrew that secures the node release lever. To eject the node from the
node bay, pull the orange handle away from the node. Slowly pull the node out of
the bay. Support the node at front and back with both hands as you remove it from
the bay. Place the node on an ESD protected work surface next to the replacement
node.

Position the node with the fans facing you. The blue release handle should be
under the fans. Place the heel of your hand on the gray connectors above the fans
and grab the blue release handle with the fingertips of your other hand. Make sure
you are not pressing down on the top of the node with the heel of your hand as that
will keep the node lid from popping up when you pull the release handle. Pull on
the blue release handle to lift the lid up from the node. You will feel an initial pop as
the blue release handle pulls away from the node. Pull on the release handle until
you feel a second pop to raise the lid up off the node. Lift the lid straight up off the
node and place it next to the body of the node. Repeat for the replacement node.


Inside the left side of the node body, just behind the fans, locate the blue touch
point label. Place the thumb of your left hand on the blue touch point and press the
side of the node away from the fans. The metal tab that holds the fans in place will
flex away from the fans so you can remove them. Slide the fans straight up out of
the node with your right hand. Repeat for the replacement node.

Locate the two blue tabs for removing the HBA riser. There's a sliding tab at the
back of the riser, and a fixed tab at the front. Complete the following three steps at
the same time. To free the back end of the riser, push the sliding tab in the
direction of the arrow on the tab. To free the front end, pull the riser away from the
locking pin on the side of the chassis with the fixed tab. Lift up on the tabs to
unseat the riser and pull it straight up out of the node.

Remove both the internal and external NICs from the HBA riser. When you are
looking down the length of the HBA riser, with the battery pack close to you, the
internal NIC is on your left, closest to the bottom of the riser. Remove the retaining
screw that secures the internal NIC to the chassis and set it aside. Pull the NIC
straight up out of its slot. Make a note that this is the internal NIC. Repeat for the
external NIC.

Disconnect the battery pack and remove it from the HBA riser. Press in on the
locking tab and disconnect the battery cable. Push in on the retaining tabs on the
bottom edge of the riser and lift up to free one side of the battery pack. Roll the
battery pack away from the riser to free the other side of the pack and remove it.
Unsnap and open the black retaining tab at the end of the M.2 vault card. Lift the
free end of the card at an angle and pull the card away from the connector.

After you remove the M.2 vault card, re-install the battery pack in the HBA riser
from the failed node. The replacement node already contains a battery. Hook the
two battery pack feet closest to the battery cable into the slots on the riser. Roll the
battery pack down until it is flat against the M.2 vault card, and then push in on the
retaining tabs until they click into the slots.

Slide the HBA riser into the node and secure it in place. Align the metal tab next to
the sliding tab on the riser with the slot on the node chassis. Slide the riser
downward until you seat the riser in the chassis. When you push the riser down to
seat it, you will see the sliding tab click forward and back as it secures the riser in
place. Make sure that the locking pin next to the fixed tab at the front of the riser
aligns with the locking slot in the chassis. The locking pin might sit away from the
side of the chassis. You can pinch the side of the chassis and the riser together to make sure that the locking pin aligns with the slot on the chassis. When you install
the fans, the side of the fan module will hold the locking pin in place.

Remove the HBA riser from the replacement node using the same technique as
before. Remove the battery pack using the same technique as before.

Unsnap and open the black retaining tab at the end of the M.2 vault card. Insert the
connecting end of the M.2 vault card at an angle into the connector on the new
HBA riser card. Lower the other end of the M.2 vault until the card lies flat against
the HBA riser. Snap the retaining tab closed over the end of the card. Re-install the
battery using the same technique as before.

Locate the slot where you will install the internal NIC. Align the bottom of the card
with the appropriate slot and push the NIC into the slot. Secure the card to the
chassis using the retaining screw. Repeat for the external NIC. If you're installing a
10Gb NIC, the card is shorter than the internal NIC. You must install it in the middle
slot, right next to the internal NIC. If you're installing a 40Gb NIC, the card looks
just like the internal NIC. You must install it in the far-right slot, closest to the blue
HBA riser release tab.

Install the HBA riser using the same technique as before.

Transfer all DIMMs from the failed node to the replacement node. In the
replacement node press down on the DIMM retaining tabs. Do the same in the
failed node for the first DIMM. Pull the DIMM straight up to remove it from the slot.
Make note of the slot from which the DIMM is removed. Transfer it into the
corresponding slot in the lid of the replacement node. Align the DIMM with the slot
and press down until the retaining tabs snap into place. Push on the retaining tabs
to make sure they are closed. Repeat for all remaining DIMMs.

With the label on top of the fans facing you, insert the rails on either side of the fans
into the slots on the sides of the node. Press down on the fans until you feel them
click into place. Repeat for the replacement node.

Make sure the blue release handle below the fans is pulled out completely. Place
the node lid onto the body of the node. You can use the cutouts on the side of the
lid to align the lid with the node body. Make sure that the lid is not in contact with
the HBA riser or any other internal components, otherwise you might damage
something when you secure the lid. Apply gentle pressure to the top of the lid with
one hand as you push in the blue release handle with the other hand. You'll feel the
lid pull down onto the node as you push in the release handle. If you do not feel the lid pull down onto the node, pull the release handle back out and make sure that
the lid is properly aligned with the node body. Brace one hand against the back of
the node and push the blue release handle all the way in to secure the lid to the
node body. Repeat for the replacement node.

Keep the lever in the open position until the node is pushed all the way into the
bay. Support the node with both hands and slide it into the node bay. Push the
release lever in against the node back panel. You can feel the lever pull the node
into place in the bay. If you do not feel the lever pull the node into the bay, pull the
lever back into the open position, make sure that the node is pushed all the way
into the node bay, then push the lever in against the node again. Tighten the
thumbscrew on the release lever to secure the lever in place. Locate the labels on
the network cables and connect them to the correct ports on the back of the node.
Locate the power cable and connect it to the back of the node. Drop the metal bale
down over the power cord to secure the connector in place. The node will
automatically power up when you connect the power cable.

Follow the instructions in the guide to complete the replacement procedure:

• Verify the node is operating correctly

• Update node firmware

• Gather logs

• Return the failed part to Dell EMC Technical Support

• Update the install database

• Install the drive support package

If you encounter any difficulties while performing this task, contact Dell EMC
Technical Support.
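
As a brief, hedged illustration of the verification step, using only commands referenced elsewhere in this course and assuming the replaced node is node 3 (output formats vary by OneFS release):

   # isi status                              (confirm the node has rejoined and reports a status of OK)
   # isi devices drive list --node-lnn 3     (confirm the node's drives are healthy)

The firmware update, log gathering, install database, and drive support package steps are completed by following the replacement guide.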


Gen 6.5

Movie:

The web version of this content contains a movie.

Link:

https://edutube.emc.com/vlearning/launch/mxpAovnlxV1ZXK1wL4Mr1g==/videodetails=false,comments=false,launch=yes

Customer Replaceable Units (CRUs)

Module Objectives

After completing this module, you will be able to replace:


• Gen 6 components: SSD, Drives, Drive Sled, Front Bezel, Power Supply
• F200 and F600: SSD, PSU


Node Compatibility

There are several areas to consider when addressing compatibility. Typically, in PowerScale, compatibility refers to SSD counts or sizes and RAM sizes between different node types in a cluster, called node compatibility. With the introduction of Gen 6 nodes [1], compatibility also extends to the back-end network technology and the node types that can be part of a cluster.

Node compatibility allows you to establish an equivalence association between older- and newer-generation nodes from the same performance series so you can combine them into a single node pool. If no node compatibility is created, the nodes cannot be merged into the same node pool. This is important for a few reasons [2]. Nodes in the same node pool may have different SSD sizes [3] and different amounts of RAM.

[1] First, Gen 6 nodes can be in a cluster with Gen 5 and Gen 4 nodes; however, Gen 5 and earlier nodes are not compatible with a Gen 6 cluster that uses an Ethernet back-end. This means you cannot add Gen 5 nodes to a Gen 6 cluster with an Ethernet back-end, but you can have Gen 6 and Gen 5 nodes on the same InfiniBand back-end.

[2] It enables you to transition to the new hardware gradually, without a forklift upgrade, by allowing you to add one node at a time to an existing node pool. This is more cost effective than adding the minimum number of nodes required to start a new node pool with all-new hardware. When a customer has grown the new node count to a sufficient quantity, node compatibility can be disabled on an individual node pool.

[3] Enabling SSD compatibility allows customers to replace older, smaller SSDs with newer, larger SSDs to allow more L3 cache space. This lets customers better utilize storage resources. Every node in the pool must be the same model or of the same series or family, and if the OneFS version is prior to OneFS 8.0, every node in the node pool must have the same number of SSDs.
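
Node and SSD compatibilities are typically managed from the CLI with the isi storagepool compatibilities command family. The sketch below is an assumption-heavy illustration only; the subcommands and node classes shown are examples, and the exact syntax varies by OneFS release, so confirm with isi storagepool compatibilities --help on your cluster:

   # isi storagepool compatibilities active list                  (view existing node-class compatibilities - assumed syntax)
   # isi storagepool compatibilities active create S200 S210      (associate an older class with a newer one - hypothetical classes, assumed syntax)
   # isi storagepool compatibilities ssd active list               (view SSD compatibilities - assumed syntax)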


SSD: Gen 6

Watch the video to learn how to replace an SSD.

Movie:

The web version of this content contains a movie.

Link:

https://edutube.emc.com/Player.aspx?vno=Ti76c637o7LUEAJj9GMayw==&autoplay=true

Script: If there is more than one cache SSD installed, review the cluster event
associated with the failed SSD to determine which SSD to replace. Refer to the
replacement guide for more information. Press up on the orange tab to free the
bottom of the protective cover from the node, then swing the cover up and remove
it. On the face of the SSD, press up on the orange release button to release the
SSD handle. Rotate the SSD handle downward until it is perpendicular to the
compute module. Pull the SSD from the node.

Install the new cache SSD into the back of the node. If both SSD bays are empty,
install the SSD into the bay on the right. Make sure the SSD handle is completely
open and insert the SSD into the empty drive bay. Rotate the SSD handle upward
to seat the SSD and lock it in place. Place the upper tab of the SSD cover into the
slot above the SSDs. Swing the bottom of the SSD cover down and press it up into
the back of the node until it clears the catches and rests securely in place.

Follow the instructions in the guide to complete the replacement procedure.



If you encounter any difficulties while performing this task, contact Dell EMC
Technical Support.


Drive and Sled: Gen 6

Regardless of the node type, each compute module slot pairs with five drive sled
bays. Depending on the length of the chassis and type of drive, this means that
each node can have up to thirty drives, or as few as fifteen with every sled in place.
Every node needs a consistent set of sled types, and drive types in each sled,
meaning you cannot mix-and-match different drives within a sled or different sleds
in node slots. There are three types of drive sleds. For 3.5" drives, there are long
and short sleds, and for 2.5" drives there is a short sled that contains up to six
drives. The 3.5" drives come with a paddle card that connects the drive to the sled, while the 2.5" drives connect directly to the sled. A 3.5" drive physically fits into a sled without its paddle card, but it will not connect to the sled without one.


Drives: Gen 6

Part 1

Internal to the 2.5" sled, there are individual fault lights for each drive. The yellow
LED associated with each drive is visible through holes in the top cover. A
supercapacitor can keep one light lit for around 10 minutes while the sled is out of
the chassis, but if more than one light is lit (indicating multiple drive failures) the lit
time is correspondingly reduced.

In the 3.5" drive sleds, the yellow drive fault LEDs are on the paddle cards, and
they are visible through the cover of the drive sled so that you can see which drive,
if any, needs replacement. The graphic shows the 3.5” short drive sled; the 3.5” long sled has four LED viewing locations.

2.5" Drive Sled

3.5" Drive Sled

Part 2

Watch the video to learn how to replace a drive.

Movie:
The web version of this content contains a movie.

Link:

https://edutube.emc.com/Player.aspx?vno=hfBe54RKpBcgPpEvb2/dXQ==&autoplay=true


Drive Sled: Gen 6

Part 1

The graphic shows the lights and their information for the drive sleds. All twenty
sleds can be individually serviced. Do not remove more than one sled per node at a
time on running nodes. The typical procedure is to go to a chassis where a fault
has been detected, inspect the sleds to see which one shows a fault light, press the
service request button, wait until the LED stops blinking and goes dark, then
remove the sled and replace the drive. Reinsert the sled, and the node automatically detects and configures the replacement drive.

The service request button informs the node that the sled will be removed, and the
node prepares for this by moving key boot information from drives on that sled,
suspending the drives in the sled from the cluster file system, and then spinning
them down. This is to maximize survivability in the event of further failures, and to
prevent cluster file system issues that are caused by multiple drives becoming
temporarily unavailable.
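
For reference, drive suspend and smartfail operations can also be driven from the CLI. The following is a hedged sketch only, assuming the isi devices drive subcommands available in OneFS 8.x; the bay and node numbers are examples and the exact syntax may vary by release:

   # isi devices drive suspend 2 --node-lnn 3      (suspend the drive in bay 2 of node 3 - assumed syntax)
   # isi devices drive smartfail 2 --node-lnn 3    (smartfail the same drive if it must be permanently removed - assumed syntax)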

Each drive sled has three LEDs and one push button:
• Power/Activity LED
• Sled Fault LED
• Not Safe to Remove LED
• Service Request button

Part 2

Watch the video to learn how to replace a drive sled.

Movie:
The web version of this content contains a movie.

Link:

https://edutube.emc.com/Player.aspx?vno=JvXxvcHiwtqgODunE/TzlA==&autoplay=true


Note: If the suspend button is pressed and the drives are still detected, the node attempts to rediscover the sled and rejoin its drives after 1 hour. If the suspend button is pressed and the drives are not detected, or the sled is still removed, the node automatically smartfails the drives after 15 minutes.


Front Bezel: Gen 6

Watch the video to learn how to replace a front bezel.

Movie:

The web version of this content contains a movie.

Link:

https://edutube.emc.com/Player.aspx?vno=LW/ZI6kmMaqKY54dpE/lOw==&autoplay=true

Script: Gather logs by following the instructions in the replacement guide. Press
both latches of the front bezel simultaneously to release it. Align the front bezel with
the front of the chassis, then push until you feel the bezel snap into place.

Follow the instructions in the guide to complete the replacement procedure.

If you encounter any difficulties while performing this task, contact Dell EMC
Technical Support.


Power Supply: Gen 6

Watch the video to learn how to replace a power supply.

Movie:

The web version of this content contains a movie.

Link:

https://edutube.emc.com/Player.aspx?vno=oveIZf3k48xr4biiu/hZXg==&autoplay=true

Script: Lift the metal bale to free the power cord. Disconnect the power cord from
the power supply. You may see LEDs on inside the compute module, even after
you have removed the power cord. That is because the node next to it is supplying
redundant power. Press the orange retaining tab upward and pull the black handle
to slide the power supply out of the node.

Slide the new power supply unit into the open bay in the back of the node until you
feel the unit click into place. Connect the power cord to the power supply. Rotate
the metal bale down over the power cord to hold the cord in place.

Follow the instructions in the guide to complete the replacement procedure. If you
encounter any difficulties while performing this task, contact Dell EMC Technical
Support.


Gen 6.5

Movie:

The web version of this content contains a movie.

Link:

https://edutube.emc.com/vlearning/launch/aYXb4cgLbhio9HDLNhC0Xg==/videodetails=false,comments=false,launch=yes


SSD Replacement Considerations

• To update the drive firmware on nodes without bootflash drives, download and install the latest drive firmware package.
• Power cycling drives during a firmware update might return unexpected results. As a best practice, do not restart or power off nodes while drive firmware is being updated in a cluster.
• To view the drives in a node and their status during a firmware update, run the following command: # isi devices drive list --node-lnn <node-lnn-number> (the cluster-wide firmware update itself is started by following the drive firmware package instructions).
• Wait for the current upgrade operation to complete before initiating another.
• To confirm that a node has finished updating, run the following command: # isi devices -d <node-number>. A drive that is still updating displays a status of FWUPDATE (see the sketch after this list).
• OneFS updates drive firmware sequentially.
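
As a short illustration of the status checks above (the node number is an example; output is abbreviated and illustrative):

   # isi devices drive list --node-lnn 3     (list node 3's drives; a drive still updating shows a status of FWUPDATE)
   # isi devices -d 3                        (older syntax noted above; reports the same per-drive status)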

Replacing a Failed Chassis

Module Objectives

After completing this module, you will be able to:


• Describe the chassis replacement process.


Chassis Replacement Procedure

On a Gen 6 node, a chassis replacement involves moving four compute modules and their companion drives to the new chassis, because each Gen 6 chassis
contains either two or four nodes. This procedure allows you to avoid SmartFailing
one or more nodes, while maintaining data integrity. This procedure requires a
work area large enough to place two PowerScale nodes side by side. To avoid
losing the data stored on the node, replacing a chassis involves moving the node’s
internal components from the failed chassis to a new chassis. The process requires
following the steps exactly and in order. Perform this procedure only when ordered
by Dell EMC PowerScale Technical Support.

Click each tab to learn how to generate the procedure through SolVe Online.

Step 1

• Go to the SolVe Online Isilon product page, click Replacement Procedures, and select the node that you have to generate the procedure for.
• Then select Replace Node Chassis and click Next.

Step 2

• Enter the Usage Information and click Next.

Note that the procedure is still generated even if you do not enter any information.


Step 3

• Click GENERATE.

Step 4

• You can find the generated procedure in the My Content tab in SolVe Online.


Step 5

• You will also receive an email with a link to the procedure.

Course Summary

Course Summary

Now that you have completed this course, you should be able to:
→ Explain hardware maintenance procedures.
→ Prepare a node.
→ Replace Field Replaceable Units (FRUs).
→ Replace Customer Replaceable Units (CRUs).
